Space (mathematics)
Modern mathematics treats space quite differently than classical mathematics did. The differences are listed below; their origin and meaning are explained afterwards.
| Classical mathematics | Modern mathematics |
|---|---|
| axioms are obvious implications of definitions | axioms are conventional |
| theorems are absolute objective truths | theorems are implications of the corresponding axioms |
| relationships between points, lines etc. are determined by their nature | relationships between points, lines etc. are essential; their nature is not |
| mathematical objects are given to us with their structure | each mathematical theory describes its objects by some of their properties |
| geometry corresponds to an experimental reality | geometric theorems are mathematical truths |
| all geometric properties of the space follow from the axioms | the axioms of a space need not determine all geometric properties |
| geometry is an autonomous and living science | classical geometry is a universal language of mathematics |
| the space is three-dimensional | different concepts of dimension apply to different kinds of spaces |
| the space is the universe of geometry | spaces are just mathematical structures, they occur in various branches of mathematics |
Before the golden age of geometry
In ancient mathematics, "space" was a geometric abstraction of the three-dimensional space observed in everyday life. The axiomatic method had been the main research tool since the time of Euclid (about 300 BC). The coordinate method (analytic geometry) was added by René Descartes in 1637. At that time geometric theorems were treated as absolute objective truths that could be known through intuition and reason, similar to the objects of natural science; and axioms were treated as obvious implications of definitions.
Two equivalence relations between geometric figures were used: congruence and similarity. Translations, rotations and reflections transform a figure into congruent figures; homotheties into similar figures. For example, all circles are mutually similar, but ellipses are not similar to circles. A third equivalence relation, introduced by projective geometry (Gaspard Monge, 1795), corresponds to projective transformations. Not only ellipses but also parabolas and hyperbolas turn into circles under appropriate projective transformations; they all are projectively equivalent figures.
The relation between the two geometries, Euclidean and projective, shows that mathematical objects are not given to us with their structure. Rather, each mathematical theory describes its objects by some of their properties, precisely those that are put as axioms at the foundations of the theory.
Distances and angles are never mentioned in the axioms of projective geometry and therefore cannot appear in its theorems. The question "what is the sum of the three angles of a triangle" is meaningful in Euclidean geometry but meaningless in projective geometry.
A different situation appeared in the 19th century: in some geometries the sum of the three angles of a triangle is well-defined but different from the classical value (180 degrees). Non-Euclidean hyperbolic geometry, introduced by Nikolai Lobachevsky in 1829 and János Bolyai in 1832 (and by Carl Friedrich Gauß in 1816, unpublished), stated that the sum depends on the triangle and is always less than 180 degrees. Eugenio Beltrami in 1868 and Felix Klein in 1871 obtained Euclidean "models" of this non-Euclidean hyperbolic geometry, and thereby completely justified the theory.
This discovery forced the abandonment of the pretension that Euclidean geometry is absolutely true. It showed that axioms are neither "obvious" nor "implications of definitions"; rather, they are hypotheses. To what extent do they correspond to an experimental reality? This important physical question no longer has anything to do with mathematics. Even if a "geometry" does not correspond to an experimental reality, its theorems remain no less "mathematical truths".
A Euclidean model of a non-Euclidean geometry is a clever choice of some objects existing in Euclidean space and some relations between these objects that satisfy all axioms (therefore, all theorems) of the non-Euclidean geometry. These Euclidean objects and relations "play" the non-Euclidean geometry like contemporary actors playing an ancient performance! Relations between the actors only mimic relations between the characters in the play. Likewise, the chosen relations between the chosen objects of the Euclidean model only mimic the non-Euclidean relations. It shows that relations between objects are essential in mathematics, while the nature of the objects is not.
The golden age and afterwards: dramatic change
According to Nicolas Bourbaki, the period between 1795 ("Géométrie descriptive" of Monge) and 1872 (the "Erlangen program (Erlanger Programm)" of Klein) can be called the golden age of geometry. Analytic geometry made great progress and succeeded in replacing theorems of classical geometry with computations via invariants of transformation groups. Since that time, new theorems of classical geometry have interested amateurs rather than professional mathematicians.
However, it does not mean that the heritage of the classical geometry was lost. Quite the contrary! According to Bourbaki, "passed over in its role as an autonomous and living science, classical geometry is thus transfigured into a universal language of contemporary mathematics".
According to the famous Habilitation lecture given by Bernhard Riemann in 1854, every mathematical object parametrized by n real numbers may be treated as a point of the n-dimensional space of all such objects. Nowadays mathematicians follow this idea routinely and find it extremely suggestive to use the terminology of classical geometry nearly everywhere.
In order to fully appreciate the generality of this approach one should note that mathematics is "a pure theory of forms, which has as its purpose, not the combination of quantities, or of their images, the numbers, but objects of thought" (Hermann Hankel, 1867).
Functions are important mathematical objects. Usually they form infinite-dimensional spaces, as noted already by Riemann and elaborated in the 20th century by functional analysis.
An object parametrized by n complex numbers may be treated as a point of a complex n-dimensional space. However, the same object is also parametrized by 2n real numbers (real parts and imaginary parts of the complex numbers), thus, a point of a real 2n-dimensional space. The complex dimension differs from the real dimension. This is only the tip of the iceberg. The "algebraic" concept of dimension applies to linear spaces. The "topological" concept of dimension applies to topological spaces. There is also Hausdorff dimension for metric spaces; this one can be non-integer (especially for fractals). Some kinds of spaces (for instance, measure spaces) admit no concept of dimension at all.
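For instance, taking the complex coordinate space and the middle-thirds Cantor set $C$ as standard illustrations (examples chosen here, not from the original text), the different dimension concepts give
$$\dim_{\mathbb{C}} \mathbb{C}^n = n, \qquad \dim_{\mathbb{R}} \mathbb{C}^n = 2n,$$
$$\dim_{\mathrm{top}} C = 0, \qquad \dim_{\mathrm{H}} C = \frac{\log 2}{\log 3} \approx 0.6309,$$
so the same underlying set can carry several different, equally legitimate dimensions.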
The original space investigated by Euclid is now called "the three-dimensional Euclidean space". Its axiomatization, started by Euclid 23 centuries ago, was finalized in the 20th century by David Hilbert, Alfred Tarski and George Birkhoff. This approach describes the space via undefined primitives (such as "point", "between", "congruent") constrained by a number of axioms. Such a definition "from scratch" is now of little use, since it hides the standing of this space among other spaces. The modern approach defines the three-dimensional Euclidean space more algebraically, via linear spaces and quadratic forms, namely, as an affine space whose difference space is a three-dimensional inner product space.
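In symbols, one common way to spell this out (the notation is chosen here for illustration) is
$$\text{Euclidean 3-space} \;=\; \bigl(E,\; V,\; \langle\cdot,\cdot\rangle\bigr), \qquad V \cong \mathbb{R}^3,$$
where $E$ is a set of points, $V$ is the three-dimensional difference (translation) space acting freely and transitively on $E$, and $\langle\cdot,\cdot\rangle$ is an inner product on $V$; the distance between points is then
$$d(A,B) \;=\; \sqrt{\langle B - A,\, B - A\rangle}, \qquad A, B \in E.$$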
Also a three-dimensional projective space is now defined non-classically, as the space of all one-dimensional subspaces (that is, straight lines through the origin) of a four-dimensional linear space.
A space consists now of selected mathematical objects (for instance, functions on another space, or subspaces of another space, or just elements of a set) treated as points, and selected relationships between these points. It shows that spaces are just mathematical structures. One may expect that the structures called "spaces" are more geometric than others, but this is not always true. For example, a differentiable manifold (called also smooth manifold) is much more geometric than a measurable space, but no one calls it "differentiable space" (nor "smooth space").
Taxonomy of spaces
Three taxonomic ranks
Spaces are classified on three levels. Given that each mathematical theory describes its objects by some of their properties, the first question to ask is: which properties?
For example, the upper-level classification distinguishes between Euclidean and projective spaces, since the distance between two points is defined in Euclidean spaces but undefined in projective spaces. These are spaces of different type.
Another example. The question "what is the sum of the three angles of a triangle" makes sense in a Euclidean space but not in a projective space; these are spaces of different type. In a non-Euclidean space the question makes sense but is answered differently, which is not an upper-level distinction.
Also the distinction between a Euclidean plane and a Euclidean 3-dimensional space is not an upper-level distinction; the question "what is the dimension" makes sense in both cases.
In terms of Bourbaki the upper-level classification is related to "typical characterization" (or "typification"). However, it is not the same (since two equivalent structures may differ in typification).
On the second level of classification one takes into account answers to especially important questions (among the questions that make sense according to the first level). For example, this level distinguishes between Euclidean and non-Euclidean spaces; between finite-dimensional and infinite-dimensional spaces; between compact and non-compact spaces, etc.
In terms of Bourbaki the second-level classification is the classification by "species". Unlike biological taxonomy, a space may belong to several species.
On the third level of classification, roughly speaking, one takes into account answers to all possible questions (that make sense according to the first level). For example, this level distinguishes between spaces of different dimension, but does not distinguish between a plane of a three-dimensional Euclidean space, treated as a two-dimensional Euclidean space, and the set of all pairs of real numbers, also treated as a two-dimensional Euclidean space. Likewise it does not distinguish between different Euclidean models of the same non-Euclidean space.
More formally, the third level classifies spaces up to isomorphism. An isomorphism between two spaces is defined as a one-to-one correspondence between the points of the first space and the points of the second space, that preserves all relations between the points, stipulated by the given "typification". Mutually isomorphic spaces are thought of as copies of a single space. If one of them belongs to a given species then they all do.
The notion of isomorphism sheds light on the upper-level classification. Given a one-to-one correspondence between two spaces of the same type, one may ask whether it is an isomorphism or not. This question makes no sense for two spaces of different type.
Isomorphisms of a space to itself are called automorphisms. Automorphisms of a Euclidean space are motions and reflections. Euclidean space is homogeneous in the sense that every point can be transformed into every other point by some automorphism.
Two relations between species, and a property of species
Topological notions (continuity, convergence, open sets, closed sets etc.) are defined naturally in every Euclidean space. In other words, every Euclidean space is also a topological space. Every isomorphism between two Euclidean spaces is also an isomorphism between the corresponding topological spaces (called "homeomorphism"), but the converse is wrong: a homeomorphism may distort distances. In terms of Bourbaki, "topological space" is an underlying structure of the "Euclidean space" structure. Similar ideas occur in category theory: the category of Euclidean spaces is a concrete category over the category of topological spaces; the forgetful (or "stripping") functor maps the former category to the latter category.
A three-dimensional Euclidean space is a special case of a Euclidean space. In terms of Bourbaki, the species of three-dimensional Euclidean space is richer than the species of Euclidean space. Likewise, the species of compact topological space is richer than the species of topological space.
Euclidean axioms leave no freedom; they determine uniquely all geometric properties of the space. More exactly: all three-dimensional Euclidean spaces are mutually isomorphic. In this sense we have "the" three-dimensional Euclidean space. In terms of Bourbaki, the corresponding theory is univalent. In contrast, topological spaces are generally non-isomorphic; their theory is multivalent. A similar idea occurs in mathematical logic: a theory is called categorical if all its models are mutually isomorphic. According to Bourbaki, the study of multivalent theories is the most striking feature which distinguishes modern mathematics from classical mathematics.
Zoo of spaces
Linear and topological spaces
Two basic species are linear spaces (also called vector spaces) and topological spaces.
Linear spaces are of algebraic nature; there are real linear spaces (over the field of real numbers), complex linear spaces (over the field of complex numbers), and more generally, linear spaces over any field. Every complex linear space is also a real linear space (the latter underlies the former), since each real number is also a complex number. Linear operations, given in a linear space by definition, lead to such notions as straight lines (and planes, and other linear subspaces); parallel lines; ellipses (and ellipsoids). However, orthogonal (perpendicular) lines cannot be defined, and circles cannot be singled out among ellipses. The dimension of a linear space is defined as the maximal number of linearly independent vectors or, equivalently, as the minimal number of vectors that span the space; it may be finite or infinite. Two linear spaces over the same field are isomorphic if and only if they are of the same dimension.
Topological spaces are of analytic nature. Open sets, given in a topological space by definition, lead to such notions as continuous functions, paths, maps; convergent sequences, limits; interior, boundary, exterior. However, uniform continuity, bounded sets, Cauchy sequences, and differentiable functions (paths, maps) remain undefined. Isomorphisms between topological spaces are traditionally called "homeomorphisms"; these are one-to-one correspondences continuous in both directions. The open interval (0,1) is homeomorphic to the whole line but not homeomorphic to the closed interval [0,1], nor to a circle. The surface of a cube is homeomorphic to a sphere (the surface of a ball) but not homeomorphic to a torus. Euclidean spaces of different dimensions are not homeomorphic, which seems evident, but is not easy to prove.
Dimension of a topological space is difficult to define; "inductive dimension" and "Lebesgue covering dimension" are used. Every subset of a topological space is itself a topological space (in contrast, only linear subsets of a linear space are linear spaces). Arbitrary topological spaces, investigated by general topology (called also point-set topology), are too diverse for a complete classification (up to homeomorphism). They are inhomogeneous (in general).
Compact topological spaces are an important class of topological spaces ("species" of this "type"). Every continuous function is bounded on such a space. The closed interval [0,1] and the extended real line are compact; the open interval (0,1) and the line are not. Geometric topology investigates manifolds (another "species" of this "type"); these are topological spaces locally homeomorphic to Euclidean spaces. Low-dimensional manifolds are completely classified (up to homeomorphism).
The two structures discussed above (linear and topological) are both underlying structures of the "linear topological space" structure. That is, a linear topological space is both a linear (real or complex) space and a (homogeneous, in fact) topological space. However, an arbitrary combination of these two structures is generally not a linear topological space; the two structures must conform, namely, the linear operations must be continuous.
Every finite-dimensional (real or complex) linear space is a linear topological space in the sense that it carries one and only one topology that makes it a linear topological space. The two structures, "finite-dimensional (real or complex) linear space" and "finite-dimensional linear topological space", are thus equivalent, that is, mutually underlying. Accordingly, every invertible linear transformation of a finite-dimensional linear topological space is a homeomorphism. In infinite dimensions, however, different topologies conform to a given linear structure, and invertible linear transformations are generally not homeomorphisms.
Affine and projective spaces
It is convenient to introduce affine and projective spaces by means of linear spaces, as follows. An n-dimensional linear subspace of an (n+1)-dimensional linear space, being itself an n-dimensional linear space, is not homogeneous; it contains a special point, the origin. Shifting it by a vector external to it, one obtains an n-dimensional affine space. It is homogeneous. In the words of John Baez, "an affine space is a vector space that's forgotten its origin". A straight line in the affine space is, by definition, its intersection with a two-dimensional linear subspace (plane through the origin) of the (n+1)-dimensional linear space. Every linear space is also an affine space.
Every point of the affine space is its intersection with a one-dimensional linear subspace (line through the origin) of the (n+1)-dimensional linear space. However, some one-dimensional subspaces are parallel to the affine space; in some sense, they intersect it at infinity. The set of all one-dimensional linear subspaces of an (n+1)-dimensional linear space is, by definition, an n-dimensional projective space. Choosing an n-dimensional affine space as before, one observes that the affine space is embedded as a proper subset into the projective space. However, the projective space itself is homogeneous. A straight line in the projective space, by definition, corresponds to a two-dimensional linear subspace of the (n+1)-dimensional linear space.
Defined this way, affine and projective spaces are of algebraic nature; they can be real, complex, and more generally, over any field.
Every real (or complex) affine or projective space is also a topological space. An affine space is a non-compact manifold; a projective space is a compact manifold.
Metric and uniform spaces
Distances between points are defined in a metric space. Every metric space is also a topological space. Bounded sets and Cauchy sequences are defined in a metric space (but not just in a topological space). Isomorphisms between metric spaces are called isometries. A metric space is called complete if all Cauchy sequences converge. Every incomplete space is isometrically embedded into its completion. Every compact metric space is complete; the real line is non-compact but complete; the open interval (0,1) is incomplete.
A topological space is called metrizable, if it underlies a metric space. All manifolds are metrizable.
Every Euclidean space is also a complete metric space. Moreover, all geometric notions immanent to a Euclidean space can be characterized in terms of its metric. For example, the straight segment connecting two given points A and C consists of all points B such that the distance between A and C is equal to the sum of two distances, between A and B and between B and C.
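Written out in symbols, this characterization of the segment reads
$$[A, C] \;=\; \{\, B \in X : d(A, B) + d(B, C) = d(A, C) \,\},$$
where $d$ is the metric of the space $X$.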
Uniform space does not introduce distances, but still allows one to use uniform continuity, Cauchy sequences, completeness and completion. Every uniform space is also a topological space. Every linear topological space (metrizable or not) is also a uniform space. More generally, every commutative topological group is also a uniform space. A non-commutative topological group, however, carries two uniform structures, one left-invariant, the other right-invariant. Linear topological spaces are complete in finite dimension but generally incomplete in infinite dimension.
Normed, Banach, inner product, and Hilbert spaces
Vectors in a Euclidean space form a linear space, but each vector also has a length, in other words, a norm, $\|x\|$. A (real or complex) linear space endowed with a norm is a normed space. Every normed space is both a linear topological space and a metric space. A Banach space is defined as a complete normed space. Many spaces of sequences or functions are infinite-dimensional Banach spaces.
The set of all vectors of norm less than one is called the unit ball of a normed space. It is a convex, centrally symmetric set, generally not an ellipsoid; for example, it may be a polygon (on the plane). The parallelogram law (called also parallelogram identity) generally fails in normed spaces, but holds for vectors in Euclidean spaces, which follows from the fact that the squared Euclidean norm of a vector is its inner product with itself.
An inner product space is a (real or complex) linear space endowed with a bilinear (or sesquilinear) form satisfying some conditions and called inner product. Every inner product space is also a normed space. A normed space underlies an inner product space if and only if it satisfies the parallelogram law, or equivalently, if its unit ball is an ellipsoid. Angles between vectors are defined in inner product spaces. A Hilbert space is defined as a complete inner product space. (Some authors insist that it must be complex, others admit also real Hilbert spaces.) Many spaces of sequences or functions are infinite-dimensional Hilbert spaces. Hilbert spaces are very important for quantum theory.
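For reference, the parallelogram law mentioned above, and the polarization identity that recovers a real inner product from such a norm (stated here for the real case), are
$$\|x + y\|^2 + \|x - y\|^2 = 2\|x\|^2 + 2\|y\|^2,$$
$$\langle x, y\rangle = \tfrac{1}{4}\bigl( \|x + y\|^2 - \|x - y\|^2 \bigr).$$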
All n-dimensional real inner product spaces are mutually isomorphic. One may say that the n-dimensional Euclidean space is the n-dimensional real inner product space that's forgotten its origin.
Smooth and Riemannian manifolds (spaces)
Smooth manifolds are not called "spaces", but could be. Smooth (differentiable) functions, paths, maps, given in a smooth manifold by definition, lead to tangent spaces. Every smooth manifold is a (topological) manifold. Smooth surfaces in a finite-dimensional linear space (like the surface of an ellipsoid, not a polytope) are smooth manifolds. Every smooth manifold can be embedded into a finite-dimensional linear space. A smooth path in a smooth manifold has (at every point) a tangent vector, belonging to the tangent space (attached to this point). Tangent spaces to an n-dimensional smooth manifold are n-dimensional linear spaces. A smooth function has (at every point) a differential, which is a linear functional on the tangent space. Real (or complex) finite-dimensional linear, affine and projective spaces are also smooth manifolds.
A Riemannian manifold, or Riemann space, is a smooth manifold whose tangent spaces are endowed with inner product (satisfying some conditions). Euclidean spaces are also Riemann spaces. Smooth surfaces in Euclidean spaces are Riemann spaces. A hyperbolic non-Euclidean space is also a Riemann space. A curve in a Riemann space has the length. A Riemann space is both a smooth manifold and a metric space; the length of the shortest curve is the distance. The angle between two curves intersecting at a point is the angle between their tangent lines.
Waiving positivity of the inner product on tangent spaces, one gets pseudo-Riemann (especially, Lorentzian) spaces, which are very important for general relativity.
Measurable, measure and probability spaces
Waiving distances and angles while retaining volumes (of geometric bodies) one moves toward measure theory. Besides the volume, a measure generalizes area, length, mass (or charge) distribution, and also probability distribution, according to Andrei Kolmogorov's approach to probability theory.
A "geometric body" of classical mathematics is much more regular than just a set of points. The boundary of the body is of zero volume. Thus, the volume of the body is the volume of its interior, and the interior can be exhausted by an infinite sequence of cubes. In contrast, the boundary of an arbitrary set of points can be of non-zero volume (an example: the set of all rational points inside a given cube). Measure theory succeeded in extending the notion of volume (or another measure) to a vast class of sets, so-called measurable sets. Indeed, non-measurable sets never occur in applications, but anyway, the theory must restrict itself to measurable sets (and functions).
Measurable sets, given in a measurable space by definition, lead to measurable functions and maps. In order to turn a topological space into a measurable space one endows it with a σ-algebra. The σ-algebra of Borel sets is most popular, but not the only choice (Baire sets, universally measurable sets etc. are used sometimes). Alternatively, a σ-algebra can be generated by a given collection of sets (or functions) irrespective of any topology. Quite often, different topologies lead to the same σ-algebra (for example, the norm topology and the weak topology on a separable Hilbert space). Every subset of a measurable space is itself a measurable space.
Standard measurable spaces (called also standard Borel spaces) are especially useful. Every Borel set (in particular, every closed set and every open set) in a Euclidean space (and more generally, in a complete separable metric space) is a standard measurable space. All uncountable standard measurable spaces are mutually isomorphic.
A measure space is a measurable space endowed with a measure. A Euclidean space with Lebesgue measure is a measure space. Integration theory defines integrability and integrals of measurable functions on a measure space.
Sets of measure 0, called null sets, are negligible. Accordingly, a "mod 0 isomorphism" is defined as an isomorphism between subsets of full measure (that is, subsets with negligible complement).
A probability space is a measure space such that the measure of the whole space is equal to 1. The product of any family (finite or not) of probability spaces is a probability space. In contrast, for measure spaces in general, only the product of finitely many spaces is defined. Accordingly, there are many infinite-dimensional probability measures (especially, Gaussian measures), but no infinite-dimensional Lebesgue measure.
Standard probability spaces are especially useful. Every probability measure on a standard measurable space leads to a standard probability space. The product of a sequence (finite or not) of standard probability spaces is a standard probability space. All non-atomic standard probability spaces are mutually isomorphic; one of them is the interval (0,1) with Lebesgue measure.
These spaces are less geometric. In particular, the idea of dimension, applicable (in one form or another) to all other spaces, does not apply to measurable, measure and probability spaces.
Harmonic spaces. Conformal spaces. Analytic (called also complex analytic) spaces. Affinely connected spaces. Algebraic spaces. Symplectic spaces.
Volume
Volume is the quantity of three-dimensional space enclosed by some closed boundary, for example, the space that a substance (solid, liquid, gas, or plasma) or shape occupies or contains. Volume is often quantified numerically using the SI derived unit, the cubic metre. The volume of a container is generally understood to be the capacity of the container, i.e., the amount of fluid (gas or liquid) that the container could hold, rather than the amount of space the container itself displaces.
Three dimensional mathematical shapes are also assigned volumes. Volumes of some simple shapes, such as regular, straight-edged, and circular shapes can be easily calculated using arithmetic formulas. The volumes of more complicated shapes can be calculated by integral calculus if a formula exists for the shape's boundary. One-dimensional figures (such as lines) and two-dimensional shapes (such as squares) are assigned zero volume in the three-dimensional space.
The volume of a solid (whether regularly or irregularly shaped) can be determined by fluid displacement. Displacement of liquid can also be used to determine the volume of a gas. The combined volume of two substances is usually greater than the volume of one of the substances. However, sometimes one substance dissolves in the other and the combined volume is not additive.
In differential geometry, volume is expressed by means of the volume form, and is an important global Riemannian invariant. In thermodynamics, volume is a fundamental parameter, and is a conjugate variable to pressure.
Any unit of length gives a corresponding unit of volume, namely the volume of a cube whose side has the given length. For example, a cubic centimetre (cm3) would be the volume of a cube whose sides are one centimetre (1 cm) in length.
In the International System of Units (SI), the standard unit of volume is the cubic metre (m3). The metric system also includes the litre (L) as a unit of volume, where one litre is the volume of a 10-centimetre cube. Thus
- 1 litre = (10 cm)3 = 1000 cubic centimetres = 0.001 cubic metres,
- 1 cubic metre = 1000 litres.
Small amounts of liquid are often measured in millilitres, where
- 1 millilitre = 0.001 litres = 1 cubic centimetre.
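These relations are simple enough to check mechanically; the minimal Python sketch below (constant and function names are chosen here for illustration) expresses every unit as its size in millilitres so the arithmetic stays exact.

```python
# Minimal sketch of the metric volume relations stated above.
# Each unit is expressed as its size in millilitres (= cubic centimetres).
MILLILITRE = 1                      # 1 mL = 1 cm^3
CUBIC_CENTIMETRE = 1
LITRE = 1000 * MILLILITRE           # 1 L = (10 cm)^3 = 1000 cm^3
CUBIC_METRE = 1000 * LITRE          # 1 m^3 = 1000 L

def convert(value, from_unit, to_unit):
    """Convert a volume, with units given as their size in millilitres."""
    return value * from_unit / to_unit

print(convert(1, LITRE, CUBIC_CENTIMETRE))  # 1000.0 cubic centimetres per litre
print(convert(1, CUBIC_METRE, LITRE))       # 1000.0 litres per cubic metre
print(convert(2.5, LITRE, CUBIC_METRE))     # 0.0025 cubic metres
```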
Various other traditional units of volume are also in use, including the cubic inch, the cubic foot, the cubic mile, the teaspoon, the tablespoon, the fluid ounce, the fluid dram, the gill, the pint, the quart, the gallon, the minim, the barrel, the cord, the peck, the bushel, and the hogshead.
Related terms
Volume and capacity are sometimes distinguished, with capacity being used for how much a container can hold (with contents measured commonly in litres or its derived units), and volume being how much space an object displaces (commonly measured in cubic metres or its derived units).
Volume and capacity are also distinguished in capacity management, where capacity is defined as volume over a specified time period. However in this context the term volume may be more loosely interpreted to mean quantity.
The density of an object is defined as mass per unit volume. The inverse of density is specific volume which is defined as volume divided by mass. Specific volume is a concept important in thermodynamics where the volume of a working fluid is often an important parameter of a system being studied.
The volumetric flow rate in fluid dynamics is the volume of fluid which passes through a given surface per unit time (for example, cubic metres per second [m³ s⁻¹]).
Volume formulas

| Shape | Volume formula | Variables |
|---|---|---|
| Cube | $a^3$ | a = length of any side (or edge) |
| Cylinder | $\pi r^2 h$ | r = radius of circular face, h = height |
| Prism | $Bh$ | B = area of the base, h = height |
| Rectangular prism | $lwh$ | l = length, w = width, h = height |
| Sphere | $\frac{4}{3}\pi r^3$ | r = radius of sphere; the formula is the integral of the surface area of a sphere |
| Ellipsoid | $\frac{4}{3}\pi abc$ | a, b, c = semi-axes of ellipsoid |
| Pyramid | $\frac{1}{3}Bh$ | B = area of the base, h = height of pyramid |
| Cone | $\frac{1}{3}\pi r^2 h$ | r = radius of circle at base, h = distance from base to tip or height |
| Parallelepiped | $abc\sqrt{1 + 2\cos\alpha\cos\beta\cos\gamma - \cos^2\alpha - \cos^2\beta - \cos^2\gamma}$ | a, b, and c are the parallelepiped edge lengths, and α, β, and γ are the internal angles between the edges |
| Any volumetric sweep | $\int_a^b A(h)\,dh$ | h = any dimension of the figure, A(h) = area of the cross-sections perpendicular to h described as a function of the position along h; a and b are the limits of integration for the volumetric sweep. (This will work for any figure if its cross-sectional area can be determined from h.) |
| Any rotated figure (washer method) | $\pi \int_a^b \bigl( R_O(x)^2 - R_I(x)^2 \bigr)\,dx$ | $R_O$ and $R_I$ are functions expressing the outer and inner radii of the figure, respectively |
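The closed-form entries in the table translate directly into code. The sketch below implements a few of them; the function and parameter names are chosen here for illustration and are not part of the original text.

```python
# Sketch: a few of the closed-form volume formulas from the table above.
import math

def volume_cube(a):
    return a ** 3

def volume_cylinder(r, h):
    return math.pi * r ** 2 * h

def volume_sphere(r):
    return 4.0 / 3.0 * math.pi * r ** 3

def volume_ellipsoid(a, b, c):
    return 4.0 / 3.0 * math.pi * a * b * c

def volume_cone(r, h):
    return math.pi * r ** 2 * h / 3.0

def volume_parallelepiped(a, b, c, alpha, beta, gamma):
    """Edge lengths a, b, c and internal angles (in radians) between the edges."""
    ca, cb, cg = math.cos(alpha), math.cos(beta), math.cos(gamma)
    return a * b * c * math.sqrt(1 + 2 * ca * cb * cg - ca**2 - cb**2 - cg**2)

# Example: cone, sphere and cylinder of radius 1 and height 2 are in the ratio 1 : 2 : 3,
# as discussed in the next section.
print(volume_cone(1, 2), volume_sphere(1), volume_cylinder(1, 2))
```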
Volume ratios for a cone, sphere and cylinder of the same radius and height

A cone, sphere and cylinder of radius r and height h (figure not available).
The above formulas can be used to show that the volumes of a cone, sphere and cylinder of the same radius and height are in the ratio 1 : 2 : 3, as follows.
Let the radius be r and the height be h (which is 2r for the sphere); then the volume of the cone is
$$\frac{1}{3}\pi r^2 h = \frac{1}{3}\pi r^2 (2r) = \frac{2}{3}\pi r^3,$$
the volume of the sphere is
$$\frac{4}{3}\pi r^3,$$
while the volume of the cylinder is
$$\pi r^2 h = \pi r^2 (2r) = 2\pi r^3,$$
so the three volumes stand in the ratio $\tfrac{2}{3} : \tfrac{4}{3} : 2 = 1 : 2 : 3$.
The discovery of the 2 : 3 ratio of the volumes of the sphere and cylinder is credited to Archimedes.
Volume formula derivations
The volume of a sphere is the integral of an infinite number of infinitesimally small circular disks of thickness dx. The calculation for the volume of a sphere with center 0 and radius r is as follows.
The surface area of each circular disk is $\pi y^2$, where $y$ is the radius of the disk.

The radius of the circular disks, defined such that the x-axis cuts perpendicularly through them, is
$$y = \sqrt{r^2 - x^2} \quad\text{or}\quad z = \sqrt{r^2 - x^2},$$
where y or z can be taken to represent the radius of a disk at a particular x value.

Using y as the disk radius, the volume of the sphere can be calculated as
$$V = \int_{-r}^{r} \pi y^2 \, dx = \int_{-r}^{r} \pi \left(r^2 - x^2\right) dx.$$
Evaluating the integral gives
$$V = \pi \left[ r^2 x - \frac{x^3}{3} \right]_{-r}^{r} = \frac{4}{3}\pi r^3.$$

This formula can be derived more quickly using the formula for the sphere's surface area, which is $4\pi r^2$. The volume of the sphere consists of layers of infinitesimally thin spherical shells, and the sphere volume is equal to
$$\int_{0}^{r} 4\pi s^2 \, ds = \frac{4}{3}\pi r^3,$$
where $s$ is the radius of a shell.
The cone is a type of pyramidal shape. The fundamental equation for pyramids, one-third times base times altitude, applies to cones as well.
However, using calculus, the volume of a cone is the integral of an infinite number of infinitesimally small circular disks of thickness dx. The calculation for the volume of a cone of height h, whose base is centered at (0,0,0) with radius r, is as follows.
The radius of each circular disk is r if x = 0 and 0 if x = h, varying linearly in between; that is, the disk radius at position x is
$$r\,\frac{h - x}{h}.$$
The surface area of the circular disk is then
$$\pi \left( r\,\frac{h - x}{h} \right)^{2} = \pi r^2\,\frac{(h - x)^2}{h^2}.$$
The volume of the cone can then be calculated as
$$V = \int_{0}^{h} \pi r^2\,\frac{(h - x)^2}{h^2}\, dx,$$
and after extraction of the constants
$$V = \frac{\pi r^2}{h^2} \int_{0}^{h} (h - x)^2\, dx.$$
Integrating gives
$$V = \frac{\pi r^2}{h^2} \left[ -\frac{(h - x)^3}{3} \right]_{0}^{h} = \frac{\pi r^2}{h^2} \cdot \frac{h^3}{3} = \frac{1}{3}\pi r^2 h.$$
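To make the disk-method computation concrete, here is a small numerical check using a simple midpoint Riemann sum; the function names and the sample radius and height are chosen here for illustration.

```python
# Numerical check of the disk-method integral for the cone volume,
# V = integral from 0 to h of pi * (r*(h - x)/h)^2 dx = (1/3) * pi * r^2 * h.
import math

def cone_volume_disks(r, h, n=100_000):
    """Approximate the cone volume by summing n thin circular disks (midpoint rule)."""
    dx = h / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * dx          # midpoint of the i-th slab along the axis
        radius = r * (h - x) / h    # disk radius varies linearly from r to 0
        total += math.pi * radius ** 2 * dx
    return total

r, h = 3.0, 5.0
print(cone_volume_disks(r, h))      # close to 47.1239
print(math.pi * r ** 2 * h / 3)     # exact formula: (1/3) * pi * r^2 * h
```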
Types and Effects of Potentially Hazardous Volcanic Events
Debris Avalanches
The term debris avalanche is used to refer to the sudden and very rapid movement of an incoherent, unsorted mass of rock and soil mobilized by gravity (Schuster and Crandell, 1984). Movement is characterized by flowage in a dry or wet state, or both. Debris avalanches commonly originate in massive rockslides which, during their movement, disintegrate into fragments ranging in size from small particles to blocks hundreds of meters across. If the avalanche has a large water content, its matrix may continue to flow downslope as a lahar after its coarser parts have come to rest.
Volcanic-debris avalanches occur occasionally at large, steep-sided volcanoes and are among the most hazardous of volcanic events (Voight and others, 1981; Crandell and others, 1984). Such avalanches form when part of a volcanic edifice fails catastrophically and moves downslope. Disruption of a volcanic cone may be the result of intrusion of magma and earthquake shaking, as at Mount St. Helens in 1980 (Voight and others, 1981), or the result of a volcanic explosion as at Bezymianny in Kamchatka, U.S.S.R., in 1956 (Gorshkov, 1959; Bogoyavlenskaya and others, 1985). Steep-sided volcanoes may also fail from other causes, e.g., after gradual weakening by hydrothermal alteration, or after heavy rains which may saturate and weaken parts of the edifice.
Debris avalanches typically produce thick hummocky deposits that can extend tens of kilometers from a volcano and cover hundreds of square kilometers. A debris avalanche that occurred at Mount Shasta between about 300,000 and 360,000 yrs ago (Crandell and others, 1984) traveled more than 64 km from the summit of the volcano, covered more than 675 km2, and had a volume of at least 45 km3 (D. R. Crandell, personal commun., 1986).
Debris avalanches can destroy everything in their paths by impact or burial beneath tens of meters
of debris. Because debris avalanches can occur with little or no warning and can travel at high speeds
(Voight and others, 1981), areas that might be affected should be evacuated if an avalanche is anticipated.
Pyroclastic Flows
Pyroclastic flows are high-density mixtures of hot, dry rock fragments and hot gases that move away from their source vents at high speeds. They may result from the explosive eruption of molten or solid rock fragments, or both, or from the collapse of vertical eruption columns of ash and larger rock fragments. Pyroclastic flows may also result from a laterally directed explosion, or the fall of hot rock debris from a dome or thick lava flow.
Rock fragments in pyroclastic flows range widely in grain size and consist of dense rock, pumice, or both. Individual pyroclastic flows, worldwide, range in length from less than one to more than 200 km, cover areas from less than one to more than 20,000 km2, and have volumes from less than 0.001 to more than 1000 km3 (Crandell and others, 1984). Pumiceous pyroclastic flows with volumes of 1-10 km3 can reach distances of several tens of kilometers from a vent and travel downslope at speeds of 50 to more than 150 km/hr (Crandell and Mullineaux, 1978), their velocity depending largely on their volume and on the steepness of slopes over which they travel. Pyroclastic flows and their deposits commonly contain rock debris and gases with temperatures of several hundred degrees Celsius (Banks and Hoblitt, 1981; Blong, 1984, p. 36).
Most pyroclastic flows consist of two parts: a basal flow of coarse fragments that moves along the ground, and a turbulent cloud of finer particles (ash cloud) that rises above the basal flow (Crandell and Mullineaux, 1978). Ash may fall from the cloud over a wide area downwind from the basal flow.
Pyroclastic flows generally follow valleys or other depressions, but can have enough momentum to overtop hills or ridges in their paths. The larger the mass of a flow and the faster it travels, the higher it will rise onto obstacles in its path. Some pumiceous pyroclastic flows erupted during the climactic eruptions of Mount Mazama (Crater Lake) about 6850 years ago moved 231 m upslope to cross a divide 17 km from the volcano (Crandell and others, 1984) and ultimately reached a downvalley distance of 60 km from the vent (Williams, 1942; Bacon, 1983).
Pyroclastic flows are extremely hazardous because of their high speeds and temperatures. Objects and structures in their paths are generally destroyed or swept away by the impact of debris or by accompanying hurricane-force winds (Blong, 1984). Wood and other combustible materials are commonly burned by the basal flow; people and animals may also be burned or killed beyond the margins of a pyroclastic flow by inhalation of hot ash and gases.
Pyroclastic flows have been erupted repeatedly at many volcanic centers in the Cascade Range
during Holocene time. Moreover, large silicic magma chambers may exist at
several volcanic centers in the Cascade Range that have had explosive eruptions of large volume (10¹-10² km³). Such eruptions can produce pyroclastic flows which could travel more than 50 km from a vent and
could be extremely destructive over wide areas. Because pyroclastic flows move at such high speeds,
escape from their paths is unlikely once they start to move; areas subject to pyroclastic flows must be
evacuated before flows are formed.
Pyroclastic Surges
Pyroclastic surges are turbulent, low-density clouds of rock debris and air or other gases that move over the ground surface at high speeds. They typically hug the ground and, depending on their density and speed, may or may not be controlled by the underlying topography. Pyroclastic surges are of two types: "hot" pyroclastic surges that consist of "dry" clouds of rock debris and gases that have temperatures appreciably above 100 degrees C, and "cold" pyroclastic surges, also called base surges, that consist of rock debris and steam or water at or below a temperature of 100 degrees C (Crandell and others, 1984).
Both hot and cold pyroclastic surges damage or destroy structures and vegetation by impact of rock
fragments moving at high speeds and may bury the ground surface with a layer of ash and coarser debris
tens of centimeters or more thick (Crandell and others, 1984). Because of their high
temperatures, hot pyroclastic surges may start fires and kill or burn people and animals. Both types of surges
can extend as far as 10 km from their source vents and devastate life and property within their paths. During
an eruption of Mont Pelee on Martinique in 1902, a cloud of hot ash and gases swept into the town of St.
Pierre at an estimated speed of 160 km/hr or more (Macdonald, 1972). About 30,000 people died within
minutes, most from inhalation of hot ash and gases. Pyroclastic surges have occurred at volcanoes in the
Cascade Range in the past and can be expected to occur again. Future cold surges
(base surges) are most likely to occur where magma can contact water at volcanic vents near lakes, those
that have crater lakes, and at vents in areas with a shallow water table.
Volcanic Blasts
Volcanic blasts are explosions which may be directed vertically or at some lower angle. Vertically directed explosions may produce mixtures of rock debris and gases that flow, motivated chiefly by gravity, down one or more sides of a volcano. Such a blast at Mount Lamington, New Guinea, in 1952 produced pyroclastic surges that moved down all sides of the volcano, killing about 3,000 people and destroying nearly everything within an area of about 230 km2 (Taylor, 1958).
A volcanic explosion that has a significant low-angle component and is principally directed toward a sector of no more than 180 degrees is referred to as a lateral blast (Crandell and Hoblitt, 1986). Such a blast may produce a mixture of rock debris and gases hundreds of meters thick that moves at high speed along the ground surface as a pyroclastic flow, a pyroclastic surge, or both. The high velocity of the mixture of rock debris and gases, which may be at least 100 m/s, is due both to the initial energy of the explosion and to gravity as the mixture moves downslope.
Lateral blasts may affect only narrow sectors or spread out from a volcano to cover a sector as broad as 180 degrees, and they can reach distances of several tens of kilometers from a vent (Crandell and Hoblitt, 1986). The resulting deposits form a blanket of blocks, lapilli, and ash that thins from a few meters near the source to a few centimeters near the margin (Hoblitt and others, 1981; Waitt, 1981; Moore and Sisson, 1981). Because they carry rock debris at high speeds, lateral blasts can devastate areas of tens to hundreds of square kilometers within a few minutes, and can destroy manmade structures and kill all living things by abrasion, impact, burial, and heat.
A lateral blast at Mount St. Helens in 1980 moved outward at a speed of at least 100 m/s (Malone and others, 1981), devastated an area of 600 km2 out to a distance of 28 km from the volcano, and killed more than 60 people (Christiansen and Peterson, 1981). A similar blast in 1956 at Bezymianny volcano, U.S.S.R., affected an area of about 500 km2 out to a distance of 30 km from the volcano (Gorshkov, 1959; Bogoyavlenskaya, and others, 1985). Both events were closely associated with debris avalanches.
Volcanic blasts are most likely at steep-sided stratovolcanoes and may occur when viscous gas-rich
magma is emplaced at a shallow level within the volcano (Bogoyavlenskaya and others, 1985). For
purposes of long-range land-use planning, Crandell and Hoblitt (1986) have suggested that circular hazard
zones with a radius of 35 km be drawn around symmetrical volcanoes where lateral blasts are possible. The
sector beyond the volcano that is most likely to be affected cannot be forecast unless and until precursory
seismic activity and deformation suggest the possible site of a lateral blast (Gorshkov, 1963; Crandell and
Hoblitt, 1986). Although short-term warnings suggested by such precursory activity obviously are not
useful for determining safe locations for fixed structures, they may allow people to evacuate threatened
areas (Crandell and Hoblitt, 1986).
Lava Flows
Lava flows are streams of molten rock that erupt relatively nonexplosively from a volcano and move downslope. The distance traveled by a lava flow depends on such variables as the effusion rate, fluidity of the lava, volume erupted, steepness of the slope, channel geometry, and obstructions in the flow's path. Basalt flows are characterized by relatively low viscosity and may reach more than 50 km from their sources; in fact, one Icelandic basalt flow reached 150 km (Williams and McBirney, 1979). Andesite flows have higher viscosity and few extend more than 15 km; however, one andesite flow of Pleistocene age in the Cascades is 80 km long (Warren, 1941). Because of their high viscosity, dacite and rhyolite lava extrusions typically form short, thick flows or domes.
Lava flows cause extensive damage or total destruction by burning, crushing, or burying everything in their paths. They seldom threaten human life, however, because of their typically slow rate of movement, which may be a few meters to a few hundred meters per hour. In addition, their paths of movement generally can be predicted. However, lava flows that move onto snow or ice can cause destructive lahars and floods, and those that move into forests can start fires. The flanks of moving lava flows typically are unstable and collapse repeatedly, occasionally producing small explosive blasts or small pyroclastic flows.
Lava flows have been erupted at many vents in the Cascade Range during Holocene time; their compositions range from basalt to rhyolite. The longest known basalt, andesite, and rhyolite lava flows erupted at Cascade volcanic centers during Holocene time are, respectively, the 45-km-long Giant Crater basalt flow at Medicine Lake volcano, the 12-km-long Schriebers Meadow andesite flow at Mount Baker, and the 2-km-long Rock Mesa rhyolite flow at South Sister. Lava flows of varied composition are likely to erupt again in the Cascade Range and will endanger all non-moveable objects in their paths.
Lava Domes
Volcanic domes are mounds that form when viscous lava is erupted slowly and piles up over the vent, rather than moving away as a lava flow. The sides of most domes are very steep and typically are mantled with unstable rock debris formed during or shortly after dome emplacement. Most domes are composed of silica-rich lava which may contain enough pressurized gas to cause explosions during dome extrusion.
The direct effects of dome eruption include burial or disruption of the preexisting ground surface by the dome itself and burial of adjacent areas by rock debris shed from the dome. Because of their high temperatures, domes may start fires if they are erupted in forested areas. Domes are extruded so slowly that they can be avoided by people, but they may endanger man-made structures that cannot be moved. The principal hazard associated with domes is from pyroclastic flows produced by explosions or collapses. Such pyroclastic flows can occur without warning during active dome growth and can move very rapidly, endangering life and property up to 20 kilometers from their sources (Miller, 1978; 1980). Such pyroclastic flows can also cause lahars if they are erupted onto snow and ice or incorporate water during movement.
Domes ranging in composition from dacite to rhyolite have been erupted repeatedly during late Pleistocene and Holocene time in the Cascade Range. Domes at several Cascade centers, including Mount St. Helens and Mount Hood, have collapsed or exploded to produce hot pyroclastic flows, some extending as far as 20 km from their sources (Miller, 1980). Lines of domes erupted at Medicine Lake and South Sister volcanoes within the last several thousand years appear to have formed over short intervals of time when vertical dikelike magma bodies reached the surface (Fink and Pollard, 1983; Scott, 1987). Dome emplacement typically follows more explosive eruptions.
Lahars
Lahars (also called volcanic debris flows or mudflows) are mixtures of water-saturated rock debris that flow downslope under the force of gravity. For simplicity in the discussions and compilations in this report, we have followed the usage of Crandell and others (1984) and used the term lahar to include both true lahars (Crandell, 1971) and downstream lahar-runout flows (Scott, 1985). Lahar-runout flows are hyperconcentrated streamflows that form by downstream transformation of lahars through loss of sediment and dilution by streamflow (Pierson and Scott, 1985; Scott, 1985, 1986). Additional dilution downstream may result in transformation of hyperconcentrated flows into normal streamflows, or floods.
Rock debris in lahars ranges in size from clay to blocks several tens of meters in maximum dimension. When moving, lahars resemble masses of wet concrete and tend to be channeled into stream valleys. Lahars are formed when loose masses of unconsolidated, wet debris become mobilized. Rocks within a volcano may already be saturated, or water may be supplied by rainfall, by rapid melting of snow or ice, or by a debris-dammed lake or crater lake. Lahars may be formed directly when pyroclastic flows or pyroclastic surges are erupted onto snow and ice, as apparently occurred in November 1985 at Nevado del Ruiz, in Colombia, where about 23,000 people lost their lives (Herd and Comité de Estudios Vulcanológicos, 1986). Lahars may be either hot or cold, depending on the temperature of the rock debris they carry.
Lahars can travel great distances down valleys, and lahar fronts can move at high speeds--as much as 100 km/hr. Lahars produced during an eruption of Cotopaxi volcano in Ecuador, in 1877, traveled more than 320 km down one valley at an average speed of 27 km/hr (Macdonald, 1972). Lahars that descended the southeast flank of Mount St. Helens in 1980 had initial flow velocities that exceeded 100 km/hr; average lahar flow velocities were about 67 km/hr over the 22.5 km traveled before the lahars entered a reservoir (Pierson, 1985). High-speed lahars may climb valley walls on the outside of bends, and their momentum may also carry them over obstacles. Lahars confined in narrow valleys, or dammed by constrictions in valleys, can temporarily thicken and fill valleys to heights of 100 m or more (Crandell, 1971).
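As a rough illustration of what these speeds mean for warning times, the Python sketch below converts distance and average speed into travel time; the first and third cases use figures quoted above, while the second is an assumed scenario for illustration.

```python
# Rough lahar travel-time estimates from the speeds quoted above.
def travel_time_minutes(distance_km, speed_km_per_hr):
    """Time for a flow front to cover a given distance at a constant average speed."""
    return distance_km / speed_km_per_hr * 60.0

cases = [
    (22.5, 67.0),    # Mount St. Helens 1980 average, per Pierson (1985)
    (10.0, 100.0),   # assumed: a fast lahar front reaching a site 10 km away
    (50.0, 27.0),    # Cotopaxi 1877 average speed, per Macdonald (1972), applied at 50 km
]
for distance, speed in cases:
    minutes = travel_time_minutes(distance, speed)
    print(f"{distance:5.1f} km at {speed:5.1f} km/hr -> {minutes:5.1f} minutes")
```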
The major hazard to human life from lahars is from burial and impact by boulders and other debris. Buildings and other property in the path of a lahar can be buried, smashed, or carried away. Because of their relatively high density and viscosity, lahars can move and carry away vehicles and other large objects such as bridges.
An inverse relation exists between the volume and length of lahars and their frequency; that is, large lahars are far less frequent than small ones. For this reason, lahar hazard progressively decreases downvalley from a volcano, and at any point along the valley, hazard from lahars decreases with increasing height above the valley floor.
Lahars have occurred repeatedly during eruptions at snow-covered volcanoes in the northwestern
U. S. during Holocene time. Large lahars originating in debris avalanches have occurred at
Mounts Shasta, Hood, St. Helens, Rainier, and Baker, and some have been caused by the failure of debris-
or moraine-dammed lakes. Small lahars are frequently generated at ice-covered volcanoes by climatic
events such as heavy rainstorms and periods of rapid snowmelt due to hot weather (Miller, 1980).
Floods
Floods related to volcanism can be produced by melting of snow and ice during eruptions of ice-clad volcanoes, by heavy rains that may accompany eruptions, and by transformation of lahars to stream flow. Floods carrying unusually large amounts of rock debris can leave thick deposits at and beyond the mouths of canyons and on valley floors leading away from volcanoes. Eruption-caused floods can occur suddenly and can be of large volume; if rivers are already high because of heavy rainfall or snow melt, such floods can be far larger than normal.
Danger from eruption-caused floods is similar to that from floods having other origins, but floods
caused by eruptions may be more damaging because of an unusually high content of sediment. The
hydrology of river systems may be altered for decades following the rapid accumulation of great quantities
of sediment (e.g., U.S. Army Corps of Engineers, 1984). Subsequent reworking of this sediment may lead
to further channel aggradation, and aggravate overbank flooding during high river stages. Floods can also
be generated by waves in lakes that overtop or destroy natural or man-made dams; such waves can be
produced by large masses of volcanic material (such as a debris avalanche or lahar) moving suddenly into the lake.
Tephra
Tephra consists of fragments of lava or rock blasted into the air by explosions or carried upward by a convecting column of hot gases (e.g., Fisher and Schmincke, 1984; Shipley and Sarna-Wojcicki, 1983). These fragments fall back to earth on and downwind from their source volcano to form a tephra, pyroclastic-fall, or volcanic "ash" deposit. Large fragments fall close to the erupting vent, and progressively smaller ones are carried farther away by wind. Dust-size particles can be carried many hundreds of kilometers from the source. Tephra deposits blanket the ground with a layer that decreases in thickness and particle size away from the source. Near the vent, tephra deposits may be tens of meters thick. According to Blong (1984), rates of drift of clouds containing ash are usually in the range of 20-100 km/hr, but can be higher where wind speeds are higher.
Tephra deposits consist of combinations of pumice, glass shards, dense-rock, and crystals that range in size from ash (< 2 mm), through lapilli (2-64 mm), to blocks (> 64 mm). Eruptions that produce tephra range from those that eject debris only a few meters into the air, to cataclysmic explosions that throw debris to heights of several tens of kilometers. Explosive eruptions that produce voluminous tephra deposits also typically produce pyroclastic flows.
Effects of tephra are closely related to the amount of material deposited and its grain size. Thickness versus distance relationships for several well-known tephra deposits in the Cascade Range are shown in Figure 3-1 (not available). Figure 3-2 (not available) shows median particle diameter versus distance from source for various tephra deposits. The relationship generally approximates an exponential one, but shows wide scatter. Within about 100 km of a vent, the median particle diameter of a tephra deposit varies by several orders of magnitude depending on the intensity of the eruption, fall velocity of particles, and velocity of the wind. Beyond several hundred kilometers, the mean particle diameter typically is silt-size (about 0.063 mm) or less, but still shows considerable variation.
Tephras generally do not completely destroy facilities or kill people; instead they adversely affect both in many ways. Tephra can be carried to great distances and in all directions; no site in the Pacific Northwest is immune from tephra hazards. The magnitude of hazard from tephra varies directly with deposit thickness. In general, deposit thickness and grain size decrease with increasing distance from a vent. However, the tephra fall from the May 18, 1980, eruption of Mount St. Helens displays a secondary maximum of tephra thickness about 300 km from the volcano (Sarna-Wojcicki and others, 1981). Carey and Sigurdsson (1982) proposed that aggregation of very fine ash into larger particles caused premature fallout at the secondary thickness maximum; they suggested that the same process may accompany other tephra eruptions. The few data points for some of the larger tephra falls, and the problems of determining original fall thicknesses from prehistoric deposits, leaves open the possibility of secondary thickness maxima in these layers. The tephra-thickness plot for Mazama tephra is a composite from many sources and suggests that a secondary thickness maximum may occur in the Mazama tephra at about 200-500 km from its vent. Alternatively, it may reflect varying methods used by different workers to determine original fall thickness.
Close to an erupting vent, the main tephra hazards to man-made structures include high temperatures, burial, and impact of falling fragments. Large blocks thrown on ballistic trajectories from an erupting vent can damage structures and kill or injure unprotected people. Most blocks will fall within 5 km of the vent (Blong, 1984), but unusually powerful explosions may throw some blocks at least twice as far (Crandell and Hoblitt, 1986). Hot tephra may set fire to forests and flammable structures, but this is not likely to be a hazard beyond a distance of 15-20 km. Structural damage can also result from the weight of tephra, especially if it is wet. A tephra layer 10 cm thick may weigh 20-100 kg/m2 when dry, but 50-150 kg/m2 when wet (Crandell and others, 1984). Also, tephra is much more cohesive when wet than when dry, and can adhere to steeper surfaces and is much more difficult to remove. Tephra 10 cm or more thick may cause buildings to collapse (Blong, 1984, p.212). Drifting of tephra by winds can locally increase accumulations and loads on sloping structures far above that resulting from unmodified fall thicknesses.
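To put the quoted loading figures in perspective, here is a back-of-envelope Python sketch; the roof area and layer thickness are assumed for illustration only, and the per-square-metre loads are the ranges quoted above.

```python
# Rough illustration of tephra loading on a flat roof, using the load ranges
# quoted above (20-100 kg/m^2 dry, 50-150 kg/m^2 wet, for a 10 cm layer).
def tephra_load_kg(thickness_m, area_m2, load_per_m2_per_10cm):
    """Total load (kg) for a tephra layer, scaling linearly with thickness."""
    return (thickness_m / 0.10) * load_per_m2_per_10cm * area_m2

roof_area = 100.0          # m^2, assumed typical flat roof
thickness = 0.10           # m of tephra, the thickness cited above

for label, load in [("dry, low", 20), ("dry, high", 100),
                    ("wet, low", 50), ("wet, high", 150)]:
    print(label, tephra_load_kg(thickness, roof_area, load), "kg")
# The totals range from 2,000 kg (dry, low) to 15,000 kg (wet, high) on this roof,
# consistent with the collapse threshold of about 10 cm cited above.
```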
At distances of tens to hundreds of kilometers, the chief hazards from tephra falls are the effects of ash on machinery and electrical equipment and on human and animal respiratory systems. Ash only 1 cm thick can impede the movement of most vehicles and disrupt transportation, communication, and utility systems (Schuster, 1981, 1983; Warrick and others, 1981). Machinery is especially susceptible to the abrasive and corrosive effects of ash (Schuster, 1981, 1983; Shipley and Sarna-Wojcicki, 1983).
Specific possible effects of airfall tephra on nuclear power plants have been outlined by Shipley and Sarna-Wojcicki (1983). They include (1) loading of structures, particularly by thick accumulations of wet tephra, (2) clogging of water and air filtering systems by influx of tephra, (3) abrasive effects of ash on machinery, (4) corrosion and shorting out of electrical systems by freshly fallen ash (Sarkinen, 1980), (5) effects of tephra accumulations on the circulation of water-cooling systems, and (6) a variety of secondary or indirect effects on maintenance and emergency systems that may be impacted by factors 1-5. Shipley and Sarna-Wojcicki (1983) also pointed out the likelihood of "cascading effects", when the impact of tephra on one function or group of functions impairs additional dependent systems, each of which may produce further cascading effects.
In addition to the specific effects discussed above, the fall of tephra may severely decrease visibility or cause darkness, which could further disrupt transportation and outdoor activities, and possibly result in psychological stress and panic even among people whose lives are not threatened (Blong, 1984). These effects could impair the ability of personnel to perform even routine tasks in areas affected by tephra fall. A wide range of compositions and volumes of tephra has been erupted during the past 15,000 years from Cascade volcanoes. These tephra deposits range in volume from the 116-km^3 Mazama tephra (Bacon, 1983; Druitt and Bacon, 1986) to those of only a few thousand cubic meters. The May 18, 1980, eruption of Mount St. Helens deposited an estimated minimum volume of 1.1 km^3 of uncompacted tephra on areas east-northeast of the volcano (Sarna-Wojcicki and others, 1981). Most tephra eruptions in the Cascade Range have produced elongate lobe-shaped deposits that extend primarily into a broad sector northeast of the source volcano owing to prevailing wind directions.
Relatively small volumes (<0.1 km^3) of basaltic and basaltic-andesite tephra have been erupted at
many vents during Holocene time. Such eruptions have been far less explosive than more silicic eruptions
and have produced cinder cones and tephra deposits that are restricted chiefly to within a few tens of
kilometers downwind. Similar small-volume eruptions of tephra are anticipated in the future at new vents
within fields of basaltic volcanism in the Cascade Range.
Emission of Volcanic Gases
All magmas contain dissolved gases that are released both during and between eruptive episodes. Volcanic gases generally consist predominantly of steam (H2O), followed in abundance by carbon dioxide and compounds of sulfur and chlorine (Wilcox, 1959; Thorarinsson, 1979). Minor amounts of carbon monoxide, fluorine and boron compounds, ammonia, and several other compounds are found in some volcanic gases.
The distribution of volcanic gases is mostly controlled by the wind; they may be concentrated near (1-10 km) a vent but become diluted rapidly downwind. Even very dilute gases can have a noticeable odor and can harm plants and some animals tens of kilometers downwind from a vent.
Within about 10 km of a vent, volcanic gases can endanger life and health as well as property.
Acids, ammonia, and other compounds present in volcanic gases can damage eyes and respiratory systems
of people and animals, and heavier-than-air gases, such as carbon dioxide, can accumulate in closed
depressions and suffocate people or animals. Corrosion of metals and other susceptible materials can also
be severe (Crandell and others, 1984; Blong, 1984).
Volcanic Seismicity
Three main sources of earthquakes in the vicinity of volcanoes (Blong, 1984) are (1) those generated by the movement of magma or by formation of cracks through which magma can move, and those resulting from gas explosions within a conduit; (2) other earthquakes that result from readjustments of a volcanic edifice following eruption or movement of magma; and (3) tectonic earthquakes, which may also facilitate the rise of magma. Volcanic earthquakes belonging to the first category rarely have Richter magnitudes greater than 5.0 (Okada and others, 1981; Latter, 1981) and generally have foci at depths of less than 10 km. Damage from such earthquakes is limited to a relatively small area (Rittmann, 1962; Shimozuru, 1972).
The relationship between volcanic activity and earthquakes of categories 2 and 3 above is less well understood. Few quantitative data are available concerning the maximum magnitude of such earthquakes, although events larger than magnitude 5 have been described. A sequence of tectonic earthquakes that occurred near Mammoth Lakes, California, in 1980 included four events of magnitude 6+ (Urhammer and Ferguson, 1981); these may have been triggered by magmatic processes (Bailey, 1981). One of the largest earthquakes of possible magmatic origin occurred at Sakura-jima volcano, Japan, in 1914. The earthquake had a focal depth of 13 km, a magnitude of 6.7 (Shimozuru, 1972), and caused considerable damage and some loss of life in Kagoshima, 10 km from the volcano. Earthquakes at least as large as magnitude 7.2 have occurred on Kilauea volcano, Hawaii (Tilling and others, 1976); however, these earthquakes are related to displacements of large sectors of the volcanic edifice rather than to a specific volcanic event (Swanson and others, 1976) and thus resemble tectonic earthquakes.
In summary, earthquakes directly associated with movement or eruption of magma seldom exceed
a magnitude of about 5.0, and structures at distances greater than a few tens of kilometers from the volcano
are not likely to be damaged by such events. Structures situated outside of the proximal-hazard
zone are not likely to be damaged by volcanic seismicity. Volcanoes located in geologic
settings that are tectonically active are likely to be at risk from tectonic earthquakes that are far larger than
volcanogenic ones. Structures sited and designed to withstand the maximum credible tectonic earthquake
should not be threatened by volcanogenic seismicity.
Atmospheric Shock Waves Induced By Eruptions
Eruption-induced atmospheric shock waves are strong compressive waves driven by rapidly moving volcanic ejecta. Although most volcanic eruptions are not associated with such waves, a number of examples are known. Some of the eruptions best known for this type of behavior are: Vesuvius, 1906 (Perret, 1912); Krakatau, 1883 (Verbeek, 1885, in Simkin and Fiske, 1983); Tambora, 1815 (Stewart, 1820); Sakura-jima, 1914 (Omori, 1916); and Asama, 1958 (Aramaki, 1956). Air-shock waves can be sufficiently energetic to damage structures far from their source. The 1815 eruption of Tambora, on the island of Sumbawa, produced a shock wave that broke windows at a distance of about 400 km (Stewart, 1820). In 1883, a barograph deflection of about 7 millibars (0.7 kPa) was recorded 150 km from Krakatau (Strachey, 1888). Air shocks can apparently couple to the ground strongly enough to cause damage to buildings at 100 km (Simkin and Howard, 1970).
Few quantitative observational data are available upon which to construct a model relating shock strength (overpressure and rate of compression), distance, and energy release. Considering the uncertainties, the simple theory of self-similar motion is adequate for a first approximation. This theory (Thompson, 1972; Landau and Lifshitz, 1959; Zeldovich and Razier, 1966) was developed for the motion of the atmosphere in response to nuclear blasts. The source pressures in volcanic explosions, however, are much lower than those in nuclear blasts.
Assume (1) the atmosphere is uniform in structure and (2) is at rest at the time of the eruption; (3) at time t = 0, a large energy, E, is released at the volcano; (4) the dimensions of the region over which E is released are small compared to the distances of interest here; (5) the resulting motion of the atmosphere is spherically symmetric. For shock pressures of 6 bars or more, the shock pressure will decay as 1/R^3:
PS = E/R^3   (1 bar = 1 x 10^6 dyne cm^-2; 1 erg = 1 dyne-cm; 1 cm = 1 x 10^-5 km)
where PS is the pressure immediately behind the shock front, R is the radial distance from the source, and E is the energy of the shock wave. Volcanologists currently consider about 500 bars as an upper limit to the initial value of PS (Self and others, 1979; Kieffer, 1981). The value here assigned to E is 5 x 10^24 ergs, the energy thought to have been dissipated in the atmosphere by the 1883 eruption of Krakatau (Press and Harkrider, 1966), perhaps the greatest explosion ever recorded. This eruption was of the same order of magnitude as the climactic eruption of Mount Mazama 6850 yr ago (Friedman and others, 1981; Simkin and others, 1981), taken here as the largest credible future eruption of a Cascade volcano. Using the Krakatau energy value,
PS = 5 x 10^24/R^3.
This equation holds approximately between the source and the radial distance at which Ps decays to about 6 bars, about 9 km. Beyond this distance, strong shock theory is inappropriate, and the pressure decays approximately as 1/R:
PS/PSS = RSS/R,
where PSS is the pressure (6 bars) at the lower limit of the strong shock regime, and RSS is the distance at the limit of the strong shock approximation. For the case considered here (E = 5 x 10^24 ergs, RSS = 9 km), PS is about 1 bar at 50 km, the radius of the proximal-hazard zone, and is about 0.4 bars at 150 km (approximately 50 times the observed value at this range at Krakatau). The wave would be calculated to decay to 0.1 bar, the threshold for damage, at about 540 km. These overpressure estimates are maximum values for at least two reasons. First, the energy we have used is probably an upper limit on the energy of Krakatau. Second, the density structure of the atmosphere, neglected in this formulation, tends to reduce the pressure by a factor of 2 to 3 in the region of a few tens to hundreds of kilometers.
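To make the two-regime decay concrete, the short Python sketch below tabulates overpressure against distance. It assumes the Krakatau-scale energy of 5 x 10^24 ergs and the 6-bar strong-shock cutoff quoted above; it illustrates the scaling only and is not a design calculation.

```python
# Overpressure vs. distance for an eruption-driven atmospheric shock wave,
# using the two-regime decay described in the text (strong shock ~ 1/R^3,
# weaker shock ~ 1/R). Units: ergs, cm, bars.
E = 5e24                 # assumed energy release, ergs (Krakatau-scale)
P_SS = 6.0               # pressure at the strong-shock limit, bars
BAR = 1e6                # 1 bar = 1e6 dyne/cm^2
KM = 1e5                 # 1 km = 1e5 cm

# Radius at which the strong-shock pressure E/R^3 falls to P_SS:
R_SS = (E / (P_SS * BAR)) ** (1.0 / 3.0)   # cm, roughly 9 km here

def overpressure_bars(r_km):
    """Shock overpressure (bars) at radial distance r_km from the vent."""
    r = r_km * KM
    if r <= R_SS:                 # strong-shock regime, P ~ 1/R^3
        return E / r**3 / BAR
    return P_SS * R_SS / r        # weaker regime, P ~ 1/R

if __name__ == "__main__":
    print(f"strong-shock limit R_SS = {R_SS / KM:.1f} km")
    for r in (10, 50, 150, 540):
        print(f"P at {r:4d} km = {overpressure_bars(r):6.2f} bar")
```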
A more empirical approach is to take the observed damage threshold distance, assume an overpressure of 0.1 bar, then calculate the overpressure at lesser distances. The 1883 eruption of Krakatau caused windows to break at 150 km from source (Verbeek, 1885, in Simkin and Fiske, 1983, p. 202). Accordingly, RSS = 2.5 km. Then, E = 9 x 10^22 ergs, and PS = 0.3 bar at 50 km. Based on the preceding analyses, a reasonable worst-case overpressure range for large eruptions of Cascade volcanoes at 50 km, the margin of the proximal-hazard zone, is about 0.3-1.0 bars.
One of the few detailed calculations of the atmospheric response to an observed volcanic eruption is that of Bannister (1984), who calculated the response of the atmosphere within 1000 km to the accelerations of the May 18 blast at Mount St. Helens. The calculated overpressures were in good agreement with barograph records observed in the range 50 to 400 km. The peak positive overpressure at 10 km was 1600 Pa (0.16 bar) and at 50 km was nearly 400 Pa (0.04 bars). These pressures are directly dependent on the initial velocity and time history of the ejecta. Since ejecta velocities substantially larger than the 147 m/s used by Bannister for the Mount St. Helens ejecta are plausible, higher overpressures for larger events are conceivable. These cannot be predicted without numerical modelling, but we believe that overpressures exceeding the Mount St. Helens example by factors of 2, 3, or 5 are plausible. This reasoning supports the above estimates of worst-case overpressures of several tenths of a bar. These estimates, however, are too poorly supported to be used as design criteria. If eruption-induced overpressures are to be considered in design, we recommend that additional research be undertaken to develop better-constrained overpressure estimates.
The coordinate systems considered here are all based at one reference point in space with respect to which the positions are measured, the origin of the reference frame (typically, the location of the observer, or the center of Earth, the Sun, or the Milky Way Galaxy). Any location in space is then described by the "radius vector" or "arrow" between the origin and the location, namely by the distance (length of the vector) and its direction. The direction is given by the straight half line from the origin through the location (to infinity). In the spherical coordinate systems used here, the direction is fixed by two angles, which are given as follows:
A reference plane containing the origin is fixed, or equivalently the axis through the origin and perpendicular to it (typically, an "equatorial" plane and a "polar" axis); elementarily, each of these uniquely determines the other. One can assign an orientation to the polar axis from "negative" to "positive", or "south" to "north", and simultaneously to the equatorial plane by assigning a positive sense of rotation to the equatorial plane; these orientations are, by convention, usually combined by the right hand rule: if the thumb of the right hand points to the positive (north) polar axis, the fingers point in the positive direction of rotation (and vice versa, so that a physical rotation defines a north direction).
The reference plane or the reference axis define the set of planes which contain the origin and are perpendicular to the "equatorial" reference plane (or equivalently, contain the "polar" reference axis); each direction in space then lies precisely in one of these "meridional" planes (or half planes, if the reference axis is taken to divide each plane into halves), with the exception of the (positive and negative) polar axis which lies in all of them by definition.
The first angle used to characterize a direction, typically the "latitude", is taken between the direction and the reference plane, within the "meridional" plane. For the second angle, it is required to select and fix one of the "meridional" half planes as zero, from which the angle (of "longitude") is measured to the "meridional" half plane containing our direction.
Note that this selection of angles to characterize a direction in a given reference frame is a convention; it is the one commonly used in astronomy and geography, it is used in the following, and it is adopted by most astronomical databases. Other, equivalent, conventions are possible: physicists, for example, often use, instead of the "latitude" angle to the reference plane, the angle between the direction and the "positive" or "north" polar axis (called the "co-latitude"; co-latitude = 90 deg - latitude). Which convention to use is ultimately a matter of taste, but here we stay as close to standard astronomical convention as possible, and, to minimize case-by-case enumeration of conventions, we recommend that the reader do the same.
The natural reference plane here is that of the Earth's equator, and the natural reference axis is the rotational, polar axis which cuts the Earth's surface at the planet's North and South pole. The circles along Earth's surface which are parallel to the equator are the latitude circles, where the angle at the planet's center is constant for all points on these circles. Half circles from pole to pole, which are all perpendicular to the equatorial plane, are called meridians. One of the meridians, in practice that through the Greenwich Observatory near London, England, is taken as the reference meridian, or prime meridian. Geographical longitude is measured as the angle between this and the meridian under consideration (or more precisely, between the half planes containing them); it is of course the same for all points of the meridian.
Because Earth is not exactly spherical, but slightly flattened, its surface (defined by the ocean surface, or the corresponding gravitational potential) forms a specific figure, the so-called Geoid, which is very similar to a slightly oblate spheroid (the reference ellipsoid). This is the reason why there are two common but different definitions of latitude on Earth: the geographic (or geodetic) latitude B, the angle between the local vertical (the normal to the reference ellipsoid) and the equatorial plane, and the geocentric latitude B', the angle between the line from Earth's center to the location and the equatorial plane.
Taking any of the meridional planes, the meridian has the approximate shape of a half ellipse. The major half axis represents the equatorial radius of the planet, while the minor axis is the polar (and thus the rotational) half axis, which is about 1/298 shorter than the equatorial radius. More precisely:
tan B' = (b/a)^2 * tan B
The maximal difference between the two latitudes occurs for B = 45 deg and amounts to 11.5 arc minutes.
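As a quick numeric illustration (not part of the original text), the relation between the two latitudes can be evaluated directly; the Python sketch below assumes the 1/298 flattening quoted above.

```python
import math

# Geocentric latitude B' from geographic (geodetic) latitude B, using
# tan B' = (b/a)^2 * tan B with the ~1/298 flattening quoted in the text.
f = 1.0 / 298.0
b_over_a = 1.0 - f            # ratio of polar to equatorial radius

def geocentric_latitude_deg(B_deg):
    """Geocentric latitude (deg) corresponding to geographic latitude B_deg."""
    B = math.radians(B_deg)
    return math.degrees(math.atan(b_over_a**2 * math.tan(B)))

if __name__ == "__main__":
    for B in (0, 30, 45, 60):
        Bp = geocentric_latitude_deg(B)
        print(f"B = {B:2d} deg -> B' = {Bp:7.3f} deg "
              f"(difference {60*(B - Bp):5.2f} arc min)")
    # The difference peaks near B = 45 deg at about 11.5 arc minutes.
```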
In the following, we always deal with geographic latitude unless otherwise mentioned.
Thus each observer can look at the skies as being manifested on the interior of a big sphere, the so-called celestial sphere. Then each direction away from the observer will intersect the celestial sphere in one unique point, and positions of stars and other celestial objects can be measured in angular coordinates (similar to longitude and latitude on Earth) on this virtual sphere. This can be done without knowing the actual distances of the stars. Moreover, any plane through the origin cuts the sphere in a great circle. Examples for celestial coordinate systems are treated below.
In times up to Copernicus, people believed that there is actually a
solid sphere to which the stars beyond the solar system are fixed: This idea
was overcome when it was realized that stars are sunlike bodies, in the time
of Newton and Halley. Today, the celestial sphere is only a virtual
construct to make our understanding of positional astronomy easier.
Through any direction, or point on the celestial sphere, e.g. the position of a star, a unique [half] plane (or great [half] circle) perpendicular to the horizon can be found; this is called the vertical circle; all vertical [half] circles contain (and intersect in) both the zenith and the nadir. Within the plane of its vertical circle, the position under consideration can be characterized by the angle to the horizon, called altitude a. Alternatively and equivalently, one could take the angle between the direction and the zenith, the zenith distance z, which is related to the altitude by the relation: z = 90 deg - a. All objects above the horizon have positive altitudes (or zenith distances smaller than 90 deg). The horizon itself can be defined, or recovered, as the set of all points for which a = 0 deg (or z = 90 deg).
In contrast to the apparent horizon which defines coordinates of objects as the observer perceives them, the true horizon is defined by the plane parallel to the apparent horizon, but through the center of Earth. The angle between the position of an object and the true horizon is referred to as true altitude. For nearby objects such as the Moon, the measured position can vary notably between these two reference systems (up to 1 deg for the Moon). Also, the apparent altitudes are subject to the effect of refraction by Earth's atmosphere.
The second coordinate of a position in the horizon system is defined by the point where the vertical circle of the position cuts the horizon. It is called azimuth A and, in astronomy and on the Northern hemisphere (the present author does not know the southern standards for this thread), is the angle from the south point (or direction) taken to the west, north, and east to the foot point of the vertical circle on the horizon, thus running from 0 to 360 deg. In geodesy, the north direction is often taken as zero point (this angle is sometimes called bearing and is given by A +/- 180 deg). Note that these conventions are not always uniquely used so that it may be advisable to clear up which conventions are used (e.g., by saying A is taken to the West).
Taking the astronomical standard, the south, west, north, and east points on the horizon are defined by A = 0 deg, 90 deg, 180 deg, and 270 deg, respectively. The vertical circle passing through the south and north point (as well as zenith and nadir) is called local meridian; the one perpendicular to it through west point, zenith, east point and nadir is called prime vertical. The local meridian coincides with the projection of the geographical meridian of the observer's location to the sky (celestial sphere) from Earth's center.
The terms introduced here are helpful in understanding the effects of Earth's rotation.
In principle, the celestial coordinate system can be introduced in the simplest way by projecting Earth's geocentric coordinates to the sky at a certain moment of time (actually, each time when star time is 0:00 at Greenwich or anywhere on the zero meridian on Earth, which occurs once each sidereal day); the reader will hopefully understand this statement after reading this section. These coordinates are then left fixed at the celestial sphere, while Earth will rotate away below them.
Practically, projecting Earth's equator and poles to the celestial sphere by imagining straight half lines from the Earth's center produces the celestial equator as well as the north and the south celestial pole. Great circles through the celestial poles are always perpendicular to the celestial equator and called hour circles for reasons explained below.
The first coordinate in the equatorial system, corresponding to the latitude, is called Declination (Dec), and is the angle between the position of an object and the celestial equator (measured along the hour circle). Alternatively, sometimes the polar distance (PD) is used, which is given by PD = 90 deg - Dec; the most prominent reference known to the present author using PD instead of Dec is John Herschel's General Catalogue of Non-stellar Objects (GC) of 1864, but this (equivalent) alternative has come more and more out of use since, so that virtually all current astronomical databases use Dec.
It remains to fix the zero point of the longitudinal coordinate, called Right Ascension (RA). For this, the intersection points of the equatorial plane with Earth's orbital plane, the ecliptic, are taken, more precisely the so-called vernal equinox or "First Point of Aries". During the year, as Earth moves around the Sun, the Sun appears to move through this point each year around March 21 when spring begins on the Northern hemisphere, and crosses the celestial equator from south to north (Southerners are asked to forgive a certain amount of "hemispherism" in the official nomenclature). The opposite point is called the "autumnal equinox", and the Sun passes it around September 23 when it returns to the Southern celestial hemisphere. As a longitudinal coordinate, RA can take values between 0 and 360 deg. However, this coordinate is more often given in time units hours (h), minutes (m), and seconds (s), where 24 hours correspond to 360 degrees (so that RA takes values between 0 and 24 h); the correspondence of units is as follows:
24 h = 360 deg
1 h = 15 deg, 1 m = 15', 1 s = 15"
1 deg = 4 m, 1' = 4 s
So the vernal equinox, where the Sun appears to be when Northern spring begins around March 21, is at RA = 0 h = 0 deg, the summer solstice where the Sun is when Northern summer begins around June 21, is at RA = 6 h = 90 deg, the autumnal equinox is at RA = 12 h = 180 deg, and the winter solstice is at RA = 18 h = 270 deg. Thus RA is measured from west to east in the celestial sphere.
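For readers who prefer to automate the unit conversion, a minimal Python helper is sketched below; the function names are of course arbitrary.

```python
def ra_hms_to_degrees(h, m, s=0.0):
    """Convert Right Ascension in hours, minutes, seconds to degrees
    (24 h = 360 deg, so 1 h = 15 deg, 1 m = 15', 1 s = 15")."""
    return 15.0 * (h + m / 60.0 + s / 3600.0)

def degrees_to_ra_hms(deg):
    """Convert degrees to RA hours, minutes, seconds."""
    hours = (deg % 360.0) / 15.0
    h = int(hours)
    m = int((hours - h) * 60)
    s = (hours - h - m / 60.0) * 3600.0
    return h, m, s

if __name__ == "__main__":
    print(ra_hms_to_degrees(18, 0))      # winter solstice: 270.0 deg
    print(degrees_to_ra_hms(90.0))       # summer solstice: (6, 0, 0.0)
```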
Because of small periodic and secular changes of the rotation axis of Earth, especially precession, the vernal equinox is not constant but varies slowly, so that the whole equatorial coordinate system is slowly changing with time. Therefore, it is necessary to give an epoch (a moment of time) for which the equatorial system is taken; currently, most sources use epoch 2000.0, the beginning of the year 2000 AD.
To go over from equatorial coordinates fixed to the stars to the horizon system, the concept of the hour angle (HA) is useful. In principle, this means introducing a new, second equatorial coordinate system which co-rotates with Earth. This system has again the celestial equator and poles as reference quantities, and declination as latitudinal coordinate, but a co-rotating longitudinal coordinate called hour angle. In this system, a star or other celestial object moves contrary to Earth's rotation along a circle of constant declination during the course of the day; various effects of this diurnal motion are discussed below. This rotation leaves the celestial poles in the same invariant position for all time: they always stay on the local meridian of the observer (which goes through the south and north point also), and the altitude of the north celestial pole is equal to the geographic latitude of the observer (thus negative for southerners, who cannot see it for this reason, but the south celestial pole instead). This meridian always coincides with an hour circle for this reason. Thus, as may be suggestive, the local meridian is taken as the hour circle for HA = 0.
Celestial objects are at constant RA, but change their hour angle as time proceeds. If measured in units of hours, minutes, and seconds, HA changes by the same amount as the elapsed time interval, measured in star time (ST), which is defined so that a sidereal rotation of Earth takes 24 hours star time, corresponding to 23 h 56 m 4.091 s standard (mean solar) time; see our article on Astronomical Time Keeping for more details. This is actually the reason why RA and HA are measured in time units. The standard convention is that HA is measured from east to west so that it increases with time, which is opposite to the convention for RA!
Star time is ST = 0 h by definition whenever the vernal equinox, RA = 0 h, crosses the local meridian, HA = 0. As time proceeds, RA stays constant, and both HA and ST grow by the amount of time elapsed, thus star time is always equal to the hour angle of the vernal equinox. Moreover, objects with "later" RA come into the meridian HA = 0, more precisely with RA which is later by the amount of elapsed star time, so that also star time is equal to the current Right Ascension of the local meridian.
More generally, for any object in the sky, the following relation between right ascension, hour angle, and star time always holds:
HA = ST - RA
(here given to determine the current HA from known RA and ST).
cos Dec * sin HA = cos a * sin A
sin Dec = sin B * sin a - cos B * cos a * cos A
cos Dec * cos HA = cos B * sin a + sin B * cos a * cos A
The inverse transformation formulae from given HA, Dec to A, a read:
cos a * sin A = cos Dec * sin HA
sin a = sin B * sin Dec + cos B * cos Dec * cos HA
cos a * cos A = - cos B * sin Dec + sin B * cos Dec * cos HA
For practical calculation in either case, evaluate e.g. the second formula first to obtain Dec or a, and then use the result in the first formula to get HA or A, respectively. (Get HA from or transform it to Right Ascension according to the relation given at the end of the last section if star time is known.)
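A minimal Python sketch of these two transformations is given below. It follows the conventions used here (azimuth counted from the south point toward the west, hour angle increasing toward the west); the two-argument arctangent, which does not appear in the formulas above, is used only to recover the correct quadrant.

```python
import math

def equatorial_to_horizontal(HA_deg, Dec_deg, B_deg):
    """Hour angle, declination -> azimuth (from south, through west), altitude.
    B_deg is the observer's geographic latitude."""
    HA, Dec, B = map(math.radians, (HA_deg, Dec_deg, B_deg))
    sin_a = math.sin(B) * math.sin(Dec) + math.cos(B) * math.cos(Dec) * math.cos(HA)
    a = math.asin(sin_a)
    y = math.cos(Dec) * math.sin(HA)                                             # cos a * sin A
    x = -math.cos(B) * math.sin(Dec) + math.sin(B) * math.cos(Dec) * math.cos(HA)  # cos a * cos A
    A = math.degrees(math.atan2(y, x)) % 360.0
    return A, math.degrees(a)

def horizontal_to_equatorial(A_deg, a_deg, B_deg):
    """Azimuth (from south, through west), altitude -> hour angle, declination."""
    A, a, B = map(math.radians, (A_deg, a_deg, B_deg))
    sin_dec = math.sin(B) * math.sin(a) - math.cos(B) * math.cos(a) * math.cos(A)
    Dec = math.asin(sin_dec)
    y = math.cos(a) * math.sin(A)                                                # cos Dec * sin HA
    x = math.cos(B) * math.sin(a) + math.sin(B) * math.cos(a) * math.cos(A)      # cos Dec * cos HA
    HA = math.degrees(math.atan2(y, x)) % 360.0
    return HA, math.degrees(Dec)

if __name__ == "__main__":
    # An object on the celestial equator crossing the local meridian (HA = 0),
    # seen from latitude 50 deg N, stands due south at altitude 40 deg.
    print(equatorial_to_horizontal(0.0, 0.0, 50.0))   # ~ (0.0, 40.0)
    print(horizontal_to_equatorial(0.0, 40.0, 50.0))  # ~ (0.0, 0.0)
```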
In the course of this diurnal motion, stars cross the local meridian (defined e.g. by zero hour angle HA) twice a day; these events are called transits or culminations, i.e., the upper and the lower transit, or the upper and the lower culmination. These events also mark the maximal and minimal altitude a the objects can reach in the observer's sky, and may both take place above or below the horizon of the observer, depending on the declination Dec of the object and the geographic latitude B of the observer.
The altitudes for upper transits are as follows:
a = 90 deg - |B - Dec|
where the transit takes place north of the zenith if Dec > B and south otherwise. If |B - Dec| > 90 deg, the upper transit will take place at negative altitude, i.e. below the horizon, so that the object will never come above the horizon and thus never be visible; for the Northern hemisphere, this is true for all objects with
Dec < B - 90 deg (< 0),
and for the Southern hemisphere for
Dec > B + 90 deg (> 0).
The altitudes for the lower transit are given by
a = (B + Dec) - 90 deg    (B > 0, North)
a = - (B + Dec) - 90 deg  (B < 0, South)
For an observer on the Northern hemisphere, stars with Dec > 90 deg - B (> 0), and for Southern hemisphere observers, stars with Dec < - 90 deg - B (< 0), will have their lower transit at positive altitudes, i.e., above the horizon, and will never set; such stars are called circumpolar.
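The transit altitudes and the visibility conditions above are easy to combine into a small Python routine; the latitude used in the example is an arbitrary choice.

```python
def upper_transit_altitude(B, Dec):
    """Altitude (deg) at upper culmination for latitude B and declination Dec."""
    return 90.0 - abs(B - Dec)

def lower_transit_altitude(B, Dec):
    """Altitude (deg) at lower culmination."""
    return (B + Dec) - 90.0 if B > 0 else -(B + Dec) - 90.0

def visibility(B, Dec):
    """Classify an object as circumpolar, never visible, or rising and setting."""
    if upper_transit_altitude(B, Dec) <= 0:
        return "never visible"
    if lower_transit_altitude(B, Dec) >= 0:
        return "circumpolar"
    return "rises and sets"

if __name__ == "__main__":
    B = 48.0   # observer's latitude in degrees north (example value)
    for Dec in (89, 30, -30, -60):
        print(Dec, upper_transit_altitude(B, Dec), visibility(B, Dec))
```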
All stars which are neither circumpolar nor never visible will have their upper transit above and their lower transit below the horizon, and thus rise and set during a sidereal day. Disregarding refraction effects, the hour angle of the rise and set of a celestial object, the semidiurnal arc H0, is given by
cos H0 = - tan Dec * tan B
while the azimuth of the rising and setting points, the evening and morning elongation A0, is
cos A0 = - sin Dec / cos B
where A0 > 90 deg if Dec and B have the same sign (i.e., are on the same hemisphere). Rising and setting times differ from transit time by the amount of the semidiurnal arc H0, given in time units (hours), taken as hours of star time.
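A short Python sketch of the semidiurnal arc and the rising/setting azimuth follows; the solstice declination used in the example is only illustrative, and refraction is ignored as in the text.

```python
import math

def semidiurnal_arc_hours(Dec_deg, B_deg):
    """Hour angle H0 of rising/setting, in hours of star time; returns None
    if the object is circumpolar or never rises (|tan Dec * tan B| > 1)."""
    x = -math.tan(math.radians(Dec_deg)) * math.tan(math.radians(B_deg))
    if abs(x) > 1.0:
        return None
    return math.degrees(math.acos(x)) / 15.0

def rising_setting_azimuth(Dec_deg, B_deg):
    """Azimuth A0 of the setting point, measured from south through west as
    in the text; valid only for objects that actually rise and set."""
    x = -math.sin(math.radians(Dec_deg)) / math.cos(math.radians(B_deg))
    return math.degrees(math.acos(x))

if __name__ == "__main__":
    # The Sun near the June solstice (Dec ~ +23.4 deg) from latitude 48 deg N:
    H0 = semidiurnal_arc_hours(23.4, 48.0)
    print(f"semidiurnal arc: {H0:.2f} h (about {2*H0:.1f} h above the horizon)")
    print(f"setting azimuth: {rising_setting_azimuth(23.4, 48.0):.1f} deg")
```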
If Dec and B have the same sign (i.e., are on the same hemisphere), one of the following situations occurs:
If |Dec| < |B|, the object crosses the prime vertical (i.e., passes due east and due west); the altitude and hour angle of these crossings are given by
sin a = sin Dec / sin B
cos HA = tan Dec * cot B
If |Dec| > |B|, the object never reaches the prime vertical but attains a greatest elongation in azimuth from the meridian; at that moment
sin a = sin B / sin Dec
cos HA = cot Dec * tan B
The ecliptic latitude (be) is defined as the angle between a position and the ecliptic and takes values between -90 and +90 deg, while the ecliptic longitude (le) is again starting from the vernal equinox and runs from 0 to 360 deg in the same eastward sense as Right Ascension.
The obliquity, or inclination of Earth's equator against the ecliptic, amounts to eps = 23 deg 26' 21.448" (epoch 2000.0) and changes very slightly with time, due to gravitational perturbations of Earth's motion. Knowing this quantity, the transformation formulae from equatorial to ecliptical coordinates are quite simply given (mathematically, by a rotation around the "X" axis pointing to the vernal equinox by angle eps):
cos be * cos le = cos Dec * cos RA
cos be * sin le = cos Dec * sin RA * cos eps + sin Dec * sin eps
sin be = - cos Dec * sin RA * sin eps + sin Dec * cos eps
and the reverse transformation:
cos Dec * cos RA = cos be * cos le
cos Dec * sin RA = cos be * sin le * cos eps - sin be * sin eps
sin Dec = cos be * sin le * sin eps + sin be * cos eps
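These two rotations can be coded directly; the Python sketch below uses the epoch 2000.0 obliquity quoted above and, again, the two-argument arctangent (an addition, not part of the formulas) to place the longitude or right ascension in the correct quadrant.

```python
import math

EPS = math.radians(23.43929)   # obliquity of the ecliptic, epoch 2000.0

def equatorial_to_ecliptic(RA_deg, Dec_deg):
    """RA, Dec (degrees) -> ecliptic longitude le, latitude be (degrees)."""
    RA, Dec = math.radians(RA_deg), math.radians(Dec_deg)
    sin_be = -math.cos(Dec) * math.sin(RA) * math.sin(EPS) + math.sin(Dec) * math.cos(EPS)
    be = math.asin(sin_be)
    y = math.cos(Dec) * math.sin(RA) * math.cos(EPS) + math.sin(Dec) * math.sin(EPS)
    x = math.cos(Dec) * math.cos(RA)
    le = math.degrees(math.atan2(y, x)) % 360.0
    return le, math.degrees(be)

def ecliptic_to_equatorial(le_deg, be_deg):
    """Ecliptic longitude, latitude (degrees) -> RA, Dec (degrees)."""
    le, be = math.radians(le_deg), math.radians(be_deg)
    sin_dec = math.cos(be) * math.sin(le) * math.sin(EPS) + math.sin(be) * math.cos(EPS)
    Dec = math.asin(sin_dec)
    y = math.cos(be) * math.sin(le) * math.cos(EPS) - math.sin(be) * math.sin(EPS)
    x = math.cos(be) * math.cos(le)
    RA = math.degrees(math.atan2(y, x)) % 360.0
    return RA, math.degrees(Dec)

if __name__ == "__main__":
    # The summer solstice point (RA = 90 deg, Dec = +eps) maps to ecliptic
    # longitude 90 deg and latitude 0, and back again.
    print(equatorial_to_ecliptic(90.0, 23.43929))
    print(ecliptic_to_equatorial(90.0, 0.0))
```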
Ecliptical coordinates are most frequently used for solar system calculations such as planetary and cometary orbits and appearances. For this purpose, two ecliptical systems are used: The heliocentric coordinate system with the Sun in its center, and the geocentric one with the Earth in its origin, which can be transferred into each other by a coordinate translation.
Here, the galactic plane, or galactic equator, is used as reference plane. This is the great circle of the celestial sphere which best approximates the visible Milky Way. For historical reasons, the direction from us to the Galactic Center has been selected as zero point for galactic longitude l, and this was counted toward the direction of our Sun's rotational motion which is therefore at l = 90 deg. This sense of rotation, however, is opposite to the sense of rotation of our Galaxy, as can be easily checked ! Therefore, the galactic north pole, defined by the galactic coordinate system, coincides with the rotational south pole of our Galaxy, and vice versa.
Galactic latitude b is the angle between a position and the galactic equator and runs from -90 to +90 deg. Galactic longitude runs of course from 0 to 360 deg.
The galactic north pole is at RA = 12:51.4, Dec = +27:07 (2000.0), the galactic center at RA = 17:45.6, Dec = -28:56 (2000.0). The inclination of the galactic equator to Earth's equator is thus 62.9 deg. The intersection, or node line of the two equators is at RA = 18:51.4, Dec = 0:00 (2000.0), and at l = 33 deg, b=0.
The transformation formulae for this frame get more complicated, as the transformation consists of (1) a rotation around the celestial polar axis by 18:51.4 hours, so that the reference zero longitude matches the node, (2) a rotation around the node by 62.9 deg, followed by (3) a rotation around the galactic polar axis by 33 deg so that the zero longitude meridian matches the galactic center. This complicated transformation will not be given here formally.
Before 1959, the intersection line had been taken as zero galactic longitude, so that the old longitude differed from the new one by 33.0 deg (the longitude of the node just discussed, but for the celestial equator of the epoch 1950.0):
l(old) = l(new) - 33.0 deg
For a transition time, the old coordinate had been assigned a superscript "I", the new longitude a superscript "II", which can be found in some literature.
For some considerations, besides the geo- or heliocentric galactic coordinates described above, galactocentric galactic coordinates are useful, which have the galactic center in their origin; these can be obtained from the helio/geocentric ones by a parallel translation.
Angles are most often measured in degrees (deg), arc minutes (arc min, ') and arc seconds (arc sec, "), where
1 deg = 60' = 3,600"and the full circle, or revolution, is 360 deg. Mathematicians and physicists often use units of arc instead, where the full circle (i.e., 360 deg) is given by 2 pi, so that
pi = 180 deg = 10,800' = 648,000"
1 deg = pi/180 = 1/57.2958 = 0.0174533
1' = pi/10,800 = 1/3,437.75 = 0.000290888
1" = pi/648,000 = 1/206,265 = 0.00000484813
Origin is a convenient data analysis and graphics program that runs in Windows on PCs. You can use Origin to plot data, transform raw data to more meaningful quantities through column-based calculations, compare data to a theoretical model using linear and nonlinear least-squares fitting, and determine the quantitative agreement between the data and model.
You may type data directly into a data sheet or import data from the clipboard, from a text file, from an Excel data sheet, or from a large variety of other file formats. The action starts in the File|Import menu item, and you can learn about various file formats in the online help. Instructions for importing common kinds of data follow here.
Bear in mind this important point: the basic unit of data is the column. In Origin, a column may be designated to represent X values, Y values, Z values, X error bars, Y error bars, or labels. By default, the first column is called A(X), the second is B(Y), and additional columns may be added using Column|Add New Columns... You can set the function of a column using the column box obtained by double-clicking on the column head, or with the pop-up menu obtained by right-clicking on the column head.
If the data exist in some other program, copy them to the clipboard, switch to Origin, select the upper left corner of the region of a data sheet into which you wish to place the data, and paste. If the data are not tab-delimited (e.g., comma separated values), this method does not work. Save the data in a text file and proceed with the following, instead.
For simple files, you can use the File|Open command to access the text file. Select the kind of file from the popup menu, then pick the file. If Origin's default options for parsing the text file don't work, try the File|Import command and provide more information about the structure of your data file.
Origin 7.5 can work with data from Excel without having to import it into an Origin worksheet. Just open the Excel file with File|Open Excel.... On the other hand, performance may be better if you import the data. Just use copy and paste.
You can enter values into a column using a formula based on another column.
Residuals are the difference between the actual data points and the fitted line or curve. To compute residuals, you must first perform the fit. In version 3.5 or higher, you then select the same fit function from the fitting menu to bring up the fit dialog, which will now contain a popup View menu of options. Select Paste Residuals to Data Window... A column is appended to the data sheet linked to the graph, and given the title Residuals. (A new column is always generated, even if you already have a column called Residuals.)
To produce a panel of residuals on the graph, you must use the Double Y style. If you already have the graph made using another style, bring it to the front, go back to the Gallery menu and select Double Y and put the column of residuals on the Y2 axis. Then click to Replot the graph. Unfortunately, you have to fix up all the error bars again.
A data set is placed in a single column. Each Y column is associated with the nearest X column to its left. These associations are indicated by affixing a number after the Y in the column heading. For example, a column marked Y2 is associated with the X2 column. In each layer, you can have multiple X columns and multiple Y columns. Unless you specify otherwise, a Y column will be automatically graphed against its associated X column. In addition, you can have x and y error columns for each X or Y data set. Note that version 5 allows you to plot data from more than one data sheet on a plot.
Once you have entered your data, do the following to create a graph:
The procedure outlined above only works if all your columns are contiguous. If they are not, the following alternate procedure is necessary. Please note that all data sets that form a single data plot must be on the same worksheet.
If you want to change the size of the layer, do it before adding any labels, so that you can pick a font size that fits the graph. There are two ways to adjust the size the layer or move it around on the page. The first is:
The other is:
By default, Origin will put a frame around the plot area only if you plot the scatter type. If you made some other type of graph, you will need to add one. Here's how:
By default, Origin adjusts the range of each axis automatically. You can override its choice as follows:
Data points should be plotted as individual points with a symbol size that makes sense for the number of data points in the plot and the plot size. There should not be a line connecting successive points. Points should be shown with error bars, if available.
The easiest way to put error bars on a plot is to "bless" the appropriate column(s) of errors before creating the plot. You can bless the column by right-clicking in the column head and using the Set As... command. Alternatively, you can add one or more error bar columns to a data set after the graph is made using the Plot | Add Error Bars... command.
If the positive-going error bar differs from the negative-going one, you need to have two error bar columns in your data sheet. Bless both of them as y error bars, as described in the previous section. Then make the plot. (Or add one or more error bar columns to an existing graph.) You will see two overlapping sets of error bars on your data series.
Now double-click on the error bar for a data point on the graph. A dialog opens that shows the name of the error bar column and has (among other items) check boxes positive and negative. Uncheck one or the other. Then repeat this for the second error bar on a data point.
Adding a function graph
Function graphs are exactly what their name implies: graphs of functions you specify. They are most useful for adding a theoretical curve to a plot of experimental data. The only restriction on the types of graphs is that y must be an explicit function of x which can be represented using Origin's built-in functions. They can be added as follows:
You may add additional text labels using the text tool ("T" in the Toolbox) and add lines, with or without arrows, with the line tool. Labels you don't want can be deleted by selecting them and pressing Del. In any text editing box, there are several buttons which can be used to embellish your text:
These may be used in one of two ways. One is to select text already written and then click on the button. The other is to click the button, type your text, and then end the effect by either clicking on Normal or pressing Right-Arrow.
These are some of the most important Greek letters:
Alpha: a. Beta: b. Gamma: g. Delta: d. Epsilon: e. Mu: m. Chi: c.
Theta: q. Phi: f. Pi: p. Nu: n. Lambda: l. Omega: w. Psi: y.
The method described below makes use of Origin's built in linear regression tool. This has the advantages of being quick and easy, but has the disadvantage of ignoring the uncertainties (errors) in your data. It does not calculate a meaningful χ2, so you cannot readily determine how confident you can be of the fit. In general, you should define an appropriate fitting function, as described in Fitting to an Arbitrary Curve.
The result will appear in the Script Window. You will need to enlarge the window and scroll up several lines in order to see it. To enlarge a window in Windows, click on the lower right-hand corner and drag it to the new size. You can cut and paste the results from there into a text label on the plot as follows:
The arbitrary curve fitter (called NLSF for Nonlinear Least-Squares Fitter) in Origin is both powerful and complex. Consult the Origin manual for a complete description of its capabilities. The following section will simply provide a tutorial for basic operation.
Warning: unless you set the options correctly, Origin 5 will NOT use your uncertainties, even though they appear on the graph, and will give incorrect values for χ2 and the uncertainties in the fit parameters. Follow the instructions below carefully to be sure Origin does your fit correctly.
The example will be a linear fit function of the form y = mx + b. This function has two free parameters, namely m and b.
Nonlinear curve fitting is a tricky business. Most often its success rides on choosing initial guesses for the parameters that are close to the best-fit values. If they are too far away, the process may get stuck in a local minimum, unable to find the best fit.
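Although the steps above are specific to Origin's menus, the underlying weighted least-squares fit is easy to reproduce elsewhere. The Python/SciPy sketch below (not an Origin feature) shows what a fit to y = mx + b with y uncertainties and a meaningful chi-squared looks like; the data values are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative data: roughly y = m*x + b, with uncertainties dy (values invented).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])
dy = np.array([0.2, 0.2, 0.3, 0.3, 0.4])

def line(x, m, b):
    return m * x + b

# sigma plus absolute_sigma=True weights each point by 1/dy^2, so the reported
# parameter uncertainties reflect the error bars; p0 is the initial guess.
popt, pcov = curve_fit(line, x, y, p0=(1.0, 0.0), sigma=dy, absolute_sigma=True)
perr = np.sqrt(np.diag(pcov))

residuals = y - line(x, *popt)
chi2 = np.sum((residuals / dy) ** 2)
dof = len(x) - len(popt)

print(f"m = {popt[0]:.3f} +/- {perr[0]:.3f}")
print(f"b = {popt[1]:.3f} +/- {perr[1]:.3f}")
print(f"chi^2 = {chi2:.2f} for {dof} degrees of freedom")
```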
There are four main possibilities that arise when Origin gives you an error message during the LM (Levenberg-Marquardt) fitting procedure.
If Origin never settles on stable values of the parameters, then you probably have too many. Try either eliminating some of them or prevent them from being varied by clicking on the Vary? check box in the Fit window.
For use in a lab notebook, it is very convenient to print a version of your graph that is small enough to permit you to annotate the graph and explain its significance on the same notebook page. A graph with a plot area of about 4 inches by 3 inches is quite good for this.
Left to its own devices, Origin will fill the entire page. This is usually bigger than you want. To shrink it down, click on the lower right corner of the plot area until you get a square drag handle. Resize the plot area until it is the size you want. Even better, you can double-click the gray square at the top left corner of the plot window and enter the size you wish directly.
Written by Itai Seggev and Peter N. Saeta.
Nonlinear functions are functions whose graphs are not straight
lines. While there are many types of nonlinear functions, this course
will focus on three that are commonly used in business: parabolic functions, demand functions, and exponential functions. A basic graphic representation of each of these functions is shown below.
In this course, you may notice that graphs of functions often appear
only in the upper right-hand quadrant on a set of axes, in the region
where both the x and y values are positive. This is because, in business situations, a function's x and y
values usually stand for non-negative amounts, such as time, units
produced, dollar amounts, and so on. Therefore, when the functions
above are used in a business context, they will usually appear entirely
in the upper right-hand quadrant (called the first quadrant) on a set of axes.
A parabolic function is a symmetric, U-shaped function that has x^2 as its highest term. The basic form of this function can be written as f(x) = c(x – a)^2 + b.
This form is useful because it tells you a number of things about
the shape of the parabolic function (illustrated above on the left),
allowing you to graph the function quickly and to understand several
The direction of the parabola is determined by the constant c.
- If the constant c is positive, as in the graph above, the parabola opens upward in the shape of a U.
- If the constant c is negative, the parabola opens downward in the shape of an inverted U.
The sharpness of the parabola is also determined by c. The further the constant is from zero, the steeper the curve will be.
The parabola's highest or lowest point, known as the vertex, has the coordinates (a, b).
Consider the following production function, in which a company's marginal output in units per employee, f(x), depends on how many workers are employed, x: f(x) = –.5(x – 5)^2 + 10.
You can recognize that this function is a parabola because its highest term, (x – 5)^2, is squared. Simply by looking at the function's formula, you can determine a number of things about the parabola's shape.
Because the constant c, –.5, is negative, the parabola opens downward.
Because the absolute value of the constant c (|–.5| = .5) is between 0 and 1, the parabola is relatively wide.
The parabola's highest point, the vertex, has the coordinates (5,
10). The vertex is the point at which marginal production is maximized.
When 5 people are employed, 10 units of output are produced per employee.
This function is graphed below.
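Since the graph itself is not reproduced here, the same example can be checked numerically; the short Python sketch below simply evaluates the production function at whole numbers of employees.

```python
def marginal_output(x, c=-0.5, a=5, b=10):
    """The example production function f(x) = c*(x - a)**2 + b from the text."""
    return c * (x - a) ** 2 + b

if __name__ == "__main__":
    for workers in range(0, 11):
        print(workers, marginal_output(workers))
    # The values rise to the vertex (5, 10) -- 5 employees, 10 units per
    # employee -- and fall symmetrically on either side.
```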
The graph below illustrates how the constant c influences the width
of a parabolic function. This graph illustrates the parabolic functions
–.5(x – 5)^2 + 10 and –2(x – 5)^2 +
10. The constant c is –.5 and –2, respectively. Because the constants
are negative, both functions open downward. As you can see, the
further the constant is from 0, the sharper the curve. When c has the
value of –2, the curve is sharper than when c is –.5.
A demand function has the general form 1/x, which can also be written as x^(-1). The demand function is often used in business to describe demand; with a constant c in the numerator it takes the form f(x) = c/x.
This form of the function tells you some things about the shape and
behavior of the demand function, which is illustrated above on the left.
Graphically, the constant c determines how close the graph will be to the x and y axes. The smaller c is, the closer the graph is to the origin, the point (0, 0).
In general, demand functions will approach but will never cross the x or y axes.
Consider the following demand function, in which the demand for a company's cookies, f(x), depends on the price of the cookies, x: f(x) = 2/x.
When graphed on a coordinate plane, demand functions have two
sections that are mirror images of each other. In business situations,
only the section in the first quadrant is used because the values being
examined are positive. For example, the demand for cookies and the
price of cookies would not be negative. Therefore, the company's demand
function for cookies would be illustrated in this way.
The constant, 2, determines how close the graph is to the x and y
axes. If the demand for cookies increases, the constant in the demand
function would increase. How would this change the shape of the demand
function? Consider the following graph, which shows the original demand
function, where c is 2, and the increased demand function, where c is 5.
Notice that the larger c is, the further away the graph is from the origin (0, 0).
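The effect of the constant is also easy to verify numerically; the Python sketch below compares the original cookie demand function (c = 2) with the increased-demand version (c = 5).

```python
def demand(x, c=2.0):
    """Demand function f(x) = c / x; c = 2 is the original cookie example,
    c = 5 the increased-demand version."""
    return c / x

if __name__ == "__main__":
    for price in (0.5, 1, 2, 4):
        print(price, demand(price), demand(price, c=5.0))
    # At every price, the curve with the larger constant lies farther from
    # the origin (0, 0).
```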
Any function where a constant (a) is raised to a power of x
is an exponential function. Exponential functions in business take the
form of exponential growth and decay functions. In business scenarios,
exponential functions often appear in the equation form f(x) = c(a)^x.
This form tells you two important things about the function's shape, which is graphed above on the left.
The point (0, c) is the function's y-intercept.
The sign of the exponent x (positive or negative) indicates the direction of the function.
- If the exponent is positive, the function increases to the right. (This is known as "exponential growth.")
- If the exponent is negative, the function decreases to the right. (This is known as "exponential decay.")
The exponential function is commonly used for investments with
compounded interest. For example, imagine that you put $10 into an
account that compounds annually at a rate of 7 percent. The function
used to find the value of this investment at a future point in time is FV = 10(1.07)^N, where
FV = the future value
N = the number of years in the future
Simply by looking at this function, you can tell that it is an
exponential function because it has a constant, 1.07, raised to a power N. Using this function, you can quickly determine two things about its shape.
Because the constant c is 10, the function's y-intercept is (0, 10).
The exponent N is positive, which means the function is increasing as N increases to the right.
Think about the business situation and you can see that this
information makes sense. Consider the graph above with the following
scenario—the y-intercept tells you that at the time of the
initial investment, $10 was deposited. This investment will increase
over the years, as indicated by the positive exponent. You can find the
exact shape of the function by calculating the value of the investment
for a number of years and then plotting the coordinates.
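Those calculations can be done by hand or, as sketched below in Python, in a short loop; the numbers reproduce the $10, 7 percent example above.

```python
def future_value(principal=10.0, rate=0.07, years=0):
    """FV = principal * (1 + rate)**years, the compound-interest example."""
    return principal * (1.0 + rate) ** years

if __name__ == "__main__":
    for n in (0, 1, 5, 10, 20):
        print(n, round(future_value(years=n), 2))
    # n = 0 returns the y-intercept value of 10; the values grow to the
    # right because the exponent is positive.
```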
1. Name the following nonlinear functions.
a. f(x) = 4(x – 2)^2 + 6
b. f(x) = 6,400(1.05)^x
c. f(x) =
2. Given the following function, answer the questions below.
f(x) = 20,000(1.1)^x
a. What type of nonlinear function is this?
b. What is the y-intercept?
c. When x = 3, what does f(x) equal?
3. Given the following function, answer the questions below.
f(x) = 3(x – 4)^2 + 7
a. What type of nonlinear function is this?
b. What is the vertex?
c. Is the function thin or wide compared to a parabola where c = 1?
4. An individual puts $1,500 into a bank account that has an
interest rate of 7 percent. The future value of the investment is
modeled by the function FV = 1,500(1.07)^N, where FV is the future value and N is the number of years in the future. Using this information, answer the following questions about this function.
a. What type of nonlinear function defines this relationship?
b. What is the y-intercept?
c. How much money will the individual have in 10 years?
5. The demand for book bags is modeled by f(x) = , where x equals the number of book bags demanded and f(x) is the price.
a. What is the price when nine book bags are demanded?
b. After it is found that the book bags have faulty straps, the demand changed to f(x) = , where x = number of book bags demanded and f(x) is the price. Is this new demand closer or further from the origin (0, 0)?
Everything we experience comes to us through our five senses—sight, hearing, touch, smell and taste. While our senses are truly amazing, most of what goes on around us occurs unnoticed. Since we can only see a small range of the electromagnetic spectrum as visible light, we can be in the vicinity of a radio-transmitting tower radiating 50,000 watts of power and be totally unaware of its presence. The fluttering of a hummingbird wing and changes in mountain ranges are undetectable to the average human. Extreme distances, both short and long, are equally elusive. We can see the dot above an “i”, but cannot see a grain of pollen. At the other outer limits of length, we can only imagine what a light year is.
Scientific research includes the study of subatomic particles as well as the mind-boggling distances that exist between the earth and neighboring stars and nebulae. This great breadth of investigation involves extending our senses and developing new ways of "seeing".
Scientific instruments that enable us to overcome our sensory limitations have been, and continue to be, essential to the progress of science. The microscope and the telescope provide mankind with windows to two previously unseen worlds. The stroboscope has enabled us to “freeze” motion. X-rays have provided a non-invasive way of probing the body. Radio telescopes enable us to extend our grasp to the far reaches of space. Cloud and bubble chambers allow us to study events occurring on the subatomic scale.
In a similar way, the following experiments will allow your students to extend their senses and make measurements they never dreamed possible. They will determine the size of a molecule, time events that occur in an instant, and measure dimensions on an astronomical scale. In the process, they will learn how scientists make observations and measurements in the invisible world.
1. Measuring New Heights
Students are asked to indirectly measure the height of an object much larger than their available measuring instrument.
Give the students instructions below, and turn them loose! Give them time to plan in the classroom before going out as a group to make measurements. Give as little advice as possible. Their methods (and the results) may vary a lot, and that’s okay!
The challenge: Determine the height of a tall object on the school grounds such as a flagpole, a chimney, or other structure identified by your teacher. You will only be allowed the use of a meter stick. You may think that you don’t have the knowledge to make such a measurement with so little equipment, but you would be wrong!
Before leaving the classroom, do some brainstorming with members of your group. You will be surprised to learn that you have the ability to measure such a tall object indirectly. Each group will be asked to share not only their value for the height of the object, but more importantly, their method. So be ready!
2. Blast Off!
The measurement method known as triangulation can be used to indirectly determine the heights of tall structures or the altitudes of projectiles.
Demonstrate the calculations for the class before assigning each group a height or altitude to measure. Trigonometry is involved, but students really only need to do some basic algebra to grasp this concept. You can use triangulation to find the height of buildings or as part of other labs and activities like Bottle Rockets.
Trigonometry provides an easy way to determine the heights of structures or even the altitude of a toy rocket. Trigonometry deals with ratios of the lengths of pairs of the sides of a right triangle. You may have heard of the sine, cosine, and tangent. Scary sounding? Perhaps, but don’t worry, they’re all just ratios. To make things easier, we’ll only consider the tangent.
The tangent of an angle (Θ, “theta”) is the ratio of the length of the side opposite the angle to the length of the side adjacent to the angle. In other words, it’s the ratio of side a to side b. This ratio increases as the angle of inclination Θ increases. The tangent for angles between 0 and 90 degrees may be found in a table or calculator.
Suppose you fire a rocket into the air and wish to know its altitude. If you know the distance from you to the launch point (b) and the angle of inclination (Θ), you can find the rocket’s altitude (a) because
tangent Θ = a/b
a = b tangent Θ
(Hint: You can look up the tangent of any angle from 0 to 90 in a table or by using your scientific calculator.)
Voila! The tangent makes the indirect measurement of heights a snap.
To actually carry out a measurement of a rocket’s altitude, you will need a protractor (an instrument used for measuring angles), a string with a small weight on the end (also known as a plumb bob), a meter stick, and a tangent table or calculator.
After tying a weight to the end of a string, attach the string to the center of the protractor (see fig. 2). This device will enable you to determine the angle of inclination. Now all you need is the baseline b, the distance between the launch pad and where you stand when you sight on the rocket.
When the rocket reaches its maximum altitude, view the rocket along the edge of the protractor. Have your lab partner observe the angle indicated by the string. Because the protractor is inverted, this angle must be subtracted from 90° to obtain the angle of inclination. To find the altitude of the rocket, simply multiply the tangent of the angle of inclination by the length of the baseline.
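For teachers who want a quick way to check student results, the arithmetic above can be packaged in a short Python sketch. The function name and the sample numbers (a 30 m baseline and a protractor reading of 35 degrees) are illustrative only and are not part of the activity.

```python
import math

def rocket_altitude(baseline_m, protractor_deg):
    """Triangulated altitude: a = b * tan(angle of inclination).

    baseline_m     -- distance from the observer to the launch pad, in meters
    protractor_deg -- angle read from the inverted protractor; the angle of
                      inclination is 90 degrees minus this reading
    """
    inclination_deg = 90.0 - protractor_deg
    return baseline_m * math.tan(math.radians(inclination_deg))

# Example with made-up numbers: 30 m baseline, protractor reads 35 degrees,
# so the angle of inclination is 55 degrees.
print(round(rocket_altitude(30.0, 35.0), 1), "m")  # roughly 42.8 m
```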
3. Measuring the Moon
Students will use the concept of similar triangles to indirectly measure the diameter of the moon.
This activity can be done in the classroom, if the moon happens to be visible from your windows, or it can be done at home by each student.
You may find this hard to believe, but you can measure the diameter of the moon from the comfort of your home. The equipment needed includes an index card, a pin, two strips of opaque tape (masking or electrical tape works well), and a centimeter ruler.
Oh, and one other thing: you’ll need to know that the moon is 3 × 10⁵ km from Earth.
When the moon is full, place the two strips of tape 2cm apart on a windowpane facing the moon. After making a pinhole in the index card, observe the moon through the pinhole and two strips of tape. Back away from the window until the moon appears to just fill the space between the two strips of tape. Measure the distance from the card to the window. Using the proportionality of sides that exists for similar triangles (see figure above), calculate the diameter of the moon.
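The similar-triangles proportion can also be checked with a few lines of Python. The sample distances below (a 2 cm tape gap and a card held 220 cm from the window) are made up for illustration; the Earth-to-Moon distance defaults to the rounded value used above.

```python
def moon_diameter_km(tape_gap_cm, card_to_window_cm, moon_distance_km=3e5):
    """Similar triangles: diameter / moon distance = tape gap / card-to-window distance.

    A more precise Earth-to-Moon distance is about 3.84e5 km; 3e5 km matches
    the rounded figure used in the activity.
    """
    return moon_distance_km * (tape_gap_cm / card_to_window_cm)

# Example with made-up numbers: the moon just fills a 2 cm gap when the
# pinhole card is 220 cm from the window.
print(round(moon_diameter_km(2.0, 220.0)), "km")  # roughly 2700 km
```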
4. Measuring Molecular Monolayers
Students will use the volume of a large number of items and the area covered by a single layer of those items to indirectly find the diameter of a single item.
As with any lab using chemicals and glassware, this lab requires appropriate safety measures, such as goggles. The final result for the height of an oleic acid molecule might not be very accurate, but the exercise is still worthwhile. When students are able to measure something that they cannot see, they understand a bit more about how scientists work.
Suppose you wanted to find the diameter of a BB, but didn’t have an instrument, such as a micrometer, suited for the job. What could you do?
One way to obtain the diameter of a single BB requires the use of many BBs. Begin by placing a large number of BBs in a graduated cylinder. Record the total volume of BBs. (In carrying out this measurement, you are making an assumption. Do you know what it is?)
Now spread the BBs out in a circular pattern on a table. This results in a monolayer, a cylindrical volume whose depth is a single BB. Measure the diameter of this circle. Because you may have difficulty making a perfect circle, make this measurement a number of times and find the average diameter. Divide the diameter by two, and use this radius to find the area of the circle.
The diameter of one BB is the same as the height of the very flat cylinder you just made. We can use the area of the circle and the total volume of BBs (measured earlier) to find the height. The volume of a cylinder is the area of its base times the height.
V = A x h
h = V/A
Note: 1 mL = 1 cm³.
If you have a micrometer, use it to check your answer.
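As a quick sanity check of the h = V/A calculation, here is a minimal Python sketch; the 50 mL of BBs and the 12 cm circle are made-up example values.

```python
import math

def bb_diameter_cm(total_volume_ml, circle_diameter_cm):
    """Height of the one-BB-thick layer: h = V / A, using 1 mL = 1 cm^3."""
    area_cm2 = math.pi * (circle_diameter_cm / 2.0) ** 2
    return total_volume_ml / area_cm2

# Example with made-up numbers: 50 mL of BBs spread into a 12 cm circle.
print(round(bb_diameter_cm(50.0, 12.0), 2), "cm")  # roughly 0.44 cm
```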
Believe it or not, you can estimate the size of a single molecule using a similar approach. This time, however, you will be dealing with a monolayer of molecules rather than a monolayer of BBs. To perform the experiment you’ll need a pizza pan, some chalk dust, an eyedropper, a 10 mL graduated cylinder, and oleic acid solution. The oleic acid solution is prepared by adding 5 mL of oleic acid to 995 mL of ethanol.
After filling the pizza pan with water, spread chalk dust over the surface of the water. Easy does it, for too much powder will hinder the spread of the oleic acid. Using an eyedropper, carefully add just one drop of the oleic acid solution to the center of the pan. The alcohol will dissolve in the water, but the oleic acid will spread out to form a nearly circular shape. As you did with the BBs, measure the diameter of this rough circle a number of times and find the average. Then find the area of the circle.
Remembering that you put a single drop of oleic acid solution on the surface of the water, you will have to determine the volume of acid in a single drop of solution. To do this, count the number of drops needed to occupy 1 mL in the graduated cylinder. Do this several times and take an average. The volume of a single drop is found by dividing 1 mL (= 1 cm³) by the average number of drops in a cm³. The actual volume of oleic acid is only 0.005 of the volume of a drop (Why?). Multiply the volume of a single drop by 0.005 to obtain the volume of oleic acid.
Just as with BBs, you can now find the size of a single molecule by dividing this volume by the area of the circle.
What assumption are you making regarding the shape of an oleic acid molecule?
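The same h = V/A idea, with the extra dilution step, can be sketched in Python as well. The drop count (50 drops per mL) and the 25 cm film diameter are illustrative values; the 0.005 factor comes from the 5 mL of oleic acid in 1000 mL of solution.

```python
import math

def molecule_height_cm(drops_per_ml, film_diameter_cm, acid_fraction=0.005):
    """Estimated height of one oleic acid molecule in the monolayer film."""
    drop_volume_cm3 = 1.0 / drops_per_ml               # 1 mL = 1 cm^3
    acid_volume_cm3 = acid_fraction * drop_volume_cm3  # only 0.5% of the drop is acid
    film_area_cm2 = math.pi * (film_diameter_cm / 2.0) ** 2
    return acid_volume_cm3 / film_area_cm2

# Example with made-up numbers: 50 drops per mL and a 25 cm circular film.
print("{:.1e} cm".format(molecule_height_cm(50.0, 25.0)))  # roughly 2e-07 cm
```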
5. Measuring Short Time Intervals with a Stroboscope
Stroboscopes are instruments that allow the viewing of repetitive motion in such a way as to make the moving object appear stationary. Stroboscopes may also be used to measure short time intervals.
Stroboscopes may either be mechanical or electronic. Mechanical, or hand-held, stroboscopes consist of a disk with equally spaced slits around its circumference. The disk is spun around a handle while the viewer looks at a moving object through the slits. Electronic stroboscopes consist of a light source whose flash rate is controlled electronically. The activities that follow can be used as individual student labs or as a large class demonstration.
A stroboscope is able to “freeze” repetitive motions because it only permits viewing at specific times. For example, if we are only allowed to see an object each time it makes one complete rotation, the object will always appear to be in the same place, and hence stationary. If the viewing frequency is slightly greater than the object’s rotational frequency, the object will appear to drift backward because it will be seen before it is able to complete a full rotation. Conversely, the object will appear to drift forward if its frequency of rotation is slightly greater than the viewing frequency. Most of us are familiar with the apparent forward and backward motion of wheel covers on cars when the imperceptible flashing of streetlights illuminates them.
To freeze motion with a mechanical stroboscope, the rate of rotation of the strobe disk is adjusted until the number of slits passing the eye of the viewer each second equals the rate of the repetitive motion. For example, a fan will appear stopped if the rate of viewing equals the rate of rotation of the fan.
When an electronic stroboscope illuminates a moving object in a darkened room, the object will only be seen when the strobe light is on. When the rate of flashing matches the rate of the repetitive motion, the object will appear stopped.
If the viewing rate obtained with either type of strobe is known, it’s possible to measure the short time required for one rotation of a fan (or for one vibration of a tuning fork, or any other repetitive motion). Here’s how to measure the frequency of a fan’s rotation with a hand-held stroboscope.
1. Put a distinguishing mark on one fan blade.
2. View the fan in motion through a rotating strobe disk.
3. Adjust the rate of rotation of the disk until the marked blade appears stationary.
4. To ensure that the rate of viewing is synchronized with the motion of the object, a condition known as resonance, increase the rate of rotation of the strobe disk until you see two images of the blade. Reducing the rate of rotation of the strobe disk until a single image is seen will guarantee resonance. (Why?)
5. Have your partner use a stopwatch to determine the time it takes for ten rotations of the strobe disk.
6. Divide the number of rotations of the strobe disk (10) by the time obtained in step 5. This equals the number of rotations of the strobe disk per second.
7. Multiply the number of open slits in the disk by the number of rotations per second. This will yield the number of slits per second.
8. Because your viewing rate was synchronized with the rate of the repetitive motion, the number of slits per second equals the frequency of the fan’s rotation.
9. The period, or time required for one complete rotation of the fan blade, is found by taking the reciprocal of the frequency. For example, if the frequency equals 20 rotations per second, the time required for one rotation is 1/20 second per rotation. (A short calculation sketch follows this list.)
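Steps 6 through 9 are simple arithmetic; here is a minimal Python sketch of that calculation. The sample values (10 disk rotations timed at 6.0 seconds through a 12-slit disk) are made up and merely reproduce the 20-rotations-per-second example above.

```python
def fan_frequency_hz(disk_rotations, elapsed_s, open_slits):
    """At resonance, slits passing the eye per second = frequency of the fan."""
    rotations_per_s = disk_rotations / elapsed_s
    return rotations_per_s * open_slits

# Example with made-up numbers: 10 rotations of a 12-slit disk in 6.0 seconds.
frequency = fan_frequency_hz(10, 6.0, 12)
period = 1.0 / frequency
print(round(frequency, 1), "Hz;", round(period, 3), "s per rotation")  # 20.0 Hz; 0.05 s
```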
To measure the period of a repetitive motion, in this case, a tuning fork, using an electronic stroboscope:
1. Strike the tuning fork and view it with the strobe flashing.
2. Obtain resonance by adjusting the strobe’s flash rate and read the flash rate from the strobe’s tachometer.
3. Find the reciprocal of the flash rate to find the period of the motion.
The National Institute of Standards and Technology (nist.gov): This site has a wealth of interesting and useful information on units of measure and other things that had to be standardized (like the color of traffic signals!).
Virtual Museum (museum.nist.gov)
A Walk through Time (physics.nist.gov/GenInt/Time/time.html)
The population of China doubled in size during the 10th and 11th centuries. This growth came through expanded rice cultivation in central and southern China, the use of early-ripening rice from southeast and southern Asia, and the production of abundant food surpluses. Within its borders, the Northern Song Dynasty had a population of some 100 million people. This dramatic increase of population fomented and fueled an economic revolution in premodern China. The expansion of the population was partially the cause for the gradual withdrawal of the central government from heavily regulating the market economy. A much larger populace also increased the importance of the lower gentry's role in grassroots administration and maintaining local affairs, while the appointed officials in county and provincial centers relied upon these scholarly gentry for their services, sponsorship, and local supervision.
The Song Dynasty is divided into two distinct periods: the Northern Song and Southern Song. During the Northern Song (960–1127), the Song capital was in the northern city of Bianjing (now Kaifeng) and the dynasty controlled most of inner China. The Southern Song (1127–1279) refers to the period after the Song lost control of northern China to the Jin Dynasty. During this time, the Song court retreated south of the Yangtze River and established their capital at Lin'an (now Hangzhou). Although the Song had lost control of the traditional birthplace of Chinese civilization along the Yellow River, the Song economy was not in ruins, as the Southern Song contained 60 percent of China's population and a majority of the most productive agricultural land. The Southern Song Dynasty considerably bolstered naval strength to defend its waters and land borders and to conduct maritime missions abroad. To repel the Jin (and then the Mongols), the Song developed revolutionary new military technology augmented by the use of gunpowder. In 1234, the Jin Dynasty was conquered by the Mongols, who subsequently took control of northern China and maintained uneasy relations with the Southern Song. Möngke Khan, the fourth Great Khan of the Mongol Empire, died in 1259 while besieging a city in Chongqing. His successor Kublai Khan was perceived both as the new Great Khan of the Mongols and by 1271 as the Emperor of China. After two decades of sporadic warfare, Kublai Khan's armies conquered the Song Dynasty in 1279. China was once again unified, under the Yuan Dynasty, which was a division of the vast Mongol Empire.
Social life during the Song was vibrant; social elites gathered to view and trade precious artworks, the populace intermingled at public festivals and private clubs and cities had lively entertainment quarters. The spread of literature and knowledge was enhanced by the earlier innovation of woodblock printing and the 11th century innovation of movable type printing. There were numerous intellectual pursuits, while pre-modern technology, science, philosophy, mathematics, and engineering flourished in the Song. Philosophers such as Cheng Yi and Zhu Xi reinvigorated Confucianism with new commentary, infused with Buddhist ideals, and emphasized a new organization of classic texts that brought out the core doctrine of Neo-Confucianism. Although the institution of the civil service examinations had existed since the Sui Dynasty, it became much more prominent in the Song period, and was a leading factor in the shift of an aristocratic elite to a bureaucratic elite. Although exam-drafted scholar-officials scorned any emphasis or favor shown to the growing merchant class and those of petty commercial vocations, commercialism was nonetheless heavily embedded into Song culture and society.
Key industries were controlled by the government under strict monopolies, while private industry and businesses produced goods and services not officially monopolized by the state. The Song court received tributary missions from foreign countries while scholar-officials, tenant landlords, merchants, and other wealthy individuals invested money in the booming overseas trade and shipbuilding industry. Independent, state-sponsored, and state-employed architects, engineers, carpenters, and craftsmen erected thousands of bridges, pagoda towers, temple halls, palace halls, ancestral shrines, shops and storefronts, and other buildings throughout the empire.
Northern Song
Emperor Taizu of Song (r. 960–976) unified China through military conquest during his reign, ending the upheaval of the Five Dynasties and Ten Kingdoms Period. In Kaifeng, he established a strong central government over the empire. He ensured administrative stability by promoting the civil service examination system of drafting state bureaucrats by skill and merit (instead of aristocratic or martial status) and promoted projects that ensured efficiency in communication throughout the empire. One such project was the creation by cartographers of detailed maps of each province and city which were then collected in a large atlas. He also promoted groundbreaking science and technological innovations by supporting such works as the astronomical clock tower designed and built by the engineer Zhang Sixun.
The Song court upheld foreign relations with Chola India, Fatimid Egypt, Srivijayan Malaysia, and other countries that were also maritime trade partners. However, it was China's closest neighboring states who would have the biggest impact upon its domestic and foreign policy. From its inception with the first emperor Taizu, the Song Dynasty alternated between warfare and diplomacy with the ethnic Khitans of the Liao Dynasty in the northeast and with the Tanguts of the Western Xia Dynasty in the northwest. The Song Dynasty used military force in an attempt to quell the Liao Dynasty and recapture the Sixteen Prefectures, a territory under Khitan control that was traditionally considered to be part of the Chinese domain. However, Song forces were repulsed by the Liao forces who engaged in aggressive yearly campaigns into northern Song territory until 1005 when the signing of the Shanyuan Treaty ended these northern frontier border clashes. The Chinese were forced to pay heavy tribute to the Khitans, although the paying of this tribute did little damage to the overall Song economy since the Khitans were heavily dependent upon importing massive amounts of goods from the Song Dynasty. More significantly, the Song state recognized the Liao state as its diplomatic equal. The Song Dynasty managed to win several military victories over the Tanguts in the early 11th century, culminating in a campaign led by the polymath scientist, general, and statesman Shen Kuo (1031–1095). However, this campaign was ultimately a failure due to a rival military officer of Shen disobeying direct orders, and the territory gained from the Western Xia was eventually lost.
During the 11th century, political rivalries thoroughly divided members of the court due to the ministers' differing approaches, opinions, and policies regarding the handling of the Song's complex society and thriving economy. The idealist Chancellor Fan Zhongyan (989–1052) was the first to receive a heated political backlash when he attempted to make such reforms as improving the recruitment system of officials, increasing the salaries for minor officials, and establishing sponsorship programs to allow a wider range of people to be well educated and eligible for state service. After Fan was forced to step down from his office, Wang Anshi (1021–1086) became chancellor of the imperial court. With the backing of Emperor Shenzong of Song (1067–1085), Wang Anshi severely criticized the educational system and state bureaucracy. Seeking to resolve what he saw as state corruption and negligence, Wang implemented a series of reforms called the New Policies. These involved land tax reform, the establishment of several government monopolies, the support of local militias, and the creation of higher standards for the Imperial examination to make it more practical for men skilled in statecraft to pass. The reforms created political factions in the court with Wang Anshi's New Policies Group (Xin Fa), or the 'Reformers' in one camp, opposed by the ministers in the 'Conservative' faction led by the historian and Chancellor Sima Guang (1019–1086) in the other. As one faction supplanted another in the majority position of the court ministers, it would demote rival officials and exile them to govern remote frontier regions of the empire. One of the prominent victims of the political rivalry, the famous poet and statesman Su Shi (1037–1101), was jailed and eventually exiled for criticizing Wang's reforms.
While the central Song court remained politically divided and focused upon its internal affairs, alarming new events to the north in the Liao state finally came to its attention. The Jurchen, a subject tribe within the Liao empire, rebelled against the Liao and formed their own state, the Jin Dynasty (1115–1234). The Song official Tong Guan (1054–1126) advised the reigning Emperor Huizong of Song (1100–1125) to form an alliance with the Jurchens and their joint military campaign toppled and completely conquered the Liao Dynasty by 1125. However, the poor performance and military weakness of the Song army was observed by the Jurchens, who immediately broke the alliance with the Song and launched an invasion into Song territory in 1125 and another in 1127 when the Jurchens managed to capture not only the Song capital at Kaifeng, but the retired emperor Huizong and the succeeding Emperor Qinzong of Song as well as most of his court. This took place in the year of Jingkang (Chinese 靖康) and it is known as the Humiliation of Jingkang (Chinese 靖康之恥). The remaining Song forces rallied under the self appointed Emperor Gaozong of Song (1127–1162), fleeing south of the Yangtze River to establish the Song Dynasty's new capital at Lin'an (in modern Hangzhou). This Jurchen conquest of northern China and shift of capitals from Kaifeng to Lin'an marks the period of division between the Northern Song Dynasty and Southern Song Dynasty.
Southern Song
Although weakened and pushed south along the Huai River, the Southern Song found new ways to bolster their already strong economy and defend their state against the Jin Dynasty. They had able military officers such as Yue Fei and Han Shizhong. The government sponsored massive shipbuilding and harbor improvement projects, and the construction of beacons and seaport warehouses in order to support maritime trade abroad and the major international seaports, including Quanzhou, Guangzhou, and Xiamen that were sustaining China's commerce. To protect and support the multitudes of ships sailing for maritime interests into the waters of the East China Sea and Yellow Sea (to Korea and Japan), Southeast Asia, the Indian Ocean, and the Red Sea, it was a necessity to establish an official standing navy. The Song Dynasty therefore established China's first permanent navy in 1132, with the admiral's main headquarters stationed at Dinghai. With a permanent navy, the Song were prepared to face the naval forces of the Jin on the Yangtze River in 1161, in the Battle of Tangdao and the Battle of Caishi. During these battles the Song navy employed swift paddle wheel driven naval crafts armed with trebuchet catapults aboard the decks that launched gunpowder bombs. Although the Jin forces boasted 70,000 men on 600 warships, and the Song forces only 3,000 men on 120 warships, the Song Dynasty forces were victorious in both battles due to the destructive power of the bombs and the rapid assaults by paddle wheel ships. The strength of the navy was heavily emphasized after that. A century after the navy was founded it had grown in size to 52,000 fighting marines. The Song government confiscated portions of land owned by the landed gentry in order to raise revenue for these projects, an act which caused dissension and loss of loyalty amongst leading members of Song society but did not stop the Song's defensive preparations. Financial matters were made worse by the fact that many wealthy, land-owning families (some of which had officials working for the government) used their social connections with those in office in order to obtain tax-exempt status.
Although the Song Dynasty was able to hold back the Jin, a new considerable foe came to power over the steppe, deserts, and plains north of the Jin Dynasty. The Mongols, led by Genghis Khan (r. 1206–1227), initially invaded the Jin Dynasty in 1205 and 1209, engaging in large raids across its borders, and in 1211 an enormous Mongol army was assembled to invade the Jin. The Jin Dynasty was forced to submit and pay tribute to the Mongols as vassals; when the Jin suddenly moved their capital city from Beijing to Kaifeng, the Mongols saw this as a revolt. Under the leadership of Ögedei Khan (r.1229–1241), both the Jin Dynasty and Western Xia Dynasty were conquered by Mongol forces. The Mongols also invaded and conquered Korea, the Abbasid Caliphate of the Middle East, and Kievan Rus' of Russia. The Mongols were at one time allied with the Song, but this alliance was broken when the Song recaptured the former imperial capitals of Kaifeng, Luoyang, and Chang'an at the collapse of the Jin Dynasty. The Mongol leader Möngke Khan led a campaign against the Song in 1259, but died on August 11 during the Battle of Fishing Town in Chongqing. Mongke's death and succession crisis prompted Hulagu Khan to pull the bulk of Mongol forces out of the Middle East where they were poised to fight the Egyptian Mamluks (who defeated the Mongols at Ain Jalut). Although Hulagu was allied with Kublai Khan, his forces were unable to help in the assault against the Song, due to Hulagu's war with the Golden Horde.
Although Mongke died, Kublai continued the assault against the Song, gaining a temporary foothold on the southern banks of the Yangzi. Kublai made preparations to take Ezhou, but a pending civil war with his brother Ariq Böke — a rival claimant to the Mongol Khaganate — forced Kublai to move with the bulk of his forces back north. In Kublai's absence, the Song forces were ordered by Chancellor Jia Sidao to make an opportune assault, and succeeded in pushing the Mongol forces back to the northern banks of the Yangzi. There were minor border skirmishes until 1265, when Kublai won a significant battle in Sichuan. From 1268 to 1273, Kublai blockaded the Yangzi River with his navy and besieged Xiangyang, the last obstacle in his way to invading the rich Yangzi River basin. In 1271, Kublai officially declared the creation of the Yuan Dynasty. In 1275, a Song force of 130,000 troops under Chancellor Jia Sidao was defeated by Kublai's newly-appointed commander-in-chief, the Turkic general Bayan. By 1276, most of the Song Chinese territory had been captured by Yuan forces. In the Battle of Yamen on the Pearl River Delta in 1279 the Yuan army led by the Chinese general Zhang Hongfan finally crushed the Song resistance, and the last remaining ruler, the child emperor Bing, committed suicide along with the official Lu Xiufu. On Kublai's orders carried out by his commander Bayan, the rest of the former imperial family of Song were unharmed; the deposed Emperor Gong was given the title 'Duke of Ying' but was eventually exiled to Tibet where he took up a monastic life.
Society and culture
The Song Dynasty was an era of administrative sophistication and complex social organization. Some of the largest cities in the world were found in China during this period (Kaifeng and Hangzhou had boasted populations of over a million). People enjoyed various social clubs and entertainments in the cities, and there were numerous schools and temples to provide the public with education and religious services. The Song government supported multiple forms of social welfare programs, including the establishment of retirement homes, public clinics, and pauper's graveyards. The Song Dynasty supported a widespread postal service that was modeled on the earlier Han Dynasty postal system to provide swift communication throughout the empire. The central government employed thousands of postal workers of various ranks and responsibilities to provide service for post offices and larger postal stations. In rural areas, farming peasants either owned their own plots of land, paid rents as tenant farmers, or were serfs on large estates.
Although women were on a lower social tier than men (according to Confucian ethics), they enjoyed many social and legal privileges and wielded considerable power at home and in their own small businesses. As Song society became more and more prosperous and parents on the bride's side of the family provided larger dowries for her marriage, women naturally gained many new legal rights in ownership of property. They were also equal in status to men in inheriting family property. There were many notable and well-educated women and it was a common practice for women to educate their sons during their earliest youth. The mother of the scientist, general, diplomat, and statesman Shen Kuo taught him essentials of military strategy. There were also exceptional women writers and poets such as Li Qingzhao (1084–1151), who became famous even in her lifetime.
Religion in China during this period had a great effect on people's lives, beliefs and daily activities, and Chinese literature on spirituality was popular. The major deities of Daoism and Buddhism, ancestral spirits and the many deities of Chinese folk religion were worshiped with sacrificial offerings. With many ethnic foreigners traveling to China to conduct trade or live permanently, there came many foreign religions; religious minorities in China included Middle Eastern Muslims, the Kaifeng Jews, and Persian Manichaeans.
The populace engaged in a vibrant social and domestic life, enjoying such public festivals as the Lantern Festival or the Qingming Festival. The entertainment quarters in the cities provided a constant array of amusements. There were puppeteers, acrobats, theater actors, sword swallowers, snake charmers, storytellers, singers and musicians, prostitutes, and places to relax including tea houses, restaurants, and organized banquets. People attended social clubs in large numbers; there were tea clubs, exotic food clubs, antiquarian and art collectors' clubs, horse-loving clubs, poetry clubs and music clubs. Like regional cooking and cuisines in the Song, the era was known for its regional varieties of performing arts styles as well. Theatrical drama was very popular amongst the elite and general populace, although Classical Chinese, not the vernacular language, was spoken by actors on stage. The four largest drama theatres in Kaifeng could hold audiences of several thousand each. There were also notable domestic pastimes, as people at home enjoyed activities such as the go board game and the xiangqi board game.
Civil service examinations and the gentry
During this period greater emphasis was laid upon the civil service system of recruiting officials; this was based upon degrees acquired through competitive examinations, in an effort to select the most capable individuals for governance. Selecting men for office through proven merit was an ancient idea in China. The civil service system became institutionalized on a small scale during the Sui and Tang dynasties, but by the Song period it became virtually the only means for drafting officials into the government. The advent of widespread printing helped to widely circulate Confucian teachings and to educate more and more eligible candidates for the exams. This can be seen in the number of exam takers for the low-level prefectural exams rising from 30,000 annual candidates in the early 11th century to 400,000 candidates by the late 13th century. The civil service and examination system allowed for greater meritocracy, social mobility, and equality in competition for those wishing to attain an official seat in government. By using Song state-gathered statistics, Edward A. Kracke, Sudō Yoshiyuki, and Ho Ping-ti supported the hypothesis that simply because one had a father, grandfather, or great-grandfather who had served as an official of state, it did not guarantee that one would obtain the same level of authority. Robert Hartwell and Robert P. Hymes criticized this model, stating that it places too much emphasis on the role of the nuclear family and demonstrates only three paternal ascendants of exam candidates while ignoring the demographic reality of Song China, the significant proportion of males in each generation that had no surviving sons, and the role of the extended family. Many felt disenfranchised by what they saw as a bureaucratic system that favored the land-holding class able to afford the best education. One of the greatest literary critics of this was the official and famous poet Su Shi. Yet Su was a product of his times, as the identity, habits, and attitudes of the scholar-official had become less aristocratic and more bureaucratic with the transition of the periods from Tang to Song.
Due to China's enormous population growth and the body of its appointed scholar-officials being accepted in limited size (about 20,000 active officials during the Song period), the larger scholarly gentry class would now take over grassroots affairs on the vast local level. Excluding the scholar-officials in office, this elite social class consisted of exam candidates, examination degree-holders not yet assigned to an official post, local tutors, and retired officials. These learned men, degree-holders, and local elites supervised local affairs and sponsored necessary facilities of local communities; any local magistrate appointed to his office by the government relied upon the cooperation of the few or many local gentry elites in the area. For example, the Song government (excluding the educational-reformist government under Emperor Huizong) allocated little state revenue to maintain prefectural and county schools; instead, the bulk of the funds for schools was drawn from private financing. This limited role of government officials was a departure from the earlier Tang Dynasty (618–907), when the government strictly regulated commercial markets and local affairs; now the government withdrew heavily from regulating commerce and relied upon a mass of local gentry to perform necessary duties in local communities.
The gentry distinguished themselves in society through their intellectual and antiquarian pursuits, while the homes of prominent landholders attracted a variety of courtiers including artisans, artists, educational tutors, and entertainers. Despite the disdain for trade, commerce, and the merchant class exhibited by the highly cultured and elite exam-drafted scholar-officials, commercialism played a prominent role in Song culture and society. A scholar-official would be frowned upon by his peers if he pursued means of profiteering outside of his official salary; however, this did not stop many scholar-officials from managing business relations through the use of intermediary agents.
Law, justice, and forensic science
The Song judicial system retained most of the legal code of the earlier Tang Dynasty, the basis of traditional Chinese law up until the modern era. Roving sheriffs maintained law and order in the municipal jurisdictions and occasionally ventured into the countryside. Official magistrates overseeing court cases were not only expected to be well-versed in written law but also to promote morality in society. Magistrates such as the famed Bao Qingtian (999–1062) embodied the upright, moral judge who upheld justice and never failed to live up to his principles. Song judges specified the guilty person or party in a criminal act and meted out punishments accordingly, often in the form of caning. In a development opposite to that of the West, the individual or parties brought to court for a criminal or civil offense were presumed guilty until proven innocent, while even the defendant party was viewed with a high level of suspicion by the judge. Due to this and the immediate jailing of those accused of criminal offenses, people in the Song preferred to settle disputes and quarrels privately, without the court's interference.
Shen Kuo's Dream Pool Essays argued against traditional Chinese beliefs in anatomy (such as his argument for two throat valves instead of three); this perhaps spurred the interest in the performance of post-mortem autopsies in China during the 12th century. The physician and judge known as Song Ci (1186–1249) wrote a pioneering work of forensic science on the examination of corpses in order to determine cause of death (strangulation, poisoning, drowning, blows, etc.) and to prove whether death resulted from murder, suicide, or accidental death. Song Ci stressed the importance of proper coroner's conduct during autopsies and the accurate recording of the inquest of each autopsy by official clerks.
Military and methods of warfare
Although the scholar-officials viewed military soldiers as lower members in the hierarchic social order, a person could gain status and prestige in society by becoming a high ranking military officer with a record of victorious battles. At its height, the Song military had one million soldiers divided into platoons of 50 troops, companies made of two platoons, and one battalion composed of 500 soldiers. Crossbowmen were separated from the regular infantry and placed in their own units as they were prized combatants, providing effective missile fire against cavalry charges. The government was eager to sponsor new crossbow designs that could shoot at longer ranges, while crossbowmen were also valuable when employed as long-range snipers. Song cavalry employed a slew of different weapons, including halberds, swords, bows, spears, and 'fire lances' that discharged a gunpowder blast of flame and shrapnel.
Military strategy and military training were treated as science that could be studied and perfected; soldiers were tested in their skills of using weaponry and in their athletic ability. The troops were trained to follow signal standards to advance at the waving of banners and to halt at the sound of bells and drums.
The Song navy was of great importance during the consolidation of the empire in the 10th century; during the war against the Southern Tang state the Song navy employed tactics such as defending large floating pontoon bridges across the Yangzi River in order to secure movements of troops and supplies. There were large naval ships in the Song that could carry 1,000 soldiers aboard their decks, while the swift-moving paddle-wheel crafts were viewed as essential fighting ships in any successful naval battle.
In a battle on January 23, 971, a mass of arrow fire from Song Dynasty crossbowmen decimated the war elephant corps of the Southern Han army. This defeat not only marked the eventual submission of the Southern Han to the Song Dynasty, but also the last instance where a war elephant corps was employed as a regular division within a Chinese army.
There was a total of 347 military treatises written during the Song period, as listed by the history text of the Song Shi (compiled in 1345). However, only a handful of these military treatises have survived, which includes the Wujing Zongyao written in 1044. It was the first known book to have listed formulas for gunpowder; it gave appropriate formulas for use in several different kinds of gunpowder bombs. It also provided detailed description and illustrations of double-piston pump flamethrowers, as well as instructions for the maintenance and repair of the components and equipment used in the device.
Arts, literature, and philosophy
The visual arts during the Song Dynasty were heightened by new developments such as advances in landscape and portrait painting. An aristocratic elite engaged in the arts as accepted pastimes of the cultured scholar-official, including painting, composing poetry, and writing calligraphy. The poet and statesman Su Shi and his associate Mi Fu (1051–1107) enjoyed antiquarian affairs, often borrowing or buying art pieces to study and copy. Poetry and literature profited from the rising popularity and development of the ci poetry form. Enormous encyclopedic volumes were compiled, such as works of historiography and dozens of treatises on technical subjects. This included the universal history text of the Zizhi Tongjian, compiled into 1000 volumes of 9.4 million written Chinese characters. The genre of Chinese travel literature also became popular with the writings of the geographer Fan Chengda (1126–1193) and Su Shi, the latter of whom wrote the 'daytrip essay' known as Record of Stone Bell Mountain that used persuasive writing to argue for a philosophical point. Although an early form of the local geographic gazetteer existed in China since the 1st century, the matured form known as "treatise on a place", or fangzhi, replaced the old "map guide", or tujing, during the Song Dynasty.
The imperial courts of the emperor's palace were filled with his entourage of court painters, calligraphers, poets, and storytellers. Emperor Huizong was a renowned artist as well as a patron of the arts. A prime example of a highly venerated court painter was Zhang Zeduan (1085–1145) who painted an enormous panoramic painting, Along the River During the Qingming Festival. Emperor Gaozong of Song initiated a massive art project during his reign, known as the Eighteen Songs of a Nomad Flute from the life story of Cai Wenji (b. 177). This art project was a diplomatic gesture to the Jin Dynasty while he negotiated for the release of his mother from Jurchen captivity in the north.
In philosophy, Chinese Buddhism had waned in influence but it retained its hold on the arts and on the charities of monasteries. Buddhism had a profound influence upon the budding movement of Neo-Confucianism, led by Cheng Yi (1033–1107) and Zhu Xi (1130–1200). Mahayana Buddhism influenced Fan Zhongyan and Wang Anshi through its concept of ethical universalism, while Buddhist metaphysics had a deep impact upon the pre–Neo-Confucian doctrine of Cheng Yi. The philosophical work of Cheng Yi in turn influenced Zhu Xi. Although his writings were not accepted by his contemporary peers, Zhu's commentary and emphasis upon the Confucian classics of the Four Books as an introductory corpus to Confucian learning formed the basis of the Neo-Confucian doctrine. By the year 1241, under the sponsorship of Emperor Lizong, Zhu Xi's Four Books and his commentary on them became standard requirements of study for students attempting to pass the civil service examinations. The East Asian countries of Japan and Korea also adopted Zhu Xi's teaching, known as the Shushigaku (朱子学, School of Zhu Xi) of Japan, and in Korea the Jujahak (주자학). Buddhism's continuing influence can be seen in painted artwork such as Lin Tinggui's Luohan Laundering. However, the ideology was highly criticized and even scorned by some. The statesman and historian Ouyang Xiu (1007–1072) called the religion a "curse" that could only be remedied by uprooting it from Chinese culture and replacing it with Confucian discourse. Buddhism would not see a true revival in Chinese society until the Mongol rule of the Yuan Dynasty, with Kublai Khan's sponsorship of Tibetan Buddhism and Drogön Chögyal Phagpa as the leading lama. The Christian sect of Nestorianism — which had entered China in the Tang era — would also be revived in China under Mongol rule.
Cuisine and apparel
The food that one consumed and the clothes that one wore in Song China were largely dictated by one's status and social class. The main food staples in the diet of the lower classes remained rice, pork, and salted fish; their clothing materials were made of hempen or cotton cloths, restricted to a color standard of black and white. Trousers were the acceptable form of attire for farming peasants, soldiers, artisans, and merchants, although wealthy merchants chose to flaunt more ornate clothing and male blouses that came down below the waist. Acceptable apparel for scholar-officials was rigidly confined to a social hierarchic ranking system. However, as time went on this rule of rank-graded apparel for officials was not as strictly enforced as it was in the beginning of the dynasty. Each official was able to flaunt his awarded status by wearing different-colored traditional silken robes that hung to the ground around his feet, specific types of headgear, and even specific styles of girdles that displayed his graded-rank of officialdom.
Women in the Song period wore long dresses, blouses that came down to the knee, skirts and jackets with long or short sleeves, while women from wealthy families could wear purple scarves around their shoulders. The main difference in women's apparel from that of men was that it was fastened on the left, not on the right.
There is a multitude of existing restaurant and tavern menus and listed entrées for feasts, banquets, festivals, and carnivals during the Song period, all of which reveal a very diverse and lavish diet for those of the upper class. In their meals they could choose from a wide variety of meats, including shrimp, geese, duck, mussel, shellfish, fallow deer, hare, partridge, pheasant, francolin, quail, fox, badger, clam, crab, and many others. Dairy products were absent from Chinese cuisine and culture altogether, beef was rarely consumed since the bull was a valuable draft animal, and dog meat was absent from the diet of the wealthy, although the poor could choose to eat dog meat if necessary (yet it was not part of their regular diet). People also consumed dates, raisins, jujubes, pears, plums, apricots, pear juice, lychee-fruit juice, honey and ginger drinks, pawpaw juice, spices and seasonings of Sichuan pepper, ginger, pimento, soy sauce, oil, sesame oil, salt, and vinegar. The common diet of the poor was pork, salted fish, and rice.
Economy, industry, and trade
The economy of the Song Dynasty was one of the most prosperous and advanced economies in the medieval world. Song Chinese invested their funds in joint stock companies and in multiple sailing vessels at a time when monetary gain was assured from the vigorous overseas trade and indigenous trade along the Grand Canal and Yangzi River. Prominent merchant families and private businesses were allowed to occupy industries that were not already government-operated monopolies. Both private and government-controlled industries met the needs of a growing Chinese population in the Song. Both artisans and merchants formed guilds which the state had to deal with when assessing taxes, requisitioning goods, and setting standard worker's wages and prices on goods.
The iron industry was pursued by both private entrepreneurs who owned their own smelters as well as government-supervised smelting facilities. The Song economy was stable enough to produce over a hundred million kg (over two hundred million lb) of iron product a year. Large scale deforestation in China would have continued if not for the 11th century innovation of the use of coal instead of charcoal in blast furnaces for smelting cast iron. Much of this iron was reserved for military use in crafting weapons and armoring troops, but some was used to fashion the many iron products needed to fill the demands of the growing indigenous market. The iron trade within China was furthered by the building of new canals which aided the flow of iron products from production centers to the large market found in the capital city.
The annual output of minted copper currency in 1085 alone reached roughly six billion coins. The most notable advancement in the Song economy was the establishment of the world's first government issued paper-printed money, known as Jiaozi (see also Huizi). For the printing of paper money alone, the Song court established several government-run factories in the cities of Huizhou, Chengdu, Hangzhou, and Anqi. The size of the workforce employed in paper money factories was large; it was recorded in 1175 that the factory at Hangzhou employed more than a thousand workers a day.
The economic power of Song China heavily influenced foreign economies abroad. The Moroccan geographer al-Idrisi wrote in 1154 of the prowess of Chinese merchant ships in the Indian Ocean and of their annual voyages that brought iron, swords, silk, velvet, porcelain, and various textiles to places such as Aden (Yemen), the Indus River, and the Euphrates in modern-day Iraq. Foreigners, in turn, had an impact on the Chinese economy. For example, many West Asian and Central Asian Muslims went to China to trade, becoming a preeminent force in the import and export industry, while some were even appointed as officers supervising economic affairs. Sea trade with the Southeast Pacific, the Hindu world, the Islamic world, and the East African world brought merchants great fortune and spurred an enormous growth in the shipbuilding industry of Song-era Fujian province. However, there was risk involved in such long overseas ventures. To reduce the risk of losing money on maritime trade missions abroad, the historians Ebrey, Walthall, and Palais write:
[Song era] investors usually divided their investment among many ships, and each ship had many investors behind it. One observer thought eagerness to invest in overseas trade was leading to an outflow of copper cash. He wrote, 'People along the coast are on intimate terms with the merchants who engage in overseas trade, either because they are fellow-countrymen or personal acquaintances...[They give the merchants] money to take with them on their ships for purchase and return conveyance of foreign goods. They invest from ten to a hundred strings of cash, and regularly make profits of several hundred percent'.
Technology, science, and engineering
Gunpowder warfare
Advancements in weapons technology enhanced by Greek fire and gunpowder, including the evolution of the early flamethrower, explosive grenade, firearm, cannon, and land mine, enabled the Song Chinese to ward off their militant enemies until the Song's ultimate collapse in the late 13th century. The Wujing Zongyao manuscript of 1044 was the first book in history to provide formulas for gunpowder and their specified use in different types of bombs. While engaged in a war with the Mongols, in the year 1259 the official Li Zengbo wrote in his Kozhai Zagao, Xugaohou that the city of Qingzhou was manufacturing one to two thousand strong iron-cased bomb shells a month, dispatching to Xiangyang and Yingzhou about ten to twenty thousand such bombs at a time. In turn, the invading Mongols employed northern Chinese soldiers and used these same type of gunpowder weapons against the Song Chinese. By the 14th century the firearm and cannon could also be found in Europe, India, and the Islamic Middle East, during the early age of gunpowder warfare.
As early as the Han Dynasty (202 BCE–220 CE), when the state needed to effectively measure distances traveled throughout the empire, the Chinese relied on the mechanical odometer device. The Chinese odometer came in the form of a wheeled carriage, its inner gears functioning off the rotated motion of the wheels, with specific units of distance (the Chinese li) marked by the mechanical striking of a drum or bell for auditory alarm. The specifications for the 11th century odometer were written by Chief Chamberlain Lu Daolong, who is quoted extensively in the historical text of the Song Shi (compiled by 1345). In the Song period, the odometer vehicle was also combined with another old complex mechanical device known as the South Pointing Chariot. This device, originally crafted by Ma Jun in the 3rd century, incorporated a differential gear that allowed a figure mounted on the vehicle to always point in the southern direction, no matter how the vehicle's wheels turned about. The concept of the differential gear used in this navigational vehicle is now found in all modern automobiles, where it applies an equal amount of torque to wheels rotating at different speeds.
Polymaths, inventions, and astronomy
Polymath figures such as the statesmen Shen Kuo and Su Song (1020–1101) embodied advancements in all fields of study, including biology, botany, zoology, geology, mineralogy, mechanics, horology, astronomy, pharmaceutical medicine, archeology, mathematics, cartography, optics, art criticism, and more.
Shen Kuo was the first to discern magnetic declination of true north while experimenting with a compass. Shen theorized that geographical climates gradually shifted over time. He created a theory of land formation involving concepts accepted in modern geomorphology. He performed optical experiments with camera obscura just decades after Ibn al-Haytham was the first to do so. He also improved the designs of astronomical instruments such as the widened astronomical sighting tube, which allowed Shen Kuo to fix the position of the pole star (which had shifted over centuries of time). Shen Kuo was also known for hydraulic clockworks, as he invented a new overflow-tank clepsydra which had more efficient higher-order interpolation instead of linear interpolation in calibrating the measure of time.
Su Song was best known for his horology treatise written in 1092, which described and illustrated in great detail his hydraulic-powered, 12 m (40 ft) tall astronomical clock tower built in Kaifeng. The clock tower featured large astronomical instruments of the armillary sphere and celestial globe, both driven by an escapement mechanism (roughly two centuries before the verge escapement could be found in clockworks of Europe). In addition, Su Song's clock tower featured the world's first endless power-transmitting chain drive, an essential mechanical device found in many practical uses throughout the ages, such as the bicycle. Su's tower featured a rotating gear wheel with 133 clock jack manikins who were timed to rotate past shuttered windows while ringing gongs and bells, banging drums, and presenting announcement plaques. In his printed book, Su published a celestial atlas of five star charts. These star charts feature a cylindrical projection similar to Mercator projection, the latter being a cartographic innovation of Gerardus Mercator in 1569.
Although the endeavors of the polymaths Shen and Su represent perhaps the highest achievements in technology and science during the Song period, there were many other significant technical writers and inventions. For example, Qin Guan's book published in 1090, the Can Shu (Book of Sericulture), described a silk-reeling machine that employed the first known use of a mechanical belt drive.
Mathematics and cartography
There were many notable improvements to Chinese mathematics during the Song era. The book published in 1261 by the mathematician Yang Hui (c. 1238–1298) provided the earliest Chinese illustration of Pascal's triangle, although it was described earlier around 1100 by Jia Xian. Yang Hui also provided rules for constructing combinatorial arrangements in magic squares, provided theoretical proof for Euclid's forty-third proposition about parallelograms, and was the first to use negative coefficients of 'x' in quadratic equations. Yang's contemporary Qin Jiushao (c. 1202–1261) was the first to introduce the zero symbol into Chinese mathematics; before this, blank spaces were used instead of zeros in the system of counting rods. He is also known for working with the Chinese remainder theorem, Heron's formula, and astronomical data used in determining the winter solstice.
Geometry and surveying were essential mathematics in the realm of cartography and precision map-making. The earliest extant Chinese maps date to the 4th century BCE, yet it was not until the time of Pei Xiu (224–271) that topographical elevation, a formal rectangular grid system, and the use of a standard graduated scale of distances were applied to terrain maps. In the Song period, Shen Kuo was the first to create a raised-relief map, while his other maps featured a uniform graduated scale of 1:900,000. A 3 ft (0.91 m) squared map of 1137, carved into a stone block, followed a uniform grid scale of 100 li for each gridded square, and accurately mapped the outline of the coasts and river systems of China, extending all the way to India. Furthermore, the world's oldest known terrain map in printed form comes from the edited encyclopedia of Yang Jia in 1155, which displayed western China without the formal grid system that was characteristic of more professionally-made Chinese maps. Although gazetteers had existed since 52 CE during the Han Dynasty and gazetteers accompanied by illustrative maps (Chinese: tujing) since the Sui Dynasty, the illustrated gazetteer became much more common in the Song Dynasty, when the foremost concern was for illustrative gazetteers to serve political, administrative, and military purposes.
Movable type printing
The innovation of movable type printing was made by the artisan Bi Sheng (990–1051), first described by the scientist and statesman Shen Kuo in his Dream Pool Essays of 1088. The collection of Bi Sheng's original clay-fired typeface was passed on to one of Shen Kuo's nephews, and was carefully preserved. Movable type enhanced the already widespread use of woodblock methods of printing thousands of documents and volumes of written literature, consumed eagerly by an increasingly literate public. The advancement of printing had a deep impact on education and the scholar-official class, since more books could be made faster while mass-produced, printed books were cheaper in comparison to laborious handwritten copies. The enhancement of widespread printing and print culture in the Song period was thus a direct catalyst in the rise of social mobility and expansion of the educated class of scholar elites, the latter which expanded dramatically in size from the 11th to 13th centuries.
The movable type invented by Bi Sheng was ultimately trumped by the use of woodblock printing due to the limitations of the enormous Chinese character writing system, yet movable type printing continued to be used and was improved in later periods. The Yuan Dynasty scholar-official Wang Zhen (fl. 1290–1333) implemented a faster typesetting process, improved Bi's baked-clay movable type character set with a wooden one, and experimented with tin-metal movable type. The wealthy printing patron Hua Sui (1439–1513) of the Ming Dynasty established China's first metal movable type (using bronze) in 1490. In 1638 the Beijing Gazette switched their printing process from woodblock to movable type printing. Yet it was during the Qing Dynasty that massive printing projects began to employ movable type printing. This includes the printing of sixty six copies of a 5,020 volume long encyclopedia in 1725, the Gujin Tushu Jicheng (Complete Collection of Illustrations and Writings from the Earliest to Current Times), which necessitated the crafting of 250,000 movable type characters cast in bronze. By the 19th century the European style printing press replaced the old Chinese methods of movable type, while traditional woodblock printing in modern East Asia is used sparsely and for aesthetic reasons.
Hydraulic engineering and nautics
- Main gallery: Technology of the Song Dynasty.
There were considerable advancements in hydraulic engineering and nautical technology during the Song Dynasty. The 10th century invention of the pound lock for canal systems allowed different water levels to be raised and lowered for separated segments of a canal, which significantly aided the safety of canal traffic and allowed larger barges to pass through. The Song era also saw the innovation of watertight bulkhead compartments, which allowed a ship to sustain damage to its hull without sinking. If ships were damaged, the Chinese of the 11th century knew how to employ a drydock to repair boats while they were suspended out of the water. The Song Chinese used crossbeams to brace the ribs of ships in order to strengthen them in a skeleton-like structure. Stern-mounted rudders had been mounted on Chinese ships since the 1st century, as evidenced by a preserved Han tomb model of a ship. In the Song period the Chinese devised a way to mechanically raise and lower rudders in order for ships to travel in a wider range of water depths. The Song Chinese also arranged the protruding teeth of anchors in a circular pattern instead of in one direction. David Graff and Robin Higham state that this arrangement "[made] them more reliable" for anchoring ships. Arguably the most important nautical innovation of the Song period was the introduction of the magnetic mariner's compass for navigation at sea. The magnetic compass was first written of by Shen Kuo in his Dream Pool Essays of 1088, as well as by Zhu Yu in his Pingzhou Table Talks published in 1119.
Structural engineering and architecture
- Main gallery: Architecture of the Song Dynasty.
Architecture during the Song period reached new heights of sophistication. Authors such as Yu Hao and Shen Kuo wrote books outlining the field of architectural layouts, craftsmanship, and structural engineering in the 10th and 11th centuries, respectively. Shen Kuo preserved the written dialogues of Yu Hao when describing technical issues such as slanting struts built into pagoda towers for diagonal wind bracing. Shen Kuo also preserved Yu's specified dimensions and units of measurement for various building types. The architect Li Jie (1065–1110), who published the Yingzao Fashi ('Treatise on Architectural Methods') in 1103, greatly expanded upon the works of Yu Hao and compiled the standard building codes used by the central government agencies and by craftsmen throughout the empire. He addressed the standard methods of construction, design, and applications of moats and fortifications, stonework, greater woodwork, lesser woodwork, wood-carving, turning and drilling, sawing, bamboo work, tiling, wall building, painting and decoration, brickwork, glazed tile making, and provided proportions for mortar formulas in masonry. In his book, Li provided detailed and vivid illustrations of architectural components and cross-sections of buildings. These illustrations displayed various applications of corbel brackets, cantilever arms, mortise and tenon work of tie beams and cross beams, and diagrams showing the various building types of halls in graded sizes. He also outlined the standard units of measurement and standard dimensional measurements of all building components described and illustrated in his book.
Grandiose building projects were supported by the government, including the erection of towering Buddhist Chinese pagodas and the construction of enormous bridges (wood or stone, trestle or segmental arch bridge). Many of the pagoda towers built during the Song period were erected at heights that exceeded ten stories. Some of the most famous are the Iron Pagoda built in 1049 during the Northern Song and the Liuhe Pagoda built in 1165 during the Southern Song, although there were many others. The tallest is the Liaodi Pagoda of Hebei built in the year 1055, towering 84 m (275 ft) in total height. Some of the bridges reached lengths of 1220 m (4000 ft), with many being wide enough to allow two lanes of cart traffic simultaneously over a waterway or ravine.
The professions of the architect, craftsman, carpenter, and structural engineer were not regarded as equal in status to that of the Confucian scholar-official, since architectural knowledge had been passed down orally for thousands of years in China, from a father craftsman to his son. However, structural engineering and architecture schools were known to have existed during the Song period; one prestigious engineering school was headed by the renowned bridge-builder Cai Xiang (1012–1067) in medieval Fujian province.
Besides existing buildings and the technical literature of building manuals, Song Dynasty artwork portraying cityscapes and other buildings aids modern-day scholars in their attempts to reconstruct and realize the nuances of Song architecture. Song Dynasty artists such as Li Cheng, Fan Kuan, Guo Xi, Zhang Zeduan, Emperor Huizong of Song, Ma Lin, and Zhang Zerui painted close-up depictions of buildings as well as large expanses of cityscapes featuring arched bridges, halls and pavilions, pagoda towers, and distinct Chinese city walls. The scientist and statesman Shen Kuo was known for his criticism of artwork relating to architecture, saying that it was more important for an artist to capture a holistic view of a landscape than to focus on the angles and corners of buildings. For example, Shen criticized the work of the painter Li Cheng for failing to observe the principle of "seeing the small from the viewpoint of the large" in portraying buildings.
There were also pyramidal tomb structures in the Song era, such as the Song imperial tombs located in Gongxian, Henan province. About 100 km from Gongxian is another Song Dynasty tomb at Baisha, which features "elaborate facsimiles in brick of Chinese timber frame construction, from door lintels to pillars and pedestals to bracket sets, that adorn interior walls." The two large chambers of the Baisha tomb also feature conical-shaped roofs.
In addition to the Song gentry's antiquarian pursuits of art collecting, scholar-officials during the Song became highly interested in retrieving ancient relics from archaeological sites, in order to revive the use of ancient vessels in ceremonies of state ritual. Scholar-officials of the Song period claimed to have discovered ancient bronze vessels that were created as far back as the Shang Dynasty (1600–1046 BCE) and that bore the writing characters of the Shang era. Some attempted to recreate these bronze vessels by using imagination alone, not by observing tangible evidence of relics; this practice was criticized by Shen Kuo in his work of 1088. Yet Shen Kuo had more to criticize than this practice alone. Shen objected to the idea of his peers that ancient relics were products created by famous "sages" in lore or by the ancient aristocratic class; he rightly attributed the discovered handicrafts and vessels from ancient times to the work of artisans and commoners from previous eras. He also disapproved of his peers' pursuit of archaeology simply to enhance state ritual; Shen not only took an interdisciplinary approach to the study of archaeology, but also emphasized the study of functionality and the investigation of the original processes by which ancient relics were manufactured. Shen used ancient texts and existing models of armillary spheres to create one based on ancient standards; he described ancient weaponry, such as the use of a scaled sighting device on crossbows; and while experimenting with ancient musical measures, he suggested hanging an ancient bell by using a hollow handle.
Despite the gentry's overriding interest in archaeology simply for reviving ancient state rituals, some of Shen's peers took a similar approach to the study of archaeology. His contemporary Ouyang Xiu (1007–1072) compiled an analytical catalogue of ancient rubbings on stone and bronze, which pioneered ideas in early epigraphy and archaeology. On the unreliability of historical works written after the fact, the scholar-official Zhao Mingcheng (1081–1129) stated "...the inscriptions on stone and bronze are made at the time the events took place and can be trusted without reservation, and thus discrepancies may be discovered." The historian R.C. Rudolph states that Zhao's emphasis on consulting contemporary sources for accurate dating parallels the concern of the German historian Leopold von Ranke (1795–1886), and was in fact emphasized by many Song scholars. The Song scholar Hong Mai (1123–1202) heavily criticized what he called the court's "ridiculous" archaeological catalogue, the Bogutu, compiled during the Zhenghe and Xuanhe reign periods (1111–1125) of Emperor Huizong. Hong Mai obtained old vessels from the Han Dynasty and compared them with the descriptions offered in the catalogue, which he found so inaccurate that he stated he had to "hold my sides with laughter." Hong Mai pointed out that the erroneous material was the fault of Chancellor Cai Jing (1047–1126), who prohibited scholars from reading and consulting the written histories.
See also
- Four Great Books of Song
- Lu You
- Longquan celadon
- Shao Yong
- Tiger Cave Kiln
- Wang Chongyang
- Water Margin
- Wen Tianxiang
- Zeng Gong
- ↑ 1,0 1,1 1,2 1,3 Ebrey et al., 156.
- ↑ Brook, 96.
- ↑ 3,0 3,1 3,2 3,3 3,4 3,5 Ebrey et al., 167.
- ↑ Rossabi, 115
- ↑ Needham, Volume 3, 518.
- ↑ Needham, Volume 4, Part 2, 469–471.
- ↑ Hall, 23.
- ↑ Sastri, 173, 316.
- ↑ Shen, 158.
- ↑ Mote, 69.
- ↑ Ebrey et al., 154.
- ↑ Mote, 70–71.
- ↑ Sivin, III, 8.
- ↑ Sivin, III, 9.
- ↑ 15,0 15,1 Ebrey et al., 163.
- ↑ 16,0 16,1 16,2 16,3 16,4 16,5 Ebrey et al., 164.
- ↑ Sivin, III, 3–4.
- ↑ 18,0 18,1 Ebrey et al., 165.
- ↑ Wang, 14
- ↑ Sivin, III, 5.
- ↑ 21,0 21,1 Paludan, 136.
- ↑ Shen, 159–161.
- ↑ 23,0 23,1 23,2 Needham, Volume 4, Part 3, 476.
- ↑ Levathes, 43–47
- ↑ Needham, Volume 1, 134.
- ↑ Ebrey, 239.
- ↑ Embree, 385.
- ↑ Rossabi, 80.
- ↑ Ebrey et al., 235.
- ↑ 30,0 30,1 Ebrey et al., 236.
- ↑ 31,0 31,1 Needham, Volume 1, 139.
- ↑ Ebrey et al., 240.
- ↑ Rossabi, 55–56.
- ↑ Rossabi, 49.
- ↑ Rossabi, 50–51.
- ↑ Rossabi, 56.
- ↑ 37,0 37,1 Rossabi, 82.
- ↑ Rossabi, 88.
- ↑ Rossabi, 94.
- ↑ Rossabi, 90.
- ↑ Fairbank, 89.
- ↑ Needham, Volume 4, Part 3, 35.
- ↑ Needham, Volume 4, Part 3, 36.
- ↑ Ebrey, Cambridge Illustrated History of China, 155.
- ↑ 45,0 45,1 Ebrey, Cambridge Illustrated History of China, 158.
- ↑ Ebrey et al., 170.
- ↑ 47,0 47,1 Ebrey et al., 171.
- ↑ 48,0 48,1 Sivin, III, 1.
- ↑ Ebrey, 172.
- ↑ Gernet, 82–83
- ↑ Needham, Volume 4, Part 3, 465.
- ↑ 52,0 52,1 China. (2007). In Encyclopædia Britannica. From Encyclopædia Britannica Online. Retrieved on 2007-06-28
- ↑ Gernet, 222–225.
- ↑ West, 69–70.
- ↑ Gernet, 223.
- ↑ Rossabi, 162.
- ↑ West, 76.
- ↑ Ebrey, Cambridge Illustrated History of China, 145-146.
- ↑ 59,0 59,1 59,2 59,3 Ebrey, Cambridge Illustrated History of China, 147.
- ↑ 60,0 60,1 60,2 Ebrey et al., 162.
- ↑ 61,0 61,1 Hartwell, 417–418.
- ↑ 62,0 62,1 Hymes, 35–36.
- ↑ 63,0 63,1 63,2 63,3 63,4 Ebrey, 159.
- ↑ 64,0 64,1 64,2 Fairbank, 106.
- ↑ Fairbank, 101–106.
- ↑ Yuan, 196–199
- ↑ Ebrey, 162–163.
- ↑ 68,0 68,1 68,2 Ebrey, Cambridge Illustrated History of China, 148.
- ↑ Fairbank, 104.
- ↑ Gernet, 92–93.
- ↑ Gernet, 60–61, 68–69.
- ↑ 72,0 72,1 72,2 Ebrey, 161.
- ↑ McKnight, 155–157.
- ↑ 74,0 74,1 Gernet, 107.
- ↑ Sivin, III, 30–31
- ↑ Sivin, III, 30–31, footnote 27.
- ↑ Gernet, 170.
- ↑ Sung, 12.
- ↑ Sung, 72.
- ↑ Graff, 25–26.
- ↑ Lorge, 43.
- ↑ Lorge, 45.
- ↑ 83,0 83,1 83,2 Peers, 130.
- ↑ Peers, 130-131.
- ↑ Peers, 131.
- ↑ Peers, 129.
- ↑ Graff, 87.
- ↑ Graff, 86-87.
- ↑ 89,0 89,1 Schafer, 291.
- ↑ Needham, Volume 5, Part 7, 19.
- ↑ Needham, Volume 5, Part 7, 119.
- ↑ Needham, Volume 5, Part 7, 122-124.
- ↑ Needham, Volume 5, Part 7, 82-84.
- ↑ Ebrey, 81–83.
- ↑ Hargett (1985), 74–76.
- ↑ Bol, 44.
- ↑ Ebrey, Cambridge Illustrated History of China, 151.
- ↑ 98,0 98,1 Ebrey et al., 168.
- ↑ Wright, 93.
- ↑ Ebrey et al., 169.
- ↑ Wright, 88–89.
- ↑ Gernet, 215.
- ↑ 103,0 103,1 Gernet, 136.
- ↑ 104,0 104,1 Gernet, 128.
- ↑ 105,0 105,1 Gernet, 130.
- ↑ Gernet, 127–128.
- ↑ 107,0 107,1 Gernet, 129.
- ↑ 108,0 108,1 Gernet, 133.
- ↑ Gernet, 134.
- ↑ Gernet, 136–137.
- ↑ 111,0 111,1 Rossabi, 78.
- ↑ West, 73.
- ↑ Gernet, 135-136.
- ↑ Gernet, 134–135.
- ↑ Gernet, 138.
- ↑ West, 86.
- ↑ Ebrey et al., 157.
- ↑ 118,0 118,1 Needham, Volume 4, Part 2, 23.
- ↑ Wagner, 178–179.
- ↑ Wagner, 181–183.
- ↑ 121,0 121,1 Ebrey et al., 158.
- ↑ Embree 339.
- ↑ 123,0 123,1 Needham, Volume 5, Part 1, 48.
- ↑ Shen, 159–161.
- ↑ Islam in China (650–present): Origins. Religion & Ethics - Islam. BBC. Retrieved on 2007-08-01.
- ↑ Needham, Volume 4, Part 3, 465.
- ↑ Golas, Peter (1980). "Rural China in the Song". The Journal of Asian Studies: 291. doi:10.2307/2054291.
- ↑ Needham, Volume 5, Part 7, 117.
- ↑ Needham, Volume 5, Part 7, 80.
- ↑ Needham, Volume 5, Part 7, 82.
- ↑ Needham, Volume 5, Part 7, 220–221.
- ↑ Needham, Volume 5, Part 7, 192.
- ↑ Rossabi, 79.
- ↑ Needham, Volume 5, Part 7, 117.
- ↑ Needham, Volume 5, Part 7, 173–174.
- ↑ Needham, Volume 5, Part 7, 174–175.
- ↑ Needham, Volume 4, Part 2, 283.
- ↑ Needham, Volume 4, Part 2, 281–282.
- ↑ Needham, Volume 4, Part 2, 283–284.
- ↑ Needham, Volume 4, Part 2, 291.
- ↑ Needham, Volume 4, Part 2, 287.
- ↑ Needham, Volume 1, 136.
- ↑ Needham, Volume 4, Part 2, 446.
- ↑ Mohn, 1.
- ↑ Embree, 843.
- ↑ Chan, 15.
- ↑ Needham, Volume 3, 614.
- ↑ Sivin, III, 23–24.
- ↑ Needham, Volume 4, Part 1, 98.
- ↑ 150,0 150,1 Sivin, III, 17.
- ↑ Needham, Volume 4, Part 2, 445.
- ↑ Needham, Volume 4, Part 2, 448.
- ↑ Needham, Volume 4, Part 2, 111.
- ↑ Needham, Volume 4, Part 2, 165 & 445.
- ↑ Needham, Volume 4, Part 3, 569.
- ↑ 156,0 156,1 Needham, Volume 3, 208.
- ↑ Needham, Volume 4, Part 2, 107–108.
- ↑ Needham, Volume 3, 134-137.
- ↑ Needham, Volume 3, 59-60.
- ↑ Needham, Volume 3, 46.
- ↑ Needham, Volume 3, 104.
- ↑ Needham, Volume 3, 43.
- ↑ Needham, Volume 3, 62–63.
- ↑ Hsu, 90–93.
- ↑ Hsu, 96–97
- ↑ Needham, Volume 3, 538–540.
- ↑ 167,0 167,1 Sivin, III, 22.
- ↑ Needham, Volume 3, 547–549, Plate LXXXI.
- ↑ Needham, Volume 3, 549, Plate LXXXII.
- ↑ Hargett (1996), 406, 409–412.
- ↑ Needham, Volume 4, Part 3, 569.
- ↑ Sivin, III, 32.
- ↑ Needham, Volume 5, Part 1, 201–203.
- ↑ 174,0 174,1 Sivin, III, 27.
- ↑ Needham, Volume 4, Part 2, 33.
- ↑ Ebrey, 160.
- ↑ Needham, Volume 5, Part 1, 206.
- ↑ Needham, Volume 5, Part 1, 208.
- ↑ Needham, Volume 5, Part 1, 217.
- ↑ Needham, Volume 5, Part 1, 212–213.
- ↑ Brook, xxi
- ↑ Needham, Volume 5, Part 1, 215–216.
- ↑ Needham, Volume 4, Part 3, 350.
- ↑ Needham, Volume 4, Part 3, 350–351.
- ↑ Needham, Volume 4, Part 3, 463.
- ↑ Needham, Volume 4, Part 3, 660.
- ↑ 187,0 187,1 187,2 187,3 Graff, 86.
- ↑ Needham, Volume 4, Part 3, 141.
- ↑ Needham, Volume 4, Part 3, 82-84.
- ↑ Guo, 4.
- ↑ 191,0 191,1 Guo, 6.
- ↑ Needham, Volume 4, Part 3, 85.
- ↑ Guo, 5.
- ↑ Needham, Volume 4, Part 3, 96.
- ↑ Needham, Volume 4, Part 3, 98.
- ↑ Needham, Volume 4, Part 3, 100.
- ↑ Needham, Volume 4, Part 3, 108.
- ↑ Needham, Volume 4, Part 3, 109.
- ↑ Guo, 1.
- ↑ Needham, Volume 4, Part 3, 151.
- ↑ Needham, Volume 4, Part 3, 153.
- ↑ 202,0 202,1 Needham, Volume 4, Part 3, 115.
- ↑ 203,0 203,1 Steinhardt, 375.
- ↑ Steinhardt, 376.
- ↑ 205,0 205,1 205,2 205,3 205,4 Fraser & Haber, 227.
- ↑ Fairbank, 33.
- ↑ 207,0 207,1 Rudolph, 170.
- ↑ Rudolph, 172.
- ↑ Rudolph, 170–171.
- ↑ 210,0 210,1 Rudolph, 171.
- Bol, Peter K. "The Rise of Local History: History, Geography, and Culture in Southern Song and Yuan Wuzhou," Harvard Journal of Asiatic Studies (Volume 61, Number 1, 2001): 37–76.
- Brook, Timothy (1998). The Confusions of Pleasure: Culture and Commerce in Ming China. Berkeley: University of California Press. ISBN 978-0-520-22154-3
- Ebrey, Patricia Buckley, Anne Walthall, James B. Palais (2006). East Asia: A Cultural, Social, and Political History. Boston: Houghton Mifflin Company. ISBN 0-618-13384-4.
- Ebrey, Patricia Buckley (1999). The Cambridge Illustrated History of China. Cambridge: Cambridge University Press. ISBN 0-521-66991-X (paperback).
- Embree, Ainslie Thomas (1997). Asia in Western and World History: A Guide for Teaching. Armonk: ME Sharpe, Inc.
- Chan, Alan Kam-leung and Gregory K. Clancey, Hui-Chieh Loy (2002). Historical Perspectives on East Asian Science, Technology and Medicine. Singapore: Singapore University Press. ISBN 9971692597
- Fairbank, John King and Merle Goldman (1992). China: A New History; Second Enlarged Edition (2006). Cambridge; London: The Belknap Press of Harvard University Press. ISBN 0-674-01828-1
- Fraser, Julius Thomas and Francis C. Haber. (1986). Time, Science, and Society in China and the West. Amherst: University of Massachusetts Press. ISBN 0-87023-495-1.
- Gernet, Jacques (1962). Daily Life in China on the Eve of the Mongol Invasion, 1250-1276. Translated by H.M. Wright. Stanford: Stanford University Press. ISBN 0-8047-0720-0
- Graff, David Andrew and Robin Higham (2002). A Military History of China. Boulder: Westview Press.
- Guo, Qinghua. "Yingzao Fashi: Twelfth-Century Chinese Building Manual," Architectural History: Journal of the Society of Architectural Historians of Great Britain (Volume 41 1998): 1–13.
- Hall, Kenneth (1985). Maritime trade and state development in early Southeast Asia. Hawaii: University of Hawaii Press. ISBN 0-8248-0959-9.
- Hargett, James M. "Some Preliminary Remarks on the Travel Records of the Song Dynasty (960–1279)," Chinese Literature: Essays, Articles, Reviews (CLEAR) (July 1985): 67–93.
- Hargett, James M. "Song Dynasty Local Gazetteers and Their Place in The History of Difangzhi Writing," Harvard Journal of Asiatic Studies (Volume 56, Number 2, 1996): 405–442.
- Hartwell, Robert M. "Demographic, Political, and Social Transformations of China, 750-1550," Harvard Journal of Asiatic Studies (Volume 42, Number 2, 1982): 365–442.
- Hymes, Robert P. (1986). Statesmen and Gentlemen: The Elite of Fu-Chou, Chiang-Hsi, in Northern and Southern Sung. Cambridge: Cambridge University Press. ISBN 0521306310.
- Hsu, Mei-ling. "The Qin Maps: A Clue to Later Chinese Cartographic Development," Imago Mundi (Volume 45, 1993): 90-100.
- Levathes, Louise (1994). When China Ruled the Seas. New York: Simon & Schuster. ISBN 0-671-70158-4.
- Lorge, Peter (2005). War, Politics and Society in Early Modern China, 900–1795: 1st Edition. New York: Routledge.
- McKnight, Brian E. (1992). Law and Order in Sung China. Cambridge: Cambridge University Press.
- Mohn, Peter (2003). Magnetism in the Solid State: An Introduction. New York: Springer-Verlag Inc. ISBN 3540431837
- Mote, F.W. (1999). Imperial China: 900–1800. Harvard: Harvard University Press.
- Needham, Joseph (1986). Science and Civilization in China: Volume 1, Introductory Orientations. Taipei: Caves Books, Ltd.
- Needham, Joseph (1986). Science and Civilization in China: Volume 3, Mathematics and the Sciences of the Heavens and the Earth. Taipei: Caves Books, Ltd.
- Needham, Joseph (1986). Science and Civilization in China: Volume 4, Physics and Physical Technology, Part 2: Mechanical Engineering. Taipei: Caves Books, Ltd.
- Needham, Joseph (1986). Science and Civilization in China: Volume 4, Physics and Physical Technology, Part 3: Civil Engineering and Nautics. Taipei: Caves Books, Ltd.
- Needham, Joseph (1986). Science and Civilization in China: Volume 5, Chemistry and Chemical Technology, Part 7: Military Technology; The Gunpowder Epic. Taipei: Caves Books, Ltd.
- Paludan, Ann (1998). Chronicle of the Chinese Emperors. London: Thames & Hudson. ISBN 0500050902.
- Peers, C.J. (2006). Soldiers of the Dragon: Chinese Armies 1500 BC-AD 1840. Oxford: Osprey Publishing.
- Rossabi, Morris (1988). Khubilai Khan: His Life and Times. Berkeley: University of California Press. ISBN 0-520-05913-1.
- Rudolph, R.C. "Preliminary Notes on Sung Archaeology," The Journal of Asian Studies (Volume 22, Number 2, 1963): 169–177.
- Sastri, Nilakanta, K.A. The CōĻas, University of Madras, Madras, 1935 (Reprinted 1984).
- Schafer, Edward H. "War Elephants in Ancient and Medieval China," Oriens (Volume 10, Number 2, 1957): 289–291.
- Shen, Fuwei (1996). Cultural flow between China and the outside world. Beijing: Foreign Languages Press. ISBN 7-119-00431-X.
- Sivin, Nathan (1995). Science in Ancient China. Brookfield, Vermont: VARIORUM, Ashgate Publishing.
- Steinhardt, Nancy Shatzman. "The Tangut Royal Tombs near Yinchuan", Muqarnas: An Annual on Islamic Art and Architecture (Volume X, 1993): 369-381.
- Sung, Tz’u, translated by Brian E. McKnight (1981). The Washing Away of Wrongs: Forensic Medicine in Thirteenth-Century China. Ann Arbor: University of Michigan Press. ISBN 0892648007
- Wagner, Donald B. "The Administration of the Iron Industry in Eleventh-Century China," Journal of the Economic and Social History of the Orient (Volume 44 2001): 175–197.
- Wang, Lianmao (2000). Return to the City of Light: Quanzhou, an eastern city shining with the splendour of medieval culture. Fujian People's Publishing House.
- West, Stephen H. "Playing With Food: Performance, Food, and The Aesthetics of Artificiality in The Sung and Yuan," Harvard Journal of Asiatic Studies (Volume 57, Number 1, 1997): 67–106.
- Wright, Arthur F. (1959). Buddhism in Chinese History. Stanford: Stanford University Press.
- Yuan, Zheng. "Local Government Schools in Sung China: A Reassessment," History of Education Quarterly (Volume 34, Number 2; Summer 1994): 193–213.
Further reading
- Gascoigne, Bamber (2003). The Dynasties of China: A History. New York: Carroll & Graf. ISBN 1-84119-791-2.
- Giles, Herbert Allen (1939). A Chinese biographical dictionary (Gu jin xing shi zu pu). Shanghai: Kelly & Walsh. (see here for more)
- Gernet, Jacques (1982). A history of Chinese civilization. Cambridge: Cambridge University Press. ISBN 0-521-24130-8.
- Kruger, Rayne (2003). All Under Heaven: A Complete History of China. Chichester: John Wiley & Sons. ISBN 0-470-86533-4.
- Tillman, Hoyt C. and Stephen H. West (1995). China Under Jurchen Rule: Essays on Chin Intellectual and Cultural History. New York: State University of New York Press.
External links
- Song Dynasty in China
- China 7 BC To 1279
- Song Dynasty at China Heritage Quarterly
- Song Dynasty at bcps.org
- Song Dynasty at MSN encarta
- Song and Liao artwork
- Paintings of Song, Liao and Jin dynasties
- Song Dynasty art with video commentary
Traditional Japanese legend maintains that Japan was founded in 660 BC by the Emperor Jimmu, a direct descendant of the sun goddess and ancestor of the present ruling imperial family. About AD 405, the Japanese court officially adopted the Chinese writing system. During the sixth century, Buddhism was introduced. These two events revolutionized Japanese culture and marked the beginning of a long period of Chinese cultural influence. From the establishment of the first fixed capital at Nara in 710 until 1867, the emperors of the Yamato dynasty were the nominal rulers, but actual power was usually held by powerful court nobles, regents, or "shoguns" (military governors).
The first contact with the West occurred about 1542, when a Portuguese ship, blown off its course to China, landed in Japan. During the next century, traders from Portugal, the Netherlands, England, and Spain arrived, as did Jesuit, Dominican, and Franciscan missionaries. During the early part of the 17th century, Japan's shogunate suspected that the traders and missionaries were actually forerunners of a military conquest by European powers. This caused the shogunate to place foreigners under progressively tighter restrictions. Ultimately, Japan forced all foreigners to leave and barred all relations with the outside world except for severely restricted commercial contacts with Dutch and Chinese merchants at Nagasaki. This isolation lasted for 200 years, until Commodore Matthew Perry of the U.S. Navy forced the opening of Japan to the West with the Convention of Kanagawa in 1854.
Within several years, renewed contact with the West profoundly altered Japanese society. The shogunate was forced to resign, and the emperor was restored to power. The "Meiji restoration" of 1868 initiated many reforms. The feudal system was abolished, and numerous Western institutions were adopted, including a Western legal system and constitutional government along quasi-parliamentary lines.
In 1898, the last of the "unequal treaties" with Western powers was removed, signaling Japan's new status among the nations of the world. In a few decades, by creating modern social, educational, economic, military, and industrial systems, the Emperor Meiji's "controlled revolution" had transformed a feudal and isolated state into a world power.
Japanese leaders of the late 19th century regarded the Korean Peninsula as a "dagger pointed at the heart of Japan." It was over Korea that Japan became involved in war with the Chinese Empire in 1894-95 and with Russia in 1904-05. The war with China established Japan's dominant interest in Korea, while giving it the Pescadores Islands and Formosa (now Taiwan). After Japan defeated Russia in 1905, the resulting Treaty of Portsmouth awarded Japan certain rights in Manchuria and in southern Sakhalin, which Russia had received in 1875 in exchange for the Kurile Islands. Both wars gave Japan a free hand in Korea, which it formally annexed in 1910.
World War I permitted Japan, which fought on the side of the victorious Allies, to expand its influence in Asia and its territorial holdings in the Pacific. The postwar era brought Japan unprecedented prosperity. Japan went to the peace conference at Versailles in 1919 as one of the great military and industrial powers of the world and received official recognition as one of the "Big Five" of the new international order. It joined the League of Nations and received a mandate over Pacific islands north of the Equator formerly held by Germany.
During the 1920s, Japan progressed toward a democratic system of government. However, parliamentary government was not rooted deeply enough to withstand the economic and political pressures of the 1930s, during which military leaders became increasingly influential.
Japan invaded Manchuria in 1931 and set up the puppet state of Manchukuo. In 1933, Japan resigned from the League of Nations. The Japanese invasion of China in 1937 followed Japan's signing of the "anti-Comintern pact" with Nazi Germany the previous year and was part of a chain of developments culminating in the Japanese attack on the United States at Pearl Harbor, Hawaii, on December 7, 1941.
After almost 4 years of war, resulting in the loss of 3 million Japanese lives and the atomic bombings of Hiroshima and Nagasaki, Japan signed an instrument of surrender on the U.S.S. Missouri in Tokyo Bay on September 2, 1945. As a result of World War II, Japan lost all of its overseas possessions and retained only the home islands. Manchukuo was dissolved, and Manchuria was returned to China; Japan renounced all claims to Formosa; Korea was granted independence; southern Sakhalin and the Kuriles were occupied by the U.S.S.R.; and the United States became the sole administering authority of the Ryukyu, Bonin, and Volcano Islands. The 1972 reversion of Okinawa completed the United States' return of control of these islands to Japan.
After the war, Japan was placed under international control of the Allies through the Supreme Commander, Gen. Douglas MacArthur. U.S. objectives were to ensure that Japan would become a peaceful nation and to establish democratic self-government supported by the freely expressed will of the people. Political, economic, and social reforms were introduced, such as a freely elected Japanese Diet (legislature) and universal adult suffrage. The country's constitution took effect on May 3, 1947. The United States and 45 other Allied nations signed the Treaty of Peace with Japan in September 1951. The U.S. Senate ratified the treaty in March 1952, and under the terms of the treaty, Japan regained full sovereignty on April 28, 1952.
The public and the government appear to tolerate certain forms of public disorder as inherent to a properly functioning democracy. Demonstrations usually follow established forms. Groups receive legal permits and keep to assigned routes and areas. Placards and bullhorns are used to express positions. Traffic is sometimes disrupted, and occasional shoving battles between police and protesters result. But arrests are rare and generally are made only in cases involving violence.
Political extremists have not hesitated to use violence and are held responsible for bombings in connection with popular causes. In January 1990, the mayor of Nagasaki was shot by a member of the right-wing Seikijuku (Sane Thinkers School), presumably for a statement he had made that was perceived as critical of the late Emperor Hirohito. That attack came two days after the left-wing Chukakuha (Middle Core Faction), opposed to the imperial system, claimed responsibility for firing a rocket onto the grounds of the residence of the late emperor's brother, and a day before the government announced the events leading to the enthronement of Emperor Akihito in November 1990. The enthronement ceremonies were considered likely targets for extremist groups on the left and the right, who saw the mysticism surrounding the emperor as being overemphasized or excessively reduced, respectively, but no serious incidents took place. Although membership in these groups represents only a minute portion of the population and presents no serious threat to the government, authorities are concerned about the example set by the groups' violence, as well as by the particular violent events. Violent protests by radicals also occur in the name of causes apparently isolated from public sentiment. Occasional clashes between leftist factions and between leftists and rightists have injured both participants and bystanders. Security remains heavy at New Tokyo International Airport at Narita-Sanrizuka in Chiba Prefecture, the scene of violent protests in the 1970s by radical groups supporting local farmers opposed to expropriation of their land.
The most notorious extremists were the Japanese Red Army, a Marxist terrorist group. This group was responsible for an attack on Lod International Airport in Tel Aviv, Israel, in support of the Popular Front for the Liberation of Palestine in 1972. It participated in an attack on a Shell Oil refinery in Singapore in 1974 and seized the French embassy in The Hague that same year and the United States and Swedish embassies in Kuala Lumpur in 1975. In 1977 the Japanese Red Army hijacked a Japan Airlines jet over India in a successful demand for a US$6 million ransom and the release of six inmates in Japanese prisons. Following heavy criticism at home and abroad for the government's "caving in" to terrorists' demands, the authorities announced their intention to recall and reissue approximately 5.6 million valid Japanese passports to make hijacking more difficult. A special police unit was formed to keep track of the terrorist group, and tight airport security measures were instituted. Despite issuing regular threats, the Japanese Red Army was relatively inactive in the 1980s. In 1990 its members were reported to be in North Korea and Lebanon undergoing further training and were available as mercenaries to promote various political causes.
Japan's relationships with the newly industrialized economies (NIEs) of South Korea, Taiwan, Hong Kong, and Singapore, together often called the Four Tigers, were marked by both cooperation and competition. After the early 1980s, when Tokyo extended a large financial credit to South Korea for essentially political reasons, Japan avoided significant aid relationships with the NIEs. Relations instead involved capital investment, technology transfer, and trade. Increasingly, the NIEs came to be viewed as Japan's rivals in the competition for export markets for manufactured goods, especially the vast United States market.
Japan is an extremely homogeneous society, with non-Japanese, mostly Koreans, making up less than 1% of the population. The Japanese people are primarily the descendants of various peoples who migrated from Asia in prehistoric times; the dominant strain is North Asian or Mongoloid, with some Malay and Indonesian admixture. One of the earliest groups, the Ainu, who still persist to some extent in Hokkaido, are physically somewhat similar to Caucasians.
Contemporary Japan is a secular society. Creating harmonious relations with others through reciprocity and the fulfillment of social obligations is more significant for most Japanese than an individual's relationship to a transcendent God. Harmony, order, and self-development are three of the most important values that underlie Japanese social interaction. Basic ideas about self and the nature of human society are drawn from several religious and philosophical traditions. Religious practice, too, emphasizes the maintenance of harmonious relations with others (both spiritual beings and other humans) and the fulfillment of social obligations as a member of a family and a community.
Japan's principal religions are Shinto and Buddhism; most Japanese adhere to both faiths. While the development of Shinto was radically altered by the influence of Buddhism, which was brought from China in the 6th century, Japanese varieties of Buddhism also developed in sects such as Jodo, Shingon, and Nichiren. Numerous sects, called the "new religions," formed after World War II and have attracted many members. One of these, the Soka Gakkai, a Buddhist sect, grew rapidly in the 1950s and 60s and became a strong social and political force. Less than 1% of the population are Christians. Confucianism has deeply affected Japanese thought and was part of the generally significant influence that Chinese culture wielded on the formation of Japanese civilization.
Three basic features of the nation's system of criminal justice characterize its operations. First, the institutions--police, government prosecutor's offices, courts, and correctional organs-- maintain close and cooperative relations with each other, consulting frequently on how best to accomplish the shared goals of limiting and controlling crime. Second, citizens are encouraged to assist in maintaining public order, and they participate extensively in crime prevention campaigns, apprehension of suspects, and offender rehabilitation programs. Finally, officials who administer criminal justice are allowed considerable discretion in dealing with offenders.
Until the Meiji Restoration in 1868, the criminal justice system was controlled mainly by daimyo. Public officials, not laws, guided and constrained people to conform to moral norms. In accordance with the Confucian ideal, officials were to serve as models of behavior; the people, who lacked rights and had only obligations, were expected to obey. Such laws as did exist were transmitted through local military officials in the form of local domain laws. Specific enforcement varied from domain to domain, and no formal penal codes existed. Justice was generally harsh, and severity depended upon one's status. Kin and neighbors could share blame for an offender's guilt: whole families and villages could be flogged or put to death for one member's transgression.
After 1868 the justice system underwent rapid transformation. The first publicly promulgated legal codes, the Penal Code of 1880 and the Code of Criminal Instruction of 1880, were based on French models. Offenses were specified, and set punishments were established for particular crimes. Both codes were innovative in that they treated all citizens as equals, provided for centralized administration of criminal justice, and prohibited punishment by ex post facto law. Guilt was held to be personal; collective guilt and guilt by association were abolished. Offenses against the emperor were spelled out for the first time.
Innovative aspects of the codes notwithstanding, certain provisions reflected traditional attitudes toward authority. The prosecutor represented the state and sat with the judge on a raised platform--his position above the defendant and the defense counsel suggesting their relative status. Under a semi-inquisitorial system, primary responsibility for questioning witnesses lay with the judge and defense counsel could question witnesses only through the judge. Cases were referred to trial only after a judge presided over a preliminary fact-finding investigation in which the suspect was not permitted counsel. Because in all trials available evidence had already convinced the court in a preliminary procedure, the defendant's legal presumption of innocence at trial was undermined, and the legal recourse open to his counsel was further weakened.
The Penal Code was substantially revised in 1907 to reflect the growing influence of German law in Japan, and the French practice of classifying offenses into three types was eliminated. More important, where the old code had allowed very limited judicial discretion, the new one permitted the judge to apply a wide range of subjective factors in sentencing.
After World War II, occupation authorities initiated reform of the constitution and laws in general. Except for omitting offenses relating to war, the imperial family, and adultery, the 1947 Penal Code remained virtually identical to the 1907 version. The criminal procedure code, however, was substantially revised to incorporate rules guaranteeing the rights of the accused. The system became almost completely accusatorial, and the judge, although still able to question witnesses, decided a case on evidence presented by both sides. The preliminary investigative procedure was suppressed. The prosecutor and defense counsel sat on equal levels, below the judge. Laws on indemnification of the wrongly accused and laws concerning juveniles, prisons, probation, and minor offenses were also passed in the postwar years to supplement criminal justice administration.
INCIDENCE OF CRIME
The National Police Agency divides crime into six main categories. Felonies--the most serious and carrying the stiffest penalties--include murder and conspiracy to murder, robbery, rape, and arson. Violent offenses consist of unlawful assembly while possessing a dangerous weapon, simple and aggravated assault, extortion, and intimidation. Larceny encompasses burglary, vehicle theft, and shoplifting. Crimes classified as intellectual include fraud, embezzlement, counterfeiting, forgery, bribery, and breach of trust. Moral offenses include gambling, indecent exposure, and the distribution of obscene literature. Miscellaneous offenses frequently involve the obstruction of official duties, negligence with fire, unauthorized entry, negligent homicide or injury (often in traffic accidents), possession of stolen property, and destruction of property. Special laws define other criminal offenses, among them prostitution, illegal possession of swords and firearms, customs violations, and possession of controlled substances, including narcotics and marijuana.
In 1990 the police identified over 2.2 million Penal Code violations. Two types of violations--larceny (65.1 percent of total violations) and negligent homicide or injury as a result of accidents (26.2 percent)--accounted for over 90 percent of criminal offenses in Japan. Major crimes occur in Japan at a very low rate. In 1989 Japan experienced 1.3 robberies per 100,000 population, compared with 48.6 for West Germany, 65.8 for Britain, and 233.0 for the United States; and it experienced 1.1 murders per 100,000 population, compared with 3.9 for West Germany, 9.1 for Britain, and 8.7 for the United States that same year. Japanese authorities also solve a high percentage of robbery cases (75.9 percent, compared with 43.8 percent for West Germany, 26.5 percent for Britain, and 26.0 percent for the United States) and homicide cases (95.9 percent, compared with 94.4 percent for Germany, 78.0 percent for Britain, and 68.3 percent for the United States).
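The country comparisons above are straightforward ratios of rates per 100,000 population. A minimal sketch of that arithmetic, using only the 1989 figures quoted in this paragraph, is given below.

```python
# Ratios of the 1989 crime rates per 100,000 population quoted above.
rates_1989 = {
    "robbery":  {"Japan": 1.3, "West Germany": 48.6, "Britain": 65.8, "United States": 233.0},
    "homicide": {"Japan": 1.1, "West Germany": 3.9,  "Britain": 9.1,  "United States": 8.7},
}

for offense, by_country in rates_1989.items():
    japan_rate = by_country["Japan"]
    for country, rate in by_country.items():
        if country == "Japan":
            continue
        # Express each country's rate as a multiple of Japan's rate.
        print(f"{offense}: {country} rate is {rate / japan_rate:.1f}x Japan's")
```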
An important factor keeping crime low is the traditional emphasis on the individual as a member of groups to which he or she must not bring shame. Within these groups--family, friends, and associates at work or school--a Japanese citizen has social rights and obligations, derives valued emotional support, and meets powerful expectations to conform. These informal social sanctions display remarkable potency despite competing values in a changing society. Other important factors keeping the crime rate low are the prosperous economy and a strict and effective weapons control law. Ownership of handguns is forbidden to the public, hunting rifles and ceremonial swords are registered with the police, and the manufacture and sale of firearms are regulated. The production and sale of live and blank ammunition are also controlled, as are the transportation and importation of all weapons. Crimes are seldom committed with firearms.
Despite Japan's status as a modern, urban nation--a condition linked by many criminologists to growing rates of crime--the nation does not suffer from steadily rising levels of criminal activity. Although crime continues to be higher in urban areas, rates of crime remain relatively constant nationwide, and rates of violent crime continue to decrease.
The nation is not problem free, however; of particular concern to the police are crimes associated with modernization. Increased wealth and technological sophistication have brought new white-collar crimes, such as computer and credit card fraud, larceny involving coin dispensers, and insurance falsification. The incidence of drug abuse is minuscule compared with that in other industrialized nations and is limited mainly to stimulants. Japanese law enforcement authorities endeavor to control this problem by extensive coordination with international investigative organizations and stringent punishment of Japanese and foreign offenders. Traffic accidents and fatalities consume substantial law enforcement resources.
Juvenile delinquency, although not nearly as serious as in most industrialized nations, is of great concern to the authorities. In 1990 over 52 percent of persons arrested for criminal offenses (other than negligent homicide or injuries) were juveniles. Over 70 percent of the juveniles arrested were charged with larceny, mainly shoplifting and theft of motorcycles and bicycles. The failure of the Japanese education system to address the concerns of nonuniversity-bound students is cited as an important factor in the rise of juvenile crime.
The yakuza (underworld) had existed in Japan well before the 1800s and followed codes based on bushido. Their early operations were usually close-knit, and the leader and gang members had father-son relationships. Although this traditional arrangement continues to exist, yakuza activities are increasingly replaced by modern types of gangs that depend on force and money as organizing concepts. Nonetheless, yakuza often picture themselves as saviors of traditional Japanese virtues in a postwar society, sometimes forming ties with right-wing groups espousing the same views and attracting dissatisfied youths to their ranks.
Yakuza groups in 1990 were estimated to number more than 3,300 and together contained more than 88,000 members. Although concentrated in the largest urban prefectures, yakuza operate in most cities and often receive protection from high-ranking officials in exchange for their assistance in keeping the crime rate low by discouraging criminals operating individually or in small groups. Following concerted police pressure in the 1960s, smaller gangs either disappeared or began to consolidate in syndicate-type organizations. In 1990, three large syndicates dominated underworld crime in the nation and controlled more than 1,600 gangs and 42,000 gangsters.
Today, the crime rate in Japan is low compared to other industrialized countries. An analysis was done using INTERPOL data for Japan. For purpose of comparison, data were drawn for the seven offenses used to compute the United States FBI's index of crime. Index offenses include murder, forcible rape, robbery, aggravated assault, burglary, larceny, and motor vehicle theft. The combined total of these offenses constitutes the Index used for trend calculation purposes. According to the INTERPOL data, for murder, the rate in 2001 was 1.05 per 100,000 population for Japan, and 5.61 for USA. For rape, the rate in 2001 was 1.75 for Japan and 31.77 for USA. For robbery, the rate in 2001 was 5.02 for Japan, and 148.50 for USA. For aggravated assault, the rate in 2001 was 26.68 for Japan, and 318.55 for USA. For burglary, the rate in 2001 was 238.59 for Japan, and 740.80 for USA. The rate of larceny for 2001 was 1550.41 for Japan, and 2484.64 for USA. The rate for motor vehicle theft in 2001 was 49.71 for Japan and 430.64 for USA. The rate for all index offenses combined was 1873.21 for Japan and 4160.51 for USA.
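The combined index figure quoted above is simply the sum of the seven individual offense rates. The sketch below recomputes those totals from the 2001 INTERPOL rates given in the paragraph.

```python
# Recomputing the combined index rate (per 100,000 population) as the sum
# of the seven index offenses, using the 2001 figures quoted above.
index_2001 = {
    "murder":              {"Japan": 1.05,    "USA": 5.61},
    "rape":                {"Japan": 1.75,    "USA": 31.77},
    "robbery":             {"Japan": 5.02,    "USA": 148.50},
    "aggravated assault":  {"Japan": 26.68,   "USA": 318.55},
    "burglary":            {"Japan": 238.59,  "USA": 740.80},
    "larceny":             {"Japan": 1550.41, "USA": 2484.64},
    "motor vehicle theft": {"Japan": 49.71,   "USA": 430.64},
}

for country in ("Japan", "USA"):
    total = sum(rates[country] for rates in index_2001.values())
    print(f"{country}: combined index rate = {total:.2f}")
# Japan: 1873.21, USA: 4160.51 -- matching the totals given in the text.
```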
TRENDS IN CRIME
Between 1995 and 2001, according to INTERPOL data, the rate of murder increased from 1.02 to 1.05 per 100,000 population, an increase of 2.9%. The rate for rape increased from 1.19 to 1.75, an increase of 47.1%. The rate of robbery increased from 1.81 to 5.02, an increase of 177.3%. The rate for aggravated assault increased from 13.92 to 26.68, an increase of 91.7%. The rate for burglary increased from 186.82 to 238.59, an increase of 27.7%. The rate of larceny increased from 1035.44 to 1550.41, an increase of 49.7%. The rate of motor vehicle theft increased from 28.45 to 49.71, an increase of 74.7%. The rate of total index offenses increased from 1268.65 to 1873.21, an increase of 47.7%.
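Each percentage in this trend summary is an ordinary relative change, computed as the 2001 rate minus the 1995 rate, divided by the 1995 rate, times 100. The short check below reproduces the figures from the rates quoted above.

```python
# Percent change in rates per 100,000 population, 1995 -> 2001,
# recomputed from the figures quoted in the paragraph above.
rates_1995_2001 = {
    "murder":              (1.02,    1.05),
    "rape":                (1.19,    1.75),
    "robbery":             (1.81,    5.02),
    "aggravated assault":  (13.92,   26.68),
    "burglary":            (186.82,  238.59),
    "larceny":             (1035.44, 1550.41),
    "motor vehicle theft": (28.45,   49.71),
    "total index":         (1268.65, 1873.21),
}

for offense, (old, new) in rates_1995_2001.items():
    change = (new - old) / old * 100
    print(f"{offense}: {change:.1f}% increase")
# e.g. murder 2.9%, rape 47.1%, robbery 177.3%, total index 47.7%
```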
The Japanese government established a European-style civil police system in 1874, under the centralized control of the Police Bureau within the Home Ministry, to put down internal disturbances and maintain order during the Meiji Restoration. By the 1880s, the police had developed into a nationwide instrument of government control, providing support for local leaders and enforcing public morality. They acted as general civil administrators, implementing official policies and thereby facilitating unification and modernization. In rural areas especially, the police had great authority and were accorded the same mixture of fear and respect as the village head. Their increasing involvement in political affairs was one of the foundations of the authoritarian state in Japan in the first half of the twentieth century.
The centralized police system steadily acquired responsibilities, until it controlled almost all aspects of daily life, including fire prevention and mediation of labor disputes. The system regulated public health, business, factories, and construction, and it issued permits and licenses. The Peace Preservation Law of 1925 gave police the authority to arrest people for "wrong thoughts." Special Higher Police were created to regulate the content of motion pictures, political meetings, and election campaigns. Military police operating under the army and navy and the justice and home ministries aided the civilian police in limiting proscribed political activity. After the Manchurian Incident of 1931, military police assumed greater authority, leading to friction with their civilian counterparts. After 1937 police directed business activities for the war effort, mobilized labor, and controlled transportation.
After Japan's surrender in 1945, occupation authorities retained the prewar police structure until a new system was implemented and the Diet passed the 1947 Police Law. Contrary to Japanese proposals for a strong, centralized force to deal with postwar unrest, the police system was decentralized. About 1,600 independent municipal forces were established in cities, towns, and villages with 5,000 inhabitants or more, and a National Rural Police was organized by prefecture. Civilian control was to be ensured by placing the police under the jurisdiction of public safety commissions controlled by the National Public Safety Commission in the Office of the Prime Minister. The Home Ministry was abolished and replaced by the less powerful Ministry of Home Affairs, and the police were stripped of their responsibility for fire protection, public health, and other administrative duties.
When most of the occupation forces were transferred to Korea in 1950-51, the 75,000-strong National Police Reserve was formed to back up the ordinary police during civil disturbances, and pressure mounted for a centralized system more compatible with Japanese political preferences. The 1947 Police Law was amended in 1951 to allow the municipal police of smaller communities to merge with the National Rural Police. Most chose this arrangement, and by 1954 only about 400 cities, towns, and villages still had their own police forces. Under the 1954 amended Police Law, a final restructuring created an even more centralized system in which local forces were organized by prefectures under a National Police Agency.
The revised Police Law of 1954, still in effect in the 1990s, preserves some strong points of the postwar system, particularly measures ensuring civilian control and political neutrality, while allowing for increased centralization. The National Public Safety Commission system has been retained. State responsibility for maintaining public order has been clarified to include coordination of national and local efforts; centralization of police information, communications, and recordkeeping facilities; and national standards for training, uniforms, pay, rank, and promotion. Rural and municipal forces were abolished and integrated into prefectural forces, which handled basic police matters. Officials and inspectors in various ministries and agencies continue to exercise special police functions assigned to them in the 1947 Police Law.
The mission of the National Public Safety Commission is to guarantee the neutrality of the police by insulating the force from political pressure and to ensure the maintenance of democratic methods in police administration. The commission's primary function is to supervise the National Police Agency, and it has the authority to appoint or dismiss senior police officers. The commission consists of a chairman, who holds the rank of minister of state, and five members appointed by the prime minister with the consent of both houses of the Diet. The commission operates independently of the cabinet, but liaison and coordination with it are facilitated by the chairman's being a member of that body.
As the central coordinating body for the entire police system, the National Police Agency determines general standards and policies; detailed direction of operations is left to the lower echelons. In a national emergency or large-scale disaster, the agency is authorized to take command of prefectural police forces. In 1989 the agency was composed of about 1,100 national civil servants, empowered to collect information and to formulate and execute national policies. The agency is headed by a commissioner general who is appointed by the National Public Safety Commission with the approval of the prime minister. The central office includes the Secretariat, with divisions for general operations, planning, information, finance, management, and procurement and distribution of police equipment, and five bureaus. The Administration Bureau is concerned with police personnel, education, welfare, training, and unit inspections. The Criminal Investigation Bureau is in charge of research statistics and the investigation of nationally important and international cases. This bureau's Safety Department is responsible for crime prevention, combating juvenile delinquency, and pollution control. In addition, the Criminal Investigation Bureau surveys, formulates, and recommends legislation on firearms, explosives, food, drugs, and narcotics. The Communications Bureau supervises police communications systems.
The Traffic Bureau licenses drivers, enforces traffic safety laws, and regulates traffic. Intensive traffic safety and driver education campaigns are run at both national and prefectural levels. The bureau's Expressway Division addresses special conditions of the nation's growing system of express highways.
The Security Bureau formulates and supervises the execution of security policies. It conducts research on equipment and tactics for suppressing riots and oversees and coordinates activities of the riot police. The Security Bureau is also responsible for security intelligence on foreigners and radical political groups, including investigation of violations of the Alien Registration Law and administration of the Entry and Exit Control Law. The bureau also implements security policies during national emergencies and natural disasters.
The National Police Agency has seven regional police bureaus, each responsible for a number of prefectures. Metropolitan Tokyo and the island of Hokkaido are excluded from these regional jurisdictions and are run more autonomously than other local forces, in the case of Tokyo, because of its special urban situation, and of Hokkaido, because of its distinctive geography. The National Police Agency maintains police communications divisions in these two areas to handle any coordination needed between national and local forces.
There are some 258,000 police officers nationwide, about 97 percent of whom are affiliated with local police forces. Local forces include forty-three prefectural (ken) police forces; one metropolitan (to) police force, in Tokyo; two urban prefectural (fu) police forces, in Osaka and Kyoto; and one district (dō) police force, in Hokkaido. These forces have limited authority to initiate police actions. Their most important activities are regulated by the National Police Agency, which provides funds for equipment, salaries, riot control, escort, and natural disaster duties, and for internal security and multiple jurisdiction cases. National police statutes and regulations establish the strength and rank allocations of all local personnel and the locations of local police stations. Prefectural police finance and control the patrol officer on the beat, traffic control, criminal investigations, and other daily operations.
Each prefectural police headquarters contains administrative divisions corresponding to those of the bureaus of the National Police Agency. Headquarters are staffed by specialists in basic police functions and administration and are commanded by an officer appointed by the local office of the National Public Safety Commission. Most arrests and investigations are performed by prefectural police officials (and, in large jurisdictions, by police assigned to substations), who are assigned to one or more central locations within the prefecture. Experienced officers are organized into functional bureaus and handle all but the most ordinary problems in their fields.
Below these stations, police boxes (koban)--substations near major transportation hubs and shopping areas and in residential districts--form the first line of police response to the public. About 20 percent of the total police force is assigned to koban. Staffed by three or more officers working in eight-hour shifts, they serve as a base for foot patrols and usually have both sleeping and eating facilities for officers on duty but not on watch. In rural areas, residential offices usually are staffed by one police officer who resides in adjacent family quarters. These officers endeavor to become a part of the community, and their families often aid in performing official tasks.
Officers assigned to koban have intimate knowledge of their jurisdictions. One of their primary tasks is to conduct twice-yearly house-by-house residential surveys of homes in their areas, at which time the head of the household at each address fills out a residence information card detailing the names, ages, occupations, business addresses, and vehicle registration numbers of household occupants and the names of relatives living elsewhere. Police take special note of names of the aged or those living alone who might need special attention in an emergency. They conduct surveys of local businesses and record employee names and addresses, in addition to such data as which establishments stay open late and which employees might be expected to work late. Participation in the survey is voluntary, and most citizens cooperate, but an increasing segment of the population has come to regard the surveys as invasions of privacy.
Information elicited through the surveys is not centralized but is stored in each police box, where it is used primarily as an aid to locating people. When a crime occurs or an investigation is under way, however, these files are invaluable in establishing background data for a case.
Within their security divisions, prefectural police departments and the Tokyo police maintain special riot units. These units were formed after riots at the Imperial Palace in 1952 to respond quickly and effectively to large public disturbances. They are also used for crowd control during festival periods, at times of natural disaster, and to reinforce regular police when necessary. Full-time riot police can also be augmented by regular police trained in riot duties.
In handling demonstrations and violent disturbances, riot units are deployed en masse, military style. It is common practice for files of riot police to line the streets through which a demonstration passes. If demonstrators grow disorderly or deviate from officially countenanced areas, riot police stand shoulder to shoulder, sometimes three and four deep, and push with their hands to control the crowds. Individual action is forbidden. Three-person units sometimes perform reconnaissance duties, but more often operations are carried out by squads of nine to eleven, platoons of twenty-seven to thirty-three, and companies of eighty to one hundred. Front ranks are trained to open to allow passage of special squads to rescue captured police or to engage in tear gas assaults. Each officer wears a radio with an earpiece to hear commands given simultaneously to the formation.
The riot police are committed to using disciplined, nonlethal force and carry no firearms. They are trained to take pride in their poise under stress. Demonstrators also are usually restrained. Police brutality is rarely an issue. When excesses occur, the perpetrator is disciplined and sometimes transferred from the force if considered unable to keep his temper.
Extensive experience in quelling violent disorders led to the development of special uniforms and equipment for the riot police units. Riot dress consists of a field-type jacket that covers several pieces of body armor and includes a corselet hung from the waist, an aluminum plate down the backbone, and shoulder pads. Armored gauntlets cover the hands and forearms. Helmets have faceplates and flared padded skirts down the back to protect the neck. In case of violence, the front ranks carry 1.2-meter shields to protect against staves and rocks and hold nets on high poles to catch flying objects. Specially designed equipment includes water cannons, armored vans, and mobile tunnels for protected entry into seized buildings.
Because riot police duties require special group action, units are maintained in virtually self-sufficient compounds and trained to work as a coordinated force. The overwhelming majority of officers are bachelors who live in dormitories within riot police compounds. Training is constant and focuses on physical conditioning, mock battles, and tactical problems. A military atmosphere prevails--dress codes, behavior standards, and rank differentiations are more strictly adhered to than in the regular police. Esprit de corps is inculcated with regular ceremonies and institutionalization of rituals such as applauding personnel dispatched to or returning from assignments and formally welcoming senior officers to the mess hall at all meals.
Riot duty is not popular because it entails special sacrifices and much boredom between irregularly spaced actions. Although many police are assigned riot duty, only a few are volunteers. For many personnel, riot duty serves as a stepping stone because of its reputation and the opportunities it presents to study for the advanced police examinations necessary for promotion. Because riot duty demands physical fitness--the armored uniform weighs 6.6 kilograms--most personnel are young, often serving in the units after an initial assignment in a koban.
In addition to regular police officers, there are several thousand officials attached to various agencies who perform special duties relating to public safety. They are responsible for such matters as railroad security, forest preservation, narcotics control, fishery inspection, and enforcement of regulations on maritime, labor, and mine safety.
The largest and most important of these ministry-supervised public safety agencies is the Maritime Safety Agency, an external bureau of the Ministry of Transportation that deals with crime in coastal waters and maintains facilities for safeguarding navigation. The agency operates a fleet of patrol and rescue craft in addition to a few aircraft used primarily for antismuggling patrols and rescue activities. In 1990 there were 2,846 maritime incidents, in which 1,479 people drowned or were lost and 1,347 people were rescued.
There are other agencies having limited public safety functions. These agencies include the Labor Standards Inspection Office of the Ministry of Labor, railroad police of Japan Railways Group, immigration agents of the Ministry of Justice, postal inspectors of the Ministry of Posts and Telecommunications, and revenue inspectors in the Ministry of Finance.
A small intelligence agency, the Public Security Investigation Office of the Ministry of Justice, handles national security matters both inside and outside the country. Its activities are not generally known to the public.
Despite legal limits on police jurisdiction, many citizens retain their views of the police as authority figures to whom they can turn for aid. The public often seeks police assistance to settle family quarrels, counsel juveniles, and mediate minor disputes. Citizens regularly consult police for directions to hotels and residences--an invaluable service in cities where streets are often unnamed and buildings are numbered in the order in which they have been built rather than consecutively. Police are encouraged by their superiors to view these tasks as answering the public's demands for service and as inspiring community confidence in the police. Public attitudes toward the police are generally favorable, although a series of incidents of forced confessions in the late 1980s raised some concern about police treatment of suspects held for pretrial detention.
Education is highly stressed in police recruitment and promotion. Entrance to the force is determined by examinations administered by each prefecture. Examinees are divided into two groups: upper-secondary-school graduates and university graduates. Recruits undergo rigorous training--one year for upper-secondary-school graduates and six months for university graduates--at the residential police academy attached to the prefectural headquarters. On completion of basic training, most police officers are assigned to local police boxes. Promotion is achieved by examination and requires further course work. In-service training provides mandatory continuing education in more than 100 fields. Police officers with upper-secondary-school diplomas are eligible to take the examination for sergeant after three years of on-the-job experience. University graduates can take the examination after only one year. University graduates are also eligible to take the examinations for assistant police inspector, police inspector, and superintendent after shorter periods than upper-secondary-school graduates. There are usually five to fifteen examinees for each opening.
About fifteen officers per year pass advanced civil service examinations and are admitted as senior officers. These officers are groomed for administrative positions, and, although some rise through the ranks to become senior administrators, most such positions are held by specially recruited senior executives.
The police forces are subject to external oversight. Although officials of the National Public Safety Commission generally defer to police decisions and rarely exercise their powers to check police actions or operations, police are liable to civil and criminal prosecution, and the media actively publicize police misdeeds. The Human Rights Bureau of the Ministry of Justice solicits and investigates complaints against public officials, including police, and prefectural legislatures can summon police chiefs for questioning. Social sanctions and peer pressure also constrain police behavior. As in other occupational groups in Japan, police officers develop an allegiance to their own group and a reluctance to offend its principles.
Conditions of public order compare favorably with those in other industrialized countries. The overall crime rate is low by North American and West European standards and has shown a general decline since the mid-1960s. The incidence of violent crime is especially low, owing in part to effective enforcement of stringent firearms control laws. Problems of particular concern are those associated with a modern industrialized nation, including juvenile delinquency, traffic control, and white-collar crime.
Civil disorders occurred beginning in the early 1950s, chiefly in Tokyo, but did not seriously threaten the internal security of the state. Far less frequent after the early 1970s, they were in all cases effectively countered by efficient and well-trained police units employing the most sophisticated techniques of riot control.
Japan's police are an apolitical body under the general supervision of independent agencies, free of direct central government executive control. They are checked by an independent judiciary and monitored by a free and active press. The police are generally well respected and can rely on considerable public cooperation in their work.
Officials involved in the criminal justice system are usually highly trained professionals interested in preventing crime and rehabilitating offenders. They are allowed considerable discretion in dealing with legal infractions and appear to deserve the trust and respect accorded to them by the general public. Constitutionally guaranteed rights of habeas corpus, protection against self-incrimination, and the inadmissibility of confessions obtained under duress are enforced by criminal procedures.
The self-defense forces are responsible for external security and have limited domestic security responsibilities. The well-organized and disciplined police force is effectively under the control of the civilian authorities. However, there continue to be credible reports that police commit some human rights abuses.
Constitutional provisions for freedom from arbitrary arrest or imprisonment generally are respected in practice. The law provides for judicial determination of the legality of detention. Persons may not be detained without charge, and prosecuting authorities must be prepared to demonstrate before trial that probable cause exists in order to detain the accused. Under the law, a suspect may be held at either a regular detention facility or a "substitute" (police) detention facility for up to 72 hours. A judge interviews suspects prior to detention. A judge may extend preindictment custody by up to two consecutive 10-day periods based on a prosecutor's application. These extensions are sought and granted routinely. Under extraordinary circumstances, prosecutors may seek an additional 5-day extension, bringing the maximum period of preindictment custody to 25 days.
In 1999 the Supreme Court upheld as constitutional the section of the Criminal Procedure Code under which police and prosecutors have the power to control and may limit access by legal counsel when deemed necessary for the sake of an investigation. Counsel may not be present during interrogations at any time before or after indictment. As a court-appointed attorney is not approved until after indictment, suspects must rely on their own resources to hire an attorney before indictment, although local bar associations provide detainees with limited free counseling. Critics charge that access to counsel is limited both in duration and frequency; the Government denies that this is the case. In 2000 presentencing bail was available in roughly 13 percent of cases.
Bar associations and human rights groups have criticized the use of a "substitute prison system" for prisoners awaiting court hearings. Although the law stipulates that suspects should be held in "houses of detention" between arrest and sentencing, a police detention facility may be substituted at the order of the court. This provision originally was added to cover a shortage of normal detention facilities. According to year-end Ministry of Justice data, normal prison facilities were filled to 104 percent of capacity in 2000, a 9.1 percent increase over 1999. Approximately 30 percent of normal detention facilities suffered from overcrowding in 2000. Critics charged that allowing suspects to be detained by the same authorities who interrogated them heightens the potential for abuse and coercion. The Government countered that cases sent to police detention facilities tend to be those where the facts were not in dispute. A Justice Ministry regulation permits detention house officials to limit the amount of documentation related to ongoing court cases retained by prisoners.
The length of time before a suspect is brought to trial depends on the nature of the crime but rarely exceeds 3 months from the date of arrest; the average is 1 to 2 months. In one case, an accused allegedly was held for 3 years. In March in its final report, an advisory panel to the Prime Minister on judicial reform called for a substantial increase in judges, prosecutors, and Justice Ministry personnel to shorten the time between arrest and trial.
The law does not permit forced exile, and it is not used.
Japan's legal system is modeled on the European civil law system, with English-American influence. Judicial review of legislative acts is exercised by the Supreme Court.
In contrast to the prewar system, in which executive bodies had much control over the courts, the postwar constitution guarantees that "all judges shall be independent in the exercise of their conscience and shall be bound only by this constitution and the Laws" (Article 76). They cannot be removed from the bench "unless judicially declared mentally or physically incompetent to perform official duties," and they cannot be disciplined by executive agencies (Article 78). A Supreme Court justice, however, may be removed by a majority of voters in a referendum that occurs at the first general election following the justice's appointment and every ten years thereafter. As of the early 1990s, however, the electorate had not used this unusual system to dismiss a justice.
The Supreme Court, the highest court, is the final court of appeal in civil and criminal cases. The constitution's Article 81 designates it "the court of last resort with power to determine the constitutionality of any law, order, regulation, or official act." The Supreme Court is also responsible for nominating judges to lower courts, determining judicial procedures, overseeing the judicial system, including the activities of public prosecutors, and disciplining judges and other judicial personnel. It renders decisions from either a grand bench of fifteen justices or a petit bench of five. The grand bench is required for cases involving constitutionality. The court includes twenty research clerks, whose function is similar to that of the clerks of the United States Supreme Court.
The judicial system is unitary: there is no independent system of prefectural-level courts equivalent to the state courts of the United States. Below the Supreme Court, the Japanese system included eight high courts, fifty district courts, and fifty family courts in the late 1980s. Four of each of the last two types of courts were located in Hokkaido, and one of each in the remaining forty-six rural prefectures, urban prefectures, and the Tokyo Metropolitan District. Summary courts, located in 575 cities and towns in the late 1980s, performed the functions of small claims courts and justices of the peace in the United States, having jurisdiction over minor offenses and civil cases.
The Constitution provides for an independent judiciary, and the judiciary generally is independent and free from executive branch interference. The Cabinet appoints judges for 10-year terms, which can be renewed until judges reach the age of 65. Justices of the Supreme Court can serve until the age of 70 but face periodic review through popular referendums.
There are several levels of courts, including high courts, district courts, family courts, and summary courts, with the Supreme Court serving as the final court of appeal. Normally a trial begins at the district court level, and a verdict may be appealed to a higher court, and ultimately, to the Supreme Court.
The Government generally respects in practice the constitutional provisions for the right to a speedy and public trial by an impartial tribunal in all criminal cases. Although most criminal trials are completed within a reasonable length of time, cases may take several years to work their way through the trial and appeals process. Responding to the final report of a Government advisory panel established in 1999 to outline structural reforms to the judicial system, in June the Government announced plans to begin drafting legislation aimed at reducing the average time required to complete criminal trials and civil trials that include witness examination (which lasted an average of 20.5 months in 1999). Its proposals included hiring substantial numbers of additional court and Justice Ministry personnel, revising bar examinations, establishing new graduate law schools to increase the overall number of legal professionals (judges, lawyers, and prosecutors) three-fold by 2010, and requiring that courts and opposing litigants jointly work to improve trial planning by allowing for earlier evidence collection and disclosure. In the complex case of the Aum Shinrikyo 1995 sarin gas attack on the Tokyo subway system, the trials of seven senior members of the group were still underway in district courts at year's end.
The nation's criminal justice officials follow specified legal procedures in dealing with offenders. Once a suspect is arrested by national or prefectural police, the case is turned over to attorneys in the Supreme Public Prosecutors Office, who are the government's sole agents in prosecuting lawbreakers. Although under the Ministry of Justice's administration, these officials work under Supreme Court rules and are career civil servants who can be removed from office only for incompetence or impropriety. Prosecutors present the government's case before judges in the Supreme Court and the four types of lower courts: high courts, district courts, summary courts, and family courts. Penal and probation officials administer programs for convicted offenders under the direction of public prosecutors.
After identifying a suspect, police have the authority to exercise some discretion in determining the next step. If, in cases pertaining to theft, the amount is small or already returned, the offense petty, the victim unwilling to press charges, the act accidental, or the likelihood of a repetition not great, the police can either drop the case or turn it over to a prosecutor. Reflecting the belief that appropriate remedies are sometimes best found outside the formal criminal justice mechanisms, in 1990 over 70 percent of criminal cases were not sent to the prosecutor.
Police also exercise wide discretion in matters concerning juveniles. Police are instructed by law to identify and counsel minors who appear likely to commit crimes, and they can refer juvenile offenders and nonoffenders alike to child guidance centers to be treated on an outpatient basis. Police can also assign juveniles or those considered to be harming the welfare of juveniles to special family courts. These courts were established in 1949 in the belief that the adjustment of a family's situation is sometimes required to protect children and prevent juvenile delinquency. Family courts are run in closed sessions, try juvenile offenders under special laws, and operate extensive probationary guidance programs. The cases of young people between the ages of fourteen and twenty can, at the judgment of police, be sent to the public prosecutor for possible trial as adults before a judge under the general criminal law.
Safeguards protect suspects' rights. Police have to secure warrants to search for or seize evidence. A warrant is also necessary for an arrest, although if the crime is very serious or the perpetrator likely to flee, it can be obtained immediately after arrest. Within forty-eight hours after placing a suspect under detention, the police have to present their case before a prosecutor, who is then required to apprise the accused of the charges and of the right to counsel. Within another twenty-four hours, the prosecutor has to go before a judge and present a case to obtain a detention order. Suspects can be held for ten days (extensions are granted in special cases), pending an investigation and a decision whether or not to prosecute. In the 1980s, some suspects were reported to have been mistreated during this detention to exact a confession.
Prosecution can be declined on the grounds of insufficient evidence or on the prosecutor's judgment. Under Article 248 of the Code of Criminal Procedure, after weighing the offender's age, character, and environment, the circumstances and gravity of the crime, and the accused's rehabilitative potential, the prosecutor need not institute public action; prosecution can be declined, or suspended and ultimately dropped after a probationary period. Because the investigation and disposition of a case can occur behind closed doors and the identity of an accused person who is not prosecuted is rarely made public, an offender can successfully reenter society and be rehabilitated under probationary status without the stigma of a criminal conviction.
Institutional safeguards check the prosecutors' discretionary powers not to prosecute. Lay committees are established in conjunction with branch courts to hold inquests on a prosecutor's decisions. These committees meet four times yearly and can order that a case be reinvestigated and prosecuted. Victims or interested parties can also appeal a decision not to prosecute.
Most offenses are tried first in district courts before one or three judges, depending on the severity of the case. Defendants are protected from self-incrimination, forced confession, and unrestricted admission of hearsay evidence. In addition, defendants have the right to counsel, public trial, and cross-examination. Trial by jury was authorized by the 1923 Jury Law but was suspended in 1943. It had not been reinstated as of 1993, chiefly owing to defendants' distrust of jurors, who were believed to be emotional and easily influenced, and the generally greater public confidence in the competence of judges.
The judge conducts the trial and is authorized to question witnesses, independently call for evidence, decide guilt, and affix a sentence. The judge can also suspend any sentence or place a convicted party on probation. Should a judgment of not guilty be rendered, the accused is entitled to compensation by the state based on the number of days spent in detention.
Criminal cases from summary courts, family courts, and district courts can be appealed to the high courts by both the prosecution and the defense. Criminal appeal to the Supreme Court is limited to constitutional questions and a conflict of precedent between the Supreme Court and high courts.
The criminal code sets minimum and maximum sentences for offenses to allow for the varying circumstances of each crime and criminal. Penalties range from fines and short-term incarceration to compulsory labor and the death penalty. Heavier penalties are meted out to repeat offenders. Capital punishment consists of death by hanging and can be imposed on those convicted of leading an insurrection, inducing or aiding foreign armed aggression, arson, or homicide.
As in other industrialized countries, law plays a central role in Japanese political, social, and economic life. Fundamental differences between Japanese and Western legal concepts, however, have often led Westerners to believe that Japanese society is based more on quasi-feudalistic principles of paternalism (the oyabun-kobun relationship) and social harmony, or wa. Japan has a relatively small number of lawyers, about 13,000 practicing in the mid-1980s, compared with 667,000 in the United States, a country with only twice Japan's population. This fact has been offered as evidence that the Japanese are strongly averse to upsetting human relationships by taking grievances to court. In cases of liability, such as the crash of a Japan Airlines jetliner in August 1985, which claimed 520 lives, Japanese victims or their survivors were more willing than their Western counterparts would be to accept the ritualistic condolences of company presidents (including officials' resignations over the incident) and nonjudicially determined compensation, which in many cases was less than they might have received through the courts.
Factors other than a cultural preference for social harmony, however, explain the court-shy behavior of the Japanese. The Ministry of Justice closely screens university law faculty graduates and others who wish to practice law or serve as judges. Only about 2 percent of the approximately 25,000 persons who applied annually to the Ministry's required two-year course at the Legal Training and Research Institute were admitted in the late 1980s. The institute graduates only a few hundred new lawyers each year. Plagued by shortages of attorneys, judges, clerks, and other personnel, the court system is severely overburdened. Presiding judges often strongly advise plaintiffs to seek out-of-court settlements. The progress of cases through even the lower courts is agonizingly slow, and appeals carried to the Supreme Court can take decades. Faced with such obstacles, most individuals choose not to seek legal remedies. If legal personnel were dramatically increased, which seems unlikely, use of the courts might approach rates found in the United States and other Western countries.
In the English-speaking countries, law has been viewed traditionally as a framework of enforceable rights and duties designed to protect the legitimate interests of private citizens. The judiciary is viewed as occupying a neutral stance in disputes between individual citizens and the state. Legal recourse is regarded as a fundamental civil right. The reformers of the Meiji era (1868-1912), however, were strongly influenced by legal theories that had evolved in Germany and other continental European states. The Meiji reformers viewed the law primarily as an instrument through which the state controls a restive population and directs energies to achieving the goals of fukoku kyohei (wealth and arms).
The primary embodiment of the spirit of the law in modern Japan has not been the attorney representing private interests but the bureaucrat who exercises control through what sociologist Max Weber has called "legal-rational" methods of administration. Competence in law, acquired through university training, consists of implementing, interpreting, and, at the highest levels, formulating law within a bureaucratic framework. Many functions performed by lawyers in the United States and other Western countries are the responsibility of civil servants in Japan. The majority of the country's ruling elite, both political and economic, has been recruited from among the graduates of the Law Faculty of the University of Tokyo and other prestigious institutions, people who have rarely served as private attorneys.
Legal and bureaucratic controls on many aspects of Japanese society were extremely tight. The Ministry of Education, Science, and Culture, for example, closely supervised both public and private universities. Changes in undergraduate or graduate curricula, the appointment of senior faculty, and similar actions required ministry approval in conformity with very detailed regulations. Although this "control-oriented" use of law did not inhibit the freedom of teaching or research (protected by Article 23 of the constitution), it severely limited the universities' scope for reform and innovation. Controls were even tighter on primary and secondary schools.
There is no trial by jury. The defendant is informed of the charges upon arrest and is assured a public trial by an independent civilian court with defense counsel and the right of cross-examination. However, in June the Government's Judicial Reform Council recommended in its final report that randomly chosen members of the public be allowed to participate in determining rulings and penalties in criminal trials by deliberating the cases alongside professional judges. The Government submitted implementing legislation to the Diet in November with the aim of adopting all of the advisory panel's reform proposals by 2004; it was enacted in December.
A defendant is presumed innocent. The Constitution provides defendants with the right not to be compelled to testify against themselves as well as to free and private access to counsel; however, the Government contends that the right to consult with attorneys is not absolute and can be restricted if such restriction is compatible with the spirit of the Constitution. Access sometimes is abridged in practice; for example, the law allows prosecutors to control access to counsel before indictment, and there are allegations of coerced confessions. Defendants are protected from the retroactive application of laws and have the right of access to incriminating evidence after a formal indictment has been made. However, the law does not require full disclosure by prosecutors, and material that the prosecution does not use in court may be suppressed. Critics claim that legal representatives of defendants do not always have access to all the relevant material in the police record needed to prepare their defense. A defendant who is dissatisfied with the decision of a trial court of first instance may, within the period prescribed by law, appeal to a higher court.
No guidelines mandate the acceptable quality of communications between judges, lawyers, and non-Japanese-speaking defendants, although the Supreme Court publishes handbooks explaining legal procedures and terms for court interpreters. In 2000 the Supreme Court introduced a training system to help court interpreters understand complicated trial procedures. However, no standard licensing or qualification system for certifying court interpreters exists, and a trial may proceed even if the accused does not understand what is happening or being said. The Supreme Court's 1998 statistics show a chronic shortage of qualified court interpreters, particularly for non-English-speaking defendants. Foreign detainees frequently claim that police urge them to sign statements in Japanese that they cannot read and that are not adequately translated.
The Constitution provides for freedom from torture and cruel, inhuman, or degrading treatment or punishment, and the Penal Code prohibits violence and cruelty toward suspects under criminal investigation; however, reports by several bar associations, human rights groups, and some prisoners indicate that police and prison officials sometimes used physical violence, including kicking and beating, as well as psychological intimidation, to obtain confessions from suspects in custody or to enforce discipline. Unlike in 2000, there were no allegations of beatings of detainees by employees of private security companies that operated immigration detention facilities at Narita International Airport. A revised National Police Law passed by the Diet in 2000 in response to a series of internal police allegations of misconduct, corruption, and bullying went into effect in February. The new law allows individuals to lodge complaints against the police with national and local public safety commissions. These commissions may direct the police to conduct investigations. However, public confidence remained low, and allegations persisted that the police and public safety commissions remained lax in investigating charges of police misconduct.
The Constitution and the Criminal Code include safeguards to ensure that no criminal suspect can be compelled to make a self-incriminating confession, nor convicted or punished in cases where the only evidence against the accused is his own confession. The appellate courts have overturned some convictions in recent years on the grounds that they were obtained as a result of coerced confessions. In addition civil and criminal suits alleging abuse during interrogation and detention have been brought against some police and prosecution officials.
About 90 percent of all criminal cases going to trial include confessions, reflecting the priority the judicial system places on admissions of guilt. The Government points out that the high percentage of confessions, like the high conviction rate, reflects the high standard of evidence needed to bring an indictment. Confession is regarded as the first step in the rehabilitative process.
Physical restraints, such as leather handcuffs, continued to be used as a form of punishment, and some prisoners have been forced to eat and relieve themselves unassisted while wearing these restraints. Ministry of Justice officials stated that restraints were used inside prisons only when prisoners had been violent and posed a threat to themselves and others, or when there was concern that a prisoner might attempt to escape. In June the Tokyo District Court ordered the Government to pay $10,000 (1 million yen) in damages to a prisoner who had been confined in handcuffs for prolonged periods of isolation at a Tokyo immigration detention facility in 1994. The Court ruled that manacling the man's hands behind his back and detaining him in isolation for long periods was unlawful.
Prisons, in existence in some feudal domains as early as the late sixteenth century, originally functioned to hold people for trial or prior to execution. Because of the costs and difficulties involved in long-term incarceration and the prevailing standards of justice that called for sentences of death or exile for serious crimes, life imprisonment was rare. Facilities were used sometimes for shorter confinement. Prisoners were treated according to their social status and housed in barracks-like quarters. In some cases, the position of prison officer was hereditary, and staff vacancies were filled by relatives.
During the Meiji period (1868-1912), the country adopted Western-style penology along with systems of law and legal administration. In 1888 an aftercare hostel (halfway house) was opened for released prisoners. Staffed mainly by volunteers, this institution helped ex-convicts reenter society. Many ex-convicts had been ostracized by their families for the shame they had incurred and had literally nowhere to go. The Prison Law of 1908 provided basic rules and regulations for prison administration, stipulating separate facilities for those sentenced to confinement with and without labor and for those detained for trial and short sentences.
The Juvenile Law of 1922 established administrative organs to handle offenders under the age of eighteen and recognized volunteer workers officially as the major forces in the community-based treatment of juveniles. After World War II, juvenile laws were revised to extend their jurisdiction to those under the age of twenty. Volunteer workers were reorganized under a new law and remain an indispensable part of the rehabilitation system.
The Correctional Bureau of the Ministry of Justice administers the adult prison system as well as the juvenile correctional system and three women's guidance homes (to rehabilitate prostitutes). The ministry's Rehabilitation Bureau operates the probation and parole systems. Prison personnel are trained at an institute in Tokyo and in branch training institutes in each of the eight regional correctional headquarters under the Correctional Bureau. Professional probation officers study at the Legal Training and Research Institute of the Ministry of Justice.
In 1990 Japan's prison population stood at somewhat less than 47,000; nearly 7,000 were in short-term detention centers, and the remaining 40,000 were in prisons. Approximately 46 percent were repeat offenders. This high proportion of repeat offenders was attributed mainly to the discretionary powers of police, prosecutors, and courts and to their tendency to seek alternative sentences for first offenders.
The penal system is intended to resocialize, reform, and rehabilitate offenders. On confinement, prisoners are first classified according to gender, nationality, kind of penalty, length of sentence, degree of criminality, and state of physical and mental health. They are then placed in special programs designed to treat individual needs. Vocational and formal education are emphasized, as is instruction in social values. Most convicts engage in labor, for which a small stipend is set aside for use on release. Under a system stressing incentives, prisoners are initially assigned to community cells, then earn better quarters and additional privileges based on their good behavior.
Although a few juvenile offenders are handled under the general penal system, most are treated in separate juvenile training schools. More lenient than the penal institutions, these facilities provide correctional education and regular schooling for delinquents under the age of twenty.
According to the Ministry of Justice, the government's responsibility for social order does not end with imprisoning an offender, but also extends to aftercare treatment and to noninstitutional treatment to substitute for or supplement prison terms. A large number of those given suspended sentences are released to the supervision of volunteer officers under the guidance of professional probation officers. Adults are usually placed on probation for a fixed period, and juveniles are placed on probation until they reach the age of twenty. Volunteers are also used in supervising parolees, although professional probation officers generally supervise offenders considered to have a high risk of recidivism. Volunteers hail from all walks of life and handle no more than five cases at one time. They are responsible for overseeing the offenders' conduct to prevent the occurrence of further offenses. Volunteer probation officers also offer guidance and assistance to the ex-convict in assuming a law-abiding place in the community. Although volunteers are sometimes criticized for being too old compared with their charges (more than 70 percent are retired and are age fifty-five or over) and thus unable to understand the problems their charges faced, most authorities believe that the volunteers are critically important in the nation's criminal justice system.
Public support and cooperation with law enforcement officials help hold down Japan's crime rate, with little or no threat to internal security. The external security threat is also considerably reduced from previous years. The Japanese government is confident that diplomatic activity and a limited SDF, backed by United States treaty commitments, will be sufficient to deter any potential adversary.
Today, prison conditions meet international standards; however, the National Police Agency and Ministry of Justice reported that some prisons and detention facilities were overcrowded during the year. Prisons in most areas of the country are not heated, and prisoners are given only minimal additional clothing to protect themselves against cold weather. There have been cases of frostbite among the prison population in recent years. The Ministry of Justice requested funding in August as part of a 3-year plan to install heaters in prison buildings nationwide; individual cells will remain unheated. Prisoners may not purchase or be given supplementary food, and they are strongly discouraged from complaining about conditions. Prisoners face severe restrictions on the quantity of their incoming and outgoing correspondence. The authorities read letters to and from prisoners, and the letters may be censored or, with a court order, confiscated. All visits with convicted prisoners are monitored; however, prisoners whose cases are pending are allowed private access to their legal representatives. Prison officials claim that the "no complaining" policy is designed to keep family members from worrying about their relatives. For the same reason, the Justice Ministry usually does not inform a condemned inmate's family prior to the person's execution. Human rights organizations reported that lawyers also were not told of an execution until after the fact, and that death row prisoners were held for years in solitary confinement with little contact with anyone but prison guards. Parole may not be granted for any reason, including medical and humanitarian reasons, until an inmate has served two-thirds of his or her sentence.
In the past, the Japanese Federation of Bar Associations and human rights groups have criticized the prison system, with its emphasis on strict discipline and obedience to numerous rules. Prison rules remain confidential. Wardens continue to have broad leeway in enforcing punishments selectively, including "minor solitary confinement," which may be imposed for a minimum of 1 day and not more than 60 days, during which the prisoner is made to sit (in the case of foreigners) or kneel (in the case of Japanese) motionless in the middle of an empty cell.
Women and juveniles are housed in separate facilities from men; at times during the year, some women's detention facilities also were operating over stated capacity. Pre-trial detainees also are held separately from convicted prisoners. Conditions in immigration detention facilities meet most international standards.
According to year-end Ministry of Justice data, normal prison facilities were filled to 103 percent of capacity in 2001. Nongovernmental organization (NGO) and press sources indicated that this overcrowding was a contributing factor in the 6,373 reported violent incidents in prisons in 2001, a 1.6-fold increase in incidents since 1996.
Violence against women, particularly domestic violence, often goes unreported due to social and cultural concerns about shaming one's family or endangering the reputation of one's spouse or offspring. In addition, women who are victims of domestic violence typically return to the home of their parents rather than file reports with the authorities. Therefore, National Police Agency statistics on violence against women probably understate the magnitude of the problem. According to the Health Ministry, 9,176 consultations on domestic violence were handled at 47 women's counseling centers in the year ending March 31. The National Police Agency reported 1,096 injuries or killings due to domestic violence in 2000, a 50 percent increase over 1999. In April the Diet passed a new law to combat domestic violence, which allows district courts to impose 6-month restraining orders on perpetrators and to sentence violators to up to 1 year in prison or fines of up to $7,910 (1 million yen). The law, which came into effect in October, also covers common-law marriages and divorced individuals; it encourages prefectures to expand shelter facilities for domestic abuse victims and stipulates that local governments offer financial assistance to the 40 private institutions already operating such shelters. In December police in Kanagawa Prefecture arrested a man for violating a restraining order that had been issued under the new law in November. According to National Police Agency statistics, 2,228 rapes and 9,326 indecent assaults were reported during the year. Husbands have been prosecuted for spousal rape; usually these cases involved a third party who assisted in the rape. The National Police Agency confirmed three cases of spousal rape during the year.
Many local governments were responding positively to a need for confidential assistance by establishing special women's consultation departments in police and prefectural offices. An antistalking law went into effect in November 2000 in response to rising complaints about women's lack of recourse in dealing with stalkers. Through June police received 9,142 stalking complaints; they arrested 66 persons and issued 453 warnings between November 2000 and May.
Local governments and private rail operators continued to implement measures designed to address the widespread problem of groping and molestation of female commuters. According to the National Police Agency, in 2000 police arrested 1,854 persons in Tokyo and 982 persons elsewhere in the country for groping. The Tokyo Metropolitan Police organized a council with representatives of train companies to discuss antigroping measures in June. As a result, several railway companies started a poster campaign to raise awareness of antigroping ordinances and to advertise railway police contact information, including contact information for the molestation complaint offices established by the Metropolitan Police Department in 1995. At the suggestion of the Metropolitan Police, the Tokyo Metropolitan Assembly also revised its antigroping ordinance in September to make first-time offenders subject to imprisonment. In March Keio Electric Railway Company decided to make a trial women-only rail car program permanent, reserving one car only for women on all express and limited express trains running after 11 p.m. Monday to Friday.
Trafficking in women remained a problem in 2001. The Constitution and the Equal Employment Opportunity (EEO) Law prohibit sexual discrimination; however, sexual harassment in the workplace remains widespread. A National Personnel Authority survey of female public servants conducted in 2000 found that 69.2 percent of all female respondents believed they had been subjected to acts that constitute sexual harassment. The National Personnel Authority established workplace rules in April 1999 in an effort to stop harassment in public servants' workplaces. New survey data indicate that the most severe forms of sexual harassment may be declining in government workplaces; the share of female public servants who stated that their bosses had pressured them into a sexual relationship dropped from 17 percent in 1997 to 2.2 percent in 2000. In 1999 a revision to the EEO Law intended to address problems of sexual harassment and discrimination against women went into effect. The revised EEO Law includes measures to identify companies that fail to prevent sexual harassment, although it does not include punitive measures to enforce compliance; the law's only penalty is that the names of companies that practice sexual discrimination can be publicized. The Ministry of Labor does not enforce compliance through fines or other punitive penalties. However, since the 1999 revision, there has been a 35 percent increase in consultations over workplace sexual harassment cases. Under the Labor Standards Law, an arbitration committee may initiate procedures to help ensure the rights of female workers at a worker's request, without first having to obtain approval from both management and the worker's union. A number of government entities have established hot lines and designated ombudsmen to handle complaints of discrimination and sexual harassment.
The Labor Standards Law forbids wage discrimination against women. Under the revised EEO Law, women may work overtime shifts. Women make up 40 percent of the labor force, and women between the ages of 15 and 64 have a labor force participation rate of 51 percent. In response to a 2000 Government survey that revealed that potential employers had discriminated on the basis of gender against one in five women entering the work force, in April the Labor Ministry distributed 100,000 manuals outlining 25 hiring or recruiting practices that violate the EEO Law. Although the Labor Standards and EEO laws prohibit wage discrimination against women, in 2000 female workers on average earned only 65.5 percent of average male earnings. In general, younger women (ages 20-24) tended to earn almost as much as men; older women (50 and older) tended to earn much less. Much of this disparity results from the "two-track" personnel administration system found in most larger companies, under which new hires are put into one of two categories: the managerial track (those engaged in planning and decision-making jobs with the potential to become top executives) or the general track (those engaged in general office work). According to a 1998 survey by the Management and Coordination Agency, women held 9.2 percent of managerial positions. A 1998 Labor Ministry survey found that over half of the companies with a two-track personnel system did not even consider women for managerial track positions. In March the Osaka District Court dismissed a wage bias suit filed by female employees of Sumitomo Chemical Company who had been placed in a nonmanagerial career track in 1970 when the company introduced a dual-track system. However, in August the Tokyo High Court ruled against conventional wage compensation assessment methods that used existing gender income disparities to determine the future earnings potential of minors. According to the Prime Minister's Bureau of Gender Equality, women held 4.1 percent of top local government positions through March, although they make up as much as a third of all local government workers. According to the Home Ministry, some of the 4,200 local governments that urged employees to retire before the mandatory age of 60 regularly urged female employees to retire at younger ages than male employees. In January the Kanazawa District Court found the town government of Toriya in violation of the Local Civil Service Law and ordered it to pay redress to a female civil servant who refused to retire when asked to do so in 1996. The town's retirement system urged female employees to retire at 48 and males at 58.
In addition to discrimination, the traditional male and female division of labor at home places disproportionate burdens on working women, who were still responsible for almost all child care and household duties.
Advocacy groups for women and persons with disabilities continued to press for a government investigation into sterilization cases carried out between 1949 and 1992, a formal government apology, and compensation.
In 1993 the Government publicly acknowledged and apologized for the former Imperial Government's involvement in the army's practice of forcing as many as 200,000 women (including Koreans, Filipinos, Chinese, Indonesians, Dutch, and Japanese) to provide sex to soldiers between 1932 and 1945. A 1999 U.N. Subcommission on Prevention of Discrimination and Protection of Minorities report included a recommendation that the Government provide state compensation to former "comfort women" and prosecute those responsible for setting up and operating "comfort stations" during World War II. The Government has been unwilling to pay direct compensation to individual victims, on the grounds that postwar treaties settled all war claims. In March the Hiroshima High Court reversed a 1998 Yamaguchi District Court ruling that had ordered the Government to pay $2,542 (300,000 yen) in state compensation to three Korean former comfort women for neglecting its constitutional duty to enact compensation legislation following the Government's 1993 admission. The District Court ruling had been the first court judgment rendered in favor of foreign war victims. Over 50 damage suits have been filed in Japanese courts; approximately 10 cases were pending at year's end. In October a U.S. federal judge dismissed a lawsuit brought by 15 comfort women, ruling that U.S. courts do not have jurisdiction over claims arising from Japan's wartime conduct.
The "Asian Women's Fund" (AWF) is a private, government-sponsored fund established to "extend atonement and support" to former comfort women. The AWF supports three types of projects: Payments to individual victims; medical and welfare assistance to individual comfort women; and funding projects to improve the general status of women and girls. Projects in the first category were funded by private donations, while the second and third types of projects were financed by the Government and administered by the AWF. As of November 21, the AWF had collected donations totaling approximately $4.33 million (548 million yen) and given lump sum payments of almost $2.97 million (376 million yen) and a letter of apology signed by the Prime Minister to more than 188 women from the Philippines, Korea, and Taiwan. These women also received medical and welfare assistance from the AWF. The AWF has reached an agreement with a Dutch affiliate to make compensation payments to former Dutch comfort women; government officials estimate that up to 100 Dutch women were forced to provide sexual services during World War II. However, the Government's refusal to pay direct compensation continues to draw international criticism.
The Government is committed to children's rights and welfare, and in general the rights of children are protected adequately. Boys and girls have equal access to health care and other public services. Education is free and compulsory through the lower secondary level (age 14, or ninth grade) and is widely available at the upper secondary level, through the age of 18, to students who meet minimum academic standards. Society places an extremely high value on education, and enrollment levels for both boys and girls through the free upper secondary level (to age 18) exceed 96 percent.
Public attention is focused increasingly on reports of frequent child abuse in the home. In 2000 the Diet enacted a law granting child welfare officials the authority to prohibit abusive parents from meeting or communicating with their children. This law raised public awareness of the problem of child abuse. The law also bans abuse under the guise of discipline and obliges teachers, doctors, and welfare officials to report any suspicious circumstances to the 174 local child counseling centers located nationwide or to municipal welfare centers. According to the National Police Agency, through June, 31 children died of abuse or neglect. Through March, police investigated 51 cases of child abuse, in which 20 adults were arrested, an increase of 30.8 percent over the same period the previous year. From April 2000 through March, family courts mandated the transfer of 6,168 children into protective state custody. Child protection centers also received 18,800 reports of abuse in the year ending in April, an increase of 17 percent since 1990. A 1999 report by the Ministry of Health and Welfare warned that, since caseloads at counseling centers nearly doubled from 1988 to 1996, cuts in funding by local governments to centers handling child abuse cases were exacerbating the problem. In August the Ministry conducted a nationwide survey of how municipal governments responded to abuse cases with the aim of increasing subsidies to local governments to develop child abuse prevention networks. In November the Government announced its intention to rehire counselors dismissed from child protection centers in recent years. Also in November, the Tokyo chapter of the Japan Legal Aid Fund established a $237,285 (30 million yen) fund to provide free legal services to children in family court protective custody hearings.
Incidents of student-on-student violence in schools and severe bullying ("ijime") also continued to be a societal and government concern. At elementary and junior high schools, bullying most often involved verbal abuse, with physical abuse occurring more often at the high school level. An Education Ministry survey released in August reported 20,751 cases of student-on-student violence in public schools during the 2000-01 academic year, a 10 percent increase from the previous year. In past years, surveys have suggested that as many as one in three elementary and junior high school students had been bullied, but more than one-third of the victims did not report the bullying. In addition to compiling statistics on bullying and consulting with various groups concerned with children's welfare, the Ministry of Justice's Office of the Ombudsman for Children's Rights provided counseling services for children 18 years of age and younger who have been victims of bullying. In December the Fukuoka District Court ruled that the Jojima Municipal and Fukuoka Prefectural governments had not taken sufficient action in the case of a boy who committed suicide in 1996 after being harassed and beaten by classmates and ordered the governments to pay $79,095 (10 million yen) in compensation to the boy's parents. Teachers also increasingly are becoming the targets of student violence. Education Ministry statistics for 2000 showed a 16.2 percent increase in assaults on teachers by students over the previous year.
In previous years, both the Government and society in general appeared to take a lenient attitude toward teenage prostitution and dating for money (which may or may not have involved sexual activity). However, in 1999 the Diet passed a law banning sex with persons under age 18 as well as the production, sale, or distribution of child pornography. The law was passed following heightened public attention to a growing problem of teenage prostitution and international criticism over the country's lax laws on child pornography. The law has reduced the open availability of child pornography. Whereas in 1998 INTERPOL estimated that 80 percent of Internet sites with child pornography originated in Japan, by late 1999, after passage of the law, the police reported most of these sites either had disappeared entirely or were accessible only at random hours to avoid detection and arrest. Since April 1999, operators of pornographic home pages and suppliers of pornographic images have been required to register with local safety commissions and not to offer such pages to persons under the age of 18. According to the National Police Agency, the police arrested 108 persons between January and June for patronizing teenage prostitutes and child pornography, double the number for the same period in 2000. However, teenage prostitution and dating for money continues to be a concern. In one high profile case, in August the Tokyo District Court sentenced a Tokyo High Court judge to a 2-year suspended sentence for patronizing a teenage prostitute. In impeachment proceedings concluded in November, he lost his certificate as an officer of the court and was barred from requesting reinstatement for 5 years. In December the Government hosted an international conference on combating sexual exploitation of children.
In February 2001, revisions to the Juvenile Law went into effect that lowered the age at which children can be held criminally responsible for their actions from 16 to 14. Under juvenile law, juvenile suspects are tried in family court and have the right of appeal to an appellate court. Family court proceedings were not open to the public, a policy that has been criticized by family members of juvenile crime victims. The number of juveniles arrested and sent to prosecutors was down 6.6 percent in 2000, according to the National Police Agency.
In 2000 the Tokyo prefectural government put into effect programs to protect the welfare of stateless children, whose births their illegal immigrant mothers refused to register for fear of forcible repatriation. According to Justice Ministry statistics, 720 stateless minors under the age of 5 were in the country in 2000.
TRAFFICKING IN PERSONS
The Constitution prohibits holding persons in bondage, and the Penal Code contains several provisions that could be used to combat trafficking of persons; however, there are no specific laws that prohibit trafficking in persons, and trafficking of women and girls into the country was a problem. Women and girls, primarily from Thailand, the Philippines, and the former Soviet Union, were trafficked into the country for sexual exploitation and forced labor. Women and girls from Colombia, Brazil, Mexico, South Korea, Malaysia, Burma, and Indonesia also were trafficked into the country in smaller numbers. Japan also was a destination for illegal immigrants from China who were trafficked by organized crime groups who often hold such persons in debt bondage for sexual exploitation and indentured servitude in sweatshops and restaurants. In recent years, the Government has reported that some smugglers use killings and abduction to ensure payment.
There is evidence that trafficking takes place within the country to the extent that some recruited women subsequently were forced, through the sale of their "contracts," to work for other employers.
Reliable statistics on the number of women trafficked to the country were unavailable. In 2000 the National Police Agency identified 104 women as potential trafficking victims during criminal investigations involving entertainment businesses. However, the Government does not consider an individual who has willingly entered into an agreement to work illegally in the country to be a trafficking victim, regardless of that person's working conditions once in the country. Thus, government figures may understate the problem as persons who agreed to one kind of work find themselves doing another, or are subject to force, fraud, or coercion. Traffickers were prosecuted for crimes ranging from violations of employment law to Penal Code offenses such as abduction, and the Government does not compile statistics on the number of trafficking victims associated with these cases. Since trafficked women generally are deported under immigration law as prostitutes, immigration statistics may provide only a rough picture of the scale of the problem. A government-funded study released in August 2000 found that nearly two-thirds of foreign women surveyed following arrests for immigration offenses stated that they were working in the sex industry under duress. Ministry of Justice statistics indicated that 1.5 percent of the 24,661 women deported in 1999, the latest figures available, were deported as prostitutes (others who worked in the sex industry were deported for other reasons). Many women who are trafficked into the country, particularly from the Philippines, also enter legally on entertainment visas. An estimated 40,000 women from the Philippines enter the country each year on such visas. "Entertainers" are not covered by the Labor Standards Law, and have no minimum wage protections; however, there are indications that they may be somewhat less vulnerable to abuse by employers than female migrant workers entering on other types of visas or illegally.
Brokers in the countries of origin recruit women and "sell" them to Japanese intermediaries, who in turn subject them to debt bondage and coercion. Agents, brokers, and employers involved in trafficking for sexual exploitation often have ties to organized crime.
Women trafficked to the country generally are employed as prostitutes under coercive conditions in businesses that are licensed to provide commercial sex services. Sex entertainment businesses are classified as "store form" businesses such as strip clubs, sex shops, hostess bars, and private video rooms, and as "nonstore form" businesses such as escort services and mail order video services which arrange for sexual services to be conducted elsewhere. According to NGO's and other credible sources, most women who were trafficked to the country for the purpose of sexual exploitation were employed as hostesses in "snack" bars, where they were required to provide sexual services off premises.
For example, many Thai women were enticed to come to the country with offers of lucrative legitimate employment, only to be sexually exploited; many others reportedly know that they will work as prostitutes. However, whether or not they understand the nature of the work they will be doing, trafficked women generally do not understand the debts they will be forced to repay, the amount of time it will take them to repay the debts, or the conditions of employment they will be subjected to upon arrival. According to Human Rights Watch, the passports of Thai women trafficked to work in "dating" bars usually were confiscated by their "employers," who also demand repayment for the cost of their "purchase." Typically, the women were charged $25,000 to $40,000 (3 million to 5 million yen); their living expenses and expenses for medical care (when provided by the employer) and other necessities, as well as "fines" for misbehavior, were added on to the original "debt" over time. How the debt was calculated was left to the employers; the process was not transparent, and the employers reportedly often used the debt to coerce additional unpaid labor from the trafficked women. Employers also may "resell" or threaten to resell troublesome women or women found to be HIV positive, thereby increasing the debt they must repay and possibly worsening their working conditions. In order to repay the debts they incur, trafficked women generally must work long hours (often with no days off) for several months, essentially without pay. Many women were not allowed to refuse clients, even those known to be physically abusive. Most Thai women trafficked into the sex trade have their movements strictly controlled by their employers while working off their debt, and were threatened with reprisals, perhaps through members of organized crime groups, to themselves, or their families if they try to escape. Employers often isolated the women, subjected them to constant surveillance, and used violence to punish them for disobedience. Most trafficked women also knew that they were subject to arrest if found without their passports or other identification documents. In any case, few spoke Japanese well, making escape even more difficult.
In 1999 the Diet amended the Law on Control and Improvement of Amusement Businesses in order to supplement the Prostitution Prevention Act as an instrument against trafficking. The amended law sanctions employers rather than just sanctioning victims and requires the Government to refuse to grant or to revoke the business license of anyone convicted of the "crime of encouragement" to engage in prostitution. In 1999 the Diet also enacted a law intended to prevent all forms of sexual exploitation of children, whether trafficked or not, which imposes a 1- to 3-year sentence on anyone convicted of trading in children for the purpose of child prostitution or child pornography. Traffickers can also be prosecuted for violations of employment, immigration, or labor laws, and for Penal Code offenses such as abduction and kidnaping. However, relatively few persons ever are prosecuted in connection with trafficking and forced sexual servitude; those who were prosecuted generally are prosecuted in connection with violations of immigration law. There were allegations that some law enforcement units have been reluctant to investigate reports of trafficking and that the Government has not been aggressive in arresting and prosecuting suspected traffickers.
Domestic NGO's and lawyers have compiled credible anecdotal evidence that suggested that some individual police officials have returned trafficking victims to their employers when these individuals sought police protection. NGO's also reported that police sometimes declined to investigate suspected brokers when presented with information obtained from trafficking victims. In a 1991 incident widely reported in the press, a government investigation found that two local police officials from Mie prefecture had returned the trafficking victims to their employers after having taken bribes from organized criminal groups that were associated with the traffickers. The two officials were forced to resign their positions, but were not prosecuted.
Except for the Tokyo Metropolitan Government, which funds a Tokyo-based NGO assisting victims of trafficking, the Government does not assist victims of trafficking for sexual purposes other than to house them temporarily in facilities established under the Antiprostitution Law, in detention centers for illegal immigrants, or through referrals to shelters run by NGO's; generally they are deported as illegal aliens. Victims often are treated as criminals because the Government does not consider persons who willingly enter for illegal work to be trafficking victims. Women without documentation or sufficient funds to return to their country of origin may be detained for long periods. Several NGO's throughout the country provided shelter, medical and legal assistance to trafficking victims. The Government funded trafficking prevention efforts in Asian source countries, sponsored public information campaigns targeted at potential victims, and provided equipment and training to police and customs officials in those countries.
The illegal drug of choice in Japan is overwhelmingly methamphetamine. Several distinct trends emerge from an overview of drug crimes during the past year. First, the number of arrests for methamphetamine-related crimes increased for the third straight year, clearly indicating that Japan is experiencing a resurgence of methamphetamine abuse similar to that of the mid-1970's, when methamphetamine-related arrests climbed to over 20,000 per year. Second, large seizures of methamphetamine have increased. During August 1998, police and customs in Japan seized a record 312 kilograms of methamphetamine concealed inside machine tools shipped from Hong Kong to Japan. This record seizure followed the arrest of three persons who allegedly were involved in an attempt to smuggle 300 kilograms of methamphetamine into Japan aboard a fishing boat. The captain of the fishing boat reportedly told police he obtained the methamphetamine in a rendezvous with another ship at sea. Believing he was being observed, the captain threw the methamphetamine overboard, after which packages of methamphetamine washed ashore on Japan's southern islands. To put these shipments into perspective, Japanese authorities seized 171.9 kilograms of methamphetamine during all of 1997. Third, methamphetamine abuse is spreading to younger users. Police report increases in arrests of both junior and senior high school age users, reflecting lower prices and greater availability. Fourth, indiscriminate street sales of methamphetamine and other drugs, particularly by Iranians, are on the increase. Police note that Iranians involved in drug sales usually are residing in Japan illegally and offer a wide assortment of illegal drugs. Police report seizing the proceeds of drug sales and cellular telephones from drug dealers, showing an increased use of anti-money laundering laws. In 1998, seizures and arrests for other drugs were relatively small: approximately 1,100 marijuana and hashish arrests, with seizures of 136 kilograms of marijuana and 105 kilograms of hashish. Cocaine accounted for 60 arrests and the seizure of 25 kilograms. Forty-four people were arrested for heroin-related offenses, and six kilograms were seized.
According to National Police Agency (NPA) officials, approximately 94 percent of all drug offenses in Japan involve violation of the stimulant (methamphetamine) control law. Other abused drugs in Japan, in descending order of abuse, are marijuana, cocaine, heroin, opium, and MDMA ("ecstasy"). Drug use among juveniles has risen rapidly in recent years.
Japan is not a major producer of illicit drugs. Nearly all are smuggled in from foreign sources. Japan produces many precursor chemicals, which also have legitimate industrial uses.
Because the money laundering law in Japan criminalizes only drug money laundering, any money laundering prosecuted as a crime in Japan is, by definition, related to narcotics. Law enforcement officers report that drug and money laundering investigations initiated in the U.S. periodically show a link between drug-related money laundering activities in the U.S. and bank accounts in Japan. The extent of such activity is unknown. Japan is not, and is unlikely to become, a significant producer of narcotics. Japan, however, is believed to have one of the largest methamphetamine markets in Asia. An estimated one million methamphetamine addicts are believed to consume approximately seven tons of methamphetamine a year. Cocaine use is believed to be on the rise. Police in Japan and U.S. Customs conducted a controlled delivery of three kilograms of cocaine transported by a courier from Colombia to Japan in November 1998, resulting in the arrest of a Colombian trafficker, the first such controlled delivery and arrest accomplished in Japan.
There is a growing drug market in Japan, especially among juveniles, and increased trafficking activity within the illegal immigrant population. Narcotics trafficking in Japan is a source of income for Japanese organized crime. Organized crime members sometimes try to sell their product overseas, including in the U.S. The U.S. FBI and other law enforcement offices coordinate closely with the Japanese National Police Agency in trying to prevent this activity.
Japan is suspected of being a major money laundering center, and police believe criminal organizations are behind much of the drug trafficking and money laundering which takes place in Japan. Laws criminalizing drug-related money laundering and authorizing execution of controlled drug deliveries were enacted in 1992. Although the 1992 law also authorizes the reporting of suspicious transactions by Japanese financial institutions, such reporting rarely occurs.
Amendments to Japan's foreign exchange control law took effect on April 1, 1998, requiring travelers entering and departing Japan to report to customs authorities physically transported currency, monetary instruments, and gold. All currency over one million yen (approximately $8,000) and gold over one kilogram are reportable under the new legislation.
U.S. law enforcement agencies in Japan report informal cooperation with authorities in Japan is generally good, and the U.S. anticipates beginning the negotiation of an MLAT in 1999.
Legislation enacted in 1992 created a system to confiscate illegal profits gained through drug crimes, criminalized money laundering, and authorized execution of controlled narcotics deliveries. Seizure provisions apply to tangible and intangible assets, direct illegal profit, substitute assets, and criminally derived property commingled with legitimate assets.
Police have seized a total of about $6.3 million (723 million yen) in drug proceeds in 82 investigations since 1992, when the 1992 law allowing police to seize such money took effect. The largest of the cases originated in September 1997, when police in Osaka arrested an organized crime gang leader who had allegedly received $1.3 million (148 million yen) from junior gang members in exchange for offering them a place to sell drugs. The Osaka District Court imposed a penalty of 148 million yen on the gang leader in February 1998 in the first use of Article Ten of the money laundering law which makes it unlawful to receive money that has been illegally earned. In March 1997, the Tokyo District Court sentenced a senior organized crime gang member to life in prison and imposed a $291,000 (33.48 million yen) fine for his involvement in the manufacture and sale of methamphetamine. As of October 1998, police have applied the money laundering law in eighteen cases. Police have found it difficult, however, to apply the money laundering law because in addition to proving the smuggling or sale of drugs, investigators also must show the amount of money illegally earned.
The seizures are believed to account for only a small fraction of the estimated $3.5 billion (police estimate) earned through sales of illegal drugs in Japan. Although statutorily authorized since 1992, police seldom used their asset seizure authority in drug investigations until 1996 when police used the law in dealing with 25 money laundering cases. Indications are that the police may be making greater use of their seizure authority in drug cases and the NPA has called on prefecture police to apply the law more widely to crack down on drug dealers.
Proceeds of seized assets go into a general treasury. Japan will not accept asset sharing even on drug and money laundering investigations on which it provides substantial cooperation. Japan does not appear likely to create a U.S.-style forfeiture fund to collect and disburse forfeited proceeds because of a belief that law enforcement should not directly "profit" from seizure of illegally obtained assets.
Japan continued its sponsorship of many annual international drug enforcement and prevention programs, including the Asia-Pacific Operational Drug Enforcement Conference, a seminar on control of drug offenses, and a training course on drug prevention activities. Japan also is an active participant in all major conferences conducted throughout the world each year, which concern narcotics trafficking and related crimes.
Japan is an active member of the UNDCP Major Donors Group, and finances and participates in many UNDCP programs. Japan allocated $5 million for UNDCP programs in 1998.
Police anti-narcotics efforts tend to focus on Japanese organized crime groups, the main smugglers and distributors of drugs. Police and prosecutors, however, have been hesitant to pursue cases in which a conviction is uncertain. In addition to smuggling and distribution activities, law enforcement officials are paying increased attention to drug-related financial crimes.
Although money laundering is a criminal offense in Japan, the current money laundering law is largely untested and only beginning to be used. The police are the only government entity authorized to conduct criminal financial/money laundering investigations. Creation of a financial intelligence unit, to collect and analyze financial data, is part of the comprehensive crime package awaiting Diet approval. In addition, the burden of proof on law enforcement to link money and assets to specific drug activity limits the law's effectiveness. The money laundering law would be more useful if expanded from being a drug-only statute to cover a wider range of criminal activity giving rise to illicit proceeds.
Underground banking systems operate in Japan via a series of personal relationships among individuals and businesses in other locations, including abroad. In June 1997, two Chinese nationals were arrested in Yokohama on suspicion of operating an "underground bank" which allegedly generated approximately $522,000 (60 million yen) in commissions and more than $870,000 (100 million yen) in exchange benefits. The pair was arrested for violating the banking law of Japan by engaging in overseas cash transfers without a license. Japan has no known drug-related corruption.
Following conclusion of a Customs Mutual Assistance Agreement between Japan and U.S. Customs in 1997, the U.S. and Japan have agreed to initiate formal negotiation of a Mutual Legal Assistance Treaty during 1999. Japan and the U.S. continue to cooperate under a 1978 extradition treaty. Japan is a party to the 1988 UN Drug Convention.
Although not a significant cultivator or producer of controlled substances, Japan is a major producer of 60 types of precursor chemicals, which have legitimate industrial uses. Japan is one of only a handful of countries that produce ephedrine, which is used to create antihistamines, but also is an essential ingredient in methamphetamine. Japan is a member of the Chemical Action Task Force, and DEA agents in Japan are conscientious about monitoring end users of precursors.
Arrests of foreigners for violating drug-related laws are increasing. The major nationalities represented are: Filipinos, Iranians, South Koreans, Thais and Nigerians. Almost all drugs illicitly trafficked in Japan are smuggled from overseas. According to sources from the NPA, China and Thailand are the principal overseas sources/transit points.
Domestic programs primarily focus on interdiction rather than consumers. Domestic demand is rising, especially among minors. According to police statistics, from January to June 1997, minors' abuse of stimulants was up 26 percent from the same period in 1996. The Japanese Government is concerned over the rise in abuse of amphetamines among Japan's youth. The Government supports prevention and education programs in Japan's schools, and works with and encourages NGOs engaged in prevention and treatment.
Stereophonic sound or, more commonly, stereo, is a method of sound reproduction that creates an illusion of directionality and audible perspective. This is usually achieved by using two or more independent audio channels through a configuration of two or more loudspeakers in such a way as to create the impression of sound heard from various directions, as in natural hearing. Thus the term "stereophonic" applies to so-called "quadraphonic" and "surround-sound" systems as well as the more common 2-channel, 2-speaker systems. It is often contrasted with monophonic, or "mono" sound, where audio is in the form of one channel, often centered in the sound field (analogous to a visual field). Stereo sound is now common in entertainment systems such as broadcast radio and TV, recorded music and the cinema.
The word stereophonic derives from the Greek "στερεός" (stereos), "firm, solid" + "φωνή" (phōnē), "sound, tone, voice" and it was coined in 1927 by Western Electric, by analogy with the word "stereoscopic".
Stereo sound systems can be divided into two forms. The first is "true" or "natural" stereo, in which a live sound is captured, with any natural reverberation or ambience present, by an array of microphones. The signal is then reproduced over multiple loudspeakers to recreate, as closely as possible, the live sound.
Secondly "artificial" or "pan-pot" stereo, in which a single-channel (mono) sound is reproduced over multiple loudspeakers. By varying the relative amplitude of the signal sent to each speaker an artificial direction (relative to the listener) can be suggested. The control which is used to vary this relative amplitude of the signal is known as a "pan-pot" (panoramic potentiometer). By combining multiple "pan-potted" mono signals together, a complete, yet entirely artificial, sound field can be created.
In technical usage, true stereo means sound recording and sound reproduction that uses stereographic projection to encode the relative positions of objects and events recorded.
During two-channel stereo recording, two microphones are placed in strategically chosen locations relative to the sound source, with both recording simultaneously. The two recorded channels will be similar, but each will have distinct time-of-arrival and sound-pressure-level information. During playback, the listener's brain uses those subtle differences in timing and sound level to triangulate the positions of the recorded objects. Stereo recordings often cannot be played on monaural systems without a significant loss of fidelity. Since each microphone records each wavefront at a slightly different time, the wavefronts are out of phase; as a result, constructive and destructive interference can occur if both tracks are played back on the same speaker. This phenomenon is known as phase cancellation.
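The mono-compatibility problem can be illustrated with a short, hypothetical NumPy example: a 1 kHz tone that reaches a second microphone half a period late still forms a usable stereo pair, but folds to near silence when both tracks are summed into a single speaker:

    import numpy as np

    sr = 44100
    t = np.arange(sr) / sr
    freq = 1000.0  # 1 kHz test tone

    # Two "microphones": the second receives the wavefront half a period later,
    # i.e. 180 degrees out of phase at this particular frequency.
    delay = 1.0 / (2 * freq)
    mic_a = np.sin(2 * np.pi * freq * t)
    mic_b = np.sin(2 * np.pi * freq * (t - delay))

    # Played over two speakers the delay reads as position; summed to one
    # (mono) speaker the two tracks cancel almost completely.
    mono_sum = mic_a + mic_b
    print(np.max(np.abs(mono_sum)))  # nearly zero: destructive interference

Real recordings contain many frequencies, so the cancellation is partial and frequency-dependent rather than total, which is heard as a hollow, "comb-filtered" mono fold-down.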
Clément Ader demonstrated the first two-channel audio system in Paris in 1881, with a series of telephone transmitters connected from the stage of the Paris Opera to a suite of rooms at the Paris Electrical Exhibition, where listeners could hear a live transmission of performances through receivers for each ear. Scientific American reported,
- "Every one who has been fortunate enough to hear the telephones at the Palais de l'Industrie has remarked that, in listening with both ears at the two telephones, the sound takes a special character of relief and localization which a single receiver cannot produce... This phenomenon is very curious, it approximates to the theory of binauricular audition, and has never been applied, we believe, before to produce this remarkable illusion to which may almost be given the name of auditive perspective."
This two-channel telephonic process was commercialized in France from 1890 to 1932 as the Théâtrophone, and in England from 1895 to 1925 as the Electrophone. Both were services available by coin-operated receivers at hotels and cafés, or by subscription to private homes.
In the 1930s, Alan Blumlein at EMI patented stereo records, stereo films, and also surround sound. The two stereophonic recording methods, using two channels and coincident microphone techniques (X-Y with bidirectional transducers/Blumlein setup + M/S stereophony), were developed by Blumlein at EMI in 1931 and patented in 1933. A stereo disc, using the two walls of the groove at right angles in order to carry the two channels, was cut at EMI in 1933, twenty-five years before that method became the standard for stereo phonograph discs. Harvey Fletcher of Bell Laboratories investigated techniques for stereophonic recording and reproduction. One of the techniques investigated was the "wall of sound", which used an enormous array of microphones hung in a line across the front of an orchestra. Up to 80 microphones were used, and each fed a corresponding loudspeaker, placed in an identical position, in a separate listening room. Several stereophonic test recordings, using two microphones connected to two styli cutting two separate grooves on the same wax disc, were made with Leopold Stokowski and the Philadelphia Orchestra at Philadelphia's Academy of Music in March 1932. The first (made on March 12, 1932), of Scriabin's Prometheus: Poem of Fire, is the earliest known surviving intentional stereo recording.
Accidental stereophonic recordings from these years also exist. On some occasions, RCA Victor used two microphones, two amplifiers and two recording lathes to make two simultaneous but completely separate recordings of a performance. Although this may have been done to compare the results obtained with different microphones or other technical variations, normally, only one of the resulting pair of recordings was released, but the other-channel recording was sometimes used for a foreign issue or survived in the form of a test pressing.
When such pairs of recordings have been located and matched up, authentic stereophonic sound has been recovered, its character and degree of spatial accuracy dependent on the fortuitous placement of the two microphones and the accurate synchronization of the two recordings. Recovered stereophonic versions of two recordings made in February 1932 by Duke Ellington and His Orchestra have been issued on LP and CD under the title Stereo Reflections in Ellington and are also included in the 24-CD set The Duke Ellington Centennial Edition.
Early development of Fantasound
Bell Laboratories gave a demonstration of three-channel stereophonic sound on April 27, 1933, with a live transmission of the Philadelphia Orchestra from Philadelphia to Constitution Hall in Washington, D.C. over multiple Class A telephone lines. Leopold Stokowski, normally the orchestra's conductor, was present in Constitution Hall to control the sound mix. Five years later, the same system would be expanded into multi-channel film recording and used to carry sound from the concert hall in Philadelphia to the recording labs at Bell Labs in New Jersey in order to record Walt Disney's Fantasia (1940) in what Disney called Fantasound.
Later that same year, Bell Labs also demonstrated binaural sound at the 1933 Chicago World's Fair, using a dummy with microphones in place of ears. The two signals were sent out over two separate AM stations.
1940 to 1970
Carnegie Hall demonstration
Utilizing selections recorded by the Philadelphia Orchestra, under the direction of Leopold Stokowski, intended for but not used in Walt Disney's Fantasia, the Carnegie Hall demonstration by Bell Laboratories on April 9 and 10, 1940, used three huge speaker systems. Synchronization was achieved by making the recordings in the form of three motion picture soundtracks recorded on a single piece of film with a fourth track being used to regulate volume expansion.
This was necessary due to the limitations of dynamic range on optical motion picture film of the period; however, the volume compression and expansion were not fully automatic, but were designed to allow manual studio "enhancement"; i.e., the artistic adjustment of overall volume and the relative volume of each track in relation to the others. Stokowski, who was always interested in sound reproduction technology, personally participated in the "enhancement" of the sound at the demonstration.
The speakers produced sound levels of up to 100 decibels, and the demonstration held the audience "spellbound, and at times not a little terrified", according to one report. Sergei Rachmaninoff, who was present at the demonstration, commented that it was "marvellous" but "somehow unmusical because of the loudness." "Take that Pictures at an Exhibition", he said. "I didn't know what it was until they got well into the piece. Too much 'enhancing', too much Stokowski."
Motion picture era and further Fantasound development
In 1937, Bell Laboratories in New York City gave a demonstration of two-channel stereophonic motion pictures, developed by Bell Labs and Electrical Research Products, Inc. Once again, conductor Leopold Stokowski was on hand to try out the new technology, recording onto a special proprietary nine-track sound system at the Academy of Music in Philadelphia, during the making of the movie One Hundred Men and a Girl for Universal Pictures in 1937, after which the tracks were mixed down to one for the final soundtrack. A year later, MGM started using three tracks instead of one to record the musical selections of movie soundtracks, and very quickly upgraded to four. One track was used for dialogue, two for music, and one for sound effects. The purpose for this form of multitrack recording was to make mixing down to a single optical track easier and was not intended to be a recording for stereophonic purposes. The very first two-track recording MGM made (although released in mono) was "It Never Rains But What It Pours" by Judy Garland, recorded on June 21, 1938, for the movie Love Finds Andy Hardy.
Fantasound in its own right
Walt Disney began experimenting with multi-channel sound in the early 1930s as noted above. The first commercial motion picture to be exhibited with stereophonic sound was Walt Disney's Fantasia, released in November 1940, for which a specialized sound process (Fantasound) was developed. As in the Carnegie Hall demonstrations six months earlier, Fantasound used a separate film containing four optical sound tracks. Three of the tracks were used to carry left, center and right audio, while the fourth track carried three tones which individually controlled the volume level of the other three. The film was not a financial success, however, and after two months of road-show exhibition in selected cities, its soundtrack was remixed into mono sound for general release. It was not until its 1956 re-release that stereo sound was restored to the film. In the early 1940s, composer-conductor Alfred Newman directed the construction of a sound stage equipped for multichannel recording for 20th Century Fox studios. Several soundtracks from this era still exist in their multichannel elements, some of which have been released on DVD, including How Green Was My Valley, Anna and the King of Siam, The Day the Earth Stood Still and Sun Valley Serenade which, along with Orchestra Wives, feature the only stereophonic recordings of the Glenn Miller Orchestra as it was during its heyday of the Swing Era.
The advent of multi-track magnetic tape and film recording made high fidelity synchronized multichannel recording more technically straightforward, though costly. By the early 1950s, all of the major studios were recording on 35mm magnetic film for mixing purposes, and many of these so-called individual angles still survive, allowing for soundtracks to be remixed into Stereo or even Surround.
Cinerama and widescreen experimentation
Motion picture theatres, however, are where the real introduction of stereophonic sound to the public occurred. Amid great fanfare, stereo sound was proven commercially viable on September 30, 1952, with the release of a Cinerama demonstration film entitled This is Cinerama. The format was a then-spectacular widescreen process featuring three separate motion picture films running in synchronization with one another, adding one film panel each to the viewer's left and right in addition to the usual front and center, creating a truly immersive panoramic visual experience, comparable in some ways to today's IMAX.
Similarly, the audio soundtrack technology, developed by Hazard E. Reeves, a pioneer in magnetic recording, utilized seven discrete magnetic sound tracks (five behind the screen, plus two surround channels) in order to envelop the theatregoer in an aural experience just as spectacular as that playing on the screen. By all accounts (including those of people who have recently experienced the process in rare anniversary presentations), the sound was as spectacular as the picture, excellent even by modern standards.
In April 1953, while This is Cinerama was still playing only in New York City, most moviegoing audiences heard stereophonic sound for the first time with House of Wax, an early 3-D film starring Vincent Price and produced by Warner Bros. Unlike the 4-track mag release-print stereo films of the period which featured four thin strips of magnetic material running down the length of the film, inside and outside the sprocket holes, the sound system developed for House of Wax, dubbed WarnerPhonic, was a combination of a 35MM fully coated magnetic film that contained the audio tracks for Left-Center-Right, interlocked with the two dual-strip Polaroid system projectors, one of which carried a mono optical surround track and one that carried a mono backup track, should anything go wrong.
Only two other films featured this strange hybrid WarnerPhonic sound: the 3-D production of The Charge at Feather River, and Island in the Sky. Unfortunately, as of 2012, the stereo magnetic tracks to both these films are considered lost forever. In addition, a large percentage of 3-D films carried variations on three-track magnetic sound: It Came from Outer Space; I, the Jury; The Stranger Wore a Gun; Inferno; Kiss Me, Kate; and many others.
Widescreen in its own right
Inspired by Cinerama, the movie industry moved quickly to create simpler and cheaper widescreen systems, such as Warner Bros. Panavision, Paramount Pictures' VistaVision and Twentieth Century-Fox Film Corporation's CinemaScope, the latter of which used up to four separate magnetic sound tracks.
Because of the standard 35MM-size film, CinemaScope and its stereophonic sound were capable of being retrofitted into existing theaters. CinemaScope 55 was created by the same company in order to use a larger form of the system (55MM instead of 35MM) to allow for greater image clarity onscreen, and was supposed to have had 6-track stereo instead of four, as Super Panavision 70 would have over a decade later. However, because the film needed a new, specially designed projector, the system proved impractical, and the two films made in the process, Carousel and The King and I, were released in 35MM CinemaScope reduction prints. To compensate, the premiere engagement of Carousel used a six-track magnetic full-coat in an interlock, and a 1961 re-release of The King and I, with the film "printed down" to 70 mm, used a six-track stereo soundtrack as well.
However, 50 complete sets of combination 55/35MM projectors and "penthouse" reproducers were eventually completed and delivered by Century and Ampex, respectively, and 55MM release print sounding equipment was delivered by Western Electric. Several samples of 55MM sound prints can be found in the Sponable Collection at the Film and Television Archives at Columbia University. The subsequently abandoned 55/35MM Century projector eventually became the Century JJ 70/35MM projector.
After this disappointing experience with a proprietary "wide gauge" system, Fox purchased the Todd-AO system and re-engineered it into a more modern 24 fps system with brand-new 65MM self-blimped production cameras (Mitchell BFC ... "Blimped Fox Camera") and brand-new 65MM MOS cameras (Mitchell FC ... "Fox Camera") and brand-new Super Baltar lenses in a wide variety of focal lengths, first employed on South Pacific. Essentially, although Todd-AO was also available to others, the format became Fox's premier origination and presentation apparatus, replacing CinemaScope 55. Current DVDs of the two CinemaScope 55 feature titles were transferred from the original 55mm negatives, often including the separate 35MM films as extras for comparison.
Back to mono
However, beginning in 1957, films recorded in stereo (except for those shown in Cinerama) carried an alternate mono track for theatres not ready or willing to re-equip for stereo. From then until about 1975, when Dolby Stereo was used for the first time in films, most motion pictures, even some from which stereophonic soundtrack albums were made, such as Zeffirelli's Romeo and Juliet, were still released in monaural sound. Stereo was reserved almost exclusively for expensive musicals such as West Side Story, My Fair Lady, or Camelot; epics such as Ben-Hur or Cleopatra; and dramas with a strong reliance on sound effects or music, such as Rosemary's Baby or The Graduate, with its Simon and Garfunkel score.
Development of Dolby Stereo
Today, virtually all films are released in stereophonic sound, as the Westrex Stereo Variable-Area system developed in 1977 for Star Wars is no more expensive to manufacture than mono. The format employs the same Western Electric/Westrex/Nuoptix RA-1231 recorder and, coupled with QS quadraphonic matrixing technology licensed to Dolby Labs from Sansui, this SVA system can produce the same Left, Center, Right and Surround sound as the original CinemaScope system of 1953 by using a single standard-width optical track. This important development finally brought stereo sound to so-called "flat" (non-widescreen) films presented at the most common aspect ratio of 1.85:1, although a number of "flat" films are photographed and presented at a ratio of 1.66:1, common in Europe, or 1.75:1, common in museums.
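As a rough illustration of how four channels can ride on a two-channel optical track, the following Python sketch implements a simplified 4:2:4 amplitude matrix of the general kind used by matrix surround systems. It is only a sketch: Dolby's actual encoder also applies a 90-degree phase shift, band-limiting and noise reduction to the surround channel, all omitted here.

    import numpy as np

    G = 1.0 / np.sqrt(2.0)  # -3 dB gain applied to the centre and surround feeds

    def encode_lcrs(left, centre, right, surround):
        """Fold L, C, R, S into a two-channel Lt/Rt pair (simplified)."""
        lt = left + G * centre + G * surround
        rt = right + G * centre - G * surround
        return lt, rt

    def decode_lcrs(lt, rt):
        """Passive decode back to L, C, R, S, with the expected crosstalk."""
        left, right = lt, rt
        centre = G * (lt + rt)      # in-phase material steers to the centre
        surround = G * (lt - rt)    # out-of-phase material steers to the surround
        return left, centre, right, surround

A passive decode of this kind recovers the centre and surround as sum and difference signals at the cost of crosstalk between adjacent channels; later active decoders such as Dolby Pro Logic add steering logic to suppress that crosstalk.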
Producers often took advantage of the six magnetic soundtracks available for 70mm release prints, both for productions shot in 65mm and, to save money, for those shot in 35mm and then blown up to 70mm. In these instances, the 70mm prints would be mixed for stereo, while the 35mm reduction prints would be remixed for mono.
Some films shot in 35mm, such as Camelot, featured four-track stereophonic sound and were then "blown up" to 70mm so that they could be shown on a giant screen with six-track stereophonic sound. Unfortunately, however, many of these presentations were only pseudo-stereo, utilizing a somewhat artificial six-track panning method. A process known somewhat derogatorily as the "Columbia Spread" was often used to synthesize the Left Center and Right Center channels by combining Left with Center and Right with Center, respectively; alternatively, an effect could be "panned" anywhere across the five stage speakers using a one-in/five-out pan pot.
Dolby Stereo was succeeded by Dolby Digital 5.1 in the cinema and, more recently with the introduction of digital cinema, by Dolby Surround 7.1 and Dolby Atmos in 2010 and 2012, respectively. Most films released after 1996 include both Dolby Digital and DTS (now renamed DATASAT) soundtracks.
Modern Home Audio and Video
From 1940 to 1970, the progress of stereophonic sound was paced by the technical difficulties of recording and reproducing two or more channels in synchronization with one another and by the economic and marketing issues of introducing new audio media and equipment. A stereo system cost roughly twice as much as a monophonic system, since a stereo system had to be assembled by the user after purchasing two preamplifiers, two amplifiers, and two speaker systems in addition to purchasing a twin-tuner radio, upgrading his tape recorder to a stereo model and having his phonograph fitted with a stereo cartridge. In the early days it was not clear whether consumers would think the sound was so much better as to be worth twice the price.
Stereo experiments on disc
Early lateral, vertical and double-sided stereo
Edison had been recording in a hill-and-dale, or vertically modulated, format on his cylinders and discs since 1877, and Berliner had been recording in a side-to-side, or lateral, format since shortly thereafter. Each format developed along its own trajectory until the late 1920s, when electric recording on disc, utilizing a microphone, surpassed acoustic recording, in which the performer needed to shout or play very loudly into what basically amounted to a megaphone in reverse.
At that time, AM radio had been around for roughly a decade, and broadcasters were looking for both better materials from which to make phonograph records and a better format in which to record them for play over the narrow and thus inherently noisy radio channel. As radio had been playing the same shellac discs available to the public, it was found that, even though the playback system was now electric rather than acoustic, the surface noise on the disc would mask the music after just a few plays.
Enter acetate, Bakelite, and vinyl, and with them radio broadcast transcriptions. Once these considerably quieter compounds were developed, it was discovered that the rubber-idler-wheel-driven turntables of the period produced a great deal of low-frequency rumble, but only in the lateral plane. So, even though, all other factors being equal, lateral recording on disc had the higher fidelity, it was decided to cut these new "silent-surface" transcriptions vertically, for two reasons: the practical increase in fidelity (the vertical plane escaped the turntable rumble) and incompatibility with home phonographs, whose lateral-only playback systems would produce only silence from a vertically modulated disc.
After 33-1/3 RPM recording had been perfected for the movies in 1927, the speed of radio program transcriptions was reduced to match, once again to inhibit playback of the discs on normal home consumer equipment. Even though the stylus size remained the same as consumer records at either 3 mils or 2.7 mils, the disc size was increased from 12-inches to the same 16-inches as those used in early talking pictures in order to inhibit the practice even further. Now, not only could the records not be played on home equipment due to incompatible recording format and speed, they wouldn't even fit on the player either, which suited the copyright holders just fine.
Two-channel high fidelity and other experiments
During the same period, engineers had another idea: split the signal into two parts, bass and treble; record the treble on its own track near the edge of the disc in a lateral format, where high-frequency distortion is lowest; and record the bass on its own track in a vertical format to escape the rumble. The trouble was that vertical grooves take up more space than lateral grooves. So, while the bass track filled its half of the disc, starting halfway through and ending at the center, the treble track either had unused space at its end or had to be cut at a wider pitch (fewer lines per inch) in order to match up with the bass track and keep both styli aligned, limiting the playing time to only slightly longer than a single, even at 33-1/3 RPM on a 12-inch disc.
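The bass/treble split described above is, in modern terms, a two-band crossover. A minimal Python sketch, assuming NumPy and SciPy and an arbitrary 300 Hz crossover point (the dividing frequency actually used in the experiment is not recorded here), might look like this:

    import numpy as np
    from scipy.signal import butter, sosfilt

    def split_bass_treble(signal, sample_rate, crossover_hz=300.0):
        """Split a signal into bass and treble bands around a crossover frequency."""
        sos_low = butter(4, crossover_hz, btype="lowpass", fs=sample_rate, output="sos")
        sos_high = butter(4, crossover_hz, btype="highpass", fs=sample_rate, output="sos")
        # The bass band corresponds to the vertically cut track, the treble
        # band to the laterally cut track in the experiment described above.
        return sosfilt(sos_low, signal), sosfilt(sos_high, signal)

    # Example: split one second of noise at the assumed 300 Hz crossover.
    sr = 44100
    noise = np.random.default_rng(0).standard_normal(sr)
    bass, treble = split_bass_treble(noise, sr)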
Another failed experiment in the late 1920s and early '30s involved recording the left channel on the left side of the disc (when held vertically with its edge facing the user) and recording the right channel on the right side of the disc. These were manufactured on twin film-company recording lathes which ran in perfect sync with one another, with no variation, and which were capable not only of outside-in as well as inside-out recordings (see Radio Programming Vinyl Sequence under Gramophone record) but also of counter-clockwise as well as conventional clockwise recording, by mounting the cutting head wrong-way-out with a special adapter. One master was recorded conventionally and the other counterclockwise; each master was run separately through the plating process, lined up to match, and subsequently mounted in a press. This recording method was later used by Mattel to record counter-clockwise discs for one of its answers to the GAF Talking View Master in the mid-60s.
The dual-sided stereo disc was then played vertically, first in a system that featured two tonearms on the same post facing one another, and later in an offset system where one tonearm was placed conventionally and the other opposite it, i.e. not only on the other side of the mechanism but facing the other way as well, so that both tonearms could start at the edge and play to the center. But even with the disc played vertically in a rotating clamp, there was trouble keeping the two tonearms synchronized. The system was developed further, however, and adapted so that a single tonearm could play either side of a record in jukeboxes of the late 1930s and early '40s.
Five years later, Bell Labs was experimenting with a two-channel lateral-vertical system, in which the left channel was recorded laterally and the right channel vertically, still utilizing a standard 3-mil 78-RPM groove, over three times larger than the modern LP stylus of the late 20th century. The trouble was that, once again, all the low-frequency rumble appeared in the left channel and all the high-frequency distortion in the right. Over a quarter of a century later, it was decided to tilt the recording head 45 degrees so that both the low-frequency rumble and the high-frequency distortion were shared equally by the two channels, producing the 45/45 system we know today.
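In modern terms the 45/45 cut is a simple change of basis: each channel drives one groove wall at 45 degrees, so the sum of the channels appears as lateral stylus motion (which a mono pickup reproduces correctly) and the difference as vertical motion. A small Python sketch of that transform and its inverse, with an arbitrary sign convention and assuming NumPy:

    import numpy as np

    def lr_to_groove(left, right):
        """Convert left/right signals into lateral/vertical stylus motion (45/45 cut)."""
        lateral = (left + right) / np.sqrt(2.0)   # mono-compatible component
        vertical = (left - right) / np.sqrt(2.0)  # stereo difference component
        return lateral, vertical

    def groove_to_lr(lateral, vertical):
        """The inverse transform performed by a stereo pickup cartridge."""
        left = (lateral + vertical) / np.sqrt(2.0)
        right = (lateral - vertical) / np.sqrt(2.0)
        return left, right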
Emory Cook
In 1952, Emory Cook (1913–2002), who had already become famous for designing new feedback disk-cutter heads to improve the transfer of sound from tape to vinyl, took the two-channel high-fidelity system described above and developed from it a somewhat misnamed "binaural" record. It consisted of two separate channels cut into two separate groups of grooves running next to each other, one running from the edge of the disc to roughly the halfway point and the other from the halfway point in towards the label; but Cook used two lateral grooves, with a 500 Hz crossover on the inner track to try to compensate for that track's lower fidelity and greater high-frequency distortion.
Each groove needed its own monophonic needle and cartridge on its own branch of the tonearm, and each needle was connected to a separate amplifier and speaker. The setup was intended to demonstrate Cook's cutter heads at a New York audio fair rather than to sell the record; but soon afterward demand grew for such recordings and the equipment to play them, and Cook Records began to produce them commercially. Cook recorded a vast array of sounds, ranging from railroad sounds to thunderstorms. By 1953, Cook had a catalog of about 25 stereo records for sale to audiophiles.
Magnetic tape recording
Stereo magnetic tape recording was demonstrated on standard 1/4-inch tape for the first time in 1952, using two sets of recording and playback heads, upside-down and offset from one another. A year later, Remington Records began recording a number of its sessions in stereo, including performances by Thor Johnson and the Cincinnati Symphony Orchestra.
Later that same year, more experimental stereo recordings were conducted with Leopold Stokowski and a group of New York studio musicians at RCA Victor Studios in New York City. In February 1954, the label also recorded a performance of Berlioz' masterpiece The Damnation of Faust by the Boston Symphony Orchestra under the direction of Charles Münch, the success of which led to the practice of regularly recording sessions in stereo.
Shortly afterwards, the last two public concerts directed by famed conductor Arturo Toscanini were recorded on stereophonic magnetic tape; however, they were not released as such until 1987 and 2007, respectively. In the UK, Decca Records began recording sessions in stereo in mid-1954, and by that time even smaller labels in the U.S., such as Concertapes, Bel Canto and Westminster, along with major labels such as RCA Victor, had begun releasing stereophonic recordings on two-track prerecorded reel-to-reel magnetic tape, priced at two or three times the cost of monaural recordings, which retailed for around $2.95 to $3.95 apiece for a standard monaural LP. Even 2-track monaural tape, which had to be flipped over halfway through and carried exactly the same information as the monaural LP, only without the crackles and pops, was being sold for $6.95.
For context, the average working man in 1954 might take home $50–$60 a week if he was lucky and pay $75–$100 a month in rent for a two-room apartment. The price of many 2-track stereo tape recordings of the period, upwards of $12.95–$18.95 apiece for a full-length album when the corresponding mono LP cost only $3.95, was therefore prohibitive. In addition, the cost of the stereophonic recorder needed to play them may have equaled or exceeded the cost of a new car.
However, audiophiles, with little or no regard for the cost, bought them and the players anyway, and stereophonic sound came to at least a select few living rooms of the mid-1950s. Stereo recording became widespread in the music business by the 3rd quarter of 1957.
Stereo on disc
In November 1957, the small Audio Fidelity Records label released the first mass-produced stereophonic disc. Sidney Frey, founder and president, had engineers at Westrex, owner of one of the two rival stereo disk-cutting systems, cut a disk for release before any of the major record labels could do so. Side 1 featured the Dukes of Dixieland, and Side 2 featured railroad and other sound effects designed to engage and envelop the listener. This demonstration disc was introduced to the public on December 13, 1957, at the Times Auditorium in New York City. Only 500 copies of this initial demonstration record were pressed, and three days later Frey advertised in Billboard Magazine that he would send a free copy to anyone in the industry who wrote to him on company letterhead. The move generated so much publicity that early stereo phonograph dealers all but had to use Audio Fidelity Records for their demonstrations.
Also in December 1957, Bel Canto Records, another small label, produced its own stereophonic demonstration disc on multicolored vinyl so that stereo dealers would have more than one choice for demonstration. With the supplied special turntables featuring a clear platter lighted from underneath to show off the color as well as the sound, the stunt worked even better for Bel Canto, whose roster of jazz, easy listening and lounge music, pressed onto its trademark Caribbean-blue vinyl, sold well throughout 1958 and into early 1959.
Affordable cartridges
After the release of the demonstration discs and the respective libraries from which they were culled, the other spur to the popularity of stereo discs was the reduction in price of a stereo magnetic cartridge, for playing the disks, from $250 to $29.95 in June 1958. The first four mass-produced stereophonic discs available to the buying public were released in March, 1958—Johnny Puleo and his Harmonica Gang Volume 1 (AFSD 5830), Railroad – Sounds of a Vanishing Era (AFSD 5843), Lionel – Lionel Hampton and his Orchestra (AFSD 5849) and Marching Along with the Dukes of Dixieland Volume 3 (AFSD 5851). By the end of March, the company had four more stereo LPs available, interspersed with several Bel Canto releases.
Although both monaural and stereo LP records were manufactured for the first ten years of stereo on disc, the major record labels stopped making monaural albums after 1968, relegating the format to 45 RPM singles, flexidiscs and radio promotional materials, which continued through the end of 1974.
Early broadcasting
Radio: In December 1925, the BBC's experimental transmitting station, 5XX, in Daventry, Northamptonshire, made radio's first stereo broadcast—a concert from Manchester, conducted by Sir Hamilton Harty—with 5XX broadcasting the right channel nationally by long wave and local BBC stations broadcasting the left channel by medium wave. The BBC repeated the experiment in 1926, using 2LO in London and 5XX at Daventry. Following experimental FM stereo transmissions in the London area in 1958 and regular Saturday morning demonstration transmissions using TV sound and medium wave (AM) radio to provide the two channels, the first regular BBC transmissions using an FM stereo signal began on the BBC's Third Programme network on August 28, 1962.
Chicago AM radio station WGN (and its sister FM station, WGNB) collaborated on an hourlong stereophonic demonstration broadcast on May 22, 1952, with one audio channel broadcast by the AM station and the other audio channel by the FM station. New York City's WQXR initiated its first stereophonic broadcasts in October 1952, and by 1954, was broadcasting all of its live musical programs in stereophonic sound, using its AM and FM stations for the two audio channels. Rensselaer Polytechnic Institute began a weekly series of live stereophonic broadcasts in November 1952 by using two campus-based AM stations, although the listening area did not extend beyond the campus.
Tests of six competing FM-only systems were conducted on KDKA-FM in Pittsburgh, Pennsylvania during July and August 1960. The Federal Communications Commission announced stereophonic FM technical standards in April 1961, with licensed regular stereophonic FM radio broadcasting set to begin in the United States on June 1, 1961. WEFM (in the Chicago area) and WGFM (in Schenectady, New York) were reported as the first stereo stations.
Television: A December 11, 1952 closed-circuit television performance of Carmen, from the Metropolitan Opera House in New York City to 31 theaters across the United States, included a stereophonic sound system developed by RCA. The first several shows of the 1958–59 season of The Plymouth Show (AKA The Lawrence Welk Show) on the ABC (America) network were broadcast with stereophonic sound in 75 media markets, with one audio channel broadcast via television and the other over the ABC radio network. By the same method, NBC Television and the NBC Radio Network offered stereo sound for two three-minute segments of The George Gobel Show on October 21, 1958. On January 30, 1959, ABC's Walt Disney Presents made a stereo broadcast of The Peter Tchaikovsky Story—including scenes from Disney's latest animated feature, Sleeping Beauty—by using ABC-affiliated AM and FM stations for the left and right audio channels.
With the advent of FM stereo in 1961, a small number of music-oriented TV shows were broadcast with stereo sound using a process called simulcasting, in which the audio portion of the show was carried over a local FM stereo station. In the 1960s and 1970s, these shows were usually manually synchronized with a reel-to-reel tape delivered by mail to the FM station (unless the concert or music originated locally). In the 1980s, satellite delivery of both television and radio programs made this fairly tedious process of synchronization unnecessary. One of the last of these simulcast programs was Friday Night Videos on NBC, just before MTS stereo was approved by the FCC.
The BBC made extensive use of simulcasting between 1974 and around 1990. The first such transmission was in 1974, when the BBC broadcast a recording of Van Morrison's London Rainbow Concert simultaneously on BBC2 TV and Radio 2. After that it was used for many other music programmes, live and recorded, including the annual BBC Promenade concerts and the Eurovision Song Contest. The advent of NICAM stereo sound with TV rendered this unnecessary.
Cable TV systems delivered many stereo programs utilizing this method for many years until prices for MTS stereo modulators dropped. One of the first stereo cable stations was The Movie Channel, though the most popular cable TV station that drove up usage of stereo simulcasting was MTV.
Japanese television began multiplex (stereo) sound broadcasts in 1978, and regular transmissions with stereo sound came in 1982. By 1984, about 12% of the programming, or about 14 or 15 hours per station per week, made use of the multiplex technology. West Germany's second television network, ZDF, began offering stereo programs in 1984.
MTS: Stereo for television
In 1979, The New York Times reported, "What has prompted the [television] industry to embark on establishing high-fidelity [sound] standards now, according to engineering executives involved in the project, is chiefly the rapid march of the new television technologies, especially those that are challenging broadcast television, such as the video disk."
Multichannel television sound, better known as MTS (often still as BTSC, for the Broadcast Television Systems Committee that created it), is the method of encoding three additional channels of audio into an NTSC-format audio carrier. It was adopted by the FCC as the United States standard for stereo television transmission in 1984. Sporadic network transmission of stereo audio began on NBC on July 26, 1984, with The Tonight Show Starring Johnny Carson—although at the time, only the network's New York City flagship station, WNBC, had stereo broadcast capability. Regular stereo transmission of programs began in 1985.
Recording methods
A-B technique: time-of-arrival stereophony
This uses two parallel omnidirectional microphones some distance apart, capturing time-of-arrival stereo information as well as some level (amplitude) difference information, especially if employed in close proximity to the sound source(s). At a spacing of about 60 cm (24 in), the time delay (time-of-arrival difference) for a signal reaching one microphone and then the other from the side is approximately 1.5 ms (1 to 2 ms). Increasing the distance between the microphones effectively decreases the pickup angle; at a 70 cm (28 in) spacing it is approximately equivalent to the pickup angle of the near-coincident ORTF setup.
This technique can produce phase issues when the stereo signal is mixed to mono.
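As a rough sanity check on the figures above, the time-of-arrival difference for a spaced pair can be estimated from the path-length difference divided by the speed of sound. The following minimal Python sketch is only an illustration: it assumes sound travels at roughly 343 m/s, and the function name is purely hypothetical.

    import math

    SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degrees C (assumed)

    def arrival_time_difference(spacing_m, angle_deg):
        # Time-of-arrival difference between two spaced omni microphones for a
        # plane wave arriving at angle_deg (0 = straight ahead, 90 = from the side).
        path_difference = spacing_m * math.sin(math.radians(angle_deg))
        return path_difference / SPEED_OF_SOUND

    # A source fully to the side of a 60 cm (24 in) A-B pair:
    delay = arrival_time_difference(0.60, 90.0)
    print(f"{delay * 1000:.2f} ms")  # about 1.75 ms, within the 1 to 2 ms range quoted above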
X-Y technique: intensity stereophony
Here, two directional microphones are placed at the same point, typically pointing at an angle between 90° and 135° to each other. The stereo effect is achieved through differences in sound pressure level between the two microphones. A level difference of about 18 dB (16 to 20 dB) is needed for the sound to be heard as coming entirely from one loudspeaker. Because there are no time-of-arrival differences (and hence no phase ambiguities), the sonic character of X-Y recordings has less sense of space and depth than recordings made with an A-B setup. When two figure-eight microphones are used, facing ±45° with respect to the sound source, the X-Y setup is called a Blumlein Pair, and the sonic image it produces is notably realistic.
M/S technique: mid/side stereophony
This coincident technique employs a bidirectional microphone facing sideways and another microphone at an angle of 90°, facing the sound source. The second microphone is generally a variety of cardioid, although Alan Blumlein described the usage of an omnidirectional transducer in his original patent.
The left and right channels are produced through a simple matrix: Left = Mid + Side; Right = Mid − Side (the polarity-reversed side signal). This configuration produces a completely mono-compatible signal and, if the Mid and Side signals are recorded (rather than the matrixed Left and Right), the stereo width can be manipulated after the recording has taken place. This makes it especially useful for film-based projects.
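The matrix above is simple enough to express directly in code. The sketch below, written in Python with NumPy as an assumed dependency, shows one plausible way to decode a Mid/Side pair into Left/Right and to scale the Side signal to adjust stereo width after the fact; the 0.5 factor in the encode direction is one common convention, not a universal standard.

    import numpy as np

    def ms_decode(mid, side, width=1.0):
        # Left = Mid + Side, Right = Mid - Side; width scales the Side signal,
        # narrowing (width < 1) or widening (width > 1) the stereo image.
        left = mid + width * side
        right = mid - width * side
        return left, right

    def ms_encode(left, right):
        # Recover Mid/Side from an existing Left/Right pair.
        mid = 0.5 * (left + right)
        side = 0.5 * (left - right)
        return mid, side

    # Example: widen a recorded M/S pair slightly during post-production
    mid = np.array([0.5, 0.4, 0.3])
    side = np.array([0.1, -0.2, 0.05])
    left, right = ms_decode(mid, side, width=1.3)

Because the Mid signal alone is the mono sum, collapsing the result to mono simply discards the Side term, which is why the technique is described as completely mono-compatible.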
Near-coincident technique: mixed stereophony
These techniques combine the principles of the A-B and X-Y (coincident pair) techniques. For example, the ORTF stereo technique of the Office de Radiodiffusion Télévision Française (Radio France) calls for a pair of cardioid microphones placed 17 cm apart at a total angle between microphones of 110°, which results in a stereophonic pickup angle of 96° (Stereo Recording Angle, or SRA). In the NOS stereo technique of the Nederlandse Omroep Stichting (Dutch Broadcasting Foundation), the total angle between microphones is 90° and the spacing is 30 cm; both arrangements capture time-of-arrival stereo information as well as level information. All spaced microphone arrays and near-coincident techniques use a spacing of at least 17 cm. Since 17 cm roughly equals the distance between human ears, such spacings provide an interaural time difference (ITD) at least as large as that of natural hearing, increasing with the spacing between microphones. Although the recorded signals are generally intended for playback over stereo loudspeakers, reproduction over headphones can provide remarkably good results, depending on the microphone arrangement.
In the course of restoration or remastering of monophonic records, various techniques of "pseudo-stereo", "quasi-stereo", or "rechanneled stereo" have been used to create the impression that the sound was originally recorded in stereo. These techniques first involved hardware methods (see Duophonic) or, more recently, a combination of hardware and software. Multitrack Studio, from Bremmers Audio Design (The Netherlands), uses special filters to achieve a pseudo-stereo effect: the "shelve" filter directs low frequencies to the left channel and high frequencies to the right channel, and the "comb" filter adds a small delay in signal timing between the two channels, a delay barely noticeable by ear,[note 2] but contributing to an effect of "widening" original "fattiness" of mono recording.
The special pseudo-stereo circuit—invented by Kishii and Noro, from Japan—was patented in the United States in 2003, with already previously issued patents for similar devices. Artificial stereo techniques have been used to improve the listening experience of monophonic recordings or to make them more "saleable" in today's market, where people expect stereo. Some critics have expressed concern about the use of these methods.
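As a very rough illustration of the two ingredients described above (a low/high split between the channels plus a small inter-channel delay), here is a minimal Python/NumPy sketch. It is not the actual implementation of the products or patents described above; the one-pole filter, the 12 ms default delay and the parameter names are arbitrary choices for demonstration.

    import numpy as np

    def pseudo_stereo(mono, sample_rate, delay_ms=12.0, alpha=0.1):
        # Crude pseudo-stereo: a one-pole low-pass feeds the left channel,
        # the complementary high-pass feeds the right channel, and the right
        # channel is delayed by a few milliseconds (a simple comb-like effect).
        # mono is expected to be a 1-D float array.
        low = np.zeros(len(mono))
        acc = 0.0
        for i, x in enumerate(mono):
            acc += alpha * (x - acc)      # one-pole low-pass
            low[i] = acc
        high = mono - low

        delay_samples = int(sample_rate * delay_ms / 1000.0)
        right = np.concatenate([np.zeros(delay_samples), high])[:len(mono)]
        left = low
        return left, right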
Binaural recording
Engineers make a technical distinction between "binaural" and "stereophonic" recording. Of these, binaural recording is analogous to stereoscopic photography. In binaural recording, a pair of microphones is put inside a model of a human head that includes external ears and ear canals; each microphone is where the eardrum would be. The recording is then played back through headphones, so that each channel is presented independently, without mixing or crosstalk. Thus, each of the listener's eardrums is driven with a replica of the auditory signal it would have experienced at the recording location. The result is an accurate duplication of the auditory spatiality that would have been experienced by the listener had he or she been in the same place as the model head. Because of the inconvenience of wearing headphones, true binaural recordings have remained laboratory and audiophile curiosities. However "loudspeaker-binaural" listening is possible with Ambiophonics.
Numerous early two-track-stereo reel-to-reel tapes, as well as several experimental stereo disc formats of the early 1950s, branded themselves as binaural; however, they were merely different incarnations of the above-described stereo or 2-track mono recording methods (lead vocal or instrument isolated on one channel and the orchestra, without the lead, on the other).
Stereophonic sound attempts to create an illusion of location for various sound sources (voices, instruments, etc.) within the original recording. The recording engineer's goal is usually to create a stereo "image" with localization information. When a stereophonic recording is heard through loudspeaker systems (rather than headphones), each ear, of course, hears sound from both speakers. The audio engineer may, and often does, use more than two microphones (sometimes many more) and may mix them down to two tracks in ways that exaggerate the separation of the instruments, in order to compensate for the mixture that occurs when listening via speakers.
Descriptions of stereophonic sound tend to stress the ability to localize the position of each instrument in space, but this would only be true in a carefully engineered and installed system, where speaker placement and room acoustics are taken into account. In reality, many playback systems, such as all-in-one boombox units and the like, are incapable of recreating a realistic stereo image. Originally, in the late 1950s and 1960s, stereophonic sound was marketed as seeming "richer" or "fuller-sounding" than monophonic sound, but these sorts of claims were and are highly subjective, and again, dependent on the equipment used to reproduce the sound. In fact, poorly recorded or reproduced stereophonic sound can sound far worse than well done monophonic sound. When playing back stereo recordings, the best results are obtained by using two identical speakers, in front of and equidistant from the listener, with the listener located on a center line between the two speakers. In effect, an equilateral triangle is formed, with the angle between the two speakers around 60 degrees as seen from the listener's point of view.
Vinyl records
Although Decca had recorded Ansermet conducting Antar in stereo in May 1954, it took four years for the first stereo LPs to be sold. In 1958, the first group of mass-produced stereo two-channel vinyl records was issued, by Audio Fidelity in the USA and Pye in Britain, using the Westrex "45/45" single-groove system. Whereas the stylus moves only horizontally when reproducing a monophonic disk recording, on stereo records the stylus moves vertically as well as horizontally. One could envision a system in which the left channel was recorded laterally, as on a monophonic recording, with the right channel information recorded with a "hill and dale" vertical motion; such systems were proposed but not adopted, due to their incompatibility with existing phono pickup designs (see below).
In the Westrex system, each channel drives the cutting head at a 45-degree angle to the vertical. During playback, the combined signal is sensed by a left-channel coil mounted diagonally opposite the inner side of the groove and a right-channel coil mounted diagonally opposite the outer side of the groove. The Westrex system provided for the polarity of one channel to be inverted: this way large groove displacement would occur in the horizontal plane and not in the vertical one. The latter would require large up-and-down excursions and would encourage cartridge skipping during loud passages.
The combined stylus motion is, in vector terms, the sum and difference of the two stereo channels. Effectively, all horizontal stylus motion conveys the L+R sum signal, and vertical stylus motion carries the L−R difference signal. The advantage of the 45/45 system is its greater compatibility with monophonic recording and playback systems.
Even though a monophonic cartridge will technically reproduce an equal blend of the left and right channels, instead of reproducing only one channel, this was not recommended in the early days of stereo: the larger stylus (1.0 mil vs 0.7 mil for stereo), coupled with the lack of vertical compliance in the mono cartridges available during the first ten years of stereo, would result in the stylus "digging into" the stereo vinyl and carving up the stereo portion of the groove, destroying it for subsequent playback on stereo cartridges. This is why one often notices the banner PLAY ONLY WITH STEREO CARTRIDGE AND STYLUS on stereo vinyl issued between 1958 and 1964.
Conversely, and with the benefit of no damage to any type of disc even from the beginning, a stereo cartridge reproduces the lateral grooves of monophonic recording equally through both channels, rather than through one channel. Also, it gives a more balanced sound, because the two channels have equal fidelity as opposed to providing one higher-fidelity laterally recorded channel and one lower-fidelity vertically recorded channel. Overall, this approach may give higher fidelity, because the "difference" signal is usually of low power, and is thus less affected by the intrinsic distortion of "hill and dale"-style recording.
Additionally, surface noise tends to be picked up more strongly in the vertical channel; therefore a mono record played on a stereo system can be in worse shape than the same record in stereo and still be enjoyable. (See Gramophone record for more on lateral and vertical recording.)
This system was conceived by Alan Blumlein of EMI in 1931 and was patented in the U.K. the same year, but was not reduced to actual practice as was a requirement for patenting in the U.S. and elsewhere at that time. (Blumlein was killed in a plane crash while testing radar equipment during WW-II, and he, therefore, never reduced the system to actual practice through both a recording and a reproducing means.) EMI cut the first stereo test discs using the system in 1933, but it was not applied commercially until a quarter of a century later, and by another company (Westrex division of Litton Industries Inc, as a successor to Western Electric Company), and dubbed StereoDisk. Stereo sound provides a more natural listening experience, since the spatial location of the source of a sound is (at least in part) reproduced.
In the 1960s, it was common practice to generate stereo versions of music from monophonic master tapes, which were normally marked "electronically reprocessed" or "electronically enhanced" stereo on track listings. These were generated by a variety of processing techniques to try to separate out various elements; this left noticeable and unsatisfactory artifacts in the sound, typically sounding "phasey". However, as multichannel recording became increasingly available, it has become progressively easier to master or remaster more plausible stereo recordings out of the archived multitrack master tapes.
Compact disc
The Red Book CD specification includes two channels by default, so a mono recording on CD either has one empty channel or carries the same signal on both channels simultaneously. However, noncommercial CDs in other formats, such as White Book or Orange Book, can feature up to four hours of stereo music on one disc for the purposes of extended programming in public spaces, such as malls.
These formats slightly reduce both the sampling frequency (from 44.1 kHz) and the bit depth (from 16-bit) and employ other proprietary technologies to increase the playing time on the disc, which, as with the 16-inch transcriptions mentioned above, renders them unplayable on the vast majority of consumer equipment.
In FM broadcasting, the Zenith-GE pilot-tone stereo system is used throughout the world.
Because of the limited audio quality of the majority of AM receivers, and also because AM stereo receivers are relatively scarce, relatively few AM stations employ stereo. Various modulation schemes are used for AM stereo, of which the best-known is Motorola's C-QUAM, the official method for most countries in the world that transmit in AM stereo. More AM stations are adopting digital HD Radio, which allows the transmission of stereo sound on AM stations. For Digital Audio Broadcasting, MP2 audio streams are used. DAB is one of the Digital Radio formats that is used to broadcast Digital Audio over terrestrial broadcast networks or satellite networks. DAB is extended to video, and the new format is called DMB.
In Sweden, Televerket developed a different stereo broadcasting system called the Compander System. It had a high level of channel separation and could even be used to broadcast two mono signals, for example for language studies (with two languages at the same time). However, tuners and receivers using the pilot-tone system were already being sold so that people in southern Sweden could listen to, for example, Danish radio. Eventually Televerket decided to start broadcasting in stereo using the pilot-tone system in 1977; the two competing systems had delayed the introduction of stereo radio in Sweden.
For analog TV (PAL and NTSC), various modulation schemes are used in different parts of the world to broadcast more than one sound channel. These are sometimes used to provide two mono sound channels that are in different languages, rather than stereo. Multichannel television sound is used mainly in the Americas. NICAM is widely used in Europe, except in Germany, where Zweikanalton is used. The EIAJ FM/FM subcarrier system is used in Japan. For Digital TV, MP2 audio streams are widely used within MPEG-2 program streams. Dolby Digital is the audio standard used for Digital TV in North America, with the capability for anywhere between 1 and 6 discrete channels.
Common usage
In common usage, a "stereo" is a two-channel sound reproduction system, and a "stereo recording" is a two-channel recording. This is cause for much confusion, since five (or more)-channel home theater systems are not popularly described as "stereo".
Most two-channel recordings are stereo recordings only in this weaker sense. Pop music, in particular, is usually recorded using close miking techniques, which artificially separate signals into several tracks. The individual tracks (of which there may be hundreds) are then "mixed down" into a two-channel recording. The audio engineers determine where each track will be placed in the stereo "image", by using various techniques that may vary from very simple (such as "left-right" panning controls) to more sophisticated and extensively based on psychoacoustic research (such as channel equalization, compression and mid-side processing). The end product using this process often bears little or no resemblance to the actual physical and spatial relationship of the musicians at the time of the original performance; indeed, it is not uncommon for different tracks of the same song to be recorded at different times (and even in different studios) and then mixed into a final two-channel recording for commercial release.
Classical music recordings are a notable exception. They are more likely to be recorded without having tracks dubbed in later as in pop recordings, so that the actual physical and spatial relationship of the musicians at the time of the original performance is preserved on the recording.
Balance can mean the amount of signal from each channel reproduced in a stereo audio recording. Typically, a balance control in its center position will have 0 dB of gain for both channels and will attenuate one channel as the control is turned, leaving the other channel at 0 dB.
See also Panning.
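A balance control of the kind described above can be sketched as a simple gain law. The following Python fragment is only one of several possible laws (some designs instead use a constant-power pan curve); the function name and the -1..+1 control range are assumptions for the example.

    def balance_gains(position):
        # position: -1.0 = full left, 0.0 = centre, +1.0 = full right.
        # At centre both channels stay at unity gain (0 dB); moving the control
        # attenuates only the opposite channel, leaving the other at 0 dB.
        left_gain = 1.0 if position <= 0.0 else 1.0 - position
        right_gain = 1.0 if position >= 0.0 else 1.0 + position
        return left_gain, right_gain

    print(balance_gains(0.0))   # (1.0, 1.0)
    print(balance_gains(0.5))   # (0.5, 1.0)  left attenuated, right untouched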
See also
- 3D audio effect
- Ambiophonics – binaural with speakers, not headphones
- Ambisonics – generalized MS-Stereo to three dimensions
- Wave Field Synthesis – The physical reconstruction of the spatial sonic field
- Binaural recording
- Blumlein Pair
- Joint stereo
- Stereo photography
- Stereographic projection
- Subwoofer (Stereo separation)
- Surround sound
- Sweet spot (acoustics)
- The term "binaural" that Cook used should not be confused with the modern use of the word, where "binaural" is an inner-ear recording using small microphones placed in the ear. Cook used conventional microphones, but used the same word, "binaural", that Alan Blumlein had used for his experimental stereo records almost 20 years earlier.
- The comb filter allows range of manipulation between 0 and 100 milliseconds.
Part of Unit: Operating Systems
Lesson Plan Overview / Details
The decimal number system we use is based on the fact that we humans are born with ten fingers (including thumbs!). But computers naturally speak in bases that are powers of 2, such as 2 (binary), 8 (octal), and 16 (hexadecimal).
In this lesson, we are going to explore how computers think in octal and how it relates to decimal and binary.
Total Time 3 Hours
Student Objectives / Goals
- By the end of this lesson, students will be able to count in octal.
- By the end of this lesson, students will be able to convert among octal, binary, and decimal.
- By the end of this lesson, students will understand why octal is important in computer systems.
California Career and Technical Education Standards
Activities in this Lesson
- Beginning Activity 40min - Hooks / Set
Ask the students this question: “What would it be like if you had no thumbs?”
Use duct tape to tape their thumbs to their hands so they can only use their fingers. Pass out common household objects and challenge students to try doing everyday tasks without their thumbs like taking off a screw-on top from a bottle or paperclip some papers together. Ask them to brainstorm and come up with things that they think would be the most difficult to do without thumbs. If possible, have them try some of them. You could even have a competition!
Show the video “Thumbless” where some students accepted the challenge to go a day with their thumbs disabled. If you wish to go a little further, show the video of “Charity”, a girl who was born without thumbs and with her fingers deformed. You can then show the video “Charity's Hands” which shows the difficulty the handicapped can have using a computer. This would be a good segue into a discussion of the need for accessibility options in an operating system.
If you wish to make a historical connection, talk about how a common method in ancient times to humiliate a defeated opponent would be to cut off their thumbs and sometimes their big toes. This is referred to in the Bible in the book of Judges 1:6-7. Outside of the obvious pain, why would this be a humiliation?
There is also a good place here for a biological connection if you have time. The existence of the opposable thumb is a major turning point in biology as it allows the use of tools.
(Note: if this hook seems a little too graphic or you feel it distracts from the main point of the lesson, you can make the same point by using Mickey Mouse as an example. Mickey Mouse, as many of the older cartoons, had only three fingers due to the animators wanting to make their jobs simpler with fewer fingers to draw. Therefore, cartoon characters probably count in octal!)
(Another historical connection would be to mention that some Native American tribes, like the Yuki from the Mendocino area, counted in octal. They counted using the spaces between the fingers rather than counting the fingers themselves.)
- Lecture 20min - Lecture
But there is one small way in which having no thumbs would make using a computer easier. What if we had no thumbs and only counted on our fingers? Then we would be counting more like computers do! When we count using all our fingers and thumbs, we count using a base of ten digits, because we have a total of ten fingers and thumbs. If we only counted with our fingers, we would count with only eight digits, which is much closer to the way computers count. Counting with only eight digits is called "octal" and is very common in many computer systems, especially older ones. Unix, for example, uses octal values to represent file permissions.
So what would it be like to count with only eight fingers? You would only be able to use eight digits: 0, 1, 2, 3, 4, 5, 6, 7. When you got to seven, you would treat it as you would a nine in decimal, and the next number would be a "ten", meaning a one and a zero, the one indicating one group of eight with none left over.
So the place values in the octal system will be 1's (8^0), 8's (8^1), 64's (8^2), 512's (8^3), and so forth.
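If the class has access to computers, the place values can be demonstrated with a short Python sketch such as the one below (the function name is just for illustration). It also expands a typical Unix permission value such as 755 the same way.

    def octal_place_values(octal_string):
        # Expand an octal numeral into its place values: 1's, 8's, 64's, 512's, ...
        total = 0
        for position, digit in enumerate(reversed(octal_string)):
            value = int(digit) * 8 ** position
            print(f"digit {digit} in the {8 ** position}'s place contributes {value}")
            total += value
        return total

    print(octal_place_values("144"))  # 4*1 + 4*8 + 1*64 = 100 decimal
    print(octal_place_values("755"))  # 5*1 + 5*8 + 7*64 = 493 decimal (a typical Unix permission value)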
- Demonstration 20min - Demo / Modeling
On the board, begin counting in octal and have the students count with you: 0, 1, 2, 3, 4, 5, 6, 7, 10, 11, 12, 13. Stop and ask them what the number 13 in octal is in decimal. It would be 11 in decimal! Write the two numbers side by side, 13 and 11. How would a computer programmer know just by looking at one of these numbers what the actual amount was?
One way would be to add the notation mathematicians use: a subscript after the number to indicate the base. For example, 13 with a subscript 8 (13₈) means the number 13 counted in base 8, or octal. However, this is difficult to do in computer work, since there is no easy way to create a subscript in a simple line of text, so computer programmers have come up with another way. They begin the number with a zero (which you would not normally begin a number with), sometimes followed by a letter, to indicate the most common numbering systems computers use. If a number begins with 0b, like 0b1101, it is binary. If it begins with just a zero, like 0356, it is probably octal. To indicate hexadecimal, the number begins with 0x, like 0x856a. Normal decimal numbers do not have the leading zero.
Add a zero before the number 13 to make it 013. Now can they tell which number is octal?
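If you want to show the prefixes on a computer, the snippet below uses Python 3, where the conventions are nearly the same; note that Python 3 writes octal with a 0o prefix rather than the bare leading zero used in C and in the examples above, and int() with an explicit base handles digit strings in any of these systems.

    print(0b1101)         # 13    (binary)
    print(0o13)           # 11    (octal 13 is decimal 11)
    print(0x856a)         # 34154 (hexadecimal)

    # Converting digit strings in a given base back to decimal:
    print(int("13", 8))   # 11
    print(int("13", 10))  # 13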
- Check for Understanding 5min - Check Understanding
Write several numbers on the board-
015, 16, 031, 025, 25, 016.
Ask the students which of these numbers indicate the same quantity. If they are understanding correctly, they should answer 031 and 25 (octal 31 is decimal 25).
- Guided Practice 20min - Guided Practice
Have the students take out a piece of paper and write their names in the top right corner. Have them label it “Octal Numbers Lab”.
Have them fold the page lengthwise into quarters. Then have them number in the first column to 25, move to the next column and number from 26-50 and so on until they reach 100.
Now tell them to begin counting in octal beside each number (0, 1, 2, 3, 4, 5, 6, 7, 10, 11, 12, ...) all the way to decimal 100. Tell them to be sure to leave space after the octal number, as one more number will be added later. When they reach decimal 100, what number do they get in octal? They should get 0144.
Write this question on the board—
“Why do computer programmers always mix up Halloween and Christmas?”
If they are having difficulty, have them circle the number pair 25-031 on their papers. If they still don't get it, the joke is that decimal 25 and octal 31 are the same number: Dec. 25 and Oct. 31, the dates of Christmas and Halloween!
For a little added humor, play the attached video from “Nightmare Before Christmas” where Jack Skellington from Halloween is trying to understand Christmas.
Now have the students write the same number in binary beside the octal number. So, for example, the row for 25 would read: 25 031 0b11001. Have them draw lines dividing the binary into groups of three bits each. So the binary equivalent of decimal 100 would be 1|100|100. Have them compare the groups of three bits with the octal number. Point out that every three binary bits can be represented as one octal digit: in this case, 1=1, 100=4 and 100=4, giving octal 0144. This is one of the reasons computer programmers find it easier to work in octal: octal translates back and forth to binary very easily.
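Students who finish early could check their tables against a short program. The sketch below, in Python, prints decimal, octal and binary side by side and groups the binary digits in threes, which makes the three-bits-per-octal-digit pattern obvious; the function name and column widths are arbitrary.

    def octal_binary_table(limit=100):
        # Print decimal, octal and binary side by side, padding the binary value
        # to a multiple of three bits and grouping it so each group of three
        # corresponds to one octal digit.
        for n in range(1, limit + 1):
            bits = format(n, "b")
            bits = bits.zfill((len(bits) + 2) // 3 * 3)
            grouped = "|".join(bits[i:i + 3] for i in range(0, len(bits), 3))
            print(f"{n:4d}  {format(n, 'o'):>4s}  {grouped}")

    octal_binary_table(25)   # the row for 25 reads:   25    31  011|001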
- Activity #6 30min - Lab / Shop
Now have the students turn their pages over. On the back, have them title the first three columns “decimal” “octal” and “binary”. Have them think up 20 numbers that can be either decimal or octal or binary and put them in the correct column, using correct notation. Remind them to be reasonable and keep the numbers below 1000 decimal. Now have them trade their papers with another student and have them fill out each other’s papers solving the numbers for the missing decimal, octal and binary. When they are done, have them compare their results with each other and check for accuracy. Collect the papers and check them.
- Closure 45min - Closure
Tell the students that there also is a way to count on your fingers in binary. Show the video clips below and challenge them to learn to count in binary through 31 on their fingers. 31 is how high you can count on one hand. Have them work together in small groups until each student can count to 31. Have them demonstrate their new skill to you.
- Assessment - Assessment
Hand out the “Decimal Octal Binary Lab” below. Do not let the students work together so you can monitor whether all of them are understanding the concept, not just copying the others. Collect and check.
- Total Time
- 3 Hours
By David Jefferies
What is a loop antenna?
A loop antenna has a continuous conducting path leading from one conductor of a two-wire transmission line to the other conductor. You may think of it as a "coil that radiates". The coil may have only a single turn. It may have an arbitrarily shaped perimeter, but the essence of a coil is that the defining wire encloses an area. Thus, a folded dipole is not a loop antenna in this sense, since the area inside the conductor path is vanishingly small.
Symmetric loop antennas have a plane of symmetry running along the feed and through the loop. Planar loop antennas lie in a single plane which also contains the conductors of the feed.
Three-dimensional loop antennas have wire which runs in all of the x,y, and z directions (in a rectangular Cartesian system). By definition they are not planar. They may, however, be symmetric about planes which contain the feed.
It is possible for the loop antenna plane not to contain the run of the feed. This matters for situations where the feed currents are not perfectly balanced.
What size is a loop antenna?
There are at least two distances which define the "notion of size" of a loop antenna: the total length of wire between the "go" and "return" of the feed, and the largest distance from one point on the loop conductor to another, measured in a straight line (as light would propagate). One might also think of another distance that "matters", namely the distance from the feed junction with the loop to the most remote point on the loop conductor. All these distances need to be thought of in units of a wavelength at the carrier frequency handled by the antenna.
Loops and probes in waveguide
If one wants to couple radiation from a two-wire feed (possibly coax) to a waveguide, one commonly does this by means of a probe (which couples to the electric field in the guide, and is the equivalent of a monopole) or by means of a loop (which couples to the magnetic field in the guide; the maximum of magnetic field lines pass through the loop). Waveguide may be regarded as a microcosm of the great outside world.
Phase delay across a loop
Critical to the functioning of any loop antenna is the concept of the "phase delay" that occurs for em radiation to get from one point on the loop to another, some distance away. In the case of vanishingly small loops, the traditional calculation assumes that the current is the same everywhere around the loop perimeter. In this case, the radiation along any loop diameter arrives from an oppositely directed but parallel element of current after a short time delay, which puts in a phase shift so that the radiation contributions do not entirely cancel.
This traditional argument quickly leads to the result that the radiation resistance of a small circular loop rises as the (ratio of loop diameter to wavelength) raised to the fourth power. A small increase in loop diameter therefore results in a greatly increased radiation resistance.
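For readers who want numbers to go with this statement, the usual textbook approximation for the radiation resistance of an electrically small single-turn loop is about 20·pi^2·(C/lambda)^4 ohms, where C is the circumference. That formula is not derived in this article, so treat the Python sketch below as a separate, hedged illustration of the fourth-power scaling rather than as part of the argument here.

    import math

    def small_loop_radiation_resistance(circumference, wavelength):
        # Textbook small-loop approximation: R_r ~ 20 * pi^2 * (C / lambda)^4 ohms.
        # Only meaningful while the perimeter stays well below a wavelength.
        return 20 * math.pi ** 2 * (circumference / wavelength) ** 4

    wavelength = 20.0                      # metres
    for radius in (0.2, 0.4):              # doubling the radius...
        c = 2 * math.pi * radius
        r_rad = small_loop_radiation_resistance(c, wavelength)
        print(f"radius {radius} m: R_r = {r_rad:.4f} ohm")
    # ...multiplies the radiation resistance by 16 (2^4), illustrating the
    # fourth-power dependence mentioned above.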
Just as with a small rod antenna, where the radiation resistance rises as the square of the length of the exposed radiating wire (see radimp.html), so also in a loop antenna the radiation resistance, were it not for the cancellation effects, might be expected to rise as the square of the circumference, and therefore as the square of the loop radius or diameter in the case of a small circular loop. However, there are additional cancellation effects, and this puts in an additional factor proportional to the square of the diameter, radius, or circumference.
One of the most significant attributes of a loop antenna is that "go" current in one part of the loop is offset by "return" current in another. It is only because these go and return paths are physically separated in space that a small loop antenna can radiate at all. Otherwise the radiation from one little current element would exactly cancel that from the other. In fact, this does happen for radiation directions normal to the plane of a vanishingly small planar loop. In such directions there is a deep radiation null.
Quantitatively, for a circular loop of radius R, when R/lambda = 0.25, the diameter is half a wavelength and the 180 degree phase shift, for the radiation to get from the "go" current at one end of the diameter to the oppositely-directed "return" current at the other end of the diameter, results in an enhancement factor of 2 over the radiation from just a single current element. The radiation from one element arrives "in phase" with the contribution from the other element, half a wavelength away but opposite in sign. The perimeter is then (2 pi 0.25) wavelengths which is 1.57 wavelengths and so the assumption of constant current around the loop perimeter has broken down.
For R/lambda = 1/100 or 0.01, the field contributions nearly cancel. The expression for the "enhancement factor" is [2 sin(2 pi R/lambda)] which then evaluates to 0.126 very nearly. This is a lot less (1/16th) than that due to the quarter-wave radius loop (enhancement of 2) and will make (1/16)^2 = 1/256 difference to the contribution of these little elements to the radiated power and to the radiation resistance.
For R/lambda = 1/50 (or 0.02), the enhancement factor is 0.25, and for R/lambda = 1/20 (or 0.05) the enhancement factor is 0.61 and at this point the perimeter has got to 0.3142 of a wavelength.
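These spot values are easy to reproduce. The short Python check below evaluates the enhancement factor 2·sin(2·pi·R/lambda) quoted above for the same R/lambda ratios; small differences from the figures in the text are just rounding.

    import math

    def enhancement_factor(r_over_lambda):
        # Relative field contribution from two opposed current elements a loop
        # diameter apart, as used in the text: 2 * sin(2 * pi * R / lambda).
        return 2 * math.sin(2 * math.pi * r_over_lambda)

    for r in (0.01, 0.02, 0.05, 0.25):
        print(r, round(enhancement_factor(r), 3))
    # 0.01 -> 0.126,  0.02 -> 0.251,  0.05 -> 0.618,  0.25 -> 2.0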
Of course, in certain loop structures the size of the currents in different elements of length along the loop wire will vary. Thus, loop antennas which have a total wire length approaching or exceeding an appreciable fraction of a wavelength can be efficient radiators with radiation resistance that approaches a match to common feed-line impedances. It is only in vanishingly small loop antennas that we are justified in assuming that the current is the same at every point along the loop wire. In intermediate cases, this may sometimes be a justifiable approximation, but certain textbooks which treat a circular loop antenna of radius lambda/25 (which has a loop wire length of about lambda/4) as if the approximation were sufficiently valid, may be in serious error. It is partly for this reason that there is some controversy about the radiation resistance of intermediate-sized loop antennas.
The folded-dipole approximation
For a loop where the perimeter is about a whole wavelength, the folded dipole analogy may be better. We imagine the loop as being formed from a "bulged-out" folded half-wave dipole for which the current distribution looks like this :-
The current in the element of wire diametrically opposite the feed is now directed from right to left, rather than from left to right as it would be in the vanishingly small loop. The currents up and down in the elements on the horizontal diameter have vanished, and it is for this reason that there is no radiation from this scenario in the plane of the loop in the horizontal direction.
The radiation resistance of this mode is quite high because the cancellation between currents on opposite ends of a diameter is no longer so complete. For a circular loop of perimeter one wavelength, the radius is 1/(2 pi) wavelengths, or about 0.16 wavelengths. We are still talking about a physically small-sized loop therefore. For example, at a wavelength of 20 metres such a loop would have a diameter of 6.4 metres and would be an effective wideband radiator.
The quarter-wave shorted line approximation
Now consider a loop which has a perimeter of just one half of a wavelength. At 20 metres wavelength this would be a loop of diameter 3.2 metres. We may consider a bulged-out length of transmission line having the same total wire length. A little thought shows that the transmission line model is a short circuited quarter-wave section of line. The input current is zero in the parallel line approximation, as the line presents an open circuit to the generator. Of course, this will not be exactly true when we have "bulged out" the line section; for one thing, the short circuit point will have moved physically closer to the feed.
What is clear, however, is that in this approximation the current in the element diametrically opposite the feed runs from left to right, not from right to left as it did in the folded dipole example. It is also going to be significantly larger than the current supplied by the feed.
As we increase the perimeter of the loop from a quarter wavelength to a half wavelength, there must therefore be a region where the current opposite the feed is smaller than the feed current, and indeed it must at some point pass through zero. Thus it is apparent that for all intermediate loops of diameter greater than 0.16 wavelength, the "small loop" approximation is not valid, and very significant radiation occurs. The Q will be reasonably low and the bandwidth and radiation resistance will have usefully large values. Those people with simulation packages may like to quantify these "general statements".
Phase shifts in the current distribution
Now, it is known that the radiation resistance for a small loop antenna is often swamped by the loss resistance due to the current being confined to a small skin depth of conductor at the wire or tube surface. Thus we have to consider phase shifts between oppositely-directed currents on opposite ends of a loop diameter (for the special case of a circular loop) brought about by the distributed inductance, resistance, and capacitance of the loop line. Loop radiation is often (usually) measured with the loop mounted in a vertical plane, and one goes away a significant distance on a flat ground so that the radiation is measured on a horizontal (level) path in the plane of the loop. It is easy to see that for a symmetric loop (as discussed above) the current elements contributing to the radiation, up and down on opposite sides of the loop, have balanced amplitudes and phases. Thus we expect the traditional formula for the radiation from a vanishingly small loop to be approximately correct for intermediate loops, in this scenario of horizontal path radiation. For those loops of this kind of size where radiated field strengths have been measured and reported, it is said that this was the measurement geometry.
For other directions of the diameters of the loop, which are at a slant (intermediate between horizontal and vertical) angle to the ground, there will be phase shifts and amplitude differences between the little elements of current flow at the ends of this diameter. As stated, these phase shifts are due to the combined effects of distributed series inductance and shunt capacitance, and series skin-effect loss resistance. However, for loops where the phase shifts have a significant effect, the total wire perimeter will probably be long enough so that the amplitudes of the current elements change as well, and this will also generate a contribution to the radiation.
Quantifying these phase and amplitude shifts would appear to be quite a difficult problem. In terms of the current flow through a continuous conductor having distributed inductance and resistance per unit length, Kirchhoff's current law indicates that the current is the same everywhere and that there are no phase shifts. However, if we then allow for the shunt capacitance between elements of the loop tube or wire (which has non-vanishing surface area), then the phasing of current flow around the loop becomes a function of the loop wire diameter (or tube diameter) as well as the skin depth loss. A simulation might sort out some of these issues, but as it would return global values for the antenna properties, the local behaviour might not be transparent.
It is not unreasonable to expect, therefore, that intermediate-sized loops will radiate more strongly along such slant diameters than the traditional theory might predict. This effect is expected to be quite small, overall. For, the cancellation of the oppositely-directed current elements is no longer so complete: they have differing amplitudes and phases. This will put up the radiation resistance of the loop. Paradoxically, therefore, the presence of loss resistance in the loop due to joule heating in the skin depth where the current flows, may enhance the total radiation over what it would be for a lossless conductor having the same geometry.
The loop (intermediate size) will therefore radiate up and down preferentially. If we mount the loop with its plane horizontal, it should be possible to check on this effect by moving around the loop at constant range, measuring the fields radiated as we go. The prediction is that there will be some anisotropy in the radiation, symmetrically disposed with reference to the feed axis.
For the case of non-constant amplitudes and phases, there will also be radiation normal to the plane of the loop. This forms the basis of a simple and sensitive experimental method of deciding whether a loop antenna is functionally "small", or in the "intermediate size" range. In the case of a truly "small" antenna, there should be a very deep null in the far field region at directions on the axis of rotation of the loop. This null progressively fills in as one makes the loop diameter larger. By the time the loop diameter is about lambda/10 there should be appreciable radiation along the loop axis. As remarked above, this will be accompanied by anisotropy in the radiation in the plane of the loop.
The folded dipole approximation to an intermediate loop antenna has deep nulls along the horizontal diameter (if the feed runs in from underneath) and for this reason radiation in this mode is not detected in the standard loop field-strength measurements reported by some others.
Inductance and self-resonance
Loop antennas have area, and generate magnetic fields which thread this area. These changing magnetic fields generate a back emf at the loop terminals which provides the loop with inductive impedance. Generally speaking, the larger the area, the larger the inductance. However, as the loop wire becomes longer, the phase shift between induced voltage and the current that gives rise to it changes. At a certain wire length, generally held for circular loops to be about 1/3 wavelength, the loop becomes "self resonant". Another way of looking at this phenomenon is to consider a loop to be a "bulged-out" length of parallel wire transmission line, shorted at the end remote from the feed point. In the case of a true parallel wire line, self resonance may be defined to occur when the total wire length (go and return) is 1/2 a wavelength plus the length of the short at the end. The line is then a quarter wave shorted stub.
A self-resonant antenna might be thought of as being "optimally efficient". Smaller loops require additional series or shunt capacitance to tune them to resonance so that the impedance presented to the feed becomes real.
In a self-resonant loop, then, it is clear that the standard small-loop theory breaks down. As we have indicated, this happens for total wire length of between 1/3 and 1/2 of a wavelength. The current distribution around the loop will be very non-uniform; the radiation resistance will be significantly large, and will swamp the loss resistance in all likelihood. As we gradually increase the dimensions of the loop antenna, nothing suddenly happens to the radiation properties. Therefore, we propose that the small loop limit really needs the loop radius to be very much less than the reported 1/25 of a wavelength. The controversy about small loops, however, deals with loops of precisely this size. We are not surprised.
Recently there have been reports in antenneX magazine about designs for three-dimensional small loops. In such loops, the ratio of wire length to maximum linear dimension of the antenna may be made significantly larger. Therefore, the small-loop limit will apply only for even yet smaller overall dimensions. The phase shifts, as we travel along the loop conductor, will result in less cancellation for oppositely-directed current elements and there will be enhanced radiation resistance and efficiency.
In three-dimensional small loops, it becomes easier to make the total wire runs longer than a wavelength, and to make adjacently-placed wire runs carry currents which run in the same directions, whose radiation therefore reinforces rather than subtracts.
Also, by wrapping up the wire runs into a folded structure, the total current-carrying (and therefore radiating) elements inside the compact antenna volume may be significantly increased in length. This may be done without endlessly increasing the loop inductance, and so reaching self resonance at too short a length of radiating wire, because the wire in 3-D may be run in such a way that the local magnetic fields generated by different parts of the wire run subtract. It appears, therefore, that we are still in the early stages of finding out what may be achieved, in small linear dimensions, with this exciting new class of antenna structures.
~ antenneX ~ December 2003 Online Issue #80 ~
Radio propagation is the behavior of radio waves when they are transmitted, or propagated, from one point on the Earth to another, or into various parts of the atmosphere. As a form of electromagnetic radiation, like light waves, radio waves are affected by the phenomena of reflection, refraction, diffraction, absorption, polarization and scattering.
Radio propagation is affected by the daily changes of water vapor in the troposphere and ionization in the upper atmosphere, due to the Sun. Understanding the effects of varying conditions on radio propagation has many practical applications, from choosing frequencies for international shortwave broadcasters, to designing reliable mobile telephone systems, to radio navigation, to operation of radar systems.
Radio propagation is also affected by several other factors determined by its path from point to point. This path can be a direct line of sight path or an over-the-horizon path aided by refraction in the ionosphere, which is a region between approximately 60 and 600 km. Factors influencing ionospheric radio signal propagation can include sporadic-E, spread-F, solar flares, geomagnetic storms, ionospheric layer tilts, and solar proton events.
Radio waves at different frequencies propagate in different ways. At extremely low frequencies (ELF) and very low frequencies (VLF) the wavelength is very much larger than the separation between the earth's surface and the D layer of the ionosphere, so electromagnetic waves may propagate in this region as in a waveguide. Indeed, for frequencies below 20 kHz, the wave propagates as a single waveguide mode with a horizontal magnetic field and vertical electric field. The interaction of radio waves with the ionized regions of the atmosphere makes radio propagation more complex to predict and analyze than in free space. Ionospheric radio propagation has a strong connection to space weather. A sudden ionospheric disturbance or shortwave fadeout is observed when the x-rays associated with a solar flare ionize the ionospheric D-region. Enhanced ionization in that region increases the absorption of radio signals passing through it. During the strongest solar x-ray flares, complete absorption of virtually all ionospherically propagated radio signals in the sunlit hemisphere can occur. These solar flares can disrupt HF radio propagation and affect GPS accuracy.
Predictions of average propagation conditions were needed, and made, during the Second World War. A highly detailed prediction code developed by Karl Rawer was used by the German Wehrmacht and, after the war, by the French Navy.
Since radio propagation is not fully predictable, such services as emergency locator transmitters, in-flight communication with ocean-crossing aircraft, and some television broadcasting have been moved to communications satellites. A satellite link, though expensive, can offer highly predictable and stable line of sight coverage of a given area.
Radio Wave Propagation
Fluxes & Indices Used For Forecasting
The higher the K-index, the more unstable propagation becomes; the effect is strongest at high latitudes and weakest at low latitudes.
When storm level is reached, propagation degrades strongly and may fade out entirely at high latitudes.
Classification of K-indices is as follows:
K8=Very severe storm
K9=Extremely severe storm
As with the K-index, the higher the A-index, the more unstable propagation becomes.
Classification of A-indices is as follows:
A0 - A7 = quiet
A8 - A15 = unsettled
A16 - A29 = active
A30 - A49 = minor storm
A50 - A99 = major storm
A100 - A400 = severe storm
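As a quick illustration of how these ranges might be applied in software, the following minimal sketch maps a daily A-index value to the category names listed above; the function name and structure are illustrative assumptions, not part of any official tool.

```python
def classify_a_index(a):
    """Map a daily A-index value to the activity category listed above."""
    if a <= 7:
        return "quiet"
    if a <= 15:
        return "unsettled"
    if a <= 29:
        return "active"
    if a <= 49:
        return "minor storm"
    if a <= 99:
        return "major storm"
    return "severe storm"          # A100 - A400

for a in (5, 22, 120):
    print(a, classify_a_index(a))  # quiet, active, severe storm
```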
GLOSSARY OF SOLAR/PROPAGATION TERMINOLOGY
Aa index. See ak index.
aa index. A daily and half-daily index of geomagnetic activity determined from the k indices scaled at two nearly antipodal stations at invariant magnetic latitude 50 degrees (Hartland, England, and Canberra, Australia). The aa values are in units of 1 nT. The index is available back to 1868, and is provided by the Institut de Physique du Globe de Paris, France.
absorption line. In spectroscopy, and in particular the solar Fraunhofer spectrum, a characteristic wavelength of emitted radiation that is partially absorbed by the medium between the source and the observer. (See H alpha.)
active. A descriptive word specifically meaning (1) a probability of > or = 50% for an M-class x-ray flare (see x-ray flare class) in a sunspot region; (2) disturbed geomagnetic levels such that 16 < or = Ak index < 30.
active dark filament (ADF). A filament displaying motion or changes in shape, location, or absorption characteristics.
active longitude. The approximate center of a range of heliographic longitudes in either the northern or southern solar hemisphere (seldom both at the same time) containing one or more large and complex active regions formed by the frequent, localized emergence of new magnetic flux. Individual sunspot groups within the complex can have relatively short lifetimes (a week or two); the complex may persist for several solar rotations because additional spot groups form as earlier ones decay.
active prominence. A prominence moving and changing in appearance over a few minutes of time.
active prominence region. A portion of the solar limb displaying active prominences; typically associated with an active region.
active region (AR). A localized, transient volume of the solar atmosphere in which plages, sunspots, facula, flares, etc., may be observed. Active regions are the result of enhanced magnetic fields; they are at least bipolar and may be complex if the region contains two or more bipolar groups.
active surge region (ASR). An active region that exhibits a group or series of spike-like surges that rise no higher than 0.15 solar radii above the limb. (See bright surge on the limb.)
ADF. See active dark filament.
AE index. A geomagnetic index of the auroral electrojet, which characterizes the maximum range of excursion (both positive and negative) from quiet levels; measured at a given universal time by using the combined data from a worldwide ring of high-latitude magnetic observatories. AU (A upper) refers to the greatest positive deviation from the quiet time reference and AL (A lower) to the most negative. By definition AE = AU - AL. AO refers to the mean of AU and AL: AO = 1/2 (AU + AL). The AE and companion indices are provided by the Data Analysis Center for Geomagnetism and Spacemagnetism of Kyoto University, Kyoto, Japan.
AFR. The Ak index observed at Fredericksburg, Virginia.
AFS. See arch filament system.
ak index. A 3-hourly "equivalent amplitude" index of geomagnetic activity for a specific station or network of stations (represented generically here by k) expressing the range of disturbance in the horizontal magnetic field. "ak" is scaled from the 3-hourly K index according to the following table:
At SESC these values are used directly for operational purposes. But to convert the ak values to nanoteslas (nT), a local (station-dependent) conversion factor must be found by dividing the station's lower limit for K=9 by 250. For example, at Boulder and Fredericksburg the lower limit for K=9 is 500 nT so the factor is 2; therefore the ak values for these stations are in units of 2 nT. (To obtain an equivalent amplitude in nanoteslas for Boulder or Fredericksburg, the index value must be doubled).
Ak index. A daily index of geomagnetic activity for a specific station or network of stations (represented generically here by k) derived as the average of the eight 3-hourly ak indices in a Universal Time day.
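A minimal sketch of the conversions described above. The glossary's K-to-ak scaling table does not appear in this text, so the equivalent-amplitude values used below are the commonly published ones and should be treated as an assumption; the Boulder/Fredericksburg factor of 2 comes directly from the text.

```python
# Commonly published K-to-ak equivalent-amplitude table (assumed here; verify
# against the official SESC scaling before relying on it).
K_TO_AK = {0: 0, 1: 3, 2: 7, 3: 15, 4: 27, 5: 48, 6: 80, 7: 140, 8: 240, 9: 400}

def ak_in_nanotesla(k_index, k9_lower_limit_nT):
    """Equivalent amplitude in nT for one station: ak times the station factor,
    where the factor is the station's lower limit for K=9 divided by 250."""
    return K_TO_AK[k_index] * (k9_lower_limit_nT / 250.0)

def daily_Ak(eight_ak_values):
    """Daily Ak index: the average of the eight 3-hourly ak indices in a UT day."""
    assert len(eight_ak_values) == 8
    return sum(eight_ak_values) / 8.0

# Boulder and Fredericksburg have a K=9 lower limit of 500 nT, so the factor is 2:
print(ak_in_nanotesla(5, 500))                    # 48 * 2 = 96 nT
print(daily_Ak([3, 7, 15, 27, 27, 15, 7, 3]))     # 13.0
```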
Alfven wave. A transverse wave in magnetized plasma characterized by a change of direction of the magnetic field (rather than a change of intensity).
am index. A mean, 3-hourly "equivalent amplitude" of geomagnetic activity based on standardized K index data from a global network of 23 Northern and Southern Hemisphere stations by the Institut de Physique du Globe de Paris, France; am values are given in units of 1 nT.
Am index. The daily Ak index determined from the eight daily am indices.
An index. The daily Ak index determined from only the Northern Hemisphere stations of the am index network.
anomaly. In typical SESC use, an unexpected response of a spacecraft.
ap index. A mean, 3-hourly "equivalent amplitude" of magnetic activity based on K index data from a planetary network of 11 Northern and 2 Southern Hemisphere magnetic observatories between the geomagnetic latitudes of 46 degrees and 63 degrees by the Institut fur Geophysik at Gottingen, F.R. Germany; ap values are given in units of 2 nT.
Ap index. Formally the daily Ak index, determined from the eight daily ap indices. However, for daily operational uses (since several weeks are required to collect the data and calculate the index), Air Force Space Forecast Center estimates the value of the Ap index by measuring the geomagnetic field in near-real time at several Western Hemisphere magnetometer stations and statistically weighting the data to represent the Gottingen Ap. The value of this estimated Ap index is reported in SESC daily and weekly summaries of geophysical activity.
aphelion. That point on the path of a sun-orbiting object most distant from the center of the sun. Compare perihelion.
apogee. That point on the path of an earth-orbiting satellite most distant from the center of the earth. Compare perigee.
APR. See active prominence region.
AR. See active region.
arcade. A series of magnetic loops, overlying a solar inversion line.
arch filament system (AFS). A system of small, arched linear-absorption features connecting bright, compact plage of opposite polarity. An AFS is a sign of emerging bipolar magnetic flux and possibly rapid or continued growth in an active region.
As index. The daily Ak index determined from only the Southern Hemisphere stations of the am index network.
ASR. See active surge region.
atmospherics. Also known as "sferics," transient radio waves produced by naturally occurring electric discharges (e.g., lightning) in the earth's atmosphere.
AU. The mean distance between the earth and sun, equal to 214.94 solar radii or 1.496E+11m.
aurora. A sporadic, faint visual phenomenon associated with geomagnetic activity that occurs mainly in the high-latitude night sky. Auroras occur within a band of latitudes known as the auroral oval, the location of which is dependent on geomagnetic activity. Auroras are a result of collisions between atmospheric gases and precipitating charged particles (mostly electrons) guided by the geomagnetic field from the magnetotail. Each gas (oxygen and nitrogen molecules and atoms) gives out its own particular color when bombarded, and atmospheric composition varies with altitude. Since the faster precipitating particles penetrate deeper, certain auroral colors originate preferentially from certain heights in the sky. The auroral altitude range is 80 to 1000 km, but typical auroras are 100 to 250 km above the ground; the color of the typical aurora is yellow-green, from a specific transition of atomic oxygen. Auroral light from lower levels in the atmosphere is dominated by blue and red bands from molecular nitrogen and molecular oxygen. Above 250 km, auroral light is characterized by a red spectral line of atomic oxygen. To an observer on the ground, the combined light of these three fluctuating, primary colors produces an extraordinary visual display. Auroras in the Northern Hemisphere are called the aurora borealis or "northern lights." Auroras in the Southern Hemisphere are called aurora australis. The patterns and forms of the aurora include quiescent arcs, rapidly moving rays and curtains, patches, and veils.
auroral electrojet. See electrojet.
auroral oval. An elliptical band around each geomagnetic pole ranging from about 75 degrees magnetic latitude at local noon to about 67 degrees magnetic latitude at midnight under average conditions. It is the locus of those locations of the maximum occurrence of auroras and widens to both higher and lower latitudes during the expansion phase of a magnetic substorm.
autumnal equinox. The equinox that occurs in September. Compare vernal equinox.
B-angle. As viewed from the earth, the heliographic latitude of the center of the solar disk. The center of the solar disk usually does not coincide with the heliographic equator, due to a tilt of the solar axis with respect to the ecliptic. (See Bo under solar coordinates).
Bartels' rotation number. The serial number assigned to 27-day rotation periods of solar and geophysical parameters. Rotation 1 in this sequence was assigned arbitrarily by Bartels to begin in January 1833, and the count has continued by 27-day intervals to the present. (For example, rotation 2000 began on 12 November 1979, rotation 2030 on 30 January 1982.) The 27-day period was selected empirically from the observed recurrence of geo- magnetic activity attributed to co-rotating features on the sun. The sun has an average rotation period (as seen from the earth) of 27.27 days; therefore, solar longitude slowly drifts with respect to the Bartels rate. Compare Carrington longitude.
bipolar magnetic region (bmr). A region of the solar photosphere containing at least two areas of enhanced magnetic fields of opposing polarity.
birefringent filter. An optical device that passes a narrow range of wavelengths near a selected optical wavelength. Used especially in solar telescopes to pass selected lines (usually H alpha) in the Fraunhofer spectrum.
bow shock. A collisionless shock wave in front of the magnetosphere arising from the interaction of the supersonic solar wind with the earth's magnetic field.
bright point. A short-lived brightening of flare intensity, less than ten millionths of the solar hemisphere in area.
bright surge on the disk (BSD). A bright (high temperature) stream of gas (surge) seen against the solar disk. BSDs are often flare related and commonly fan out from the flare site. See also bright surge on the limb, and dark surge on the disk.
bright surge on the limb (BSL). A bright stream of gas (surge) emanating from the chromosphere that moves outward more than 0.15 solar radius above the limb. It may decelerate and return to the sun. Most BSLs assume a linear radial shape but can be inclined and/or fan shaped because they apparently follow magnetic lines of force.
brightness temperature. The equivalent blackbody temperature at a specified wavelength of a uniform source filling the resolution element of the telescope.
BSD. See bright surge on the disk.
BSL. See bright surge on the limb.
burst. A transient enhancement of the solar radio emission, usually associated with an active region or flare.
butterfly diagram. A plot of observed solar active region latitudes vs. time. This diagram, which resembles a butterfly, shows that the average latitude of active region formation drifts from high to low latitudes during a sunspot cycle.
C index. A subjective daily character figure (index) of geomagnetic activity for a single observatory; for each UTC day the figure is 0 for very quiet magnetic conditions, 1 for moderately disturbed conditions, and 2 for severely disturbed conditions.
Carrington longitude. A system of fixed solar longitudes rotating at a uniform synodic period of 27.2753 days (a sidereal period of 25.38 days). Carrington selected the meridian that passed through the ascending node of the sun's equator at 1200 UTC on 1 January 1854 as the original prime meridian. The daily Carrington longitude of the central point of the apparent solar disk is listed (with other solar coordinates) in The Astronomical Almanac published annually by the U.S. Naval Observatory. Compare Bartels' rotation number.
Castelli U. See U burst.
celestial equator. The projection of earth's geographic equator onto the celestial sphere.
celestial sphere. An imaginary spherical shell around the earth and concentric with it.
centimeter burst. A solar radio burst in the centimeter wavelength range (1 to 10 cm or 0.01 to 0.1 m), or 30 000 to 3000 MHz in the frequency range.
central meridian passage (CMP). The rotation of an active region or other feature across the longitude meridian that passes through the apparent center of the solar disk.
CFI. See comprehensive flare index.
chromosphere. The layer of the solar atmosphere above the photosphere and beneath the transition region and the corona. The chromosphere is the source of the strongest lines in the solar spectrum, including the Balmer alpha line of hydrogen and the H and K lines of calcium, and is the source of the red color often seen around the rim of the moon at total solar eclipses.
Ci index. The daily international magnetic character figure formed by taking the arithmetic mean of the C index values from all reporting observatories.
cleft. See cusp.
CMD. Central Meridian Distance. (See solar coordinates).
CME. See coronal mass ejection.
CMP. See central meridian passage.
comprehensive flare index (CFI). A method of evaluating the significance of a complex flare event. The CFI = A + B + C + D + E. The value of each component is given below; a value of zero is assigned if the effect did not occur. The CFI values range from 1 to 17 (non-occurrence gives a zero value); values >10 indicate flares with unusually strong electromagnetic radiation.
A-Originally the importance of ionizing radiation as indicated by the importance of associated SID, scale 1-3; but currently scaled from the x-ray flare class, class C being 1, class M being 2, and class X being 3.
B-Importance of H alpha flare; scale 1-3 (3 includes flare importance classes 3 and 4).
C-Log of 10.7-cm peak radio flux in units of 10E-22 W/sq m/Hz.
D-Effects associated with the dynamic radio spectrum: Type II burst = 1, continuum storm = 2, Type IV burst = 3.
E-Log of 200-MHz flux in same units as C. The CFI was devised and documented by Helen Dodson Prince and Ruth Hedeman at the McMath-Hulbert Observatory.
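To make the CFI arithmetic concrete, here is a minimal sketch following the component definitions above; the event values in the example (an M-class flare of H alpha importance 2, 500 flux units at 10.7 cm, a Type IV burst, and 100 flux units at 200 MHz) are hypothetical.

```python
import math

def cfi(xray_class, halpha_importance, flux_10_7cm, radio_spectrum, flux_200MHz):
    """Comprehensive flare index = A + B + C + D + E, using the component
    definitions above; non-occurring effects contribute zero."""
    a = {"C": 1, "M": 2, "X": 3}.get(xray_class, 0)
    b = min(halpha_importance, 3)            # importance classes 3 and 4 both score 3
    c = math.log10(flux_10_7cm) if flux_10_7cm > 0 else 0   # flux in 10E-22 W/sq m/Hz
    d = {"II": 1, "continuum": 2, "IV": 3}.get(radio_spectrum, 0)
    e = math.log10(flux_200MHz) if flux_200MHz > 0 else 0
    return a + b + c + d + e

print(round(cfi("M", 2, 500, "IV", 100), 1))   # 2 + 2 + 2.7 + 3 + 2 = 11.7
```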
conjugate points. Two points on the earth's surface at opposite ends of a geomagnetic field line.
continuum. Optical radiation arising from broadband emission from the photosphere.
continuum storm (CTM). General term for solar noise lasting for hours and sometimes days, in which the intensity varies smoothly with frequency over a wide range in the meter and decimeter wavelengths.
convection. The bulk transport of plasma (or gas) from one place to another, in response to mechanical forces (for example, viscous interaction with the solar wind) or electromagnetic forces.
Coordinated Universal Time (UTC). By international agreement, the local time at the prime meridian, which passes through Greenwich, England. It was formerly known as Greenwich Mean Time, or sometimes simply Universal Time. There are 24 time zones around the world, labeled alphabetically. The time zone centered at Greenwich has the double designation of A and Z. Especially in the military community, Coordinated Universal Time is often referenced as Z or Zulu Time.
corona. The outermost layer of the solar atmosphere, characterized by low densities (<10E+9 per cubic cm or 10E+15 per cubic m) and high temperatures (>10E+6 K).
coronagraph. An optical device that makes it possible to observe the corona at times other than during an eclipse. A simple lens focuses the sun onto an occulting disk that prevents the light from the solar disk from proceeding farther along the optical path, effectively providing an artificial eclipse.
coronal hole. An extended region of the corona, exceptionally low in density and associated with unipolar photospheric regions having "open" magnetic field topology. Coronal holes are largest and most stable at or near the solar poles, and are a source of high-speed solar wind. Coronal holes are visible in several wavelengths, most notably solar x-rays, but at SESC, coronal holes are determined from solar images in He 1083 nm provided by the Kitt Peak National Solar Observatory.
coronal loops. A typical structure of enhanced corona observed in EUV lines and soft x-rays. They are sometimes related to H alpha loops. Coronal loops represent "closed" magnetic topology.
coronal mass ejection (CME). A transient outflow of plasma from or through the solar corona. CMEs are often but not always associated with erupting prominences, disappearing solar filaments, and flares.
coronal rain (CRN). Material condensing in the corona and appearing to rain down into the chromosphere as observed in H alpha at the solar limb above strong sunspots.
coronal streamer. A large-scale structure in the white-light corona often overlying a principal inversion line in the solar photospheric magnetic fields. (See helmet streamer ).
coronal transients. A general term for short-time-scale changes in the corona.
corrected geomagnetic coordinates. A nonspherical coordinate system based on a magnetic dipole axis that is offset from the earth's center by about 450 km toward a location in the Pacific Ocean (15.6 N 150.9 E). This "eccentric dipole" axis intersects the surface at 81N 85 W, and 75 S 120 E.
cosmic noise. The broad spectrum of radio noise arriving at the earth from sources outside the solar system.
cosmic ray. An extremely energetic (relativistic) charged particle primarily originating outside the earth's magnetosphere.
Cp index. A daily index of geomagnetic activity analogous to the Ci index, obtained from the sum of the eight daily values of the ap index. The range of Cp is 0.0 to 2.5, 2.5 representing the most disturbed.
critical frequency. In ionospheric radio propagation, that frequency capable of penetration just to the layer of maximum ionization with vertical propagation. Radiowaves of lower frequencies are refracted back to the ground; higher frequencies pass through.
CRN. See coronal rain.
crochet. A sudden deviation in the sunlit geomagnetic field H component (see geomagnetic elements ) associated with extraordinary solar flare x-ray emission. The effect can be as much as 50 nT and last up to 30 minutes. The event is also known as an SFE (solar flare effect).
CTM. See continuum storm.
cusp(s). In the magnetosphere, two regions near magnetic local noon and approximately 15 degrees of latitude equatorward of the north and the south magnetic poles. The cusps mark the division between geomagnetic field lines on the sunward side (which are approximately dipolar but somewhat compressed by the solar wind ) and the field lines in the polar cap that are swept back into the magnetotail by the solar wind. The term cusp implies conical symmetry around the axis of the bundle of converging (Northern Hemisphere) or diverging (Southern Hemisphere) field lines. In practice, "cusp" and "cleft" are often used interchangeably. However, "cleft" implies greater extension in longitude (local time) and hence a wedge-shaped structure.
D component of the geomagnetic field. See geomagnetic elements.
D region. A daytime region of the earth's ionosphere extending from approximately 40 km to 90 km altitude. Radiowave absorption in layers in this region can be significantly increased in response to increased ionization associated with solar activity.
dark surge on the disk (DSD). Dark gaseous ejections on the sun visible in H alpha. They usually originate from small subflare-like brightenings. Material is usually seen to be ejected, to decelerate at a gravitational rate, and to flow back to the point of origin. DSDs can occur intermittently for days from an active region.
dB (decibel). A unit used to express the ratio between two levels of power. By definition dB = 10 log (P2/P1). (Doubling the power ratio is approximately an increase of 3 dB.)
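A two-line sketch of that definition, with the doubling example worked out:

```python
import math

def decibels(p2, p1):
    """Power ratio expressed in decibels: dB = 10 * log10(P2 / P1)."""
    return 10 * math.log10(p2 / p1)

print(decibels(2, 1))    # doubling the power: ~3.01 dB
print(decibels(100, 1))  # a hundredfold increase: 20 dB
```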
DB. disparition brusque. See disappearing solar filament.
declination. (1) The angular distance of an astronomical body north (+) or south (-) of the celestial equator. (2) In geomagnetic applications, the angle between true north and the horizontal component of the local geomagnetic field.
differential charging. The charging of different areas of a spacecraft or satellite to different potentials in response to sunlight, the charged particle environment, and the design and composition of the structural materials themselves. Discharge may occur through arcing and generally is detrimental.
differential particle flux. The differential particle directional flux j (E,w ) denotes the number of particles of energy E per unit energy interval, per unit area, per unit time, per unit solid angle of observation, passing through an area perpendicular to the viewing direction; the angle w is the angle between the viewing direction and the local magnetic field. It is approximately obtained from the count rate of a physical detector measuring the flux of particles between energy E and E +dE, geometric factor G, and solid angle of view dW through the relationship
j(E,w) = C/(G * dE * dW * dt),
where C is the number of detector counts in time dt.
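A minimal sketch of that relationship; the detector numbers are hypothetical and the units of the result simply follow the units of the inputs.

```python
def differential_flux(counts, geometric_factor, dE, d_omega, dt):
    """j(E, w) = C / (G * dE * dW * dt): particles per unit area, per unit
    energy interval, per unit solid angle, per unit time."""
    return counts / (geometric_factor * dE * d_omega * dt)

# Hypothetical channel: 1200 counts in 10 s, geometric factor 0.05 sq cm,
# field of view 0.3 sr, energy channel width 0.5 MeV.
print(differential_flux(1200, geometric_factor=0.05, dE=0.5, d_omega=0.3, dt=10.0))
# -> 16000.0 particles per (sq cm * MeV * sr * s)
```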
differential rotation. The change in solar rotation rate with latitude. Low latitudes rotate at a faster angular rate (approx. 14 degrees/day) than do high latitudes (approx. 12 degrees/day).
dip. The geomagnetic inclination angle. See geomagnetic elements.
dip equator. An irregular, imaginary line around the earth where the geomagnetic inclination angle is measured to be zero. It lies near the geographic equator.
disappearing solar filament (DSF). A solar filament (prominence) that disappears suddenly (on a time scale of minutes to hours). The prominence material is often seen to ascend but is also seen to fall into the sun or just fade. (Historically, DSFs have been called disparitions brusques because they were first studied by French astronomers.) DSFs are a possible indicator of coronal mass ejections.
disk. The visible surface of the sun (or any heavenly body) projected against the sky.
disparition brusque (DB). See disappearing solar filament.
Doppler shift. A change in the perceived frequency of a radiated signal caused by motion of the source relative to the observer.
dose rate. The rate at which radiation energy is absorbed in living tissue, expressed in centisieverts per unit time.
DSD. See dark surge on the disk.
DSF. See disappearing solar filament.
Dst index. A measure of variation in the geomagnetic field due to the equatorial ring current. It is computed from the H-components at approximately four near-equatorial stations at hourly intervals. At a given time, the Dst index is the average of variation over all longitudes; the reference level is set so that Dst is statistically zero on internationally designated quiet days. An index of -50 or deeper indicates a storm-level disturbance, and an index of -200 or deeper is associated with middle- latitude auroras. Dst is determined by the World Data Center C2 for Geomagnetism, Kyoto University, Kyoto, Japan.
E region. A daytime region of the earth's ionosphere roughly between the altitudes of 90 and 160 km. E region characteristics (electron density, height, etc.) depend on the solar zenith angle and solar activity. The ionization in the E layer is caused mainly by x-rays in the range 0.8 to 10.4 nm. (See also sporadic E.)
eccentric dipole. See corrected geomagnetic coordinates.
eclipse. The obscuring of one celestial body by another.
(1) A Solar Eclipse occurs when the moon comes between the earth and the sun. In a total eclipse, the solar disk is completely obscured; in a partial eclipse the solar disk is only partly obscured. An annular eclipse occurs when the moon is near its apogee and the apparent diameter of the moon is less than that of the sun so that the sun is never completely obscured. "First and last contacts" are defined as the times of tangency of the solar and lunar disks. A central eclipse (which can be total or annular) has two additional times of tangency: "second contact," when maximum eclipse begins, and "third contact," when it ends. The last glimpses of the sun through the lunar valleys, just before second contact, are known as Baily's beads.
(2) A lunar eclipse occurs when the moon enters the shadow cast by the earth.
(3) Spacecraft in the earth's shadow are said to be in eclipse.
ecliptic. The great circle made by the intersection of the plane of the earth's orbit with the celestial sphere. (Less properly, the apparent path of the sun around the sky during the year.)
EFR. See emerging flux region.
EHF. See extremely high frequency.
electrojet. (1) Auroral: A current that flows in the ionosphere in the auroral zone. (2) Equatorial: A thin electric current layer in the ionosphere over the dip equator at about 100 to 115 km altitude.
electrostatic discharge (ESD). An abrupt equalization of electric potentials. In space, ESD can occur between objects or portions of a single object (see differential charging ); ESD may occur locally within a dielectric or cable. The consequences may include material damage, a spacecraft anomaly, phantom command s, disrupted telemetry, and contaminated data.
ELF. See extremely low frequency.
emerging flux region (EFR). An area on the sun where new magnetic flux is erupting. An EFR is a bipolar magnetic region that first produces a small bipolar plage visible in the chromosphere, which may develop an arch filament system and the initial spots of a sunspot group. An EFR may be isolated from other solar activity or may occur within an active region.
emission line. In spectroscopy, a particular wavelength of emitted radiation, more intense than the background continuum.
emission measure. The integral of the square of the electron density over volume; the units are inverse volume (per cubic m).
ephemeris. An astronomical almanac listing solar coordinates and the positions of the sun and other heavenly bodies at regular intervals in time.
EPL. See eruptive prominence on limb.
equatorial electrojet. See electrojet.
equinox. One of the two points of intersection of the celestial equator and the ecliptic. The sun passes through the vernal equinox on about 21 March and through the autumnal equinox on about 22 September.
eruptive. With regard to solar flare predictions, a probability of >50% that an active region will produce C class x-ray flares. (See x-ray flare class.)
eruptive prominence on limb (EPL). A solar prominence that becomes activated and is seen to ascend from the sun; sometimes associated with a coronal mass ejection. (See also disappearing solar filament).
ESD. See electrostatic discharge.
estimated hemispherical power input. For the earth, an estimate made from NOAA/TIROS particle measurements of the instantaneous power dissipated daily in a single auroral zone by auroral particle precipitation. The power ranges from approximately 5 gigawatts during quiet intervals up to more than 100 gigawatts in very active times. The magnitude of this power input corresponds closely to the level of geomagnetic activity.
EUV. See extreme ultraviolet.
Evershed effect. Horizontal motion of the solar atmosphere near a sunspot, having velocities of a few kilometers per second. In the photosphere, matter streams away from the umbra. In the chromosphere, the direction of flow is toward the umbra.
exosphere. The earth's atmosphere above 500-600 km.
expert system. A computer program intended to simulate human logic for analyzing a complex situation on the basis of a sequence of behavior rules supplied by a human expert. (See Theophrastus).
extraordinary mode. One of the two modes of propagation of electromagnetic waves in a magnetic plasma. For propagation along the direction of the magnetic field, it is the mode in which the electric vector rotates in the same sense that an electron gyrates freely about the field. For propagation perpendicular to the magnetic field, the electric vector oscillates perpendicular to the primary magnetic field. (See also ordinary mode.)
extreme ultraviolet (EUV). A portion of the electromagnetic spectrum from approximately 10 to 100 nm.
extremely high frequency (EHF). That portion of the radio frequency spectrum from 30-300 GHz.
extremely low frequency (ELF). That portion of the radio frequency spectrum from 30 to 3000 Hz.
F corona. Of the white-light corona (the corona seen by the eye at a total solar eclipse ), that portion which is caused by sunlight scattered or reflected by solid particles (dust) in interplanetary space. The same phenomenon produces zodiacal light.
F region. The upper region of the ionosphere, above approximately 160 km altitude. F region electron densities are highly variable, depending on the local time, solar activity, season, and geomagnetic activity. The F region contains the F1 and F2 layers. The F2 layer is more dense and peaks at altitudes between 200 and 600 km. The F1 layer, which forms at lower altitudes in the daytime, has a smaller peak in electron density.
f-spot. See follower spot.
facula. White-light plage: a bright region of the photosphere seen in white light, seldom visible except near the solar limb. Corresponds with concentrated magnetic fields that may presage sunspot formation.
fibril. A linear feature in the H alpha chromosphere of the sun, occurring near strong sunspots and plage or in filament channels. Fibrils parallel strong magnetic fields, as if mapping the field direction.
filament. A mass of gas suspended over the chromosphere by magnetic fields and seen as dark ribbons threaded over the solar disk. A filament on the limb of the sun seen in emission against the dark sky is called a prominence. Filaments occur directly over magnetic-polarity inversion lines, unless they are active.
filament channel. A broad pattern of fibrils in the chromosphere, marking a portion of a magnetic polarity inversion line where a filament may soon form or where a filament recently disappeared. Filament channels interconnect separate filaments and active regions on a common inversion line.
flare. A sudden eruption of energy in the solar atmosphere lasting minutes to hours, from which radiation and particles are emitted. Flares are classified on the basis of area at the time of maximum brightness in H alpha.
Importance 0 (Subflare): < = 2.0 hemispheric square degrees
Importance 1: 2.1-5.1 square degrees
Importance 2: 5.2-12.4 square degrees
Importance 3: 12.5-24.7 square degrees
Importance 4: > = 24.8 square degrees
[One square degree is equal to (1.214 x 10E+4 km)squared = 48.5 millionths of the visible solar hemisphere.] A brightness qualifier F, N, or B is generally appended to the importance character to indicate faint, normal, or brilliant (for example, 2B).
fluence. Time-integrated flux. In SESC use, a specified particle flux accumulated over 24 hours.
flux. The rate of flow of a physical quantity through a reference surface.
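Referring back to the flare importance scale above, a minimal classification sketch might look like the following; the helper names are illustrative, and the area thresholds and the 48.5-millionths conversion factor are taken from the scale and note above.

```python
def flare_importance(area_square_degrees):
    """Classify an H-alpha flare by its corrected area at maximum brightness."""
    if area_square_degrees <= 2.0:
        return 0            # subflare
    if area_square_degrees <= 5.1:
        return 1
    if area_square_degrees <= 12.4:
        return 2
    if area_square_degrees <= 24.7:
        return 3
    return 4

def square_degrees_to_millionths(area_square_degrees):
    """1 square degree = 48.5 millionths of the visible solar hemisphere."""
    return 48.5 * area_square_degrees

print(flare_importance(8.0), square_degrees_to_millionths(8.0))   # 2, 388.0 millionths
```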
fmin. The lowest frequency at which echo traces are observed on an ionogram. It increases with increasing D region absorption.
foEs. The maximum ordinary mode radiowave frequency capable of vertical reflection from the sporadic E layer of the ionosphere.
foF2. The maximum ordinary mode radiowave frequency capable of vertical reflection from the F2 layer of the ionosphere. (See F region. )
follower spot. In a magnetically bipolar or multipolar sunspot group, the main spot in that portion of the group east of the principal inversion line is called the follower or f-spot. Leader and follower describe the positions of spots with respect to apparent motion due to solar rotation. (Compare leader spot.)
Forbush decrease. An abrupt decrease, of at least 10%, of the background galactic cosmic ray intensity as observed by neutron monitors. It is associated with major plasma and magnetic field enhancements in the solar wind at or beyond the earth.
Fraunhofer spectrum. The system of dark lines superposed on the continuous solar spectrum formed by the absorption of photons by atoms and molecules in the solar and terrestrial atmospheres.
gamma rays. High-energy radiation (energies in excess of 100 keV) observed during large, extremely energetic solar flares.
GEOALERT. An IUWDS special message summarizing by code the current and predicted levels of solar activity and geomagnetic activity.
geocorona. The outer region of the earth's atmosphere lying above the thermosphere and composed mostly of hydrogen.
geomagnetic activity. Natural variations in the geomagnetic field classified quantitatively into quiet, unsettled, active, and geomagnetic storm levels according to the observed a index:
Category Range of index
quiet 0 - 7
unsettled 8 - 15
active 16 - 29
minor storm 30 - 49
major storm 50 - 99
severe storm 100 - 400
geomagnetic elements. The components of the geomagnetic field at the surface of the earth. These elements are usually denoted thus in the literature:
X-the geographic northward component
Y-the geographic eastward component
Z-the vertical component, reckoned positive downward
H-the horizontal intensity, of magnitude sq rt((X)squared + (Y)squared)
F-the total intensity sq rt((H)squared + (Z)squared)
I-the inclination (or dip) angle, arctan (Z/H)
D-the declination angle, measured from the geographic north direction to the H component direction, positive in an eastward direction. D = arctan (Y/X)
However, in SESC use, the geomagnetic northward and geomagnetic eastward components are called the H and D components. The H axis direction is defined by the mean direction of the horizontal component of the field; the D component is expressed in nanoteslas and is related to the direction of the horizontal component relative to geomagnetic north by using the small-angle approximation. Thus the D component = H (the horizontal intensity) multiplied by delta D (the declination angle relative to geomagnetic north, expressed in radians).
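The relations above translate directly into code. The sketch below is illustrative only; the sample component values are made-up mid-latitude numbers in nanoteslas, and angles are returned in degrees.

```python
import math

def geomagnetic_elements(X, Y, Z):
    """Derive H, F, I, and D from the geographic northward (X), eastward (Y),
    and downward vertical (Z) components, per the definitions above."""
    H = math.hypot(X, Y)                    # horizontal intensity
    F = math.hypot(H, Z)                    # total intensity
    I = math.degrees(math.atan2(Z, H))      # inclination (dip) angle
    D = math.degrees(math.atan2(Y, X))      # declination, positive eastward
    return H, F, I, D

# Hypothetical mid-latitude field components (nT):
H, F, I, D = geomagnetic_elements(X=20000.0, Y=1500.0, Z=45000.0)
print(f"H={H:.0f} nT, F={F:.0f} nT, I={I:.1f} deg, D={D:.1f} deg")
```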
geomagnetic field. The magnetic field in and around the earth. The intensity of the magnetic field at the earth's surface is approximately 32,000 nT at the equator and 62,000 nT at the north pole (the place where a compass needle points vertically downward). The geomagnetic field is dynamic and undergoes continual slow secular changes as well as short-term disturbances (see geomagnetic activity). The geomagnetic field can be approximated by a centered dipole field, with the axis of the dipole inclined to the earth's rotational axis by about 11.5 degrees. Geomagnetic dipole north is near geographic coordinate 78.3 N 69 W (Thule, Greenland), and dipole south is near 79 S 110 E (near Vostok, Antarctica). The observed or dip poles, where the magnetic field is vertical to the earth's surface, are near 76 N 101 W, and 66 S 141 E. The adopted origin of geomagnetic longitude is the meridian passing through the geomagnetic poles (dipole model) and the geographic south pole. (See also corrected geomagnetic coordinates.)
geomagnetic storm. A worldwide disturbance of the earth's magnetic field, distinct from regular diurnal variations. A storm is precisely defined as occurring when the daily Ap index exceeds 29. (See geomagnetic activity ).
Initial Phase: Of a geomagnetic storm, that period when there may be an increase of the middle-latitude horizontal intensity (H) (see geomagnetic elements ) at the surface of the earth. The initial phase can last for hours (up to a day), but some storms proceed directly into the main phase without showing an initial phase.
Main Phase: Of a geomagnetic storm, that period when the horizontal magnetic field at middle latitudes is generally decreasing, owing to the effects of an increasing westward-flowing magnetospheric ring current. The northward component can be depressed as much as several hundred nanoteslas in intense storms. The main phase can last for hours, but typically lasts less than 1 day.
Recovery Phase: Of a geomagnetic storm, that period when the depressed northward field component returns to normal levels. Recovery is typically complete in one to two days, but can take longer.
geomagnetic storm levels. The storm levels reported by NOAA, based on the estimated 3-hourly Planetary K-indices that are derived in real time from a network of Western Hemisphere ground-based magnetometers. The geomagnetic storm levels are determined from the Planetary K indices as follows:
G1: K = 5
G2: K = 6
G3: K = 7
G4: K = 8
G5: K = 9
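A minimal mapping of that scale, for illustration only:

```python
def noaa_g_scale(kp):
    """Map an estimated planetary K index to the NOAA geomagnetic storm level."""
    return {5: "G1", 6: "G2", 7: "G3", 8: "G4", 9: "G5"}.get(kp, "below storm level")

print(noaa_g_scale(4), noaa_g_scale(7))   # below storm level G3
```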
geomagnetic time. See magnetic local time.
geosynchronous. Term applied to any equatorial satellite with an orbital period equal to the rotational period of the earth. The geosynchronous orbit lies near 6.6 earth radii from the earth's center (approximately 36 000 km above the earth's surface). To be geostationary as well, the satellite must satisfy the additional restriction that its orbital inclination be exactly zero degrees. The net effect is that a geostationary satellite is virtually motionless with respect to an observer on the ground.
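The quoted 6.6 earth radii and ~36 000 km figures can be recovered from Kepler's third law; the constants below (the earth's gravitational parameter, sidereal day, and mean radius) are standard values assumed here, not taken from the glossary.

```python
import math

GM_EARTH = 3.986004e14       # m^3 s^-2, standard gravitational parameter of the earth
SIDEREAL_DAY = 86164.1       # s
EARTH_RADIUS = 6.371e6       # m, mean radius

# Kepler's third law: a^3 = GM * T^2 / (4 * pi^2)
a = (GM_EARTH * SIDEREAL_DAY**2 / (4 * math.pi**2)) ** (1.0 / 3.0)
print(f"orbital radius = {a / EARTH_RADIUS:.2f} earth radii")       # ~6.6
print(f"altitude       = {(a - EARTH_RADIUS) / 1000:.0f} km")       # ~35 800 km
```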
GLE. See ground-level event.
GMT. Greenwich Mean Time. (See Coordinated Universal Time.)
GPS. Global Positioning System: a network of earth-orbiting satellites used for precise position-finding in surveying and navigation.
gradual commencement. The commencement of a geomagnetic storm that has no well-defined onset. (See also sudden commencement.)
granulation. Cellular structure of the photosphere visible at high spatial resolution. Individual granules, which represent the tops of small convection cells, are 200 to 2000 km in diameter and have lifetimes of 8 to 10 minutes.
Greenwich Mean Time (GMT). See Coordinated Universal Time.
green line. A coronal emission line at 530.3 nm from Fe XIV (an iron atom from which 13 electrons have been stripped). The green line is one of the strongest (and first-recognized) visible coronal lines. It identifies moderate-temperature regions of the corona ; it is enhanced in coronal streamers above inversion lines, and diminished in coronal holes.
ground-level event (GLE). A sharp increase in ground-level cosmic ray count to at least 10% above background, associated with solar protons of energies greater than 500 MeV. GLEs are relatively rare, occurring only a few times each solar cycle. When they occur, GLEs begin a few minutes after flare maximum and last for a few tens of minutes to hours. Intense particle fluxes at lower energies can be expected to follow this initial burst of relativistic particles. GLEs are detected by neutron monitors, e.g., the monitor at Thule, Greenland.
H component (of the geomagnetic field ). See geomagnetic elements.
H alpha. The first atomic transition in the hydrogen Balmer series; wavelength = 656.3 nm. This absorption line of neutral hydrogen falls in the red part of the visible spectrum and is convenient for solar observations. The H alpha line is universally used for patrol observations of solar flares, filaments, prominences, and the fine structure of active regions.
Hale boundary. A large-scale magnetic inversion line of a particular magnetic orientation in the solar photosphere or across a sector boundary in the solar wind. If the polarity of the western (leading) side of the boundary is the same as that of the nearer solar pole at the start of a sunspot cycle, the boundary is said to be "Hale." If the polarity is opposite, the boundary is "anti-Hale." At the beginning of Cycle 22 (1987), the northern solar pole was negative; therefore, in the northern hemisphere a Hale boundary separates a leading negative polarity region from a following positive one. The boundary between the leader spot and follower spot of a typical sunspot group in either hemisphere is a Hale boundary.
heliographic. Referring to coordinates on the solar surface referenced to the solar rotational axis.
heliopause. The boundary surface between the solar wind and the external galactic medium.
heliosphere. The magnetic cavity surrounding the sun, carved out of the galaxy by the solar wind.
helmet streamer. A feature of the white light corona (seen in eclipse or with a coronagraph) that looks like a ray extending away from the sun out to about 1 solar radius, having an arch-like base containing a cavity usually occupied by a prominence.
hemispherical power input (HPI). See estimated hemispherical power input.
HF. See high frequency.
high frequency (HF). That portion of the radio frequency spectrum between 3 and 30MHz.
high latitude. With reference to zones of geomagnetic activity, 50 degrees to 80 degrees geomagnetic latitude. The other zones are equatorial, polar, and middle latitude.
high-speed stream. A feature of the solar wind having velocities exceeding approximately 600 km/s (about double average solar wind values). High-speed streams that originate in coronal holes are less dense than those originating in the average solar wind.
homologous flares. Solar flares that occur repetitively in an active region, with essentially the same position and with a common pattern of development.
Hyder flare. A filament -associated two-ribbon flare, often occurring in spotless regions. The flare is generally slow (30-60 minutes rise time in H alpha and x-ray) and follows the disappearance of a quiescent filament. The flare presumably results from the impact on the chromosphere of infalling filament material. The Hyder flare is named for Dr. C. Hyder, who published studies of such flares in 1967.
IMF. See interplanetary magnetic field.
inclination of the geomagnetic field. The angle between the local geomagnetic field direction and the horizon. (See geomagnetic elements.)
initial phase. See geomagnetic storm.
integral particle flux. The integral directional particle flux J(E,w ) is literally the mathematical integral, with respect to the energy E, of the differential particle flux j (E,w ). It denotes the number of particles of energy equal to or greater than E, per unit area, per unit solid angle, per unit time, passing through an area perpendicular to the viewing direction.
interplanetary magnetic field (IMF). The magnetic field carried with the solar wind.
INTERMAGNET. An international consortium of magnetic observatories that exchange data in near-real time by satellite relay.
invariant magnetic latitude. The geomagnetic latitude at which a particular line of force of the geomagnetic field, characterized by L (the altitude of the field line at the equator), intersects the earth. The relationship is given by L (cos lat)squared = 1, where lat is the invariant latitude, L is expressed in earth radii, and the geomagnetic field is approximated by a dipole model.
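A small sketch of that relation, solving it for the latitude; the L values in the example are arbitrary illustrations.

```python
import math

def invariant_latitude_deg(L):
    """Invariant magnetic latitude from L * cos(lat)^2 = 1 (dipole approximation,
    L in earth radii)."""
    return math.degrees(math.acos(math.sqrt(1.0 / L)))

for L in (2, 4, 6.6):
    print(f"L = {L}: invariant latitude ~ {invariant_latitude_deg(L):.1f} degrees")
```

For example, L = 4 gives an invariant latitude of 60 degrees, and the geosynchronous distance of about 6.6 earth radii maps to roughly 67 degrees.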
inversion line. The locus of points on the solar surface where the radial magnetic field vanishes. Inversion lines separate regions of opposing polarity and are often superposed by thin, dark filaments, which can be used as tracers. Inside active regions, the areas close to and along inversion lines are preferred places of flare occurrence. Filament channels, plage corridors, arch-filament systems, and fibril patterns surrounding active regions can be used to infer the positions of inversion lines.
ion-acoustic waves. Longitudinal waves in a plasma similar to sound waves in a neutral gas. Amplitudes of electron and ion oscillations are not quite the same, and the resulting Coulomb repulsion provides the potential energy to drive the waves.
ionogram. A plot (record) of the group path height of reflection of ionospherically returned (echoed) radio waves as a function of frequency.
ionosphere. The region of the earth's upper atmosphere containing free electrons and ions produced by ionization of the constituents of the atmosphere by solar ultraviolet radiation at very short wavelengths (<100 nm) and energetic precipitating particles. The ionosphere influences radiowave propagation of frequencies less than about 300 MHz. (See D region, E region, F region.)
ionospheric storm. A disturbance in the F region of the ionosphere, which occurs in connection with geomagnetic activity. In general, there are two phases of an ionospheric storm, an initial increase in electron density (the positive phase) lasting a few hours, followed by a decrease lasting a few days. At low latitudes only the positive phase is usually seen. Individual storms can vary, and their behavior depends on geomagnetic latitude, season, and local time. The phases of an ionospheric storm are not related to the initial and main phases of a geomagnetic storm.
K (kelvin). A unit of absolute temperature. One kelvin is equal to 1 degree C, but zero on the kelvin scale corresponds to absolute zero (-273.15 degrees C).
K corona. Of the white-light corona (that is, the corona seen by the eye at a total solar eclipse), that portion which is caused by sunlight scattered by electrons in the hot outer atmosphere of the sun. This is the "true" corona. Coronagraphs are specifically constructed to separate the K corona from the F corona.
K index. A 3-hourly quasi-logarithmic local index of geomagnetic activity relative to an assumed quiet-day curve for the recording site. Range is from 0 to 9. The K index measures the deviation of the most disturbed horizontal component (see geomagnetic elements ).
Kelvin-Helmholtz instability. A mechanism often invoked to explain phenomena at the magnetopause (and sometimes the plasmapause), especially the observed magnetic pulsations.
Km index. A 3-hourly planetary index of geomagnetic activity calculated by the Institut de Physique du Globe de Paris, France, from the K indices observed at a large, symmetrically located network of stations. The Km indices are used to determine the am indices.
Kp index. A 3-hourly planetary index of geomagnetic activity calculated by the Institut fur Geophysik der Gottingen Universitat, F.R. Germany, from the K indices observed at 13 stations primarily in the Northern Hemisphere. The Kp indices, which date from 1932, are used to determine the ap indices.
L. Heliographic longitude of a solar feature. (See solar coordinates.)
latchup. With reference to the effect of energetic particles on spacecraft microcircuits, a serious type of single event upset in which the microcircuit is either permanently stuck or cannot be reset without being turned off and on.
LDE. See long duration (or decay) event.
leader spot. In a magnetically bipolar or multipolar sunspot group, the main spot in that portion of the group west of the principal inversion line ; also called the preceding or p-spot. Leader and follower describe the positions of spots with respect to apparent motion due to solar rotation. (Compare follower spot.)
LEO. Among satellite operators, a common abbreviation for Low Earth Orbit.
LET. See linear energy transfer.
LF. See low frequency.
light bridge. Observed in white light, a bright tongue or streaks penetrating or crossing sunspot umbrae. Light bridges typically develop slowly and have lifetimes of several days. The appearance of a light bridge is frequently a sign of impending active region division or dissolution. The more brilliant forms occur with overlying bright plage and often occur during the most active phase of the sunspot group.
light curve. A plot of intensity in a particular wavelength or band of wavelengths against time, especially with reference to a solar flare; for example, the time history of the x-ray output of a flare.
limb. The edge of the solar disk, corresponding to the level at which the solar atmosphere becomes transparent to visible light.
limb darkening. For certain solar spectral lines, a lessening of the intensity of the line from the center of the solar disk to the limb, caused by the existence of a temperature gradient in the sun and the line-of-sight through the solar atmosphere.
limb flare. A flare at the edge (limb) of the solar disk ; the elevated portions of the flare are seen with particular clarity against the dark sky background.
linear energy transfer (LET). The energy per unit path length that an ionizing particle loses to the medium through which it is traveling. The greater the LET, the more damaging the particle.
lobes. In the magnetotail, the two regions (north and south) separated by the neutral sheet.
long duration ( or decay) event (LDE). With reference to x-ray events, those events that are not impulsive in appearance. The exact time threshold separating impulsive from long-duration events is not well defined, but operationally, any event requiring hours (1 or more) to return to background levels would probably be regarded as an LDE. It has been shown that the likelihood of a coronal mass ejection increases with the duration of an x-ray event, and becomes virtually certain for durations of 6 hours or more.
longitudinal component. That component of the magnetic field vector parallel to the line of sight; it is radial from the solar surface at disk center.
loop prominence system (LPS). A system of prominences in the form of loops associated with major flares, bridging the magnetic inversion line. The lifetime of an LPS is a few hours.
Loop prominences observed in H alpha are distinctly brighter than other prominences, and material typically flows downward along both legs from condensation "knots" near the top of the loop. LPSs show a high correlation with proton flares.
low frequency (LF). That portion of the radio frequency spectrum from 30 to 300 kHz.
lowest usable frequency (LUF). The lowest frequency that allows reliable long-range HF radio communication by ionospheric refraction.
LPS. See loop prominence system.
LUF. See lowest usable frequency.
M(3000). The ratio of the maximum frequency reflected once from an ionospheric layer over a 3000-km range to the critical frequency of the layer.
magnetic bay. A relatively smooth excursion of the H (horizontal) component (see geomagnetic elements ) of the geomagnetic field away from and returning to quiet levels. Bays are "positive" if H increases and "negative" if H decreases.
magnetic cloud. In general, any identifiable parcel of solar wind. More specifically, a region of about 0.25 AU in radial dimension in which the magnetic field strength is high and the direction of one component of the magnetic field changes appreciably by means of a rotation nearly parallel to a plane. Magnetic clouds may be one manifestation of coronal mass ejections in the interplanetary medium.
magnetic local time (MLT). On earth, analogous to geographic local time; MLT at a given location is determined by the angle subtended at the geomagnetic axis between the geomagnetic midnight meridian and the meridian that passes through the location. 15 degrees = 1 h. The geomagnetic meridian containing the sub-solar point defines geomagnetic local noon, and the opposite meridian defines geomagnetic midnight. (See geomagnetic field.)
magnetic sunspot classifications. See Mount Wilson magnetic classification.
magnetogram. A plot showing the amplitude of one or more vector components of a magnetic field versus space or time. Solar magnetograms are a graphic representation of solar magnetic field strengths and polarity.
magnetohydrodynamics (MHD). The study of the dynamics of an electrically conducting fluid in the presence of a magnetic field.
magnetopause. The boundary surface between the solar wind and the magnetosphere, where the pressure of the magnetic field of the object effectively equals the dynamic pressure of the solar wind.
magnetopause current sheet. An electric current sheet that more or less coincides with the magnetopause.
magnetosheath. The region between the bow shock and the magnetopause, characterized by very turbulent plasma. For the earth, along the sun-earth axis, the magnetosheath is about 2 earth radii thick.
magnetosphere. The magnetic cavity surrounding a magnetized body, carved out of the passing solar wind by virtue of the magnetic field, which prevents, or at least impedes, the direct entry of the solar wind plasma into the cavity.
magnetotail. The extension of the magnetosphere in the antisunward direction as a result of interaction with the solar wind. In the inner magnetotail, the field lines maintain a roughly dipolar configuration. But at greater distances in the antisunward direction, the field lines are stretched into northern and southern lobes, separated by a plasmasheet. There is observational evidence for traces of the earth's magnetotail as far as 1000 earth radii downstream.
MAGSTORM. A telegraphic abbreviation used to denote a geomagnetic storm.
main phase. See geomagnetic storm.
Maunder minimum. An approximately 70-year period, centered near 1670, during which practically no sunspots were observed.
maximum usable frequency (MUF). The highest frequency that allows reliable HF radio communication over a given ground range by ionospheric refraction. Frequencies higher than the MUF penetrate the ionosphere and become useful for extraterrestrial communications.
MDP. See mound prominence.
medium frequency (MF). That portion of the radio frequency spectrum from 0.3 to 3 MHz.
mesosphere. The region of the earth's atmosphere between the upper limit of the stratosphere (approximately 30 km altitude) and the lower limit of the thermosphere (approximately 80 km altitude).
MHD. See magnetohydrodynamics.
micropulsation. See pulsation.
microwave burst. A radiowave signal associated with optical and/or x-ray flares. Microwave bursts occur mostly at centimeter wavelengths (6 cm = 4995 MHz) but are generally broadband, often extending into the millimeter and decimeter domains. (See also U burst.)
microwaves. Generically, any radio frequency of 500 MHz or more.
middle latitude. With reference to zones of geomagnetic activity, 20 degrees to 50 degrees geomagnetic latitude. Other zones are equatorial, polar, and high latitude.
Moreton wave. A wave disturbance (also known as a flare blast wave) generated by large flares, which is seen to propagate horizontally across the disk of the sun at a typical velocity of about 1000 km/s. It is most visible in the wings of the H alpha line. It can cause filaments to erupt as the wave apparently disturbs their supporting magnetic fields.
mound prominence (MDP). H alpha structure at the solar limb that is the elevated top of numerous small surges and/or a dense, low-lying prominence.
Mount Wilson magnetic classification. Classification of the magnetic character of sunspots according to rules set forth by the Mount Wilson Observatory in California:
alpha. A unipolar sunspot group.
beta. A sunspot group having both positive and negative magnetic polarities (bipolar), with a simple and distinct division between the polarities.
gamma. A complex active region in which the positive and negative polarities are so irregularly distributed as to prevent classification as a bipolar group.
beta-gamma. A sunspot group that is bipolar but which is sufficiently complex that no single, continuous line can be drawn between spots of opposite polarities.
delta. A qualifier to magnetic class (see below) indicating that umbrae separated by less than 2 degrees within one penumbra have opposite polarity.
beta-delta. A sunspot group of general beta magnetic classification but containing one (or more) delta spot(s).
beta-gamma-delta. A sunspot group of beta-gamma magnetic classification but containing one (or more) delta spot(s).
gamma-delta. A sunspot group of gamma magnetic classification but containing one (or more) delta spot(s).
multipath. Describing a degraded condition of radio propagation in which the radio wave splits and arrives at the receiver via different paths. Because each path will generally have different lengths, arrival times, and phases, the signal received will suffer fading.
network. (1) Chromospheric: a large-scale brightness pattern in chromospheric (see chromosphere) and transition region spectral lines, which is located at the borders of the photospheric (see photosphere) supergranulation and coincides with regions of local magnetic enhancement. These cellular patterns are typically 3 x 10E+4 km across. (2) Photospheric: a bright pattern that appears in spectroheliograms in certain Fraunhofer spectrum lines. It coincides in gross outline with the chromospheric network.
neutral line. The line that separates solar magnetic fields of opposite polarity, typically determined from solar magnetograms recording the longitudinal magnetic component. Neutral lines are, more properly, inversion lines.
neutron monitor. A ground-based detector that counts secondary neutrons generated by processes originating with the impact of very energetic particles (galactic or solar cosmic rays) on atmospheric molecules and atoms.
nm (nanometer). A unit of length, 10E-9m.
noise storm. A transient enhancement of solar radio emission, particularly at 245 MHz, consisting of an elevated background emission (radiation) and Type I radio bursts.
non-great-circle propagation. Describing a degraded condition of radio propagation caused by horizontal gradients in the ionospheric electron density. The radio wave is refracted away from its normal great-circle path, which is the shortest distance between two points on the earth. Strong horizontal gradients are associated with the equatorward boundary of the auroral oval (especially in the night sector) and the sunrise terminator.
nT (nanotesla ). 10E-9 tesla or 0.000000001 tesla.
ordinary mode. One of the two modes of propagation of electromagnetic waves in a magnetic plasma. For propagation along the direction of the magnetic field, it is the mode in which the electric vector rotates opposite to the direction of an electron gyrating freely about the field. For propagation perpendicular to the magnetic field, the electric vector oscillates parallel to the primary magnetic field. (See also extraordinary mode.)
P-angle. See solar coordinates.
p-spot. See leader spot.
PCA. See polar cap absorption.
particle flux unit (p.f.u.). 1 p/sq cm/s/sr(steradian).
penumbra. The sunspot area that may surround the darker umbra or umbrae. In its mature form it consists of linear bright and dark elements radial from the sunspot umbra.
perigee. That point on the orbit of an earth-orbiting satellite nearest to the earth. Compare apogee.
perihelion. That point on the orbit of a sun-orbiting body nearest to the sun. Compare aphelion.
persistence. Continuation of existing conditions. When a physical parameter varies slowly, the best prediction is often persistence.
p.f.u. See particle flux unit.
phantom command. An apparent (but unintended) spacecraft command caused by the natural environment. (See single event upset or electrostatic discharge.)
photosphere. The lowest visible layer of the solar atmosphere; corresponds to the solar surface viewed in white light. Sunspots and faculae are observed in the photosphere.
pitch angle. In a plasma, the angle between the velocity vector of a charged particle and the direction of the ambient magnetic field.
plage. On the sun, an extended emission feature of an active region that is seen from the time of emergence of the first magnetic flux until the widely scattered remnant magnetic fields merge with the background. Magnetic fields are more intense in plage, and temperatures are higher than in surrounding, quiescent regions.
plage corridor. A low-intensity division in chromospheric (see chromosphere ) plage coinciding with a polarity inversion line and marked by narrow filament segments and/or fibrils spanning the corridor.
plasma. A gas that is sufficiently ionized so as to affect its dynamical behavior.
plasma frequency. The characteristic frequency of free plasma oscillations, determined by the balance between electron kinetic energy and ion Coulomb attraction.
plasmapause. The outer surface of the plasmasphere.
plasmasheet. In the magnetosphere, the core of the magnetotail in which the plasma is hotter and denser than in the tail lobes north and south of it. The plasmasheet is thought to be separated from the tail lobes by the sheet of the "last closed field lines" and it typically lies beyond geosynchronous orbit.
plasmasphere. In the magnetosphere, a region of relatively cool (low energy) and dense plasma that may be considered an outer extension of the ionosphere with which it is coupled. Like the ionosphere, the plasmasphere tends to co-rotate with the earth.
polar cap absorption (PCA). An anomalous condition of the polar ionosphere whereby HF and VHF (3-300 MHz) radiowaves are absorbed, and LF and VLF (3-300 kHz) radiowaves are reflected at lower altitudes than normal. PCAs generally originate with major solar flares, beginning within a few hours of the event and maximizing within a day or two of onset. As measured by a riometer, the PCA event threshold is 2 dB of absorption at 30 MHz for daytime and 0.5 dB at night. In practice, the absorption is inferred from the proton flux at energies greater than 10 MeV, so that PCAs and proton events are simultaneous. However, the transpolar radio paths may still be disturbed for days, up to weeks, following the end of a proton event, and there is some ambiguity about the operational use of the term PCA.
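As an illustrative aside (not part of the original glossary), the day/night riometer thresholds above reduce to a minimal Python check; the function name and the boolean day/night flag are assumptions of this sketch:

```python
def pca_event_in_progress(absorption_db_at_30mhz, is_daytime):
    """Apply the riometer-based PCA event threshold described above:
    2 dB of absorption at 30 MHz during the day, 0.5 dB at night."""
    threshold_db = 2.0 if is_daytime else 0.5
    return absorption_db_at_30mhz >= threshold_db
```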
polar crown. A nearly continuous ring of filaments occasionally encircling either polar region of the sun (latitudes higher than 50 degrees).
polar plumes. Fine, ray-like structures of the solar corona, best observed in the solar polar regions during solar minimum.
polar rain. In the earth's upper atmosphere, a weak, structureless, near-isotropic flux of electrons precipitating into the polar caps.
pore. A feature in the photosphere, 1 to 3 arc seconds in extent, usually not much darker than the dark spaces between photospheric granules. It is distinguished from a sunspot by its short lifetime, 10 to 100 minutes.
post-flare loops. A loop prominence system often seen after a major two-ribbon flare, which bridges the ribbons. Lifetimes are several hours.
preheating. A slow brightening of an active region, both optically and in x-rays, that sometimes precedes moderate and larger solar flare events by some tens of minutes.
PRESTO. An alert issued by a Regional Warning Center to give rapid notification of significant solar or geophysical activity in progress or just concluded.
prominence. A term identifying cloud-like features in the solar atmosphere. The features appear as bright structures in the corona above the solar limb and as dark filaments when seen projected against the solar disk. Prominences are further classified by their shape (for example, mound prominence, coronal rain ) and activity. They are most clearly and most often observed in H alpha.
proton event. The measurement of proton flux reaching and sustaining >= 10 p.f.u. for at least 15 min at energies > 10 MeV by the primary SESC geosynchronous satellite. (See polar cap absorption.) The start time of the event is defined as the earliest time at which event thresholds have been reached. There are two event thresholds, namely p10 and p100 (p10, a proton event reaching 10 p.f.u. at > 10 MeV, and p100, one reaching 100 p.f.u. at > 100 MeV).
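A minimal sketch of the start-time rule above, assuming one flux sample per minute; the function name and the plain list-of-floats input are illustrative, not part of any SESC software:

```python
def proton_event_start(flux_pfu, threshold=10.0, sustain_samples=15):
    """Return the index of the earliest sample at which the proton flux reaches
    the threshold (in p.f.u.) and then stays at or above it for at least
    `sustain_samples` consecutive samples (15 minutes at a 1-minute cadence).
    Returns None if the threshold is never sustained."""
    run_start = None
    for i, flux in enumerate(flux_pfu):
        if flux >= threshold:
            if run_start is None:
                run_start = i          # candidate event start
            if i - run_start + 1 >= sustain_samples:
                return run_start       # threshold sustained long enough
        else:
            run_start = None           # run broken; reset
    return None

# The p10 and p100 thresholds would be checked on the >10 MeV and >100 MeV
# flux series respectively, e.g. proton_event_start(flux_gt10mev, 10.0).
```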
proton flare. Any flare producing significant counts of protons with energies exceeding 10 MeV in the vicinity of the earth.
pulsation. A rapid fluctuation of the geomagnetic field having periods from a fraction of a second to tens of minutes and lasting from minutes to hours. There are two main patterns: Pc (a continuous, almost sinusoidal pattern), and Pi (an irregular pattern). Pulsations occur at magnetically quiet as well as disturbed times. Pc's are grouped, according to their physical and morphological properties, into five categories:
Pc1 - periods 0.2-5 s. May occur in bursts ("pearls"), or in consecutive groups of pulsations with sharply decreasing frequency.
Pc2 - periods 5-10 s. Do not seem to be physically related to Pc1 or Pc3.
Pc3 - periods 10-45 s. Are observed over a wide range of latitudes.
Pc4 - periods 45-150 s. Are also known as Pc II or Pc.
Pc5 - periods 150-600 s. Are sometimes called giant micropulsations.
Q index. A 15-minute index of geomagnetic activity intended for high-latitude (auroral) stations. After quiet diurnal variations are removed, Q is the largest deviation scaled from the undisturbed level for the two horizontal components. (This differs from the K index, which is scaled from the largest relative deviation.) The 15-minute periods are centered on the hour and at 15, 30, and 45 minutes past each hour. The range of Q is from 0 to 11; the upper limit, in nanoteslas, for each index value is given below.
QDC. See quiet day curve.
quiescent prominence. A long, sheet-like prominence nearly vertical to the solar surface. Except in an occasional activated phase, it shows little large-scale motion, develops very slowly, and has a lifetime of several solar rotations. Quiescent prominences form within the remnants of decayed active regions, in quiet areas of the sun between active regions, or at high solar latitudes where active regions seldom or never form. (See filament.)
quiet. A descriptive word specifically meaning (1) a probability of less than 50% for a C-class flare (see x-ray flare class) in a sunspot region; (2) geomagnetic activity levels such that Ak < 8.
quiet day curve (QDC). Especially in connection with the components of the geomagnetic field (see geomagnetic elements), the trace expected in the absence of activity. The K index and Q index are measured from deviations relative to a QDC. Riometer and neutron monitor deviations are also measured relative to a QDC.
R-number. See sunspot number.
radar aurora. Radar returns from electron density irregularities in auroral regions. The strength of radar auroral returns is aspect dependent.
radiation belts. Regions of the magnetosphere roughly 1.2 to 6 earth radii above the equator in which charged particles are stably trapped by closed geomagnetic field lines. There are two belts. The inner belt is part of the plasmasphere and corotates with the earth; its maximum proton density lies near 5000 km. Inner belt protons are mostly high energy (MeV range) and originate from the decay of secondary neutrons created during collisions between cosmic rays and upper atmospheric particles. The outer belt extends out to the magnetopause on the sunward side (10 earth radii under normal quiet conditions) and to about 6 earth radii on the nightside. The altitude of maximum proton density is near 16 000-20 000 km. Outer belt protons are lower energy (about 200 eV to 1 MeV) and come from the solar wind. The outer belt is also characterized by highly variable fluxes of energetic electrons. The radiation belts are often called the "Van Allen radiation belts" because they were discovered in 1958 by a research group at the University of Iowa led by Professor J. A. Van Allen.
radio blackouts. Communication blackouts that are predicted from the x-ray level measured by the primary GOES satellite. These radio blackout levels (R) are related to the peak x-ray level (0.1 to 0.8 nm flux, in W/sq m) as follows:
R1: M1 (10E-05)
R2: M5 (5 x 10E-05)
R3: X1 (10E-04)
R4: X10 (10E-03)
R5: X20 (2 x 10E-03)
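For illustration only, the mapping above can be written as a small Python lookup; the function name, the W/sq m units, and the None return below the R1 threshold are assumptions of this sketch:

```python
def radio_blackout_level(peak_xray_flux_wm2):
    """Map a GOES 0.1-0.8 nm peak x-ray flux (W/sq m) to the R level listed above."""
    thresholds = [
        ("R5", 2e-3),   # X20
        ("R4", 1e-3),   # X10
        ("R3", 1e-4),   # X1
        ("R2", 5e-5),   # M5
        ("R1", 1e-5),   # M1
    ]
    for level, minimum_flux in thresholds:
        if peak_xray_flux_wm2 >= minimum_flux:
            return level
    return None   # below the R1 (M1) threshold
```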
radio burst. See radio emission.
radio emission. Emission of the sun in radio wavelengths from centimeters to dekameters, under both quiet and disturbed conditions. Some patterns, known variously as noise storms, bursts, and sweeps, are identified as described below. These types of emission are subjectively rated on an importance scale of 1 to 3, 3 representing the most intense.
Type I. A noise storm composed of many short, narrow-band bursts in the meter wavelength range (300-50 MHz), of extremely variable intensity. The storm may last from several hours to several days.
Type II. Narrow-band emission (sweep) that begins in the meter range (300 MHz) and sweeps slowly (tens of minutes) toward dekameter wavelengths (10 MHz). Type II emissions occur in loose association with major flares and are indicative of a shock wave moving through the solar atmosphere.
Type III. Narrow-band bursts that sweep rapidly (seconds) from decimeter to dekameter wavelengths (500-0.5 MHz). They often occur in groups and are an occasional feature of complex solar active regions.
Type IV. A smooth continuum of broad-band bursts primarily in the meter range (300-30 MHz). These bursts occur with some major flare events; they begin 10 to 20 minutes after the flare maximum and can last for hours.
Type V. Short-duration (a few minutes) continuum noise in the dekameter range usually associated with Type III bursts.
Rayleigh-Taylor instability. A fluted or ripple-like instability that can develop on a fluid or plasma boundary surface and propagate along it. This instability is often invoked to explain phenomena in the ionosphere and magnetosphere.
reconnection. A process by which differently directed field lines link up, allowing topological changes of the magnetic field to occur, determining patterns of plasma flow, and resulting in conversion of magnetic energy to kinetic and thermal energy of the plasma. Reconnection is invoked to explain the energization and acceleration of the plasma s that are observed in solar flares, magnetic substorms, and elsewhere in the solar system.
recurrence. Used especially to express a tendency of some solar and geophysical parameters to repeat a trend and sometimes the actual value of the parameter itself every 27 days (the approximate rotation period of the sun).
red line. An intense coronal emission line at 637.4 nm from Fe X (an iron atom from which nine electrons have been stripped). It identifies relatively cooler regions of the corona.
region number. A number assigned by SESC to a plage region or sunspot group if one of the following conditions exists: (1) the region is a group of at least sunspot classification C; (2) two or more separated optical reports confirm the presence of smaller spots; (3) the region produces a solar flare; (4) the region is clearly evident in H alpha and exceeds 5 heliographic degrees in either latitude or longitude. (See also active region.)
regression. A functional relationship between two or more correlated variables that is often empirically determined from data and is used especially to predict values of one variable when values of the others are given.
RI. The international standard relative sunspot number.
right ascension. The angular distance measured eastward along the celestial equator from the vernal equinox. It is expressed in hours, minutes, and seconds (the circumference of the celestial equator is defined as 24 hours).
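Since the full 360 degrees of the celestial equator is defined as 24 hours, the unit conversion is fixed; a short worked example (the sample coordinate is arbitrary):

```latex
1^{\mathrm{h}} = 15^{\circ}, \quad 1^{\mathrm{m}} = 15', \quad 1^{\mathrm{s}} = 15'' ;
\qquad
\mathrm{RA} = 6^{\mathrm{h}}\,30^{\mathrm{m}} = 6.5 \times 15^{\circ} = 97.5^{\circ}.
```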
rigidity. A measure of how easily a particle is deflected by a magnetic field, expressed in megavolts (MV) per nucleon. It is the momentum per unit charge. The integral proton spectrum of a flare can be expressed as an exponential function of rigidity rather than a power function of energy.
ring current. In the magnetosphere, a region of current that flows in a disk-shaped region near the geomagnetic equator, in the region of the outer Van Allen radiation belt. The current is produced by the gradient and curvature drift of the trapped charged particles. The ring current is greatly augmented during magnetic storms because of the hot plasma injected from the magnetotail. The ring current causes a worldwide depression of the horizontal geomagnetic field during a magnetic storm.
riometer (Relative Ionospheric Opacity meter). A specially designed ground-level radio receiver for continuous monitoring of cosmic noise. The absorption of cosmic noise in the polar regions is very sensitive to the solar low-energy cosmic ray flux. Absorption events are known as PCA s (polar cap absorption) and are primarily associated with major solar flares.
rudimentary. A type of sunspot penumbra characterized by granular (rather than filamentary) structure, brighter intensity than the umbra, and narrow extent, and possibly only partially surrounding the umbra. Penumbrae are typically rudimentary during the sunspot formative and decay phases.
Satellite Anomaly. The usually undesirable response of spacecraft systems to variations in the space environment. High energy particles cause detector noise and/or physical damage to solar cells, electronics, and memory devices (single event upsets or "bitflips"). Large and varying low-to-medium energy particle fluxes can result in a charge buildup between spacecraft components, especially during the eclipse season and during spacecraft maneuvers. Atmospheric drag on spacecraft below approximately 1,000 km can increase during geomagnetic storms, resulting in cross-track and in-track orbit errors and orientation problems. Various communication interference problems result during solar radio bursts from flares when the Sun is within the field of view of the ground tracking dish. Ionospheric irregularities during geomagnetic storms can cause radio telemetry scintillation and fading.
S-band. Radio frequencies between 1.55 and 5.20 GHz. For satellite communication, the term usually refers to frequencies used for earth-space communication near 2.2 GHz.
S component. The slowly varying (weeks or longer) fluctuation observed in solar radio emission at microwave frequencies (wavelengths from 3 to 100 cm).
SC. See sudden commencement.
scintillation. Describing a degraded condition of radio propagation characterized by a rapid variation in amplitude and/or phase of a radio signal (usually on a satellite communication link) caused by abrupt variations in electron density anywhere along the signal path. It is positively correlated with spread F and to a lesser degree, sporadic E. Scintillation effects are the most severe at low latitudes, but can also be a problem at high latitudes, especially in the auroral oval and over the polar caps.
sector boundary. In the solar wind, the area of demarcation between sectors, which are large-scale features distinguished by the predominant direction of the interplanetary magnetic field, toward the sun (a negative sector), or away from the sun (a positive sector). The sector boundary separating fields of opposite polarity is normally narrow, passing the earth within minutes to hours as opposed to the week or so needed for passage of a typical sector. The solar wind velocities in the boundary region are typically among the lowest observed.
SEU. See single event upset.
SFE. Solar flare effect. (See crochet.)
s.f.u. (solar flux unit). 10E-22 W/sq m/Hz = 10 000 jansky.
SHF. See super high frequency.
shock. A discontinuity in pressure, density, and particle velocity, propagating through a compressible fluid or plasma.
short wave fade (SWF). An abrupt decrease of HF radio signal strength, lasting from minutes to hours, caused by increased day-side ionization from some solar flares. An SWF is one effect under the broad category of sudden ionospheric disturbances (SIDs).
SI. See sudden impulse.
SID. See sudden ionospheric disturbance.
sidereal. Referring to a coordinate system fixed with respect to the distant stars.
simultaneous flares. Unrelated solar flares that occur at nearly the same time. Compare sympathetic flares.
single event upset (SEU). With reference to the effects of energetic particles on spacecraft microcircuits, an unexpected change in the logic state of a single digital bit. SEUs can be either "soft" (the microcircuit is not damaged and can be rewritten to either state), or a latchup, which cannot easily be reset.
smoothed sunspot number. An average of 13 monthly RI numbers, centered on the month of concern. The 1st and 13th months are given a weight of 0.5.
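A minimal Python sketch of this smoothing, assuming the conventional divisor of 12 (the sum of the weights, which the entry above implies but does not state); the names are illustrative:

```python
def smoothed_sunspot_number(monthly_ri, center):
    """13-month smoothed sunspot number centered on index `center` of the
    monthly RI series, with the 1st and 13th months weighted 0.5."""
    window = monthly_ri[center - 6 : center + 7]   # 13 consecutive months
    if center < 6 or len(window) != 13:
        raise ValueError("need six months of data on each side of the center month")
    weighted_sum = 0.5 * window[0] + sum(window[1:12]) + 0.5 * window[12]
    return weighted_sum / 12.0
```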
solar activity. Transient perturbations of the solar atmosphere as measured by enhanced x-ray emission (see x-ray flare class ), typically associated with flares. Five standard terms are used to describe the activity observed or expected within a 24-h period:
Very low - x-ray events less than C-class.
Low - C-class x-ray events.
Moderate - isolated (one to 4) M-class x-ray events.
High - several (5 or more) M-class x-ray events, or isolated (one to 4) M5 or greater x-ray events.
Very high - several (5 or more) M5 or greater x-ray events.
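These descriptors reduce to a simple decision rule; a hedged Python sketch of the list above, where the counting of events into C-class, M-class, and M5-or-greater bins is assumed to have been done already and the names are illustrative:

```python
def solar_activity_descriptor(num_c_events, num_m_events, num_m5_or_greater):
    """Map 24-hour x-ray event counts to the descriptor defined above.
    `num_m_events` counts all M-class and greater events; `num_m5_or_greater`
    counts the subset at M5 or above."""
    if num_m5_or_greater >= 5:
        return "very high"
    if num_m_events >= 5 or num_m5_or_greater >= 1:
        return "high"
    if num_m_events >= 1:
        return "moderate"
    if num_c_events >= 1:
        return "low"
    return "very low"
```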
solar constant. The total radiant energy received vertically from the sun, per unit area per unit of time, at a position just outside the earth's atmosphere when the earth is at its average distance from the sun. Radiation at all wavelengths from all parts of the solar disk is included. Its value is approximately 2.00 cal/sq cm/min = 1.37 kW/sq m and it varies slightly (by approximately 0.1%) from day to day in response to overall solar features.
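As a check on the two figures quoted above (the 2.00 is a rounded value; 1 cal is taken as 4.184 J):

```latex
1.37\ \mathrm{kW\,m^{-2}}
  = 1370\ \mathrm{J\,s^{-1}\,m^{-2}}
  \times \frac{1\ \mathrm{cal}}{4.184\ \mathrm{J}}
  \times \frac{60\ \mathrm{s}}{1\ \mathrm{min}}
  \times \frac{10^{-4}\ \mathrm{m^{2}}}{1\ \mathrm{cm^{2}}}
  \approx 1.96\ \mathrm{cal\,cm^{-2}\,min^{-1}}
  \approx 2\ \mathrm{cal\,cm^{-2}\,min^{-1}}.
```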
solar coordinates. Specifications for a location on the solar surface. The location of a specific feature on the sun (for example, a sunspot ) is complicated by the fact that there is a tilt of 7.25 degrees between the ecliptic plane and the solar equatorial plane as well as a true wobble of the solar rotational axis. (Only twice a year are the solar north pole and the celestial north pole aligned.) Consequently, to specify a location on the solar surface, three coordinates (P, B, L) are necessary to define a grid. Daily values for the coordinates in Coordinated Universal Time (UTC) are listed in The Astronomical Almanac published annually by the U.S. Naval Observatory. The terms used to refer to the coordinates are defined as follows:
P-angle (or P): The position angle between the geocentric north pole and the solar rotational north pole measured eastward from geocentric north. The range in P is +/- 26.31 degrees.
Bo: Heliographic latitude of the central point of the solar disk; also called the B-angle. The range of Bo is +/- 7.23 degrees, correcting for the tilt of the ecliptic with respect to the solar equatorial plane.
Example: If (P,Bo) = (-26.21 degrees, -6.54 degrees), the heliographic latitude of the central point on the solar disk is -6.54 degrees (the north rotational pole is not visible), and the angle between the projection onto the disk of the geocentric north pole and the solar north rotational pole is 26.21 degrees to the west.
Lo: Heliographic longitude of the central point of the solar disk. The longitude value is determined with reference to a system of fixed longitudes rotating on the sun at a rate of 13.2 degrees/day (the mean rate of rotation observed from central meridian transits of sunspots). The standard meridian on the sun is defined to be the meridian that passed through the ascending node of the sun's equator on 1 January 1854 at 1200 UTC and is calculated for the present day by assuming a uniform sidereal period of rotation of 25.38 days.
Once P, Bo, and Lo are known, the latitude, central meridian distance, and longitude of a specific solar feature can be determined as follows:
Latitude. The angular distance from the solar equator, measured north or south along the meridian.
Central meridian distance (CMD). The angular distance in solar longitude measured from the central meridian. This position is relative to the view from earth and will change as the sun rotates; therefore, this coordinate should not be confused with heliographic positions that are fixed with respect to the solar surface.
Longitude. The angular distance from a standard meridian (0 degrees heliographic longitude), measured from east to west (0 degrees to 360 degrees) along the sun's equator. It is computed by combining CMD with the longitude of the central meridian at the time of the observation, interpolating between ephemeris values (for 0000 UT) by using the synodic rate of solar rotation (27.2753 days, 13.2 degrees per day).
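A minimal Python sketch of that last step, under the assumptions that CMD is measured positive toward the solar west and that Lo decreases at the synodic rate of 13.2 degrees/day between the tabulated 0000 UT ephemeris values; the function name and sign convention are assumptions of this sketch, so the ephemeris and its stated conventions should be followed for real reductions:

```python
def heliographic_longitude(cmd_deg, lo_at_0000ut_deg, hours_after_0000ut):
    """Combine a feature's central meridian distance (CMD, degrees, positive
    toward the solar west) with the interpolated longitude of the central
    meridian (Lo) to estimate the feature's heliographic longitude."""
    synodic_rate_deg_per_hour = 13.2 / 24.0
    lo_now = lo_at_0000ut_deg - synodic_rate_deg_per_hour * hours_after_0000ut
    return (lo_now + cmd_deg) % 360.0
```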
solar cycle. See sunspot cycle.
solar flare effect (SFE). See crochet.
solar flux unit (s.f.u.). See s.f.u.
solar maximum. The month(s) during the sunspot cycle when the smoothed sunspot number reaches a maximum. A recent solar maximum occurred in December 1979.
solar minimum. The month(s) during the sunspot cycle when the smoothed sunspot number reaches a minimum. A recent solar minimum occurred in September 1986.
solar radiation storm levels. Storm levels that are determined by the proton flux measurements made by the primary GOES satellite. These levels (S) are rated according to the flux of > 10 MeV particles, in p.f.u., as follows:
S1: 10
S2: 100
S3: 1 000
S4: 10 000
S5: 100 000
solar radio emission. See radio emission.
solar rotation rate. (1) Synodic: 13.39 - 2.7 sin squared (solar latitude) degrees/day. (2) Sidereal: 14.38 - 2.7 sin squared (solar latitude) degrees/day. The difference between the sidereal and synodic rates is the earth's orbital motion of 0.985 degrees/day.
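Written out as formulas in the solar latitude, with the difference between the two rates reflecting the earth's orbital motion (roughly 360 degrees over 365.25 days):

```latex
\omega_{\mathrm{synodic}}(\varphi) = 13.39^{\circ} - 2.7^{\circ}\sin^{2}\varphi \ \text{per day},
\qquad
\omega_{\mathrm{sidereal}}(\varphi) = 14.38^{\circ} - 2.7^{\circ}\sin^{2}\varphi \ \text{per day},
\qquad
\omega_{\mathrm{sidereal}} - \omega_{\mathrm{synodic}} \approx \frac{360^{\circ}}{365.25\ \mathrm{d}} \approx 0.985^{\circ}\ \text{per day}.
```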
solar sector boundary (SSB). The boundary between large-scale unipolar magnetic regions on the sun's surface, as determined from inversion lines mapped using filaments and filament channels, or large-scale magnetograms. The supposed solar signature of an interplanetary sector boundary.
solar wind. The outward flow of solar particles and magnetic fields from the sun. Typically at 1 AU, solar wind velocities are near 375 km/s and proton and electron densities are near 5 per cubic centimeter. The total intensity of the interplanetary magnetic field is nominally 5 nT.
solstice. A point on the ecliptic where the sun reaches its greatest absolute declination. There are two of these points, halfway between the equinoxes; they mark the beginning of summer and winter.
South Atlantic anomaly (SAA). A region of the earth centered near 25 degrees S 50 degrees W (geographic coordinates, near the Atlantic coast of Brazil) of low geomagnetic field intensity owing to the fact that the geomagnetic field axis is offset from the center of the earth (see corrected geomagnetic coordinates.) One consequence of the SAA is that trapped particles in the plasmasphere drift closer to the earth's surface and can more easily be lost into the atmosphere. The result is that the F region (see ionosphere ) is highly variable in this region, and satellites in low earth orbits suffer greater radiation doses when they pass through the SAA. There is a corresponding location of maximum geomagnetic field intensity in Southeast Asia.
spacecraft charging. A term that encompasses all the charging effects on a spacecraft due to the environment in space. Occasionally this term is used in a more limited sense to mean surface charging.
spicules. Rapidly changing, predominantly vertical, spike-like structures in the solar chromosphere observed above the limb. Spicules appear to be ejected from the low chromosphere at velocities of 20 to 30 km/s reaching a height of about 9000 km and then falling back or fading. The total lifetime is 5 to 10 minutes.
sporadic E (Es). Transient, localized patches of relatively high electron density in the E region of the ionosphere, which significantly affect radiowave propagation. Sporadic E can occur during daytime or nighttime, and it varies markedly with latitude. Es can be associated with thunderstorms, meteor showers, solar activity and geomagnetic activity.
spray (SPY). Luminous material ejected from a solar flare with sufficient velocity to escape the sun (675 km/s). Sprays are usually seen in H alpha with complex and rapidly changing form. There is little evidence that sprays are focused by magnetic fields. Compare surge.
spread F. A condition of the F region of the ionosphere caused by patches of ionization that scatter or duct radio signals, characterized on ionograms by a wide range of heights of reflected pulses. In equatorial latitudes spread F is most commonly observed at night and may be negatively correlated with geomagnetic activity; at high latitudes spread F occurs throughout the daytime and is positively correlated with magnetic activity. The latitude of minimum occurrence of spread F is near 30 degrees magnetic latitude.
SPY. See spray.
Sq. The diurnal variation of the geomagnetic field. The Sq variation is explained in terms of solar tidal motions of the ionosphere and thermally driven ionospheric winds.
SSB. See solar sector boundary.
SSC. See sudden commencement.
storm. See geomagnetic storm.
stratosphere. That region of the earth's atmosphere between the troposphere and the mesosphere. It begins at an altitude of temperature minimum at approximately 13 km and defines a layer of increasing temperature up to about 50 km.
STRATWARM. A code word designating a major disturbance of the winter, polar, middle atmosphere from the tropopause to the ionosphere, lasting for several days at a time and characterized by a warming of the stratospheric temperature by some tens of degrees. There is no evidence that stratwarms are caused by solar events, or that they affect the lower atmosphere. (In fact, the disturbance may be generated by tropospheric conditions).
subflare. See flare.
substorm. A geomagnetic perturbation lasting 1 to 2 hours, which tends to occur during the local post-midnight hours. The magnitude of the substorm is largest in the auroral zone, potentially reaching several thousand nanoteslas. A substorm corresponds to an injection of charged particles from the magnetotail into the auroral oval.
sudden commencement ( SC, or SSC for Storm Sudden Commencement). An abrupt increase or decrease in the northward component (see geomagnetic elements) of the geomagnetic field, which marks the beginning of a geomagnetic storm. SCs occur almost simultaneously worldwide but with locally varying magnitudes.
sudden impulse (SI + or SI - ). A sudden perturbation, positive or negative, of several nanoteslas in the northward component (see geomagnetic elements ) of the low-latitude geomagnetic field, not associated with a following geomagnetic storm. (An SI becomes an SC if a storm follows.)
sudden ionospheric disturbance (SID). Any of several radio propagation anomalies due to ionospheric changes resulting from solar flares. Anomalies include short wave fades, enhancements of atmospherics, phase shifts, cosmic noise absorptions, and signal enhancements.
sunspot. An area seen as a dark spot, in contrast with its surroundings, on the photosphere of the sun. Sunspots are concentrations of magnetic flux, typically occurring in bipolar clusters or groups. They appear dark because they are cooler than the surrounding photosphere. Larger and darker sunspots sometimes are surrounded (completely or partially) by penumbrae. The dark centers are umbrae. The smallest, immature spots are sometimes called pores.
sunspot classification (Modified Zurich Sunspot Classification). As devised by McIntosh, a 3-letter designation of the optical, white-light characteristics of a sunspot group. The general form of the designation is Zpc. One letter is chosen from each of the following three categories.
Z (the modified Zurich class of the group):
A - A small single sunspot or very small group of spots with the same magnetic polarity, without penumbra.
B - Bipolar sunspot group with no penumbra.
C - An elongated bipolar sunspot group. One sunspot must have penumbra, and penumbra does not exceed 5 degrees in longitudinal extent.
D - An elongated bipolar sunspot group with penumbra on both ends of the group; longitudinal extent of penumbra is more than 5 degrees, but does not exceed 10 degrees.
E - An elongated bipolar sunspot group with penumbra on both ends. Longitudinal extent of penumbra exceeds 10 degrees but not 15 degrees.
F - An elongated bipolar sunspot group with penumbra on both ends. Longitudinal extent of penumbra exceeds 15 degrees.
H - A unipolar sunspot group with penumbra. Class H sunspot groups become compact Class D or larger when the penumbra exceeds 5 degrees in longitudinal extent.
p (the penumbra type of the largest spot in the group):
x - no penumbra
r - rudimentary
s - small (< = 2.5 degrees north-south diameter), symmetric
a - small, asymmetric
h - large (> 2.5 degrees north-south diameter), symmetric
k - large, asymmetric
c (the compactness of the group):
x - a single spot
o - open
i - intermediate
c - compact
sunspot cycle. The approximately 11-year quasi-periodic variation in the sunspot number. The polarity pattern of the magnetic field reverses with each cycle. Other solar phenomena, such as the 10.7-cm solar radio emission, exhibit similar cyclical behavior.
sunspot number. A daily index of sunspot activity (R), defined as R = k(10g + s), where s = number of individual spots, g = number of sunspot groups, and k is an observatory factor (equal to 1 for the Zurich Observatory and adjusted for all other observatories to obtain approximately the same R number). The standard number, RI, once derived at Zurich (see Wolf number), is now derived at Brussels. Often, the term "sunspot number" is used in reference to the widely distributed smoothed sunspot number.
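A worked example with hypothetical counts: an observer with station factor k = 1 who sees 3 groups containing 14 individual spots in total would report

```latex
R = k\,(10g + s) = 1 \times (10 \times 3 + 14) = 44.
```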
super high frequency (SHF). That portion of the radio frequency spectrum from 3 GHz to 30 GHz.
supergranulation. A system of large-scale velocity cells that does not vary significantly over the quiet solar surface or with phase of the solar cycle. The cells are presumably convective in origin with weak upward motions in the center, downward motions at the borders, and horizontal motions of typically 0.3 to 0.4 km/s. Magnetic flux is more intense along the borders of the cells.
surge. A jet of material from active regions that reaches coronal heights and then either fades or returns into the chromosphere along the trajectory of ascent. Surges typically last 10 to 20 minutes and tend to recur at a rate of approximately 1 per hour. Surges are linear and collimated in form, as if highly directed by magnetic fields. Compare spray.
SWF. See short wave fade.
sympathetic flares. Solar flares in different active regions that apparently occur as the common result of activation of a coronal connection between the regions. Compare simultaneous flares.
synodic. Referring to a coordinate system fixed on the earth.
synoptic chart. A map of the whole sun in absolute heliographic coordinates, displaying an integrated view of solar features observed during a Carrington rotation.
TEC. See total electron content.
TED. Total (particle) Energy Deposition. The TIROS/NOAA instrument used to estimate the hemispherical power input. (See estimated hemispherical power input.)
tenflare. A solar flare accompanied by a 10-cm radio noise burst of intensity greater than 100% of the pre-event 10-cm flux value.
Theophrastus (Theo). The name of the rule-based expert system used to assist SESC solar region analysis and solar flare prediction.
thermosphere. That region of the earth's atmosphere where the neutral temperature increases with height. It begins above the mesosphere at about 80-85 km and extends to the exosphere.
total electron content (TEC). The number of electrons along a ray path between a transmitter and a receiver. Units are electrons per square meter. This number is significant in determining ionospheric effects such as refraction, dispersion, and group delay on radio waves, and can be used to estimate critical frequencies. The TEC is strongly affected by solar activity and geomagnetic activity.
transition region. That region of the solar atmosphere lying between the chromosphere and the corona where the temperature rises from 10,000 K to 1,000,000 K. The transition region is only a few thousand kilometers thick.
transverse. Component of magnetic field vector perpendicular to direction of view, parallel to solar surface at disk center.
troposphere. The lowest layer of the earth's atmosphere, extending from the ground to the stratosphere at approximately 13 km of altitude.
two-ribbon flare. A flare that has developed as a pair of bright strands (ribbons) on both sides of an inversion line of the solar magnetic field.
Type I, II, III, IV, V. See radio emission.
U-burst. A radio noise burst associated with some flares. It has a U-shaped appearance in an intensity-vs.-frequency plot. The minimum intensity falls roughly between 500 and 2000 MHz. A U-burst is sometimes called a Castelli U.
UHF. See ultrahigh frequency.
ultrahigh frequency (UHF). That portion of the radio frequency spectrum from 300 MHz to 3 GHz.
ultraviolet (UV). That part of the electromagnetic spectrum between 5 and 400 nm.
umbra. The dark core or cores (umbrae) in a sunspot with penumbra, or a sunspot lacking penumbra.
UMR. See unipolar magnetic region.
unipolar magnetic region (UMR). A large-scale photospheric region where the magnetic elements are predominantly of one polarity (for example, the solar polar regions).
Universal Time (UT). A shortened form of the more correct Coordinated Universal Time (UTC).
unsettled. With regard to geomagnetic activity, a descriptive word between quiet and active specifically meaning that the Ak index is between 8 and 16.
upsets. See single event upsets.
UT or UTC. See Coordinated Universal Time.
UV. See ultraviolet.
Van Allen radiation belts. See radiation belts.
vernal equinox. The equinox that occurs in March. Compare autumnal equinox.
very high frequency (VHF). That portion of the radio frequency spectrum from 30 to 300 MHz.
very low frequency (VLF). That portion of the radio frequency spectrum from 3 to 30 kHz.
VHF. See very high frequency.
VLF. See very low frequency.
white light (WL). The sum of all visible wavelengths of light (400-700 nm) so that all colors are blended to appear white to the eye. No pronounced contribution from any one spectral line (or light-emitting element) is implied.
white-light flare. A major flare in which small parts become visible in white light. This rare continuum emission is caused by energetic particle beams bombarding the lower solar atmosphere. Such flares are usually strong x-ray, radio, and particle emitters.
wing. Portion of a spectroscopic absorption (or emission) line between the core of the line and the continuum adjacent to the line.
WL. See white light.
Wolf number. An historic term for sunspot number. In 1849, R. Wolf of Zurich originated the general procedure for computing the sunspot number. The record of sunspot numbers that he began has continued to this day.
WWV. Call letters of the radio station over which the National Institute of Standards and Technology (NIST) broadcasts time-standard signals at 2.5, 5, 10, 15, and 20 MHz. Solar-terrestrial conditions and forecasts are broadcast at 18 minutes past the hour.
X-band. Designates those radio frequencies between 5.2 and 10.9 GHz.
x-ray. Radiation of extremely short wavelength (generally less than 1 nm).
x-ray background. A daily average background x-ray flux in the 0.1 to 0.8 nm range. It is a midday minimum given in terms of x-ray flare class.
x-ray burst. A temporary enhancement of the x-ray emission of the sun. The time-intensity profile of soft x-ray bursts is similar to that of the H alpha profile of an associated flare. Soft x-rays are those of energies less than 20 keV, or wavelengths longer than 0.05 nm.
x-ray flare class. Rank of a flare based on its x-ray energy output. Flares are classified by the Space Environment Services Center according to the order of magnitude of the peak burst intensity (I) measured at the earth in the 0.1 to 0.8 nm band as follows:
Class: peak intensity I in the 0.1 to 0.8 nm band, in W/sq m (equivalent range in ergs/sq cm/s in parentheses)
B: I < 10E-06 (I < 10E-03)
C: 10E-06 <= I < 10E-05 (10E-03 <= I < 10E-02)
M: 10E-05 <= I < 10E-04 (10E-02 <= I < 10E-01)
X: I >= 10E-04 (I >= 10E-01)
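For illustration, the table above combined with the multiplier convention used elsewhere in this glossary (for example, M5 = 5 x 10E-05 W/sq m) gives a simple classifier; the function name, the 10E-07 base assumed for B multipliers, and the treatment of fluxes below that base are assumptions of this sketch:

```python
def xray_flare_class(peak_flux_wm2):
    """Convert a 0.1-0.8 nm peak flux (W/sq m) into letter-plus-multiplier form."""
    bases = [("X", 1e-4), ("M", 1e-5), ("C", 1e-6), ("B", 1e-7)]
    for letter, base in bases:
        if peak_flux_wm2 >= base:
            return "%s%.1f" % (letter, peak_flux_wm2 / base)
    return "B%.1f" % (peak_flux_wm2 / 1e-7)   # below the assumed B base

# Example: xray_flare_class(2.5e-5) -> "M2.5"; xray_flare_class(3.0e-4) -> "X3.0".
```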
x-ray flare termination. The end time is defined as the time the flux has decayed to 1/2 the peak flux of the event.
yellow line. A coronal emission line at 569.4 nm from Ca XV (a calcium atom from which 14 electrons have been stripped). It identifies the hottest regions of the corona.
Z. Zulu Time. (See Coordinated Universal Time.)
Z component of the geomagnetic field. See geomagnetic elements.
Zeeman effect. The splitting of spectral emission lines due to the presence of a strong magnetic field. Briefly, the lines split into three or more components of characteristic polarization; the components are circular if the local magnetic field is parallel to the line of sight, and linear if the field is perpendicular to the line of sight. The amount of splitting is proportional to the strength of the field.
Zurich sunspot classification. See sunspot classification.
Zurich sunspot number. See sunspot number. | http://www.va6jb.ca/propogation.php | 13 |
67 | Almost as soon as we are born, we can use negation, indicating by gesture or other behavior that we reject, exclude, or disagree with something. A few months later, when infants are just learning to talk, their first ten words almost always include a negation operator (Westbury & Nicoladis, 1998). Because it is so common and so easily-mastered, negation may seem to be a simple concept. However, it has bedeviled all efforts to be easily defined and understood. Two researchers who have studied it extensively have described negation as "curiously difficult" (Wilden, 1980) and "far from simple and transparent" (Horn, 1989). One reason for its complexity is that negation serves a wide variety of roles. A logician uses the negation operator in the process of proving a complex logical syllogism. A pre-linguistic uses gestural negation to reject the broccoli being offered her. Do such disparate uses of negation have anything in common? If so, what is it? In trying to formulate an answer to these questions by defining negation, it is useful to consider two approaches to the topic: negation as a technical tool for use in logic, and negation in natural language. We begin with the former.
Negation in logic
Classical (Aristotelean) term logic is the earliest and simplest formal logic.
It is limited to single-predicate propositions that are necessarily either true
or false. A single-predicate proposition is one like 'Mary is beautiful' or
'Snow is red', in which one single thing is said (whether rightly or not) to
have a single characteristic, or predicate.
Negation of a proposition in term logic may be defined by listing two necessary
and sufficient properties of that function with respect to an object or set X:
i.) X and its complement must include everything, and
ii.) The intersection of X and its negation must be empty.
In simple terms, this means that what a thing is and what it is not together
make up everything. Consider, for example, the proposition 'All men are happy'.
This proposition means that the set of all men that are either happy or not-happy
('X and its complement') contains all men, and that set of all men that are
both happy and not-happy ('the intersection of X and its negation') contains
nothing. This corresponds to what Kant would later call 'active negation', since
the use of this form of negation is an active affirmation of the opposite of
the negated term.
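The two defining properties can be restated compactly in set notation; here U stands for the universe of discourse and X^c for the complement of X, symbols introduced only for this restatement:

```latex
X \cup X^{c} = U, \qquad X \cap X^{c} = \varnothing .
```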
The astute reader will notice already that there are complications. One complication
arises because there are several ways to deny or contradict the truth value
of a proposition. In Aristotle's logic, no proposition is allowed to have more
than one negation operator. However, that single negation operator may be attached
either to the predicate (the characteristic being ascribed) or to its subject
(the entity to which the characteristic is ascribed). Thus Aristotle's term
logic recognizes a second form of negation along with the one we have just considered:
one can negate the subject term, as in 'Not-man is happy', meaning 'Whatever
is not a man is happy'.
Aristotle also recognized that one can negate the predicate term by denying
it, without thereby asserting its contrary. For example, one can state 'Man
is not happy', and mean that 'Whatever is a man is not happy', but not that
'Whatever is a man is unhappy'. As a stranger noted in Plato's dialog Sophist
(§257B), the assertion that something is 'not big' does not necessarily
mean that it is small. This corresponds to what Kant called 'passive negation'
(see Elster, 1984), since it does not actively affirm the contrary of the negated term.
Aristotle's logical definition of negation is further complicated by the fact
that he recognized two other ways in which negation could vary: by quantity
or by mode. The first distinction (quantity) captures the differences between
universal predication ('All men are not happy'), particular predication ('Some
men are not happy'), singular predication ('I am not happy'), and indefinite
predication ('At least one man is not happy'). The second distinction (mode)
captures differences in the force of the predication, which Aristotle defined
as assertoric ('All men are [or 'are not'] happy'), apodeictic ('All men must
be [needn't be] happy') or problematic ('All men may be [cannot be] happy').
As the natural language translations indicate, all of the distinctions recognized
by Aristotle can easily (and, in most cases, naturally) be expressed in
ordinary English. Despite this ease of translation, it has long been clear that
Aristotle's logical negation has a different function than natural language
negation in English. English allows negation constructions that would be disallowed
under the definition of negation given by classical logic (see Horn, 1989; Sharpe
et al, 1996, for a detailed discussion of this matter). For example, in English
it is not considered to be contradictory or improper to say that an entity is
both X and not(X). We can perfectly well understand the sentence "I did
and didn't like my college". Such contradictions are ruled out in logic,
since they allow one to deduce anything at all.
In the propositional logic introduced after Aristotle by the Stoics, logical
negation was defined in more powerful and more complex manner. In this propositional
logic, the negation operator need not be attached only to the subject or single
predicate of a simple proposition. Instead, it can be attached externally to
an entire proposition, which may itself contain many predicates. Moreover, in
propositional logic subjects and predicates may be quantified, by having descriptors
like 'every' and 'a little' attached to them. These complications unleash the
problem that Aristotle tried to control by definitional fiat when he limited
negation to subject and predicates in simple propositions: the problem of the
scope of negation. This is the problem of deciding which part of a proposition
is being negated by any negator.
This complication bedevils ordinary language negation. Consider the denial
of the proposition 'Everybody loves somebody a little bit sometimes'. What exactly
is denied is not absolutely clear. Is the denial intended to reflect that there
are some people who never love anyone at all? Or that there are some people
who only love a lot? Or that some people love all people a little bit all of
the time? Or that no one ever loves anyone at all?
This problem of scope of the negation operator over quantified subjects and
predicates is "one of the most extensively studied and least understood
phenomena within the semantics of negation" (Horn, 1989). Although we cannot
hope to clear up this complication here, it is important to address one aspect
of it: the claim that the negation of this predicate logic is simply equivalent
to assertion of falsity. Many people in both the philosophical and linguistic
literatures have adopted such a view at one time. Most notably, it was adopted
by Russell and Whitehead (1910) in their Principia Mathematica (for a most explicit
statement, see Russell, 1940; others who have advocated a similar position include
Apostel, 1972a; Givón, 1979; Pea, 1980a; Strawson, 1952).
Few contemporary logicians would equate negation with the assertion of falsity,
for two reasons. One is that there is a well-defined distinction to be drawn
between the syntax of negation (how a negator may be properly used and manipulated)
and the semantics of negation (what a negator means). Logicians deal mainly with
the syntax of a logical symbol, and the specific formal semantics prescribed
by that syntax, rendering many issues of interpretation moot.
The second reason that negation cannot be associated with assertion of falsity
has to do with logical levels. Russell and Whitehead's book introduced the distinction
between logical levels. It is therefore ironic that Frege (1919), Austin (1950),
Quine (1951), and Geach (1980), among others, have all argued that Russell and
Whitehead's view of negation as applying to propositions is an error resulting
from a confusion of logical levels. Specifically, the view confuses language
with meta-language. Austin wrote that "Affirmation and negation are exactly
on a level, in this sense, that no language can exist which does not contain
conventions for both and that both refer to the world equally directly, not
to statements about the world" (Austin, 1950, pp. 128-129, emphasis added).
Statements of falsity, in contrast, are necessarily statements about statements-
that is, statements in a meta-language. A statement about the truth value of
a proposition is therefore not a form of negation at all. It is rather a meta-statement
about truth value. Negation is always an assertion about the state of the world.
It is never a statement about a proposition.
This assertion is complicated by two facts that lie at the root of the confusion
about whether negation is equivalent to an assertion of falsity of a proposition:
i.) The fact that negation may be a statement about the act of stating a proposition, since the act of stating a proposition constitutes a factual aspect of the state of the world which may be negated like any other fact about the world, and
ii.) The fact that any proposition about the act of stating a proposition admits of a simple transformation into a statement about the stated proposition itself.
For example, consider the proposition 'The former President Of The United States
did not tell his intern to lie'. That statement is a statement about what a
former President said; that is, it is a statement about the empirically-observable
physical act of a human being stating a proposition aloud. The error lies in
claiming that this sentence is semantically identical to the sentence 'The proposition
'The President told his intern to lie' is false', which is a proposition about
a proposition. The first statement is a statement about what phonemes actually
could have been heard to issue from the President's mouth. The second is a statement
about the truth value of a proposition. These cannot be semantically identical,
any more than the set of all English sentences about an elephant could be semantically
identical to the elephant. One is a bunch of ordered letters, and the other
is a heavy grey mammal.
A second argument against the position that natural-language negation simply
negates the proposition to which it applies is given by Horn (1989, p. 58).
He points out that the error in equating statements about propositions with
statements about the world is very clear when we consider nondeclarative sentences.
Consider a cornered criminal who throws down his gun, yelling 'Don't shoot!'.
It is absurd to argue that this command is identical to the meta-statement 'Let
the statement 'You shot me' be false!'.
Quine (1976) gives a third reason that a great deal of ordinary discourse could
not admit of negation as a statement of the truth-value of a proposition, in
his discussion of what would be required to 'purify' ordinary language so that
it could be considered equivalent to the formalized language of science. Quine
argued that "we may begin by banishing what are known as indicator words
(Goodman) or egocentric particulars (Russell): 'I', 'you', 'this', 'that', 'here',
'there', 'now', 'then' and the like". He explained this banishment by writing:
"It is only thus...that we come to be able to speak of sentences, i.e. certain linguistic forms, as true and false. As long as indicator words are retained, it is not the sentence but only the several events of its utterance that can be said to be true or false" (p. 222. Emphasis of the final sentence added).
A great deal of ordinary speech contains indicator words of the type Quine
was objecting to. Quine is pointing out that these common sentences cannot bear
truth values on their own, but only bear truth when they are properly placed
in their extra-logical (real world) context.
The point of this discussion is that negation, as defined as a technical tool
for logicians, is not the same as the ordinary negation as used in natural language.
Some logicians have tried to re-define logical negation in such a way as to
capture its uses in natural language. La Palme Reyes et al (1994) defined a
non-classical logical model of natural language negation. It includes two negation
functions, neither of which is in its most general form equivalent to Aristotelean
negation. Those two negation functions take into account the fact that objects
to which one might apply negation have a structure whose components may be differentially
affected by that negation. The first negation function, which La Palme Reyes
et al call 'heyting' or strong negation, is used when the negation function
applies to all components of its negated object. The second, called 'co-heyting'
or weak negation, is used when the negation function refers to only some components
of the negated object. The formal aspects of this non-classical logic have been
worked out under certain highly idealized assumptions (La Palme Reyes et al,
1994). However it is not clear if or how that formal analysis could be widely
applied to real life natural language uses of negation, in situations where
those assumptions might not or clearly do not hold.
Let us now turn our attention to the development and use of negation in natural language.
Negation in natural language
The reader who has read this far will probably not be surprised to learn that
natural language negation is also complicated. There are many apparently different
forms of negation in natural languages. Natural language negation words such
as the English word 'not' can (but need not always) function in a way that is
closely analogous to the logical 'not' discussed above. Natural language also
contains words whose assertion functions as an implicit negation of their opposite,
as well as linguistic constructions which do not contain any negation markers,
but which can nevertheless function as negations for pragmatic reasons. For
example, the positive assertion "What Joe saw was an aircraft glittering
in the moonlight" functions as a negation when uttered in response to the
claim "Joe saw a UFO!"
Such complex constructions provide new means of using negation, but add no
new meanings to negation. For this reason, in this section we will concentrate
only upon forms of natural language negation that are explicit in the lexicon.
I present six categories of natural language negation, in roughly the order
they appear developmentally. Others have proposed distinctions and commonalities
that would increase or decrease this number. No definite and universally agreed-upon classification exists.
i.) Negation as rejection / emphasis of rejection of external entities
The simplest form of negation appearing in the lexicon is the use of the word
'no' (or its equivalent in other languages) in what Peirce (Horn, 1989, p.163)
called its subjective or pre-logical sense, to reject or to signal displeasure
with an undesirable situation or object. This use of negation as 'affective-volitional
function' was identified in the earliest study of the development of negation
(Stern & Stern, 1928) as the first form to appear. It is reliably present
by the age of 10-14 months (Pea, 1980b). The production of the word 'no' plays
roughly the same role for young human infants as do the gestures that often
accompany it (and that appear even earlier developmentally; see Pea, 1980b;
Ruke-Dravina, 1972), such as pushing away or turning the head away from an undesired
object. Such a gesture, either alone or accompanied by non-linguistic verbal
productions expressing displeasure, often suffices to communicate the desired
message. For this reason, the production of the word 'no' in this situation
may not necessarily be used as a rejection in itself, but may rather play a
role in emphasizing the rejection already being communicated non-linguistically.
I will expand on this notion in the next section.
Clearly such negation is very simple. Any animal able to recognize what it
does not want- and capable of acting on that recognition- is capable of this
first form of negation as a rejection of an undesirable external entity.
ii.) Negation as a statement of refusal to stop or start action
There are two forms of negation superficially similar to negation as rejection that,
however, function pragmatically in a markedly different way from the simple
rejection of external entities. Both necessarily involve an element of social
manipulation, which can also, but need not necessarily, play a role in object
rejection. The first form of such social negation is the use of the word 'no'
to signal a refusal to comply with a request or command for action or for a
cessation of a particular action. Such use is thereby an expression of personal
preference (Royce, 1917).
Three requirements must be satisfied for this form of negation to appear. The
first is that the negating organism must have the ability to associate a command
with a behavior or cessation of a behavior. The second is that the negating
organism's environment must provide the means by which that command is issued
in a regular manner. Although the first requirement is common enough among non-humans,
the latter is not. The appearance of negation as refusal to comply with a request
or command is missing in many mammals because there is a deficit in their natural
social environment that makes it unnecessary for them to grasp it. We must therefore
include among the necessary functionality for the appearance of these forms
of negation a third requirement: the appearance in another of the ability to
regularly recognize and enforce codes of behavior in the infant who is developing
negation. For these reasons, this form of negation is intimately tied to social
organization and environmental structure. Because of its intimate interaction
with such external factors, it becomes difficult to say whether it is 'innate' or learned.
iii.) Negation as an imperative
The second of the two forms of negation that differ pragmatically from rejection
of an external object is the use of the word 'no' as a directive to others to
act differently. As well as denying a request or a command to act or cease acting,
and refusing objects offered to them, young infants are able to use negation
to refuse to accept the actions of others. Such denial often functions pragmatically
as a command, denying one action in the hopes of producing an alternate.
iv.) Negation as a comment on one's own unsuccessful or prohibited action
Gopnik & Meltzoff (1985) identified another form of negation, as the second
stage in their three-stage model of negation leading to negation of linguistically-stated
propositions. In the first stage infants use negation as a social device to
refuse parental requests, as discussed above. In the second stage, a child uses
negation to comment on his or her failure to achieve an intended goal. According
to Gopnik and Meltzoff, the word 'no' becomes a cognitive device for the first
time when it is used in such a manner. Many researchers have also noted early
uses of negation as self-prohibition, uttered by the child when he or she is
about to do something or is doing something that is prohibited. The use of negation
in this manner is typically of brief duration (Pea, 1980b).
v.) Negation as scalar predication
Negation may also be used in natural language to compare or quantify scalar quantities.
Negation is often used for the concept of zero, or non-existence, as when we
say 'there is no way to get there from here' or an infant notes an unexpected
absence by saying 'no car'. The general case of using negation to mark non-existence
includes sub-categories that are sometimes distinguished. For example, Pea (1980b)
distinguishes between disappearance negation, which is used to note something
that has just disappeared, and unfulfilled expectation negation, which is used
to mark the non-existence of an expected entity. Although there are individual
differences in the appearance of these subtypes (Pea, 1980b), negation as
scalar predication reliably appears as the most highly developed
(i.e. latest-appearing) form of negation prior to the appearance of negation
of linguistic propositions.
The use of negation to mark nonexistence (in the sense of a referent not being
manifest in a context where it was expected) appears very early in children's
words. In their study of sententially-expressed negation (i.e. of negation which
appears after the one-word stage) McNeill and McNeill (1968) claimed that the
first uses of negation among Japanese children were all uses which marked nonexistence.
McNeill and McNeill claim that this finding is of particular interest because
Japanese has four common forms of negation that are differentiated in the lexicon.
One form functions as an assertion of non-existence, another as a denial of
a previous predication, a third as an expression of rejection, and a fourth
as a denial of a previous predication while implying that the speaker knows
something else to be true. Note, however, that there can be no question that
these infants were already displaying behavioral forms of negation by the time
they put words together to form a sentence.
Negation is not only used to indicate the total absence of a quality, but can
also be used to indicate a quantity less or greater than another to which it
is compared. For example, to say that something is 'not bad' is not to say that
it was entirely good, but only that it was 'less than all bad'. In appropriate
circumstances, the negation term may also indicate a greater quantity. Jespersen
(1924) identified the pragmatic circumstances that allow the negation operator
to function in this way. He noted that the word following 'not' must be strongly
stressed, and a more exact statement must immediately follow the negated statement,
as in the sentence: "He earns not twenty thousand, but thirty thousand
dollars per game".
The use of negation in natural language for scalar predication has a strong
constraint on its use, which shows how intimately negation is tied to other
cognitive functions: it can only be properly used as an expression of a departure
from an expected state of affairs. Neither an infant nor an adult will use negation
as a quantifier unless the value expressed thereby is or could be unexpected.
As many commentators (e.g. Sigwart, 1895; Bergson, 1911; Baldwin, 1928; Ryle,
1929; Wood, 1933; Strawson, 1952) have pointed out, to assert the negation of
a proposition is to imply that there is something surprising or unexpected in
the proposition's negation- to imply that some (imagined or real) interlocutor
believes, or might reasonably be expected to believe, the non-negated proposition
(see Horn, 1989, §1.2 for a detailed history of this idea). To use a graphic
example suggested by Givón (1979): one cannot deny that one's wife is
pregnant without implying that one believes that one's listener has reason to
expect that she might be. The reason for this constraint is that "It is
no good knowing what something is not unless that helps to eliminate possibilities
of what it is." (Wason, 1959, p. 103). There is no use negating unless
the negation is informative. This is a specific case of the more general pragmatic
rule that utterances should be (or will be assumed to be) relevant (Grice, 1975;
Sperber & Wilson, 1986).
vi.) Negation of stated propositions
No one disputes that negation as denial of a stated utterance is the last form
of negation to appear developmentally. Indeed, since it is the only form of
negation to require sentence comprehension, it is predictable from its very
definition that it is likely to appear later in development than the other forms,
which can all be expressed with simpler components of language.
It is remarkable that children are able to negate propositions about as soon
as they can produce them. Many studies have estimated that the ability for this
form of negation appears between 1.5 and 2.5 years (Hummer, Wimmer, &
Antes, 1993), which is about the same time that children are first able to put
two words together.
As discussed above, the ability to negate propositions should not be treated
as if it were equivalent to denial of the truth value of propositions. What
infants who are just putting together words are able to do is to deny that an
actual aspect of the world matches its linguistic description. If the child
screams 'No!' upon being told that it is bath-time, it is not to deny that the
sentence 'It is bath time' is a true sentence, nor is it to assert the proposition
'The sentence 'It is bath time' is false'. What the child is doing is denying
that it is in fact a desirable plan to submerge his body in soapy water. To
assert otherwise is to impose a post-literate interpretive framework upon a
child who is very far from being able to understand such a framework.
Because of these considerations, there are two distinct forms of negation of sentences. The form that an infant exhibits might be termed referential negation, since the child is denying a fact of the world that has been described to him using language. Truth-functional negation - true logical negation- is a learned technical tool for which there is no evidence of innate or inevitably-developing ability. Indeed, the failure rate in college introductory logic classes suggests that truth-functional negation is extremely difficult for most human beings to grasp.
Is there a common meaning to natural language terms of negation?
The plethora of uses might make it seem that natural language negation does
not admit of any simple definition that covers all cases. However, numerous
philosophers have proposed the same unifying definition, one that sidesteps many
of the logical complications discussed above. They have re-cast negation as
a positive assertion of the existence of a relevant difference- that is, they
have taken negation to mean 'other than', to use the pithy expression suggested
by Peirce (1869). This expression is similar to that put forth by Plato in Sophist
(§257B), in which he insisted that negation was not enantion (contrary)
but heteron (other). Hegel also characterized negation in a similar way (though
his heavily metaphysical views on negation are unique in other respects) when
he interpreted (or perhaps, as Horn, 1989, puts it, "stood on its head")
a dictum stated by Spinoza: Determinatio est negatio [Determination is negation].
Under Hegel's reading, Spinoza's dictum was taken as a statement of identity,
meaning that every negation is a determination or limitation, and vice versa.
The definition also appears in Brown's (1969) attempt to give a naturalized,
non-mathematical account of Boolean algebra. Brown begins by taking distinction
(defined as 'perfect continence') as his only primitive. He then proceeds to
define negation in terms of distinction. He presents this as an idea he had arrived at independently, without mentioning any earlier proposals.
Wilden (1980) also defined negation as distinction, again without mentioning
any earlier proposals to do so. The fact that this principle has apparently
been repeatedly independently discovered suggests that it may accurately capture
the meaning of negation.
Wilden's formulation of the definition of negation suggested that negation
should be considered as a rule about how to make either/or distinctions. Any
expression of negation divides the world into three parts: the negated object
or set (say, X), everything else (not-X), and the system which applies the rule
for drawing the distinction between X and not-X. That system itself belongs
neither to X nor to not-X, but stands outside (at a higher logical level than)
both, in virtue of the fact that it defines the composition of those two sets.
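As a loose illustration of this 'rule plus two sets' picture, here is a minimal Python sketch; the universe, the predicate and all names are invented for the example and are not drawn from Wilden:

```python
# Negation modelled as a rule for drawing an either/or distinction.
# The universe, the predicate and all names here are illustrative inventions.

universe = {"sparrow", "penguin", "bat", "airliner", "kite"}

def can_fly(thing):
    """The rule that draws the distinction; it belongs to neither resulting set."""
    return thing in {"sparrow", "bat", "airliner", "kite"}

X = {t for t in universe if can_fly(t)}          # the affirmed set
not_X = {t for t in universe if not can_fly(t)}  # 'other than' X, relative to the universe

assert X | not_X == universe and X & not_X == set()
print(sorted(X), sorted(not_X))
```

The rule can_fly produces X and not-X but is itself a member of neither set, which is the point Wilden's formulation emphasizes.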
In discussing Wilden's definition of negation, Hoffmeyer (1993) implicitly
argues that the act of negation is equivalent to the creation of a sign, as
defined by Peirce: something which stands for something to somebody in some
respect. In order to assess this claim, it is necessary to understand something
of the distinctions Peirce drew between three different forms of representation:
iconic, indexical, and symbolic.
Iconic representation, the simplest form, is representation that occurs in
virtue of a perceptual similarity between the sign and the signified, as a picture
of a bird represents a bird. Indexical representation is representation in which
the signifier is associated with what it signifies by correlation in space or
time- i.e. in virtue of the fact that the signifier has a quality that is linked
with the entity that it signifies by some cognizable relation other than perceptual
similarity. Indexical representation is commonly used in animal learning studies,
as when a light is paired with punishment. The important defining feature of
both iconic and indexical representation is that the connection between the
primary sign and the signified exists independently of the representing organism.
Simplifying Peirce's own view somewhat, we may say that the connection is objective,
in the sense that an organism or machine with access only to the appropriate
sensory and temporal information about the object could in theory learn to connect
the signifier with the signified.
This is not the case with the third form of representation, symbolic representation.
Symbolic representation is (by definition) independent of the relations that
define iconic and indexical representation - similarity, contiguity, and correlation.
This means that symbolic representation can be sustained in the absence of any
objectively discernible relation between the structure of the sign or its production,
and the signifier. Human beings with symbolic representation are able to talk
about the dark side of the planet Mercury, Santa Claus's older sister, or integrity
in politics, despite the impossibility of ever having direct sensory acquaintance
with these non-existent entities.
One major limitation of iconic and indexical reference is that it is not possible
to use them to make a statement about any entities that do not have an unambiguously
perceptible existence in space and time. Such entities have no perceptible qualities
in which their signifier could partake. In particular, therefore, there could
be no way to use iconic or indexical reference as scalar negation, to refer
to the abstract quality of a particular absence. As Wittgenstein (1953, §446)
pointed out "It would be odd to say: 'A process looks different when it
happens from when it doesn't happen.' Or 'A red patch looks different when it
is there from when it isn't there.'" (see also Russell, 1940).
This is why the complex forms of linguistic negation must be fundamentally symbolic. In the complex forms of linguistic negation, the boundary that marks the negated from the unnegated has no perceptible qualities of the kind that are necessary for reference by similarity or spatio-temporal contiguity (by iconic or indexical reference). The lack of relevant perceptible qualities is also what defines a symbol. Viewing a symbol 'as if' it stood for something requires that it be dissociated from what it actually is. There are (by definition) no hints from what a symbol is that help one decide what it stands for (cf. Premack and Premack's definition of a piece of plastic as a word for their monkeys "when the properties ascribed to it are not those of the plastic but of the object it signifies" (p. 32)). Since there can be no linguistic symbolism that is not built upon negation and since negation is itself a form of symbolism, the act of negation must be the first fundamentally linguistic symbolic act. It underlies the ability of language users to use a word to stand in for something that it in no way resembles and with which it never co-occurs.
It seems simple to 'just say no', but negation is in fact astonishingly complicated.
In logic the role of negation is so complex as to have defied complete understanding
despite over two thousand years of concerted effort. In natural language, negation
proves impossible to bound, spilling over to take in constraints at the social
and environmental levels, and to be intimately tied to deep and complex issues
of memory, expectation, general cognition, and symbolic manipulation that are
themselves still largely mysterious. Because of these intimate ties, the function
of negation as heteron may be plausibly argued to be a fundamental building
block of human language.
In Jonathan Swift's novel Gulliver's Travels, the hero reports meeting,
in the grand academy of Lagado, a group of nominalist philosophers. Those men
contended that "Since Words are only names for Things, it would be more
convenient for all Men to carry about them, such Things as were necessary to
express the particular business they are to discourse on." (Swift, 1735/1977,
p. 181). This, of course, proves to be difficult for those who have much to
say, since they are obliged to haul a huge bundle of objects everywhere they
go. If Swift's radical nominalists had thought about it a bit longer, they might
have arrived at a slightly more convenient solution that would still save their
lungs from the 'Diminution by Corrosion' that they were trying to avoid by not
speaking. Instead of carrying the individual objects themselves, they could
simply carry around the means to quickly create any object they might need.
Perhaps they might carry a block of soft clay with them. By this expedient they
could lighten the load they had to carry while greatly extending their possible
range of reference. Whoever first began to carry the clay would be capable of
astonishing feats of communication, conversing easily about matters of which
his fellow philosophers, having failed to load precisely the required object
into their sacks, were forced to remain silent.
The human ability to use symbolic reference differs from animal communication in an analogous fashion to the way that the clay language differs from the object language, and for an analogous reason. Whereas most animals are limited to distinguishing only those dimensions in the world that they are born 'carrying' or learned dimensions that have direct biological significance, human beings can construct an infinite number of dimensions. The clay that we use to construct those dimensions is negation as heteron: the ability to formulate rules about how to reliably make either/or distinctions. Although it is clear that many of the distinctions we make are made possible by language, the opposite relation holds true for some early forms of negation. Rather than being made possible by language, those forms of negation make language possible, in virtue of their role as a sine qua non of linguistic reference. Because we can carve up the world in such subtle ways, we humans have mastered our environment in ways no other animal can do. And because we can negate, we can so carve up the world. | http://www.semioticon.com/dse/encyclopedia/n/long/negationlong.html | 13 |
136 | In physics, the Lorentz transformation (or transformations) is named after the Dutch physicist Hendrik Lorentz. It was the result of attempts by Lorentz and others to explain how the speed of light was observed to be independent of the reference frame, and to understand the symmetries of the laws of electromagnetism. The Lorentz transformation is in accordance with special relativity, but was derived well before special relativity.
The transformations describe how measurements of space and time by two observers are related. They reflect the fact that observers moving at different velocities may measure different distances, elapsed times, and even different orderings of events. They supersede the Galilean transformation of Newtonian physics, which assumes an absolute space and time (see Galilean relativity). The Galilean transformation is a good approximation only at relative speeds much smaller than the speed of light.
The Lorentz transformation is a linear transformation. It may include a rotation of space; a rotation-free Lorentz transformation is called a Lorentz boost.
In the Minkowski space, the Lorentz transformations preserve the spacetime interval between any two events. They describe only the transformations in which the spacetime event at the origin is left fixed, so they can be considered as a hyperbolic rotation of Minkowski space. The more general set of transformations that also includes translations is known as the Poincaré group.
Early in 1889, Oliver Heaviside had shown from Maxwell's equations that the electric field surrounding a spherical distribution of charge should cease to have spherical symmetry once the charge is in motion relative to the ether. FitzGerald then conjectured that Heaviside’s distortion result might be applied to a theory of intermolecular forces. Some months later, FitzGerald published the conjecture that bodies in motion are getting contracted, in order to explain the baffling outcome of the 1887 ether-wind experiment of Michelson and Morley. In 1892, Lorentz independently presented the same idea in a more detailed manner, which was subsequently called FitzGerald–Lorentz contraction hypothesis. Their explanation was widely known before 1905.
Lorentz (1892–1904) and Larmor (1897–1900), who believed the luminiferous ether hypothesis, were also seeking the transformation under which Maxwell's equations were invariant when transformed from the ether to a moving frame. They extended the FitzGerald–Lorentz contraction hypothesis and found out that the time coordinate has to be modified as well ("local time"). Henri Poincaré gave a physical interpretation to local time (to first order in v/c) as the consequence of clock synchronization under the assumption that the speed of light is constant in moving frames. Larmor is credited to have been the first to understand the crucial time dilation property inherent in his equations.
In 1905, Poincaré was the first to recognize that the transformation has the properties of a mathematical group, and named it after Lorentz. Later in the same year Albert Einstein published what is now called special relativity, by deriving the Lorentz transformation under the assumptions of the principle of relativity and the constancy of the speed of light in any inertial reference frame, and by abandoning the mechanical aether.
Lorentz transformation for frames in standard configuration
Consider two observers O and O′, each using their own Cartesian coordinate system to measure space and time intervals. O uses (t, x, y, z) and O′ uses (t′, x′, y′, z′). Assume further that the coordinate systems are oriented so that, in 3 dimensions, the x-axis and the x′-axis are collinear, the y-axis is parallel to the y′-axis, and the z-axis parallel to the z′-axis. The relative velocity between the two observers is v along the common x-axis; O measures O′ to move at velocity v along the coincident xx′ axes, while O′ measures O to move at velocity −v along the coincident xx′ axes. Also assume that the origins of both coordinate systems are the same, that is, coincident times and positions. If all these hold, then the coordinate systems are said to be in standard configuration.
The inverse of a Lorentz transformation relates the coordinates the other way round; from the coordinates O′ measures (t′, x′, y′, z′) to the coordinates O measures (t, x, y, z), so t, x, y, z are in terms of t′, x′, y′, z′. The mathematical form is nearly identical to the original transformation; the only difference is the negation of the uniform relative velocity (from v to −v), and exchange of primed and unprimed quantities, because O′ moves at velocity v relative to O, and equivalently, O moves at velocity −v relative to O′. This symmetry makes it effortless to find the inverse transformation (carrying out the exchange and negation saves a lot of rote algebra), although more fundamentally; it highlights that all physical laws should remain unchanged under a Lorentz transformation.
Below, the Lorentz transformations are called "boosts" in the stated directions.
Boost in the x-direction
For frames in standard configuration the boost takes the form:
t′ = γ(t − vx/c²)
x′ = γ(x − vt)
y′ = y
z′ = z
where:
- v is the relative velocity between frames in the x-direction,
- c is the speed of light,
- γ = 1/√(1 − v²/c²) is the Lorentz factor (Greek lowercase gamma),
- β = v/c (Greek lowercase beta), again for the x-direction.
The use of β and γ is standard throughout the literature, and they will be used throughout the remainder of this article unless otherwise stated. Since the above is a linear system of equations (more technically a linear transformation), they can be written in matrix form:
According to the principle of relativity, there is no privileged frame of reference, so the inverse transformations frame F′ to frame F must be given by simply negating v:
where the value of γ remains unchanged.
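A minimal numerical sketch of these two points, assuming natural units and using numpy (the helper name boost_x is invented for the example): the boost can be written as a 4 × 4 matrix acting on (ct, x, y, z), and the inverse transformation is the boost with v replaced by −v.

```python
import numpy as np

def boost_x(v, c=1.0):
    """4x4 Lorentz boost along the x-axis, acting on the column (ct, x, y, z)."""
    beta = v / c
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    return np.array([[gamma,       -gamma*beta, 0.0, 0.0],
                     [-gamma*beta,  gamma,      0.0, 0.0],
                     [0.0,          0.0,        1.0, 0.0],
                     [0.0,          0.0,        0.0, 1.0]])

L = boost_x(0.6)                          # v = 0.6 c
event = np.array([2.0, 1.0, 0.5, -0.3])   # (ct, x, y, z) measured by O
print(L @ event)                          # the same event as measured by O'

# The inverse transformation is obtained by negating the relative velocity.
assert np.allclose(np.linalg.inv(L), boost_x(-0.6))
```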
Boost in the y or z directions
The above collection of equations applies only to a boost in the x-direction. The standard configuration works equally well in the y or z directions instead of x, and so the results are similar.
For the y-direction:
where v and so β are now in the y-direction.
For the z-direction:
where v and so β are now in the z-direction.
The Lorentz transform for a boost in one of the above directions can be compactly written as a single matrix equation:
Boost in any direction
Vector form
For a boost in an arbitrary direction with velocity v, that is, O observes O′ to move in direction v in the F coordinate frame, while O′ observes O to move in direction −v in the F′ coordinate frame, it is convenient to decompose the spatial vector r into components perpendicular and parallel to v:
are "warped" by the Lorentz factor:
The parallel and perpendicular components can be eliminated, by substituting into r′:
Since r‖ and v are parallel we have
where geometrically and algebraically:
- v/v is a dimensionless unit vector pointing in the same direction as r‖,
- r‖ = (r • v)/v is the projection of r into the direction of v,
substituting for r‖ and factoring v gives
This method, of eliminating parallel and perpendicular components, can be applied to any Lorentz transformation written in parallel-perpendicular form.
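As a sketch of this parallel-perpendicular decomposition, assuming the standard results t′ = γ(t − r·v/c²) and r′ = r + ((γ − 1)(r·v)/v² − γt)v (the helper name boost is invented for the example), the following checks that a boost with v along x reduces to the x-direction boost given earlier:

```python
import numpy as np

def boost(v_vec, t, r, c=1.0):
    """General boost: returns (t', r') for relative velocity vector v_vec.
    Uses t' = gamma*(t - r.v/c^2) and r' = r + ((gamma-1)*(r.v)/v^2 - gamma*t)*v."""
    v_vec = np.asarray(v_vec, dtype=float)
    r = np.asarray(r, dtype=float)
    v2 = v_vec @ v_vec
    gamma = 1.0 / np.sqrt(1.0 - v2 / c**2)
    t_prime = gamma * (t - (r @ v_vec) / c**2)
    r_prime = r + ((gamma - 1.0) * (r @ v_vec) / v2 - gamma * t) * v_vec
    return t_prime, r_prime

# For v along x this reproduces the x-direction boost given earlier.
t, r, v = 2.0, np.array([1.0, 0.5, -0.3]), np.array([0.6, 0.0, 0.0])
tp, rp = boost(v, t, r)
gamma = 1.0 / np.sqrt(1.0 - 0.36)
assert np.isclose(tp, gamma * (t - 0.6 * r[0]))
assert np.allclose(rp, [gamma * (r[0] - 0.6 * t), 0.5, -0.3])
```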
Matrix forms
These equations can be expressed in block matrix form as
and β = |β| = v/c is the magnitude of the vector β.
More explicitly stated:
The transformation Λ can be written in the same form as before,
which has the structure:
and the components deduced from above are:
where δij is the Kronecker delta, and by convention: Latin letters for indices take the values 1, 2, 3, for spatial components of a 4-vector (Greek indices take values 0, 1, 2, 3 for time and space components).
Note that this transformation is only the "boost," i.e., a transformation between two frames whose x, y, and z axis are parallel and whose spacetime origins coincide. The most general proper Lorentz transformation also contains a rotation of the three axes, because the composition of two boosts is not a pure boost but is a boost followed by a rotation. The rotation gives rise to Thomas precession. The boost is given by a symmetric matrix, but the general Lorentz transformation matrix need not be symmetric.
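A sketch that builds the boost matrix from these components, assuming the standard forms Λ00 = γ, Λ0i = Λi0 = −γβi and Λij = δij + (γ − 1)βiβj/β², and then checks two properties mentioned above: it preserves the Minkowski metric and it is symmetric. The metric signature and the helper name boost_matrix are choices of the example.

```python
import numpy as np

def boost_matrix(beta_vec):
    """Boost matrix for velocity beta_vec = v/c, acting on (ct, x, y, z)."""
    b = np.asarray(beta_vec, dtype=float)
    b2 = b @ b
    gamma = 1.0 / np.sqrt(1.0 - b2)
    L = np.eye(4)
    L[0, 0] = gamma
    L[0, 1:] = L[1:, 0] = -gamma * b
    L[1:, 1:] = np.eye(3) + (gamma - 1.0) * np.outer(b, b) / b2
    return L

eta = np.diag([-1.0, 1.0, 1.0, 1.0])   # Minkowski metric, signature (-,+,+,+)
L = boost_matrix([0.3, -0.4, 0.2])

# A Lorentz transformation leaves the metric invariant: L^T eta L = eta.
assert np.allclose(L.T @ eta @ L, eta)
# A pure boost is symmetric, as noted in the text.
assert np.allclose(L, L.T)
```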
Composition of two boosts
- B(v) is the 4 × 4 matrix that uses the components of v, i.e. v1, v2, v3 in the entries of the matrix, or rather the components of v/c in the representation that is used above,
- u ⊕ v is the relativistic velocity-addition of u and v,
- Gyr[u,v] (capital G) is the rotation arising from the composition. If the 3 × 3 matrix form of the rotation applied to spatial coordinates is given by gyr[u,v], then the 4 × 4 matrix rotation applied to 4-coordinates is given by:
- gyr (lower case g) is the gyrovector space abstraction of the gyroscopic Thomas precession, defined as an operator on a velocity w in terms of velocity addition:
- for all w.
The composition of two Lorentz transformations L(u, U) and L(v, V) which include rotations U and V is given by:
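A small numerical sketch of the statement that the composition of two non-collinear boosts is not itself a pure boost (the helper boost_matrix and the metric signature are assumptions of the example):

```python
import numpy as np

def boost_matrix(beta_vec):
    b = np.asarray(beta_vec, dtype=float)
    b2 = b @ b
    gamma = 1.0 / np.sqrt(1.0 - b2)
    L = np.eye(4)
    L[0, 0] = gamma
    L[0, 1:] = L[1:, 0] = -gamma * b
    L[1:, 1:] = np.eye(3) + (gamma - 1.0) * np.outer(b, b) / b2
    return L

eta = np.diag([-1.0, 1.0, 1.0, 1.0])
M = boost_matrix([0.5, 0.0, 0.0]) @ boost_matrix([0.0, 0.5, 0.0])

assert np.allclose(M.T @ eta @ M, eta)  # the product is still a Lorentz transformation
print(np.allclose(M, M.T))              # False: not symmetric, hence not a pure boost;
                                        # the leftover rotation is the Thomas rotation
```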
Visualizing the transformations in Minkowski space
The yellow axes are the rest frame of an observer, the blue axes correspond to the frame of a moving observer
The red lines are world lines, a continuous sequence of events: straight for an object travelling at constant velocity, curved for an object accelerating. Worldlines of light form the boundary of the light cone.
The purple hyperbolae indicate this is a hyperbolic rotation, the hyperbolic angle ϕ is called rapidity (see below). The greater the relative speed between the reference frames, the more "warped" the axes become. The relative velocity cannot exceed c.
The black arrow is a displacement four-vector between two events (not necessarily on the same world line), showing that in a Lorentz boost, time dilation (fewer time intervals in the moving frame) and length contraction (shorter lengths in the moving frame) occur. The axes in the moving frame are orthogonal (even though they do not look so).
Then the Lorentz transformation in standard configuration is:
Hyperbolic expressions
From the above expressions for e^φ and e^−φ
Hyperbolic rotation of coordinates
Substituting these expressions into the matrix form of the transformation, we have:
Thus, the Lorentz transformation can be seen as a hyperbolic rotation of coordinates in Minkowski space, where the parameter ϕ represents the hyperbolic angle of rotation, often referred to as rapidity. This transformation is sometimes illustrated with a Minkowski diagram, as displayed above.
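A quick numerical check of the rapidity picture, assuming the standard identifications γ = cosh ϕ and βγ = sinh ϕ with ϕ = artanh β: rapidities of collinear boosts add, while the corresponding velocities combine by the relativistic velocity-addition formula.

```python
import numpy as np

beta1, beta2 = 0.6, 0.5
phi1, phi2 = np.arctanh(beta1), np.arctanh(beta2)

gamma1 = 1.0 / np.sqrt(1.0 - beta1**2)
assert np.isclose(np.cosh(phi1), gamma1)          # gamma = cosh(phi)
assert np.isclose(np.sinh(phi1), beta1 * gamma1)  # beta*gamma = sinh(phi)

# Rapidities of collinear boosts simply add, while the velocities combine
# through the relativistic velocity-addition formula.
beta_combined = np.tanh(phi1 + phi2)
assert np.isclose(beta_combined, (beta1 + beta2) / (1.0 + beta1 * beta2))
```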
Transformation of other physical quantities
or in tensor index notation:
in which the primed indices denote indices of Z in the primed frame.
where the inverse matrix of Λ is used to transform the covariant (lower) indices of Z.
Special relativity
The crucial insight of Einstein's clock-setting method is the idea that time is relative. In essence, each observer's frame of reference is associated with a unique set of clocks, the result being that time as measured for a location passes at different rates for different observers. This was a direct result of the Lorentz transformations and is called time dilation. We can also clearly see from the Lorentz "local time" transformation that the concept of the relativity of simultaneity and of the relativity of length contraction are also consequences of that clock-setting hypothesis.
Transformation of the electromagnetic field
Lorentz transformations can also be used to prove that magnetic and electric fields are simply different aspects of the same force — the electromagnetic force, as a consequence of relative motion between electric charges and observers. The fact that the electromagnetic field shows relativistic effects becomes clear by carrying out a simple thought experiment:
- Consider an observer measuring a charge at rest in a reference frame F. The observer will detect a static electric field. As the charge is stationary in this frame, there is no electric current, so the observer will not observe any magnetic field.
- Consider another observer in frame F′ moving at relative velocity v (relative to F and the charge). This observer will see a different electric field because the charge is moving at velocity −v in their rest frame. Further, in frame F′ the moving charge constitutes an electric current, and thus the observer in frame F′ will also see a magnetic field.
This shows that the Lorentz transformation also applies to electromagnetic field quantities when changing the frame of reference, given below in vector form.
The correspondence principle
The correspondence limit is usually stated mathematically as: as v → 0, c → ∞. In words: as velocity approaches 0, the speed of light (seems to) approach infinity. Hence, it is sometimes said that nonrelativistic physics is a physics of "instantaneous action at a distance".
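A numerical illustration of this correspondence, with everyday values chosen for the example: at highway speeds the Lorentz and Galilean predictions differ by amounts far below anything measurable in ordinary life.

```python
import numpy as np

c = 299_792_458.0           # m/s
v = 30.0                    # m/s, roughly highway speed
t, x = 1.0, 1000.0          # one second, one kilometre

gamma = 1.0 / np.sqrt(1.0 - (v / c)**2)
x_lorentz = gamma * (x - v * t)
t_lorentz = gamma * (t - v * x / c**2)

x_galileo, t_galileo = x - v * t, t

print(x_lorentz - x_galileo)   # ~ 5e-12 m: utterly negligible
print(t_lorentz - t_galileo)   # ~ -3e-13 s
```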
Spacetime interval
In a given coordinate system xμ, if two events A and B are separated by
the spacetime interval between them is given by
This can be written in another form using the Minkowski metric. In this coordinate system,
Then, we can write
or, using the Einstein summation convention,
Now suppose that we make a coordinate transformation xμ → x′ μ. Then, the interval in this coordinate system is given by
It is a result of special relativity that the interval is an invariant. That is, s² = s′². For this to hold, it can be shown that it is necessary (but not sufficient) for the coordinate transformation to be of the form
Here, Cμ is a constant vector and Λμν a constant matrix, where we require that
Such a transformation is called a Poincaré transformation or an inhomogeneous Lorentz transformation. The Cμ represents a spacetime translation. When Cμ = 0, the transformation is called a homogeneous Lorentz transformation, or simply a Lorentz transformation.
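A sketch of the invariance statement, assuming the signature (−, +, +, +) and an invented helper boost_x: the interval between two events is unchanged by a boost combined with a constant translation (a Poincaré transformation).

```python
import numpy as np

def boost_x(beta):
    g = 1.0 / np.sqrt(1.0 - beta**2)
    return np.array([[g, -g*beta, 0, 0],
                     [-g*beta, g, 0, 0],
                     [0, 0, 1, 0],
                     [0, 0, 0, 1]], dtype=float)

eta = np.diag([-1.0, 1.0, 1.0, 1.0])           # one common sign convention
def interval(dx):                               # s^2 = eta_{mu nu} dx^mu dx^nu
    return dx @ eta @ dx

A = np.array([0.0, 0.0, 0.0, 0.0])              # events given as (ct, x, y, z)
B = np.array([3.0, 1.0, -2.0, 0.5])

L, C = boost_x(0.8), np.array([5.0, -1.0, 2.0, 7.0])   # Poincare map: x -> L x + C
A2, B2 = L @ A + C, L @ B + C

assert np.isclose(interval(B - A), interval(B2 - A2))   # the interval is invariant
```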
Taking the determinant of this requirement on Λ shows that det(Λμν) = ±1.
The cases are:
- Proper Lorentz transformations have det(Λμν) = +1, and form a subgroup called the special orthogonal group SO(1,3).
- Improper Lorentz transformations have det(Λμν) = −1, which do not form a subgroup, as the product of any two improper Lorentz transformations will be a proper Lorentz transformation.
From the above definition of Λ it can be shown that (Λ00)² ≥ 1, so either Λ00 ≥ 1 or Λ00 ≤ −1, called orthochronous and non-orthochronous respectively. An important subgroup of the proper Lorentz transformations are the proper orthochronous Lorentz transformations, which consist purely of boosts and rotations. Any Lorentz transform can be written as a proper orthochronous transformation, together with one or both of the two discrete transformations: space inversion P and time reversal T, whose non-zero elements are:
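The diagonal matrices P and T in the sketch below show the non-zero elements referred to above (the example assumes the conventional diag(1, −1, −1, −1) and diag(−1, 1, 1, 1) forms on (ct, x, y, z) and an invented helper boost_x); the printout classifies each product by its determinant and by the sign of Λ00.

```python
import numpy as np

def boost_x(beta):
    g = 1.0 / np.sqrt(1.0 - beta**2)
    return np.array([[g, -g*beta, 0, 0],
                     [-g*beta, g, 0, 0],
                     [0, 0, 1, 0],
                     [0, 0, 0, 1]], dtype=float)

P = np.diag([1.0, -1.0, -1.0, -1.0])   # space inversion (assumed conventional form)
T = np.diag([-1.0, 1.0, 1.0, 1.0])     # time reversal (assumed conventional form)
B = boost_x(0.6)                        # proper orthochronous: det = +1, Lambda00 >= 1

for name, L in [("B", B), ("P B", P @ B), ("T B", T @ B), ("P T B", P @ T @ B)]:
    print(name, int(round(np.linalg.det(L))), bool(L[0, 0] >= 1))
# B: proper orthochronous; P B and T B: improper (det -1);
# T B and P T B: non-orthochronous (Lambda00 <= -1).
```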
The set of Poincaré transformations satisfies the properties of a group and is called the Poincaré group. Under the Erlangen program, Minkowski space can be viewed as the geometry defined by the Poincaré group, which combines Lorentz transformations with translations. In a similar way, the set of all Lorentz transformations forms a group, called the Lorentz group.
A quantity invariant under Lorentz transformations is known as a Lorentz scalar.
The usual treatment (e.g., Einstein's original work) is based on the invariance of the speed of light. However, this is not necessarily the starting point: indeed (as is exposed, for example, in the second volume of the Course of Theoretical Physics by Landau and Lifshitz), what is really at stake is the locality of interactions: one supposes that the influence that one particle, say, exerts on another can not be transmitted instantaneously. Hence, there exists a theoretical maximal speed of information transmission which must be invariant, and it turns out that this speed coincides with the speed of light in vacuum. The need for locality in physical theories was already noted by Newton (see Koestler's The Sleepwalkers), who considered the notion of an action at a distance "philosophically absurd" and believed that gravity must be transmitted by an agent (such as an interstellar aether) which obeys certain physical laws.
Michelson and Morley in 1887 designed an experiment, employing an interferometer and a half-silvered mirror, that was accurate enough to detect aether flow. The mirror system reflected the light back into the interferometer. If there were an aether drift, it would produce a phase shift and a change in the interference that would be detected. However, no phase shift was ever found. The negative outcome of the Michelson–Morley experiment left the concept of aether (or its drift) undermined. There was consequent perplexity as to why light evidently behaves like a wave, without any detectable medium through which wave activity might propagate.
In a 1964 paper, Erik Christopher Zeeman showed that the causality preserving property, a condition that is weaker in a mathematical sense than the invariance of the speed of light, is enough to assure that the coordinate transformations are the Lorentz transformations.
From physical principles
The problem is usually restricted to two dimensions by using a velocity along the x axis such that the y and z coordinates do not intervene. The following derivation is similar to Einstein's. As in the Galilean transformation, the Lorentz transformation is linear, since the relative velocity of the reference frames is constant as a vector; otherwise, inertial forces would appear. Such frames are called inertial or Galilean reference frames. According to relativity no Galilean reference frame is privileged. Another condition is that the speed of light must be independent of the reference frame and, in practice, of the velocity of the light source.
Galilean and Einstein's relativity
- Galilean reference frames
In classical kinematics, the total displacement x in the R frame is the sum of the relative displacement x′ in frame R′ and of the distance between the two origins x − x′. If v is the relative velocity of R′ relative to R, the transformation is: x = x′ + vt, or x′ = x − vt. This relationship is linear for a constant v, that is when R and R′ are Galilean frames of reference.
In Einstein's relativity, the main difference from Galilean relativity is that space and time coordinates are intertwined, and in different inertial frames t ≠ t′.
Since space is assumed to be homogeneous, the transformation must be linear. The most general linear relationship is obtained with four constant coefficients, A, B, γ, and b:
x′ = γx + bt
t′ = Bt + Ax
The Lorentz transformation becomes the Galilean transformation when γ = B = 1, b = −v and A = 0.
An object at rest in the R′ frame at position x′ = 0 moves with constant velocity v in the R frame. Hence the transformation must yield x′ = 0 if x = vt. Therefore, b = −γv and the first equation is written as
x′ = γ(x − vt)
- Principle of relativity
According to the principle of relativity, there is no privileged Galilean frame of reference: therefore the inverse transformation for the position from frame R′ to frame R should have the same form as the original. To take advantage of this, we arrange by reversing the axes that R′ sees R moving towards positive x′ (i.e. just as R sees R′ moving towards positive x ), so that we can write
which, when multiplied through by −1, becomes
- The speed of light is constant
Since the speed of light is the same in all frames of reference, for the case of a light signal, the transformation must guarantee that t = x/c and t′ = x′/c.
Substituting for t and t′ in the preceding equations gives:
Multiplying these two equations together gives,
At any time after t = t′ = 0, xx′ is not zero, so dividing both sides of the equation by xx′ results in
γ² = 1/(1 − v²/c²), that is, γ = 1/√(1 − v²/c²)
which is called the "Lorentz factor".
- Transformation of time
The transformation equation for time can be easily obtained by considering the special case of a light signal, satisfying
Substituting term by term into the earlier obtained equation for the spatial coordinate
which determines the transformation coefficients A and B as
A = −γv/c², B = γ
So A and B are the unique coefficients necessary to preserve the constancy of the speed of light in the primed system of coordinates.
Einstein's popular derivation
In his popular book Einstein derived the Lorentz transformation by arguing that there must be two non-zero coupling constants λ and μ such that
x′ − ct′ = λ(x − ct)
x′ + ct′ = μ(x + ct)
that correspond to light traveling along the positive and negative x-axis, respectively. For light x = ct if and only if x′ = ct′. Adding and subtracting the two equations and defining
γ = (λ + μ)/2, b = (λ − μ)/2
gives
x′ = γx − bct, ct′ = γct − bx
Substituting x′ = 0 corresponding to x = vt and noting that the relative velocity is v = bc/γ, this gives
The constant γ can be evaluated as was previously shown above.
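A quick numerical check of this parametrization, assuming the light-cone form of the two equations reconstructed above: λ and μ come out as the Doppler-like factors γ(1 + β) and γ(1 − β), whose product is 1.

```python
import numpy as np

beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)
lam, mu = gamma * (1 + beta), gamma * (1 - beta)   # candidate coupling constants

# Standard boost of an arbitrary event (units with c = 1):
t, x = 2.0, 0.7
xp, tp = gamma * (x - beta * t), gamma * (t - beta * x)

assert np.isclose(xp - tp, lam * (x - t))   # x' - ct' = lambda (x - ct)
assert np.isclose(xp + tp, mu * (x + t))    # x' + ct' = mu (x + ct)
assert np.isclose(lam * mu, 1.0)
```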
The Lorentz transformations can also be derived by simple application of the special relativity postulates and using hyperbolic identities. It is sufficient to derive the result in for a boost in the one direction, since for an arbitrary direction the decomposition of the position vector into parallel and perpendicular components can be done after, and generalizations therefrom follow, as outlined above.
- Relativity postulates
Start from the equations of the spherical wave front of a light pulse, centred at the origin:
x² + y² + z² = (ct)², x′² + y′² + z′² = (ct′)²
which take the same form in both frames because of the special relativity postulates. Next, consider relative motion along the x-axes of each frame, in standard configuration above, so that y = y′, z = z′, which simplifies to
(ct)² − x² = (ct′)² − x′²
Now assume that the transformations take the linear form:
x′ = Ax + Bt, t′ = Cx + Dt
where A, B, C, D are to be found. If they were non-linear, they would not take the same form for all observers, since fictitious forces (hence accelerations) would occur in one frame even if the velocity was constant in another, which is inconsistent with inertial frame transformations.
Substituting into the previous result:
and comparing coefficients of x², t², and xt:
A² − c²C² = 1
c²D² − B² = c²
AB − c²CD = 0
- Hyperbolic rotation
The formulae resemble the hyperbolic identity
cosh²ϕ − sinh²ϕ = 1
Introducing the rapidity parameter ϕ as a parametric hyperbolic angle allows the self-consistent identifications
where the signs after the square roots are chosen so that x and t increase. The hyperbolic transformations have been solved for:
If the signs were chosen differently the position and time coordinates would need to be replaced by −x and/or −t so that x and t increase not decrease.
To find what ϕ actually is, from the standard configuration the origin of the primed frame x′ = 0 is measured in the unprimed frame to be x = vt (or the equivalent and opposite way round; the origin of the unprimed frame is x = 0 and in the primed frame it is at x′ = −vt):
and manipulation of hyperbolic identities leads to
so the transformations are also:
From group postulates
Following is a classical derivation (see, e.g., and references therein) based on group postulates and isotropy of the space.
- Coordinate transformations as a group
The coordinate transformations between inertial frames form a group (called the proper Lorentz group) with the group operation being the composition of transformations (performing one transformation after another). Indeed the four group axioms are satisfied:
- Closure: the composition of two transformations is a transformation: consider a composition of transformations from the inertial frame K to inertial frame K′, (denoted as K → K′), and then from K′ to inertial frame K′′, [K′ → K′′], there exists a transformation, [K → K′][K′ → K′′], directly from an inertial frame K to inertial frame K′′.
- Associativity: the result of ([K → K′][K′ → K′′])[K′′ → K′′′] and [K → K′]([K′ → K′′][K′′ → K′′′]) is the same, K → K′′′.
- Identity element: there is an identity element, a transformation K → K.
- Inverse element: for any transformation K → K′ there exists an inverse transformation K′ → K.
- Transformation matrices consistent with group axioms
Let us consider two inertial frames, K and K′, the latter moving with velocity v with respect to the former. By rotations and shifts we can choose the z and z′ axes along the relative velocity vector and also that the events (t, z) = (0, 0) and (t′, z′) = (0, 0) coincide. Since the velocity boost is along the z (and z′) axes nothing happens to the perpendicular coordinates and we can just omit them for brevity. Now since the transformation we are looking for connects two inertial frames, it has to transform a linear motion in (t, z) into a linear motion in (t′, z′) coordinates. Therefore it must be a linear transformation. The general form of a linear transformation is
t′ = γt + δz, z′ = βt + αz
where α, β, γ, and δ are some yet unknown functions of the relative velocity v.
Let us now consider the motion of the origin of the frame K′. In the K′ frame it has coordinates (t′, z′ = 0), while in the K frame it has coordinates (t, z = vt). These two points are connected by the transformation
from which we get
Analogously, considering the motion of the origin of the frame K, we get
from which we get
Combining these two gives α = γ and the transformation matrix has simplified,
Now let us consider the group postulate inverse element. There are two ways we can go from the K′ coordinate system to the K coordinate system. The first is to apply the inverse of the transform matrix to the K′ coordinates:
The second is, considering that the K′ coordinate system is moving at a velocity v relative to the K coordinate system, the K coordinate system must be moving at a velocity −v relative to the K′ coordinate system. Replacing v with −v in the transformation matrix gives:
Now the function γ can not depend upon the direction of v because it is apparently the factor which defines the relativistic contraction and time dilation. These two (in an isotropic world of ours) cannot depend upon the direction of v. Thus, γ(−v) = γ(v) and comparing the two matrices, we get
According to the closure group postulate a composition of two coordinate transformations is also a coordinate transformation, thus the product of two of our matrices should also be a matrix of the same form. Transforming K to K′ and from K′ to K′′ gives the following transformation matrix to go from K to K′′:
In the original transform matrix, the main diagonal elements are both equal to γ, hence, for the combined transform matrix above to be of the same form as the original transform matrix, the main diagonal elements must also be equal. Equating these elements and rearranging gives:
The denominator will be nonzero for nonzero v, because γ(v) is always nonzero;
If v = 0 we have the identity matrix which coincides with putting v = 0 in the matrix we get at the end of this derivation for the other values of v, making the final matrix valid for all nonnegative v.
For nonzero v, this combination of functions must be a universal constant, one and the same for all inertial frames. Define this constant as δ(v)/vγ(v) = κ, where κ has the dimension of 1/v². Solving
we finally get
and thus the transformation matrix, consistent with the group axioms, is given by
If κ > 0, then there would be transformations (with κv² ≫ 1) which transform time into a spatial coordinate and vice versa. We exclude this on physical grounds, because time can only run in the positive direction. Thus two types of transformation matrices are consistent with group postulates:
- with the universal constant κ = 0, and
- with κ < 0.
- Galilean transformations
If κ = 0 then we get the Galilean-Newtonian kinematics with the Galilean transformation,
where time is absolute, t′ = t, and the relative velocity v of two inertial frames is not limited.
- Lorentz transformations
If κ < 0, then setting c = 1/√(−κ) gives the Lorentz transformation, where the speed of light is a finite universal constant determining the highest possible relative velocity between inertial frames.
If v ≪ c the Galilean transformation is a good approximation to the Lorentz transformation.
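A numerical check of the closure property used in this derivation, in units where c = 1 (the 2 × 2 helper boost_tz acting on (t, z) is an invented name): the product of two collinear boosts is again a boost, with the relativistically added velocity.

```python
import numpy as np

def boost_tz(v):
    """2x2 boost acting on (t, z), in units where c = 1."""
    g = 1.0 / np.sqrt(1.0 - v**2)
    return g * np.array([[1.0, -v],
                         [-v, 1.0]])

u, w = 0.5, 0.7
composed = boost_tz(u) @ boost_tz(w)

# Closure: the product is again a boost, with the relativistically added velocity.
v_sum = (u + w) / (1.0 + u * w)
assert np.allclose(composed, boost_tz(v_sum))
```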
Only experiment can answer the question which of the two possibilities, κ = 0 or κ < 0, is realised in our world. The experiments measuring the speed of light, first performed by the Danish physicist Ole Rømer, show that it is finite, and the Michelson–Morley experiment showed that it is an absolute speed, and thus that κ < 0.
See also
- Ricci calculus
- Electromagnetic field
- Galilean transformation
- Hyperbolic rotation
- Invariance mechanics
- Lorentz group
- Principle of relativity
- Velocity-addition formula
- Algebra of physical space
- Relativistic aberration
- Prandtl–Glauert transformation
- O'Connor, John J.; Robertson, Edmund F., A History of Special Relativity
- Brown, Harvey R., Michelson, FitzGerald and Lorentz: the Origins of Relativity Revisited
- Rothman, Tony (2006), "Lost in Einstein's Shadow", American Scientist 94 (2): 112f.
- Darrigol, Olivier (2005), "The Genesis of the theory of relativity", Séminaire Poincaré 1: 1–22
- Macrossan, Michael N. (1986), "A Note on Relativity Before Einstein", Brit. Journal Philos. Science 37: 232–34
- The reference is within the following paper: Poincaré, Henri (1905), "On the Dynamics of the Electron", Comptes rendus hebdomadaires des séances de l'Académie des sciences 140: 1504–1508
- Einstein, Albert (1905), "Zur Elektrodynamik bewegter Körper", Annalen der Physik 322 (10): 891–921, Bibcode:1905AnP...322..891E, doi:10.1002/andp.19053221004. See also: English translation.
- A. Halpern (1988). 3000 Solved Problems in Physics. Schaum Series. Mc Graw Hill. p. 688. ISBN 978-0-07-025734-4.
- University Physics – With Modern Physics (12th Edition), H.D. Young, R.A. Freedman (Original edition), Addison-Wesley (Pearson International), 1st Edition: 1949, 12th Edition: 2008, ISBN (10-) 0-321-50130-6, ISBN (13-) 978-0-321-50130-1
- Dynamics and Relativity, J.R. Forshaw, A.G. Smith, Manchester Physics Series, John Wiley & Sons Ltd, ISBN 978-0-470-01460-8
- http://hyperphysics.phy-astr.gsu.edu/hbase/hframe.html. Hyperphysics, web-based physics material hosted by Georgia State University, USA.
- Relativity DeMystified, D. McMahon, Mc Graw Hill (USA), 2006, ISBN 0-07-145545-0
- Gravitation, J.A. Wheeler, C. Misner, K.S. Thorne, W.H. Freeman & Co, 1973, ISBN 0-7167-0344-0
- Ungar, A. A. (1989). "The relativistic velocity composition paradox and the Thomas rotation". Foundations of Physics 19: 1385–1396. Bibcode:1989FoPh...19.1385U. doi:10.1007/BF00732759.
- Ungar, A. A. (2000). "The relativistic composite-velocity reciprocity principle". Foundations of Physics (Springer) 30 (2): 331–342. CiteSeerX: 10.1.1.35.1131.
- eq. (55), Thomas rotation and the parameterization of the Lorentz transformation group, AA Ungar – Foundations of Physics Letters, 1988
- M. Carroll, Sean (2004). Spacetime and Geometry: An Introduction to General Relativity (illustrated ed.). Addison Wesley. p. 22. ISBN 0-8053-8732-3.
- Einstein, Albert (1916). "Relativity: The Special and General Theory" (PDF). Retrieved 2012-01-23.
- Dynamics and Relativity, J.R. Forshaw, A.G. Smith, Wiley, 2009, ISBN 978 0 470 01460 8
- Electromagnetism (2nd Edition), I.S. Grant, W.R. Phillips, Manchester Physics, John Wiley & Sons, 2008, ISBN 9-780471-927129
- Introduction to Electrodynamics (3rd Edition), D.J. Griffiths, Pearson Education, Dorling Kindersley, 2007, ISBN 81-7758-293-3
- Weinberg, Steven (1972), Gravitation and Cosmology, New York, [NY.]: Wiley, ISBN 0-471-92567-5: (Section 2:1)
- Weinberg, Steven (1995), The quantum theory of fields (3 vol.), Cambridge, [England] ; New York, [NY.]: Cambridge University Press, ISBN 0-521-55001-7 : volume 1.
- Zeeman, Erik Christopher (1964), "Causality implies the Lorentz group", Journal of Mathematical Physics 5 (4): 490–493, Bibcode:1964JMP.....5..490Z, doi:10.1063/1.1704140
- Stauffer, Dietrich; Stanley, Harry Eugene (1995). From Newton to Mandelbrot: A Primer in Theoretical Physics (2nd enlarged ed.). Springer-Verlag. p. 80,81. ISBN 978-3-540-59191-7.
- An Introduction to Mechanics, D. Kleppner, R.J. Kolenkow, Cambridge University Press, 2010, ISBN 978-0-521-19821-9
Further reading
- Einstein, Albert (1961), Relativity: The Special and the General Theory, New York: Three Rivers Press (published 1995), ISBN 0-517-88441-0
- Ernst, A.; Hsu, J.-P. (2001), "First proposal of the universal speed of light by Voigt 1887", Chinese Journal of Physics 39 (3): 211–230, Bibcode:2001ChJPh..39..211E
- Thornton, Stephen T.; Marion, Jerry B. (2004), Classical dynamics of particles and systems (5th ed.), Belmont, [CA.]: Brooks/Cole, pp. 546–579, ISBN 0-534-40896-6
- Voigt, Woldemar (1887), "Über das Doppler'sche princip", Nachrichten von der Königlicher Gesellschaft den Wissenschaft zu Göttingen 2: 41–51
- Derivation of the Lorentz transformations. This web page contains a more detailed derivation of the Lorentz transformation with special emphasis on group properties.
- The Paradox of Special Relativity. This webpage poses a problem, the solution of which is the Lorentz transformation, which is presented graphically in its next page.
- Relativity – a chapter from an online textbook
- Special Relativity: The Lorentz Transformation, The Velocity Addition Law on Project PHYSNET
- Warp Special Relativity Simulator. A computer program demonstrating the Lorentz transformations on everyday objects.
- Animation clip visualizing the Lorentz transformation.
- Lorentz Frames Animated from John de Pillis. Online Flash animations of Galilean and Lorentz frames, various paradoxes, EM wave phenomena, etc. | http://en.wikipedia.org/wiki/Lorentz_transformation | 13 |
81 | Jupiter's orbit lies beyond the asteroid belt at a mean distance of 483.6 million mi (778.3 million km) from the sun; its period of revolution is 11.86 years. In order from the sun it is the first of the Jovian planets—Jupiter, Saturn, Uranus, and Neptune—very large, massive planets of relatively low density, having rapid rotation and a thick, opaque atmosphere. Jupiter has a diameter of 88,815 mi (142,984 km), more than 11 times that of the earth. Its mass is 318 times that of the earth and about 2½ times the mass of all other planets combined.
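The quoted mean distance and period are consistent with Kepler's third law (P² = a³, with P in years and a in astronomical units); a quick check, where the miles-per-AU conversion factor is an assumption of the example:

```python
a_miles = 483.6e6            # mean distance quoted above, in miles
miles_per_au = 92.956e6      # approximate miles in one astronomical unit
a_au = a_miles / miles_per_au

period_years = a_au ** 1.5   # Kepler's third law: P^2 = a^3
print(round(a_au, 2), round(period_years, 2))   # about 5.2 AU and 11.9 years
```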
The atmosphere of Jupiter is composed mainly of hydrogen, helium, methane, and ammonia. However, the concentration of nitrogen, carbon, sulfur, argon, xenon, and krypton—as measured by an instrument package dropped by the space probe Galileo during its 1995 flyby of the planet—is more than twice what was expected, raising questions about the accepted theory of Jupiter's formation. The atmosphere appears to be divided into a number of light and dark bands parallel to its equator and shows a range of complex features, including a storm called the Great Red Spot. Located in the southern hemisphere and varying from c.15,600 to 25,000 mi (25,000 to 40,000 km) in one direction and 7,500 to 10,000 mi (12,000 to 16,000 km) in the other, the storm rotates counterclockwise and has been observed ever since 1664, when Robert Hooke first noted it. Also in the southern hemisphere is the Little Red Spot, c.8,000 mi (13,000 km) across. It formed from three white-colored storms that developed in the 1940s, merged in 1998-2000, and became clearly red by 2006. Analysis of the data obtained when massive pieces of the comet Shoemaker Levy 9 plunged into Jupiter in 1994 has extended our knowledge of the Jovian atmosphere.
Jupiter has no solid rock surface. One theory pictures a gradual transition from the outer ammonia clouds to a thick layer of frozen gases and finally to a liquid or solid hydrogen mantle. Beneath that, Jupiter probably has a core of rocky material with a mass 10-15 times that of the earth. The spot and other markings of the atmosphere also provide evidence for Jupiter's rapid rotation, which has a period of about 9 hr 55 min. This rotation causes a polar flattening of over 6%. The temperature ranges from about -190°F (-124°C) for the visible surface of the atmosphere, to 9°F (-13°C) at lower cloud levels; localized regions reach as high as 40°F (4°C) at still lower cloud levels near the equator. Jupiter radiates about four times as much heat energy as it receives from the sun, suggesting an internal heat source. This energy is thought to be due in part to a slow contraction of the planet. Jupiter is also characterized by intense nonthermal radio emission; in the 15-m range it is the strongest radio source in the sky. Jupiter has a huge asymmetrical magnetic field, extending past the orbit of Saturn in one direction but far less in the direction of the sun. This magnetosphere traps high levels of energetic particles far more intense than those found within earth's Van Allen radiation belts. Six space probes have encountered the Jovian system: Pioneers 10 and 11 (1973 and 1974), Voyagers 1 and 2 (both 1979), Ulysses (1992), and Galileo (1995-2003).
At least 63 natural satellites orbit Jupiter. They are conveniently divided into six main groups (in order of increasing distance from the planet): Amalthea, Galilean, Himalia, Ananke, Carme, and Pasiphae. The first group is comprised of the four innermost satellites—Metis, Adrastea, Amalthea, and Thebe. The red color of Amalthea (diameter: 117 mi/189 km), a small, elongated satellite discovered (1892) by Edward Barnard, probably results from a coating of sulfur particles ejected from Io. Metis (diameter: 25 mi/40 km), Adrastea (diameter: 12 mi/20 km), and Thebe (diameter: 62 mi/100 km) are all oddly shaped and were discovered in 1979 in photographs returned to earth by the Voyager 1 space probe. Metis and Adrastea orbit close to Jupiter's thin ring system; material ejected from these moons helps maintain the ring.
The four largest satellites—Io, Europa, Ganymede, and Callisto—were discovered by Galileo in 1610, shortly after he turned his newly built telescope to the sky, and are known as the Galilean satellite group. Io (diameter: 2,255 mi/3,630 km), the closest to Jupiter of the four, is the most active geologically, with 30 active volcanoes that are probably energized by the tidal effects of Jupiter's enormous mass. Europa (diameter: 1,960 mi/3,130 km) is a white, highly reflecting body whose smooth surface is covered with dark streaks up to 43 mi/70 km in width and from several hundred to several thousand miles in length. Ganymede (diameter: 3,268 mi/5,262 km), second most distant of the four and the largest satellite in the solar system, has heavily cratered regions, tens of miles across, that are surrounded by younger, grooved terrain. Callisto (diameter: 3,000 mi/4,806 km), the most distant and the least active geologically of the four, has a heavily cratered surface. Themisto (diameter: 5 mi/8 km) orbits Jupiter midway between the Galilean and next main group of satellites, the Himalias. The Himalia group consists of five tightly clustered satellites with orbits outside that of Callisto—Leda (diameter: 6 mi/10 km), Himalia (diameter: 106 mi/170 km), Lysithea (diameter: 15 mi/24 km), Elara (diameter: 50 mi/80 km), and S/2000 J11 (diameter: 2.5 mi/4 km). These 14 inner satellites are regular, that is, their orbits are relatively circular, near equatorial, and prograde, i.e., moving in the same orbital direction as the planet. Almost all of the remainder are irregular in that their orbits are large, elliptical, inclined to that of the planet, and usually retrograde, i.e., motion opposite to that of the planet's rotation. (By convention, the names of Jupiter's retrograde satellites end in the letter "e".)
Situated between the Himalia and Ananke groups is Carpo (diameter: 2 mi/3 km), which like Themisto does not seem to fit into any of the main groups. The Ananke group comprises 17 satellites that share similar orbits and, except for two, range from 1.2 to 2.5 mi (2-4 km) in diameter: S/2003 J12, Euporie, Orthosie, Euanthe, Thyone, Mneme, Harpalyke, Hermippe, Praxidike (diameter: 4.5 mi/7 km), Thelxinoe, Iocaste, Ananke (diameter: 12.5 mi/20 km), S/2003 J16, S/2003 J3, S/2003 J18, Helike, and S/2003 J15.
Like the Ananke group, the Carme group is remarkably homogeneous. It comprises 17 satellites, which share similar orbits and, except for one, range from 1.2 to 3 mi (2-5 km) in diameter: Arche, Pasithee, Chaldene, Kale, Isonoe, Aitne, Erinome, Taygete, Carme (diameter: 28 mi/46 km), Kalyke, Eukelade, Kallichore, S/2003 J17, S/2003 J10, S/2003 J9, S/2003 J5, and S/2003 J19. The most distant of the groups from the planet is the Pasiphae group, which comprises 14 widely dispersed satellites that, except for two, range from 1.2 to 4.5 mi (2-7 km) in diameter: S/2000 J12, Eurydome, Autonoe, Sponde, Pasiphae (diameter: 36 mi/58 km), Megaclite, Sinope (diameter: 23 mi/38 km), Hegemone, Aoede, Callirrhoe, Cyllene, S/2000 J23, S/2000 J4, and S/2000 J14. The odd orbits of the irregular satellites indicate that they were captured after Jupiter's formation. Because they are small, irregularly shaped, and clustered into groups, it is believed that they originated as parts of a larger body that either shattered due to Jupiter's enormous gravity or broke apart in a collision with another body.
Jupiter has three rings—Halo, Main, and Gossamer—similar to those of Saturn but much smaller and fainter. An intense radiation belt lies between the rings and Jupiter's uppermost atmospheric layers.
Chief god of ancient Rome and Italy. Like his Greek counterpart, Zeus, he was worshiped as a sky god. With Juno and Minerva he was a member of the triad of deities traditionally believed to have been introduced into Rome by the Etruscans. Jupiter was associated with treaties, alliances, and oaths; he was the protecting deity of the republic and later of the reigning emperor. His oldest temple was on the Capitoline Hill in Rome. He was worshiped on the summits of hills throughout Italy, and all places struck by lightning became his property. His sacred tree was the oak.
Jupiter is the fifth planet from the Sun and the largest planet within the Solar System. It is two and a half times as massive as all of the other planets in our Solar System combined. Jupiter is classified as a gas giant, along with Saturn, Uranus and Neptune. Together, these four planets are sometimes referred to as the Jovian planets, where Jovian is the adjectival form of Jupiter.
The planet was known by astronomers of ancient times and was associated with the mythology and religious beliefs of many cultures. The Romans named the planet after the Roman god Jupiter. When viewed from Earth, Jupiter can reach an apparent magnitude of −2.8, making it the third brightest object in the night sky after the Moon and Venus. (However, at certain points in its orbit, Mars can briefly exceed Jupiter's brightness.)
The planet Jupiter is primarily composed of hydrogen with a small proportion of helium; it may also have a rocky core of heavier elements under high pressure. Because of its rapid rotation, Jupiter's shape is that of an oblate spheroid (it possesses a slight but noticeable bulge around the equator). The outer atmosphere is visibly segregated into several bands at different latitudes, resulting in turbulence and storms along their interacting boundaries. A prominent result is the Great Red Spot, a giant storm that is known to have existed since at least the 17th century. Surrounding the planet is a faint planetary ring system and a powerful magnetosphere. There are also at least 63 moons, including the four large moons called the Galilean moons that were first discovered by Galileo Galilei in 1610. Ganymede, the largest of these moons, has a diameter greater than that of the planet Mercury.
Jupiter has been explored on several occasions by robotic spacecraft, most notably during the early Pioneer and Voyager flyby missions and later by the Galileo orbiter. The latest probe to visit Jupiter was the Pluto-bound New Horizons spacecraft in late February 2007. The probe used the gravity from Jupiter to increase its speed and adjust its trajectory toward Pluto, thereby saving years of travel. Future targets for exploration include the possible ice-covered liquid ocean on the Jovian moon Europa.
The atmospheric proportions of hydrogen and helium are very close to the theoretical composition of the primordial solar nebula. However, the upper atmosphere contains only about 20 parts per million of neon by mass, roughly a tenth as abundant as in the Sun. Helium is also depleted, although to a lesser degree. This depletion may be a result of precipitation of these elements into the interior of the planet. Abundances of heavier inert gases in Jupiter's atmosphere are about two to three times those in the Sun.
Based on spectroscopy, Saturn is thought to be similar in composition to Jupiter, but the other gas giants Uranus and Neptune have relatively much less hydrogen and helium. However, because of the lack of atmospheric entry probes, high quality abundance numbers of the heavier elements are lacking for the outer planets beyond Jupiter.
Jupiter is 2.5 times as massive as all the other planets in our Solar System combined — so massive that its barycenter with the Sun actually lies above the Sun's surface (1.068 solar radii from the Sun's center). Although this planet dwarfs the Earth (with a diameter 11 times as great), it is considerably less dense. Jupiter's volume is equal to that of 1,317 Earths, yet the planet is only 318 times as massive. A Jupiter mass (MJ) is used to describe the masses of other gas giant planets, particularly extrasolar planets.
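These ratios are enough for a back-of-the-envelope density check. The short Python sketch below uses only the mass and volume ratios quoted here, plus Earth's mean density of about 5.51 g/cm³, which is an outside value not given in this article.

```python
# Back-of-the-envelope density check for Jupiter using the ratios quoted above.
# Earth's mean density (~5.51 g/cm^3) is an assumed outside value.

EARTH_DENSITY = 5.51      # g/cm^3, assumed reference value
MASS_RATIO = 318          # Jupiter's mass in Earth masses (from the text)
VOLUME_RATIO = 1317       # Jupiter's volume in Earth volumes (from the text)

jupiter_density = EARTH_DENSITY * MASS_RATIO / VOLUME_RATIO
print(f"Approximate bulk density of Jupiter: {jupiter_density:.2f} g/cm^3")
# -> about 1.33 g/cm^3, far less dense than Earth
```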
Theoretical models indicate that if Jupiter had much more mass than it does at present, the planet would shrink. For small changes in mass, the radius would not change appreciably, but above about four Jupiter masses the interior would become so much more compressed under the increased gravitational force that the planet's volume would actually decrease despite the increasing amount of matter. As a result, Jupiter is thought to have about as large a diameter as a planet of its composition and evolutionary history can achieve. The process of further shrinkage with increasing mass would continue until appreciable stellar ignition is achieved, as in high-mass brown dwarfs of around 50 Jupiter masses. This has led some astronomers to term it a "failed star", although it is unclear whether or not the processes involved in the formation of planets like Jupiter are similar to the processes involved in the formation of multiple star systems.
Although Jupiter would need to be about 75 times as massive to fuse hydrogen and become a star, the smallest red dwarf is only about 30 percent larger in radius than Jupiter. In spite of this, Jupiter still radiates more heat than it receives from the Sun. The amount of heat produced inside the planet is nearly equal to the total solar radiation it receives. This additional heat radiation is generated by the Kelvin-Helmholtz mechanism through adiabatic contraction. This process results in the planet shrinking by about 2 cm each year. When it was first formed, Jupiter was much hotter and was about twice its current diameter.
Jupiter is thought to consist of a dense core with a mixture of elements, a surrounding layer of liquid metallic hydrogen with some helium, and an outer layer predominantly of molecular hydrogen. Beyond this basic outline, there is still considerable uncertainty. The core is often described as rocky, but its detailed composition is unknown, as are the properties of materials at the temperatures and pressures of those depths (see below). In 1997, the existence of the core was suggested by gravitational measurements, indicating a mass of 12 to 45 times the Earth's mass, or roughly 3%-15% of the total mass of Jupiter. The presence of a core during at least part of Jupiter's history is suggested by models of planetary formation involving initial formation of a rocky or icy core that is massive enough to collect its bulk of hydrogen and helium from the protosolar nebula. Assuming it did exist, it may have shrunk as convection currents of hot liquid metallic hydrogen mixed with the molten core and carried its contents to higher levels in the planetary interior. A core may now be entirely absent, as gravitational measurements are not yet precise enough to rule that possibility out entirely.
The uncertainty of the models is tied to the error margin in the parameters measured so far: one of the rotational coefficients (J₆) used to describe the planet's gravitational moment, Jupiter's equatorial radius, and its temperature at 1 bar pressure. The Juno mission, scheduled for launch in 2011, is expected to narrow down the values of these parameters, and thereby make progress on the problem of the core.
The core region is surrounded by dense metallic hydrogen, which extends outward to about 78 percent of the radius of the planet. Rain-like droplets of helium and neon precipitate downward through this layer, depleting the abundance of these elements in the upper atmosphere.
Above the layer of metallic hydrogen lies a transparent interior atmosphere of liquid hydrogen and gaseous hydrogen, with the gaseous portion extending downward from the cloud layer to a depth of about 1,000 km. Instead of a clear boundary or surface between these different phases of hydrogen, there is probably a smooth gradation from gas to liquid as one descends. This smooth transition happens whenever the temperature is above the critical temperature, which for hydrogen is only 33 K (see hydrogen).
The temperature and pressure inside Jupiter increase steadily toward the core. At the phase transition region where liquid hydrogen (heated beyond its critical point) becomes metallic, it is believed the temperature is 10,000 K and the pressure is 200 GPa. The temperature at the core boundary is estimated to be 36,000 K and the interior pressure is roughly 3,000–4,500 GPa.
Jupiter is perpetually covered with clouds composed of ammonia crystals and possibly ammonium hydrosulfide. The clouds are located in the tropopause and are arranged into bands of different latitudes, known as tropical regions. These are sub-divided into lighter-hued zones and darker belts. The interactions of these conflicting circulation patterns cause storms and turbulence. Wind speeds of 100 m/s (360 km/h) are common in zonal jets. The zones have been observed to vary in width, color and intensity from year to year, but they have remained sufficiently stable for astronomers to give them identifying designations.
The cloud layer is only about 50 km deep, and consists of at least two decks of clouds: a thick lower deck and a thin clearer region. There may also be a thin layer of water clouds underlying the ammonia layer, as evidenced by flashes of lightning detected in the atmosphere of Jupiter. (Water is a polar molecule that can carry a charge, so it is capable of creating the charge separation needed to produce lightning.) These electrical discharges can be up to a thousand times as powerful as lightning on the Earth. The water clouds can form thunderstorms driven by the heat rising from the interior.
The orange and brown coloration in the clouds of Jupiter is caused by upwelling compounds that change color when they are exposed to ultraviolet light from the Sun. The exact makeup remains uncertain, but the substances are believed to be phosphorus, sulfur or possibly hydrocarbons. These colorful compounds, known as chromophores, mix with the warmer, lower deck of clouds. The zones are formed when rising convection cells form crystallizing ammonia that masks these lower clouds from view.
Jupiter's low axial tilt means that the poles constantly receive less solar radiation than the planet's equatorial region does. Convection within the interior of the planet transports more energy to the poles, however, balancing out the temperatures at the cloud layer.
The best known feature of Jupiter is the Great Red Spot, a persistent anticyclonic storm located 22° south of the equator that is larger than Earth. It is known to have been in existence since at least 1831, and possibly since 1665. Mathematical models suggest that the storm is stable and may be a permanent feature of the planet. The storm is large enough to be visible through Earth-based telescopes.
The oval object rotates counterclockwise, with a period of about six days. The Great Red Spot's dimensions are 24,000–40,000 km × 12,000–14,000 km. It is large enough to contain two or three planets of Earth's diameter. The maximum altitude of this storm is about 8 km above the surrounding cloudtops.
Storms such as this are common within the turbulent atmospheres of gas giants. Jupiter also has white ovals and brown ovals, which are lesser unnamed storms. White ovals tend to consist of relatively cool clouds within the upper atmosphere. Brown ovals are warmer and located within the "normal cloud layer". Such storms can last as little as a few hours or stretch on for centuries.
Even before Voyager proved that the feature was a storm, there was strong evidence that the spot could not be associated with any deeper feature on the planet's surface, as the Spot rotates differentially with respect to the rest of the atmosphere, sometimes faster and sometimes more slowly. During its recorded history it has traveled several times around the planet relative to any possible fixed rotational marker below it.
In 2000, an atmospheric feature formed in the southern hemisphere that is similar in appearance to the Great Red Spot, but smaller in size. This was created when several smaller, white oval-shaped storms merged to form a single feature—these three smaller white ovals were first observed in 1938. The merged feature was named Oval BA, and has been nicknamed Red Spot Junior. It has since increased in intensity and changed color from white to red.
Jupiter has a faint planetary ring system composed of three main segments: an inner torus of particles known as the halo, a relatively bright main ring, and an outer "gossamer" ring. These rings appear to be made of dust, rather than ice as is the case for Saturn's rings. The main ring is probably made of material ejected from the satellites Adrastea and Metis. Material that would normally fall back to the moon is pulled into Jupiter because of its strong gravitational pull. The orbit of the material veers towards Jupiter and new material is added by additional impacts. In a similar way, the moons Thebe and Amalthea probably produce the two distinct components of the gossamer ring.
At about 75 Jupiter radii from the planet, the interaction of the magnetosphere with the solar wind generates a bow shock. Surrounding Jupiter's magnetosphere is a magnetopause, located at the inner edge of a magnetosheath, where the planet's magnetic field becomes weak and disorganized. The solar wind interacts with these regions, elongating the magnetosphere on Jupiter's lee side and extending it outward until it nearly reaches the orbit of Saturn. The four largest moons of Jupiter all orbit within the magnetosphere, which protects them from the solar wind.
The magnetosphere of Jupiter is responsible for intense episodes of radio emission from the planet's polar regions. Volcanic activity on the Jovian moon Io (see below) injects gas into Jupiter's magnetosphere, producing a torus of particles about the planet. As Io moves through this torus, the interaction generates Alfven waves that carry ionized matter into the polar regions of Jupiter. As a result, radio waves are generated through a cyclotron maser mechanism, and the energy is transmitted out along a cone-shaped surface. When the Earth intersects this cone, the radio emissions from Jupiter can exceed the solar radio output.
The axial tilt of Jupiter is relatively small: only 3.13°. As a result this planet does not experience significant seasonal changes, in contrast to Earth and Mars for example.
Jupiter's rotation is the fastest of all the Solar System's planets, completing a rotation on its axis in slightly less than ten hours; this creates an equatorial bulge easily seen through an Earth-based amateur telescope. This rotation requires a centripetal acceleration at the equator of about 1.67 m/s², compared to the equatorial surface gravity of 24.79 m/s²; thus the net acceleration felt at the equatorial surface is only about 23.12 m/s². The planet is shaped as an oblate spheroid, meaning that the diameter across its equator is longer than the diameter measured between its poles. On Jupiter, the equatorial diameter is 9275 km longer than the diameter measured through the poles.
Because Jupiter is not a solid body, its upper atmosphere undergoes differential rotation. The rotation of Jupiter's polar atmosphere is about 5 minutes longer than that of the equatorial atmosphere; three "systems" are used as frames of reference, particularly when graphing the motion of atmospheric features. System I applies from the latitudes 10° N to 10° S; its period is the planet's shortest, at 9h 50m 30.0s. System II applies at all latitudes north and south of these; its period is 9h 55m 40.6s. System III was first defined by radio astronomers, and corresponds to the rotation of the planet's magnetosphere; its period is Jupiter's "official" rotation.
Earth overtakes Jupiter every 398.9 days as it orbits the Sun, a duration called the synodic period. As it does so, Jupiter appears to undergo retrograde motion with respect to the background stars. That is, for a period of time Jupiter seems to move backward in the night sky, performing a looping motion.
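The 398.9-day figure can be recovered from the two orbital periods. The sketch below assumes Earth's sidereal year is about 365.25 days (a value not stated in this article) and takes Jupiter's 11.86-year period from the text.

```python
# How the 398.9-day synodic period follows from the two orbital periods:
# 1/T_syn = 1/T_earth - 1/T_jupiter.

T_EARTH = 365.25                 # days, Earth's sidereal year (assumed value)
T_JUPITER = 11.86 * 365.25       # days, from the 11.86-year period quoted earlier

synodic_period = 1.0 / (1.0 / T_EARTH - 1.0 / T_JUPITER)
print(f"Synodic period of Jupiter: {synodic_period:.1f} days")
# -> about 398.9 days, the interval between successive oppositions
```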
Jupiter's roughly 12-year orbital period corresponds to the twelve constellations of the zodiac. As a result, each time Jupiter reaches opposition it has advanced eastward by about the width of a zodiac constellation. The orbital period of Jupiter is also about two-fifths the orbital period of Saturn, forming a 5:2 orbital resonance between the two largest planets in the Solar System.
Because the orbit of Jupiter is outside the Earth's, the phase angle of Jupiter as viewed from the Earth never exceeds 11.5°, and is almost always close to zero. That is, the planet always appears nearly fully illuminated when viewed through Earth-based telescopes. It was only during spacecraft missions to Jupiter that crescent views of the planet were obtained.
The Chinese historian of astronomy Xi Zezong has claimed that the Chinese astronomer Gan De observed one of Jupiter's moons with the unaided eye in 362 BC, nearly two millennia before any European did. Galileo's 1610 observation of the four large moons was also the first discovery of a celestial motion not apparently centered on the Earth. It was a major point in favor of Copernicus' heliocentric theory of the motions of the planets; Galileo's outspoken support of the Copernican theory placed him under the threat of the Inquisition.
During the 1660s, Cassini used a new telescope to discover spots and colorful bands on Jupiter and observed that the planet appeared oblate; that is, flattened at the poles. He was also able to estimate the rotation period of the planet. In 1690 Cassini noticed that the atmosphere undergoes differential rotation.
The Great Red Spot, a prominent oval-shaped feature in the southern hemisphere of Jupiter, may have been observed as early as 1664 by Robert Hooke and in 1665 by Giovanni Cassini, although this is disputed. The pharmacist Heinrich Schwabe produced the earliest known drawing to show details of the Great Red Spot in 1831.
The Red Spot was reportedly lost from sight on several occasions between 1665 and 1708 before becoming quite conspicuous in 1878. It was recorded as fading again in 1883 and at the start of the twentieth century.
Both Giovanni Borelli and Cassini made careful tables of the motions of the Jovian moons, allowing predictions of the times when the moons would pass before or behind the planet. By the 1670s, however, it was observed that when Jupiter was on the opposite side of the Sun from the Earth, these events would occur about 17 minutes later than expected. Ole Rømer deduced that light does not travel instantaneously (a conclusion that Cassini had earlier rejected), and this timing discrepancy was used to estimate the speed of light.
In 1892, E. E. Barnard observed a fifth satellite of Jupiter with the refractor at Lick Observatory in California. The discovery of this relatively small object, a testament to his keen eyesight, quickly made him famous. The moon was later named Amalthea. It was the last planetary moon to be discovered directly by visual observation. An additional eight satellites were subsequently discovered prior to the flyby of the Voyager 1 probe in 1979.
In 1932, Rupert Wildt identified absorption bands of ammonia and methane in the spectra of Jupiter.
Three long-lived anticyclonic features termed white ovals were observed in 1938. For several decades they remained as separate features in the atmosphere, sometimes approaching each other but never merging. Finally, two of the ovals merged in 1998, then absorbed the third in 2000, becoming Oval BA.
In 1955, Bernard Burke and Kenneth Franklin detected bursts of radio signals coming from Jupiter at 22.2 MHz. The period of these bursts matched the rotation of the planet, and the astronomers were able to use this information to refine the rotation rate. Radio bursts from Jupiter were found to come in two forms: long bursts (or L-bursts) lasting up to several seconds, and short bursts (or S-bursts) that had a duration of less than a hundredth of a second.
Scientists later established that Jupiter emits three distinct forms of radio signals.
Between July 16 and July 22, 1994, more than 20 fragments from the comet Shoemaker-Levy 9 hit Jupiter's southern hemisphere, providing the first direct observation of a collision between two Solar System objects. This impact provided useful data on the composition of Jupiter's atmosphere.
|Spacecraft|Date of closest approach|Distance|
|Pioneer 10|December 3, 1973|130,000 km|
|Pioneer 11|December 4, 1974|34,000 km|
|Voyager 1|March 5, 1979|349,000 km|
|Voyager 2|July 9, 1979|570,000 km|
|Ulysses|February 1992|409,000 km|
|Ulysses (second, distant pass)|February 2004|240,000,000 km|
|Cassini|December 30, 2000|10,000,000 km|
|New Horizons|February 28, 2007|2,304,535 km|
Beginning in 1973, several spacecraft have performed planetary flyby maneuvers that brought them within observation range of Jupiter. The Pioneer missions obtained the first close-up images of Jupiter's atmosphere and several of its moons. They discovered that the radiation fields in the vicinity of the planet were much stronger than expected, but both spacecraft managed to survive in that environment. The trajectories of these spacecraft were used to refine the mass estimates of the Jovian system. Occultations of the radio signals by the planet resulted in better measurements of Jupiter's diameter and the amount of polar flattening.
Six years later, the Voyager missions vastly improved the understanding of the Galilean moons and discovered Jupiter's rings. They also confirmed that the Great Red Spot was anticyclonic. Comparison of images showed that the Red Spot had changed hue since the Pioneer missions, turning from orange to dark brown. A torus of ionized atoms was discovered along Io's orbital path, and volcanoes were found on the moon's surface, some in the process of erupting. As the spacecraft passed behind the planet, it observed flashes of lightning in the night side atmosphere.
The next mission to encounter Jupiter, the Ulysses solar probe, performed a flyby maneuver in order to attain a polar orbit around the Sun. During this pass the spacecraft conducted studies on Jupiter's magnetosphere. However, since Ulysses has no cameras, no images were taken. A second flyby six years later was at a much greater distance.
In 2000, the Cassini probe, en route to Saturn, flew by Jupiter and provided some of the highest-resolution images ever made of the planet. On December 19, 2000, the spacecraft captured an image of the moon Himalia, but the resolution was too low to show surface details.
The New Horizons probe, en route to Pluto, flew by Jupiter for gravity assist. Closest approach was on February 28, 2007. The probe's cameras measured plasma output from volcanoes on Io and studied all four Galilean moons in detail, as well as making long-distance observations of the outer moons Himalia and Elara. Imaging of the Jovian system began September 4, 2006.
So far the only spacecraft to orbit Jupiter is the Galileo orbiter, which went into orbit around Jupiter on December 7, 1995. It orbited the planet for over seven years, conducting multiple flybys of all of the Galilean moons and Amalthea. The spacecraft also witnessed the impact of Comet Shoemaker-Levy 9 as it approached Jupiter in 1994, giving a unique vantage point for the event. However, while the information gained about the Jovian system from Galileo was extensive, its originally-designed capacity was limited by the failed deployment of its high-gain radio transmitting antenna.
An atmospheric probe was released from the spacecraft in July 1995, entering the planet's atmosphere on December 7. It parachuted through 150 km of the atmosphere, collecting data for 57.6 minutes, before being crushed by the pressure to which it was subjected by that time (about 22 times Earth normal, at a temperature of 153 °C). It would have melted thereafter, and possibly vaporized. The Galileo orbiter itself experienced a more rapid version of the same fate when it was deliberately steered into the planet on September 21, 2003 at a speed of over 50 km/s, in order to avoid any possibility of it crashing into and possibly contaminating Europa—a moon which has been hypothesized to have the possibility of harboring life.
Because of the possibility of a liquid ocean on Jupiter's moon Europa, there has been great interest in studying the icy moons in detail. A mission proposed by NASA was dedicated to doing so. The JIMO (Jupiter Icy Moons Orbiter) was expected to be launched sometime after 2012. However, the mission was deemed too ambitious and its funding was canceled. A European Jovian Europa Orbiter mission is being studied, but its launch is unscheduled.
Jupiter has 63 named natural satellites. Of these, 47 are less than 10 kilometres in diameter and have only been discovered since 1975. The four largest moons, known as the "Galilean moons", are Io, Europa, Ganymede and Callisto.
The orbital eccentricities of the three inner Galilean moons cause regular flexing of their shapes, with Jupiter's gravity stretching them out as they approach it and allowing them to spring back to more spherical shapes as they swing away. This tidal flexing heats the moons' interiors via friction. This is seen most dramatically in the extraordinary volcanic activity of innermost Io (which is subject to the strongest tidal forces), and to a lesser degree in the geological youth of Europa's surface (indicating recent resurfacing of the moon's exterior).
[Table: The Galilean moons, compared to Earth's Moon — columns for diameter, mass, orbital radius and orbital period; the data rows are not preserved in this extract.]
Before the discoveries of the Voyager missions, Jupiter's moons were arranged neatly into four groups of four, based on commonality of their orbital elements. Since then, the large number of new small outer moons has complicated this picture. There are now thought to be six main groups, although some are more distinct than others.
A basic sub-division is a grouping of the eight inner regular moons, which have nearly circular orbits near the plane of Jupiter's equator and are believed to have formed with Jupiter. The remainder of the moons consist of an unknown number of small irregular moons with elliptical and inclined orbits, which are believed to be captured asteroids or fragments of captured asteroids. Irregular moons that belong to a group share similar orbital elements and thus may have a common origin, perhaps as a larger moon or captured body that broke up.
|Group|Name|Description|
|Regular moons|Inner group|The inner group of four small moons all have diameters of less than 200 km, orbit at radii less than 200,000 km, and have orbital inclinations of less than half a degree.|
|Regular moons|Galilean moons|These four moons, discovered by Galileo Galilei and by Simon Marius in parallel, orbit between 400,000 and 2,000,000 km, and include some of the largest moons in the Solar System.|
|Irregular moons|Themisto|A single moon belonging to a group of its own, orbiting halfway between the Galilean moons and the Himalia group.|
|Irregular moons|Himalia group|A tightly clustered group of moons with orbits around 11,000,000–12,000,000 km from Jupiter.|
|Irregular moons|Carpo|Another isolated case; at the inner edge of the Ananke group, it revolves in the direct sense.|
|Irregular moons|Ananke group|This group has rather indistinct borders, averaging 21,276,000 km from Jupiter with an average inclination of 149 degrees.|
|Irregular moons|Carme group|A fairly distinct group that averages 23,404,000 km from Jupiter with an average inclination of 165 degrees.|
|Irregular moons|Pasiphaë group|A dispersed and only vaguely distinct group that covers all the outermost moons.|
In addition to its moons, Jupiter's gravitational field controls numerous asteroids that have settled into the regions of the Lagrangian points preceding and following Jupiter in its orbit around the sun. These are known as the Trojan asteroids, and are divided into Greek and Trojan "camps" to commemorate the Iliad. The first of these, 588 Achilles, was discovered by Max Wolf in 1906; since then more than two thousand have been discovered. The largest is 624 Hektor.
Jupiter has been called the Solar System's vacuum cleaner, because of its immense gravity well and location near the inner Solar System. It receives the most frequent comet impacts of the Solar System's planets. In 1994 comet Shoemaker-Levy 9 (SL9, formally designated D/1993 F2) collided with Jupiter and gave information about the structure of Jupiter. It was thought that the planet served to partially shield the inner system from cometary bombardment. However, recent computer simulations suggest that Jupiter doesn't cause a net decrease in the number of comets that pass through the inner Solar System, as its gravity perturbs their orbits inward in roughly the same numbers that it accretes or ejects them.
The majority of short-period comets belong to the Jupiter family—defined as comets with semi-major axes smaller than Jupiter's. Jupiter family comets are believed to form in the Kuiper belt outside the orbit of Neptune. During close encounters with Jupiter their orbits are perturbed into a smaller period and then circularized by regular gravitational interaction with the Sun and Jupiter.
It is considered highly unlikely that there is any Earth-like life on Jupiter, as there is only a small amount of water in the atmosphere and any possible solid surface deep within Jupiter would be under extraordinary pressures. However, in 1976, before the Voyager missions, it was hypothesized that ammonia- or water-based life, such as the so-called atmospheric beasts, could evolve in Jupiter's upper atmosphere. This hypothesis is based on the ecology of terrestrial seas which have simple photosynthetic plankton at the top level, fish at lower levels feeding on these creatures, and marine predators which hunt the fish.
The Romans named it after Jupiter (Iuppiter, Iūpiter) (also called Jove), the principal god of Roman mythology, whose name comes from the Proto-Indo-European vocative form *dyeu ph₂ter, meaning "god-father." The astronomical symbol for the planet, ♃, is a stylized representation of the god's lightning bolt. The Greek equivalent Zeus supplies the root zeno-, used to form some Jupiter-related words, such as zenographic.
Jovian is the adjectival form of Jupiter. The older adjectival form jovial, employed by astrologers in the Middle Ages, has come to mean "happy" or "merry," moods ascribed to Jupiter's astrological influence.
The Chinese, Korean, Japanese, and Vietnamese referred to the planet as the wood star, 木星, based on the Chinese Five Elements. The Greeks called it Φαέθων, Phaethon, "blazing". In Vedic astrology, Hindu astrologers named the planet after Brihaspati, the religious teacher of the gods, and often called it "Guru," which literally means the "Heavy One". In English, Thursday derives from "Thor's day", Thor being associated with the planet Jupiter in Germanic mythology. | http://www.reference.com/browse/Jupiter | 13
57 | NEWTON’S LAW OF UNIVERSAL GRAVITATION
The motion of the moon around the earth is accelerated motion, as is the motion of the earth around the sun. So there must be a force on the moon and one on the earth. If there were no force on the earth, it would just move past the sun in a straight line at constant speed, assuming it could somehow get started in the first place.
[Diagram: the force on the earth and the path of the earth]
Now what kind of force could there be on the moon pointed directly toward the earth? Well, it could be the same force that pulls an apple (or a rock, or a human, or anything else) directly toward the center of the earth – in other words, down. It could be gravity, conjectured Newton. This was not a new idea, but Newton managed to work it out mathematically and predict the consequences of the idea much better than anyone else had done.
For one thing, there would also have to be a gravitational force on the earth due to the sun. So other objects besides the earth could attract things. In fact, it developed, anything will attract other things to it with a gravitational force as long as the attracting object has any mass at all. So this law became known as Newton’s Law of Universal Gravitation because it could apply to anything.
Newton also worked out a formula for the gravitational force of one object on another. It turned out that there was a specific formula for the gravitational force that had to be true, if gravity was going to explain Kepler’s Laws.
This formula involves the mass of each object. Mass is a concept that tells you several things about the object that possesses it. Newton at first called it “quantity of matter”, which could be loosely thought of as “how much stuff is in something”. In the next file, on Newton’s Second Law, mass will turn out to tell about the inertia of an object – its resistance to changing its motion. In any case, it is measured in kilograms, and it can be measured.
Newton found the gravitational force due to one object on another object to be this:
Force = (a constant called G) × (mass of the first object) × (mass of the second object) ÷ (the square of the distance between them)
In this formula, suppose we use the mass of the earth for the first mass, my mass for the second one, and the radius of the earth for the distance. Then the force of gravity would turn out to be my weight. Of course, there needs to be units of measurement such as feet, miles, meters, or something for distance. The radius of the earth is 6.38 million meters (about 4000 miles), and the mass of the earth is huge, 5.97×10²⁴ kilograms. Suppose that my mass is 100 kilograms (it is not really that large!). The constant G is something that is the same for all objects, and it has been measured to be 6.67×10⁻¹¹ in standard metric system units. The gravitational formula gives the gravitational force due to the earth on me as about 978 in standard metric system units of force called "newtons", named after Sir Isaac, of course. This is exactly the same thing as my weight. For you, me, or anyone else, the weight is the force with which the earth's gravity pulls on us. In our everyday units of weight, the 978 newtons would be 220 pounds.
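For readers who want to check the arithmetic, here is a minimal Python version of the calculation just described, using the same rounded values for G, the earth's mass and radius, and a 100-kilogram person; the newton-to-pound conversion factor (about 4.448 newtons per pound) is an added assumption.

```python
# Numerical check of the worked example above, with the same rounded inputs.

G = 6.67e-11        # gravitational constant, standard metric units (from the text)
M_EARTH = 5.97e24   # mass of the earth in kilograms (from the text)
R_EARTH = 6.38e6    # radius of the earth in meters (from the text)
M_PERSON = 100.0    # mass of the person in kilograms

force_newtons = G * M_EARTH * M_PERSON / R_EARTH**2
force_pounds = force_newtons / 4.448   # assumed conversion: ~4.448 newtons per pound

print(f"Weight: about {force_newtons:.0f} newtons ({force_pounds:.0f} pounds)")
# -> roughly 978 newtons, about 220 pounds
```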
Notice that the mass of 100 kilograms is not the same thing as the weight of 220 pounds. The mass stays the same no matter where you go in the universe. If I went to the moon I would still have the same 100 kilograms of mass. However, to figure the force of the moon’s gravity on me I would have to use the mass of the moon and the radius of the moon in the formula. The answer for weight turns out to be about one sixth of the value on the earth. So your weight will depend on what planet you are on. Your mass just stays the same.
The force of gravity of the earth on a person with only a 50-kilogram mass would be half of the 220 pounds mentioned above. Weight goes up and down with mass, as long as you stay on the same planet, but weight is not the same thing as mass.
In Newton's gravitational formula, you have to divide by the square of the distance between the two objects. Newton was able to show, after a lot of difficulty, that this means the distance between the centers of spherical objects such as planets. When comparing a planet with an object that is much smaller (such as me), it doesn't matter too much whether you use the distance to my head, my feet, or to some part of me in between. It comes out to essentially the radius of the earth anyway – about 4000 miles.
Suppose, though, that I move twice as far away from the center of the earth – 8000 miles. That is 4000 miles above the surface. Then my weight will turn out to be one fourth of its value on the surface. That is what happens when you divide by the square of the distance as in the above formula.
Suppose, for example, you divide 220 by 1². You get 220. But then suppose you divide 220 by 2². That means dividing by 4, and you get 55. If you triple the distance, you would divide by 3², or 9. Then you would get one ninth of 220, or about 24.4 pounds.
In the same way, if I were to triple my distance from the center of the earth, I would be 12000 miles from it, or 8000 miles above the surface. My weight would be one ninth of its value on the surface. If I went as far away as the moon, I would be about 60 times my original distance from the center of the earth. So my weight would be 220 divided by 60². That is only 0.0611 pounds. This represents the force of the earth on me at the distance to the moon (240,000 miles). It is not the pull of the moon's gravity on me.
A law that behaves this way is called an inverse square law.
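The same scaling can be tabulated directly. This small sketch simply divides the 220-pound surface weight used above by the square of the distance, expressed in earth radii from the center.

```python
# Inverse-square scaling of the 220-pound surface weight with distance,
# measured in earth radii from the center of the earth.

SURFACE_WEIGHT_LB = 220.0

for radii in (1, 2, 3, 60):
    weight = SURFACE_WEIGHT_LB / radii**2
    print(f"{radii:>2} earth radii from the center: {weight:.4g} pounds")
# -> 220, 55, about 24.4, and about 0.061 pounds
```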
This law of gravitation is supposed to apply to any object with mass. Suppose there were two humans each with 50 kilograms of mass separated by one meter. They would, in fact, attract one another with a gravitational force, but it would be a very small one. Since humans have a funny shape, it is hard to figure out where the center is. So the distance is a little ambiguous. But we can get an approximate value by using 1 meter. If you put 50 kilograms into the above formula for each mass and one meter for the distance, you get:
Force of either person on the other = 0.000000167 newtons = 0.000000037 pounds.
They will not exactly stick together because of this force!
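Running the two-person example through the same formula confirms how tiny the force is; the one-meter separation is the rough center-to-center distance assumed above, and the newton-to-pound conversion is again an added assumption.

```python
# The two 50-kilogram people, separated by roughly one meter, run through the
# same gravitational formula.

G = 6.67e-11     # gravitational constant, standard metric units

m1 = m2 = 50.0   # kilograms
distance = 1.0   # meters, approximate center-to-center separation

force_newtons = G * m1 * m2 / distance**2
force_pounds = force_newtons / 4.448   # assumed conversion: ~4.448 newtons per pound

print(f"Mutual attraction: {force_newtons:.3g} newtons ({force_pounds:.2g} pounds)")
# -> about 1.7e-07 newtons, roughly 0.00000004 pounds
```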
If you use 0 for the mass of either object, you get zero for the gravitational force. So an object has to have mass to have gravity. However, a lack of mass is not known to be a problem for any known object. Certain elementary particles are thought to be massless, or at least nearly so, but we won’t deal with that here. | http://faculty.eicc.edu/tgibbons/pscrgravity.htm | 13 |
55 | Many people erroneously think that plants draw their food from soil. In reality, plants manufacture their own food through photosynthesis in their green tissue. Soil provides most of the raw materials—mineral nutrients—that plants use as components for the food they produce.
Scientists currently recognize 17 elements as essential for plant growth and reproduction (see table, "Essential plant nutrients," below). These elements are divided into macronutrients—those that constitute more than 1,000 parts per million (ppm) of plant tissue—and micronutrients—those that account for less than 100 ppm of plant tissue. In the turf and ornamental industry, however, many people use different terminology and refer to N, P and K as the macronutrients or just macros. Ca, S and Mg are the secondary nutrients, and the remainder are the micronutrients, or micros.
All essential elements are necessary to plants in some amount, so a deficiency of any one of them would theoretically produce symptoms. In practice, however, deficiencies of some of the essential elements—Mo, B, Cl and Ni—are virtually unknown because they are present in most soils and plants need very little of them. C, H and O account for more than 95 percent of the dry-tissue weight of plants, but plants obtain these elements from water and air, rather than from mineral soil. Thus, these elements are never limiting to plant growth. That leaves N, P, K and Fe as the nutrients that commonly become deficient, and Ca, S, Mn, Mg, Zn and Cu that occasionally are deficient. Nutrients often become deficient due to conditions that prevent their uptake or use by plants rather than actually being absent from the soil.
Nutrients also are classified as either mobile or immobile, depending on whether the plant can transfer the nutrient from one tissue to another. Deficiencies of mobile nutrients tend to show up first in older tissue, especially leaves, because the plant will withdraw mobile nutrients from these areas to supply the needs of newer growth. Deficiencies of immobile nutrients show up first in new growth because immobile nutrients cannot be transferred within the plant. Thus, new growth suffers if external sources are inadequate. In practice, this knowledge can be quite useful for diagnosis of deficiency symptoms.
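As an illustration of this diagnostic rule, the toy Python lookup below maps a nutrient to where its deficiency tends to appear first. The mobility assignments are the ones given later in this article, and the function and dictionary names are just hypothetical examples.

```python
# A toy illustration of the diagnostic rule described above: deficiencies of
# mobile nutrients show up first on older leaves, immobile ones on new growth.
# The mobility assignments below are the ones given later in this article.

MOBILITY = {
    "N": "mobile", "P": "mobile", "K": "mobile", "Mg": "mobile",
    "Ca": "immobile", "Fe": "immobile", "Mn": "immobile",
}

def likely_symptom_location(nutrient):
    """Return where a deficiency of this nutrient tends to appear first."""
    mobility = MOBILITY.get(nutrient)
    if mobility == "mobile":
        return "older leaves first (plant withdraws the nutrient for new growth)"
    if mobility == "immobile":
        return "new growth first (nutrient cannot be transferred within the plant)"
    return "mobility not listed here"

print(likely_symptom_location("N"))   # older leaves first ...
print(likely_symptom_location("Fe"))  # new growth first ...
```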
You’ll notice in the following discussion that chlorosis— yellowing of normally green tissue—is a symptom common to many deficiencies. Though some clues help you narrow the problem down, it often is necessary to conduct soil and foliar testing to determine the cause of a chlorosis (or other) problem. Both types of testing are sometimes necessary because some deficiencies are caused by an excess of some other nutrient. Thus, adequate soil levels of a nutrient may not necessarily result in adequate tissue levels.
Most landscapes do not experience significant deficiency of nutrients other than N, P, K and Fe. Golf greens are more prone to deficiencies than other turf or ornamental sites due to their sand root zones, which do not retain nutrients well.
• Nitrogen. Plants require N, a component of many necessary compounds within plants, in relatively large quantities. Notably, N is vital for the production of chlorophyll, the green pigment involved in photosynthesis. The two major forms of N that plants obtain from soils are nitrate (NO₃⁻) and ammonium (NH₄⁺). Most other forms of N must undergo transformation (via microbial activity) into these forms before plants can use the N.
Deficiency symptoms include slow growth and chlorosis. Chlorosis occurs because N is a component of chlorophyll, the green photosynthetic pigment in leaves. Chlorophyll production slows or stops when N is inadequate, resulting in yellow coloration. Leaves turn tan and then die in more severe cases. N is a mobile nutrient, so symptoms show up on older leaves first.
Excess N promotes lush foliar growth, often at the expense of flowering, and creates a high shoot:root ratio. This often increases drought susceptibility and may delay dormancy in fall, increasing the chance for frost damage.
Plants perform best with consistent supplies of N. Slow-release fertilizers have become popular for this reason.
• Phosphorus. After N, P is the most frequently deficient nutrient. Like N, P is a component of many necessary compounds within the plant, such as DNA, RNA and energy-rich ATP that drives the synthesis and decomposition of organic compounds. Available forms include the phosphate ions H₂PO₄⁻ and HPO₄²⁻. Soil pH controls which form is present—the latter predominates in pHs above 7.0, the former in pHs below 7.0. P-deficient plants are stunted and become darker colored, sometimes almost black. P is mobile in the plant, so deficiency symptoms show first in older leaves. Excess P promotes additional root growth, which decreases the shoot:root ratio—the opposite effect of N.
Although P traditionally has been promoted as a root-growth enhancer, this benefit is usually overstated where established turf and ornamentals are concerned. However, P does seem to speed establishment of seedlings, sod and herbaceous transplants. This is reflected in the high P content of so-called starter fertilizers.
• Potassium. This nutrient is the third most commonly deficient element. It, too, is mobile in the plant, which uses K in the form of the positive ion K⁺. K is abundant in plant tissue. It plays a role in the osmotic potential of cells and therefore helps regulate turgor pressure. K also is important as an activator for many enzymes involved in photosynthesis, respiration, and protein and starch synthesis.
In most plants, K deficiency produces slight chlorosis followed by necrotic (dead) lesions. Often, leaf tips and margins are the first parts to die, giving a scorched appearance to the plant. Growth in general is stunted, as well. Excess K can cause deficiency of other nutrients such as Ca and Mg.
• Magnesium. Mg deficiency first appears as interveinal chlorosis (IVC)—yellowing between the leaf veins. Mg is available to plants as Mg²⁺, and deficiency is normally restricted to acidic soils with low cation-exchange capacity (CEC). Other deficiencies can cause IVC but only in neutral to alkaline soils, so a simple pH test often can narrow the problem down to Mg. Mg is mobile.
• Calcium. Ca deficiency, as with Mg, is mostly seen in plants growing in low-pH, low-CEC soils. Ca²⁺ is the form plants use, and it is immobile in the plant. Calcium plays a role in cell-wall formation as well as cell-division processes. Thus, deficiency often results in twisted or deformed tissues and death in shoot and root tips. Excess Ca can result in a deficiency of Mg or K.
• Sulfur. This element is rarely deficient. However, peculiar soil conditions in a few regions result in inadequate levels. S is available to plants as SO₄²⁻. Deficiency produces a general chlorosis that is difficult to distinguish from N deficiency without laboratory analysis. Excess N may cause S deficiency in leaf tissue of trees. S present as an environmental pollutant in rainfall is a significant source of S in many parts of the country.
• Iron. Fe is commonly deficient in turf and ornamentals and—after N, P and K—this element is the most frequent supplemental nutrient that grounds-care professionals apply. Fe is usually present in soil in fair amounts, and deficiencies often result from soil conditions—especially high pH—that restrict Fe uptake by the plant.
Fe deficiency produces pronounced IVC, though this will often spread to the veins as well. Fe is immobile in the plant, so symptoms first occur on younger growth.
• Manganese. Mn is not required in great amounts by plants, and deficiencies—which are not common—are most likely to occur in alkaline soils. The symptoms include IVC and, in severe cases, necrotic margins and spots. Mn is available to plants as Mn²⁺ and is immobile in plants.
• Zinc. Zn deficiency is rare in turf but occurs occasionally in ornamentals, where it causes IVC and rosette formation. Excessive Zn can reduce Fe levels.
• Boron, chlorine, copper, molybdenum and nickel are rarely or never deficient. If you ever experience a problem with any of these elements, it usually stems from excessive levels, not deficiencies. B and Cl toxicities are not uncommon in some regions, and Cu can reduce Fe levels in plants if present in high amounts.
Several of the micronutrients are available in chelate form. Chelates are much more available to plants than non-chelated forms and are the type you should use, when available, if you need to apply these nutrients.
Fertilizer labels list the fertilizer’s analysis, a three-part designation, such as 20-10-10, representing the percent content (by weight) of N, P and K respectively. Fertilizers that contain these three nutrients are known as complete. Products that contain equal amounts of each are balanced, such as 10-10-10 fertilizer. These three nutrients, because of their importance to plants and their frequent deficiency, are the primary components of commercial fertilizers. However, manufacturers often add other nutrients, as well. Labels usually state how much N is soluble and how much is insoluble, indicating how much is rapidly available and slowly available, respectively.
P content is expressed on fertilizer labels as if it were in the form of P₂O₅, even though no such compound exists in fertilizers. This is an old convention that the industry has not bothered to change. Because P₂O₅ is heavier than elemental P, you must multiply the stated content by 0.44 to get the actual content in terms of elemental P. For example, a reported P₂O₅ content of 20 percent equates to around 9 percent actual P content.
K is expressed similarly—in terms of K₂O equivalent—and requires you to multiply by 0.83 to obtain the actual amount of elemental K present in fertilizer.
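A small sketch of this label arithmetic follows, using the 0.44 and 0.83 factors above. The 20-10-10 analysis and the 1-pound-of-N-per-1,000-square-feet target are hypothetical example inputs, and the final rate calculation is the standard "pounds of N divided by percent N" conversion rather than anything specific to this article.

```python
# Label arithmetic: converting a fertilizer analysis to elemental content and
# working out an application rate. The 20-10-10 analysis and the 1 lb N per
# 1,000 sq ft target are hypothetical inputs.

def elemental_content(n_pct, p2o5_pct, k2o_pct):
    """Convert a labeled analysis to actual elemental N, P and K percentages."""
    return {
        "N": n_pct,               # N is reported directly
        "P": p2o5_pct * 0.44,     # P2O5 equivalent -> elemental P
        "K": k2o_pct * 0.83,      # K2O equivalent -> elemental K
    }

analysis = elemental_content(20, 10, 10)
print(analysis)                   # roughly {'N': 20, 'P': 4.4, 'K': 8.3}

# Standard rate conversion: product needed = desired lb of N / (percent N / 100)
target_n = 1.0                    # lb of N per 1,000 sq ft
product_needed = target_n / (analysis["N"] / 100)
print(f"Apply about {product_needed:.1f} lb of 20-10-10 per 1,000 sq ft")
# -> 5.0 lb of product
```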
AND NUTRIENT AVAILABILITY
Obviously, if a nutrient is not present in soil, the plant will suffer. Commonly though, as we’ve pointed out, the problem is not actual absence of the nutrient. Rather, soil conditions directly or indirectly prevent plants from utilizing it.
The cation nutrients Mg and Ca are two notable examples that become less available in low-pH soils. Conversely, Fe is less available in many alkaline soils. Sandy soils—putting-green root zones being the extreme—with low CEC values hold fewer nutrients than heavier soils and soils rich in organic matter. Clay and organic matter have excellent abilities to hold nutrients in soil, especially those nutrients that form positive ions, including Ca, Mg and ammonium.
If you have soil conditions that cause nutritional problems, long-term solutions rest with soil modification. In the short-term, you easily can solve most deficiencies with supplemental fertility of the type needed. Chapter 2 discusses soil conditions and amendments in more detail.
One important factor affecting soil fertility is the carbon-to-nitrogen ratio (C:N) of organic amendments. Organic materials high in carbon—especially wood products—require the activity of microorganisms for decomposition. These microbes use N as they act on the organic matter and may withdraw so much from the soil that inadequate amounts remain for plant uptake. This effect is temporary because the N eventually is released as decomposition progresses. In the meantime, which may last a year or two, N deficiency may exist. You must add supplemental N, up to 1 pound per 1,000 square feet for some materials, to counter this effect and to speed up the decomposition of the organic matter in the soil. C:N ratios below 50:1 contain enough N to avoid most problems. Higher values may indicate the need for supplemental N. Wood products such as sawdust may have C:Ns in the range of 400:1 to 500:1 and can tie up large amounts of N. You should have some idea of the C:N of amendments you use in your soil (see table, Chapter 2, for some examples).
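The rule of thumb in this paragraph can be captured in a few lines. The 50:1 threshold and the sawdust range come from the text, while the compost figure is a made-up example value.

```python
# Rule-of-thumb check for nitrogen tie-up by high-carbon amendments.
# The 50:1 threshold and the sawdust range come from the text; the compost
# value is a hypothetical example.

def may_tie_up_nitrogen(c_to_n_ratio, threshold=50.0):
    """True if the amendment's C:N ratio suggests supplemental N may be needed."""
    return c_to_n_ratio > threshold

examples = {"sawdust (roughly 400:1 to 500:1)": 450, "hypothetical finished compost": 20}
for name, ratio in examples.items():
    note = "may need supplemental N" if may_tie_up_nitrogen(ratio) else "unlikely to tie up N"
    print(f"{name}: {note}")
```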
Another factor you need to consider, especially for woody ornamentals, is soil mobility of nutrients. Some nutrients are mobile within the soil profile. Others are not. Soil pH also affects soil mobility of nutrients. The practical implication of this is that you must place immobile nutrients directly into the root zone (see “Fertilizing trees and shrubs,” page 78). This is in contrast to soluble nutrients, which you can apply to the surface and water into the root zone. This does not apply so much to turf because turf roots grow so near the surface that surface applications are generally adequate.
Let’s look at some of the types of fertilizer we apply to turf and ornamentals. Quick-release N fertilizers provide N in several forms (see table, “Synthetic N sources,” page 69). These products are the traditional fertilizers and dissolve readily in water. Therefore, they enter the soil solution rapidly and are almost immediately available to plants. They also are quickly depleted.
Slowly available sources are not as immediately available to plants but release N over a longer period and at more consistent rates, which is advantageous for several reasons.
P fertilizers are in the form of superphosphate or treble superphosphate. However, ammonium phosphates also provide P. K fertilizer is derived mainly from potash—KCl. Potassium sulfate and potassium nitrate are also important sources of K.
The secondary and micronutrients are available in forms you can apply separately (see table, “Secondary and minor nutrients,” opposite page) if the need arises. However, special mixes that contain many of the secondary and micronutrients are available, and turf managers often use this “shotgun” approach to ensure that micronutrient deficiency is not a problem.
Various combinations of fertilizer materials can provide an endless array of analyses (see “Example fertilizer label,” right), and manufacturers provide fertilizers blended specifically for almost every type of turf or ornamental in a given climate. Custom blending often is available, too.
Turf and ornamental fertilizer products typically contain N, P, and K and often Fe, as well. However, they may also include many of the other nutrients, too. This usually causes no harm, though the need for these nutrients in many situations is questionable.
• Salinity. Some fertilizers are salty—they will increase the salinity of your soil. This may or may not be a problem depending on a variety of factors, but you should be aware of the salinity of the fertilizer materials you are adding to your soil, especially if your soils are already prone to salt buildup (see table, “Salt index,” at right).
• Acidifying effects. Some fertilizers increase acidity more than others, and some have the opposite effect. You can use this as a means to alter (or maintain) pH levels by choosing a fertilizer with the desired quality (see table, “Acidifying effect,” at right).
Evidence exists that N-deficiency symptoms result not from low N levels, per se, but from unsteady levels of N and cycling between high N and low N levels. This may be one reason for the effectiveness of slow-release fertilizers and their popularity. Their gradual nutrient release helps level out the peaks and valleys of turf growth, resulting in more consistent turf quality and fewer deficiency symptoms.
Manufacturers produce two types of slow-release products: uncoated and coated. Coated products rely on semi-permeable or impermeable coatings to restrict water’s access to soluble fertilizers. Uncoated products take advantage of the low solubility of some N materials to slow their release.
• Uncoated products. Ureaform (UF) and methylene urea (MU) are similar products that result from a reaction of urea with formaldehyde. Both contain about 40 percent N in the form of long chains of molecules. Longer chains are less soluble than shorter chains and so take longer to become available in soil. UF and MU both consist of a variety of chain lengths and so release N over time. UF molecules are generally longer than MU chains, so UF releases N more slowly but over a longer period. These fertilizers ideally release N over 8 to 12 weeks. However, because they rely on microorganisms to attack the molecule and mineralize the N, this release time can vary depending on conditions, such as temperature, pH and soil moisture, that affect microbial activity. Thus, at certain times of the year, soil temperatures may be too low for UF and MU to supply adequate N to turf.
Isobutylidene diurea (IBDU). The other significant uncoated slow-release product available in the United States is IBDU. Formed by a reaction of urea with isobutyraldehyde, it contains 32 percent N and is available in granular or powder form.
IBDU slowly releases N by hydrolysis in water, after which it is soon available to plants. IBDU acts as a slow-release product because of its low solubility—only small amounts dissolve and release over time. Moisture levels primarily affect IBDU release—dry conditions delay release. Superintendents also should be aware that powdered forms they apply to greens dissolve more quickly than the granular forms they apply to fairways.
LIQUID SLOW-RELEASE FERTILIZERS
Some products have chemistry and release patterns similar to UF and MU but are liquid-applied. These provide important advantages in certain situations.
For example, liquid fertilizers allow you to tank-mix with pesticides, which is a great advantage in some operations. Plus, it’s easy to customize the mix by altering the rate of each component you use in the mix, something not so simple to achieve with granular products. Gaining these benefits without sacrificing the advantages of a slow-release product makes liquid slow-release fertilizers attractive options in many situations.
Another advantage of liquid application is more precise placement of the fertilizer than broadcast spreaders can achieve, making it efficient for areas such as golf greens and tees. Plus, liquids do not leave granular material on the green’s surface and so do not disrupt putting quality.
• Coated fertilizers. Since its introduction nearly 20 years ago, sulfur-coated urea (SCU) has enjoyed great success in the turf market. SCU consists of a urea granule with a sealing coat of sulfur plus wax. Thus, SCU supplies turf with S in addition to N; the N content varies from 30 to 38 percent.
Small cracks and imperfections in the coating layers allow some water to enter the granule and dissolve the urea, which then escapes out into the soil. Plus, microbes attack the wax coating and destroy it over time. Once a granule takes in water, it can release its urea quickly. Coatings vary in thickness and integrity, so the gradual release of urea is a result of some granules releasing urea soon, some later and others only after a long period. Overall release rates vary among products and manufacturers, who can control the coating thicknesses.
• Polymer-coated fertilizers date back to the introduction of Osmocote in the 1960s. However, many of the polymer-coated products available now are relatively recent introductions. Unlike SCU, many of these products contain other sources of N such as ammonium nitrate, as well as other nutrients such as P and K.
Manufacturers are able to manipulate the chemistry of the coatings (wherein lies the main difference between various polymer-coated products) to provide highly predictable release rates. They manufacture these products by applying successive polymer coatings to the fertilizer granule.
When water diffuses across the semipermeable polymer membrane, it dissolves some of the fertilizer inside. This creates a concentrated solution, which then diffuses back out into the soil. This continues over time until the fertilizer is completely released and all that remains is an empty polymer shell.
OTHER FERTILIZER PRODUCTS
• Natural organics. These products are derived from many different substances, such as composted sewage sludge, animal waste, feather meal and others. Nearly all of the N in these products is in organic form and relies on microbial activity for release. These products are considered slow-release fertilizers. Natural organics typically have relatively low N content and, therefore, generally cost more to apply the same amount of N. However, N-use efficiency tends to be high.
• Fertilizer/pesticide combinations. These products combine fertilizer with a pest-control product, often a herbicide or insecticide, for turf use. The economic advantages are obvious—fewer applications, fewer packages to deal with and simpler, less expensive application equipment. Thus, when appropriate, these products may offer significant time and money savings. However, a significant drawback is timing. Obviously, the proper timing for the fertilizer and pesticide must coincide or one of them will have less than maximum effectiveness. Despite this disadvantage, combination products are a popular option with turf managers.
Turf requires nutrient management significantly different from that of ornamentals, so we’ll discuss turf fertilization separately. N is the most significant nutrient, so we’ll start there and then touch on the remainder.
To understand turf’s N needs, it helps to understand the nitrogen cycle. The N cycle consists of numerous components and processes, all interconnected and dependent on each other (see Figure 1, page 72). Let’s first discuss the components, or pools, of N in the cycle and then the processes that link them together.
• Pools. N exists in many forms—all are either organic or inorganic. The forms of organic N with which most turf managers are familiar are products such as IBDU, urea, urea formaldehyde, methylene urea or composted sewage sludge. While these are important in turf-management programs, their application represents a small fraction of the total organic N in soil. The bulk of organic N in soil exists in the form of plant material (dead or alive), bacteria, fungi and other soil organisms. Humus also contains organic N. A fertile soil may contain 3,000 to 5,000 pounds of N per acre in the top 6 inches.
By comparison, the inorganic N pool, consisting of nitrate and ammonium, is much smaller—in the range of 10 to 50 pounds per acre. Inorganic N is the critical link between soil organic N and turf growth because inorganic forms are the forms plants can use. N enters the inorganic pool either by fertilizer applications or microbial breakdown of soil organic matter into ammonium by mineralization. Additions of fertilizer are predictable and easy to manage. However, the contribution of mineralization to the inorganic pool is difficult to assess because it depends on the overall size of the organic-N pool, microorganism activity, temperature, pH and other factors.
Further confounding the picture is the fact that the soil organic-N pool is not constant but expands over time, especially in a young turf site. For example, a new golf course built on heavily cultivated agricultural soil may initially have relatively low levels of soil organic N. After several years under turf, however, organic-N levels begin to increase due to fertilizer applications and recycled clippings that ultimately deposit their N in the organic pool. This pool will continue to increase in N content over several decades until it levels off.
During the initial period of rapidly increasing organic N, the flow of N is primarily into the organic pool, with little N flowing back out into the soil. However once the pool is “full,” equal amounts of N flow into and out of the pool. This is important because the flow of N from a full pool back into the pool of inorganic N is significant and can reduce the amount of N fertilizer the turf requires. Thus, a young, high-quality turf may require 6 to 8 pounds of N per 1,000 square feet annually, whereas a mature turf may succeed with 2 or 3 pounds annually. Also, the flow of N from the organic pool is fairly steady through the growing season and will support turf growth without the peaks and valleys associated with applications of quickly available N fertilizers. Superintendents who have managed older courses are well aware of this phenomenon because it’s like having a huge supply of slow-release N “in the bank.”
One of the best ways to increase your N pool is to return clippings. Research has found that you can remove as much as 4 pounds of N per 1,000 square feet annually in clippings.
• Processes. N pools in the soil are in constant flux, with flow from one pool to another. The links or paths between them are processes. We broadly group these into processes that conserve N in the system and those that result in permanent N loss from the system.
• Nitrogen loss. The three basic processes that steal N from turf systems are volatilization, denitrification and leaching (removal of clippings can be considered a fourth). Volatilization is the loss of N from the surface of the system into the atmosphere as ammonia gas (NH3). Depending on conditions, volatilization losses can range as high as 45 percent of the N you apply. Several factors increase this loss, including using ammonium-based or uncoated urea fertilizers, high soil pH, rapid drying conditions, urease activity and failure to irrigate after application. An irrigation of about 0.5 inch of water soon after application greatly reduces volatilization losses.
Denitrification is the second process that steals N from turf. Denitrification also results in the loss of N as a gas. But instead of loss as NH3, N is lost as nitrous oxide (N2O) and nitrogen gas (N2). Microbial activity causes denitrification and can result in 10 to 90 percent of applied N being lost. Conditions that favor microbial activity, such as warm, saturated soil and high fertility, also increase denitrification. Fortunately, denitrification seems to be fairly insignificant in well-aerated soil, so maintaining good drainage should reduce this avenue of N loss.
Leaching, the third avenue of N loss, is of great concern considering the potential harm of nitrates in drinking water. However, most research indicates that healthy, dense turf loses little nitrate through leaching.
• Nitrogen conservation. Several processes result in changes in the form of N without actual loss from the turf system. To illustrate, consider what happens when you apply urea to turf.
A granule of urea dissolves fairly quickly following application, especially with irrigation. Several processes act on the urea, but the most important is hydrolysis by the enzyme urease, which converts N from the urea form into the ammonium form. Urease is present all through turf systems. It is produced by many living organisms but functions independently in the environment as well.
The ammonium that urease releases can follow several paths. As a cation, it can stick to cation-exchange sites, or soil microorganisms can convert it to nitrate through nitrification. In either case, the N is now available to turfgrass roots. Urea N is available as well but is taken up more slowly than ammonium or nitrate.
Turfgrass roots can absorb 50 to 90 percent of applied N and can do so within 2 to 4 days. This is very efficient compared to most plants and demonstrates the value of healthy turf with extensive rooting. Rapid uptake fixes N in the system and prevents loss through other processes. A drawback of rapid uptake is that it causes cycling of N levels that results in inconsistent growth.
Microbial absorption competes with plants for N and may account for 10 to 30 percent of applied N. Plant and microbial assimilation of N into living tissue is called immobilization. This is the opposite of mineralization.
Living organic matter is just a temporary stopping point for N. When organisms die, they release their N back into the pool of soil organic N. Microorganisms convert fresh organic matter into humus through the complex process of decomposition. Microorganisms subsequently process the soil organic N back into NH4 by mineralization, completing the cycle. Remember that all of these pools and processes are interconnected and interdependent.
• Potassium and phosphorus. Turf’s response to P and K is not usually as visible as it is with N. However, these nutrients are just as necessary for good turf health and vigor.
P is important to root growth and therefore affects establishment, both from seed and vegetatively. Deficiencies slow establishment and reduce vigor by reducing root development. Severe deficiencies result in reddish purple leaf blades and poor growth.
K is needed in large quantities by all plants. From a practical standpoint, improved stress tolerance is the most important function of K, so a visible response may not occur if the turf is not under stress at the time. However, adequate levels are vital for good tolerance to environmental stresses and diseases. Visible deficiency symptoms include leaf-tip death and thin turf.
Turf fertilizers usually contain enough P and K to prevent serious visible deficiencies. However, soil conditions—especially low CEC—might cause low levels or reduced plant uptake to the point where vigor or stress tolerance is reduced. Therefore, soil and tissue analysis is warranted to ensure P and K levels are adequate. A lab with turf experience should perform these tests. Not only will the analyses detect deficiencies, they can tell you whether you are applying too much of these nutrients. Doing so is wasteful and could cause deficiencies of other elements such as Fe and Zn.
• Iron. Fe is the micronutrient most likely to be deficient in turf. Its primary function in the plant is in the formation of chlorophyll. Because of this, chlorosis is the main symptom of Fe deficiency. Fe chlorosis is common when soil pH is above 7.0, because alkaline conditions change Fe to a form unavailable to the plant. In low-pH soils, Fe deficiency is relatively uncommon.
Soil tests often are unreliable in reporting Fe levels. The easiest and surest way to test your turf is to apply a dose of iron to a section of turf. Turf’s response to Fe is quite rapid, and you will see a response within 48 hours or less if Fe is deficient. Turf response to Fe often is short lived so periodic applications may be necessary where Fe-availability problems exist.
• Magnesium. As already discussed, low-pH and low-CEC soil can cause Mg deficiency. This is common, therefore, on golf greens. Turf managers often mistake the chlorosis caused by Mg deficiency for Fe or N shortage. If turf does not green up after application of these two nutrients, suspect Mg deficiency. Test spray a small section of turf with Mg to confirm this. You’ll see a response in 24 hours if Mg has been lacking. Epsom salts is a widely available source of Mg you can use for this test. Use 1 teaspoon in 1 pint of water.
• Calcium can become deficient in the same conditions that reduce Mg. However, in practice, low-pH soils usually are limed long before they become Ca deficient. Because Ca is the primary component of lime, Ca deficiency symptoms are rare in the field.
• Manganese. Deficiencies are rare but may occur on golf greens. A simple test application of Mn-containing fertilizer next to a strip of turf fertilized without Mn will confirm any suspected deficiency.
• Sulfur, as already stated, is adequate in nearly all soils and also is supplied by rainfall in many areas. Even so, many fertilizer products include S, making deficiency even less likely. Again, a small test application of elemental S will confirm if S is deficient when you suspect a problem.
The remaining nutrients are not generally deficient in turf. The rare situations where they are inadequate can be dealt with by applying one of the available micronutrient solutions to turf. These products usually contain trace amounts of the micros and easily satisfy the needs of the turf.
It is difficult to generalize about fertilizer rates on turf. Different species require different amounts of N (see table, “Nitrogen needs of turfgrass species,” page 74). Climate and other environmental factors also vary nutrient demand, and intensity of use and level of maintenance are issues as well. Many turf sites, such as home lawns, can perform well with a range of fertilizer rates, depending on the level of quality the owners desire. Thus, no single formula exists for determining how much fertilizer to apply, and no perfect fertility program exists for any turf.
Fertility has many indirect effects, and the interaction between nutrient levels and other factors is often subtle and complex. Peculiar and unexpected problems sometimes arise, and grounds managers may be forced to experiment to find their own solutions to fertility problems. For example, some diseases become more prevalent with high N levels, while others react oppositely. Weed problems change with varying nutrient levels as well. Because so many factors are site specific, turf managers must find what works for their particular situation. Having said that, let’s be more specific.
Fertilizer rates usually are given in terms of pounds of N per 1,000 square feet. Thus, for example, when you hear “2 pounds of N,” it’s implied that this means 2 pounds of N per 1,000 square feet. That is the convention we’ll use here. Fertilizer products often have a nutrient ratio of 3:1:2 for N, P and K. This reflects the needs of turf for these nutrients. By keeping with the desired ratio, varying the level of N you apply also results in proportional variation of the other nutrients present in the formulation, keeping them properly balanced for turf’s needs.
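To turn a rate expressed in pounds of N per 1,000 square feet into an amount of product, divide by the N fraction on the bag and scale by the area. The short sketch below shows the arithmetic; the 18-6-12 analysis and the 5,000-square-foot lawn are hypothetical examples, not values taken from this article.

```python
def product_needed(target_n_lb_per_1000, analysis_n_percent, area_sqft):
    """Pounds of fertilizer product needed to deliver a target N rate.

    target_n_lb_per_1000 -- desired pounds of N per 1,000 square feet
    analysis_n_percent   -- the first number on the bag (percent N by weight)
    area_sqft            -- total area to be fertilized, in square feet
    """
    n_fraction = analysis_n_percent / 100.0
    lb_product_per_1000 = target_n_lb_per_1000 / n_fraction
    return lb_product_per_1000 * (area_sqft / 1000.0)

# Hypothetical example: 1 lb N per 1,000 sq ft from an 18-6-12 product
# over a 5,000-square-foot lawn.
print(round(product_needed(1.0, 18, 5000), 1), "lb of product")  # about 27.8 lb
```

Because the P and K ride along in the product's fixed ratio, hitting the target N rate automatically keeps the other nutrients in proportion.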
As a general rule, 1 pound of quickly available N should be the maximum you apply at any one time to turf. In hot summer conditions, reduce this maximum to 0.5 pound of N for cool-season turfgrasses. You can safely apply up to 3 pounds of slow-release N at once, but more than this is risky. Close-cut turf such as golf greens should receive no more than 0.5 pounds of N from a quick-release source in one application.
Ideally, N levels should be kept as even as possible. This limits ups and downs in frequency of mowing and reduces deficiency symptoms between fertilization. That is why slow-release products are useful and, in the case of quick-release sources, why it is better to apply less fertilizer more often. However, this must be balanced against the cost of labor, which will be lower with less frequent (and presumably heavier) applications. Applying 1 pound of N at a time is a middle ground that seems to work for many turf managers. Here are some of the major turf uses and some typical N requirements for them. Keep in mind that these are only examples and that rates vary widely according to climate, species, turf use and maintenance intensity.
A reasonable range for golf greens is 0.75 to 1.5 pounds of N every 2 to 6 weeks during the growing season. This is a large range, and the actual rate depends on climate, rainfall and length of growing season. For example, warm, rainy, tropical climates result in nutrient leaching, rapid growth and a long (sometimes continuous) growing season. N demand in this type of situation could be close to 2 pounds of N per month all year long. You should base P and K (and other nutrient) applications on soil tests, but you can expect demand for these nutrients also to be high on golf greens. Temperate climates result in considerably less demand.
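A per-application rate and an interval imply an annual N total, which is worth checking against your overall budget. The sketch below assumes a 28-week growing season purely for illustration.

```python
def annual_n(rate_lb_per_1000, interval_weeks, season_weeks):
    """Approximate annual N (lb per 1,000 sq ft) from a repeated application."""
    applications = season_weeks / interval_weeks
    return rate_lb_per_1000 * applications

# Hypothetical example: 1 lb N every 4 weeks over a 28-week growing season.
print(annual_n(1.0, 4, 28), "lb N per 1,000 sq ft per year")  # 7.0
```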
Fairway rates vary according to region and species, but 2 to 3 pounds of N annually is a reasonable average figure. The need will be higher in tropical climates, perhaps more than double. The objective is to apply the minimum necessary to maintain a dense, vigorous turf without creating undue mowing requirements.
Depending on many factors, 2 to 6 pounds of N annually are required by lawn turf. Low-N turf such as buffalograss may need no more than 1 pound each year to look its best.
Athletic fields require up to 10 pounds of N annually depending on the level of play. However, if fertilizations result in succulent turf, you should adjust rates or timing: succulent turf is more tender and suffers more from traffic. Conversely, inadequate levels will prevent turf from recovering from damage.
One obvious principle for timing fertilization is that nutrients must be present when turfgrasses are actively growing. For cool-season turfgrasses, that means fall and spring; for warm-season species, summer. Conversely, you should withhold fertilizations when turf is dormant, whether during summer drought or winter dormancy. Nutrients applied at these times will largely be lost by the time the turf becomes active again.
• Cool-season turf. During late fall and early winter, air temperatures may be low enough that shoot growth is minimal or non-existent. However, soil may still be relatively warm and root growth continuing. Fertilization during this time is beneficial because the plants are still photosynthesizing and producing food. However, because it’s too cold for shoot growth to occur, this food is either stored for use the following spring or used for root growth. Thus, fertilizing at this time increases winter hardiness and promotes earlier greenup and spring vigor. If you could make just one application of fertilizer each year, this would be the time. Though N is the main reason for fall applications, K is known to increase low-temperature tolerance.
Usually, you will apply more N than just a fall application. Mid-spring is a typical time to apply additional fertilizer. This aids the turf during its spring growth push. However, increased N during the warm season increases disease susceptibility in cool-season species. Thus, although you may wish to add additional fertilizer in late spring and summer to turf that is irrigated throughout the season, you must balance the need for nutrients with increased problems you may encounter. Any such applications should be light.
• Warm-season turf. If you had to make just one application to warm-season turf, it would be in late spring, just as turf is starting its annual push. Additional applications are helpful through the summer, as the growth of warm-season species is at its peak during summer months. However, you should avoid late-season fertilizations because this will reduce winter hardiness.
• High-use turf. High-use and close-cut turf require specialized fertilization practices. Golf greens need more consistent and continuous nutrient supplies. Therefore, superintendents fertilize often with smaller nutrient doses. This so-called spoonfeeding takes place throughout the growing season. Athletic fields likewise benefit from numerous lighter applications. Because many sporting events take place during late fall, winter and early spring, it may be beneficial to fertilize earlier in the spring and later in the fall than you would ordinarily.
As you can see, many factors affect turf needs, and your fertilizer choices should reflect those needs. For example, a product with a high acidity rating may be a better choice for alkaline soils. Low-CEC soils benefit from higher fertilizer levels of cation nutrients. Inclusion of many of the secondary or micronutrients should be based on visible symptoms or soil and tissue tests. Of course, price and availability of fertilizers matter as well. You can satisfy N needs with a variety of materials, so other factors may be more critical than N form.
Fertilizers are delivered to turf in two basic forms: liquid and granular. Both have relative advantages and drawbacks.
• Liquid application results in the most rapid uptake of nutrients because of foliar absorption. When you use heavier amounts of water (3 to 5 gallons per 1,000 square feet), this is referred to as liquid fertilization because much of the solution washes into the soil where roots can take it up. Under about 0.5 gallon per 1,000 square feet, we call it liquid feeding. In this case, most of the liquid remains on the foliage, and a great proportion of the nutrients are directly absorbed by the leaf blades. Use only low rates (under 0.125 pound of N) for liquid feeding to avoid burning (see the quick rate check following this discussion).
Although spray rigs can cover a large amount of turf efficiently, they cannot compare to large spreaders in coverage speed. However, they are an efficient method of spoonfeeding turf that many turf managers use with success. Further, liquid applications allow operators to tank-mix pesticides with the fertilizer, which is a great convenience.
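Before liquid feeding, it helps to confirm that the N delivered per 1,000 square feet stays under the burn threshold mentioned above (about 0.125 pound of N). The sketch below is a minimal check; the tank load and coverage figures are hypothetical.

```python
def n_per_1000_sqft(lb_n_in_tank, area_covered_sqft):
    """Pounds of N delivered per 1,000 square feet by one tank load."""
    return lb_n_in_tank / (area_covered_sqft / 1000.0)

# Hypothetical example: 2 lb of N in a tank that covers 20,000 sq ft.
rate = n_per_1000_sqft(2.0, 20000)
print(round(rate, 3), "lb N per 1,000 sq ft")  # 0.1
print("OK for liquid feeding" if rate <= 0.125 else "Too hot -- dilute or cover more area")
```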
• Fertigation is the practice of applying fertilizers through an irrigation system. Fertigation is problematic without high irrigation uniformity, because varying levels of fertilization can become visible. Even so, this practice is becoming more popular because it reduces spray applications.
• Granular application is the most widely used method in most situations. Granular formulations are easy to handle, and the equipment is simpler to clean and maintain. Further, many pesticide/fertilizer combination products are available for added efficiency.
√ Drop spreaders are highly accurate tools for placing fertilizer exactly where you want it without material ending up in non-target areas. However, they cover turf rather slowly and may result in skips if the applicator is not careful. This problem is reduced by halving the application rate and then making two passes at right angles to one another.
√ Broadcast spreaders are used by many turf applicators because of their speed of application. Placement of material is not as precise as with drop spreaders, but their efficiency is high and that’s why they are the preferred type of spreader in most situations.
One problem with broadcast spreaders is differential distribution of different materials in a granular mix. This happens because particles of different size and density travel different distances when the spreader throws them. For this reason, you should apply materials of greatly different size or weight separately or with a drop spreader.
Broadcast spreaders come in a range of sizes and capacities, from tiny models suitable for homeowners to professional models with large-capacity hoppers. Belly grinders are over-the-shoulder types that hang across your chest and have a hand-turned crank that throws the fertilizer.
√ Pendulum spreaders are suitable for large areas such as parks, fairways and athletic fields. They typically are tractor-mounted and PTO-driven, with large-capacity hoppers. They use a discharge spout or tube that swings back and forth to spread the material. Spreading widths as high as 40 to 60 feet allow rapid coverage of large areas.
Follow all fertilizations (except liquid feeding) with irrigation of at least 0.5 inch of water to move the fertilizer into the turf root zone. Further, calibrate application equipment frequently to ensure you’re applying the correct amount of fertilizer.
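Calibration amounts to catching and weighing the spreader's output over a known area and comparing it with the target rate. The following sketch shows that comparison; the swath width, pass length and catch weight are hypothetical numbers, not manufacturer specifications.

```python
def calibration_error(lb_collected, swath_ft, distance_ft, target_lb_per_1000):
    """Percent deviation of actual spreader output from the target product rate."""
    area_sqft = swath_ft * distance_ft
    actual_lb_per_1000 = lb_collected / (area_sqft / 1000.0)
    return 100.0 * (actual_lb_per_1000 - target_lb_per_1000) / target_lb_per_1000

# Hypothetical example: 3.4 lb of product caught over a 6-ft swath and a 100-ft pass,
# against a target of 5 lb of product per 1,000 sq ft.
print(round(calibration_error(3.4, 6, 100, 5.0), 1), "% off target")  # about 13.3
```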
If you suspect a tree or shrub nutrient deficiency, remember that woody plant fertilization is not an exact science. Compared to turfgrasses, trees and shrubs do not provide easy-to-read deficiency symptoms. While trees and shrubs need the same essential nutrients as all other plants, deficiencies usually result in simple growth reduction. Specific symptoms, though less common, include:
- Stunted, small leaves
- Leaf distortions
- Dead spots
- Early leaf drop.
Many other factors can produce similar symptoms, so you should not jump to the conclusion that fertility is the problem. Plus, nutrient interactions can be complex and are not well understood. Excess levels of one can result in deficiency of others. In this case, the basic problem is an excess of a nutrient, not a deficiency.
Unfortunately, tissue analysis is not always useful either. Researchers have not documented the relationships between foliar symptoms, tissue-analysis results and fertilizer responses well enough to make meaningful recommendations in many cases. This does not mean tissue analysis is not useful—it often is vital for solving specific problems. However, our knowledge of tree nutrition has not yet evolved to the point where it provides effective practical solutions to all nutrient problems.
Because of the complexity of a plant’s nutrient status, many researchers caution that just-in-case fertilizing is not necessarily a good practice and could cause more harm than good. Many others, however, recommend regular fertilizing and have good results. At this point, no definitive word exists on what is the ideal practical approach to fertilizing trees and shrubs. The discussion below describes the practices of those who recommend fertilizing annually. Unless symptoms become apparent, micronutrients are not usually necessary, and annual applications should consist of N, P and K. But keep in mind that many authorities discourage the use of fertilizers on a “whether-it-needs-it” basis and recommend it only when trees and shrubs display lack of vigor or specific symptoms (see table, “Nutrient-deficiency symptoms of trees,” page 76).
Soil factors, as discussed, affect nutrient availability. In particular, pH above or below the 6.0 to 7.0 range can make some nutrients unavailable. Another common problem for trees is competition with turfgrasses. If N is in short supply, trees usually suffer more than turfgrass growing in the same site.
Research has repeatedly shown that fertilization at planting time has no effect. Container plants or smaller specimens that can rapidly establish may respond to fertilization within the first year. Larger trees do not respond to fertilizer for a few years, so wait until the third or fourth year after transplanting.
Once established (this takes about 1 year for each inch of caliper), trees grow substantially faster with fertilizer. This is the time to concentrate most on fertility—after establishment, but while the tree is still young. Rapid growth is desirable at this time. As trees become mature, they respond less to heavy fertilization but may still benefit from light amounts. By maintaining reasonable vigor, older trees are less vulnerable to pests and diseases. However, you should remember that fertilizing large trees may encourage growth of specimens that already are as large as the surrounding landscape can safely or aesthetically accommodate.
Trees with restricted roots are special cases. Heavy fertilization may exaggerate their problems by increasing their shoot systems without a similar increase in roots. Unfortunately, fertilizer—including phosphorus—will not directly increase root growth in such trees and shrubs.
Some debate exists about the proper time to fertilize trees. Traditional recommendations suggest spring or early summer applications. However, some research indicates that trees absorb nutrients more readily in the fall. Conversely, fall applications risk some of the N being unused and lost from the root zone while trees are dormant. Apparently, neither time offers dramatically better benefits than the other. If you make fall applications, be careful not to over-apply N, which some arborists feel could delay fall hardening.
When turf covers the root zone, trees may benefit more from spring fertilization, before turf begins rapid growth. Or, with warm-season turf, apply fertilizer in fall after turf has gone dormant. Both of these strategies prevent turf from taking as much of the N before it can move lower into the tree’s root zone.
MATERIALS AND RATES
Some recommendations base rates on trunk diameter, but fail to consider the area of application. If you use a diameter-based rate for a large tree in a restricted space, such as a planter, you may be adding enough salts to the soil to injure the tree.
Tree-fertilizer materials are not fundamentally different than turf fertilizers. Ratios of about 3:1:1 or 3:1:2 are best for tree fertilization. Use at least 25 percent slow-release N. Standard rates suggest 3 pounds of N per 1,000 square feet annually over the root zone. Evergreens should receive about half of that amount.
For the fertilizer to be effective, it must reach the root zone. For most trees in most soils, the root zone is a uniform mat just below the soil surface. The small absorbing roots are in the top 4 to 8 inches of soil and extend from the trunk two or three times as far as the branches. Buildings, pavement and neighboring trees can restrict roots from spreading normally.
For surface applications, spread the fertilizer over the ground out to the drip line and then one-third farther. Afterward, lightly cultivate the soil with a rake. Water the area thoroughly after the application.
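The treated circle described above (drip line plus one-third farther) and the 3-pound annual N rate suggested under "MATERIALS AND RATES" translate into a simple area-and-amount calculation. The sketch below assumes a roughly circular root zone and a hypothetical 15-foot drip-line radius.

```python
import math

def surface_application(drip_line_radius_ft, n_rate_lb_per_1000=3.0):
    """Area to treat (drip line plus one-third farther) and total N needed.

    Uses the 3 lb N per 1,000 sq ft annual rate suggested in the text;
    halve the rate for evergreens.
    """
    treated_radius = drip_line_radius_ft * (4.0 / 3.0)  # one-third beyond the drip line
    area_sqft = math.pi * treated_radius ** 2
    total_n_lb = n_rate_lb_per_1000 * area_sqft / 1000.0
    return area_sqft, total_n_lb

# Hypothetical example: a tree with a 15-ft drip-line radius.
area, n_lb = surface_application(15)
print(round(area), "sq ft;", round(n_lb, 1), "lb N")  # about 1257 sq ft; about 3.8 lb N
```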
Broadcasting soil-mobile elements, such as N, over the soil surface and watering them in supplies the entire root zone. However, you must apply soil-immobile elements such as P and K through 8- to 12-inch-deep holes.
Drill applications offer several advantages:
They put P and K—nutrients relatively immobile in the soil—deeper into the root zone.
They reduce turf competition for nutrients.
They aerate soil and promote deeper rooting.
Drill applications involve drilling 2-inch-wide holes 8 to 12 inches deep around the tree. You can use an electric drill with an auger bit or a gas-powered auger. Start about 3 to 4 feet from the trunk of large trees and extend to the drip line or slightly further.
Drill applications do not require different rates of N than broadcast applications, but you must do some extra calculations to determine the application pattern (see “Drill-application rates,” page 77).
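The sidebar mentioned above is not reproduced here, but the kind of arithmetic it involves is straightforward: estimate the treated area, divide by the hole spacing to get a hole count, then split the total product evenly among the holes. The sketch below assumes a square grid of holes and hypothetical values for spacing, drip-line radius and product weight.

```python
import math

def drill_pattern(drip_line_radius_ft, hole_spacing_ft, total_product_lb,
                  inner_radius_ft=3.0):
    """Rough count of drill holes on a square grid and product per hole.

    inner_radius_ft -- keep holes this far from the trunk (the text suggests 3 to 4 ft)
    """
    treated_area = math.pi * (drip_line_radius_ft ** 2 - inner_radius_ft ** 2)
    holes = max(1, round(treated_area / hole_spacing_ft ** 2))
    return holes, total_product_lb / holes

# Hypothetical example: 15-ft drip-line radius, holes on 2-ft centers,
# 10 lb of fertilizer product to distribute.
holes, lb_per_hole = drill_pattern(15, 2.0, 10.0)
print(holes, "holes;", round(lb_per_hole * 16, 1), "oz per hole")  # about 170 holes; about 0.9 oz
```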
Liquid injection is similar to drilling in that you place the fertilizer directly in the root zone. However, the fertilizer is, in this case, a liquid solution, and this method reduces the required time and labor. Use a sturdy gun with side-injection ports on the needle. These help distribute the liquid and prevent clogging. Use any spray rig that develops the necessary pressure—150 to 200 psi.
You can use as few as 8 or 10 gallons of solution or as many as 40 or 50 gallons. It’s more important to know how much fertilizer you’re injecting into each spot (see box, “Liquid soil injections,” below right).
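Whatever total solution volume you choose, the figure that matters is how much N each injection point receives. A minimal sketch, assuming the fertilizer is mixed uniformly into the tank and that each spot receives the same volume (all numbers hypothetical):

```python
def per_injection(total_n_lb, gallons_solution, gallons_per_spot):
    """N delivered at each injection point, given a uniformly mixed solution."""
    spots = gallons_solution / gallons_per_spot
    return total_n_lb / spots, spots

# Hypothetical example: 3 lb of N mixed into 30 gallons, injected 0.5 gallon per spot.
n_per_spot, spots = per_injection(3.0, 30.0, 0.5)
print(int(spots), "spots;", round(n_per_spot * 16, 2), "oz N per spot")  # 60 spots; 0.8 oz
```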
Foliar applications. Applying nutrient solutions to foliage is a rapid, effective way to deliver nutrients to plants, which absorb them directly. However, the effects are relatively short-lived. Therefore, you should use this method in conjunction with soil fertilization for quick effect as well as longer-lasting fertility.
Foliar applications consist of spraying foliage with the same equipment and techniques you’d use for spraying pesticides. You should spray to the point of runoff and ensure complete coverage. The use of surfactants may increase plant uptake. Foliage burn is possible with many products so do not exceed the recommended rate.
Trunk injections. Several techniques allow you to perform trunk injections. One involves drilling holes and inserting injection tees. To these you attach tubes that deliver pressurized nutrient solutions into the sap stream. Another system operates by a similar principle with a small pressurized capsule attached to the feeder tube. A tap with a mallet breaks the seal and starts the injection. A third system also requires you to drill holes. But you then insert small capsules into the holes, tapping them in with a mallet or similar tool. The sap then carries the nutrients to other parts of the tree.
The value of trunk injection is unquestionable where pesticides are concerned—it places them directly into the plant’s tissue to an extent difficult or impossible to achieve with external applications. Although trunk injections also effectively deliver nutrients to the plant, many arborists feel that drilling holes in the trunk is not justified when other effective techniques are available.
TO LEARN MORE
Many situations—golf courses, athletic fields, difficult soil types—demand specialized fertility practices. To learn about specific practices in these situations, talk to other professionals in your area to see what works for them. Extension agents are another valuable source of recommendations and so are manufacturer representatives. Finally, remember that each situation is unique: Tinkering with materials, rates and timing can help you find methods that work best for your site.
Race (classification of human beings)
The term race or racial group usually refers to the categorization of humans into populations or groups on the basis of various sets of heritable characteristics. The physical features commonly seen as indicating race are salient visual traits such as skin color, cranial or facial features and hair texture.
Conceptions of race, as well as specific ways of grouping races, vary by culture and over time, and are often controversial for scientific as well as social and political reasons. The controversy ultimately revolves around whether or not the socially constructed and perpetuated beliefs regarding race are biologically warranted; and the degree to which differences in ability and achievement are a product of inherited "racial" (i.e., genetic) traits.
The term race is often used in taxonomy as a synonym for subspecies; in this sense, human races are said not to exist, as taxonomically all humans are classified as the subspecies Homo sapiens sapiens. Many scientists have pointed out that traditional definitions of race are imprecise and arbitrary, have many exceptions and gradations, and that the number of races delineated varies according to the culture making the racial distinctions. Thus, those rejecting the notion of race typically do so on the grounds that such definitions, and the categorizations which follow from them, are contradicted by the results of genetic research.
Today many scientists study human genotypic and phenotypic variation using concepts such as "population" and "clinal gradation". Large parts of the academic community take the position that, while racial categories may be marked by sets of common phenotypic or genotypic traits, the popular idea of "race" is a social construct without base in scientific fact. Nonetheless, when divorced from its popular connotations, the concept of race may be useful. According to forensic anthropologist George W. Gill, blanket "race denial" not only contradicts biological evidence, but may stem from "politically motivated censorship" in the belief that "race promotes racism".
The concept of race may vary from country to country; that is, it changes according to specific cultures. For example, in the United States the term race is used in the description of individuals (e.g., white, black, Latin), whereas in Italy it applies only to a few domestic species; it is not applied to wild animals or to humans.
According to biologists and anthropologists, the genus Homo differed by only about 1%-2% from its nearest cousin, Pan (the chimpanzee), from which it diverged about 4 million years ago. The genus Homo has included several species: Homo habilis, Homo erectus, Homo neanderthalensis, and the lone survivor, Homo sapiens. African and Asian peoples became very slightly differentiated some 200,000 years ago, and the various ethnic groups in Europe became differentiated from those in Asia only about 100,000 years ago. Since Africans, Asians, and Europeans became recognizably different very recently in evolutionary terms, they all have only very minor local adaptations and very little genetic diversity, contrasting markedly with many other creatures that range over such vast and diverse areas. All Homo sapiens are equally capable of cognition, communication and interbreeding regardless of appearance or location.
In ancient civilizations
Given visually complex social relationships, humans presumably have always observed and speculated about the physical differences among individuals and groups. But different societies have attributed markedly different meanings to these distinctions. For example, the Ancient Egyptian sacred text called Book of Gates identifies four categories that are now conventionally labeled "Egyptians", "Asiatics", "Libyans", and "Nubians", but such distinctions tended to conflate differences as defined by physical features such as skin tone, with tribal and national identity.
Classical civilizations from Rome to China tended to invest much more importance in familial or tribal affiliation than with one's physical appearance (Dikötter 1992; Goldenberg 2003). Ancient Greek and Roman authors also attempted to explain and categorize visible biological differences among peoples known to them. Such categories often also included fantastical human-like beings that were supposed to exist in far-away lands. Some Roman writers adhered to an environmental determinism in which climate could affect the appearance and character of groups (Isaac 2004). In many ancient civilizations, individuals with widely varying physical appearances became full members of a society by growing up within that society or by adopting that society's cultural norms (Snowden 1983; Lewis 1990).
Julian the Apostate was an early observer of differences among humans based on ethnic, cultural and geographic traits, but as the idea of race had not yet been conceptualized, he believed that they were proof of randomness and of the nonexistence of "Providence":
Come, tell me why it is that the Celts and the Germans are fierce, while the Hellenes and Romans are, generally speaking, inclined to political life and humane, though at the same time unyielding and warlike? Why the Egyptians are more intelligent and more given to crafts, and the Syrians unwarlike and effeminate, but at the same time intelligent, hot-tempered, vain and quick to learn? For if there is anyone who does not discern a reason for these differences among the nations, but rather declaims that all this so befell spontaneously, how, I ask, can he still believe that the universe is administered by a providence?—Julian, the Apostate.
Medieval models of race mixed Classical ideas with the notion that humanity as a whole was descended from Shem, Ham and Japheth, the three sons of Noah, producing distinct Semitic (Asiatic), Hamitic (African), and Japhetic (Indo-European) peoples. In the 14th century, the Islamic sociologist Ibn Khaldun, an adherent of environmental determinism, wrote that black skin was due to the hot climate of sub-Saharan Africa and not due to the descendants of Ham being cursed.
In the 9th century, Al-Jahiz, an Afro-Arab biologist and Islamic philosopher, the grandson of a Zanj (Bantu) slave, was an early adherent of environmental determinism and explained how the environment can determine the physical characteristics of the inhabitants of a certain community. He used his theories on the struggle for existence and environmental determinism to explain the origins of different human skin colors, particularly black skin, which he believed to be the result of the environment. He cited a stony region of black basalt in the northern Najd as evidence for his theory:
"[It] is so unusual that its gazelles and ostriches, its insects and flies, its foxes, sheep and asses, its horses and its birds are all black. Blackness and whiteness are in fact caused by the properties of the region, as well as by the God-given nature of water and soil and by the proximity or remoteness of the sun and the intensity or mildness of its heat."
Age of Discovery
The word "race", along with many of the ideas now associated with the term, were first coined during the age of exploration, a time of European imperialism, exploration, technological superiority and colonization. As Europeans encountered people from different parts of the world, they speculated about the physical, social, and cultural differences among various human groups. The rise of the Atlantic slave trade, which gradually displaced an earlier trade in slaves from throughout the world, created a further incentive to categorize human groups to justify the subordination of African slaves.
Drawing on Classical sources and on their own internal interactions—the hostility between the English and Irish, for example, was a powerful influence on early thinking about the differences between people—Europeans began to sort themselves and others into groups associated with physical appearance and with deeply ingrained behaviors and capacities. A set of folk beliefs took hold that linked inherited physical differences between groups to inherited intellectual, behavioral, and moral qualities. Although similar ideas can be found in other cultures (Lewis 1990; Dikötter 1992), they appear not to have had as much influence on social structures as they did in Europe and the parts of the world colonized by Europeans, although conflicts between ethnic groups have existed throughout history and across the world.
The first scientific attempts to classify humans by categories of race date from the 17th century. The first post-Classical published classification of humans into distinct races seems to be François Bernier's Nouvelle division de la terre par les différents espèces ou races qui l'habitent ("New division of Earth by the different species or races which inhabit it"), published in 1684.
17th and 18th century
According to philosopher Michel Foucault, theories of both racial and class conflict can be traced to 17th century political debates about innate differences among ethnicities. In England, radicals such as John Lilburne emphasised conflicts between Saxon and Norman peoples. In France, Henri de Boulainvilliers argued that the Germanic Franks possessed a natural right to leadership, in contrast to descendants of the Gauls. In the 18th century, the differences among human groups became a focus of scientific investigation (Todorov 1993). Initially, scholars focused on cataloguing and describing "The Natural Varieties of Mankind," as Johann Friedrich Blumenbach titled his 1775 text (which established the five major divisions of humans still reflected in some racial classifications, i.e., the Caucasoid race, Mongoloid race, Ethiopian race (later termed the Negroid race), American Indian race, and Malayan race).
From the 17th through the 19th centuries, the merging of folk beliefs about group differences with scientific explanations of those differences produced what one scholar has called an "ideology of race". According to this ideology, races are primordial, natural, enduring and distinct. It was further argued that some groups may be the result of mixture between formerly distinct populations, but that careful study could distinguish the ancestral races that had combined to produce admixed groups.
The 19th century saw attempts to change race from a taxonomic to a biological concept. In the 19th century, several natural scientists wrote on race: Georges Cuvier, Charles Darwin, Alfred Wallace, Francis Galton, James Cowles Pritchard, Louis Agassiz, Charles Pickering, and Johann Friedrich Blumenbach. As the science of anthropology took shape in the 19th century, European and American scientists increasingly sought explanations for the behavioral and cultural differences they attributed to groups (Stanton 1960). For example, using anthropometrics, invented by Francis Galton and Alphonse Bertillon, they measured the shapes and sizes of skulls and related the results to group differences in intelligence or other attributes (Lieberman 2001).
These scientists made three claims about race: first, races are objective, naturally occurring divisions of humanity; second, there is a strong relationship between biological races and other human phenomena (such as forms of activity and interpersonal relations and culture, and by extension the relative material success of cultures), thus biologizing the notion of race, as Foucault demonstrated in his historical analysis; third, race is therefore a valid scientific category that can be used to explain and predict individual and group behavior. Races were distinguished by skin color, facial type, cranial profile and size, texture and color of hair. Moreover, races were almost universally considered to reflect group differences in moral character and intelligence.
The eugenics movement of the late 19th and early 20th centuries, inspired by Arthur Gobineau's An Essay on the Inequality of the Human Races (1853–1855) and Vacher de Lapouge's "anthroposociology", asserted as self-evident the biological inferiority of particular groups (Kevles 1985). In many parts of the world, the idea of race became a way of rigidly dividing groups by culture as well as by physical appearances (Hannaford 1996). Campaigns of oppression and genocide were often motivated by supposed racial differences (Horowitz 2001).
In Charles Darwin's most controversial book, The Descent of Man, he made strong suggestions of racial differences and European superiority. In Darwin's view, stronger tribes of humans always replaced weaker tribes. As savage tribes came in conflict with civilized nations, such as England, the less advanced people were destroyed. Nevertheless, he also noted the great difficulty naturalists had in trying to decide how many "races" there actually were (Darwin was himself a monogenist on the question of race, believing that all humans were of the same species and finding race to be a somewhat arbitrary distinction among some groups):
Man has been studied more carefully than any other animal, and yet there is the greatest possible diversity amongst capable judges whether he should be classed as a single species or race, or as two (Virey), as three (Jacquinot), as four (Kant), five (Blumenbach), six (Buffon), seven (Hunter), eight (Agassiz), eleven (Pickering), fifteen (Bory St. Vincent), sixteen (Desmoulins), twenty-two (Morton), sixty (Crawfurd), or as sixty-three, according to Burke. This diversity of judgment does not prove that the races ought not to be ranked as species, but it shows that they graduate into each other, and that it is hardly possible to discover clear distinctive characters between them.
The 20th-century racial classification by American anthropologist Carleton S. Coon divided humanity into five races: Caucasoid, Mongoloid, Australoid, Congoid and Capoid.
(Figure: Carleton Coon's map of the races after the Pleistocene.)
Coon and his work drew some charges of obsolete thinking or outright racism from a few critics, but some of the terminology he employed continues to be used even today, although the "-oid" suffixes now have in part taken on negative connotations.
In the 21st century, Coon's role came under further critical scrutiny when Prof. John P. Jackson Jr. noted that Coon "actively aided the segregationist cause in violation of his own standards for scientific objectivity."
Models of human evolution
In a 1995 article, Leonard Lieberman and Fatimah Jackson suggested that any new support for a biological concept of race will likely come from another source, namely, the study of human evolution. They therefore ask what, if any, implications current models of human evolution may have for any biological conception of race.
Today, all humans are classified as belonging to the species Homo sapiens and sub-species Homo sapiens sapiens. However, this is not the first species of hominids: the first species of genus Homo, Homo habilis, evolved in East Africa at least 2 million years ago, and members of this species populated different parts of Africa in a relatively short time. Homo erectus evolved more than 1.8 million years ago, and by 1.5 million years ago had spread throughout Europe and Asia. Virtually all physical anthropologists agree that Homo sapiens evolved out of Homo erectus.
Anthropologists have been divided as to whether Homo sapiens evolved as one interconnected species from H. erectus (called the Multiregional Model, or the Regional Continuity Model), or evolved only in East Africa, and then migrated out of Africa and replaced H. erectus populations throughout Europe and Asia (called the Out of Africa Model or the Complete Replacement Model). Anthropologists continue to debate both possibilities, and the evidence is technically ambiguous as to which model is correct, although most anthropologists currently favor the Out of Africa model.
Lieberman and Jackson argued that while advocates of both the Multiregional Model and the Out of Africa Model use the word race and make racial assumptions, none define the term. They conclude that "Each model has implications that both magnify and minimize the differences between races. Yet each model seems to take race and races as a conceptual reality. The net result is that those anthropologists who prefer to view races as a reality are encouraged to do so" and that students of human evolution would be better off avoiding the word race, instead describing genetic differences in terms of populations and clinal gradations.
Race as subspecies
With the advent of the modern synthesis in the early 20th century, many biologists sought to use evolutionary models and population genetics in an attempt to formalise taxonomy. The Biological Species Concept (BSC) is the most widely used system for describing species; it defines a species as a group of organisms that interbreed in their natural environment and produce viable offspring. In practice, species are not classified according to the BSC but according to typology by the use of a holotype, due to the difficulty of determining whether all members of a group of organisms do or can in practice interbreed. BSC species are routinely classified at the subspecific level, though this classification is conducted differently for different taxa; for mammals the normal taxonomic unit below the species level is usually the subspecies.
More recently the Phylogenetic Species Concept (PSC) has gained a substantial following. The PSC is based on the idea of a least-inclusive taxonomic unit (LITU); in phylogenetic classification no subspecies can exist, because any such group would automatically constitute a LITU (any monophyletic group). Technically, species cease to exist, as do all hierarchical taxa: a LITU is effectively defined as any monophyletic taxon. Phylogenetics is strongly influenced by cladistics, which classifies organisms based on evolution rather than on similarities between groups of organisms. In biology the term "race" is used with caution because it can be ambiguous: "'Race' is not being defined or used consistently; its referents are varied and shift depending on context. The term is often used colloquially to refer to a range of human groupings. Religious, cultural, social, national, ethnic, linguistic, genetic, geographical and anatomical groups have been and sometimes still are called 'races'". Generally, when it is used it is synonymous with subspecies. One main obstacle to identifying subspecies is that, while subspecies is a recognised taxonomic term, it has no precise definition.
Species of organisms that are monotypic (i.e., form a single subspecies) display at least one of these properties:
- All members of the species are very similar and cannot be sensibly divided into biologically significant subcategories.
- The individuals vary considerably but the variation is essentially random and largely meaningless so far as genetic transmission of these variations is concerned (many plant species fit into this category, which is why horticulturists interested in preserving, say, a particular flower color avoid propagation from seed, and instead use vegetative methods like propagation from cuttings).
- The variation among individuals is noticeable and follows a pattern, but there are no clear dividing lines among separate groups: they fade imperceptibly into one another. Such clinal variation displays a lack of allopatric partition between groups (i.e., a clearly defined boundary demarcating the subspecies), which is usually required before they are recognised as subspecies.
A polytypic species has two or more subspecies. These are separate populations that are more genetically different from one another and more reproductively isolated; gene flow between these populations is much reduced, leading to genetic differentiation.
Traditionally, subspecies are seen as geographically isolated and genetically differentiated populations; or, to put it another way, "the designation 'subspecies' is used to indicate an objective degree of microevolutionary divergence". One objection to this idea is that it does not identify any degree of differentiation. Therefore, any population that is somewhat biologically different could be considered a subspecies, even down to the level of a local population. As a result, it is necessary to impose a threshold on the level of difference that is required for a population to be designated a subspecies.
This effectively means that populations of organisms must have reached a certain measurable level of difference to be recognised as subspecies. Dean Amadon proposed in 1949 that subspecies be defined according to the seventy-five percent rule, under which at least 75% of a population must lie outside 99% of the range of another population for a given defining morphological character, or set of characters. The seventy-five percent rule still has defenders, but other scholars argue that it should be replaced with a ninety or ninety-five percent rule.
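As a concrete illustration, the following minimal sketch (not Amadon's original procedure) checks the seventy-five percent rule for a single hypothetical morphological character measured in two simulated populations; the populations, the character and all numbers are made up for illustration.

```python
# Minimal sketch of the seventy-five percent rule for one character:
# at least 75% of the focal population must lie outside the central 99%
# range of the reference population.  All data are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical wing-length measurements (mm) for two bird populations.
pop_a = rng.normal(loc=52.0, scale=2.0, size=500)
pop_b = rng.normal(loc=60.0, scale=2.5, size=500)

def passes_75_rule(focal, reference, pct=75.0, coverage=99.0):
    """True if at least pct% of `focal` lies outside the central
    coverage% range of `reference` for this character."""
    lo, hi = np.percentile(reference, [(100 - coverage) / 2,
                                       100 - (100 - coverage) / 2])
    outside = np.mean((focal < lo) | (focal > hi)) * 100
    return outside >= pct, outside

ok_ab, pct_ab = passes_75_rule(pop_a, pop_b)
ok_ba, pct_ba = passes_75_rule(pop_b, pop_a)
print(f"A outside 99% range of B: {pct_ab:.1f}% -> rule satisfied: {ok_ab}")
print(f"B outside 99% range of A: {pct_ba:.1f}% -> rule satisfied: {ok_ba}")
```

In this illustration the rule is applied in both directions; in practice, which population is treated as the reference, how the character range is estimated, and which characters are chosen are further judgment calls, which is part of why the threshold itself remains contested.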
In 1978, Sewall Wright suggested that human populations that have long inhabited separated parts of the world should, in general, be considered different subspecies by the usual criterion that most individuals of such populations can be allocated correctly by inspection. It does not require a trained anthropologist to classify an array of Englishmen, West Africans, and Chinese with 100% accuracy by features, skin color, and type of hair despite so much variability within each of these groups that every individual can easily be distinguished from every other. However, it is customary to use the term race rather than subspecies for the major subdivisions of the human species as well as for minor ones.
On the other hand, in practice subspecies are often defined by easily observable physical appearance; because there is not necessarily any evolutionary significance to these observed differences, this form of classification has become less acceptable to evolutionary biologists. Likewise, this typological approach to race is generally regarded as discredited by biologists and anthropologists.
Because of the difficulty in classifying subspecies morphologically, many biologists reject the concept altogether, citing problems such as:
- Visible physical differences do not correlate with one another, leading to the possibility of different classifications for the same individual organisms.
- Parallel evolution can lead to the appearance of similarities between groups of organisms that are not part of the same species.
- The existence of isolated populations within previously designated subspecies.
- That the criteria for classification are arbitrary.
Subspecies as genetically differentiated populations
Another way to look at differences between populations is to measure genetic differences rather than physical differences. The Human Genome Project found only gradations in genetic variation, not sharp lines which would naturally define notions of race or ethnicity. "People who have lived in the same geographic region for many generations may have some alleles in common, but no allele will be found in all members of one population and in no members of any other."
Genetic differences between populations of organisms can be measured using Sewall Wright's fixation index, often abbreviated FST. This statistic compares the differences between any two given populations and can be used to measure genetic differences between populations for individual genes or for many genes simultaneously. For example, it is often stated that the fixation index for humans is about 0.15. This means that about 85% of the variation measured in the human population occurs within any one population and about 15% occurs between populations; put another way, two individuals drawn from different populations are, on average, nearly as genetically similar as two individuals drawn from the same population.
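For a single biallelic locus, one standard formulation of the index is FST = (HT - HS)/HT, where HS is the mean expected heterozygosity within populations and HT is the expected heterozygosity of the pooled population. The following minimal sketch applies this formulation to made-up allele frequencies in two equally sized populations.

```python
# Minimal sketch of Wright's fixation index for one biallelic locus:
# F_ST = (H_T - H_S) / H_T.  Allele frequencies are illustrative only.
import numpy as np

def fst_biallelic(freqs):
    """F_ST from the frequency of one allele in each of several equally
    sized populations."""
    p = np.asarray(freqs, dtype=float)
    h_s = np.mean(2 * p * (1 - p))   # mean within-population heterozygosity
    p_bar = np.mean(p)               # pooled allele frequency
    h_t = 2 * p_bar * (1 - p_bar)    # heterozygosity of the pooled population
    return (h_t - h_s) / h_t

print(fst_biallelic([0.15, 0.50]))  # about 0.14: most variation is within populations
print(fst_biallelic([0.35, 0.45]))  # about 0.01: populations barely differentiated
```

Values around 0.15, as often quoted for humans, arise when population allele frequencies differ moderately while most heterozygosity remains within each population; different estimators, weighting schemes and marker sets give somewhat different numbers in practice.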
It is often stated that human genetic variation is low compared to other mammalian species, and it has been claimed that this should be taken as evidence that there is no natural subdivision of the human population. Wright himself believed that a value of 0.25 represented great genetic variation and that an FST of 0.15-0.25 represented moderate variation. It should, however, be noted that about 5% of human variation occurs between populations within continents, and therefore the FST between continental groups of humans (or races) is as low as 0.1 (or possibly lower).
In their 2003 paper "Human Genetic Diversity and the Nonexistence of Biological Races" Jeffrey Long and Rick Kittles give a long critique of the application of FST to human populations. They find that the figure of 85% is misleading because it implies that all human populations contain on average 85% of all genetic diversity. They claim that this does not correctly reflect human population history, because it treats all human groups as independent. A more realistic portrayal of the way human groups are related is to understand that some human groups are parental to other groups and that these groups represent paraphyletic groups to their descent groups. For example, under the recent African origin theory the human population in Africa is paraphyletic to all other human groups because it represents the ancestral group from which all non-African populations derive, but more than that, non-African groups only derive from a small non-representative sample of this African population.
This means that all non-African groups are more closely related to each other and to some African groups (probably east Africans) than they are to other Africans, and further that the migration out of Africa represented a genetic bottleneck, with much of the diversity that existed in Africa not being carried out of Africa by the emigrating groups. On this view, human population movements did not result in a set of independent populations; rather, they produced a series of dilutions of diversity, with each founding event representing a genetic subset of its parental population, so that diversity decreases the further from Africa a population lives. Long and Kittles find that rather than 85% of human genetic diversity existing in all human populations, about 100% of human diversity exists in a single African population, whereas only about 70% of human genetic diversity exists in a population derived from New Guinea. Long and Kittles observe that this still produces a global human population that is genetically homogeneous compared to other mammalian populations.
Wright's F statistics are not used to determine whether a group can be described as a subspecies: although the statistic measures the degree of differentiation between populations, the degree of genetic differentiation is not in itself a marker of subspecies status. Generally, taxonomists prefer to use phylogenetic analysis to determine whether a population can be considered a subspecies. Phylogenetic analysis relies on the concept of derived characteristics that are not shared between groups, and usually applies to populations that are allopatric (geographically separated) and therefore discretely bounded. This would make a subspecies, evolutionarily speaking, a clade: a group with a common evolutionary ancestor population. The smooth gradation of human genetic variation in general rules out the idea that human population groups can be considered monophyletic (cleanly divided), as there appears always to have been considerable gene flow between human populations.
Subspecies as clade
By the 1970s many evolutionary scientists were avoiding the concept of "subspecies" as a taxonomic category for four reasons:
- very few data indicate that contiguous subspecies ever become species
- geographically disjunct groups regarded as subspecies usually can be demonstrated to actually be distinct species
- subspecies had been recognized on the basis of only 2-5 phenotypic characters, which often were adaptations to local environments but which did not reflect the evolutionary differentiation of populations as a whole
- with the advent of molecular techniques used to get a better handle on genetic introgression (gene flow), the picture afforded by looking at genetic variation was often at odds with the phenotypic variation (as is the case with looking at genes versus percentage of epidermal melanin in human populations)
These criticisms have coincided with the rise of cladistics.
A clade is a taxonomic group of organisms consisting of a single common ancestor and all the descendants of that ancestor. Every creature produced by sexual reproduction has two immediate lineages, one maternal and one paternal. Whereas Carolus Linnaeus established a taxonomy of living organisms based on anatomical similarities and differences, cladistics seeks to establish a taxonomy (the phylogenetic tree) based on genetic similarities and differences, tracing the process of acquisition of multiple characteristics by single organisms. Some researchers have tried to clarify the idea of race by equating it with the biological idea of the clade.
A phylogenetic tree of this kind is usually derived from DNA or protein sequences sampled from populations. Often mitochondrial DNA or Y chromosome sequences are used to study ancient human migration paths. These single-locus sources of DNA do not recombine and are inherited from a single parent. Individuals from the various continental groups tend to be more similar to one another than to people from other continents, and tracing either mitochondrial DNA or non-recombinant Y-chromosome DNA explains how people in one place may be largely derived from people in some remote location. Such a tree is rooted in the common ancestor of chimpanzees and humans, which is believed to have originated in Africa. Horizontal distance in the tree corresponds to two things:
- Genetic distance: the genetic difference between humans and chimpanzees is roughly 2%, about 20 times larger than the variation among modern humans.
- Temporal remoteness of the most recent common ancestor: the mitochondrial most recent common ancestor of modern humans lived roughly 200,000 years ago, while the latest common ancestors of humans and chimpanzees lived between four and seven million years ago.
Chimpanzees and humans belong to different genera. The formation of species and subspecies, and the formation of "races", can also be indicated on such a tree (note that only a very rough representation of human phylogeny is given, and that the points made in the preceding section, insofar as they apply to an "African race", are understood here). Note that vertical distances are not meaningful in this representation.
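Such a tree can be derived from a matrix of pairwise genetic distances. The following minimal sketch uses average-linkage (UPGMA-like) hierarchical clustering on purely illustrative distances, chosen only to echo the proportions described above (a human-chimpanzee distance roughly 20 times the human-human distances); it is not real data or the method behind any particular published tree.

```python
# Minimal sketch: build a tree from an illustrative matrix of pairwise
# genetic distances using average-linkage (UPGMA-like) clustering.
import numpy as np
from scipy.cluster.hierarchy import dendrogram, linkage
from scipy.spatial.distance import squareform

labels = ["Sub-Saharan Africa", "Europe", "East Asia", "Chimpanzee"]

# Illustrative pairwise distances (proportion of differing sites).
d = np.array([
    [0.0000, 0.0010, 0.0010, 0.0200],
    [0.0010, 0.0000, 0.0008, 0.0200],
    [0.0010, 0.0008, 0.0000, 0.0200],
    [0.0200, 0.0200, 0.0200, 0.0000],
])

tree = linkage(squareform(d), method="average")
leaf_order = dendrogram(tree, labels=labels, no_plot=True)["ivl"]
print(tree)        # each row: the two clusters merged, merge height, cluster size
print(leaf_order)  # left-to-right leaf order of the resulting tree
```

Real analyses estimate such distances from sequence alignments and typically use more sophisticated tree-building methods, such as neighbour joining or likelihood-based approaches.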
Most evolutionary scientists have rejected the identification of races with clades for two reasons. First, as Rachel Caspari (2003) argued, clades are by definition monophyletic groups (taxa that include all descendants of a given ancestor); since no groups currently regarded as races are monophyletic, none of those groups can be clades.
Second, for anthropologists Lieberman and Jackson (1995), there are more profound methodological and conceptual problems with using cladistics to support concepts of race. They emphasize that "the molecular and biochemical proponents of this model explicitly use racial categories in their initial grouping of samples". For example, the large and highly diverse macroethnic groups of East Indians, North Africans, and Europeans are presumptively grouped as Caucasians prior to the analysis of their DNA variation.
This limits and skews interpretations, obscures other lineage relationships, deemphasizes the impact of more immediate clinal environmental factors on genomic diversity, and can cloud our understanding of the true patterns of affinity. They argue that however significant the empirical research, these studies use the term race in conceptually imprecise and careless ways. They suggest that the authors of these studies find support for racial distinctions only because they began by assuming the validity of race.
For empirical reasons we prefer to place emphasis on clinal variation, which recognizes the existence of adaptive human hereditary variation and simultaneously stresses that such variation is not found in packages that can be labeled races.
Indeed, recent research reports evidence for smooth, clinal genetic variation even in regions previously considered racially homogeneous, with the apparent gaps turning out to be artifacts of sampling techniques (Serre & Pääbo 2004). These scientists do not dispute the importance of cladistic research, only its retention of the word race, when references to populations and clinal gradations are more than adequate to describe the results.
Population genetics: population and cline
At the beginning of the 20th century, anthropologists questioned, and eventually abandoned, the claim that biologically distinct races are isomorphic with distinct linguistic, cultural, and social groups. Shortly thereafter, the rise of population genetics provided scientists with a new understanding of the sources of phenotypic variation. This new science has led many mainstream evolutionary scientists in anthropology and biology to question the very validity of race as a scientific concept describing an objectively real phenomenon. Those who came to reject the validity of the concept of race did so for four reasons: empirical, definitional, the availability of alternative concepts, and ethical (Lieberman and Byrne 1993).
The first to challenge the concept of race on empirical grounds were anthropologists Franz Boas, who demonstrated phenotypic plasticity due to environmental factors (Boas 1912), and Ashley Montagu (1941, 1942), who relied on evidence from genetics. Zoologists Edward O. Wilson and W. Brown then challenged the concept from the perspective of general animal systematics, and further rejected the claim that "races" were equivalent to "subspecies" (Wilson and Brown 1953).
One crucial innovation in reconceptualizing genotypic and phenotypic variation was anthropologist C. Loring Brace's observation that such variation, insofar as it is affected by natural selection, migration, or genetic drift, is distributed along geographic gradations or clines (Brace 1964). In part this is due to isolation by distance. This point called attention to a problem common to phenotype-based descriptions of races (for example, those based on hair texture and skin color): they ignore a host of other similarities and differences (for example, blood type) that do not correlate highly with the markers for race. Thus anthropologist Frank Livingstone concluded that, since clines cross racial boundaries, "there are no races, only clines" (Livingstone 1962: 279).
In a response to Livingstone, Theodosius Dobzhansky argued that when talking about race one must be attentive to how the term is being used: "I agree with Dr. Livingston that if races have to be 'discrete units,' then there are no races, and if 'race' is used as an 'explanation' of the human variability, rather than vice versa, then the explanation is invalid." He further argued that one could use the term race if one distinguished between "race differences" and "the race concept." The former refers to any distinction in gene frequencies between populations; the latter is "a matter of judgment." He further observed that even when there is clinal variation, "Race differences are objectively ascertainable biological phenomena ... but it does not follow that racially distinct populations must be given racial (or subspecific) labels." In short, Livingstone and Dobzhansky agree that there are genetic differences among human beings; they also agree that the use of the race concept to classify people, and how the race concept is used, is a matter of social convention. They differ on whether the race concept remains a meaningful and useful social convention.
In 1964, biologists Paul Ehrlich and Holm pointed out cases where two or more clines are distributed discordantly: for example, melanin is distributed in a decreasing pattern from the equator north and south, while frequencies of the haplotype for beta-S hemoglobin radiate out of specific geographical points in Africa (Ehrlich and Holm 1964). As anthropologists Leonard Lieberman and Fatimah Linda Jackson observed, "Discordant patterns of heterogeneity falsify any description of a population as if it were genotypically or even phenotypically homogeneous" (Lieberman and Jackson 1995). Patterns such as those seen in human physical and genetic variation as described above have led to the consequence that the number and geographic location of any described races is highly dependent on the importance attributed to, and the quantity of, the traits considered. For example, if only skin color and a "two race" system of classification were used, then one might classify Indigenous Australians in the same race as Black people and Caucasians in the same race as East Asian people, but biologists and anthropologists would dispute that these classifications have any scientific validity. Scientists have discovered a skin-lightening mutation that partially accounts for the appearance of light skin in humans (people who migrated out of Africa northward into what is now Europe), which they estimate occurred 20,000 to 50,000 years ago; East Asians owe their relatively light skin to different mutations. On the other hand, the greater the number of traits (or alleles) considered, the more subdivisions of humanity are detected, since traits and gene frequencies do not always correspond to the same geographical location. Or as Ossorio and Duster (2005) put it:
Anthropologists long ago discovered that humans' physical traits vary gradually, with groups that are close geographic neighbors being more similar than groups that are geographically separated. This pattern of variation, known as clinal variation, is also observed for many alleles that vary from one human group to another. Another observation is that traits or alleles that vary from one group to another do not vary at the same rate. This pattern is referred to as nonconcordant variation. Because the variation of physical traits is clinal and nonconcordant, anthropologists of the late 19th and early 20th centuries discovered that the more traits and the more human groups they measured, the fewer discrete differences they observed among races and the more categories they had to create to classify human beings. The number of races observed expanded to the 30s and 50s, and eventually anthropologists concluded that there were no discrete races (Marks, 2002). Twentieth and 21st century biomedical researchers have discovered this same feature when evaluating human variation at the level of alleles and allele frequencies. Nature has not created four or five distinct, nonoverlapping genetic groups of people.
More recent genetic studies indicate that skin color may change radically over as few as 100 generations, or about 2,500 years, given the influence of the environment.
Population geneticists have debated whether the concept of population can provide a basis for a new conception of race. To do this, a working definition of population must be found. Surprisingly, there is no generally accepted concept of population in use among biologists. It has been pointed out that the concept of population is central to ecology, evolutionary biology and conservation biology, but also that most definitions of population rely on qualitative descriptions such as "a group of organisms of the same species occupying a particular space at a particular time". Waples and Gaggiotti identify two broad types of definitions for populations: those that fall into an ecological paradigm, and those that fall into an evolutionary paradigm. Examples of such definitions are:
- Ecological paradigm: A group of individuals of the same species that co-occur in space and time and have an opportunity to interact with each other.
- Evolutionary paradigm: A group of individuals of the same species living in close-enough proximity that any member of the group can potentially mate with any other member.
Richard Lewontin, claiming that 85 percent of human variation occurs within populations and not among populations, argued that neither "race" nor "subspecies" was an appropriate or useful way to describe populations (Lewontin 1973). Nevertheless, barriers between populations, which may be cultural or physical, can limit gene flow and increase genetic differences. Recent work by population geneticists conducting research in Europe suggests that ethnic identity can be a barrier to gene flow. Others, such as Ernst Mayr, have argued for a notion of "geographic race". Some researchers report that the variation between racial groups (measured by Sewall Wright's population structure statistic FST) accounts for as little as 5% of human genetic variation. Sewall Wright himself commented that if differences this large were seen in another species, they would be called subspecies. In 2003 A. W. F. Edwards argued that cluster analysis supersedes Lewontin's arguments (see below).
These empirical challenges to the concept of race forced evolutionary scientists to reconsider their definition of race. Mid-century, anthropologist William Boyd defined race as:
- A population which differs significantly from other populations in regard to the frequency of one or more of the genes it possesses. It is an arbitrary matter which, and how many, gene loci we choose to consider as a significant "constellation" (Boyd 1950).
Lieberman and Jackson (1994) have pointed out that "the weakness of this statement is that if one gene can distinguish races then the number of races is as numerous as the number of human couples reproducing." Moreover, anthropologist Stephen Molnar has suggested that the discordance of clines inevitably results in a multiplication of races that renders the concept itself useless (Molnar 1992).
The distribution of many physical traits resembles the distribution of genetic variation within and between human populations (American Association of Physical Anthropologists 1996; Keita and Kittles 1997). For example, ~90% of the variation in human head shapes occurs within every human group, and ~10% separates groups, with a greater variability of head shape among individuals with recent African ancestors (Relethford 2002).
Conversely, in the paper "Genetic similarities within and between human populations" Witherspoon et al. (2007) show that even when individuals can be reliably assigned to specific population groups, it is still possible for two randomly chosen individuals from different populations/clusters to be more similar to each other than to a randomly chosen member of their own cluster. This is because multi locus clustering relies on population level similarities, rather than individual similarities, so that each individual is classified according to their similarity to the typical genotype for any given population. The paper claims that this masks a great deal of genetic similarity between individuals belonging to different clusters. Or in other words, two individuals from different clusters can be more similar to each other than to a member of their own cluster, while still both being more similar to the typical genotype of their own cluster than to the typical genotype of a different cluster.
When differences between individual pairs of people are tested, Witherspoon et al. found that the question "How often is a pair of individuals from one population genetically more dissimilar than two individuals chosen from two different populations?" is not adequately addressed by multilocus clustering analyses. They found that even for just three population groups separated by large geographic distances (European, African and East Asian), many thousands of loci must be included before the answer becomes "never". On the other hand, accurate classification of the global population must include more closely related and admixed populations, which will increase this figure above zero, so they state: "In a similar vein, Romualdi et al. (2002) and Serre and Pääbo (2004) have suggested that highly accurate classification of individuals from continuously sampled (and therefore closely related) populations may be impossible". Witherspoon et al. conclude: "The fact that, given enough genetic data, individuals can be correctly assigned to their populations of origin is compatible with the observation that most human genetic variation is found within populations, not between them. It is also compatible with our finding that, even when the most distinct populations are considered and hundreds of loci are used, individuals are frequently more similar to members of other populations than to members of their own population."
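The dependence of this answer on the number of loci can be illustrated with a small simulation. The sketch below is not Witherspoon et al.'s actual analysis: it simulates two populations at a divergence of roughly FST = 0.15 under the Balding-Nichols model and estimates how often a random pair of individuals drawn from different populations is more similar than a random pair drawn from the same population; all parameter values are illustrative.

```python
# Small simulation sketch (not Witherspoon et al.'s method): estimate how
# often a between-population pair of individuals is genetically more similar
# than a within-population pair, as a function of the number of loci.
import numpy as np

rng = np.random.default_rng(1)

def between_more_similar(n_loci, n_ind=100, fst=0.15, n_pairs=2000):
    p_anc = rng.uniform(0.1, 0.9, n_loci)          # ancestral allele frequencies
    a = p_anc * (1 - fst) / fst                    # Balding-Nichols parameters
    b = (1 - p_anc) * (1 - fst) / fst
    p1, p2 = rng.beta(a, b), rng.beta(a, b)        # drifted population frequencies
    g1 = rng.binomial(2, p1, (n_ind, n_loci)).astype(np.int8)   # genotypes 0/1/2
    g2 = rng.binomial(2, p2, (n_ind, n_loci)).astype(np.int8)

    i = rng.integers(0, n_ind, n_pairs)
    j = (i + 1 + rng.integers(0, n_ind - 1, n_pairs)) % n_ind   # ensures j != i
    k = rng.integers(0, n_ind, n_pairs)
    m = rng.integers(0, n_ind, n_pairs)

    d_within = np.abs(g1[i] - g1[j]).sum(axis=1)   # same-population pair distances
    d_between = np.abs(g1[k] - g2[m]).sum(axis=1)  # cross-population pair distances
    return float(np.mean(d_between < d_within))

for n_loci in (10, 100, 1000, 10000):
    print(n_loci, "loci:", round(between_more_similar(n_loci), 3))
```

With a handful of loci the fraction is substantial; with thousands of loci it approaches zero for well-separated populations, which mirrors the qualitative point Witherspoon et al. make about distinct versus closely related or admixed populations.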
Molecular genetics: lineages and clusters
With the recent availability of large amounts of human genetic data from many geographically distant human groups, scientists have again started to investigate the relationships between people from various parts of the world. One method is to investigate DNA molecules that are passed down from mother to child (mtDNA) or from father to son (Y chromosomes). These form molecular lineages and can be informative regarding prehistoric population migrations. Alternatively, autosomal alleles are investigated in an attempt to understand how much genetic material groups of people share.
This work has led to a debate amongst geneticists, molecular anthropologists and medical doctors as to the validity of concepts such as race. Some researchers insist that classifying people into groups based on ancestry may be important from medical and social policy points of view, and claim to be able to do so accurately. Others claim that individuals from different groups share far too much of their genetic material for group membership to have any medical implications. This has reignited the scientific debate over the validity of human classification and concepts of race.
Summary of different biological definitions of race
|Essentialist||Hooton (1926)||"A great division of mankind, characterized as a group by the sharing of a certain combination of features, which have been derived from their common descent, and constitute a vague physical background, usually more or less obscured by individual variations, and realized best in a composite picture."|
|Taxonomic||Mayr (1969)||"An aggregate of phenotypically similar populations of a species, inhabiting a geographic subdivision of the range of a species, and differing taxonomically from other populations of the species."|
|Population||Dobzhansky (1970)||"Races are genetically distinct Mendelian populations. They are neither individuals nor particular genotypes, they consist of individuals who differ genetically among themselves."|
|Lineage||Templeton (1998)||"A subspecies (race) is a distinct evolutionary lineage within a species. This definition requires that a subspecies be genetically differentiated due to barriers to genetic exchange that have persisted for long periods of time; that is, the subspecies must have historical continuity in addition to current genetic differentiation."|
Current views across disciplines
One result of debates over the meaning and validity of the concept of race is that the current literature across different disciplines regarding human variation lacks consensus, though within some fields, such as biology, there is strong consensus. Some studies use the word race in its early essentialist taxonomic sense. Many others still use the term race, but use it to mean a population, clade, or haplogroup. Others eschew the concept of race altogether, and use the concept of population as a less problematical unit of analysis.
Since 1932, some college textbooks introducing physical anthropology have increasingly come to reject race as a valid concept: from 1932 to 1976, only seven out of thirty-two rejected race; from 1975 to 1984, thirteen out of thirty-three rejected race; from 1985 to 1993, thirteen out of nineteen rejected race. According to one academic journal entry, whereas 78 percent of the articles in the 1931 Journal of Physical Anthropology employed the term race or nearly synonymous terms reflecting a bio-race paradigm, only 36 percent did so in 1965, and just 28 percent did in 1996. The Statement on "Race" (1998), composed by a select committee of anthropologists and issued by the executive board of the American Anthropological Association as a statement they "believe [...] represents generally the contemporary thinking and scholarly positions of a majority of anthropologists", declares:
With the vast expansion of scientific knowledge in this century, ... it has become clear that human populations are not unambiguous, clearly demarcated, biologically distinct groups. [...] Given what we know about the capacity of normal humans to achieve and function within any culture, we conclude that present-day inequalities between so-called "racial" groups are not consequences of their biological inheritance but products of historical and contemporary social, economic, educational, and political circumstances.
In an ongoing debate, some geneticists argue that race is neither a meaningful concept nor a useful heuristic device, and even that genetic differences among groups are biologically meaningless, because more genetic variation exists within such races than among them and racial traits overlap without discrete boundaries. Other geneticists, in contrast, argue that categories of self-identified race/ethnicity or biogeographic ancestry are both valid and useful, that these categories correspond to clusters inferred from multilocus genetic data, and that this correspondence implies that genetic factors might contribute to unexplained phenotypic variation between groups.
In February, 2001, the editors of the medical journal Archives of Pediatrics and Adolescent Medicine asked authors to no longer use race as an explanatory variable and not to use obsolescent terms. Some other peer-reviewed journals, such as the New England Journal of Medicine and the American Journal of Public Health, have made similar endeavours. Furthermore, the National Institutes of Health recently issued a program announcement for grant applications through February 1, 2006, specifically seeking researchers who can investigate and publicize among primary care physicians the detrimental effects on the nation's health of the practice of medical racial profiling using such terms. The program announcement quoted the editors of one journal as saying that, "analysis by race and ethnicity has become an analytical knee-jerk reflex."
A survey, taken in 1985 (Lieberman et al. 1992), asked 1,200 American anthropologists whether they agreed or disagreed with the following proposition: "There are biological races in the species Homo sapiens." The responses were:
The figure for physical anthropologists at PhD granting departments was slightly higher, rising from 41% to 42%, with 50% agreeing. This survey, however, did not specify any particular definition of race (although it did clearly specify biological race within the species Homo sapiens); it is difficult to say whether those who supported the statement thought of race in taxonomic or population terms.
The same survey, taken in 1999, showed the following changing results for anthropologists:
In Poland the race concept was rejected by only 25 percent of anthropologists in 2001, although: "Unlike the U.S. anthropologists, Polish anthropologists tend to regard race as a term without taxonomic value, often as a substitute for population."
In the face of these issues, some evolutionary scientists have simply abandoned the concept of race in favor of "population." What distinguishes population from previous groupings of humans by race is that it refers to a breeding population (essential to genetic calculations) and not to a biological taxon. Other evolutionary scientists have abandoned the concept of race in favor of cline (meaning, how the frequency of a trait changes along a geographic gradient). (The concepts of population and cline are not, however, mutually exclusive and both are used by many evolutionary scientists.)
According to Jonathan Marks,
- By the 1970s, it had become clear that (1) most human differences were cultural; (2) what was not cultural was principally polymorphic - that is to say, found in diverse groups of people at different frequencies; (3) what was not cultural or polymorphic was principally clinal - that is to say, gradually variable over geography; and (4) what was left - the component of human diversity that was not cultural, polymorphic, or clinal - was very small.
- A consensus consequently developed among anthropologists and geneticists that race as the previous generation had known it - as largely discrete, geographically distinct, gene pools - did not exist.
In the face of this rejection of race by evolutionary scientists, many social scientists have replaced the word race with the word "ethnicity" to refer to self-identifying groups based on beliefs concerning shared culture, ancestry and history. Alongside empirical and conceptual problems with "race," following the Second World War evolutionary and social scientists were acutely aware of how beliefs about race had been used to justify discrimination, apartheid, slavery, and genocide. This questioning gained momentum in the 1960s during the U.S. civil rights movement and the emergence of numerous anti-colonial movements worldwide. They thus came to believe that race itself is a social construct: a concept once thought to correspond to an objective reality, but sustained because of its social functions.
Races as social constructions
Even as the idea of race was becoming a powerful organizing principle in many societies, some observers criticized the concept. In Europe, the gradual transition in appearances from one group to adjacent groups suggested to Blumenbach that "one variety of mankind does so sensibly pass into the other, that you cannot mark out the limits between them" (Marks 1995, p. 54). As anthropologists and other evolutionary scientists have shifted away from the language of race to the term population to talk about genetic differences, historians, anthropologists and social scientists have re-conceptualized the term "race" as a cultural category or social construct, in other words, as a particular way that some people have of talking about themselves and others.
Dr. Craig Venter and Francis Collins of the National Institutes of Health jointly announced the mapping of the human genome in 2000. Upon examining the data from the genome mapping, Venter concluded that although humans are somewhat further apart genetically than had been assumed (differing by 1-3% rather than the assumed 1%), the types of variation do not warrant calling groups of people different races. In Venter's words: "Race is a social concept. It's not a scientific one. There are no bright lines (that would stand out), if we could compare all the sequenced genomes of everyone on the planet." "When we try to apply science to try to sort out these social differences, it all falls apart."
As Stephan Palmie has recently summarized, race "is not a thing but a social relation"; or, in the words of Katya Gibel Mevorach, "a metonym," "a human invention whose criteria for differentiation are neither universal nor fixed but have always been used to manage difference." As such, it cannot be a useful analytical concept; rather, the use of the term "race" itself must be analyzed. Moreover, they argue that biology will not explain why or how people use the idea of race: history and social relationships will.
In the United States
The immigrants to the Americas came ultimately from every region of Europe, Africa, and Asia. Throughout America the immigrants mixed among themselves and with the indigenous inhabitants of the continent. In the United States, for example, most people who self-identify as African American have some European ancestors—in one analysis of genetic markers that have differing frequencies between continents, European ancestry ranged from an estimated 7% for a sample of Jamaicans to ∼23% for a sample of African Americans from New Orleans (Parra et al. 1998). Similarly, many people who identify as European American have some African or Native American ancestors, either through openly interracial marriages or through the gradual inclusion of people with mixed ancestry into the majority population. In a survey of college students who self-identified as white in a northeastern U.S. university, ∼30% were estimated to have less than 90% European ancestry.
Since the early history of the United States, Native Americans, African Americans, and European Americans have been classified as belonging to different races. For nearly three centuries, the criteria for membership in these groups were similar, comprising a person’s appearance, his fraction of known non-White ancestry, and his social circle. But the criteria for membership in these races diverged in the late 19th century. During Reconstruction, increasing numbers of Americans began to consider anyone with "one drop" of known "Black blood" to be Black, regardless of appearance. By the early 20th century, this notion of invisible blackness was made statutory in many states and widely adopted nationwide. In contrast, Amerindians continue to be defined by a certain percentage of "Indian blood" (called blood quantum), due in large part to American slavery ethics. Finally, to be White one had to have perceived "pure" White ancestry.
Efforts to sort the increasingly mixed population of the United States into discrete categories generated many difficulties (Spickard 1992). Efforts to track mixing between groups led to a proliferation of categories, such as mulatto and octoroon, and blood quantum distinctions that became increasingly untethered from self-reported ancestry. A person's racial identity can change over time, and self-ascribed race can differ from assigned race (Kressin et al. 2003).
The difference between how Native American and Black identities are defined today (blood quantum versus one-drop rule) has demanded explanation. According to anthropologists such as Gerald Sider, the goal of such racial designations was to concentrate power, wealth, privilege and land in the hands of Whites in a society of White hegemony and privilege (Sider 1996; see also Fields 1990). The differences have little to do with biology and far more to do with the history of racism and specific forms of White supremacy (the social, geopolitical and economic agendas of dominant Whites vis-à-vis subordinate Blacks and Native Americans), especially the different roles Blacks and Amerindians occupied in White-dominated 19th century America.
The theory suggests that the blood quantum definition of Native American identity enabled Whites to acquire Amerindian lands, while the one-drop rule of Black identity enabled Whites to preserve their agricultural labor force. The contrast presumably emerged because, as peoples transported far from their land and kinship ties on another continent, Black labor was relatively easy to control, thus reducing Blacks to valuable commodities as agricultural laborers. In contrast, Amerindian labor was more difficult to control; moreover, Amerindians occupied large territories that became valuable as agricultural lands, especially with the invention of new technologies such as railroads; thus, the blood quantum definition enhanced White acquisition of Amerindian lands in a doctrine of Manifest Destiny that subjected them to marginalization and multiple episodic localized campaigns of extermination.
The political economy of race had different consequences for the descendants of aboriginal Americans and African slaves. The 19th century blood quantum rule meant that it was relatively easier for a person of mixed Euro-Amerindian ancestry to be accepted as White. The offspring of only a few generations of intermarriage between Amerindians and Whites likely would not have been considered Amerindian at all (at least not in a legal sense). Amerindians could have treaty rights to land, but because an individual with one Amerindian great-grandparent no longer was classified as Amerindian, they lost any legal claim to Amerindian land. According to the theory, this enabled Whites to acquire Amerindian lands. The irony is that the same individuals who could be denied legal standing because they were "too White" to claim property rights, might still be Amerindian enough to be considered "breeds", stigmatized for their Native American ancestry.
The one-drop rule, on the other hand, made it relatively difficult for anyone of known Black ancestry to be accepted as White during the 20th century. The child of a Black sharecropper and a White person was considered Black. And, significantly, in terms of the economics of sharecropping, such a person also would likely be a sharecropper as well, thus adding to the employer's labor force.
In short, this theory suggests that in a 20th century economy that benefited from sharecropping, it was useful to have as many Blacks as possible. Conversely, in a 19th century nation bent on westward expansion, it was advantageous to diminish the numbers of those who could claim title to Amerindian lands by simply defining them out of existence.
It must be mentioned, however, that although some scholars of the Jim Crow period agree that the 20th century notion of invisible Blackness shifted the color line in the direction of paleness, thereby swelling the labor force in response to Southern Blacks' Great Migration northwards, others (Joel Williamson, C. Vann Woodward, George M. Fredrickson, Stetson Kennedy) see the one-drop rule as a simple consequence of the need to define Whiteness as being pure, thus justifying White-on-Black oppression. In any event, over the centuries when Whites wielded power over both Blacks and Amerindians and widely believed in their inherent superiority over people of color, it is no coincidence that the hardest racial group in which to prove membership was the White one.
In the United States, social and legal conventions developed over time that forced individuals of mixed ancestry into simplified racial categories (Gossett 1997). An example is the aforementioned one-drop rule implemented in some state laws that treated anyone with a single known African American ancestor as black (Davis 2001). The decennial censuses conducted since 1790 in the United States also created an incentive to establish racial categories and fit people into those categories (Nobles 2000). In other countries in the Americas where mixing among groups was overtly more extensive, social categories have tended to be more numerous and fluid, with people moving into or out of categories on the basis of a combination of socioeconomic status, social class, ancestry, and appearance (Mörner 1967).
The term "Hispanic" as an ethnonym emerged in the 20th century with the rise of migration of laborers from American Spanish-speaking countries to the United States. Today, the word "Latino" is often used as a synonym for "Hispanic". The definitions of both terms are non-race specific, and include people who consider themselves to be of distinct races (Black, White, Amerindian, Asian, and mixed groups). In contrast to "Latino" or "Hispanic", "Anglo" refers to non-Hispanic White Americans or non-Hispanic European Americans, most of whom speak the English language but are not necessarily of English descent.
In Brazil
Compared to the 19th-century United States, 20th-century Brazil was characterized by a perceived relative absence of sharply defined racial groups. According to anthropologist Marvin Harris (1989), this pattern reflects a different history and different social relations. Basically, race in Brazil was "biologized," but in a way that recognized the difference between ancestry (which determines genotype) and phenotypic differences. There, racial identity was not governed by a rigid descent rule, such as the one-drop rule, as it was in the United States. A Brazilian child was never automatically identified with the racial type of one or both parents, nor were there only a very limited number of categories to choose from.
Over a dozen racial categories would be recognized in conformity with all the possible combinations of hair color, hair texture, eye color, and skin color. These types grade into each other like the colors of the spectrum, and no one category stands significantly isolated from the rest. That is, race referred preferentially to appearance, not heredity. The complexity of racial classifications in Brazil reflects the extent of miscegenation in Brazilian society, a society that remains highly, but not strictly, stratified along color lines. Hence the Brazilian narrative of a perfectly "post-racist" country must be met with caution, as sociologist Gilberto Freyre demonstrated in 1933 in Casa Grande e Senzala.
Marketing of race: genetic lineages as social lineages
New research in molecular genetics, and the marketing of genetic identities through the analysis of one's Y chromosome, mtDNA or autosomal DNA, has reignited the debate surrounding race. Most of the controversy surrounds the question of how to interpret these new data, and whether conclusions based on existing data are sound. Although the vast majority of researchers endorse the view that continental groups do not constitute different subspecies, and molecular geneticists generally reject the identification of mtDNA and Y chromosomal lineages or allele clusters with "races", some anthropologists have suggested that the marketing of genetic analysis to the general public in the form of "Personalized Genetic Histories" (PGH) is leading to a new social construction of race.
Typically, a consumer of a commercial PGH service sends in a sample of DNA, which is analyzed by molecular biologists, and receives a report of which the following is a sample:
- "African DNA Ancestry Report"
The subject's likely haplogroup L2 is associated with the so-called Bantu expansion from West and Central sub-Saharan Africa east and south, dated 2,000-4,000 years ago ... Between the 15th and 19th centuries C.E, the Atlantic slave trade resulted in the forced movement of approximately 13 million people from Africa, mainly to the Americas. Only approximately 11 million survived the passage and many more died in the early years of captivity. Many of these slaves were traded to the West African Cape Verde ports of embarkation through Portuguese and Arab middlemen and came from as far south as Angola. Among the African tribal groups, all Bantu-speaking, in which L2 is common are: Hausa, Kanuri, Fulfe, Songhai, Malunjin (Angola), Yoruba, Senegalese, Serer and Wolof.
Although no single sentence in such a report is technically wrong, through the combination of these sentences, anthropologists and others have argued, the report is telling a story that connects a haplotype with a language and a group of tribes. This story is generally rejected by research scientists because an individual receives his or her Y chromosome or mtDNA from only one ancestor in every generation; consequently, with every generation one goes back in time, the percentage of one's ancestors it represents halves; if one goes back hundreds (let alone thousands) of years, it represents only a tiny fragment of one's ancestry. As Mark Shriver and Rick Kittles recently remarked,
For many customers of lineage-based tests, there is a lack of understanding that their maternal and paternal lineages do not necessarily represent their entire genetic make-up. For example, an individual might have more than 85% Western European 'genomic' ancestry but still have a West African mtDNA or NRY lineage.
Nevertheless, they acknowledge, such stories are increasingly appealing to the general public. In his book Blood of the Isles (published in the US and Canada as Saxons, Vikings and Celts: The Genetic Roots of Britain and Ireland), Bryan Sykes discusses how people who have been mtDNA tested by his commercial laboratory and found to belong to the same haplogroup have parties together because they see this as some sort of "bond", even though these people may not actually share very much ancestry.
Through these kinds of reports, new advances in molecular genetics are being used to create or confirm stories people have about social identities. Although these identities are not racial in the biological sense, they are in the cultural sense, in that they link biological and cultural identities. Nadia Abu el-Haj has argued that the significance of genetic lineages in popular conceptions of race owes to the perception that, while genetic lineages, like older notions of race, suggest some idea of biological relatedness, unlike older notions of race they are not directly connected to claims about human behaviour or character. Abu el-Haj has thus argued that "postgenomics does seem to be giving race a new lease on life." Nevertheless, Abu el-Haj argues that to understand what it means to think of race in terms of genetic lineages or clusters, one must understand that
Race science was never just about classification. It presupposed a distinctive relationship between "nature" and "culture," understanding the differences in the former to ground and to generate the different kinds of persons ("natural kinds") and the distinctive stages of cultures and civilizations that inhabit the world.
Abu el-Haj argues that genomics and the mapping of lineages and clusters liberates "the new racial science from the older one by disentangling ancestry from culture and capacity." As an example, she refers to recent work by Hammer et al., which aimed to test the claim that present-day Jews are more closely related to one another than to neighbouring non-Jewish populations. Hammer et al. found that the degree of genetic similarity among Jews shifted depending on the locus investigated, and suggested that this was the result of natural selection acting on particular loci. They therefore focused on the non-recombining Y chromosome to "circumvent some of the complications associated with selection".
As another example she points to work by Thomas et al., who sought to distinguish between the Y chromosomes of Jewish priests (in Judaism, membership in the priesthood is passed on through the father's line) and the Y chromosomes of non-Jews. Abu el-Haj concluded that this new "race science" calls attention to the importance of "ancestry" (narrowly defined, as it does not include all ancestors) in some religions and in popular culture, and people's desire to use science to confirm their claims about ancestry; this "race science," she argues, is fundamentally different from older notions of race that were used to explain differences in human behaviour or social status:
As neutral markers, junk DNA cannot generate cultural, behavioural, or, for that matter, truly biological differences between groups ... mtDNA and Y-chromosome markers relied on in such work are not "traits" or "qualities" in the old racial sense. They do not render some populations more prone to violence, more likely to suffer psychiatric disorders, or for that matter, incapable of being fully integrated - because of their lower evolutionary development - into a European cultural world. Instead, they are "marks," signs of religious beliefs and practices ... it is via biological noncoding genetic evidence that one can demonstrate that history itself is shared, that historical traditions are (or might well be) true."
On the other hand, there are tests that do not rely on molecular lineages but on correlations between allele frequencies; sets of correlated allele frequencies are often called clusters. Clustering analyses are less powerful than lineage analyses because they cannot tell a historical story; they can only estimate the proportion of a person's ancestry derived from any given large geographical region. These sorts of tests use informative alleles called ancestry-informative markers (AIMs), which, although shared across all human populations, vary a great deal in frequency between groups of people living in geographically distant parts of the world.
These tests use contemporary people sampled from certain parts of the world as references to determine the likely proportion of ancestry for any given individual. In a recent Public Broadcasting Service (PBS) programme on the subject of genetic ancestry testing, the academic Henry Louis Gates "wasn't thrilled with the results (it turns out that 50 percent of his ancestors are likely European)". Charles Rotimi, of Howard University's National Human Genome Center, is one of many who have highlighted the methodological flaws in such research: "the nature or appearance of genetic clustering (grouping) of people is a function of how populations are sampled, of how criteria for boundaries between clusters are set, and of the level of resolution used", all of which bias the results; he concluded that people should be very cautious about relating genetic lineages or clusters to their own sense of identity.
Thus, in analyses that assign individuals to groups it becomes less apparent that self-described racial groups are reliable indicators of ancestry. One cause of the reduced power of the assignment of individuals to groups is admixture. For example, self-described African Americans tend to have a mix of West African and European ancestry. Shriver et al. (2003) found that on average African Americans have ~80% African ancestry. Also, in a survey of college students who self-identified as "white" in a northeastern U.S. university, ~30% of whites had less than 90% European ancestry.
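The logic behind such ancestry-proportion estimates can be shown with a minimal sketch. It is not any published method: it assumes two hypothetical reference populations with known AIM allele frequencies and finds, by grid search, the admixture proportion that maximises the likelihood of one simulated individual's genotypes. All frequencies and genotypes are made up for illustration.

```python
# Minimal sketch: estimate an individual's ancestry proportion from
# ancestry-informative markers by maximum likelihood over a grid.
import numpy as np

rng = np.random.default_rng(2)
n_loci = 200

# Allele frequencies at AIMs in two hypothetical reference populations.
p_ref1 = rng.uniform(0.05, 0.95, n_loci)
p_ref2 = np.clip(p_ref1 + rng.choice([-0.4, 0.4], n_loci), 0.02, 0.98)

# Simulate one individual with 80% ancestry from reference population 1.
true_q = 0.8
p_ind = true_q * p_ref1 + (1 - true_q) * p_ref2
genotype = rng.binomial(2, p_ind)          # 0, 1 or 2 copies of the allele

def log_likelihood(q, g, p1, p2):
    p = q * p1 + (1 - q) * p2              # expected allele frequency under q
    # Binomial(2, p) log-likelihood of the observed genotype at each locus.
    return np.sum(g * np.log(p) + (2 - g) * np.log(1 - p)
                  + np.log(np.where(g == 1, 2.0, 1.0)))

qs = np.linspace(0.001, 0.999, 999)
ll = [log_likelihood(q, genotype, p_ref1, p_ref2) for q in qs]
print("estimated ancestry proportion:", round(qs[int(np.argmax(ll))], 3))  # near 0.8
```

Real methods use many more reference populations, account for linkage and sampling error in the reference frequencies, and report uncertainty; the point of the sketch is only that the output is a proportion inferred from allele-frequency differences, not an observation of discrete group membership.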
Stephan Palmie has responded to Abu el-Haj's claim that genetic lineages make possible a new, politically, economically, and socially benign notion of race and racial difference by suggesting that efforts to link genetic history and personal identity will inevitably "ground present social arrangements in a time-hallowed past," that is, use biology to explain cultural differences and social inequalities.
Race and intelligence
Researchers have reported differences in the average IQ test scores of various ethnic groups. The interpretation, causes, accuracy and reliability of these differences are highly controversial. Some researchers, such as Arthur Jensen, Richard Herrnstein, and Richard Lynn, have argued that such differences are at least partially genetic. Others, for example Thomas Sowell, argue that the differences largely owe to social and economic inequalities. Still others such as Stephen Jay Gould and Richard Lewontin have argued that categories such as "race" and "intelligence" are cultural constructs that render any attempt to explain such differences (whether genetically or sociologically) meaningless.
Political and practical uses
In biomedicine
There is an active debate among biomedical researchers about the meaning and importance of race in their research. The primary impetus for considering race in biomedical research is the possibility of improving the prevention and treatment of diseases by predicting hard-to-ascertain factors on the basis of more easily ascertained characteristics. Some have argued that, in the absence of cheap and widespread genetic tests, racial identification is the best way to predict risk for certain diseases, such as cystic fibrosis, lactose intolerance, Tay-Sachs disease and sickle cell anemia, which are genetically linked and more prevalent in some populations than in others. The best-known examples of genetically determined disorders that vary in incidence among populations are sickle cell disease, thalassaemia, and Tay-Sachs disease.
There has been criticism of associating disorders with race. For example, in the United States sickle cell is typically associated with black people, but this trait is also found in people of Mediterranean, Middle Eastern or Indian ancestry. The sickle cell trait offers some resistance to malaria. In regions where malaria is present sickle cell has been positively selected and consequently the proportion of people with it is greater. Therefore, it has been argued that sickle cell should not be associated with a particular race, but with having ancestors who lived in a malaria-prone region. Africans living in areas where there is no malaria, such as the East African highlands, have a prevalence of sickle cell as low as parts of Northern Europe.
Another example of the use of race in medicine is the recent U.S. FDA approval of BiDil, a medication for congestive heart failure targeted at black people in the United States. Several researchers have questioned the scientific basis for arguing the merits of a medication based on race, however. As Stephan Palmie has recently pointed out, black Americans were disproportionately affected by Hurricane Katrina, but for social and not climatological reasons; similarly, certain diseases may disproportionately affect different races, but not for biological reasons. Several researchers have suggested that BiDil was re-designated as a medicine for a race-specific illness because its manufacturer, Nitromed, needed to propose a new use for an existing medication to justify an extension of its patent and thus monopoly on the medication, not for pharmacological reasons.
Gene flow and intermixture also complicate attempts to predict a relationship between race and "race-linked disorders". Multiple sclerosis, for example, is typically associated with people of European descent and is of low risk to people of African descent; yet, because of gene flow between the populations, African Americans have elevated levels of MS relative to Africans. Notable African Americans affected by MS include Richard Pryor and Montel Williams. As populations continue to mix, the usefulness of socially constructed races in identifying diseases may diminish.
A complication in drawing this distinction between Africans and Americans of African descent, however, is the more recent finding that the risk of developing MS as an adult is linked to the latitude at which a person grew up as a child. Melanin in the skin reduces vitamin D production from sunlight exposure, and rates of MS among caucasians and other groups alike are higher among adults who grew up at higher latitudes. On this view the importance of race lies primarily in childhood development and sunlight exposure: because all groups show greater rates of MS when childhood sunlight exposure and vitamin D production are decreased, the environmental expression of the genotype matters more than the genotype itself.
In law enforcement
In an attempt to provide general descriptions that may facilitate the job of law enforcement officers seeking to apprehend suspects, the United States FBI employs the term "race" to summarize the general appearance (skin color, hair texture, eye shape, and other such easily noticed characteristics) of individuals whom they are attempting to apprehend. From the perspective of law enforcement officers, it is generally more important to arrive at a description that will readily suggest the general appearance of an individual than to make a scientifically valid categorization by DNA or other such means. Thus, in addition to assigning a wanted individual to a racial category, such a description will include: height, weight, eye color, scars and other distinguishing characteristics.
British police use a classification based on the ethnic background of British society: W1 (White-British), W2 (White-Irish), W9 (Any other white background); M1 (White and black Caribbean), M2 (White and black African), M3 (White and Asian), M9 (Any other mixed background); A1 (Asian-Indian), A2 (Asian-Pakistani), A3 (Asian-Bangladeshi), A9 (Any other Asian background); B1 (Black Caribbean), B2 (Black African), B3 (Any other black background); O1 (Chinese), O9 (Any other). Some of the characteristics that constitute these groupings are biological and some are learned (cultural, linguistic, etc.) traits that are easy to notice.
In many countries, such as France, the state is legally banned from maintaining data based on race, so the police often issue wanted notices to the public that use labels like "dark skin complexion". One factor encouraging this kind of circuitous wording is the controversy over the actual relationship between crimes, their assigned punishments, and the division of people into so-called "races", which leads officials to try to de-emphasize the alleged race of suspects.
In the United States, the practice of racial profiling has been ruled both unconstitutional and a violation of civil rights. There is active debate regarding the cause of the marked correlation between recorded crimes, the punishments meted out, and the country's "racially divided" people, and many consider de facto racial profiling an example of institutional racism in law enforcement. The history of misusing racial categories to adversely impact one or more groups, or to offer protection and advantage to another, has a clear impact on the debate over the government's legitimate use of known phenotypical or genotypical characteristics tied to the presumed race of both victims and perpetrators.
More recent work in racial taxonomy based on DNA cluster analysis (see Lewontin's Fallacy) has led law enforcement to narrow their search for individuals based on a range of phenotypical characteristics found consistent with DNA evidence.
While controversial, DNA analysis has been successful in helping police identify both victims and perpetrators by indicating what phenotypical characteristics to look for and what community the individual may have lived in. For example, in one case phenotypical characteristics suggested that the friends and family of an unidentified victim would be found among the Asian community, but the DNA evidence directed official attention to missing Native Americans, among whom her true identity was eventually confirmed. In an attempt to avoid potentially misleading associations suggested by the word "race," this classification is called "biogeographical ancestry" (BGA), but the terms for the BGA categories are similar to those used for race.
The difference is that ancestry-informative DNA markers identify continent-of-ancestry admixture, not ethnic self-identity, and provide a wide range of phenotypical characteristics such that some people in a biogeographical category will not match the stereotypical image of an individual belonging to the corresponding race. To facilitate the work of officials trying to find individuals based on the evidence of their DNA traces, firms providing the genetic analyses also provide photographs showing a full range of phenotypical characteristics of people in each biogeographical group. Of special interest to officials trying to find individuals on the basis of DNA samples that indicate a diverse genetic background is what range of phenotypical characteristics people with that general mixture of genotypical characteristics may display.
Similarly, forensic anthropologists draw on highly heritable morphological features of human remains (e.g. cranial measurements) to aid in the identification of the body, including in terms of race. In a recent article anthropologist Norman Sauer asked, "if races don't exist, why are forensic anthropologists so good at identifying them?" Sauer observed that the use of 19th century racial categories is widespread among forensic anthropologists:
- "In many cases there is little doubt that an individual belonged to the Negro, Caucasian, or Mongoloid racial stock."
- "Thus the forensic anthropologist uses the term race in the very broad sense to differentiate what are commonly known as white, black and yellow racial stocks."
- "In estimating race forensically, we prefer to determine if the skeleton is Negroid, or Non-Negroid. If findings favor Non-Negroid, then further study is necessary to rule out Mongoloid."
According to Sauer, "The assessment of these categories is based upon copious amounts of research on the relationship between biological characteristics of the living and their skeletons." Nevertheless, he agrees with other anthropologists that race is not a valid biological taxonomic category, and that races are socially constructed. He argued there is nevertheless a strong relationship between the phenotypic features forensic anthropologists base their identifications on, and popular racial categories. Thus, he argued, forensic anthropologists apply a racial label to human remains because their analysis of physical morphology enables them to predict that when the person was alive, a particular racial label would have been applied to them.
- ^ a b AAPA Statement on Biological Aspects of Race American Association of Physical Anthropologists "Pure races, in the sense of genetically homogeneous populations, do not exist in the human species today, nor is there any evidence that they have ever existed in the past."
- ^ Bamshad, Michael and Steve E. Olson. "Does Race Exist?", Scientific American Magazine (10 November 2003).
- ^ "NOVA Online: Mystery of the First Americans". Pbs.org. http://www.pbs.org/wgbh/nova/first/brace.html. Retrieved 2009-04-18.
- ^ a b "NOVA Online: Mystery of the First Americans". Pbs.org. http://www.pbs.org/wgbh/nova/first/gill.html. Retrieved 2009-04-18.
- ^ S. O. Y. Keita, R. A. Kittles, C. D. M. Royal, G. E. Bonney, P. Furbert-Harris, G. M. Dunston & C. N. Rotimi, 2004 "Conceptualizing human variation" in Nature Genetics 36, S17 - S20 Conceptualizing human variation
- ^ For example this statement expressing the official viewpoint of the American Anthropological Association at their web page: "Evidence from the analysis of genetics (e.g., DNA) indicates that most physical variation lies within so-called racial groups. This means that there is greater variation within 'racial' groups than between them."
- ^ John Lie Modern Peoplehood (Cambridge, Mass.: Harvard University Press, 2004)
- ^ Thompson, William; Joseph Hickey (2005). Society in Focus. Boston, MA: Pearson. ISBN 0-205-41365-X.
- ^ a b Gordon 1964
- ^ a b "American Anthropological Association Statement on "Race"". Aaanet.org. 1998-05-17. http://www.aaanet.org/stmts/racepp.htm. Retrieved 2009-04-18.
- ^ a b Palmie, Stephan (2007) "Genomics, Divination, 'Racecraft'" in American Ethnologist 34(2): 214
- ^ a b Mevorach, Katya Gibel (2007) "Race, Racism and Academic Complicity" in American Ethnologist 34(2): 239-240
- ^ Daniel A. Segal 'The European': Allegories of Racial Purity Anthropology Today, Vol. 7, No. 5 (Oct., 1991), pp. 7-9 doi:10.2307/3032780
- ^ a b Bindon, Jim. University of Alabama. "Post World War II". 2005. August 28, 2006.
- ^ The Mummies of Xinjiang, DISCOVER Magazine
- ^ A meeting of civilisations: The mystery of China's Celtic mummies, The Independent
- ^ Julian the Apostate, Against the Galileans: remains of the 3 books, excerpted from Cyril of Alexandria, Contra Julianum (1923) pp.319-433
- ^ El Hamel, Chouki (2002), "'Race', slavery and Islam in Maghribi Mediterranean thought: the question of the Haratin in Morocco", The Journal of North African Studies 7 (3): 29–52 [39–42]
- ^ Bethwell A. Ogot, Zamani: A Survey of East African History, (East African Publishing House: 1974), p.104
- ^ Cyril Glasse, The New Encyclopedia of Islam, (Rowman & Littlefield Publishers: 2008), p.631
- ^ Bernard Lewis, The Political Language of Islam, (University of Chicago Press: 1991), p.466
- ^ Lawrence I. Conrad (1982), "Taun and Waba: Conceptions of Plague and Pestilence in Early Islam", Journal of the Economic and Social History of the Orient 25 (3): 268-307
- ^ A. Smedley (1999) Race in North America: origin and evolution of a worldview, 2nd ed. Westview Press, Boulder
- ^ Meltzer M (1993) Slavery: a world history, rev ed. DaCapo Press, Cambridge, MA
- ^ Takaki R (1993) A different mirror: a history of multicultural America. Little, Brown, Boston
- ^ Banton M (1977) The idea of race. Westview Press, Boulder
- ^ Smedley A (1999) Race in North America: origin and evolution of a worldview, 2nd ed. Westview Press, Boulder
- ^ Huxley, T. H. "On the Geographical Distribution of the Chief Modifications of Mankind" (1870) Journal of the Ethnological Society of London
- ^ Charles Darwin, The Descent of Man, Chapter 7 - On the Races of Man. Consider, for instance, the following excerpt: "We thus see that many of the wilder races of man are apt to suffer much in health when subjected to changed conditions or habits of life, and not exclusively from being transported to a new climate. Mere alterations in habits, which do not appear injurious in themselves, seem to have this same effect; and in several cases the children are particularly liable to suffer. It has often been said, as Mr. Macnamara remarks, that man can resist with impunity the greatest diversities of climate and other changes; but this is true only of the civilised races."
- ^ Darwin, C. (1871/1874). The Descent of Man, 2nd. Ed., London: John Murray.
- ^ Carleton S. Coon, The Origin of Races, (New York: Knopf, 1962)
- ^ The American Heritage Book of English Usage: A Practical and Authoritative Guide to Contemporary English. 1996. Entry on "Race"
- ^ "In Ways Unacademical": The Reception of Carleton S. Coon's The Origin of Races by Prof. John P Jackon Jr, from 'Journal of the History of Biology' published 2001
- ^ Leonard Lieberman and Fatimah Linda C. Jackson (1995) "Race and Three Models of Human Origin" in American Anthropologist Vol. 97, No. 2, pp. 232-234
- ^ Leonard Lieberman and Fatimah Linda C. Jackson (1995) "Race and Three Models of Human Origin" in American Anthropologist Vol. 97, No. 2, pp. 237
- ^ Leonard Lieberman and Fatimah Linda C. Jackson (1995) "Race and Three Models of Human Origin" in American Anthropologist Vol. 97, No. 2, pp. 239
- ^ a b Pleijel, F. and Rouse, G., W. (2000) "Least-inclusive taxonomic unit: a new taxonomic concept for biology" Proceedings of the Royal Society 267: 627–630 PDF
- ^ Haig, S. M., E. A. Beever, S. M. Chambers, H. M. Draheim, B. D. Dugger, S. Dunham, E. Elliott-Smith, J. B. Fontaine, D. C. Kesler, B. J. Knaus, I. F. Lopes, P. Loschl, T. D. Mullins, and L. M. Sheffield (2006) "Taxonomic Considerations in Listing Subspecies Under the U.S. Endangered Species Act" Conservation Biology 20: 1584–1594 doi:10.1111/j.1523-1739.2006.00530.x
- ^ a b c d e f g h i j Keita et al. 2004
- ^ a b c d e f g h Templeton, 1998
- ^ Long and Kittles, 2003
- ^ O'Brien, S. J. and Mayr, E. (1991) "Bureaucratic mischief: recognizing endangered species and subspecies." Science 251: 1187-1190 PDF
- ^ Amadon, D. 1949. The seventy-five percent rule for subspecies. Condor 51:250-258.
- ^ Mayr, E. 1969. Principles of Systematic Zoology. McGraw-Hill, New York.
- ^ Patten MA & Unitt P. (2002). Diagnosability versus mean differences of sage sparrow subspecies. Auk. vol 119, no 1. p. 26-35.
- ^ Wright, S. 1978. Evolution and the Genetics of Populations, Vol. 4, Variability Within and Among Natural Populations. Univ. Chicago Press, Chicago, Illinois. p. 438
- ^ Human Genome Project Information: Minorities, Race, and Genomics
- ^ a b c Joseph L. Graves, (2006) What We Know and What We Don’t Know: Human Genetic Variation and the Social Construction of Race from Race and Genomics
- ^ The Use of Racial, Ethnic, and Ancestral Categories in Human Genetics Research by Race, Ethnicity, and Genetics Working Group. Am J Hum Genet. 2005 77(4): 519–532.
- ^ Deconstructing the Relationship Between Genetics and Race. Michael Bamshad, Stephen Wooding, Benjamin A. Salisbury and J. Claiborne Stephens. Nature Reviews Genetics (2004) 5:598-609
- ^ Conceptualizing human variation by S O Y Keita, 2, R A Kittles1, C D M Royal, G E Bonney, P Furbert-Harris, G M Dunston & C N Rotimi. Nature Genetics 36, S17 - S20 (2004)
- ^ Implications of biogeography of human populations for 'race' and medicine by Sarah A Tishkoff & Kenneth K Kidd. Nature Genetics 36, S21 - S27 (2004)
- ^ Genetic variation, classification and 'race' by Lynn B Jorde & Stephen P Wooding. Nature Genetics' 36, S28 - S33 (2004)
- ^ "Project MUSE - Human Biology - Human Genetic Diversity and the Nonexistence of Biological Races". Muse.jhu.edu. http://muse.jhu.edu/journals/human_biology/v075/75.4long.pdf. Retrieved 2009-04-18.
- ^ http://www.anthrosource.net/doi/abs/10.1525/an.2006.47.2.7?journalCode=an accessed June 2007
- ^ Saitou. Kyushu Museum. 2002. February 2, 2007
- ^ Race and Three Models of Human Origin, Lieberman and Jackson (1995).
- ^ Theodosius Dobzhansky "Comment" in Current Anthropology 3(3): 279-280
- ^ Scientists Find A DNA Change That Accounts For Light Skin, The Washington Post, December 16, 2005
- ^ Pilar Ossorio and Troy Duster (2006) Race and Genetics Controversies in Biomedical, Behavioral, and Forensic Sciences American Psychologist 60 115–128 doi:10.1037/0003-066X.60.1.115
- ^ Your Family May Once Have Been A Different Color by Robert Krulwich. Morning Edition, National Public Radio. 2 Feb 2009.
- ^ a b What is a population? An empirical evaluation of some genetic methods for identifying the number of gene pools and their degree of connectivity. by ROBIN S. WAPLES and OSCAR GAGGIOTTI. Molecular Ecology (2006) 15, 1419–1439. doi:10.1111/j.1365-294X.2006.02890.x
- ^ Koertvelyessy, TA and MT Nettleship 1996 Ethnicity and mating structure in Southwestern Hungary. Rivista di Antropologia (Roma) 74:45-53
- ^ Koertvelyessy, T 1995 Etnicity, isonymic relationships, and biological distance in Northeastern Hungary. Homo 46/1:1-9.
- ^ Pettener. D 1990 Temporal trends in marital structure and isonymy in S. Paolo Albanese, Italy. Human Biology 6:837-851.
- ^ Biondi, G, P Raspe, GW Lasker, and GGN Mascie-Taylor 1990 Relationships estimated by isonymy among the Italo-Greco villages of southern Italy. Human Biology 62:649-663.
- ^ Wright S. 1978. Evolution and the Genetics of Populations, Vol. 4, Variability Within and Among Natural Populations. Chicago, II: Univ. Chicago Press
- ^ a b * Witherspoon DJ, Wooding S, Rogers AR, Marchani EE, Watkins WS, Batzer MA, Jorde LB. (2007) Genetic similarities within and between human populations. Genetics. 176(1):351–9. Full Text
- ^ Leonard Lieberman, Rodney C. Kirk, and Alice Littlefield, "Perishing Paradigm: Race—1931-99," American Anthropologist 105, no. 1 (2003): 110-13. A following article in the same issue, by Mat Cartmill and Kaye Brown, questions the precise rate of decline but agrees that the Negroid/Caucasoid/Mongoloid paradigm has fallen into near-total disfavor.
- ^ (Wilson et al. 2001), (Cooper et al. 2003) (given in summary by Bamshad et al. 2004 p.599)
- ^ (Schwartz 2001), (Stephens 2003) (given in summary by Bamshad et al. 2004 p.599)
- ^ (Smedley and Smedley 2005), (Helms et al. 2005), . Lewontin, for example argues that there is no biological basis for race on the basis of research indicating that more genetic variation exists within such races than among them (Lewontin 1972).
- ^ (Risch et al. 2002), (Bamshad 2005). Neil Risch argues: "One could make the same arguments about sex and age! ... you can undermine any definitional system... In a recent study... we actually had a higher discordance rate between self-reported sex and markers on the X chromosome [than] between genetic structure [based on microsatellite markers] versus [racial] self-description, [which had a] 99.9% concordance... So you could argue that sex is also a problematic category. And there are differences between sex and gender; self-identification may not be correlated with biology perfectly. And there is sexism. And you can talk about age the same way. A person's chronological age does not perfectly correspond to his biological age for a variety of reasons, both inherited and non-inherited. Perhaps just using someone's actual birth year is not a very good way of measuring age. Does that mean we should throw it out? ... Any category you come up with is going to be imperfect, but that doesn't preclude you from using it or the fact that it has utility"(Gitschier 2005).
- ^ (Harpending and Rogers 2000), (Bamshad et al. 2003), (Edwards 2003), (Bamshad et al. 2004), (Tang et al. 2005), (Rosenberg et al. 2005): "If enough markers are used... individuals can be partitioned into genetic clusters that match major geographic subdivisions of the globe".
- ^ (Mountain and Risch 2004)
- ^ Frederick P. Rivara and Laurence Finberg, "Use of the Terms Race and Ethnicity," Archives of Pediatrics & Adolescent Medicine 155, no. 2 (2001): 119. For similar author's guidelines, see Robert S. Schwartz, "Racial Profiling in Medical Research," The New England Journal of Medicine, 344 (no, 18, May 3, 2001); M.T. Fullilove, "Abandoning 'Race' as a Variable in Public Health Research: An Idea Whose Time has Come," American Journal of Public Health, 88 (1998), 1297-1298; and R. Bhopal and L. Donaldson, "White, European, Western, Caucasian, or What? Inappropriate Labeling in Research on Race, Ethnicity, and Health." American Journal of Public Health, 88 (1998), 1303-1307.
- ^ See program announcement and requests for grant applications at the NIH website, at nih.gov.
- ^ ssc.uwo.ca
- ^ "'Race'—Still an Issue for Physical Anthropology? Results of Polish Studies Seen in the Light of the U.S. Findings" by Katarzyna A. Kaszycka. American Anthropologist March 2003, Vol. 105, No. 1, pp. 116-124
- ^ Marks, Jonathan (2007) "Grand Anthropological Themes" in American Ethnologist 34(2): 234, cf. Marks, Jonathan (1995) Human Biodiversity: Genes, Race, and History. New York: Aldine de Gruyter
- ^ "New Ideas, New Fuels: Craig Venter at the Oxonian". FORA.tv. 2008-11-03. http://fora.tv/2008/07/30/New_Ideas_New_Fuels_Craig_Venter_at_the_Oxonian#chapter_17. Retrieved 2009-04-18.
- ^ a b Shriver et al. 2003
- ^ "Revisions to the Standards for the Classification of Federal Data on Race and Ethnicity". Office of Management and Budget. 1997-10-30. http://www.whitehouse.gov/omb/fedreg/1997standards.html. Retrieved 2009-03-19. Also: U.S. Census Bureau Guidance on the Presentation and Comparison of Race and Hispanic Origin Data and B03002. HISPANIC OR LATINO ORIGIN BY RACE; 2007 American Community Survey 1-Year Estimates
- ^ Mark D. Shriver & Rick A. Kittles, "Genetic ancestry and the search for personalized genetic histories" in Nature Reviews Genetics 5, 611-618
- ^ Hammer, M.F., A.J. Redd, E.T. Wood, M. R. Bonner, H. Jarjanazi, T. karafet, S. Santachiara-Benerecetti, A. Oppenheimer, M.A. Jobling, T. Jenkins, H. Ostrer, and B. Bonne-Tamir (2000) "Jewish and Middle Eastern Non-Jewish Populations Share a Common pool of Y-Chromosome Biallelic Haplotypes" in Proceedings of the National Academy of Sciences 97(12): 6769-6774
- ^ Thomas, M., K. Skorecki, H. Ben-Ami, T. Parfitt, N. Bradman, and D.B. Goldstein (1998) "Origins of Old Testament priests" in Nature 394(6689): 138-140.
- ^ Nadia Abu el-Haj (2007) "Rethinking Genetic Genealogy" in American Ethnologist 34(2): 224-225
- ^ "Back with a Vengeance: the Reemergence of a Biological Conceptualization of Race in Research on Race/Ethnic Disparities in Health Reanne Frank". http://paa2006.princeton.edu/download.aspx?submissionId=61713. Retrieved 2009-04-18.
- ^ Charles Rotimi (2003) "Genetic Ancestry Tracing and the African Identity: A Double-Edged Sword?" in Developing World Bioethics 3(2): 153-154.
- ^ Race, Ethnicity, and Genetics Working Group*. "The Use of Racial, Ethnic, and Ancestral Categories in Human Genetics Research". Pubmedcentral.nih.gov. http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=1275602. Retrieved 2009-04-18.
- ^ Stephan Palmie (2007) "Genomic Moonlighting, Jewish Cyborgs, and Peircian Abduction" in American Ethnologist 34(2): 249.
- ^ "sickle cell prevalence". Ornl.gov. http://www.ornl.gov/sci/techresources/Human_Genome/posters/chromosome/sca.shtml. Retrieved 2009-04-18.
- ^ Taylor AL, Ziesche S, Yancy C, Carson P, D'Agostino R Jr, Ferdinand K, Taylor M, Adams K, Sabolinski M, Worcel M, Cohn JN. Combination of isosorbide dinitrate and hydralazine in blacks with heart failure. N Engl J Med 2004;351:2049-57. PMID 15533851.
- ^ Duster, Troy (2005) "Race and Reification in Science" in Science 307(5712): 1050-1051, Fausto-Sterling, Anne (2004) "Refashioning Race: DNA and the Politics of Health" in differences 15(3):1-37, Jones, Joseph and Alan Goodman (2005) "BiDil and the 'fact' of Genetic Blackness" in Anthropology News 46(7):26, Kahn, Joseph (2004) "How a Drug Becomes 'Ethnic:' Law, Commerce, and the Production of Racial Categories in Medicine" in Yale Journal of Health Policy, Law and Politics 4(1):1-46, Kahn, Joseph (2005) "Misreading Race and Genomics after BiDil" in Nature Genetics 37(7):655-656, Palmie, Stephan (2007) "Genomics, Divination and 'Racecraft'" in American Ethnologist 34(2): 205-222).
- ^ Multiple Sclerosis. The Immune System's Terrible Mistake. BY PETER RISKIND, M.D., PH.D.
- ^ "FBI - Most Wanted - The FBI's Ten Most Wanted Fugitives". http://www.fbi.gov/wanted/topten/fugitives/fugitives.htm.
- ^ Molecular eyewitness: DNA gets a human face. Controversial crime-scene test smacks of racial profiling, critics say. Carolyn Abraham, June 25, 2005
- ^ DNA tests offer clues to suspect's race By Richard Willing, USA TODAY
- ^ "Compositions and methods for inferring ancestry". Appft1.uspto.gov. http://appft1.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&co1=AND&d=PG01&s1=20040229231&OS=20040229231&RS=20040229231. Retrieved 2009-04-18.
- ^ Sauer, Norman J. (1992) "Forensic Anthropology and the Concept of Race: If Races Don't Exist, Why are Forensic Anthropologists So Good at Identifying them" in Social Science and Medicine 34(2): 107-111.
- ^ El-Najjar M. Y. and McWilliams K. R. Forensic Anthropology: The Structure, Morphology and Variation of Human Bone and Dentition, p. 72. Charles C. Thomas, Springfield, 1978.
- ^ Skinner M. and Lazenby R. A. Found Human Remains: A Field Manual for the Recovery of the Recent Human Skeletons, p. 47. Simon Fraser University, British Columbia, 1983.
- ^ Morse D., Duncan J. and Stoutamire J. (Editors) Handbook of Forensic Archaeology. p. 89. Bill’s Book Store, Tallahassee, 1983.
- ^ Sauer, Norman J. (1992) "Foren Anthropology and the Concept of Race: If Races Don't Exist, Why are Forensic Anthropologists So Good at Identifying them" in Social Science and Medicine 34(2): 107-111.
- Abizadeh, Arash (2001) "Ethnicity, Race, and a Possible Humanity" World Order 33.1: 23-34.
- American Association of Physical Anthropologists (1996) AAPA statement on biological aspects of race. Am J Phys Anthropol 101:569–570
- Banton M (1977) The idea of race. Westview Press, Boulder
- Boas 1912 "Change in Bodily Form of Descendants of Immigrants" in American Anthropologist 14: 530-562
- Brace 1964 "A Non-racial Approach Toward the Understanding of Human Diversity" in The Concept of Race, ed. Ashley Montagu
- Calafell F (2003) Classifying humans. Nat Genet 33:435–436
- Cooper RS, Kaufman JS, Ward R (2003) Race and genomics. N Engl J Med 348:1166–1170
- Dobzhansky, T. (1970). Genetics of the Evolutionary Process. New York, NY: Columbia University Press.
- ——— (2005) Race and reification in science. Science 307:1050–1051
- Ehrlich and Holm 1964 "A Biological View of Race" in The Concept of Race, ed. Ashley Montagu
- Frayer, David, M. Wolpoff, A. Thorne, F. Smith, G. Pope "Theories of Modern Origins: The Paleontological Test" in American Anthropologist 95(1) 14-50
- Guthrie RD (1996) The mammoth steppe and the origin of mongoloids and their dispersal. In: Akazawa T, Szathmary E (eds) Prehistoric Mongoloid dispersals. Oxford University Press, New York, pp 172–186
- Hannaford I (1996) Race: the history of an idea in the West. Johns Hopkins University Press, Baltimore
- Harpending H, Rogers A (2000) Genetic perspectives on human origins and differentiation. Annu Rev Genomics Hum Genet 1:361–385
- Harris, Marvin (1980) Patterns of Race in the Americas. Greenwood Press
- Hooton, E.A. (1926). Methods of racial analysis. Science 63, 75–81.
- Jablonski NG (2004) The evolution of human skin and skin color. Annu Rev Anthropol 33:585–623
- Keita SOY, Kittles RA (1997) The persistence of racial thinking and the myth of racial divergence. Am Anthropol 99:534–544
- Lahr MM (1996) The evolution of modern human diversity: a study of cranial variation. Cambridge University Press, Cambridge, United Kingdom
- Lamason RL, Mohideen MA, Mest JR, Wong AC, Norton HL, Aros MC, Jurynec MJ, Mao X, Humphreville VR, Humbert JE, Sinha S, Moore JL, Jagadeeswaran P, Zhao W, Ning G, Makalowska I, McKeigue PM, O'Donnell D, Kittles R, Parra EJ, Mangini NJ, Grunwald DJ, Shriver MD, Canfield VA, Cheng KC (2005). SLC24A5, a putative cation exchanger, affects pigmentation in zebrafish and humans. Science 310: 1782-6.
- Lewis B (1990) Race and slavery in the Middle East. Oxford University Press, New York
- Lie J. (2004). Modern Peoplehood. Harvard University Press, Cambridge, Mass.
- Lieberman DE, McBratney BM, Krovitz G (2002) The evolution and development of cranial form in Homo sapiens. Proc Natl Acad Sci USA 99:1134–1139
- Lieberman L (2001) How "Caucasoids" got such big crania and why they shrank: from Morton to Rushton. Curr Anthropol 42:69–95
- Lieberman and Jackson 1995 "Race and Three Models of Human Origins" in American Anthropologist 97(2) 231-242
- Lieberman, Hampton, Littlefield, and Hallead 1992 "Race in Biology and Anthropology: A Study of College Texts and Professors" in Journal of Research in Science Teaching 29:301-321
- Lewontin 1972 "The Apportionment of Human Diversity" in Evolutionary Biology 6:381-397
- Livingstone 1962 "On the Non-Existence of Human Races" in Current Anthropology 3: 279-281
- Long, J.C. and Kittles, R.A. (2003). Human genetic diversity and the nonexistence of biological races. Hum Biol. 75, 449–71.
- Marks J (1995) Human biodiversity: genes, race, and history. Aldine de Gruyter, New York
- Mayr, E. (1969). Principles of Systematic Zoology. New York, NY: McGraw-Hill.
- Mays VM, Ponce NA, Washington DL, Cochran SD (2003) Classification of race and ethnicity: implications for public health. Annu Rev Public Health 24:83–110
- Meltzer M (1993) Slavery: a world history, rev ed. DaCapo Press, Cambridge, MA
- Montagu (1941). "The Concept of Race in Light of Genetics" in Journal of Heredity 23: 241-247
- Montagu (1942). Man’s Most Dangerous Myth: The Fallacy of Race
- Mörner M (1967) Race mixture in the history of Latin America. Little, Brown, Boston
- Morton NE, Collins A (1998) Tests and estimates of allelic association in complex inheritance. Proc Natl Acad Sci USA 95:11389–11393
- Nobles M (2000) Shades of citizenship: race and the census in modern politics. Stanford University Press, Stanford
- Parra EJ, Kittles RA, Shriver MD (2004) Implications of correlations between skin color and genetic ancestry for biomedical research. Nat Genet 36:S54–S60
- Parra EJ, Marcini A, Akey J, Martinson J, Batzer MA, Cooper R, Forrester T, Allison DB, Deka R, Ferrell RE, Shriver MD (1998) Estimating African American admixture proportions by use of population-specific alleles. Am J Hum Genet 63:1839–1851
- Parra FC, Amado RC, Lambertucci JR, Rocha J, Antunes CM, Pena SD (2003) Color and genomic ancestry in Brazilians. Proc Natl Acad Sci USA 100:177–182
- Platz EZ, Rimm EB, Willett WC, Kantoff PW, Giovannucci E (2000) Racial variation in prostate cancer incidence and in hormonal system markers among male health professionals. J Natl Cancer Inst 92:2009–2017
- Pritchard JK (2001) Are rare variants responsible for susceptibility to complex diseases? Am J Hum Genet 69:124–137
- Pritchard JK, Cox NJ (2002) The allelic architecture of human disease genes: common disease-common variant...or not? Hum Mol Genet 11:2417–2423
- Rees JL (2003) Genetics of hair and skin color. Annu Rev Genet 37:67–90
- Relethford JH (2002) Apportionment of global human genetic diversity based on craniometrics and skin color. Am J Phys Anthropol 118:393–398
- Risch N (2000) Searching for the genetic determinants in a new millennium. Nature 405:847–856
- Roseman CC (2004) Detecting interregionally diversifying natural selection on modern human cranial form by using matched molecular and morphometric data. Proc Natl Acad Sci USA 101:12824–12829
- Rosenberg NA, Pritchard JK, Weber JL, Cann HM, Kidd KK, Zhivotovsky LA, Feldman MW (2002) Genetic structure of human populations. Science 298:2381–2385
- Rotimi CN (2004) Are medical and nonmedical uses of large-scale genomic markers conflating genetics and "race"? Nat Genet 36:S43–S47
- Serre D, Langaney A, Chech M, Teschler-Nicola M, Paunovic M, Mennecier P, Hofreiter M, Possnert G G, Pääbo S (2004) No evidence of Neandertal mtDNA contribution to early modern humans. PLoS Biol 2:313–317
- Shriver, M. D. et al. (2003). Skin pigmentation, biogeographical ancestry, and admixture mapping. Hum. Genet. 112, 387–399.
- Sider, Gerald 1993 Lumbee Indian Histories: Race, Ethnicity, and Indian Identity in the Southern United States
- Smedley A (1999) Race in North America: origin and evolution of a worldview, 2nd ed. Westview Press, Boulder
- Smith DJ, Lusis AJ (2002) The allelic structure of common disease. Hum Mol Genet 11:2455–2461
- Smith, Fred (1982) "Upper Pleistocene Hominid Evolution in South-Central Europe: A Review of the Evidence and Analysis of Trends" Current Anthropology 23: 667-686
- Smith MW, Patterson N, Lautenberger JA, Truelove AL, McDonald GJ, Waliszewska A, Kessing BD, et al. (2004) A high-density admixture map for disease gene discovery in African Americans. Am J Hum Genet 74:1001–1013
- Snowden FM (1983) Before color prejudice: the ancient view of blacks. Harvard University Press, Cambridge, MA
- Spickard PR (1992) The illogic of American racial categories. In: Root MPP (ed) Racially mixed people in America. Sage, Newbury Park, CA, pp 12–23
- Stanton W (1960) The leopard's spots: scientific attitudes toward race in America, 1815–1859. University of Chicago Press, Chicago
- Stringer C (2002) Modern human origins: progress and prospects. Philos Trans R Soc Lond B Biol Sci 357:563–579
- Sturm RA, Teasdale RD, Box NF (2001) Human pigmentation genes: identification, structure and consequences of polymorphic variation. Gene 277:49–62
- Takaki R (1993) A different mirror: a history of multicultural America. Little, Brown, Boston
- Tang H, Quertermous T, Rodriguez B, Kardia SL, Zhu X, Brown A, Pankow JS, Province MA, Hunt SC, Boerwinkle E, Schork NJ, Risch NJ (2005). Genetic structure, self-identified race/ethnicity, and confounding in case-control association studies. Am J Hum Genet 76, 268-75.
- Templeton AR (1998) Human races: a genetic and evolutionary perspective. Am Anthropol 100:632–650
- ——— (2002) Out of Africa again and again. Nature 416:45–51
- Thomas DC, Witte JS (2002) Point: population stratification: a problem for case-control studies of candidate-gene associations? Cancer Epidemiol Biomarkers Prev 11:505–512
- Thorne and Wolpoff 1992 "The Multiregional Evolution of Humans" in Scientific American (April) 76-83
- Todorov T (1993) On human diversity. Harvard University Press, Cambridge, MA
- Wallace R, Wallace D, Wallace RG (2004) Coronary heart disease, chronic inflammation, and pathogenic social hierarchy: a biological limit to possible reductions in morbidity and mortality. J Natl Med Assoc 96:609–619
- Wilson JF, Weale ME, Smith AC, Gratrix F, Fletcher B, Thomas MG, Bradman N, Goldstein DB (2001) Population genetic structure of variable drug response. Nat Genet 29:265–269
- Wilson and Brown 1953 "The Subspecies Concept and Its Taxonomic Application" in Systematic Zoology 2: 97-110
- Wolpoff, Milford 1993 "Multiregional Evolution: The Fossil Alternative to Eden" in The Human Evolution Sourcebook Russell Ciochon and John Fleagle, eds.
- Yu N, Chen FC, Ota S, Jorde LB, Pamilo P, Patthy L, Ramsay M, Jenkins T, Shyue SK, Li WH (2002) Larger genetic differences within Africans than between Africans and Eurasians. Genetics 161:269–274
Official statements and standards
- "The Race Question", UNESCO, 1950
- US Census Bureau: Definition of Race
- American Association of Physical Anthropologists' Statement on Biological Aspects of Race
- "Standards for Maintaining, Collecting, and Presenting Federal Data on Race and Ethnicity", Federal Register 1997
- American Anthropological Association's Statement on Race and RACE: Are we so different? a public education program developed by the American Anthropological Association.
- The Myth of Race On the lack of scientific basis for the concept of human races (Medicine Magazine, 2007).
- Race - The power of an illusion Online companion to California Newsreel's documentary about race in society, science, and history
- Steven and Hilary Rose, The Guardian, "Why we should give up on race", 9 April 2005
- Times Online, "Gene tests prove that we are all the same under the skin", 27 October 2004.
- Michael J. Bamshad, Steve E. Olson "Does Race Exist?", Scientific American, December 2003
- "Gene Study Identifies 5 Main Human Populations, Linking Them to Geography", Nicholas Wade, NYTimes, December 2002. Covering
- Scientific American Magazine (December 2003 Issue) Does race exists ?.
- DNA Study published by United Press International showing how 30% of White Americans have at least one Black ancestor
- Yehudi O. Webster Twenty-one Arguments for Abolishing Racial Classification, The Abolitionist Examiner, June 2000
- The Tex(t)-Mex Galleryblog, An updated, online supplement to the University of Texas Press book (2007), Tex(t)-Mex
- Times of India - Article about Asian racism
- South China Morning Post - Going beyond ‘sorry’
- Is Race "Real"? forum organized by the Social Science Research Council, includes 2005 op-ed article by A.M. Leroi from the New York Times advocating biological conceptions of race and responses from scholars in various fields More from Leori with responses
- Richard Dawkins: Race and creation (extract from The Ancestor's Tale: A Pilgrimage to the Dawn of Life) - On race, its usage and a theory of how it evolved. (Prospect Magazine October 2004)
- James, Michael (2008) Race, in the Stanford Encyclopedia of Philosophy.
- Ten Things Everyone Should Know About Race by California Newsreel.
- American Anthropological Association's educational website on race with links for primary school educators and researchers
- Boas's remarks on race to a general audience
- Catchpenny mysteries of ancient Egypt, "What race were the ancient Egyptians?", Larry Orcutt.
- Judy Skatssoon, "New twist on out-of-Africa theory", ABC Science Online, Wednesday, 14 July 2004.
- Racial & Ethnic Distribution of ABO Blood Types - bloodbook.com
- Are White Athletes an Endangered Species? And Why is it Taboo to Talk About It? Discussion of racial differences in athletics
- "Does Race Exist? A proponent's perspective" - Author argues that the evidence from forensic anthropology supports the idea of race.
- "Does Race Exist? An antagonist's perspective" - The author argues that clinal variation undermines the idea of race.
- American Ethnography - The concept of race Ashley Montagu's 1962 article in American Anthropology
- American Ethnography - The genetical theory of race, and anthropological method Ashley Montagu's 1942 American Anthropology article | http://diccionario.sensagent.com/Race_(classification_of_human_beings)/en-en/ | 13 |
84 | Normal Tool Instructions
Note: These instructions are abstracted from and can be supplemented by the full web lecture on the Normal Probability Distribution available through another link on this page.
The top button on the Normal Tool says "Normal Tool." This allows us to find probabilities for any normal distribution. The bottom button says "Standard Normal Tool." This allows us to find probabilities for a special case of the normal distribution (sometimes known as the z-distribution or unit normal).
Instructions will be given for the general case first (Normal Tool). Then we will go on to instructions for the simpler case (Standard Normal Tool).
1. Normal Tool: Finding probabilities for any normal distribution
We will use an example to illustrate the step-by-step use of the Normal Tool. As a big-picture overview, let's suppose that scientists doing research take some interesting phenomenon in nature, reduce it to numbers by measurement operations, and then model those numbers as a random variable. A frequently used model is the normal probability distribution.
Height Example. In this example we will be interested in the heights of northern European males. We take such a person and reduce him to a single number via the usual operations for measuring someone's height. Then we model the height of northern European males as a normal population with mu = 150 cm and sigma = 30 cm. In other words, our model is N(150, 30).
Click on the top button--Normal Tool.
What is the Probability Between 140 and 170? We have modeled the heights of northern European males as N(150, 30). If that model is true, and if we sample one man from that population, what are the chances he has a height between 140 cm and 170 cm?
Total Area under the Normal curve. Remember that we can interpret the area below a normal curve as probability. The total area below the normal curve (from negative infinity up to positive infinity) is assumed to be 1. That is, the probability that a man's height will fall between negative and positive infinity is 1.
Area Between. Since the total area under the curve is 1, the area between 140 and 170 must be some fraction of 1. On the Normal Tool the first thing you must do is make sure that the little icon indicating "area between" is clicked (see lecture graphic). "Between" is the default setting for the Normal Tool, so when you open it up it automatically gives you the area between two values.
NOTE: Do not use the 'ENTER' key on your keyboard to enter values.
Set mu. On the lecture graphic, arrows point to little boxes where you can set mu and sigma. First type in the mu which is relevant to whatever example you are working on. Then click the "ENTER MU (50 - 500)" button right next to the box where you entered the value of mu. (Note: The Normal Probability Tool only accepts values of mu between 50 and 500.) For our height example, I have entered mu = 150.
Set sigma. The lecture graphic also shows where to enter the value of sigma (toward the lower right-hand corner of the tool). For our height example, I have entered sigma = 30. You must type in the value of sigma and then press the "ENTER SIGMA" button next to it. DO NOT use the 'Enter' or 'Return' key on your keyboard to enter scores.
Set lower value. We are looking for the area (probability) between two values. The lecture graphic shows you where you can enter the lower of the two values. Once you type in the number, click on the button which says "ENTER LOWER SCORE." For the height example, the lower value is 140 cm, so on the lecture graphic I have set the lower value to 140.
Set upper value. Similarly, as you can see on the lecture graphic, there's a box where you can enter the upper score. Once you type in the upper value number, click on the button which says "ENTER UPPER SCORE." Following the height example, I have set the upper score to 170 on the lecture graphic.
Find probability. You have entered mu, sigma, upper score and lower score. Now you are ready to find the answer to the question. The lecture graphic points to a box where the probability will appear. All you have to do is read it and record it. For the height example, the probability that a northern European man's height will fall between 140 and 170 cm is .3747.
Black Area. Probability is represented by the black area under the curve. Look at the normal distribution on Normal Probability Tool. The black area between 140 and 170 represents a probability of .3747.
We have set up a correspondence between area on a picture we can see and the concept of probability. This allows us to picture probability clearly and simply.
Click and drag. Play with the Normal Tool. You'll notice that there are two blue pointers just below the normal curve. One is labeled "lower score" and the other "upper score." If you click on either of them, you can drag the black area to whatever value you want. The upper or lower score changes accordingly. The probability changes also accordingly. Try it and watch how the black area and the probability change together.
Positive and Negative Infinity. Play with the Normal Tool some more. You'll notice that to the right of the white boxes where you enter the upper and lower scores there are buttons labeled "-oo" and "+oo." This is as close as we could get to the symbols for negative infinity (-oo) and positive infinity (+oo). If you click on the minus infinity button (-oo) the lower score will become minus infinity. If you click on the plus infinity button (+oo) the upper score will become plus infinity. Try this out now. Find the probability that a height will fall between minus and plus infinity. (Answer: 1.) What is the probability that a height will fall between minus infinity and 150 cm? (Answer: .5.)
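For readers who want to check the tool's readouts away from the applet, the same "area between" calculation can be done in a few lines of Python. This is only an added sketch, not part of the Normal Tool itself; it assumes SciPy is available, and the tool's displayed value (.3747) may differ slightly from the exact answer because of rounding inside the tool.

```python
# Sketch: P(lower < X < upper) for X ~ N(mu, sigma) via the cumulative
# distribution function (CDF): CDF(upper) - CDF(lower).
from scipy.stats import norm

mu, sigma = 150, 30            # the height model N(150, 30)
lower, upper = 140, 170

p_between = norm.cdf(upper, loc=mu, scale=sigma) - norm.cdf(lower, loc=mu, scale=sigma)
print(round(p_between, 4))     # roughly 0.38, close to the tool's readout

# The -oo and +oo buttons correspond to the limits of the CDF:
print(norm.cdf(float("inf"), loc=mu, scale=sigma))  # between -oo and +oo -> 1.0
print(norm.cdf(150, loc=mu, scale=sigma))           # between -oo and 150 -> 0.5
```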
Now we will turn to a related question--what is the area probability outside of two values?
What is the Probability Outside 140 and 170? If we sample one northern European male, what's the probability that his height will fall outside of 140 and 170? In other words, what are the chances that he'll be either below 140, or he'll be above 170 in height? That's what we mean by the word "outside."
Area Outside. The first thing you have to do is click the icon for "Area Outside" on the Normal Tool. The Normal Tool will now show you the area outside 140 and 170. It will also change the probability.
And then you do exactly the same thing that you did before. For our current example, you set mu at 150, set sigma at 30, set the lower value at 140, set the upper value at 170.
Find probability. Then you simply read the probability. This time it is .6253: the probability that a height will fall outside (below or above) 140 and 170 cm is .6253.
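The same check works for the "area outside" reading. Again this is only an illustrative sketch assuming SciPy, and small differences from the tool's rounded readout are expected.

```python
# Sketch: the area outside two scores is the complement of the area between them.
from scipy.stats import norm

mu, sigma = 150, 30
lower, upper = 140, 170

p_between = norm.cdf(upper, loc=mu, scale=sigma) - norm.cdf(lower, loc=mu, scale=sigma)
p_outside = 1 - p_between      # below 140 OR above 170
print(round(p_outside, 4))     # roughly 0.62
```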
Now we will turn to finding the area above a certain value.
What is the Probability Above 170? Perhaps a basketball coach is interested in tall men. We have modeled the heights of northern European males as N(150, 30). If that model is true, and if we sample one man from that population, what are the chances he has a height above 170 cm? This question implies that the lower score will be 170 and the upper score will be plus infinity. All scores above 170 will fall between 170 (on the low end) and plus infinity (on the high end).
Set mu: 150.
Set sigma: 30
Click Between Icon.
Set lower score: 170
Set upper score: +oo.
Read probability: .2546. There's about a 25% chance that the man would have a height above 170 cm. That's represented by the black area under the normal curve.
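As an added sketch (not part of the tool), the upper-tail probability can also be computed directly with SciPy's survival function, which is defined as 1 - CDF; the exact value is close to, but not identical with, the tool's rounded .2546.

```python
# Sketch: P(X > 170) for X ~ N(150, 30) using the survival function sf(x) = 1 - cdf(x).
from scipy.stats import norm

p_above = norm.sf(170, loc=150, scale=30)
print(round(p_above, 4))       # roughly 0.25
```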
Now we will turn to a special case of the normal distribution: The Standard Normal.
Click on "Back to Menu" at the bottom of the tool.
2. Standard Normal Tool: Finding probabilities for N(0, 1)
N(0, 1): There is a particular form of the normal distribution which is very commonly used in statistics. It is called unit normal or the standard normal or the z distribution. The unit normal is simply a normal distribution which has a mean (mu) = 0, and a standard deviation (sigma) = 1. In more compressed symbols the unit normal is N(0, 1).
Everything works exactly the same with the unit normal as it does for any normal. So everything we've already learned applies to this topic. We will just be using a particular member of the normal family of distributions. This member of the family has mu = 0 and sigma = 1 and is sometimes called the z distribution.
z-Tables in Stat Books. The unit normal is the particular form of the normal that is found in z-tables in the back of stat books. "In the old days" before we had interactive programs like Normal Tool, we had to convert all questions to z scores and look up probabilities in z-tables.
[If you are still using the Standard Normal Tool, click on "Back to Menu" at the bottom so you see a simple page with two buttons.]
Click on the lower button--Standard Normal Tool.
Question. Suppose that we have N(0, 1) as our probability model. What is the probability of a score between -1 and +1 on N(0, 1)?
Don't need to set mu and sigma. On the unit normal, N(0, 1), mu is always 0 and sigma is always 1. So you don't need to set them.
Click on the Area Between Icon.
Set lower and upper scores. Set the lower and upper score as we did above. In this case the lower score is -1 and the upper score is +1. When you start the Unit Normal option, it will come up with minus one and plus one as the lower and upper scores. So we don't have to do anything to solve the particular question we have asked.
Read the probability. The answer is .6827. This should be familiar to you. If it's not, it soon will be.
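If you want to reproduce this number outside the tool, the sketch below (again assuming SciPy; not part of the original instructions) computes the same standard normal probability. Because loc and scale default to 0 and 1, N(0, 1) needs no extra arguments.

```python
# Sketch: P(-1 < Z < +1) on the standard normal N(0, 1).
from scipy.stats import norm

p = norm.cdf(1) - norm.cdf(-1)
print(round(p, 4))             # 0.6827
```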
©Copyright 1997, 2000 Tom Malloy | http://www.psych.utah.edu/stat/bots/game7/Game7.html | 13 |
50 | In this section, we'll take a look at some formulas for calculating the volumes of some of the most common polyhedra.
The volume of a prism is equal to the product of the area of its base and the length of its altitude; V = Bh , where B is the area of the base and h is the length of the altitude (the height). The altitude of a prism is a segment with one endpoint in one of the bases, the other endpoint in the plane that contains the other base, perpendicular to that base. It is often called the height of the prism. The area of the base is a simple calculation of the area of whichever polygon forms the base of the prism.
Recall that a prism is only one special case of a cylinder. Unlike a prism, a cylinder's base can be any simple closed curve, not necessarily a polygon. The formula for the volume of a cylinder is roughly the same as that for a prism, though. The volume of a cylinder is the area of its base times the length of its altitude; V = Bh, where B is the area of the base and h is the length of the altitude (the height). Again, the altitude is the segment with one endpoint in one of the bases, the other endpoint in the plane that contains the other base, and perpendicular to that base. A circular cylinder adheres to this volume formula, but its volume can also be written as Π times the radius squared times the height: V = Πr²h. This is only a different way to write the product of the altitude and the area of the base (since the area of a circle is derived differently from the area of a polygon).
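As a small illustrative sketch (added here, not part of the original text), the prism and cylinder formulas translate directly into code; the function names and example numbers are assumptions chosen for the demonstration.

```python
# Sketch: V = B*h for any prism or cylinder; for a circular cylinder B = pi*r^2.
import math

def prism_or_cylinder_volume(base_area, height):
    return base_area * height

def circular_cylinder_volume(radius, height):
    return math.pi * radius**2 * height

print(circular_cylinder_volume(2, 5))               # about 62.83
print(prism_or_cylinder_volume(math.pi * 2**2, 5))  # same value, via V = B*h
```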
A pyramid has a slightly more complicated formula for its volume. The volume of a pyramid is equal to 1/3 the product of the area of its base and the length of its altitude. This formula is often written V = (1/3)Bh, where B is the area of the base and h is the length of the altitude (the height). This formula is especially important to know because by selecting a point inside any polyhedron as the vertex of a pyramid, that polyhedron can be broken down into components that are all pyramids. Just as a polygon will have as many triangles as it has sides, so will a polyhedron have as many pyramids as it has faces. With this method, we can find the volume of any polyhedron by breaking it up into a number of pyramids, calculating their individual volumes, and adding those volumes together.
The pyramid, like the prism, is only a specific case of a more general solid. All pyramids are cones with polygons for bases. A cone can have any simple closed curve as its base. The formula to find the volume of a cone is the same as that for a pyramid, however: 1/3 the product of the base's area and the altitude, or V = (1/3)Bh. When the base of a cone is a circle, the cone is a circular cone. The volume of a circular cone is (1/3)Π times the square of the radius times the length of the altitude; V = (1/3)Πr²h. Note that this is only another way to express the formula for a cone--it is a little more specific because we know a little more about this particular cone: its base is a circle.
The volume of a sphere, just like its surface area, is dependent solely on its radius. The volume of a sphere is equal to (4/3)Π times the radius cubed; V = (4/3)Πr³.
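The remaining formulas can be sketched the same way (again, an added illustration with made-up example values, not part of the original section).

```python
# Sketch: V = (1/3)*B*h for pyramids and cones, V = (1/3)*pi*r^2*h for circular
# cones, and V = (4/3)*pi*r^3 for spheres.
import math

def pyramid_or_cone_volume(base_area, height):
    return base_area * height / 3

def circular_cone_volume(radius, height):
    return math.pi * radius**2 * height / 3

def sphere_volume(radius):
    return 4 / 3 * math.pi * radius**3

print(pyramid_or_cone_volume(base_area=9, height=4))  # 12.0
print(circular_cone_volume(radius=3, height=4))       # about 37.70
print(sphere_volume(radius=3))                        # about 113.10
```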
Remember that the volume of a sphere and all of the other solids in this section are volumes of solids, not surfaces. | http://www.sparknotes.com/math/geometry2/3Dmeasurements/section3.rhtml | 13 |
85 | Grade Seven California Mathematics Content Standards
By the end of grade seven, students are adept at manipulating numbers and equations and understand the general principles at work. Students understand and use factoring of numerators and denominators and properties of exponents. They know the Pythagorean theorem and solve problems in which they compute the length of an unknown side. Students know how to compute the surface area and volume of basic three-dimensional objects and understand how area and volume change with a change in scale. Students make conversions between different units of measurement. They know and use different representations of fractional numbers (fractions, decimals, and percents) and are proficient at changing from one to another. They increase their facility with ratio and proportion, compute percents of increase and decrease, and compute simple and compound interest. They graph linear functions and understand the idea of slope and its relation to ratio.
1.0 Students know the properties of, and compute with, rational numbers expressed in a variety of forms:
1.1 Read, write, and compare rational numbers in scientific notation (positive and negative powers of 10) with approximate numbers using scientific notation.
1.2 Add, subtract, multiply, and divide rational numbers (integers, fractions, and terminating decimals) and take positive rational numbers to whole-number powers.
1.3 Convert fractions to decimals and percents and use these representations in estimations, computations, and applications.
1.4 Differentiate between rational and irrational numbers.
1.5 Know that every rational number is either a terminating or repeating decimal and be able to convert terminating decimals into reduced fractions.
1.6 Calculate the percentage of increases and decreases of a quantity.
1.7 Solve problems that involve discounts, markups, commissions, and profit and compute simple and compound interest (see the illustrative sketch after this list).
2.0 Students use exponents, powers, and roots and use exponents in working with fractions:
2.1 Understand negative whole-number exponents. Multiply and divide expressions involving exponents with a common base.
2.2 Add and subtract fractions by using factoring to find common denominators.
2.3 Multiply, divide, and simplify rational numbers by using exponent rules.
2.4 Use the inverse relationship between raising to a power and extracting the root of a perfect square integer; for an integer that is not square, determine without a calculator the two integers between which its square root lies and explain why.
2.5 Understand the meaning of the absolute value of a number; interpret the absolute value as the distance of the number from zero on a number line; and determine the absolute value of real numbers.
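As a rough illustration of the computation asked for in standard 1.7 above, the following sketch (in Python, with invented numbers; it is not part of the standards themselves) contrasts simple and compound interest.

```python
# Sketch: total amount after t years at annual rate r on principal P.
def simple_interest_total(principal, rate, years):
    # simple interest: interest is earned only on the original principal
    return principal * (1 + rate * years)

def compound_interest_total(principal, rate, years, periods_per_year=1):
    # compound interest: interest is earned on previously accumulated interest
    n = periods_per_year
    return principal * (1 + rate / n) ** (n * years)

print(simple_interest_total(1000, 0.05, 3))    # about 1150
print(compound_interest_total(1000, 0.05, 3))  # about 1157.63
```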
Algebra and Functions
1.0 Students express quantitative relationships by using algebraic terminology, expressions, equations, inequalities, and graphs:
1.1 Use variables and appropriate operations to write an expression, an equation, an inequality, or a system of equations or inequalities that represents a verbal description (e.g., three less than a number, half as large as area A).
1.2 Use the correct order of operations to evaluate algebraic expressions such as 3(2x + 5)².
1.3 Simplify numerical expressions by applying properties of rational numbers (e.g., identity, inverse, distributive, associative, commutative) and justify the process used.
1.4 Use algebraic terminology (e.g., variable, equation, term, coefficient, inequality, expression, constant) correctly.
1.5 Represent quantitative relationships graphically and interpret the meaning of a specific part of a graph in the situation represented by the graph.
2.0 Students interpret and evaluate expressions involving integer powers and simple roots:
2.1 Interpret positive whole-number powers as repeated multiplication and negative whole-number powers as repeated division or multiplication by the multiplicative inverse. Simplify and evaluate expressions that include exponents.
2.2 Multiply and divide monomials; extend the process of taking powers and extracting roots to monomials when the latter results in a monomial with an integer exponent.
3.0 Students graph and interpret linear and some nonlinear functions:
3.1 Graph functions of the form y = nx² and y = nx³ and use in solving problems.
3.2 Plot the values from the volumes of three-dimensional shapes for various values of the edge lengths (e.g., cubes with varying edge lengths or a triangular prism with a fixed height and an equilateral triangle base of varying lengths).
3.3 Graph linear functions, noting that the vertical change (change in y-value) per unit of horizontal change (change in x-value) is always the same and know that the ratio ("rise over run") is called the slope of a graph.
3.4 Plot the values of quantities whose ratios are always the same (e.g., cost to the number of an item, feet to inches, circumference to diameter of a circle). Fit a line to the plot and understand that the slope of the line equals the ratio of the quantities.
4.0 Students solve simple linear equations and inequalities over the rational numbers:
4.1 Solve two-step linear equations and inequalities in one variable over the rational numbers, interpret the solution or solutions in the context from which they arose, and verify the reasonableness of the results.
4.2 Solve multistep problems involving rate, average speed, distance, and time or a direct variation.
Measurement and Geometry
1.0 Students choose appropriate units of measure and use ratios to convert within and between measurement systems to solve problems:
1.1 Compare weights, capacities, geometric measures, times, and temperatures within and between measurement systems (e.g., miles per hour and feet per second, cubic inches to cubic centimeters).
1.2 Construct and read drawings and models made to scale.
1.3 Use measures expressed as rates (e.g., speed, density) and measures expressed as products (e.g., person-days) to solve problems; check the units of the solutions; and use dimensional analysis to check the reasonableness of the answer.
2.0 Students compute the perimeter, area, and volume of common geometric objects and use the results to find measures of less common objects. They know how perimeter, area, and volume are affected by changes of scale:
2.1 Use formulas routinely for finding the perimeter and area of basic two-dimensional figures and the surface area and volume of basic three-dimensional figures, including rectangles, parallelograms, trapezoids, squares, triangles, circles, prisms, and cylinders.
2.2 Estimate and compute the area of more complex or irregular two- and three-dimensional figures by breaking the figures down into more basic geometric objects.
2.3 Compute the length of the perimeter, the surface area of the faces, and the volume of a three-dimensional object built from rectangular solids. Understand that when the lengths of all dimensions are multiplied by a scale factor, the surface area is multiplied by the square of the scale factor and the volume is multiplied by the cube of the scale factor.
2.4 Relate the changes in measurement with a change of scale to the units used (e.g., square inches, cubic feet) and to conversions between units (1 square foot = 144 square inches or [1 ft²] = [144 in²], 1 cubic inch is approximately 16.38 cubic centimeters or [1 in³] = [16.38 cm³]).
3.0 Students know the Pythagorean theorem and deepen their understanding of plane and solid geometric shapes by constructing figures that meet given conditions and by identifying attributes of figures:
3.1 Identify and construct basic elements of geometric figures (e.g., altitudes, mid-points, diagonals, angle bisectors, and perpendicular bisectors; central angles, radii, diameters, and chords of circles) by using a compass and straightedge.
3.2 Understand and use coordinate graphs to plot simple figures, determine lengths and areas related to them, and determine their image under translations and reflections.
3.3 Know and understand the Pythagorean theorem and its converse and use it to find the length of the missing side of a right triangle and the lengths of other line segments and, in some situations, empirically verify the Pythagorean theorem by direct measurement.
3.4 Demonstrate an understanding of conditions that indicate two geometrical figures are congruent and what congruence means about the relationships between the sides and angles of the two figures.
3.5 Construct two-dimensional patterns for three-dimensional models, such as cylinders, prisms, and cones.
3.6 Identify elements of three-dimensional geometric objects (e.g., diagonals of rectangular solids) and describe how two or more objects are related in space (e.g., skew lines, the possible ways three planes might intersect).
Statistics, Data Analysis, and Probability
1.0 Students collect, organize, and represent data sets that have one or more variables and identify relationships among variables within a data set by hand and through the use of an electronic spreadsheet software program:
1.1 Know various forms of display for data sets, including a stem-and-leaf plot or box-and-whisker plot; use the forms to display a single set of data or to compare two sets of data.
1.2 Represent two numerical variables on a scatterplot and informally describe how the data points are distributed and any apparent relationship that exists between the two variables (e.g., between time spent on homework and grade level).
1.3 Understand the meaning of, and be able to compute, the minimum, the lower quartile, the median, the upper quartile, and the maximum of a data set.
Mathematical Reasoning
1.0 Students make decisions about how to approach problems:
1.1 Analyze problems by identifying relationships, distinguishing relevant from irrelevant information, identifying missing information, sequencing and prioritizing information, and observing patterns.
1.2 Formulate and justify mathematical conjectures based on a general description of the mathematical question or problem posed.
1.3 Determine when and how to break a problem into simpler parts.
2.0 Students use strategies, skills, and concepts in finding solutions:
2.1 Use estimation to verify the reasonableness of calculated results.
2.2 Apply strategies and results from simpler problems to more complex problems.
2.3 Estimate unknown quantities graphically and solve for them by using logical reasoning and arithmetic and algebraic techniques.
2.4 Make and test conjectures by using both inductive and deductive reasoning.
2.5 Use a variety of methods, such as words, numbers, symbols, charts, graphs, tables, diagrams, and models, to explain mathematical reasoning.
2.6 Express the solution clearly and logically by using the appropriate mathematical notation and terms and clear language; support solutions with evidence in both verbal and symbolic work.
2.7 Indicate the relative advantages of exact and approximate solutions to problems and give answers to a specified degree of accuracy.
2.8 Make precise calculations and check the validity of the results from the context of the problem.
3.0 Students determine a solution is complete and move beyond a particular problem by generalizing to other situations:
3.1 Evaluate the reasonableness of the solution in the context of the original situation.
3.2 Note the method of deriving the solution and demonstrate a conceptual understanding of the derivation by solving similar problems.
3.3 Develop generalizations of the results obtained and the strategies used and apply them to new problem situations.
Because lines l and m are parallel and line AB is a transversal, the angle whose measure is labeled as 120º is supplementary to the interior angle of the triangle at that vertex, so that interior angle measures 60º.
Now we have a 30-60-90 triangle whose longer leg, AC,
is also the distance between lines l and m.
Using the 1 : √3 : 2 side ratios for 30-60-90 triangles,
you can use the hypotenuse length to calculate the lengths of the
other two legs. The short leg has a 1 : 2 ratio to the hypotenuse, so its length is half that of the hypotenuse.
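In symbols, writing h for the hypotenuse length given in the original figure (not reproduced above, so h stands in for that value):
\[ \text{short leg} = \frac{h}{2}, \qquad AC = \frac{\sqrt{3}}{2}\,h. \]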
Let’s analyze each statement separately.
- This statement implies that DF = EF. We know that BF = AF because it is given that line CF is the perpendicular bisector of AB, and by definition, F is the midpoint of AB. It is also given that the area of triangle CDB is equal to the area of triangle CEA. These two triangles share the same height, and since the area of a triangle is found by the formula 1⁄2 b h, it follows that if their areas are equal, their bases are equal too. If BD = AE, then by subtracting DE from each segment, we have BE = AD and thus EF = DF. So statement I is true.
- This statement is simply not backed by any evidence. All we know is that BE = AD, EF = DF, and BF = AF. As long as points E and D are equidistant from F, all these conditions hold, so there is no guarantee that they are the midpoints of BF and AF, respectively. E and D could be anywhere along BF and AF, respectively, as long as they are equidistant from F. Thus, this statement is not necessarily true.
- Triangles CDB and CEA are equal in area; this is given. By subtracting the area of triangle CED from each of these triangles, we see that triangles CEB and CDA must have the same area. This statement is true.
Only statements I and III must be true.
The area of a triangle with base x and height h is given by the formula ½xh. The area of a square with sides of length x is x². Since you know the two shapes have equal areas, you can set the two expressions equal to each other and solve for h:
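Setting the two areas equal and solving:
\[ \tfrac{1}{2}xh = x^{2} \quad\Longrightarrow\quad h = \frac{x^{2}}{\tfrac{1}{2}x} = 2x. \]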
The correct answer is h = 2x.
If ABD is an equilateral triangle, then AD = AB = BD = 4, and all the sides of the rhombus have a length of 4 (by definition of a rhombus, all sides are congruent). Also, by definition of a rhombus, opposite angles are congruent, so the angles of the rhombus measure 60º and 120º. Draw an altitude from A to DC to create a 30-60-90 triangle, and from the length ratio of x : x√3 : 2x among the sides, you can calculate the length of this altitude to be 2√3. The area of a rhombus is bh, so the area of this rhombus is 4 × 2√3 = 8√3.
The length of the arc depends on the circumference of the circle and the measure of the central angle that intercepts that arc. The formula is:
arc length = (n/360) × 2πr
where n is the measure of the central angle that intercepts the arc and r is the radius.
Angle c is an inscribed angle, which is one-half as large as the central angle that intercepts the circle at the same points. So the measure of the central angle is 2cº.
Now simply plug the values into the formula: the length of arc AB is:
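In symbols, with r standing for the circle's radius (the numerical radius from the original figure is not shown above):
\[ \text{arc } AB \;=\; \frac{2c}{360}\cdot 2\pi r \;=\; \frac{\pi r c}{90}. \]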
Chem1 General Chemistry Virtual Textbook
Understanding density and buoyancy
The density of an object is one of its most important and easily-measured physical properties. Densities are widely used to identify pure substances and to characterize and estimate the composition of many kinds of mixtures. The purpose of this lesson is to show how densities are defined, measured, and utilized, and to make sure you understand the closely-related concepts of buoyancy and specific gravity.
You didn't have to be in the world very long to learn that the mass and volume of a given substance are directly proportional, although you certainly did not first learn it in those words, which are the words of choice now that you have become a scholar.
These plots show how the masses of three liquids vary with their volumes. Notice that each plot is a straight line passing through the origin, reflecting the direct proportionality between mass and volume.
The only difference between these plots is their slopes. Denoting mass and volume by m and V respectively, we can write the equation of each line as m = ρV, where the slope ρ (rho) is the proportionality constant that relates mass to volume. This quantity ρ is known as the density, which is usually defined as the mass per unit volume: ρ = m/V.
The volume units millilitre (mL) and cubic centimetre (cm3) are almost identical and are used interchangeably in this course.
The general meaning of density is the amount of anything per unit volume. What we conventionally call the "density" is more precisely known as the "mass density".
Density can be expressed in any combination of mass and volume units; the most commonly seen units are grams per mL (g mL–1, g cm–3), or kilograms per litre.
Ordinary commercial nitric acid is a liquid having a density of 1.42 g mL–1, and contains 69.8% HNO3 by weight. a) Calculate the mass of HNO3 in 800 ml of nitric acid. b) What volume of acid will contain 100 g of HNO3?
Solution: The mass of 800 mL of the acid is (1.42 g mL–1) × (800 mL) = 1140 g, of which 69.8%, or about 796 g, is HNO3. The mass of acid that contains 100 g of HNO3 is (100 g) / (0.698) = 143 g and will have a volume of (143 g) / (1.42 g mL–1) = 101 mL.
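The same arithmetic as a short Python sketch (values taken from the problem statement; the unrounded intermediate mass is 1136 g, which the text rounds to 1140 g):

density = 1.42            # g/mL, commercial nitric acid
mass_fraction = 0.698     # fraction of HNO3 by weight

# (a) mass of HNO3 in 800 mL of the acid
mass_acid = density * 800              # 1136 g of solution
mass_hno3 = mass_acid * mass_fraction  # about 793 g of HNO3 (about 796 g if the rounded 1140 g is used)

# (b) volume of acid containing 100 g of HNO3
mass_needed = 100 / mass_fraction      # about 143 g of solution
volume_needed = mass_needed / density  # about 101 mL

print(mass_acid, mass_hno3, mass_needed, volume_needed)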
It is sometimes more convenient to express the volume occupied by a unit mass of a substance. This is just the inverse of the density and is known as the specific volume.
A glass bulb weighs 66.3915 g when evacuated, and 66.6539 g when filled with xenon gas at 25°C. The bulb can hold 50.0 mL of water. Find the density and specific volume of xenon under these conditions.
Solution: The mass of xenon is found by difference: (66.6539 – 66.3915) g = 0.2624 g. The density ρ = m/V = (0.2624 g)/(0.050 L) = 5.248 g L–1. The specific volume is 1/(5.248 g L–1) = 0.190 L g–1.
A quantity that is very closely related to density, and which is frequently used in its place, is specific gravity.
Specific gravity is the ratio of the mass of a material to that of an equal volume of water. Because the density of water is about 1.00 g mL–1, the specific gravity is numerically very close to that of the density, but being a ratio, it is dimensionless.
The presence of "volume" in this definition introduces a slight complication, since volumes are temperature-dependent owing to thermal expansion. At 4°C, water has its maximum density of almost exactly 1.000 g mL–1, so if the equivalent volume of water is assumed to be at this temperature, then the density and specific gravity can be considered numerically identical. In making actual comparisons, however, the temperatures of both the material being measured and of the equivalent volume of water are frequently different, so in order to specify a specific gravity value unambiguously, it is necessary to state the temperatures of both the substance in question and of the water.
Thus if we find that a given volume of a substance at 20°C weighs 1.11 times as much as the same volume of water measured at 4°C, we would express its specific gravity as 1.11 (20°/4°).
Although most chemists find density to be more convenient to work with and consider specific gravity to be rather old-fashioned, the latter quantity is widely used in many industrial and technical fields ranging from winemaking to urinalysis.
In general, gases have the lowest densities, but these densities are highly dependent on the pressure and temperature which must always be specified. To the extent that a gas exhibits ideal behavior (low pressure, high temperature), the density of a gas is directly proportional to the masses of its component atoms, and thus to its molecular weight. Measurement of the density of a gas is a simple experimental way of estimating its molecular weight (more here).
Liquids encompass an intermediate range of densities. Mercury, being a liquid metal, is something of an outlier. Liquid densities are largely independent of pressure, but they are somewhat temperature-sensitive.
The density range of solids is quite wide. Metals, whose atoms pack together quite compactly, have the highest densities, although that of lithium, the lightest metallic element, is quite low. Composite materials such as wood and high-density polyurethane foam contain void spaces which reduce the average density.
All substances tend to expand as they are heated, causing the same mass to occupy a greater volume, and thus lowering the density. For most solids, this expansion is relatively small, but it is far from negligible; for liquids, it is greater. The volumes of gases, as you may already know (see here for details), are highly temperature-sensitive, and so, of course, are their densities.
What is the cause of thermal expansion? As molecules acquire thermal energy, they move about more vigorously. In condensed phases (liquids and solids), this motion has the character of an irregular kind of bumping or jostling that causes the average distances between the molecules to increase, thus leading to increased volume and smaller density.
One might expect the densities of the chemical elements to increase uniformly with atomic weight, but this is not what happens; density depends on the volume as well as the mass, and the volume occupied by a given mass of an element can vary in a non-uniform way for two reasons:
The sizes (atomic radii) follow the zig-zag progression that characterizes the other periodic properties of the elements, with atomic volumes diminishing with increasing nuclear charge across each period (more here).
The atoms comprising the different solid elements do not pack together in the same way. The non-metallic solids are often composed of molecules that are more spread out in space, and which have shapes that cannot be arranged as compactly, so they tend to form more open crystal lattices than do the metals, and therefore have lower densities.
The plot below is taken from the popular WebElements site.
Nature has conveniently made the density of water at ordinary temperatures almost exactly 1.000 g/mL ( 1 kg/L). Water is subject to thermal expansion just as are all other liquids, and throughout most of its temperature range, the density of water diminishes with temperature. But water is famously exceptional over the temperature range 0-4° C, where raising the temperature causes the density to increase, reaching its greatest value at about 4°C.
This 4°C density maximum is one of many "anomalous" behaviors of water. As you may know, the H2O molecules in liquid and solid water are loosely joined together through a phenomenon known as hydrogen bonding. Any single water molecule can link up to four other H2O molecules, but this occurs only when the molecules are locked into place within an ice crystal. This is what leads to a relatively open lattice arrangement, and thus to the relatively low density of ice.
Below are three-dimensional views of a typical local structure of liquid water (right) and of ice (left). Notice the greater openness of the ice structure which is necessary to ensure the strongest degree of hydrogen bonding in a uniform, extended crystal lattice. The more crowded and jumbled arrangement in liquid water can be sustained only by the greater amount of thermal energy available above the freezing point.
When ice melts, thermal energy begins to overcome the hydrogen-bonding forces so that each H2O molecule, instead of being permanently connected to four neighbors, is now only linked to an average of three other molecules through hydrogen bonds that continually break and re-form. With fewer hydrogen bonds, the geometrical requirements that formerly mandated a more open structural arrangement now diminish, so the entire network tends to collapse, rendering the water more dense. As the temperature rises, the fraction of H2O molecules that occupy ice-like clusters diminishes, contributing to the rise in density that is seen between 0° and 4°.
Whenever a continuously varying quantity such as density passes through a maximum or a minimum value as the temperature or some other variable is changing, you know that two opposing effects are at work.
The 4° density maximum of water corresponds to the temperature at which the breakup of ice-like clusters (leading to higher density) and thermal expansion (leading to lower density) achieve a balance.
Suppose that you place 1000 mL of pure water at 25°C in the refrigerator and that it freezes, producing ice at 0°C. What will be the volume of the ice?
Solution: From the graph above, the density of water at 25°C is 0.9997 kg L–1, and that of ice at 0°C is 0.917 kg L–1. The mass of the water is (1000 mL) × (0.9997 g mL–1) = 999.7 g, so the volume of the ice is (999.7 g) / (0.917 g mL–1) = 1090 mL, about 9% greater than the volume of the liquid.
The density maximum at 4°C has some interesting consequences in the aquatic ecology of lakes. In all but the most shallow lakes, the water tends to be stratified, so that for most of the year, the denser water remains near the bottom and mixes very little with the less-dense waters above. Because water has its density maximum at 4°C, the waters of deep lakes (and of the oceans) usually stay around 4°C at all times of the year. In the summer this will be the coldest water, but in the winter, the surface waters lose heat to the atmosphere and if they cool below 4°, they will be colder than the more dense waters below.
When the weather turns cold in the fall, the surface waters lose heat and cool to 4°C. This more dense layer of water sinks to the bottom, displacing the water below, which rises to the surface and restores nutrients that were removed when dead algae sank to the bottom. This “fall turnover” renews the lake for the next season.
What do an ice cube and a block of wood have in common? Throw either material into water, and it will float. Well, mostly; each object will have its bottom part immersed, but the upper part will ride high and dry. People often say that wood and ice float because they are "lighter than water", but this of course is nonsense unless we compare the masses of equal volumes of the substances. In other words, we need to compare the masses-per-unit-volume, meaning the densities, of each material with that of water. So we would more properly say that objects capable of floating in water must have densities smaller than that of water.
The apparent weight of an object immersed in a fluid will be smaller than its “true” weight (Archimedes' principle). The latter is the downward force exerted by gravity on the object. Within a fluid, however, this downward force is partially opposed by a net upward force that results from the displacement of this fluid by the object. The difference between these two weights is known as the buoyancy.
The displaced fluid is of course not really confined to the "phantom volume" shown at the bottom of the diagram; it spreads throughout the container and exerts forces on all surfaces of the object that increase with depth, combining to produce the net buoyancy force as shown. See here for another diagram that shows this more clearly.
Dynamics of buoyancy - an interesting physics-mechanics treatment
An object weighs 36 g in air and has a volume of 8.0 cm3. What will be its apparent weight when immersed in water?
Solution: When immersed in water, the object is buoyed up by the mass of the water it displaces, which of course is the mass of 8 cm3 of water. Taking the density of water as unity, the upward (buoyancy) force is just 8 g.
The apparent weight will be (36 g) – (8 g) = 28 g.
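The same bookkeeping in a Python sketch, using the numbers from the problem:

mass_in_air = 36.0     # g
volume = 8.0           # cm^3
water_density = 1.00   # g/cm^3

buoyancy = volume * water_density         # mass of displaced water, 8 g
apparent_weight = mass_in_air - buoyancy  # 28 g
print(apparent_weight)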
Air is of course a fluid, and buoyancy can be a problem when weighing a large object such as an empty flask. The following problem illustrates a more extreme case:
A balloon having a volume of 5.000 L is placed on a sensitive balance which registers a weight of 2.833 g. What is the "true weight" of the balloon if the density of the air is 1.294 g L–1?
Solution: The mass of air displaced by the balloon exerts a buoyancy force of
(5.000 L) × (1.294 g L –1) = 6.470 g. Thus the true weight of the balloon is this much greater than the apparent weight: (2.833 + 6.470) g = 9.303 g.
A piece of metal weighs 9.25 g in air, 8.20 g in water, and 8.36 g when immersed in gasoline. a) What is the density of the metal? b) What is the density of the gasoline?
Solution: When immersed in water, the metal object displaces (9.25 – 8.20) g = 1.05 g of water whose volume is (1.05 g) / (1.00 g cm–3) = 1.05 cm3. The density of the metal is thus (9.25 g) / (1.05 cm3) = 8.81 g cm–3.
The metal object displaces (9.25 - 8.36) g = 0.89 g of gasoline, whose density must therefore be (0.89 g) / (1.05 cm3) = 0.85 g cm–3.
When an object floats in a liquid, the immersed portion displaces a volume of liquid whose mass is equal to the mass of the entire object.
A cube of ice that is 10 cm on each side floats in water. How many cm does the top of the cube extend above the water level? (Density of ice = 0.917 g cm–3.)
Solution: The volume of the ice is (10 cm)3 = 1000 cm3 and its mass is
(1000 cm3) x (0.917 g cm–3) = 917 g. The ice is supported by an upward force equivalent to this mass of displaced water whose volume is (917 g) / (1.00 g cm–3) = 917 cm3 . Since the cross section of the ice cube is 100-cm2, it must sink by 9.17 cm in order to displace 917 cm3 of water. Thus the height of cube above the water is (10 cm – 9.17 cm) = 0.83 cm.
... hence the expression, “the tip of the iceberg”, implying that 90% of its volume is hidden under the surface of the water.
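The ice-cube calculation can be checked with a short Python sketch:

side = 10.0        # cm
rho_ice = 0.917    # g/cm^3
rho_water = 1.00   # g/cm^3

volume = side ** 3                             # 1000 cm^3
mass = volume * rho_ice                        # 917 g
displaced_volume = mass / rho_water            # 917 cm^3 of water must be displaced
depth_submerged = displaced_volume / side**2   # 9.17 cm
height_above = side - depth_submerged          # 0.83 cm
print(height_above)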
The most obvious way of finding the density of a material is to measure its mass and its volume. This is the only option we have for gases, but observing the mass of a fixed volume of a liquid is time-consuming and awkward, and measuring the volumes of solids whose shapes are irregular or which are finely divided is usually impractical.
The traditional hydrometer is a glass tube having a weighted bulb near the bottom. The hydrometer is lowered into a container of the liquid to be measured, and comes to a rest with the upper part protruding above the liquid surface at a height (read from a calibrated scale) that depends on the density of the liquid. This will only work, of course, if the overall density of the hydrometer itself is smaller than the density of the liquid to be measured. For this reason, hydrometers intended for general use come in sets. Because liquid densities are temperature dependent, hydrometers intended for precise measurements also contain an internal thermometer so that this information can be collected in the event that temperature corrections will be made.
Owing to the ease with which they can be observed, densities are widely employed to estimate the composition or quality of liquid mixtures or solutions, and in some cases determine their commercial value. This has given rise to many kinds of hydrometers that are specialized for specific uses:
Battery hydrometer - theory
Aquarium salinity hydrometer
A boat with depth markings on its body can be thought of as a gigantic hydrometer!
Sugar and syrup hydrometer
Don't confuse them!
A hydrometer measures the density or specific gravity of a liquid
a hygrometer measures the relative humidity of the air
Hydrometers for general purpose use are normally calibrated in units of specific gravity, but often defined at temperatures other than 25°C. A very common type of calibration is in "degrees" on various arbitrary scales, of which the best known are the Baumé scales. Special-purpose hydrometer scales can get quite esoteric; thus alcohol hydrometers may directly measure percentage alcohol by weight on a 0–100% scale, or "proof" (twice the volume-percent of alcohol) on a 0-200 scale.
Measuring the density of a solid that is large enough to weigh accurately is largely a matter of determining its volume. For an irregular solid such as a rock, this is most easily done by observing the amount of water it displaces.
A small vessel having a precisely determined volume can be used to determine the density of powdered or granular samples. The vessel (known as a pycnometer) is weighed while empty, and again when filled; the density is found from the weight difference and the calibrated volume of the pycnometer. This method is also applicable to liquids and gases.
In forensic work it is often necessary to determine the density of very small particles such as fibres, flakes of paint or metal, or grains of sand. Neither the weight nor volumes of such samples can be determined directly, so the simplest solution is to place the sample in a series of liquids of different densities, and see if it floats, sinks, or remains suspended within the liquid. A more sophisticated method is to layer two liquids in a vertical glass tube and allow them to slowly mix, creating a density gradient. When a particle is dropped into the tube, it sinks to a depth that matches its density.
This reference provides a brief summary of some of the modern methods of determining density.
The most famous application of buoyancy is due to Archimedes of Syracuse around 250 BC. He was asked to determine whether the new crown that King Hiero II had commissioned contained all the gold that he had provided to the goldsmith for that purpose; apparently he suspected that the smith might have set aside some of the gold for himself and substituted less-valuable silver instead. According to legend, Archimedes devised the principle of the “hydrostatic balance” after he noticed his own apparent loss in weight while sitting in his bath. The story goes that he was so enthused with his discovery that he jumped out of his bath and ran through the town, shouting "eureka" to the bemused people.
If the weight of the crown when measured in air was 4.876 kg and its weight in water was 4.575 kg, what was the density of the crown?
Solution: The volume of the crown can be found from the mass of water it displaced, and thus from its buoyancy: (4876 – 4575) g / (1.00 g cm–3) = 301 cm3. The density is then
(4876 g) / (301 cm3) = 16.2 g cm–3
The densities of the pure metals are silver = 10.5 g cm–3 and gold = 19.3 g cm–3, so the crown could not have been pure gold; its density is consistent with a gold–silver alloy.
One of the delights of chemical science is to find ways of using the macroscopic properties of bulk matter to uncover information about the microscopic world at the atomic level. The following problem example is a good illustration of this.
Estimate the diameter of the neon atom from the following information:
Density of liquid neon: 1.204 g cm–3; molar mass of neon: 20.18 g.
Solution: This problem can be divided into two steps.
1 - Estimate the volume occupied by each atom. One mole (6.02E23 atoms) of neon occupies a volume of (20.18 g) / (1.204 g cm–3) = 16.76 cm3. If this space is divided up equally into tiny boxes, each just large enough to contain one atom, then the volume allocated to each atom is given by: (16.76 cm3 mol–1) / (6.02E23 atom mol–1) = 2.78E–23 cm3 atom–1.
2 - Find the length of each box, and thus the atomic diameter. Each atom of neon has a volume of about 2.8E–23 cm3. If we re-express this volume as 28E–24 cm3 and fudge the “28” a bit, we can come up with a reasonably good approximation
of the diameter of the neon atom without even using a calculator. Taking the volume as 27E–24 cm3 allows us to find the cube root, 3.0E–8 cm = 3.0E–10 m = 300 pm, which corresponds to the length of the box and thus to the diameter of the atom it encloses.
The accepted [van der Waals] atomic radius of neon is 154 pm, corresponding to a diameter of about 310 pm. This estimate is surprisingly good, since the atoms of a liquid are not really confined to orderly little boxes in the liquid.
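The same estimate in a Python sketch, using the numbers given above:

molar_mass = 20.18    # g/mol
density = 1.204       # g/cm^3, liquid neon
avogadro = 6.02e23    # atoms/mol

molar_volume = molar_mass / density           # about 16.76 cm^3/mol
volume_per_atom = molar_volume / avogadro     # about 2.78e-23 cm^3/atom
diameter_cm = volume_per_atom ** (1.0 / 3.0)  # edge of the little "box", about 3.0e-8 cm
print(diameter_cm * 1e-2 * 1e12, "pm")        # convert cm -> m -> pm, about 300 pm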
Make sure you thoroughly understand the following essential ideas which have been presented above. It is especially important that you know the precise meanings of all the highlighted terms in the context of this topic.
This work is licensed under a Creative Commons Attribution-NonCommercial 2.5 License.
Warning: the HTML version of this document is generated from LaTeX and may contain translation errors. In particular, some mathematical expressions are not translated correctly.
Like linked lists, trees are made up of nodes. A common kind of tree is a binary tree, in which each node contains a reference to two other nodes (possibly null). These references are referred to as the left and right subtrees. Like list nodes, tree nodes also contain cargo. A state diagram for a tree looks like this:
To avoid cluttering up the picture, we often omit the Nones.
The top of the tree (the node tree refers to) is called the root. In keeping with the tree metaphor, the other nodes are called branches and the nodes at the tips with null references are called leaves. It may seem odd that we draw the picture with the root at the top and the leaves at the bottom, but that is not the strangest thing.
To make things worse, computer scientists mix in another metaphor: the family tree. The top node is sometimes called a parent, and the nodes it refers to are its children; nodes with the same parent are called siblings.
Finally, there is a geometric vocabulary for talking about trees. We already mentioned left and right, but there is also "up" (toward the parent/root) and "down" (toward the children/leaves). Also, all of the nodes that are the same distance from the root comprise a level of the tree.
We probably don't need three metaphors for talking about trees, but there they are.
Like linked lists, trees are recursive data structures because they are defined recursively.
A tree is either:
- the empty tree, represented by None, or
- a node that contains a cargo object and two tree references (left and right).
20.1 Building trees
The process of assembling a tree is similar to the process of assembling a linked list. Each constructor invocation builds a single node.
The cargo can be any type, but the arguments for left and right should be tree nodes. left and right are optional; the default value is None.
To print a node, we just print the cargo.
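A minimal Tree class consistent with this description, written here as a sketch in Python 3 syntax (the original listing from the book is not reproduced above):

class Tree:
    def __init__(self, cargo, left=None, right=None):
        self.cargo = cargo
        self.left = left
        self.right = right

    def __str__(self):
        # printing a node just prints its cargo
        return str(self.cargo)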
One way to build a tree is from the bottom up. Allocate the child nodes first:
left = Tree(2)
right = Tree(3)
Then create the parent node and link it to the children:
tree = Tree(1, left, right)
We can write this code more concisely by nesting constructor invocations:
>>> tree = Tree(1, Tree(2), Tree(3))
20.2 Traversing trees
Any time you see a new data structure, your first question should be, "How do I traverse it?" The most natural way to traverse a tree is recursively. For example, if the tree contains integers as cargo, this function returns their sum:
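A sketch of such a function, assuming the Tree class above:

def total(tree):
    if tree is None:
        return 0
    # sum the two subtrees, then add the cargo of this node
    return total(tree.left) + total(tree.right) + tree.cargo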
The base case is the empty tree, which contains no cargo, so the sum is 0. The recursive step makes two recursive calls to find the sum of the child trees. When the recursive calls complete, we add the cargo of the parent and return the total.
20.3 Expression trees
A tree is a natural way to represent the structure of an expression. Unlike other notations, it can represent the computation unambiguously. For example, the infix expression 1 + 2 * 3 is ambiguous unless we know that the multiplication happens before the addition.
This expression tree represents the same computation:
The nodes of an expression tree can be operands like 1 and 2 or operators like + and *. Operands are leaf nodes; operator nodes contain references to their operands. (All of these operators are binary, meaning they have exactly two operands.)
We can build this tree like this:
>>> tree = Tree('+', Tree(1), Tree('*', Tree(2), Tree(3)))
Looking at the figure, there is no question what the order of operations is; the multiplication happens first in order to compute the second operand of the addition.
Expression trees have many uses. The example in this chapter uses trees to translate expressions to postfix, prefix, and infix. Similar trees are used inside compilers to parse, optimize, and translate programs.
20.4 Tree traversal
We can traverse an expression tree and print the contents like this:
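A sketch of the traversal, using Python 3 print calls:

def printTree(tree):
    if tree is None:
        return
    print(tree.cargo, end=' ')   # root first ...
    printTree(tree.left)         # ... then the left subtree
    printTree(tree.right)        # ... then the right subtree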
In other words, to print a tree, first print the contents of the root, then print the entire left subtree, and then print the entire right subtree. This way of traversing a tree is called a preorder, because the contents of the root appear before the contents of the children. For the previous example, the output is:
>>> tree = Tree('+', Tree(1), Tree('*', Tree(2), Tree(3)))
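Calling the preorder function sketched above on this tree would print:

>>> printTree(tree)
+ 1 * 2 3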
This format is different from both postfix and infix; it is another notation called prefix, in which the operators appear before their operands.
You might suspect that if you traverse the tree in a different order, you will get the expression in a different notation. For example, if you print the subtrees first and then the root node, you get:
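A sketch of the postorder traversal:

def printTreePostorder(tree):
    if tree is None:
        return
    printTreePostorder(tree.left)
    printTreePostorder(tree.right)
    print(tree.cargo, end=' ')   # root comes last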
The result, 1 2 3 * +, is in postfix! This order of traversal is called postorder.
Finally, to traverse a tree inorder, you print the left tree, then the root, and then the right tree:
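A sketch of the inorder traversal:

def printTreeInorder(tree):
    if tree is None:
        return
    printTreeInorder(tree.left)
    print(tree.cargo, end=' ')   # root in the middle
    printTreeInorder(tree.right)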
The result is 1 + 2 * 3, which is the expression in infix.
To be fair, we should point out that we have omitted an important complication. Sometimes when we write an expression in infix, we have to use parentheses to preserve the order of operations. So an inorder traversal is not quite sufficient to generate an infix expression.
Nevertheless, with a few improvements, the expression tree and the three recursive traversals provide a general way to translate expressions from one format to another.
As an exercise, modify printTreeInorder so that it puts parentheses around every operator and pair of operands. Is the output correct and unambiguous? Are the parentheses always necessary?
If we do an inorder traversal and keep track of what level in the tree we are on, we can generate a graphical representation of a tree:
def printTreeIndented(tree, level=0):
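    # a sketch of the body, assuming the right subtree is printed first
    # so the tree reads sideways with the root at the left
    if tree is None:
        return
    printTreeIndented(tree.right, level + 1)
    print('  ' * level + str(tree.cargo))
    printTreeIndented(tree.left, level + 1)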
The parameter level keeps track of where we are in the tree. By default, it is initially 0. Each time we make a recursive call, we pass level+1 because the child's level is always one greater than the parent's. Each item is indented by two spaces per level. The result for the example tree is:
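With the body sketched above (right subtree printed first), the tree appears sideways:

    3
  *
    2
+
  1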
20.5 Building an expression tree
In this section, we parse infix expressions and build the corresponding expression trees. For example, the expression (3+7)*9 yields the following tree:
Notice that we have simplified the diagram by leaving out the names of the attributes.
The parser we will write handles expressions that include numbers, parentheses, and the operators + and *. We assume that the input string has already been tokenized into a Python list. The token list for (3+7)*9 is:
['(', 3, '+', 7, ')', '*', 9, 'end']
The end token is useful for preventing the parser from reading past the end of the list.
As an exercise, write a function that takes an expression string and returns a token list.
The first function we'll write is getToken, which takes a token list and an expected token as arguments. It compares the expected token to the first token on the list: if they match, it removes the token from the list and returns true; otherwise, it returns false:
def getToken(tokenList, expected):
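    # a sketch of the body, following the description in the surrounding text
    if tokenList[0] == expected:
        del tokenList[0]   # remove the matched token from the front of the list
        return True
    else:
        return False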
Since tokenList refers to a mutable object, the changes made here are visible to any other variable that refers to the same object.
The next function, getNumber, handles operands. If the next token in tokenList is a number, getNumber removes it and returns a leaf node containing the number; otherwise, it returns None.
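A sketch of getNumber consistent with this description:

def getNumber(tokenList):
    x = tokenList[0]
    if not isinstance(x, int):
        return None
    del tokenList[0]
    return Tree(x, None, None)   # a leaf node containing the number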
Before continuing, we should test getNumber in isolation. We assign a list of numbers to tokenList, extract the first, print the result, and print what remains of the token list:
>>> tokenList = [9, 11, 'end']
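Continuing the session with the getNumber sketch above, the expected behavior is:

>>> x = getNumber(tokenList)
>>> printTreePostorder(x)
9
>>> print(tokenList)
[11, 'end']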
The next method we need is getProduct, which builds an expression tree for products. A simple product has two numbers as operands, like 3 * 7.
Here is a version of getProduct that handles simple products.
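A sketch of that simple version:

def getProduct(tokenList):
    a = getNumber(tokenList)
    if getToken(tokenList, '*'):
        b = getNumber(tokenList)
        return Tree('*', a, b)
    else:
        return a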
Assuming that getNumber succeeds and returns a singleton tree, we assign the first operand to a. If the next character is *, we get the second number and build an expression tree with a, b, and the operator.
If the next character is anything else, then we just return the leaf node with a. Here are two examples:
>>> tokenList = [9, '*', 11, 'end']
>>> tokenList = [9, '+', 11, 'end']
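With the simple getProduct sketched above, the two sessions would print, respectively:

>>> printTreePostorder(getProduct([9, '*', 11, 'end']))
9 11 *
>>> printTreePostorder(getProduct([9, '+', 11, 'end']))
9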
The second example implies that we consider a single operand to be a kind of product. This definition of "product" is counterintuitive, but it turns out to be useful.
Now we have to deal with compound products, like 3 * 5 * 13.
With a small change in getProduct, we can handle an arbitrarily long product:
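A sketch of the improved version:

def getProduct(tokenList):
    a = getNumber(tokenList)
    if getToken(tokenList, '*'):
        b = getProduct(tokenList)   # the right operand may itself be a product
        return Tree('*', a, b)
    else:
        return a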
In other words, a product can be either a singleton or a tree with * at the root, a number on the left, and a product on the right. This kind of recursive definition should be starting to feel familiar.
Let's test the new version with a compound product:
>>> tokenList = [2, '*', 3, '*', 5, '*', 7, 'end']
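With the recursive getProduct above, the postorder printout of the resulting tree would be:

>>> printTreePostorder(getProduct(tokenList))
2 3 5 7 * * *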
Next we will add the ability to parse sums. Again, we use a slightly counterintuitive definition of "sum." For us, a sum can be a tree with + at the root, a product on the left, and a sum on the right. Or, a sum can be just a product.
If you are willing to play along with this definition, it has a nice property: we can represent any expression (without parentheses) as a sum of products. This property is the basis of our parsing algorithm.
getSum tries to build a tree with a product on the left and a sum on the right. But if it doesn't find a +, it just builds a product.
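A sketch of getSum:

def getSum(tokenList):
    a = getProduct(tokenList)
    if getToken(tokenList, '+'):
        b = getSum(tokenList)   # the right operand may itself be a sum
        return Tree('+', a, b)
    else:
        return a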
Let's test it with 9 * 11 + 5 * 7:
>>> tokenList = [9, '*', 11, '+', 5, '*', 7, 'end']
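Using the getSum sketch above, the expected postorder printout is:

>>> printTreePostorder(getSum(tokenList))
9 11 * 5 7 * +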
We are almost done, but we still have to handle parentheses. Anywhere in an expression where there can be a number, there can also be an entire sum enclosed in parentheses. We just need to modify getNumber to handle subexpressions:
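A sketch of the modified getNumber:

def getNumber(tokenList):
    if getToken(tokenList, '('):
        x = getSum(tokenList)    # a whole sub-expression in parentheses
        getToken(tokenList, ')')  # eat the close parenthesis
        return x
    else:
        x = tokenList[0]
        if not isinstance(x, int):
            return None
        del tokenList[0]
        return Tree(x, None, None)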
Let's test this code with 9 * (11 + 5) * 7:
>>> tokenList = [9, '*', '(', 11, '+', 5, ')', '*', 7, 'end']
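With the parenthesis-aware getNumber sketched above, the expected result is:

>>> printTreePostorder(getSum(tokenList))
9 11 5 + 7 * *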
The parser handled the parentheses correctly; the addition happens before the multiplication.
20.6 Handling errors
Throughout the parser, we've been assuming that expressions are well-formed. For example, when we reach the end of a subexpression, we assume that the next character is a close parenthesis. If there is an error and the next character is something else, we should deal with it.
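For example, getNumber can complain when the close parenthesis is missing; a sketch:

def getNumber(tokenList):
    if getToken(tokenList, '('):
        x = getSum(tokenList)
        if not getToken(tokenList, ')'):
            raise ValueError('missing close parenthesis')
        return x
    else:
        x = tokenList[0]
        if not isinstance(x, int):
            return None
        del tokenList[0]
        return Tree(x, None, None)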
The raise statement creates an exception; in this case a ValueError. If the function that called getNumber, or one of the other functions in the traceback, handles the exception, then the program can continue. Otherwise, Python will print an error message and quit.
As an exercise, find other places in these functions where errors can occur and add appropriate raise statements. Test your code with improperly formed expressions.
20.7 The animal tree
In this section, we develop a small program that uses a tree to represent a knowledge base.
The program interacts with the user to create a tree of questions and animal names. Here is a sample run:
Are you thinking of an animal? y
Here is the tree this dialog builds:
At the beginning of each round, the program starts at the top of the tree and asks the first question. Depending on the answer, it moves to the left or right child and continues until it gets to a leaf node. At that point, it makes a guess. If the guess is not correct, it asks the user for the name of the new animal and a question that distinguishes the (bad) guess from the new animal. Then it adds a node to the tree with the new question and the new animal.
Here is the code:
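A sketch of the main loop, consistent with the description that follows; it uses the Tree class from earlier and a helper yes (shown after the next paragraph). The starting animal and the exact wording of the prompts are illustrative assumptions.

def animal():
    # start with a single leaf as the whole knowledge tree
    root = Tree('bird')

    while True:
        print()
        if not yes('Are you thinking of an animal? '):
            break

        # walk the tree until we reach a leaf (a guess)
        tree = root
        while tree.left is not None:
            if yes(tree.cargo + ' '):
                tree = tree.right
            else:
                tree = tree.left

        # make a guess
        guess = tree.cargo
        if yes('Is it a ' + guess + '? '):
            print('I rule!')
            continue

        # learn the new animal and a question that distinguishes it from the guess
        animal_name = input("What is the animal's name? ")
        question = input('What question would distinguish a %s from a %s? '
                         % (animal_name, guess))

        # the new question replaces the cargo; the two children are the
        # new animal and the original cargo
        tree.cargo = question
        if yes('If the animal were %s the answer would be? ' % animal_name):
            tree.left = Tree(guess)
            tree.right = Tree(animal_name)
        else:
            tree.left = Tree(animal_name)
            tree.right = Tree(guess)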
The function yes is a helper; it prints a prompt and then takes input from the user. If the response begins with y or Y, the function returns true:
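A sketch of the helper, using Python 3's input:

def yes(ques):
    ans = input(ques).lower()
    return ans[:1] == 'y'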
The condition of the outer loop is True, which means it will continue until the break statement executes; that happens when the user is no longer thinking of an animal.
The inner while loop walks the tree from top to bottom, guided by the user's responses.
When a new node is added to the tree, the new question replaces the cargo, and the two children are the new animal and the original cargo.
One shortcoming of the program is that when it exits, it forgets everything you carefully taught it!
As an exercise, think of various ways you might save the knowledge tree in a file. Implement the one you think is easiest.
Thermodynamic temperature is the absolute measure of temperature and is one of the principal parameters of thermodynamics. Thermodynamic temperature is an “absolute” scale because it is the measure of the fundamental property underlying temperature: its null or zero point, absolute zero, is the temperature at which the particle constituents of matter have minimal motion and can be no colder.
Temperature arises from the random submicroscopic vibrations of the particle constituents of matter. These motions comprise the kinetic energy in a substance. More specifically, the thermodynamic temperature of any bulk quantity of matter is the measure of the average kinetic energy of a certain kind of vibrational motion of its constituent particles called translational motions. Translational motions are ordinary, whole-body movements in three-dimensional space whereby particles move about and exchange energy in collisions. Fig. 1 at right shows translational motion in gases; Fig. 4 below shows translational motion in solids. Thermodynamic temperature’s null point, absolute zero, is the temperature at which the particle constituents of matter are as close as possible to complete rest; that is, they have minimal motion, retaining only quantum mechanical motion. Zero kinetic energy remains in a substance at absolute zero (see Heat energy at absolute zero, below).
Throughout the scientific world where measurements are made in SI units, thermodynamic temperature is measured in kelvins (symbol: K). Many engineering fields in the U.S. however, measure thermodynamic temperature using the Rankine scale.
By international agreement, the unit “kelvin” and its scale are defined by two points: absolute zero, and the triple point of Vienna Standard Mean Ocean Water (water with a specified blend of hydrogen and oxygen isotopes). Absolute zero—the coldest possible temperature—is defined as being precisely 0 K and −273.15 °C. The triple point of water is defined as being precisely 273.16 K and 0.01 °C. This definition does three things:
- It fixes the magnitude of the kelvin unit as being precisely 1 part in 273.16 parts of the difference between absolute zero and the triple point of water;
- It establishes that one kelvin has precisely the same magnitude as a one-degree increment on the Celsius scale; and
- It establishes the difference between the two scales’ null points as being precisely 273.15 kelvins (0 K = −273.15 °C and 273.16 K = 0.01 °C).
Temperatures expressed in kelvins are converted to degrees Rankine simply by multiplying by 1.8 as follows: TK × 1.8 = T°R, where TK and T°R are temperatures in kelvins and degrees Rankine respectively. Temperatures expressed in Rankine are converted to kelvins by dividing by 1.8 as follows: T°R ÷ 1.8 = TK.
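A minimal Python sketch of these conversions:

def kelvin_to_rankine(t_k):
    return t_k * 1.8

def rankine_to_kelvin(t_r):
    return t_r / 1.8

# Example: the triple point of water
print(kelvin_to_rankine(273.16))   # 491.688 degrees Rankine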
Table of thermodynamic temperatures
The full range of the thermodynamic temperature scale and some notable points along it are shown in the table below.
Description | kelvin | Celsius | Peak emittance wavelength of black-body photons
Absolute zero (precisely by definition) | 0 K | −273.15 °C | ∞
One millikelvin (precisely by definition) | 0.001 K | −273.149 °C | 2.897 77 m (Radio, FM band)
Water’s triple point (precisely by definition) | 273.16 K | 0.01 °C | ≈10,608 nm (Long wavelength I.R.)
Water’s boiling point A | 373.1339 K | 99.9839 °C | ≈7766 nm (Mid wavelength I.R.)
Incandescent lamp B | ≈2500 K | ≈2200 °C | ≈1160 nm C
Sun’s visible surface D | 5778 K | 5505 °C | ≈501.5 nm
Sun’s core E | 16 million K | 16 million °C | 0.18 nm (X-rays)
Thermonuclear weapon (peak temperature) E | 350 million K | 350 million °C | 8.3 × 10⁻³ nm
Sandia National Labs’ Z machine E | 2 billion K | 2 billion °C | 1.4 × 10⁻³ nm
Core of a high-mass star on its last day E | 3 billion K | 3 billion °C | 1 × 10⁻³ nm
Merging binary neutron star system E | 350 billion K | 350 billion °C | 8 × 10⁻⁶ nm
Relativistic Heavy Ion Collider E | 1 trillion K | 1 trillion °C | 3 × 10⁻⁶ nm
CERN’s proton vs. nucleus collisions E | 10 trillion K | 10 trillion °C | 3 × 10⁻⁷ nm
Universe 5.391 × 10⁻⁴⁴ s after the Big Bang E | 1.417 × 10³² K | 1.417 × 10³² °C | 1.616 × 10⁻²⁶ nm (Planck frequency)
A For Vienna Standard Mean Ocean Water at one standard atmosphere (101.325 kPa) when calibrated strictly per the two-point definition of thermodynamic temperature.
B The 2500 K value is approximate. The 273.15 K difference between K and °C is rounded to 300 K to avoid false precision in the Celsius value.
C For a true blackbody (which tungsten filaments are not). Tungsten filaments’ emissivity is greater at shorter wavelengths, which makes them appear whiter.
D Effective photosphere temperature. The 273.15 K difference between K and °C is rounded to 273 K to avoid false precision in the Celsius value.
E The 273.15 K difference between K and °C is ignored to avoid false precision in the Celsius value.
F For a true blackbody (which the plasma was not). The Z machine’s dominant emission originated from 40 MK electrons (soft x–ray emissions) within the plasma.
The relationship of temperature, motions, conduction, and heat energy
The nature of kinetic energy, translational motion, and temperature
At its simplest, “temperature” arises from the kinetic energy of the vibrational motions of matter’s particle constituents (molecules, atoms, and subatomic particles). The full variety of these kinetic motions contribute to the total heat energy in a substance. The relationship of kinetic energy, mass, and velocity is given by the formula Ek = ½mv². Accordingly, particles with one unit of mass moving at one unit of velocity have precisely the same kinetic energy—and precisely the same temperature—as those with four times the mass but half the velocity.
The thermodynamic temperature of any bulk quantity of a substance (a statistically significant quantity of particles) is directly proportional to the average—or “mean”—kinetic energy of a specific kind of particle motion known as translational motion. These simple movements in the three x, y, and z–axis dimensions of space mean the particles move in the three spatial degrees of freedom. This particular form of kinetic energy is sometimes referred to as kinetic temperature. Translational motion is but one form of heat energy and is what gives gases not only their temperature, but also their pressure and the vast majority of their volume. This relationship between the temperature, pressure, and volume of gases is established by the ideal gas law’s formula pV = nRT and is embodied in the gas laws.
The extent to which the kinetic energy of translational motion of an individual atom or molecule (particle) in a gas contributes to the pressure and volume of that gas is a proportional function of thermodynamic temperature as established by the Boltzmann constant (symbol: kB). The Boltzmann constant also relates the thermodynamic temperature of a gas to the mean kinetic energy of an individual particle’s translational motion as follows:
- Emean = 3⁄2kBT
- Emean is the mean kinetic energy in joules (symbol: J)
- kB = 1.380 6504(24) × 10⁻²³ J/K
- T is the thermodynamic temperature in kelvins
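For example, evaluating this relationship at room temperature (a minimal Python sketch using the constant quoted above):

k_B = 1.3806504e-23   # Boltzmann constant, J/K

def mean_kinetic_energy(t_kelvin):
    # mean translational kinetic energy per particle, E = (3/2) k_B T
    return 1.5 * k_B * t_kelvin

print(mean_kinetic_energy(298.15))   # about 6.17e-21 J per particle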
While the Boltzmann constant is useful for finding the mean kinetic energy of a particle, it’s important to note that even when a substance is isolated and in thermodynamic equilibrium (all parts are at a uniform temperature and no heat is going into or out of it), the translational motions of individual atoms and molecules occur across a wide range of speeds (see animation in Fig. 1 above). At any one instant, the proportion of particles moving at a given speed within this range is determined by probability as described by the Maxwell–Boltzmann distribution. The graph shown here in Fig. 2 shows the speed distribution of 5500 K helium atoms. They have a most probable speed of 4.780 km/s (0.2092 s/km). However, a certain proportion of atoms at any given instant are moving faster while others are moving relatively slowly; some are momentarily at a virtual standstill (off the x–axis to the right). This graph uses inverse speed for its x–axis so the shape of the curve can easily be compared to the curves in Fig. 5 below. In both graphs, zero on the x–axis represents infinite temperature. Additionally, the x and y–axis on both graphs are scaled proportionally.
The high speeds of translational motion
Although very specialized laboratory equipment is required to directly detect translational motions, the resultant collisions by atoms or molecules with small particles suspended in a fluid produce Brownian motion that can be seen with an ordinary microscope. The translational motions of elementary particles are very fast and temperatures close to absolute zero are required to directly observe them. For instance, when scientists at the NIST achieved a record-setting cold temperature of 700 nK (billionths of a kelvin) in 1994, they used optical lattice laser equipment to adiabatically cool cesium atoms. They then turned off the entrapment lasers and directly measured atom velocities of 7 mm per second in order to calculate their temperature. Formulas for calculating the velocity and speed of translational motion are given in the following footnote.
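As a sketch of this kind of calculation, the most probable speed of the Maxwell–Boltzmann distribution, √(2·kB·T/m), reproduces the 4.780 km/s figure quoted above for helium atoms at 5500 K:

import math

k_B = 1.3806504e-23   # J/K (value quoted above)
u = 1.660539e-27      # kg, atomic mass unit
m_helium = 4.0026 * u # kg per helium atom

def most_probable_speed(t_kelvin, mass_kg):
    # peak of the Maxwell-Boltzmann speed distribution
    return math.sqrt(2 * k_B * t_kelvin / mass_kg)

print(most_probable_speed(5500, m_helium))   # about 4.78e3 m/s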
The internal motions of molecules and specific heat
There are other forms of heat energy besides the kinetic energy of translational motion. As can be seen in the animation at right, molecules are complex objects; they are a population of atoms and thermal agitation can strain their internal chemical bonds in three different ways: via rotation, bond length, and bond angle movements. These are all types of internal degrees of freedom. This makes molecules distinct from monatomic substances (consisting of individual atoms) like the noble gases helium and argon, which have only the three translational degrees of freedom. Kinetic energy is stored in molecules’ internal degrees of freedom, which gives them an internal temperature. Even though these motions are called “internal,” the external portions of molecules still move—rather like the jiggling of a stationary water balloon. This permits the two-way exchange of kinetic energy between internal motions and translational motions with each molecular collision. Accordingly, as heat is removed from molecules, both their kinetic temperature (the kinetic energy of translational motion) and their internal temperature simultaneously diminish in equal proportions. This phenomenon is described by the equipartition theorem, which states that for any bulk quantity of a substance in equilibrium, the kinetic energy of particle motion is evenly distributed among all the active degrees of freedom available to the particles. Since the internal temperature of molecules are usually equal to their kinetic temperature, the distinction is usually of interest only in the detailed study of non-equilibrium phenomena such as combustion, the sublimation of solids, and the diffusion of hot gases in a partial vacuum.
The kinetic energy stored internally in molecules does not contribute to the temperature of a substance (nor to the pressure or volume of gases). This is because any kinetic energy that is, at a given instant, bound in internal motions is not at that same instant contributing to the molecules’ translational motions. This extra kinetic energy simply increases the amount of heat energy a substance absorbs for a given temperature rise. This property is known as a substance’s specific heat capacity.
Different molecules absorb different amounts of heat energy for each incremental increase in temperature; that is, they have different specific heat capacities. High specific heat capacity arises, in part, because certain substances’ molecules possess more internal degrees of freedom than others do. For instance, room-temperature nitrogen, which is a diatomic molecule, has five active degrees of freedom: the three comprising translational motion plus two rotational degrees of freedom internally. Not surprisingly, in accordance with the equipartition theorem, nitrogen has five-thirds the specific heat capacity per mole (a specific number of molecules) as do the monatomic gases. Another example is gasoline (see table showing its specific heat capacity). Gasoline can absorb a large amount of heat energy per mole with only a modest temperature change because each molecule comprises an average of 21 atoms and therefore has many internal degrees of freedom. Even larger, more complex molecules can have dozens of internal degrees of freedom.
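One way to make the five-thirds ratio concrete: under the equipartition theorem, the constant-volume molar heat capacity is (f/2)·R, where f is the number of active degrees of freedom (a standard result, sketched here in Python):

R = 8.314462   # gas constant, J/(mol K)

def molar_cv(degrees_of_freedom):
    return degrees_of_freedom / 2 * R

print(molar_cv(3))   # monatomic gas (helium, argon): about 12.5 J/(mol K)
print(molar_cv(5))   # room-temperature nitrogen: about 20.8 J/(mol K), five-thirds as much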
The diffusion of heat energy: Entropy, phonons, and mobile conduction electrons
Heat conduction is the diffusion of heat energy from hot parts of a system to cold. A “system” can be either a single bulk entity or a plurality of discrete bulk entities. The term “bulk” in this context means a statistically significant quantity of particles (which can be a microscopic amount). Whenever heat energy diffuses within an isolated system, temperature differences within the system decrease (and entropy increases).
One particular heat conduction mechanism occurs when translational motion—the particle motion underlying temperature—transfers momentum from particle to particle in collisions. In gases, these translational motions are of the nature shown above in Fig. 1. As can be seen in that animation, not only does momentum (heat) diffuse throughout the volume of the gas through serial collisions, but entire molecules or atoms can advance forward into new territory, bringing their kinetic energy with them. Consequently, temperature differences equalize throughout gases very quickly—especially for light atoms or molecules; convection speeds this process even more.
Translational motion in solids however, takes the form of phonons (see Fig. 4 at right). Phonons are constrained, quantized wave packets traveling at the speed of sound for a given substance. The manner in which phonons interact within a solid determines a variety of its properties, including its thermal conductivity. In electrically insulating solids, phonon-based heat conduction is usually inefficient and such solids are considered thermal insulators (such as glass, plastic, rubber, ceramic, and rock). This is because in solids, atoms and molecules are locked into place relative to their neighbors and are not free to roam.
Metals however, are not restricted to only phonon-based heat conduction. Heat energy conducts through metals extraordinarily quickly because instead of direct molecule-to-molecule collisions, the vast majority of heat energy is mediated via very light, mobile conduction electrons. This is why there is a near-perfect correlation between metals’ thermal conductivity and their electrical conductivity. Conduction electrons imbue metals with their extraordinary conductivity because they are delocalized, i.e. not tied to a specific atom, and behave rather like a sort of “quantum gas” due to the effects of zero-point energy (for more on ZPE, see Note 1 below). Furthermore, electrons are relatively light with a rest mass only 1⁄1836th that of a proton. This is about the same ratio as a .22 Short bullet (29 grains or 1.88 g) compared to the rifle that shoots it. As Isaac Newton wrote with his third law of motion,
- “Law #3: All forces occur in pairs, and these two forces
- are equal in magnitude and opposite in direction.”
However, a bullet accelerates faster than a rifle given an equal force. Since kinetic energy increases as the square of velocity, nearly all the kinetic energy goes into the bullet, not the rifle, even though both experience the same force from the expanding propellant gases. In the same manner—because they are much less massive—heat energy is readily borne by mobile conduction electrons. Additionally, because they’re delocalized and very fast, kinetic heat energy conducts extremely quickly through metals with abundant conduction electrons.
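A short momentum-conservation sketch makes the point quantitatively; the masses below are illustrative, chosen to mirror the 1836:1 ratio mentioned above:

```python
# Sketch: how kinetic energy divides between two recoiling masses that receive
# equal and opposite momentum (Newton's third law). Masses are illustrative.
m_bullet = 0.00188          # kg (29 grains, .22 Short)
m_rifle = 1836 * m_bullet   # rifle assumed 1836x heavier, mirroring the proton/electron ratio

p = 1.0  # arbitrary momentum magnitude, kg*m/s (equal and opposite for both bodies)

ke_bullet = p**2 / (2 * m_bullet)   # kinetic energy = p^2 / 2m
ke_rifle = p**2 / (2 * m_rifle)

fraction_to_bullet = ke_bullet / (ke_bullet + ke_rifle)
print(f"fraction of kinetic energy carried by the lighter body: {fraction_to_bullet:.4%}")
# about 99.95% - the light body carries almost all of the energy, just as light
# conduction electrons carry heat energy far more readily than the heavy nuclei.
```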
The diffusion of heat energy: Black-body radiation
Thermal radiation is a byproduct of the collisions arising from atoms’ various vibrational motions. These collisions cause the atoms’ electrons to emit thermal photons (known as black-body radiation). Photons are emitted anytime an electric charge is accelerated (as happens when two atoms’ electron clouds collide). Even individual molecules with internal temperatures greater than absolute zero also emit black-body radiation from their atoms. In any bulk quantity of a substance at equilibrium, black-body photons are emitted across a range of wavelengths in a spectrum that has a bell curve-like shape called a Planck curve (see graph in Fig. 5 at right). The top of a Planck curve—the peak emittance wavelength—is located in a particular part of the electromagnetic spectrum depending on the temperature of the black body. Substances at extreme cryogenic temperatures emit at long radio wavelengths whereas extremely hot temperatures produce short gamma rays (see Table of thermodynamic temperatures, above).
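As a rough illustration of how the peak wavelength tracks temperature, the sketch below evaluates Wien’s displacement law, λmax = b/T, for a few example temperatures. The temperatures are illustrative; b is the CODATA value quoted in the notes.

```python
# Sketch: peak emittance wavelength of a black body from Wien's displacement law.
b = 2.8977685e-3  # Wien displacement constant, m*K

def peak_wavelength_m(T_kelvin: float) -> float:
    return b / T_kelvin

for label, T in [("liquid nitrogen", 77.0), ("room temperature", 296.0), ("solar photosphere", 5778.0)]:
    print(f"{label:>18}: {peak_wavelength_m(T) * 1e6:8.2f} micrometres")
# Cold objects peak at long (infrared and radio) wavelengths; hot objects at short ones.
```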
Black-body radiation diffuses heat energy throughout a substance as the photons are absorbed by neighboring atoms, transferring momentum in the process. Black-body photons also easily escape from a substance and can be absorbed by the ambient environment; kinetic energy is lost in the process.
As established by the Stefan–Boltzmann law, the intensity of black-body radiation increases as the fourth power of absolute temperature. Thus, a black body at 824 K (just short of glowing dull red) emits 60 times the radiant power as it does at 296 K (room temperature). This is why one can so easily feel the radiant heat from hot objects at a distance. At higher temperatures, such as those found in an incandescent lamp, black-body radiation can be the principal mechanism by which heat energy escapes a system.
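Because emissivity and surface area cancel when comparing the same object at two temperatures, the fourth-power scaling can be checked with a one-line ratio. The sketch below reproduces the factor of 60 quoted above; the 3000 K filament figure is an added illustration.

```python
# Sketch: relative radiant power of a black body at two temperatures, per the
# Stefan-Boltzmann law (power proportional to T^4).
def radiant_power_ratio(T_hot: float, T_cold: float) -> float:
    return (T_hot / T_cold) ** 4

print(radiant_power_ratio(824.0, 296.0))   # about 60 - the factor quoted above
print(radiant_power_ratio(3000.0, 296.0))  # about 10,500 - an incandescent filament radiates vastly more
```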
The heat of phase changes
The kinetic energy of particle motion is just one contributor to the total heat energy in a substance; another is the potential energy associated with phase transitions, i.e. the energy of the molecular bonds that can form in a substance as it cools (such as during condensing and freezing). The heat energy required for a phase transition is called latent heat. This phenomenon may more easily be grasped by considering it in the reverse direction: latent heat is the energy required to break chemical bonds (such as during evaporation and melting). Most everyone is familiar with the effects of phase transitions; for instance, steam at 100 °C can cause severe burns much faster than the 100 °C air from a hair dryer. This occurs because a large amount of latent heat is liberated as steam condenses into liquid water on the skin.
Even though heat energy is liberated or absorbed during phase transitions, pure chemical elements, compounds, and eutectic alloys exhibit no temperature change whatsoever while they undergo them (see Fig. 7, below right). Consider one particular type of phase transition: melting. When a solid is melting, crystal lattice chemical bonds are being broken apart; the substance is transitioning from what is known as a more ordered state to a less ordered state. In Fig. 7, the melting of ice is shown within the lower left box heading from blue to green.
At one specific thermodynamic point, the melting point (which is 0 °C across a wide pressure range in the case of water), all the atoms or molecules are—on average—at the maximum energy threshold their chemical bonds can withstand without breaking away from the lattice. Chemical bonds are quantized forces: they either hold fast, or break; there is no in-between state. Consequently, when a substance is at its melting point, every joule of added heat energy only breaks the bonds of a specific quantity of its atoms or molecules, converting them into a liquid of precisely the same temperature; no kinetic energy is added to translational motion (which is what gives substances their temperature). The effect is rather like popcorn: at a certain temperature, additional heat energy can’t make the kernels any hotter until the transition (popping) is complete. If the process is reversed (as in the freezing of a liquid), heat energy must be removed from a substance.
As stated above, the heat energy required for a phase transition is called latent heat. In the specific cases of melting and freezing, it’s called enthalpy of fusion or heat of fusion. If the molecular bonds in a crystal lattice are strong, the heat of fusion can be relatively great, typically in the range of 6 to 30 kJ per mole for water and most of the metallic elements. If the substance is one of the monatomic gases (which have little tendency to form molecular bonds), the heat of fusion is more modest, ranging from 0.021 to 2.3 kJ per mole. Relatively speaking, phase transitions can be truly energetic events. To completely melt ice at 0 °C into water at 0 °C, one must add roughly 80 times the heat energy that is required to increase the temperature of the same mass of liquid water by one degree Celsius. The metals’ ratios are even greater, typically in the range of 400 to 1200 times. And the phase transition of boiling is much more energetic than freezing. For instance, the energy required to completely boil or vaporize water (what is known as enthalpy of vaporization) is roughly 540 times that required for a one-degree increase.
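The 80× and 540× figures can be checked directly from water’s property values quoted in the notes; a minimal sketch:

```python
# Sketch: comparing water's latent heats with the heat needed for a 1-degree rise,
# using the property values quoted in the notes (Cp at 25 C, fusion at 0 C, vaporization at 100 C).
cp = 75.327          # J/(mol*K), liquid water
h_fusion = 6009.5    # J/mol, melting ice at 0 C
h_vap = 40657.0      # J/mol, boiling water at 100 C

print(f"melting / 1-degree heating ratio: {h_fusion / cp:.0f}x")   # about 80x
print(f"boiling / 1-degree heating ratio: {h_vap / cp:.0f}x")      # about 540x
```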
Water’s sizable enthalpy of vaporization is why one’s skin can be burned so quickly as steam condenses on it (heading from red to green in Fig. 7 above). In the opposite direction, this is why one’s skin feels cool as liquid water on it evaporates (a process that occurs at a sub-ambient wet-bulb temperature that is dependent on relative humidity). Water’s highly energetic enthalpy of vaporization is also an important factor underlying why “solar pool covers” (floating, insulated blankets that cover swimming pools when not in use) are so effective at reducing heating costs: they prevent evaporation. For instance, the evaporation of just 20 mm of water from a 1.29-meter-deep pool chills its water 8.4 degrees Celsius.
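The pool figure follows from a simple energy balance; the sketch below assumes a well-mixed pool, approximate property values, and that all of the latent heat is drawn from the water that remains:

```python
# Sketch of the pool-cover figure: energy balance for evaporating a thin layer
# of water off the top of a pool.
h_vap = 2.257e6      # J/kg, latent heat of vaporization near typical pool temperatures
c_water = 4186.0     # J/(kg*K), specific heat of liquid water
rho = 1000.0         # kg/m^3, density of water

evaporated_depth = 0.020  # m (20 mm evaporated)
pool_depth = 1.29         # m

# Per square metre of surface: heat removed by evaporation vs. heat capacity of the column below.
heat_removed = evaporated_depth * rho * h_vap          # J per m^2
column_heat_capacity = pool_depth * rho * c_water      # J per (m^2 * K)

print(f"temperature drop = {heat_removed / column_heat_capacity:.1f} degrees C")  # about 8.4
```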
The total kinetic energy of all particle motion—including that of conduction electrons—plus the potential energy of phase changes, plus zero-point energy comprise the internal energy of a substance, which is its total heat energy. The term internal energy mustn’t be confused with internal degrees of freedom. Whereas the internal degrees of freedom of molecules refers to one particular place where kinetic energy is bound, the internal energy of a substance comprises all forms of heat energy.
Heat energy at absolute zero
As a substance cools, different forms of heat energy and their related effects simultaneously decrease in magnitude: the latent heat of available phase transitions are liberated as a substance changes from a less ordered state to a more ordered state; the translational motions of atoms and molecules diminish (their kinetic temperature decreases); the internal motions of molecules diminish (their internal temperature decreases); conduction electrons (if the substance is an electrical conductor) travel somewhat slower; and black-body radiation’s peak emittance wavelength increases (the photons’ energy decreases). When the particles of a substance are as close as possible to complete rest and retain only ZPE-induced quantum mechanical motion, the substance is at the temperature of absolute zero (T=0).
Note that whereas absolute zero is the point of zero thermodynamic temperature and is also the point at which the particle constituents of matter have minimal motion, absolute zero is not necessarily the point at which a substance contains zero heat energy; one must be very precise with what one means by “heat energy.” Often, all the phase changes that can occur in a substance will have occurred by the time it reaches absolute zero. However, this is not always the case. Notably, T=0 helium remains liquid at room pressure and must be under a pressure of at least 25 bar to crystallize. This is because helium’s heat of fusion—the energy required to melt helium ice—is so low (only 21 J mol−1) that the motion-inducing effect of zero-point energy is sufficient to prevent it from freezing at lower pressures. Only if under at least 25 bar of pressure will this latent heat energy be liberated as helium freezes while approaching absolute zero. A further complication is that many solids change their crystal structure to more compact arrangements at extremely high pressures (up to millions of bars). These are known as solid-solid phase transitions wherein latent heat is liberated as a crystal lattice changes to a more thermodynamically favorable, compact one.
The above complexities make for rather cumbersome blanket statements regarding the internal energy in T=0 substances. Regardless of pressure though, what can be said is that at absolute zero, all solids with a lowest-energy crystal lattice such as those with a closest-packed arrangement (see Fig. 8, above left) contain minimal internal energy, retaining only that due to the ever-present background of zero-point energy. One can also say that for a given substance at constant pressure, absolute zero is the point of lowest enthalpy (a measure of work potential that takes internal energy, pressure, and volume into consideration). Lastly, it is always true to say that all T=0 substances contain zero kinetic heat energy.
Practical applications for thermodynamic temperature
Thermodynamic temperature is useful not only for scientists; it can also be useful for lay-people in many disciplines involving gases. By expressing variables in absolute terms and applying Gay–Lussac’s law of temperature/pressure proportionality, the solutions to familiar problems are straightforward. For instance, how is the pressure in an automobile tire affected by temperature? If the tire has a “cold” pressure of 200 kPa-gage, then in absolute terms—relative to a vacuum—its pressure is 300 kPa-absolute. Room temperature (“cold” in tire terms) is 296 K. What would the tire pressure be if it was 20 °C hotter? The answer is 316 K⁄296 K = 6.8% greater thermodynamic temperature and absolute pressure; that is, a pressure of 320 kPa-absolute and 220 kPa-gage.
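The same arithmetic, written out as a minimal sketch (the 100 kPa ambient pressure is the approximation discussed in the notes):

```python
# Sketch of the tire example: Gay-Lussac's law at (assumed) constant volume.
# Pressures must be converted to absolute terms before scaling with temperature.
atmospheric = 100.0          # kPa, ambient pressure (approximate)
p_cold_gage = 200.0          # kPa-gage, the "cold" tire pressure
T_cold = 296.0               # K, room temperature
T_hot = T_cold + 20.0        # K, tire 20 degrees hotter

p_cold_abs = p_cold_gage + atmospheric
p_hot_abs = p_cold_abs * (T_hot / T_cold)   # p/T is constant at fixed volume
p_hot_gage = p_hot_abs - atmospheric

print(f"{p_hot_abs:.0f} kPa-absolute, {p_hot_gage:.0f} kPa-gage")  # about 320 and 220
```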
The origin of heat energy on Earth
Earth’s proximity to the Sun is why most everything near Earth’s surface is warm with a temperature substantially above absolute zero. Solar radiation constantly replenishes heat energy that Earth loses into space and a relatively stable state of equilibrium is achieved. Because of the wide variety of heat diffusion mechanisms (one of which is black-body radiation which occurs at the speed of light), objects on Earth rarely vary too far from the global mean surface and air temperature of 287 to 288 K (14 to 15 °C). The more an object’s or system’s temperature varies from this average, the more rapidly it tends to come back into equilibrium with the ambient environment.
History of thermodynamic temperature
- 1702–1703: Guillaume Amontons (1663 – 1705) published two papers that may be used to credit him as being the first researcher to deduce the existence of a fundamental (thermodynamic) temperature scale featuring an absolute zero. He made the discovery while endeavoring to improve upon the air thermometers in use at the time. His J-tube thermometers comprised a mercury column that was supported by a fixed mass of air entrapped within the sensing portion of the thermometer. In thermodynamic terms, his thermometers relied upon the volume / temperature relationship of gas under constant pressure. His measurements of the boiling point of water and the melting point of ice showed that regardless of the mass of air trapped inside his thermometers or the weight of mercury the air was supporting, the reduction in air volume at the ice point was always the same ratio. This observation led him to posit that a sufficient reduction in temperature would reduce the air volume to zero. In fact, his calculations projected that absolute zero was equivalent to −240 degrees on today’s Celsius scale—only 33.15 degrees short of the true value of −273.15 °C.
- 1742: Anders Celsius (1701 – 1744) created a “backwards” version of the modern Celsius temperature scale whereby zero represented the boiling point of water and 100 represented the melting point of ice. In his paper Observations of two persistent degrees on a thermometer, he recounted his experiments showing that ice’s melting point was effectively unaffected by pressure. He also determined with remarkable precision how water’s boiling point varied as a function of atmospheric pressure. He proposed that zero on his temperature scale (water’s boiling point) would be calibrated at the mean barometric pressure at mean sea level.
- 1744: Coincident with the death of Anders Celsius, the famous botanist Carolus Linnaeus (1707 – 1778) effectively reversed Celsius’s scale upon receipt of his first thermometer featuring a scale where zero represented the melting point of ice and 100 represented water’s boiling point. The custom-made “linnaeus-thermometer,” for use in his greenhouses, was made by Daniel Ekström, Sweden’s leading maker of scientific instruments at the time. For the next 204 years, the scientific and thermometry communities world-wide referred to this scale as the “centigrade scale.” Temperatures on the centigrade scale were often reported simply as “degrees” or, when greater specificity was desired, “degrees centigrade.” The symbol for temperature values on this scale was °C (in several formats over the years). Because the term “centigrade” was also the French-language name for a unit of angular measurement (one-hundredth of a right angle) and had a similar connotation in other languages, the term “centesimal degree” was used when very precise, unambiguous language was required by international standards bodies such as the Bureau international des poids et mesures (BIPM). The 9th CGPM (Conférence générale des poids et mesures) and the CIPM (Comité international des poids et mesures) formally adopted “degree Celsius” (symbol: °C) in 1948.
- 1777: In his book Pyrometrie (Berlin: Haude & Spener, 1779) completed four months before his death, Johann Heinrich Lambert (1728 – 1777)—sometimes incorrectly referred to as Joseph Lambert—proposed an absolute temperature scale based on the pressure / temperature relationship of a fixed volume of gas. This is distinct from the volume / temperature relationship of gas under constant pressure that Guillaume Amontons discovered 75 years earlier. Lambert stated that absolute zero was the point where a simple straight-line extrapolation reached zero gas pressure and was equal to −270 °C.
- Circa 1787: Notwithstanding the work of Guillaume Amontons 85 years earlier, Jacques Alexandre César Charles (1746 – 1823) is often credited with “discovering”, but not publishing, that the volume of a gas under constant pressure is proportional to its absolute temperature. The formula he created was V1/T1 = V2/T2.
- 1802: Joseph Louis Gay-Lussac (1778 – 1850) published work (acknowledging the unpublished lab notes of Jacques Charles fifteen years earlier) describing how the volume of gas under constant pressure changes linearly with its absolute (thermodynamic) temperature. This behavior is called Charles’s Law and is one of the gas laws. His are the first known formulas to use the number “273” for the expansion coefficient of gas relative to the melting point of ice (indicating that absolute zero was equivalent to −273 °C).
- 1848: William Thomson, (1824 – 1907) also known as Lord Kelvin, wrote in his paper, On an Absolute Thermometric Scale, of the need for a scale whereby “infinite cold” (absolute zero) was the scale’s null point, and which used the degree Celsius for its unit increment. Like Gay-Lussac, Thomson calculated that absolute zero was equivalent to −273 °C on the air thermometers of the time. This absolute scale is known today as the Kelvin thermodynamic temperature scale. It’s noteworthy that Thomson’s value of “−273” was actually derived from 0.00366, which was the accepted expansion coefficient of gas per degree Celsius relative to the ice point. The inverse of −0.00366 expressed to five significant digits is −273.22 °C which is remarkably close to the true value of −273.15 °C.
- 1859: William John Macquorn Rankine (1820 – 1872) proposed a thermodynamic temperature scale similar to William Thomson’s but which used the degree Fahrenheit for its unit increment. This absolute scale is known today as the Rankine thermodynamic temperature scale.
- 1877 - 1884: Ludwig Boltzmann (1844 – 1906) made major contributions to thermodynamics through an understanding of the role that particle kinetics and black-body radiation played. His name is now attached to several of the formulas used today in thermodynamics.
- Circa 1930s: Gas thermometry experiments carefully calibrated to the melting point of ice and boiling point of water showed that absolute zero was equivalent to −273.15 °C.
- 1948: Resolution 3 of the 9th CGPM (Conférence Générale des Poids et Mesures, also known as the General Conference on Weights and Measures) fixed the triple point of water at precisely 0.01 °C. At this time, the triple point still had no formal definition for its equivalent kelvin value, which the resolution declared “will be fixed at a later date.” The implication is that if the value of absolute zero measured in the 1930s was truly −273.15 °C, then the triple point of water (0.01 °C) was equivalent to 273.16 K. Additionally, both the CIPM (Comité international des poids et mesures, also known as the International Committee for Weights and Measures) and the CGPM formally adopted the name “Celsius” for the “degree Celsius” and the “Celsius temperature scale.”
- 1954: Resolution 3 of the 10th CGPM gave the Kelvin scale its modern definition by choosing the triple point of water as its second defining point and assigned it a temperature of precisely 273.16 kelvin (what was actually written 273.16 “degrees Kelvin” at the time). This, in combination with Resolution 3 of the 9th CGPM, had the effect of defining absolute zero as being precisely zero kelvin and −273.15 °C.
- 1967/1968: Resolution 3 of the 13th CGPM renamed the unit increment of thermodynamic temperature “kelvin”, symbol K, replacing “degree absolute”, symbol °K. Further, feeling it useful to more explicitly define the magnitude of the unit increment, the 13th CGPM also decided in Resolution 4 that “The kelvin, unit of thermodynamic temperature, is the fraction 1/273.16 of the thermodynamic temperature of the triple point of water.”
- 2005: The CIPM (Comité International des Poids et Mesures, also known as the International Committee for Weights and Measures) affirmed that for the purposes of delineating the temperature of the triple point of water, the definition of the Kelvin thermodynamic temperature scale would refer to water having an isotopic composition defined as being precisely equal to the nominal specification of Vienna Standard Mean Ocean Water.
Derivations of thermodynamic temperature
Strictly speaking, the temperature of a system is well-defined only if its particles (atoms, molecules, electrons, photons) are at equilibrium, so that their energies obey a Boltzmann distribution (or its quantum mechanical counterpart). There are many possible scales of temperature, derived from a variety of observations of physical phenomena. The thermodynamic temperature can be shown to have special properties, and in particular can be seen to be uniquely defined (up to some constant multiplicative factor) by considering the efficiency of idealized heat engines. Thus the ratio T2/T1 of two temperatures T1 and T2 is the same in all absolute scales.
Loosely stated, temperature controls the flow of heat between two systems, and the universe as a whole, as with any natural system, tends to progress so as to maximize entropy. This suggests that there should be a relationship between temperature and entropy. To elucidate this, consider first the relationship between heat, work and temperature. One way to study this is to analyse a heat engine, which is a device for converting heat into mechanical work, such as the Carnot heat engine. Such a heat engine functions by using a temperature gradient between a high temperature TH and a low temperature TC to generate work, and the work done (per cycle, say) by the heat engine is equal to the difference between the heat energy qH put into the system at the high temperature and the heat qC ejected at the low temperature (in that cycle). The efficiency of the engine is the work divided by the heat put into the system, or
Efficiency = wcy / qH = (qH − qC) / qH = 1 − qC/qH     (Equation 1)
where wcy is the work done per cycle. Thus the efficiency depends only on qC/qH. Because qC and qH correspond to heat transfer at the temperatures TC and TH, respectively, the ratio qC/qH should be a function f of these temperatures:
qC/qH = f(TH, TC)     (Equation 2)
Carnot’s theorem states that all reversible engines operating between the same heat reservoirs are equally efficient. Thus, a heat engine operating between temperatures T1 and T3 must have the same efficiency as one consisting of two cycles, one between T1 and another (intermediate) temperature T2, and the second between T2 and T3. This can only be the case if
f(T1, T3) = f(T1, T2) · f(T2, T3)
Now specialize to the case that T1 is a fixed reference temperature: the temperature of the triple point of water. Then for any T2 and T3,
f(T2, T3) = f(T1, T3) / f(T1, T2)
Therefore if thermodynamic temperature is defined by
T = 273.16 · f(T1, T)
then the function f, viewed as a function of thermodynamic temperature, is simply
f(T2, T3) = T3 / T2
and the reference temperature T1 will have the value 273.16. (Of course any reference temperature and any positive numerical value could be used — the choice here corresponds to the Kelvin scale.)
It follows immediately that
qC/qH = f(TH, TC) = TC/TH     (Equation 3)
Substituting Equation 3 back into Equation 1 gives a relationship for the efficiency in terms of temperature:
Efficiency = 1 − qC/qH = 1 − TC/TH     (Equation 4)
Notice that for TC=0 the efficiency is 100% and that efficiency becomes greater than 100% for TC<0. Since an efficiency greater than 100% violates the first law of thermodynamics, this requires that zero must be the minimum possible temperature. This has an intuitive interpretation: temperature is the motion of particles, so no system can, on average, have less motion than the minimum permitted by quantum physics. In fact, as of June 2006, the coldest man-made temperature was 450 pK.
Subtracting the right hand side of Equation 4 from the middle portion and rearranging gives
qH/TH − qC/TC = 0
where the negative sign indicates heat ejected from the system. This relationship suggests the existence of a state function S (i.e., a function which depends only on the state of the system, not on how it reached that state) defined (up to an additive constant) by
S = ∫ dqrev / T     (Equation 5)
where the subscript indicates heat transfer in a reversible process. The function S corresponds to the entropy of the system, mentioned previously, and the change of S around any cycle is zero (as is necessary for any state function). Equation 5 can be rearranged to get an alternative definition for temperature in terms of entropy and heat:
T = dqrev / dS
For a system in which the entropy S is a function S(E) of its energy E, the thermodynamic temperature T is therefore given by
1/T = dS/dE
so that the reciprocal of the thermodynamic temperature is the rate of increase of entropy with energy.
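A quick numerical sanity check of these relations for an ideal reversible engine; the temperatures and heat input below are arbitrary illustrative values:

```python
# Sketch: numerical check of the Carnot relations derived above.
T_H, T_C = 500.0, 300.0     # K, hot and cold reservoir temperatures (illustrative)
q_H = 1000.0                # J absorbed per cycle at T_H (illustrative)

efficiency = 1.0 - T_C / T_H                 # Equation 4
w_cy = efficiency * q_H                      # work extracted per cycle
q_C = q_H - w_cy                             # heat ejected at T_C

print(f"efficiency = {efficiency:.2f}")                           # 0.40
print(f"q_C/q_H = {q_C / q_H:.2f}, T_C/T_H = {T_C / T_H:.2f}")    # equal, per Equation 3
print(f"q_H/T_H - q_C/T_C = {q_H / T_H - q_C / T_C:.2e}")         # zero: entropy change per cycle vanishes
```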
In the following notes, wherever numeric equalities are shown in ‘concise form’—such as 1.85487(14) × 10^43—the two digits between the parentheses denote the uncertainty at 1σ standard deviation (68% confidence level) in the two least significant digits of the significand.
- ^ a b c d e While scientists are achieving temperatures ever closer to absolute zero, they can not fully achieve a state of “zero” temperature. However, even if scientists could remove all kinetic heat energy from matter, quantum mechanical zero-point energy (ZPE) causes particle motion that can never be eliminated. Encyclopedia Britannica Online defines zero-point energy as the “vibrational energy that molecules retain even at the absolute zero of temperature.” ZPE is the result of all-pervasive energy fields in the vacuum between the fundamental particles of nature; it is responsible for the Casimir effect and other phenomena. See Zero Point Energy and Zero Point Field, which is an excellent explanation of ZPE by Calphysics Institute. See also Solid Helium by the University of Alberta’s Department of Physics to learn more about ZPE’s effect on Bose–Einstein condensates of helium.
Although absolute zero (T=0) is not a state of zero molecular motion, it is the point of zero temperature and, in accordance with the Boltzmann constant, is also the point of zero particle kinetic energy and zero kinetic velocity. To understand how atoms can have zero kinetic velocity and simultaneously be vibrating due to ZPE, consider the following thought experiment: two T=0 helium atoms in zero gravity are carefully positioned and observed to have an average separation of 620 pm between them (a gap of ten atomic diameters). It’s an “average” separation because ZPE causes them to jostle about their fixed positions. Then one atom is given a kinetic kick of precisely 83 yoctokelvin (1 yK = 1 × 10–24 K). This is done in a way that directs this atom’s velocity vector at the other atom. With 83 yK of kinetic energy between them, the 620-pm gap through their common barycenter would close at a rate of 719 pm/s and they would collide after 0.862 second. This is the same speed as shown in the Fig. 1 animation above. Before being given the kinetic kick, both T=0 atoms had zero kinetic energy and zero kinetic velocity because they could persist indefinitely in that state and relative orientation even though both were being jostled by ZPE. At T=0, no kinetic energy is available for transfer to other systems. The Boltzmann constant and its related formulas describe the realm of particle kinetics and velocity vectors whereas ZPE is an energy field that jostles particles in ways described by the mathematics of quantum mechanics. In atomic and molecular collisions in gases, ZPE introduces a degree of chaos, i.e., unpredictability, to rebound kinetics; it is as likely that there will be less ZPE-induced particle motion after a given collision as more. This random nature of ZPE is why it has no net effect upon either the pressure or volume of any bulk quantity (a statistically significant quantity of particles) of T≥3 K gases. However, in T=0 condensed matter; e.g., solids and liquids, ZPE causes inter-atomic jostling where atoms would otherwise be perfectly stationary. Inasmuch as the real-world effects that ZPE has on substances can vary as one alters a thermodynamic system (for example, due to ZPE, helium won’t freeze unless under a pressure of at least 25 bar), ZPE is very much a form of heat energy and may properly be included when tallying a substance’s internal energy.
Note too that absolute zero serves as the baseline atop which thermodynamics and its equations are founded because they deal with the exchange of heat energy between “systems” (a plurality of particles and fields modeled as an average). Accordingly, one may examine ZPE-induced particle motion within a system that is at absolute zero but there can never be a net outflow of heat energy from such a system. Also, the peak emittance wavelength of black-body radiation shifts to infinity at absolute zero; indeed, a peak no longer exists and black-body photons can no longer escape. Due to the influence of ZPE however, virtual photons are still emitted at T=0. Such photons are called “virtual” because they can’t be intercepted and observed. Furthermore, this zero-point radiation has a unique zero-point spectrum. However, even though a T=0 system emits zero-point radiation, no net heat flow Q out of such a system can occur because if the surrounding environment is at a temperature greater than T=0, heat will flow inward, and if the surrounding environment is at T=0, there will be an equal flux of ZP radiation both inward and outward. A similar Q equilibrium exists at T=0 with the ZPE-induced “spontaneous” emission of photons (which is more properly called a stimulated emission in this context). The graph at upper right illustrates the relationship of absolute zero to zero-point energy. The graph also helps in the understanding of how zero-point energy got its name: it is the vibrational energy matter retains at the “zero kelvin point.” Citation: Derivation of the classical electromagnetic zero-point radiation spectrum via a classical thermodynamic operation involving van der Waals forces, Daniel C. Cole, Physical Review A, Third Series 42, Number 4, 15 August 1990, Pg. 1847–1862.
- ^ The cited emission wavelengths are for true black bodies in equilibrium. In this table, only the sun so qualifies. CODATA 2006 recommended value of 2.897 7685(51) × 10−3 m K used for Wien displacement law constant b.
- ^ a b A record cold temperature of 450 ±80 pK in a Bose–Einstein condensate (BEC) of sodium atoms was achieved in 2003 by researchers at MIT. Citation: Cooling Bose–Einstein Condensates Below 500 Picokelvin, A. E. Leanhardt et al., Science 301, 12 Sept. 2003, Pg. 1515. It’s noteworthy that this record’s peak emittance black-body wavelength of 6,400 kilometers is roughly the radius of Earth.
- ^ The peak emittance wavelength of 2.897 77 m corresponds to a frequency of 103.456 MHz
- ^ Measurement was made in 2002 and has an uncertainty of ±3 kelvins. A 1989 measurement produced a value of 5777 ±2.5 K. Citation: Overview of the Sun (Chapter 1 lecture notes on Solar Physics by Division of Theoretical Physics, Dept. of Physical Sciences, University of Helsinki). Download paper (252 kB PDF)
- ^ The 350 MK value is the maximum peak fusion fuel temperature in a thermonuclear weapon of the Teller–Ulam configuration (commonly known as a “hydrogen bomb”). Peak temperatures in Gadget-style fission bomb cores (commonly known as an “atomic bomb”) are in the range of 50 to 100 MK. Citation: Nuclear Weapons Frequently Asked Questions, 3.2.5 Matter At High Temperatures. Link to relevant Web page. All referenced data was compiled from publicly available sources.
- ^ Peak temperature for a bulk quantity of matter was achieved by a pulsed-power machine used in fusion physics experiments. The term “bulk quantity” draws a distinction from collisions in particle accelerators wherein high “temperature” applies only to the debris from two subatomic particles or nuclei at any given instant. The >2 GK temperature was achieved over a period of about ten nanoseconds during “shot Z1137.” In fact, the iron and manganese ions in the plasma averaged 3.58 ±0.41 GK (309 ±35 keV) for 3 ns (ns 112 through 115). Citation: Ion Viscous Heating in a Magnetohydrodynamically Unstable Z Pinch at Over 2 × 10^9 Kelvin, M. G. Haines et al., Physical Review Letters 96, Issue 7, id. 075003. Link to Sandia’s news release.
- ^ Core temperature of a high–mass (>8–11 solar masses) star after it leaves the main sequence on the Hertzsprung–Russell diagram and begins the alpha process (which lasts one day) of fusing silicon–28 into heavier elements in the following steps: sulfur–32 → argon–36 → calcium–40 → titanium–44 → chromium–48 → iron–52 → nickel–56. Within minutes of finishing the sequence, the star explodes as a Type II supernova. Citation: Stellar Evolution: The Life and Death of Our Luminous Neighbors (by Arthur Holland and Mark Williams of the University of Michigan). Link to Web site. More informative links can be found here, and here, and a concise treatise on stars by NASA is here.
- ^ Based on a computer model that predicted a peak internal temperature of 30 MeV (350 GK) during the merger of a binary neutron star system (which produces a gamma–ray burst). The neutron stars in the model were 1.2 and 1.6 solar masses respectively, were roughly 20 km in diameter, and were orbiting around their barycenter (common center of mass) at about 390 Hz during the last several milliseconds before they completely merged. The 350 GK portion was a small volume located at the pair’s developing common core and varied from roughly 1 to 7 km across over a time span of around 5 ms. Imagine two city-sized objects of unimaginable density orbiting each other at the same frequency as the G4 musical note (the 28th white key on a piano). It’s also noteworthy that at 350 GK, the average neutron has a vibrational speed of 30% the speed of light and a relativistic mass (m) 5% greater than its rest mass (m0). Citation: Torus Formation in Neutron Star Mergers and Well-Localized Short Gamma-Ray Bursts, R. Oechslin et al. of Max Planck Institute for Astrophysics., arXiv:astro-ph/0507099 v2, 22 Feb. 2006. Download paper (725 kB PDF) (from Cornell University Library’s arXiv.org server). To view a browser-based summary of the research, click here.
- ^ Results of research by Stefan Bathe using the PHENIX detector on the Relativistic Heavy Ion Collider at Brookhaven National Laboratory in Upton, New York, U.S.A. Bathe has studied gold-gold, deuteron-gold, and proton-proton collisions to test the theory of quantum chromodynamics, the theory of the strong force that holds atomic nuclei together. Link to news release.
- ^ Citation: How do physicists study particles? by CERN.
- ^ The Planck frequency equals 1.854 87(14) × 10^43 Hz (which is the reciprocal of one Planck time). Photons at the Planck frequency have a wavelength of one Planck length. The Planck temperature of 1.416 79(11) × 10^32 K equates to a calculated b /T = λmax wavelength of 2.045 31(16) × 10^−26 nm. However, the actual peak emittance wavelength quantizes to the Planck length of 1.616 24(12) × 10^−26 nm.
- ^ At non-relativistic temperatures of less than about 30 GK, classical mechanics are sufficient to calculate the velocity of particles. At 30 GK, individual neutrons (the constituent of neutron stars and one of the few materials in the universe with temperatures in this range) have a 1.0042 γ (gamma or Lorentz factor). Thus, the classic Newtonian formula for kinetic energy is in error less than half a percent for temperatures less than 30 GK.
- ^ Even room–temperature air has an average molecular translational speed (not vector-isolated velocity) of 1822 km/hour. This is relatively fast for something the size of a molecule considering there are roughly 2.42 × 10^16 of them crowded into a single cubic millimeter. Assumptions: Average molecular weight of wet air = 28.838 g/mol and T = 296.15 K. Assumption’s primary variables: An altitude of 194 meters above mean sea level (the world–wide median altitude of human habitation), an indoor temperature of 23 °C, a dewpoint of 9 °C (40.85% relative humidity), and 760 mmHg (101.325 kPa) sea level–corrected barometric pressure.
- ^ Citation: Adiabatic Cooling of Cesium to 700 nK in an Optical Lattice, A. Kastberg et al., Physical Review Letters 74, No. 9, 27 Feb. 1995, Pg. 1542. It’s noteworthy that a record cold temperature of 450 pK in a Bose–Einstein condensate of sodium atoms (achieved by A. E. Leanhardt et al. of MIT) equates to an average vector-isolated atom velocity of 0.4 mm/s and an average atom speed of 0.7 mm/s.
- ^ a b The rate of translational motion of atoms and molecules is calculated based on thermodynamic temperature as follows:
ṽ = √( kB T / m )
In the above formula, molecular mass, m, in kg/particle is the quotient of a substance’s molar mass (also known as atomic weight, atomic mass, relative atomic mass, and unified atomic mass units) in g/mol or daltons divided by 6.022 141 79(30) × 10^26 (which is the Avogadro constant times one thousand). For diatomic molecules such as H2, N2, and O2, multiply atomic weight by two before plugging it into the above formula.
- ṽ is the vector-isolated mean velocity of translational particle motion in m/s
- kB (Boltzmann constant) = 1.380 6504(24) × 10−23 J/K
- T is the thermodynamic temperature in kelvins
- m is the molecular mass of substance in kg/particle
The mean speed (not vector-isolated velocity) of an atom or molecule along any arbitrary path is calculated as follows:
v = ṽ · √3 = √( 3 kB T / m )
Note that the mean energy of the translational motions of a substance’s constituent particles correlates to their mean speed, not velocity. Thus, substituting for v in the classic formula for kinetic energy, Ek = 1⁄2 m • v² produces precisely the same value as does Emean = 3/2 kB T (as shown in the section titled The nature of kinetic energy, translational motion, and temperature).
- v is the mean speed of translational particle motion in m/s
Note too that the Boltzmann constant and its related formulas establish that absolute zero is the point of both zero kinetic energy of particle motion and zero kinetic velocity (see also Note 1 above).
- ^ The internal degrees of freedom of molecules cause their external surfaces to vibrate and can also produce overall spinning motions (what can be likened to the jiggling and spinning of an otherwise stationary water balloon). If one examines a single molecule as it impacts a container’s wall, some of the kinetic energy borne in the molecule’s internal degrees of freedom can constructively add to its translational motion during the instant of the collision and extra kinetic energy will be transferred into the container’s wall. This would induce an extra, localized, impulse-like contribution to the average pressure on the container. However, since the internal motions of molecules are random, they have an equal probability of destructively interfering with translational motion during a collision with a container’s walls or another molecule. Averaged across any bulk quantity of a gas, the internal thermal motions of molecules have zero net effect upon the temperature, pressure, or volume of a gas. Molecules’ internal degrees of freedom simply provide additional locations where kinetic energy is stored. This is precisely why molecular-based gases have greater specific heat capacity than monatomic gases (where additional heat energy must be added to achieve a given temperature rise).
- ^ When measured at constant volume, since different amounts of work must be performed if measured at constant pressure. Nitrogen’s Cv (100 kPa, 20 °C) equals 20.8 J mol–1 K–1 vs. the monatomic gases, which equal 12.4717 J mol–1 K–1. Citations: W.H. Freeman’s Physical Chemistry, Part 3: Change (422 kB PDF, here), Exercise 21.20b, Pg. 787. Also Georgia State University’s Molar Specific Heats of Gases.
- ^ The speed at which thermal energy equalizes throughout the volume of a gas is very rapid. However, since gases have extremely low density relative to solids, the heat flux—the thermal power conducting through a unit area—through gases is comparatively low. This is why the dead-air spaces in multi-pane windows have insulating qualities.
- ^ Diamond is a notable exception. Due to the highly quantized modes of phonon vibration occurring in its rigid crystal lattice, not only does diamond have exceptionally poor specific heat capacity, it also has exceptionally high thermal conductivity.
- ^ Correlation is 752 (W m−1 K−1) / (MS•cm), σ = 81, through a 7:1 range in conductivity. Value and standard deviation based on data for Ag, Cu, Au, Al, Ca, Be, Mg, Rh, Ir, Zn, Co, Ni, Os, Fe, Pa, Pt, and Sn. Citation: Data from CRC Handbook of Chemistry and Physics, 1st Student Edition and this link to Web Elements’ home page.
- ^ Water’s enthalpy of fusion (0 °C, 101.325 kPa) equates to 0.062284 eV per molecule so adding one joule of heat energy to 0 °C water ice causes 1.0021 × 10^20 water molecules to break away from the crystal lattice and become liquid.
- ^ Water’s enthalpy of fusion is 6.0095 kJ mol−1 (0 °C, 101.325 kPa). Citation: Water Structure and Science, Water Properties, Enthalpy of fusion, (0 °C, 101.325 kPa) (by London South Bank University). Link to Web site. The only metals with enthalpies of fusion not in the range of 6–30 kJ mol−1 are (on the high side): Ta, W, and Re; and (on the low side) most of the group 1 (alkali) metals plus Ga, In, Hg, Tl, Pb, and Np. Citation: This link to Web Elements’ home page.
- ^ Xenon value citation: This link to WebElements’ xenon data (available values range from 2.3 to 3.1 kJ mol−1). It is also noteworthy that helium’s heat of fusion of only 0.021 kJ mol−1 is so weak of a bonding force that zero-point energy prevents helium from freezing unless it is under a pressure of at least 25 atmospheres.
- ^ Citation: Data from CRC Handbook of Chemistry and Physics, 1st Student Edition and this link to Web Elements’ home page.
- ^ H2O specific heat capacity, Cp = 0.075327 kJ mol−1 K−1 (25 °C); Enthalpy of fusion = 6.0095 kJ mol−1 (0 °C, 101.325 kPa); Enthalpy of vaporization (liquid) = 40.657 kJ mol−1 (100 °C). Citation: Water Structure and Science, Water Properties (by London South Bank University). Link to Web site.
- ^ Mobile conduction electrons are delocalized, i.e. not tied to a specific atom, and behave rather like a sort of “quantum gas” due to the effects of zero-point energy. Consequently, even at absolute zero, conduction electrons still move between atoms at the Fermi velocity of about 1.6 × 10^6 m/s. Kinetic heat energy adds to this speed and also causes delocalized electrons to travel farther away from the nuclei.
- ^ No other crystal structure can exceed the 74.048% packing density of a closest-packed arrangement. The two regular crystal lattices found in nature that have this density are hexagonal close packed (HCP) and face-centered cubic (FCC). These regular lattices are at the lowest possible energy state. Diamond crystallizes on an FCC lattice, although its two-atom basis gives it a far lower packing density than a true closest-packed arrangement. Note too that suitable crystalline chemical compounds, although usually composed of atoms of different sizes, can be considered as “closest-packed structures” when considered at the molecular level. One such compound is the common mineral known as magnesium aluminum spinel (MgAl2O4). It has a face-centered cubic crystal lattice and no change in pressure can produce a lattice with a lower energy state.
- ^ Nearly half of the 92 naturally occurring chemical elements that can freeze under a vacuum also have a closest-packed crystal lattice. This set includes beryllium, osmium, neon, and iridium (but excludes helium); such elements therefore have zero latent heat of phase transitions left to contribute to internal energy (symbol: U). In the calculation of enthalpy (formula: H = U + pV), internal energy may exclude different sources of heat energy—particularly ZPE—depending on the nature of the analysis. Accordingly, all T=0 closest-packed matter under a perfect vacuum has either minimal or zero enthalpy, depending on the nature of the analysis. Citation: Use Of Legendre Transforms In Chemical Thermodynamics, Robert A. Alberty, Pure Appl.Chem., 73, No.8, 2001, 1349–1380 (400 kB PDF, here).
- ^ Pressure also must be in absolute terms. The air still in a tire at 0 kPa-gage expands too as it gets hotter. It’s not uncommon for engineers to overlook that one must work in terms of absolute pressure when compensating for temperature. For instance, a dominant manufacturer of aircraft tires published a document on temperature-compensating tire pressure, which used gage pressure in the formula. However, the high gage pressures involved (180 psi ≈ 12.4 bar) mean that the error would be quite small. With low-pressure automobile tires, where gage pressures are typically around 2 bar, failing to adjust to absolute pressure results in a significant error. Referenced document: Aircraft Tire Ratings (155 kB PDF, here).
- ^ Regarding the spelling “gage” vs. “gauge” in the context of pressures measured relative to atmospheric pressure, the preferred spelling varies by country and even by industry. Further, both spellings are often used within a particular industry or country. Industries in British English-speaking countries typically use the spelling “gauge pressure” to distinguish it from the pressure-measuring instrument, which in the U.K., is spelled “pressure gage.” For the same reason, many of the largest American manufacturers of pressure transducers and instrumentation use the spelling “gage pressure”—the convention used here—in their formal documentation to distinguish it from the instrument, which is spelled “pressure gauge.” (see Honeywell-Sensotec’s FAQ page and Fluke Corporation’s product search page).
- ^ A difference of 100 kPa is used here instead of the 101.325 kPa value of one standard atmosphere. In 1982, the International Union of Pure and Applied Chemistry (IUPAC) recommended that for the purposes of specifying the physical properties of substances, “the standard pressure” (atmospheric pressure) should be defined as precisely 100 kPa (≈750.062 Torr). Besides being a round number, this had a very practical effect: relatively few people live and work at precisely sea level; 100 kPa equates to the mean pressure at an altitude of about 112 meters, which is closer to the 194–meter, worldwide median altitude of human habitation. For especially low-pressure or high-accuracy work, true atmospheric pressure must be measured. Citation: IUPAC.org, Gold Book, Standard Pressure
- ^ The deepest ocean depths (3 to 10 km) are no colder than about 274.7 – 275.7 K (1.5 – 2.5 °C). Even the world-record cold surface temperature established on July 21, 1983 at Vostok Station, Antarctica is 184 K (a reported value of −89.2 °C). The residual heat of gravitational contraction left over from earth’s formation, tidal friction, and the decay of radioisotopes in earth’s core provide insufficient heat to maintain earth’s surface, oceans, and atmosphere “substantially above” absolute zero in this context. Also, the qualification of “most-everything” provides for the exclusion of lava flows, which derive their temperature from these deep-earth sources of heat.
- ^ Citations: Thermodynamics-information.net, A Brief History of Temperature Measurement and; Uppsala University (Sweden), Linnaeus’ thermometer
- ^ a b According to The Oxford English Dictionary (OED), the term “Celsius’s thermometer” had been used at least as early as 1797. Further, the term “The Celsius or Centigrade thermometer” was again used in reference to a particular type of thermometer at least as early as 1850. The OED also cites this 1928 reporting of a temperature: “My altitude was about 5,800 metres, the temperature was 28° Celsius.” However, dictionaries seek to find the earliest use of a word or term and are not a useful resource as regards the terminology used throughout the history of science. According to several writings of Dr. Terry Quinn CBE FRS, Director of the BIPM (1988 – 2004), including Temperature Scales from the early days of thermometry to the 21st century (148 kB PDF, here) as well as Temperature (2nd Edition / 1990 / Academic Press / 0125696817), the term Celsius in connection with the centigrade scale was not used whatsoever by the scientific or thermometry communities until after the CIPM and CGPM adopted the term in 1948. The BIPM wasn’t even aware that “degree Celsius” was in sporadic, non-scientific use before that time. It’s also noteworthy that the twelve-volume, 1933 edition of OED didn’t even have a listing for the word Celsius (but did have listings for both centigrade and centesimal in the context of temperature measurement). The 1948 adoption of Celsius accomplished three objectives:
- 1) All common temperature scales would have their units named after someone closely associated with them; namely, Kelvin, Celsius, Fahrenheit, Réaumur and Rankine.
2) Notwithstanding the important contribution of Linnaeus who gave the Celsius scale its modern form, Celsius’s name was the obvious choice because it began with the letter C. Thus, the symbol °C that for centuries had been used in association with the name centigrade could continue to be used and would simultaneously inherit an intuitive association with the new name.
3) The new name eliminated the ambiguity of the term “centigrade,” freeing it to refer exclusively to the French-language name for the unit of angular measurement.
- Kinetic Molecular Theory of Gases. An excellent explanation (with interactive animations) of the kinetic motion of molecules and how it affects matter. By David N. Blauch, Department of Chemistry, Davidson College.
- Zero Point Energy and Zero Point Field. A Web site with in-depth explanations of a variety of quantum effects. By Bernard Haisch, of Calphysics Institute.
Introduction to Serial Communications
Learn the basic principles of serial communication. This page also contains basic connector pinouts, recommended cable lengths and other useful information.
Many PCs and compatible computers are equipped with two serial ports and one parallel port. Although these two types of ports are used for communicating with external devices, they work in different ways.
A parallel port sends and receives data eight bits at a time over 8 separate wires. This allows data to be transferred very quickly; however, the cable required is more bulky because of the number of individual wires it must contain. Parallel ports are typically used to connect a PC to a printer and are rarely used for much else. A serial port sends and receives data one bit at a time over one wire. While it takes eight times as long to transfer each byte of data this way, only a few wires are required. In fact, two-way (full duplex) communications is possible with only three separate wires - one to send, one to receive, and a common signal ground wire.
- Bi-directional Communications
- Communicating by Bits
- The Parity Bit
- DCE and DTE Devices
- 9 to 25 Pin Adapters
- Baud vs. Bits per Second
- Cables, Null Modems, and Gender Changers
- Cable Lengths
- Gender Changers
- Null Modem Cables and Null Modem Adaptors
- Synchronous and Asynchronous Communications
The serial port on your PC is a full-duplex device meaning that it can send and receive data at the same time. In order to be able to do this, it uses separate lines for transmitting and receiving data. Some types of serial devices support only one-way communications and therefore use only two wires in the cable - the transmit line and the signal ground.
Each character frame begins with a start bit (a space, or logic 0, condition) that tells the receiver a new character is on its way. Once the start bit has been sent, the transmitter sends the actual data bits. There may either be 5, 6, 7, or 8 data bits, depending on the number you have selected. Both the receiver and the transmitter must agree on the number of data bits, as well as the baud rate. Almost all devices transmit data using either 7 or 8 data bits.
Notice that when only 7 data bits are employed, you cannot send ASCII values greater than 127. Likewise, using 5 bits limits the highest possible value to 31. After the data has been transmitted, a stop bit is sent. A stop bit has a value of 1 - or a mark state - and it can be detected correctly even if the previous data bit also had a value of 1. This is accomplished by the stop bit's duration. Stop bits can be 1, 1.5, or 2 bit periods in length.
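A minimal sketch of the arithmetic behind those limits and of the total cost of framing each character on the wire (the settings chosen are illustrative):

```python
# Sketch: how the choice of data bits limits the values a character can carry,
# and how many bits each framed character costs on the wire.
def max_value(data_bits: int) -> int:
    return 2 ** data_bits - 1

def frame_bits(data_bits: int = 8, parity: bool = False, stop_bits: float = 1) -> float:
    # 1 start bit, the data bits, an optional parity bit, then 1, 1.5, or 2 stop bits
    return 1 + data_bits + (1 if parity else 0) + stop_bits

print(max_value(7), max_value(5))               # 127 and 31, as noted above
print(frame_bits(8, parity=True, stop_bits=1))  # 11 bits on the wire per 8-bit byte
```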
Besides the synchronization provided by the use of start and stop bits, an additional bit called a parity bit may optionally be transmitted along with the data. A parity bit affords a small amount of error checking, to help detect data corruption that might occur during transmission. You can choose either even parity, odd parity, mark parity, space parity or none at all. When even or odd parity is being used, the number of marks (logical 1 bits) in each data byte is counted, and a single bit is transmitted following the data bits to indicate whether the number of 1 bits just sent is even or odd.
For example, when even parity is chosen, the parity bit is transmitted with a value of 0 if the number of preceding marks is an even number. For the binary value of 0110 0011 the parity bit would be 0. If even parity were in effect and the binary number 1101 0110 were sent, then the parity bit would be 1. Odd parity is just the opposite, and the parity bit is 0 when the number of mark bits in the preceding word is an odd number. Parity error checking is very rudimentary. While it will tell you if there is a single bit error in the character, it doesn't show which bit was received in error. Also, if an even number of bits are in error then the parity bit would not reflect any error at all.
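A small sketch that reproduces the two worked examples above:

```python
# Sketch: computing the transmitted parity bit for even and odd parity.
def parity_bit(data: int, even: bool = True) -> int:
    ones = bin(data).count("1")
    if even:
        return 0 if ones % 2 == 0 else 1   # make the total count of 1 bits even
    return 1 if ones % 2 == 0 else 0       # make the total count of 1 bits odd

print(parity_bit(0b01100011, even=True))   # 0 - four 1 bits, already an even count
print(parity_bit(0b11010110, even=True))   # 1 - five 1 bits, so the parity bit tops it up
print(parity_bit(0b11010110, even=False))  # 0 - odd parity is just the opposite
```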
Mark parity means that the parity bit is always set to the mark signal condition and likewise space parity always sends the parity bit in the space signal condition. Since these two parity options serve no useful purpose whatsoever, they are almost never used.
RS-232 stands for Recommended Standard number 232 and C is the latest revision of the standard. The serial ports on most computers use a subset of the RS-232C standard. The full RS-232C standard specifies a 25-pin "D" connector of which 22 pins are used. Most of these pins are not needed for normal PC communications, and indeed, most new PCs are equipped with male D type connectors having only 9 pins.
Two terms you should be familiar with are DTE and DCE. DTE stands for Data Terminal Equipment, and DCE stands for Data Communications Equipment. These terms are used to indicate the pin-out for the connectors on a device and the direction of the signals on the pins. Your computer is a DTE device, while most other devices are usually DCE devices.
If you have trouble keeping the two straight then replace the term "DTE device" with "your PC" and the term "DCE device" with "remote device" in the following discussion.
The RS-232 standard states that DTE devices use a 25-pin male connector, and DCE devices use a 25-pin female connector. You can therefore connect a DTE device to a DCE using a straight pin-for-pin connection. However, to connect two like devices, you must instead use a null modem cable. Null modem cables cross the transmit and receive lines in the cable, and are discussed later in this chapter. The listing below shows the connections and signal directions for both 25 and 9-pin connectors.
|25 Pin Connector on a DTE device (PC connection)|
|Male RS232 DB25|
|Pin Number||Direction of signal:|
|2||Transmitted Data (TD) Outgoing Data (from a DTE to a DCE)|
|3||Received Data (RD) Incoming Data (from a DCE to a DTE)|
|4||Request To Send (RTS) Outgoing flow control signal controlled by DTE|
|5||Clear To Send (CTS) Incoming flow control signal controlled by DCE|
|6||Data Set Ready (DSR) Incoming handshaking signal controlled by DCE|
|7||Signal Ground Common reference voltage|
|8||Carrier Detect (CD) Incoming signal from a modem|
|20||Data Terminal Ready (DTR) Outgoing handshaking signal controlled by DTE|
|22||Ring Indicator (RI) Incoming signal from a modem|
|9 Pin Connector on a DTE device (PC connection)|
|Male RS232 DB9|
|Pin Number||Direction of signal:|
|1||Carrier Detect (CD) (from DCE) Incoming signal from a modem|
|2||Received Data (RD) Incoming Data from a DCE|
|3||Transmitted Data (TD) Outgoing Data to a DCE|
|4||Data Terminal Ready (DTR) Outgoing handshaking signal|
|5||Signal Ground Common reference voltage|
|6||Data Set Ready (DSR) Incoming handshaking signal|
|7||Request To Send (RTS) Outgoing flow control signal|
|8||Clear To Send (CTS) Incoming flow control signal|
|9||Ring Indicator (RI) (from DCE) Incoming signal from a modem|
The TD (transmit data) wire is the one through which data from a DTE device is transmitted to a DCE device. This name can be deceiving, because this wire is used by a DCE device to receive its data. The TD line is kept in a mark condition by the DTE device when it is idle. The RD (receive data) wire is the one on which data is received by a DTE device, and the DCE device keeps this line in a mark condition when idle.
RTS stands for Request To Send. This line and the CTS line are used when "hardware flow control" is enabled in both the DTE and DCE devices. The DTE device puts this line in a mark condition to tell the remote device that it is ready and able to receive data. If the DTE device is not able to receive data (typically because its receive buffer is almost full), it will put this line in the space condition as a signal to the DCE to stop sending data. When the DTE device is ready to receive more data (i.e. after data has been removed from its receive buffer), it will place this line back in the mark condition. The complement of the RTS wire is CTS, which stands for Clear To Send. The DCE device puts this line in a mark condition to tell the DTE device that it is ready to receive the data. Likewise, if the DCE device is unable to receive data, it will place this line in the space condition. Together, these two lines make up what is called RTS/CTS or "hardware" flow control. The Software Wedge supports this type of flow control, as well as Xon/XOff or "software" flow control. Software flow control uses special control characters transmitted from one device to another to tell the other device to stop or start sending data. With software flow control the RTS and CTS lines are not used.
DTR stands for Data Terminal Ready. Its intended function is very similar to the RTS line. DSR (Data Set Ready) is the companion to DTR in the same way that CTS is to RTS. Some serial devices use DTR and DSR as signals to simply confirm that a device is connected and is turned on. The Software Wedge sets DTR to the mark state when the serial port is opened and leaves it in that state until the port is closed. The DTR and DSR lines were originally designed to provide an alternate method of hardware handshaking. It would be pointless to use both RTS/CTS and DTR/DSR for flow control signals at the same time. Because of this, DTR and DSR are rarely used for flow control.
CD stands for Carrier Detect. Carrier Detect is used by a modem to signal that it has made a connection with another modem, or has detected a carrier tone.
The last remaining line is RI or Ring Indicator. A modem toggles the state of this line when an incoming call rings your phone.
The Carrier Detect (CD) and the Ring Indicator (RI) lines are only available in connections to a modem. Because most modems transmit status information to a PC when either a carrier signal is detected (i.e. when a connection is made to another modem) or when the line is ringing, these two lines are rarely used.
The following table shows the connections inside a standard 9 pin to 25 pin adapter.
|9 Pin Connector||25 Pin Connector|
|Pin 1 DCD||Pin 8 DCD|
|Pin 2 RD||Pin 3 RD|
|Pin 3 TD||Pin 2 TD|
|Pin 4 DTR||Pin 20 DTR|
|Pin 5 GND||Pin 7 GND|
|Pin 6 DSR||Pin 6 DSR|
|Pin 7 RTS||Pin 4 RTS|
|Pin 8 CTS||Pin 5 CTS|
|Pin 9 RI||Pin 22 RI|
The baud unit is named after Jean Maurice Emile Baudot, who was an officer in the French Telegraph Service. He is credited with devising the first uniform-length 5-bit code for characters of the alphabet in the late 19th century. What baud really refers to is modulation rate or the number of times per second that a line changes state. This is not always the same as bits per second (BPS). If you connect two serial devices together using direct cables then baud and BPS are in fact the same. Thus, if you are running at 19200 BPS, then the line is also changing states 19200 times per second. But when considering modems, this isn't the case.
Because modems transfer signals over a telephone line, the baud rate is actually limited to a maximum of 2400 baud. This is a physical restriction of the lines provided by the phone company. The increased data throughput achieved with 9600 or higher baud modems is accomplished by using sophisticated phase modulation, and data compression techniques.
In a perfect world, all serial ports on every computer would be DTE devices with 25-pin male "D" connectors. All other devices would be DCE devices with 25-pin female connectors. This would allow you to use a cable in which each pin on one end of the cable is connected to the same pin on the other end. Unfortunately, we don't live in a perfect world. Serial ports use both 9 and 25 pins, many devices can be configured as either DTE or DCE, and - as in the case of many data collection devices - may use completely non-standard or proprietary pin-outs. Because of this lack of standardization, special cables called null modem cables, gender changers, and custom-made cables are often required.
The RS-232C standard imposes a cable length limit of 50 feet. You can usually ignore this "standard," since a cable can be as long as 10,000 feet at baud rates up to 19200 if you use a high-quality, well-shielded cable. The external environment has a large effect on lengths for unshielded cables. In electrically noisy environments, even very short cables can pick up stray signals. The following chart offers some reasonable guidelines for 24 gauge wire under typical conditions. You can greatly extend the cable length by using additional devices like optical isolators and signal boosters. Optical isolators use LEDs and photodiodes to isolate each line in a serial cable, including the signal ground. Any electrical noise affects all lines in the optically isolated cable equally - including the signal ground line. This causes the voltages on the signal lines relative to the signal ground line to reflect the true voltage of the signal, thus canceling out the effect of any noise.
|Baud Rate||Shielded Cable Length||Unshielded Cable Length|
A problem you may encounter is having two connectors of the same gender that must be connected. You can purchase gender changers at any computer or office supply store for under $5.
Note: The parallel port on a PC uses a 25 pin female connector which sometimes causes confusion because it looks just like a serial port except that it has the wrong gender. Both 9 and 25 pin serial ports on a PC will always have a male connector.
If you connect two DTE devices (or two DCE devices) using a straight RS232 cable, then the transmit line on each device will be connected to the transmit line on the other device and the receive lines will likewise be connected to each other. A Null Modem cable or Null Modem adapter simply crosses the receive and transmit lines so that transmit on one end is connected to receive on the other end and vice versa. In addition to transmit and receive, DTR & DSR, as well as RTS & CTS are also crossed in a Null modem connection.
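One way to visualize the crossover is as a pin-to-pin mapping. The sketch below shows one common DB9 null modem wiring; it is illustrative only, since commercial null modem cables vary (some also loop RI, or tie DTR to both DSR and CD, for example).

```python
# One common DB9-to-DB9 null modem wiring, expressed as a pin mapping.
# This is an illustrative variant, not the only one sold commercially.
NULL_MODEM_DB9 = {
    3: 2,  # TD  -> RD   transmit crossed to receive
    2: 3,  # RD  -> TD
    7: 8,  # RTS -> CTS  hardware flow control crossed
    8: 7,  # CTS -> RTS
    4: 6,  # DTR -> DSR  handshaking crossed
    6: 4,  # DSR -> DTR
    5: 5,  # Signal Ground straight through
}

for near, far in sorted(NULL_MODEM_DB9.items()):
    print(f"pin {near} on one connector -> pin {far} on the other")
```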
Null modem adapters are available at most computer and office supply stores for under $5.
There are two basic types of serial communications, synchronous and asynchronous. With Synchronous communications, the two devices initially synchronize themselves to each other, and then continually send characters to stay in sync. Even when data is not really being sent, a constant flow of bits allows each device to know where the other is at any given time. That is, each character that is sent is either actual data or an idle character. Synchronous communications allows faster data transfer rates than asynchronous methods, because additional bits to mark the beginning and end of each data byte are not required. The serial ports on IBM-style PCs are asynchronous devices and therefore only support asynchronous serial communications.
Asynchronous means "no synchronization," and thus does not require sending and receiving idle characters. However, the beginning and end of each byte of data must be identified by start and stop bits. The start bit indicates when the data byte is about to begin, and the stop bit signals when it ends. The requirement to send these two additional bits causes asynchronous communications to be slightly slower than synchronous; however, it has the advantage that the processor does not have to deal with the additional idle characters.
An asynchronous line that is idle is identified with a value of 1, (also called a mark state). By using this value to indicate that no data is currently being sent, the devices are able to distinguish between an idle state and a disconnected line. When a character is about to be transmitted, a start bit is sent. A start bit has a value of 0, (also called a space state). Thus, when the line switches from a value of 1 to a value of 0, the receiver is alerted that a data character is about to come down the line.
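Because each asynchronous character carries framing bits, the character throughput is lower than the raw bit rate. The short sketch below estimates characters per second for a given line speed, assuming the common 8N1 format (1 start bit, 8 data bits, no parity, 1 stop bit).

```python
# Estimate character throughput for an asynchronous link.
# Assumes a direct cable, where baud and bits per second are the same.
def chars_per_second(baud: int, data_bits: int = 8, parity_bits: int = 0, stop_bits: int = 1) -> float:
    bits_per_char = 1 + data_bits + parity_bits + stop_bits  # start bit + data + parity + stop
    return baud / bits_per_char

print(chars_per_second(19200))  # 1920.0 characters per second at 19200 baud with 8N1
```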
http://www.taltech.com/support/entry/serial_intro | 13
125 | Mechanics: Vectors and Projectiles
Vectors and Projectiles: Problem Set Overview
This set of 34 problems targets your ability to perform basic vector operations such as vector addition and vector resolution, to use right angle trigonometry and vector addition principles to analyze physical situations involving displacement vectors, and to combine a conceptual understanding of projectile motion with an ability to use kinematic equations in order to solve horizontally and non-horizontally launched projectile problems. Problems range in difficulty from the very easy and straightforward to the very difficult and complex. The more difficult problems are color-coded as blue problems.
Direction: The Counter-Clockwise From East Convention
A vector is a quantity which has magnitude and direction. The direction can be described as being east, west, north, or south using the typical map convention. Most of us are familiar with the map convention for the direction of a vector. On a map, up on the page is usually in the direction of north and to the right on the page is usually in the direction of east. In Physics, we utilize the map convention to express the direction of a vector. When a vector points in a direction other than due north, south, east, or west, an additional convention must be used. One convention commonly used for expressing the direction of vectors is the counter-clockwise from east convention (CCW). The direction of a vector is represented as the counter-clockwise angle of rotation which the vector makes with due east.
A motion often involves several segments or legs. For instance, a person in a maze makes several individual displacements in order to end up some distance out of place from the starting position. Such individual displacement vectors can be added using a head-to-tail method of vector addition. If adding vector B to vector A, then vector A should first be drawn; then vector B should be added to it by drawing it so that the tail of vector B starts at the location where the head of vector A ends. The resultant vector is then drawn from the tail of A (starting point) to the head of B (finishing point). The resultant is equivalent to the sum of the individual vectors. In this set of problems, you will have to be able to read the word story problem and sketch an appropriate vector addition diagram.
Adding Right Angle Vectors
Two vectors which are added at right angles to each other will sum to a resultant vector which is the hypotenuse of a right triangle. The Pythagorean theorem can be used to relate the magnitude of the hypotenuse to the magnitudes of the other two sides of the triangle. The angles within the right triangle can be determined from knowledge of the length of the sides using trigonometric functions. The mnemonic SOH CAH TOA can help one remember how the lengths of the opposite, adjacent and hypotenuse sides of the right triangle are related to the angle value.
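A short sketch of this calculation (hypothetical numbers, written in Python) shows the Pythagorean theorem and the tangent function working together to give the magnitude and the counter-clockwise-from-east direction of the resultant:

```python
import math

# Resultant of two vectors added at right angles (e.g., an eastward and a northward displacement).
def add_right_angle_vectors(east: float, north: float) -> tuple[float, float]:
    magnitude = math.hypot(east, north)                # Pythagorean theorem: sqrt(east**2 + north**2)
    angle_ccw = math.degrees(math.atan2(north, east))  # direction, counter-clockwise from east
    return magnitude, angle_ccw

# 11.0 km east plus 11.0 km north -> about 15.6 km at 45 degrees CCW from east
print(add_right_angle_vectors(11.0, 11.0))
```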
Resolving an Angled Vector into Right Angle Components
If one of the vectors to be added is not directed due east, west, north or south, then vector resolution can be employed in order to simplify the addition process. Any vector which makes an angle to one of the axes can be projected onto the axes to determine its components. Trigonometric functions (remembered by SOH CAH TOA) can be used to resolve such a vector and to determine the magnitudes of its x- and y-components. By resolving an angled vector into x- and y-components, the components of the vector can be substituted for the actual vector itself and used in solving a vector addition diagram. The resolution of angled vectors into x- and y-components allows a student to determine the magnitudes of the sides of the resultant vector by summing up all the east-west and north-south components.
Relative Velocity Situations
Often an object moves within a medium which is itself moving relative to its surroundings. For instance, a plane moves through air which (due to winds) is moving relative to the land below. And a boat moves through water which (due to currents) is moving relative to the land on the shore. In such situations, an observer on land will observe the plane or the boat to move at a different velocity than an observer in the boat or the plane would observe. It's a matter of reference frame. One's perception of a motion is dependent upon one's reference frame - whether the person is in the boat, the plane, or on land.
In a relative velocity problem, information is typically stated about the motion of the plane relative to the air (plane velocity) or the motion of the boat relative to the water (boat velocity). And information about the motion of the air relative to the ground (wind velocity or air velocity) or the motion of the water relative to the shore (water velocity or river velocity) is typically stated. The problem centers around relating these two components of the plane or boat motion to the resulting velocity. The resulting velocity of the plane or boat relative to the land is simply the vector sum of the plane or boat velocity and the wind or river velocity.
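For example, the classic riverboat situation reduces to a right-triangle calculation. The numbers below are hypothetical and are only meant to show how the boat velocity and the river velocity combine into the resultant velocity observed from the shore:

```python
import math

# A boat heads straight across a river while the current carries it downstream.
boat_speed = 4.0    # m/s, boat relative to the water (hypothetical value)
river_speed = 3.0   # m/s, water relative to the shore (hypothetical value)

resultant_speed = math.hypot(boat_speed, river_speed)            # 5.0 m/s relative to the shore
drift_angle = math.degrees(math.atan2(river_speed, boat_speed))  # ~36.9 degrees off the intended heading

print(resultant_speed, drift_angle)
```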
The approach to such problems demands a careful reading (and re-reading) of the problem statement and a careful sketch of the physical situation. Efforts must be made to avoid misinterpreting the physical situation. Once properly set up, the algebraic manipulations become relatively simple and straightforward. The crux of the problem is typically associated with reading, interpreting, and understanding the problem statement.
A projectile is an object upon which the only force of influence is the force of gravity. As a projectile moves through the air, its trajectory is affected by the force of gravity; air resistance is assumed to have a negligible effect upon the motion. Because gravity is the only force, the acceleration of a projectile is the acceleration of gravity - 9.8 m/s/s, down. As such, projectiles travel along their trajectory with a constant horizontal velocity and a changing vertical velocity. The vertical velocity changes by -9.8 m/s each second. (Here the - sign indicates that an upward velocity value would be decreasing and a downward velocity value would be increasing.)
A projectile has a motion which is both horizontal and vertical at the same time. These two components of motion can be described by kinematic equations. Since perpendicular components of motion are independent of each other, any motion in the horizontal direction is unaffected by a motion in a vertical direction (and vice versa). As such, two separate sets of equations are used to describe the horizontal and the vertical components of a projectile's motion. These equations are described below.
The VoxVoy Equations
Projectile problems in this set of problems can be divided into two types - those which are launched in a strictly horizontal direction and those which are launched at an angle to the horizontal. A horizontally launched projectile has an original velocity which is directed only horizontally; there is no vertical component to the original velocity. It is sometimes said that voy = 0 m/s for such problems. (The voy is the y-component of the original velocity.)
A non-horizontally launched projectile (or angle-launched projectile) is a projectile which is launched at an angle to the horizontal. Such a projectile has both a horizontal and a vertical component to its original velocity. The magnitudes of the horizontal and vertical components of the original velocity can be calculated from knowledge of the original velocity and the angle of launch (theta or Θ) using trigonometric functions.
The quantities vox and voy are the x- and y-components of the original velocity. The values of vox and voy are related to the original velocity (vo) and the angle of launch (Θ). Here the angle of launch is defined as the angle with respect to the horizontal. This relationship is expressed in the equations shown below.
vox = vo • cos(Θ)
voy = vo • sin(Θ)
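A minimal sketch of these component calculations, using a hypothetical launch of 25.0 m/s at 60.0 degrees above the horizontal:

```python
import math

# Resolve the original velocity of an angle-launched projectile into components.
def launch_components(v_o: float, theta_degrees: float) -> tuple[float, float]:
    theta = math.radians(theta_degrees)
    v_ox = v_o * math.cos(theta)   # horizontal component
    v_oy = v_o * math.sin(theta)   # vertical component
    return v_ox, v_oy

print(launch_components(25.0, 60.0))  # (12.5, ~21.65) m/s for a 25.0 m/s launch at 60 degrees
```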
The Known and Unknown Variables
It is suggested that you utilize an x-y table to organize your known and unknown information. An x-y table lists kinematic quantities in terms of horizontal and vertical components of motion. The horizontal displacement, initial horizontal velocity, and horizontal acceleration are all listed in the same column. A separate column is used for the vertical components of displacement, initial velocity and acceleration. In this problem set, you will have to give attention to the following kinematic quantities and their corresponding symbols.
|horizontal displacement||x or dx||vertical displacement||y or dy|
|original horizontal velocity||vox||original vertical velocity||voy|
|horizontal acceleration||ax||vertical acceleration||ay|
|final horizontal velocity||vfx||final vertical velocity||vfy|
Given these symbols for the basic kinematic quantities, an x-y table for a projectile problem would have the following form:
x = __________________
vox = __________________
ax = __________________
vfx = __________________
t = __________________
y = __________________
voy = __________________
ay = __________________
vfy = __________________
t = __________________
Of the nine quantities listed above, eight are vectors which have a specific direction associated with them. Time is the only quantity which is a scalar. As a scalar, time can be listed in an x-y table in either the horizontal or the vertical columns. In a sense, time is the one quantity which bridges the gap between the two columns. While horizontal and vertical components of motion are independent of each other, both types of quantities are dependent upon time. This is best illustrated when inspecting the kinematic equations which are used to solve projectile motion problems.
If the understanding that a projectile is an object upon which the only force is gravity is applied to these projectile situations, then it is clear that there is no horizontal acceleration. Gravity only accelerates projectiles vertically, so the horizontal acceleration is 0 m/s/s. Any term containing the ax variable will thus cancel, and the horizontal motion equations that contain ax reduce to simpler forms (most usefully, x = vox • t).
Trajectory Diagram and Characteristics
Non-horizontally launched projectiles (or angle-launched projectiles) move horizontally above the ground as they move upward and downward through the air. One special case is a projectile which is launched from ground level, moves upward toward a peak position, and subsequently falls from the peak position back to the ground. A trajectory diagram is often used to depict the motion of such a projectile, showing the path of the projectile and the components of its velocity at regular time intervals.
The vx and vy vectors in the diagram represent the horizontal and vertical components of the velocity at each instant during the trajectory. A careful inspection shows that the vx values remain constant throughout the trajectory. The vy values decrease as the projectile rises from its initial location towards the peak position. As the projectile falls from its peak position back to the ground, the vy values increase. In other words, the projectile slows down as it rises upward and speeds up as it falls downward. This information is consistent with the definition of a projectile - an object whose motion is influenced solely by the force of gravity; such an object will experience a vertical acceleration only.
At least three other principles are observed in the trajectory diagram which apply to this special case of an angle-launched projectile problem.
The time for a projectile to rise to the peak is equal to the time for it to fall from the peak back to the ground. The total time (ttotal) is thus the time up (tup) to the peak multiplied by two:
ttotal = 2 • tup
At the peak of the trajectory, there is no vertical velocity for a projectile. The equation vfy = voy + ay • t can be applied to the first half of the trajectory of the projectile. In such a case, t represents tup and the vfy at this instant in time is 0 m/s. By substituting and re-arranging, the following derivation is performed.
vfy = voy + ay • t
0 m/s = voy + (-9.8 m/s/s) • tup
tup = voy / (9.8 m/s/s)
The projectile strikes the ground with a vertical velocity which is equal in magnitude to the vertical velocity with which it left the ground. That is,
vfy = voy
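These three principles can be checked numerically. The sketch below uses a hypothetical upward velocity component of 21.7 m/s for a projectile launched from and returning to ground level:

```python
# Peak time, total flight time, and final vertical velocity for a ground-to-ground projectile.
g = 9.8        # m/s/s, magnitude of the acceleration of gravity
v_oy = 21.7    # m/s, original vertical velocity (hypothetical value)

t_up = v_oy / g        # from 0 = voy + (-9.8 m/s/s) * tup
t_total = 2 * t_up     # rise time equals fall time for this special case
v_fy = -v_oy           # lands with the same vertical speed it left with, directed downward

print(t_up, t_total, v_fy)   # ~2.21 s up, ~4.43 s total, -21.7 m/s on landing
```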
The Basic Strategy
The basic approach to solving projectile problems involves reading the problem carefully and visualizing the physical situation. A well-constructed diagram is often a useful means of visualizing the situation. Then list and organize all known and unknown information in terms of the symbols used in the projectile motion equations. An x-y table is a useful organizing scheme for listing such information. Inspect all known quantities, looking for either three pieces of horizontal information or three pieces of vertical information. Since all kinematic equations list four variables, knowledge of three variables allows you to determine the value of a fourth variable. For instance, if three pieces of vertical information are known, then the vertical equations can be used to determine a fourth (and a fifth) piece of vertical information. Oftentimes, the fourth piece of information is the time. In such instances, the time can then be combined with two pieces of horizontal information to calculate another horizontal variable using the horizontal motion equations.
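The following sketch applies that strategy to a hypothetical horizontally launched projectile (12.4 m/s off a 0.60 m high table); the three known vertical quantities give the time, and the time then bridges to the horizontal column:

```python
import math

# Horizontally launched projectile: use the vertical column to find t, then the horizontal column to find x.
v_ox = 12.4    # m/s, original horizontal velocity (voy = 0 m/s for a horizontal launch)
y = -0.60      # m, vertical displacement (downward taken as negative; hypothetical table height)
a_y = -9.8     # m/s/s, vertical acceleration

# y = voy*t + 0.5*ay*t^2 with voy = 0  ->  t = sqrt(2*y/ay)
t = math.sqrt(2 * y / a_y)

# x = vox*t  (the ax terms drop out because ax = 0 m/s/s)
x = v_ox * t

print(t, x)    # ~0.35 s of flight and ~4.3 m of horizontal displacement
```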
Habits of an Effective Problem-Solver
An effective problem solver by habit approaches a physics problem in a manner that reflects a collection of disciplined habits. While not every effective problem solver employs the same approach, they all have habits which they share in common. These habits are described briefly here. An effective problem-solver...
- ...reads the problem carefully and develops a mental picture of the physical situation. If needed, they sketch a simple diagram of the physical situation to help visualize it.
- ...identifies the known and unknown quantities in an organized manner, oftentimes recording them on the diagram itself. They equate given values to the symbols used to represent the corresponding quantity (e.g., vox = 12.4 m/s, voy = 0.0 m/s, dx = 32.7 m, dy = ???).
- ...plots a strategy for solving for the unknown quantity; the strategy will typically center around the use of physics equations and be heavily dependent upon an understanding of physics principles.
- ...identifies the appropriate formula(s) to use, oftentimes writing them down. Where needed, they convert quantities into the proper units.
- ...performs substitutions and algebraic manipulations in order to solve for the unknown quantity.
Additional Readings/Study Aids:
The following pages from The Physics Classroom tutorial may serve to be useful in assisting you in the understanding of the concepts and mathematics associated with these problems.
- Vectors and Direction
- Vector Addition
- Vector Components
- Vector Resolution
- Relative Velocity and Riverboat Problems
- Independence of Perpendicular Components
- What is a Projectile?
- Characteristics of a Projectile's Trajectory
- Horizontal and Vertical Velocity Components
- Horizontal and Vertical Displacement
- Calculating Initial Velocity Components
- Horizontally Launched Projectiles Problems
- Non-Horizontally Launched Projectiles Problems
Problem Sets and Audio Guided Solutions
Vectors and Projectiles Problem Set
Vectors and Projectiles Audio Guided Solutions
View the audio guided solution for problem:
1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24 | 25 | 26 | 27 | 28 | 29 | 30 | 31 | 32 | 33 | 34 | http://www.physicsclassroom.com/calcpad/vecproj/ | 13 |
55 | The SI unit for measuring an electric current is the ampere, which is the flow of electric charges through a surface at the rate of one coulomb per second. Electric current can be measured using an ammeter.
Electric currents cause many effects, notably heating, but also induce magnetic fields, which are widely used for motors, inductors and generators.
The conventional symbol for current is I, which originates from the French phrase intensité de courant, or in English current intensity. This phrase is frequently used when discussing the value of an electric current, but modern practice often shortens this to simply current. The symbol I was used by André-Marie Ampère, after whom the unit of electric current is named, in formulating the eponymous Ampère's force law, which he discovered in 1820. The notation travelled from France to Britain, where it became standard, although at least one journal did not change from using C to I until 1896.
A flow of positive charges gives the same electric current, and has the same effect in a circuit, as an equal flow of negative charges in the opposite direction. Since current can be the flow of either positive or negative charges, or both, a convention for the direction of current which is independent of the type of charge carriers is needed. The direction of conventional current is defined arbitrarily to be the direction of the flow of positive charges.
In metals, which make up the wires and other conductors in most electrical circuits, the positive charges are immobile, and the charge carriers are electrons. Because the electron carries negative charge, the electron motion in a metal conductor is in the direction opposite to that of conventional (or electric) current.
When analyzing electrical circuits, the actual direction of current through a specific circuit element is usually unknown. Consequently, each circuit element is assigned a current variable with an arbitrarily chosen reference direction. This is usually indicated on the circuit diagram with an arrow next to the current variable. When the circuit is solved, the circuit element currents may have positive or negative values. A negative value means that the actual direction of current through that circuit element is opposite that of the chosen reference direction. In electronic circuits, the reference current directions are often chosen so that all currents are toward ground. This often matches conventional current direction, because in many circuits the power supply voltage is positive with respect to ground.
Ohm's law states that the current through a conductor between two points is directly proportional to the potential difference across the two points. Introducing the constant of proportionality, the resistance, one arrives at the usual mathematical equation that describes this relationship:
I = V / R
where I is the current through the conductor in units of amperes, V is the potential difference measured across the conductor in units of volts, and R is the resistance of the conductor in units of ohms. More specifically, Ohm's law states that the R in this relation is constant, independent of the current.
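As a trivial sketch of the relation (the values below are hypothetical):

```python
# Ohm's law: I = V / R
def current_amperes(voltage_volts: float, resistance_ohms: float) -> float:
    return voltage_volts / resistance_ohms

# 12 V across a 470-ohm resistor drives about 0.0255 A (25.5 mA) through it.
print(current_amperes(12.0, 470.0))
```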
AC and DC
Direct current (DC) is the unidirectional flow of electric charge. Direct current is produced by sources such as batteries, thermocouples, solar cells, and commutator-type electric machines of the dynamo type. Direct current may flow in a conductor such as a wire, but can also flow through semiconductors, insulators, or even through a vacuum as in electron or ion beams. The electric charge flows in a constant direction, distinguishing it from alternating current (AC). A term formerly used for direct current was galvanic current.
AC is the form in which electric power is delivered to businesses and residences. The usual waveform of an AC power circuit is a sine wave. In certain applications, different waveforms are used, such as triangular or square waves. Audio and radio signals carried on electrical wires are also examples of alternating current. In these applications, an important goal is often the recovery of information encoded (or modulated) onto the AC signal.
Human-caused occurrences of electric current include the flow of conduction electrons in metal wires, such as the overhead power lines that deliver electrical energy across long distances and the smaller wires within electrical and electronic equipment. Eddy currents are electric currents that occur in conductors exposed to changing magnetic fields. Similarly, electric currents occur, particularly in the surface of conductors exposed to electromagnetic waves. When oscillating electric currents flow at the correct voltages within radio antennas, radio waves are generated.
In electronics, other forms of electric current include the flow of electrons through resistors or through the vacuum in a vacuum tube, the flow of ions inside a battery or a neuron, and the flow of holes within a semiconductor.
Current can be measured using an ammeter.
At the circuit level, there are various techniques that can be used to measure current:
- Shunt resistors
- Hall effect current sensor transducers
- Transformers (however DC cannot be measured)
- Magnetoresistive field sensors
Joule heating, also known as ohmic heating and resistive heating, is the process by which the passage of an electric current through a conductor releases heat. It was first studied by James Prescott Joule in 1841. Joule immersed a length of wire in a fixed mass of water and measured the temperature rise due to a known current flowing through the wire for a 30 minute period. By varying the current and the length of the wire he deduced that the heat produced was proportional to the square of the current multiplied by the electrical resistance of the wire.
This relationship is known as Joule's First Law. The SI unit of energy was subsequently named the joule and given the symbol J. The commonly known unit of power, the watt, is equivalent to one joule per second.
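Joule's first law is easy to state in code. The numbers below are hypothetical and simply illustrate that the heat grows with the square of the current:

```python
# Joule heating: power P = I^2 * R (watts), energy Q = I^2 * R * t (joules).
def joule_heat(current_a: float, resistance_ohm: float, time_s: float) -> float:
    return current_a ** 2 * resistance_ohm * time_s

# 2 A through a 5-ohm wire dissipates 20 W; over 30 minutes that is 36,000 J.
print(joule_heat(2.0, 5.0, 30 * 60))
```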
Electric current produces a magnetic field. The magnetic field can be visualized as a pattern of circular field lines surrounding the wire that persists as long as the current flows.
Magnetism can also produce electric currents. When a changing magnetic field is applied to a conductor, an EMF is produced, and when there is a suitable path, this causes current to flow.
Electric current can be directly measured with a galvanometer, but this method involves breaking the electrical circuit, which is sometimes inconvenient. Current can also be measured without breaking the circuit by detecting the magnetic field associated with the current. Devices used for this include Hall effect sensors, current clamps, current transformers, and Rogowski coils.
The theory of special relativity allows one to transform the magnetic field into a static electric field for an observer moving at the same speed as the charges. The amount of current is particular to a reference frame.
Conduction mechanisms in various media
In metallic solids, electric charge flows by means of electrons, from lower to higher electrical potential. In other media, any stream of charged objects (ions, for example) may constitute an electric current. To provide a definition of current that is independent of the type of charge carriers flowing, conventional current is defined to flow in the same direction as positive charges. So in metals where the charge carriers (electrons) are negative, conventional current flows in the opposite direction as the electrons. In conductors where the charge carriers are positive, conventional current flows in the same direction as the charge carriers.
In a vacuum, a beam of ions or electrons may be formed. In other conductive materials, the electric current is due to the flow of both positively and negatively charged particles at the same time. In still others, the current is entirely due to positive charge flow. For example, the electric currents in electrolytes are flows of positively and negatively charged ions. In a common lead-acid electrochemical cell, electric currents are composed of positive hydrogen ions (protons) flowing in one direction, and negative sulfate ions flowing in the other. Electric currents in sparks or plasma are flows of electrons as well as positive and negative ions. In ice and in certain solid electrolytes, the electric current is entirely composed of flowing ions.
A solid conductive metal contains mobile, or free electrons, originating in the conduction electrons. These electrons are bound to the metal lattice but no longer to an individual atom. Metals are particularly conductive because there are a large number of these free electrons, typically one per atom in the lattice. Even with no external electric field applied, these electrons move about randomly due to thermal energy but, on average, there is zero net current within the metal. At room temperature, the average speed of these random motions is 10^6 metres per second. Given a surface through which a metal wire passes, electrons move in both directions across the surface at an equal rate. As George Gamow put it in his science-popularizing book, One, Two, Three...Infinity (1947), "The metallic substances differ from all other materials by the fact that the outer shells of their atoms are bound rather loosely, and often let one of their electrons go free. Thus the interior of a metal is filled up with a large number of unattached electrons that travel aimlessly around like a crowd of displaced persons. When a metal wire is subjected to electric force applied on its opposite ends, these free electrons rush in the direction of the force, thus forming what we call an electric current."
When a metal wire is connected across the two terminals of a DC voltage source such as a battery, the source places an electric field across the conductor. The moment contact is made, the free electrons of the conductor are forced to drift toward the positive terminal under the influence of this field. The free electrons are therefore the charge carrier in a typical solid conductor.
For a steady flow of charge through a surface, the current I (in amperes) can be calculated with the following equation:
I = Q / t
where Q is the electric charge (in coulombs) transferred through the surface over a time t (in seconds).
More generally, electric current can be represented as the rate at which charge flows through a given surface:
I = dQ/dt
Electric currents in electrolytes are flows of electrically charged particles (ions). For example, if an electric field is placed across a solution of Na+ and Cl− (and conditions are right) the sodium ions move towards the negative electrode (cathode), while the chloride ions move towards the positive electrode (anode). Reactions take place at both electrode surfaces, absorbing each ion.
Water-ice and certain solid electrolytes called proton conductors contain positive hydrogen ions or "protons" which are mobile. In these materials, electric currents are composed of moving protons, as opposed to the moving electrons found in metals.
In certain electrolyte mixtures, brightly coloured ions are the moving electric charges. The slow progress of the colour makes the current visible.
Gases and plasmas
In air and other ordinary gases below the breakdown field, the dominant source of electrical conduction is via relatively few mobile ions produced by radioactive gases, ultraviolet light, or cosmic rays. Since the electrical conductivity is low, gases are dielectrics or insulators. However, once the applied electric field approaches the breakdown value, free electrons become sufficiently accelerated by the electric field to create additional free electrons by colliding, and ionizing, neutral gas atoms or molecules in a process called avalanche breakdown. The breakdown process forms a plasma that contains enough mobile electrons and positive ions to make it an electrical conductor. In the process, it forms a light emitting conductive path, such as a spark, arc or lightning.
Plasma is the state of matter where some of the electrons in a gas are stripped or "ionized" from their molecules or atoms. A plasma can be formed by high temperature, or by application of a high electric or alternating magnetic field as noted above. Due to their lower mass, the electrons in a plasma accelerate more quickly in response to an electric field than the heavier positive ions, and hence carry the bulk of the current. The free ions recombine to create new chemical compounds (for example, breaking atmospheric oxygen into single oxygen [O2 → 2O], which then recombine creating ozone [O3]).
Since a "perfect vacuum" contains no charged particles, it normally behaves as a perfect insulator. However, metal electrode surfaces can cause a region of the vacuum to become conductive by injecting free electrons or ions through either field electron emission or thermionic emission. Thermionic emission occurs when the thermal energy exceeds the metal's work function, while field electron emission occurs when the electric field at the surface of the metal is high enough to cause tunneling, which results in the ejection of free electrons from the metal into the vacuum. Externally heated electrodes are often used to generate an electron cloud as in the filament or indirectly heated cathode of vacuum tubes. Cold electrodes can also spontaneously produce electron clouds via thermionic emission when small incandescent regions (called cathode spots or anode spots) are formed. These are incandescent regions of the electrode surface that are created by a localized high current flow. These regions may be initiated by field electron emission, but are then sustained by localized thermionic emission once a vacuum arc forms. These small electron-emitting regions can form quite rapidly, even explosively, on a metal surface subjected to a high electrical field. Vacuum tubes and sprytrons are some of the electronic switching and amplifying devices based on vacuum conductivity.
Superconductivity is a phenomenon of exactly zero electrical resistance and expulsion of magnetic fields occurring in certain materials when cooled below a characteristic critical temperature. It was discovered by Heike Kamerlingh Onnes on April 8, 1911 in Leiden. Like ferromagnetism and atomic spectral lines, superconductivity is a quantum mechanical phenomenon. It is characterized by the Meissner effect, the complete ejection of magnetic field lines from the interior of the superconductor as it transitions into the superconducting state. The occurrence of the Meissner effect indicates that superconductivity cannot be understood simply as the idealization of perfect conductivity in classical physics.
In a semiconductor it is sometimes useful to think of the current as due to the flow of positive "holes" (the mobile positive charge carriers that are places where the semiconductor crystal is missing a valence electron). This is the case in a p-type semiconductor. A semiconductor has electrical conductivity intermediate in magnitude between that of a conductor and an insulator. This means a conductivity roughly in the range of 10^-2 to 10^4 siemens per centimeter (S·cm^-1).
In the classic crystalline semiconductors, electrons can have energies only within certain bands (i.e. ranges of levels of energy). Energetically, these bands are located between the energy of the ground state, the state in which electrons are tightly bound to the atomic nuclei of the material, and the free electron energy, the latter describing the energy required for an electron to escape entirely from the material. The energy bands each correspond to a large number of discrete quantum states of the electrons, and most of the states with low energy (closer to the nucleus) are occupied, up to a particular band called the valence band. Semiconductors and insulators are distinguished from metals because the valence band in any given metal is nearly filled with electrons under usual operating conditions, while very few (semiconductor) or virtually none (insulator) of them are available in the conduction band, the band immediately above the valence band.
The ease with which electrons in the semiconductor can be excited from the valence band to the conduction band depends on the band gap between the bands. The size of this energy bandgap serves as an arbitrary dividing line (roughly 4 eV) between semiconductors and insulators.
With covalent bonds, an electron moves by hopping to a neighboring bond. The Pauli exclusion principle requires the electron to be lifted into the higher anti-bonding state of that bond. For delocalized states, for example in one dimension – that is in a nanowire, for every energy there is a state with electrons flowing in one direction and another state with the electrons flowing in the other. For a net current to flow, more states for one direction than for the other direction must be occupied. For this to occur, energy is required, as in the semiconductor the next higher states lie above the band gap. Often this is stated as: full bands do not contribute to the electrical conductivity. However, as the temperature of a semiconductor rises above absolute zero, there is more energy in the semiconductor to spend on lattice vibration and on exciting electrons into the conduction band. The current-carrying electrons in the conduction band are known as "free electrons", although they are often simply called "electrons" if context allows this usage to be clear.
Current density and Ohm's law
Current density is a measure of the density of an electric current. It is defined as a vector whose magnitude is the electric current per cross-sectional area. In SI units, the current density is measured in amperes per square metre.
I = ∫∫ J · dA
where I is current in the conductor, J is the current density, and dA is the differential cross-sectional area vector.
The current density J (current per unit area) in materials with finite resistance is directly proportional to the electric field E in the medium. The proportionality constant is called the conductivity σ of the material, whose value depends on the material concerned and, in general, is dependent on the temperature of the material:
J = σ • E
A diffusion current can also arise where the carrier concentration varies from place to place, J = e • D • ∇n, with e being the elementary charge, D the diffusion constant, and n the electron density. The carriers move in the direction of decreasing concentration, so for electrons a positive current results for a positive density gradient. If the carriers are holes, replace the electron density n by the negative of the hole density p.
In linear materials such as metals, and under low frequencies, the current density across the conductor surface is uniform. In such conditions, Ohm's law states that the current is directly proportional to the potential difference between two ends (across) of that metal (ideal) resistor (or other ohmic device):
I = V / R
where I is the current, measured in amperes; V is the potential difference, measured in volts; and R is the resistance, measured in ohms. For alternating currents, especially at higher frequencies, skin effect causes the current to spread unevenly across the conductor cross-section, with higher density near the surface, thus increasing the apparent resistance.
The mobile charged particles within a conductor move constantly in random directions, like the particles of a gas. In order for there to be a net flow of charge, the particles must also move together with an average drift rate. Electrons are the charge carriers in metals and they follow an erratic path, bouncing from atom to atom, but generally drifting in the opposite direction of the electric field. The speed at which they drift can be calculated from the equation:
I = n • A • v • Q
where
- I is the electric current
- n is the number of charged particles per unit volume (or charge carrier density)
- A is the cross-sectional area of the conductor
- v is the drift velocity, and
- Q is the charge on each particle.
Typically, electric charges in solids flow slowly. For example, in a copper wire of cross-section 0.5 mm2, carrying a current of 5 A, the drift velocity of the electrons is on the order of a millimetre per second. To take a different example, in the near-vacuum inside a cathode ray tube, the electrons travel in near-straight lines at about a tenth of the speed of light.
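The copper-wire figure can be reproduced from I = n • A • v • Q. The free-electron density of copper used below (about 8.5 × 10^28 electrons per cubic metre) is an assumed textbook value, not taken from this article:

```python
# Drift velocity from I = n * A * v * Q, solved for v.
I = 5.0          # A, current in the wire
A = 0.5e-6       # m^2, cross-sectional area (0.5 mm^2)
n = 8.5e28       # free electrons per m^3 in copper (assumed textbook value)
Q = 1.602e-19    # C, charge on each electron

v_drift = I / (n * A * Q)
print(v_drift)   # ~7e-4 m/s, i.e. on the order of a millimetre per second
```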
Any accelerating electric charge, and therefore any changing electric current, gives rise to an electromagnetic wave that propagates at very high speed outside the surface of the conductor. This speed is usually a significant fraction of the speed of light, as can be deduced from Maxwell's Equations, and is therefore many times faster than the drift velocity of the electrons. For example, in AC power lines, the waves of electromagnetic energy propagate through the space between the wires, moving from a source to a distant load, even though the electrons in the wires only move back and forth over a tiny distance.
The ratio of the speed of the electromagnetic wave to the speed of light in free space is called the velocity factor, and depends on the electromagnetic properties of the conductor and the insulating materials surrounding it, and on their shape and size.
The magnitudes (but, not the natures) of these three velocities can be illustrated by an analogy with the three similar velocities associated with gases.
- The low drift velocity of charge carriers is analogous to air motion; in other words, winds.
- The high speed of electromagnetic waves is roughly analogous to the speed of sound in a gas (these waves move through the medium much faster than any individual particles do)
- The random motion of charges is analogous to heat – the thermal velocity of randomly vibrating gas particles.
70 | Time Line: African American History 1619-1900
Tennessee events are marked with the letters “TN” in teal.
Africans are shipped to Jamestown, Virginia.
--- African slaves are imported to the Hudson River Valley in New York.
Feb. 2 Eight years after the settlement of Boston, a ship named Desire arrives in Boston with its first African slaves.
--- Although slavery is never technically illegal in the colonies, Plymouth and Massachusetts Bay are the first colonies to authorize slavery through legislation as part of the 1641 Body of Liberties. They will be followed by Connecticut (1650), Virginia (1661), Maryland (1663), New York and New Jersey (1664), South Carolina (1682), Rhode Island & Pennsylvania (1700), North Carolina (1715), and Georgia (1750).
triangular slave trade begins about this time—a
for sugar, tobacco, and liquor; these products are then taken to New England to be sold for lumber (including masts for the ships) and manufactured goods. Newport, Rhode Island, and Salem, Massachusetts, will become major ports during this period, which marks the beginning of the extensive introduction of African slaves into the British West Indies to work on the sugar plantations. In some respects it can be considered the first industrial revolution, in which profits result directly from the use of cheap labor. [Hunt]
guaranteed under English common law, including the right to life, and allows the slaves' owners to treat their slaves as they wish, without fear of reprisal. Thus the West Indies begins the process of making slavery both African and brutal by statute. [Hunt]
--- From 1660 to about 1710, slavery converts slowly to the West Indies model. At first the distinction between slavery and indentured servitude is imprecise. As the planter class develops, though, slavery is considered essential in establishing such cash crops as rice in South Carolina. Within 50 years, Charles Town (Charleston), South Carolina, will become the largest mainland slave market. [Berlin]
--- As the English take control
--- King Philip's War begins as population growth and new leadership in the
--- By the third decade of the 18th century, a system of organized agricultural slavery is well established in the North American colonies.
Feb. 18 The first American protest against slavery is organized by Quakers in Germantown, Pennsylvania.
--- Slaves make up more than 17% (1/6) of the population of Philadelphia.
Apr. 7 Nine whites are killed during a New York slave revolt; 21 slaves are executed for murder.
Oglethorpe, who intends to create a classless society, wants to reserve the land and the jobs for English labor. Oglethorpe and the other Trustees interview all potential colonists, choosing carpenters, farmers, bakers, and other tradesmen who can build the colony into an efficiently functioning settlement. Despite the founders’ declared intention of providing a haven for debtors in English prisons, not one such individual is among the original colonists.
Sept. 9 Slaves revolt in Stono, South Carolina.
March A series of suspicious fires and rumors of slave conspiracies cause a widespread panic in New York: 31 black slaves and five whites are executed as conspirators.
May 19 The Georgia Trustees petition King George II to permit them to repeal the colony’s prohibition against slavery. By October he agrees to the request.
Jan. 1 Slavery becomes legal in Georgia.
--- Landon Carter, a
isolation, uncertainties, and fears of the planter class. His journals, written until his death in 1778, also record the Colonies’ movement toward revolution. [Hunt]
September John Woolman, a New Jersey Quaker, writes in his Journal that he has embarked on a campaign to convince other Friends to give up their slaves.
Mar. 15 In order not to discourage the settlement of skilled laborers in the state, Georgia prohibits slaves from working as carpenters, masons, bricklayers, plasterers, or joiners.
Dec. 25 Jupiter Hammon, a New York slave, publishes the poem, “An Evening Thought: Salvation by Christ, with Penitential Cries.”
Mar. 5 Runaway slave Crispus Attucks is the first person killed in the Boston Massacre.
Sept. 1 The first book published by an African American is a volume of poetry by Phillis Wheatley.
--- The first African American Masonic group is organized.
April The first abolitionist society in America is founded in Philadelphia.
Nov. 16 Lord Dunmore, the Royal Governor of Virginia, issues Lord Dunmore's Proclamation, the first large-scale emancipation of slaves in American history, when he offers freedom to Virginia's slaves if they will agree to aid the British cause by serving in the Army. Within a month,
Dec. 14 In the Virginia Declaration the Virginia House of Burgesses declares Dunmore’s Proclamation “encouragement to a general insurrection” and threatens all rebelling slaves with a death sentence.
July 4 A section denouncing the slave trade in an early draft of the Declaration of Independence fails to be approved by the Continental Congress on July 1, when both Northern and Southern slave-holding delegates object; it does not appear in the final draft, adopted on this date.
July 8 Vermont publishes a constitution and becomes the first American colony to abolish slavery. [Although calling itself a state, Vermont will not be admitted to statehood until March 4, 1791 – it is more of an independent republic at this time.] A number of others will follow over the next ten years. However, many of the state emancipation laws specify only gradual abolition, beginning with the second or third generation after the law takes effect. Slaves are listed in the Pennsylvania census through 1850. [Hunt]
--- Around 5,000 African American soldiers participate in the American Revolutionary War.
Dec. TN Robert, James Robertson's black servant, is among the small party of explorers who select the site of Nashville.
--- In Commonwealth v. Jennison, slavery is declared unconstitutional in Massachusetts. Chief Justice William Cushing
Sept. 17 Although the Continental Congress excludes slavery from the Northwest Territory, the U.S. Constitution (with three clauses recognizing slavery) is sent to the states for ratification. The new Constitution includes the Fugitive Slave Clause, the three-fifths clause, and a clause prohibiting the abolition of the African slave trade before 1808. [Foner, Forever Free]
African Methodist Episcopal church is founded in
are uncomfortable with the idea of forming independent (not merely segregated) congregations. A.M.E. Founder Richard Allen chooses Methodism as the basis for his church because it emphasizes "the plain and simple gospel," as well as a strong commitment to education and self-help. When this group unites with churches in other cities in 1816, Richard Allen is elected the first bishop of the A.M.E. Church. African Americans are creating their own national institutions long before slavery comes to an end. [Hunt]
--- Benjamin Franklin and Benjamin Rush join the Pennsylvania Abolition Society and help to write its constitution. The organization, established in 1784, takes an active role in litigation on behalf of free blacks.
Apr. 30 George Washington is inaugurated President (1789-1797).
--- Thomas Jefferson proposes a Southwest Ordinance similar to the Northwest Ordinance, but the legislation passed by Congress establishes no prohibition on slavery in U.S. territory south of the Ohio River.
--- TN The population of the Tennessee Territory is 35,691; of those, 3,417 (9.6 percent) residents are black.
March President George Washington appoints Benjamin Banneker, an African American scientist, to the commission surveying the District of Columbia.
Aug. 22 The Haitian war of independence begins when over 100,000 slaves rise up against the greatly outnumbered French planters. Revolutionary leader Toussaint L’Ouverture ultimately forms a strategic alliance with the French but maintains control of the island, becoming military dictator.
Feb. 12 The first Fugitive Slave Law requires runaway slaves to be returned to their owners, wherever they are found.
Jan. 16 TN Robert "Black Bob" Renfro, still a slave, is licensed to operate a tavern. (Bob, the slave of Joseph Renfro, had come to Middle Tennessee on John Donelson's historic river voyage, leaving the group near present-day Clarksville on 12 April 1780.) Bob will be involved in several precedent-setting court cases, winning at least three cases before white juries. [Ellis]
June 20 Eli Whitney patents the cotton gin, making cotton both easier and faster to process and revitalizing the demand for slave labor in the cotton fields.
Mar. 4 John Adams is inaugurated the nation’s second President (1797-1801).
--- TN Of Nashville's 345 inhabitants, 154 are black. [Goodstein] Only fourteen of them are free; by 1810 there are 130 free blacks in Nashville. [Lovett]
Aug. 30 Gabriel Prosser, a Virginia slave, gathers an army of discontented slaves (estimated at 1000-4000 individuals) and prepares to attack Richmond. They are foiled by informants and severe weather. Prosser and others are captured and hanged.
Mar. 4 Thomas Jefferson is inaugurated the nation’s third President (1801-1809).
Nov. 10 TN Nashvillian "Black Bob" Renfro is granted emancipation from his owner Robert Searcy by an act of the Fourth Tennessee General Assembly. (Early Tennessee legislatures often sanctioned the voluntary manumission of slaves by their owners.) [Ellis]
April Toussaint L'Ouverture, leader of the Haitian slave rebellion, is tricked by Napoleon into leaving Haiti and dies in a French prison. His lieutenant, Jean-Jacques Dessalines, carries on the struggle against Napoleon's generals, Leclerc and Rochambeau. Hundreds of people die in the fighting; shocking atrocities are committed by both sides.
Apr. 30 Napoleon, understanding that the war in Haiti is lost, sells the Louisiana Territory (which is no longer useful to him) to the U.S.
Nov. 28 Rochambeau surrenders. Dessalines
it will forever after haunt American plantation owners with the specter of violent overthrow; an early response will be the American Colonization Society (1816). [Hunt]
Mar. 25 The British Parliament abolishes the slave trade. Although Congress will also ban the importation of slaves into the
U.S. after January 1, 1808, slave shipments to
America will continue largely unchallenged until 1859.
Mar. 4 James Madison is inaugurated the nation’s fourth President (1809-1817).
suppressed by federal troops.
--- About 2,000,000 Africans now
July 27 Federal troops are sent to destroy a Maroon (runaway-slave) settlement in Spanish Florida.
Dec. 21 The American Colonization Society is founded. Its membership – which includes James Monroe, Andrew Jackson, Francis Scott Key, Henry Clay, and Daniel Webster – consists of both philanthropists and slave owners who, for reasons ranging from altruism to fear, want to enable blacks to return to Africa.
Mar. 4 James Monroe is inaugurated the nation’s fifth President (1817-1825).
--- Richard Allen’s Bethel Church in Philadelphia hosts a mass meeting to protest the policies of the American Colonization Society. Three thousand people attend.
Mar. 6 The Missouri Compromise settles the issue of slavery in the areas obtained by the Louisiana Purchase, admitting Missouri as a slave state and prohibiting slavery in the remaining territory north of latitude 36°30′.
May 30 Denmark Vesey, a carpenter and former slave who bought his own freedom in 1800, designs one of the most complex slave plots in history, involving thousands of African Americans in the Charleston, South Carolina, area. The plot is betrayed, and Vesey and his co-conspirators are hanged.
Aug. 2 Illinois passes a referendum declaring the state free; nevertheless, a complex series of indenture and apprenticeship laws along with frequent kidnappings of black workers will maintain a system not much different from slavery for many years.
Mar. 4 John Quincy Adams becomes the nation’s 6th President (1825-1829).
--- By this time, 2,638 African Americans have migrated to Liberia.
Mar. 16 Freedom’s Journal, the first African American-owned newspaper in the United States, is published in New York City – the first of many black newspapers that will appear before the Civil War.
Mar. 4 Andrew Jackson is inaugurated the nation’s 7th President (1829-1837).
Aug. 10 Following a race riot in Cincinnati, Ohio, more than 1,000 African Americans leave the city for Canada.
Sept. 20 About 40 delegates from various states meet in Philadelphia for the first national African American convention to discuss the abolition of slavery.
Jan. 1 William Lloyd Garrison publishes the first issue of the Liberator, a weekly abolitionist journal, signaling the
emergence of a more militant attitude within the anti-slavery movement.
Aug. 21 Nat Turner, born during Gabriel Prosser’s slave rebellion (1800), leads a band of 40 slaves from house to house
through Southampton County, Virginia, stabbing, shooting, or clubbing every white person they find. They kill at least 55 people before being caught and executed. Virginia and North Carolina courts will execute more than 50 people charged with participating, and vengeful mobs, mobilized by panic, kill 200 more.
December The Virginia legislature considers a petition to emancipate Virginia’s slaves. A motion to reject it outright is defeated.
In the intense debate that follows, one legislator declares slavery “the greatest curse that God in His wrath ever inflicted upon a people.” In the climate of fear created by the Nat Turner rebellion, and facing the growing belief that slavery may be a hindrance to economic development, the legislature earnestly debates a gradual emancipation statute. [Hunt] “The arguments expressed during the Virginia slavery debate...profoundly [shape] the development of future justifications for slavery. Faced with an opportunity to abolish slavery in Virginia, what [results] instead [is] the ideological cornerstone of the Southern Confederacy.” [Curtis]
--- The Nullification Controversy pits President Jackson against South Carolina Senator John C. Calhoun in a debate
about the rights of a state to nullify federal law. The first state to have over-planted its soil to the point where its productivity has diminished, South Carolina (concerned that Congress might also claim the power to terminate slavery) declares the increasing federal tariffs null and void and threatens to secede.
Jackson’s Nullification Proclamation, declaring that the Constitution can justify neither nullification nor secession, is his confrontational response to South Carolina’s action. The President’s tough stand on the issue demonstrates his confidence in his strong support on both sides of the North-South divide. [Hunt]
May 18 TN Birth of Davidson County Representative Sampson W. Keeble. The first African American elected to the Tennessee General Assembly, Keeble was born a slave in Rutherford County, Tennessee.
--- John C. Calhoun and Henry Clay persuade Congress to pass the Compromise Tariff, which slowly lowers the duties on cotton.
Dec. 3 The first classes are held at Oberlin College in Ohio, one of the earliest colleges to admit African American students. The first black students are admitted in the fall of 1835; by 1860 one-third of its students are black. Oberlin also pioneers “the joint education of the sexes,” enrolling both males and females from the beginning. In 1862 Oberlin graduate Mary Jane Patterson is the first black woman to earn a college degree.
--- Black Baptists in
River Baptist Association.
--- TN Approximate birth year of Davidson County Representative Thomas A. Sykes, born a slave in North Carolina to unknown parents.
TN The Cherokee census of
--- “Free Frank” McWorter becomes the first African American to found a town when he records the plat of New Philadelphia, Illinois.
June 15 Arkansas is admitted to the Union as a slave state. It is positioned to balance Michigan, which enters as a free state on January 26, 1837.
Mar. 4 Martin Van Buren, a Democrat, defeats Whig candidate William Henry Harrison to become the nation's 8th President (1837-1841).
Sept. 3 Twenty-year-old Frederick
Douglass escapes from slavery in Baltimore.
July 2 Slaves, led by Joseph Cinqué, revolt against the crew of the slave ship Amistad. When they are captured by the U.S.
Navy two months later, they are jailed in Connecticut, a state in which slavery is legal.
--- TN Sarah Estell, a free black businesswoman, opens a successful ice cream parlor and catering business in Nashville,
where she provides banquets for “firemen, church socials, and political parties.” Sally Thomas, although still technically a slave, has been permitted to run a laundry business since 1817. She has used the profits to buy her children’s freedom.
Mar. 4 William Henry Harrison is inaugurated the nation’s ninth President. He develops pneumonia during his inauguration and dies a month later.
Apr. 6 Although the Constitution does not provide for the Vice President to succeed to the Presidency in the event of the President’s death, John Tyler defies a power grab by the cabinet and has himself sworn in as President (1841-1845). [Winik]
--- The U.S. Supreme Court upholds a lower court’s decision that the Amistad mutineers are the victims of kidnapping and thus within their rights to secure their freedom in any way possible. Through private donations, the 35 surviving Africans are able to secure passage back to Africa.
--- Africans on the slave ship Creole, en route from Virginia to Louisiana, seize control of the vessel and sail it to the Bahamas, where the government grants them asylum and freedom.
--- Joseph Jenkins Roberts becomes the first non-white governor of Liberia.
--- Members and clergy of the Methodist Episcopal Church split from the church over its failure to pass a promised edict
forbidding members to own slaves. The new organization is named the Wesleyan Methodist Church in America.
--- Although its rules are not as strict as some members would wish, from its 1784 founding in the United States, the
Methodist Episcopal Church has opposed slavery. When a Georgia bishop becomes a slave owner by marriage, the
church splits a second time over the slavery issue, and the Methodist Episcopal Church, South, becomes a separate entity.
--- TN Probable birth year of Shelby County Representative Thomas F. Cassels, born in Jackson County, Ohio. His parents are “free persons of color” in a community that is a busy hub of Underground Railroad activity.
Mar. 3 Florida is admitted to the Union as a slave state, paired with Iowa, which will enter as a free state on December 28, 1846.
Mar. 4 TN Tennessean James K. Polk is inaugurated as the nation’s 11th President (1845-1849).
May 3 Macon B. Allen of Massachusetts becomes the first African American lawyer admitted to the bar.
May 8 The Baptist movement has worked to maintain an uneasy peace among its members by simply avoiding discussion of the topic of slavery. However, when an 1840 American Baptist Anti-Slavery Convention brings the issue into the open, the Mission Board is forced to take a stand. When the Board refuses to accept Georgia’s nomination of a slave-owner to be sent out as a missionary, 293 Southern leaders, representing 365,000 members, meet in Augusta, Georgia, and agree regretfully to withdraw. This group will form the Southern Baptist Convention, which eventually grows to be the largest Protestant denomination in the country.
May 23 Frederick Douglass publishes his biography, Narrative of the Life of Frederick Douglass. He is 27 years old.
Dec. 11 TN The Tennessee General Assembly charters the Nashville & Chattanooga Railway. By 1857-58 Chattanooga is a major
railway hub in the South.
Dec. 29 Texas is admitted to the Union as a slave state on the terms of the Missouri Compromise. Wisconsin’s admission as a free state on May 29, 1848, is seen as the balance for Texas. Mexico, never having recognized Texas independence, declares war on the United States.
--- TN Approximate birth year of Hamilton County Representative William C. Hodge, born in North Carolina.
Apr. 24 Mexican forces attack American troops near the Rio Grande, beginning the Mexican War.
May 13 The U.S. Congress declares war on Mexico.
--- The Wilmot Proviso is attached as an amendment to a bill providing for negotiation of a settlement with Mexico. A challenge to pro-slavery groups, the Proviso bans slavery in any of the territory acquired in the Mexican war. Although the amended bill is passed by the House in 1846 and 1847, the Southern-dominated Senate blocks it. The effect of the debate over the Proviso is to intensify the conflict between the North and the South over slavery. The escalating controversy will lead to Southern secession. The political debate has shifted subtly from abolitionism to free soil. [Hunt]
July 26 The legislature of Liberia declares itself an independent state. Joseph Jenkins Roberts is elected its first president.
--- The Free Soil Movement is organized in the North by abolitionists who are extremely antagonistic toward the extension of slavery into the territories. Fairly successful as a third party, it sends two Senators and 14 Representatives to the 31st Congress. Its membership includes many northern Whigs and Democrats who are opposed to slavery. By about 1854 most Free-Soilers have merged with the Republican party.
Feb. 2 The Treaty of Guadalupe Hidalgo
ends the Mexican War.
May 10 TN Birth of Fayette County Representative Monroe W. Gooden near Somerville, Tennessee, to slave Monroe Gooden Sr. and an unknown mother.
Sept. 19 TN Birth of
plantation of Boswell Baker Degraffenreid in the northern part of the county.
Mar. 5 Zachary Taylor, a Whig, a cousin of James Madison, and a hero of the Mexican War, is inaugurated the nation’s 12th President (1849-1850).
--- TN Approximate birth year of Shelby County Representative Leon Howard.
Autumn Knowing she will be sold after her owner’s death, Harriet Tubman escapes from slavery in Maryland. However, she
will return to the South nineteen times, bringing out more than 300 slaves.
--- As Congress debates the status of slavery in the territory acquired from Mexico, a number of proposals remain on the table: one is the Wilmot Proviso, which would ban all slavery in that territory; another is a measure, sanctioned by President Zachary Taylor, to extend the Missouri compromise line to the Pacific. Senator Stephen A. Douglas is identified with “Popular Sovereignty,” which eventually emerges as part of the Compromise of 1850. This plan will permit territorial governments to make their own determinations about slavery. [Hunt]
June 3 TN Delegates from nine Southern states meet in Nashville to discuss their concerns about Northern attitudes relating to slavery. The Tennessee General Assembly, opposed to disunion, refuses to send delegates, but individual counties send 101 delegates to the Nashville Convention (sometimes called the Southern Convention), thus becoming the largest group from any state to participate. The delegates resist the “Fire-Eaters’” demands for secession but adopt resolutions “asserting the South’s constitutional rights in the territories and the rights and interests of Texas in the boundary dispute.” Although the Convention fails to unite the South, it does call attention to Southern grievances and almost certainly influences the passage of the Compromise of 1850. [Goodstein]
July 4 Falling ill with gastroenteritis after a 4th of July celebration, President Zachary Taylor dies five days later, becoming the second President to die in office.
July 10 Millard Fillmore is inaugurated the nation’s 13th President (1850-1853).
Sept. 9-20 President Fillmore signs the five bills making up the Compromise of 1850, the passage of which is orchestrated by Stephen Douglas. The plan will
· organize New Mexico/Arizona and Utah under the rule of “popular sovereignty,” by which each territory can choose its own response to slavery. Critics protest that it undermines the Missouri Compromise;
· admit California to the Union as a free state, despite the fact that it upsets the 15-15 balance of free and slave states;
· abolish the sale of slaves (although not the institution of slavery) in the District of Columbia;
· enact a harsh new Fugitive Slave Law that penalizes law enforcement officials for failing to arrest anyone suspected of being a runaway slave, and that requires fines and jail terms for anyone providing food or shelter to runaway slaves.
Nov. TN Although the Compromise of 1850 reduces the Southern passion for establishing regional unity against the North, fifty
delegates from seven southern states meet for a second Nashville Convention and affirm the right to secede.
June 5 Harriet Beecher Stowe sells Uncle Tom’s Cabin to the National Era for $300. Despite the paper's small circulation,
the story is widely read as copies pass from hand to hand. After the last (40th) installment (April 1852), it appears in book form, selling half a million copies by 1857. Neither slavery nor the Fugitive Slave Law ever recovers its legitimacy.
Oct. 15 Shelby County Representative Isham (Isaac) Franklin Norris is born in Tennessee, probably to slave parents.
--- TN Approximate birth year of Tipton County Representative John W. Boyd,
born in Covington, Tennessee, to Philip and Sophia Fields Boyd.
--- TN Birth year of Shelby County Representative William A. Feilds, born near Fisherville, Tennessee. His mother, who was born in Virginia, is the slave of Jean Field Sanford. Researchers are certain William Feilds and John W. Boyd (above) were cousins, perhaps even first cousins.
Nov. The defeat of the Whig candidate, Mexican War hero Winfield Scott, by Democrat Franklin Pierce of New Hampshire,
marks the end of Whig party influence in the country. The emerging Republican party, which will take shape over the
next two or three years, will fill its ranks with Whigs, Free Soilers, Know-Nothings, and disgruntled Northern Democrats.
Nov. 21 TN Birth date of Hamilton County Representative Styles L. Hutchins, born in Lawrenceville, Georgia. His father,
William Dougherty Hutchins, is a free man, who owns his own Atlanta barbershop.
Mar. 4 Franklin Pierce is inaugurated the nation’s 14th President (1853-1857).
--- William Wells Brown publishes Clotel, the first novel by a black author. The book is published in London while Brown
is still technically a slave. He will later write The Escape, the first African American play.
Nov. TN Nelson G. Merry, a former slave, becomes the first Tennessee African American to be ordained and placed over a congregation. He is named moderator (pastor) of the first Colored Baptist Mission on Pearl Street in Nashville, where he has preached since 1848.
May 30 Congress passes the Kansas-Nebraska Act, introduced by Stephen Douglas, although it has been condemned by Frederick Douglass and others in the anti-slavery movement. By permitting residents of Kansas and Nebraska to decide for themselves whether to allow slavery in their territories, the bill essentially repeals the 1820 Missouri Compromise (which has prohibited slavery north of latitude 36°30′) and opens the Northern territory to slavery. The Kansas-Nebraska Act will also corrupt the westward movement: Americans have come to believe that the solutions to many issues (overpopulation, mass production manufacturing, dreams of expansion and adventure) lie in the West (which historian Frederick Jackson Turner will later describe as “the soul of American democracy”). By this time, however, the West is dissolving in terrorism (from violent acts by Border Ruffians, John Brown, and others) and electoral fraud (see 1857 timeline entries on the Lecompton Constitution), and the dream of Jacksonian America is crumbling. [Hunt]
July 6 The first official Republican party meeting takes place in Jackson, Mich., impelled by the feeling of betrayal among
Northern and Northwestern states after the Kansas-Nebraska Act is approved by Congress. Loyal to the precepts of the Missouri Compromise, it attracts Free-Soilers and others opposed to slavery and becomes powerful nationally when John C. Frémont (“Free soil, free labor, free speech, free men, Frémont!”) is nominated for President in 1856. Four years later Abraham Lincoln will become the first Republican elected to that office. The power of the new party is not so much in an anti-slavery agenda (the party never moves beyond the idea of preventing slavery’s expansion into the west) as in its effectiveness in creating a cross-sectional alliance between New England, the mid-Atlantic states, and the old Northwest. For the first time in American politics, there is a “politicized North.” [Hunt]
Feb. 26 TN
Aug. 25 TN The first train carries passengers (8 miles at 15 mph!) on the Louisville & Nashville Railroad line.
November Although Democrat James Buchanan wins the popular Presidential vote and survives the electoral college tally, Republican candidate John C. Frémont comes within two states of defeating him. It is clear that the Republican party has become a political force to contend with.
Dec. TN A race riot takes place in Nashville. Although many of the city’s free black residents are well-educated and prosperous, whites respond by tightening the controls on local African American citizens and forcing free black schools to close until after the city’s occupation by Union forces in February 1862.
--- TN African American education in Memphis is likewise shut down when local whites forbid black residents to learn to read.
Mar. 4 James Buchanan is inaugurated the nation’s 15th President (1857-1861).
Mar. 6 The U.S. Supreme Court rules, in Dred Scott v. Sandford, that a black person is not a citizen of the United States and thus has no right to sue or to claim other rights of citizenship. The decision is a focal point of the Lincoln-Douglas debates in the 1858 Illinois Senate campaign. Although Lincoln loses the election, his “house divided” speech and the exposure he receives in the debates catapult him into national prominence.
Oct. 19 A Constitutional Convention meets in Lecompton, capital city of the Kansas Territory, to draft a state constitution. Pro-slave delegates push through the Lecompton Constitution protecting slavery.
Free-State voters stay away from the polls in protest. News reports of the election stir up the North against the slave system, and many northern Democrats, including Stephen A. Douglas, break with the party, voting against President Buchanan’s endorsement of the document and his recommendation to admit Kansas as a slave state.
--- TN Approximate birth year of Haywood County Representative Samuel A. McElwee, born into slavery in Madison County, Tennessee.
Jan. 4 Kansas voters, given an opportunity to reconsider the Lecompton Constitution after voting irregularities are charged in the earlier referendum, decisively reject it by a vote of 10,226 to 138!
--- The Clothilde, the last ship to carry slaves to the United States, arrives in Mobile Bay, Alabama, with its shipment of slaves. Its captain, Tim Meaher, has made a bet that he can sneak in a shipload of slaves under cover of darkness.
--- TN A group of African Americans establishes an independent black congregation that is not organized under the patronage and control of a white church.
July 18 TN Birth of Fayette County Representative David F. Rivers, born in Montgomery, Alabama, to Edmonia Rivers, a free
woman of color, and an unknown father.
Oct. 16 John Brown and his followers (five of whom are African American) attack Harper's Ferry, Virginia (now West Virginia),
in an attempt to free and arm the local slaves. Brown becomes a martyr for abolition.
Oct. 27 TN The Louisville & Nashville Railroad line, chartered in 1850, is completed between its two namesake cities, 180 miles apart. By the time the Civil War begins in 1861, the L&N will have laid 269 miles of track. Spanning the Union and Confederate lines, it will be of use to both armies. Because of Nashville's early occupation by Union forces, it will suffer less damage than other railroads and will be positioned to expand quickly after the war.
--- TN Slaves now constitute one-fourth of Tennessee’s population and about 15% of the national population. Tennessee’s slaves are valued at $114 million. [Hunt]
--- Approximately 300,000 free blacks are living in Southern states, primarily in Virginia, Kentucky, and South Carolina.
--- TN Fewer
than 20% of
--- In this year “only five Northern states, all with tiny black populations, [allow] black men to vote on the same terms as whites.”
May 16 Abraham Lincoln receives the Republican party’s nomination for President on the third ballot.
November A four-way party split causes a messy and complicated election: the Democrats have split into two factions, represented by John C. Breckinridge and Stephen A. Douglas; the Constitutional Union candidate, John Bell (a former Whig), carries Tennessee, Kentucky, and Virginia; Abraham Lincoln wins the Presidency with less than 40 percent of the popular vote.
Dec. 2 In his final speech to Congress, President Buchanan anticipates the impending Southern Secession, arguing that
secession is clearly unconstitutional (as opposed to the right of revolution), but that a Union of consent cannot rest on force. In other words, no state has the right to oppress another state – if a state secedes, the Union is dead. [Hunt]
Dec. 20 In a convention called to consider secession, South Carolina's delegates vote unanimously to secede from the Union. This move, foreshadowed by the demands of the Fire-Eaters (led by Edmund Ruffin, William Yancey, and others) during the Nashville Convention of 1850, has intensified in the face of growing Southern opposition to Jacksonian politics and to Northern abolition and feminist movements. But the issue comes to a head with Lincoln’s election, which, to the South, represents a complete breakdown of the political system. [Hunt]
Feb. 4 Seven states secede to form the Confederate States of America.
Feb 18 Jefferson Davis is inaugurated President of the Confederacy in Montgomery, Alabama, two weeks before Lincoln's inauguration.
Mar. 4 Abraham Lincoln is inaugurated President, with Hannibal Hamlin of Maine as Vice President.
Mar. 11 The Confederate States of America – at this time consisting of Alabama, Florida, Georgia, Louisiana, Mississippi,
South Carolina, and Texas – adopts a Constitution.
Apr. 12 Confederate batteries fire on Fort Sumter in Charleston Harbor, and the Civil War begins. Unified by their response to this attack on their flag, Republicans and Democrats in the Northern tier of states suddenly form what would previously have been an unattainable coalition and come together as Unionists, instantly uniting against this “treason by force.” [Hunt]
The war begins in earnest, but nobody expects the conflict to last more than a few months.
May 24 General Benjamin F. Butler, in command of Fort Monroe, Virginia, declares escaped slaves who reach his lines to be “contraband of war” and proclaims they can no longer be returned to their owners. [Foner, Forever Free]
June 8 TN Tennessee voters ratify the General Assembly’s declaration of secession from the Union, despite evidence that many Tennesseans (possibly a majority) are opposed to secession.
June 28 TN The Tennessee General Assembly authorizes a draft of free black men into the Confederate army. Most free black men
will manage to evade both the Confederate draft and the local sheriffs compelled to enforce it.
Aug. 6 Congress passes the First Confiscation Act, under which Union forces are not required to return escaped or confiscated slaves who are working or fighting for the rebel forces.
Feb. 16 TN General Grant accepts the surrender of Fort Donelson as Union forces breach the Southern defenses and open a corridor to Nashville.
Feb. 21 Nathaniel Gordon, a slave trader from Portland, Maine, becomes the only American ever executed for engaging in the slave trade. A contemporary account comments, “For forty years the slave-trade has been pronounced piracy by law, and to engage in it has been a capital offense. But the sympathy of the Government and its officials has been so often on the side of the criminal, and it seemed so absurd to hang a man for doing at sea that which, in half the...”
Feb. 23 TN The Confederate flag is lowered from the Tennessee Capitol as Union forces occupy Nashville. William Driver, a native of Salem, Massachusetts, and a proud Union supporter, offers his personal flag, which he calls “Old Glory,” to be flown from the Capitol.
March TN Tennessee Senator Andrew Johnson is appointed military governor and arrives in Nashville to head the occupation forces.
Mar. Congress adopts an article of war forbidding members of the army and navy to return fugitive slaves to their owners.
Apr. 16 The Confederacy issues a draft order, making all healthy white men between the ages of 18 and 35 liable for a three-
year term of military service. By September the upper age limit will be raised to 45; by October 11, a man owning 20 or more slaves becomes exempt; by February 1864, the age range will include men between the ages of 17 and 50.
Apr. 16 Congress abolishes slavery in the District of Columbia, compensating owners and appropriating funds for the voluntary “colonization” of freed slaves outside the U.S. [Foner, Forever Free]
June 6 TN Union gunboats defeat the Confederate river fleet at Memphis, and the city surrenders to Union forces.
July 2 TN The Morrill Act allocates federal land or its monetary value to various states for the teaching of “agricultural and mechanical” subjects and military training to students. After the Civil War Tennessee will designate East Tennessee University (renamed the University of Tennessee in 1879) as a land-grant institution.
July 17 Congress passes two acts that change the status of slaves and anticipate the Emancipation Proclamation.
· The Second Confiscation Act frees the slaves of owners who are actively engaged in rebellion and authorizes military commanders to appropriate those former slaves as military personnel “in any capacity to suppress the rebellion.”
· The Militia Act
authorizes the employment of “persons of African descent” in “any military or
naval service for which they may be found competent,” and grants freedom to
those slaves and their families. In
other words, Lincoln can now use black soldiers in the Union Army.
Sept. 23 Lincoln’s preliminary publication of the Emancipation Proclamation is released. While it does not immediately free
all slaves, it provides a forewarning to owners that the rebellion must end by January 1 or the Proclamation will be signed. It takes a surprisingly conciliatory tone, offering aid to states that make provisions for gradual emancipation and referring once again to Congress’s April 16 appropriation for colonizing freed slaves somewhere outside the borders of the United States.
Dec. 7 TN Work on Fort Negley in Nashville is completed, carried out over a three-month period by Union soldiers and hundreds of black workers – free and slave – who have been conscripted into service in what is probably the first large-scale use of contraband labor in Tennessee during the war. With insufficient food, shelter, and clothing, many of these workers will die; most are never paid. Regrettably, the construction of Fort Negley becomes a model for future projects, as Union officers, lacking laborers, impress black men into service and work them in merciless conditions. [Hunt]
Dec. 31 TN On the last day of 1862 Union General William S. Rosecrans’s Army of the Cumberland challenges General
Braxton Bragg’s Army of Tennessee at Murfreesboro.
Jan. 1 President Abraham Lincoln signs the Emancipation Proclamation. It frees all slaves in regions under Confederate control and authorizes the enlistment of black soldiers.
It is important to recognize that it does not outlaw slavery in all
areas of the country. Tennessee, which is
under Union control (and whose constitution will be among the first to ban slavery);
Southern Louisiana, which has remained loyal to the Union; and the border
states of Delaware, Maryland, Kentucky, and Missouri are exempt from the
Emancipation Proclamation, even though slavery exists in its cruelest forms in
all six states.
Jan. 2 The Battle of Stones River ends. With 23,000 casualties, it is the second bloodiest battle fought west of the Appalachians during the Civil War, with the highest percentage of casualties on both sides of any major battle. Rosecrans' repulse of two Confederate attacks and the subsequent Confederate withdrawal as Union reinforcements arrive goes a long way toward restoring Union morale. Lincoln later writes: "I can never forget... you gave us a hard-earned victory, which had there been a defeat instead, the nation could scarcely have lived over."
Mar. 3 The Conscription Act/Enrollment Act is passed, requiring enrollment of all able-bodied men in the Union Army,
although they can purchase their exemption by paying $300 or by sending a substitute. Only 46,347 of the 776,892 men receiving draft notices will actually don a uniform. [Lapham]
May Authority is granted for the formation of a U.S. Bureau of Colored Troops. Andrew Johnson, military governor of the occupation forces, drags his feet about initiating the troops, feeling, among other things, that contraband labor is too essential to pillage for soldiers. [Hunt]
June 20 West Virginia separates itself from Virginia to become a new Unionist state. Its constitution bans the introduction of slaves into the state but does not address the issue of emancipating the slaves already there.
Summer TN Nashville has become a surprisingly dynamic city: it provides medical care, maintenance, and supplies for the war effort and the railroads; it attracts refugees, both black and white (including multitudes fleeing Confederate occupation in East Tennessee, and a huge number of contraband workers and their families); and it supplies food, rest, and recreation for military personnel, including “a licensed and medically regulated prostitution district.” [Hunt]
July 4 The Confederacy is reeling from three major losses: battles at Tullahoma, Vicksburg, and Gettysburg have taken a huge toll on Southern forces. Many people mistakenly assume the war is nearly over. However, the South is more resilient than expected, and the fighting will continue for nearly two more years.
July 11-13 A week after the Battle of Gettysburg, opposition to the draft and its “rich man's exemptions” sparks a riot in New York City in which mobs attack African American residents and more than 100 people die.
July 18 The 54th Massachusetts Volunteers, an all-black unit, attack Fort Wagner in Charleston, South Carolina. Nearly half
the men in the regiment are killed, wounded, or captured. Sgt. William H. Carney will become the first African American to receive the Congressional Medal of Honor for courage under fire.
July 30 Confederate President Davis announces that black soldiers of the USCT will be treated as escaped slaves and returned to their owners rather than held as prisoners of war. President Lincoln responds with an Order of Retaliation, insisting that captured black soldiers be treated as prisoners of war, and not as escaped slaves. [Foner, Forever Free]
Sept. 10 TN The Bureau of U.S. Colored Troops opens a recruiting office in Nashville. More than 20,000 African American soldiers will eventually be recruited in Tennessee, and the state will see more than 5,000 casualties. George Luther Stearns, Assistant Adjutant General for the Recruitment of Colored Troops, is put in charge of recruiting in Tennessee. A fervent abolitionist, Stearns, John Brown’s largest financial backer, even owned the rifles Brown used at Harper’s Ferry. He recruited the first African American regiment raised in the North, the 54th Massachusetts, and will later be a leader in establishing the Freedmen’s Bureau.
Dec. 2 The statue “Freedom” is placed on top of the U.S. Capitol. Sculptor Philip Reid was a slave in a Maryland foundry when
the statue was cast.
Dec. 8 President Lincoln announces the Proclamation of Amnesty and Reconstruction, pardoning Confederates who
pledge loyalty to the Union and agree to accept emancipation. A state can begin the process of rejoining the Union as soon as 10% of a Confederate state’s voters make the pledge. This fairly loose oath, pledging Union loyalty from the moment the oath is taken, angers black leaders, Southern Unionists, and Congressional Republicans. Lincoln seems more interested in disrupting the Confederacy than actually implementing Reconstruction. [Hunt]
--- The black Baptists of the West and South organize the Northwestern Baptist Convention and the Southern Baptist Convention. In 1866 they will merge with the American Baptist Convention to form the Consolidated Baptist Convention, which will support the efforts of black Baptists in several Southern states to form their own conventions.
January Radical Republicans are hostile to Lincoln’s lenient approach to Reconstruction.
Feb. 8 TN Birth date of Jesse M.H. Graham in Clarksville or Nashville, Tennessee.
Mar. 1 Rebecca Lee Crumpler becomes the first black woman to receive a medical degree, graduating from the New England Female Medical College in Boston.
March TN Military Governor Andrew Johnson, speaking at the dedication of the Northwestern Military Railroad at Johnsonville,
urges Unionists to “go to the ballot box” and vote slavery out of the state. The railroad, strategic to the success of the Union army’s attack on Atlanta, has been built by thousands of black contraband workers and U.S. Colored Troops.
June 15 Congress passes a bill authorizing equal pay, equipment, arms, and health care for African American troops in the Union Army.
July Congress passes the Wade-Davis Bill, which requires a majority vote of state voters to gain readmission to the
Union, restricts many former Confederates from political participation in Reconstruction, and demands that blacks receive not only their freedom but also equality before the law; Lincoln’s July 4 pocket veto of the bill kills it.
Sept. 2 Sherman takes Atlanta. That victory will give an enormous boost to Lincoln's Presidential hopes, which have been
damaged by the length of the war and the sense of stalemate the country now feels.
Sept. 5 The new Louisiana constitution abolishes slavery; Maryland, Missouri, and Tennessee will do the same in the next
few months. Note that these are four of the six states
that were exempted from the Emancipation Proclamation. [See the Jan. 1, 1863, entry above on the application of the Emancipation Proclamation to Tennessee.]
Oct. 4 The National Colored Men’s Convention meets in Syracuse, New York, chaired by Frederick Douglass.
--- Beginning of the New Orleans Tribune, in all probability the first African American daily newspaper.
Nov. 8 President Abraham Lincoln is re-elected, defeating Democratic candidate George McClellan. Andrew Johnson becomes
Vice President, but he and Lincoln barely know each other.
Nov. 30 Terrible Confederate losses in the Battle of Franklin (6,252 casualties in about five hours) all but destroy the Army of Tennessee and completely end its effectiveness.
Dec. 22 Sherman occupies Savannah, completing his march to the sea.
--- By this point about 180,000 African American men (over 20% of the adult male black population between 20 and 45)
have served in the Union Army, and many more in the Navy.
--- African-American soldiers comprise 10% of the entire Union Army. These troops suffer extremely high losses:
approximately one-third of all black soldiers enrolled in the military will lose their lives in the Civil War.
--- TN Four Freedmen’s Savings and Trust Company Bank branches will operate in Tennessee (in Chattanooga, Columbia, Memphis, and Nashville) between 1865 and 1874. A significant resource for the black community, the bank will fail in 1874 following the economic depression of the 1870s, largely through mismanagement and fraud by the white managers of an important Washington, D.C. branch.
Jan. TN William Scott begins publication of The Colored Tennessean, the first black newspaper in Nashville.
Jan. 2 TN John Mercer Langston, founder and dean of the Howard University Law School, speaks at Nashville's second annual
Emancipation Day celebration.
Jan. TN The Tennessee General Assembly amends the state constitution to prohibit slavery; voters will ratify the amendment shortly afterward.
Jan. 9 TN Fisk Free Colored School opens in the buildings of a former U. S. Army hospital. Tennessee Governor W. G.
“Parson” Brownlow advises students to be “mild and temperate” in their behavior toward white people, and warns teachers to be “exceedingly prudent and cautious.” The school will number 600 students by February and will continue to expand for some time.
Jan. 16 Under Union Gen. Sherman’s Field Order No. 15, 40-acre plots of land are set aside in coastal South Carolina,
Georgia, and Florida for the exclusive use of freed blacks, who can claim “possessory title” with option to purchase. Sherman’s primary motive is to get rid of the multitudes of refugees following his army – not only are they impeding his military operations, but they are also consuming rations he needs for his troops. [Hunt]
Jan. 31 U.S. Congress approves the abolition of slavery and involuntary servitude, sending the 13th Amendment to the states for ratification.
Feb. 1 J. S. Rock, who will be the first black lawyer to practice in the Supreme Court, is admitted to the bar of the Supreme Court.
--- General Sherman’s army turns north toward the Carolinas and Virginia.
Feb. 8 Martin Robinson Delany, a writer, publisher, and physician, becomes the first African American to receive a regular
army commission when President Lincoln promotes him to the rank of major in the U. S. Army.
Mar. 3 A joint resolution of Congress frees the wives and children of soldiers, regardless of their owners' loyalty. [Berlin]
--- The U.S. Congress establishes the Bureau of Refugees, Freedmen, and Abandoned Lands (to be known as the
Freedmen’s Bureau); its function is to ease the transition from slavery, offering shelter, medical care, legal services, and educational facilities to former slaves. Authorized to function for only one year, the bureau will operate until 1868.
Mar. 4 TN Abraham Lincoln is inaugurated for a second term, with Tennessean Andrew Johnson as Vice President. Lincoln
pledges “malice toward none” and “charity for all.”
Mar. 13 TN The Confederate States Congress authorizes the recruitment of black soldiers -- slave or free -- to serve in the
Confederate Army; however, this uncharacteristic move by the Confederate Congress comes too late to prepare any
black troops for battle. Some scholars believe that as many as 65,000 African Americans may have served the Confederate Army in some fashion: the Confederacy impressed and leased slaves extensively to work on fortifications and other projects; individual slaves sometimes accompanied their masters (usually officers) into war as personal servants; and a few (perhaps including Tennessee legislator Sampson W. Keeble) actually fought, generally to protect their own farms or neighborhoods.
Mar. 26 TN Tennessee voters ratify the new state constitution, which includes an anti-slavery amendment.
Apr. 5 TN The Tennessee General Assembly ratifies the 13th Amendment.
Apr. 9 Gen. Robert E. Lee surrenders at Appomattox Court House, Virginia. President Lincoln and General Grant give USCT
regiments the honor of being the first troops to occupy the Confederate capital at Richmond.
Apr. 11 In the last speech he will deliver, President Lincoln makes a rare public endorsement of limited voting rights for black men.
Apr. 14 TN Lincoln is assassinated. Vice President Andrew Johnson, a Tennessee Democrat, becomes President (1865-1869).
Apr. 26 Confederate General Joe Johnston meets with General William T. Sherman in North Carolina to negotiate a surrender. Although CSA President Davis is firmly set against surrender, and many commanders (including Forrest in Alabama) still have troops in the field, the remaining Confederate armies will surrender over the following weeks.
May 29 TN President Johnson issues his Amnesty Proclamation; Johnson's Reconstruction strategy disfranchises large land owners (anyone with taxable property over $20,000) and former Confederate military leaders until their individual petitions for amnesty are approved; the federal government also now requires all states to ratify the 13th Amendment. The most surprising edict among the otherwise strict requirements is that only 10% of the voting population of any Southern state must take a loyalty oath in order for readmission to the Union. Johnson also intends that each state convention declare secession null and void and repudiate the debt each Confederate state has acquired in the war. Unfortunately, the state conventions and leadership will openly defy or circumvent him, thus cutting off their best ally in Washington, since Johnson might have been a useful mediator between the former Confederate states and the congressional Republicans. As a Democrat in a Republican administration that has no respect for him, he is ineffectual against the political realities of 1865-66, even though he has proved himself an anti-secessionist and a convert to the cause of emancipation in Tennessee. [Hunt]
June Southern white men excluded from the general amnesty may begin their appeals for individual pardons on this date.
June 19 “Juneteenth,” the oldest known celebration commemorating the end of slavery -- word of Emancipation finally reaches
slaves in isolated areas of Texas.
August Southern states open Constitutional Conventions to renounce secession, disavow the Southern debt, and ratify the 13th Amendment.
Aug. TN The first State Colored Men’s Convention meets at St. John’s African Methodist Episcopal Church in Nashville.
Delegates call for the final ratification of the 13th Amendment, as well as full citizenship and black suffrage. There is no positive response from the Tennessee General Assembly.
Aug. TN Night riders expand their terrorist activities throughout Tennessee, causing General George H. Thomas to increase the
Union presence in the state.
September President Johnson demonstrates a greater tendency to align himself with white Southern land owners, declaring "white
men alone must manage the South.” He issues a controversial order to return appropriated land to its former owners, even lands granted to freedmen by Sherman’s January 16 Field Order No. 15. Because many freedmen have already settled in and begun farming the land, some are stubbornly resistant to leaving.
October Southern states set local, state, and congressional elections in motion, anticipating full restoration to the Union as soon as they comply with Johnson’s orders.
Nov. 25 Issuance of Mississippi’s first “Black Codes.” Other states also pass laws imposing restrictions on black citizens:
freedmen can work only as field hands; unemployed black men can be auctioned to planters as laborers; black children can be taken from their families and made to work; blacks refusing to sign labor contracts can be penalized; strict laws control vagrancy, apprenticeship, and public transportation. In addition, blacks are forbidden to testify against whites
in court, and they cannot serve on juries, bear arms, or hold large meetings.
December Ulysses S. Grant makes a victory tour of an unexpectedly friendly South and recommends lenient Reconstruction policies.
Dec. 4 The U.S. Senate and House form a Joint Committee on Reconstruction. More than sixty newly-elected Senators and
Representatives from Southern states (all but Mississippi have consented to the presidential requirements for readmission to the Union) are denied their seats in the 39th Congress when the Clerk refuses to include their names in the roll call.
Dec. 6 The 13th Amendment, abolishing slavery, is ratified.
Winter Nashville, Memphis, and other Southern cities begin to experience an influx of freedmen from rural areas that will double the black population of the South’s ten largest cities within five years.
--- TN Nashville Normal and Theological Institute opens under the guidance of the American Baptist Home Mission Society.
(Its predecessor, the “Baptist College,” originally a seminary for African American preachers, began in a private home in 1864.) The school is renamed Roger Williams University in 1883. Its major buildings will be destroyed by fires of suspicious origin in 1905.
Jan. 1 By the beginning of 1866 President Johnson has issued individual pardons to more than 7,000 Southern men denied
amnesty under the $20,000 property clause.
Feb. 2 An African American delegation led by Frederick Douglass meets with President Johnson to advocate black suffrage. Johnson says he will continue to support the interests of Southern whites and vows to oppose black voting rights.
Feb. 19 President Johnson vetoes the bill renewing the Freedmen’s Bureau.
Mar. 27 President Johnson vetoes the Civil Rights Act of 1866. The Civil Rights Bill is designed to put an end to the
Black Codes, which will survive in spite of Congressional efforts and will create a deliberately unequal application of civil law.
Apr. 9 By overwhelming majorities, both houses of Congress override Johnson’s veto of the Civil Rights Act (which prohibits state governments from discrimination on the basis of race). It is the first major bill to become law over a Presidential veto; the rift between Congress and the President deepens.
Apr. 16 Virginia Freedmen parading to celebrate the Civil Rights Act are attacked by whites; five people die in the ensuing riot.
May 1-3 TN A race riot in Memphis results in 48 deaths, five rapes, many injuries, and the destruction of 90 black homes, 12
schools, and four churches.
May 26 TN The Tennessee General Assembly passes legislation giving persons of color the right to make contracts, to sue, to
inherit property, and to have equal benefits with whites under the laws and regarding protection of life and property.
June TN The Ku Klux Klan is founded in Pulaski, TN, by a group of Confederate veterans.
June 13 Congress approves the 14th Amendment and sends it to the states for ratification. The moderate Republican response to the Black Codes and to Johnson’s failure to make self-Reconstruction work, it becomes the core of moderate Congressional Reconstruction. It characterizes citizenship as the entitlement of all people born or naturalized in the United States and increases federal power over the states to protect individual rights, while the daily affairs of the states are left in their own hands. Unpopular with the Congressional Radicals, this amendment will require more than two years to be ratified by the states.
July Congress again overrides a Presidential veto to pass the supplemental Freedmen's Bureau Bill.
July 2 TN Governor (“Parson”) Brownlow, a slave-owner but also a dedicated Unionist, moves to return Tennessee to the Union.
July 19 TN Tennessee, recognizing that the 14th Amendment gives the states broader autonomy to manage constitutional issues
than they expected, becomes the third state – and the first former Confederate state – to ratify the amendment.
July 24 TN Tennessee is the first former Confederate state readmitted to the Union.
Thus the state will be exempt from the intensifying conflict between Congress and other former Confederate states.
July 30 A mob of whites attacks a black suffrage meeting in New Orleans; 38 die, 150 are injured.
August President Johnson undertakes a disastrous speaking tour of the Northern states, accompanied by Ulysses S. Grant;
Johnson’s undignified and spiteful responses to the hostile crowds cost him the support of many Northerners, as well
as the respect of Grant.
Aug. 6 TN The second Tennessee State Colored Men’s Convention meets in Nashville to advocate black suffrage and to
organize demonstrations at the General Assembly. Leaders of the movement include Sampson W. Keeble, Nelson G. Merry, Samuel and Peter Lowery, and others.
November Republicans take more than a 2/3 majority in Congressional elections; they are now guaranteed to override any Presidential vetoes in the coming legislative session.
Dec. 6 President Johnson announces to Congress that the Union has been restored.
--- TN Most of the 356,000 acres confiscated from white Confederate loyalists in Tennessee are returned after 1866. Most former slaves are no more than gang laborers or, at best, share-croppers, working white farms for shares of produce or extremely low wages. Only about 400 black Tennessee farmers own their own land by the end of this year. In Wilson County, for example, blacks own only 30 of the 10,997 acres of farmland.
Jan. 8 Overriding President Johnson’s veto, Congress grants the black citizens of the District of Columbia the right to vote.
Feb. 25 TN The Tennessee General Assembly grants African Americans the right to vote and to hold political office; Governor
Brownlow signs the bill into law the following day.
Mar. TN Tennessee’s African American leaders hold their first political meetings to organize the black vote. By the end of 1867
around 40,000 African American men will have registered to vote.
Mar. TN The Tennessee General Assembly passes an act to reorganize public schools in the state, with provisions for black and white children to be taught in separate schools. The act reestablishes the office of state superintendent of education, and specifies funding and county supervision of the system.
Mar. 2 TN Beginning of “Congressional Reconstruction” – Congress, challenging the ex-Confederate states, Tennessee excepted, which have refused to ratify the 14th Amendment, passes four Military Reconstruction Acts dividing the South into five military districts – existing state and local governments are placed under authority of military commanders until they meet and adopt new state constitutions, ratify the 14th Amendment, and permit black adult males to participate in the process for the first time. [Hunt]
Mar. 2 Howard University is officially incorporated by Congress. Named for Major General Oliver O. Howard, Commissioner of the Freedmen’s Bureau, it is originally conceived as a theological seminary for freedmen, then incorporated as a liberal arts college, primarily for the training of black teachers and preachers, but open to men and women of all races. It is the third university established in Washington, D.C., after Georgetown University (1789) and George Washington University (1821).
Mar. 23 The Second Reconstruction Act (also passed over Johnson’s veto) instructs military commanders to register voters
and call for constitutional conventions, barring from participation anyone in office prior to the war who “gave aid or support to the rebellion.”
April TN Formal political restructuring of the Ku Klux Klan in Nashville, to oppose black equality and Republican leadership. It lists its purposes as
· First: To protect the weak, the innocent, & the defenseless from the indignities, wrongs & outrages of the lawless, the violent & the brutal;
· to relieve the injured & oppressed;
· to succor the suffering & unfortunate, & especially the widows & orphans of the Confederate soldiers.
· Second: To protect & defend the Constitution of the United States;
· Third: To aid & assist in the execution of all constitutional laws, & to protect the people from unlawful seizure, & from trial except by their peers in conformity with the laws of the land.
May TN Induction of Nathan Bedford Forrest into the KKK and his subsequent election as Grand Wizard of the Klan.
June TN The KKK holds its first anniversary parade in Pulaski, Tennessee.
Aug. TN Tennessee holds the South’s first statewide elections to include black voters, electing Republicans in nearly all
positions – governor, congressional seats, and most state legislative posts.
August President Johnson attempts unsuccessfully to fire Secretary of War Edwin Stanton, triggering a deeper conflict with Congress and causing a final breach with Ulysses S. Grant.
Aug. 22 TN
at Vanderbilt University until Joseph A. Johnson is admitted to the Divinity School in 1953.]
Sept. TN Black Nashvillians vote for the first time in city elections, electing two black councilmen; one of the two is not seated,
and a white councilman is appointed to the seat.
will become part
October Voter registration is completed in the ten Southern states subject to the Reconstruction Acts.
November Diminishing Republican strength in the Northern states convinces the party to win the South over before the next Presidential election. The party platform is set up to include equality for African Americans.
--- TN Thomas A. Sykes is elected to the first of five one-year terms in the North Carolina legislature, serving from 1868-1871.
Dec. TN First reports of Ku Klux Klan night-riding surface in Middle Tennessee.
Dec. 10 TN A fairground is established on Murfreesboro Road near Nashville by leaders of the Colored Agricultural and Mechanical Association. Its annual fair each fall serves to build a strong voting base among area freedmen and brings to Nashville such nationally important black political leaders as Frederick Douglass and John Mercer Langston.
--- Every legislator pictured in a photograph of the 1868 Louisiana State Legislature is black.
Southern lawmakers, both black and white, begin to work together in the constitutional conventions, the first political meetings in the South in which black and white delegates participate together.
April Hampton Normal & Agricultural Institute opens in Hampton, Virginia. Like Fisk, it is founded to educate freedmen and to train black teachers.
May 16 Andrew Johnson, the first President to be impeached by the House of Representatives (on February 24), escapes conviction in the Senate by a single vote; further votes on May 26 also fall one vote short, and he retains his office.
May 20 James J. Harris and P. B. S. Pinchback are the first African American delegates to a Republican National Convention. They support the nomination of U. S. Grant for President. Grant is nominated unopposed on the first ballot.
June 13 Oscar J. Dunn, a former slave, is elected lieutenant governor of Louisiana.
June 22 Arkansas is the 2nd state readmitted to the Union, 2 years after Tennessee.
June 25 Florida, Louisiana, North Carolina, and South Carolina rejoin the Union.
July 4 TN Ku Klux Klan members make a public show of their organization’s strength with parades and confrontations throughout the state.
July 9 Rev. Francis L. Cardozo (1837-1903) is elected Secretary of State in South Carolina, becoming the first African American to hold statewide office in the United States.
July 14 Alabama is readmitted to the Union.
July 27 TN Governor Brownlow calls the TN Legislature into special session to demand that any further Ku Klux Klan activity be
punished with death.
July 28 TN The Fourteenth
Amendment is finally
ratified by enough states to become law.
Aug. 28 TN Nathan Bedford Forrest, who claims 40,000 KKK members in Tennessee and a total of 550,000 across the South that he can call up if needed, gives a widely reported newspaper interview.
September The Georgia State Legislature expels its newly elected black legislators. The Atlanta Constitution supports the move, saying, “The Negro is unfit to rule the State.” Military rule is eventually reimposed on the state, but it will be a full year before the legislators are readmitted.
Sept. TN Between 1868 and 1870, Greene E. Evans is admitted to Fisk University, where he pays his way by hauling gravel,
laying sod, and teaching school in the summertime in a schoolhouse he built himself. [Marsh]
Sept. TN Five African Americans are elected to the Nashville City Council. The Tennessee General Assembly enacts an “anti-Klan” law with penalties for “prowling” by night, in or out of disguise, “for the purpose of disturbing the peace, or alarming the peaceable citizens”; for advising resistance to the law; or for threatening or intimidating a voter.
Sept. 11 TN President Johnson meets with a group of TN legislators, who assure him that the new militia law will be used only in extreme circumstances, or when federal troops are unavailable.
Sept. 16 TN Governor Brownlow issues a call for militia companies to form throughout the state and assemble in Nashville.
Sept. 28 The Opelousas Massacre in Louisiana results in the death of 200-300 blacks at the hands of violent whites, many of
them Confederate veterans and prominent citizens.
Nov. 3 TN U. S. Grant is elected President. Southern black men, voting in their first national election, cast 700,000 votes for the Republican ticket. Many of the less wealthy white voters also vote Republican, reflecting the growing class conflict between poor farmers and wealthy plantation owners. East Tennessee, a stronghold of Unionism during the war, is already strongly Republican; the high Republican vote in West Tennessee, where most black voters live, reflects a combination of black & white voting power.
--- TN Tennessee is the first state to replace a bi-racial Republican state government with an all-white Democratic government, followed over the next several years by the other former Confederate states.
--- Massachusetts elects two African Americans to its State House of Representatives: Edward G. Walker and Charles L. Mitchell become the first African Americans to serve in a legislative assembly.
Winter TN The Freedmen’s Bureau reports that there are now nearly 3,000 schools in the South, serving over 150,000 black students. [Integration of schools will come much more slowly: it is not until May 1957 that Bobby Cain, a student
at Clinton High School, Clinton, Anderson County, Tennessee, will become the first African American to graduate from a state-supported integrated public high school in the South.]
Feb. 26 Congress approves the 15th Amendment, stating that “race, color, or previous condition of servitude” will not be used to bar U.S. male citizens from voting; they send it to the states for ratification.
Feb. 27 John W. Menard, elected as a Republican from Louisiana to the House of Representatives, is barred from his seat by white Congressmen and pleads his case to be seated, becoming the first African American representative to speak on the floor of the House. Congress still refuses to seat Menard.
Mar. 4 U.S. Grant is inaugurated the nation’s eighteenth President (1869-1877).
--- By the end of the 41st U.S. Congress, two African Americans will have been seated: Robert Brown Elliott and Joseph
H. Rainey, both of South Carolina.
--- TN Following a private meeting with President Grant, Nathan Bedford Forrest issues a document disbanding the Ku Klux Klan, stating that it is "being perverted from its original honorable and patriotic purposes, becoming injurious instead of subservient to the public peace." Forrest’s actions may be motivated, at least in part, by hopes of avoiding punishment for the illegal activities of an organization that is largely out of control. The Klan has been extremely violent for years under his leadership, and he disbands it only when it comes under intense criticism (and when its work is essentially done — many blacks and Republicans have already been frightened away from the polls). Whatever Forrest’s motives, Klan violence most assuredly does not end with his declaration.
Apr. 6 President Grant appoints Ebenezer Don Carlos Bassett minister to Haiti.
May 10 The first rail line to cross the continent is completed. The railroad network that will now develop is the major factor in the emergence of a new industrial age, which will dramatically change the nation’s labor and employment patterns.
Sept. 11 TN African American city councilman Randal Brown urges Nashville blacks to join the Black Exodus and homestead movement westward; other leaders express concern about the Chinese laborers being brought in to replace black workers.
October As brutal attacks on African Americans continue throughout the South, Georgia legislator Abram Colby, the black son of a white planter, is kidnapped and whipped by the Klan. Although his back is permanently injured and he loses the use of his left hand, he returns to the legislature and continues to campaign against Klan violence.
Nov. 16 TN Tennessee rejects the 15th Amendment, and does not join other states in post-ratifying it until 1997. It will be the last state to ratify.
--- The 1870 Census shows that African Americans make up 12.7% of the U.S. population.
--- TN Although blacks comprise one-third of Middle Tennessee’s population, only six percent of black families own their own land.
--- Most of the black members remaining in the Methodist Episcopal Church, South, leave (with the denomination's blessing) to form the Colored Methodist Episcopal Church (today’s Christian Methodist Episcopal Church).
--- TN Due to the political skills of African American leader Edward Shaw, who holds the post of wharf master in Memphis, Shelby County elects as many as six black city councilmen during the 1870s and 1880s.
--- TN A series of yellow fever epidemics will devastate Memphis for the next decade, killing hundreds of people, and even
causing the State of Tennessee to revoke the city’s charter in 1879 because of the collapse of the city’s financial base.
--- TN A large number of convicts are leased from the main prison in Nashville to three separate railroad companies in
Jan. 10 Grant proposes a treaty to annex what is now the Dominican Republic in an effort to find land where freed slaves can
settle. The Senate Foreign Relations committee opposes the plan, and the treaty is never approved.
Jan. 10 TN The Tennessee Constitutional Convention begins.
Jan. 26 Virginia is readmitted to the Union.
Feb. 3 Jasper J. Wright, an African American judge, is elected to the South Carolina Supreme Court.
Feb. 17 TN The 15th Amendment to the Constitution is ratified by 29 of the 37 states, guaranteeing the right of African American
men to vote. 1869: Nevada, West Virginia, North Carolina, Louisiana, Illinois, Michigan, Wisconsin, Maine, Massachusetts, Arkansas, South Carolina, Pennsylvania, New York (which then rescinds its approval), Indiana, Connecticut, Florida, New Hampshire, Virginia, Vermont, and Alabama. 1870: Missouri, Minnesota, Mississippi, Rhode Island, Kansas, Ohio, Georgia, Iowa, and (satisfying the 29-state requirement, in case NY’s withdrawal is effective) Nebraska. The amendment is rejected by Maryland, Kentucky, & Tennessee. Eventually all the remaining states post-ratify the amendment: Texas (2-18-1870), New Jersey (2-15-1871), Delaware (2-12-1901), Oregon (2-24-1959), California (4-3-1962), Maryland (5-7-1973), Kentucky (3-18-1976), and Tennessee (April 3, 1997).
Feb. 23 TN The Tennessee Constitutional Convention ends, having adopted the Constitution that is still in effect today. It outlaws slavery and ensures universal suffrage. The Supreme Court will later strike down provisions forbidding interracial marriage, blocking integrated schools, and allowing a poll tax.
Feb. 23 Mississippi is readmitted to the Union.
Feb. 25 Hiram Revels, a Republican from Mississippi, is sworn in as the first black member of the United States Senate. Ironically, Revels is elected to fill the position vacated by Jefferson Davis nearly 10 years earlier. Revels serves only through March 1871.
Mar. 17 North Carolina Governor Holden sends for federal troops to help control the Ku Klux Klan. Public backlash will cost him the next election.
Mar. 30 Texas is readmitted to the Union.
May 31 President Grant signs the First Enforcement Act. These “Force Acts” make the bribing, intimidation, or racial discrimination of voters federal crimes. They also authorize the use of federal troops against the KKK, outlawing conspiracies to prevent the exercise of constitutional rights. Three such laws are passed between May 1870 and April 1871. All are declared unconstitutional in United States v. Cruikshank (1876).
July 15 Georgia is readmitted to the Union – the last of the Confederacy to return.
Dec. 12 Joseph Hayne Rainey, born a slave in 1832, is sworn in to fill an unexpired term in the U.S. House of Representatives.
A South Carolina Republican, he will be re-elected four times, serving until 1879, thus becoming the longest-serving black Congressman until the 1950s.
--- The General Assembly establishes branch penitentiaries in the East Tennessee coal fields and begins the practice of leasing prisoners to work in the mines. By 1884 the Tennessee Coal, Iron, and Railway Company has taken complete control and leases the entire prison population.
Mar. 4 During the 42nd U.S. Congress, there are five black members in the House of Representatives: Benjamin S. Turner of
Alabama; Josiah T. Walls of Florida; and Robert Brown Elliot, Joseph H. Rainey, and Robert Carlos DeLarge of South Carolina.
Apr. 20 The Ku Klux Klan Act becomes law, allowing President Grant to suspend habeas corpus in enforcing the
Fourteenth and Fifteenth Amendments.
active departments: normal, commercial, and music.
Oct. 6 TN The Fisk Jubilee Singers leave Nashville on their first American concert tour to raise money for the college. Among
the eleven students on the tour is baritone Greene Evans, who will be elected to the General Assembly ten years later. Director George White has planned a route in keeping with the Underground Railroad: over the next eighteen months, beginning in Cincinnati, the group will visit Ohio, Pennsylvania, New York, Connecticut, Rhode Island, Massachusetts, New Jersey, Maryland, and Washington, D.C., giving hundreds of performances, and raising $40,000 for Fisk University. Although the Singers perform many types of music, it is their performance of Negro spirituals that awakens an interest in this genre of music and becomes the distinctive signature of the group.
Oct. 12 Congress listens to testimony from victims of Klan violence in the South. Grant takes action: having ordered the Ku
Klux Klan in SC to disperse and surrender arms, he quickly sends in federal troops to suppress the Klan.
Oct. 17 The last of a series of anti-Klan enforcement acts is passed, providing protection to African Americans voting in
federal elections. Nonetheless, both black and poor white voters will increasingly be kept from voting by locally enforced poll taxes as well as literacy tests and property ownership requirements. However, blacks do represent a considerable voting force in the South for some time, sometimes combining with various groups of “populist” white voting blocs. African American political disfranchisement will not be complete until after the enactment of the Mississippi state constitution in 1890.
--- TN The Memphis Weekly Planet becomes West Tennessee’s first African American newspaper.
--- Vanderbilt University is chartered under the name of Central University of the Methodist Episcopal Church.
Feb. 27 Charlotte Ray (daughter of Charles Bennett Ray, who has been editor of the Colored American, an important early New York newspaper, and is also pastor of the Bethesda Congregational Church) graduates from Howard University. She is the first African American woman lawyer in the United States and the first woman admitted to the bar in the District of Columbia, which has removed the term “male” from the requirements for the bar.
Mar. 4 TN The Fisk Jubilee Singers perform for Vice President Colfax and members of Congress but are forced to leave their
Washington, D.C., hotel because of their race.
Mar. 5 TN The Fisk Jubilee Singers perform for President Grant at the White House.
May 1 At the Liberal Republican Convention in Cincinnati, party leaders, displeased with vindictive Reconstruction policies
and corruption (which they call “Grantism”) nominate newspaperman Horace Greeley.
May 6 TN The Fisk Jubilee Singers embark on a year-long concert tour of Great Britain that will earn $50,000 for the
university and earn them invitations to sing for Queen Victoria and other European monarchs.
May 22 President Grant signs the Amnesty Act, restoring full civil rights to all white Southern men except about 500 former Confederate leaders.
June 5 At the Republican Convention in Philadelphia, the party re-nominates Ulysses S. Grant on the first ballot.
July 1 Congress terminates the Freedmen’s Bureau.
July 9 The Democratic party joins the Liberal Republicans in nominating Horace Greeley for President. [See entry for May 1, 1872]
Sept. 21 John Henry Conyers of South Carolina becomes the first black student at the Annapolis Naval Academy.
Nov. 5 Ulysses S. Grant is re-elected with a popular majority of 763,000 and an electoral college majority of 286-66 over
opponent Horace Greeley.
Dec. 9 Pinckney Benton Stewart Pinchback of Louisiana becomes the nation’s first African American governor; however,
because of white antipathy he serves only very briefly, leaving office on 13 January 1873.
--- TN James T. Rapier, educated in Nashville’s free black schools, becomes the first black congressman from Alabama.
Jan. 6 TN Sampson W. Keeble takes his seat as the first African American member of the Tennessee State Legislature in the 38th General Assembly, 1873-1875. He is appointed to the committees on Immigration, Military Affairs, and Tippling and Tippling Houses, and is later added to the committee on Charitable Institutions. He introduces three bills, none of them successful, and frequently speaks in favor of protecting the wages of laborers.
Winter The New York Tribune publishes a series of articles accusing black lawmakers in South Carolina of corruption.
first reading but does not receive a second – the legislature adjourns one week later.
Apr. 13 The Colfax Massacre—a paramilitary group known as the White League, part of a "shadow government" in Louisiana
(and similar in many respects to the Ku Klux Klan), clashes with the state militia, which is largely black. Three members of the White League die in the attack, but about 100 black men are killed, nearly half of them slaughtered in cold blood after their surrender. Similar incidents occur about the same time in Coushatta and New Orleans. President Grant sends federal troops to restore order.
--- TN Frederick Douglass, speaking in Nashville, urges black Tennesseans to stay and fight for racial justice rather than to
join the Black Exodus west.
Sept. 18 The Panic of 1873 plunges the nation into a depression.
--- Democrats control both Houses of Congress for the first time since before the Civil War.
June 29 The Freedmen’s Bank closes. Originally created to provide a safe place for black soldiers to deposit their pay, the bank rapidly becomes the financial base of many in the African American community, devastating them when it closes. Contrary to what depositors have been led to believe, the bank’s assets are not protected by the federal government. In spite of desperate attempts to revive the bank (Frederick Douglass pours thousands of dollars of his own money into an effort to save it), half the depositors will eventually get back only about 60% of their money; others receive nothing. Some depositors and their descendants spend as many as thirty years petitioning Congress for reparation.
Fall As the fall elections approach, reports of Southern violence, political corruption, and economic depression give a
considerable advantage to the Democrats, who will take control of Congress when it convenes in 1875.
--- TN Knoxville College opens during this year as a normal school sponsored by the United Presbyterian Church of North
America. Designated a college in 1877, it offers teacher training; college courses in classics, science, and theology; classes in agriculture, industrial arts, and medicine. Because, in these early years, so few blacks are prepared for higher education, the college initially offers classes from first grade through college level. The elementary department will be discontinued in 1926 and the academy (high school) in 1931.
Jan. 26 Andrew Johnson is elected to the U.S. Senate as a Democrat from Tennessee.
Mar. 1 The Forty-Third Congress, which has seven black members and is still under the control of the Republicans, passes the Civil Rights Act of 1875, which outlaws racial segregation in public facilities and housing and prevents the exclusion of African Americans from jury service. (Not enforced in the South, the law will be struck down by the Supreme Court in 1883.)
Mar. 5 Blanche Kelso Bruce takes his seat as the United States Senator from Mississippi. He will be the first African American
Senator to serve a full six-year term.
Mar. 11 TN The Tennessee Legislature passes House Bill No. 527 permitting racial discrimination in transportation, lodging, and
places of entertainment. The Bill receives Senate approval before the end of the month and is signed into law (Chapter 130).
Mar. 23 TN Chapter 90 of the Acts of Tennessee 1875 orders the establishment of a state normal school or schools, the creation of a State Board of Education, and the requirement that separate schools “for white and colored pupils” should be established.
May 5 TN The Fisk Jubilee Singers return to the U.S., having raised $50,000 for the University during a year-long British tour.
July 5 TN African American preacher Hezekiah Hanley holds a celebration of racial unity in Memphis. Among the invited guests
are Nathan Bedford Forrest and other former Confederate generals.
July 31 TN Andrew Johnson dies of a stroke and is buried in Greeneville, Tennessee.
Dec. 1 TN The Inaugural Exercises of the State Normal College, known as “The Peabody State Normal School of the
University of Nashville,” are held in the House of Representatives. This particular institution accepts white students only.
--- TN Styles L. Hutchins graduates from University of South Carolina Law School and is admitted to the South Carolina bar.
--- TN William F. Yardley, a Knoxville politician, becomes the first African American to campaign for governor of Tennessee.
Apr. 5 TN The Colored National Convention meets in the House Chamber of the Tennessee General Assembly. Eighteen states and the District of Columbia are represented. Tennessee delegates are W. Sumner, Abram Smith, Edward Shaw, and James C. Napier. Former Louisiana Governor Pinckney Benton Stewart Pinchback and Senator H.S. Smith of Alabama deliver speeches considered the “high point of the convention.” The Convention’s efforts to choose and endorse a Presidential candidate are unsuccessful, although Edward Shaw, Memphis wharf master, speaks out strongly against the Grant administration. [Walker]
Oct. 13 TN Meharry Medical College, the first American college for the training of African American physicians, opens in Nashville. The Freedmen’s Aid Society of the Methodist Episcopal Church helps establish Meharry as a department of Central Tennessee College.
Nov. 7 Edward Bouchet becomes the first African American to receive a Ph.D. from an American institution (Yale University).
Nov. 8 The bitterly disputed Presidential election takes place between candidates
Samuel J. Tilden (D) and Rutherford B. Hayes (R).
Nov. 9 Because of allegations of voting fraud in four states, there is no certain victor in the Presidential election. Tilden
receives 184 electoral votes and Hayes, 165; 21 votes are uncertain. Both candidates claim victory.
--- TN John W. Boyd is elected as magistrate of the Ninth Civil District, Tipton County.
--- By this year about 2,000 African American men have held/are holding public office, "ranging from member of
Congress to justice of the peace.” In spite of prohibitions against educating slaves, “83 percent of the black officials [are] able to read and write.” Twelve percent of them are lawyers or school teachers. [Foner]
--- TN From an African American prison population of 33 percent at the main prison in Nashville, the number has now risen to
67%. Other Southern states also have predominantly black prison populations, far out of proportion to the percentage
of blacks in the general population.
--- TN Sampson W. Keeble is elected a magistrate in Davidson County. He will serve until 1882.
Jan. 24 Congress appoints a 15-member electoral commission to resolve the disputed election. In what is little more than a back-room deal, the Republicans agree to abandon Reconstruction policies in exchange for the Presidency. The so-called “Compromise of 1877” results in an end to military intervention in the South and restores “home rule.”
Mar. 5 TN Rutherford B. Hayes is inaugurated the nation’s nineteenth President (1877-1881). He quickly withdraws federal troops from the South, and ends federal support for the remaining Reconstruction governments. This agreement officially ends Reconstruction. The South begins the process of codifying and enforcing segregation. Although Tennessee will elect a number of black politicians over the next few years, the last African American state legislator will end his term in 1893, and no other will be seated until 1964. Violations of black civil rights will not again be addressed on a national scale until after World War II.
Mar. 15 The Nation reports that “the great body of the Republican party is ... opposed to the continuance at the South of the policy of military interference and coercion as pursued by General Grant.”
June 14 Henry Ossian Flipper becomes the first African American to graduate from West Point.
--- TN James Carroll Napier, an 1872 graduate of the Howard University Law School, is elected the first black city councilman in Nashville, serving five terms. He will later serve as Register of the United States Treasury under President William Howard Taft (1911-1913).
--- TN Thomas F. Cassels is appointed assistant attorney general of
--- TN East Tennessee University, one of the earliest land-grant colleges, is renamed the University of Tennessee.
--- The 1880 Census shows that African Americans make up 13.1% of the U.S. population (6,580,793 of 50,155,783).
--- Styles L. Hutchins becomes the first black attorney admitted to the Georgia bar, despite legal efforts to block him
from taking the test.
--- The National Baptist Convention, USA, has its beginnings in a meeting of 150 Baptist pastors in Montgomery, Alabama.
--- TN Even at this late date, 50%-60% of rural freedmen continue to work as wage laborers, many on the same farms on
which they were once slaves.
--- TN Four African Americans are elected to the Tennessee General Assembly: John W. Boyd of Tipton County, Thomas F. Cassels and Isaac F. Norris of Shelby County, and Thomas A. Sykes of Davidson County.
--- TN The Black Exodus to Kansas and other Western states, which began about 1872, comes gradually to an end. More than 2,400 people have migrated from Nashville alone.
--- TN During 1881, despite the black representatives in the House, the 42nd Tennessee Legislature passes the first “Jim Crow” law in the South, requiring the segregation of the races on railroad cars. By 1900 all Southern states will have segregated their transportation systems, a move sanctioned by the U.S. Supreme Court in 1896 with the Plessy v. Ferguson decision. Future laws will be passed that discriminate against African Americans regarding public school attendance, housing, and the use of public facilities such as restaurants, theaters, and hotels. In 1967, when the Court rules miscegenation laws unconstitutional, 16 states will still have laws prohibiting interracial marriage. It will be November 2000 before Alabama, the last hold-out, repeals its law – although 40% of the electorate votes to keep it!
--- TN John W. Boyd, a Republican, represents Tipton County in the 42nd and 43rd General Assemblies, 1881-1885. He is appointed to the committees on Immigration, New Counties and County Lines, and Tippling and Tippling Houses.
--- TN Thomas Frank Cassels is a Republican from Shelby County, serving in the 42nd General Assembly from 1881 to 1883. He is appointed to the committees on Education and Common Schools, Judiciary, Privileges and Elections, and Public Roads.
--- TN Isaac F. Norris is a Republican from Shelby County, serving in the 42nd General Assembly from 1881-1883. He is appointed to the committees on Banks, Claims, Immigration, and Public Grounds and Buildings.
--- TN Thomas A. Sykes represents Davidson County in the legislature, in spite of decreased black voting strength brought on by a new poll tax and acts of violence against blacks. A Republican, he is appointed to the committees on Claims and Penitentiary.
--- TN Styles L. Hutchins opens a law office in Chattanooga and becomes a partner in a newspaper, The Independent Age,
of which he is editor.
its first reading and is referred to the Judiciary Committee. It passes its second reading February 22.
racial discrimination in the use of public facilities and transportation. It passes first and second readings.
negroes, mulattoes and persons of mixed blood descended from the negro race, and to proscribe the punishment for violation thereof.” It passes first and second readings.
arrangements for persons of color who may be entitled to admission." It passes its first reading and is referred to the
Judiciary Committee; after passing its second reading, it is referred to the Committee on Education and Common Schools, where it is tabled.
Feb. 16 TN Thomas A. Sykes introduces House Bill No. 289, to admit African American students “into the school for the blind at Nashville and the school for the deaf and dumb at Knoxville, in separate accommodations provided for them.” The bill passes its first reading and is referred to the Judiciary Committee. A week later it passes its second reading.
Chapter 131 of an act passed
Feb. 24 TN After two vicious lynchings in Springfield, the General Assembly has passed a resolution condemning “this violation of law as tending to subvert all government, and as deserving prompt punishment”; legislators have also passed a bill to punish any sheriff whose negligence allows a prisoner to be taken from his custody “and put to death by violence.” Hoping to take advantage of the legislature’s unanticipated disposition toward justice, Thomas F. Cassels introduces House Bill No. 478 to compensate families of the victims of mob violence. His bill passes the first reading but dies in committee.
Feb. 25 TN House Bill No. 33 (by Isaac F. Norris), relating to labor contracts, passes its third reading by a vote of 38-25.
Feb. 26 TN Isaac F. Norris introduces House Bill No. 510, concerning the payment of wages of laborers. It passes its first reading and is referred to the Judiciary Committee. It passes its second reading 29 March; there are no further references.
Mar. 4 James A. Garfield is inaugurated the nation’s twentieth President (1881).
Mar. 10 TN Thomas A. Sykes introduces House Bill No. 560, to eliminate discrimination against blacks in jury selection for circuit and criminal courts. The bill passes its second reading March 29 but is apparently tabled before being brought to a vote.
Republicans join Democrats in voting against it.
Mar. 24 TN House Bill No. 73 is taken up as a special order. A number of amendments are offered; Cassels' attempt to call the
previous question on the passage of the bill fails for lack of a second; a motion to table the bill and all amendments prevails.
Mar. 30 TN The four black legislators [Boyd, Cassels, Norris, and Sykes] file a protest against the rejection of House Bill No. 70, saying that Chapter 130 “authorizes railroad companies and their employes, unjustly, cruelly, wantonly, without just cause of provocation, and in violation of the common law and the laws of the general government, to oppress and
discriminate against more than four hundred thousand citizens of the State of Tennessee, and the colored people of all other States who may desire to travel in Tennessee,” and that it “wickedly, cruelly, and inhumanly attempts to deny to persons aggrieved by the provisions of the said act any remedy or redress of grievances in the State courts of Tennessee.”
Mar. 30 TN Isaac F. Norris introduces House Bill No. 682, concerning discrimination against railroad passengers (referring to Chapter 130, Acts of Tennessee, 1875). The bill passes its first and second readings (March 30 and March 31), but is subsequently tabled.
Mar. 30 House Bill No. 289, admitting black students into the school for the blind and the school for the deaf and dumb, passes
by a vote of 59-1 and becomes law.
Apr. 7 TN The Tennessee House of Representatives passes a “compromise” bill, Senate Bill No. 342, permitting “separate but equal” facilities for African Americans on trains. This bill requires railroad companies either to partition off a portion of a first-class car for black passengers who have paid first-class fare, or to provide separate cars for blacks. Having passed the Senate 18-1, it passes the House 50-2. Norris and Sykes vote against the bill; Boyd is absent; Cassels abstains. Thirteen other Southern states will follow Tennessee’s lead and segregate public carriers over the next few years.
Apr. 14 TN The General Assembly passes a $10,000 appropriations bill for the State Normal College, which will be augmented by
a $6,000-9,000 grant from the Peabody Education Fund for student scholarships.
Apr. 14 TN The State Board of Education reports that it is authorized by the General Assembly to spend "$10,000 annually for
Normal School purposes,” $2,500 of which is reserved “for the normal education of colored teachers.” The Board meanwhile invites the state’s black colleges to submit proposals “to educate the colored candidates for teachers.”
June 3 TN The State Board of Education asks the governor to notify the legislature “that only $2,500 in gross is appropriated for the Colored Normal School.”
June 15 TN The State Board of Education appropriates $50 per year for the education of each African American scholarship
student. That gives each Senatorial district two black students, who will be appointed by the Senator from that district from among those receiving the highest scores on a standard examination. The schools approved for the education of normal students are Knoxville College, Knoxville; Freedmen’s Normal Institute, Maryville; Fisk University, Nashville Theological and Normal Institute, and Central Tennessee College, Nashville; and LeMoyne Normal Institute, Memphis.
July 2 President James Garfield is shot by assassin Charles Guiteau. Garfield will lie in the White House for weeks, mortally
wounded but clinging to life as doctors attempt to save him.
July 4 The first president of Tuskegee Institute, Dr. Booker T. Washington, who was born a slave, officially opens the Normal
School for Colored Teachers in Macon County, Alabama. Washington is a champion of vocational education as a means to African American self reliance.
Sept. 19 President Garfield dies, more than eleven weeks after he was shot. Chester A. Arthur, a Republican from New York, becomes the twenty-first President (1881-1885).
Nov. 30 TN Jessee [sic] Graham is listed in the State School Board minutes as a recipient of a Peabody Scholarship to attend
--- TN More than half the convicts in the Tennessee State Prison at Nashville are now being leased out as laborers.
--- TN Between 1882 and 1930 Tennessee has 214 confirmed lynching victims: most in middle and west Tennessee, most
(83%) African Americans.
--- TN Charles Spencer Smith founds the Sunday School Union of the A.M.E. Church at 206 Public Square, Nashville. The publishing house is the first and only steam printing establishment owned and managed by an African American. Smith, elected in 1874 to a term in the Alabama House of Representatives, received a medical degree from Central Tennessee College in 1880. In 1900 he will become a bishop of the A.M.E. Church and in 1911 will be the first black to receive a Doctor of Divinity degree from Victoria College in Toronto. [Roseman]
--- The Supreme Court rules in United States v. Harris that the Klan Act (see April 20, 1871) is partially unconstitutional, asserting that Congress’s power under the 14th Amendment does not apply to private conspiracies.
Apr. 6 TN In the second extra House Session, Thomas A. Sykes introduces House Bill No. 3, “To exempt educational institutions from taxation.” It passes the first and second readings and is referred to the Committee on Education and Common Schools. It is eventually tabled.
--- In the Civil Rights Cases, the Supreme Court strikes down the federal Civil Rights Act of 1875. Congress may no longer legislate on civil rights issues unless states pass discriminatory laws.
--- TN Leonidas (Leon) Howard is elected to represent Shelby County in the 43rd General Assembly from 1883 to 1884. A Republican, he helps defeat two blacks (one is Isaac Norris) running on the Democratic ticket. He is appointed to the committee on Military Affairs.
--- TN Samuel Allen McElwee, a Republican, is elected to the 43rd (as well as, later, the 44th and 45th) General Assembly, representing Haywood County from 1883-1888. He is appointed to the committees on Military Affairs and Public Printing.
--- TN David F. Rivers is elected to represent Fayette County as a Republican in the 43rd and 44th General Assemblies, 1883-1886, although he is not able to serve his second term. He is appointed to the committees on Education and Common Schools, Federal Relations, and Public Printing. Although there are twice as many black residents in Fayette County as white, the county will send only two African American representatives to Nashville: Rivers (1883-1884) and Monroe W. Gooden (1887-1888).
--- TN John W. Boyd serves a second House term representing Tipton County. He is appointed to the committee on
passes its first reading and is referred to the Committee on Education and Common Schools. It passes its second reading on 16 January.
Judiciary Committee, where it is tabled.
Jan. 10 TN Leon Howard introduces House Bill No. 129, To repeal sections 2437a and 2437b of the Code, in regard to illicit
intercourse. It passes first reading and is sent to the Judiciary Committee, where it dies.
Feb. 8 TN In his annual report to the General Assembly, Governor William Brimage Bate (1826-1905) recommends legislation
authorizing the appointment of an Assistant Superintendent of Public Instruction, who will be responsible for the education of African American students.
Feb. 15 TN House Bill No. 12 has been made the special order for the session, having been passed over three times earlier.
McElwee reduces the appropriation to black students, but the House votes to table the bill; however, they prove willing to approve the committee’s bill on the same subject and appropriate $3,300 per year for normal school scholarships for African American students, making each scholarship worth $50.
Public Schools. It passes its first and second reading and is referred to the Committee on Education and Common Schools, where it is tabled.
jurors.” The bill passes its first and second readings, but there are no further references to it after that.
etc. The bill passes its first and second readings but is tabled by the Judiciary Committee.
paying first-class fare. This bill is one of several representing the black legislators’ more tightly focused effort to weaken the power of Chapter 130 of the Acts of 1875. It passes its first and second readings and is referred to the Judiciary Committee.
Mar. 21 TN After hours of debate, Leon Howard offers an amendment repealing only the provision of the Act of 1875 that pertains to railroads; it is defeated by a vote of 64-27.
Mar. 24 TN W. A. Milliken offers an amendment to Boyd's House Bill No. 663, requiring railroad companies to provide separate cars for different passengers. It passes by a vote of 56-19, with Boyd voting against it, and Howard and McElwee (both deeply opposed to the separate-but-equal provision) abstaining.
Apr. 24 TN David F. Rivers is listed as the recipient of a Peabody Scholarship in the minutes of the State Board of Education. Appointed by Senator Cason, District 12, he attends Roger Williams University.
May 1 TN Eben S. Stearns, President of the Peabody Normal College, lists the "Requirements for Obtaining and Holding Peabody
Scholarships at the Normal College at Nashville, Tenn.” Students meeting all the scholarship requirements can receive up to $200 per year for board and other college expenses.
Oct. 15 The Supreme Court declares the Civil Rights Act of 1875 unconstitutional, finding that the 14th Amendment forbids
states, but not individual citizens, from discriminating.
Nov. 26 Death of Sojourner Truth (Isabella Baumfree, born 1797), ardent abolitionist and powerful public speaker.
--- TN Ida B. Wells files a lawsuit against the Chesapeake, Ohio & Southwestern Railroad Company for segregation on the company’s railroad cars. Thomas F. Cassels is her first lawyer. [Goings] Wells will soon replace him for being too accommodating to the railroad lawyers.
Feb. 28 TN More than 300 black leaders from 17 Tennessee counties meet in Nashville to discuss the role of African Americans in
local and national elections. The largest delegations are from Shelby County, with 62 delegates; Davidson, 52; and Haywood, 48. Thomas F. Cassels, serving as chairman, shares his concerns that many current state laws violate the constitutional rights of black Tennesseans. James C. Napier, the keynote speaker, stresses the need for political unity among black voters. Samuel A. McElwee’s demand that black unity occur within the Republican party stirs up enormous controversy. The convention ends by warning that failure to support black causes will erode black commitment to the party.
--- TN At the State Republican Convention, Samuel A. McElwee is elected temporary chairman and is chosen as one of two delegates (the other is General George Maney) to the Chicago Presidential Convention, which nominates James G. Blaine.
June 24 John Lynch is the first black to be elected chairman of the Republican National Convention.
Nov. 4 Grover Cleveland, a Democrat from New York, is elected president.
--- TN John W. Boyd challenges his loss in the Senate election for Tipton and Fayette counties, claiming fraud when the District 4 ballot box mysteriously disappears. Although he carries his challenge to the State Senate, members vote to seat his opponent.
--- TN Greene E. Evans is elected Republican representative from Shelby County to the 44th General Assembly, 1885-1886. He is on the committee on Education & Common Schools.
--- TN William A. Feilds is elected to represent Shelby County in the 44th General Assembly from 1885-1886. A Republican, Feilds is a school teacher and principal in the 5th Civil District of Shelby County. He is appointed to the committees on Federal Relations, Internal Improvement, and Public Roads.
--- TN William C. Hodge is the first black legislator elected from Hamilton County, serving as a Republican in the 44th General Assembly from 1885-1886. He is appointed to the committees on Education and Common Schools, Military Affairs, and Penitentiary.
--- TN Samuel A. McElwee, serving a second term in the legislature representing Haywood County, receives the Republican nomination for Speaker of the House. Though the nomination is largely symbolic in the Democratic-controlled legislature, McElwee receives 32 votes. He serves on the committee on Banks. During this year his wife dies, leaving him with two small children. Placing the children with relatives, he enters Central Tennessee College, earning a law degree the following year.
--- TN David F. Rivers is listed in the Biographical Directory of the Tennessee General Assembly, Volume II, 1861-1901, as a member of the 1885 General Assembly, but does not appear in any records in the House Journal for that year. According to family members, Rivers, having been driven out of Fayette County by racial violence, does not serve out the legislative term to which he has been elected but moves to Nashville and takes a position teaching theology at Roger Williams University.
Jan. 19 TN William C. Hodge introduces House Bill No. 139, To amend the road law of 1883. It passes first reading and is referred to the Committee on Public Roads. It is tabled on its second reading on February 27.
Jan. 19 TN William C. Hodge introduces House Bill No. 140, To amend the road law. It passes first reading and is referred to the Committee on Public Roads. On its second reading on February 27, it is tabled.
advertisements. The bill passes its first reading and is referred to the Judiciary committee. It will pass its second reading on January 24.
Committee on Public Roads. Returned to the House on March 2, it will be tabled.
Feb. TN In his annual report to the General Assembly, Governor William Brimage Bate (1826-1905), for the second time, urges
legislation authorizing the appointment of an Assistant Superintendent of Public Instruction, responsible for the education of African American students.
second readings, but then is referred to the Judiciary Committee, where it dies.
first and second readings and is referred to the Judiciary Committee. On February 28 it is withdrawn without explanation.
Feb. 19 TN Greene E. Evans presents House Bill No. 514, at the request of Governor Bate, providing for the appointment of an Assistant State Superintendent of Public Instruction. The bill passes its first and second readings, and is then sent to the Committee on Education and Common Schools, of which Evans is a member, where it is tabled.
Feb. 27 TN House Bill No. 141, on third reading, is defeated by a vote of 49-20.
Mar. 2 TN House Bill No. 151 is rejected.
Mar. 3 TN House Bill No. 119 is tabled.
Mar. 4 Grover Cleveland becomes the nation’s 22nd President (1885-1889).
May 20 TN The State School Board asks the General Assembly to repeal the act reducing the salary of the State Superintendent.
May 25 TN The General Assembly meets in extraordinary session. They will meet through June 12.
May 27 TN Greene E. Evans introduces House Bill No. 29, To provide for the appointment of an Assistant Superintendent of Public Instruction. It passes first and second readings and is referred to the Committee on Education and Common Schools, where it is tabled.
May 27 TN William A. Feilds introduces House Bill No. 34, To empower Managers of Teachers’ Institutes to examine and issue certificates, to be approved by the County Superintendent. It passes first and second readings and is referred to the Committee on Education and Common Schools, where it is tabled.
June 3 TN William C. Hodge introduces House Bill No. 63, To provide for the protection of the ballot box. It passes first and second readings and is referred to the Committee on Elections, where it dies.
June 25 African American priest Samuel David Ferguson is ordained a bishop of the Episcopal church; he will serve until his death in 1916.
--- TN The Sunday School Union, where the first Sunday school literature by African Americans is published, moves from Bloomington, Indiana, to a five-story brick and stone building at 206 Public Square in Nashville.
--- TN This year will see the establishment of the first African American-owned drug store in Nashville.
Feb. 20 TN The State Board of Education submits payment for sixty-one African American students who have received State Normal (Peabody) Scholarships to attend Central Tennessee College, Fisk University, Knoxville College, and Roger Williams University.
--- TN Samuel A. McElwee receives a law degree from Central Tennessee College in Nashville.
Sept. 20 TN Nashville’s first public high school for African American students opens: Meigs Public School offers the first classes for
9th and 10th graders; new courses for 11th graders will be added in the 1887-1888 school year. Ten years later (1897-1898 school year) the high school department at Meigs is transferred to Pearl High School, from which the first class will graduate on 2 June 1898.
Dec. 8 The American Federation of Labor is organized, signaling the rise of the labor movement. Black Americans are excluded from all major unions of the period.
--- TN Monroe W. Gooden, the only Democrat among the African American legislators, is elected to represent Fayette County in the 45th General Assembly from 1887-1888. He is appointed to the committees on Agriculture and Federal Relations.
--- TN Styles Linton Hutchins, a Republican, begins his legislative term, representing Hamilton County in the 45th General Assembly from 1887-1888. He is appointed to the committees on Education and Common Schools, and New Counties and County Lines.
--- TN Samuel A. McElwee, a Republican, is elected to a third term representing Haywood County. He is appointed to the
committees on Charitable Institutions, Elections, and Judiciary. Gooden, Hutchins, and McElwee are the last African Americans elected to serve in the Tennessee General Assembly until Memphis voters elect A. W. Willis in 1964, more than 75 years later.
--- TN Booker T. Washington invites Samuel A. McElwee to be commencement speaker at the 1887 graduation exercises of Tuskegee Institute.
Jan. 7 TN In the wake of a brutal lynching in West Tennessee, Samuel A. McElwee introduces House Bill No. 5, to prevent mob violence. The bill passes its first and second readings and is referred to the Judiciary Committee. McElwee makes several attempts to have the bill declared the special order for the session (Feb. 16, 21, and 22).
Jan. 12 TN Styles L. Hutchins introduces House Bill No. 136, to repeal a section of the Chattanooga charter making poll taxes a requirement for voting in city elections. It passes its second reading a week later.
the South: new laws are sending many African Americans to prison for minor offenses, and convicts are being forced to do jobs that are now unavailable to free laborers. The bill passes its first and second readings and is referred to the Committee on Penitentiary, where it is tabled.
Feb. 22 TN House Bill No. 5, to prevent mob violence, having been delayed for several days, is at last made the special order for the afternoon session. Samuel A. McElwee makes a powerful speech in its support, demanding reform: “I stand here today and enter my most solemn protest against mob violence in Tennessee . . . . Great God, when will this Nation treat the Negro as an American citizen? . . . As a humble representative of the Negro race, and as a member of this body, I stand here today and wave the flag of truce between the races and demand a reformation in Southern society.” The Judiciary Committee offers a substitute bill. By a 41-36 vote, both bills are tabled.
Mar. 5 TN Morristown Seminary and Normal Institute, Morristown, Tennessee, is designated as one of the colleges eligible for
Peabody Scholarship students “of African descent.”
Mar. 23 TN House Bill No. 136, to amend the charter of Chattanooga to eliminate poll taxes, passes on third reading. The ease of the bill’s passage suggests that whites have not yet realized the effectiveness of the poll tax as a method of restricting black voters from exercising their rights.
Jun. 19 TN Sampson W. Keeble dies of “a congestive chill” (probably malaria) in Richmond, Texas, and may have been buried
there. He is listed with his daughter and son-in-law on a gravestone in Greenwood Cemetery on Elm Hill Pike in
Nashville, near the graves of James C. Napier and publisher R. H. Boyd.
Aug. 15 Eatonville, Florida, becomes the first African American township to be incorporated in the United States.
Dec. 7 TN Central Tennessee College, Fisk University, and Roger Williams University ask the State Board of Education to urge
the General Assembly “to restore the former appropriations for colored scholarships to $3300.”
--- Two large African-American-owned banks open during the year: the Savings Bank of the Grand Fountain United Order of True Reformers (Richmond, Virginia) and Capital Savings Bank (Washington, D.C.).
--- TN With more than a 2/3 majority in both Houses of the General Assembly, Tennessee Democrats disfranchise black voters
in the state by passing four restrictive bills sponsored by Senators Myers, Dortch, and Lea, as well as reinstating a poll tax urged by Governor Robert L. Taylor. [See entry titled “Disfranchising Laws” in Tennessee Encyclopedia of History & Culture: http://tennesseeencyclopedia.net/entry.php?rec=380 .] This is the first legislative session in nearly ten years in which no African American representative is seated.
--- TN The General Assembly, for reasons that are unspecified but probably related to the same political climate that permitted
the passage of laws limiting black suffrage, cuts the appropriation for “colored normal scholarships” from $3,300 to $1,500 per year, making each individual scholarship worth only $22.70. In his 1889 Annual Report of the State School Board to the Legislature, Board Secretary Frank Goodman protests the cuts and requests that the original appropriation be restored. [Lauder]
Mar. 4 Benjamin Harrison becomes the nation’s 23rd President (1889-1893).
Mar. 30 TN Cabell Rives Berry, Senator from Williamson and Marshall Counties, introduces an amendment to Senate Appropriations Bill No. 456, making the item "Colored Normal Department" call for "$3,300 per annum instead of $2,500 per annum, as the bill now provides."
--- TN Senate Appropriations Bill No. 456, with amendments added by the Senate Committee of Finance, Ways & Means
(none of which change the scholarship appropriation in any way) will pass both the Senate and the House before the end of the 1889 session. This vote is particularly surprising in light of the disfranchising bills passed during the session.
--- According to the 1890 census, African Americans make up 11.9% of the U.S. population (7,488,676 of 62,947,714).
--- TN The Black Northern Migration draws thousands of black Tennesseans to the industrial cities of the North. Between
1870 and 1930 Tennessee’s black population declines to 18.3% from an earlier figure of 25.6%.
--- The American Baptist Publication Society no longer publishes the writings of African American ministers because
Southern white readers have objected to them.
--- “Pitchfork Ben” Tillman is elected governor of South Carolina. An apologist for violence against blacks, Tillman calls
his victory “a triumph of ... white supremacy." His words are generally more inflammatory than his policies -- he makes
an effort to curb lynching in his state, while also advocating segregation and disfranchisement of black voters.
Nov. 1 The Mississippi Plan becomes law on this date. It uses literacy and "understanding" tests to disfranchise minority voters. Similar statutes will be adopted by South Carolina (1895), Louisiana (1898), North Carolina (1900), Alabama (1901), Virginia (1901), Georgia (1908), and Oklahoma (1910).
--- TN The Tennessee Coal, Iron, and Railroad Company (TCI) uses convicts as strikebreakers when coal miners strike.
Violent uprisings continue until 1895, when the General Assembly ends the practice of convict leasing.
--- TN Vigilante groups produce havoc throughout Tennessee. A Sevier County group known as the White Caps begins a reign of terror, beating and occasionally killing people (primarily women) they believe to be “lewd or adulterous.” Their activities continue nearly unchecked until 1896.
--- TN Approximately 235 African Americans will lose their lives to lynchings this year; 204 black Tennesseans will be lynched
during the years between 1890 and 1950.
March TN After Ida B. Wells speaks out in The Memphis Free Speech against a recent lynching, a white mob burns the newspaper office. Wells is forced to move out of the state to guarantee her safety.
May 20 TN Frederick Douglass speaks at the First Colored Baptist Church in response to recent lynchings in Nashville and
Dec. 1 TN Dr. Miles V. Lynk, a graduate of Meharry Medical School and the first African American physician in Madison County,
publishes the first national medical journal for black physicians, The Medical and Surgical Observer. He is 21 years old. He will later found the University of West Tennessee, earn a law degree, serve as Dean of the School of Nurse Training of Terrill Memorial Hospital in Memphis, and become the ninth recipient of the Distinguished Service Award from the National Medical Association.
Dec. 27 Biddle University (NC) defeats Livingstone College (NC) 5-0 in the first football game between teams from black colleges.
--- TN Working as an emigration agent for a railroad company, Isaac F. Norris moves his family to the newly opened Oklahoma Territory, where he will continue to be active in politics.
Mar. 4 Grover Cleveland is sworn in to his second term as President, the first covering the years 1885-1889, and the second
running from 1893-1897.
--- TN After about 50 years of the practice known as convict leasing, the Tennessee General Assembly finally addresses the
issue and passes legislation to construct a new state penitentiary and abolish convict leasing at the expiration of the lease contract in 1896.
--- TN David F. Rivers takes a position as pastor of the Metropolitan Baptist Church in Kansas City, Kansas. By 1900 he
will be serving as pastor of the Berean Baptist Church in Washington, D. C.
--- African American workers are hired by the Pullman Company as strike breakers after a costly strike by employees.
--- TN Jesse M. H. Graham becomes editor of the Clarksville Enterprise.
Feb. 20 Death of Frederick Douglass.
Sept. 18 Booker T. Washington delivers the “Atlanta Compromise” address at the Atlanta Cotton States Exposition. He asserts that the “Negro problem” will be resolved if the South abides by a policy of gradualism and accommodation. Much of what Washington proposes is black self-help: African Americans will rise socially and politically if they work, save, and gain an education, but whites must be willing to accept and encourage this effort.
Sept. 24 The National Baptist Convention of the United States is created by the union of several smaller Baptist organizations. The Baptist church becomes the nation’s largest African American religious denomination.
Dec. 4 In the state Constitutional Convention, South Carolina adopts a new constitution containing an "understanding" clause
designed to eliminate black voters.
--- TN Samuel A. McElwee and James Napier are named to the original committee of the Negro Department of the Tennessee Centennial. Both will withdraw before the Exposition.
--- TN Richard H. Boyd establishes the National Baptist Publishing Board, which is reportedly the oldest extant African-
American-owned publishing company.
May The U.S. Supreme Court, in Plessy v. Ferguson, upholds a Louisiana statute requiring "separate but equal" accommodations on railroads, saying segregation is not necessarily discrimination. Justice Harlan’s dissent (“The Constitution is color-blind!”) insists that all segregation is inherently discriminatory and that states cannot impose criminal penalties upon a citizen who merely wants to use public highways and carriers. It is this very argument that will eventually be used to win Brown v. Board of Education (1954).
July 21 The National Association of Colored Women is established, with Mary Church Terrell as its first president.
Nov. 3 William McKinley, an Ohio Republican, is elected President.
--- TN Jesse M. H. Graham is elected as a Republican representing Montgomery County in the 50th General Assembly. He arrives in Nashville to find his seat contested on the first day of session. Although he is provisionally seated on
Mar. 4 William McKinley is inaugurated as President (1897-1901).
May 1 TN The Tennessee Centennial Exposition opens in Nashville, to run until October 31. It is a successful effort to stimulate the economy after a 20-year period of economic depression.
--- TN During 1897 Tennessee Coal (TCI) pays Louisiana $18.50 a month for a "first-class" state convict.
Apr. 21 The Spanish-American War begins. Black volunteers make up sixteen regiments, four of which will see combat. Five
African Americans win Congressional Medals of Honor for their valor.
Apr. 25 Announcing their judgment in the case of Williams v. Mississippi, the Supreme Court rules in favor of the Mississippi
Constitution, which requires voters to pass a literacy test in order to receive a ballot. This law, clearly aimed at disfranchising black voters, places the power of interpretation in the hands of local, politically appointed registrars.
June 2 TN The first class graduates from Pearl High School, Nashville’s African American high school.
Sept. 9 TN Death of former Representative William A. Feilds, a member of the Shelby County Court, whose surviving members publish a resolution honoring his service.
Link to Timeline Sources.
Saturday, December 30, 2006
Current understanding of the co-evolution* of bats and moths has been thrown into question following new research reported in Current Biology.
Dr James Windmill from the University of Bristol has shown how the Yellow Underwing moth changes its sensitivity to a bat's calls when the moth is being chased. And in case there is another attack, the moth's ear remains tuned in for several minutes after the calls stop.
Dr Windmill said: "Because the moth cleverly tunes its ear to enhance its detection of bats, we must now question whether the bat in turn modifies its calls to avoid detection by the moth. In view of the vast diversity of bat calls, this is only to be expected.
"To date, this phenomenon has not been reported for insects or, in fact, for any other hearing system in the animal kingdom. These findings change our understanding of the co-evolution of bats and moths and have implications for the hearing of many other animals."
It has been known for over 50 years that moths can hear the ultrasonic hunting calls of their nocturnal predator, the bat. Previously it was thought that these ears were only partially sensitive to the sound frequencies commonly used by bats and that bats would make their hunting calls inaudible to moths.
But now it appears that even though moth ears are among the simplest in the insect world - they have only two or four vibration sensitive cells attached to a small eardrum - moths are not as deaf as previously thought.
As a bat gets closer to the moth, both the loudness and frequency (pitch) of the bat's calls increase. Surprisingly, the sensitivity of the moth's ear to the bat's calls also increases. This occurs because the moth's ear dynamically becomes more sensitive to the frequencies that many bats use when attacking moths.
This multidisciplinary work involved engineers, biologists and physicists; biological measurements are accompanied by a mathematical model explaining the basis for the unconventional behaviour of the moth's ear.
Original Press Release ("How to avoid a bat" - 19th December 2006) available via this link.
Based on the paper:
Keeping up with Bats: Dynamic Auditory Tuning in a Moth
James Frederick Charles Windmill, Joseph Curt Jackson, Elizabeth Jane Tuck and Daniel Robert
Many night-flying insects evolved ultrasound sensitive ears in response to acoustic predation by echolocating bats. Noctuid moths are most sensitive to frequencies at 20-40 kHz, the lower range of bat ultrasound. This may disadvantage the moth because noctuid-hunting bats in particular echolocate at higher frequencies shortly before prey capture and thus improve their echolocation and reduce their acoustic conspicuousness. Yet, moth hearing is not simple; the ear's nonlinear dynamic response shifts its mechanical sensitivity up to high frequencies. Dependent on incident sound intensity, the moth's ear mechanically tunes up and anticipates the high frequencies used by hunting bats. Surprisingly, this tuning is hysteretic, keeping the ear tuned up for the bat's possible return. A mathematical model is constructed for predicting a linear relationship between the ear's mechanical stiffness and sound intensity. This nonlinear mechanical response is a parametric amplitude dependence that may constitute a feature common to other sensory systems. Adding another twist to the coevolutionary arms race between moths and bats, these results reveal unexpected sophistication in one of the simplest ears known and a novel perspective for interpreting bat echolocation calls.
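To make the stiffness-intensity relationship concrete, here is a minimal numerical sketch in Python. It is not the authors' model: the mass, resting stiffness and intensity gain below are invented for illustration, chosen only so that the toy resonance sweeps from roughly the 20-40 kHz band moths are most sensitive to up toward the higher frequencies bats use during attack.

```python
import math

# Toy model (illustration only, not the published model): treat the moth ear as
# a mass-spring receiver whose stiffness k rises linearly with incident sound
# intensity I, mirroring the linear stiffness-intensity relationship described
# in the abstract. All constants are hypothetical.

m = 1.0e-10       # effective mass in kg (hypothetical)
k0 = 2.0          # resting stiffness in N/m (hypothetical)
alpha = 3.0       # stiffness increase per unit intensity (hypothetical)

def resonant_frequency_khz(intensity):
    """Resonant frequency (kHz) of the toy receiver at a given sound intensity."""
    k = k0 + alpha * intensity
    f_hz = math.sqrt(k / m) / (2.0 * math.pi)
    return f_hz / 1000.0

for intensity in (0.0, 1.0, 2.0, 4.0):
    print(f"intensity={intensity:>4}: resonance ~ {resonant_frequency_khz(intensity):.1f} kHz")
```

With these made-up constants the resonance moves from about 22 kHz at rest to about 60 kHz at the highest intensity, which is the qualitative behaviour the paper reports: louder, closer bat calls shift the ear's mechanical tuning upward.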
*Info on co-evolution:
In biology, co-evolution is the mutual evolutionary influence between two species. Each party in a co-evolutionary relationship exerts selective pressures on the other, thereby affecting each other's evolution. Co-evolution includes the evolution of a host species and its parasites, and examples of mutualism evolving through time. Few perfectly isolated examples of evolution can be identified. Evolution in response to abiotic factors, such as climate change, is not coevolution (since climate is not alive and does not undergo biological evolution). Evolution in a one-on-one interaction, such as that between a specialized host-symbiont or host-parasite pair, is coevolution. But many cases are less clearcut: a species may evolve in response to a number of other species, each of which is also evolving in response to a set of species. This situation has been referred to as "diffuse coevolution". And, certainly, for many organisms, the biotic (living) environment is the most prominent selective pressure, resulting in evolutionary change.
Friday, December 29, 2006
A fossil of a leaf-imitating insect from 47 million years ago bears a striking resemblance to the mimickers of today.
The discovery represents the first fossil of a leaf insect (Eophyllium messelensis), and also shows that leaf imitation is an ancient and successful evolutionary strategy that has been conserved over a relatively long period of time.
Scientists led by Sonja Wedmann of the Institute of Paleontology in Bonn, Germany, unearthed the remains at a well-known fossil site called Messel*, in Hessen, Germany.
The 2.4-inch-long insect had physical characteristics similar to the oblong leaves of trees living there at the time, including Myrtle trees, legumes, such as alfalfa, and Laurel trees.
Continued at "Ancient insects used advanced camouflage" [The Eocene Epoch**] [Image: PNAS]
Based on the Proceedings of the National Academy of Sciences (PNAS) paper:
The first fossil leaf insect: 47 million years of specialized cryptic morphology and behavior
Sonja Wedmann, Sven Bradler, and Jes Rust
Published online before print December 29, 2006, 10.1073/pnas.0606937104
PNAS | January 9, 2007 | vol. 104 | no. 2 | 565-569
Stick and leaf insects (insect order Phasmatodea) are represented primarily by twig-imitating slender forms. Only a small percentage (approx 1%) of extant phasmids belong to the leaf insects (Phylliinae), which exhibit an extreme form of morphological and behavioral leaf mimicry. Fossils of phasmid insects are extremely rare worldwide. Here we report the first fossil leaf insect, Eophyllium messelensis gen. et sp. nov., from 47-million-year-old deposits at Messel in Germany. The new specimen, a male, is exquisitely preserved and displays the same foliaceous appearance as extant male leaf insects. Clearly, an advanced form of extant angiosperm leaf mimicry had already evolved early in the Eocene. We infer that this trait was combined with a special behavior, catalepsy or "adaptive stillness," enabling Eophyllium to deceive visually oriented predators. Potential predators reported from the Eocene are birds, early primates, and bats. The combination of primitive and derived characters revealed by Eophyllium allows the determination of its exact phylogenetic position and illuminates the evolution of leaf mimicry for this insect group. It provides direct evidence that Phylliinae originated at least 47 Mya. Eophyllium enlarges the known geographical range of Phylliinae, currently restricted to southeast Asia, which is apparently a relict distribution. This fossil leaf insect bears considerable resemblance to extant individuals in size and cryptic morphology, indicating minimal change in 47 million years. This absence of evolutionary change is an outstanding example of morphological and, probably, behavioral stasis.
Correction for Wedmann et al., PNAS 104 (2) 565-569.
Published online before print January 19, 2007, 10.1073/pnas.0700092104
PNAS | February 6, 2007 | vol. 104 | no. 6 | 2024
*Info on the Messel Pit Fossil Site:
"...The Messel fossil finds are extraordinary in more ways than one: entire skeletons are preserved perfectly here - birds with their feathers, mammals with skin and hair. They provide evidence of an important period in the evolutionary history of mammals, which were able to develop at a rapid rate after the extinction of the dinosaurs.
The "Messel Propalaeotherium" represents an early European side branch of the horse family tree: they were much smaller than horses today, lived in the rainforest undergrowth and fed on foliage and fruit, as was established from their teeth and stomach contents. The discovery of an anteater was a huge surprise for palaeontologists, since it originally came from South America. The most common mammal finds are the bats, with the seven species that existed in those times occupying various ecological niches.
The Messel Lake also preserved an astonishingly large number of birds, various reptiles such as turtles, crocodiles and snakes, as well as fish, frogs and a diverse insect fauna. There are 60 families of flora, including various plants whose relations thrive today in South-east Asia, Central and South America. Botanical details have been well-preserved too: flowers in which the pollen grains have survived, as well as fruit clusters and individual fruits..."
**Info on The Eocene Epoch:
"The Eocene epoch is part of the Tertiary Period in the Cenozoic Era, and lasted from about 54.8 to 33.7 million years ago (mya). The oldest known fossils of most of the modern orders of mammals appear in a brief period during the Early Eocene and all were small, under 10 kg. Both groups of modern ungulates (Artiodactyla and Perissodactyla) became prevalent mammals at this time, due to a major radiation between Europe and North America."
A recent post on mimicry: "Predator Mimicry: Metalmark Moths Mimic Their Jumping Spider Predators"
How many genes does it take to learn? Lessons from sea slugs
At any given time, more than 10,000 genes are active within just a single brain cell of the sea slug Aplysia, according to scientists writing in Friday's (December 29, 2006) edition of the journal Cell. The findings suggest that acts of learning or the progression of brain disorders do not take place in isolation - large clusters of genes within an untold number of cells contribute to major neural events.
'For the first time we provide a genomic dissection of the memory-forming network,' said Leonid Moroz*, a professor of neuroscience and zoology at the University of Florida Whitney Laboratory for Marine Bioscience. 'We took advantage of this powerful model of neurobiology and identified thousands of genes operating within a single neuron. Just during any simple event related to memory formation, we expect differences in gene expression for at least 200 to 400 genes.'
Researchers studied gene expression in association with specific networks controlling feeding or defensive reflexes in the sea slug. To their surprise, they identified more than 100 genes similar to those associated with all major human neurological diseases and more than 600 genes controlling development, confirming that molecular and genomic events underlying key neuronal functions were developed in early animal ancestors and remained practically unchanged for more than 530 million years of independent evolution in the lineages leading to men or sea slugs.
Moroz and his collaborators uncovered new information suggesting that gene loss in the evolution of the nervous system is as important as gene gain in terms of adaptive strategies. They believe that a common ancestor of animals had a complex genome and that different genes controlling brain or immune functions were lost independently in different lineages of animals, including humans.
Until now, scientists have been largely in the dark about how genes control the generation of specific brain circuitry and how genes modify that circuitry to enable learning and memory. For that matter, little is known about the genes that distinguish one neuron from the next, even though they may function quite differently.
Molecular analyses of Aplysia neuronal genes are shedding light on these elusive processes. In 2000, senior author Eric Kandel, M.D., of Columbia University in New York shared the Nobel Prize in Physiology or Medicine for his work using Aplysia as a model of how memories are formed in the human brain.
Despite its simple nervous system - Aplysia has about 10,000 large neurons that can be easily identified, compared with about one hundred billion neurons in humans - the animal is capable of learning and its brain cells communicate in many ways identical to human neural communication.
In the new findings, scientists identified more than 175,000 gene tags useful for understanding brain functions, increasing by more than 100 times the amount of genomic information available for study, according to Moroz and 22 other researchers from UF and Columbia University. More than half of the genes have clear counterparts in humans and can be linked to a defined neuronal circuitry, including a simple memory-forming network.
"In the human brain there are a hundred billion neurons, each expressing at least 18,000 genes, and the level of expression of each gene is different," said Moroz, who is affiliated with UF's Evelyn F. and William L. McKnight Brain Institute and the UF Genetics Institute. "Understanding individual genes or proteins is important, but this is a sort of molecular alphabet. This helps us learn the molecular grammar, or a set of rules that can control orchestrated activity of multiple genes in specific neurons. If we are going to understand memory or neurological disease at the cellular level, we need to understand the rules."
Scientists also analyzed 146 human genes implicated in 168 neurological disorders, including Parkinson's and Alzheimer's diseases, and genes controlling aging and stem-cell differentiation. They found 104 counterpart genes in Aplysia, suggesting it will be a valuable tool for developing treatments for neurodegenerative diseases.
"The authors have assembled a tremendous amount of data on gene transcripts associated with neuronal signaling pathways in Aplysia that sheds new light on evolutionary relationships of this very ancient and highly successful marine animal," said Dennis Steindler, Ph.D., executive director of UF's McKnight Brain Institute, who did not participate in the research. "A very important part of this study is the discovery of novel genes not formerly associated with the mollusk genome that include many associated with neurological disorders."
The findings are especially important for scientists using mollusks in experimental systems, according to Edgar Walters, Ph.D., a professor of integrative biology and pharmacology at the University of Texas Medical School at Houston, who was not involved in the research.
"Few animals other than Aplysia allow scientists to relate a molecular pathway directly to the function of a cell, all in context with an animal's behavior," Walters said. "In a mammal, it's hard to identify and manipulate a single cell and know what its function is. With Aplysia, there is direct access to whatever cell you're interested in with just a micropipette. As a scientist who wants to know which molecules are present in Aplysia for experimental manipulation, I am very happy to see this paper come out."
Source: University of Florida
Based on the paper:
Neuronal Transcriptome of Aplysia: Neuronal Compartments and Circuitry
By Leonid L. Moroz et al.
Cell, Vol 127, 1453-1467, 29 December 2006
Molecular analyses of Aplysia, a well-established model organism for cellular and systems neural science, have been seriously handicapped by a lack of adequate genomic information. By sequencing cDNA libraries from the central nervous system (CNS), we have identified over 175,000 expressed sequence tags (ESTs), of which 19,814 are unique neuronal gene products and represent 50%-70% of the total Aplysia neuronal transcriptome. We have characterized the transcriptome at three levels: (1) the central nervous system, (2) the elementary components of a simple behavior: the gill-withdrawal reflex - by analyzing sensory, motor, and serotonergic modulatory neurons, and (3) processes of individual neurons. In addition to increasing the amount of available gene sequences of Aplysia by two orders of magnitude, this collection represents the largest database available for any member of the Lophotrochozoa and therefore provides additional insights into evolutionary strategies used by this highly successful diversified lineage, one of the three proposed superclades of bilateral animals.
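As a quick back-of-the-envelope check on the abstract's figures (my own arithmetic, not a calculation from the paper), the reported coverage range implies a total neuronal transcriptome of roughly 28,000-40,000 genes:

```python
# Back-of-the-envelope estimate only (not from the paper): if 19,814 unique
# neuronal gene products represent 50%-70% of the total Aplysia neuronal
# transcriptome, infer the implied total from each end of the coverage range.
unique_products = 19_814
for coverage in (0.50, 0.70):
    implied_total = unique_products / coverage
    print(f"coverage {coverage:.0%}: implied total ~ {implied_total:,.0f} neuronal genes")
```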
*Info from Leonid Moroz's webpage:
Our laboratory works to characterize basic mechanisms underlying the design of nervous systems and the evolution of neuronal signaling mechanisms. The major questions are: (1) why are individual neurons so different from each other, (2) how do they maintain such precise connections between each other, (3) how does this fixed wiring result in such enormous neuronal plasticity and (4) how does this contribute to learning and memory mechanisms? By taking advantage of the relatively simpler nervous systems of invertebrate animals as models, we combine neuroscience, genomics, bioinformatics, evolutionary theory, zoology, molecular biology, microanalytical chemistry and nanoscience to understand how neurons operate, remember and learn.
Thursday, December 28, 2006
From the journal Developmental Biology: free online access for non-subscribers to Volume 300, Issue 1, Sea Urchin Genome: Implications and Highlights, until the end of June.
From Eric Davidson's* introductory paper "Special issue: The sea urchin genome":
The Strongylocentrotus purpuratus Genome Project focused the attention of the sea urchin research community as nothing had ever done before. Two numbers tell the story. The first is the more than 9700 genes annotated by volunteers from this research community, guided by the energetic leadership of Erica Sodergren and George Weinstock at the Baylor College of Medicine Human Genome Sequencing Center, where the sequence was obtained and the annotation effort was organized. The second is the number of papers in this very issue, which contains 36 individual studies no one of which could or would have existed absent the genome sequence. Together with the main announcement of the genome sequence in Science and four additional genome-related papers published with it, over 40 diverse works have been called into existence with the advent of this sequence. The genome sequence provides a digital definition of the potentialities of the animal, and these papers show how many different kinds of potentiality it illuminated. This collection contains remarkable surprises, and some of the papers herein literally set up new fields of scientific enterprise...
...The deuterostomes were first imagined a century ago on the basis of comparative embryo anatomy, perhaps the greatest early success story of that field; their reality as a clade was indicated by pre-genomic evidence such as intron position in shared genes, then strongly supported by rRNA and protein molecular phylogeny. But now this superphylum, our own, is defined by the sea urchin genome project in terms of the sharing patterns of literally thousands of genes. The other side of the coin is the gene families that appeared or have hugely expanded during echinoderm evolution, most prominently the sensory receptor genes, immune genes of several large families, and the biomineralization genes, which are unlike any seen elsewhere. It is no wonder that there are differences and surprises: this is also the first non-chordate marine genome to be sequenced, the first sequence of a maximum indirectly developing animal, as well as the first echinoderm genome...
1) Shedding genomic light on Aristotle's lantern
Erica Sodergren, Yufeng Shen, Xingzhi Song, Lan Zhang, Richard A. Gibbs, George M. Weinstock
Sea urchins have proved fascinating to biologists since the time of Aristotle who compared the appearance of their bony mouth structure to a lantern in The History of Animals. Throughout modern times it has been a model system for research in developmental biology. Now, the genome of the sea urchin Strongylocentrotus purpuratus is the first echinoderm genome to be sequenced. A high quality draft sequence assembly was produced using the Atlas assembler to combine whole genome shotgun sequences with sequences from a collection of BACs selected to form a minimal tiling path along the genome. A formidable challenge was presented by the high degree of heterozygosity between the two haplotypes of the selected male representative of this marine organism. This was overcome by use of the BAC tiling path backbone, in which each BAC represents a single haplotype, as well as by improvements in the Atlas software. Another innovation introduced in this project was the sequencing of pools of tiling path BACs rather than individual BAC sequencing. The Clone-Array Pooled Shotgun Strategy greatly reduced the cost and time devoted to preparing shotgun libraries from BAC clones. The genome sequence was analyzed with several gene prediction methods to produce a comprehensive gene list that was then manually refined and annotated by a volunteer team of sea urchin experts. This latter annotation community edited over 9000 gene models and uncovered many unexpected aspects of the sea urchin genetic content impacting transcriptional regulation, immunology, sensory perception, and an organism's development. Analysis of the basic deuterostome genetic complement supports the sea urchin's role as a model system for deuterostome and, by extension, chordate development.
2) High regulatory gene use in sea urchin embryogenesis: Implications for bilaterian development and evolution
Meredith Howard-Ashby, Stefan C. Materna, C. Titus Brown, Qiang Tu, Paola Oliveri, R. Andrew Cameron, Eric H. Davidson
A global scan of transcription factor usage in the sea urchin embryo was carried out in the context of the Strongylocentrotus purpuratus genome sequencing project, and results from six individual studies are here considered. Transcript prevalence data were obtained for over 280 regulatory genes encoding sequence-specific transcription factors of every known family, but excluding genes encoding zinc finger proteins. This is a statistically inclusive proxy for the total “regulome” of the sea urchin genome. Close to 80% of the regulome is expressed at significant levels by the late gastrula stage. Most regulatory genes must be used repeatedly for different functions as development progresses. An evolutionary implication is that animal complexity at the stage when the regulome first evolved was far simpler than even the last common bilaterian ancestor, and is thus of deep antiquity.
*From Eric Davidson's lab homepage:
"...The major focus of research in our laboratory is on gene networks that control development and their evolution. Our areas of research include the transcriptional mechanisms by which specification of embryonic blastomeres occurs early in development; structure/function relationships in developmental cis-regulatory systems; sea urchin genomics; and regulatory evolution in the bilaterians. Most of our work is carried out on sea urchin embryos, which provide key experimental advantages..."
Wednesday, December 27, 2006
An open access/free paper from PLoS ONE:
The Syntax and Meaning of Wild Gibbon Songs
By Esther Clarke, Ulrich H. Reichard, Klaus Zuberbuhler*
Spoken language is a result of the human capacity to assemble simple vocal units into more complex utterances, the basic carriers of semantic information. Not much is known about the evolutionary origins of this behaviour. The vocal abilities of non-human primates are relatively unimpressive in comparison, with gibbon songs being a rare exception. These apes assemble a repertoire of call notes into elaborate songs, which function to repel conspecific intruders, advertise pair bonds, and attract mates. We conducted a series of field experiments with white-handed gibbons** at Khao Yai National Park, Thailand, which showed that this ape species uses songs also to protect themselves against predation. We compared the acoustic structure of predatory-induced songs with regular songs that were given as part of their daily routine. Predator-induced songs were identical to normal songs in the call note repertoire, but we found consistent differences in how the notes were assembled into songs. The responses of out-of-sight receivers demonstrated that these syntactic differences were meaningful to conspecifics. Our study provides the first evidence of referential signalling in a free-ranging ape species, based on a communication system that utilises combinatorial rules.
Citation: Clarke E, Reichard UH, Zuberbühler K (2006) The Syntax and Meaning of Wild Gibbon Songs. PLoS ONE 1(1): e73. doi:10.1371/journal.pone.0000073
Continued at "The Syntax and Meaning of Wild Gibbon Songs" [Primatology]
*Klaus Zuberbuhler is author of:
The Phylogenetic Roots of Language - Evidence From Primate Communication and Cognition
The anatomy of the nonhuman primate vocal tract is not fundamentally different from the human one. Notwithstanding, nonhuman primates are remarkably unskillful at controlling vocal production and at combining basic call units into more complex strings. Instead, their vocal behavior is linked to specific psychological states, which are evoked by events in their social or physical environment. Humans are the only primates that have evolved the ability to produce elaborate and willfully controlled vocal signals, although this may have been a fairly recent invention. Despite their expressive limitations, nonhuman primates have demonstrated a surprising degree of cognitive complexity when responding to other individuals' vocalizations, suggesting that, as recipients, crucial linguistic abilities are part of primate cognition. Pivotal aspects of language comprehension, particularly the ability to process semantic content, may thus be part of our primate heritage. The strongest evidence currently comes from Old World monkeys, but recent work indicates that these capacities may also be present in our closest relatives, the chimpanzees.
And co-author (with Kate Arnold) of:
Language evolution: Semantic combinations in primate calls
Syntax sets human language apart from other natural communication systems, although its evolutionary origins are obscure. Here we show that free-ranging putty-nosed monkeys combine two vocalizations into different call sequences that are linked to specific external events, such as the presence of a predator and the imminent movement of the group. Our findings indicate that non-human primates can combine calls into higher-order sequences that have a particular meaning.
**Info on white-handed gibbons:
...The white-handed gibbon is distinguished by its musical howl. They are quiet during the day but commonly howl at sunrise and sunset. They are very vocal, making loud "whoop" sounds. Their loud resonant songs can be heard up to 1/2 mile away. Songs by far excel those of all other species because of a sound-amplifying throat sac.
Duetting is the singing between the male and female, and is dominated by the female. This helps to maintain the pair bond between the pair and to maintain the territory. Each morning upon awakening a family group of gibbons loudly announces its presence in the forest, using a territorial hooting call and menacing gestures. This call warns other gibbons to stay out of their territory (and especially away from the local fruit trees). This noisy display takes 1/2 hour or more every morning and is usually started by the adult female. The male and female have different calls.
In friendly greetings, corners of mouth are drawn back, revealing teeth, and tongue is sometimes protruding. In anger, mouth is opened and closed repeatedly, smacking lips and snapping teeth together. Snarling is interpreted as an intention of biting.
There are 9 species with 9 different territorial songs. The gibbons seem to be born knowing the songs because they are always the same, and not learned...
An open access/free article from the Proceedings of the National Academy of Sciences (PNAS):
The discordant eardrum
by Jonathan P. Fay, Sunil Puria*, and Charles R. Steele**
Edited by Eric I. Knudsen***, Stanford University School of Medicine, Stanford, California
At frequencies above 3 kHz, the tympanic membrane vibrates chaotically. By having many resonances, the eardrum can transmit the broadest possible bandwidth of sound with optimal sensitivity. In essence, the eardrum works best through discord. The eardrum's success as an instrument of hearing can be directly explained through a combination of its shape, angular placement, and composition. The eardrum has a conical asymmetrical shape, lies at a steep angle with respect to the ear canal, and has organized radial and circumferential collagen fiber layers that provide the scaffolding. Understanding the role of each feature in hearing transduction will help direct future surgical reconstructions, lead to improved microphone and loudspeaker designs, and provide a basis for understanding the different tympanic membrane structures across species. To analyze the significance of each anatomical feature, a computer simulation of the ear canal, eardrum, and ossicles was developed. It is shown that a cone-shaped eardrum can transfer more force to the ossicles than a flat eardrum, especially at high frequencies. The tilted eardrum within the ear canal allows it to have a larger area for the same canal size, which increases sound transmission to the cochlea. The asymmetric eardrum with collagen fibers achieves optimal transmission at high frequencies by creating a multitude of deliberately mistuned resonances. The resonances are summed at the malleus attachment to produce a smooth transfer of pressure across all frequencies. In each case, the peculiar properties of the eardrum are directly responsible for the optimal sensitivity of this discordant drum.
The function of the middle ear in terrestrial mammals is to transfer acoustic energy from the air of the ear canal to the fluid of the inner ear. The first and crucial step of the transduction process takes place at the tympanic membrane, which converts sound pressure in the ear canal into vibrations of the middle ear bones. Understanding how the tympanic membrane manages this task so successfully over such a broad range of frequencies has been a subject of research since Helmholtz's publication in 1868 (1, 2).
Even though the function of the eardrum is clear and the anatomy of the eardrum is well characterized, the connection between the anatomical features and the ability of the eardrum to transduce sound has been missing. The missing structure-function relationships can be summarized by the following three questions. Why does the mammalian eardrum have its distinctive conical and toroidal shape? What is the advantage of its angular placement in the ear canal? What is the significance of its highly organized radial and circumferential fibers?
The shape of the human and feline eardrum is known from detailed Moire interferometry contour maps (refs. 3 and 4 and Fig. 1a). From the contour maps, three-dimensional reconstructions reveal the striking similarity of the two eardrums. In both cases, the eardrum has an elliptical outer boundary, whereas the central portion has a distinctive conical shape (Fig. 1b). As one moves away from the center, the cone starts to bend forming an outer toroidal region (Fig. 1 b and c).
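The abstract's central point, that many deliberately mistuned resonances can sum to a smooth transfer across frequency, can be illustrated with a toy calculation. This is not the authors' structural model of the eardrum; the resonance frequencies and quality factor below are arbitrary and serve only to show that the summed response varies far less over the band than a single resonance does.

```python
import numpy as np

# Toy illustration (not the published model): compare one resonator with the
# sum of several slightly mistuned resonators. Mistuning fills in the dips
# between individual peaks, flattening the combined response. Values arbitrary.

freqs = np.linspace(1.0, 20.0, 500)          # frequency axis, kHz (arbitrary)
resonances = [4.0, 6.5, 9.0, 12.0, 16.0]     # deliberately mistuned peaks, kHz
q = 4.0                                      # quality factor of each resonator

def response(f, f0, q):
    """Magnitude of a simple damped resonator driven at frequency f."""
    r = f / f0
    return 1.0 / np.sqrt((1.0 - r**2) ** 2 + (r / q) ** 2)

single = response(freqs, 9.0, q)
combined = sum(response(freqs, f0, q) for f0 in resonances)

# Ratio of maximum to minimum magnitude over the band: lower means flatter.
print("single resonator  max/min:", float(single.max() / single.min()))
print("summed resonances max/min:", float(combined.max() / combined.min()))
```

In this sketch the single resonator's response swings by roughly a factor of sixteen across the band, while the summed, mistuned set varies by only a few-fold, which is the qualitative effect the paper attributes to the eardrum's "discord".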
*Info on Sunil Puria:
...Of the five senses, the auditory system is one of the most remarkable. It can operate over a dynamic range of more than six orders of magnitude in sound pressure level. To accomplish this task, the hair cells in the fluid-filled inner ear detect motions down to the dimensions of atoms and are limited only by Brownian motion of the surrounding fluid.
It has recently been discovered that these hair cells, which act as transducers of mechanical motion to electrical impulses transmitted to the central nervous system, are inherently non-linear. Consequently, the mechanics of a normal inner-ear must remain non-linear for normal function while a damaged ear exhibits more linear characteristics. Thus the auditory system uses non-linear elements to achieve exquisite sensitivity and a large dynamic range. These non-linearities of the ear are increasingly being exploited in speech coding technologies.
Recently, it was discovered that a healthy ear not only detects sounds but also generates sounds... (More)
**Info on Charles Steele:
...Asymptotic analysis and computation, biomechanics, mechanics of hearing, noninvasive mechanical measurement of bone and soft tissue, plant morphogenesis. He is the author of over 80 archival papers and three handbook chapters in these areas. He is the Editor-in-Chief of the International Journal of Solids and Structures... (more)
***Info on Eric Knudsen:
We study mechanisms of attention, learning and strategies of information processing in the central auditory system of developing and adult barn owls, using neurophysiological, pharmacological, anatomical and behavioral techniques. Studies focus on the process of sound localization. Sound localization is shaped powerfully by an animal's auditory and visual experience. Experiments are being conducted to elucidate developmental influences, extent and time course of this learning process, and its dependence on visual feedback... (More)

-------
by Andrea Streit (homepage)
The vertebrate inner ear forms a highly complex sensory structure responsible for the detection of sound and balance. Some new aspects on the evolutionary and developmental origin of the inner ear are summarised here.
Recent molecular data have challenged the longstanding view that special sense organs such as the inner ear have evolved with the appearance of vertebrates. In addition, it has remained unclear whether the ear originally arose through a modification of the amphibian mechanosensory lateral line system or whether both evolved independently.
A comparison of the developmental mechanisms giving rise to both sensory systems in different species should help to clarify some of these controversies. During embryonic development, the inner ear arises from a simple epithelium adjacent to the hindbrain, the otic placode, that is specified through inductive interactions with surrounding tissues.
This review summarises the embryological evidence showing that the induction of the otic placode is a multistep process which requires sequential interaction of different tissues with the future otic ectoderm and the recent progress that has been made to identify some of the molecular players involved.
Finally, the hypothesis is discussed that induction of all sensory placodes initially shares a common molecular pathway, which may have been responsible to generate an 'ancestral placode' during evolution.
If a zebrafish loses a chunk of its tail fin, it'll grow back within a week. Like lizards, newts, and frogs, a zebrafish can replace surprisingly complex body parts. A tail fin, for example, has many different types of cells and is a very intricate structure. It is the fish version of an arm or leg.
The question of how cold-blooded animals re-grow missing tails and other appendages has fascinated veterinary and medical scientists. They also wonder if people, and other warm-blooded animals that evolved from these simpler creatures, might still have untapped regenerative powers hidden in their genes.
People are constantly renewing blood components, skeletal muscles and skin. We can regenerate liver tissue and repair minor injuries to bone, muscle, the tips of our toes and fingers, and the corneas of our eyes. Finding out more about the remarkable ability of amphibians and fish to re-grow complex parts might provide the information necessary to create treatments for people whose hearts, spinal cords, eyes or arms and legs have been badly hurt.
Scientists have discovered some of the genes and cell-to-cell communication pathways that enable zebrafish to restore their tail fins.
'The ability to regenerate body parts such as those that are damaged by injury or disease,' said Dr. Randall Moon*, professor of pharmacology at the University of Washington (UW), an investigator of the Howard Hughes Medical Institute, and a researcher in the UW Institute for Stem Cell and Regenerative Medicine, 'involves creating cells that can take any number of new roles. This can be done by re-programming cells that already have a given function or by activating resident stem cells.'
Continued at "How does a zebrafish grow a new tail?"
Based on the journal Development paper:
Cristi L. Stoick-Cooper, Gilbert Weidinger, Kimberly J. Riehle, Charlotte Hubbert, Michael B. Major, Nelson Fausto, and Randall T. Moon
In contrast to mammals, lower vertebrates have a remarkable capacity to regenerate** complex structures damaged by injury or disease. This process, termed epimorphic regeneration, involves progenitor cells created through the reprogramming of differentiated cells or through the activation of resident stem cells. Wnt/beta-catenin signaling regulates progenitor cell fate and proliferation during embryonic development and stem cell function in adults, but its functional involvement in epimorphic regeneration has not been addressed. Using transgenic fish lines, we show that Wnt/beta-catenin signaling is activated in the regenerating zebrafish tail fin and is required for formation and subsequent proliferation of the progenitor cells of the blastema. Wnt/beta-catenin signaling appears to act upstream of FGF signaling, which has recently been found to be essential for fin regeneration. Intriguingly, increased Wnt/beta-catenin signaling is sufficient to augment regeneration, as tail fins regenerate faster in fish heterozygous for a loss-of-function mutation in axin1, a negative regulator of the pathway. Likewise, activation of Wnt/beta-catenin signaling by overexpression of wnt8 increases proliferation of progenitor cells in the regenerating fin. By contrast, overexpression of wnt5b (pipetail) reduces expression of Wnt/beta-catenin target genes, impairs proliferation of progenitors and inhibits fin regeneration. Importantly, fin regeneration is accelerated in wnt5b mutant fish. These data suggest that Wnt/beta-catenin signaling promotes regeneration, whereas a distinct pathway activated by wnt5b acts in a negative-feedback loop to limit regeneration.
*Info on Randall Moon:
Randall Moon studies the biochemistry of the Wnt signal transduction pathways and the roles of these pathways in vertebrates. He is interested in understanding the normal mechanisms and functions of Wnt signaling and in using this understanding to develop insights into the roles of Wnt signaling in diseases. He is also interested in developing potential therapies, with an emphasis on regenerative medicine.
**Info on Regeneration:
"...Regeneration occurs in many, if not all vertebrate embryos, and is present in some adult animals such as salamanders ( e.g. the newt and axolotl), hydra, horseshoe crabs and a type of mouse. . Mammals exhibit limited regenerative abilities, although not as impressive as salamanders. Examples of mammalian regeneration include antlers, finger tips and holes in ears. Finger tip regeneration has been well characterized, and these studies have resulted in the first demonstration of a genetic pathway controlling regeneration in a mammal. Several species of mammals can regenerate ear holes; a phenomenon that has been most studied in the MRL mouse. If the processes behind regeneration are fully understood, it is believed this would lead to better treatment for individuals with nerve injuries (such as those resulting from a broken back or a polio infection), missing limbs, and/or damaged or destroyed organs.
Regeneration of a lost limb occurs in two major steps: first, de-differentiation of adult cells into a stem cell state similar to embryonic cells, and second, development of these cells into new tissue more or less the same way it developed the first time. Some animals like planarians instead keep clusters of non-differentiated cells within their bodies, which migrate to the parts of the body that need healing..."
Tuesday, December 26, 2006
The mystery of what killed Australia's giant animals - the so-called 'megafauna' - during the Last Ice Age is one of the longest-running and most emotive debates in palaeontology. Scientists have now published clear evidence from south-eastern Australia to show that climate change was not the driving force behind the extinctions, which took place between 50 and 40 thousand years ago.
This refocuses attention on humans as the main cause. The latest study, published in the January 2007 issue of the respected international journal Geology, is unique in providing - for the first time - a long-term perspective on the responses of the megafauna in the Naracoorte Caves (info) region of south-eastern Australia to cyclical swings in Ice Age climates.
"Climate change was certainly not the main culprit in the extinctions. Our data show that the megafauna was resilient to climatic fluctuations over the past half-million years", said team leader and palaeontologist Dr Gavin Prideaux from the Western Australian Museum and Flinders University.
Australia lost 90% of its large fauna, including rhino-sized marsupials, 3-metre tall kangaroos and giant goannas within 20 thousand years of human arrival. Opinions are divided between the relative importance of climatic changes and the activities of humans themselves via habitat disturbance or over-hunting. Unfortunately, the debate has been hamstrung by a lack of basic data on how communities responded to climate changes before humans arrived.
The new fossil evidence from Naracoorte reveals surprising stability in the mammal composition through successive wet and dry phases. "Although populations fluctuated locally in concert with cyclical climatic changes, with larger species favoured in wetter times, most if not all of them survived even the driest times - then humans arrived", said Dr Prideaux.
The Naracoorte Caves World Heritage Area in south-eastern South Australia contains the richest assemblage of Pleistocene (1.8 million to 10 thousand years ago) animals anywhere in Australia. What makes the record more remarkable is that it can be directly compared to a 500 thousand-year record of local rainfall preserved in the stalagmites in these caves.
The fossils were dated by two independent methods (optically stimulated luminescence and uranium-series dating) at the Universities of Wollongong and Melbourne, by geochronologists Professor Richard 'Bert' Roberts, Dr Kira Westaway and Dr John Hellstrom. The multi-disciplinary team also included Dr Dirk Megirian from the Museum of Central Australia in Alice Springs, who studied the sediments for additional clues of the prevailing climate conditions.
"These analyses have allowed us to pinpoint the ages of the fossils and the major shifts in climate. Our evidence shows that the Naracoorte giants perished under climatic conditions similar to those under which they previously thrived, which strongly implicates humans in their extinction" said Professor Roberts.
The research project was supported by the South Australian Department for Environment and Heritage, GreenCorp, the Friends of the Naracoorte Caves, the Cave Exploration Group of South Australia, the Commonwealth Natural Heritage Trust Extension Bushcare Program, and the Australian Research Council.
Original PR available via Media Releases (Dec. 22 entry) [Paleontology]
Based on The Geological Society of America's* Geology paper:
"Mammalian responses to Pleistocene climate change in southeastern Australia"
by Gavin J. Prideaux, Richard G. Roberts, Dirk Megirian, Kira E. Westaway, John C. Hellstrom, Jon M. Olley
NYA at: http://dx.doi.org/10.1130/G23070A.1
Resolving faunal responses to Pleistocene climate change is vital for differentiating human impacts from other drivers of ecological change. While 90% of Australia's large mammals were extinct by ca. 45 ka, their responses to glacial-interglacial cycling have remained unknown, due to a lack of rigorous biostratigraphic studies and the rarity of terrestrial climatic records that can be related directly to faunal records. We present an analysis of faunal data from the Naracoorte Caves in southeastern Australia, which are unique not only because of the species richness and time-depth of the assemblages that they contain, but also because this faunal record is directly comparable with a 500 k.y. speleothem-based record of local effective moisture. Our data reveal that, despite significant population fluctuations driven by glacial-interglacial cycling, the species composition of the mammal fauna was essentially stable for 500 k.y. before the late Pleistocene extinctions. Larger species declined during a drier interval between 270 and 220 ka, likely reflecting range contractions away from Naracoorte, but they then recovered locally, persisting well into the late Pleistocene. Because the speleothem record and prior faunal response imply that local conditions should have been favorable for megafauna until at least 30 ka, climate change is unlikely to have been the principal cause of the extinctions.
*About the GSA:
"...Established in 1888, The Geological Society of America provides access to elements that are essential to the professional growth of earth scientists at all levels of expertise and from all sectors: academic, government, business, and industry.
The Geological Society's growing membership unites thousands of earth scientists from every corner of the globe in a common purpose to study the mysteries of our planet and share scientific findings..."
Complexity Constrains Evolution of Human Brain Genes
Despite the explosive growth in size and complexity of the human brain, the pace of evolutionary change among the thousands of genes expressed in brain tissue has actually slowed since the split, millions of years ago, between human and chimpanzee, an international research team reports in the December 26, 2006, issue of the journal PLoS Biology.
The rapid advance of the human brain, the authors maintain, has not been driven by evolution of protein sequences. The higher complexity of the biochemical network in the brain, they suspect, with multiple gene-gene interactions, places strong constraints on the ability of most brain-related genes to change.
'We found that genes expressed in the human brain have in fact slowed down in their evolution, contrary to some earlier reports,' says study author Chung-I Wu, professor of ecology and evolution at the University of Chicago. 'The more complex the brain, it seems, the more difficult it becomes for brain genes to change. Calibrated against the genomic average, brain-expressed genes in humans appear to have evolved more slowly than in chimpanzee or old-world monkey.'
Humans have an exceptionally big brain relative to their body size. Although humans weigh about 20 percent more than chimpanzees, our closest relative, the human brain weighs 250 percent more. How such a massive morphological change occurred over a relatively short evolutionary time has long puzzled biologists.
Previous reports have argued that the genes that regulate brain development and function evolved much more rapidly in humans than in nonhuman primates and other mammals because of natural selection processes unique to the human lineage.
The comparative pace of organ-specific evolution, however, turns out to be difficult to measure. To assess accurately the speed with which humans and chimpanzees have accumulated many small differences in gene sequences, Wu and colleagues in Taiwan and Japan decided to sequence several thousand genes expressed in the brain of the macaque monkey and compare them with available genomic sequences from human, chimpanzee, and mouse.
What they found was that the "more advanced" species had faster overall rates of evolution. So, on average, the genes from humans and chimpanzees changed faster than genes from monkeys, which changed faster than those from mice.
They explained the trend as a correlate of smaller population size in the more advanced species. Species with smaller population size can more easily escape the harsh scrutiny of natural selection.
When they compared the pace of evolution among genes expressed in the brain, however, the order was reversed. When calibrated against the genomic average, brain genes in humans evolved more slowly than in other primates, which were slower than mice.
"We would expect positive selection to work most effectively on tissue-specific genes, where there would be fewer conflicting requirements," says Wu. "For example, genes expressed only in male reproductive tissues have evolved very rapidly."
Brains, however, "are intriguing in this respect," Wu says. Genes that are expressed only in the brain evolved more slowly than those that are expressed in the brain as well as other tissues, and those genes evolved more slowly than genes expressed throughout the rest of the organism.
The authors attribute the slowdown to mounting complexity of interactions within the brain. "We know that proteins with more interacting partners evolve more slowly," Wu said. "Mutations that disrupt existing interactions aren't tolerated."
Although the gene sequences from human and chimpanzee remain very similar, previous studies in tissues other than the brain have shown that gene expression varies widely. Other studies have found that, within the brain, the abundance of expressed genes per neuron appears to be greater in humans.
"On the basis of individual neurons of the brain, humans may indeed have a far more active, or even more complex, transcription profile than chimpanzee," the authors note. "We suggest that such abundant and complex transcription may increase gene-gene interactions and constrains coding-sequence evolution."
Future studies of brain function and evolution will increasingly take advantage of the approaches of systems biology, Wu suggested. "The slowdown in genetic evolution in the more advanced organs makes sense," he said, "only when one takes a systems perspective."
Academia Sinica and the National Sciences Council of Taiwan, the Ministry of Health and Welfare of Japan, and the U.S. National Institutes of Health funded the research. Additional authors are C.-K. James Shen, Hurng-Yi Wang and Huan-Chieh Chien of Academia Sinica, Taiwan; Naoki Osada and Katsuyuki Hashimoto of the National Institute of Infectious Diseases, Japan; Sumio Sugano of the University of Tokyo, Japan; Takashi Gojobori of the National Institute of Genetics, Japan; Chen-Kung Chou of Taipei Veterans General Hospital, Taiwan; and Shih-Feng Tsai of the National Health Research Institute, Taiwan.
Based on the open access/free PLoS Biology article:
Rate of Evolution in Brain-Expressed Genes in Humans and Other Primates
Citation: Wang HY, Chien HC, Osada N, Hashimoto K, Sugano S, et al. (2007) Rate of Evolution in Brain-Expressed Genes in Humans and Other Primates. PLoS Biol 5(2): e13 DOI: 10.1371/journal.pbio.0050013
Brain-expressed genes are known to evolve slowly in mammals. Nevertheless, since brains of higher primates have evolved rapidly, one might expect acceleration in DNA sequence evolution in their brain-expressed genes. In this study, we carried out full-length cDNA sequencing on the brain transcriptome of an Old World monkey (OWM) and then conducted three-way comparisons among (i) mouse, OWM, and human, and (ii) OWM, chimpanzee, and human. Although brain-expressed genes indeed appear to evolve more rapidly in species with more advanced brains (apes > OWM > mouse), a similar lineage effect is observable for most other genes. The broad inclusion of genes in the reference set to represent the genomic average is therefore critical to this type of analysis. Calibrated against the genomic average, the rate of evolution among brain-expressed genes is probably lower (or at most equal) in humans than in chimpanzee and OWM. Interestingly, the trend of slow evolution in coding sequence is no less pronounced among brain-specific genes, vis-à-vis brain-expressed genes in general. The human brain may thus differ from those of our close relatives in two opposite directions: (i) faster evolution in gene expression, and (ii) a likely slowdown in the evolution of protein sequences. Possible explanations and hypotheses are discussed.
Whether comparing morphology or cognitive ability, it is clear that the human brain has evolved rapidly relative to that of other primates. But the extent to which genes expressed in the brain also reflect this overall pattern is unclear. To address this question, it's necessary to measure any variations in the DNA sequences of these genes between human and chimpanzee. And, to do this as accurately as possible, it's also important to require an appropriate reference group to act as a benchmark against which the differences can be measured. We therefore compared publicly available genomic sequences of chimps and humans with complementary DNA sequences of several thousand genes expressed in the brain of another closely related primate - the macaque, an Old World monkey - as well as the more distantly related mouse. Our analyses of the rates of protein evolution in these species suggest that genes expressed in the human brain have in fact slowed down in their evolution since the split between human and chimpanzee, contrary to some previously published reports. We suggest that advanced brains are driven primarily by the increasing complexity in the network of gene interactions. As a result, brain-expressed genes are constrained in their sequence evolution, although their expression levels may change rapidly.
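The calibration step described above - comparing brain-expressed genes against the genomic average within each lineage before comparing lineages - can be sketched schematically. The rates below are invented placeholders, not data from the study; the point is only to show the normalization, which is what allows rates to be compared fairly between species with different overall rates of evolution.

```python
# Schematic of the "calibrated against the genomic average" comparison.
# All numbers are invented placeholders, not results from the study: the point
# is only that each lineage's brain-gene rate is divided by that lineage's
# genome-wide rate before lineages are compared.

lineage_rates = {
    # lineage: (mean rate of brain-expressed genes, genome-wide mean rate)
    "human":      (0.0040, 0.0062),
    "chimpanzee": (0.0045, 0.0060),
    "macaque":    (0.0050, 0.0058),
}

for lineage, (brain_rate, genome_rate) in lineage_rates.items():
    relative = brain_rate / genome_rate
    print(f"{lineage:>10}: brain/genome rate ratio = {relative:.2f}")
```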
Sunday, December 24, 2006
An open access/free paper (includes video) from PLoS ONE:
Predator Mimicry: Metalmark Moths Mimic Their Jumping Spider Predators
Cases of mimicry provide many of the nature's most convincing examples of natural selection. Here we report evidence for a case of predator mimicry in which metalmark moths in the genus Brenthia mimic jumping spiders, one of their predators. In controlled trials, Brenthia had higher survival rates than other similarly sized moths in the presence of jumping spiders and jumping spiders responded to Brenthia with territorial displays, indicating that Brenthia were sometimes mistaken for jumping spiders, and not recognized as prey. Our experimental results and a review of wing patterns of other insects indicate that jumping spider mimicry is more widespread than heretofore appreciated, and that jumping spiders are probably an important selective pressure shaping the evolution of diurnal insects that perch on vegetation.
The phenomenon of mimicry, a high degree of resemblance due to selection, was first proposed in 1862 by Henry Walter Bates (info) upon his return from eleven years as a professional collector in the Amazon. Writing about butterfly wing patterns, Bates noted, "... on these expanded membranes Nature writes, as on a tablet, the story of the modifications of species..." Bates proposed that longwings and other butterflies gain protection by mimicking distasteful species and that the resemblances among such unrelated insects lent support to Charles Darwin's newly proposed theory of natural selection. Since Bates's initial contribution, various cases of mimicry have been identified from across the tree of life. In this paper, we describe a curious form of Batesian mimicry - again involving the wing patterns of Lepidoptera - in which prey (metalmark moths) obtain protection by mimicking their predators (jumping spiders).
Many examples of Batesian and Mullerian mimicry and camouflage have been described. Even cases of aggressive mimicry, where predators mimic prey, are known (e.g., females of Photuris fireflies lure males of different firefly species to their death by mimicking their courtship signals). However, predator mimicry - cases in which prey have evolved to mimic their predators to thwart predation attempts - are both exceptional and rare.
Predator mimicry was suggested for owls, where owl ear tufts mimic mammalian predators for protection from such predators as lynx, fox, and marten. Another potential case of predator mimicry is among South American cichlids: coloration and spotting of certain prey species makes them so similar to their predators that they are thought to be their mimics. Eyespots on the wings of giant silk moths and other Lepidoptera undoubtedly mimic eyes of mammalian predators - but here the eyes may function not to mimic their would-be predators, but to resemble a much larger animal, one sizable enough to be a threat to lepidopteran would-be predators. Hence, these eyespots might be regarded as startle coloration. However, in none of these cases are there experimental data demonstrating the efficacy of this mimicry. Our literature review suggests there are few well-supported cases of predator mimicry: e.g., lycaenid butterflies that chemically mimic ants and salticid spiders that mimic ants to avoid being preyed upon by them (other ant mimics probably gain protection from all predators that tend to avoid ants). Here we present evidence for another case of predator mimicry involving salticid spiders, but in this case salticids are predators and not prey.
Citation: Rota J, Wagner DL (2006) Predator Mimicry: Metalmark Moths Mimic Their Jumping Spider Predators. PLoS ONE 1(1): e45. doi:10.1371/journal.pone.0000045
"There are three forms of mimicry utilized by both predator and prey: Batesian mimicry, Muellerian mimicry, and self-mimicry. Mimicry refers to the similarities between animal species; camouflage refers to an animal species resembling an inanimate object..."
*David Wagner is author of "Caterpillars of Eastern North America : A Guide to Identification and Natural History"
From the Preface:
"I recently attended a seminar at Harvard University to hear Stefan Cover speak. He started off simply enough. "Everyone needs an obsession. Mine is ants." Everyone chuckled … more than a few heads nodded in agreement. For the past ten years mine has been caterpillars. They have provided a bounty of trip memories, abundant photographic opportunities, led to dozens of collaborations and friendships, some of which will be lifelong, and introduced me to a world full of beauty, change, carnage, and discovery. Stefan was right.
My goal in writing this guide is twofold. First, to provide larval images and biological summaries for the larger, commonly encountered caterpillars found east of the 100th meridian. Sounds simple, yet the problems associated with compiling such information are legion: literature is scattered, lacking, or, worse, especially in the case of some early accounts, wrong. For many common moths the species taxonomy is still under study, life histories are incompletely known, and distributional data have yet to be assembled. In this guide I offer a synopsis for each species that includes information on its distribution, phenology, and life history.
...Behaviors and phenomena previously believed to be exceptional or uncommon are shown to be otherwise: e.g., both Batesian and Mullerian mimicry appear to be more prevalent in caterpillars than previously recognized. Pronounced developmental changes (in form, coloration, and behavior), bordering on hypermetamorphosis, were seen in several families - striking examples occur among the daggers and slug caterpillars..."
I can't find the original volume so I may have got the exact words wrong, but I recall one of those marvellous old Punch cartoons in which every last detail is painstakingly explained. A devoted mother is looking proudly on at a military parade as her son's platoon marches past: 'There's my boy, he's the only one in step!' On The Guardian letters page of December 19th 2006, I initiated an exchange about Professor Andrew McIntosh* (info) of Leeds university, who has publicly stated that he believes the world is only 6,000 years old, and publicly stated that the theory of evolution violates the second law of thermodynamics**. Both these beliefs place McIntosh out of step with his scientific colleagues, not just his platoon but the entire regiment - to paraphrase Evelyn Waugh, the whole ruddy division. Amazingly, McIntosh is Professor of Thermodynamics at Leeds, and, equally amazingly, a letter supporting him has now appeared from Professor Stuart Burgess (homepage), Head of the Department of Mechanical Engineering at Bristol University . Other letters to the Editor indicate that a distressing number of otherwise knowledgeable and intelligent people have little conception of the enormity of what is being said.
Science doesn't work by vote and it doesn't work by authority. It is possible that Burgess and McIntosh really are the only ones in step, and the whole scientific establishment is flat wrong. Indeed, I shall bias my discussion in their favour by continuing to use that word 'establishment' with all its pejorative overtones of fuddyduddy, stick-in-the-muddy authoritarianism. I like mavericks. I like free spirits who buck the trend and strike out on their own. They are not usually right, but on the rare occasions when they are, they are very right indeed: importantly so, and all power to them. Maybe Burgess and McIntosh are right and all the rest of us – biologists, geologists, archeologists, historians, chemists, physicists, cosmologists and, yes, thermodynamicists and respectable theologians, the vast majority of Nobel Prizewinners, Fellows of the Royal Society and of the National Academies of the world - are wrong. Not just slightly wrong but catastrophically, appallingly, devastatingly wrong. It is possible, and I am going to follow that possibility through to its logical conclusion. I shall not here defend the views held by the scientific establishment. I am among those who have done that elsewhere, in many books. My purpose in this article is only to convey the full magnitude of the error into which, if Burgess and McIntosh are right, the scientific establishment has fallen.
Continued at "The Only One in Step"
*On November 29th 2006, Leeds University (UK) issued the following press release:
Professor Andrew McIntosh's directorship of Truth in Science***, and his promotion of that organisation's views, are unconnected to his teaching or research at the University of Leeds in his role as a professor of thermodynamics. As an academic institution, the University wishes to distance itself publicly from theories of creationism and so-called intelligent design which cannot be verified by evidence.
**Info on the Second Law of Thermodynamics:
Sometimes people say that life violates the second law of thermodynamics. This is not the case; we know of nothing in the universe that violates that law. So why do people say that life violates the second law of thermodynamics? What is the second law of thermodynamics?
The second law is a straightforward law of physics with the consequence that, in a closed system, you can't finish any real physical process with as much useful energy as you had to start with - some is always wasted. This means that a perpetual motion machine is impossible. The second law was formulated after nineteenth century engineers noticed that heat cannot pass from a colder body to a warmer body by itself.
According to philosopher of science Thomas Kuhn, the second law was first put into words by two scientists, Rudolph Clausius and William Thomson (Lord Kelvin), using different examples, in 1850-51 (2). American quantum physicist Richard P. Feynman, however, says the French physicist Sadi Carnot discovered the second law 25 years earlier (3). That would have been before the first law, conservation of energy, was discovered! In any case, modern scientists completely agree about the above principles.
***From the Truth in Science website:
"Welcome to Truth in Science, a new organisation to promote good science education in the UK. Our initial focus will be on the origin of life and its diversity.
For many years, much of what has been taught in school science lessons about the origin of the living world has been dogmatic and imbalanced. The theory of Darwinian evolution has been presented as scientifically uncontroversial and the only credible explanation of origins." | http://evomech1.blogspot.com/2006_12_24_archive.html | 13 |
64 | This page derives some simple formulas that are useful for computing dihedral angles in polyhedra.
We often start by knowing both the true angle, α, between two edges of a face of a polyhedron, and the angle β that the same two edges appear to make in some projection. For example, every face of a regular icosahedron is an equilateral triangle, with angles of π/3 between the edges — but when we look straight at a vertex of the icosahedron, we see five edges in a projection where the angles between them appear to be 2π/5.
To see how we can make use of this kind of information, let's suppose we have an isosceles triangle with two sides of length s and a known angle α between them (triangle CAD in the diagram). And suppose that this triangle's projection in some plane is again an isosceles triangle, this time with an angle β between its sides (triangle CBD).
We construct the point E as the midpoint of CD. We find the plane that's perpendicular to the line segment AC and passes through D, and label the point where this plane cuts AC as F and the point where it cuts BC as G.
Some angles we might then want to know are: γ, the angle between the side CA and its projection CB; δ, the angle between the plane of the triangle CAD and the plane ABC; and ε, the angle AEB between the line AE and its projection BE.
If we have a regular polyhedron, the reflection of triangle CAD in the plane of ABC will be an adjacent face of the same polyhedron, and the angle between the two faces will be 2δ.
To find γ, note that we can compute the length EC in two ways: either as s sin ½α from triangle AEC, or as s cos γ sin ½β from triangle BEC. Equating the two we have:
cos γ = sin ½α / sin ½β
To find δ, we use the fact that we can obtain the length GD in two ways: either as s sin α sin δ from triangle FGD, or as s cos γ sin β, from triangle BGD. When we equate these and solve for sin δ, we get:
sin δ = [sin β / sin α] cos γ = [(2 sin ½β cos ½β) / (2 sin ½α cos ½α)] (sin ½α / sin ½β)
which simplifies to:
sin δ = cos ½β / cos ½α
To find ε, we see that:
cos ε = BE / AE = (EC cot ½β) / (EC cot ½α)
This gives us:
cos ε = tan ½α / tan ½β
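As a quick numerical check of these three formulas, here is a short Python sketch (mine, not part of the original derivation) that plugs in the icosahedron example quoted earlier, α = π/3 and β = 2π/5:

```python
import math

# Icosahedron example: true angle between edges of a face is pi/3,
# projected angle seen when looking straight at a vertex is 2*pi/5.
alpha = math.pi / 3
beta = 2 * math.pi / 5

gamma = math.acos(math.sin(alpha / 2) / math.sin(beta / 2))
delta = math.asin(math.cos(beta / 2) / math.cos(alpha / 2))
eps   = math.acos(math.tan(alpha / 2) / math.tan(beta / 2))

print(math.degrees(gamma))      # ~31.7 degrees
print(math.degrees(2 * delta))  # ~138.19 degrees - the icosahedron's dihedral angle
print(math.degrees(eps))        # ~37.4 degrees
```

The value of 2δ agrees with the icosahedron's well-known dihedral angle, which is reassuring.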
Suppose we have a regular polyhedron in which n faces, all of them regular f-gons, meet at every vertex. If α is the interior angle between edges of an f-gon:
α = (1 – 2/f) π
½α = π/2 – π/f
cos ½α = sin π/f
Looking straight at the vertex, if β is the projected angle between the edges:
β = 2π/n
½β = π/n
The sine of half the internal dihedral angle between the faces of the polyhedron will then be given by our formula from the previous section:
sin δ = cos ½β / cos ½α = (cos π/n) / (sin π/f)
To calculate the full dihedral angle, 2δ, it's convenient to rewrite this as:
cos 2δ = 1 – 2 sin² δ = 1 – 2 [(cos π/n) / (sin π/f)]²
|Polyhedron||n||f||cos 2δ||Dihedral angle 2δ|
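As a sketch (the list of solids and the loop below are my own additions, but the formula is the one just derived), the entries of this table can be recomputed directly:

```python
import math

# (n, f) for the five Platonic solids: n faces meet at each vertex,
# and each face is a regular f-gon.
solids = [("tetrahedron", 3, 3), ("cube", 3, 4), ("octahedron", 4, 3),
          ("dodecahedron", 3, 5), ("icosahedron", 5, 3)]

for name, n, f in solids:
    cos_2d = 1 - 2 * (math.cos(math.pi / n) / math.sin(math.pi / f)) ** 2
    print(f"{name:13s}  cos 2d = {cos_2d:+.4f}   2d = {math.degrees(math.acos(cos_2d)):7.3f} deg")
```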
The 4-dimensional regular polytope known as the 24-cell has 24 octahedral hyperfaces. At each of its 24 vertices, 6 of these octahedra meet. But the corners of an octahedron (above left) don't have the correct angles for six of them to fit together without any gaps. What's needed to do that is the kind of pyramid that can be formed by taking one-sixth of a cube (above right).
Of course, the resolution to this is that the six octahedra that meet at each vertex of a 24-cell don't all lie in the same 3-dimensional hyperplane, any more than the four triangular faces that meet at each vertex of the octahedron lie in the same 2-dimensional plane. But if we project a view of those four triangles into a plane perpendicular to a line from the centre of the octahedron to the vertex, we see a square divided into four identical triangles. Similarly, if we project a view of the six octahedra that meet at each vertex of the 24-cell into a 3-dimensional hyperplane perpendicular to a line from the centre of the 24-cell to the vertex, we see six identical pyramids of the kind produced by subdividing a cube.
There are analogous situations at the vertices of all the 4-dimensional regular polytopes: the corners of the hyperfaces that come together there will be pyramids with n-sided polygons as their bases, but in the projection of the vertex into a suitable hyperplane there will be shorter pyramids formed by subdividing a “vertex figure” polyhedron with n-sided faces.
The initial known quantities in the problem are:
From the calculations we did in the first section of this page, we have:
cos γ1 = sin ½α1 / sin ½β
cos γ2 = sin ½α2 / sin ½β
cos ε1 = tan ½α1 / tan ½β
cos ε2 = tan ½α2 / tan ½β
In order to find the dihedral angle between hyperfaces of the polytope, we will initially calculate half that angle: δ, the angle between the hyperplane containing the single hyperface we've been dealing with (the pyramid with tip E), and the hyperplane containing the tetrahedron ABEF. That tetrahedron contains one face of the hyperface (the triangle ABE), and also the vector EF that is normal to the hyperplane we project into. It's the four-dimensional analogue of the triangle ABC in our original three-dimensional treatment: the reflection in it of the single hyperface we're analysing will produce an adjacent hyperface of the same regular polytope.
Now, the heights of the two pyramids are:
OE = r tan γ1
OF = r tan γ2
where r is the radius of the circumscribing circle of their common base. The perpendicular distances from a face of either pyramid to the centre of the base can then be found by using the cosine of the dihedral angle between the base and the face:
OP = OE cos ε1 = r cos ε1 tan γ1
OQ = OF cos ε2 = r cos ε2 tan γ2
The line segment OQ is perpendicular to the entire hyperplane of the tetrahedron ABEF. Why? It's perpendicular to the face ABF by construction, and it's perpendicular to EF because EF is normal to the hyperplane of projection that contains the whole projected pyramid, and we constructed OQ to lie within that pyramid.
But the line segment PQ lies within the tetrahedron ABEF, because each endpoint lies on one of its faces. This means OQP is a right triangle with a right angle at Q.
Now, OQ is perpendicular to the face ABE, as is OP by construction, so any vector in the plane of OQP will also be perpendicular to ABE. But the vector that's normal to the entire unprojected hyperface must be normal to that face, so the hyperface normal must lie in the plane of OQP. What's more, the hyperface normal must be perpendicular to the line segment OP, which lies within the hyperface. This is enough to tell us that the angle at P in the triangle must be δ, the angle between the hyperface and the tetrahedron ABEF.
So sin δ can be found from the right triangle OQP:
sin δ = OQ / OP
= (cos ε2 tan γ2) / (cos ε1 tan γ1)
= (cos ½α1 / cos ½α2) (sin γ2 / sin γ1)
= (cos ½α1 / cos ½α2) √[(sin² ½β – sin² ½α2)/(sin² ½β – sin² ½α1)]
The specific values we wish to substitute into this formula are:
½α1 = ½(1 – 2/fF) π = π/2 – π/fF
cos ½α1 = sin π/fF
sin ½α1 = cos π/fF
β = 2π/fV
cos ½α2 = (cos π/fV) / (sin π/dV)
For ½α2, we're using our formula from the previous section for the dihedral angle for a polyhedron, and applying it to the angle between vertices for the dual polyhedron; one is just π minus the other, so when we take half the angle that exchanges sin and cos. Here we've set the number of faces meeting at each vertex to fV, the sides-per-face of the vertex figure, since we're applying the formula to the dual of the vertex figure. And we've introduced a new parameter, dV, which is the number of faces that meet at each vertex in the vertex figure.
This yields a final result of:
sin δ = (cos π/dV) (sin π/fF) / √(sin² π/fV – cos² π/fF)
An equivalent formulation is:
cos 2δ = 1 – 2 [(cos π/dV) (sin π/fF)]² / [sin² π/fV – cos² π/fF]
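Before looking at the table below, it is easy to check this result numerically; the helper function here is my own naming, but it simply evaluates the formula above:

```python
import math

def dihedral_2delta(fF, fV, dV):
    """fF: sides per face of the hyperfaces; fV: sides per face of the
    vertex figure; dV: faces meeting at each vertex of the vertex figure."""
    sin_d = (math.cos(math.pi / dV) * math.sin(math.pi / fF)
             / math.sqrt(math.sin(math.pi / fV) ** 2 - math.cos(math.pi / fF) ** 2))
    return math.degrees(2 * math.asin(sin_d))

print(dihedral_2delta(5, 3, 3))  # 120-cell: 144.0
print(dihedral_2delta(3, 3, 5))  # 600-cell: ~164.478
```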
|Polytope||Hyperfaces||fF||Vertex figures||fV||dV||cos 2δ||Dihedral angle 2δ|
|120-cell||Dodecahedra||5||Tetrahedra||3||3||–(1 + √5)/4||144°|
|600-cell||Tetrahedra||3||Icosahedra||3||5||–(1 + 3√5)/8||164.478°| | http://gregegan.customer.netspace.net.au/SCIENCE/Dihedral/Dihedral.html | 13 |
94 | Circles, the perfect shape! On this page we hope to clear up problems that you might have with circles and the figures, such as radii, associated with them. Just start scrolling down or click one of the links below to start understanding:
Angles involving tangents and/or secants
Segments in circles
Circumference and arc length
Quiz on Circles
All the "parts" of a circle, such as the radius, the
diameter, etc., have a relationship with the circle or
another "part" that can always be expressed as a
theorem. The two theorems that deal with chords and
radii (plural of radius) are outlined below.
1. Problem: Find CD.
Given: Circle R is congruent to circle S. Chord AB = 8. RM = SN.
Solution: By theorem number 2 above, segment AB is congruent to segment CD. Therefore, CD equals 8.
Oh, the wonderfully confusing world of geometry! :-) The tangent being discussed here is not the trigonometric ratio. This kind of tangent is a line or line segment that touches the perimeter of a circle at one point only and is perpendicular to the radius that contains the point.
1. Problem: Find the value of x.
Given: Segment AB is tangent to circle C at B.
Solution: x is a radius of the circle. Since x contains B, and AB is a tangent segment, x must be perpendicular to AB (the definition of a tangent tells us that). If it is perpendicular, the triangle formed by x, AB, and CA is a right triangle. Use the Pythagorean Theorem to solve for x:
15² + x² = 17²
x² = 64
x = 8
Congruent arcs are arcs that have the same degree measure and are in the same circle or in congruent circles.
An inscribed angle is an angle with its vertex on a circle and with sides that contain chords of the circle. The figure below shows an inscribed angle.
The most important theorem dealing with inscribed angles is stated below.
1. Problem: Find the measure of each arc or angle listed below: arc QSR, angle Q, angle R.
Solution: Arc QSR is 180° because it is twice the measure of its inscribed angle (angle QPR, which is 90°). Angle Q is 60° because it is half of its intercepted arc, which is 120°. Angle R is 30° by the Triangle Sum Theorem, which says a triangle has three angles whose measures total 180° when added together.
In the last problem's figure, you noticed that angle P is inscribed in semicircle QPR and angle P = 90°. This leads us to our next theorem, which is stated below.
1. Problem: Find the measure of arc GDE.
Solution: By the theorem stated above, angle D and angle F are supplementary. Therefore, angle F equals 95°. The first theorem discussed in this section tells us the measure of an arc is twice that of its inscribed angle. With that theorem, arc GDE is 190°.
When two secants (or chords) intersect inside a circle, the measure of each angle formed is one-half the sum of the measures of the intercepted arcs; when two secants intersect outside a circle, the angle formed measures one-half the difference of the intercepted arcs. The figure below shows this theorem in action.
1. Problem: Find the measure of angle 1.
Givens: Arc AB = 60°, Arc CD = 100°.
Solution: The secants meet outside the circle, so by the theorem stated above, the measure of angle 1 = ½((arc CD) - (arc AB)) = ½(100° - 60°) = 20°.
Another way secants can intersect in circles is if they are only in line segments. There is a theorem that tells us when two chords intersect inside a circle, the product of the measures of the two segments of one chord is equal to the product of the measures of the two segments of the other chord. In the figure below, chords PR and QS intersect. By the theorem stated above, PT * TR = ST * TQ.
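A quick way to convince yourself of this chord theorem is to test it numerically. The sketch below uses my own example numbers (a circle of radius 5 centred at the origin and an interior point T = (1, 1)); every chord through T gives the same product of segments.

```python
import math

r = 5.0
T = (1.0, 1.0)   # a point inside the circle

def segment_product(theta):
    """Product of the distances from T to the two points where the chord
    in direction theta meets the circle: solve |T + t*d|^2 = r^2 for t."""
    dx, dy = math.cos(theta), math.sin(theta)
    b = T[0] * dx + T[1] * dy
    c = T[0] ** 2 + T[1] ** 2 - r ** 2
    t1 = -b + math.sqrt(b * b - c)
    t2 = -b - math.sqrt(b * b - c)
    return abs(t1) * abs(t2)

print(segment_product(0.3))   # 23.0
print(segment_product(1.2))   # 23.0 - the same for every chord through T
```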
One last thing that has to be discussed when dealing with circles is circumference, or the distance around a circle. The circumference of a circle equals 2 times PI times the measure of the radius. That postulate is usually represented by the following equation (where C represents circumference and r stands for radius): C = 2(PI)r.
1. Problem: Find the length of a 24° arc of a circle with a 5 cm radius.
Solution: L = (n/360)(2(PI)r) = (24/360)(2(PI))(5) = (2/3)(PI)
The length of the arc is (2/3)(PI) cm.
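If you want to check arc-length answers like this one, a couple of lines of Python will do it (the numbers below are just the ones from the problem above):

```python
import math

n, r = 24, 5                      # a 24-degree arc, radius 5 cm
L = (n / 360) * 2 * math.pi * r
print(L)                          # ~2.094 cm
print(2 * math.pi / 3)            # (2/3)*pi, the same value
```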
Take the Quiz on circles. (Very useful to review or to see if you've really got this topic down.) Do it! | http://library.thinkquest.org/20991/geo/circles.html | 13 |
76 | CBSE Board Math Syllabus for Class 12
CBSE Board Syllabus for Class 12 Math
|I.||RELATIONS AND FUNCTIONS||10|
|IV.||VECTORS AND THREE - DIMENSIONAL GEOMETRY||17|
UNIT I. RELATIONS AND FUNCTIONS
1. Relations and Functions : 10 Periods
Types of relations: reflexive, symmetric, transitive and equivalence relations. One to one and onto functions, composite functions, inverse of a function. Binary operations.
2. Inverse Trigonometric Functions: (12) Periods
Definition, range, domain, principal value branches. Graphs of inverse trigonometric functions. Elementary properties of inverse trigonometric functions.
1. Matrices: (18) Periods
Concept, notation, order, equality, types of matrices, zero matrix, transpose of a matrix, symmetric and skew symmetric matrices. Addition, multiplication and scalar multiplication of matrices, simple properties of addition, multiplication and scalar multiplication. Non-commutativity of multiplication of matrices and existence of non-zero matrices whose product is the zero matrix (restrict to square matrices of order 2). Concept of elementary row and column operations. Invertible matrices and proof of the uniqueness of inverse, if it exists; (Here all matrices will have real entries).
2. Determinants: (20) Periods
Determinant of a square matrix (up to 3 x 3 matrices), properties of determinants, minors, cofactors and applications of determinants in finding the area of a triangle. Adjoint and inverse of a square matrix. Consistency, inconsistency and number of solutions of system of linear equations by examples, solving system of linear equations in two or three variables (having unique solution) using inverse of a matrix.
1. Continuity and Differentiability: (18) Periods
Continuity and differentiability, derivative of composite functions, chain rule, derivatives of inverse trigonometric functions, derivative of implicit functions. Concept of exponential and logarithmic functions and their derivatives. Logarithmic differentiation. Derivative of functions expressed in parametric forms. Second order derivatives. Rolle's and Lagrange's Mean Value Theorems (without proof) and their geometric interpretations.
2. Applications of Derivatives: (10) Periods
Applications of derivatives: rate of change, increasing/decreasing functions, tangents & normals, approximation, maxima and minima (first derivative test motivated geometrically and second derivative test given as a provable tool). Simple problems (that illustrate basic principles and understanding of the subject as well as real-life situations).
3. Integrals: (20) Periods
Integration as inverse process of differentiation. Integration of a variety of functions by substitution, by partial fractions and by parts, only simple integrals of the type to be evaluated.
Definite integrals as a limit of a sum, Fundamental Theorem of Calculus (without proof). Basic properties of definite integrals and evaluation of definite integrals.
4. Applications of the Integrals: (10) Periods
Applications in finding the area under simple curves, especially lines, areas of circles/ parabolas/ellipses (in standard form only), area between the two above said curves (the region should be clearly identifiable).
5. Differential Equations: (10) Periods
Definition, order and degree, general and particular solutions of a differential equation. Formation of differential equation whose general solution is given. Solution of differential equations by method of separation of variables, homogeneous differential equations of first order and first degree. Solutions of linear differential equation of the type:
dy/dx + py = q, where p and q are functions of x.
UNIT-IV: VECTORS AND THREE-DIMENSIONAL GEOMETRY
1. Vectors: (12) Periods
Vectors and scalars, magnitude and direction of a vector. Direction cosines/ratios of vectors. Types of vectors (equal, unit, zero, parallel and collinear vectors), position vector of a point, negative of a vector, components of a vector, addition of vectors, multiplication of a vector by a scalar, position vector of a point dividing a line segment in a given ratio. Scalar (dot) product of vectors, projection of a vector on a line. Vector (cross) product of vectors.
2. Three - dimensional Geometry: (12) Periods
Direction cosines/ratios of a line joining two points. Cartesian and vector equation of a line, coplanar and skew lines, shortest distance between two lines. Cartesian and vector equation of a plane. Angle between (i) two lines, (ii) two planes. (iii) a line and a plane. Distance of a point from a plane.
UNIT-V: LINEAR PROGRAMMING
1. Linear Programming: (12) Periods
Introduction, definition of related terminology such as constraints, objective function, optimization, different types of linear programming (L.P.) problems, mathematical formulation of L.P. problems, graphical method of solution for problems in two variables, feasible and infeasible regions, feasible and infeasible solutions, optimal feasible solutions (up to three non-trivial constraints).
1. Probability: (18) Periods
Multiplication theorem on probability. Conditional probability, independent events, total probability, Bayes' theorem. Random variable and its probability distribution, mean and variance of a random variable. Repeated independent (Bernoulli) trials and Binomial distribution.
1) Mathematics Part I - Textbook for Class XII, NCERT Publication
2) Mathematics Part II - Textbook for Class XII, NCERT Publication
Board Sample Paper
- Tamilnadu Board Class 12 Zoology Sample Paper Of 2011
- Andhra Pradesh Board Class 11 Math 2011
- ICSE Board Class 10 Biology 2009
- CBSE Board Class 8 Science 2009
- West Bengal Board Class 12 English Core Sample Paper Of 2011
- Himachal Pradesh Board Class 12 Biology 2009
- Madhya Pradesh Board Class 12 History 2013-SET-2
- CBSE Board Class 11 Economics 2007
- Andhra Pradesh Board Class 12 Sociology 2011
- CBSE Board Class 12 Sociology 2008
Previous Year Paper
- CBSE Board Class 11 Political Science 2007
- CBSE Board Class 12 Business Studies 2005
- CBSE Board Class 12 English Elective 2005
- CBSE Board Class 11 Physics 2011
- ICSE Board Class 10 Geography 2007
- CBSE Board Class 12 Biology 2008
- CBSE Board Class 10 Social Science 2009
- CBSE Board Class 12 Psychology 2005
- CBSE Board Class 10 Science 2005
- CBSE Board Class 11 Chemistry 2005
- Rajasthan Board Class 11 Psychology
- CBSE Board 12th Physics Syllabus
- Madhya Pradesh Board Class 11 Business Studies
- Himachal Pradesh Board Class 10 Commerce
- Gujarat Board Class 12 Psychology
- Rajasthan Board Class 11 Business Studies
- ICSE Board Class 11 Home Science
- Himachal Pradesh Board Class 11 English Core
- Himachal Pradesh Board Class 11 Economics
- Madhya Pradesh Board Class 10 French | http://boards.edurite.com/cbse+board+math+class+12-syllabus~beN-cgU-sSW.html | 13 |
56 | Linear Algebra Toolbox 2
In the previous part I covered a bunch of basics. Now let's continue with stuff that's a bit more fun. Small disclaimer: In this series, I'll be mostly talking about finite-dimensional, real vector spaces, and even more specifically R^n for some n. So assume that's the setting unless explicitly stated otherwise; I don't want to bog the text down with too many technicalities.
(Almost) every product can be written as a matrix product
In general, most of the functions we call “products” share some common properties: they’re examples of “bilinear maps”, that is vector-valued functions of two vector-valued arguments which are linear in both of them. The latter means that if you hold either of the two arguments constant, the function behaves like a linear function of the other argument. Now we know that any linear function can be written as a matrix product for some matrix M, provided we’re willing to choose a basis.
Okay, now take one such product-like operation between vector spaces, let's call it •. What the above sentence means is that for any a, there is a corresponding matrix M_a such that a • b = M_a b (and also a matrix N_b such that a • b = N_b a, but let's ignore that for a minute). Furthermore, since a product is linear in both arguments, M_a itself (respectively N_b) is a linear function of a (respectively b) too.
This is all fairly abstract. Let's give an example: the standard dot product. The dot product of two vectors a and b is the number a · b = a1b1 + a2b2 + a3b3. This should be well known. Now let's say we want to find this matrix for some a. First, we have to figure out the correct dimensions. For fixed a, the dot product is a scalar-valued function of two vectors; so the matrix that represents "a-dot" maps a 3-vector to a scalar (1-vector); in other words, it's a 1×3 matrix. In fact, as you can verify easily, the matrix representing "a-dot" is just "a" written as a row vector – or written as a matrix expression, aᵀ. For the full dot product expression, we thus get a · b = aᵀb = bᵀa (because the dot product is symmetric, we can swap the positions of the two arguments). This works for any dimension of the vectors involved, provided they match of course. More importantly, it works the other way round too – a 1-row matrix represents a scalar-valued linear function (more concisely called a "linear functional"), and in case of the finite-dimensional spaces we're dealing with, all such functions can be written as a dot product with a fixed vector.
The same technique works for any given bilinear map. Especially if you already know a form that works on coordinate vectors, in which case you can instantly write down the matrix (same as in part 1, just check what happens to your basis vectors). To give a second example, take the cross product in three dimensions. The corresponding matrix looks like this:

  [a]× =  [  0   -a3   a2 ]
          [  a3   0   -a1 ]
          [ -a2   a1   0  ]

The notation [a]× is standard for this construction. Note that in this case, because the cross product is vector-valued, we have a full 3×3 matrix – and not just any matrix: it's a skew-symmetric matrix, i.e. ([a]×)ᵀ = -[a]×. I might come back to those later.
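Here is a small numerical sanity check (my sketch, not from the original post) that the skew-symmetric matrix really does reproduce the cross product:

```python
import numpy as np

def cross_matrix(a):
    """Skew-symmetric matrix [a]x with cross_matrix(a) @ b == np.cross(a, b)."""
    a1, a2, a3 = a
    return np.array([[0.0, -a3,  a2],
                     [ a3, 0.0, -a1],
                     [-a2,  a1, 0.0]])

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
print(cross_matrix(a) @ b)   # [-3.  6. -3.]
print(np.cross(a, b))        # [-3.  6. -3.]
```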
So what we have now is a systematic way to write any “product-like” function of a and b as a matrix product (with a matrix depending on one of the two arguments). This might seem like a needless complication, but there’s a purpose to it: being able to write everything in a common notation (namely, as a matrix expression) has two advantages: first, it allows us to manipulate fairly complex expressions using uniform rules (namely, the rules for matrix multiplication), and second, it allows us to go the other way – take a complicated-looked matrix expression and break it down into components that have obvious geometric meaning. And that turns out to be a fairly powerful tool.
Projections and reflections
Let’s take a simple example: assume you have a unit vector , and a second, arbitrary vector . Then, as you hopefully know, the dot product is a scalar representing the length of the projection of x onto v. Take that scalar and multiply it by v again, and you get a vector that represents the component of x that is parallel to v:
See what happened there? Since it's all just matrix multiplication, which is associative (we can place parentheses however we want), we can instantly get the matrix that represents parallel projection onto v. Similarly, we can get the matrix for the corresponding orthogonal component:

  x⊥ = x - x∥ = x - (v vᵀ) x = (I - v vᵀ) x
All it takes is the standard algebra trick of multiplying by 1 (or in this case, an identity matrix); after that, we just use linearity of matrix multiplication. You're probably more used to exploiting it when working with vectors (stuff like a · (x + y) = a · x + a · y), but it works in both directions and with arbitrary matrices: A(B + C) = AB + AC and (A + B)C = AC + BC – matrix multiplication is another bilinear map.
Anyway, with the two examples above, we get a third one for free: We've just separated x into two components, x = x∥ + x⊥. If we keep the orthogonal part but flip the parallel component, we get a reflection about the plane through the origin with normal v. This is just x⊥ - x∥, which is again linear in x, and we can get the matrix for the whole thing by subtracting the two other matrices:

  R = (I - v vᵀ) - v vᵀ = I - 2 v vᵀ
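A short numpy sketch (mine, with an arbitrarily chosen unit vector) makes the three matrices concrete:

```python
import numpy as np

v = np.array([0.0, 0.0, 1.0])     # any unit vector works
I = np.eye(3)
P_par  = np.outer(v, v)           # parallel projection, v v^T
P_perp = I - P_par                # orthogonal component
R      = I - 2.0 * P_par          # reflection about the plane with normal v

x = np.array([1.0, 2.0, 3.0])
print(P_par @ x)    # [0. 0. 3.]
print(P_perp @ x)   # [1. 2. 0.]
print(R @ x)        # [ 1.  2. -3.]
```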
None of this is particularly fancy (and most of it you should know already), so why am I going through this? Two reasons. First off, it’s worth knowing, since all three special types of matrices tend to show up in a lot of different places. And second, they give good examples for transforms that are constructed by adding something to (or subtracting from) the identity map; these tend to show up in all kinds of places. In the general case, it’s hard to mentally visualize what the sum (or difference) of two transforms does, but orthogonal complements and reflections come with a nice geometric interpretation.
I’ll end this part here. See you next time! | http://fgiesen.wordpress.com/2012/06/15/linear-algebra-toolbox-2/ | 13 |
59 | 101 "pfacts" for the Physics regents exam.
The original 101 pfacts can be found at: http://homepages.go.com/~abcdeder/101facts.html
(revised by Drew Panko, June 2002)
1. Mass and inertia are the same thing. (Mass actually measures inertia - in kilograms - much as monetary resources measure financial wealth - in dollars.)
2. Weight (force of gravity) decreases with the square of the distance as you move away from the Earth's center. (It decreases, but only approaches zero, never reaching it, even far beyond the solar system.)
3. Weight (in newtons) is mass x the acceleration due to gravity (w = mg). Mass is not Weight! Mass is a scalar and measured in kilograms; weight is a force and a vector and measured in Newtons.
4. Velocity can only be constant when the net force (and acceleration) is zero. (The velocity can be zero and not constant - for example when a ball, thrown vertically, is at the top of its trajectory.)
5. Velocity, displacement [s], momentum, force (weight), torque, and acceleration are vectors.
6. Speed, distance [d], time, length, mass, temperature, charge, power and energy (joules) are scalar quantities.
7. The slope of the distance-time graph is velocity.
8. The slope of the velocity-time graph is acceleration.
9. The area under a velocity-time graph is distance.
10. Magnitude is a term used to state how large a vector quantity is.
11. At zero (0) degrees two vectors have a resultant equal to their sum. At 180 degrees two vectors have a resultant equal to their difference. From the minimum value (at 180) to the maximum value (at zero) is the total range of all the possible resultants of any two vectors.
12. An unbalanced force must produce an acceleration and the object cannot be in equilibrium.
13. If an object is not accelerating, it is in equilibrium and no unbalanced forces are acting.
14. The equilibrant force is equal in magnitude but opposite in direction to the resultant vector.
15. Momentum is conserved in all collision systems. Energy is conserved (in the KE of the objects) only if a collision is perfectly elastic.
16. Mechanical energy is the sum of the potential and kinetic energy.
17. UNITS: a = [m/sec²]; F = [kg·m/sec²] = Newton; work = PE = KE = [kg·m²/sec²] = Joule; Power = [kg·m²/sec³] = [Joules/sec] = Watt
18. 1 eV is a very small energy unit equal to 1.6 x 10⁻¹⁹ joules - used for small objects such as electrons. This is on the Reference Chart.
19. Gravitational potential energy increases as height increases.
20. Kinetic energy changes only if mass or velocity changes.
21. Mechanical energy (PE + KE) does not change for a free falling mass or a swinging pendulum. (when ignoring air friction)
III. Electricity and Magnetism
22. A coulomb is charge, an amp is current [coulomb/sec] and a volt is potential difference [joule/coulomb].
23. Short, fat, cold wires make the best conductors.
24. Electrons and protons have equal amounts of charge (1.6 x 10⁻¹⁹ coulombs each - known as one elementary charge). This is on the Reference Chart.
25. Adding a resistor in series increases the total resistance of a circuit.
26. Adding a resistor in parallel decreases the total resistance of a circuit.
27. All resistors in series have equal current (I).
28. All resistors in parallel have equal voltage (V).
29. If two identical charged spheres touch each other, add the charges and divide by two to find the final charge on each sphere after they are separated.
30. Insulators contain no electrons free to move.
31. Ionized gases conduct electric current using positive ions, negative ions and electrons.
32. Electric fields all point in the direction of the force on a positive test charge.
33. Electric fields between two parallel plates are uniform in strength except at the edges.
34. Millikan determined the charge on a single electron using his famous oil-drop experiment.
35. All charge changes result from the movement of electrons not protons. (an object becomes positive by losing electrons)
36. The direction of a magnetic field is defined by the direction a compass needle points. (The direction an isolated north pole would feel.)
37. Magnetic fields point from the north to the south outside the magnet and south to north inside the magnet.
38. Magnetic flux is measured in webers.
39. Left hands are for negative charges and reverse answer for positive charges.
40. The first hand rule deals with the B-field around a current bearing wire, the third hand rule looks at the force on charges moving in a B-field, and the second hand rule is redundant.
41. Solenoids are stronger with more current or more wire turns or adding a soft iron core.
IV. Wave Phenomena
42. Sound waves are longitudinal and mechanical.
43. Light slows down, bends toward the normal and has a shorter wavelength when it enters a medium with a higher index of refraction (n).
44. All angles in wave theory problems are measured to the normal.
45. Blue light has more energy, a shorter wavelength and a higher frequency than red light (remember- ROYGBIV).
46. The electromagnetic spectrum (radio, infrared, visible, ultraviolet, x-ray and gamma) is listed from lowest energy to highest. They are all electromagnetic and travel at the speed of light (c = fλ).
47. The speed (c) of all types of electromagnetic waves is 3.0 x 10⁸ m/sec in a vacuum.
48. As the frequency of an electromagnetic wave increases, its energy increases (E = hf) and its wavelength decreases and its velocity remains constant as long as it doesn't enter a medium with a different refractive index (i.e. optical density).
49. A prism produces a rainbow from white light by dispersion. (red bends the least because it slows the least).
50. Transverse wave particles vibrate back and forth perpendicular to the direction of the wave's velocity. Longitudinal wave particles vibrate back and forth parallel to the direction of the wave's velocity.
51. Light wave are transverse (they, and all (and only)transverse waves can be polarized).
52. The amplitude of a non-electromagnetic wave (i.e. water, string and sound waves) determines its energy. The frequency determines the pitch of a sound wave. Their wavelength is a function of its frequency and speed (v = fλ). Their speed depends on the medium they are traveling in.
53. Constructive interference occurs when two waves are zero (0) degrees out of phase or a whole number of wavelengths (360 degrees.) out of phase.
54. At the critical angle a wave will be refracted to 90 degrees. At angles larger than the critical angle, light is reflected not refracted.
55. Doppler effect: when a wave source moves toward you, you will perceive waves with a shorter wavelength and higher frequency than the waves emitted by the source. When a wave source moves away from you, you will perceive waves with a longer wavelength and lower frequency.
56. Double slit diffraction works because of diffraction and interference.
57. Single slit diffraction produces a much wider central maximum than double slit.
58. Diffuse reflection occurs from dull surfaces while regular (specular) reflection occurs from smooth (mirror-like) surfaces.
59. Only waves show diffraction, interference and polarization.
60. The period of a wave is the inverse of its frequency (T = 1/f ). So waves with higher frequencies have shorter periods.
61. Monochromatic light has one frequency.
62. Coherent light waves are all in phase.
V. Modern Physics
63. In order to explain the photoelectric effect, Einstein proposed particle behavior for light (and all electromagnetic waves) with E = hf and KEmax = hf - Wo. (A short worked example follows at the end of this section.)
64. A photon is a particle of light (wave packet).
65. To preserve the symmetry of the universe, DeBroglie proposed wave behavior for particles (λ = h/mv). Therefore large fast moving objects (baseballs, rockets) have very short wavelengths (that are unobservable) but very small objects, particularly when moving slowly, have wavelengths that can be detected in the behavior of the objects.
66. Whenever charged particles are accelerated, electromagnetic waves are produced.
67. The lowest energy state of a atom is called the ground state.
68. Increasing light frequency increases the kinetic energy of the emitted photo-electrons in the photo-electric effect (KEmax = hf - Wo).
69. As the threshold frequency increases for a photo-cell (photo emissive material) the work function also increases (Wo = h fo)
70. Increasing light intensity increases the number of emitted photo-electrons in the photo-electric effect but not their KE (i.e. more intensity>more photons>more electrons emitted). This is the particle nature shown by light.
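A short worked example of facts 63 and 68 (the constants are the usual reference-table values; the frequency and work function below are example numbers of my own choosing):

```python
h = 6.63e-34      # Planck's constant, J*s
eV = 1.6e-19      # joules per electron-volt
f = 7.5e14        # Hz, blue-violet light (assumed example value)
W0 = 2.0 * eV     # assumed work function of 2.0 eV

E = h * f                 # photon energy, E = hf
KE_max = E - W0           # KEmax = hf - Wo
print(E / eV)             # ~3.1 eV
print(KE_max / eV)        # ~1.1 eV
```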
VI. Motion in a plane
71. Key to understanding trajectories is to separate the motion into two independent components in different dimensions - normally horizontal and vertical. Usually the velocity in the horizontal dimension is constant (not accelerated) and the motion in the vertical dimension is changing (usually with acceleration of g).
72. Centripetal force and centripetal acceleration vectors are toward the center of the circle- while the velocity vector is tangent to the circle. (Centripetal means towards the center!)
73. An object in orbit is not weightless - it is its weight that keeps it moving in a circle around the astronomical mass it is orbiting. In other words, its weight is the centripetal force keeping it moving in a circle.
74. An object in orbit is in free fall - it is falling freely in response to its own weight. Any object inside a freely falling object will appear to be weightless.
75. Rutherford discovered the positive nucleus using his famous gold-foil experiment.
76. Fusion is the process in which hydrogen is combined to make helium.
77. Fission requires that a neutron causes uranium to be split into middle size atoms and produce extra neutrons, which, in turn, can go on and cause more fissions.
78. Radioactive half-lives are not effected by any changes in temperature or pressure (or anything else for that matter).
79. One AMU of mass is equal to 931 MeV of energy (E = mc²). This is on the Reference Charts!
80. Nuclear forces are very strong and very short-ranged.
81. There are two basic types of elementary particles: Hadrons & Leptons (see Chart).
82. There are two types of Hadrons: Baryons and Mesons (see Chart).
83. The two types of Hadrons are different because they are made up of different numbers of quarks. Baryons are made up of 3 quarks, and Mesons of a quark and antiquark.
84. Notice that to make long-lived Hadron particles quarks must combine in such a way as to give the charge of particle formed a multiple of the elementary charge.
85. For every particle in the "Standard Model" there is an antiparticle. The major difference of an antiparticle is that its charge is opposite in sign. All antiparticles will annihilate as soon as they come in contact with matter and will release a great amount of energy.
86. Notice that the retention of the Energy Level Diagrams on the new charts implies that there will be questions on it. The units (eV) can be converted to Joules with the conversion given on the first Chart of the Regents Reference tables. And can be used with the formula (given under Modern Physics formulas) to calculate the energy absorbed or released when the electron changes levels.
And by using another formula (given under Modern Physics formulas) you can calculate the frequency of electromagnetic radiation absorbed or released. AND using the Electro-magnetic spectrum given on the charts you can find out what kind of electromagnetic radiation it is (infrared, visible light, UV light, etc.)
Notice that because of the new syllabus, we've "lost" some facts students had to know before 2002.
This is a work in progress, these facts must be tested against four or five of the "new syllabus" regents exams to get fine-tuned.
101. Physics is phun!! (This is key. Honest!)
Special thanks to Physics teacher Jim Davidson for creating the original list.
(revised 6/2002 by D. Panko) | http://www.battaly.com/physics/101facts.htm | 13 |
78 | A square and its lines...
Math brain teasers require computations to solve.
Imagine a square with each side made up of two segments, one of length `a` and one of length `b`. Inside the square there is a diamond, with each corner touching the midpoint of each side of the square. Each side of the diamond has length `c`. If one were to set the area of the square equal to the total area of all the shapes inside the square, what would remain?
The Pythagorean Theorem.
The area of a square is `length squared`. First find the area of the entire square. The side length is (a+b), so the area of the square is (a+b)^2. Inside the box there is a diamond with side length `c`. The diamond is simply a square tilted 45 degrees. The area of the diamond is c^2. There are also 4 triangles, each with area 1/2ab. Set the two areas equal:
(a+b)^2 = c^2 + 4(1/2ab). Expand...
a^2 + 2ab + b^2= c^2 + 2ab
Subtract `2ab` from both sides and you are left with the Pythagorean Theorem.
a^2 + b^2 = c^2
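If you'd like to see the algebra checked symbolically, here is a tiny sympy sketch (mine, not the original poster's):

```python
import sympy as sp

a, b, c = sp.symbols('a b c', positive=True)
big_square = (a + b)**2                         # area of the outer square
inside = c**2 + 4 * sp.Rational(1, 2) * a * b   # diamond plus the four triangles
print(sp.expand(big_square - inside))           # a**2 + b**2 - c**2, i.e. a^2 + b^2 = c^2
```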
Jul 02, 2004
|Awesome, proving the pythagorean theorem was an extra credit assignment in my geo class this year, and i couldn't figure out how to do it without using the way we did it in class, but this could have worked! cool!|
Jul 13, 2004
|This may prove the Pythagorean Theorem (for which there are dozens of proofs) but more importantly, it proves that: ONE PICTURE IS WORTH 1000 WORDS.|
May 25, 2005
Jan 03, 2006
|One picture worth a thousand words woyuld be useful right now beacause I didn't follow the statements or the question. |
Jan 10, 2006
|I honestly didn't understand what the question was asking for, either. I understood the description of the setup perfectly, but didn't know what you meant by "if you set the area of the square equal to the area of the things inside..." The area of the square HAD TO BE the same as the total area of the shapes inside. After all, the shapes inside are defined by and contained in the square. |
Interesting derivation of the Pythagorean Theorem, though, and I must give kudos for that!
Jan 20, 2006
|great teaser but...huh? |
Jan 21, 2006
|Sorry, this "puzzle" doesn't do it for me. As to the Pythagorean theorem, the demonstration focuses on the single case where A=B and is not a general proof. Instead, one can divide the sides of a square into segments of lenghth A and B; the ratio A:B can be anything, not just 1:1. (The B segment of every side must be clockwise from the A segment, or vice versa.) Create four triangles (Each will have area of A*B/2.) and the smaller square (with sides of lenghth C) by joining each dividing point between A and B segments to the dividing points on adjacent sides. Either picture these triangles folded (or actually fold them) inward so they cover part of "square C." You will find you create a new square with sides the lenghth of the difference between A and B. The area of the C square is equal to the new square plus the area of the four triangles. C^2 = (A-B)^2 + 4(A*B/2). C^2= A^2 - 2A*B + B^2 + 2A*B. C^2 = A^2 + B^2.|
Apr 03, 2006
|I agree with stil|
Apr 05, 2006
|i never thought i'd say this about math but WHOA! THATS SO AWESOME!!!!!!!!!!!!!!!|
Apr 13, 2006
|it was a square, huh? so a = b.. is that correct? i don't really understand the question.. but .. nice.. althou i've seen this one (not in text.. but in picture).. lov it |
Apr 25, 2006
|hey! we studied that!! |
May 17, 2006
|I didn't understand what the question was asking.|
Jun 14, 2006
|This is actually the converse of the Pythagorean Theorem. a² + b² = c² is the converse. c² = b² + a² is the Pythagorean Theorem. But hey, in Algebra I we came up with a=b iff b=a.|
Jul 16, 2006
|we had to do that in math last year. |
and that is the Pythagorean Theorem, not the converse
Aug 23, 2006
|I'd say nice teaser, but your wording really didn't make sense... sorry.|
Oct 13, 2006
|hard to understand what u were talkin bout.|
Dec 19, 2006
|The area of the triangle would be ((a+b)^2)/8.|
base of triangle = (a+b)/2
height of triangle = (a+b)/2
because the diamond intersects the square at the midpoint of each side.
Dec 19, 2006
|just FYI-- A does not have to equal B in this situation, except for the fact that the setup calls for a "diamond" in the center, which suggests congruency along at least one center line. in this case, since it's a diamond within a square, at the midpoint of each side, it is indeed also a square. BUT, for any square with 4 sides equalling A+B (and, going around the square, never B+A) and building a shape in the middle connecting the point between A & B on each side, the resulting polygon would always be a RHOMBUS-- a quadrilateral with equal sides. in this case, the sides of the polygon in the center would always = c, the area would equal c^2, and it would be a proof for the pythagorean theorem even if a ≠ b. Causes a problem tho if the sides were ever flipped so that two A segments are in the same corner (and two B segments would thus have to occupy at least one other corner.) this twist would mean that a) the quadrilateral in the center would not be equilateral, and the area would be more difficult to compute. BUT, it would still have the same area.|
Feb 07, 2007
|poorly, poorly, POORLY worded, i am sorry to say.|
Feb 13, 2007
|I got this right and didn't even know it! I agree with a lot of people, that this was poorly worded. But I drew out the square and the diamond, and wrote a^2 + b^2 = c^2?? under it. I then got confused by the wording of the question and gave up, but when I looked at the answer, hey! I was right!|
Feb 20, 2007
|C^2 also equals 2b^2 or 2a^2...|
Sep 08, 2008
|it seems clear to me the reason this teaser is supposedly poorly worded (may be the case but easy enough to understand) is simply because the writer knew the answer before the question, maybe saw a proof for pythagoras' theorem and wanted to post a teaser on here demonstrating it. |
This is definitely my favourite proof for pythag, logical and simple and immensely satisfying, i first came across it in "Fermat's last theorem (simon singh),"and the fact that a=b doesn't matter because they're always referred to as a or b, that fact that they are the same doesn't alter the algebra involved, you don't need that assumption so this is a perfectly fine example of proof for pythagoras' theorem.
Jan 16, 2009
|Horrendously worded. |
Back to Top | http://www.braingle.com/brainteasers/teaser.php?op=2&id=193&comm=1 | 13 |
74 | THE GENERAL THEORY OF RELATIVITY
Written for students in the USC Self-paced Astronomy courses
NOTE: This Unit assumes you have studied Unit 56.
The Learning Objectives and references are in the Self-Paced Study Guide
Essay on the General Theory of Relativity
by John L. Safko
A. General Principle of Covariance (or Only the Tides are Real)
Consider yourself in an elevator. You cannot see outside, so you must determine the nature of the surrounding universe by local experiments. You let go of a coin and it falls to the bottom of the elevator. Aha!, you say, I am at rest on Earth. But, you could be in a spaceship that is accelerating and far from any other object. This is shown in Fig. 57-1.
Locally being at rest on the Earth's surface is equivalent to being in a uniformly accelerated spaceship.
Consider the opposite case. You float from the floor and the coin does not fall when you release it. Aha!, you say again, I am in space far from any other body. But, you could be freely falling towards the Earth as shown in Fig. 57-2.
Locally freely falling towards the Earth is equivalent to being at rest with respect to the distant stars far from any gravitating body.
We see that gravity is different than other forces. You can make gravity completely disappear in small regions by freely falling. This means that a free fall frame is a perfectly good inertial frame. The only way we can detect the difference is to look for tidal forces which arise if the gravitational field is not perfectly uniform. But for any real gravitational field we can always make the region we consider (our elevator in this case) small enough so we cannot detect the tidal forces. So:
"For sufficiently small regions, the special theory of relativity is correct!!"
Einstein used these ideas to conclude that the laws of physics should be independent of the coordinate system used. Another way of saying this is that the laws of physics are generally covariant. Of course it is obvious that some coordinate systems may make a physical situation easier to describe and predict than others. All frames, however, are equally valid.
B. Gravity as Curved Spacetime
A region of spacetime where, in the old Newtonian view, gravitational forces exist, can thus be broken into smaller regions where an inertial frame is defined and the special theory of relativity works. There is no single inertial frame for the entire region.
Consider our elevator example again, only this time cut small windows in the elevator so light can pass through the elevator. Since we cannot tell the difference between being at rest in empty space or freely falling in a gravitational field, the light will pass through on a straight line, as shown in Fig. 57-3. Uniform acceleration would make the light appear to be on a curved path, since the elevator moves as the light passes through. But, we have argued that uniform acceleration is the same as being at rest in a gravitational field. Light passing through an elevator at rest in a gravitational field would appear to be on a curved path as shown in Fig. 57-4. The conclusion is that a gravitational field deflects light.
If you are at rest in a box, far from any gravitating body, light should pass through your box on a straight line. So light passing through the same box, if it were freely falling towards the Earth, would also appear to move on a straight line.
If you were uniformly accelerated in the illustrated box, you would expect light passing through your box to be deflected. This means that, if you were at rest in a gravitational field, the light would also be deflected. So a gravitational field deflects light.
Gravity has a local property -- if you freely fall, you no longer feel the effects of gravity. Gravity also has a global property. In the presence of gravity, two freely falling bodies will separate or approach each other (the tides). The only way we can reconcile the local property and the global property of gravity is to give up the geometry we assumed for spacetime. What does this mean?
Consider the following question. What do we mean when we say a line is straight? We probably mean the shortest path. The only way we could determine this is to use light to define a "straight" line. But we have seen that light may travel on curved paths. This forces us to generalize the idea of a "straight" line to what is called a geodesic and to generalize the geometry of spacetime. An example of a geodesic is a great circle on a sphere, such as a line of constant longitude. Another example would be a line of constant Right Ascension on the celestial sphere. A geodesic is the path that has the shortest distance between two given points.
The spacetime of the General Theory must locally have the same properties of the Special Theory. Light must travel on a null geodesic. A material object travels on a timelike path. If it is in free-fall, this path is a timelike geodesic. (See Unit 56.)
On a larger scale, gravity bends or distorts spacetime. These geodesics are the shortest path in this distorted spacetime. The interval of the Special Theory, ds, which was written as
ds² = (c dt)² - [(dx)² + (dy)² + (dz)²],
now becomes, in general,
ds² = gtt (c dt)² + 2 gtx dx dt + 2 gty dy dt + 2 gtz dz dt + gxx (dx)² + 2 gxy dx dy + 2 gxz dx dz + gyy (dy)² + 2 gyz dy dz + gzz (dz)²,
where the "g" terms could all be functions of t, x, y, and z. This very messy expression is just all possible combinations of the changes in the coordinates, taken two at a time, multiplied by functions of the coordinates. This is called a quadratic form.
For the special theory, the coordinates can be chosen so to make the "g"s all constant and zero except gtt = 1, gxx = gyy = gzz = -1. The "g" quantities are called the metric of the space in that coordinate system. The principle of equivalence forces us to conclude that although the "g"s are different in another coordinate system, they are describing the same spacetime. So the "g"s are describing something independent of the coordinate system -- the geometry of spacetime.
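For readers who like to see numbers, here is a small sketch (the displacement values are made up for illustration) of the special-relativity case, with the "g" values above arranged as a matrix and the interval computed as a quadratic form:

```python
import numpy as np

c = 3.0e8                                   # m/s
g = np.diag([1.0, -1.0, -1.0, -1.0])        # gtt = 1, gxx = gyy = gzz = -1
d = np.array([c * 1.0e-9, 0.2, 0.1, 0.0])   # (c dt, dx, dy, dz) for a small displacement

ds2 = d @ g @ d                             # ds^2 as the quadratic form above
print(ds2)                                  # positive here, so the separation is timelike
```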
C. The General Theory of Relativity
When Einstein realized the preceding, he was able to take over an existing mathematical structure. This structure was the theory of Reimannian geometry which, until that time, was thought to be an abstract mathematical structure with no physical uses. With the mathematical tools of Riemannian geometry, Einstein was able to formulate a theory that predicted the behavior of objects in the presence of gravity, electromagnetic, and other forces. This theory is called the General Theory of Relativity. Avoiding the mathematical details, the theory gives relations, called the field equations, that say:
properties of the geometry = properties of the non-gravitational forces present.
(57-1)
In order to express the appropriate properties of the non-gravitational forces, we must also use the geometry on the right hand side of this relation.
Many physical theories are linear. By this we mean that if you add two sources, the resulting solution is the sum of the two solutions produced by the sources independently. The General Theory is highly non-linear since the geometric properties needed are non-linear and the geometry also appears on the right hand side of Eq. 57-1, as we define the needed properties of the non-gravitational forces. The results of non-linear theories can not be predicted by considering only small effects. For a non-linear theory the sum of two sources may produce a resultant solution which bears no resemblance to the individual solutions from the sources considered one at a time.
John Wheeler has described the results of solving Eq. 57-1 by saying
"Matter tells spacetime how to bend and spacetime returns the complement by telling matter how to move."
The General Theory is geometrical, which suggests drawing pictures to show what is happening to the geometry. Your text shows some such pictures near the end of Chapter 19. Figs. H 19-28 and 19-31 show what are called embedding diagrams. Two of the space dimensions are shown. The third dimension is not a space dimension; it is an attempt to show how the geometry differs from Euclidean. The bending is a measure of the curvature or distortion of space from flat. These diagrams can be very helpful in understanding what is happening, but don't let them mislead you.
The next step Einstein wanted to take was to completely eliminate the right hand side and express the entirety of physics as geometry. A theory expressing physics in terms of only one "object" (field) is called a unified theory. Except in special cases, neither Einstein nor anyone else has yet been able to find a theory that unifies gravity with the three other known natural forces.
Before we consider the experimental evidence that is consistent with the General Theory and some of the surprising predictions of the theory, let us briefly consider what has happened to our view of the nature of physical reality as we have taken the cosmic voyage. We developed the Newtonian world view, generalized it to the static spacetime of the Special Theory of Relativity and have now described the dynamic spacetime of the General Theory. These developments may seem very revolutionary, but they are evolutionary. The General Theory contains as a subset of its solutions the Special Theory. The General and Special theories contain as a subset of their solutions the solutions of the Newtonian theory. We have not given up concepts; we have only generalized them.
D. Tests of the General Theory
When first proposed, the General Theory of Relativity had no direct experimental underpinning. Now, many of us use equipment that could not work without the General Theory. For example, the Global Positioning System (GPS) must use the predictions of the Special and General Theories. The GPS allows you, for a few hundred dollars, to buy a hand-held instrument which can display your longitude, latitude and altitude to within 16 meters. (Or, for more money, you can get even better accuracy.) It also gives the time to within a few billionths of a second. The GPS consists of a set of 24 Earth-orbiting satellites, each carrying one or more atomic clocks. The entire system can only work if the predictions of the Special and the General Theories of Relativity are correct for weak gravitational fields.
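As a rough illustration, not part of the course text or of any official GPS specification, the sketch below estimates the two relativistic clock effects on a GPS satellite. The orbital radius and speed are assumed round values.

```python
# A rough sketch (assumed round values, not official GPS parameters) of the
# two relativistic clock effects on a GPS satellite, in microseconds per day.
G     = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_E   = 5.972e24       # mass of the Earth, kg
c     = 2.998e8        # speed of light, m/s
R_E   = 6.371e6        # radius of the Earth, m
r_sat = 2.656e7        # assumed GPS orbital radius (~26,560 km), m
v_sat = 3.87e3         # assumed GPS orbital speed, m/s
day   = 86400.0        # seconds in a day

# General Theory: the satellite clock sits higher in the Earth's gravity
# and runs fast relative to a ground clock.
gr_gain = (G * M_E / c**2) * (1.0 / R_E - 1.0 / r_sat) * day

# Special Theory: the satellite's orbital speed makes its clock run slow.
sr_loss = (v_sat**2 / (2.0 * c**2)) * day

print(f"gravitational gain: {gr_gain * 1e6:5.1f} microseconds/day")   # about +46
print(f"velocity loss:      {sr_loss * 1e6:5.1f} microseconds/day")   # about   7
print(f"net drift:          {(gr_gain - sr_loss) * 1e6:5.1f} microseconds/day")  # about +38
```

A clock error of tens of microseconds per day corresponds to a position error of many kilometers, which is why the system must build in both corrections.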
D1. The Original Tests
Originally, three tests of the General Theory were proposed, and the theory seemed to predict their results properly. Between the early 1920's and the early 1960's, little experimental work occurred except to refine the measurements on the three experimental tests. Only in the last few years has the experimental side of general relativity blossomed. We will first discuss the three tests, which are called the classical tests of relativity, and then consider some recent developments.
The three classical tests of the general theory are the precession of the perihelion of Mercury's orbit, the deflection of light by the Sun, and the gravitational red shift of light.
Ideally, the orbit of a single planet about a star is an ellipse fixed in space. The presence of other planets changes (perturbs) this orbit, as was discussed in Unit 17. For all natural orbits in the solar system, these changes are small. In the case of Mercury, we can consider the perturbed orbit as an ellipse which slowly precesses (rotates in its own plane), as is shown in Fig. 57-5 or in the text in Fig. H 19-31a. The orbit of Mercury is observed to precess about 5,600 seconds of arc per century. Since Mercury orbits the Sun more than 400 times in a century, this is a small change in the orbit per orbital period. Newtonian physics could account for all of this precession except 43 seconds of arc per century. The remaining 43 seconds of arc per century is what the General Theory of Relativity predicts.
The orbit of Mercury is perturbed by the presence of the other planets and by small effects predicted by the General Theory. The observed precession of the perihelion is about 5,600 seconds of arc per century. After subtracting the Newtonian perturbations caused by the other planets, 43 seconds of arc remain. This is the amount predicted by the General Theory.
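The 43 seconds of arc quoted above can be recovered from the standard weak-field formula for the perihelion advance per orbit, dphi = 6*pi*G*M / [a(1 - e^2)c^2]. That formula is not derived in the course text, and the orbital elements below are assumed textbook values.

```python
import math

# A sketch using the standard weak-field formula for the perihelion advance
# per orbit; constants and Mercury's orbital elements are assumed values.
G     = 6.674e-11     # m^3 kg^-1 s^-2
M_sun = 1.989e30      # kg
c     = 2.998e8       # m/s
a     = 5.791e10      # Mercury's semi-major axis, m
e     = 0.2056        # Mercury's orbital eccentricity
P     = 0.2408        # Mercury's orbital period, years

dphi = 6.0 * math.pi * G * M_sun / (a * (1.0 - e**2) * c**2)   # radians per orbit
orbits_per_century = 100.0 / P
arcsec_per_century = dphi * orbits_per_century * (180.0 / math.pi) * 3600.0
print(f"{arcsec_per_century:.1f} arcseconds per century")      # about 43
```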
The second prediction is the deflection of light in a gravitational field as shown in the text Fig. H 19-31d. Since the effect is so small in the solar system, it can only be detected for light that just grazes the Sun. This is shown in Fig. 57-6. Solar eclipse expeditions took photographs that verified this prediction. Nowadays, with radio telescopes, we can measure this effect very accurately since the Sun occults several quasars and pulsars each year.
The deflection of light passing near the Sun. The figure highly exaggerates the 1.75 arcseconds of deflection predicted by the General Theory.
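The 1.75 arcseconds quoted in the caption follows from the weak-field deflection angle alpha = 4*G*M / (c^2 * b) for a ray grazing the Sun (impact parameter b equal to the solar radius). This formula is not quoted in the course text; the sketch below uses assumed standard solar values.

```python
import math

# A sketch of the weak-field deflection angle for light grazing the Sun.
G     = 6.674e-11     # m^3 kg^-1 s^-2
M_sun = 1.989e30      # kg
c     = 2.998e8       # m/s
R_sun = 6.96e8        # solar radius, m (impact parameter of a grazing ray)

alpha = 4.0 * G * M_sun / (c**2 * R_sun)                        # radians
print(f"{alpha * (180.0 / math.pi) * 3600.0:.2f} arcseconds")   # about 1.75
```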
The third prediction of the theory is that light should lose energy as it climbs out of a gravitational field, as shown in the text Fig. H 19-31b. This was first verified in the spectra of some white dwarf stars; however, there was a lot of noise in the experimental data. The astronomical results are shown schematically in Fig. 57-7. In the late 1950's the effect was accurately verified by measuring the change in frequency of light as it traveled up or down a tower on Earth.
The gravitational red shift of light was first measured in the spectra of dense white dwarf stars. Accurate measurements were later made on the Earth's surface by sending light up and down a tower. The Mössbauer effect, which allows the frequency to be measured very accurately, was used.
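For the tower experiment, the fractional frequency shift is approximately g*h/c^2. The sketch below uses an assumed tower height of roughly 22.5 m; the tiny result shows why the Mössbauer effect was needed.

```python
# A sketch of the fractional frequency shift for light climbing a tower of
# height h in the Earth's gravity: delta_nu / nu ~ g*h / c^2.
g = 9.81          # surface gravity, m/s^2
h = 22.5          # assumed tower height, m
c = 2.998e8       # speed of light, m/s

shift = g * h / c**2
print(f"fractional shift ~ {shift:.2e}")   # about 2.5e-15
```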
D2. Modern Experimental Tests
With the developments produced in the space age, there have been many new tests posed for the general theory of relativity. The theory seems to be meeting the tests carried out so far. Among the tests are details of the motion of the Moon as the Earth-Moon system orbits the Sun, the time delay in light signals passing near the Sun, the orbital decay of binary stars as they produce gravitational radiation, and the apparent existence of black holes in stellar and galactic systems. Among the proposed tests are the actual detection of gravitational radiation from supernovae and the predicted precession of gyroscopes in Earth orbit.
D2a. Gravitational Time Delay
The General Theory not only predicts a deflection of light as the light passes near a gravitating body, it also predicts that it should take the light longer to pass through the region near the star. The geometrical reason for this is shown in the text Fig. H 19-31c. This gravitational time delay was first measured in 1968 by I. Shapiro using radar signals reflected from the surfaces of Venus and Mercury. Since Mercury and Venus were near superior conjunction when the experiments were done, the signals passed near the surface of the Sun, giving the greatest relativistic effects. In a later experiment, a transmitter on a Mariner probe sent a signal directly to receivers on Earth while radar signals were also bounced off the planets' surfaces. Since the positions of the planets were known better than the position of the probe, this improved the accuracy. When the Viking probes landed on Mars, the results were even more precise. Since then, the experiment has been repeated with other space probes and with the signals from the few pulsars that are occulted by the Sun. The pulsar timing signals, like the signals from the space probes, arrive slightly later than they would have if the Sun were not present. The results are in agreement with the predictions of the General Theory.
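A commonly quoted approximation for the extra round-trip delay of a radar signal grazing the Sun is delta_t ~ (4*G*M/c^3) * ln(4 * r_earth * r_planet / b^2). The formula and the Earth and Mars distances below are standard assumed values, not taken from the course text; the sketch reproduces the roughly 250-microsecond maximum delay seen in the Viking-era experiments.

```python
import math

# A sketch of the approximate extra round-trip radar delay for a signal that
# grazes the Sun on its way to Mars and back.  All values are assumed.
G      = 6.674e-11    # m^3 kg^-1 s^-2
M_sun  = 1.989e30     # kg
c      = 2.998e8      # m/s
r_e    = 1.496e11     # Earth-Sun distance, m
r_mars = 2.279e11     # Mars-Sun distance, m
b      = 6.96e8       # impact parameter = solar radius, m

delay = (4.0 * G * M_sun / c**3) * math.log(4.0 * r_e * r_mars / b**2)
print(f"extra round-trip delay ~ {delay * 1e6:.0f} microseconds")   # roughly 250
```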
D2b. Gravitational Radiation
Another prediction of the General Theory is that moving a mass should produce gravitational radiation, just as moving an electric charge produces electromagnetic radiation (light). Unless the masses are moved at relativistic speeds, the radiation produced is very weak. The lowest-order gravitational radiation is quadrupole radiation, as shown in Fig. 57-8, rather than the dipole radiation which can be produced by electromagnetic sources. This occurs because only positive mass seems to exist. Electric charges come with either positive or negative sign, allowing a simpler radiation pattern.
a. The dipole distortion of a ring of charges through which a plane electromagnetic wave is passing.
b. The quadrupole distortion of a ring of matter through which a plane gravitational wave is passing.
Gravitational radiation can be detected, in principle, on Earth by measuring coherent displacements, smaller than the size of a nucleus, in a massive block of material. A number of such detectors have been built. So far, what they have detected has not been confirmed as gravitational radiation. This is not surprising since the radiation should be very weak. They should have detected the gravitational radiation produced by Supernova 1987A in the Large Magellanic Cloud (shown in Fig. 57-9 as it first appeared and as it appeared in 1995). However, all the groups had shut down for repairs and improvements at the time the supernova occurred. They had shut down at the same time to avoid the possibility that one group would detect a signal without independent verification by another. It remains to be seen if this procedure will be followed in the future.
a. Supernova 1987A in the Large Magellanic Cloud as it first appeared.
It should also be possible to detect changes in optical path lengths as gravitational radiation passes through an interferometer, causing one arm to expand more than the other. One such interferometer is shown in Fig. 57-10. A set of such devices, the LIGO (Laser Interferometer Gravitational-Wave Observatory) project, was under construction and testing in the United States when this essay was written.
Another possible method of detecting gravitational radiation is to examine the behavior of the source of that radiation. The argument is that in a close binary star system we would expect gravitational radiation to occur and the orbit of the stars about each other to decay as the gravitational radiation removes energy from the system. Several such systems have been studied whose models seem to be in agreement with theory. There are, however, two star systems, DI Her and AS Cam, whose behavior seems inconsistent with the theory. Since the observed data must be used to fit a model of these systems, it is not clear what is occurring. Future studies will resolve these problems either in favor of or against the general theory.
D2c. Gyroscope Precession
Both the special and the general theories predict that the axis of a rotating body that is orbiting another body should precess. The general theory predicts a slightly larger precession. The effect on the Earth's, or other planetary, axis is masked by irregularities in the rotation and classical precession of these axes.
One possible way of detecting this effect is to put a carefully shielded superconducting gyroscope in Earth orbit. Such a gyroscope is currently under construction at Stanford. It was originally scheduled for launch in 1986. This schedule has slipped to a current launch date after 2000.
E. Black Holes and Stellar Collapse
One of the more esoteric predictions of the general theory is the existence of black holes. A black hole is an object whose mass is so concentrated that light cannot escape from its surface. A simple argument for the existence of black holes is given in the next paragraph. As we shall see, the black holes predicted by the general theory are much more complicated.
E1. A classical argument for the formation of a Black Hole
In Units 3 and 13 we discussed the idea of escape velocity. For a body of mass M and size given by a radius R, the minimum velocity for a small body to escape from the surface is given by
v_escape = √(2GM / R).
(57-2)
Setting this escape velocity equal to the speed of light,

c = v_escape = √(2GM / R)
(57-3)
gives the mass-to-size ratio needed for a black hole. Combining Eqs. 57-2 and 57-3 gives the condition that a star traps all its emitted light when
2GM / (Rc²) = 1.
(57-4)
E2. The Exterior of a Spherical Black Hole
When we work out the exact spherically symmetric solution for a non-rotating source of gravity in the full General Theory of Relativity, we obtain the same condition for the formation of a black hole. This R is called the Schwarzschild radius. For reasons we will discuss, the surface surrounding the source at R is called the event horizon.
If we use the mass of the Sun in Eq. 57-4, we find that a black hole with one solar mass would have a radius of 2.9 km. We can then divide Eq. 57-4 by itself, using the solar values in the second case and expressing masses in solar masses and radii in km, to get
R = 2.9 km (M / MSun).
(57-5)
For example, a 3.0 solar mass star would have a Schwarzschild radius of
R = 2.9 km x 3.0 = 8.7 km.
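The numbers above can be checked directly from R = 2GM/c². A minimal sketch, using assumed standard constants:

```python
# A minimal sketch checking the Schwarzschild radius R = 2*G*M / c^2 against
# the rounded 2.9 km per solar mass used above.  Constants are assumed values.
G     = 6.674e-11     # m^3 kg^-1 s^-2
M_sun = 1.989e30      # kg
c     = 2.998e8       # m/s

def schwarzschild_radius_km(mass_in_solar_masses):
    """Schwarzschild radius in kilometers for a mass given in solar masses."""
    return 2.0 * G * (mass_in_solar_masses * M_sun) / c**2 / 1000.0

print(f"1 solar mass  : {schwarzschild_radius_km(1.0):.2f} km")   # about 2.95
print(f"3 solar masses: {schwarzschild_radius_km(3.0):.2f} km")   # about 8.9
# Using the course's rounded 2.9 km per solar mass gives the 8.7 km quoted above.
```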
The black hole found in the general theory is much more complicated than this simple classical argument suggests. In the full theory, nothing that crosses the surface at R, not even light, can ever escape.
This is expected, since you would have to move faster than light to exit from a distance closer than R from the center. Since even light cannot escape, no information about what is happening inside the surface can be communicated to the outside; that is, no events can be observed inside this surface. This is why we say the Schwarzschild radius defines an event horizon.
We define the radius R in terms of the area of the surface that just contains the event horizon. If we call this area Ao, then R is defined by

Ao = 4πR².
Another property of the event horizon, which could not have been anticipated from the classical argument, concerns how a falling body appears to a distant observer.
The appropriate coordinates for a distant observer to use are called Schwarzschild coordinates. In these coordinates, both space displacement and time displacement are distorted by the geometry near the horizon such that an infinite coordinate time will pass before the falling object reaches the horizon. Thus, a distant observer measures a slowing of the body as it approaches the event horizon. To this distant observer the body would, according to her coordinates, never cross the event horizon in a finite time.
An observer at rest near the horizon would measure the velocity of the falling body to be nearly the speed of light as it passes. An observer on the body will feel nothing unusual, except for possible tidal effects, as he crosses the event horizon in a finite time according to his clock, and he measures his speed relative to nearby observers as less than that of light.
This apparent paradox can be resolved by examining the full mathematical structure of the theory. What the distant observer says is an infinite time is only a finite time for an observer at rest near the horizon and for the observer on the falling body. As in Unit 56, note the difference between a measurement and "seeing."
E3. Collapse of a Spherical Star
Suppose we consider the spherical collapse of a non-rotating star. This is a highly simplified case, but it will show some of the features of a rotating, not quite spherical star. As a star collapses, the emitted light is red shifted and non-radial paths are curved to a greater and greater extent until the star reaches the diameter at which light emitted tangentially goes into circular orbit about the star. Your text shows this in Fig. H 19-31. This tangentially emitted light is trapped in circular orbit about the star; the region of trapped light is called the photon sphere. As the star continues to collapse, less and less of the non-radial light emitted from the surface can escape. At the Schwarzschild radius even the radial light does not escape, as shown in Figs. 57-11 and 57-12.
A sketch showing the formation of an event horizon as a star collapses to form a black hole. Time is vertically upwards.
A schematic of a black hole showing the singularity, the event horizon and the photon sphere. The photon sphere is the distance from a black hole at which light emitted tangentially is just able to make a circular path. Any closer to the black hole, and tangentially emitted light will spiral into the black hole.
If the black hole was formed by a collapsing star, the star, which is now interior to the event horizon, continues to collapse to the center, forming a singularity. Assuming the tidal effects outside the event horizon were not too large, an observer on a freely falling body would cross the event horizon without difficulty and in a finite time according to her clock. She would then have only a short amount of time before she comes too close to the center of the black hole and is crushed by the tidal forces as is shown in Fig. 57-13.
The event horizon of a Schwarzschild black hole may present no immediate problem; but, once you cross the event horizon, you have no way to go but towards the center. Getting too close to the center will result in tidal forces which cannot be neglected. This will happen outside the event horizon for a small black hole. These forces are real: your feet will be accelerated relative to your head, and your left and right sides will be squeezed together.
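A rough Newtonian estimate of this stretching is the difference in gravitational acceleration across a body of height L, about 2*G*M*L / r^3. The sketch below, with assumed masses and an assumed 2 m body height, shows why the tidal forces are lethal well outside the horizon of a small black hole yet unnoticeable at the horizon of a supermassive one.

```python
# A rough Newtonian estimate of the tidal acceleration difference across a
# body of height L at the event horizon.  Masses and L are assumed values.
G     = 6.674e-11     # m^3 kg^-1 s^-2
M_sun = 1.989e30      # kg
c     = 2.998e8       # m/s
L     = 2.0           # assumed height of an infalling person, m

def tidal_at_horizon(mass_in_solar_masses):
    """Tidal acceleration difference (m/s^2) across L at the event horizon."""
    M = mass_in_solar_masses * M_sun
    r = 2.0 * G * M / c**2          # Schwarzschild radius
    return 2.0 * G * M * L / r**3

print(f"3 solar mass hole   : {tidal_at_horizon(3.0):.1e} m/s^2")    # ~2e9, lethal
print(f"1e8 solar mass hole : {tidal_at_horizon(1.0e8):.1e} m/s^2")  # ~2e-6, unnoticeable
```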
The distant gravitational field of a spherical star remains constant as this collapse occurs. Even after the star crosses the Schwarzschild radius, the external curvature of spacetime remains. To a planet in orbit about the star before the collapse, the external gravity appears unchanged and its orbit is unchanged. In some sense the star becomes like the Cheshire cat in "Alice in Wonderland," which faded away leaving only its grin: the star disappears from view, leaving only its gravity behind.
The star continues to collapse inside the Schwarzschild radius until it reaches a singularity at the center of symmetry. Some who study astrophysics suggest that at some point in this final collapse there may be new physics that prevents the singularity from forming. This may be a valid argument, but it does not prevent the event horizon from forming. A massive enough star will form the event horizon long before the density of the star reaches nuclear density. We have a good understanding of the behavior of matter at nuclear densities, so new physics will not prevent the formation of the event horizon and thus the black hole in those cases.
Since the black hole is much smaller than the star that formed it, it is possible for bodies to approach much closer to the center of the former star, but still stay outside the event horizon. We find that many of these new possible orbits are unstable. Any material body must move slower than light. Bodies on such orbits will spiral in towards the black hole and eventually be absorbed. Some orbits make many rotations about the black hole before they begin the spiraling process. Orbits that existed before the collapse began are not affected; they remain as stable orbits.
At some point, either inside or outside the event horizon, the tidal effects will become large enough to tear a body apart. The most distant part of the body will be accelerated less than the part nearer the center, and the left and right sides will be squeezed together. These real forces will tear the body apart, as was shown in Fig. 57-13 for a person. If atoms approach the black hole, the tidal force makes them "scream" (radiate), just as you would. This will later be shown to provide a method of detecting a black hole.
F. Rotating Black Holes
If the collapsing star has some rotation, the nature of the collapse is changed, but the collapse is not prevented. Information on the star's structure is radiated away. All that remains is the mass, the angular momentum, and possibly (but unlikely) the net charge. The rotating black hole is often called a Kerr black hole after the scientist who first formulated this solution to the field equations.
A schematic of the horizons and regions around a rotating black hole. There are two event horizons and two surfaces of infinite red shift. The singularity is now a ring about the axis. (after d'Inverno, Introducing Einstein's Relativity)
The resulting structure, shown in Fig 57-14, is much more complicated. There are two event horizons and two surfaces of infinite red shift. Each horizon touches one infinite red shift surface at the axis of rotation. Between the outer horizon and the outer surface of infinite red shift there exists a region called the ergosphere. In this ergosphere, a real particle must orbit in the direction of rotation of the black hole. Even light cannot travel against the rotation in the ergosphere.
Suppose you are in the ergosphere moving in the direction of rotation. Then you can enter and exit the surface of infinite red shift. If, while you are inside the ergosphere, you throw some mass towards the black hole, you can exit the outer surface with more energy than you had when you entered. This energy is provided by the rotational energy of the black hole. The black hole has less angular momentum after this interaction.
If you cross the outer event horizon, you must also cross the inner event horizon. You can enter but not exit. You will be forced towards the interior singularity. This singularity is not a point but a ring in the plane perpendicular to the axis. Again the tidal forces become very large as a body approaches the singularity, which would lead to the body's destruction.
The theory suggests that there are geodesics which pass through the ring and exit into another universe, avoiding the singularity. This has been used as the deus ex machina in many science fiction books and movies. Studies of the formal solutions have shown that the presence of any finite sized body cuts off these possible paths. If something crosses the inner event horizon, it must eventually hit the singularity and be destroyed in the process.
G. Evidence for Black Holes
How can we detect an object which does not radiate? The answer lies in the behavior of light and matter near a black hole. If a black hole nearly lines up with a background star, we will see a displacement of the apparent position of the star or even multiple images. A very nearby black hole would even show as a black disk as illustrated in the computer generated Fig. 57-15. Black holes or other gravitating sources can even generate a ring of light if the positioning is exact.
a. Left: A computer generated image of the sky in the region of Orion as seen from Earth. The three stars of nearly equal brightness make up Orion's belt.
b. Right: The same region of the sky with a black hole located at the center of the drawing. The black hole's strong gravity bends the light passing near it. This causes a noticeable visual distortion. Each star in (a) appears twice in (b), once on each side of the black hole. Near the black hole you can see the entire sky, as light is bent around the hole. (Robert Nemiroff, GMU/NASA)
Another method that will work for black holes surrounded by infalling matter is to detect the radiation produced by the tidal effect and the bumping together of nuclei as they crowd towards the hole. (See Fig. 57-13)
Since most black holes will be rotating, the same dynamics that led to the solar system being in a plane will lead to the matter around the black hole forming a disk. This disk is named the accretion disk. Far from the black hole the matter in the accretion disk can have a stable circular orbit. Nearer the center, there are no stable orbits. The matter in that part of the accretion disk must spiral in towards the horizons. This matter "screams" as it is squeezed and distorted by the tidal effects and as it hits other matter. The net effect is a massive emission of radiation at x-ray and other wavelengths. Some of the matter will even be ejected along the axis of rotation of the black hole producing jets of relativistically moving matter. This matter can interact with matter in the surrounding interstellar medium producing radio emission and visible light. Your text shows several such examples in Fig. H 25-22 through Fig. H 25-26.
In December 1995 NASA launched the X-ray Timing Explorer (XTE) into near Earth orbit to look for x-ray pulses as brief as a microsecond. Neutron stars, white dwarfs and black holes all can produce such radiation. Even before the 1995 launch, x-ray sources had been detected by earlier satellites. One of these sources was detected in the constellation of Cygnus. This source emits gamma rays as well as visible light. It is called Cygnus X-1 and is currently believed to be a black hole in orbit about a blue supergiant, as shown in text Fig. H 19-29. Some of the observational data includes periodic changes in the x-ray emission, periodic Doppler shifting of the visible star, and changes in the radiation from the accretion disk. When all the observational data are considered, we model the system as including a black hole of at least 3.5 solar masses and possibly as large as 15 solar masses.
When your text was written there were at least 3 other binary star systems in our galaxy that are good black hole candidates. They, along with Cygnus X-1, are listed in Table 19-2 of the text. Since then V404 Cygni has also been shown to be a probable black hole. It is most likely that others have been added to the list since this essay was written.
The Hubble Space Telescope has enabled astronomers to produce images of cluster centers and of other galaxies that were not possible before.
This allows us to look for supermassive black holes that might have been part of the early formation of galaxies. Astronomers look for such black holes by methods including the following:
1. A rapid increase in stellar density as the center of the galaxy is approached but without enough starlight being emitted from the very center.
Such objects have been found in M32, M87, and M51 with masses ranging from 3 million to 3 billion solar masses. Your text has a picture of M32 in Fig. H 24-25. There is even evidence that there is a supermassive black hole in the center of our own Milky Way galaxy. This black hole is currently not active since little matter is falling onto it.
H. Wormholes
There is another type of black hole permitted besides those produced by the collapse of a star. These are topological black holes, or wormholes, in which two separate regions of spacetime (or even two separate spacetimes) are connected by a path that does not lie within the ordinary dimensions of the spacetime. An analogy is the handle of a cup, which connects two separate portions of the cup. A model of a wormhole is shown in Fig 57-16.
A wormhole connecting two disjoint portions of spacetime. The distance through the wormhole may be shorter than the normal distances. If so, a wormhole traveler could cover regions of spacetime faster than light. Most likely such an attempt would close the wormhole and destroy the traveler.
Mathematically, wormholes are solutions or approximate solutions of Einstein's field equations without sources. The exact solutions depend heavily upon some assumed symmetry. It has been mathematically shown, in many cases, that if that symmetry is disturbed, the wormhole closes and becomes a singularity. A body trying to go through the wormhole would be such a disturbance.
Much use has been made of wormholes in science fiction as a means of rapid travel between different portions of spacetime without the need for speeds exceeding light. At this time we do not know if wormholes exist, but the theory does not seem to forbid them. So, remember the old adage "What is not forbidden, will occur."
I. Consequences on Cosmology
The general theory has had major effects on which cosmological theories we consider. On the large scale, cosmologists usually consider only cosmological theories that are consistent with the General Theory. This is what led to statements such as: "the universe could be finite and not have a surface" and "there is no region outside a finite universe." To fit a cosmological model with observation we need the value of the deceleration parameter, q0. This value can only be determined from observational data after an assumption is made about the curvature of spacetime.
We still have conceptual problems with the very early universe. There is no consistent theory that successfully unites gravity and quantum mechanics. Also, we have no experimental data to guide us toward such a theory. So, our understanding of the very early universe remains problematic. The material discussed in the last two chapters of your text (Chapters 26 and 27) assumes, at least on the large scale, that the General Theory of Relativity is correct.
Sample questions are available at spastro.physics.sc.edu
Volume V, No. 3, Spring 1978
Nowhere in the country was the war so devastating as in Missouri, particularly its southern half. Though major histories of the Civil War make Missouri's part seem insignificant compared to the Battles of Bull Run, Vicksburg and Gettysburg and Sherman's march through Georgia, Missouri nevertheless played an important and unique part.
Like the border slave states of Kentucky and Maryland which did not secede, Missouri's loyalty to the Union was in question, for its sympathies were divided. But unlike those slave states that were bordered only on the north by non-slave states, Missouri jutted up on three sides into Union, non-slave territory. Besides the psychological advantage to whichever side controlled it, there was a decided geographic advantage to dominating Missouri. Situated at the confluence of the Missouri and Mississippi Rivers, the state was important to both economic and military strategy. The Union's eventual control of the entire Mississippi Valley was one factor which defeated the South. Missouri was thus an important key to Union success.
The sympathies of the people were strongly mixed. The Missouri governor and legislature were pro-Confederate. The slaveholders on the western prairie, in the Missouri and Mississippi River valleys and in the boot heel region were also pro-Confederate. Settlers in the Ozarks were predominantly from eastern slave states and remained loyal to their heritage. The pro-Union contingent was established in German settlements near St. Louis and in spots along the Mississippi River. Among the remainder, feelings toward slavery and freedom ranged from strong-willed to lukewarm to nonexistent.
Missouri was so truly divided in sentiments that it was the only state which required a battle to decide its stand. It was the only state with a government in exile during the war, and the only Union state where the animosity continued in the form of guerrilla warfare for many years after the end of the Civil War. Though it had only a few battles of national significance, more small battles, encounters, clashes, skirmishes and incidents took place in Missouri than in any other state except Virginia.
Though events long before the outbreak of the war influenced Missouri's action, it was the Battle of Wilson Creek which in the long run paved the way for Missouri remaining in the Union. Though the immediate result at the time seemed to be a decided Southern victory, this was another of the anomalies of Missouri's part in the War Between the States. In Missouri it began as a war within the state.
THE BATTLE OF WILSON CREEK
Predawn, August 10, 1861, ten miles southwest of Springfield on the banks of Wilson Creek, the second major battle of the Civil War and the largest battle west of the Mississippi River was about to take place. Some 7,000 Union troops moved toward the three Confederate campsites, hoping for a surprise attack. Intercepting the northernmost of the three camps, Union General Nathaniel Lyon opened fire at five-thirty. Colonel Franz Sigel, with 900 men and six cannons, fired upon the southernmost Confederate camp after hearing the reports of Lyon's engagement. Both Confederate camps were thrown into confusion and retreated toward the middle camp. Here they quickly reformed and, under the command of General Sterling Price, struck back against Lyon's main body. Some 10,000 troops were soon engaged in battle, neither side gaining ground.
After Sigel's initial attack he moved toward the back of the Confederate army. By seven o'clock his position covered the only road south. He hoped to capture any Confederate troops retreating from the battlefield. Near eight o'clock Confederate General Benjamin McCulloch led an assault against Sigel. The Confederates had uniforms which resembled Lyon's troops. Believing that they might be Union, Sigel allowed them to approach within twenty yards. The Confederates opened fire and crushed Sigel's forces.
On the main front neither side had moved yet. At nine-thirty Lyon, rallying his men, began a charge. Lyon was killed in a hail of fire. Major Samuel Sturgis took command and regrouped the Union line. The Confederates then launched an attack, lasting from ten to eleven o'clock. When the Confederates fell back, Sturgis ordered the retreat to Springfield. Beginning another charge, the Confederates discovered no resistance. The Battle of Wilson Creek was over.
With this battle the Confederates opened the way to sweep Missouri and ensure its place in the Confederacy. However, the Confederate Army also suffered heavy losses and decided against pursuit. This decision proved to be a fatal mistake, for never again would the Union Army be at a disadvantage in Missouri.
Several other actions occurred as a direct result of the battle. Each state militia had its own uniforms. Some of the Union Army had gray and some Confederates had blue. The confusion caused by similar uniforms which led to Sigel's loss was repeated many times on the battlefield. After Wilson Creek, the Union Army established blue and the Confederate Army chose gray as their official uniform colors.
High-ranking American officers had always led their men into battle as they did at Wilson Creek. But after this battle and the deaths of so many officers, the Union Army forbade high-ranking officers from leading soldiers into battle. Lyon's death left a vacancy in the Union command west of the Mississippi which was filled by Ulysses S. Grant, then an obscure colonel in Illinois. Grant's success in the west and his eventual victory is well known.
Union control of Missouri meant an unobstructed control of the upper Mississippi and a route to drive through the south. Even though the battle was lost by the Union, the Confederates lost the chance of seizing Missouri. Later, at Pea Ridge, the Confederate Army would suffer the defeat which heralded Union domination of Missouri and northern Arkansas.
These two battles settled early in the war the question of Missouri's loyalty to the Union. But there were many other battles, compromises, political manipulations and strategy which preceded these battles, all part of the nationwide turmoil over the unsettled argument of slavery.
MISSOURI: A SOUTHERN STATE
The argument over slavery had its roots in the beginning of the nation. Missouri and Arkansas were drawn into the question in 1803 with the Louisiana Purchase, the sale by France to the United States of the vast territory in the middle of the continent.
The territory of Missouri was created in 1804 and in 1820 Missouri petitioned to become a state. Then the trouble began.
The northern and southern states were each vying for as many free or slave states, respectively, as possible. Since the U.S. Congress decided on admission to statehood, neither side would allow the other to gain an advantage, and thus a single proposed state, either slave or free, would never receive approval. After several months of argument and debate on admitting Missouri, Congress agreed on the Missouri Compromise. By this agreement, Missouri became a slave state and Maine a free state. In addition, no further slave states could be organized above parallel 36 degrees 30 minutes--the southern border of Missouri. Arkansas was admitted as a slave state under this same plan in 1836, with Michigan as the free counterpart. Other states were also admitted to the Union under the Missouri Compromise.
In 1854, Illinois senator Stephen A. Douglas proposed the Kansas-Nebraska Act, which created two territories out of the Indian lands. In order to gain the support of the southern senators, he proposed that both territories be open to slavery; when petitioning for statehood, each could then choose by popular vote either free or slave status. Ideally, Nebraska would follow its free-state neighbor Iowa, and Kansas would become slave like its neighbor Missouri. This act ignored the Missouri Compromise.
Nebraska was settled by anti-slavery people without difficulty. However, both pro- and anti-slavery residents moved into Kansas, recruited by outside interests. In the first territorial election in 1855, 5,000 armed Missourians raided Kansas and established a pro-slavery legislature. After a pro-slavery sheriff was shot in predominantly anti-slavery Lawrence, 800 Southerners attacked the town. During the next three months, a small war raged on the Kansas-Missouri border, earning Kansas the name of "Bloody Kansas." Violence continued in Kansas up to and through the Civil War. In 1861, Kansas became a free state.
Throughout Missouri and Arkansas the majority of sentiments ran high in favor of slavery. Even though the Ozarks was too poor for many slaves, the feelings for the South and slavery were as fierce as those in the prime farming regions.
Of course, many exceptions existed on moral grounds, but those against slavery were generally in the minority. The base of anti-slavery sentiment was in St. Louis, where long-established commercial interests depended on the states of the North. And it was there in St. Louis that the argument would explode.
On the national level, politics were approaching the breaking point. Angered by trade tariffs, incessant preaching from northern abolitionists and underground activities freeing slaves, the southern states threatened secession and the formation of a Confederate States of America. After Lincoln's election in November, 1860, they made good their threat. On December 20, South Carolina seceded, followed by ten other slave states between December 1860 and June 1861. Missouri and Kentucky were expected to follow. Officially, neither did.
THE STRUGGLE FOR CONTROL
In the 1860 Missouri state elections, Claiborne F. Jackson, a staunch proslavery supporter, was elected Governor. The senate and half the house were also pro-slavery. The remainder of the house contained committed anti-slavery and not-too-sure-either-way representatives.
After the inauguration of Jackson on January 3, 1861, the question of secession was tackled. Jackson pushed for secession, arguing that Missouri should follow the other slave states. The state Senate agreed, but the House was unable to agree. According to the pro-Union side, Missouri would be committing suicide because three free states bordered the state. Finally, the legislature chose to call a state convention of elected citizens to decide the issue of secession.
Francis P. Blair, Jr. was the leader of the pro-Union side. Backed by northern interests and the long-established Union supporters in St. Louis, he managed to prevent any avowed secessionists from being elected to the convention. On February 28, 1861, the convention met at Jefferson City with Sterling Price elected president. After preliminary work, the convention moved to St. Louis and began serious business March 4. After several days' work, the convention voted against secession.
Two political factions now existed in Missouri. One favored secession and included most of the state government (except half of the house), led by Governor Jackson. The other was pro-Union, led by Blair and supplied by other pro-Union interests, mainly in St. Louis, Kansas and Illinois. Since Blair realized that a fight was imminent, he began organizing his own private army, consisting of about 750 men, called the Home Guard. Arms were supplied by the governor of Illinois and money through pro-Union supporters in St. Louis. At this same time, a private pro-secession army was organized in St. Louis numbering some 300 men. They joined with General Daniel M. Frost of the Missouri State Militia, commanding 280 men who had been earlier assigned in southwest Missouri controlling border ruffians.
During all the time after Jackson's inaugural, the Missouri legislature had been unable to pass a bill for the formation of any state militia besides General Frost's small brigade. Indeed, the legislature had been able to do hardly anything but debate. With the formation and continual growth of Blair's army, action was crucially needed.
In Missouri two Federal arsenals existed--at Liberty and St. Louis. The arsenal at Liberty had about 500 arms, while St. Louis contained 60,000 firearms, many cannons and other necessary munitions of war. Blair badly needed the St. Louis arsenal, at first guarded by a small garrison of Federal troops. Through political influence (his brother was on Lincoln's cabinet) he managed to have the arsenal placed under the command of Captain Nathaniel Lyon, who was agreeable to Blair's pro-Union aims and purposes. Lyon immediately placed the arsenal and his garrison, by then about 500 Federal troops, in a state of defense, expecting the pro-Confederate state militia to attempt to gain control. Lyon received authority from the War Department to distribute 5,000 rifles to Blair's Home Guard. He also began recruiting more men, soon commanding over 7,000 Union troops--the Home Guard and army regulars.
While Lyon was gaining strength, so were the secessionists. Another private group of pro-slavery men seized the Liberty arsenal. General Frost was preparing to take the St. Louis arsenal, but had not received the authority from the Missouri legislature to do so. Frost organized Camp Jackson outside St. Louis with 700 men. Governor Jackson received from the Confederate command two 12-pound howitzers and two 32-pound siege guns, which were set up to defend Camp Jackson against Lyon. On May 10 Lyon surrounded the camp and ordered its surrender. Frost, hopelessly outnumbered, complied. As the prisoners were led to St. Louis, a crowd of pro-secession citizens made fun of the Home Guard soldiers. The soldiers fired on the crowd, killing twenty-eight.
A third party of conservative citizens existed, wishing for peace at any cost. They arranged a meeting between General William S. Harney, Union commander at St. Louis, and Sterling Price, now a general. On May 21 they met in St. Louis. Harney promised that if the State Guard would disband, Union soldiers would take no military action. Consequently, when Price returned to Jefferson City, he ordered all troops to return home and form into regiments. With this action the State Guard was disbanded.
This agreement ruined the plans of Blair and Lyon. They attempted--and succeeded--in having Harney removed as commander and Lyon made brigadier general. On May 31 Lyon took command. Blair and Lyon outlined their battle plan, made earlier after the capture of Camp Jackson, to rid the state of Confederate forces. They planned to strike and hold Jefferson City, Lexington, St. Joseph, Hannibal, Macon, Springfield and other points if advisable. By this time Lyon had 10,000 well-armed men in Missouri, with 2,000 in Kansas, five regiments in Iowa, and other troops in Illinois all ready to join him. The pro-Confederate State Guard, now recalled, numbered fewer than 1,000 troops, poorly armed and without supplies. Outnumbered, the State Guard nevertheless prepared to clash for what they believed was right.
But not just yet. Once again, peaceful conservatives arranged a meeting between Home Guard and State Guard. In St. Louis, Governor Jackson and General Price met General Lyon and Blair, now a colonel. After five hours of fruitless talk, Lyon declared war on the State Guard. Lyon then allowed the Governor and Price to leave the conference.
MILITARY ACTION IN MISSOURI
When Governor Jackson returned to Jefferson City he called the militia to arms. He then gathered all state documents and left for Boonville where General Price believed he could hold the Home Guard until reinforcements arrived. His plan was to make a stand at Lexington. On the way to Boonville he burned bridges, destroyed railways and cut telegraph lines to impede the Home Guard advance.
In the midst of fighting during the Battle of Wilson Creek. Taken from FRANK LESLIE'S ILLUSTRATED NEWSPAPER, August 24, 1861. Courtesy of the State Historical Society of Missouri.
Lyon meanwhile reached Jefferson City on June 15, meeting no resistance. Leaving a force to hold the city, he proceeded to Boonville with about 1,700 men. But Price, fearing an attack from Kansas, had gone on to Lexington, leaving Colonel John S. Marmaduke in command of about 400 State Guard soldiers at Boonville. On June 17 Marmaduke met Lyon outside of Boonville and stopped Lyon's advance. However, when Lyon learned that Marmaduke was without artillery, he withdrew and shelled Marmaduke's forces. Marmaduke retreated to Boonville after two hours of fighting. Though just a skirmish with only twenty-five casualties on each side, Lyon regarded it as a great victory. The defeat was most depressing to the State Guard.
General Price at Lexington, threatened by Lyon from Boonville and 3,000 other troops from Fort Leavenworth, Kansas, left with a small party for Arkansas to seek aid. He left General Rains in command at Lexington with orders to march toward Lamar.
Meanwhile Governor Jackson was in Warsaw. He learned that a party of 300 Home Guards, under orders to capture the Governor's party, were struck by a force of 350 State Guards twenty miles away.
The State Guards, raised locally, killed 200 and captured the remainder. With this victory came 400 new muskets and ammunition, at a loss of only 30 State Guard soldiers. The defeat also scared off a patrol sent from Boonville under Lyon's orders.
Governor Jackson left Warsaw toward Lamar. En route he joined with other State Guard regiments. Heading south toward Carthage, he learned that under Lyon's orders Union General Sigel had left St. Louis for Springfield with 4,000 men. Originally intending to strike against Price (now in Arkansas recruiting help), Sigel decided to strike Jackson instead. Five miles south of Lamar, Jackson, with 3,000 men, encountered Sigel on July 5. After several hours of fighting, Sigel, defeated, retreated to Springfield.
As Price was journeying toward Arkansas, he was joined by men in squads and companies. He stopped in the southwest corner of Missouri, where he learned that Confederate Generals Benjamin McCulloch and N. Bart Pearce were marching toward him. On July 4 McCulloch and Pearce agreed to help Price. Pearce loaned Price 650 muskets to help arm the soldiers now accompanying him. Price, McCulloch and Pearce, each leading a separate command, headed toward Springfield on July 31.
On July 22, 1861, the state convention met and declared the state offices of governor, lieutenant governor and secretary of state vacated. Hamilton R. Gamble, a pro-Union man, replaced Jackson as governor.
Lyon meanwhile had reached Springfield around July 27 after delaying two weeks in Lexington. Learning of the approaching army, he asked for reinforcements but never received any. Rather than retreating to Rolla he decided to make a stand. Rolla lay 125 miles northeast over a poor road. Pulling out would probably mean an attack by Confederate cavalry on his rear and would also mean giving up all of southern Missouri. Under his command now were 7,000 to 8,000 men, well equipped and ready for battle. The possibility of defeat seemed better than turning back. He hoped to weaken the Confederate forces enough to keep Missouri in the Union.
The Confederate forces under Price, McCulloch and Pearce had arrived in the vicinity of Springfield, and each army was maneuvering into the best position for battle. Price and McCulloch were arguing over command of the Confederate Army of 10,000. Price wanted to attack but McCulloch wished to wait. However, McCulloch agreed to attack if he received command of the army. Price agreed, reserving command of the Missourians if he chose. The Confederates advanced to Wilson Creek and camped three days. On the night of August 9 McCulloch made preparations to advance on Springfield at nine o'clock. However, a sudden rain thwarted this plan. McCulloch feared the rain would dampen the gunpowder and leave his army defenseless.
The next day brought the Battle of Wilson Creek already described. The Union army left Springfield on the afternoon of the ninth. Lyon had about 6,500 men for the main attack, while Sigel had 900 for an attack on the Confederate rear. Well before dawn of the tenth the Union Army was in position north and south of the Confederate Army camped at Wilson Creek.
Though the Union won the advantage of surprise at first, the Confederates' greater numbers took a heavy toll of Union soldiers. After Sigel's defeat and Lyon's death, the Union Army left the field to the Confederates. General McCulloch was satisfied with the day's victory and refused to pursue the Union Army. Price, wanting to push the Union Army back to St. Louis, lacked the manpower to do so. McCulloch, Pearce and Price returned to Arkansas, where Price began rebuilding his army for another assault.
In St. Louis the Union command declared Missouri under martial law. In less than a month Price marched north into Missouri with 4,500 men. At Drywood, west of Nevada and fifteen miles east of Fort Scott, Kansas, Price met several thousand Kansas troops under the command of General James H. Lane. Defeating the Kansas battalion at the Battle of Drywood, he moved toward Lexington, where the Union forces were barricaded, well protected for a long siege. The battle for Lexington began. From the 12th through the 20th of September scattered fighting and sniping took place with neither side gaining an advantage. On the morning of the 20th Price took several bales of hemp from the wharf and had them soaked in the river to prevent their burning. Next, the bales were rolled toward the Union position with Confederate riflemen hiding behind. With the Confederates now in position for a massive offense, the Union army surrendered at two o'clock that afternoon.
The surrender at Lexington supplied Price with badly needed arms, but even these were not enough. Price had the only large Confederate army in central Missouri and was outnumbered by Union forces.
Price left Lexington on September 27 for Neosho. At Neosho the exiled Governor and legislature had met, and on October 28 they passed an act of secession. Price left for Osceola and arrived the first of December. There he ordered the State Guard to become part of the Confederate Army and also began recruiting for the Confederacy. For over a month Price stayed in Osceola, unable to make any offensive for lack of men. After asking but not receiving aid from McCulloch in Arkansas, Price moved to Springfield.
On February 1, 1862 Price learned of a three-pronged attack aimed at him from Sedalia, Rolla and Fort Scott, Kansas. Ill equipped for a major battle, Price went to Arkansas, reaching General McCulloch's winter encampment February 17. Their combined armies totaled 17,000 men.
Union Generals Sigel and Curtis now occupied northwestern Arkansas with a total of 18,000 men. Sigel had several hundred at Fayetteville with the remainder under Curtis near Bentonville (north of Fayetteville). While Sigel was en route to Bentonville to join Curtis, the Confederate forces, now under the overall command of General Earl Van Dorn, left early on March 4 hoping to cut off Sigel before he reached Bentonville. They were too late. As the Confederates approached Bentonville that afternoon, Sigel was already occupying the town. He repulsed two Confederate cavalry charges before moving on and joining Curtis.
During the night of March 4 and the early morning of March 5, Confederate commander Van Dorn ordered Price to move his command to the rear (north) of the Union position. From ten o'clock on, the Battle of Pea Ridge raged. About three o'clock Price ordered an advance, pushing the Union forces back nearly two miles.
McCulloch commanded the attack from the southern side. When he first heard the report of Price's guns, he charged, driving the Union forces from their first position. His second charge was also successful. On the third charge, McCulloch and another senior officer were killed. The next ranking officer failed to rally and charge again because the ammunition supply had been moved fifteen miles south for safekeeping. On the morning of the 6th Sigel prepared to attack Price. Van Dorn decided to retreat because of the state of affairs. Price checked Sigel's attack for two hours, then swung south to join Van Dorn. The Confederate forces then retreated to Van Buren, Arkansas. Sigel and Curtis had sustained heavy losses (as had the Confederates) and departed Pea Ridge for Missouri, fearing to pursue the Confederates deeper into Arkansas. In April, the pro-Confederate Missouri troops under Price left with Van Dorn for Mississippi.
MISSOURI UNDER UNION CONTROL
After the Battle of Pea Ridge the Union army had control of Missouri and northern Arkansas. Missouri never seceded, and Governor Jackson and his followers set up a government in exile in Texas. During the remaining three years, a few large battles and numerous skirmishes occurred without affecting the overall outcome. Confederate soldiers were constantly recruited throughout Missouri and Arkansas and engaged Union troops all over the two states. Federal garrisons were stationed in most towns to maintain control. Raids by Confederate armies cut off communications and supplies between garrisons, but the raids rarely resulted in sustained control. The Confederacy lacked the supplies and men to take and keep Missouri. After the Battle of Pea Ridge, the Confederacy never achieved rule over Missouri and northern Arkansas.
Regardless, the war continued. And while the men fought and died on the battlefields in Missouri and elsewhere, the women and children remained home, existing as best they could. Every male between sixteen and sixty had to fight--by choice or conscription. This left the women, young children and older people alone to make a living. Staying alive on farms was difficult, made worse by military confiscation of horses, food and clothing. A hard fate was made harder by the constant fear of seizure.
No one could remain neutral, for both sides considered neutrals as enemies. However, aiding one side meant possible death if the occupying army changed. For men who decided against joining the army, few options were left open. To remain home meant hiding whenever an army was nearby. Capture meant forced enlistment or being shot as a spy. Some decided to make war on their own, or as a private army with other cohorts. Still others realized the war was a perfect opportunity to embark on crime, in the guise of one army or another, but preying upon everyone.
Bill Wilson was one example of a bushwhacker who embarked on his own. His father was killed by Union troops moving through the area near Waynesville, Missouri. After this, Wilson roamed the wilds, cheerfully gunning down small Union patrols in revenge. The Union command at Rolla placed a price on his head, but he was never captured and lived in freedom long after the war.
William Clarke Quantrill's band of Missouri-Kansas border raiders is perhaps best known. Quantrill was leader of a band of Southern sympathizers who were not beyond helping themselves to whatever was available. Union-held Lawrence, Kansas, was their target in August 1863. Lawrence was a base for sporadic raids into the border counties of Missouri, where the sympathy lay with the South.
On August 21, Quantrill with 450 raiders attacked Lawrence, killing every male in sight and burning the town. The Federal command immediately branded Quantrill an outlaw and not a soldier at war. Four days later, the district Union commander at Kansas City, General Thomas Ewing, issued Order Number Eleven. This order forced all citizens in the Missouri counties of Cass, Bates, Jackson and Vernon to move to the interior of the state unless they proved Union loyalty, which few could do. The order was aimed at preventing Southern sympathizers from aiding the border raiders. The order was not successful in stopping the raiders, and only resulted in hatred towards the already despised Union control forces. When the citizens returned after the war, they found only the burnt remains of their homes and towns.
Many other robber bands roamed the region, stealing and killing when possible, running when necessary. The Confederate and Union units attacked these bands when possible, but the majority survived. These bands thrived because the war left few honest men available to defend homes and towns. Even after the war they continued to exist until law could be re-established.
After four years of warfare the Confederate Army surrendered in April 1865. News of the surrender was expected and welcome to most soldiers and citizens. Those soldiers that survived returned home if home still existed. For some, life continued almost as before the war. Others, having lost friends and family, had to rebuild their shattered lives or move westward for a fresh start.
For those that stayed in Missouri the war had generally upset everything. Lawlessness was rampant in many places of the Ozarks. Robber bands continued, joined by soldiers without a future or those who refused to accept the defeat of the South. As late as the 1880s bushwhackers were still prevalent in many southern counties. Forsyth, Missouri, in Taney County, was one place where crime went unchecked and sorely hurt the residents. To combat crime, in 1885 an ex-Union officer named Nat N. Kinney sought out his trusted friends and formed a vigilante group called the Baldknobbers. They proceeded to hang criminals and suspected criminals in the county. Several other groups, similar to the Baldknobbers, sprang up in nearby counties. Because the group was secretive and not always too choosy with its victims, county residents became fearful of the Baldknobbers. Eventually, the Baldknobbers were considered criminals themselves. No doubt several groups were organized with crime, rather than justice, in mind and used the Baldknobbers' name as a disguise. As populations increased and towns grew, the hunting grounds of robber bands decreased, and as elected officials increased in power and scope, the last of the bands died away.
For nearly a century the causes and effects of the Civil War had wrought their imprints upon Missouri. The absolute finality of war through death and irreversible changes of those living became part of what was and is. An era had ended and another began.
Copyright © 1981 BITTERSWEET, INC.
http://thelibrary.org/lochist/periodicals/bittersweet/sp78c.htm | 13
54 | What is spin? Spin is a fundamental property of nature like electrical charge or mass. Spin comes in multiples of 1/2 and can be + or -. Protons, electrons, and neutrons possess spin. Individual unpaired electrons, protons, and neutrons each possesses a spin of 1/2.
In the deuterium atom ( 2H ), with one unpaired electron, one unpaired proton, and one unpaired neutron, the total electronic spin = 1/2 and the total nuclear spin = 1.
Two or more particles with spins having opposite signs can pair up to eliminate the observable manifestations of spin. An example is helium. In nuclear magnetic resonance, it is unpaired nuclear spins that are of importance.
When placed in a magnetic field of strength B, a particle with a net spin can absorb a photon of frequency ν. The frequency ν depends on the gyromagnetic ratio γ of the particle: ν = γ B.
For hydrogen, γ = 42.58 MHz / T.
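As a quick numerical illustration of the relation ν = γ B, the short Python sketch below computes the hydrogen resonance frequency at a few field strengths. The field strengths are illustrative values chosen for the example, not figures taken from the text.

```python
# Larmor frequency nu = gamma * B for hydrogen (1H).
# The gyromagnetic ratio is the value quoted in the text: 42.58 MHz per tesla.
GAMMA_H = 42.58  # MHz / T

def larmor_frequency_mhz(b_tesla, gamma_mhz_per_t=GAMMA_H):
    """Resonance (Larmor) frequency in MHz for a field strength given in tesla."""
    return gamma_mhz_per_t * b_tesla

# Illustrative (assumed) field strengths.
for b in (0.5, 1.5, 3.0):
    print(f"B = {b:.1f} T  ->  nu = {larmor_frequency_mhz(b):.2f} MHz")
```

At 1.5 T this gives about 64 MHz, consistent with the clinical frequency range quoted later in the text.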
Nuclei are composed of positively charged protons and uncharged neutrons held together by nuclear forces. Both protons and neutrons have approximately the same mass, which is about 1840 times as large as the mass of an electron. Neutrons and protons are referred to collectively as nucleons.
The shell model for the nucleus tells us that nucleons, just like electrons, fill orbitals. When the number of protons or neutrons equals 2, 8, 20, 28, 50, 82, and 126, orbitals are filled. Because nucleons have spin, just like electrons do, their spin can pair up when the orbitals are being filled and cancel out. Almost every element in the periodic table has an isotope with a nonzero nuclear spin. NMR can only be performed on isotopes whose natural abundance is high enough to be detected; some of the nuclei which are of interest in MRI are listed below.
To understand how particles with spin behave in a magnetic field, consider a proton. This proton has the property called spin. Think of the spin of this proton as a magnetic moment vector, causing the proton to behave like a tiny magnet with a north and south pole.
When the proton is placed in an external magnetic field, the spin vector of the particle aligns itself with the external field, just like a magnet would. There is a low energy configuration or state where the poles are aligned N-S-N-S and a high energy state N-N-S-S.
This particle can undergo a transition between the two energy states by the absorption of a photon. A particle in the lower energy state absorbs a photon and ends up in the upper energy state. The energy of this photon must exactly match the energy difference between the two states. The energy, E, of a photon is related to its frequency, ν, by Planck's constant (h = 6.626 x 10^-34 J s): E = h ν.
In NMR and MRI, the quantity ν is called the resonance frequency, or the Larmor frequency.
The energy of the two spin states can be represented by an energy level diagram. We have seen that ν = γ B and E = h ν; therefore the energy of the photon needed to cause a transition between the two spin states is E = h γ B.
When the energy of the photon matches the energy difference between the two spin states an absorption of energy occurs.
In the NMR experiment, the frequency of the photon is in the radio frequency (RF) range. In NMR spectroscopy, ν is between 60 and 800 MHz for hydrogen nuclei. In clinical MRI, ν is typically between 15 and 80 MHz for hydrogen imaging.
The simplest NMR experiment is the continuous wave (CW) experiment. There are two ways of performing this experiment. In the first, a constant frequency, which is continuously on, probes the energy levels while the magnetic field is varied. The energy of this frequency is represented by the blue line in the energy level diagram.
The CW experiment can also be performed with a constant magnetic field and a frequency which is varied. The magnitude of the constant magnetic field is represented by the position of the vertical blue line in the energy level diagram.
When a group of spins is placed in a magnetic field, each spin aligns in one of the two possible orientations.
At room temperature, the number of spins in the lower energy level, N+, slightly outnumbers the number in the upper level, N-. Boltzmann statistics tells us that N-/N+ = e^(-E/kT), where
E is the energy difference between the spin states; k is Boltzmann's constant, 1.3805x10-23 J/Kelvin; and T is the temperature in Kelvin.
As the temperature decreases, so does the ratio N-/N+. As the temperature increases, the ratio approaches one.
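The size of this population difference can be estimated directly from the Boltzmann expression above. The sketch below is only an illustration: the field strength (1.5 T) and temperature (310 K) are assumed values, while the constants are those given in the text.

```python
import math

# Spin population ratio N-/N+ = e^(-E/kT), with E = h*nu and nu = gamma*B.
h = 6.626e-34       # J s, Planck's constant
k = 1.3805e-23      # J/K, Boltzmann's constant
gamma_h = 42.58e6   # Hz/T, gyromagnetic ratio of hydrogen

def population_ratio(b_tesla, temperature_k):
    nu = gamma_h * b_tesla    # Larmor frequency in Hz
    delta_e = h * nu          # energy difference between the two spin states
    return math.exp(-delta_e / (k * temperature_k))

ratio = population_ratio(1.5, 310.0)          # assumed: 1.5 T at body temperature
print(f"N-/N+ = {ratio:.9f}")                 # just below one
print(f"excess in the lower state ~ {1.0 - ratio:.2e}")  # roughly ten per million
```

The excess of lower-state spins is only a few parts per million, which is why the signal discussed next is so small compared with the total number of spins.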
The signal in NMR spectroscopy results from the difference between the energy absorbed by the spins which make a transition from the lower energy state to the higher energy state, and the energy emitted by the spins which simultaneously make a transition from the higher energy state to the lower energy state. The signal is thus proportional to the population difference between the states. NMR is a rather sensitive spectroscopy since it is capable of detecting these very small population differences. It is the resonance, or exchange of energy at a specific frequency between the spins and the spectrometer, which gives NMR its sensitivity.
It is worth noting at this time two other factors which influence the MRI signal: the natural abundance of the isotope and biological abundance. The natural abundance of an isotope is the fraction of nuclei having a given number of protons and neutrons, or atomic weight. For example, there are three isotopes of hydrogen, 1H, 2H, and 3H. The natural abundance of 1H is 99.985%. The following table lists the natural abundances of some nuclei studied by MRI.
The biological abundance is the fraction of one type of atom in the human body. The following table lists the biological abundances of some nuclei studied by MRI.
It is cumbersome to describe NMR on a microscopic scale. A macroscopic picture is more convenient. The first step in developing the macroscopic picture is to define the spin packet. A spin packet is a group of spins experiencing the same magnetic field strength. In this example, the spins within each grid section represent a spin packet.
At any instant in time, the magnetic field due to the spins in each spin packet can be represented by a magnetization vector.
The size of each vector is proportional to (N+ - N-).
The vector sum of the magnetization vectors from all of the spin packets is the net magnetization. In order to describe pulsed NMR it is necessary from here on to talk in terms of the net magnetization.
Adopting the conventional NMR coordinate system, the external magnetic field and the net magnetization vector at equilibrium are both along the Z axis.
At equilibrium, the net magnetization vector lies along the direction of the applied magnetic field Bo and is called the equilibrium magnetization Mo. In this configuration, the Z component of magnetization MZ equals Mo. MZ is referred to as the longitudinal magnetization. There is no transverse (MX or MY) magnetization here.
It is possible to change the net magnetization by exposing the nuclear spin system to energy of a frequency equal to the energy difference between the spin states. If enough energy is put into the system, it is possible to saturate the spin system and make MZ=0.
The time constant which describes how MZ returns to its equilibrium value is called the spin lattice relaxation time (T1). The equation governing this behavior as a function of the time t after its displacement is: MZ = Mo ( 1 - e^(-t/T1) ).
T1 is the time to reduce the difference between the longitudinal magnetization (MZ) and its equilibrium value by a factor of e.
If the net magnetization is placed along the -Z axis, it will gradually return to its equilibrium position along the +Z axis at a rate governed by T1. The equation governing this behavior as a function of the time t after its displacement is: MZ = Mo ( 1 - 2 e^(-t/T1) ).
Again, the spin-lattice relaxation time (T1) is the time to reduce the difference between the longitudinal magnetization (MZ) and its equilibrium value by a factor of e.
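A minimal sketch of the two recovery equations above; the value T1 = 0.9 s is an assumed, tissue-like number used purely for illustration.

```python
import math

def mz_after_saturation(t, t1, m0=1.0):
    """Longitudinal recovery after saturation (MZ = 0 at t = 0): Mo(1 - e^(-t/T1))."""
    return m0 * (1.0 - math.exp(-t / t1))

def mz_after_inversion(t, t1, m0=1.0):
    """Recovery after the magnetization is placed along -Z: Mo(1 - 2e^(-t/T1))."""
    return m0 * (1.0 - 2.0 * math.exp(-t / t1))

T1 = 0.9  # seconds, assumed for the example
for t in (0.0, 0.45, 0.9, 1.8, 3.6):
    print(f"t = {t:4.2f} s   saturation: {mz_after_saturation(t, T1):+.3f}   "
          f"inversion: {mz_after_inversion(t, T1):+.3f}")
```

Both curves approach Mo, and after one T1 the remaining difference from equilibrium has dropped by the factor of e described above.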
If the net magnetization is placed in the XY plane it will rotate about the Z axis at a frequency equal to the frequency of the photon which would cause a transition between the two energy levels of the spin. This frequency is called the Larmor frequency.
In addition to the rotation, the net magnetization starts to dephase because each of the spin packets making it up is experiencing a slightly different magnetic field and rotates at its own Larmor frequency. The longer the elapsed time, the greater the phase difference. Here the net magnetization vector is initially along +Y. For this and all dephasing examples think of this vector as the overlap of several thinner vectors from the individual spin packets.
The time constant which describes the return to equilibrium of the transverse magnetization, MXY, is called the spin-spin relaxation time, T2: MXY = MXYo e^(-t/T2).
T2 is always less than or equal to T1. The net magnetization in the XY plane goes to zero and then the longitudinal magnetization grows in until we have Mo along Z.
Any transverse magnetization behaves the same way. The transverse component rotates about the direction of applied magnetization and dephases. T1 governs the rate of recovery of the longitudinal magnetization.
In summary, the spin-spin relaxation time, T2, is the time to reduce the transverse magnetization by a factor of e. In the previous sequence, T2 and T1 processes are shown separately for clarity. That is, the magnetization vectors are shown filling the XY plane completely before growing back up along the Z axis. Actually, both processes occur simultaneously with the only restriction being that T2 is less than or equal to T1.
Two factors contribute to the decay of transverse magnetization.
1) molecular interactions (said to lead to a pure T2 molecular effect)
2) variations in Bo (said to lead to an inhomogeneous T2 effect)
The combination of these two factors is what actually results in the decay of transverse magnetization. The combined time constant is called T2 star and is given the symbol T2*. The relationship between the T2 from molecular processes and that from inhomogeneities in the magnetic field is as follows: 1/T2* = 1/T2 + 1/T2inhomo.
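A small numerical example of this combination rule; the two time constants used below are assumed round numbers, not values taken from the text.

```python
def t2_star(t2_molecular, t2_inhomogeneous):
    """Combine the two contributions: 1/T2* = 1/T2 + 1/T2inhomo."""
    return 1.0 / (1.0 / t2_molecular + 1.0 / t2_inhomogeneous)

# Assumed illustrative values: T2 = 100 ms, inhomogeneity term = 50 ms.
print(f"T2* = {t2_star(0.100, 0.050) * 1e3:.1f} ms")  # shorter than either contribution
```

Because the rates add, T2* is always shorter than both the molecular T2 and the inhomogeneity term.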
We have just looked at the behavior of spins in the laboratory frame of reference. It is convenient to define a rotating frame of reference which rotates about the Z axis at the Larmor frequency. We distinguish this rotating coordinate system from the laboratory system by primes on the X and Y axes, X'Y'.
A magnetization vector rotating at the Larmor frequency in the laboratory frame appears stationary in a frame of reference rotating about the Z axis. In the rotating frame, relaxation of MZ magnetization to its equilibrium value looks the same as it did in the laboratory frame.
A transverse magnetization vector rotating about the Z axis at the same velocity as the rotating frame will appear stationary in the rotating frame. A magnetization vector traveling faster than the rotating frame rotates clockwise about the Z axis. A magnetization vector traveling slower than the rotating frame rotates counter-clockwise about the Z axis.
In a sample there are spin packets traveling faster and slower than the rotating frame. As a consequence, when the mean frequency of the sample is equal to the rotating frame, the dephasing of MX'Y' looks like this.
A coil of wire placed around the X axis will provide a magnetic field along the X axis when a direct current is passed through the coil. An alternating current will produce a magnetic field which alternates in direction.
In a frame of reference rotating about the Z axis at a frequency equal to that of the alternating current, the magnetic field along the X' axis will be constant, just as in the direct current case in the laboratory frame.
This is the same as moving the coil about the rotating frame coordinate system at the Larmor Frequency. In magnetic resonance, the magnetic field created by the coil passing an alternating current at the Larmor frequency is called the B1 magnetic field. When the alternating current through the coil is turned on and off, it creates a pulsed B1 magnetic field along the X' axis.
The spins respond to this pulse in such a way as to cause the net magnetization vector to rotate about the direction of the applied B1 field. The rotation angle θ depends on the length of time the field is on, τ, and its magnitude B1: θ = 2π γ τ B1.
In our examples, τ will be assumed to be much smaller than T1 and T2.
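For a sense of scale, the relation θ = 2π γ τ B1 can be inverted to estimate the pulse length τ needed for a given rotation angle. The B1 amplitude used below (10 microtesla) is an assumed illustrative value, not one given in the text.

```python
import math

gamma_h = 42.58e6  # Hz/T, gyromagnetic ratio of hydrogen

def pulse_duration_s(theta_rad, b1_tesla, gamma=gamma_h):
    """Duration tau of a B1 pulse giving rotation angle theta = 2*pi*gamma*tau*B1."""
    return theta_rad / (2.0 * math.pi * gamma * b1_tesla)

b1 = 10e-6  # tesla, assumed B1 amplitude
print(f"90 degree pulse:  tau = {pulse_duration_s(math.pi / 2, b1) * 1e6:.0f} microseconds")
print(f"180 degree pulse: tau = {pulse_duration_s(math.pi, b1) * 1e6:.0f} microseconds")
```

The resulting pulse lengths of a few hundred microseconds are indeed much shorter than typical T1 and T2 values, consistent with the assumption stated above.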
A 90° pulse is one which rotates the magnetization vector clockwise by 90 degrees about the X' axis. A 90° pulse rotates the equilibrium magnetization down to the Y' axis. In the laboratory frame the equilibrium magnetization spirals down around the Z axis to the XY plane. You can see why the rotating frame of reference is helpful in describing the behavior of magnetization in response to a pulsed magnetic field.
A 180° pulse will rotate the magnetization vector by 180 degrees. A 180° pulse rotates the equilibrium magnetization down to along the -Z axis.
The net magnetization at any orientation will behave according to the rotation equation. For example, a net magnetization vector along the Y' axis will end up along the -Y' axis when acted upon by a 180° pulse of B1 along the X' axis.
A net magnetization vector between X' and Y' will end up between X' and -Y' after the application of a 180° pulse of B1 applied along the X' axis.
A rotation matrix (described as a coordinate transformation in Chapter 2) can also be used to predict the result of a rotation. Here θ is the rotation angle about the X' axis, [X', Y', Z] is the initial location of the vector, and [X", Y", Z"] is the location of the vector after the rotation.
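A minimal sketch of such a rotation about the X' axis is given below. The sign convention is chosen so that a 90° rotation carries equilibrium magnetization from +Z down to +Y', matching the pulse behaviour described above; it is not claimed to reproduce the exact matrix of Chapter 2.

```python
import math

def rotate_about_x(vector, theta_rad):
    """Rotate the vector [x, y, z] by theta about the X' axis."""
    x, y, z = vector
    c, s = math.cos(theta_rad), math.sin(theta_rad)
    # Convention assumed here: theta = 90 degrees takes +Z to +Y'.
    return (x, c * y + s * z, -s * y + c * z)

print(rotate_about_x((0.0, 0.0, 1.0), math.pi / 2))  # equilibrium Mz rotated down to +Y'
print(rotate_about_x((0.0, 1.0, 0.0), math.pi))      # a vector on +Y' ends up on -Y'
```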
Motions in solution which result in time varying magnetic fields cause spin relaxation.
Time varying fields at the Larmor frequency cause transitions between the spin states and hence a change in MZ. This screen depicts the field at the green hydrogen on the water molecule as it rotates about the external field Bo and a magnetic field from the blue hydrogen. Note that the field experienced at the green hydrogen is sinusoidal.
There is a distribution of rotation frequencies in a sample of molecules. Only frequencies at the Larmor frequency affect T1. Since the Larmor frequency is proportional to Bo, T1 will therefore vary as a function of magnetic field strength. In general, T1 is inversely proportional to the density of molecular motions at the Larmor frequency.
The rotation frequency distribution depends on the temperature and viscosity of the solution. Therefore T1 will vary as a function of temperature. At the Larmor frequency indicated by νo, T1(280 K) < T1(340 K). The temperature of the human body does not vary by enough to cause a significant influence on T1. The viscosity does however vary significantly from tissue to tissue and influences T1 as is seen in the following molecular motion plot.
Fluctuating fields which perturb the energy levels of the spin states cause the transverse magnetization to dephase. This can be seen by examining the plot of Bo experienced by the red hydrogens on the following water molecule. The number of molecular motions less than and equal to the Larmor frequency is inversely proportional to T2.
In general, relaxation times get longer as Bo increases because there are fewer relaxation-causing frequency components present in the random motions of the molecules.
The Bloch equations are a set of coupled differential equations which can be used to describe the behavior of a magnetization vector under any conditions. When properly integrated, the Bloch equations will yield the X', Y', and Z components of magnetization as a function of time.
Copyright © 1996-2010 J.P. Hornak.
All Rights Reserved. | http://www.cis.rit.edu/htbooks/mri/chap-3/chap-3.htm | 13 |
105 | In physics, the angular velocity is a vector quantity (more precisely, a pseudovector) which specifies the angular speed of an object and the axis about which the object is rotating. The SI unit of angular velocity is radians per second, although it may be measured in other units such as degrees per second, revolutions per second, degrees per hour, etc. When measured in cycles or rotations per unit time (e.g. revolutions per minute), it is often called the rotational velocity and its magnitude the rotational speed. Angular velocity is usually represented by the symbol omega (Ω or ω). The direction of the angular velocity vector is perpendicular to the plane of rotation, in a direction which is usually specified by the right hand grip rule.
The angular velocity of a particle in a 2-dimensional plane is the easiest to understand. As shown in the figure on the right (typically expressing the angular measures φ and θ in radians), if we draw a line from the origin (O) to the particle (P), then the velocity vector (v) of the particle will have a component along the radius (radial component, v∥) and a component perpendicular to the radius (cross-radial component, v⊥). However, it must be remembered that the velocity vector can be also decomposed into tangential and normal components.
A radial motion produces no change in the distance of the particle relative to the origin, so for purposes of finding the angular velocity the parallel (radial) component can be ignored. Therefore, the rotation is completely produced by the tangential motion (like that of a particle moving along a circumference), and the angular velocity is completely determined by the perpendicular (tangential) component.
It can be seen that the rate of change of the angular position of the particle is related to the cross-radial velocity by: dφ/dt = v⊥ / |r|.
Utilizing θ, the angle between vectors v∥ and v, or equivalently the angle between vectors r and v, gives: v⊥ = |v| sin(θ).
Combining the above two equations and defining the angular velocity as ω = dφ/dt yields: ω = |v| sin(θ) / |r|.
In two dimensions the angular velocity is a single number which has no direction. A single number which has no direction is either a scalar or a pseudoscalar, the difference being that a scalar does not change its sign when the x and y axes are exchanged (or inverted), while a pseudoscalar does. The angle as well as the angular velocity is a pseudoscalar. The positive direction of rotation is taken, by convention, to be in the direction towards the y axis from the x axis. If the axes are inverted but the sense of the rotation stays the same, then the sign of the angle of rotation, and therefore of the angular velocity as well, will change.
It is important to note that the pseudoscalar angular velocity of a particle depends upon the choice of the origin.
In three dimensions, the angular velocity becomes a bit more complicated. The angular velocity in this case is generally thought of as a vector, or more precisely, a pseudovector. It now has not only a magnitude, but a direction as well. The magnitude is the angular speed, and the direction describes the axis of rotation. The right-hand rule indicates the positive direction of the angular velocity pseudovector, namely:
Just as in the two dimensional case, a particle will have a component of its velocity along the radius from the origin to the particle, and another component perpendicular to that radius. The combination of the origin point and the perpendicular component of the velocity defines a plane of rotation in which the behavior of the particle (for that instant) appears just as it does in the two dimensional case. The axis of rotation is then a line normal to this plane, and this axis defines the direction of the angular velocity pseudovector, while the magnitude is the same as the pseudoscalar value found in the 2-dimensional case. Define a unit vector u which points in the direction of the angular velocity pseudovector. The angular velocity may be written in a manner similar to that for two dimensions: ω = (|v| sin(θ) / |r|) u,
which, by the definition of the cross product, can be written: ω = (r × v) / |r|^2.
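A short sketch of this cross-product formula; the particle position and velocity below are assumed values chosen so the result is easy to check by hand.

```python
def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def angular_velocity(r, v):
    """Angular velocity pseudovector about the origin: omega = (r x v) / |r|^2."""
    r_squared = r[0] ** 2 + r[1] ** 2 + r[2] ** 2
    return tuple(component / r_squared for component in cross(r, v))

# Assumed example: a particle 2 m from the origin on the x axis, moving at 3 m/s
# in the +y direction, rotates about the +z axis at 1.5 rad/s.
print(angular_velocity((2.0, 0.0, 0.0), (0.0, 3.0, 0.0)))  # (0.0, 0.0, 1.5)
```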
Euler's rotation theorem states that, in an instant, for any dt there always exists a momentary axis of rotation. Therefore, any transversal section of the body by a plane perpendicular to this axis has to behave as a two dimensional rotation. The angular velocity vector is defined along the rotation axis (an eigenvector of the linear map), and its magnitude is the derivative of the rotated angle with respect to time.
In general, the angular velocity in an n-dimensional space is the time derivative of the angular displacement tensor, which is a second rank skew-symmetric tensor. This tensor has n(n-1)/2 independent components, and this number is the dimension of the Lie algebra of the Lie group of rotations of an n-dimensional inner product space. It turns out that in three dimensional space the angular velocity can be represented by a vector because the number of independent components is equal to the number of dimensions of the space.
In order to deal with the motion of a rigid body, it is best to consider a coordinate system that is fixed with respect to the rigid body, and to study the coordinate transformations between this coordinate system and the fixed "laboratory" system. As shown in the figure on the right, the lab system's origin is at point O, the rigid body system origin is at O' and the vector from O to O' is R. A particle (i) in the rigid body is located at point P and the vector position of this particle is Ri in the lab frame, and at position ri in the body frame. It is seen that the position of the particle can be written: Ri = R + ri.
The defining characteristic of a rigid body is that the distance between any two points in a rigid body is unchanging in time. This means that the length of the vector ri is unchanging. By Euler's rotation theorem, we may replace the vector ri with A rio, where A is a rotation matrix and rio is the position of the particle at some fixed point in time, say t = 0. This replacement is useful, because now it is only the rotation matrix A which is changing in time and not the reference vector rio, as the rigid body rotates about point O'. The position of the particle is now written as: Ri = R + A rio.
Taking the time derivative yields the velocity of the particle: Vi = V + (dA/dt) rio,
where Vi is the velocity of the particle (in the lab frame) and V is the velocity of O' (the origin of the rigid body frame). Since A is a rotation matrix its inverse is its transpose. So we substitute rio = A^T ri: Vi = V + (dA/dt) A^T ri.
Continue by taking the time derivative of A A^T = I (the identity matrix): (dA/dt) A^T + A (dA^T/dt) = 0.
Applying the formula (AB)^T = B^T A^T: (dA/dt) A^T = - [ (dA/dt) A^T ]^T.
The matrix Ω = (dA/dt) A^T is therefore the negative of its transpose, i.e. a skew symmetric 3x3 matrix. We can therefore take its dual to get a 3 dimensional vector. Ω is called the angular velocity tensor. If we take the dual of this tensor, matrix multiplication is replaced by the cross product. Its dual is called the angular velocity pseudovector, ω.
Substituting ω into the above velocity expression: Vi = V + ω × ri.
It can be seen that the velocity of a point in a rigid body can be divided into two terms - the velocity of a reference point fixed in the rigid body plus the cross product term involving the angular velocity of the particle with respect to the reference point. This angular velocity is the "spin" angular velocity of the rigid body as opposed to the angular velocity of the reference point O' about the origin O.
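The split of the velocity into a translational term and a spin term can be illustrated with a small sketch; the reference-point velocity, angular velocity and particle position used below are assumed values for the example.

```python
def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def point_velocity(v_reference, omega, r_body):
    """Velocity of a point of a rigid body: Vi = V + omega x ri."""
    spin_term = cross(omega, r_body)
    return tuple(v + w for v, w in zip(v_reference, spin_term))

# Assumed example: the reference point translates at 1 m/s along x, the body spins
# at 2 rad/s about z, and the particle sits 0.5 m along y from the reference point.
print(point_velocity((1.0, 0.0, 0.0), (0.0, 0.0, 2.0), (0.0, 0.5, 0.0)))  # (0.0, 0.0, 0.0)
```

In this particular example the chosen point is momentarily at rest; it lies on an instantaneous axis of rotation of the kind discussed below.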
It is an important point that the spin angular velocity of every particle in the rigid body is the same, and that the spin angular velocity is independent of the choice of the origin of the rigid body system or of the lab system. In other words, it is a physically real quantity which is a property of the rigid body, independent of one's choice of coordinate system. The angular velocity of the reference point about the origin of the lab frame will, however, depend on these choices of coordinate system. It is often convenient to choose the center of mass of the rigid body as the origin of the rigid body system, since a considerable mathematical simplification occurs in the expression for the angular momentum of the rigid body.
If the reference point is the "Instantaneous axis of rotation" the expression of velocity of a point in the rigid body will have just the angular velocity term. This is because the velocity of instantaneous axis of rotation is zero. An example of instantaneous axis of rotation is the hinge of a door. Another example is the point of contact of a pure rolling spherical rigid body.
[Figure caption: the angular velocity describes the speed of rotation and the orientation of the axis about which the rotation takes place. The direction of the angular velocity vector is along the axis of rotation; in this case (counter-clockwise rotation) the vector points toward the viewer.]
http://www.thefullwiki.org/Angular_velocity | 13
51 | Nuclear weapon designs are often divided into two classes, based on the dominant source of the nuclear weapon's energy.
- Fission bombs derive their power from nuclear fission, where heavy nuclei (uranium or plutonium) split into lighter elements when bombarded by neutrons, producing more neutrons which bombard other nuclei and trigger a nuclear chain reaction. These are historically called atomic bombs, atom bombs, or A-bombs, though this name is not precise because chemical reactions release energy from atomic bonds and fusion is no less atomic than fission. Despite this possible confusion, the term atom bomb has still been generally accepted to refer specifically to nuclear weapons, and most commonly to pure fission devices.
- Fusion bombs are based on nuclear fusion where light nuclei such as hydrogen and helium combine together into heavier elements and release large amounts of energy. Weapons which have a fusion stage are also referred to as hydrogen bombs or H-bombs because of their primary fuel, or thermonuclear weapons because fusion reactions require extremely high temperatures for a chain reaction to occur.
The distinction between these two types of weapon is blurred by the fact that they are combined in nearly all complex modern weapons: a smaller fission bomb is first used to reach the necessary conditions of high temperature and pressure to allow fusion to occur. On the other hand, a fission device is more efficient when a fusion core first boosts the weapon's energy. Additionally, most fusion weapons derive a substantial portion of their energy (often around half of the total yield) from a final stage of fissioning which is enabled by the fusion reactions. Since the distinguishing feature of both fission and fusion weapons is that they release energy from transformations of the atomic nucleus, the best general term for all types of these explosive devices is nuclear weapon.
Other specific types of nuclear weapon design which are commonly referred to by name include: neutron bomb, cobalt bomb, enhanced radiation weapon, and salted bomb.
The simplest nuclear weapons are pure fission bombs. These were the first types of nuclear weapons built during the Manhattan Project and they are a building block for all advanced nuclear weapons designs.
A mass of fissile material is called critical when it is capable of a sustained chain reaction, which depends upon the size, shape and purity of the material as well as what surrounds the material. A numerical measure of whether a mass is critical or not is available as the neutron multiplication factor, k, where
- k = f - l
where f is the average number of neutrons released per fission event and l is the average number of neutrons lost by either leaving the system or being captured in a non-fission event. When k = 1 the mass is critical, k < 1 is subcritical and k > 1 is supercritical. A fission bomb works by rapidly changing a subcritical mass of fissile material into a supercritical assembly, causing a chain reaction which rapidly releases large amounts of energy. In practice the mass is not made just barely critical, but goes from slightly subcritical (k = 0.9) to highly supercritical (k = 2 or 3), so that each neutron creates several new neutrons and the chain reaction advances more quickly. The main challenge in producing an efficient explosion using nuclear fission is to keep the bomb together long enough for a substantial fraction of the available nuclear energy to be released.
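A rough sketch of the bookkeeping implied by k = f - l is given below. The values of f and l are invented for illustration, and the growth estimate deliberately ignores the changing geometry of a real device.

```python
def multiplication_factor(neutrons_per_fission, neutrons_lost):
    """Neutron multiplication factor k = f - l, as defined above."""
    return neutrons_per_fission - neutrons_lost

def classify(k):
    if k < 1.0:
        return "subcritical"
    if k == 1.0:
        return "critical"
    return "supercritical"

def neutron_population(k, generations, start=1.0):
    """Idealized growth of the neutron population over a number of generations."""
    return start * k ** generations

# Assumed illustrative values of f and l spanning the three regimes.
for f, l in ((2.5, 1.7), (2.5, 1.5), (2.5, 0.5)):
    k = multiplication_factor(f, l)
    print(f"f = {f}, l = {l} -> k = {k:.1f} ({classify(k)})")

# At k = 2, the 80 generations mentioned below multiply the population by 2**80.
print(f"growth over 80 generations at k = 2: {neutron_population(2.0, 80):.2e}")
```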
Until detonation is desired, the weapon must consist of a number of separate pieces, each of which is below the critical size either because it is too small or unfavorably shaped. To produce detonation, the fissile material must be brought together rapidly. In the course of this assembly process the chain reaction is likely to start, causing the material to heat up and expand and preventing the material from reaching its most compact (and most efficient) form. It may turn out that the explosion is so inefficient as to be practically useless. The majority of the technical difficulties of designing and manufacturing a fission weapon are based on the need to both reduce the time of assembly of a supercritical mass to a minimum and reduce the number of stray (pre-detonation) neutrons to a minimum.
The isotopes desirable for a nuclear weapon are those which have a high probability of fission reaction, yield a high number of excess neutrons, have a low probability of absorbing neutrons without a fission reaction, and do not release a large number of spontaneous neutrons. The primary isotopes which fit these criteria are U-235, Pu-239 and U-233.
Naturally occurring uranium consists mostly of U-238 (99.29%), with a small part U-235 (0.71%). The U-238 isotope has a high probability of absorbing a neutron without a fission, and also a higher rate of spontaneous fission. For weapons, uranium is enriched through isotope separation. Uranium which is more than 80% U-235 is called highly enriched uranium (HEU), and weapons grade uranium is at least 93.5% U-235. U-235 has a spontaneous fission rate of 0.16 fissions/s-kg, which is low enough to make supercritical assembly relatively easy. The critical mass for an unreflected sphere of U-235 is about 50 kg, which is a sphere with a diameter of 17 cm. This size can be reduced to about 15 kg with the use of a neutron reflector surrounding the sphere.
Plutonium (atomic number 94, two more than uranium) occurs naturally only in infinitesimal amounts found in uranium ores. Military or scientific production of plutonium is achieved by exposing purified U-238 to a strong neutron source (e.g., in a breeder reactor). When U-238 absorbs a neutron, the resulting U-239 isotope then beta decays twice into Pu-239. Pu-239 has a higher probability for fission than U-235, and a larger number of neutrons produced per fission event, resulting in a smaller critical mass. Pure Pu-239 also has a reasonably low rate of neutron emission due to spontaneous fission (10 fissions/s-kg), making it feasible to assemble a supercritical mass before predetonation. In practice the plutonium produced will invariably contain a certain amount of Pu-240 due to the tendency of Pu-239 to absorb an additional neutron during production. Pu-240 has a high rate of spontaneous fission events (415,000 fissions/s-kg), making it an undesirable contaminant. Weapons grade plutonium must contain no more than 7% Pu-240; this is achieved by only exposing U-238 to neutron sources for short periods of time to minimize the Pu-240 produced. The critical mass for an unreflected sphere of plutonium is 16 kg, but through the use of a neutron reflecting tamper the pit of plutonium in a fission bomb is reduced to 10 kg, which is a sphere with a diameter of 10 cm.
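The quoted sphere sizes can be checked with a short volume calculation. The densities used below (about 18.7 g/cm3 for uranium metal and about 19.8 g/cm3 for alpha-phase plutonium) are assumed round values, not figures from the text.

```python
import math

def sphere_diameter_cm(mass_kg, density_g_per_cm3):
    """Diameter of a solid sphere with the given mass and density."""
    volume_cm3 = (mass_kg * 1000.0) / density_g_per_cm3
    radius_cm = (3.0 * volume_cm3 / (4.0 * math.pi)) ** (1.0 / 3.0)
    return 2.0 * radius_cm

# Assumed densities; the masses are the ones quoted in the text.
print(f"50 kg sphere of U-235:  about {sphere_diameter_cm(50, 18.7):.1f} cm across")  # ~17 cm
print(f"10 kg sphere of Pu-239: about {sphere_diameter_cm(10, 19.8):.1f} cm across")  # ~10 cm
```

Both results are consistent with the diameters quoted above.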
Roughly the following values apply: there are 80 generations of the chain reaction, each requiring the time a neutron with a speed of 10,000 km/s needs to travel 10 cm, which is about 0.01 µs; 80 such generations take about 0.8 µs. Thus, the supercritical mass has to be kept together for roughly 1 µs.
There are two techniques for assembling a supercritical mass. Broadly speaking one brings two sub-critical masses together and the other compresses a sub-critical mass into a supercritical one.
The simplest technique for assembling a supercritical mass is to shoot one piece of fissile material as a projectile against a second part as a target, usually called the gun method. This is roughly how the Little Boy weapon which was detonated over Hiroshima worked. In detail it actually used three uranium rings fitted side by side through which a uranium "bullet" was fired using a cordite charge rather than two hemispheres. This allowed sub-critical assemblies to be tested using the same "bullet" but with just one ring.
This method of combination can only be used for U-235 because of the relatively long amount of time it takes to combine the materials, making predetonation likely for Pu-239 which has a higher spontaneous neutron release due to Pu-240 contamination. The method requires ca. 20 to 25 kg of U-235, versus 15 kg for the implosion method.
For technologically advanced states the method is essentially obsolete, see below. With regard to the risk of proliferation and use by terrorists, the relatively simple design is a concern, as it does not require as much fine engineering or manufacturing as other methods. With enough highly-enriched uranium, nations or groups with relatively low levels of technological sophistication could create an inefficient–though still quite powerful–gun-type nuclear weapon.
The scientists of the Manhattan Project were sure of the gun-type bomb's success and in any event it could not be tested before being deployed because there was sufficient U-235 available for only one device.
The more difficult, but in many ways superior, method of combination is referred to as the implosion method and uses conventional explosives surrounding the material to rapidly compress the mass to a supercritical state. For Pu-239 assemblies a contamination of only 1% Pu-240 produces so many neutrons that implosion systems are required to produce efficient bombs. This is the reason that the more technically difficult implosion method was used on the plutonium Fat Man weapon which was detonated over Nagasaki.
Weapons assembled with this method also tend to be more efficient than the weapons employing the gun method of combination. The reason that the implosion method is more efficient is because it not only combines the masses, but also increases the density of the mass, and thereby increases the neutron multiplication factor k of the fissionable assembly. Most modern weapons use a hollow plutonium core with an implosion mechanism for detonation. The core is known as a pit, from the large seed encased in hard wood found in a peach or similar fruit.
This precision compression of the pit creates a need for very precise design and machining of the pit and explosive lenses. The milling machines used are so precise that they could cut the polished surfaces of eyeglass lenses. There are strong suggestions that modern implosion devices use non-spherical configurations as well, such as ovoid shapes ("watermelons").
Casting and then machining plutonium is difficult not only because of its toxicity but also because plutonium has many different metallic phases, and changing phases as it cools distorts the metal. This is normally overcome by alloying it with 3-3.5 molar% (0.9-1.0% by weight) gallium, which makes it take up its delta phase over a wide temperature range. When cooling from the melt it then suffers only a single phase change, from its epsilon phase to the delta one, instead of four changes. Other trivalent metals would also work, but gallium has a small neutron absorption cross section and helps protect the plutonium against corrosion. A drawback is that gallium compounds themselves are corrosive and so if the plutonium is recovered from dismantled weapons for conversion to plutonium oxide for power reactors, there is the difficulty of removing the gallium. Modern pits are often composites of plutonium and uranium-235.
Because plutonium is chemically reactive and toxic if inhaled or enters the body by any other means, it is usual to plate the completed pit with a thin layer of inert metal. In the first weapons, nickel was used but gold is now preferred.
It is not good enough to pack explosive in a spherical shell around the tamper and detonate it simultaneously at several places, because the tamper and plutonium pit will simply be squeezed out of the gaps in the detonation front. Instead the shock wave must be carefully shaped into a perfect sphere centred on the pit and travelling inwards. This is done using a spherical shell made of closely fitting and accurately shaped blocks of explosives with different propagation speeds, which form explosive lenses.
The lenses must be accurately shaped, chemically pure and homogenous for precise control of the speed of the detonation front. The casting and testing of these lenses was a massive technical challenge in the development of the implosion method in the 1940s, as was measuring the speed of the shock wave and the performance of prototype shells. It also required electric detonators to be developed which would explode at exactly the same moment so that the explosion starts at the centre of each of the lenses simultaneously. Once the shock wave has been shaped, there may also be an inner homogenous spherical shell of explosive to give it greater force, known as a supercharge.
The bomb dropped onto Nagasaki used 32 lenses, while more efficient bombs would later use 92 lenses.
The explosion shock wave might be of such short duration that only a fraction of the pit is compressed at any instant as the wave passes through it. A pusher shell made out of a low density metal such as aluminium or beryllium, or an alloy of the two (aluminium is easier and safer to shape but beryllium reflects neutrons back into the core), may be needed and is located between the explosive lens and the tamper. It works by reflecting some of the shock wave backwards, which has the effect of lengthening it. The tamper might be designed to work as the pusher too, although a low density material is best for the pusher but a high density one for the tamper.
Most U.S. weapons since the 1950s have employed a concept called pit "levitation," whereby an air gap is introduced between the pusher and the pit. The effect of this is to allow the pusher to gain momentum before it hits the core, which allows for more efficient and complete compression. A common analogy is that of a hammer and a nail: leaving space between the hammer and nail before striking greatly increases the compression power of the hammer, rather than putting the hammer right on top of the nail before beginning to push.
Tamper / neutron reflector
A tamper is a layer of dense material (typically natural or depleted uranium or tungsten) surrounding the fissile material. It reduces the critical mass and increases the efficiency by two effects:
The tamper prolongs the very short time the material holds together under the extreme pressures of the explosion, and thereby increases the efficiency, i.e. the part of the fissile material that actually fissions. The high density has more effect on this than high tensile strength. A coincidence that is fortunate from the point of view of the weapon designer is that materials of high density are also excellent reflectors of neutrons.
While the effect of a tamper is to increase the efficiency, both by reflecting neutrons and by delaying the expansion of the bomb, the effect on the efficiency is not as great as on the critical mass. The reason for this is that the process of reflection is relatively time consuming and may not occur extensively before the chain reaction is terminated.
Neutron trigger / initiator
One of the key elements in the proper operation of a nuclear weapon is initiation of the fission chain reaction at the proper time. To obtain a significant nuclear yield of the nuclear explosive, sufficient neutrons must be present within the supercritical core at the right time. If the chain reaction starts too soon, the result will be only a 'fizzle yield,' much below the design specification; if it occurs too late, there may be no yield whatsoever. Several ways to produce neutrons at the appropriate moment have been developed.
Early neutron triggers consisted of a highly radioactive isotope of polonium (Po-210), a strong alpha emitter, combined with beryllium, which absorbs alpha particles and emits neutrons. This isotope of polonium has a half life of almost 140 days. Therefore, a neutron initiator using this material needs to have the polonium replaced frequently. The polonium is produced in a nuclear reactor.
To supply the initiation pulse of neutrons at the right time, the polonium and the beryllium need to be kept apart until the appropriate moment and then thoroughly and rapidly mixed by the implosion of the weapon. This method of neutron initiation is sufficient for weapons utilizing the slower gun combination method, but the timing is not precise enough for an implosion weapon design.
Another method of providing source neutrons, is through a pulsed neutron emitter which is a small ion accelerator with a metal hydride target. When the ion source is turned on to create a plasma of deuterium or tritium, a large voltage is applied across the tube which accelerates the ions into tritium rich metal (usually scandium). The ions are accelerated so that there is a high probability of nuclear fusion occurring. The deuterium-tritium fusion reactions emit a short pulse of 14 MeV neutrons which will be sufficient to initiate the fission chain reaction. The timing of the pulse can be precisely controlled, making it better for an implosion weapon design.
Comparison of the two methods
The gun type method is essentially obsolete and was abandoned by the United States as soon as the implosion technique was perfected. Other nuclear powers, such as the United Kingdom, never even built an example of this type of weapon. As well as only being possible to produce this weapon using highly enriched U-235, the technique has other severe limitations.
The implosion technique is much better suited to the various methods employed to reduce the weight of the weapon and increase the proportion of material which fissions.
There are also safety problems with gun type weapons. For example, it is inherently dangerous to have a weapon containing a quantity and shape of fissile material which can form a critical mass through a relatively simple accident. Furthermore, if the weapon is dropped from an aircraft into the sea, then the moderating effect of the light sea water can also cause a criticality accident without the weapon even being physically damaged. Neither can happen with an implosion type weapon, since there is normally insufficient fissile material to form a critical mass without the correct detonation of the lenses.
Implosion type weapons normally have the pit physically removed from the centre of the weapon and only inserted during the arming procedure so that a nuclear explosion cannot occur even if a fault in the firing circuits causes them to detonate the explosive lenses simultaneously as would happen during correct operation. Alternatively, the pit can be "safed" by having its normally-hollow core filled with an inert material such as a fine metal chain. While the chain is in the center of the pit, the pit can't be compressed into an appropriate shape to fission; when the weapon is to be armed, the chain is removed. Similarly, a serious fire will detonate the explosives, destroying the pit and spreading plutonium to contaminate the surroundings (as has happened in several weapons accidents) but cannot possibly cause a nuclear explosion.
The South African nuclear program was probably unique in adopting the gun technique to the exclusion of implosion type devices, and built around five of these weapons before they abandoned their program.
Practical limitations of the fission bomb
A pure fission bomb is practically limited to a yield of a few hundred kilotons by the large amounts of fissile material needed to make a large weapon. It is technically difficult to keep a large amount of fissile material in a subcritical assembly while waiting for detonation, and it is also difficult to physically transform the subcritical assembly into a supercritical one quickly enough that the device explodes rather than prematurely detonating such that a majority of the fuel is unused (inefficient predetonation). The most efficient pure fission bomb would still only consume 20% of its fissile material before being blown apart, and can often be much less efficient (Fat Man only had an efficiency of 1.4%). Large yield, pure fission weapons are also unattractive due to the weight, size, and cost of using large amounts of highly enriched material.
Thermonuclear weapons (also Hydrogen bomb or fusion bomb)
The amount of energy released by a weapon can be greatly increased by the addition of nuclear fusion reactions. Fusion releases even more energy per reaction than fission, and can also be used as a source for additional neutrons. The light weight of the elements used as fusion fuel, combined with the larger energy release, means that fusion is a very efficient fuel by weight, making it possible to build extremely high yield weapons which are still portable enough to easily deliver. Fusion is the combination of two light atoms, usually isotopes of hydrogen, to form a more stable heavy atom and release excess energy. The fusion reaction requires the atoms involved to have a high thermal energy, which is why the reaction is called thermonuclear. The extreme temperatures and densities necessary for a fusion reaction are easily generated by a fission explosion.
The simplest way to utilize fusion is to put a mixture of deuterium and tritium inside the hollow core of an implosion style plutonium pit (which usually requires an external neutron generator mounted outside of it rather than the initiator in the core as in the earliest weapons). When the imploding fission chain reaction brings the fusion fuel to a sufficient pressure, a deuterium-tritium fusion reaction occurs and releases a large number of energetic neutrons into the surrounding fissile material. This increases the rate of burn of the fissile material and so more is consumed before the pit disintegrates. The efficiency (and therefore yield) of a pure fission bomb can be doubled (from about 20% to about 40% in an efficient design) through the use of a fusion boosted core, with very little increase in the size and weight of the device. The amount of energy released through fusion is only around 1% of the energy from fission, so the fusion chiefly increases the fission efficiency by providing a burst of additional neutrons.
The first boosted test was the United States' 45.5 kiloton Greenhouse Item test on May 24 1951, which used a cryogenic liquid deuterium-tritium mix instead of a gaseous one, and the Russians followed two years later on August 12 1953. Sophisticated modern weapons use lithium deuteride mainly because of maintenance issues — tritium is a dangerously radioactive gas with a short half life and so needs regular replacement. A lithium-6 atom can absorb a neutron and split into a tritium atom and a helium-4 one, thus providing the tritium for the boost reaction. A lithium-7 atom can be split into a tritium atom and a helium-4 one by gamma ray impact or by high-energy neutron impact.
Fission boosting provides two strategic benefits. The first is that it obviously allows weapons to be made very much smaller and use less fissile material for a given yield, making them cheaper to build and deliver. The second benefit is that it can be used to render weapons immune to radiation interference (RI). It was discovered in the mid-1950s that plutonium pits would be particularly susceptible to partial pre-detonation if exposed to the intense radiation of a nearby nuclear explosion (electronics might also be damaged, but this was a separate issue). RI was a particular problem before effective early warning radar systems because a first strike attack might make retaliatory weapons useless. Boosting can reduce the amount of plutonium needed in a weapon to below the quantity which would be vulnerable to this effect.
Staged thermonuclear weapons
The basic principles behind modern thermonuclear weapons design were developed independently by scientists in different countries. Edward Teller and Stanislaw Ulam at Los Alamos worked out the idea of staged detonation coupled with radiation implosion in what is known in the United States as the Teller-Ulam design in 1951. Soviet physicist Andrei Sakharov independently arrived at the same answer (which he called his "Third Idea") in 1955.
The full details of staged thermonuclear weapons have never been fully declassified and among different sources outside the wall of classification there is no strict consensus over how exactly a hydrogen bomb works. The basic principles are revealed through two separate declassified lines by the US Department of Energy: "The fact that in thermonuclear (TN) weapons, a fission 'primary' is used to trigger a TN reaction in thermonuclear fuel referred to as a 'secondary' and "The fact that, in thermonuclear weapons, radiation from a fission explosive can be contained and used to transfer energy to compress and ignite a physically separate component containing thermonuclear fuel."
A general interpretation of this, following on the 1979 court case of United States v. The Progressive (which sought to censor an article about the workings of the hydrogen bomb; the government eventually dropped the case and much new information about the weapon was declassified), is as follows:
A fission weapon (the "primary") is placed at one end of the warhead casing. When detonated, it first releases X-rays at the speed of light. These are reflected from the casing walls, which are made of heavy metals, machined into X-ray mirrors. The X-rays travel towards the fusion fuel (the "secondary"). Here reports about what happens differ: either the X-rays are used to cause a pentane-impregnated polystyrene foam filling the case to convert into a plasma, or the X-rays cause an ablation of the surface of the secondary, or the X-rays exert raw radiation pressure onto the surface of the secondary. In either case, the secondary, either a column or sphere of lithium-deuteride surrounded by a natural uranium "tamper"/"pusher", is compressed. Inside the secondary is a "sparkplug" of either enriched uranium or plutonium, which is caused to fission by the compression, and begins its own nuclear explosion. The combination of these two forces result in a tight compression of the fusion fuel, which is then subjected to high temperatures caused by the fission weapons, causes the deuterium to fuse into helium and emit copious neutrons. The neutrons transmute the lithium to tritium, which then also fuses and emits large amounts of gamma rays and more neutrons. The excess neutrons then cause the natural uranium in the "tamper", "pusher", case and x-ray mirrors to undergo fission as well, adding more power to the yield.
Advanced thermonuclear weapons designs
The largest modern fission-fusion-fission weapons include a fissionable outer shell of U-238, the more inert waste isotope of uranium, or X-ray mirrors constructed of polished U-238. This otherwise inert U-238 would be detonated by the intense fast neutrons from the fusion stage, increasing the yield of the bomb many times. For maximum yield, however, moderately enriched uranium is preferable as a jacket material. For the purposes of miniaturization of weapons (fitting them into the small re-entry vehicles on modern MIRVed missiles), it has also been suggested that many modern thermonuclear weapons use spherical secondary stages, rather than the column shapes of the older hydrogen bombs.
The cobalt bomb uses cobalt in the shell, and the fusion neutrons convert the cobalt into cobalt-60, a powerful long-term (5 years) emitter of gamma rays, which produces major radioactive contamination. In general this type of weapon is a salted bomb and variable fallout effects can be obtained by using different salting isotopes. Gold has been proposed for short-term fallout (days), tantalum and zinc for fallout of intermediate duration (months). To be useful for salting, the parent isotopes must be abundant in the natural element, and the neutron-bred radioactive product must be a strong emitter of penetrating gamma rays.
The primary purpose of this weapon is to create extremely radioactive fallout to deny a region to an advancing army, a sort of wind-deployed mine-field. No cobalt or other salted bomb has ever been atmospherically tested, and as far as is publicly known none have ever been built. In light of the ready availability of fission-fusion-fission bombs, it is unlikely any special-purpose fallout contamination weapon will ever be developed. The British did test a bomb that incorporated cobalt as an experimental radiochemical tracer (Antler/Round 1, 14 September 1957). This 1 kt device was exploded at the Tadje site, Maralinga range, Australia. The experiment was regarded as a failure and not repeated.
The thought of using cobalt, which has the longest half-life of the feasible salting materials, caused Leó Szilárd to refer to the weapon as a potential doomsday device. With a 5-year half-life, people would have to remain shielded underground for many years, effectively wiping out humanity. However, this would require a massive (unrealistic) number of such bombs; nevertheless the public heard of the idea, and there were numerous stories involving a single bomb wiping out the planet. (Note: The movie Dr Strangelove incorporated such a doomsday weapon as a major plot device.)
A final variant of the thermonuclear weapons is the enhanced radiation weapon, or neutron bomb: a small thermonuclear weapon in which the burst of neutrons generated by the fusion reaction is intentionally not absorbed inside the weapon, but allowed to escape. The X-ray mirrors and shell of the weapon are made of chromium or nickel so that the neutrons are permitted to escape.
This intense burst of high-energy neutrons is the principal destructive mechanism. Neutrons are more penetrating than other types of radiation, so many shielding materials that work well against gamma rays are far less effective against them. The term "enhanced radiation" refers only to the burst of ionizing radiation released at the moment of detonation, not to any enhancement of residual radiation in fallout (as in the salted bombs discussed above).
Fact Sheet 2004–3123
The Everglades is a vast subtropical ecosystem on the southern tip of the Florida peninsula that is the subject of an extensive restoration effort. A major objective of Everglades restoration, as defined in the Comprehensive Everglades Restoration Plan (CERP) (http://www.evergladesplan.org/), is to restore the ecosystem to more natural predrainage conditions. This effort not only requires scientific insight into factors that affect flow behavior for effective restoration planning but also requires background knowledge of flow conditions for restoration assessment, identified as a critical need in the CERP Monitoring and Assessment Plan (http://www.evergladesplan.org/pm/recover/recover_map_2004.cfm). Knowledge of flow conditions also is vital to the development and use of models to evaluate, interpret, and compare restoration scenarios for implementation of the CERP. Moreover, the role of flow as a factor contributing to landscape changes in the Everglades is not well understood and needs to be investigated (National Research Council, 2003). In general, the Science Coordination Team of the South Florida Ecosystem Restoration Task Force (http://www.sfrestore.org/) has asserted that there is an urgent need for flow research in the Everglades to increase the level of understanding and awareness of its role in restoration activities (Science Coordination Team, 2003).
The freshwater wetlands of the Everglades are a mosaic of tree islands, sawgrass marshes, wet prairies, and sloughs. Flow typically is shallow and slow in the wetlands. The shallow-depth, slow-velocity flow, referred to as sheet flow, is controlled primarily by the small topographic gradient (~10⁻⁶), the high internal resistance of submerged aquatic plants and emergent vegetation, and to a lesser extent by water levels, microtopography, and meteorological factors. Sheet-flow conditions are vital to habitat sustainability. Strong but mostly anecdotal evidence suggests that sheet flow helps shape and preserve the ridge and slough landscape (Science Coordination Team, 2003). Drainage and compartmentalization coupled with decades of managed water controls have altered sheet-flow conditions in the Everglades.
Significant portions of the Everglades have been compartmentalized with the largest remaining free-flowing part being within Everglades National Park (ENP). Shark River Slough (fig. 1) is a prominent drainage feature in ENP that conveys freshwater inflows to the coastal mangrove ecotone as sheet flow through vegetated wetlands. In a recently completed study by the U.S. Geological Survey (USGS), sets of continuous flow-velocity, water-temperature, and conductivity data were collected in Shark River Slough during wet seasons from July 1999 to July 2003. The Everglades wet season typically coincides with the June through November tropical hurricane season. These flow data, documented in Riscassi and Schaffranek (2002, 2003, and 2004), define the range of flow velocities in a variety of vegetative communities and are yielding insight into factors that influence the sheet-flow regime. A summary of the data and an overview of study findings are presented in this Fact Sheet.
Figure 1. Satellite image showing monitoring sites in southern Florida.
Over the course of the study, flow velocities, water and air temperatures, and (or) conductivities were measured at five sites with different vegetative properties. Locations of the five monitoring sites are shown in figure 1. At four sites (SH1, GS-203, GS-33, and GS-36), flow velocities were measured at a fixed elevation in the water column using acoustic Doppler velocity (ADV) meters. At three sites (GS-203, GS-33, and GS-36), conductivities were measured at a fixed elevation near the top of the plant litter layer. At all five sites, temperatures were monitored in the plant litter, at vertical intervals throughout the water column, on the water surface, and in the air above the water column using thermally sensitive resistors. In addition to considerations of vegetation characteristics and regional location, monitoring sites were selected based on their proximity to existing hydrologic and meteorological stations.
Photographs of vegetation at the four sites where flow velocities were measured are shown in figure 2. Site SH1 (fig. 2A) is a medium-dense area of spikerush (Eleocharis cellulosa) on the edge of a sawgrass (Cladium jamaicense) stand. Site GS-203 (fig. 2B) is a medium-dense area of sawgrass on the edge of a dense sawgrass stand. Sites GS-33 (fig. 2C) and GS-36 (fig. 2D) are sparse and medium-dense spikerush areas, respectively, each having various amounts of submerged aquatic vegetation and periphyton. Periphyton, a matrix of algae found on the top of the litter layer, floating, submerged, or attached to plant stems in various concentrations at all sites, can be seen in mats floating near the water surface at site GS-36 in figure 2D. Un-pictured site NP202 (fig. 1) is a dense cattail (Typha domingensis) area. Dates for which valid velocity, temperature, and conductivity data were collected, typically bihourly, are identified in figure 3.
Figure 2. Vegetation at flow-monitoring sites (A) SH1, (B) GS-203, (C) GS-33, and (D) GS-36.
Figure 3. Dates of processed and edited flow-velocity, conductivity, and temperature data.
ADV meters measure the frequency shift between a transmitted acoustic pulse and its reflectance from particulate matter within a small sample volume (0.25 cm³) to determine flow velocity. An ADV meter is shown suspended vertically from a horizontal board and centered between two vertical boards at the SH1 site in figure 2A. The ADV meter senses and records three-dimensional velocity components to an accuracy of ±1% of the measured velocity (SonTek, 2001) with a resolution of 0.01 cm/s. Because vertical velocity components are negligible, only horizontal velocity components were used to determine flow speeds and directions. A sample plot of velocity vectors showing bihourly flow speeds and directions at site GS-203 during the 2000-2001 wet season is presented in figure 4.
Figure 4. Hydrograph of water depths and bihourly flow velocities plotted as vectors at site GS-203. Velocity vectors are aligned to indicate flow direction relative to magnetic north and plotted in length according to y-axis scale to represent flow speed. Negative velocities indicate southerly flow.
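The speeds and directions plotted in figure 4 follow from the recorded horizontal velocity components by simple vector arithmetic. The sketch below is illustrative only (it assumes components already rotated into north/east coordinates relative to magnetic north) and is not the USGS processing code.

```python
import math

def speed_and_direction(v_north_cm_s, v_east_cm_s):
    """Flow speed (cm/s) and direction of travel (degrees clockwise from north)
    from horizontal velocity components."""
    speed = math.hypot(v_north_cm_s, v_east_cm_s)
    direction = math.degrees(math.atan2(v_east_cm_s, v_north_cm_s)) % 360.0
    return speed, direction

# Example: a typical southwesterly sheet flow of about 1 cm/s
speed, direction = speed_and_direction(v_north_cm_s=-0.7, v_east_cm_s=-0.7)
print(f"speed = {speed:.2f} cm/s, direction = {direction:.0f} degrees")  # ~0.99 cm/s toward 225 degrees
```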
ADV data collected at four sites indicate a range of sheet-flow velocities in typical vegetative communities within ENP. The daily means of all ADV data ranged between 0.20 and 5.16 cm/s, with an overall mean of 1.15 cm/s (Riscassi and Schaffranek, 2002, 2003, and 2004). Ninety percent of all daily mean flow velocities were between 0.46 and 2.29 cm/s. Mean flow velocities were 1.63, 0.82, 0.68, and 1.40 cm/s at directions of 225, 243, 200, and 229 degrees from magnetic north for all data collected at sites SH1, GS-203, GS-33, and GS-36, respectively. These mean flow velocities, although representative of sheet-flow conditions in similarly vegetated areas within ENP, should not be assumed to be typical of flow in compartmentalized regions of the Everglades, such as Water Conservation Areas (WCA).
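Summary statistics of the kind quoted above (daily means, their overall mean, and the band containing 90 percent of the daily means) can be computed from the bihourly records in a few lines; the file and column names below are hypothetical placeholders, not the published data files.

```python
import pandas as pd

# Hypothetical bihourly record with columns: timestamp, speed_cm_s
df = pd.read_csv("gs203_velocity.csv", parse_dates=["timestamp"])

daily_means = df.set_index("timestamp")["speed_cm_s"].resample("D").mean().dropna()

overall_mean = daily_means.mean()
p05, p95 = daily_means.quantile([0.05, 0.95])

print(f"mean of daily mean speeds: {overall_mean:.2f} cm/s")
print(f"90% of daily means fall between {p05:.2f} and {p95:.2f} cm/s")
```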
Inhomogeneity in the speed, direction, and behavior of sheet flow is a function of external and internal forcing mechanisms, both locally and regionally driven. Local factors include water depth; microtopography; the type, amount, and properties of emergent vegetation; and the presence, density, and composition of submerged aquatic plants and (or) periphyton. Regional factors include the water-surface slope; land-surface gradient; vegetative heterogeneity; and the proximity of tree islands, airboat trails, hydraulic structures, roads, culverts, canals, and levees. Meteorological factors such as winds variously and intermittently affect flow conditions. Fires also have a residual effect on sheet-flow behavior (Schaffranek and others, 2003). Several factors affecting sheet-flow conditions are discussed in the following sections.
Data indicate that dynamic variability in flow directions is most evident during low water levels and is considerably damped with increased water depth. Implications are that when water levels are high, regional factors drive flow more uniformly; however, as water levels fall, flow velocities decrease, momentum is reduced, and the flow becomes more susceptible to local forcing mechanisms, such as microtopography and vegetative properties. A graph of flow speed plotted against direction for bihourly velocity data collected at site GS-203 during the 2000-2001 wet season is shown in figure 5. Flow velocities greater than about 0.5 cm/s are more consistent in direction than flow at slower velocities. The standard deviation in flow direction of velocities greater than 0.5 cm/s is less than a third of the standard deviation of flow velocities less than 0.5 cm/s. Near-convergence of flow velocities to a constant direction at faster speeds is clearly evident in figure 5. Analyses of velocity data in conjunction with water levels recorded at a nearby hydrologic monitoring station revealed that the most significant velocity fluctuations occurred during low water depths (see fig. 4). It is likely that flow velocities and sheet-flow conditions in WCA of the Everglades behave similarly to the slow (< 0.5 cm/s) velocity data plotted in figure 5 at nearly all flow depths. However, this hypothesis needs to be tested through concurrent acquisition of synoptic time-series of flow data in wetlands of both ENP and WCA.
Figure 5. Bihourly flow speeds and directions at site GS-203 during 2000-2001 wet season.
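The contrast in directional variability above and below the 0.5 cm/s threshold can be quantified with circular statistics, since flow direction is an angle. The sketch below uses the standard circular standard deviation; the arrays and threshold are placeholders for the bihourly records.

```python
import numpy as np

def circular_std_deg(directions_deg):
    """Circular standard deviation (degrees) of a set of flow directions."""
    theta = np.radians(np.asarray(directions_deg, dtype=float))
    r = np.hypot(np.mean(np.sin(theta)), np.mean(np.cos(theta)))
    return np.degrees(np.sqrt(-2.0 * np.log(r)))

def direction_spread_by_speed(speeds_cm_s, directions_deg, threshold=0.5):
    """Directional spread of slow versus fast sheet flow."""
    speeds = np.asarray(speeds_cm_s, dtype=float)
    dirs = np.asarray(directions_deg, dtype=float)
    return (circular_std_deg(dirs[speeds < threshold]),
            circular_std_deg(dirs[speeds >= threshold]))

# Hypothetical usage with bihourly speed and direction arrays:
# slow_std, fast_std = direction_spread_by_speed(speeds, directions)
# print(f"spread below threshold: {slow_std:.0f} deg; above threshold: {fast_std:.0f} deg")
```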
The vertical flow structure at all ADV sites was fairly uniform, with variations in flow speed and direction corresponding to differences in vegetation composition. Flow velocities at 3-cm depth intervals and volumes of organic material, primarily spikerush and periphyton, occupying the water column at 10-cm depth increments are shown in figure 6 for site GS-33. The reduced flow velocity near the water surface is due to the presence of periphyton, which was concentrated in mats floating near the water surface. At about 10 cm above the plant litter layer the flow speed begins to decline to zero velocity at the top of the litter layer. Vertical velocity profiles at each of the ADV monitoring sites revealed similar damped flow speeds within about 5-10 cm above the plant litter layer. Relatively uniform flow structure was observed throughout the middle to upper part of the water column for uniform vegetation composition. The flow speed and direction measured by the self-recording ADV meter deployed at the site are plotted at its sampling depth in figure 6 to show the close agreement of the two independent velocity measurements.
Figure 6. Flow velocities and volumes of organic material in water column measured at site GS-33 on October 30, 2002 (1330–1415 hours). Red symbols identify flow speed and direction measured at 1600 hours by self-recording ADV meter deployed at site. Note scale of top horizontal axis defining water column volume occupied by material ranges from 0 to 1 percent.
Intermittent storm and rainfall events appeared to have limited, mainly local, influence on flow behavior, but these effects varied greatly depending not only on the amount of submerged vegetation but also on the amount of emergent vegetation sheltering the water surface from wind effects. In general, the relatively deep sample depths of the ADV meters inhibited detection and assessment of the strength of meteorological effects, which tend to be damped with increased distance from the water surface. Some flow data suggested that strong storms affected vegetation composition, which in turn had residual effects on flow behavior, but these effects only persisted for a few days. The regional extent of storm effects on sheet-flow behavior could not be evaluated with any degree of confidence given the limited amount of data and few monitoring stations. Temperature-profile data indicate, however, that persistently strong winds, rainfall events, or cool cloud-covered days have atypical thermal effects on internal flow structure.
Concurrent temperature-profile and flow-velocity data collected at each of four sites have documented thermal effects on internal flow structure. Temperature profiles collected at 5- and 10-cm depth intervals link large fluctuations in flow velocities measured and recorded after sunset to thermal convection (Jenter and others, 2003). Temperature data indicate that the water column is typically isothermal at the beginning of each day, stratifies during the daytime, and de-stratifies during the night due to mixing driven by thermal convection. This mixing process has potentially profound implications on understanding processes of importance to successful implementation of the CERP, including mercury methylation, evapotranspiration, oxidation, and nutrient cycling.
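One simple way to flag the daily stratify-and-mix cycle described above is to track the temperature difference between the top and bottom of the water column. The sketch below is a minimal classifier; the file name, column ordering (bottom to top), and the 0.5 °C threshold are all illustrative assumptions rather than values from the study.

```python
import pandas as pd

def classify_stratification(profile, threshold_c=0.5):
    """Label each time step 'stratified' or 'mixed' from a water-column temperature profile.

    `profile` is a DataFrame indexed by time whose columns are temperatures (deg C)
    ordered from just above the plant litter layer (first column) to near the
    water surface (last column); the 0.5 deg C threshold is purely illustrative.
    """
    top_minus_bottom = profile.iloc[:, -1] - profile.iloc[:, 0]
    return top_minus_bottom.apply(lambda d: "stratified" if d > threshold_c else "mixed")

# Hypothetical usage:
# profile = pd.read_csv("gs33_temperature.csv", index_col="timestamp", parse_dates=True)
# labels = classify_stratification(profile)
# print(labels.groupby(labels.index.hour).apply(lambda s: (s == "stratified").mean()))
```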
A study of sheet flow in ENP from July 1999 to July 2003 has produced data defining flow velocities in varied vegetative communities and yielded insight into factors that affect flow behavior of importance to Everglades restoration. The mean of all daily mean flow velocities for the four sites was 1.15 cm/s. Ninety percent of all daily mean flow velocities were between 0.46 and 2.29 cm/s. Mean flow directions at all sites were southwesterly ranging between 200 and 243 degrees relative to magnetic north. Flows at velocities greater than 0.5 cm/s were more consistent in direction than flows at slower velocities. Vertical flow structure was fairly uniform and correlated to vegetation composition. Meteorological effects on flow were limited mainly to significant storm events, although data limitations precluded more definitive analyses. Thermal-driven vertical mixing was documented to occur nearly daily in the absence of storm events and passage of weather fronts. More such flow data would be desirable to define and evaluate sheet-flow conditions in other areas of ENP and in other regions of the Everglades to aid restoration efforts. Comparable flow data in WCA would enable comparison of sheet-flow behavior in compartmentalized regions with free-flowing conditions in ENP and provide a means to assess the role of flow in shaping and preserving the Everglades landscape.
Support for this study was provided by the USGS Priority Ecosystems Science Initiative. Nancy Rybicki and Alfonso Lombana, USGS, provided assessments of vegetative properties at the monitoring sites.
Jenter, H.L., Schaffranek, R.W., and Smith, T.J., 2003, Thermally driven vertical mixing in the Everglades, Joint Conference on the Science and Restoration of the Greater Everglades and Florida Bay, Palm Harbor, FL, April 13-18, 2003, pp 290-292, accessed November 3, 2004 at http://sofia.usgs.gov/projects/vege_resist/vertmix_geer03abs.html
National Research Council, 2003, Does water flow influence Everglades landscape patterns, Washington, D.C., The National Academies Press, 41 p, accessed November 3, 2004 at http://books.nap.edu/catalog/10758.html
Riscassi, A.L., and Schaffranek, R.W., 2002, Flow velocity, water temperature, and conductivity in Shark River Slough, Everglades National Park, Florida: July 1999 – August 2001, U.S. Geological Survey Open-File Report 02-159, 32 p.
Riscassi, A.L., and Schaffranek, R.W., 2003, Flow velocity, water temperature, and conductivity in Shark River Slough, Everglades National Park, Florida: August 2001 – June 2002, U.S. Geological Survey Open-File Report 03-348, 37 p.
Riscassi, A.L., and Schaffranek, R.W., 2004, Flow velocity, water temperature, and conductivity in Shark River Slough, Everglades National Park, Florida: June 2002 – July 2003, U.S. Geological Survey Open-File Report 04-1233, 56 p.
Schaffranek, R.W., Riscassi, A.L., Rybicki, N.B., and Lombana, A.V., 2003, Fire effects on flow in vegetated wetlands of the Everglades, Joint Conference on the Science and Restoration of the Greater Everglades and Florida Bay, Palm Harbor, FL, April 13-18, 2003, pp 470-472, accessed November 3, 2004 at http://sofia.usgs.gov/projects/dynamics/fireeffects_03geerab.html
Science Coordination Team, 2003, The role of flow in the Everglades ridge and slough landscape, South Florida Ecosystem Restoration Working Group, 62 p, accessed November 3, 2004 at http://www.sfrestore.org/sct/docs/.
SonTek, 2001, SonTek ADV acoustic Doppler velocimeter technical documentation: San Diego, CA, 202 p.
For more information contact:
Raymond W. Schaffranek
U.S. Geological Survey
12201 Sunrise Valley Drive
National Center, Mail Stop 430
Reston, VA 20192
(703) 648-5891 (voice)
(703) 648-5484 (fax)
Any use of trade, product, or firm names in this publication is for descriptive purposes only and does not imply endorsement by the U.S. Government.
In nuclear physics and nuclear chemistry, nuclear fission is either a nuclear reaction or a radioactive decay process in which the nucleus of an atom splits into smaller parts (lighter nuclei). The fission process often produces free neutrons and photons (in the form of gamma rays), and releases a very large amount of energy even by the energetic standards of radioactive decay.
Nuclear fission of heavy elements was discovered in 1938 by Lise Meitner, Otto Hahn, Fritz Strassmann, and Otto Robert Frisch. It was named by analogy with biological fission of living cells. It is an exothermic reaction which can release large amounts of energy both as electromagnetic radiation and as kinetic energy of the fragments (heating the bulk material where fission takes place). In order for fission to produce energy, the total binding energy of the resulting elements must be greater than that of the starting element.
Fission is a form of nuclear transmutation because the resulting fragments are not the same element as the original atom. The two nuclei produced are most often of comparable but slightly different sizes, typically with a mass ratio of products of about 3 to 2, for common fissile isotopes. Most fissions are binary fissions (producing two charged fragments), but occasionally (2 to 4 times per 1000 events), three positively charged fragments are produced, in a ternary fission. The smallest of these fragments in ternary processes ranges in size from a proton to an argon nucleus.
Fission as encountered in the modern world is usually a deliberately produced man-made nuclear reaction induced by a neutron. It is less commonly encountered as a natural form of spontaneous radioactive decay (not requiring a neutron), occurring especially in very high-mass-number isotopes. The unpredictable composition of the products (which vary in a broad probabilistic and somewhat chaotic manner) distinguishes fission from purely quantum-tunnelling processes such as proton emission, alpha decay and cluster decay, which give the same products each time. Nuclear fission produces energy for nuclear power and to drive the explosion of nuclear weapons. Both uses are possible because certain substances called nuclear fuels undergo fission when struck by fission neutrons, and in turn emit neutrons when they break apart. This makes possible a self-sustaining nuclear chain reaction that releases energy at a controlled rate in a nuclear reactor or at a very rapid uncontrolled rate in a nuclear weapon.
The amount of free energy contained in nuclear fuel is millions of times the amount of free energy contained in a similar mass of chemical fuel such as gasoline, making nuclear fission a very dense source of energy. The products of nuclear fission, however, are on average far more radioactive than the heavy elements which are normally fissioned as fuel, and remain so for significant amounts of time, giving rise to a nuclear waste problem. Concerns over nuclear waste accumulation and over the destructive potential of nuclear weapons may counterbalance the desirable qualities of fission as an energy source, and give rise to ongoing political debate over nuclear power.
Physical overview
Nuclear fission can occur without neutron bombardment, as a type of radioactive decay. This type of fission (called spontaneous fission) is rare except in a few heavy isotopes. In engineered nuclear devices, essentially all nuclear fission occurs as a "nuclear reaction" — a bombardment-driven process that results from the collision of two subatomic particles. In nuclear reactions, a subatomic particle collides with an atomic nucleus and causes changes to it. Nuclear reactions are thus driven by the mechanics of bombardment, not by the relatively constant exponential decay and half-life characteristic of spontaneous radioactive processes.
Many types of nuclear reactions are currently known. Nuclear fission differs importantly from other types of nuclear reactions, in that it can be amplified and sometimes controlled via a nuclear chain reaction (one type of general chain reaction). In such a reaction, free neutrons released by each fission event can trigger yet more events, which in turn release more neutrons and cause more fissions.
The chemical element isotopes that can sustain a fission chain reaction are called nuclear fuels, and are said to be fissile. The most common nuclear fuels are 235U (the isotope of uranium with an atomic mass of 235 and of use in nuclear reactors) and 239Pu (the isotope of plutonium with an atomic mass of 239). These fuels break apart into a bimodal range of chemical elements with atomic masses centering near 95 and 135 u (fission products). Most nuclear fuels undergo spontaneous fission only very slowly, decaying instead mainly via an alpha/beta decay chain over periods of millennia to eons. In a nuclear reactor or nuclear weapon, the overwhelming majority of fission events are induced by bombardment with another particle, a neutron, which is itself produced by prior fission events.
Nuclear fissions in fissile fuels are the result of the nuclear excitation energy produced when a fissile nucleus captures a neutron. This energy, resulting from the neutron capture, is a result of the attractive nuclear force acting between the neutron and nucleus. It is enough to deform the nucleus into a double-lobed "drop," to the point that nuclear fragments exceed the distances at which the nuclear force can hold two groups of charged nucleons together, and when this happens, the two fragments complete their separation and then are driven further apart by their mutually repulsive charges, in a process which becomes irreversible with greater and greater distance. A similar process occurs in fissionable isotopes (such as uranium-238), but in order to fission, these isotopes require additional energy provided by fast neutrons (such as produced by nuclear fusion in thermonuclear weapons).
The liquid drop model of the atomic nucleus predicts equal-sized fission products as a mechanical outcome of nuclear deformation. The more sophisticated nuclear shell model is needed to mechanistically explain the route to the more energetically favorable outcome, in which one fission product is slightly smaller than the other.
The most common fission process is binary fission, and it produces the fission products noted above, at 95±15 and 135±15 u. However, the binary process happens merely because it is the most probable. In anywhere from 2 to 4 fissions per 1000 in a nuclear reactor, a process called ternary fission produces three positively charged fragments (plus neutrons) and the smallest of these may range from so small a charge and mass as a proton (Z=1), to as large a fragment as argon (Z=18). The most common small fragments, however, are composed of 90% helium-4 nuclei with more energy than alpha particles from alpha decay (so-called "long range alphas" at ~ 16 MeV), plus helium-6 nuclei, and tritons (the nuclei of tritium). The ternary process is less common, but still ends up producing significant helium-4 and tritium gas buildup in the fuel rods of modern nuclear reactors.
The fission of a heavy nucleus requires a total input energy of about 7 to 8 million electron volts (MeV) to initially overcome the strong force which holds the nucleus into a spherical or nearly spherical shape, and from there, deform it into a two-lobed ("peanut") shape in which the lobes are able to continue to separate from each other, pushed by their mutual positive charge, in the most common process of binary fission (two positively charged fission products + neutrons). Once the nuclear lobes have been pushed to a critical distance, beyond which the short range strong force can no longer hold them together, the process of their separation proceeds from the energy of the (longer range) electromagnetic repulsion between the fragments. The result is two fission fragments moving away from each other, at high energy.
About 6 MeV of the fission-input energy is supplied by the simple binding of an extra neutron to the heavy nucleus via the strong force; however, in many fissionable isotopes, this amount of energy is not enough for fission. Uranium-238, for example, has a near-zero fission cross section for neutrons of less than one MeV energy. If no additional energy is supplied by any other mechanism, the nucleus will not fission, but will merely absorb the neutron, as happens when U-238 absorbs slow and even some fraction of fast neutrons, to become U-239. The remaining energy to initiate fission can be supplied by two other mechanisms: one of these is more kinetic energy of the incoming neutron, which is increasingly able to fission a fissionable heavy nucleus as it exceeds a kinetic energy of one MeV or more (so-called fast neutrons). Such high energy neutrons are able to fission U-238 directly (see thermonuclear weapon for application, where the fast neutrons are supplied by nuclear fusion). However, this process cannot happen to a great extent in a nuclear reactor, as too small a fraction of the fission neutrons produced by any type of fission have enough energy to efficiently fission U-238 (fission neutrons have a median energy of 2 MeV, but a mode of only 0.75 MeV, meaning half of them have less than this insufficient energy).
Among the heavy actinide elements, however, those isotopes that have an odd number of neutrons (such as U-235 with 143 neutrons) bind an extra neutron with an additional 1 to 2 MeV of energy over an isotope of the same element with an even number of neutrons (such as U-238 with 146 neutrons). This extra binding energy is made available as a result of the mechanism of neutron pairing effects. This extra energy results from the Pauli exclusion principle allowing an extra neutron to occupy the same nuclear orbital as the last neutron in the nucleus, so that the two form a pair. In such isotopes, therefore, no neutron kinetic energy is needed, for all the necessary energy is supplied by absorption of any neutron, either of the slow or fast variety (the former are used in moderated nuclear reactors, and the latter are used in fast neutron reactors, and in weapons). As noted above, the subgroup of fissionable elements that may be fissioned efficiently with their own fission neutrons (thus potentially causing a nuclear chain reaction in relatively small amounts of the pure material) are termed "fissile." Examples of fissile isotopes are U-235 and plutonium-239.
Typical fission events release about two hundred million electron volts (200 MeV) of energy per fission. The exact isotope which is fissioned, and whether or not it is fissionable or fissile, has only a small impact on the amount of energy released. This can be easily seen by examining the curve of binding energy and noting that the average binding energy of the actinide nuclides beginning with uranium is around 7.6 MeV per nucleon. Looking further left on the curve of binding energy, where the fission products cluster, it is easily observed that the binding energy of the fission products tends to center around 8.5 MeV per nucleon. Thus, in any fission event of an isotope in the actinides' range of mass, roughly 0.9 MeV is released per nucleon of the starting element. The fission of U-235 by a slow neutron yields nearly identical energy to the fission of U-238 by a fast neutron. This energy-release profile holds true for thorium and the various minor actinides as well.
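The ~200 MeV figure follows directly from the binding-energy readings quoted above; a minimal check, using 236 nucleons for the compound nucleus formed when U-235 absorbs a neutron:

```python
# Rough energy release per fission from the binding-energy curve readings quoted above.
BE_ACTINIDE_MEV_PER_NUCLEON = 7.6   # heavy actinide region of the curve
BE_PRODUCTS_MEV_PER_NUCLEON = 8.5   # fission-product region of the curve
NUCLEONS = 236                      # U-235 plus the absorbed neutron

energy_mev = (BE_PRODUCTS_MEV_PER_NUCLEON - BE_ACTINIDE_MEV_PER_NUCLEON) * NUCLEONS
print(f"~{energy_mev:.0f} MeV per fission")  # about 212 MeV, i.e. roughly 200 MeV
```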
By contrast, most chemical oxidation reactions (such as burning coal or TNT) release at most a few eV per event. So, nuclear fuel contains at least ten million times more usable energy per unit mass than does chemical fuel. The energy of nuclear fission is released as kinetic energy of the fission products and fragments, and as electromagnetic radiation in the form of gamma rays; in a nuclear reactor, the energy is converted to heat as the particles and gamma rays collide with the atoms that make up the reactor and its working fluid, usually water or occasionally heavy water.
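The "ten million times" claim can be checked with an order-of-magnitude estimate, assuming complete fission at about 200 MeV per U-235 nucleus and using the conventional TNT energy equivalent; the sketch below is illustrative, not a precise comparison of real fuels.

```python
# Order-of-magnitude comparison of nuclear and chemical energy density.
MEV_TO_J = 1.602e-13
U235_ATOM_MASS_KG = 235 * 1.6605e-27      # approximate mass of one U-235 atom
ENERGY_PER_FISSION_J = 200 * MEV_TO_J

nuclear_j_per_kg = ENERGY_PER_FISSION_J / U235_ATOM_MASS_KG   # ~8e13 J/kg if fully fissioned
tnt_j_per_kg = 4.184e6                                        # conventional "ton of TNT" scale

print(f"nuclear (complete fission): {nuclear_j_per_kg:.1e} J/kg")
print(f"ratio to TNT-scale chemical energy: {nuclear_j_per_kg / tnt_j_per_kg:.1e}")  # ~2e7
```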
When a uranium nucleus fissions into two daughter nuclei fragments, about 0.1 percent of the mass of the uranium nucleus appears as the fission energy of ~200 MeV. For uranium-235 (total mean fission energy 202.5 MeV), typically ~169 MeV appears as the kinetic energy of the daughter nuclei, which fly apart at about 3% of the speed of light, due to Coulomb repulsion. Also, an average of 2.5 neutrons are emitted, with a mean kinetic energy per neutron of ~2 MeV (total of 4.8 MeV). The fission reaction also releases ~7 MeV in prompt gamma ray photons. The latter figure means that a nuclear fission explosion or criticality accident emits about 3.5% of its energy as gamma rays, less than 2.5% of its energy as fast neutrons (total of both types of radiation ~ 6%), and the rest as kinetic energy of fission fragments (this appears almost immediately when the fragments impact surrounding matter, as simple heat). In an atomic bomb, this heat may serve to raise the temperature of the bomb core to 100 million kelvin and cause secondary emission of soft X-rays, which convert some of this energy to ionizing radiation. However, in nuclear reactors, the fission fragment kinetic energy remains as low-temperature heat, which itself causes little or no ionization.
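The percentages quoted above follow from the uranium-235 energy budget given in the same paragraph; the short sketch below simply tabulates the prompt components as fractions of the 202.5 MeV total.

```python
# Prompt components of the ~202.5 MeV released per U-235 fission, per the figures above.
TOTAL_MEV = 202.5
prompt_components = {
    "fragment kinetic energy": 169.0,
    "prompt neutrons (about 2.5 at ~2 MeV each)": 4.8,
    "prompt gamma rays": 7.0,
}
for name, mev in prompt_components.items():
    print(f"{name:44s} {mev:6.1f} MeV  ({100 * mev / TOTAL_MEV:4.1f}%)")
# gamma share ~3.5% and neutron share ~2.4%, consistent with the text.
```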
So-called neutron bombs (enhanced radiation weapons) have been constructed which release a larger fraction of their energy as ionizing radiation (specifically, neutrons), but these are all thermonuclear devices which rely on the nuclear fusion stage to produce the extra radiation. The energy dynamics of pure fission bombs always remain at about 6% yield of the total in radiation, as a prompt result of fission.
The total prompt fission energy amounts to about 181 MeV, or ~ 89% of the total energy which is eventually released by fission over time. The remaining ~ 11% is released in beta decays which have various half-lives, but begin as a process in the fission products immediately; and in delayed gamma emissions associated with these beta decays. For example, in uranium-235 this delayed energy is divided into about 6.5 MeV in betas, 8.8 MeV in antineutrinos (released at the same time as the betas), and finally, an additional 6.3 MeV in delayed gamma emission from the excited beta-decay products (for a mean total of ~10 gamma ray emissions per fission, in all). Thus, an additional 6% of the total energy of fission is also released eventually as non-prompt ionizing radiation, and this is about evenly divided between gamma and beta ray energy. The remainder of the energy is emitted as antineutrinos, which as a practical matter are not considered ionizing radiation (see below).
The 8.8 MeV/202.5 MeV = 4.3% of the energy which is released as antineutrinos is not captured by the reactor material as heat, and escapes directly through all materials (including the Earth) at nearly the speed of light, and into interplanetary space (the amount absorbed is minuscule). Neutrino radiation is ordinarily not classed as ionizing radiation, because it is almost entirely not absorbed and therefore does not produce effects. Almost all of the rest of the radiation (beta and gamma radiation) is eventually converted to heat in a reactor core or its shielding.
Some processes involving neutrons are notable for absorbing or finally yielding energy — for example neutron kinetic energy does not yield heat immediately if the neutron is captured by a uranium-238 atom to breed plutonium-239, but this energy is emitted if the plutonium-239 is later fissioned. On the other hand, so-called delayed neutrons emitted as radioactive decay products with half-lives up to several minutes, from fission-daughters, are very important to reactor control, because they give a characteristic "reaction" time for the total nuclear reaction to double in size, if the reaction is run in a "delayed-critical" zone which deliberately relies on these neutrons for a supercritical chain-reaction (one in which each fission cycle yields more neutrons than it absorbs). Without their existence, the nuclear chain-reaction would be prompt critical and increase in size faster than it could be controlled by human intervention. In this case, the first experimental atomic reactors would have run away to a dangerous and messy "prompt critical reaction" before their operators could have manually shut them down (for this reason, designer Enrico Fermi included radiation-counter-triggered control rods, suspended by electromagnets, which could automatically drop into the center of Chicago Pile-1). If these delayed neutrons are captured without producing fissions, they produce heat as well.
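The stabilizing effect of delayed neutrons on the doubling time mentioned above can be illustrated with a deliberately crude one-group model, n(t) = n0 * exp((k - 1) * t / Lambda). The multiplication factor and generation times below are illustrative round numbers, not values from the text; real analysis uses the full point-kinetics equations with several delayed-neutron precursor groups.

```python
import math

def doubling_time_s(k_eff, generation_time_s):
    """Doubling time of the neutron population in the crude one-group model
    n(t) = n0 * exp((k_eff - 1) * t / generation_time)."""
    return math.log(2) * generation_time_s / (k_eff - 1)

K_EFF = 1.001               # illustrative slight supercriticality
PROMPT_GEN_TIME_S = 1e-4    # typical prompt-neutron generation time (illustrative)
EFFECTIVE_GEN_TIME_S = 0.1  # rough effective value once delayed neutrons are included

print(f"prompt neutrons only : population doubles in ~{doubling_time_s(K_EFF, PROMPT_GEN_TIME_S):.2f} s")
print(f"with delayed neutrons: population doubles in ~{doubling_time_s(K_EFF, EFFECTIVE_GEN_TIME_S):.0f} s")
```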
Product nuclei and binding energy
In fission there is a preference to yield fragments with even proton numbers, which is called the odd-even effect on the fragments' charge distribution. However, no odd-even effect is observed on the fragment mass-number distribution. This result is attributed to nucleon pair breaking.
In nuclear fission events the nuclei may break into any combination of lighter nuclei, but the most common event is not fission to equal-mass nuclei of about mass 120; the most common event (depending on isotope and process) is a slightly unequal fission in which one daughter nucleus has a mass of about 90 to 100 u and the other the remaining 130 to 140 u. Unequal fissions are energetically more favorable because this allows one product to be closer to the energetic minimum near mass 60 u (only a quarter of the average fissionable mass), while the other nucleus, with mass about 135 u, is still not far out of the range of the most tightly bound nuclei (another statement of this is that the atomic binding energy curve is slightly steeper to the left of mass 120 u than to the right of it).
Origin of the active energy and the curve of binding energy
Nuclear fission of heavy elements produces energy because the specific binding energy (binding energy per mass) of intermediate-mass nuclei with atomic numbers and atomic masses close to 62Ni and 56Fe is greater than the nucleon-specific binding energy of very heavy nuclei, so that energy is released when heavy nuclei are broken apart. The total rest mass of the fission products (Mp) from a single reaction is less than the mass of the original fuel nucleus (M). The excess mass Δm = M − Mp is the invariant mass of the energy that is released as photons (gamma rays) and kinetic energy of the fission fragments, according to the mass-energy equivalence formula E = mc².
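As a worked example of this mass-energy bookkeeping, the sketch below evaluates Δm and E = Δm c² for one of the many possible fission channels of uranium-235, using approximate published atomic masses; it covers only the prompt release of this particular channel, with the later beta decays of the fragments adding further energy.

```python
# E = (M - Mp) * c^2 for one possible fission channel of U-235:
#   n + U-235 -> Ba-141 + Kr-92 + 3 n
# Atomic masses below are approximate values in unified atomic mass units (u).
U_TO_MEV = 931.494                      # energy equivalent of 1 u

m_n, m_U235 = 1.008665, 235.043930
m_Ba141, m_Kr92 = 140.914411, 91.926156

delta_m = (m_n + m_U235) - (m_Ba141 + m_Kr92 + 3 * m_n)   # invariant mass that disappears
print(f"mass defect = {delta_m:.4f} u  ->  about {delta_m * U_TO_MEV:.0f} MeV released promptly")
```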
The variation in specific binding energy with atomic number is due to the interplay of the two fundamental forces acting on the component nucleons (protons and neutrons) that make up the nucleus. Nuclei are bound by an attractive nuclear force between nucleons, which overcomes the electrostatic repulsion between protons. However, the nuclear force acts only over relatively short ranges (a few nucleon diameters), since it follows an exponentially decaying Yukawa potential which makes it insignificant at longer distances. The electrostatic repulsion has a longer range, since it decays by an inverse-square rule, so that nuclei larger than about 12 nucleons in diameter reach a point at which the total electrostatic repulsion overcomes the nuclear force and causes them to be spontaneously unstable. For the same reason, larger nuclei (more than about eight nucleons in diameter) are less tightly bound per unit mass than are smaller nuclei; breaking a large nucleus into two or more intermediate-sized nuclei releases energy. The origin of this energy is the nuclear force, which acts more efficiently in intermediate-sized nuclei because each nucleon there has more neighbors within the short-range attraction of the force. The nucleons in such nuclei are therefore more tightly bound, and the difference in binding energy is released.
Also because of the short range of the strong binding force, large stable nuclei must contain proportionally more neutrons than do the lightest elements, which are most stable with a 1 to 1 ratio of protons and neutrons. Nuclei which have more than 20 protons cannot be stable unless they have more than an equal number of neutrons. Extra neutrons stabilize heavy elements because they add to strong-force binding (which acts between all nucleons) without adding to proton–proton repulsion. Fission products have, on average, about the same ratio of neutrons and protons as their parent nucleus, and are therefore usually unstable to beta decay (which changes neutrons to protons) because they have proportionally too many neutrons compared to stable isotopes of similar mass.
This tendency for fission product nuclei to beta-decay is the fundamental cause of the problem of radioactive high level waste from nuclear reactors. Fission products tend to be beta emitters, emitting fast-moving electrons to conserve electric charge, as excess neutrons convert to protons in the fission-product atoms. See Fission products (by element) for a description of fission products sorted by element.
Chain reactions
Several heavy elements, such as uranium, thorium, and plutonium, undergo both spontaneous fission, a form of radioactive decay, and induced fission, a form of nuclear reaction. Elemental isotopes that undergo induced fission when struck by a free neutron are called fissionable; isotopes that undergo fission when struck by a thermal, slow-moving neutron are also called fissile. A few particularly fissile and readily obtainable isotopes (notably 233U, 235U and 239Pu) are called nuclear fuels because they can sustain a chain reaction and can be obtained in large enough quantities to be useful.
All fissionable and fissile isotopes undergo a small amount of spontaneous fission which releases a few free neutrons into any sample of nuclear fuel. Such neutrons would escape rapidly from the fuel and become free neutrons, with a mean lifetime of about 15 minutes before decaying into protons and beta particles. However, neutrons almost invariably impact and are absorbed by other nuclei in the vicinity long before this happens (newly created fission neutrons move at about 7% of the speed of light, and even moderated neutrons move at about 8 times the speed of sound). Some neutrons will impact fuel nuclei and induce further fissions, releasing yet more neutrons. If enough nuclear fuel is assembled in one place, or if the escaping neutrons are sufficiently contained, then these freshly emitted neutrons outnumber the neutrons that escape from the assembly, and a sustained nuclear chain reaction will take place.
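The speed figures in parentheses above can be reproduced from the neutron kinetic energies involved (about 2 MeV for a typical fission neutron, about 0.025 eV for a fully moderated, thermal neutron). The sketch below uses the non-relativistic formula, which is adequate at these energies; the "times the speed of sound" comparison depends on the reference sound speed and is only a rough figure.

```python
import math

MEV_TO_J = 1.602e-13
NEUTRON_MASS_KG = 1.675e-27
SPEED_OF_LIGHT = 3.0e8        # m/s
SPEED_OF_SOUND_AIR = 343.0    # m/s, reference value for the rough comparison in the text

def neutron_speed(energy_mev):
    """Non-relativistic neutron speed from kinetic energy (fine at MeV-scale energies)."""
    return math.sqrt(2 * energy_mev * MEV_TO_J / NEUTRON_MASS_KG)

fast = neutron_speed(2.0)          # typical fission-neutron energy
thermal = neutron_speed(0.025e-6)  # thermal neutron, about 0.025 eV

print(f"fast neutron   : {fast:.2e} m/s  (~{100 * fast / SPEED_OF_LIGHT:.0f}% of c)")
print(f"thermal neutron: {thermal:.0f} m/s (~{thermal / SPEED_OF_SOUND_AIR:.0f}x the speed of sound in air)")
```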
An assembly that supports a sustained nuclear chain reaction is called a critical assembly or, if the assembly is almost entirely made of a nuclear fuel, a critical mass. The word "critical" refers to a cusp in the behavior of the differential equation that governs the number of free neutrons present in the fuel: if less than a critical mass is present, then the amount of neutrons is determined by radioactive decay, but if a critical mass or more is present, then the amount of neutrons is controlled instead by the physics of the chain reaction. The actual mass of a critical mass of nuclear fuel depends strongly on the geometry and surrounding materials.
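The "cusp" in behavior described above can be seen in a toy generation-by-generation model: with an effective multiplication factor k below one the free-neutron population dies away, at exactly one it is self-sustaining, and above one it grows exponentially. The values of k and the starting population below are purely illustrative.

```python
def neutron_population(k_eff, generations, n0=1000.0):
    """Free-neutron population after successive fission generations in the
    crude model n_next = k_eff * n."""
    n = n0
    history = [n]
    for _ in range(generations):
        n *= k_eff
        history.append(n)
    return history

for k in (0.95, 1.00, 1.05):   # subcritical, critical, supercritical (illustrative)
    final = neutron_population(k, 100)[-1]
    print(f"k_eff = {k:.2f}: after 100 generations, n = {final:.3g}")
```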
Not all fissionable isotopes can sustain a chain reaction. For example, 238U, the most abundant form of uranium, is fissionable but not fissile: it undergoes induced fission when impacted by an energetic neutron with over 1 MeV of kinetic energy. However, too few of the neutrons produced by 238U fission are energetic enough to induce further fissions in 238U, so no chain reaction is possible with this isotope. Instead, bombarding 238U with slow neutrons causes it to absorb them (becoming 239U) and decay by beta emission to 239Np which then decays again by the same process to 239Pu; that process is used to manufacture 239Pu in breeder reactors. In-situ plutonium production also contributes to the neutron chain reaction in other types of reactors after sufficient plutonium-239 has been produced, since plutonium-239 is also a fissile element which serves as fuel. It is estimated that up to half of the power produced by a standard "non-breeder" reactor is produced by the fission of plutonium-239 produced in place, over the total life-cycle of a fuel load.
Fissionable, non-fissile isotopes can be used as a fission energy source even without a chain reaction. Bombarding 238U with fast neutrons induces fissions, releasing energy as long as the external neutron source is present. This is an important effect in all reactors, where fast neutrons from the fissile isotope can cause the fission of nearby 238U nuclei, which means that some small part of the 238U is "burned up" in all nuclear fuels, especially in fast breeder reactors that operate with higher-energy neutrons. That same fast-fission effect is used to augment the energy released by modern thermonuclear weapons, by jacketing the weapon with 238U to react with neutrons released by nuclear fusion at the center of the device.
Fission reactors
Critical fission reactors are the most common type of nuclear reactor. In a critical fission reactor, neutrons produced by fission of fuel atoms are used to induce yet more fissions, to sustain a controllable amount of energy release. Devices that produce engineered but non-self-sustaining fission reactions are subcritical fission reactors. Such devices use radioactive decay or particle accelerators to trigger fissions.
Critical fission reactors are built for three primary purposes, which typically involve different engineering trade-offs to take advantage of either the heat or the neutrons produced by the fission chain reaction:
- power reactors are intended to produce heat for nuclear power, either as part of a generating station or a local power system such as a nuclear submarine.
- research reactors are intended to produce neutrons and/or activate radioactive sources for scientific, medical, engineering, or other research purposes.
- breeder reactors are intended to produce nuclear fuels in bulk from more abundant isotopes. The better known fast breeder reactor makes 239Pu (a nuclear fuel) from the naturally very abundant 238U (not a nuclear fuel). Thermal breeder reactors previously tested using 232Th to breed the fissile isotope 233U (thorium fuel cycle) continue to be studied and developed.
While, in principle, all fission reactors can act in all three capacities, in practice the tasks lead to conflicting engineering goals and most reactors have been built with only one of the above tasks in mind. (There are several early counter-examples, such as the Hanford N reactor, now decommissioned). Power reactors generally convert the kinetic energy of fission products into heat, which is used to heat a working fluid and drive a heat engine that generates mechanical or electrical power. The working fluid is usually water with a steam turbine, but some designs use other materials such as gaseous helium. Research reactors produce neutrons that are used in various ways, with the heat of fission being treated as an unavoidable waste product. Breeder reactors are a specialized form of research reactor, with the caveat that the sample being irradiated is usually the fuel itself, a mixture of 238U and 235U. For a more detailed description of the physics and operating principles of critical fission reactors, see nuclear reactor physics. For a description of their social, political, and environmental aspects, see nuclear power.
Fission bombs
One class of nuclear weapon, a fission bomb (not to be confused with the fusion bomb), otherwise known as an atomic bomb or atom bomb, is a fission reactor designed to liberate as much energy as possible as rapidly as possible, before the released energy causes the reactor to explode (and the chain reaction to stop). Development of nuclear weapons was the motivation behind early research into nuclear fission: the Manhattan Project of the U.S. military during World War II carried out most of the early scientific work on fission chain reactions, culminating in the Trinity test bomb and the Little Boy and Fat Man bombs that were exploded over the cities of Hiroshima and Nagasaki, Japan, in August 1945.
Even the first fission bombs were thousands of times more explosive than a comparable mass of chemical explosive. For example, Little Boy weighed a total of about four tons (of which 60 kg was nuclear fuel) and was 11 feet (3.4 m) long; it also yielded an explosion equivalent to about 15 kilotons of TNT, destroying a large part of the city of Hiroshima. Modern nuclear weapons (which include a thermonuclear fusion as well as one or more fission stages) are hundreds of times more energetic for their weight than the first pure fission atomic bombs (see nuclear weapon yield), so that a modern single missile warhead bomb weighing less than 1/8 as much as Little Boy (see for example W88) has a yield of 475,000 tons of TNT, and could bring destruction to about 10 times the city area.
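The public figures quoted above (a yield of about 15 kilotons from roughly 60 kg of fuel) also show how inefficiently the earliest designs used their fuel. The sketch below is an order-of-magnitude estimate only, assuming about 200 MeV per fission and the conventional kiloton-of-TNT energy equivalent.

```python
# Rough fission efficiency of the first gun-type bomb from the public figures quoted above.
KT_TNT_J = 4.184e12
MEV_TO_J = 1.602e-13
AVOGADRO = 6.022e23

yield_j = 15 * KT_TNT_J                        # ~15 kilotons of TNT
fuel_kg = 60.0                                 # ~60 kg of highly enriched uranium
atoms = fuel_kg * 1000.0 / 235.0 * AVOGADRO    # grams divided by molar mass, times Avogadro
complete_fission_j = atoms * 200 * MEV_TO_J    # energy if every nucleus fissioned

print(f"complete fission would yield ~{complete_fission_j / KT_TNT_J:.0f} kt")  # ~1200 kt
print(f"implied efficiency ~{100 * yield_j / complete_fission_j:.1f}%")         # on the order of 1%
```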
While the fundamental physics of the fission chain reaction in a nuclear weapon is similar to the physics of a controlled nuclear reactor, the two types of device must be engineered quite differently (see nuclear reactor physics). A nuclear bomb is designed to release all its energy at once, while a reactor is designed to generate a steady supply of useful power. While overheating of a reactor can lead to, and has led to, meltdown and steam explosions, the much lower uranium enrichment makes it impossible for a nuclear reactor to explode with the same destructive power as a nuclear weapon. It is also difficult to extract useful power from a nuclear bomb, although at least one rocket propulsion system, Project Orion, was intended to work by exploding fission bombs behind a massively padded and shielded spacecraft.
The strategic importance of nuclear weapons is a major reason why the technology of nuclear fission is politically sensitive. Viable fission bomb designs are, arguably, within the capabilities of many, being relatively simple from an engineering viewpoint. However, the difficulty of obtaining fissile nuclear material to realize the designs is the key to the relative unavailability of nuclear weapons to all but modern industrialized governments with special programs to produce fissile materials (see uranium enrichment and nuclear fuel cycle).
Discovery of nuclear fission
The discovery of nuclear fission occurred in 1938, following nearly five decades of work on the science of radioactivity and the elaboration of new nuclear physics that described the components of atoms. In 1911, Ernest Rutherford proposed a model of the atom in which a very small, dense and positively charged nucleus of protons (the neutron had not yet been discovered) was surrounded by orbiting, negatively charged electrons (the Rutherford model). Niels Bohr improved upon this in 1913 by reconciling the quantum behavior of electrons (the Bohr model). Work by Henri Becquerel, Marie Curie, Pierre Curie, and Rutherford further elaborated that the nucleus, though tightly bound, could undergo different forms of radioactive decay, and thereby transmute into other elements. (For example, by alpha decay: the emission of an alpha particle—two protons and two neutrons bound together into a particle identical to a helium nucleus.)
Some work in nuclear transmutation had been done. In 1917, Rutherford was able to accomplish transmutation of nitrogen into oxygen, using alpha particles directed at nitrogen (14N + α → 17O + p). This was the first observation of a nuclear reaction, that is, a reaction in which particles from one decay are used to transform another atomic nucleus. Eventually, in 1932, a fully artificial nuclear reaction and nuclear transmutation was achieved by Rutherford's colleagues Ernest Walton and John Cockcroft, who used artificially accelerated protons against lithium-7 to split this nucleus into two alpha particles. The feat was popularly known as "splitting the atom", although it was not the modern nuclear fission reaction later discovered in heavy elements, which is discussed below. Meanwhile, the possibility of combining nuclei—nuclear fusion—had been studied in connection with understanding the processes which power stars. The first artificial fusion reaction had been achieved by Mark Oliphant in 1932, using accelerated deuterium nuclei (each consisting of a single proton bound to a single neutron) to create helium nuclei.
After English physicist James Chadwick discovered the neutron in 1932, Enrico Fermi and his colleagues in Rome studied the results of bombarding uranium with neutrons in 1934. Fermi concluded that his experiments had created new elements with 93 and 94 protons, which the group dubbed ausonium and hesperium. However, not all were convinced by Fermi's analysis of his results. The German chemist Ida Noddack notably suggested in print in 1934 that, instead of creating a new, heavier element 93, "it is conceivable that the nucleus breaks up into several large fragments." However, Noddack's conclusion was not pursued at the time.
After the Fermi publication, Otto Hahn, Lise Meitner, and Fritz Strassmann began performing similar experiments in Berlin. Meitner, an Austrian Jew, lost her citizenship with the "Anschluss", the occupation and annexation of Austria into Nazi Germany in 1938, but she fled to Sweden and started a correspondence by mail with Hahn in Berlin. By coincidence, her nephew Otto Robert Frisch, also a refugee, was also in Sweden when Meitner received a letter from Hahn dated 20 December describing his chemical proof that some of the product of the bombardment of uranium with neutrons was barium. Hahn suggested a bursting of the nucleus, but he was unsure of what the physical basis for the results was. Barium had an atomic mass 40% less than uranium, and no previously known methods of radioactive decay could account for such a large difference in the mass of the nucleus. Frisch was skeptical, but Meitner trusted Hahn's ability as a chemist. Marie Curie had been separating barium from radium for many years, and the techniques were well-known. According to Frisch:
Was it a mistake? No, said Lise Meitner; Hahn was too good a chemist for that. But how could barium be formed from uranium? No larger fragments than protons or helium nuclei (alpha particles) had ever been chipped away from nuclei, and to chip off a large number not nearly enough energy was available. Nor was it possible that the uranium nucleus could have been cleaved right across. A nucleus was not like a brittle solid that can be cleaved or broken; George Gamow had suggested early on, and Bohr had given good arguments that a nucleus was much more like a liquid drop. Perhaps a drop could divide itself into two smaller drops in a more gradual manner, by first becoming elongated, then constricted, and finally being torn rather than broken in two? We knew that there were strong forces that would resist such a process, just as the surface tension of an ordinary liquid drop tends to resist its division into two smaller ones. But nuclei differed from ordinary drops in one important way: they were electrically charged, and that was known to counteract the surface tension.
The charge of a uranium nucleus, we found, was indeed large enough to overcome the effect of the surface tension almost completely; so the uranium nucleus might indeed resemble a very wobbly unstable drop, ready to divide itself at the slightest provocation, such as the impact of a single neutron. But there was another problem. After separation, the two drops would be driven apart by their mutual electric repulsion and would acquire high speed and hence a very large energy, about 200 MeV in all; where could that energy come from? ...Lise Meitner... worked out that the two nuclei formed by the division of a uranium nucleus together would be lighter than the original uranium nucleus by about one-fifth the mass of a proton. Now whenever mass disappears energy is created, according to Einstein's formula E=mc2, and one-fifth of a proton mass was just equivalent to 200MeV. So here was the source for that energy; it all fitted!
In short, Meitner and Frisch had correctly interpreted Hahn's results to mean that the nucleus of uranium had split roughly in half. Frisch suggested the process be named "nuclear fission," by analogy to the process of living cell division into two cells, which was then called binary fission. Just as the term nuclear "chain reaction" would later be borrowed from chemistry, so the term "fission" was borrowed from biology.
On 22 December 1938, Hahn and Strassmann sent a manuscript to Naturwissenschaften reporting that they had discovered the element barium after bombarding uranium with neutrons. Simultaneously, they communicated these results to Meitner in Sweden. She and Frisch correctly interpreted the results as evidence of nuclear fission. Frisch confirmed this experimentally on 13 January 1939. For proving that the barium resulting from his bombardment of uranium with neutrons was the product of nuclear fission, Hahn was awarded the Nobel Prize for Chemistry in 1944 (the sole recipient) "for his discovery of the fission of heavy nuclei". (The award was actually given to Hahn in 1945, as "the Nobel Committee for Chemistry decided that none of the year's nominations met the criteria as outlined in the will of Alfred Nobel." In such cases, the Nobel Foundation's statutes permit that year's prize be reserved until the following year.)
News spread quickly of the new discovery, which was correctly seen as an entirely novel physical effect with great scientific—and potentially practical—possibilities. Meitner’s and Frisch’s interpretation of the discovery of Hahn and Strassmann crossed the Atlantic Ocean with Niels Bohr, who was to lecture at Princeton University. I.I. Rabi and Willis Lamb, two Columbia University physicists working at Princeton, heard the news and carried it back to Columbia. Rabi said he told Enrico Fermi; Fermi gave credit to Lamb. Bohr soon thereafter went from Princeton to Columbia to see Fermi. Not finding Fermi in his office, Bohr went down to the cyclotron area and found Herbert L. Anderson. Bohr grabbed him by the shoulder and said: “Young man, let me explain to you about something new and exciting in physics.” It was clear to a number of scientists at Columbia that they should try to detect the energy released in the nuclear fission of uranium from neutron bombardment. On 25 January 1939, a Columbia University team conducted the first nuclear fission experiment in the United States, which was done in the basement of Pupin Hall; the members of the team were Herbert L. Anderson, Eugene T. Booth, John R. Dunning, Enrico Fermi, G. Norris Glasoe, and Francis G. Slack. The experiment involved placing uranium oxide inside of an ionization chamber and irradiating it with neutrons, and measuring the energy thus released. The results confirmed that fission was occurring and hinted strongly that it was the isotope uranium 235 in particular that was fissioning. The next day, the Fifth Washington Conference on Theoretical Physics began in Washington, D.C. under the joint auspices of the George Washington University and the Carnegie Institution of Washington. There, the news on nuclear fission was spread even further, which fostered many more experimental demonstrations.
During this period the Hungarian physicist Leó Szilárd, who was residing in the United States at the time, realized that the neutron-driven fission of heavy atoms could be used to create a nuclear chain reaction. Such a reaction using neutrons was an idea he had first formulated in 1933, upon reading Rutherford's disparaging remarks about generating power from his team's 1932 experiment using protons to split lithium. However, Szilárd had not been able to achieve a neutron-driven chain reaction with neutron-rich light atoms. In theory, if in a neutron-driven chain reaction the number of secondary neutrons produced was greater than one, then each such reaction could trigger multiple additional reactions, producing an exponentially increasing number of reactions. It was thus a possibility that the fission of uranium could yield vast amounts of energy for civilian or military purposes (i.e., electric power generation or atomic bombs).
Szilard now urged Fermi (in New York) and Frédéric Joliot-Curie (in Paris) to refrain from publishing on the possibility of a chain reaction, lest the Nazi government become aware of the possibilities on the eve of what would later be known as World War II. With some hesitation Fermi agreed to self-censor. But Joliot-Curie did not, and in April 1939 his team in Paris, including Hans von Halban and Lew Kowarski, reported in the journal Nature that the number of neutrons emitted per nuclear fission of 235U was 3.5. (They later corrected this to 2.6 per fission.) Simultaneous work by Szilard and Walter Zinn confirmed these results. The results suggested the possibility of building nuclear reactors (first called "neutronic reactors" by Szilard and Fermi) and even nuclear bombs. However, much was still unknown about fission and chain reaction systems.
Fission chain reaction
"Chain reactions" at that time were a known phenomenon in chemistry, but the analogous process in nuclear physics, using neutrons, had been foreseen as early as 1933 by Szilárd, although Szilárd at that time had no idea with what materials the process might be initiated. Szilárd considered that neutrons would be ideal for such a situation, since they lacked an electrostatic charge.
With the news of fission neutrons from uranium fission, Szilárd immediately understood the possibility of a nuclear chain reaction using uranium. In the summer, Fermi and Szilard proposed the idea of a nuclear reactor (pile) to mediate this process. The pile would use natural uranium as fuel. Fermi had shown much earlier that neutrons were far more effectively captured by atoms if they were of low energy (so-called "slow" or "thermal" neutrons), because for quantum reasons it made the atoms look like much larger targets to the neutrons. Thus to slow down the secondary neutrons released by the fissioning uranium nuclei, Fermi and Szilard proposed a graphite "moderator," against which the fast, high-energy secondary neutrons would collide, effectively slowing them down. With enough uranium, and with pure-enough graphite, their "pile" could theoretically sustain a slow-neutron chain reaction. This would result in the production of heat, as well as the creation of radioactive fission products.
In August 1939, Szilard and fellow Hungarian refugee physicists Edward Teller and Eugene Wigner thought that the Germans might make use of the fission chain reaction and were spurred to attempt to attract the attention of the United States government to the issue. To that end, they persuaded German-Jewish refugee Albert Einstein to lend his name to a letter directed to President Franklin Roosevelt. The Einstein–Szilárd letter suggested the possibility of a uranium bomb deliverable by ship, which would destroy "an entire harbor and much of the surrounding countryside." The President received the letter on 11 October 1939 — shortly after World War II began in Europe, but two years before U.S. entry into it. Roosevelt ordered that a scientific committee be authorized to oversee uranium work and allocated a small sum of money for pile research.
In England, James Chadwick proposed an atomic bomb utilizing natural uranium, based on a paper by Rudolf Peierls, with the critical mass estimated at 30–40 tons. In America, J. Robert Oppenheimer thought that a cube of uranium deuteride 10 cm on a side (about 11 kg of uranium) might "blow itself to hell." In this design it was still thought that a moderator would need to be used for nuclear bomb fission (this turned out not to be the case if the fissile isotope was separated). In December, Werner Heisenberg delivered a report to the German Ministry of War on the possibility of a uranium bomb. Most of these models were still under the assumption that the bombs would be powered by slow neutron reactions—and thus be similar to a reactor undergoing a meltdown.
In Birmingham, England, Frisch teamed up with Peierls, a fellow German-Jewish refugee. They had the idea of using a purified mass of the uranium isotope 235U, whose fission cross section had just been determined and was found to be much larger than that of 238U or of natural uranium (which is 99.3% 238U). Assuming that the cross section for fast-neutron fission of 235U was the same as for slow-neutron fission, they determined that a pure 235U bomb could have a critical mass of only 6 kg instead of tons, and that the resulting explosion would be tremendous. (The amount actually turned out to be 15 kg, although several times this amount was used in the actual uranium (Little Boy) bomb.) In February 1940 they delivered the Frisch–Peierls memorandum. Ironically, they were still officially considered "enemy aliens" at the time. Glenn Seaborg, Joseph W. Kennedy, Arthur Wahl, and Italian-Jewish refugee Emilio Segrè shortly thereafter discovered 239Pu in the decay products of 239U produced by bombarding 238U with neutrons, and determined it to be a fissile material, like 235U.
The possibility of isolating uranium-235 was technically daunting, because uranium-235 and uranium-238 are chemically identical, and vary in their mass by only the weight of three neutrons. However, if a sufficient quantity of uranium-235 could be isolated, it would allow for a fast neutron fission chain reaction. This would be extremely explosive, a true "atomic bomb." The discovery that plutonium-239 could be produced in a nuclear reactor pointed towards another approach to a fast neutron fission bomb. Both approaches were extremely novel and not yet well understood, and there was considerable scientific skepticism at the idea that they could be developed in a short amount of time.
On June 28, 1941, the Office of Scientific Research and Development was formed in the U.S. to mobilize scientific resources and apply the results of research to national defense. In September, Fermi assembled his first nuclear "pile," or reactor, in an attempt to create a slow-neutron-induced chain reaction in uranium, but the experiment failed to achieve criticality, owing to a lack of suitable materials or to an insufficient quantity of the suitable materials that were available.
Producing a fission chain reaction in natural uranium fuel was found to be far from trivial. Early nuclear reactors did not use isotopically enriched uranium, and in consequence they were required to use large quantities of highly purified graphite as neutron moderation materials. Use of ordinary water (as opposed to heavy water) in nuclear reactors requires enriched fuel — the partial separation and relative enrichment of the rare 235U isotope from the far more common 238U isotope. Typically, reactors also require inclusion of extremely chemically pure neutron moderator materials such as deuterium (in heavy water), helium, beryllium, or carbon, the latter usually as graphite. (The high purity for carbon is required because many chemical impurities such as the boron-10 component of natural boron, are very strong neutron absorbers and thus poison the chain reaction and end it prematurely.)
Production of such materials at industrial scale had to be solved for nuclear power generation and weapons production to be accomplished. Up to 1940, the total amount of uranium metal produced in the USA was not more than a few grams, and even this was of doubtful purity; of metallic beryllium not more than a few kilograms; and concentrated deuterium oxide (heavy water) not more than a few kilograms. Finally, carbon had never been produced in quantity with anything like the purity required of a moderator.
The problem of producing large amounts of high purity uranium was solved by Frank Spedding using the thermite or "Ames" process. Ames Laboratory was established in 1942 to produce the large amounts of natural (unenriched) uranium metal that would be necessary for the research to come. The critical nuclear chain-reaction success of Chicago Pile-1 (December 2, 1942), which used unenriched (natural) uranium, like all of the atomic "piles" that produced the plutonium for the atomic bomb, was also due specifically to Szilard's realization that very pure graphite could be used as the moderator of even natural uranium "piles". In wartime Germany, failure to appreciate the qualities of very pure graphite led to reactor designs dependent on heavy water, which in turn was denied to the Germans by Allied attacks in Norway, where heavy water was produced. These difficulties—among many others—prevented the Nazis from building a nuclear reactor capable of criticality during the war, although they never put as much effort as the United States into nuclear research, focusing instead on other technologies (see German nuclear energy project for more details).
Manhattan Project and beyond
In the United States, an all-out effort for making atomic weapons was begun in late 1942. This work was taken over by the U.S. Army Corps of Engineers in 1943, and known as the Manhattan Engineer District. The top-secret Manhattan Project, as it was colloquially known, was led by General Leslie R. Groves. Among the project's dozens of sites were: Hanford Site in Washington state, which had the first industrial-scale nuclear reactors; Oak Ridge, Tennessee, which was primarily concerned with uranium enrichment; and Los Alamos, in New Mexico, which was the scientific hub for research on bomb development and design. Other sites, notably the Berkeley Radiation Laboratory and the Metallurgical Laboratory at the University of Chicago, played important contributing roles. Overall scientific direction of the project was managed by the physicist J. Robert Oppenheimer.
In July 1945, the first atomic bomb, dubbed "Trinity", was detonated in the New Mexico desert. It was fueled by plutonium created at Hanford. In August 1945, two more atomic bombs—"Little Boy", a uranium-235 bomb, and "Fat Man", a plutonium bomb—were used against the Japanese cities of Hiroshima and Nagasaki.
In the years after World War II, many countries were involved in the further development of nuclear fission for the purposes of nuclear reactors and nuclear weapons. The UK opened the first commercial nuclear power plant in 1956. As of 2013, there were 437 reactors in 31 countries.
Natural fission chain-reactors on Earth
Criticality in nature is uncommon. At three ore deposits at Oklo in Gabon, sixteen sites (the so-called Oklo Fossil Reactors) have been discovered at which self-sustaining nuclear fission took place approximately 2 billion years ago. The phenomenon was unknown until 1972 (though it had been postulated by Paul Kuroda in 1956), when French physicist Francis Perrin discovered the Oklo Fossil Reactors and it was realized that nature had beaten humans to the punch. Large-scale natural uranium fission chain reactions, moderated by ordinary water, occurred far in the past and would not be possible now. This ancient process could use ordinary water as a moderator only because, 2 billion years ago, natural uranium was richer in the short-lived fissile isotope 235U (about 3%) than the natural uranium available today (only 0.7%, which must be enriched to about 3% to be usable in light-water reactors).
References
- Arora, M. G.; Singh, M. (1994). Nuclear Chemistry. Anmol Publications. p. 202. ISBN 81-261-1763-X. Retrieved 2011-04-02.
- Saha, Gopal (2010). Fundamentals of Nuclear Pharmacy (Sixth ed.). Springer Science+Business Media. p. 11. ISBN 1-4419-5859-2. Retrieved 2011-04-02.
- Comparative study of the ternary particle emission in 243-Cm (nth,f) and 244-Cm(SF). S. Vermote, et al., in Dynamical Aspects of Nuclear Fission: Proceedings of the 6th International Conference. Ed. J. Kliman, M. G. Itkis, S. Gmuca. World Scientific Publishing Co. Pte. Ltd., Singapore (2008).
- Byrne, J. Neutrons, Nuclei, and Matter, Dover Publications, Mineola, NY, 2011, ISBN 978-0-486-48238-5 (pbk.) p. 259
- Marion Brünglinghaus. "Nuclear fission". European Nuclear Society. Retrieved 2013-01-04.
- Hans A. Bethe, "The Hydrogen Bomb", Bulletin of the Atomic Scientists, April 1950, page 99. Fetched from books.google.com on 18 April 2011.
- These fission neutrons have a wide energy spectrum, ranging from 0 to 14 MeV, with a mean of 2 MeV and a mode of 0.75 MeV. See Byrne, op. cit.
- "Nuclear Fission and Fusion, and Nuclear Interactions". National Physical Laboratory. Retrieved 2013-01-04.
- L. Bonneau; P. Quentin. "Microscopic calculations of potential energy surfaces: fission and fusion properties". Retrieved 2008-07-28.
- "Frequently Asked Questions #1". Radiation Effects Research Foundation. Retrieved September 18, 2007.
- E. Rutherford (1911). "The scattering of α and β particles by matter and the structure of the atom". Philosophical Magazine 21: 669–688.
- "Cockcroft and Walton split lithium with high energy protons April 1932". Outreach.phy.cam.ac.uk. 1932-04-14. Retrieved 2013-01-04.
- Chadwick announced his initial findings in: Chadwick, J. (1932). "Possible Existence of a Neutron". Nature 129 (3252): 312. Bibcode:1932Natur.129Q.312C. doi:10.1038/129312a0. Subsequently he communicated his findings in more detail in: Chadwick, J. (1932). "The existence of a neutron". Proceedings of the Royal Society, Series A 136 (830): 692–708. Bibcode:1932RSPSA.136..692C. doi:10.1098/rspa.1932.0112.; and Chadwick, J. (1933). "The Bakerian Lecture: The neutron". Proceedings of the Royal Society, Series A 142 (846): 1–25. Bibcode:1933RSPSA.142....1C. doi:10.1098/rspa.1933.0152.
- E. Fermi, E. Amaldi, O. D'Agostino, F. Rasetti, and E. Segrè (1934) "Radioattività provocata da bombardamento di neutroni III," La Ricerca Scientifica, vol. 5, no. 1, pages 452–453.
- Ida Noddack (1934). "Über das Element 93". Zeitschrift für Angewandte Chemie 47 (37): 653. doi:10.1002/ange.19340473707.
- Tacke, Ida Eva. Astr.ua.edu. Retrieved on 2010-12-24.
- Weintraub, Bob. Lise Meitner (1878–1968): Protactinium, Fission, and Meitnerium. Retrieved on June 8, 2009.
- O. Hahn and F. Strassmann (1939). "Über den Nachweis und das Verhalten der bei der Bestrahlung des Urans mittels Neutronen entstehenden Erdalkalimetalle ("On the detection and characteristics of the alkaline earth metals formed by irradiation of uranium with neutrons")". Naturwissenschaften 27 (1): 11–15. Bibcode:1939NW.....27...11H. doi:10.1007/BF01488241.. The authors were identified as being at the Kaiser-Wilhelm-Institut für Chemie, Berlin-Dahlem. Received 22 December 1938.
- Meitner, Lise; Frisch, O. R. (1939). "Disintegration of Uranium by Neutrons: a New Type of Nuclear Reaction". Nature 143 (3615): 239. Bibcode:1939Natur.143..239M. doi:10.1038/143239a0.. The paper is dated 16 January 1939. Meitner is identified as being at the Physical Institute, Academy of Sciences, Stockholm. Frisch is identified as being at the Institute of Theoretical Physics, University of Copenhagen.
- Frisch, O. R. (1939). "Physical Evidence for the Division of Heavy Nuclei under Neutron Bombardment". Nature 143 (3616): 276. Bibcode:1939Natur.143..276F. doi:10.1038/143276a0.
- Frisch, O. R. (1939), reproduced at http://dbhs.wvusd.k12.ca.us/webdocs/Chem-History/Frisch-Fission-1939.html. The paper is dated 17 January 1939. [The experiment for this letter to the editor was conducted on 13 January 1939; see Richard Rhodes, The Making of the Atomic Bomb, 263 and 268 (Simon and Schuster, 1986).]
- "The Nobel Prize in Chemistry 1944". Nobelprize.org. Retrieved 2008-10-06.
- Richard Rhodes. The Making of the Atomic Bomb, 268 (Simon and Schuster, 1986) ISBN 0-671-44133-7.
- Anderson, H.; Booth, E.; Dunning, J.; Fermi, E.; Glasoe, G.; Slack, F. (1939). "The Fission of Uranium". Physical Review 55 (5): 511. Bibcode:1939PhRv...55..511A. doi:10.1103/PhysRev.55.511.2.
- Richard Rhodes. The Making of the Atomic Bomb, 267–270 (Simon and Schuster, 1986) ISBN 0-671-44133-7.
- Von Halban, H.; Joliot, F.; Kowarski, L. (1939). "Number of Neutrons Liberated in the Nuclear Fission of Uranium". Nature 143 (3625): 680. Bibcode:1939Natur.143..680V. doi:10.1038/143680a0.
- Kuroda, P. K. (1956). "On the Nuclear Physical Stability of the Uranium Minerals". The Journal of Chemical Physics 25 (4): 781. Bibcode:1956JChPh..25..781K. doi:10.1063/1.1743058.
- DOE Fundamentals Handbook: Nuclear Physics and Reactor Theory Volume 1. U.S. Department of Energy. January 1993. Retrieved 2012-01-03.
- DOE Fundamentals Handbook: Nuclear Physics and Reactor Theory Volume 2. U.S. Department of Energy. January 1993. Retrieved 2012-01-03.
- The Effects of Nuclear Weapons
- Annotated bibliography for nuclear fission from the Alsos Digital Library
- The Discovery of Nuclear Fission Historical account complete with audio and teacher's guides from the American Institute of Physics History Center
- atomicarchive.com Nuclear Fission Explained
- Nuclear Files.org What is Nuclear Fission?
- Nuclear Fission Animation | http://en.wikipedia.org/wiki/Nuclear_fission | 13 |
67 | Centripetal Force - How it works
VELOCITY = SPEED + DIRECTION
Speed is a scalar, meaning that it has magnitude but no specific direction; by contrast, velocity is a vector—a quantity with both a magnitude (that is, speed) and a direction. For an object in circular motion, the direction of velocity is the same as that in which the object is moving at any given point. Consider the example of the city of Atlanta, Georgia, and Interstate-285, one of several instances in which a city is surrounded by a "loop" highway. Local traffic reporters avoid giving simple compass directions for cars on the loop, because a car's direction of travel changes continuously as it moves around the circle.
As with cars on I-285, the direction of the velocity vector for an object moving around a circle is a function entirely of its position and the direction of movement—clockwise or counter-clockwise—for the circle itself. The direction of the individual velocity vector at any given point may be described as tangential; that is, describing a tangent, or a line that touches the circle at just one point. (By definition, a tangent line cannot intersect the circle.)
It follows, then, that the direction of an object in movement around a circle is changing; hence, its velocity is also changing—and this in turn means that it is experiencing acceleration. As with the subject of centripetal force and "centrifugal force," most people have a mistaken view of acceleration, believing that it refers only to an increase in speed. In fact, acceleration is a change in velocity, and can thus refer either to a change in speed or direction. Nor must that change be a positive one; in other words, an object undergoing a reduction in speed is also experiencing acceleration.
The acceleration of an object in rotational motion is always toward the center of the circle. This may appear to go against common sense, which should indicate that acceleration moves in the same direction as velocity, but it can, in fact, be proven in a number of ways. One method would be by the addition of vectors, but a "hands-on" demonstration may be more enlightening than an abstract geometrical proof.
It is possible to make a simple accelerometer, a device for measuring acceleration, with a lit candle inside a glass. The candle should be standing at a 90°-angle to the bottom of the glass, attached to it by hot wax as you would affix a burning candle to a plate. When you hold the candle level, the flame points upward; but if you spin the glass in a circle, the flame will point toward the center of that circle—in the direction of acceleration.
MASS × ACCELERATION = FORCE
Since we have shown that acceleration exists for an object spinning around a circle, it is then possible for us to prove that the object experiences some type of force. The proof for this assertion lies in the second law of motion, which defines force as the product of mass and acceleration: hence, where there is acceleration and mass, there must be force. Force is always in the direction of acceleration, and therefore the force is directed toward the center of the circle.
In the above paragraph, we assumed the existence of mass, since all along the discussion has concerned an object spinning around a circle. By definition, an object—that is, an item of matter, rather than an imaginary point—possesses mass. Mass is a measure of inertia, which can be explained by the first law of motion: an object in motion tends to remain in motion, at the same speed and in the same direction (that is, at the same velocity) unless or until some outside force acts on it. This tendency to maintain velocity is inertia. Put another way, it is inertia that causes an object standing still to remain motionless, and likewise it is inertia that causes a moving object to keep moving at the same velocity unless an outside force interferes.
CENTRIPETAL FORCE
Now that we have established the existence of a force in rotational motion, it is possible to give it a name: centripetal force, or the force that causes an object in uniform circular motion to move toward the center of the circular path. This is not a "new" kind of force; it is merely force as applied in circular or rotational motion, and it is absolutely essential. Hence, physicists speak of a "centripetal force requirement": in the absence of centripetal force, an object simply cannot turn. Instead, it will move in a straight line.
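To make the "centripetal force requirement" concrete, here is a minimal sketch (added for illustration); it assumes the standard relation a = v²/r for uniform circular motion, which the excerpt above does not derive, and the mass, speed, and radius are arbitrary example values.

# Minimal sketch: centripetal force for uniform circular motion.
# Assumes the standard relation a = v**2 / r (not derived in the text above).

def centripetal_force(mass_kg, speed_m_per_s, radius_m):
    acceleration = speed_m_per_s**2 / radius_m   # directed toward the center
    return mass_kg * acceleration                # second law: F = m * a

# Example values (purely illustrative): a 1200 kg car at 25 m/s on a 100 m curve.
force = centripetal_force(1200.0, 25.0, 100.0)
print("Centripetal force: %.0f N toward the center of the curve" % force)
# About 7500 N; without this inward force the car would continue in a straight line.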
The Latin roots of centripetal together mean "seeking the center." What, then, of centrifugal, a word that means "fleeing the center"? It would be correct to say that there is such a thing as centrifugal motion; but centrifugal force is quite a different matter. The difference between centripetal force and a mere centrifugal tendency—a result of inertia rather than of force—can be explained by referring to a familiar example. | http://www.scienceclarified.com/everyday/Real-Life-Chemistry-Vol-3-Physics-Vol-1/Centripetal-Force-How-it-works.html | 13 |
66 | Briefly explain all answers.
We believe that the halo stars and globular clusters formed early in the collapse of a GIANT swirling cloud of gas and dust, and maintained their orbits as the cloud continued its collapse. The old ages and random orbits of the halo stars support this idea. As the galaxy continued to form, the gas and dust formed a rotating disk (due to conservation of angular momentum), and stars formed at this point will be in the disk. Overall, the disk stars are younger and have circular orbits, which supports this model. We believe that heavy elements are formed only in stars, so the initial cloud of gas and dust must have had very low heavy-element abundances, and as stars synthesize these elements and they are mixed back into the interstellar medium, successive generations of stars will have increasing amounts of heavy elements. The halo stars in general have much lower abundances of heavy elements than the disk stars, which supports our theory since the halo stars formed earliest.
Some of the oldest stars are in the bulge, not the halo (which formed first in our model). Some of the youngest globular clusters seem to be farther from the center (in the region that formed first in this model). We don't see as many disk white dwarfs as the theory predicts. Where are the earliest stars that have no heavy elements?
We believe that galaxy collisions are likely because typical galaxy separations are only about 20 times the typical size of a galaxy. Stars, however, are typically separated by 10 million stellar diameters!
The clearest evidence that galaxies collide is found in optical images of interacting galaxies such as The Mice (Figure 13-9), the Whirlpool Galaxy (Figure 13-10) and The Antennae (Figure 13-12). These images show galaxies colliding and the effects of the gravitational forces of each galaxy on the material in them are clearly visible. Additional evidence of collisions is found in the bursts of star formation found in galaxies that have very little gas or dust. Evidence for galactic collisions can also be seen closer to home. The Milky Way is involved in a collision with both the Large and Small Magellanic Clouds (which show evidence of disturbances caused by their interaction with the Milky Way), and the Sagittarius Dwarf galaxy (which is apparently being destroyed by and absorbed into the Milky Way. Collisions between galaxies is also indicated by ring galaxies which appear to originate when two galaxies collide head on at high speed. Many ring galaxies do have nearby companions.
(Mathematically, the expression for the mass enclosed within an orbit of radius r is M = v²r/G, where G is Newton's gravitational constant (numerically equal to 4.30 × 10⁻⁶ (kpc/M☉)·(km/s)²) and v is the orbital speed of a star at distance r. This formula is essentially another way of writing Kepler's law P_orb² = constant · r³.)
This concept works equally well for the orbits of stars and gas within spiral galaxies. By looking at the mass inside the orbits of stars or gas at different distances from the center of the galaxy, the mass of a galaxy as a function of radial distance from the center (the mass of the galaxy INSIDE radius r) can be obtained from the rotation curve of the galaxy.
Basically, whether we are discussing the orbits of planets around a star or the orbits of stars around the center of the galaxy, the mass inside the orbit determines, via gravity, the properties of the orbit. The difference here is that if you look at a bigger orbit (farther from the center), the mass inside that orbit is bigger than the mass inside the smaller orbit (something that doesn't happen with different-sized orbits around stars - remember that the mass of the planets is insignificant compared to the mass of the star).
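A short numerical sketch of this mass measurement follows (not part of the original answer); it uses the formula and the value of G quoted above, and assumes a representative orbital speed of about 220 km/s at 8 kpc, roughly the Sun's orbit.

# Sketch: mass enclosed within a star's orbit from its orbital speed.
# M = v**2 * r / G, with G in (kpc / M_sun) * (km/s)**2 as quoted above.

G = 4.30e-6  # kpc * (km/s)**2 / M_sun

def enclosed_mass(v_km_s, r_kpc):
    return v_km_s**2 * r_kpc / G   # result in solar masses

# Assumed example: a star orbiting at ~220 km/s at 8 kpc from the Galactic center.
m = enclosed_mass(220.0, 8.0)
print("Mass enclosed within 8 kpc: about %.1e solar masses" % m)
# Roughly 9e10 solar masses for these example numbers.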
Measuring the Mass
Shown above are two possible rotation curves of the Milky Way Galaxy. (Neither one is necessarily correct.)
Rotation curve A yields a smaller mass for the Milky Way Galaxy within a radius of 8 kpc. If a higher mass is enclosed within the same radius, the gravitational force on the orbiting (much less massive) object is larger, and the orbital velocity required to keep it in orbit (rather than falling) is larger. Rotation curve B has a larger velocity at 8kpc, and so a larger mass inside 8 kpc is inferred.
( Bonus: By what factor? (You can use ratios to solve this, or calculate the mass within 8 kpc in each case.))
Since mass is proportional to velocity squared (v²), and the velocity of curve A at 8 kpc is about half that of curve B at 8 kpc, the mass inferred from curve A is one quarter that inferred from curve B.
Once the radius is large enough to enclose all of the mass of the galaxy, the mass will no longer increase as the radius (from the galaxy's center) increases. In this case, as you move further out, the force of gravity decreases (the mass stays the same, but the distance to the center increases), and so the velocity required to stay in orbit about the galaxy (rather than fall in) decreases. In this case, rotation curve A would have to be true.
If the luminous material does not extend beyond 14kpc, and this material accurately traced the mass, then the mass would not increase as the radius increased. In this case, according to part (c), the velocity would decrease beyond 14 kpc, and rotation curve A would be true. But this question assumes that rotation curve B is correct! Therefore there must be mass beyond 14 kpc that we cannot see: Dark Matter!
The rotation curve of the Milky Way Galaxy is more like curve B than curve A! So there is matter that we cannot see: Dark Matter! In fact, only about 1% to 10% of our Galaxy's mass is luminous.
We have also applied this method to other spiral galaxies. We can estimate the expected mass of a galaxy based on its brightness and distance. This gives us an estimate of the mass of the luminous matter in the galaxy. The rotation curve method tells us the mass based on the gravitational influence of all the mass in the galaxy. For most galaxies, the luminous mass is roughly ten times smaller than the mass determined from the rotation curve method. Therefore, a significant fraction of the mass of the galaxy must be in dark matter.
Additional evidence comes from X-ray observations of hot gas in galaxy clusters, which would escape if the clusters only contained the mass inferred from the luminous matter in its galaxies (since high temperatures mean fast-moving particles). Similarly, the high velocities within the clusters of the galaxies themselves indicates the presence of substantial dark matter. Finally, gravitational lensing indicates the presence of dark matter in galaxy clusters.
All methods of measuring the masses of galaxies use some measurements of velocity and size (which requires distance!), combined with the laws of gravity. For example, measuring galaxy masses using rotation curves for spiral galaxies involves knowing the rotational velocity of stars/gas in the galaxy's disk (from Doppler shifts) at various distances from the center (so we need to know the distance to the galaxy), along with the assumption that the orbits of these stars/gas clouds are determined by the mass lying inside their orbits.
Another method involves looking at the widths of absorption lines in the galaxy overall. These widths allow us to estimate the overall motions of the gas in the galaxy, and we can therefore infer the mass that would be required to keep this gas inside the galaxy. This method gives less detailed information, but it still involves a measurement of velocity, an estimate of the galaxy's size, and the assumption that gravity is the only thing keeping this moving material inside the galaxy.
Equivalently, the motions of galaxies or gas in a cluster of galaxies can be used to estimate the mass of the total cluster. Dividing the mass of the cluster by the number of galaxies gives the average mass of each galaxy. Once again, this does not give detailed information, but we have measured masses.
Note that measuring the mass in these ways is different from adding up the mass expected from the luminous matter in the galaxy. That is merely an educated guess, based on our knowledge of the masses of objects we can see.
"Standard candles" are objects of known luminosity. Once we know the luminosity of a particular kind of object, we can combine that knowledge with the observed apparent brightness of the object to infer its distance. Cepheid variable stars are an example of this kind of object: by measuring the period of such a variable star, we can determine its average luminosity by using the "Period-Luminosity" relation for this kind of star. Combining this with its average apparent brightness allows us to determine the distance by using the inverse square law of light (more distant objects appear dimmer).
"Standard rulers" are objects of known size. If we know the size of an object, we can combine that knowledge with its observed apparent size in order to infer its distance. More distant objects appear smaller. There are no really good "standard rulers", but in general, the galaxies in a more distant cluster will appear smaller than those in a nearby cluster.
We must calibrate these standard objects in order to determine their intrinsic luminosity (in the case of standard candles) or absolute size (in the case of standard rulers). For example, in order to use the Period-Luminosity relationship for Cepheid variables (see above), we must first determine this relationship. In order to do this, we must KNOW the luminosity of several Cepheid variables, and to do this we must first know their distance. All distance-finding methods involving standard candles must be calibrated using a previous method.
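The inverse square law underlying standard candles can be sketched directly (an added illustration, not part of the original answer); the luminosity and apparent-brightness values below are arbitrary placeholders.

import math

# Sketch of the standard-candle idea: apparent brightness b = L / (4 * pi * d**2),
# so a known luminosity L plus a measured brightness b gives the distance d.

def distance_from_standard_candle(luminosity_watts, apparent_brightness_w_m2):
    return math.sqrt(luminosity_watts / (4.0 * math.pi * apparent_brightness_w_m2))

# Placeholder numbers: a candle with roughly the Sun's luminosity (~3.8e26 W)
# observed at an apparent brightness of 1e-10 W/m^2.
d = distance_from_standard_candle(3.8e26, 1e-10)
print("Inferred distance: about %.2e meters" % d)
# About 5.5e17 m (roughly 18 parsecs) for these made-up numbers.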
Hubble's Law says that, because of the expansion of space between the galaxies, v = H₀d, where v is the apparent recessional velocity of a galaxy, H₀ is the Hubble constant, and d is the distance of the galaxy.
In order to determine the Hubble constant, we need to measure both the distances and recessional velocities of many galaxies.
The recessional velocities can be determined by measuring the Doppler shift of absorption lines. This is straightforward and reasonably accurate. Distances, however, are extremely difficult to determine because the most direct method, parallax, is only useful for nearby stars within our own galaxy. All other methods must be calibrated before they can be used. This is a time consuming and difficult process, and as a result there are large uncertainties in the measured distances. In addition, gas and dust can dim the light from distant objects. This can further distort distances measured using standard candles. The Hubble Space Telescope was built to improve our measurements of distances to galaxies, and has improved our determination of the Hubble constant.
Quasars show large red shifts in their spectra. One way to interpret this is with the use of the Hubble law which relates the recessional velocity of distant galaxies to their distance. The red shifts are attributed to the Doppler effect, and provide a measurement of the recessional velocities of the quasars. The technique only works for distant galaxies, objects outside of the Local Group. However, it provides a direct method for determining the distance to an object whose red shift is known (and is due to the expansion of the universe). The large redshifts observed in quasars (in excess of z=0.1) imply that the quasars are at distances in excess of 400 Mpc (assuming that H = 70 km/s/Mpc).
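For small redshifts, the distance scale quoted above can be reproduced with a short calculation (an added sketch, not part of the original answer); it assumes v ≈ cz, which holds only for z much less than 1, together with the H₀ = 70 km/s/Mpc value used in the text.

# Sketch: distance from a (small) redshift via Hubble's law, d = v / H0 with v ~ c*z.
# Valid only for z << 1; larger redshifts need a proper cosmological model.

C_KM_S = 3.0e5   # speed of light, km/s
H0 = 70.0        # Hubble constant, km/s/Mpc (value assumed in the text above)

def hubble_distance_mpc(z):
    velocity = C_KM_S * z        # apparent recessional velocity, km/s
    return velocity / H0         # distance in Mpc

print("z = 0.1  ->  about %.0f Mpc" % hubble_distance_mpc(0.1))
# Roughly 430 Mpc, consistent with the "in excess of 400 Mpc" figure above.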
No, Hubble's law is only valid on the large scales where the space between the galaxies is expanding. Locally, like within the Galaxy, gravity is holding everything together and the expansion of space is not observable. So redshifts measured for stars within the Galaxy only tell you about the speed at which that star is moving away, and nothing else.
First, the evidence that quasars are really as distant as their redshifts imply. Absorption lines in some quasar spectra are at the same redshift as a foreground galaxy, indicating that the quasar is more distant. Gravitational lensing of quasars (producing multiple images of a single quasar) implies the quasar is at a greater distance than the galaxy or material causing the lensing.
Now the evidence that quasars occur in distant galaxies: In images of some quasars, a faint "fuzz" is seen around the quasar, which is believed to be the galaxy surrounding the quasar (which is an extremely active nucleus). The spectra of such "fuzz" are similar to spectra of other distant galaxies. Secondly, a supernova was detected near QSO 1059+730. The rest of the galaxy was too faint to be detected, but this does indicate that the quasar and supernova were in the same galaxy. Finally, quasar 3C 273 shows jets of material very similar to those seen in some other galaxies.
The core of the galaxy appears to be a very energetic source of energy for several reasons. First, the gas that is very close to the core is highly ionized, indicating that extremely high temperatures exist in this region. This suggests that the core produces large amounts of x-ray, ultraviolet, and possibly gamma-ray radiation. Secondly, radio observations have shown the presence of a small, luminous radio source at the center, as well as matter apparently swirling around this source. Radio observations have also shown large powerful jets emanating from the core. Such jets require high energy phenomena to produce them. Finally, the rapid motions of the hot gas, cool clouds, and stars in the vicinity of the core suggest that it is extremely massive and very compact.
The "unified model" consists of a supermassive black hole at the centers of these galaxies. The black hole is surrounded by an accretion disk of infalling matter, which is accompanied by jets of matter and radiation perpendicular to this disk.
In the unified model, the unusually strong activity at the galaxy's center is triggered by a galaxy interaction, collision, or merger, as these disrupt the galaxies and push extra material toward the supermassive black hole. The increased material falling into the black hole results in increased emission of radiation.
Many quasars are found in tidally distorted galaxies, which suggests mergers, collisions, and interactions of the parent galaxies. Such interactions could throw large amounts of material toward the black hole and form the accretion disk that seems necessary to produce a quasar.
At low red shifts (nearby: z < 1), the distribution of galaxies is fairly scattered and interactions between the galaxies is not extremely common. At high red shifts (most distant: z > 4: early in universe's history), few galaxies had formed, so few interactions between them could take place. However, at distances associated with red shifts of about z = 2, galaxies were prevalent and they were fairly close together, so that they collided more often than they do at the present time (low red shifts). | http://setiathome.berkeley.edu/~korpela/astro10/solns/soln5.html | 13 |
131 | Chapter 6 Fruitful functions
6.1 Return values
Some of the built-in functions we have used, such as the math functions, produce results. Calling the function generates a value, which we usually assign to a variable or use as part of an expression.
e = math.exp(1.0)
height = radius * math.sin(radians)
All of the functions we have written so far are void; they print something or move turtles around, but their return value is None.
In this chapter, we are (finally) going to write fruitful functions. The first example is area, which returns the area of a circle with the given radius:
def area(radius):
    temp = math.pi * radius**2
    return temp
We have seen the return statement before, but in a fruitful function the return statement includes an expression. This statement means: “Return immediately from this function and use the following expression as a return value.” The expression can be arbitrarily complicated, so we could have written this function more concisely:
def area(radius):
    return math.pi * radius**2
On the other hand, temporary variables like temp often make debugging easier.
Sometimes it is useful to have multiple return statements, one in each branch of a conditional:
def absolute_value(x):
    if x < 0:
        return -x
    else:
        return x
Since these return statements are in an alternative conditional, only one will be executed.
As soon as a return statement executes, the function terminates without executing any subsequent statements. Code that appears after a return statement, or any other place the flow of execution can never reach, is called dead code.
In a fruitful function, it is a good idea to ensure that every possible path through the program hits a return statement. For example:
def absolute_value(x):
    if x < 0:
        return -x
    if x > 0:
        return x
This function is incorrect because if x happens to be 0, neither condition is true, and the function ends without hitting a return statement. If the flow of execution gets to the end of a function, the return value is None, which is not the absolute value of 0.
>>> print absolute_value(0)
None
By the way, Python provides a built-in function called abs that computes absolute values.
6.2 Incremental development
As you write larger functions, you might find yourself spending more time debugging.
To deal with increasingly complex programs, you might want to try a process called incremental development. The goal of incremental development is to avoid long debugging sessions by adding and testing only a small amount of code at a time.
As an example, suppose you want to find the distance between two points, given by the coordinates (x1, y1) and (x2, y2). By the Pythagorean theorem, the distance is:

distance = √((x2 − x1)² + (y2 − y1)²)
The first step is to consider what a distance function should look like in Python. In other words, what are the inputs (parameters) and what is the output (return value)?
In this case, the inputs are two points, which you can represent using four numbers. The return value is the distance, which is a floating-point value.
Already you can write an outline of the function:
def distance(x1, y1, x2, y2):
    return 0.0
Obviously, this version doesn’t compute distances; it always returns zero. But it is syntactically correct, and it runs, which means that you can test it before you make it more complicated.
To test the new function, call it with sample arguments:
>>> distance(1, 2, 4, 6)
0.0
I chose these values so that the horizontal distance is 3 and the vertical distance is 4; that way, the result is 5 (the hypotenuse of a 3-4-5 triangle). When testing a function, it is useful to know the right answer.
At this point we have confirmed that the function is syntactically correct, and we can start adding code to the body. A reasonable next step is to find the differences x2 − x1 and y2 − y1. The next version stores those values in temporary variables and prints them.
def distance(x1, y1, x2, y2):
    dx = x2 - x1
    dy = y2 - y1
    print 'dx is', dx
    print 'dy is', dy
    return 0.0
If the function is working, it should display 'dx is 3' and 'dy is 4'.
Next we compute the sum of squares of dx and dy:
def distance(x1, y1, x2, y2):
    dx = x2 - x1
    dy = y2 - y1
    dsquared = dx**2 + dy**2
    print 'dsquared is: ', dsquared
    return 0.0
Again, you would run the program at this stage and check the output (which should be 25). Finally, you can use math.sqrt to compute and return the result:
def distance(x1, y1, x2, y2):
    dx = x2 - x1
    dy = y2 - y1
    dsquared = dx**2 + dy**2
    result = math.sqrt(dsquared)
    return result
If that works correctly, you are done. Otherwise, you might want to print the value of result before the return statement.
The final version of the function doesn’t display anything when it runs; it only returns a value. The print statements we wrote are useful for debugging, but once you get the function working, you should remove them. Code like that is called scaffolding because it is helpful for building the program but is not part of the final product.
When you start out, you should add only a line or two of code at a time. As you gain more experience, you might find yourself writing and debugging bigger chunks. Either way, incremental development can save you a lot of debugging time.
The key aspects of the process are:
1. Start with a working program and make small incremental changes. At any point, if there is an error, you should have a good idea where it is.
2. Use temporary variables to hold intermediate values so you can display and check them.
3. Once the program is working, you might want to remove some of the scaffolding or consolidate multiple statements into compound expressions, but only if it does not make the program difficult to read.
Use incremental development to write a function called hypotenuse that returns the length of the hypotenuse of a right triangle given the lengths of the two legs as arguments. Record each stage of the development process as you go.
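One possible final stage of that exercise might look like the sketch below (offered as an illustration, not the book's official solution):

import math

def hypotenuse(a, b):
    # Final stage of the incremental development: earlier stages returned 0.0
    # and printed the intermediate value a**2 + b**2 for checking.
    return math.sqrt(a**2 + b**2)

print hypotenuse(3, 4)    # should display 5.0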
As you should expect by now, you can call one function from within another. This ability is called composition.
As an example, we’ll write a function that takes two points, the center of the circle and a point on the perimeter, and computes the area of the circle.
Assume that the center point is stored in the variables xc and yc, and the perimeter point is in xp and yp. The first step is to find the radius of the circle, which is the distance between the two points. We just wrote a function, distance, that does that:
radius = distance(xc, yc, xp, yp)
The next step is to find the area of a circle with that radius; we just wrote that, too:
result = area(radius)
Encapsulating these steps in a function, we get:
def circle_area(xc, yc, xp, yp):
    radius = distance(xc, yc, xp, yp)
    result = area(radius)
    return result
The temporary variables radius and result are useful for development and debugging, but once the program is working, we can make it more concise by composing the function calls:
def circle_area(xc, yc, xp, yp):
    return area(distance(xc, yc, xp, yp))
6.4 Boolean functions
Functions can return booleans, which is often convenient for hiding complicated tests inside functions. For example:
def is_divisible(x, y):
    if x % y == 0:
        return True
    else:
        return False
It is common to give boolean functions names that sound like yes/no questions; is_divisible returns either True or False to indicate whether x is divisible by y.
Here is an example:
>>> is_divisible(6, 4)
False
>>> is_divisible(6, 3)
True
The result of the == operator is a boolean, so we can write the function more concisely by returning it directly:
def is_divisible(x, y):
    return x % y == 0
Boolean functions are often used in conditional statements:
if is_divisible(x, y):
    print 'x is divisible by y'
It might be tempting to write something like:
if is_divisible(x, y) == True:
    print 'x is divisible by y'
But the extra comparison is unnecessary.
Exercise 3 Write a function is_between(x, y, z) that returns True if x ≤ y ≤ z or False otherwise.
6.5 More recursion
We have only covered a small subset of Python, but you might be interested to know that this subset is a complete programming language, which means that anything that can be computed can be expressed in this language. Any program ever written could be rewritten using only the language features you have learned so far (actually, you would need a few commands to control devices like the keyboard, mouse, disks, etc., but that’s all).
Proving that claim is a nontrivial exercise first accomplished by Alan Turing, one of the first computer scientists (some would argue that he was a mathematician, but a lot of early computer scientists started as mathematicians). Accordingly, it is known as the Turing Thesis. For a more complete (and accurate) discussion of the Turing Thesis, I recommend Michael Sipser’s book Introduction to the Theory of Computation.
To give you an idea of what you can do with the tools you have learned so far, we’ll evaluate a few recursively defined mathematical functions. A recursive definition is similar to a circular definition, in the sense that the definition contains a reference to the thing being defined. A truly circular definition is not very useful:
vorpal: An adjective used to describe something that is vorpal.

If you saw that definition in the dictionary, you might be annoyed. On the other hand, if you looked up the definition of the factorial function, denoted with the symbol !, you might get something like this:

0! = 1
n! = n (n−1)!
This definition says that the factorial of 0 is 1, and the factorial of any other value, n, is n multiplied by the factorial of n−1.
So 3! is 3 times 2!, which is 2 times 1!, which is 1 times 0!. Putting it all together, 3! equals 3 times 2 times 1 times 1, which is 6.
If you can write a recursive definition of something, you can usually write a Python program to evaluate it. The first step is to decide what the parameters should be. In this case it should be clear that factorial takes an integer:

def factorial(n):
If the argument happens to be 0, all we have to do is return 1:
def factorial(n):
    if n == 0:
        return 1
Otherwise, and this is the interesting part, we have to make a recursive call to find the factorial of n−1 and then multiply it by n:
def factorial(n):
    if n == 0:
        return 1
    else:
        recurse = factorial(n-1)
        result = n * recurse
        return result
The flow of execution for this program is similar to the flow of countdown in Section 5.8. If we call factorial with the value 3:
Since 3 is not 0, we take the second branch and calculate the factorial of n-1...
    Since 2 is not 0, we take the second branch and calculate the factorial of n-1...
        Since 1 is not 0, we take the second branch and calculate the factorial of n-1...
            Since 0 is 0, we take the first branch and return 1 without making any more recursive calls.
        The return value (1) is multiplied by n, which is 1, and the result is returned.
    The return value (1) is multiplied by n, which is 2, and the result is returned.
The return value (2) is multiplied by n, which is 3, and the result, 6, becomes the return value of the function call that started the whole process.
Here is what the stack diagram looks like for this sequence of function calls:
The return values are shown being passed back up the stack. In each frame, the return value is the value of result, which is the product of n and recurse.
In the last frame, the local variables recurse and result do not exist, because the branch that creates them does not execute.
6.6 Leap of faith
Following the flow of execution is one way to read programs, but it can quickly become labyrinthine. An alternative is what I call the “leap of faith.” When you come to a function call, instead of following the flow of execution, you assume that the function works correctly and returns the right result.
In fact, you are already practicing this leap of faith when you use built-in functions. When you call math.cos or math.exp, you don’t examine the bodies of those functions. You just assume that they work because the people who wrote the built-in functions were good programmers.
The same is true when you call one of your own functions. For
example, in Section 6.4, we wrote a function called is_divisible. Once we have convinced ourselves that this function is correct—by examining the code and testing—we can use the function without looking at the body again.
The same is true of recursive programs. When you get to the recursive call, instead of following the flow of execution, you should assume that the recursive call works (yields the correct result) and then ask yourself, “Assuming that I can find the factorial of n−1, can I compute the factorial of n?” In this case, it is clear that you can, by multiplying by n.
Of course, it’s a bit strange to assume that the function works correctly when you haven’t finished writing it, but that’s why it’s called a leap of faith!
6.7 One more example
After factorial, the most common example of a recursively defined mathematical function is fibonacci, which has the following definition:

fibonacci(0) = 0
fibonacci(1) = 1
fibonacci(n) = fibonacci(n−1) + fibonacci(n−2)
Translated into Python, it looks like this:
def fibonacci (n):
    if n == 0:
        return 0
    elif n == 1:
        return 1
    else:
        return fibonacci(n-1) + fibonacci(n-2)
If you try to follow the flow of execution here, even for fairly small values of n, your head explodes. But according to the leap of faith, if you assume that the two recursive calls work correctly, then it is clear that you get the right result by adding them together.
6.8 Checking types
What happens if we call factorial and give it 1.5 as an argument?
>>> factorial(1.5)
RuntimeError: Maximum recursion depth exceeded
It looks like an infinite recursion. But how can that be? There is a base case—when n == 0. But if n is not an integer, we can miss the base case and recurse forever.
In the first recursive call, the value of n is 0.5. In the next, it is -0.5. From there, it gets smaller (more negative), but it will never be 0.
We have two choices. We can try to generalize the factorial function to work with floating-point numbers, or we can make factorial check the type of its argument. The first option is called the gamma function and it's a little beyond the scope of this book. So we'll go for the second.
We can use the built-in function isinstance to verify the type of the argument. While we’re at it, we can also make sure the argument is positive:
def factorial (n):
    if not isinstance(n, int):
        print 'Factorial is only defined for integers.'
        return None
    elif n < 0:
        print 'Factorial is not defined for negative integers.'
        return None
    elif n == 0:
        return 1
    else:
        return n * factorial(n-1)
The first base case handles nonintegers; the second catches negative integers. In both cases, the program prints an error message and returns None to indicate that something went wrong:
>>> factorial('fred')
Factorial is only defined for integers.
None
>>> factorial(-2)
Factorial is not defined for negative integers.
None
If we get past both checks, then we know that n is positive or zero, so we can prove that the recursion terminates.
This program demonstrates a pattern sometimes called a guardian. The first two conditionals act as guardians, protecting the code that follows from values that might cause an error. The guardians make it possible to prove the correctness of the code.
Breaking a large program into smaller functions creates natural checkpoints for debugging. If a function is not working, there are three possibilities to consider:
- There is something wrong with the arguments the function is getting; a precondition is violated.
- There is something wrong with the function; a postcondition is violated.
- There is something wrong with the return value or the way it is being used.
To rule out the first possibility, you can add a print statement at the beginning of the function and display the values of the parameters (and maybe their types). Or you can write code that checks the preconditions explicitly.
If the parameters look good, add a print statement before each return statement that displays the return value. If possible, check the result by hand. Consider calling the function with values that make it easy to check the result (as in Section 6.2).
If the function seems to be working, look at the function call to make sure the return value is being used correctly (or used at all!).
Adding print statements at the beginning and end of a function can help make the flow of execution more visible. For example, here is a version of factorial with print statements:
def factorial(n):
    space = ' ' * (4 * n)
    print space, 'factorial', n
    if n == 0:
        print space, 'returning 1'
        return 1
    else:
        recurse = factorial(n-1)
        result = n * recurse
        print space, 'returning', result
        return result
space is a string of space characters that controls the indentation of the output. Here is the result of factorial(5):
                    factorial 5
                factorial 4
            factorial 3
        factorial 2
    factorial 1
factorial 0
returning 1
    returning 1
        returning 2
            returning 6
                returning 24
                    returning 120
If you are confused about the flow of execution, this kind of output can be helpful. It takes some time to develop effective scaffolding, but a little bit of scaffolding can save a lot of debugging.
Draw a stack diagram for the following program. What does the program print?
def b(z):
    prod = a(z, z)
    print z, prod
    return prod

def a(x, y):
    x = x + 1
    return x * y

def c(x, y, z):
    sum = x + y + z
    pow = b(sum)**2
    return pow

x = 1
y = x + 1
print c(x, y+3, x+y)
The Ackermann function, A(m, n), is defined as follows:

A(m, n) = n + 1                      if m = 0
A(m, n) = A(m − 1, 1)                if m > 0 and n = 0
A(m, n) = A(m − 1, A(m, n − 1))      if m > 0 and n > 0
Write a function named ack that evaluates Ackerman’s function. Use your function to evaluate ack(3, 4), which should be 125. What happens for larger values of m and n?
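A direct translation of the definition above into Python might look like the following sketch (one possible solution, not the book's); for m ≥ 4 or large n it quickly exceeds Python's default recursion limit.

def ack(m, n):
    # Direct translation of the three cases in the definition above.
    if m == 0:
        return n + 1
    elif n == 0:
        return ack(m-1, 1)
    else:
        return ack(m-1, ack(m, n-1))

print ack(3, 4)    # should display 125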
A palindrome is a word that is spelled the same backward and forward, like “noon” and “redivider”. Recursively, a word is a palindrome if the first and last letters are the same and the middle is a palindrome.
The following are functions that take a string argument and return the first, last, and middle letters:
def first(word):
    return word[0]

def last(word):
    return word[-1]

def middle(word):
    return word[1:-1]
We’ll see how they work in Chapter 8. Write a function called is_palindrome that takes a string argument and returns True if it is a palindrome and False otherwise.
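A recursive sketch built on the helper functions above (one possible solution, offered as an illustration rather than the book's answer):

def is_palindrome(word):
    # A word of zero or one letters is a palindrome by definition (base case).
    if len(word) <= 1:
        return True
    # Otherwise the first and last letters must match, and the middle must be a palindrome.
    return first(word) == last(word) and is_palindrome(middle(word))

print is_palindrome('noon')        # True
print is_palindrome('redivider')   # True
print is_palindrome('python')      # False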
Exercise 7 A number, a, is a power of b if it is divisible by b and a/b is a power of b. Write a function called is_power that takes parameters a and b and returns True if a is a power of b.
The greatest common divisor (GCD) of a and b is the largest number that divides both of them with no remainder.
One way to find the GCD of two numbers is Euclid’s algorithm, which is based on the observation that if r is the remainder when a is divided by b, then gcd(a, b) = gcd(b, r). As a base case, we can consider gcd(a, 0) = a.
Write a function called gcd that takes parameters a and b and returns their greatest common divisor.
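A direct implementation of Euclid's algorithm as described above might look like this sketch (one possible solution):

def gcd(a, b):
    # Base case from the observation above: gcd(a, 0) = a.
    if b == 0:
        return a
    # Otherwise gcd(a, b) = gcd(b, r), where r is the remainder of a divided by b.
    return gcd(b, a % b)

print gcd(12, 8)     # 4
print gcd(36, 120)   # 12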
| http://www.greenteapress.com/thinkpython/html/book007.html | 13
THE PYTHAGOREAN DISTANCE FORMULA
BASIC TO TRIGONOMETRY and calculus is the theorem that relates the squares drawn on the sides of a right-angled triangle. Credit for proving the theorem goes to the Greek philosopher Pythagoras, who lived in the 6th century B. C.
Here is the statement of the theorem:
In a right triangle the square drawn on the side opposite the right angle is equal to the sum of the squares drawn on the sides that make the right angle.
That means that if ABC is a right triangle with the right angle at A, then the square drawn on BC opposite the right angle, is equal to the two squares together on CA, AB.
In other words, if it takes one can of paint to paint the square on BC, then it will also take exactly one can to paint the other two squares.
The side opposite the right angle is called the hypotenuse ("hy-POT'n-yoos"; which literally means stretching under).
Algebraically, if the hypotenuse is c, and the sides are a, b:
a² + b² = c².
For a proof, see below.
Problem 1. State the Pythagorean theorem in words.
In a right triangle the square on the side opposite the right angle will equal the sum of the squares on the sides that make the right angle.
Problem 2. Calculate the length of the hypotenuse c when the sides are as follows.
a) a = 5 cm, b = 12 cm. Then c = √(25 + 144) = √169 = 13 cm.
b) a = 3 cm, b = 6 cm. Then c = √(9 + 36) = √45 = 3√5 cm.
Since 9 is a square number, and a common factor of 9 and 36, then we may anticipate simplifying the radical by writing 9 + 36 = 9(1 + 4) = 9· 5.
We could, of course, have written 9 + 36 = 45 = 9· 5. But that first wipes out the square number 9. We then have to bring it back.
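For readers who want to check such calculations numerically, here is a short Python sketch (not part of the original lesson) that computes the hypotenuse from the two sides:

import math

def hypotenuse(a, b):
    # c = sqrt(a**2 + b**2), from the Pythagorean theorem.
    return math.sqrt(a**2 + b**2)

print(hypotenuse(5, 12))   # Problem 2a): 13.0
print(hypotenuse(3, 6))    # Problem 2b): about 6.708, that is, 3 times sqrt(5)

The decimal output 6.708... only approximates the exact answer 3√5.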
The distance d of a point (x, y) from the origin
According to the Pythagorean theorem, and the meaning of the rectangular coördinates (x, y),
d² = x² + y².
"The distance of a point from the origin is equal to the square root of the sum of the squares of its coördinates": d = √(x² + y²).
Example 1. How far from the origin is the point (4, −5)?
d = √(4² + (−5)²) = √(16 + 25) = √41.
Problem 3. How far from the origin is the point (−5, −12)?
The distance between any two points
How far is it from (4, 3) to (15, 8)?
Consider the distance d as the hypotenuse of a right triangle. Then according to Lesson 31, Problem 5, the coördinates at the right angle are (15, 3).
Therefore, the horizontal leg of that triangle is simply the distance from 4 to 15: 15 − 4 = 11.
The vertical leg is the distance from 3 to 8: 8 − 3 = 5. Therefore, d² = 11² + 5² = 121 + 25 = 146, so that d = √146.
To find a formula, let us use subscripts and label the two points as
(x1, y1) ("x-sub-1, y-sub-1") and (x2, y2) ("x-sub-2, y-sub-2").
The subscript 1 labels the coördinates of the first point; the subscript 2 labels the coördinates of the second. We write the absolute value because distance is never negative.
Here then is the Pythagorean distance formula between any two points:
d = √( (x2 − x1)² + (y2 − y1)² )
It is conventional to denote the difference of x-coördinates by the symbol Δx ("delta-x"):
Δx = x2 − x1
Δy = y2 − y1
Example 2. Calculate the distance between the points (1, 3) and (4, 8).
d = √((4 − 1)² + (8 − 3)²) = √(3² + 5²) = √(9 + 25) = √34.
Note: It does not matter which point we call the first and which the second. Alternatively, we could take the differences in the other order: 1 − 4 = −3 and 3 − 8 = −5. But (−3)² = 9, and (−5)² = 25. The distance between the two points is the same.
Example 3. Calculate the distance between the points (−8, −4) and (1, 2).
Here, Δx = 1 − (−8) = 9 and Δy = 2 − (−4) = 6. Therefore d = √(9² + 6²) = √(81 + 36) = √117 = √(9 · 13) = 3√13.
Problem 4. Calculate the distance between (2, 5) and (8, 1)
Problem 5. Calculate the distance between (−11, −6) and (−16, −1)
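Here is a similar Python sketch (again, not part of the original lesson) for the distance formula, using Δx and Δy as defined above; it can be used to check Examples 2 and 3 and Problems 4 and 5:

import math

def distance(x1, y1, x2, y2):
    # Delta-x and delta-y, as defined above.
    dx = x2 - x1
    dy = y2 - y1
    return math.sqrt(dx**2 + dy**2)

print(distance(1, 3, 4, 8))      # Example 2: sqrt(34), about 5.83
print(distance(-8, -4, 1, 2))    # Example 3: 3*sqrt(13), about 10.82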
A proof of the Pythagorean theorem
Let a right triangle have sides a, b, and hypotenuse c. And let us arrange four of those triangles to form a square whose side is a + b. (Fig. 1)
Now, the area of that square is equal to the sum of the four triangles, plus the interior square whose side is c.
Two of those triangles taken together, however, are equal to a rectangle whose sides are a, b. The area of such a rectangle is a times b: ab. Therefore the four triangles together are equal to two such rectangles. Their area is 2ab.
As for the square whose side is c, its area is simply c². Therefore, the area of the entire square is
c² + 2ab. . . . . . . (1)
At the same time, an equal square with side a + b (Fig. 2) is made up of a square whose side is a, a square whose side is b, and two rectangles whose sides are a, b. Therefore its area is
a² + b² + 2ab.
But this is equal to the square formed by the triangles, line (1):
a² + b² + 2ab = c² + 2ab.
Therefore, on subtracting the two rectangles 2ab from each square, we are left with
a² + b² = c².
That is the Pythagorean theorem.
Weather and Climate
Douglas R. Powell and Harold E. Klieforth
The highest range of the many mountain ranges that are arranged en echelon in the Great Basin between the Sierra Nevada and the Wasatch Mountains is the White Mountains, situated along the California-Nevada border about 225 mi (362 km) east of the Pacific Coast. Climatically, this location is transitional between the moderating maritime influence of the Pacific Ocean and the more extreme continental influence of interior North America. The very large variation in elevation within the range, from 4,000–5,000 ft (1,200–1,500 m) at the base to 14,246 ft (4,343 m) at the summit, results in rapid and significant changes in temperature and precipitation within short horizontal distances. Any air mass reaching the White Mountains must pass over an assemblage of other ranges that vary in width and altitude. By far the most important of these mountain barriers is the equally high Sierra Nevada, lying immediately to the west, which, by inducing air to rise, clouds to form, and precipitation to fall, intercepts some of the moisture from Pacific storms in the winter half of the year. To the north, east, and south are a series of lower ranges that have lesser climatic influence. Topographically, the least impeded avenue of approach for an air mass is from the southeast, but significant movement of air from this direction is uncommon. When it does occur, normally in July or August, the result may be spectacular thunderstorms with high precipitation intensities.
Atmospheric Circulation Patterns
Unlike other features of the physical environment, the gases that constitute the atmosphere are invisible, so that it is necessary to use indirect means to describe graphically and to map continuously changing atmospheric conditions. Atmospheric motions are complex, but when studied they are found to follow patterns. To understand the weather regimes of the White-Inyo mountains, it is helpful to recognize three principal scales of motion, each roughly an order of magnitude greater than the next.
The first and largest of these is the synoptic scale , so called because it is analyzed from numerous soundings, measurements, and observations made at the same time at hundreds of locations around the world. This is the scale of the familiar weather maps seen in daily newspapers and on television. These surface and upper-air charts cover horizontal distances of 100 to 2,000 mi (160 to 3,200 km) and depict such features as low- and high-pressure areas, cyclones, and anticyclones, air masses, weather fronts, and regions of precipitation. Generally, these features progress in a predictable manner,
and from them it is possible to produce, by computerized prediction models, future patterns of airflow, moisture, temperature, clouds, and precipitation. These are used to forecast, with varying degrees of certainty, weather conditions for a particular region for a few days following receipt of the climatic data.
The second scale of atmospheric motion, the mesoscale , describes airflow patterns over distances of, say, 1 to 100 mi (or 1.6 to 150 km), a range that includes many of the spectacular cloud formations and weather conditions experienced in and near the mountains of eastern California (Plates 1.1–1.8). These phenomena are closely related to the synoptic flow patterns but are controlled and shaped by major terrain features such as the Sierra Nevada and the White-Inyo Range.
The third scale, the toposcale , applies to weather and climatic conditions within a distance of, say, 1 mi (1.6 km) that vary in relation to prominent features shown on local (15-minute or 7.5-minute series) topographic maps. Thus, under differing synoptic and mesoscale conditions different air temperatures, wind velocities, and snow accumulations are measured, whether the location is on a mountain ridge or in a canyon, on a windward or leeward slope, or in a broad valley.
In the sections that follow we discuss the principal patterns that affect the White-Inyo Range, its inhabitants, and its visitors in each of these different scales. We begin with a survey of synoptic-scale airflow and its relation to seasonal weather.
Figure 1.1 shows the principal airflow patterns and air-mass types or source regions that determine regional weather in different seasons of the year. The arrows indicate the directions of air movement near and above the crest of the major mountain ranges, at levels between 10,000 and 20,000 ft (3 to 6 km) above sea level. The open circles are locations from which twice-daily (near 4 A.M. and 4 P.M. PST) rawinsonde balloon ascents are made to obtain data on air temperature, humidity, and wind velocity, from which upper-air (e.g., 500 mb) weather maps are plotted and analyzed. The black circle indicates the location of the Bishop Airport, the nearest (3 mi, or 5 km) National Weather Service station to the White-Inyo Range.
The air that flows across California at any time of year is most likely to have passed over some part of the Pacific Ocean. In summer the Pacific Anticyclone (a large, slow-moving clockwise whirl of air) lies just west of California, bringing an onshore flow of cool marine air, stratus clouds, and fog to the coast and mostly clear, dry air to the Sierra and White-Inyo Range. During much of the summer the Great Basin Anticyclone develops over the warm plateau region of Nevada and Utah. When this whirl expands and shifts westward, a flow of moist maritime tropical air from the Gulf of California or the Gulf of Mexico may persist for a few days before the normally dry Pacific flow reasserts itself. Thus, during the summer season the mountainous terrain of eastern California and western Nevada is contested for by two air masses, with that from the northern or central Pacific usually prevailing.
During fall, winter, and spring a series of traveling upper-air troughs (cyclonic bends with counterclockwise flow) and ridges (anticyclonic bends with clockwise flow) cross California and Nevada. Ridges and anticyclones usually bring subsiding air, few clouds, and "fair" weather, whereas the troughs and associated cyclones, low-pressure centers, and fronts bring much cloudiness and widespread precipitation. Fronts are boundaries between converging air masses from different source regions. The primary air masses affecting California are cold maritime polar air from the Gulf of Alaska and warmer, moist maritime subtropical air from lower latitudes. Occasionally there are invasions of cold continental polar air from northern Canada or the Rocky Mountains.
Cloud formation and precipitation result primarily from ascending motion in moist air. When air rises, it expands and cools. This causes its invisible water vapor to
condense, forming small cloud droplets. As the ascending motion continues, the clouds thicken and the droplets grow larger to form raindrops. If it is sufficiently below freezing (32°F, or 0°C), snow crystals form, which, when heavy enough, fall from the clouds and reach the ground as rain or snow. The precipitation that falls on the mountains each year is a result of four principal mechanisms of upward-moving air: general ascent produced by widespread horizontal convergence in cyclonic flow, more intense lifting in frontal zones, strong "orographic," or terrain-induced, lifting over the windward slopes of mountains, and thermal convective instability triggered by the ascending motion, which causes the cloud to billow upward and precipitate with greater intensity. Although heavy precipitation (40 to 80 in or 100 to 200 cm of water annually) falls on the upper western slope of the Sierra, the region immediately leeward, including the Owens Valley, the White-Inyo Range, and much of western Nevada, is in the so-called rain shadow of the Sierra and receives much less precipitation. The drying of the air results from both the loss of moisture due to precipitation on the windward slope and the adiabatic warming induced by descent over the leeward slope.
Cold fronts, moving in such a way that cold air replaces warmer air at the surface, usually approach from the northwest or the west. Before the passage of the front and its lagging upper-air trough, the airflow across the mountains is from the west or southwest, while surface winds in the valleys and basins are mild and from the south. After frontal passage the upper flow becomes northwesterly to northerly, and cold northerly to northeasterly winds sweep across the mountain ridges and along the valleys. The Sierra Nevada has a profound effect on most fronts, causing them to stall west of the crest while their northern sectors move more rapidly across Oregon and then southward over northern Nevada. Thus, many fronts converge on the Sierra Nevada and White-Inyo Range in nutcracker fashion, with the cold air reaching the leeward valleys and basins from the north before the cold air from the west can surmount the High Sierra.
When a surface low-pressure center forms in western Nevada accompanied by an upper trough that deepens excessively to form a cyclone over the region — a weather pattern known locally as a "Tonopah low" — a northeasterly to southeasterly flow often brings continental polar air or recycled maritime polar air, low clouds, and snowfall to the White-Inyo Range. When such storms involve moist Pacific air, they usually bring heavy snowfall to the region; some of the biggest snowstorms recorded in the White Mountains have occurred in such circulation patterns. Precipitation from closed cyclones over the region is most frequent in spring, resulting in a spring (April or May) precipitation maximum in much of the Great Basin, in contrast to the pronounced winter maximum in the Sierra Nevada.
On infrequent occasions, usually several years apart (e.g., January 1937, January 1949, December 1972, February 1989, and December 1990), a long northerly fetch of air may bring an invasion of true Arctic air from interior Alaska or the Yukon. These episodes bring record cold temperatures to the White-Inyo Range and adjacent valleys; at such times minimum temperatures may dip to -25°F (-31°C) or below.
Most winters include one or two episodes of "warm storms," periods of a few to several days in which very moist tropical air reaches California from the vicinity of Hawaii. In these events the freezing level may be above 10,000 ft (3,000 m), and the heavy rainfall may result in widespread flooding in much of California. It is during such storms that heavy rime icing may form on trees, structures, and power lines on high mountain ridges. This is caused by the combined effect of strong winds and supercooled clouds (composed of water droplets at air temperatures below freezing). The cloud droplets freeze on contact, building great formations of ice that grow into the direction of the wind.
Conversely, "cold storms" bring snow to low elevations, including the floor of Owen Valley and desert areas to the south and east of the White-Inyo Range. Major westerly storms that last for two or three days bring heavy accumulations of snow — a foot (30 cm) or more in the valleys, and two or three times as much at the highest elevations. Very cold storms from the northwest contain less water vapor, are of shorter duration, and usually bring only a few inches (several centimeters) of snow.
Mountain Lee Waves
When a cold front approaches California from the northwest and the westerly airflow increases in speed over the Sierra crest, spectacular "stationary" clouds are usually seen over the leeward valleys. These are manifestations of a mountain lee wave, as it is known (Fig. 1.2). If the ridges and troughs of the horizontal airflow pattern are likened to the bends or meanders of a stream, the lee wave phenomena are analogous to the falls and ripples. Figure 1.3 shows a typical pattern of airflow and cloud forms in a strong lee wave. Air flowing over the Sierra Nevada plunges downward, then upward, and then downward again in a series of crests and troughs. The wavelength depends on the airflow characteristics, mainly the variation of air temperature with height (lapse rate) and the increase of wind speed with height (wind shear). The amplitude is greatest in strong waves and in cases where the vertical flow pattern is in resonance with the terrain, as, for example, when the second wave crest lies over the next mountain range downwind, such as the White-Inyo mountains, east of Owens Valley.
Updrafts and downdrafts in a strong lee wave often have speeds of 2,000 ft (600 m) per minute, sometimes exceeding 4,000 ft (1,200 m) per minute. Where the air descends, it warms, and the relative humidity decreases. The warm, dry winds, which may reach speeds of 60 mph (30 m/s) or greater at the surface, are known as foehn winds. The stratocumulus cloud deck over the Sierra Nevada is called a foehn-wall or cap cloud, and its downslope extension is known as a cloudfall. After it evaporates, the invisible moisture is cooled again in the ascending current and forms the turbulent cumuliform roll cloud. Looking at the roll cloud, an observer has the impression that it is rotating, but this is an illusion caused by the wind shear. However, below the roll cloud there is commonly a true rotor circulation, which brings easterly winds at
the surface. On rare occasions a very strong lee wave will be in resonance with the terrain so that the first wave crest lies over the White-Inyo Range.
Above the roll cloud, there may be one layer or several decks of smooth, lens-shaped altocumulus clouds which appear stationary but through which the air is passing at 50–100 mph (22–45 m/s); the cloud droplets form at the windward edge and evaporate at the leeward edge. Soaring pilots make use of a mountain lee wave by flying into the wind and ascending in the updraft zone. In Figure 1.3 the dotted line shows a typical path of a sailplane: a line of flight under the roll cloud during airplane tow, release point at "x," and then a line of flight upward. Several flights above altitudes of 40,000 ft (12,000 m) have been made in lee waves by pilots equipped with oxygen to survive the low pressure (200 to 150 mb) and with warm clothing to withstand the cold temperatures (-94°F or -70°C).
Wave clouds may appear in any month of the year, but they are most often seen in late winter and in spring. They usually reach their maximum development in
midafternoon and are most beautiful at sunset, when the highest clouds, at 30,000–40,000 ft (9,000–12,000 m), remain colorful long after the sunlight has left the leeward valley.
A local wind of another sort may sometimes be observed by motorists in the winter season in the Owens Valley near Olancha. During strong lee wave conditions or the passage of a cold front, a great horizontal cyclonic eddy may develop about Owens (dry) Lake; there will be a northerly (i.e., from the north) wind along the route of Highway 395 but, as evidenced by blowing dust, a southerly wind on the east side toward Keeler. When the surface flow is southerly in Owens Valley, the dust from Owens Lake may be carried far north of Bishop, lowering visibility so that the mountains are nearly obscured, and the alkali dust is sometimes tasted by pilots flying at 12,000 ft (3,600 m)! Similar phenomena occur in other arid basins of California and Nevada.
At any time from early May to early October, there may be an incursion of tropical air from the south; then thunderstorms are possible. The intense heating of the arid Southwest during the summer months creates the upper-air anticyclone and surface low-pressure area that provide the circulation necessary for the northward flow of tropical air. In eastern California and Nevada, this summer monsoon is best developed during the period from early July to late August.
At first, the moisture enters the area at the high levels, and a thundery spell is commonly heralded by the appearance of rather exotic cirrus clouds from the south quadrant. Within a day or two, if the flow persists, the air at middle and low levels is also moist, and daily thundershowers can occur. These are strongly diurnal in their development; that is, they develop as a result of the daily heating of the mountain slopes by the sun, and they decline after sunset.
The appearance of patchy, turreted altocumulus clouds at sunrise is a good indication of possible thunderstorms later in the day. Heating of the rocky mountain slopes causes the air to rise toward the crests, and soon cumulus clouds form above these upslope currents. The clouds continue to rise upward, becoming what the weather observer calls towering cumulus. Near midday their tops develop a fibrous appearance indicative of ice crystal formation, and they are said to be glaciated. Soon they develop anvil-shaped cirrus tops with streamers of ice clouds stretching downwind at those levels (30,000–40,000 ft, or 9,000–12,000 m). At this time, lightning flashes from cloud to mountain and heavy local showers of rain and, commonly, graupel , or pea-sized hail, fall on the crests and ridges of the ranges. By this time hikers and climbers should have taken shelter.
Later, as downdrafts of cool air predominate, the thunderstorm ceases, and as the sky brightens to the west, the clouds begin to thicken over the leeward valley, where the day-long heating has created rising thermal currents. Often at 5:00 P.M. or 6:00 P.M. PST a brief thundershower is experienced in valley locations. As the shower moves eastward and the lowering sun in the west shines on the dark cloud and rain shafts, a brilliant rainbow is visible. Finally, when the sky has cleared and stars have appeared, lightning might continue to flash in the east if a nocturnal storm continues over central Nevada.
During some summers there are numerous thunderstorms, as in 1955, 1956, 1967, 1976, 1983, and 1984; in other years there are very few. It is difficult to predict exactly where the storms will occur, as this depends on subtle differences in wind velocity, amount of moisture, rate of growth, and topography. In the morning hours, the eastern slopes of the mountains are heated, the warm currents rise, and an easterly upslope wind forms in the valleys. If the upper synoptic flow has an easterly component, the clouds will develop even more rapidly. In the afternoon, the western slopes of the mountains are heated more effectively by the sun, and, especially if aided by a westerly breeze, the cloud development intensifies there. Once the cloud has formed, latent heat is released, which increases the cloud's buoyancy and causes it to rise more vigorously. Because these storms are so localized, commonly affecting a single canyon, intense cloudbursts may cause flash floods. These commonly occur when cloud bases are below the mountain tops. Such events often go unobserved in the Inyo Mountains and remote Nevada ranges and are discovered some days or weeks later. The damage is usually greater there, though, because roads commonly follow the canyons and washes.
As a general rule, it does not rain on summer nights in the mountains, but there are exceptions. Occasionally, the remnants of tropical storms called "easterly waves"
are carried northward along the Sierra and over much of Nevada. The cloudiness is general, and precipitation may be widespread, continuing at night and commonly accompanied by low clouds and fog, lightning, and thundershowers. Such episodes occurred in July 1956 and in August 1965.
The hiker's pocket altimeter and the altimeter in an aircraft are merely barometers measuring atmospheric pressure and indicating the equivalent height according to average weather conditions. Pressure, which always decreases with altitude, is usually measured in millibars, as shown at the right side of Table 1.1, or in inches (or millimeters) of mercury. Some equivalent pressures and heights for average atmospheric conditions in the California-Nevada region are listed in Table 1.1. The physiological effects of increasing altitude and decreasing pressure are mainly caused by the reduced amount of oxygen and the greater effort one has to exert to get enough oxygen into the lungs. Most hikers have to become acclimated for a day or two before they can be comfortable above 10,000 ft.
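As a rough numerical companion to the discussion above, the sketch below converts height to pressure using the standard-atmosphere approximation. This formula is an assumption chosen for illustration; it is not the table of California-Nevada values referred to as Table 1.1:

def standard_pressure_mb(height_m):
    # ICAO standard atmosphere: 1013.25 mb at sea level, 6.5 degrees C per km
    # lapse rate; a reasonable approximation up to about 11,000 m.
    return 1013.25 * (1.0 - 2.25577e-5 * height_m) ** 5.25588

for h in (1250, 3095, 3800, 4343):   # Bishop, WM I, WM II, White Mountain Peak (m)
    print("%d m: about %.0f mb" % (h, standard_pressure_mb(h)))

The computed values (roughly 870, 690, 630, and 590 mb) show why breathing is noticeably harder at the crest of the range than on the valley floor.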
At most times and places, air temperature also decreases with height because the atmosphere is mainly heated from below. The atmosphere does not absorb much of the direct radiation from the sun, but the earth's surface does and reradiates the energy at a longer wavelength, which the atmosphere absorbs mainly through two of its variable gases: water vapor and carbon dioxide. The average lapse rate (generally, the decrease of temperature with altitude) is approximately 3.6°F per 1,000 ft (6.5°C per km). At midday in summer with strong thermal activity, the lapse rate on the slopes approaches the adiabatic value of about 5.4°F per 1,000 ft (10°C per km) of ascent. Thus, on days when the temperature is 90°F (32°C), for example, in the Owens Valley, it can be 45° to 55°F (7° to 13°C) on Mt. Whitney or White Mountain Peak (both above 14,000 ft or 4,000 m). On clear nights, on the other hand, cooler air with its greater density sinks and collects in the valleys, forming inversions in which the temperature increases with height in the lowest few hundred feet (≈ 100 m) above the ground.
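A small sketch (an illustration only, not a calculation from the chapter's data) shows how the two lapse rates quoted above bracket the summit temperatures mentioned for a 90°F day in the Owens Valley; the round figures of 4,000 ft for the valley floor and 14,000 ft for the peaks are taken from the text:

def temp_at_elevation_f(valley_temp_f, valley_ft, summit_ft, lapse_per_1000ft):
    # Temperature falls roughly linearly with height at the given lapse rate.
    drop = (summit_ft - valley_ft) / 1000.0 * lapse_per_1000ft
    return valley_temp_f - drop

print(temp_at_elevation_f(90, 4000, 14000, 3.6))   # average lapse rate: about 54 F
print(temp_at_elevation_f(90, 4000, 14000, 5.4))   # dry adiabatic rate: about 36 F

The 45° to 55°F range cited in the text falls roughly between these two simple estimates.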
Topography influences local weather and climate in many ways, examples of which will be noted throughout the following sections of this chapter.
Weather Observations and Climatological Data
Sparseness of permanent population, little human use of the area, and inaccessibility of the terrain have led to a scarcity of reliable weather records. Students and aficionados of mountain weather and climate always bemoan the paucity of weather instruments and observers at high elevations. Figure 1.4 shows the locations and elevations for weather stations in and immediately adjacent to the White Mountains. Table 1.2 gives the average monthly and annual temperatures for these stations, and Table 1.3 lists the corresponding precipitation data. Unfortunately, there are gaps in the records from most of the stations, especially during extreme weather events when such data can be very useful. Most reliable is Bishop (see Fig. 1.4), at 4,108 ft (1,250 m) in Owens Valley to the west of the White-Inyo Range, operated by the National Weather Service since 1947. The other stations are maintained by cooperative agencies, institutions, or individuals. The only continuous records from within the mountains proper are from stations operated by the White Mountain Research Station (WMRS) — White Mountain I in the valley of Crooked Creek, at 10,150 ft (3,095 m), and White Mountain II on the east slope of Mt. Barcroft, at 12,470 ft (3,800 m). Regrettably, maintenance costs and problems have closed White Mountain II during the winter months from January 1980 to the present, and the record from White Mountain I stopped after 1977. Automated weather-recording equipment is now being installed in the White Mountains by the WMRS. Dyer, at 4,975 ft (1,517 m) in Fish Lake Valley, and Deep Springs College, at 5,225 ft (1,593 m) in Deep Springs Valley, are representative of the lowland valleys to the east and southeast of the mountains. Benton, 5,377 ft (1,640 m), and Basalt, 6,358 ft (1,940 m), to the northwest and
north of the mountains have incomplete records and were little used in this climatic analysis. Unless otherwise noted in the tables, the period of record is from 1956 through 1985; 30 years is generally considered by climatologists to be the minimum length of time necessary to establish meaningful averages.
Following is a summary of three important components of weather and climate in and around the White Mountains: temperature, precipitation, and wind. Data from the above-mentioned stations serve as a basis for the discussion. Monthly average temperatures are calculated by averaging the daily maximum readings for the month, averaging the daily minima, adding these two totals, and dividing by 2. In the following discussion, winter includes December, January, and February; spring March, April, and May; summer June, July, and August; and fall September, October, and November. Both authors of this chapter have many years of direct observation
of weather events in the White Mountains and have used personal experience and knowledge to augment interpretation of the formal record, especially in the large portions of the range not covered by the recorded data.
The summer visitor to Owens Valley at midday may well think the valley and the apparently barren, flat-lighted slopes of the White Mountains to the east to be under the full possession of the sun, with a forbidding aspect of heat. The same visitor to elevations above 10,000 ft (3,049 m) would likely comment — perhaps complain — about coolness, or even the cold. In July, the warmest month, average daily temperatures are 70°F (21°C) or higher on all the adjacent valley floors; the maximum daily temperature reached 109°F (43°C) at Bishop in June 1969 and July 1982, 106°F (41°C) in June 1961 at Dyer, and 104°F (40°C) at Deep Springs in June 1964. The July average is 51.7°F (10.9°C) at White Mountain I and 46.7°F (8.2°C) at White Mountain II, with maximum summer temperatures reaching 79°F (26°C) at I in July 1967, and 73°F (23°C) at II in August 1978. The decline in average July temperatures with an increase in altitude is close to the 3.6°F drop per 1,000 ft (6.5°C per km) rise in elevation regarded as the normal lapse rate throughout the world. It is a rarity for any of the summer months at either mountain station not to have one or more readings of 32°F (0°C) or below, and even the adjacent valleys have such readings in June and August. For any location on earth, trees are generally absent if the average of the warmest month is below 50°F (10°C). This critical value occurs between White Mountain I and White Mountain II; the former has trees, and the latter has none.
In January, the coldest month, the average temperature is 37.2°F (2.9°C) at Bishop, with a winter low of -7°F (-21.5°C) in January 1982. Temperatures are somewhat colder in the valleys to the east, with Dyer at 31.4°F (-0.3°C) with a winter low of -21°F (-29.5°C) in January 1962 and 1974, and Deep Springs at 30.6°F (-0.8°C), with a low of -10°F (-23.5°C) in January 1973. Fish Lake and Deep Springs Valley are colder than Owens Valley because they are at higher elevations, they are farther from the maritime influence of the Pacific Ocean, and they are more open to invasions of cold air from the north and east. At White Mountain I January is the coldest month, with an average of 20.6°F (-6.3°C) and a winter low of -25°F (-31.5°C) in March 1968. February is the coldest month at White Mountain II, with an average of 14.8°F (-9.6°C) and a winter low of -35°F (-37°C) in March 1964. In winter there is less of a decrease in monthly average temperatures with a rise in elevation than in summer. Cold air settling in the lower valleys at night from radiational cooling and downslope drainage seems to be the major reason: the temperature difference between the valleys and the slopes or highlands is less pronounced for minimum readings than for daily maxima, especially on clear nights. This phenomenon is characteristic of high mountains everywhere. At White Mountain I, located in a valley, average minima for the three coldest months
(January, February, and March) are 2°F (1°C) warmer than at White Mountain II, which is 2,320 ft (707 m) higher on a slope; average maxima for those same months are 9.5°F (5.3°C) higher. Thus, a typical early winter morning at the lower station would be just about as cold as at the higher station, but midafternoon would be noticeably warmer.
There is also a much greater variation in average monthly temperatures in winter than in other seasons at all stations and elevations. At some time during most winters, there is an invasion of continental polar or arctic air from northern Canada and Alaska, which brings below-normal temperatures. The frequency and duration of these incursions of cold air vary greatly from year to year; on rare occasions they dominate the weather for weeks. The last such major occurrence was in January and February 1949, and in January 1937 before that. In 1937, a minimum of -42°F (-41°C) was recorded in January at Fish Lake Valley; readings close to that may have occurred within the mountains. In general, the White Mountains are protected from common and prolonged cold air invasions by the many mountain ranges to the north and east.
February and March averages are lower than those for January at White Mountain II, and they are only slightly higher than January averages at White Mountain I. This is not the case at the lowland stations in the area or at most inland stations in cold-climate regions elsewhere in the United States, where January is nearly always the coldest month, with definite warming occurring in February and, particularly, March. At the two high-elevation stations March is colder than December, an anomaly for the latitude. A continuous snow cover during the winter and spring could partially account for this delay in warming at high elevation, but other regions in the United States with persistent snow cover do not show this effect. It is probably the result of the frequent passage of closed low-pressure systems in late winter and spring over the White Mountain area. An analysis of 500 mb (near 18,000 ft or 5,500 m) weather maps shows that these closed lows commonly contain very cold air, which could affect the highest elevations of the White Mountains. At lower elevations the increased solar radiation from longer days and a higher angle of the sun would offset the influence of the cold air aloft. Thus, there is a pronounced lag in temperature increase in late winter and early spring at high elevation in the White Mountains. Significant warming there usually does not occur until mid-May, when the frequency and intensity of the upper-level closed lows diminish. Major cooling throughout the area generally comes in late September and October, at a slower rate at high elevation than the warming in late spring, with a marked decrease in November at all stations.
Midwinter temperatures at White Mountain I are comparable to those in central Iowa, and at White Mountain II to southern Minnesota or Anchorage, Alaska. Summer temperatures at White Mountain I are similar to those at the northern limit of trees in Alaska and Canada, and at White Mountain II to the treeless Arctic Slope of Alaska. At comparable elevations and latitudes, temperatures in the White Mountains are generally warmer in winter and cooler in summer than in Utah or Colorado, reflecting less maritime influence farther inland. From meager data and personal experience, there seem to be no significant temperature differences between the White Mountains and the Sierra Nevada at similar elevations, although the Sierra might be expected to have warmer winters and cooler summers because of closer proximity to the Pacific Ocean. It is difficult to compare climates in different mountain ranges because of topographical variations at individual recording sites — north slope, south slope, valley, orientation to prevailing winds, and other factors.
As discussed previously, precipitation in the White Mountain area results primarily from the passage of cyclones with associated fronts during fall, winter, and spring; from closed cyclones in late winter and spring; and from the flow of moist tropical air from the southeast to the southwest quadrant in the summer. Annual amounts vary from 5–6 in (125–150 mm) on the valley floors to 20 in (508 mm) or a little more at the highest elevations. Totals appear to increase right up to the crest of the range. The rate of increase averages about 1.5 to 2.5 in per 1,000 ft (120–205 mm per km) rise. However, this average is difficult to apply to any one portion of the range, and the increase is not linear, being higher at upper elevations. Table 1.3 gives average monthly and annual precipitation amounts for stations within the region.
From west to east in the White Mountain area, there are important differences in the seasonal distribution of precipitation. Bishop, on the west, has the typical regime of most California stations: winter wet and summer dry, with January the wettest month. White Mountain I and White Mountain II have precipitation much more evenly distributed throughout the year. At both mountain stations January is the wettest month, but only by a slight margin. There is no pronounced dry season; June is the driest month, reflecting the gap between the cyclones of winter and spring and the thunderstorms of July and August. Early fall is relatively dry, with a gradual buildup of precipitation to the winter months. Deep Springs, at the southeast edge of the White Mountains, has maximum precipitation amounts in January and February, with a minor peak in July and August, and minimum amounts in June and October. On the east margin of the White Mountains, Dyer has a slight maximum in spring and mid-summer and a minor minimum in December and January. At lower elevations, the western slope of the White Mountains is relatively open to cyclones from the west in winter, partially subject to closed cyclones from the north in the spring, and somewhat protected from thunderstorms in the summer. The eastern slope, in the double rain shadow of the Sierra Nevada and the White Mountains, is protected from winter cyclones but is more open to closed cyclones in spring and thunderstorms in summer. Upper elevations in the White Mountains are relatively open to all three types of storms and show a trimodal maximum of precipitation. Thus, the White Mountain range is truly transitional in seasonal distribution of precipitation between the winter maximum of California and the Sierra Nevada and the more even annual distribution of the eastern Great Basin and Rocky Mountains.
Yearly precipitation totals not only increase with higher elevation in the mountains but very likely are larger in the northern part of the range. There are no station
records to substantiate this assertion, but the experience of many long-time residents of the area and of both writers suggests that the portion of the range from White Mountain Peak north to Boundary Peak receives more precipitation from cyclones and thunderstorms than the region south of the main peak. Occasional measurements made with a standard snow sampler at comparable elevations show greater depth and water content in the snowpack in the northern part of the range. There is also higher streamflow, more extensive former glaciation, and a less xerophytic vegetation north of White Mountain Peak. In the nearby Sierra Nevada, snow survey records show a general decrease in precipitation from north to south, reflecting a lower frequency of passing cyclones. This could also affect the White Mountains. Moreover, the crest of the Sierra Nevada is lower opposite the northern half of the White Mountains than opposite the southern half, and this may allow more moisture to reach across to the northern segment of the White Mountain Range. Still another possible effect of the Sierra Nevada is that, as previously mentioned, fronts may be retarded in crossing the massive barrier of that range, bringing in cooler air from the north and northeast, which may strengthen the fronts and increase precipitation in the northern portion of the White Mountains.
Empirical observation also indicates that the buildup of cumulonimbus clouds in summer thunderstorms is more likely to occur over specific portions of the summit upland than at random. Topographic influence on air moving into the area from characteristic directions is the probable cause. This could add a checkerboard pattern of precipitation distribution independent of more general patterns, such as the increase with elevation and from south to north. Four areas of cloud concentration are noticeable. From south to north, these are Sheep Mountain-Piute (or Paiute) Mountain, the plateau just south of White Mountain Peak, Chiatovich Flats and the area just north of the Cabin Creek-Birch Creek saddle, and the northern portion of Pellisier Flats at the head of Chiatovich Creek. Common features of the four areas are rises in elevation from south to north and broad lateral extent from west to east. Cumulonimbus clouds may form over any part of the range on any summer day, and during extensive storms all or most of the higher elevations may be cloud-covered, but initial formation and greater subsequent development more commonly occur over these four areas.
There are significant departures from normal in amounts of precipitation from month to month and year to year at all elevations in the White Mountains. Most weather stations in the United States use the calendar year in calculating annual amounts. This causes problems in much of California, with its winter-wet, summer-dry regime, and in high-mountain regions, where much of the significant precipitation falls as snow. Thus, the annual snowpack begins in the fall of one calendar year and builds to a maximum in late winter or spring of the next calendar year. Most California stations use a 1 July–30 June precipitation year to avoid this problem. An even better breakdown for the White Mountain Range is to use the water year employed by many hydrologists — 1 October–30 September. This has the advantage of including snow buildup and important July and August precipitation in one annual total, thus giving a more accurate figure of the water available for streamflow and plant growth, much of which occurs from July to September.
The following discussion uses the 1 October–30 September year to show extremes and variation from normal (see Table 1.4). Thus, the year mentioned in the table ends on 30 September and includes the precipitation from October through December of the previous year. Bishop and, by inference, the lower western slope of the range show the largest departures from normal, with the wettest year (17.28 in or 43.9 cm) in 1969 and the driest (1.68 in or 42.5 mm) in 1960. This is a range of 308% to 30% of average. Fish Lake Valley and Deep Springs Valley to the east and southeast show less variation, with both Dyer and Deep Springs ranging from about 190% to 40%. White Mountain I had maximum precipitation in 1967 (26.59 in, or 67.55 cm) and minimum in 1960 (5.57 in, or 14.15 cm), a range of 206% to 45%. At White Mountain II the high total was 33.56 in (85.35 cm) in 1967, and the low was 9.51 in (24.15 cm) in 1960, a range of 187% to 53%. It is to be expected that Bishop, with its low annual average, would have a greater variation from normal than the mountain stations, with higher averages. But Bishop also varies more than the other lowland stations. Bishop and the lower western slope of the range receive most of their rain and snow from winter cyclones, and precipitation totals reflect seasons of frequent or sporadic passage of such storms. The eastern valleys and lower slopes get relatively more moisture from spring and summer storms, and the upper elevations are more open to precipitation in all three seasons. At all stations 1967 and 1969 were very wet, and 1960 was the driest year. Unfortunately, records for the obviously wet years of 1982 and 1983 are incomplete or missing at the mountain stations; both years brought high totals to lowland stations. It is noteworthy that neither White Mountain I nor White Mountain II was very dry in 1976 and 1977, critical drought years in central California. At both stations spring and summer precipitation partially
made up for winter deficiencies. It seems probable that higher elevations in the White Mountains are less subject to either very dry or very wet years than the neighboring Sierra Nevada and Owens Valley; there is less chance that all three types of storms will be common or rare in any one precipitation year.
Extreme monthly totals at Bishop vary from 8.93 in (22.7 cm) in January 1969 to 0.00 for all months of the year. At Dyer, the extremes are 3.44 in (8.75 cm) in August 1983 and 0.00 for all months; at Deep Springs, totals vary from 4.86 in (12.35 cm) in August 1983 to 0.00 for all months. At higher elevation, White Mountain I shows a maximum of 7.53 in (19.1 cm) in December 1966 and a minimum of no rainfall or a trace for all months but January; White Mountain II shows a high at 8.55 in (21.7 cm) for December 1966 and a low of no rainfall or a trace for all months but February. However, summer thunderstorms at high elevation have certainly exceeded these monthly totals. In part of July 1955, one of the authors (D. R. Powell) measured over 11 in (28.0 cm) in a standard rain gauge at Chiatovich Flats, between 10,000 ft (3,050 m) and 11,000 ft (3,350 m), 8.48 in (21.55 cm) of which fell in 2 1/2 hours on July 23. This is the greatest 24-hour total yet recorded in the White Mountains, although it probably has been approached or exceeded during other summer thunderstorms in the range. At the two mountain stations, maximum summer 24-hour totals are about 2 in (50 mm). It is evident that neither station has yet been in the direct path of the most intense thunderstorms. White Mountain II has a winter high 24-hour sum of 4.40 in (11.2 cm) on 6 December 1966, and White Mountain I recorded 3.80 in (9.65 cm) on the same date. Dyer and Deep Springs each show about 2 in (50 mm) for the maximum daily total for any season; Bishop has received more than 3 in (7.6 cm) in each of the three winter months, from Pacific cyclones.
At elevations above 10,000 ft (3,050 m), over 80% of the mean annual precipitation falls as snow. On the valley floors, from 15 to 25% of the average precipitation is snow, with wide fluctuations from year to year in the snow-to-rain ratio. Regardless of the temperature of air masses moving onto land, rainfall is rare from November through April at White Mountain I and practically nonexistent from October through May at White Mountain II. Moreover, snow has been recorded, usually in amounts of less than 6 in (15 cm), in all of the warmest months from June through September at White Mountain II (see Table 1.5 for a summary of snowfall).
Average annual snowfall is low at the base of the mountains, with 9 in (23 cm) at Bishop, 12 in (30 cm) at Dyer, and 15 in (38 cm) at Deep Springs. It builds to 106 in (270 cm) at White Mountain I and 164 in (417 cm) at White Mountain II. As discussed earlier, annual totals are very likely higher at upper elevations north of White Mountain Peak. Maximum seasonal snowfall amounts have been 50–60 in (125–150 cm) at Bishop and Deep Springs, and a little less at Dyer, with low average precipitation in winter. Bishop has had a few seasons with no snowfall; the other lowland stations have had at least 1 in (2.5 cm) or more of snow in all years of
record. At higher elevations measurable snowfall certainly occurs every year. Seasonal totals, 1 October to 30 September, range from 170 in (432 cm) in 1969 to 48.5 in (123 cm) in 1960 at White Mountain I, and from 238 in (605 cm) in 1969 to 83 in (211 cm) in 1960, at White Mountain II. Maximum monthly falls are 76 in (193 cm) at White Mountain I, e.g., in January 1969, and 86 in (218 cm) at White Mountain II in December 1966. Maximum 24-hour totals are 38 in (97 cm) at I and 44 in (112 cm) at II, both on 6 December 1966. At White Mountain II, 76 in (193 cm) of snow fell on the two days 5–6 December. Daily accumulations of 10–24 in (25–60 cm) are not infrequent at higher elevation. These are impressive figures for any location in the world, though exceeded in the nearby Sierra Nevada, and winter travelers in the White Mountains should be aware of the difficulty and danger of being out in such intense snowstorms, commonly accompanied by high winds. Where snow has been measured for water equivalent in both stations, or elsewhere at high elevations, a ratio of about 10 in (25 cm) of snow, as it falls, to 1 in (2.5 cm) of water is common. In lieu of actual measurements of melted snow, a ratio of 10-to-1 (snow depth to water equivalent) has been used in the records from all stations. A 10% density is occasionally too high, but less commonly too low, for snowfall at the two mountain stations.
Continuous snow cover at elevations above 10,000 ft (3,050 m) usually begins in late October or mid-November but can begin as early as the end of September and as late as February. Disappearance of snow cover usually occurs in May or June at White Mountain I, and in June or July at White Mountain II. Average duration of snow cover is about 160 days at the lower station and 210 days at the higher locations, but there is great variation from year to year, with a range at White Mountain II of 292 days in 1973 to 54 days in 1964. Snow depths at upper elevations generally increase until March or April, and occasionally May. Maximum recorded depth at White Mountain I is 94 in (239 cm), in March 1969, and 123 in (312 cm) at White Mountain II, in the same month. A measurement of 144 in (366 cm) was made with a snow sampler at Chiatovich Flats, at 10,600 ft (3,230 m), in March 1967. It is difficult to measure accurately such parameters as snowfall, snow cover, and snow depth; significant differences can occur in short horizontal distances. Much depends on the location and exposure of the site, the instruments used, the times of observation, and the knowledge, persistence, and hardiness of the observer. It is commonly difficult to differentiate snow that has fallen directly from the sky and that subsequently removed or deposited by wind. Thus, the figures used here should be taken as approximations.
At the base of the mountains snowfall is much lower than above 10,000 ft (3,050 m), but individual storms can still bring impressive 24-hour totals. All of the lowland stations have had daily accumulations up to 20 in (50 cm). Snow cover in the lowlands is discontinuous, with durations rarely longer than six weeks. There is notably rapid disappearance of snow cover on the western slopes of the range, facing Owens Valley, up to elevations of 8,000–9,000 ft (2,440–2,745 m), even following major snowfalls. This slope faces a high angle of the sun during the relatively warm temperatures of afternoon. As expected, snow cover lasts longer on north- and east-facing slopes
than on south- or west-facing ones — often many weeks longer. In late spring and early summer, the snowline is distinctly lower and the cover more continuous in the mountains as viewed from Fish Lake Valley in the east than from Owens Valley in the west.
Precise measurements are nonexistent, but it is obvious that some snow is removed from its site of original fall on the extensive summit upland to slopes and canyons adjacent to the plateau. Because prevailing winds generally have a westerly component, the wind-blown snow is generally deposited on eastern slopes, although occasionally the direction is reversed, especially in conjunction with northeast winds following the passage of storms or with closed cyclones in late winter and spring. Quite probably, the sites of deepest snow accumulation in the White Mountains are at the heads of the major canyons on the east side of the range. Maximum Pleistocene glaciation occurred in these canyons, the heads of which are now steep-walled, east- or northeast-facing cirques — favorable locations for the accumulation and retention of snow. A conspicuous sight from Fish Lake Valley is a discontinuous, but commonly prominent, line of snow cornices at the edge of the summit plateau, commonly lasting through the summer months and sometimes well into fall or until the next season's snowpack begins. A worthwhile addition to the knowledge of the White-Inyo Range climate would be the acquisition of reliable data on how much snow is moved by wind and deposited either east or west of the summit upland. It may well be less than visual inspection, perhaps affected by the discomfort and poor visibility caused by fine blowing snow, indicates. Sporadic measurements of wind-blown snow in the Sierra Nevada in similar terrain do not substantiate the notion of a significant increase in snowpack in leeward sites, except in localized areas.
Wind is even more difficult to measure accurately than snow. Instruments are commonly of doubtful accuracy, and there are wide fluctuations in speed and even direction in short periods of time at any one location, and across very short horizontal distances. Topography obviously exerts a major influence on wind speed and direction. Wind data, somewhat incomplete, exist for the two mountain stations, and there is a more complete and accurate record at Bishop.
Direction is easier to measure than speed. At both mountain stations, for about two-thirds of the year the prevailing direction (from which the air is moving) is westerly — northwest, southwest, and west. East is the least common, although the northeast can be of importance in some years. At Bishop prevailing directions are northerly and southerly, reflecting the topographical influence of Owens Valley. Fish Lake Valley and Deep Springs Valley very likely have the same regime as Owens Valley.
Maximum wind velocities have approached or exceeded 100 mph (45 m/s) at both mountain stations. The strongest winds usually come in the winter, from west to south in association with storm fronts, or from west to north after frontal passage. Monthly average maximum speeds are about 30 mph (13.5 m/s) during the winter
at White Mountain II and near 20 mph (9 m/s) in the summer; averages are about 5 mph (2.5 m/s) less in all seasons at White Mountain I. At Bishop average speeds are much less than at high elevation, but a peak gust of 75 mph (33.5 m/s) was recorded in August 1976. In general, maximum velocities in the lowlands occur in winter and early spring. Experience indicates that Fish Lake Valley and the eastern slopes of the White Mountains have the highest velocities of the lowland areas.
The intensity, frequency, and duration of high-velocity winds in the White Mountains do not seem extraordinary for a mountain range of its height and latitude. However, an extensive portion of the range above treeline has no shelter from wind. The most notable aspect of wind at higher elevations is not high velocity, although that may occur, but rather the constancy of moderate wind, with a conspicuous lack of calm, even in summer. Significant results of wind are the redistribution of snow, poor visibility from blowing snow, and the wind chill factor. At 10°F (-12°C), a wind speed of 30 mph (13.5 m/s) is calculated to have roughly the same effect on humans as an equivalent temperature of -33°F (-36°C) without wind.
Wind also significantly affects snow texture. At upper elevations the snow surface is seldom smooth or powdery but is generally hard-packed, ridged, and of unequal depth, with patches of bare ground present even in wet years, because of frequent wind action. The snow is commonly packed and ridged as it strikes the ground; most snowstorms occur with moderate to high wind velocities. Despite increasing attention in recent years to the White Mountains as an area for cross-country skiing and winter snow camping, snow conditions in the range are not particularly favorable for such activities. Added to the adverse surface texture is the problem that, in many years, the amount of snow accumulation is insufficient to cover ground irregularities adequately.
The traveler or resident in the White Mountains should be aware of troublesome or possibly hazardous weather and climatic conditions that can occur. Snowstorms, sometimes heavy and often accompanied by moderate to high winds, can happen from fall through spring, and even in summer. These may bring poor visibility and drifting snow, which can make vehicular, foot, or air travel difficult or dangerous. From October through May, temperatures well below freezing make frostbite a problem if adequate clothing is not worn. The wind chill factor commonly has the effect on humans of lowering the effective temperature 20 to 40°F (10 to 22°C).
June through September is the thunderstorm season, which can bring flash floods in canyons, such as the Narrows below Westgard Pass, and common lightning strokes. Anyone who has experienced a major thunderstorm above treeline in the White Mountains with little or no protection available is acutely aware of danger from lightning.
At any elevation, but particularly above 10,000 ft (3,048 m), solar radiation — especially ultraviolet — may be intense, resulting all too quickly in painful sunburn and chapped lips. Water is not readily available throughout most of the range, particularly along the road to the two mountain stations, and dehydration due to the wind and low humidity can be a problem.
Travelers into the White Mountains — by foot, ski, road vehicle, or helicopter — are strongly advised to check weather reports before entering the range. San Francisco, Los Angeles, Reno, and Salt Lake City are major forecast centers, and the National Weather Service station at Bishop can be contacted during daylight hours for briefing on local conditions. Mountain weather is notorious for rapidly developing adverse events, but knowledge of seasonal patterns, empirical observation of cloud sequences, and current information from forecast offices can do much to reduce potential hazards.
Deborah L. Elliott-Fisk
Geomorphology is the scientific study of landforms. This discipline, which has traditionally been a part of both geography and geology, focuses on (1) describing the various landforms that make up the earth's natural landscape, (2) determining the processes that have shaped these landforms, and (3) reconstructing the environments in which these features formed. Because of its high diversity of bedrock types, large elevational gradient, and geologic history, the White-Inyo Range possesses a diverse suite of landforms.
What is a landform? It is simply a part of the landscape that has a distinctive shape or morphology, a unit that can be delineated either qualitatively by visual means in the field or quantitatively through an analysis of morphology, composition, and relative position on the landscape. Hills, valleys, mountain peaks, dunes, and floodplains are all landforms. Landforms in the White-Inyo Range are listed by their origin in Table 2.1.
It is easiest to think of a landform as being a function of process, materials, and time (Gregory, 1978). The three external processes shaping landforms are (1) weathering, (2) erosion, and (3) deposition. These three processes will each be discussed briefly.
Weathering is the chemical decomposition and mechanical disintegration of rock materials. We can think of this as the wearing away of earth materials through time. The relative importance of different weathering processes varies among environments and is partly a function of climate. Many mechanical weathering processes require temperature and moisture fluctuations. These changes exert mechanical stress on the rock (such as frost wedging), which causes it to disintegrate along joint planes or between individual mineral grains (Fig. 2.1). Chemical weathering, like all chemical processes, is a function of temperature: in the presence of water, it proceeds more rapidly at higher temperatures.
Thus, we see different types and rates of weathering of particular rock types (such as granite) in low-, middle-, and high-elevation climates of the White-Inyo Range. Weathering rates in the range have been quantified by Denis Marchand (1968, 1970, 1971, and 1974) in his studies of weathering and soil development at Sage Hen Flat and the Cottonwood Basin. He estimated that weathering and accompanying erosion (removal of weathered materials) of 0.3 to 1.2 in (1 to 3 cm) of material occur in 1,000
years. In conjunction with this, Valmore LaMarche (1968) has measured weathering and erosion on slopes inhabited by Bristlecone Pine (Pinus longaeva ). Tree roots are exposed through time as material is removed around them. Using dendrochronology (tree-ring dating), LaMarche was able to quantify the amount of material removed for a given unit of time. Elliott-Fisk (1987) has investigated the weathering of glacial till boulders deposited by a series of glaciers over time; her results show that weathering is slow but progressive in the White Mountains. All of these researchers have shown that weathering is a relatively slow process in the dry climate of the White Mountains. Because precipitation (available moisture) increases with altitude (to the crest), weathering may be thought to be more rapid at higher elevations, but it must be remembered that temperature decreases with altitude as well, and frost and ice are weathering agents.
It may prove fruitful to look at the relative degree of soil development as an indication of weathering along an altitudinal gradient through the range. Soils form as the result of weathering and the decomposition of organic matter and are functions of climate, organisms, parent material, relief, and time (Jenny, 1941). Thus, one must try to hold these factors constant to assess the relative importance of climate (or altitude) to soil formation and weathering. It is possible to find the same geologic formation (for example, a particular granitic pluton) at different altitudes in the range. The relief, or slope position, can also be held constant. If we can determine
that the surface of the landscape has not been exposed to disturbance through various erosional or depositional agents, time can also be held constant. However, it has been difficult to locate a sequence of landforms of the same age that span the range's entire elevational gradient. Elliott-Fisk (1987) has studied soils formed on glacial deposits of different ages (i.e., allowing time to vary) and shown that soils do progressively develop through time. It is also difficult to hold the organisms, especially the vegetation, constant as one goes up an altitudinal gradient. Sagebrush (Artemisia spp.) communities span virtually all altitudes of the range, but, as their productivity varies, they do not allow soil development to be evaluated only as a function of climate.
Erosion, the second geomorphic process, is the removal of weathered material from a slope. In order for material to be removed from a slope, its initial inertia must be overcome by the mass and momentum of the erosional agent. Erosional agents include running water (streams, rivers, sheetflow, soil water, and groundwater), glacier ice, ground ice, waves, tides and currents, wind, gravity, and organisms (especially humans). Geomorphic agents are linked to climate because some of them occur only in particular climatic settings, and their relative importance varies with the climate of an area. Waves, tides, and currents are obviously limited to water bodies, and gravity is common to all slopes, regardless of climatic setting. However, running water, glacier ice, ground ice, wind, and organisms (to some degree) vary with climate. All of these terrestrial geomorphic agents have operated in the White-Inyo Range during recent geologic time.
It is possible to reconstruct climatic events of the recent or distant past by studying landforms. Beaty (1963, 1968, 1970, and 1974) has worked for many years on determining the role that debris flows play in the development of alluvial fans flanking the White Mountains (Fig. 2.2). Debris flows are triggered by intense summer thunderstorms or very rapid snowmelt, where a large amount of weathered material is catastrophically eroded, transported down valley, and deposited at the valley floor on the alluvial fan. Debris flows and other deposits that result from flooding can be devastating on the landscape below, especially if it is inhabited, so these events are regarded as natural hazards. By mapping and dating debris flows, one may be able to calculate the periodicity of the events (with changes in frequency and magnitude of the flows an indication of climatic change) and use this information to estimate the probability of their recurrence.
Another erosional agent that has left its mark on the eastern slope of the White Mountains is the glacier (Fig. 2.3). A glacier is a moving body of ice; ice actually flows through a glacier much as water slowly flows down a stream channel. The moving ice can erode and transport materials, later depositing them down gradient (e.g., down valley). A series of valley glaciers formed south to north from Wyman Creek to Trail Creek along the eastern slope of the White Mountains, on high plateaus along the crest, and in the upper reaches of a few of the western drainages (Elliott-
Fisk, 1987). Studies of glacial landforms (Elliott-Fisk, 1987; Elliott-Fisk and Dorn, 1987; Swanson et al., 1988) show at least six stages of Quaternary glaciation that correlate with the glaciations of the Sierra Nevada and also provide more detailed information on the glacial history of the region. One can reconstruct climate during these glacial events using several techniques, perhaps most directly by mapping the glacial deposits, reconstructing the glacier that deposited them, and then determining the climatic conditions necessary to form such a glacier.
After the materials are transported by the erosional agent, they are eventually deposited at a new location. Deposition is the third geomorphic process. These depositional materials commonly form distinct landforms that are attributed to a particular geomorphic agent. These landforms can then be identified as fluvial (running water), glacial (glacier), colluvial (gravity), periglacial (ground ice), aeolian (wind), marine (waves, tides, and currents), or biological (organisms) in origin. They too can then yield climatic information, as these depositional agents vary with climate.
Depositional landforms commonly can be identified by their distinctive shapes, but landforms deposited many years ago may have been worn down by weathering processes and lost their once-distinct appearance. It is still possible to identify these landforms as to their depositional agent based on knowledge of how various geomorphic agents shape and sort materials. Table 2.2 lists the characteristic shapes and degrees of sorting of sediments by various agents. This table serves as a guide to identifying depositional landforms in the White-Inyo Range. It can be seen from this chart that glacial clasts (particles) are commonly subangular and poorly sorted, in contrast to fluvial clasts, which are rounded and well sorted. (These differences are due to the densities and velocities of the fluids [agents], among other factors.) Thus, if we find a deposit with no characteristic morphology on a ridgetop, valley floor, or intermediate slope, we can commonly identify its agent of deposition. Early Quaternary high-altitude ridgetop (interfluve) deposits on Cottonwood Plateau (Fig. 2.4) and Chiatovich Flats (Fig. 2.5) have been identified as glacial in origin by sediment analyses, as surface expression of glacial landforms is lacking.
Geomorphologists have long been interested in the presence of apparent high-altitude planation surfaces at crestal and mid-elevation positions in many mountain ranges (see Fig. 2.5). These surfaces are commonly present on fault-block mountains and have been hypothesized to have been eroded in valley floor positions, then uplifted with the range. They are typically mantled with depositional material. It is possible to identify multiple planation surfaces at several distinct elevations in a particular mountain range. If these surfaces were formed at valley bottom locations, determination of their time of formation could shed light on rates of tectonic activity for a range (e.g., when was the range uplifted?) (Curry, 1984). However, many workers believe that these high-altitude surfaces were simply eroded in place by various agents, especially frost action (ground ice); as such, they have been referred to as cryoplanation surfaces. A third group of scientists believe that these surfaces were very likely the result of both past lower-elevation and present higher-elevation processes.
High-altitude erosion surfaces are distinct in the White Mountains. The best example is Pellisier Flats, which extends from an elevation of 12,400 ft (3,775 m), near Mt. Hogue at the south, to 13,400 ft (4,080 m) at The Jumpoff, just south
of Boundary and Montgomery peaks to the north. What appears to be an extremely flat surface from distant positions to the west or east of the range is actually a gently rolling surface with small, residual bedrock outcrops (inselbergs, tors, and monadnocks) (Fig. 2.6). The surface is mantled with frost-shattered debris and is presently a periglacial landscape. Both active and relict patterned ground occur here (Mitchell, LaMarche, and Lloyd, 1966; Elliott-Fisk, 1987; Mahacek-King et al., 1987) and dominate the surficial geology of other high plateaus in the range.
Ongoing research focuses on how extensively these plateaus were covered with glacial ice, which requires either the formation of an ice cap, under present topographic constraints, or the presence of valley glaciers prior to tectonic uplift and an episode of major stream incision (valley formation). Many of the high plateau surfaces are mantled by glacial deposits that have been somewhat reworked by frost action and fluvial processes (Elliott-Fisk, 1987). Chiatovich Flats (10,200–11,600 ft, or 3,130–3,560 m, elevation) and the Cottonwood Plateau (11,200–12,000 ft, or 3,435–3,680 m, elevation) are the best examples. Other surfaces are mantled by rock glaciers (such as the North Fork Perry Aiken Creek, Fig. 2.7) and felsenmeer (i.e., frost-shattered boulder fields; see Figs. 2.1 and 2.6).
Geomorphologists have long debated whether felsenmeer indicates the absence of glaciation. This is a difficult question because our knowledge of high-altitude weathering rates is poor. The thin ice cover of a cold-based glacier (frozen to the surface) may allow the preservation of felsenmeer. The plateau between Mt. Barcroft and White Mountain Peak apparently had such an ice cover in the past, as large, frost-shattered blocks on the plateau above the cirques of the North and South Forks of McAfee Creek show surface polish and abrasion, indicating that this area was in the zone of accumulation of a former glacier (Elliott-Fisk, 1987). The same or an adjacent glacier may have deposited the granodiorite erratics on the dolomite surface of the Cottonwood Plateau to the south (see Fig. 2.4). If the time of formation of these glaciers can be deduced, the rate of high-altitude weathering since their disintegration can be calculated.
The tectonic activity of the earth can be characterized as an internal geomorphic process. Displaced, faulted, and warped landforms attest to local tectonic activity. Good examples of these exist along the western and eastern escarpments of the White-Inyo Range in the form of displaced alluvial fan (Fig. 2.8) and lake bed materials.
Alluvial fans are triangular or cone-shaped masses of debris at the base of a mountain front (see Fig. 2.2). Deposition of alluvial and colluvial material results when the lower gradient and wider channel of the valley floor are encountered as the material is transported down the mountain flanks. The velocity of the fluid drops, causing
the deposition of these materials. Streams spread out as distributaries at this point, instead of coming together in the form of tributaries to a main stream channel, as they do in the drainage basin above.
As one ascends the stream canyons at the apex (head) of these alluvial fans, older fan materials (fanglomerates) may be seen perched on the walls many meters above the present fan surface (as along Indian Creek, Milner Creek, and Black Canyon). It is also possible to find an older fan apex at a higher elevation back in the mountain front (as at Jeffrey Mine Canyon on the west slope of the White Mountains). If these materials can be dated, the amount of tectonic uplift can be calculated. Unfortunately, these deposits are difficult to date, as they are usually beyond the age range of radiocarbon or lack the organic materials necessary for radiocarbon dating. However, along Black Canyon in the southwestern White Mountains (see Fig. 2.8), a volcanic deposit is interbedded with fanglomerate. This material is tephra (air-fall lapilli and ash) from the catastrophic eruption of the Long Valley caldera and is referred to as the Bishop Tuff (volcanic ash) (Bateman, 1965). It has been dated by fission-track and other techniques, with the best estimate of its age currently at 0.79 Ma (millions of years before present). This deposit is present in Black Canyon 490 ft (150 m) above the valley floor, attesting to 7.6 in (19 cm) of uplift per 1,000 years.
Near the Westgard Pass road at the Waucoba Embayment of the western front of the White-Inyo Range, a series of lake beds (the Waucoba Lake beds) is exposed.
Walcott (1897) speculated that these lake deposits have been uplifted 3,300 ft (1,000 m) above the valley floor. Although intensive studies of these deposits have not yet been conducted, their age is estimated at 3 million years, suggesting an uplift rate of 13 in (33 cm) per 1,000 years.
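As a rough check of the arithmetic behind these rates (taking the quoted displacements and ages at face value): 150 m of offset in about 790,000 years works out to 150 ÷ 790 ≈ 0.19 m, or roughly 19 cm, per 1,000 years at Black Canyon, and 1,000 m in about 3 million years works out to 1,000 ÷ 3,000 ≈ 0.33 m, or roughly 33 cm, per 1,000 years for the Waucoba Lake beds.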
The relatively slow tectonic uplift of the range may not seem important in reference to our short lifetimes, but some tectonic activity, generating earthquakes and displacement along fault lines, can be abrupt. Escarpments along fault lines cutting through alluvial fans suggest rapid vertical displacements of several meters. Further research needs to focus on dating this tectonic activity, which can be accomplished through studies of soil formation on the fans. The presence of ash from the Mono and Inyo craters interbedded with uplifted fan sequences and other deposits (Fig. 2.9) suggests tectonic disturbance in the last 5,000 years (Mahacek-King et al., 1987; Mahacek-King et al., 1988). It is now well known that the entire region is tectonically, and hence geomorphically, very active, as has been shown by intensive studies in the Long Valley caldera region and along the White Mountains fault zone, and by historical records of earthquake occurrence. It is believed by some workers (Curry, 1984) that this tectonic activity is accompanied by volcanism and basin formation and is migrating north from the Owens Valley (which witnessed a catastrophic eruption about 790,000 years ago) to the Mono Basin (which is still volcanically active) and the Bridgeport basin. This may be related to major plate rifting along the axis of the Gulf of California and to crustal extension.
A wide variety of materials is available for landform generation in the White-Inyo Range. All three basic rock classes (igneous, metamorphic, and sedimentary) are present and range from Precambrian to Quaternary in age (see Chapter 3). The mineralogy and petrology of these rocks vary, with some more susceptible to jointing, chemical weathering, and other processes. Thus, some types of landforms tend to be associated with distinct rock formations. For example, the Campito Formation (especially the Andrews Mountain Member), a sandstone or quartzite, is very susceptible to frost action, resulting in the formation of angular felsenmeer seen as stone stripes on slopes along White Mountain Road. Bateman (1965) contrasts this with the Montenegro Member shale, which forms slabs and breaks down rapidly to clay-rich soils. Differences in patterned ground derived from metamorphic and granitic materials are apparent at high elevations.
The White-Inyo Range is largely lacking in organic-rich deposits and landforms, in the form of either peat or fossilized organic remains. This is most likely due to the present aridity of the range and the lack of topographic depressions along the steep mountain flanks. Organic-rich deposits may compose distinct landforms, but they possess other advantages in that they are a superb source of material for the study of fossil plant and animal communities and for dating of geomorphic events (see Fig. 2.9). Recently, turf and earth hummocks have been found in the Alpine Zone of Pellisier Flats and register geomorphic change with the deposition of volcanic tephra and changing hydrology of the site (Mahacek-King et al., 1987). The arid climates of the range have allowed woodrat middens to be preserved in a diversity
of environments. Analysis of plant macrofossils from these middens is providing valuable information on the late Quaternary geologic and climatic history of the range (Jennings, 1988; Jennings et al., 1989).
Reference has been made throughout this section to the role of time in the control of landform development. If we think of the White-Inyo Range as a dynamic mountain system that is continuing to evolve, we can ask not only what landforms are developing now, but which ones were formed in the past, when they were formed, and what this tells us about the evolution of the range.
The geologic evolution of the range is discussed in Chapter 3 of this book. However, in a general sense, it can be stated that all landforms visible today are the
products of late Cenozoic processes. We can imagine many of the high peaks (such as White Mountain Peak) forming over millions of years through weathering and erosion. It is very likely that the peak was always exposed to the subaerial environment, perhaps a nunatak above a hypothetical White Mountain ice cap. Other landforms are more recent, forming with the glaciation of the range or by recent fluvial activity.
Bateman, P. C. 1965. Geology and tungsten mineralization of the Bishop District, California. U.S. Geological Survey Professional Paper 470.
Beaty, C. B. 1963. Origin of alluvial fans, White Mountains, California and Nevada. Annals, Association of American Geographers 53:516–535.
Beaty, C. B. 1968. Sequential study of desert flooding in the White Mountains of California and Nevada. Technical Report 68-31-ES. U.S. Army Natick Laboratories, Natick, Mass., January.
Beaty, C. B. 1970. Age and estimated rate of accumulation of an alluvial fan, White Mountains, California, U.S.A. American Journal of Science 268:50–77.
Beaty, C. B. 1974. Debris flows, alluvial fans, and a revitalized catastrophism. Zeitschrift für Geomorphologie, N.F. Suppl. Bd. 21:39–51.
Curry, R. R. 1984. Mountain summit glacial tills and their tectonic implications, eastern Sierra Nevada, California. Abstracts, Annual Meeting of the Geological Society of America , Reno, Nevada, p. 481.
Elliott-Fisk, D. L. 1987. Glacial geomorphology of the White Mountains, California and Nevada: Establishment of a glacial chronology. Physical Geography 8:299–323.
Elliott-Fisk, D. L., and R. I. Dorn. 1987. Pleistocene glaciation of the White Mountains, CA-NV, and correlation with the Sierra Nevada. Geological Society of America, 1987 Annual Meeting, Abstracts with Programs , p. 655.
Gregory, K. J. 1978. A physical geography equation. National Geographer 12:137–141.
Jennings, S. 1988. Late Quaternary vegetation change in the White Mountain region. In C. A. Hall, Jr. and V. Doyle-Jones (eds.), Plant biology of eastern California. Natural History of the White-Inyo Range, symposium vol. 2, pp. 139–147. University of California, Los Angeles.
Jennings, S. A., D. L. Elliott-Fisk, T. W. Swanson, and R. I. Dorn. 1989. A late-Pleistocene chronology of the White Mountains, CA-NV. Association of American Geographers Program Abstracts, Baltimore. Washington, D.C.
Jenny, H. 1941. Factors of soil formation . McGraw-Hill, New York.
LaMarche, V. C., Jr. 1968. Rates of slope degradation as determined from botanical evidence, White Mountains, California. U.S. Geological Survey Professional Paper 352-I, pp. 341–377.
Mahacek-King, V. L., J. A. Onken, D. L. Elliott-Fisk, and R. L. Bettinger. 1987. Quaternary silicic tephras in the White Mountains, CA-NV: Depositional environment and geomorphic history. Geological Society of America, Annual Meeting, Abstracts with Programs, p. 756.
Mahacek-King, V. L., D. L. Elliott-Fisk, T. E. Gill, and T. A. Cahill. 1988. Elemental analysis by PIXE applied to tephrochronology of the White Mountains, California-Nevada. Geological Society of America, Abstracts with Programs , vol. 20, no. 7, p. A54.
Marchand, D. E. 1968. Chemical weathering, soil formation, and geobotanical correlations in a portion of the White Mountains, Mono and Inyo Counties, California. Ph.D. thesis, University of California, Berkeley.
Marchand, D. E. 1970. Soil contamination in the White Mountains, eastern California. Geological Society of America, Bulletin 81:2497–2505.
Marchand, D. E. 1971. Rates and modes of denudation, White Mountains, eastern California. American Journal of Science 270:109–135.
Marchand, D. E. 1974. Chemical weathering, soil development, and geochemical fractionation in a part of the White Mountains, Mono and Inyo Counties, California. U.S. Geological Survey Professional Paper 352-J.
Mitchell, R. S., V. C. LaMarche, and R. M. Lloyd. 1966. Alpine vegetation and active frost features of Pellisier Flats, White Mountains, California. American Midland Naturalist 75:516–525.
Swanson, T. W., D. L. Elliott-Fisk, R. I. Dorn, and F. M. Phillips. 1988. Quaternary glaciation of the Chiatovich Creek Basin, White Mountains, CA-NV: A multiple dating approach. Geological Society of America, Abstracts with Programs, vol. 20, no. 7, p. A209.
Walcott, C. D. 1897. The post-Pleistocene elevation of the Inyo Range, and the lake beds of Waucobi Embayment, Inyo County, California. Journal of Geology 5:340–348.
Geologic History of the White-Inyo Range
Clemens A. Nelson, Clarence A. Hall, Jr., and W. G. Ernst
Introduction and General History
The White-Inyo Range (Fig. 3.1), representing the westernmost range of the Basin and Range structural province, extends for 110 mi (175 km) from Montgomery Pass south-southeastward to Malpais Mesa opposite Owens Lake. Its maximum width, east of Bishop, is approximately 22 mi (35 km). The terminological separation of White from Inyo mountains is placed along the Westgard-Cedar Flat-Deep Springs Valley Road, a division that has no particular topographic or geologic significance. As is typical of the ranges of the province, it is bounded, generally on both sides, by normal faults of large-magnitude slip. The northern part of the range is mainly an easterly tilted block marked by an impressive escarpment from the Owens-Chalfant-Hamill valleys on the west, at 4,300 ft (1,310 m), to the crest of the range at White Mountain Peak, 14,246 ft (4,342 m). The southern, Inyo Mountains, part of the range has been tilted slightly to the west, with its maximum relief from Saline Valley at 1,100 ft (335 m) on the east to the range crest at Mt. Inyo, 11,107 ft (3,385 m).
Rocks in the White-Inyo Range span the time from the late Precambrian (700 Ma [Ma, millions of years ago]) to the Holocene, or Recent (i.e., last 10,000 years). Figure 3.2 is a simplified geologic timetable showing major time units and millions of years before the present (Ma) for the beginning of each, and the time spans of mountain building episodes (orogenies) referred to in the text. All periods of the Paleozoic (570–225 Ma), Mesozoic (225–65 Ma) and Cenozoic (65 Ma to the present) are represented, some incompletely. Rocks from the late Precambrian to the end of the Devonian Period (345 Ma) are entirely of sedimentary origin, having been deposited as sand, shale, dolomite, and limestone along the western edge of the North American continent by stream systems flowing westward into a marine basin called the Cordilleran geosyncline (Fig. 3.3). The total accumulation of uppermost Precambrian through Lower Jurassic surficial deposits in the geosyncline exceeded 4.5 mi (7 km) in thickness in the White-Inyo region. Beginning in Mississippian time, about 345 million years ago, the sedimentation pattern changed in response to elevated lands lying to the north and possibly west, resulting in the accumulation of coarse-grained sands and conglomerate, reflecting higher-energy stream systems. This pulse of uplift has been considered by some geologists as the result of the first of several collisions of the North American lithospheric plate and an ancient Pacific plate; the inferred plate collision is held responsible for the Antler orogeny, or time
of mountain building. The Pennsylvanian and Permian (320–225 Ma) were times of renewed deposition of limestone, reflecting a return to conditions of quiet carbonate bank formation.
This lull was again succeeded by extensive tectonic activity, apparently resulting from a second plate collision, the Sonoma orogeny (230–220 Ma). This event, as in
the Sierra Nevada to the west, resulted in extensive volcanism in a marine environment, the products of which are exposed as interlayered lava flows, ash beds, and continental sedimentary rocks in the southern Inyo Mountains and in the northern White Mountains.
In both the White-Inyo Range and the Sierra Nevada, surficial volcanism and its deep-seated counterpart, igneous intrusion, continued intermittently until approximately 155 million years ago as a consequence of the most intense orogeny to affect the region, the Nevadan (see Fig. 3.2). This complex episode, the result of terrane
accretion (plate collision) and amalgamation of far-traveled subcontinental fragments outboard of the North American plate and large-scale subduction of an ancient Pacific plate beneath the North American plate, produced major compressional deformation, metamorphism, and the emplacement of numerous coarse-grained intrusive bodies, termed plutons. The latest Mesozoic (90–75 Ma) was a time of renewed plutonic activity, represented by cross-cutting granitic intrusive rocks that transect the earlier-formed deformational structures. The physical conditions inferred to have attended crystallization of the metamorphic minerals associated with the intrusive rocks document the presence of a volcanic-plutonic arc marking the western margin of the North American continent throughout Mesozoic time (Fig. 3.4).
The early and middle Cenozoic Era was a time of large-scale uplift and extensive erosion; no sedimentary record of this time has been left in the White-Inyo Range. A major erosion surface truncates all previously formed rocks, including the deep-seated granitic plutons, which had solidified at depths of perhaps 6 mi (10 km) or more. Beginning approximately 10 million years ago, the range experienced the outpourings of extensive fragmental volcanic ejecta (volcanic tuff) and basaltic lava flows. Remnants of this episode can be seen in the volcanic rocks that mantle the northeast corner of the White Mountains at Montgomery Pass and east, in the table-like flows extending from the Cottonwood Basin (SE of Mt. Barcroft) to the north end of Deep Springs Valley, in the large expanse of volcanic rocks covering the saddle area between Eureka and Saline valleys on the east flank of the Inyo Mountains, and in the Malpais Mesa at the south end of the Inyo Mountains.
The latest Cenozoic was a time of renewed uplift along Basin and Range normal faults flanking the range, and transcurrent motion along the Furnace Creek fault zone at the margin of the White Mountains and Fish Lake Valley. Both styles of faulting reflect a major episode of crustal extension that was initiated in the Basin and Range Province approximately 15 million years ago. That this style of deformation continues to the present in the White-Inyo region is attested to by such events as the Chalfant earthquakes of July 1986, which measured up to 5.5 on the Richter scale.
An additional result of the uplift in latest Cenozoic time was the onset of Quaternary glaciation, which affected the area east and north of Mt. Barcroft and White Mountain Peak. This was followed by further uplift and the development of extensive aprons of alluvial deposits at the western and eastern margins of the range.
The Central White-Inyo Range
The area of principal interest for this guide extends from directly north of White Mountain Peak south to approximately 37°00′ N. lat., a distance of 47 mi (75 km). All of this area has been mapped geologically at a scale of 1:62,500 or 1:24,000 (Fig. 3.5), and a more detailed discussion of its sedimentary, deformational, and igneous history is possible. This is also the area covered by the enclosed geologic map and by the principal and subsidiary road logs included at the end of this chapter.
The central White-Inyo Range exposes the best stratigraphic sections of the uppermost Precambrian to middle Paleozoic strata in the range. The basal part of the section (Fig. 3.6) contains the Precambrian-Cambrian transition. The Lower Cambrian portion of this section, the Waucoban Series, is regarded as the North American-type succession for rocks of this age. It contains the oldest trilobite faunas in the Americas, abundant archeocyathans (primitive reef-forming animals), numerous criss-crossing tracks and trails of primitive molluscs and arthropods, and molluscan body fossils (Wyattia and others), now regarded as marking the beginning of the Paleozoic Era. This section of strata and its fossils illustrate a remarkable explosion of life at the Precambrian-Cambrian boundary, in which "explosion" all invertebrate phyla are represented. These fossil representatives occur in the limited span extending from the upper Wyman Formation (animal tracks and trails), through the Reed and Deep Spring formations (primitive molluscs and fossil animal trails), to the Campito and Poleta formations (trilobites). This succession represents a group of rocks deposited in largely tidal and subtidal environments, as well as reefal and off-reef carbonate bank and shoal environments. The terrigenous strata are largely shale-siltstone and quartzite deposited on the shallow continental shelf of the time. It has been speculated that one could have walked across the early Paleozoic sea in the White-Inyo region in water only chest-deep. The strata contain abundant shallow water indicators, such as sedimentary rocks with cross-bedding, current and wave ripple marks, mud cracks, and very highly bioturbated beds.
The geologic structure of the area from 37°00′ N. lat. north to just south of Mt. Barcroft is relatively simple: the range is largely anticlinal. North of Westgard Pass, the White Mountains are principally a gently south-plunging, asymmetrical anticlinorium (east flank nearly vertical) exposing the oldest stratigraphic unit, the Wyman Formation, in the core. The central, relatively simple structure (Fig. 3.7) is modified on both the west and east sides by more complex, closely appressed sets of compressional structures. The Inyo Mountains are dominated by a more open and nearly horizontal anticline with a southeast trend, also exposing the Wyman Formation at its core and similarly modified on its flanks by folds as well as faults.
Lying between the Inyo anticline and the White Mountain anticlinorium is a structural downwarp containing many closely spaced anticlinal and synclinal folds and associated faults, along the western side of Deep Springs Valley. These are the famed Poleta folds, an area used for instructional purposes by many academic departments of geology.
The Inyo Mountains, and areas to the southeast, expose structures interpreted as the consequence of extreme compression: a system of low-angle reverse faults — extensive, gently inclined surfaces across which old rocks have been thrust from beneath, onto and over younger rocks. In the Inyo Mountains, this fault system has been termed the Last Chance thrust, along which Precambrian and lower Paleozoic rocks have been juxtaposed above rocks of middle and late Paleozoic age. In its northernmost exposures, the fault is inclined to the north, suggesting that the northern Inyo Range and a part of the White Mountains may lie entirely above the Last Chance thrust.
A roughly similar interpretation can be made for the area of the White Mountains north of Mt. Barcroft. Even though this part of the range has been invaded by many Mesozoic intrusive bodies (to be discussed later), the gross distribution of the pre-intrusive strata suggests that the lower to middle Mesozoic core of the range has been overthrust by Paleozoic and Precambrian strata, which appear to lie structurally above, along the northwest, north, and northeast flanks of the range.
Dating the many structural features of the White-Inyo Range is difficult because of the large temporal gap between the deformed and undeformed strata. Many of the structures are truncated by one or more of the Mesozoic intrusive bodies. Consequently, the major folding and thrust faulting can be regarded as pre-intrusive (pre-Nevadan), generally before 180 Ma, possibly consequences of the Antler (≈350–340 Ma) and/or the Sonoma (≈230–220 Ma) mountain building episodes (see Fig. 3.2).
The uppermost Precambrian to middle Mesozoic strata of the central White-Inyo Range are essentially "swimming" in a sea of middle to late Mesozoic intrusive rocks. The plutons have disrupted pre-intrusive structures, have metamorphosed the sedimentary strata to slate, schist, and quartzite, and in some cases have strongly deformed and stretched sedimentary layers. The plutons have been radiometrically dated and range in age from 180 to 75 Ma.
The plutons of the range exhibit a variety of emplacement mechanisms. This is well illustrated by the several bodies in the southern White Mountains and the northern Inyo Mountains (see Fig. 3.7). The oldest, the Marble Canyon composite pluton and the Eureka Valley and Joshua Flat plutons, are Jurassic in age. They have invaded the axial portion of major synclinal (downwarp) and basinal structures to the east and northeast of the White Mountain and Inyo anticlines, respectively. They are in part discordant to the original sedimentary layering but have principally shouldered aside their host rocks and locally thinned them to approximately 40% of the initial thicknesses.
The slightly younger Jurassic Beer Creek and Cottonwood plutons occupy the same synclinal downwarp but are more broadly discordant, as they cut across the eastern limb of the White Mountain anticlinorium. The youngest Jurassic pluton, the Sage
Hen Flat (145 Ma), is unique among the White-Inyo intrusive rocks in that it has not significantly deformed its host rocks. It has passively invaded the west flank of the White Mountain anticlinorium, apparently by a process of magmatic stoping: blocks of the overlying host rocks were fractured and, because of their greater density, sank into the magma chamber, allowing the molten granitic column to rise buoyantly.
Cretaceous intrusive activity in this area is represented by the Birch Creek and Papoose Flat plutons, each of which attained its present structural position by a process of forcible injection. Both appear to have initially invaded their host rocks along pre-intrusive faults (see Fig. 3.7). The Birch Creek Pluton occupies the east flank of the central portion of the White Mountain anticlinorium. During emplacement, it deflected to the northwest and overturned the central anticlinal structure and the strata of the east flank. The Papoose Flat Pluton was emplaced within the southwest limb of the Inyo anticline, drastically disrupting the southeast trend of the structure, producing a pronounced westward bulge and overturning the strata along its northeast border. Accompanying the bulging, the sedimentary succession at the contact with the pluton was in places stretched and thinned to less than 10% of initial thicknesses, an extreme example of forcible emplacement and accompanying attenuation of wall rocks.
North of the area of Fig. 3.7, Jurassic plutonism is represented by the Barcroft Pluton (see enclosed geologic map). The pluton transects the White Mountain anticlinorium, diverting the western limb of the structure from its regional northward trend to the northeast, parallel to the southeast border of the intrusive (Fig. 3.8). Alternatively, this change in trend could be related to reverse movement along the Marmot thrust, a south-dipping fault within the Precambrian-Cambrian rocks directly south of the Barcroft Pluton. It has also been suggested that the pluton was emplaced along a steeply inclined structural discontinuity along which the Paleozoic rocks of the southern block had been uplifted or overthrust into contact with the Mesozoic rocks of the northern block.
The White Mountain anticlinorium with its N-S trend is thought by some to have developed during the Sonoma orogeny of latest Paleozoic to early Mesozoic age. The Cottonwood Pluton (Fig. 3.8) intruded preexisting folds in the Precambrian and Cambrian sequence and is apparently responsible for the overturning of the east limb of the anticlinorium.
The dominant minor fold axes in the Wyman and Deep Spring formations are approximately N-S and are nearly horizontal. Exposures of the Wyman and Deep Spring formations in the northeast part of Fig. 3.8 and south of the Marmot thrust are more highly deformed than in the southern part of the area depicted in Fig. 3.8. The dominant trends of minor fold axes in this more deformed area are N30°E. These folds are interpreted to be a result of deformation associated with the Marmot thrust and postdate the folds with N-S trends. An alternative interpretation is that the more highly deformed Wyman and Deep Spring formations in this area are related to emplacement of the Barcroft Pluton, and that the Marmot thrust follows the preexisting structural grain or was reactivated during intrusion of the granodiorite pluton.
Folds with NE-SW or E-W trends in the Poleta Formation (see Fig. 3.8) north of the Marmot thrust are associated with the emplacement of the Barcroft Pluton. Metamorphic minerals (tremolite, scapolite, and diopside) in the Wyman Formation of this area and south of the surface trace of the Marmot thrust could reflect either the close proximity of the Barcroft Pluton at depth or, less likely, metamorphism that occurred during movement along the Marmot thrust. Thinning of the Reed Dolomite and the Deep Spring Formation in the northeast part of the area and above the Wyman-cored anticline or dome, as depicted in Fig. 3.8, suggests either: regional upwarping during deposition of the Reed and Deep Spring formations; thinning related to deformation associated with the White Mountain anticlinorium, along whose trend the dome in the Wyman Formation lies; or attenuation owing to the proximity of the Barcroft and Cottonwood plutons in a manner similar to that proposed for the Papoose Flat Pluton. Hence, folding or upwarping within the study area could have first occurred during the Cambrian or late Precambrian, and again during the Antler and/or Sonoma orogenies, or perhaps partly during the Nevadan tectonic events.
Plutons of Cretaceous age (90–75 Ma) in the region to the east and northeast of Mt. Barcroft and White Mountain Peak are represented by the Indian Garden Creek and McAfee Creek granitic bodies, which occupy large areas on the east slope of the White Mountains (see enclosed geologic map). The Indian Garden Creek Pluton is entirely within and discordant to the Cottonwood Pluton. The McAfee Creek Pluton intrudes Precambrian strata, the Cottonwood Pluton, and the Barcroft Pluton.
Recrystallization (metamorphism) of the Precambrian, Paleozoic, and Mesozoic strata of the White-Inyo Range took place repeatedly in response to the periods of heating and deformation just described. The early metamorphic mineral assemblages were overprinted and were partly to completely replaced by other assemblages produced by the younger thermal events. The rocks are typified by a regional development of greenschist facies minerals; common phases in recrystallized, stratified rocks include quartz, albite, microcline, white mica, chlorite, biotite, epidote, and magnetite. Thermal annealing reflects more intense baking occasioned by the emplacement of plutons of various ages; many, but not all, have produced distinct metamorphic aureoles or halos in the surrounding strata, indicated by the localized formation of new, higher-temperature minerals such as garnet, cordierite, tremolite, tourmaline, fluorite, scapolite, andalusite, sillimanite, and calcic plagioclase.
In the central White-Inyo Range, the record of the early to middle Cenozoic (65-15 Ma) is missing. During this time, broad regional uplift resulted in the unroofing of the range, exposing the deep-seated granitic bodies. Erosional products of the uplift and resulting denudation were probably shed chiefly westward, across what is now the Sierra Nevada (then low-lying), to the Cenozoic seaway in the region of the present San Joaquin Valley. With the development of crustal extension beginning about 15 million years ago, erosional debris was transported to adjacent down-dropped basins such as the Owens, Fish Lake, Eureka, and Saline valleys.
The character of the erosional surface produced by this long period of denudation is well illustrated in the area of Cottonwood Basin, southeast of Mt. Barcroft. From there to east of the north end of Deep Springs Valley, Miocene volcanic tuff and
overlying basaltic flows (10 Ma) lie upon a generally smooth surface planed across the Jurassic plutonic rocks of the range. This erosional surface and its basaltic cover was uplifted and tilted, mainly to the east, as shown by its elevation of 11,800 ft (3,595 m), 2.5 mi east of Mt. Barcroft on the northwest, to 5,500 ft (1,675 m) in Deep Springs Valley, from where it has been uplifted along a marginal fault to an elevation of 7,700 ft (2,350 m) east of the valley at Piper Mountain.
The development of Deep Springs Valley is typical of the late Cenozoic history of the region. The valley lies wholly within the range and trends northeastward across the regional structural grain. It is marked by a major fault system along its eastern margin, as illustrated by the steep escarpment, with a relief of 2,400 ft (730 m), east of the valley, and numerous small fault scarps, which attest to the recency of uplift. Geophysical data suggest that structural relief between the granitic rocks on the east side of the valley to the granite surface below the valley fill is as much as 5,000 ft (1,520 m) and that the alluvial fill beneath Deep Springs Lake on the east side of the valley is approximately 2,600 ft (790 m) thick.
An additional aspect of the crustal extension that the region has experienced is illustrated by the fault system that marks the margin between the northeast flank of the White Mountains and Fish Lake Valley. This is the Furnace Creek fault, which extends from Death Valley to its termination at the northwest end of Fish Lake Valley. It is a major transcurrent fault, along which the White Mountain block has been moved, principally horizontally, northwestward perhaps as much as 13 mi (20 km) relative to rocks on the northeast side of the fault.
As discussed previously, uplift during the late Cenozoic resulted in elevations sufficiently high to support small valley glaciers during Quaternary time. Small cirque basins and glacial moraines occupy the upper reaches of Leidy, Perry Aiken, McAfee, and Cottonwood creeks on the east slope of the White Mountains.
A major, relatively recent geological event, the extrusion of the Bishop Tuff from the Long Valley caldera east of Mammoth Lakes, had a marginal effect on the White Mountains. The event, which took place 700,000 years ago, was a huge volcanic explosion that resulted in the outpouring of pumice and ash amounting to more than 144 cu mi (600 cu km) and the collapse of the base of the caldera to below sea level. The volcanic tuff resulting from this eruption mantles or underlies much of northern Owens Valley and most of Chalfant and Hamill valleys, and it occurs interbedded with alluvial materials flanking the west slope of the White Mountains and in several canyons, notably Black Canyon, southeast of Bishop.
That uplift and associated volcanism has continued to the present is suggested by the presence of the very recent scarps along the range margins, especially marked along the east side of Deep Springs Valley (Road Log C), by a series of young volcanic cinder cones on the west side of the Inyo Mountains southeast of Big Pine, and by the historic earthquakes of the general eastern Sierra Nevada-Owens Valley region. This region has had a long and diverse geologic history, and geologic activity continues to the present day.
Road Logs A, B, C
Clemens A. Nelson
with botanical additions
by Mary DeDecker and James Morefield
In the following road logs, figures in the left column are distances between points of interest with comments; figures in the next column (in parentheses) represent cumulative mileage.
Frequent reference is made to clock directions to various features, such as prominent peaks (e.g., Mt. Tom is at 11:00). The clock is oriented with noon straight ahead.
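As a hypothetical illustration of these conventions, an entry reading 2.3 (15.7) would mean 2.3 mi beyond the previous point of interest and 15.7 mi from the start of the log, and a peak reported at 3:00 would lie directly off the right-hand side of the vehicle.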
For each road log, reference is made to appropriate U.S. Geological Survey 15-min topographic quadrangle maps showing road networks.
Road Log A
Owens Valley Laboratory (OVL) to Barcroft Station of White Mountain Research Station (WMRS) and return (Bishop, Big Pine, Waucoba Mountain, Blanco Mountain, and Mt. Barcroft 15-min quadrangles).
Road Log B
Owens Valley to Papoose Flat (Big Pine and Waucoba Mountain 15-min quadrangles).
For travel to Papoose Flat, either a 4×4 vehicle or one with extra-low gear is essential. Log begins at intersection of Waucoba Road and Westgard Pass Road.
Road Log C
Cedar Flat to Deep Springs Valley (Blanco Mountain and Soldier Pass 15-min quadrangles).
Bateman, P. C. 1965. Geology and tungsten mineralization of the Bishop District, California. U.S. Geological Survey Professional Paper 470.
Ernst, W. G., and C. A. Hall. 1987. Geology of the Mount Barcroft-Blanco Mountain area, eastern California. Geological Society of America Map and Chart Series. Map MCH066.
Hanson, R. B. 1986. Geology of Mesozoic metavolcanic and metasedimentary rocks, northern White Mountains, California. Ph.D. dissertation, University of California, Los Angeles.
Oakeshott, G. B. 1978. California's changing landscapes: A guide to the geology of the state . McGraw-Hill, New York.
Rinehart, C. D., and W. C. Smith. 1982. Earthquakes and young volcanoes along the eastern Sierra Nevada. Genny Smith, Palo Alto, Calif.
Ross, D. C. 1965. Geology of the Independence quadrangle, Inyo County, California. U.S. Geological Survey Bull. 1181-O.
Sharp, R. P. 1976. Geology field guide to southern California , rev. ed. Kendall Hunt, Dubuque, Iowa.
Smith, Genny, ed. 1978. Deepest valley: Guide to Owens Valley . Wm. Kaufman, Los Altos, Calif.
U.S. Geological Survey Geologic Maps
Crowder, D. F., P. T. Robinson, and D. L. Harris. 1972. Geologic map of the Benton quadrangle, Mono County, California, and Esmeralda and Mineral Counties, Nevada. U.S. Geological Survey Map GQ-1013.
Crowder, D. F., and M. F. Sheridan. 1972. Geologic map of the White Mountain Peak quadrangle, Mono County, California. U.S. Geological Survey Map GQ-1012.
Krauskopf, K. B. 1971. Geologic map of the Mt. Barcroft quadrangle, California-Nevada. U.S. Geological Survey Map GQ-960.
McKee, E. H., and C. A. Nelson. 1967. Geologic map of the Soldier Pass quadrangle, California and Nevada. U.S. Geological Survey Map GQ-654.
Nelson, C. A. 1966. Geologic map of the Waucoba Mtn. quadrangle, Inyo County, California. U.S. Geological Survey Map GQ-528.
Nelson, C. A. 1966. Geologic map of the Blanco Mtn. quadrangle, Inyo and Mono Counties, California. U.S. Geological Survey Map GQ-529.
Nelson, C. A. 1971. Geologic map of the Waucoba Spring Quadrangle, Inyo County, California. U.S. Geological Survey Map GQ-921.
Robinson, P. T., and D. F. Crowder. 1973. Geologic map of the Davis Mountain quadrangle, Esmeralda and Mineral Counties, Nevada, and Mono County, California. U.S. Geological Survey Map GQ-1078.
Ross, D. C. 1967. Geologic map of Waucoba Wash quadrangle. U.S. Geological Survey Map GQ-612.
Stewart, J. H., P. T. Robinson, J. P. Albers, and D. F. Crowder. 1974. Geologic map of the Piper Peak quadrangle, Nevada-California. U.S. Geological Survey Map GQ-1186. | http://publishing.cdlib.org/ucpressebooks/view?docId=ft3t1nb2pn&doc.view=content&chunk.id=d0e299&toc.depth=1&anchor.id=0&brand=eschol | 13 |
Vectors are the backbone of games. They are the foundation of graphics, physics modelling, and a number of other things. Vectors can be of any dimension, but are most commonly seen in 2 or 3 dimensions. I will focus on 2D and 3D vectors in this text. Vectors are derived from hyper-numbers, a sub-set of hyper-complex numbers. But enough of that, you just want to know how to use them right? Good.
The notation for a vector is that of a bold lower-case letter, like i
, or an italic letter with an underscore, like i
. I'll use the former in this text. You can write vectors in a number of ways, and I will teach you 2 of them: vector equations
and column vectors
. Vectors can also be written using the two end points with an arrow above them. So, if you have a vector between the two points A and B, you can write that as
A vector equation takes the form a= xi
+ zk i
are unit vectors in the 3 standard Cartesian directions. i
is a unit vector aligned with the x axis, j
is a unit vector aligned with the y axis, and k
is a unit vector aligned with the z axis. Unit vectors are discussed later.
The coefficients of the i
parts of the equation are the vector's components
. These are how long each vector is in each of the 3 axes. This may be easier to understand with the aid of a diagram.
This diagram shows a vector from the origin to the point ( 3, 2, 5 ) in 3D space. The components
of this vector are the i
coefficients ( 2, 3 and 5 ). So, in the above example, the vector equation would be: a
This can also be related to the deltas of a line going through 2 points.
The second way of writing vectors is as column vectors
. These are written in the following form
are the components of that vector in the respective directions. These are exactly the same as the components of the vector equation. So in column vector form, the above example could be written as:
There are various advantages to both of the above forms, but I will continue to use the column vector form, as it is easier when it comes to matrices. Position vectors
are those that originate from the origin. These can define points in space, relative to the origin. Vector Math
You can manipulate vectors in various ways, including scalar multiplication, addition, scalar product and vector product. The latter two are extremely useful in 3D applications.
There are a few things you should know before moving to the methods above. The first is finding the modulus
(also called the magnitude
) of a vector. This is basically its length. This can be easily found using Pythagorean theorem, using the vector components. The modulus of a
is written |a
in 3D and
in 2D, where x
are the components of the vector in the 3 axes of movement. Unit vectors
are vectors with a magnitude of 1, so |a
| = 1. Addition
Vector addition is pretty easy. All you do is add the respective components together. So for instance, take the vectors:
The addition of these vectors would be:
Get it? This can also be represented very easily in a diagram, but I will only consider this in 2D, because it's easier to draw.
This works in the same way as moving the second vector so that its beginning is at the first vector's end, and taking the vector from the beginning of the first vector to the end of the second one.
This means that you can add multiple vectors together to get the resultant vector. This is used extensively in mechanics for finding resultant forces.
Subtracting
Subtracting is very similar to adding, and is also quite helpful. All you do is subtract the components in one vector from the components in the other. The geometric representation however is very different.
The visual representation of this is as follows: a and b are set to be from the same origin. The vector c = a - b is the vector from the end of the second vector to the end of the first, which in this case is from the end of b to the end of a. It may be easier to think of this as a vector addition, where instead of having c = a - b we have c = a + (-b), which, according to what was said about the addition of vectors, produces the same result: you can see that putting a on the end of -b has the same result.
Scalar multiplication
Scalar multiplication is easy to come to grips with. All you do is multiply each component by that scalar.
So, say you had the vector a and a scalar k. You would multiply each component by the scalar, getting ka = (k·a1, k·a2, k·a3). This has the effect of lengthening or shortening the vector by the amount k. For instance, take k = 2; this would make the vector a twice as long. Multiplying by a negative scalar reverses the direction of the vector. You can use scalar multiplication to find the unit vector of another vector. To find the unit vector of a, we divide a by its modulus |a|. Calling the unit vector b, we have b = a / |a|, the unit vector in the direction of a. This just scales each of the components, so that the magnitude is equal to 1.
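As a rough Python sketch of the same idea (again, names chosen only for illustration), normalising is just a scalar multiplication by 1/|a|:

import math

def magnitude(v):
    return math.sqrt(sum(c * c for c in v))

def normalize(v):
    # scalar-multiply by 1/|v| so the result has magnitude 1
    m = magnitude(v)
    return tuple(c / m for c in v)

b = normalize((3, 2, 5))
print(b, magnitude(b))   # the magnitude comes out as 1.0, up to floating point error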
Scalar multiplication is also used in the vector equation discussed earlier. The constants x, y and z are the scalars that scale the i, j and k vectors, before adding them to find the resultant vector.
The Scalar Product (Dot Product)
The scalar product, also known as the dot product, is very useful in 3D graphics applications. The scalar product is written a · b and is read "a dot b". The definition of the scalar product is a · b = |a| |b| cos θ, where θ is the angle between the 2 vectors a and b. This produces a scalar result, hence the name scalar product. From this you can see that the scalar product of 2 parallel unit vectors is 1, as |a| = |b| = 1 and cos(0) is also 1. You should also have seen that the scalar product of two perpendicular vectors is 0, as cos(90°) = 0, which makes the rest of the expression 0. The scalar product can also be written in terms of Cartesian components. I will not go into how this is derived, but the final, simplified formula is a · b = ax·bx + ay·by + az·bz. We can now put these two equations equal to each other, yielding |a| |b| cos θ = ax·bx + ay·by + az·bz.
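Here's an illustrative Python sketch of both forms of the scalar product and the angle recovered from them (function names are mine, not from any API):

import math

def dot(a, b):
    # component form: ax*bx + ay*by + az*bz
    return sum(x * y for x, y in zip(a, b))

def angle_between(a, b):
    # rearrange |a||b|cos(theta) = a . b to recover theta
    mag = math.sqrt(dot(a, a)) * math.sqrt(dot(b, b))
    return math.degrees(math.acos(dot(a, b) / mag))

print(dot((1, 0, 0), (0, 1, 0)))             # 0: perpendicular vectors
print(angle_between((1, 0, 0), (1, 1, 0)))   # 45.0 degrees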
With this, we can find angles between vectors. This is used extensively in the lighting part of the graphics pipeline, as it can tell whether a polygon is facing towards or away from the light source. It is also used in deciding which side of a plane a point is on, which in turn is used extensively for culling.
The Vector Product (Cross Product)
The vector product, which is also commonly known as the cross product, is also useful. The vector product basically finds a vector perpendicular to two other vectors, which is great for finding normal vectors to surfaces. For those that are already familiar with determinants, the vector product is basically the expansion of the 3x3 determinant whose first row is i, j, k and whose remaining rows are the components of a and b. For those that aren't, the vector product in expanded form is a × b = (ay·bz - az·by)i + (az·bx - ax·bz)j + (ax·by - ay·bx)k. Since the cross product finds the perpendicular vector, we can say that i × j = k, j × k = i and k × i = j.
Using scalar multiplication along with the vector product we can find the "normal" vector to a plane. A plane can be defined by two vectors, a and b. The normal vector is a vector that is perpendicular to the plane and is also a unit vector. Using the formulas discussed earlier, we have n = (a × b) / |a × b|. This first finds the vector perpendicular to the plane made by a and b, then scales that vector so it has a magnitude of 1. Note, however, that there are 2 possible normals to the plane defined by a and b. You will get different results by swapping a and b in the vector product; that is, a × b = -(b × a).
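A small Python sketch of the expanded cross product and the resulting unit normal (the helper names are made up for illustration); note how swapping the arguments flips the normal:

import math

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def plane_normal(a, b):
    n = cross(a, b)
    m = math.sqrt(sum(c * c for c in n))
    return tuple(c / m for c in n)   # unit-length normal

print(cross((1, 0, 0), (0, 1, 0)))          # (0, 0, 1): i x j = k
print(plane_normal((0, 1, 0), (1, 0, 0)))   # (0.0, 0.0, -1.0): the order matters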
This is a very important point. If you put the inputs the wrong way round, the graphics API will not produce the desired lighting, as the normal will be facing in the opposite direction.
The Vector Equation of a Straight Line
The vector equation of a straight line is very useful, and is given by a point on the line and a vector parallel to it: p = p0 + tv, where p0 is a point on the line and v is the vector. t is called the parameter, and scales v. From this it is easy to see that as t varies, a line is formed in the direction of v. If t only takes positive values, then p0 is the starting point of the line. In expanded form, the equation becomes x = x0 + t·vx, y = y0 + t·vy, z = z0 + t·vz. This is called the parametric form of a straight line. Using this to find the vector equation of a line through two points p0 and p1 is easy: p = p0 + t(p1 - p0). If t is confined to values between 0 and 1, then what you have is a line segment between the points p0 and p1.
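A minimal Python sketch of the parametric form, assuming points and vectors are stored as simple tuples:

def point_on_line(p0, v, t):
    # p = p0 + t*v, evaluated per component
    return tuple(p + t * d for p, d in zip(p0, v))

def point_on_segment(p0, p1, t):
    # the direction vector is p1 - p0; t in [0, 1] stays between the endpoints
    v = tuple(b - a for a, b in zip(p0, p1))
    return point_on_line(p0, v, t)

print(point_on_line((1, 1, 1), (0, 2, 0), 3))        # (1, 7, 1)
print(point_on_segment((0, 0, 0), (10, 0, 0), 0.5))  # the midpoint (5.0, 0.0, 0.0)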
Using the vector equation we can define planes and test for intersections. I won't go into planes much here, as there are many tutorials on them elsewhere; I'll just skim over them.
A plane can be defined as a point on the plane and two vectors that are parallel to the plane: p = p0 + su + tv, where s and t are the parameters and u and v are the vectors that are parallel to the plane. Using this, it becomes easy to find the intersection of a line and a plane, because the point of intersection must lie on both the line and the plane, so we simply make the two equations equal to each other. Take the line p = l0 + wv and the plane p = p0 + su + tv. To find the intersection point we simply equate, so that l0 + wv = p0 + su + tv. (Ed. note: the v on the left in the above equation is not the same vector as the v on the right.) We then solve for w, and then plug it into either the line or plane equation to find the point. When testing for a line segment intersection, w must be between 0 and 1.
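In practice this test is often coded with the plane stored as a normal and a point on it rather than as two parallel vectors; here's a rough Python sketch of that variant (my own helper names, and an assumed tolerance for the parallel case):

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def line_plane_intersection(p0, v, plane_point, n):
    # line: p = p0 + w*v;  plane: all points p with dot(n, p - plane_point) = 0
    denom = dot(n, v)
    if abs(denom) < 1e-9:
        return None                      # the line is parallel to the plane
    w = dot(n, tuple(a - b for a, b in zip(plane_point, p0))) / denom
    hit = tuple(p + w * d for p, d in zip(p0, v))
    return w, hit

w, hit = line_plane_intersection((0, 0, -5), (0, 0, 1), (0, 0, 0), (0, 0, 1))
print(w, hit)              # w = 5.0, intersection at the origin
print(0.0 <= w <= 1.0)     # segment test: False here, since the segment is too short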
There are many benefits to using the normal-distance form of a plane too. It's especially useful for testing which side of a plane points or other shaped objects are on. To do this, you dot the normal vector with the position vector of the point being tested, and add the distance of the plane from the origin. So, if you have the plane with normal (a, b, c) and distance d, and the point (x, y, z), the point is in front of the plane if ax + by + cz + d > 0 and behind if the result is < 0. If the result equals zero, the point is on the plane. This is used heavily in culling and BSP trees.
Matrices
What is a Matrix anyway?
A matrix can be considered a 2D array of numbers. Matrices are very powerful, and form the basis of all modern computer graphics, the advantage of them being that they are so fast. We define a matrix with an upper-case bold letter. The dimension of a matrix is its height followed by its width, so a matrix with 3 rows and 3 columns has dimension 3x3. Matrices can be of any dimensions, but in terms of computer graphics they are usually kept to 3x3 or 4x4. There are a few types of special matrices: the column matrix, row matrix, square matrix, identity matrix and zero matrix. A column matrix is one that has a width of 1 and a height of greater than 1. A row matrix is a matrix that has a width of greater than 1 and a height of 1. A square matrix is one whose dimensions are the same, i.e. the width equals the height. The identity matrix is a special square matrix that has 1s on the diagonal from top left to bottom right and 0s everywhere else; it is known by the letter I and can be of any dimension, as long as it is square. The zero matrix is a matrix that has all its elements set to 0. The elements of a matrix are all the numbers in it. They are numbered by their row/column position, so the element in row i and column j is written aij. Vectors can also be used in column or row matrices. I will use column matrices here so that it is easier to understand. A 3D vector a in matrix form will use a matrix A with dimension 3x1, which you can see is the same layout as using column vectors.
Matrix arithmetic
I won't go into every matrix manipulation, but instead I'll focus on the ones that are used extensively in computer graphics.
Matrix Multiplication
There are two ways to multiply a matrix: by a scalar, and by another conformable matrix. First, let's deal with the matrix/scalar multiplication.
This is pretty easy: all you do is multiply each element by the scalar. So, let A be the original matrix, B be the matrix after multiplication, and k the constant. We perform bij = k·aij, where i and j are the positions in the matrix. This can also be written as B = kA.
Multiplying a matrix by another matrix is more difficult. First, we need to know if the two matrices are conformable. For a matrix A to be conformable with another matrix B, the number of columns in A needs to equal the number of rows in B. For instance, take matrix A as having dimension 3x3 and matrix B having dimension 3x2. These two matrices are conformable because the number of columns in A is the same as the number of rows in B. This is important, as you'll see later. The product of these two matrices is another matrix with dimension 3x2. So, in general terms: take three matrices A, B and C, where C is the product of A and B, and A and B have dimension m x n and p x q respectively. They are conformable if n = p, and the matrix C has dimension m x q. You perform the multiplication by multiplying each row in A by each column in B and summing the products: cij = ai1·b1j + ai2·b2j + ... So let A have dimension 3x3 and B have dimension 3x2.
So, with that in mind, let's try an example:
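Here's a small Python sketch of the row-times-column rule, which also serves as a worked example (the helper name is just illustrative):

def mat_mul(A, B):
    # A is m x n, B is n x q; the result C is m x q
    rows, inner, cols = len(A), len(B), len(B[0])
    assert len(A[0]) == inner, "matrices are not conformable"
    return [[sum(A[i][k] * B[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]          # 3x3
B = [[1, 2],
     [3, 4],
     [5, 6]]             # 3x2
print(mat_mul(A, B))     # [[22, 28], [49, 64], [76, 100]]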
It's as simple as that! Some things to note:
A matrix multiplied by the identity matrix is unchanged, so AI = IA = A. The transpose of a matrix is the matrix flipped on the diagonal from the top left to the bottom right, so the element at row i, column j of the transpose is the element at row j, column i of the original.
Simple enough eh? And you thought it was going to be hard!
Determinants
I'm going to talk a little bit about determinants now, as they are useful for solving certain types of equations. I will discuss easy 2x2 determinants first.
Take a 2x2 matrix with elements a11, a12, a21 and a22. The determinant of a matrix A is written |A|, and for the 2x2 case it is |A| = a11·a22 - a12·a21. For higher dimension matrices, the determinant gets more complicated. Let's discuss a 3x3 matrix. You pass along the first row, and at each element you discount the row and column that intersect it, and calculate the determinant of the resultant 2x2 matrix multiplied by that value. Step 1: move to the first value in the top row, a11, and take out the row and column that intersect with that value. Step 2: multiply the determinant of what remains by a11. We repeat this all along the top row, with the sign in front of the value from the top row alternating between a "+" and a "-", so the determinant of A is |A| = a11(a22·a33 - a23·a32) - a12(a21·a33 - a23·a31) + a13(a21·a32 - a22·a31).
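A short Python sketch of the 2x2 rule and the cofactor expansion along the top row (helper names are illustrative only):

def det2(m):
    # |a b; c d| = a*d - b*c
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def det3(m):
    total = 0
    for col in range(3):
        # minor: remove row 0 and the current column
        minor = [[m[r][c] for c in range(3) if c != col] for r in (1, 2)]
        sign = 1 if col % 2 == 0 else -1   # +, -, + along the top row
        total += sign * m[0][col] * det2(minor)
    return total

print(det2([[1, 2], [3, 4]]))                       # -2
print(det3([[2, 0, 1], [1, 3, 2], [1, 1, 4]]))      # 18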
Now, how do we use these for equation solving? Good question. I will first show you how to solve a pair of equations with 2 unknowns.
Take the two equations a1x + b1y = k1 and a2x + b2y = k2. We first push the coefficients of the variables into a determinant, producing D = |a1 b1; a2 b2|, the 2x2 determinant of the coefficients. You can see it's laid out in the same way as the equations, which makes it easy. Now, to solve the equations in terms of x, we replace the x coefficients in the determinant with the constants k1 and k2, dividing the result by the original determinant: x = |k1 b1; k2 b2| / D. To solve for y we replace the y coefficients with the constants instead: y = |a1 k1; a2 k2| / D. A worked example proceeds the same way: push the coefficients into the determinant D, then to find x substitute the constants into the x coefficients and divide by D, and to find y substitute the constants into the y coefficients and divide by D.
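The same procedure as a minimal Python sketch for two equations a1x + b1y = k1 and a2x + b2y = k2 (names are my own):

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def solve_2x2(a1, b1, k1, a2, b2, k2):
    D = det2([[a1, b1], [a2, b2]])       # determinant of the coefficients
    Dx = det2([[k1, b1], [k2, b2]])      # constants replace the x column
    Dy = det2([[a1, k1], [a2, k2]])      # constants replace the y column
    return Dx / D, Dy / D

# 2x + y = 5 and x - y = 1 give x = 2, y = 1
print(solve_2x2(2, 1, 5, 1, -1, 1))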
See, it's as simple as that! The same idea extends to 3 unknowns in 3 equations: build the 3x3 determinant of the coefficients, then solve for x, y and z in turn by replacing the corresponding column of coefficients with the constants and dividing by the original determinant. And there we have it, how to solve a series of simultaneous equations using determinants, something that can be very useful.
Matrix Inversion
Equations can also be solved by inverting a matrix. Take the following equations again. We push these into 3 matrices to solve: a matrix A of the coefficients, a column matrix B of the unknowns, and a column matrix D of the constants, such that AB = D. We need to solve for B, and since there is no "matrix divide" operation, we need to invert the matrix A and multiply it by D, such that B = A⁻¹D.
Now we need to know how to actually do the matrix inversion. There are many ways to do this, and the way I am going to show you is by no means the fastest.
To find the inverse of a matrix, we need to first find its co-factor matrix. We use a method similar to what we used when finding the determinant. What you do is this: at every element, eliminate the row and column that intersect it, and make the co-factor equal to the determinant of the remaining part of the matrix. Let's find the first element of the co-factor matrix of a 3x3 matrix; call it c11. We get rid of the row and column that intersect a11, so c11 takes the value of the 2x2 determinant that remains: c11 = a22·a33 - a23·a32. The sign in front of each co-factor cij is decided by the expression (-1)^(i+j), where i and j are the positions of the element in the matrix. That's easy enough, isn't it? Just do the same for every element, and build up the co-factor matrix. Now that the co-factor matrix C has been found, the inverse matrix can be calculated using the following equation: A⁻¹ = Cᵀ / |A|, the transpose of the co-factor matrix divided by the determinant.
Taking the previous example and equations, we would first find the co-factor matrix C of A, then compute A⁻¹ as its transpose divided by |A|. To solve the equations, we then do B = A⁻¹D. We can then find the values of x, y and z by pulling them out of the last matrix, such that x = -62, y = 39 and z = 3, which is what the other method using determinants found.
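For completeness, here's a rough Python sketch of the co-factor/adjugate method for a 3x3 matrix; in real code you would more likely call a library routine (for example numpy.linalg.inv) instead:

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def minor(m, i, j):
    # remove row i and column j
    return [[m[r][c] for c in range(3) if c != j] for r in range(3) if r != i]

def inverse3(m):
    # co-factor matrix, with the (-1)^(i+j) sign pattern
    cof = [[(-1) ** (i + j) * det2(minor(m, i, j)) for j in range(3)]
           for i in range(3)]
    det = sum(m[0][j] * cof[0][j] for j in range(3))
    # inverse = transpose of the co-factor matrix divided by the determinant
    return [[cof[j][i] / det for j in range(3)] for i in range(3)]

A = [[2, 0, 1], [1, 3, 2], [1, 1, 4]]
print(inverse3(A))   # multiplying A by this result gives (approximately) the identity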
A matrix is called orthogonal if its inverse equals its transpose.
Matrices in computer graphics
All graphics APIs use a set of matrices to define transformations in space. A transformation is a change, be it translation, rotation, or whatever. Using a column matrix to define a point in space, a vertex, we can define matrices that alter that point in some way.
Transformation Matrices
Most graphics APIs use 3 different types of primary transformation: translation, scaling and rotation. I won't go into the derivation of the matrices for these transformations, as that will take up far too much space. Any good math book that explains affine space transformations will explain their derivations. You have to pre-multiply points by the transformation matrix, as it is impossible to post-multiply because of the dimensions. Therefore, a point p can be transformed to point p' using a transformation matrix T so that p' = Tp.
Translation
To translate a point onto another point, there needs to be a vector of movement, so that p' = p + v, where p' is the translated point, p is the original point, and v is the vector along which to translate. In matrix form, this turns into a 4x4 matrix: the identity matrix with the components of the vector in the respective axes of movement (vx, vy and vz) placed in the translation column. Note that a 4D vertex is used. These are called homogeneous co-ordinates, but I will not discuss them here.
Scaling
You can scale a vertex by multiplying it by a scalar value, so that p' = kp, where k is the scalar constant. You can also multiply each component of p by a different constant, which makes it so you can scale each axis by a different amount. In matrix form this is a diagonal matrix with the scale factors along the diagonal.
Rotation
Rotation is the most complex transformation. Rotation can be performed around the 3 Cartesian axes. The rotation matrices around these axes mix two of the coordinates with sine and cosine terms and leave the coordinate along the rotation axis unchanged; for example, a rotation by θ about the z axis maps (x, y, z) to (x·cos θ - y·sin θ, x·sin θ + y·cos θ, z). To find out more about how these matrices are derived, please pick up a good math book; I haven't got the time to write it here. Some things about these matrices though: any rotation about an axis by θ can be undone by a successive rotation by -θ, so R(θ)R(-θ) = I. Also, notice that the cosine terms are always on the top-left to bottom-right diagonal, and the sine terms are always on the top-right to bottom-left diagonal. We can also say that rotation matrices that act about the origin are orthogonal. Note that these transformations are cumulative. That is, if you multiplied a vertex by a translation matrix, then by a scale matrix, it would have the effect of moving the vertex, then scaling it. The order that you multiply in becomes very important when you multiply rotation and translation matrices together, as RT does NOT equal TR!
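Here's a quick Python sketch showing the point, using plain 4x4 lists rather than any particular graphics API (the matrix layout and helper names are assumptions of the sketch):

import math

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)] for i in range(4)]

def mat_vec(M, v):
    return [sum(M[i][k] * v[k] for k in range(4)) for i in range(4)]

def translation(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

def rotation_z(deg):
    c, s = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

p = [1, 0, 0, 1]                                     # a homogeneous point
RT = mat_mul(rotation_z(90), translation(5, 0, 0))   # translate first, then rotate
TR = mat_mul(translation(5, 0, 0), rotation_z(90))   # rotate first, then translate
print(mat_vec(RT, p))   # approximately (0, 6, 0, 1)
print(mat_vec(TR, p))   # approximately (5, 1, 0, 1): a different point entirely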
Projection Matrices
These are also complicated matrices. They come in two flavours: orthographic and perspective correct. There are some very good books that derive these matrices in an understandable way, so I won't cover it here. Since I don't work with projection matrices very often, I had to look a lot of this material up using the book Interactive Computer Graphics by Edward Angel. A very good book that I suggest you buy. Anyway, on to the matrices. The orthographic projection matrix simply maps the viewing volume, defined by the x, y and z max/min variables, onto the standard view volume with no foreshortening. The perspective correct projection matrix additionally divides by depth, so that distant objects appear smaller.
Conclusion
Well, that's it for this tutorial. I hope that I've helped you understand vectors and matrices, including how to use them. For further reading I can recommend a few books that I have found really useful. These are: Interactive Computer Graphics – A Top Down Approach with OpenGL –
Edward Angel: Covers a lot of theory in computer graphics, including how we perceive the world around us. This book covers a lot of the matrix derivations that I left out. All in all, a very good book on graphics programming and theory. With exercises too, which is nice. Mathematics for Computer Graphics Applications – Second Edition
– M.E Mortenson: This is solely about the mathematics behind computer graphics, and explains a lot of material in a very easy to understand manner. There are loads of exercises to keep you occupied. The book explains things such as vectors, matrices, transformations, topology and continuity, symmetry, polyhedra, half-spaces, constructive solid geometry, points, lines, curves, surfaces, and more! A must for anyone serious in graphics programming. You won't see a line of code or pseudo-code though. Advanced National Certificate Mathematics Vol.: 2
– Pedoe: I don't know whether you can actually get this book anymore, but if you can get a copy! This book explains mathematical concepts well, and is easy to learn from. This book is about general mathematics though, each volume expands on the other. So vol. 1 introduces concepts, vol. 2 expands on them. A book well worth the money (although I have no idea how much it is, as I got my copy off my dad ).
That's about it! I hope I haven't scared you off graphics programming. Most APIs, including Direct3D and OpenGL, will hide some of this away from you.
If you need to contact me at all, my email address is: [email protected]
. I don't want any abuse though - if you don't like this tutorial I accept constructive advice only. Credits
I'd like to give credit to "Advanced National Certificate Mathematics Vol.: 2" as that's where I got the simultaneous equations from in the part on determinants, so I knew the answers were whole, and that they worked out. I would also like to give credit to Miss. A Miller who proof read this tutorial for me. | http://www.gamedev.net/page/resources/_/technical/math-and-physics/vectors-and-matrices-a-primer-r1832?st=30 | 13 |
90 | "... and what is the use of a book," thought Alice, "without pictures or conversations?"
The Domain Name System is basically a database of host information. Admittedly, you get a lot with that: funny dotted names, networked name servers, a shadowy "name space." But keep in mind that, in the end, the service DNS provides is information about internet hosts.
We've already covered some important aspects of DNS, including its client-server architecture and the structure of the DNS database. However, we haven't gone into much detail, and we haven't explained the nuts and bolts of DNS's operation.
In this chapter, we'll explain and illustrate the mechanisms that make DNS work. We'll also introduce the terms you'll need to know to read the rest of the book (and to converse intelligently with your fellow domain administrators).
First, though, let's take a more detailed look at concepts introduced in the previous chapter. We'll try to add enough detail to spice it up a little.
DNS's distributed database is indexed by domain names. Each domain name is essentially just a path in a large inverted tree, called the domain name space. The tree's hierarchical structure, shown in Figure 2.1, is similar to the structure of the UNIX filesystem. The tree has a single root at the top. In the UNIX filesystem, this is called the root directory, represented by a slash ("/"). DNS simply calls it "the root." Like a filesystem, DNS's tree can branch any number of ways at each intersection point, called a node. The depth of the tree is limited to 127 levels (a limit you're not likely to reach).
Clearly this is a computer scientist's tree, not a botanist's.
Each node in the tree has a text label (without dots) that can be up to 63 characters long. A null (zero-length) label is reserved for the root. The full domain name of any node in the tree is the sequence of labels on the path from that node to the root. Domain names are always read from the node toward the root ("up" the tree), and with dots separating the names in the path.
If the root node's label actually appears in a node's domain name, the name looks as though it ends in a dot, as in "www.oreilly.com.". (It actually ends with a dot - the separator - and the root's null label.) When the root node's label appears by itself, it is written as a single dot, ".", for convenience. Consequently, some software interprets a trailing dot in a domain name to indicate that the domain name is absolute. An absolute domain name is written relative to the root, and unambiguously specifies a node's location in the hierarchy. An absolute domain name is also referred to as a fully qualified domain name, often abbreviated FQDN. Names without trailing dots are sometimes interpreted as relative to some domain other than the root, just as directory names without a leading slash are often interpreted as relative to the current directory.
DNS requires that sibling nodes - nodes that are children of the same parent - have different labels. This restriction guarantees that a domain name uniquely identifies a single node in the tree. The restriction really isn't a limitation, because the labels only need to be unique among the children, not among all the nodes in the tree. The same restriction applies to the UNIX filesystem: You can't give two sibling directories the same name. Just as you can't have two hobbes.pa.ca.us nodes in the name space, you can't have two /usr/bin directories (Figure 2.2). You can, however, have both a hobbes.pa.ca.us node and a hobbes.lg.ca.us, as you can have both a /bin directory and a /usr/bin directory.
A domain is simply a subtree of the domain name space. The domain name of a domain is the same as the domain name of the node at the very top of the domain. So, for example, the top of the purdue.edu domain is a node named purdue.edu, as shown in Figure 2.3.
Likewise, in a filesystem, at the top of the /usr directory, you'd expect to find a node called /usr, as shown in Figure 2.4.
Any domain name in the subtree is considered a part of the domain. Because a domain name can be in many subtrees, a domain name can also be in many domains. For example, the domain name pa.ca.us is part of the ca.us domain and also part of the us domain, as shown in Figure 2.5.
So in the abstract, a domain is just a subtree of the domain name space. But if a domain is simply made up of domain names and other domains, where are all the hosts? Domains are groups of hosts, right?
The hosts are there, represented by domain names. Remember, domain names are just indexes into the DNS database. The "hosts" are the domain names that point to information about individual hosts. And a domain contains all the hosts whose domain names are within the domain. The hosts are related logically, often by geography or organizational affiliation, and not necessarily by network or address or hardware type. You might have ten different hosts, each of them on a different network and each one perhaps even in a different country, all in the same domain.
One note of caution: Don't confuse domains in the Domain Name System with domains in Sun's Network Information Service (NIS). Though an NIS domain also refers to a group of hosts, and both types of domains have similarly structured names, the concepts are quite different. NIS uses hierarchical names, but the hierarchy ends there: hosts in the same NIS domain share certain data about hosts and users, but they can't navigate the NIS name space to find data in other NIS domains. NT domains, which provide account management and security services, also don't have any relationship to DNS domains.
Domain names at the leaves of the tree generally represent individual hosts, and they may point to network addresses, hardware information, and mail routing information. Domain names in the interior of the tree can name a host and can point to information about the domain. Interior domain names aren't restricted to one or the other. They can represent both the domain they correspond to and a particular host on the network. For example, hp.com is both the name of the Hewlett-Packard Company's domain and the domain name of a host that runs HP's main web server.
The type of information retrieved when you use a domain name depends on the context in which you use it. Sending mail to someone at hp.com would return mail routing information, while telneting to the domain name would look up the host information (in Figure 2.6, for example, hp.com's IP address).
The terms domain and subdomain are often used interchangeably, or nearly so, in DNS and BIND documentation. Here, we use subdomain only as a relative term: a domain is a subdomain of another domain if the root of the subdomain is within the domain.
A simple way of deciding whether a domain is a subdomain of another domain is to compare their domain names. A subdomain's domain name ends with the domain name of its parent domain. For example, the domain la.tyrell.com must be a subdomain of tyrell.com because la.tyrell.com ends with tyrell.com. Similarly, it's a subdomain of com, as is tyrell.com.
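As a rough illustration of that rule in code (a sketch only, not something from BIND or any DNS library), a name can be tested as a subdomain of another by comparing labels from the right:

def is_subdomain(name, parent):
    # compare label by label from the right, e.g. la.tyrell.com vs tyrell.com
    name_labels = name.rstrip(".").lower().split(".")
    parent_labels = parent.rstrip(".").lower().split(".")
    return name_labels[-len(parent_labels):] == parent_labels

print(is_subdomain("la.tyrell.com", "tyrell.com"))   # True
print(is_subdomain("la.tyrell.com", "com"))          # True
print(is_subdomain("tyrell.com", "la.tyrell.com"))   # False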
Besides being referred to in relative terms, as subdomains of other domains, domains are often referred to by level. On mailing lists and in Usenet newsgroups, you may see the terms top-level domain or second-level domain bandied about. These terms simply refer to a domain's position in the domain name space: a top-level domain is a child of the root, while a second-level domain is a child of a top-level domain, and so on.
The data associated with domain names are contained in resource records, or RRs. Records are divided into classes, each of which pertains to a type of network or software. Currently, there are classes for internets (any TCP/IP-based internet), networks based on the Chaosnet protocols, and networks that use Hesiod software. (Chaosnet is an old network of largely historic significance.)
The internet class is by far the most popular. (We're not really sure if anyone still uses the Chaosnet class, and use of the Hesiod class is mostly confined to MIT.) We concentrate here on the internet class.
Within a class, records also come in several types, which correspond to the different varieties of data that may be stored in the domain name space. Different classes may define different record types, though some types may be common to more than one class. For example, almost every class defines an address type. Each record type in a given class defines a particular record syntax, which all resource records of that class and type must adhere to. (For details on all internet resource record types and their syntaxes, see Appendix A, DNS Message Format and Resource Records.)
If this information seems sketchy, don't worry - we'll cover the records in the internet class in more detail later. The common records are described in Chapter 4, Setting Up BIND, and a comprehensive list is included as part of Appendix A. | http://www.thehackademy.net/madchat/ebooks/Oreilly_Nutshells/books/tcpip/dnsbind/ch02_01.htm | 13 |
54 | Common Lisp/First steps/Experienced tutorial
Basic Operations
This chapter gives some theoretical basics about structure of Lisp programs.
Lisp operates on forms. Each Lisp form is either an atom or a list of forms. Atoms are numbers, strings, symbols and some other structures. Lisp symbols are actually quite interesting - I'll talk about them in another section.
When Lisp is forced to evaluate the form it looks whether it's an atom or a list. If it's an atom then its value is returned (numbers, strings and other data return themselves, symbols return their value). If the form is a list, Lisp looks at the first element of the list, which is called its car (an archaic term which stands for Contents of the Address part of Register). The car of a list should be a symbol or a lambda-expression (lambda-expressions would be discussed later). If it's a symbol Lisp takes its function (the function associated to that symbol - NOT its value) and executes that function with the arguments taken from the rest of the list (if it contains forms they're evaluated as well).
Example: (+ 1 2 3) returns 6. Symbol "+" is associated with the function + that performs the addition of its arguments. (+ 1 (+ 2 3) 4) returns 10. The second argument contains a form and is evaluated before being passed to the outer +.
Some interesting functions
+, -, *, / are basic operations on numbers. They can accept multiple arguments. Note that (/ 1 2) is 1/2, not 0.5 - Lisp knows rational numbers (as well as complex ones...). <, <=, = and so on are used for number comparison. Note that =, <, <= and others are polyadic:
(= 1 1 1) ⇒ t (= 1 1 2) ⇒ nil (< 1 2 3) ⇒ t (< 1 3 2) ⇒ nil
list as the name suggests creates a list.
(list 1 2 3) ⇒ (1 2 3)
cons creates a pair (which is NOT a list of 2 elements).
(cons 1 2) ⇒ (1 . 2) ;;note the dot.
car or first returns the first element of the cons (pair). cdr or rest returns the second element of the cons.
(car (cons 1 2)) ⇒ 1 (cdr (cons 1 2)) ⇒ 2
Lists and conses
Since lists are so prominent in Lisp it's good to know what they exactly are. The truth is that, with one exception, lists consist of conses. This exception is a special list called nil - also known as (). nil is a self-evaluating symbol that is used both as a falsehood constant and an empty list. nil is the only false value in Lisp - everything else is true for the purpose of if and similar constructs. The opposite of nil is t, which is also self-evaluating and represents the truth. t is, however, not a list. Let's return to the lists, then... A proper list (improper lists will not be explained here) is defined as any list that is either nil or a cons whose cdr is a proper list. (Note that since proper lists have to start from somewhere,
(cdr (cdr (cdr... (cdr x)))...) is nil for some finite number of cdrs.)
Basically, a proper list is a sequence of conses, such that the next cons is a cdr of a previous cons. It's easy to understand how lists are constructed if you consider a graphical representation. A cons can be represented as a rectangle divided into two squares. Each of squares can hold a value. In a proper list left square holds the element of the list, and the right square holds the next cons (or nil if it is the end of the list). Note that each cons holds exactly one element of the list. That's how (1 2 3) would look in graphical representation:
.---------.      .---------.      .---------.
|  *  | *-+----->|  *  | *-+----->|  *  | *-+-----> nil
'--+------'      '--+------'      '--+------'
   V                V                V
   1                2                3
So (1 2 3) is really (1 . (2 . (3 . nil))). What follows from that is that (car (list 1 2 3)) is 1 and (cdr (list 1 2 3)) is (2 3). What doesn't follow from all of the above is that (car nil) and (cdr nil) are nil. This is not very consistent, because (cons nil nil) is not the same thing as nil, but it happens to be convenient.
Symbols play the same role as variable names in other programming languages. Basically a symbol is a string associated with some values. The string can consist of any characters, including spaces and control characters. However, most symbol names do not use characters other than letters, numbers, and hyphens, because they're awkward to type. In addition, the characters "(", ")", "#", "\", ".", "|", ";", whitespace, and the double and single quote marks are likely to be misunderstood by the lisp reader; and other characters such as "*" are conventionally used only for certain purposes. By default Lisp converts what you type to uppercase.
Symbols are created as you use them. For example, when you type (setf x 1) symbol called "X" is created (remember that Lisp uppercases your input) and its value is set to 1. However it is good style to define your symbols before you use them. defvar and defparameter are used for that purpose.
(defparameter x 1) ;;defines symbol "X" and sets its value to 1.
A symbol can also have other parameters associated with it besides its name and value - functions, classes and so on. To get a function associated with a symbol, a special operator (these are discussed in the next chapter) function is used.
Macros and special operators
There are operators in Lisp that look like functions, but behave slightly differently. These are macros and special operators. Functions always evaluate their arguments, but occasionally this is undesirable, so these forms must be implemented.
For example, consider the ubiquitous if construct. If takes the form (if condition then else); condition is first evaluated, followed by then if condition is not nil or else if it is nil. Thus, (if t 1 2) returns 1 and (if nil 1 2) returns 2. Obviously, if cannot be implemented as a function because only one of its two final arguments will be evaluated. Thus, it is created as one of about 25 special operators, which are all predefined in the Lisp implementation.
Another special operator is quote. It returns its only argument, unevaluated. Again, this is impossible with functions, as they always evaluate their arguments. Quote is used quite often, so it can be expressed by the single character ' . Thus, (quote x) is equivalent to 'x. quote may be used to quickly create lists: '(1 2 3) returns (1 2 3), '(x y z) returns (x y z) - compare that to (list x y z) which would create a list of values of x, y and z, or signal an error if no values were assigned. In fact, '(x y z) is the same value as (list 'x 'y 'z).
Macros are like special operators, but they're not hardcoded in a Lisp implementation. Instead, they can be defined in Lisp code. A lot of Lisp constructs you will be using are actually macros. Only very essential constructs are hardcoded. Of course, to the user there is no difference.
Simple programming
In this chapter I'll explain how to do simple things in Lisp. Many useful constructs will be introduced. After reading this chapter, you should be able to write simple programs.
Storing values
While storing values in variables is an important process in many programming languages, in Lisp it is used much less often. Although it is multi-paradigm, Lisp is often referred to and programmed in as a functional language. Functional languages do not allow (or at least discourage) the use of state, or stored information that implicitly changes the behavior of a function. In theory, assignments are never needed in a purely functional program. As you may have noticed, I never stored values anywhere in the previous chapter, except in the "Symbols" section. This shows that the storing of global values is rarely needed.
That being said, it is still useful to store values, and Lisp does provide for this. The macros setf and setq store values into symbols:
(setq x 1) => 1 x => 1 (setq x 1 y 2 z 3) => 3 (list x y z) => (1 2 3)
Setf is much more powerful than setq, as it allows the programmer to change a single part of a variable:
(setq abc '(1 2 3)) => (1 2 3) (setq (car abc) 3) => error! (setf (car abc) 3) => 3 abc => (3 2 3)
For this reason, setf is used more often than setq.
In other languages, assignments are usually denoted as something like x=1. The = here means something different than the mathematical = sign. Lisp reserves = for the mathematical definition, i.e. testing for numerical equality. It may appear that (setf place value) is more complicated than needed, but it is useful to remember that this facility is extensible and allows the user to remove the internal representation of data from the assignment. This is similar to reassigning the = operator in other languages (something not possible in C and some other languages).
Setf is a useful utility, despite the extra keystrokes it requires. In practice, though, you'll be using assignment less often than in other languages, because there is another way in Lisp to remember values: by binding them.
Note: setq stands for set quote. Originally, a set function existed that would evaluate its first argument. Programmers got tired of quoting the argument, and defined this special operator. Set is now deprecated.
Binding values
When values are bound to symbols, they are temporarily stored and then unbound, the values forgotten. With let and let* you can bind some values to some variables within some part of your program. The difference between let and let* is that let initialises its variables in parallel and let* does it sequentially.
(let ((x 1) (y 2) (z 3))
  (+ x y z)) => 6
(let* ((x 1) (y (+ x 1)) (z (+ y 1)))
  (+ x y z)) => 6
Inside the body of let you can use variables you defined as though they're real symbols - outside of let these symbols can be unbound or have completely different values. If you call or define functions within the let body, the bindings stick around, making some interesting interactions possible (these are outside of the scope of this manual). You can even use setf on these variables and the new value would be stored temporarily. As soon as the last form in let body is executed its result is returned, and the variables are restored back to their original values.
(setf x 3 y 4 z 5) => 5 (let ((x 1) (y 2) (z 3)) (+ x y z)) => 6 (+ x y z) => 12
Good programming practices generally recommend using local variables whenever possible, and limiting global variables only to when it is absolutely necessary. Thus, you should use let whenever possible, and treat setf like there is a tax on each use.
Control flow
The if operator was already explained before, but may seem hard to use at this point. This is because if allows only one form for each branch, which makes it hard to use in most situations. Fortunately, Lisp syntax gives you more freedom to define your blocks than C's curly braces or Pascal's begin/end. progn creates a very simple block of code, executing its arguments one by one and returning the result of the last one.
(progn (setf x 1 y 2) (setf z (+ x y)) (* y z)) => 6
let and let* can also be used for this purpose, especially if you want some temporary variables inside the branches.
block creates a named block, from which you can return with return-from:
(block aaa (return-from aaa 1) (+ 1 2 3)) => 1 ;;The form (+ 1 2 3) is not evaluated.
...and other forms exist, such as the, locally, prog1, tagbody and so on. Fortunately, if you're not into writing Lisp macros, if is probably the only construct where you'll have to use blocks.
As if is quite ugly when used repeatedly, there are some convenient macros in place so you won't have to use it all that often. when evaluates its first argument and, if it's not nil, evaluates the rest of its arguments, returning the last result. unless does the same if its first argument is nil. Otherwise they both return nil.
cond is slightly more complicated, but also more useful. It tests its conditions until one of them is not nil and then evaluates the associated code. This is easier to parse and better-looking than nested if's. The syntax of cond is as follows:
(cond (condition1 some-forms1) (condition2 some-forms2) ...and so on...)
case is similar to cond except it branches after examining the value of some expression:
(case expression (values1 some-forms1) ;values is either one value (values2 some-forms2) ;or a list of values. ... (t some-forms-t)) ;executed if no values match
or evaluates its arguments until one of them is not nil, returning its value, or the value of last argument.
and evaluates its arguments until one of them is nil, returning its value (nil, that is) - otherwise the value of the last argument is returned.
You may notice that or and and could be used as logical operations as well - remember that everything non-nil is true.
Iteration, or evaluating a form many times, is covered by many tools evaluating something many times. The most useful tool is loop - in its simplest form it simply executes its body until the return operator is called, in which case it returns the specified value:
(setf x 0)
(loop
  (setf x (+ x 1))
  (when (> x 10) (return x))) => 11
More complicated forms of loop are better learned through examples. Depending on the type of iteration wanted, different operators, such as for and until, are used. While loop should be enough for all kind of loops, there are other constructs for those who wouldn't like to learn its full syntax.
dotimes executes some code a fixed number of times. Its syntax is:
(dotimes (var number result) forms)
dotimes increments var from 0 to number and executes forms each time with that value of var, returning result in the end.
dolist iterates through a list: its syntax is the same as of dotimes except that a list replaces number. var begins as the list's car and is moved until it reaches the list's last element.
mapcar applies a function to different sets of arguments and returns a list of results, for example:
(mapcar #'+ '(1 3 6) '(2 4 7) '(3 5 8)) => (6 12 21)
#'+ is a shortcut for (function +). The function + is first applied to the list of arguments (1 2 3), then to (3 4 5) and then to (6 7 8).
Defining functions
Functions are defined using macro defun:
(defun function-name (arguments) (body))
The created function is associated with the symbol function-name and can be called like any other function. It's worth mentioning that functions defined that way can be recursive or can call each other - this is an essential part of most Lisp programming. An example of a recursive function is:
(defun factorial (x) (if (= x 0) 1 (* x (factorial (- x 1)))))
As can be seen, this function simply describes an otherwise complicated operation by calling itself over and over. The process is stopped when x = 0, and x decreases with each call, eliminating (for positive integer x) the possibility of an infinite loop. Note that x, being an argument, takes on a different value for each call of the function. As seen in "Binding values" above, none of these values override any of the previous.
The special operator lambda creates an anonymous function, which you can use for a one-time purpose. Its syntax is the same except instead of "defun function-name" you write "lambda". These functions cannot be recursive. For the most part, lambda functions are used in forms such as mapcar (and funcall and apply below) to eliminate excessive repetition or memory use.
You can also bind functions temporarily with flet and labels. They are very similar to let and let*. The difference between them is that in labels a function can refer to itself, while in flet it refers to a former function with the same name instead.
Calling functions
To call a function f with arguments a1, a2, and a3, simply type
(f a1 a2 a3)
Sometimes the function which should be called is stored in a variable and you don't know its name beforehand. Or maybe you don't know how many arguments are to be passed. In those cases the functions funcall and apply become handy. Like all functions they evaluate their arguments - the first one should produce a function to be called, the rest should produce arguments. Funcall just calls the function with whatever arguments supplied. Apply checks if its last argument is a list and if it is, it treats it as a list of arguments. Compare:
(funcall #'list '(1 2) '(3 4)) => ((1 2) (3 4)) (apply #'list '(1 2) '(3 4)) => ((1 2) 3 4)
User interaction
For this tutorial I'll cover only the simple tasks of input and output. To read a value from the user, use the read function. It will attempt to read an arbitrary Lisp expression from the input, and the value returned by read is that expression.
>(read)    ;Run read function
(+ 1 x)    ;That's what the user types
(+ 1 X)    ;that's what is returned
In that example the returned value is a list of three elements: symbol +, number 1 and symbol X (note how it got uppercased). read is one of the three functions that comprise read-eval-print loop, the core element of Lisp. This is the same function that is used to read the expressions you type at the Lisp prompt.
While read is handy for receiving numbers and lists from the user, other data-types are expected by most users to be fed to computer in a different manner. Take strings for example. To make read recognise a string one must add double quotes around it
"like this". But a normal user expects to just type
like this without the quotes and press Enter. So, there is a different general-purpose function used for input: read-line. read-line returns a string that contains what user typed before pressing Enter. You can then process that string to extract the information you need.
For output there is also quite a number of functions available. One that is interesting to us is princ, which simply prints a supplied value, and also returns it. This may be confusing when using it in the Lisp console:
>(princ "aaa") aaa "aaa"
The first aaa (without the quotes) is what is printed, while the second one is a returned value (which is printed by the print function and it is not as nice-looking). Another function that could be useful is terpri, which prints a new line to the output (the name "terpri" is historical and means "TERminate PRInt line").
For more information, return to Common Lisp. | http://en.wikibooks.org/wiki/Common_Lisp/First_steps/Experienced_tutorial | 13 |
55 | The Pythagorean Theorem Study Guide
The Pythagorean Theorem
In this lesson, we examine a very powerful relationship between the three lengths of a right triangle. With this relationship, we will find the exact length of any side of a right triangle, provided we know the lengths of the other two sides. We also study right triangles where all three sides have whole number lengths.
Proving the Pythagorean Theorem
The Pythagorean theorem is this relationship between the three sides of a right triangle. Actually, it relates the squares of the lengths of the sides. The square of any number (the number times itself) is also the area of the square with that length as its side. For example, the square in Figure 3.1 with sides of length a will have area a · a = a².
The longest side of a right triangle, the side opposite the right angle, is called the hypotenuse, and the other two sides are called legs. Suppose a right triangle has legs of length a and b, and a hypotenuse of length c, as illustrated in Figure 3.2.
The Pythagorean theorem states that a² + b² = c². This means that the area of the squares on the two smaller sides add up to the area of the biggest square. This is illustrated in Figures 3.3 and 3.4.
This is a surprising result. Why should these areas add up like this? Why couldn't the areas of the two smaller squares add up to a bit more or less than the big square?
We can convince ourselves that this is true by adding four copies of the original triangle to each side of the equation. The four triangles can make two rectangles, as shown in Figure 3.5. They could also make a big square with a hole in the middle, as in Figure 3.6.
If we add the a2 and b2 squares to Figure 3.4, and the c2 square to Figure 3.5, they fit exactly. The result in either case is a big square with each side of length a + b, as shown in Figure 3.7.
The two big squares have the same area. If we take away the four triangles from each side, we can see that the two smaller squares have the exact same area as the big square, as shown in Figure 3.8. Thus, a² + b² = c².
This proof of the Pythagorean theorem has been adapted from a proof developed by the Chinese about 3,000 years ago. With the Pythagorean theorem, we can use any two sides of a right triangle to find the length of the third side.
Suppose the two legs of a right triangle measure 8 inches and 12 inches, as shown in Figure 3.9. What is the length of the hypotenuse?
By the Pythagorean theorem:
12² + 8² = H²
208 = H²
H = √208
While the equation gives two solutions, a length must be positive, so H = √208. This can be simplified to H = 4√13.
What is the height of the triangle in Figure 3.10?
Even though the height is labeled h, it is not the hypotenuse. The longest side has length 10 feet, and thus must be alone on one side of the equation.
h² + 3² = 10²
h² = 100 – 9 = 91
h = √91 ≈ 9.54
With the help of a calculator, we can see that the height of this triangle is about 9.54 feet.
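Both worked examples can be checked with a few lines of Python (a minimal sketch; function names are just for illustration):

import math

def hypotenuse(a, b):
    # c^2 = a^2 + b^2
    return math.sqrt(a * a + b * b)

def leg(hyp, other_leg):
    # a^2 = c^2 - b^2, keeping only the positive root
    return math.sqrt(hyp * hyp - other_leg * other_leg)

print(hypotenuse(12, 8))   # about 14.42 inches, the square root of 208
print(leg(10, 3))          # about 9.54 feet, the square root of 91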
We can use the Pythagorean theorem on triangles without illustrations. All we need to know is that the triangle is right and which side is the hypotenuse. | http://www.education.com/study-help/article/pythagorean-theorem/ | 13
117 | Lesson 1: Vectors - Fundamentals and Operations
A variety of mathematical operations can be performed with and upon vectors. One such operation is the addition of vectors. Two vectors can be added together to determine the result (or resultant). This process of adding two or more vectors has already been discussed in an earlier unit. Recall in our discussion of Newton's laws of motion, that the net force experienced by an object was determined by computing the vector sum of all the individual forces acting upon that object. That is the net force was the result (or resultant) of adding up all the force vectors. During that unit, the rules for summing vectors (such as force vectors) were kept relatively simple. Observe the following summations of two force vectors:
These rules for summing vectors were applied to free-body diagrams in order to determine the net force (i.e., the vector sum of all the individual forces). Sample applications are shown in the diagram below.
In this unit, the task of summing vectors will be extended to more complicated cases in which the vectors are directed in directions other than purely vertical and horizontal directions. For example, a vector directed up and to the right will be added to a vector directed up and to the left. The vector sum will be determined for the more complicated cases shown in the diagrams below.
There are a variety of methods for determining the magnitude and direction of the result of adding two or more vectors. The two methods which will be discussed in this lesson and used throughout the entire unit are:
The Pythagorean Theorem
The Pythagorean theorem is a useful method for determining the result of adding two (and only two) vectors which make a right angle to each other. The method is not applicable for adding more than two vectors or for adding vectors which are not at 90-degrees to each other. The Pythagorean theorem is a mathematical equation which relates the length of the sides of a right triangle to the length of the hypotenuse of a right triangle.
Eric leaves the base camp and hikes 11 km, north and then hikes 11 km east. Determine Eric's resulting displacement.
This problem asks to determine the result of adding two displacement vectors which are at right angles to each other. The result (or resultant) of walking 11 km north and 11 km east is a vector directed northeast as shown in the diagram to the right. Since the northward displacement and the eastward displacement are at right angles to each other, the Pythagorean theorem can be used to determine the resultant (i.e., the hypotenuse of the right triangle).
The result of adding 11 km, north plus 11 km, east is a vector with a magnitude of 15.6 km. Later, the method of determining the direction of the vector will be discussed.
Let's test your understanding with the following two practice problems. In each case, use the Pythagorean theorem to determine the magnitude of the vector sum. When finished, click the button to view the answer.
Using Trigonometry to Determine a Vector's Direction
The direction of a resultant vector can often be determined by use of trigonometric functions. Most students recall the meaning of the useful mnemonic SOH CAH TOA from their course in trigonometry. SOH CAH TOA is a mnemonic which helps one remember the meaning of the three common trigonometric functions - sine, cosine, and tangent functions. These three functions relate an acute angle in a right triangle to the ratio of the lengths of two of the sides of the right triangle. The sine function relates the measure of an acute angle to the ratio of the length of the side opposite the angle to the length of the hypotenuse. The cosine function relates the measure of an acute angle to the ratio of the length of the side adjacent the angle to the length of the hypotenuse. The tangent function relates the measure of an angle to the ratio of the length of the side opposite the angle to the length of the side adjacent to the angle. The three equations below summarize these three functions in equation form.
These three trigonometric functions can be applied to the hiker problem in order to determine the direction of the hiker's overall displacement. The process begins by the selection of one of the two angles (other than the right angle) of the triangle. Once the angle is selected, any of the three functions can be used to find the measure of the angle. Write the function and proceed with the proper algebraic steps to solve for the measure of the angle. The work is shown below.
Once the measure of the angle is determined, the direction of the vector can be found. In this case the vector makes an angle of 45 degrees with due East. Thus, the direction of this vector is written as 45 degrees. (Recall from earlier in this lesson that the direction of a vector is the counterclockwise angle of rotation which the vector makes with due East.)
The measure of an angle as determined through use of SOH CAH TOA is not always the direction of the vector. The following vector addition diagram is an example of such a situation. Observe that the angle within the triangle is determined to be 26.6 degrees using SOH CAH TOA. This angle is the southward angle of rotation which the vector R makes with respect to West. Yet the direction of the vector as expressed with the CCW (counterclockwise from East) convention is 206.6 degrees.
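A resultant's magnitude and its counterclockwise-from-East direction can also be computed directly; here is a minimal Python sketch (function name chosen just for illustration):

import math

def resultant(east, north):
    # magnitude from the Pythagorean theorem
    r = math.sqrt(east ** 2 + north ** 2)
    # atan2 gives the angle measured from due East; wrap into 0-360 degrees (CCW convention)
    angle = math.degrees(math.atan2(north, east)) % 360
    return r, angle

print(resultant(11, 11))    # about (15.6, 45.0): the hiker example
print(resultant(-4, -3))    # direction comes out near 216.9 degrees (south of west)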
Test your understanding of the use of SOH CAH TOA to determine the vector direction by trying the following two practice problems. In each case, use SOH CAH TOA to determine the direction of the resultant. When finished, click the button to view the answer.
In the above problems, the magnitude and direction of the sum of two vectors is determined using the Pythagorean theorem and trigonometric methods (SOH CAH TOA). The procedure is restricted to the addition of two vectors which make right angles to each other. When the two vectors which are to be added do not make right angles to one another, or when there are more than two vectors to add together, we will employ a method known as the head-to-tail vector addition method. This method is described below.
The magnitude and direction of the sum of two or more vectors can also be determined by use of an accurately drawn scaled vector diagram. Using a scaled diagram, the head-to-tail method is employed to determine the vector sum or resultant. A common Physics lab involves a vector walk. Either using centimeter-sized displacements upon a map or meter-sized displacements in a large open area, a student makes several consecutive displacements beginning from a designated starting position. Suppose that you were given a map of your local area and a set of 18 directions to follow. Starting at home base, these 18 displacement vectors could be added together in consecutive fashion to determine the result of adding the set of 18 directions. Perhaps the first vector is measured 5 cm, East. Where this measurement ended, the next measurement would begin. The process would be repeated for all 18 directions. Each time one measurement ended, the next measurement would begin. In essence, you would be using the head-to-tail method of vector addition.
The head-to-tail method involves drawing a vector to scale on a sheet of paper beginning at a designated starting position. Where the head of this first vector ends, the tail of the second vector begins (thus, head-to-tail method). The process is repeated for all vectors which are being added. Once all the vectors have been added head-to-tail, the resultant is then drawn from the tail of the first vector to the head of the last vector; i.e., from start to finish. Once the resultant is drawn, its length can be measured and converted to real units using the given scale. The direction of the resultant can be determined by using a protractor and measuring its counterclockwise angle of rotation from due East.
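The same head-to-tail idea can also be carried out numerically rather than with ruler and protractor: resolve each displacement into components, sum the components, and recover the resultant's magnitude and direction. The sketch below is a minimal illustration with made-up displacements, not the 18-step lab walk described above.

```python
import math

# Each displacement as (east, north) components, in centimetres (hypothetical values).
displacements = [(5.0, 0.0), (0.0, 3.0), (-2.0, 4.0)]

east = sum(d[0] for d in displacements)
north = sum(d[1] for d in displacements)

magnitude = math.hypot(east, north)                       # length of the resultant
direction = math.degrees(math.atan2(north, east)) % 360   # CCW angle from due East

print(f"Resultant: {magnitude:.2f} cm at {direction:.1f} degrees CCW from East")
```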
© Tom Henderson | http://gbhsweb.glenbrook225.org/gbs/science/phys/Class/vectors/u3l1b.html
A-level Physics (Advancing Physics)/Print Version
Welcome to the Wikibooks textbook on Physics, designed to contain everything you need to know for the OCR Physics B (Advancing Physics) specification. All sorts of useful documents for this specification are available at http://www.ocr.org.uk/qualifications/as_alevelgce/physics_b_advancing_physics/documents.html .
All units are assumed to be SI units, unless stated otherwise.
SI units are used throughout science in many countries of the world. There are seven base units, from which all other units are derived.
Every other unit is either a combination of two or more base units, or a reciprocal of a base unit. With the exception of the kilogram, all of the base units are defined as measurable natural phenomena. Also, notice that the kilogram is the only base unit with a prefix. This is because the gram is too small for most practical applications.
|Quantity||Name||Symbol|
|Length||metre||m|
|Mass||kilogram||kg|
|Time||second||s|
|Electric Current||ampere||A|
|Thermodynamic Temperature||kelvin||K|
|Amount of Substance||mole||mol|
|Luminous Intensity||candela||cd|
Most of the derived units are the base units divided or multiplied together. Some of them have special names. You can see how each unit relates to any other unit, and knowing the base units for a particular derived unit is useful when checking if your working is correct.
Note that "m/s", "m s-1", "m·s-1" and are all equivalent. The negative exponent form is generally preferred, for example "kg·m-1·s-2" is easier to read than "kg/m/s2".
|Quantity||Name||Symbol||In terms of other derived units||In terms of base units|
|Speed/Velocity||metre per second||-||-||m s-1|
|Acceleration||metre per second squared||-||-||m s-2|
|Density||kilogram per cubic metre||-||-||kg m-3|
|Specific Volume||cubic metre per kilogram||-||-||m3 kg-1|
|Current Density||ampere per square metre||-||-||A m-2|
|Magnetic Field Strength||ampere per metre||-||-||A m-1|
|Concentration||mole per cubic metre||-||-||mol m-3|
|Force||newton||N||-||kg m s-2|
|Energy/Work/Quantity of Heat||joule||J||N m||kg m2 s-2|
|Electric Charge/Quantity of Electricity||coulomb||C||-||s A|
|Electric Potential/Potential Difference/Electromotive Force||volt||V||J C-1||kg m2 s-3 A-1|
|Magnetic Flux||weber||Wb||V s||kg m2 s-2 A-1|
|Magnetic Flux Density||tesla||T||Wb m-2||kg s-2 A-1|
|Celsius Temperature||degree Celsius||°C||-||K - 273.15|
|Luminous Flux||lumen||lm||cd sr||cd|
|Activity of a Radionuclide||becquerel||Bq||-||s-1|
The SI units can have prefixes to make larger or smaller numbers more manageable. For example, visible light has a wavelength of roughly 0.0000005 m, but it is more commonly written as 500 nm. If you must specify a quantity like this in metres, you should write it in standard form: 1 nm = 1*10-9 m, and in standard form the first number must be between 1 and 10, so 500 nm becomes 500*10-9 m = 5*10-7 m. The power of 10 in this answer, i.e. -7, is called the exponent, or the order of magnitude of the quantity.
Equations must always have the same units on both sides, and if they don't, you have probably made a mistake. Once you have your answer, you can check that the units are correct by doing the equation again with only the units.
For example, to find the velocity of a cyclist who moved 100 metres in 20 seconds, you have to use the formula v = d / t, so your answer would be 100 m ÷ 20 s = 5 m s-1.
This question has the units m ÷ s, and should give an answer in m s-1. Here, the equation was correct, and makes sense.
Often, however, it isn't that simple. If a car of mass 500 kg had an acceleration of 0.2 m s-2, you could calculate from F = ma that the force provided by the engines is 100 N. At first glance it would seem the equation is not homogeneous, since the equation uses the units kg and m s-2, which should give an answer in kg m s-2 rather than newtons. If you look at the derived units table above, you can see that a newton is in fact equal to kg m s-2, and therefore the equation is correct.
Using the same example as above, imagine that we are only given the mass of the car and the force exerted by the engines, and have been asked to find the acceleration of the car. Using F = ma again, we need to rearrange it for a, and we now have the formula: a = m / F. By inserting the numbers, we get the answer 5. You already know that this is wrong from the example above, but by looking at the units, we can see why this is the case: the units are kg N-1 (equivalent to s2 m-1), when we were looking for m s-2. The problem is the fact that F = ma was rearranged incorrectly. The correct formula is a = F / m, and using it will give the correct answer of 0.2 m s-2. The units for the correct formula are N kg-1, which is equivalent to m s-2.
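The unit-checking procedure described above can be mimicked in code by tracking the exponents of the base units for each quantity. The helper functions below are invented purely for illustration; they simply add or subtract exponents when quantities are multiplied or divided.

```python
# A rough sketch of checking homogeneity by bookkeeping base-unit exponents (kg, m, s).
def units(kg=0, m=0, s=0):
    return {"kg": kg, "m": m, "s": s}

def multiply(a, b):
    return {k: a[k] + b[k] for k in a}

def divide(a, b):
    return {k: a[k] - b[k] for k in a}

mass = units(kg=1)
acceleration = units(m=1, s=-2)
newton = units(kg=1, m=1, s=-2)

# F = ma: kg x m s-2 is indeed a newton (kg m s-2).
assert multiply(mass, acceleration) == newton

print(divide(mass, newton))   # a = m / F gives {'kg': 0, 'm': -1, 's': 2}, i.e. s2 m-1 (wrong)
print(divide(newton, mass))   # a = F / m gives {'kg': 0, 'm': 1, 's': -2}, i.e. m s-2 (right)
```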
Physics in Action
Curvature of Wavefronts
Light can be viewed as beams travelling between points. However, from most light sources, the light radiates outwards as a series of wavefronts. Near the source these wavefronts are bent - wavefronts of light have a property known as curvature.
As light travels further away from its source, its curvature decreases. Consider a sphere expanding gradually from a point, which represents a given wavefront of light. As the sphere expands, the curvature of its surface decreases when we look at any part of the surface with a constant area. It should be noted at this point that light from a source infinitely far away has 0 curvature - it is straight. This is useful, as ambient light (light from a source that is far away) can be assumed to have a curvature of 0, as the difference between this and its actual curvature is negligible.
The curvature of a wavefront is given as: C = 1 / v
where v is the distance from the wavefront to the in-focus image depicted by the light. Curvature is measured in dioptres (D).
Power of lenses
The function of a lens is to increase or decrease the curvature of a wavefront. Lenses have a 'power'. This is the curvature which the lens adds to the wavefront. Power is measured in dioptres, and is given by the formula: P = 1 / f
where f equals the focal length of the lens. This is the distance between the lens and the point where an image will be in focus, if the wavefronts entering the other side of the lens are parallel.
The Lens Equation
Overall, then, the formula relating the curvature of the wavefronts leaving a lens to the curvature of the wavefronts entering it is: 1/v = 1/u + 1/f
where v is the distance between the lens (its centre) and the in-focus image formed, u is the distance between the lens (its centre) and the object which the in-focus image is of, and f is the focal length of the lens. The power of the lens can be substituted in for the reciprocal of f, as they are the same thing.
The Cartesian Convention
If we were to place a diagram of the lens on a grid, labelled with cartesian co-ordinates, we would discover that the object distance is measured as negative, in comparison to the image distance. As a result, the value for u must always be negative. This is known as the Cartesian convention.
This means that, if light enters the lens with a positive curvature, it will leave with a negative curvature unless the lens is powerful enough to make the light leave with a positive curvature.
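A small sketch of the lens equation with the Cartesian convention is given below. The object distance and lens power are hypothetical; the point is simply that u is entered as a negative number and v comes out positive for a real image.

```python
# Lens equation 1/v = 1/u + 1/f, with u negative under the Cartesian convention.
def image_distance(u, f):
    """Image distance v (m) for object distance u (m, negative) and focal length f (m)."""
    return 1.0 / (1.0 / u + 1.0 / f)

u = -0.5          # object 0.5 m in front of the lens (hypothetical)
f = 0.05          # a +20 D lens has a focal length of 1/20 = 0.05 m
v = image_distance(u, f)

print(f"v = {v:.4f} m")                 # about 0.0556 m on the far side of the lens
print(f"magnification = {v / u:.3f}")   # negative under the Cartesian convention
```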
Types of Lens
There are two types of lens:
Converging lenses add curvature to the wavefronts, causing them to converge more. These have a positive power, and have a curved surface which is wider in the middle than at the rim.
Diverging lenses remove curvature from the wavefronts, causing them to diverge more. These have a negative power, and have a curved surface with a dip in the middle.
Magnification is a measure of how much an image has been enlarged by a lens. It is given by the formula: m = h2 / h1
where h1 and h2 are the heights of the image (or object) before and after being magnified, respectively. If an image is shrunk by a lens, the magnification is between 0 and 1.
Magnification can also be given as: m = v / u
where v and u are the image and object distances. Therefore: h2 / h1 = v / u
An easy way to remember this in the middle of an exam is the formula: I = AM
where I is image size, A is actual size of the object, and M is the magnification factor.
1. A lens has a focal length of 10cm. What is its power, in dioptres?
2. Light reflected off a cactus 1.5m from a 20D lens forms an image. How many metres is it from the other side of the lens?
3. A lens in an RGB projector causes an image to focus on a large screen. What sort of lens is it? Is its power positive or negative?
4. What is the focal length of a 100D lens?
5. The film in a camera is 5mm from a lens when automatically focussed on someone's face, 10m from the camera. What is the power of the lens?
6. The light from a candle is enlarged by a factor of 0.5 by a lens, and produces an image of a candle, 0.05m high, on a wall. What is the height of the candle?
Reflection is when light 'bounces' off a material which is different to the one in which it is travelling. You may remember from GCSE (or equivalent) level that we can calculate the direction the light will take if we consider a line known as the 'normal'. The normal is perpendicular to the boundary between the two materials, at the point at which the light is reflected. The angle between the normal and the reflected ray of light is known as the angle of reflection (r). The ray of light will be reflected back at the same angle to the normal as it arrived, on the other side of the normal.
Refraction is when light changes velocity when it travels across the boundary between two materials. This causes it to change direction. The angle between the normal and the refracted ray of light is known as the angle of refraction (r).
The Refractive Index
The refractive index is a measure of how much light will be refracted on the boundary between a material and a 'reference material'. This reference material is usually either air or a vacuum. It is given by the following formula: n = c0 / c1
where c0 is the speed of light in a vacuum (3 x 108 m/s) and c1 is the speed of light in the material.
We can relate the refractive index to the angles of incidence (i) and refraction (r) using the following formula, known as Snell's Law: n = sin i / sin r
Total Internal Reflection
Normally, when light passes through a non-opaque material, it is both reflected and refracted. However, sometimes, rays of light are totally internally reflected; in other words, they are not refracted, so no light goes outside the material. This is useful in optic fibres, which allow a signal to be transmitted long distances at the speed of light because the light is totally internally reflected.
The critical angle is the minimum angle of incidence, for a given material, at which rays of light are totally internally reflected. At the critical angle (C), the angle of refraction is 90°, as any smaller angle of incidence will result in refraction. Therefore, for light leaving the material: n = sin 90° / sin C
Since sin 90° = 1: sin C = 1 / n, so C = sin-1 (1 / n)
In word form, in a material with refractive index n, light will be totally internally reflected at angles greater than the inverse sine of the reciprocal of the refractive index.
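The sketch below applies Snell's law and the critical-angle result above to a material with a hypothetical refractive index of 1.5 (roughly that of glass).

```python
import math

def refraction_angle(n, i_degrees):
    """Angle of refraction inside a material of refractive index n,
    for light arriving from the reference material at angle i to the normal."""
    return math.degrees(math.asin(math.sin(math.radians(i_degrees)) / n))

def critical_angle(n):
    """Minimum angle of incidence inside the material for total internal reflection."""
    return math.degrees(math.asin(1.0 / n))

n = 1.5
print(round(refraction_angle(n, 30), 1))   # about 19.5 degrees
print(round(critical_angle(n), 1))         # about 41.8 degrees
```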
1. A ray of light is reflected from a mirror. Its angle to the normal when it reaches the mirror is 70°. What is its angle of reflection?
2. The speed of light in diamond is 1.24 x 108 m/s. What is its refractive index?
3. The refractive index of ice is 1.31. What is the speed of light in ice?
4. A ray of light passes the boundary between air and a transparent material. The angle of refraction is 20°, and the angle of incidence is 10°. What is the speed of light in this material? Why is it impossible for this material to exist?
5. What is the critical angle of a beam of light leaving a transparent material with a refractive index of 2?
There are two different types of data: analogue and digital. Analogue data can, potentially, take on any value. Examples include a page of handwritten text, a cassette, or a painting. Digital data can only take on a set range of values. This enables it to be processed by a computer. Examples include all files stored on computers, CDs, DVDs, etc.
Digital images are made up of pixels. A pixel represents the value of an individual square of the image, and it has a value assigned to it. The total number of pixels in an image is just like the formula for the area of a rectangle: number of pixels across multiplied by number of pixels down. When representing text, each pixel is a component of one character (for example, a letter, a number, a space, or a new line); it is not the entirety of a character. For instance, if the letter 'E' were taken as an example and a section taken through the three protrusions, a minimum of seven pixels would be used: one white pixel at the top, then one black (for the first protrusion), then one white for the gap, then a black one for the centre, and so on. A typeface such as Helvetica or Times New Roman may be made up of a more complex pattern of pixels to allow for serif details.
Each pixel's value is digital: it takes on a definite value. In a higher quality image, each pixel can take on a greater variety of values. Each pixel's value is encoded as a number of bits. A bit is a datum with a value of either 0 or 1. The more values a pixel can take on, the more bits must be used to represent its value. The number of values (N) that a pixel represented by I bits can take on is given by the formula:
N = 2^I
To find the number of bits I needed for N values, take logarithms: I = log N / log 2. For this ratio the base of the logarithm does not matter, provided the same base is used for both logs (base 10 is used here).
A pixel may be represented by values for red, green and blue, in which case each colour channel will have to be encoded separately. When dealing with text, the number of values is equal to the number of possible characters.
Overall, for an image:
Amount of information in an image (bits) = number of pixels x bits per pixel.
A byte is equal to 8 bits. The major difference between bytes and SI units is that when prefixes (such as kilo-, mega-, etc.) are attached, we do not multiply by 10^3 as the prefix increases. Instead, we multiply by 1024. So, 1 kilobyte = 1024 bytes, 1 megabyte = 1024^2 bytes, 1 gigabyte = 1024^3 bytes, and 1 terabyte = 1024^4 bytes.
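Putting the two formulas together, the sketch below works out how much storage a hypothetical 800 x 600 pixel, 3-channel, 8-bits-per-channel image needs, using the 1024-based prefixes described above.

```python
# Information (bits) = number of pixels x bits per pixel (per channel, times channels).
width, height = 800, 600            # pixels (hypothetical image size)
channels = 3                        # red, green and blue
levels_per_channel = 256            # values each channel can take

bits_per_channel = levels_per_channel.bit_length() - 1    # log2(256) = 8 bits
total_bits = width * height * channels * bits_per_channel

total_bytes = total_bits / 8
print(f"{total_bits} bits = {total_bytes / 1024**2:.2f} megabytes (1 megabyte = 1024^2 bytes)")
```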
1. An image transmitted down a SVGA video cable is 800 pixels wide, and 600 pixels high. How many pixels are there in the image?
2. A grayscale image is encoded using 3 bits. How many possible values can each pixel have?
3. The characters in a text document are numbered from 0 - 255. How many bits should each character be encoded with?
4. A page contains 30 lines of text, with an average of 15 characters on each line. Each character is represented by 4 bits. How many megabytes of uncompressed storage will a book consisting of 650 pages like this fill on a computer's hard disk?
5. A 10cm wide square image is scanned into a computer. Each pixel is encoded using 3 channels (red, green and blue), and each channel can take on 256 possible values. One pixel is 0.01 mm wide. How much information does the scanned image contain? Express your answer using an appropriate unit.
As we have already seen, a digital image consists of pixels, with each pixel having a value which represents its colour. For the purposes of understanding how digital images are manipulated, we are going to consider an 8-bit grayscale image, with pixel values ranging from 0 to 255, giving us 256 (28) levels of grey. 0 represents white, and 255 represents black. This is the image we are going to consider:
The image consists of an edge, and some random noise. There are two methods of smoothing this image (i.e. removing noise) that you need to know about:
In order to attempt to remove noise, we can take the mean average of all the pixels surrounding each pixel (and the pixel itself) as the value of the pixel in the smoothed image, as follows:
This does remove the noise, but it blurs the image.
A far better method is, instead of taking the mean, to take the median, as follows:
For this image, this gives a perfect result. In more complicated images, however, data will still be lost, although, in general, less data will be lost by taking the median than by taking the mean.
We can detect the positioning of edges in an image using the 'Laplace rule', or 'Laplace kernel'. For each pixel in the image, we multiply its value by 4, and then subtract the values of the pixels above and below it, and on either side of it. If the result is negative, we treat it as 0. So, taking the median-smoothed image above, edge detection gives the following result:
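The two operations above can be written out directly for a small grid of pixel values. The sketch below uses a made-up 8-bit image (0 = white, 255 = black, as in the text); border pixels simply use whatever part of the neighbourhood fits inside the image.

```python
from statistics import median

image = [
    [0,  0, 0, 255, 255],
    [0, 90, 0, 255, 255],   # the 90 is a speck of noise on the white side of the edge
    [0,  0, 0, 255, 255],
    [0,  0, 0, 255, 255],
]

def median_smooth(img):
    """Replace each pixel by the median of its 3x3 neighbourhood (including itself)."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            neighbours = [img[j][i]
                          for j in range(max(0, y - 1), min(h, y + 2))
                          for i in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = int(median(neighbours))
    return out

def laplace_edges(img):
    """4 x pixel minus its four direct neighbours; negative results are treated as 0."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total = 4 * img[y][x]
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                if 0 <= y + dy < h and 0 <= x + dx < w:
                    total -= img[y + dy][x + dx]
            out[y][x] = max(total, 0)
    return out

smoothed = median_smooth(image)     # the 90 disappears
edges = laplace_edges(smoothed)     # large values mark the vertical edge
```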
1. How could the above methods be applied to a digital sound sample?
2. Which of the above methods would be suitable for smoothing sharp edges? Why?
3. Use median smoothing to remove noise from the following image of a white cat in a snowstorm (the black pixels have a value of 255):
4. Why would mean sampling not be appropriate for smoothing the image given in question 3?
5. Use mean smoothing to remove noise from the following image of a black cat in a coal cellar:
Digitisation of a signal is the process by which an analogue signal is converted to a digital signal.
Digitisation & Reconstruction
Let us consider the voltage output from a microphone. The signal which enters the microphone (sound) is an analogue signal - it can be any of a potentially infinite range of values, and may look something like this waveform (from an artificial (MIDI) piano):
When the microphone converts this signal to an electrical signal, it samples the signal a number of times, and transmits the level of the signal at that point. The following diagram shows sample times (vertical black lines) and the transmitted signal (the red line):
When we wish to listen to the sound, the digital signal has to be reconstructed. The gaps between the samples are filled in, but, as you can see, the reconstructed signal is not the same as the original sound:
The sampling rate when digitising an analogue signal is defined as the number of samples per second, and is measured in Hertz (Hz), as it is a frequency. You can calculate the sampling rate using the formula: sampling rate (Hz) = number of samples ÷ time taken (s)
The higher the sampling rate, the closer the reconstructed signal is to the original signal, but, unfortunately, we are limited by the bandwidth available. Theoretically, a sampling rate of twice the highest frequency of the original signal will result in a perfect reconstructed signal. In the example given above, the sampling rate is far too low, hence the loss of information.
Number of Levels
Another factor which may limit the quality of the reconstructed signal is the number of bits with which the signal is encoded. For example, if we use 3 bits per sample, we only have 8 (2^3) levels, so, when sampling, we must take the nearest value represented by one of these levels. This leads to quantization errors - when a sample does not equal the value of the original signal at a given sample point.
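The sketch below digitises a short stretch of a sine wave: it samples at a fixed rate and then rounds each sample to the nearest of 8 levels (3 bits). The frequency, rate and duration are made-up numbers for illustration; the rounding step is where quantization error creeps in.

```python
import math

sampling_rate = 8000      # samples per second (Hz)
bits = 3
levels = 2 ** bits        # 8 possible levels per sample
duration = 0.002          # seconds of signal to digitise

samples = []
for n in range(int(sampling_rate * duration)):
    t = n / sampling_rate
    value = math.sin(2 * math.pi * 440 * t)         # analogue value in [-1, 1]
    level = round((value + 1) / 2 * (levels - 1))   # nearest of the 8 levels
    samples.append(level)

print(samples)
```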
1. Take samples for the signal below every 0.1ms, and then produce a reconstructed signal. How does it differ from the original?
2. A signal is sampled for 5 seconds at a sampling rate of 20 kHz. How many samples were taken?
3. Most sounds created by human speech except for 'ss' and 'ff' have a maximum frequency of 4 kHz. What is a suitable sampling rate for a low-quality telephone?
4. Using a sampling rate of 20 kHz and 3 bits, sample the following signal, and then produce a reconstructed signal. What is the maximum frequency that can be perfectly reconstructed using this sampling rate?
The frequency of a wave describes how many waves go past a certain point in one second. Frequency is measured in Hertz (usually abbreviated Hz), and can be calculated using the formula:
V = fλ
where V is the velocity of the wave (in m s-1), f is the frequency of the wave (in Hz), and λ (the Greek letter lambda) is the wavelength of the wave (the distance from one peak / trough to the next, in m).
Let us consider the following signal (time is in ms, and the y-axis represents volts):
This signal is constructed from a number of different sine waves, with different frequencies, added together. These sine waves are as follows:
Each of these sine waves has a different frequency. You can see this, as they have different distances between their peaks and troughs. These frequencies can be plotted against the amplitude of the wave, as in the table, and chart drawn from it, below:
|Wave (y=)||Period (ms)||Amplitude (V)||Frequency (Hz)|
|sin(0.5x + 40)||12.566||1||80|
|2sin(3x - 60)||2.093||2||478|
This chart is known as the frequency spectrum of a signal.
The fundamental frequency is the lowest frequency that makes up a signal. In the above example, the fundamental frequency is 80 Hz. It is always the frequency farthest to the left of a frequency spectrum, ignoring noise. Other frequencies are known as overtones, or harmonics.
1. What is the frequency of an X-ray (wavelength 0.5nm)?
2. A sound wave, with a frequency of 44 kHz, has a wavelength of 7.7mm. What is the speed of sound?
3. What is the fundamental frequency of the following signal?
4. Approximately how many harmonics does it contain?
5. The three sine waves sin x°, 4sin(2x-50)° and 0.5sin(3x+120)° are added together to form a signal. What are the frequencies of each of the waves? What is the signal's fundamental frequency? Assume that the waves are travelling at the speed of light, and that 60° = 1mm.
Bandwidth is the range of frequencies that a signal occupies. Although original signals have varying frequencies, when these are transmitted, for example, as FM radio waves, they are modulated so that they only use frequencies within a certain range. FM radio modulates the frequency of a wave, so it needs some variation in the frequencies to allow for transmission of multiple frequencies. Since bandwidth is a frequency, it is measured in hertz, and it limits how many bits can be transmitted per second. The bandwidth required to transmit a signal accurately can be calculated by using 1 as the number of bits in the bit-rate relation below (so that the bit rate is 1/t), giving the formula: B = 1 / (2t)
where B is bandwidth (in Hz), and t is the time taken to transmit 1 bit of data (in s).
The bandwidth of a signal regulates the bit rate of the signal, as, with a higher frequency, more information can be transmitted. This gives us the formula (similar to the formula for lossless digital sampling):
b = 2B
where b is the bit rate (in bits per second), and B is the bandwidth (in Hz).
1. A broadband internet connection has a bit rate of 8Mbit s-1 when downloading information. What is the minimum bandwidth required to carry this bit rate?
2. The same connection has a bandwidth of 100 kHz reserved for uploading information. What is the maximum bit rate that can be attained when uploading information using this connection?
3. A lighthouse uses a flashing light and Morse Code to communicate with a nearby shore. A 'dash' consists of the light being on for 2s. The light is left off for 1s between dots and dashes. What is the bandwidth of the connection?
4. The broadband connection in question two is used to upload a 1Mbyte image to a website. How long does it take to do this?
Electrons, like many other particles, have a charge. While some particles have a positive charge, electrons have a negative charge. The charge on an electron is equal to approximately -1.6 x 10-19 coulombs. Coulombs (commonly abbreviated C) are the unit of charge. One coulomb is defined as the electric charge carried by 1 ampere (amp) of current in 1 second. It is normal to ignore the negative nature of this charge when considering electricity.
If we have n particles with the same charge Qparticle, then the total charge Qtotal is given by:
Qtotal = n Qparticle
By a simple rearrangement: n = Qtotal / Qparticle
1. How much charge do 1234 electrons carry?
2. How many electrons does it take to carry 5 C of charge?
3. The total charge on 1 mole of electrons (6 x 1023 particles) is equal to 1 faraday of charge. How many coulombs of charge are equal to 1 faraday?
4. The mass of a ball is 50 mg. It is supplied with 5 C of charge. Will there be any change in the mass of the ball? If so, calculate the change in mass.
Current is the amount of charge (on particles such as electrons) flowing through part of an electric circuit per second. Current is measured in amperes (usually abbreviated A), where 1 ampere is 1 coulomb of charge per second. The formula for current is: I = ΔQ / Δt
(The triangle Δ, the Greek letter delta, means 'change in' the quantity.)
where I is current (in A), Q is charge (in C) and t is the time it took for the charge to flow (in seconds).
In a series circuit, the current is the same everywhere in the circuit, as the rate of flow of charged particles is constant throughout the circuit. In a parallel circuit, however, the current is split between the branches of the circuit, as the total number of charged particles flowing cannot change. This is Kirchhoff's First Law, stating that:
"At any point in an electrical circuit where charge density is not changing in time [i.e. there is no buildup of charge, as in a capacitor], the sum of currents flowing towards that point is equal to the sum of currents flowing away from that point."
In mathematical form: ΣIin = ΣIout
(The character Σ is the Greek letter sigma, meaning 'sum of'.)
1. 10 coulombs flow past a point in a wire in 1 minute. How much current is flowing through the point?
2. How long does it take for a 2A current to carry 5C?
3. In the diagram on the left, I = 9A, and I1 = 4.5A. What is the current at I2?
4. What would I equal if I1 = 10A and I2 = 15A?
5. In the diagram on the left, in 5 seconds, 5C of charged particles flow past I1, and 6.7C flow past I2. How long does it take for 10C to flow past I?
Charge moves through a circuit, losing potential energy as it goes. This means that the charge travels as an electric current. Voltage is defined as the difference in potential energy per unit charge, i.e. V = E / Q
where V is voltage (in V), E is the difference in potential energy (in joules) and Q is charge (in coulombs).
There are two electrical properties which are both measured in volts (commonly abbreviated V), and so both are known under the somewhat vague title of 'voltage'. Both are so called because they change the potential energy of the charge.
Electromotive Force (EMF)
Keep in mind that EMF, despite its name, is not a force. It is the potential difference across the terminals of a cell when no current is being drawn from it, i.e. when the circuit is open. The name 'electromotive force' is historical, dating from early, flawed experiments; the term has stuck even though the definition has changed with time.
As charge travels around a circuit, each coulomb of charge has less potential energy, so the voltage (relative to the power source) decreases. The difference between the voltage at two points in a circuit is known as potential difference, and can be measured with a voltmeter.
In a series circuit, the total voltage (EMF) is divided across the components, as each component causes the voltage to decrease, so each one has a potential difference. The sum of the potential differences across all the components is equal to the EMF (but batteries have their own 'internal resistances', which complicates things slightly, as we will see).
In a parallel circuit, the potential difference across each branch of the circuit is equal to the EMF, as the same 'force' is pushing along each path of the circuit. The number of charge carriers (current) differs, but the 'force' pushing them (voltage) does not.
1. A battery has an EMF of 5V. What is the total potential difference across all the components in the circuit?
2. The voltages (relative to the voltage of the battery) on either side of a resistor are -6V and -5V. What is the potential difference across the resistor?
3. At a given point in a circuit, 5C of charge have 10 kJ of potential energy. What is the voltage at this point?
4. Why do the electrons move to a point 1cm further along the wire?
Power is a measure of how much potential energy is dissipated (i.e. converted into heat, light and other forms of energy) by a component or circuit in one second. This is due to a drop in the potential energy, and so the voltage, of charge. Power is measured in Watts (commonly abbreviated W), where 1 W is 1 Js-1. It can be calculated by finding the product of the current flowing through a component / circuit and the potential difference across the component / circuit. This gives us the equation:
where P is the power dissipated (in W), E is the drop in potential energy (in Joules, J), t is the time taken (in s), I is the current (in A) and V is either potential difference or electromotive force (in V), depending on the component being measured.
Since power is the amount of energy changing form per second, the amount of energy being given out each second will equal the power of the component giving out energy.
You should be able to substitute in values for I and V from other formulae (V = IR, Q = It) in order to relate power to resistance, conductance, charge and time, giving formulae like these: P = I^2 R, P = V^2 / R = V^2 G, and P = QV / t
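As a quick check that these forms agree, the sketch below evaluates the power of a hypothetical resistor three ways.

```python
# P = IV, P = I^2 R and P = V^2 / R should all give the same answer.
V = 12.0          # potential difference, volts (hypothetical)
R = 6.0           # resistance, ohms (hypothetical)
I = V / R         # current from Ohm's law

print(I * V, I ** 2 * R, V ** 2 / R)   # 24.0 24.0 24.0 watts

# Energy dissipated over a time t follows from P = E / t:
t = 60.0
print(I * V * t, "J")                  # 1440.0 J in one minute
```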
1. The potential difference across a 9W light bulb is 240V. How much current is flowing through the light bulb?
2. How much energy is dissipated by a 10W component in 1 hour?
3. The potential difference across a top-notch kettle, which can hold up to 1 litre of water, is 240V, and the current is 12.5 A. 4.2 kJ of energy is required to raise the temperature of 1kg of water by 1°C. Assuming 100% efficiency and that the temperature has to be raised 80°C (20°C to 100°C), how long does it take to boil 1 litre of water?
4. How much energy is dissipated by a 100Ω resistor in 10 seconds if 2A of current are flowing?
5. The charge on an electron is -1.6 x 10-19 C. How long does it take for a mole (6 x 1023 particles) of electrons to flow through a 40W light bulb on a 240V ring main?
Resistance and Conductance
Conductance is a measure of how well an artefact (such as an electrical component, not a material, such as iron) carries an electric current. Resistance is a measure of how well an artefact resists an electric current.
Resistance is measured in Ohms (usually abbreviated using the Greek letter Omega, Ω) and, in formulae, is represented by the letter R. Conductance is measured in Siemens (usually abbreviated S) and, in formulae, is represented by the letter G.
Resistance and conductance are each other's reciprocals, so: R = 1 / G and G = 1 / R
Ohm's Law states that the potential difference across an artefact constructed from Ohmic conductors (i.e. conductors that obey Ohm's Law) is equal to the product of the current running through the component and the resistance of the component. As a formula:
V = IR
where V is potential difference (in V), I is current (in A) and R is resistance (in Ω).
In terms of Resistance
This formula can be rearranged to give a formula which can be used to calculate the resistance of an artefact: R = V / I
In terms of Conductance
Since conductance is the reciprocal of resistance, we can deduce a formula for conductance (G): G = 1 / R = I / V
The Relationship between Potential Difference and Current
From Ohm's Law, we can see that potential difference is directly proportional to current, provided resistance is constant. This is because two variables (let us call them x and y) are considered directly proportional to one another if: y = kx
where k is any positive constant. Since we are assuming that resistance is constant, R can equal k, so V=RI states that potential difference is directly proportional to current. As a result, if potential difference is plotted against current on a graph, it will give a straight line with a positive gradient which passes through the origin. The gradient will equal the resistance.
In Series Circuits
In a series circuit (for example, a row of resistors connected to each other), the resistances of the resistors add up to give the total resistance. Since conductance is the reciprocal of resistance, the reciprocals of the conductances add up to give the reciprocal of the total conductance. So: Rtotal = R1 + R2 + ... and 1 / Gtotal = 1 / G1 + 1 / G2 + ...
In Parallel Circuits
In a parallel circuit, the conductances of the components on each branch add up to give the total conductance. Similar to series circuits, the reciprocals of the total resistances of each branch add up to give the reciprocal of the total resistance of the circuit. So: Gtotal = G1 + G2 + ... and 1 / Rtotal = 1 / R1 + 1 / R2 + ...
When considering circuits which are a combination of series and parallel circuits, consider each branch as a separate component, and work out its total resistance or conductance before finishing the process as normal.
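The series and parallel rules above lend themselves to two small helper functions; the component values below are hypothetical.

```python
def series(*resistances):
    """Total resistance of resistors in series: they simply add."""
    return sum(resistances)

def parallel(*resistances):
    """Total resistance of resistors in parallel: reciprocals (conductances) add."""
    return 1.0 / sum(1.0 / r for r in resistances)

r_branch = series(1000, 5000)            # 1 kOhm + 5 kOhm in series = 6000 Ohm
r_total = parallel(r_branch, 500)        # that branch in parallel with 500 Ohm
print(r_branch, round(r_total, 1))       # 6000 461.5

# The same parallel step via conductances, which simply add:
g_total = 1 / r_branch + 1 / 500
print(round(1 / g_total, 1))             # 461.5 again
```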
1. The potential difference across a resistor is 4V, and the current is 10A. What is the resistance of the resistor?
2. What is the conductance of this resistor?
3. A conductor has a conductance of 2S, and the potential difference across it is 0.5V. How much current is flowing through it?
4. A graph is drawn of potential difference across an Ohmic conductor, and current. For every 3cm across, the graph rises by 2cm. What is the conductance of the conductor?
5. On another graph of potential difference and current, the graph curves so that the gradient increases as current increases. What can you say about the resistor?
6. 3 resistors, wired in series, have resistances of 1kΩ, 5kΩ and 500Ω each. What is the total resistance across all three resistors?
7. 2 conductors, wired in parallel, have conductances of 10S and 5S. What is the total resistance of both branches of the parallel circuit?
8. The circuit above is attached in series to 1 10Ω resistor. What is the total conductance of the circuit now?
Batteries, just like other components in an electric circuit, have a resistance. This resistance is known as internal resistance. This means that applying Ohm's law (V = IR) to circuits is more complex than simply feeding the correct values for V, I or R into the formula.
The existence of internal resistance is indicated by measuring the potential difference across a battery. This is always less than the EMF of the battery. This is because of the internal resistance of the battery. This idea gives us the following formula:
PD across battery = EMF of battery - voltage to be accounted for
Let us replace these values with letters to give the simpler formula:
Vexternal = E - Vinternal
Since V = IR:
Vexternal = E - IRinternal
You may also need to use the following formula to work out the external potential difference, if you are not given it:
Vexternal = IΣRexternal
You should also remember the effects of using resistors in both series and parallel circuits.
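The sketch below applies Vexternal = E - IRinternal to a battery driving a single external resistor; the EMF and resistances are made-up values.

```python
emf = 9.0             # volts
r_internal = 0.5      # ohms
r_external = 4.0      # ohms

current = emf / (r_internal + r_external)    # internal and external resistances in series
v_external = emf - current * r_internal      # terminal potential difference

print(round(current, 2), "A")                # 2.0 A
print(round(v_external, 2), "V")             # 8.0 V, less than the 9 V EMF
```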
1. A 9V battery is short-circuited. The potential difference across the battery is found to be 8V, and the current is 5A. What is the internal resistance of the battery?
2. What is the EMF of the battery in the following circuit?
3. What is the internal resistance of the battery in the following circuit?
A potential divider, or potentiometer, consists of a number of resistors, and a voltmeter. The voltage read by the voltmeter is determined by the ratio of the resistances on either side of the point at which one end of the voltmeter is connected.
To understand how a potential divider works, let us consider resistors in series. The resistances add up, so, in a circuit with two resistors: Rtotal = R1 + R2
If we apply Ohm's law, remembering that the current is constant throughout a series circuit: Vtotal / I = V1 / I + V2 / I
Multiply by current (I): Vtotal = V1 + V2
So, just as the resistances in series add up to the total resistance, the potential differences add up to the total potential difference. The ratios between the resistances are equal to the ratios between the potential differences. In other words, we can calculate the potential difference across a resistor using the formula: Vresistor = Vtotal × (Rresistor / Rtotal)
In many cases, you will be told to assume that the internal resistance of the power source is negligible, meaning that you can take the total potential difference as the EMF of the power source.
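The divider formula above is easy to wrap in a small function; the resistor values below are hypothetical.

```python
def divider_voltage(v_total, r_measured, *other_resistances):
    """Potential difference across r_measured in a series chain supplied with v_total."""
    r_total = r_measured + sum(other_resistances)
    return v_total * r_measured / r_total

# 9 V (negligible internal resistance) across a 10 kOhm and a 5 kOhm resistor in series,
# with the voltmeter across the 10 kOhm resistor:
print(divider_voltage(9.0, 10e3, 5e3))   # 6.0 V
```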
A potential divider may work by combining a variable resistor such as an LDR or thermistor with a constant resistor, as in the diagram below. As the resistance of the variable resistor changes, the ratio between the resistances changes, so the potential difference across any given resistor changes.
Alternatively, a potential divider may be made of many resistors. A 'wiper' may move across them, varying the number of resistors on either side of the wiper as it moves, as in the following diagram:
1. A 12 kΩ resistor and a 20 kΩ resistor are connected to a 9V battery. A voltmeter is connected across the 12kΩ resistor. What is the reading on the voltmeter? (Assume negligible internal resistance.)
2. A potential divider consists of 100 5Ω resistors, with a wiper which moves on one resistor for every 3.6° a handle connected to it turns. The wiper is connected to a voltmeter, and the circuit is powered by a 120V power source with negligible internal resistance. What is the reading on the voltmeter when the handle turns 120°?
3. A 9V battery with internal resistance 0.8Ω is connected to 3 resistors with conductances of 3, 2 and 1 Siemens. A voltmeter is connected across the 3 and 2 Siemens resistors. An ammeter is placed in the circuit, between the battery and the first terminal of the voltmeter, and reads 2A. What is the reading on the voltmeter?
A sensor is a device which converts a physical property into an electrical property (such as resistance). A sensing system is a system (usually a circuit) which allows this electrical property, and so the physical property, to be measured.
A common example of a sensing system is a temperature sensor in a thermostat, which uses a thermistor. In the most common type of thermistor (an NTC), the resistance decreases as the temperature increases. This effect is achieved by making the thermistor out of a semiconductor. The thermistor is then used in a potential divider, as in the diagram on the right. In this diagram, the potential difference is divided between the resistor and the thermistor. As the temperature rises, the resistance of the thermistor decreases, so the potential difference across it decreases. This means that potential difference across the resistor increases as temperature increases. This is why the voltmeter is across the resistor, not the thermistor.
There are three main properties of sensing systems you need to know about:
This is the amount of change in voltage output per unit change in input (the physical property). For example, in the above sensing system, if the voltage on the voltmeter increased by 10V as the temperature increased by 6.3°C: sensitivity = 10 V ÷ 6.3 °C ≈ 1.59 V °C-1
This is the smallest change in the physical property detectable by the sensing system. Sometimes, the limiting factor is the number of decimal places the voltmeter can display. So if, for example, the voltmeter can display the voltage to 2 decimal places, the smallest visible change in voltage is 0.01V. We can then use the sensitivity of the sensor to calculate the resolution.
This is the time the sensing system takes to display a change in the physical property it is measuring. It is often difficult to measure.
Sometimes, a sensing system gives a difference in output voltage, but the sensitivity is far too low to be of any use. There are two solutions to this problem, which can be used together:
An amplifier can be placed in the system, increasing the signal. The main problem with this is that the signal cannot exceed the maximum voltage of the system, so values will be chopped off the top and bottom of the signal if the amplified signal is too large.
The second solution, a Wheatstone bridge, is far better, especially when used prior to amplification. Instead of using just one pair of resistors, a second pair is used, and the potential difference between the two pairs (which are connected in parallel) is measured. This means that, if, at the sensing resistor (e.g. thermistor / LDR) the resistance is at its maximum, a signal of 0V is produced. This means that the extremes of the signal are not chopped off, making for a much better sensor.
An LDR's resistance decreases from a maximum resistance of 2kΩ to a minimum resistance of 0Ω as light intensity increases. It is used in a distance sensing system which consists of a 9V power supply, a 1.6 kΩ resistor, the LDR and a multimeter which displays voltage to 2 decimal places measuring the potential difference across one of the two resistors.
1. Across which resistor should the multimeter be connected in order to ensure that, as the distance from the light source to the sensor increases, the potential difference recorded increases?
2. In complete darkness, what voltage is recorded on the multimeter?
3. When a light source moves 0.5m away from the sensor, the voltage on the multimeter increases by 2V. What is the sensitivity of the sensing system when using this light source, in V m-1?
4. When the same light source is placed 0m from the sensor, the potential difference is 0V. When the light source is 1m away, what voltage is displayed on the multimeter?
5. What is the resolution of the sensing system?
6. Draw a circuit diagram showing a similar sensing system to this, using a Wheatstone bridge and amplifier to improve the sensitivity of the system.
7. What is the maximum potential difference that can reach the amplifier using this new system (ignore the amplification)?
8. If this signal were to be amplified 3 times, would it exceed the maximum voltage of the system? What would the limits on the signal be?
Resistivity and Conductivity
Resistivity and conductivity are material properties: they apply to all examples of a certain material anywhere. They are not the same as resistance and conductance, which are properties of individual artefacts. This means that they only apply to a given object. They describe how well a material resists or conducts an electric current.
Symbols and Units
Resistivity is usually represented by the Greek letter rho (ρ), and is measured in Ω m. Conductivity is usually represented by the Greek letter sigma (σ), and is measured in S m-1.
The formula relating resistivity (ρ) to resistance (R), cross-sectional area (A) and length (L) is: ρ = RA / L
Conductivity is the reciprocal of resistivity, just as conductance (G) is the reciprocal of resistance. Hence: σ = 1 / ρ
You should be able to rearrange these two formulae to be able to work out resistance, conductance, cross-sectional area and length. For example, it all makes a lot more sense if we write the first formula with R as the subject: R = ρL / A
From this, we can see that the resistance of a lump of material is higher if it has a higher resistivity, or if it is longer. Also, if it has a larger cross-sectional area, its resistance is smaller.
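The sketch below evaluates R = ρL / A for a hypothetical cylindrical wire, using a resistivity of about 1.7 x 10-8 Ω m (roughly that of copper).

```python
import math

rho = 1.7e-8        # resistivity, Ohm m (approximate value for copper)
length = 2.0        # m (hypothetical)
radius = 0.5e-3     # m (hypothetical)

area = math.pi * radius ** 2          # cross-sectional area of the wire
resistance = rho * length / area      # R = rho L / A
conductance = 1.0 / resistance        # G = 1 / R

print(f"R = {resistance:.4f} Ohm, G = {conductance:.0f} S")
```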
1. A material has a conductivity of 106 S m-1. What is its resistivity?
2. A pure copper wire has a radius of 0.5mm, a resistance of 1 MΩ, and is 4680 km long. What is the resistivity of copper?
3. Gold has a conductivity of 45 MS m-1. What is the resistance of a 0.01m across gold connector, 0.05m long?
4. A strand of metal is stretched to twice its original length. What is its new resistance? State your assumptions.
5. Which has the greater resistivity: a plank or a piece of sawdust, made from the same wood?
A semiconductor has a conductivity between that of a conductor and an insulator. They are less conductive than metals, but differ from metals in that, as a semiconductor heats up, its conductivity rises. In metals, the opposite effect occurs.
The reason for this is that, in a semiconductor, very few atoms are ionised, and so very few electrons can move, creating an electric current. However, as the semiconductor heats up, the covalent bonds (atoms sharing electrons, causing the electrons to be relatively immobile) break down, freeing the electrons. As a result, a semiconductor's conductivity rises at an increasing rate as temperature rises.
Examples of semiconductors include silicon and germanium. A full list of semiconductor materials is available at Wikipedia. At room temperature, silicon has a conductivity of about 435 μS m-1.
Semiconductors are usually 'doped'. This means that ions are added in small quantities, giving the semiconductor a greater or lesser number of free electrons as required. This is controlled by the charge on the ions.
1. What is the resistivity of silicon, at room temperature?
2. What sort of variable resistor would a semiconductor be useful in?
3. If positive ions are added to silicon (doping it), how does its conductivity change?
- See also: the Wikibooks book on Semiconductors.
Stress, Strain & the Young Modulus
Stress is a measure of how strong a material is. This is defined as how much pressure the material can stand without undergoing some sort of physical change. Hence, the formula for calculating stress is the same as the formula for calculating pressure:
where σ is stress (in Newtons per square metre but usually Pascals, commonly abbreviated Pa), F is force (in Newtons, commonly abbreviated N) and A is the cross sectional area of the sample.
The tensile strength is the level of stress at which a material will fracture. Tensile strength is also known as fracture stress. If a material fractures by 'crack propagation' (i.e., it shatters), the material is brittle.
The yield stress is the level of stress at which a material will deform permanently. This is also known as yield strength.
Stress causes strain. Putting pressure on an object causes it to stretch. Strain is a measure of how much an object is being stretched. The formula for strain is: ε = Δl / l0 = (l - l0) / l0
where l0 is the original length of the bar being stretched, and l is its length after it has been stretched. Δl is the extension of the bar, the difference between these two lengths.
Young's Modulus is a measure of the stiffness of a material. It states how much a material will stretch (i.e., how much strain it will undergo) as a result of a given amount of stress. The formula for calculating it is: E = σ / ε = stress / strain
The values for stress and strain must be taken at as low a stress level as possible, provided a difference in the length of the sample can be measured. Strain is unitless so Young's Modulus has the same units as stress, i.e. N/m² or Pa.
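A short sketch of the three definitions above, applied to a hypothetical stretched wire, is given below.

```python
import math

force = 50.0             # N (hypothetical load)
radius = 0.5e-3          # m (hypothetical wire radius)
original_length = 2.0    # m
extension = 1.0e-3       # m

area = math.pi * radius ** 2
stress = force / area                    # Pa
strain = extension / original_length     # dimensionless

young_modulus = stress / strain          # Pa, since strain has no units
print(f"stress = {stress:.3e} Pa, strain = {strain:.1e}, E = {young_modulus:.3e} Pa")
```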
Stress (σ) can be graphed against strain (ε). The toughness of a material (i.e., how much it resists stress, in J m-3) is equal to the area under the curve, between the y-axis and the fracture point. Graphs such as the one on the right show how stress affects a material. This image shows the stress-strain graph for low-carbon steel. It has three main features:
In this region (between the origin and point 2), the ratio of stress to strain (Young's modulus) is constant, meaning that the material is obeying Hooke's law, which states that a material is elastic (it will return to its original shape) if force is directly proportional to extension of the material
Hooke's law of elasticity is an approximation that states that the Force (load) is in direct proportion with the extension of a material as long as this load does not exceed the elastic limit. Materials for which Hooke's law is a useful approximation are known as linear-elastic
The relation is often denoted F = kx
where F is the force (load), x is the extension and k is the stiffness constant of the sample. The work done to stretch a wire, or the elastic potential energy, is equal to the area of the triangle on a Tension/Extension graph, but can also be expressed as E = ½ F Δl = ½ k (Δl)^2
In this region (between points 2 and 3), the rate at which extension is increasing is going up, and the material has passed the elastic limit. It will no longer return to its original shape. After point 1, the amount of stress decreases due to 'necking', so the cross-sectional area is going down. The material will 'give' and extend more under less force.
At point 3, the material finally breaks/fractures and the curve ends.
Other Typical Graphs
In a brittle material, such as glass or ceramics, the stress-strain graph will have an extremely short elastic region, and then will fracture. There is no plastic region on the stress-strain graph of a brittle material.
- 10N of force are exerted on a wire with cross-sectional area 0.5mm2. How much stress is being exerted on the wire?
- Another wire has a tensile strength of 70MPa, and breaks under 100N of force. What is the cross-sectional area of the wire just before breaking?
- What is the strain on a Twix bar (original length 10cm) if it is now 12cm long?
- What is this strain, expressed as a percentage?
- 50N are applied to a wire with a radius of 1mm. The wire was 0.7m long, but is now 0.75m long. What is the Young's Modulus for the material the wire is made of?
- Glass, a brittle material, fractures at a strain of 0.004 and a stress of 240 MPa. Sketch the stress-strain graph for glass.
- (Extra nasty question which you won't ever get in an exam) What is the toughness of glass?
There are several physical properties of metals you need to know about:
Metals consist of positive metal ions in a 'soup' or 'sea' of free (delocalized) electrons. This means that the electrons are free to move through the metal, conducting an electric current.
The electrostatic forces of attraction between the negatively charged electrons and the positively charged ions holds the ions together, making metals stiff.
Since there are no permanent bonds between the ions, they can move about and slide past each other. This makes metals ductile.
Metals are tough for the same reason as they are ductile: the positive ions can slide past each other while still remaining together. So, instead of breaking apart, they change shape, resulting in increased toughness. This effect is called plasticity.
When a metal is stretched, it can return to its original shape because the sea of electrons which bonds the ions together can be stretched as well.
Brittleness is the opposite of toughness: a brittle material is likely to crack or shatter upon impact or force. It will snap cleanly due to defects and cracks.
Metals are malleable because their atoms are arranged in flat planes that can slide past each other.
Diffusive transformation: occur when the planes of atoms in the material move past each other due to the stresses on the object. This transformation is permanent and cannot be recovered from due to energy being absorbed by the structure
Diffusionless transformation: occurs where the bonds between the atoms stretch, allowing the material to deform elastically. An example would be rubber or a shape memory metal/alloy (often referred to as SMA) such as a nickel-titanium alloy. In the shape memory alloy the transformation occurs via the change of phase of the internal structure from martensitic to deformed martensitic, which allows the SMA to have a high percentage strain (up to 8% for some SMA's in comparison to approximately 0.5% for steel). If the material is then heated above a certain temperature the deformed martensite will form austenite, which returns to twinned martensite after cooling.
1. Would you expect a metal to have more or less conductivity than a semiconductor? Why?
2. How can the stress-strain graph for a metal be explained in terms of ions in a sea of electrons?
3. As a metal heats up, what happens to its conductivity? Why?
A simple polymer consists of a long chain of monomers (components of molecules) joined by covalent bonds. A polymer usually consists of many of these bonds, tangled up. This is known as a bulk polymer.
A bulk polymer may contain two types of regions. In crystalline regions, the chains run parallel to each other, whereas in amorphous regions, they do not. Intermolecular bonds are stronger in crystalline regions. A polycrystalline polymer consists of multiple regions, in which the chains point in a different direction in each region.
Polymers which are crystalline are usually opaque or translucent. As a polymer becomes less polycrystalline, it becomes more transparent, whilst an amorphous polymer is usually transparent.
In some polymers, such as polythene, the chains are folded up. When they are stretched, the chains unravel, stretching without breaking. When the stress ceases, they will return to their original shape. If, however, the bonds between the molecules are broken, the material reaches its elastic limit and will not return to its original shape.
Polymer chains may be linked together, causing the polymer to become stiffer. An example is rubber, which, when heated with sulfur, undergoes a process known as vulcanization. The chains in the rubber become joined by sulfur atoms, making the rubber suitable for use in car tyres. A stiffer polymer, however, will usually be more brittle.
When a polymer is stretched, the chains become parallel, and amorphous areas may become crystalline. This causes an apparent change in colour, and a process known as 'necking'. This is when the chains recede out of an area of the substance, making it thinner, with fatter areas on either side.
Polymers consist of covalent bonds, so the electrons are not free to move according to potential difference. This means that polymers are poor conductors.
Polymers do not have boiling points. This is because, before they reach a theoretical boiling point, polymers decompose. Polymers do not have melting points for the same reason.
1. Different crystalline structures have different refractive indexes. Why does this mean that a polycrystalline polymer is translucent?
2. What sort of polymer is a pane of perspex?
3. What sort of polymer does the pane of perspex become when shattered (but still in one piece)?
4. What sort of polymer is a rubber on the end of a pencil?
5. What happens to the translucency of an amorphous polymer when it is put under stress?
What is a wave?
At this point in the course, it is easy to get bogged down in the complex theories and equations surrounding 'waves'. However, a better understanding of waves can be gained by going back to basics, and explaining what a wave is in the first place.
A wave, at its most basic level, is a disturbance by which energy is transferred, because the disturbance acts as a store, of sorts, of potential energy. This begs the question "How is this disturbance transferred across space?" In some cases, this is easy to answer, because some waves travel through a medium. The easiest example to think about is a water wave. One area moves up, pulling the next one up with it, and pressure and gravity pull it back to its original position.
However, some waves (electromagnetic waves) do not appear to travel through a medium. Physicists have long puzzled over how light, which behaves like a wave in many situations, travels. One theory was that there was a mysterious 'ether' which pervaded all of space, and so light was just like water waves, except that the water was invisible. This theory is widely regarded to be incorrect, but, since light is assumed to be a wave, what is it a disturbance in?
Another explanation is that light is not a wave, but instead is a stream of particles. This idea would explain away the need for an 'ether' for light to travel through. This, too, has its problems, as it does not explain why light behaves as a wave.
So, we are left with a paradox. Light behaves as both a wave and a particle, but it can be shown not to be either. Quantum physics attempts to explain this paradox. However, since light behaves as both a wave and a particle, we can look at it as both, even if, when doing this, we know that we don't fully understand it yet.
The image on the right shows a waveform. This plots the distance through the medium on the x-axis, and the amount of disturbance on the y-axis. The amount of disturbance is known as the amplitude. Wave amplitudes tend to oscillate between two limits, as shown. The distance in the medium between two 'peaks' or 'troughs' (maxima and minima on the waveform) is known as the wavelength of the wave.
Types of Waves
Waves can be categorised according to the direction of the effect of the disturbance relative to the direction of travel. A wave which causes disturbance in the direction of its travel is known as a longitudinal wave, whereas a wave which causes disturbance perpendicular to the direction of its travel is known as a transverse wave.
|Longitudinal wave (e.g. sound)||Transverse wave (e.g. light)|
One feature of waves is that they superpose. That is to say, when they are travelling in the same place in the medium at the same time, they both affect the medium independently. So, if two waves say "go up" to the same bit of medium at the same time, the medium will rise twice as much. In general, superposition means that the amplitudes of two waves at the same point at the same time at the same polarisation add up.
Consider two identical waveforms being superposed on each other. The resultant waveform will be like the two other waveforms, except its amplitude at every point will be twice as much. This is known as constructive interference. Alternatively, if one waveform moves on by half a wavelength, but the other does not, the resultant waveform will have no amplitude, as the two waveforms will cancel each other out. This is known as destructive interference. Both these effects are shown in the diagram below:
These effects occur because the wavefronts are travelling through a medium, but electromagnetic radiation also behaves like this, even though it does not travel through a medium.
Velocity, frequency and wavelength
You should remember the equation v = fλ from earlier in this course, or from GCSE. v is the velocity at which the wave travels through the medium, in ms-1, f (sometimes written as the Greek letter nu, ν) is the frequency of the wave, in Hz (the number of wavelengths passing per second), and λ is the wavelength, in m.
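As a quick numerical check of v = fλ (this sketch is not part of the original text; the example speeds and frequencies are invented for illustration), rearranging gives λ = v / f:

# Illustrative only: wavelength from wave speed and frequency, lambda = v / f
def wavelength(speed, frequency):
    return speed / frequency

print(wavelength(343.0, 440.0))   # sound in air at 440 Hz: roughly 0.78 m
print(wavelength(3.0e8, 6.0e14))  # light at 6e14 Hz: roughly 5e-7 m, i.e. 500 nm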
This equation applies to electromagnetic waves, but you should remember that there are different wavelengths of electromagnetic radiation, and that different colours of visible light have different wavelengths. You also need to know the wavelengths of the different types of electromagnetic radiation:
1. Through what medium are sound waves propagated?
2. What aspects of the behaviour of light make it look like a wave?
3. What aspects of the behaviour of light make it look like a particle?
4. Consider the diagram on the right. White light is partially reflected by the transparent material. Some of the light, however, is refracted into the transparent material and reflected back by the opaque material. The result is two waves travelling in the same place at the same time at the same polarisation (the light is not a single beam). Why does, say, the red light disappear? (Variations on this question are popular with examiners.)
5. What is the wavelength of green light?
6. The lowest frequency sound wave humans can hear has a frequency of approximately 20Hz. Given that the speed of sound in air is 343ms-1, what is the wavelength of the lowest frequency human-audible sound?
Consider the image on the right. It shows a wave travelling through a medium. The moving blue dot represents the displacement caused to the medium by the wave. It can be seen that, if we consider any one point in the medium, it goes through a repeating pattern, moving up and down and moving faster the nearer it is to the centre of the waveform. Its height at any instant is the displacement caused by the wave at that point in time, and it varies with time as a sine wave.
Phasors are a method of describing waves which show two things: the displacement caused to the medium, and the point in the repeating waveform which is being represented. They consist of a circle. An arrow moves round the circle anticlockwise as the wave pattern passes. For every wavelength that goes past, the arrow moves through 360°, or 2π radians, starting from the right, as in trigonometry. The angle of the arrow from the right is known as the phase angle, and is usually denoted θ, and the radius of the circle is usually denoted a. The height of the point at the end of the arrow represents the displacement caused by the wave to the medium, and so the amplitude of the wave at that point in time. The time taken to rotate 360° is known as the periodic time, and is usually denoted T.
Phase difference is the difference between the angles (θ) of two phasors, which represent two waves. It is never more than 180°, as, since the phasor is moving in a circle, the angle between two lines touching the circumference will always be less than or equal to 180°. It can also be expressed in terms of λ, where λ is the total wavelength (effectively, 360°). You can use trigonometry to calculate the displacement from the angle, and vice-versa, provided you know the radius of the circle. The radius is equal to the maximum amplitude of the wave.
Phasors can be added up, just like vectors: tip-to-tail. So, for example, when two waves are superposed on each other, the phasors at each point in the reference material can be added up to give a new displacement. This explains both constructive and destructive interference as well. In destructive interference, the phasors for each wave are pointing in exactly opposite directions, and so add up to nothing at all. In constructive interference, the phasors are pointing in the same direction, so the total displacement is twice as much.
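One convenient way to see this numerically is to treat each phasor as a complex number with a magnitude and an angle; adding phasors tip-to-tail is then just complex addition. The short Python sketch below is illustrative only (the amplitudes and phase angles are invented values):

import cmath

def phasor(amplitude, phase_deg):
    # Represent a phasor as a complex number: given magnitude at a given phase angle
    return cmath.rect(amplitude, cmath.pi * phase_deg / 180.0)

a = phasor(1.0, 0.0)
b_in_phase = phasor(1.0, 0.0)     # same phase: constructive interference
b_antiphase = phasor(1.0, 180.0)  # half a cycle out of phase: destructive interference

print(abs(a + b_in_phase))   # about 2.0: the amplitudes add
print(abs(a + b_antiphase))  # about 0.0: the amplitudes cancel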
1. A sine wave with wavelength 0.1m travels through a given point on the surface of the sea. A phasor arrow representing the effect of this wave on this point rotates 1000°. How many wavelengths have gone past in the time taken for the phasor to rotate this much?
2. A sine wave has a maximum amplitude of 500nm. What is its amplitude when the phasor has rotated 60° from its start position?
3. Two waves have a phase difference of 45°. When the first wave is at its minimum amplitude of -0.3m, what is the total amplitude of the superposed waveforms?
When two coherent waves - waves of equal frequency and amplitude - travel in opposite directions through the same area, an interesting superposition effect occurs, as is shown in the following animation:
Some areas of the resultant waveform consistently have an amplitude of 0. These are known as nodes. At other points (half-way between the nodes), the resultant waveform varies from twice the amplitude of its constituent waveforms in both directions. These points are known as antinodes. Everywhere in between the nodes and antinodes varies to a lesser degree, depending on its position.
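A minimal numerical illustration of this (not from the original text; the amplitude, wavelength and period are invented values) adds a right-moving and a left-moving sine wave of equal amplitude and frequency, and looks at how far each point in the medium swings over one cycle; points that barely move are nodes, and points that swing to twice the single-wave amplitude are antinodes:

import math

A, wavelength, period = 1.0, 2.0, 1.0
k, omega = 2 * math.pi / wavelength, 2 * math.pi / period

def standing(x, t):
    # Sum of a right-moving and a left-moving wave of equal amplitude and frequency
    return A * math.sin(k * x - omega * t) + A * math.sin(k * x + omega * t)

for x in [0.0, 0.5, 1.0, 1.5]:   # positions in metres; the wavelength is 2 m
    swing = max(abs(standing(x, t / 100.0)) for t in range(100))
    print(x, round(swing, 2))    # about 0 at x = 0 and 1 (nodes), about 2 at x = 0.5 and 1.5 (antinodes)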
This effect only occurs if the two waveforms have the same amplitude and frequency. If the two waves have different amplitudes, the resultant waveform is similar to a standing wave, except that it has no nodes, and 'moves'.
Because of these conditions, standing waves usually only occur when a waveform is reflected back on itself. For example, in a microwave oven, the microwaves are reflected by the metal on the other side of the oven from the transmitter. This creates nodes and antinodes. Since nothing cooks at the nodes, a turntable is necessary to ensure that all of the food passes through the antinodes and gets cooked.
Consider a string, attached at either end, but allowed to move freely in between. If you pluck it, you create a wave which travels along the string in both directions, and is reflected at either end of the string. This process keeps on happening, and so a standing wave pattern is created. The string goes up, and then down, as shown in the first row of the diagram on the right. If you imagine the top arc as the first half of a waveform, you can see that when the string is vibrating at the fundamental frequency, the string is half as long as the wavelength: it is ½λ long.
If you were to pinch the string in the middle, and then pluck it on one side, a different standing wave pattern would be generated. By plucking, you have created an antinode, and by pinching, you have created a node. If you then let go of the string, the standing wave pattern spreads, and the string length now equals the wavelength. This is known as the first harmonic.
As you pinch the string at descending fractions of its length (½, ⅓, ¼, etc.), you generate successive harmonics, and the length of the string equals an additional ½λ for each successive harmonic.
Consider a pipe which is open at one end, and closed at the other. In pipes, waves are reflected at the end of the pipe, regardless of whether it is open or not. If you blow across the end of the tube, you create a longitudinal wave, with the air as the medium. This wave travels down the tube, is reflected, travels back, is reflected again, and so on, creating a standing wave pattern.
The closed end of the pipe must be a node; it is the equivalent of pinching a string. Similarly, the open end must be an antinode; blowing across it is the equivalent of plucking the string.
Harmonics can be present in pipes, as well. This is how musical instruments work: an open hole in a wind instrument creates an antinode, changing the frequency of the sound, and so the pitch.
Note that Tom Duncan states that the fundamental frequency IS the same as the first harmonic (Advanced Physics, 5th edition, page 317).
1. The air in a 3m organ pipe is resonating at the fundamental frequency. Organ pipes are effectively open at both ends. What is the wavelength of the sound?
2. A string is vibrating at the second harmonic frequency. How many wavelengths long is the standing wave created?
3. Express, in terms of λ, the length of a pipe which is closed at one end, where λ is the length of one wave at the fundamental frequency.
You should be familiar with the idea that, when light passes through a slit, it is diffracted (caused to spread out in arcs from the slit). The amount of diffraction increases the closer the slit width is to the wavelength of the light. Consider the animation on the right. Light from a light source is caused to pass through two slits. It is diffracted at both these slits, and so it spreads out in two sets of arcs.
Now, apply superposition of waves to this situation. At certain points, the peaks (or troughs) of the waves will coincide, creating constructive interference. If this occurs on a screen, then a bright 'fringe' will be visible. On the other hand, if destructive interference occurs (a peak coincides with a trough), then no light will be visible at that point on the screen.
Calculating the angles at which fringes occur
If we wish to calculate the position of a bright fringe, we know that, at this point, the waves must be in phase. Alternatively, at a dark fringe, the waves must be in antiphase. If we let the wavelength equal λ, the angle of the beams from the normal equal θ, and the distance between the slits equal d, we can form two triangles, one for bright fringes, and another for dark fringes (the crosses labelled 1 and 2 are the slits):
The length of the side labelled λ is known as the path difference. For bright fringes, from the geometry above, we know that:
λ = d sin θ
However, bright fringes do not only occur when the side labelled λ is equal to 1 wavelength: it can equal any whole number of wavelengths. Therefore:
nλ = d sin θ
where n is any integer.
Now consider the right-hand triangle, which applies to dark fringes. We know that, in this case:
½λ = d sin θ
We can generalise this, too, for any dark fringe. However, if 0.5λ is multiplied by an even integer, then we will get a whole wavelength, which would result in a bright, not a dark, fringe. So, n must be an odd integer in the following formula:
½nλ = d sin θ
Calculating the distances angles correspond to on the screen
At this point, we have to engage in some slightly dodgy maths. In the following diagram, p is path difference, L is the distance from the slits to the screen and x is the perpendicular distance from a fringe to the normal:
Here, it is necessary to approximate the distance from the slits to the fringe as the perpendicular distance from the slits to the screen. This is acceptable, provided that θ is small, which it will be, since bright fringes get dimmer as they get further away from the point on the screen opposite the slits. Hence:
sin θ ≈ tan θ = x / L
If we substitute this into the equation for the path difference p:
p = d sin θ ≈ dx / L
So, at bright fringes:
nλ = dx / L, where n is an integer.
And at dark fringes:
½nλ = dx / L, where n is an odd integer.
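The sketch below (illustrative only; the wavelength, slit separation and screen distance are invented values, not taken from the questions that follow) evaluates these small-angle formulae for the first few bright and dark fringes:

wavelength = 600e-9   # m
d = 0.5e-3            # slit separation, m
L = 2.0               # slit-to-screen distance, m

# Bright fringes: n * wavelength = d * x / L, with n = 0, 1, 2, ...
for n in range(3):
    print("bright", n, n * wavelength * L / d, "m from the centre")

# Dark fringes: (n / 2) * wavelength = d * x / L, with n odd
for n in (1, 3, 5):
    print("dark", n, n * wavelength * L / (2 * d), "m from the centre")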
A diffraction grating consists of a lot of slits with equal values of d. As with 2 slits, when nλ = d sin θ, peaks or troughs from all the slits coincide and you get a bright fringe. Things get a bit more complicated, as all the slits have different positions at which they add up, but you only need to know that diffraction gratings form light and dark fringes, and that the equations are the same as for 2 slits for these fringes.
1. A 2-slit experiment is set up in which the slits are 0.03 m apart. A bright fringe is observed at an angle 10° from the normal. What sort of electromagnetic radiation was being used?
2. Light, with a wavelength of 500 nm, is shone through 2 slits, which are 0.05 m apart. What are the angles to the normal of the first three dark fringes?
3. Some X-rays, with wavelength 1 nm, are shone through a diffraction grating in which the slits are 50 μm apart. A screen is placed 1.5m from the grating. How far are the first three light fringes from the point at which the normal intercepts the screen?
We have already seen why fringes are visible when light passes through multiple slits. However, this does not explain why, when light is only passing through 1 slit, a pattern such as the one on the right is visible on the screen.
The answer to this lies in phasors. We already know that the phasor arrows add up to give a resultant phasor. By considering the phasor arrows from many paths which light takes through a slit, we can explain why light and dark fringes occur.
At the normal line, where the brightest fringe is shown, all the phasor arrows are pointing in the same direction, and so add up to create the greatest amplitude: a bright fringe.
At other fringes, we can use the same formulæ as for diffraction gratings, as we are effectively treating the single slit as a row of beams of light, coming from a row of slits.
Now consider the central beam of light. By trigonometry:
tan θ = W / 2L
where θ = beam angle (radians), W = beam width and L = distance from slit to screen. Since θ is small, we can approximate sin θ and tan θ as θ, so:
θ ≈ W / 2L
and since λ = d sin θ ≈ dθ, where d is the slit width:
W ≈ 2λL / d
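A one-line check of the result W ≈ 2λL / d (the numbers are invented for illustration, and d here is taken to be the slit width, which is an assumption of this sketch):

wavelength, L, slit_width = 500e-9, 2.0, 0.1e-3   # m
central_fringe_width = 2 * wavelength * L / slit_width
print(central_fringe_width)   # 0.02 m, i.e. a central bright fringe about 2 cm wide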
1. What is the width of the central bright fringe on a screen placed 5m from a single slit, where the slit is 0.01m wide and the wavelength is 500nm?
And that's all there is to it ... maybe.
Finding the Distance of a Remote Object
In the final section (Section C) of the exam, you have to be able to write about how waves are used to find the distance of a remote object. I would recommend that you pick a method, and then answer the following questions about it:
1. What sort of wave does your system use? What is an approximate wavelength of this wave?
2. What sort of distance is it usually used to measure? What sort of length would you expect the distance to be?
3. Why is measuring this distance useful to society?
4. Draw a labelled diagram of your system.
5. Explain how the system works, and what data are collected.
6. Explain how the distance to the object is calculated using the data collected.
7. What limitations does your system have? (e.g. accuracy, consistency)
8. What percentage error would you expect these limitations to cause?
9. How might these problems be solved?
Some example answers to these questions are given in the following pages:
Light as a Quantum Phenomenon
We have already seen how light behaves like both a wave and a particle, yet can be proven not to be either. This idea is not limited to light, but we will start our brief look at quantum physics with light, since it is easiest to understand.
Quantum physics is the study of quanta. A quantum is, to quote Wiktionary, "The smallest possible, and therefore indivisible, unit of a given quantity or quantifiable phenomenon". The quantum of light is the photon. We are not describing it as a particle or a wave, as such, but as a lump of energy which behaves like a particle and a wave in some cases. We are saying that the photon is the smallest part of light which could be measured, given perfect equipment. A photon is, technically, an elementary particle. It is also the carrier of all electromagnetic radiation. However, its behaviour - quantum behaviour - is completely weird, so we call it a quantum.
Evidence for the Quantum Behaviour of Light
The first, and easiest to understand, piece of evidence is photographic in nature. When you take a photo with very little light, it appears 'grainy', such as the image on the right. This means that the light is arriving at the camera in lumps. If light were a wave, we would expect the photograph to appear dimmer, but uniformly so. In reality, we get clumps of light distributed randomly across the image, although the density of the random lumps is higher on the more reflective materials (the nuts). This idea of randomness, according to rules, is essential to quantum physics.
The second piece of evidence is more complex, but more useful since a rule can be derived from it. It can be shown experimentally that, when light of an adequate frequency falls on a metallic surface, then the surface absorbs the light and emits electrons. Hence, a current and voltage (between the surface and a positively charged terminal nearby) are produced, which can be measured.
The amount of current produced varies randomly around a certain point. This point changes depending on the frequency of the electromagnetic radiation. Furthermore, if the frequency of the radiation is not high enough, then there is no current at all! If light were a wave, we would expect energy to build up gradually until an electron was released, but instead, if the photons do not have enough energy, then nothing happens. This is evidence for the existence of photons.
The Relationship between Energy and Frequency
The photoelectric effect allows us to derive an equation linking the frequency of electromagnetic radiation to the energy of each quantum (in this case, photons). This can be achieved experimentally, by exposing the metallic surface to light of different colours, and hence different frequencies. We already know the frequencies of the different colours of light, and we can calculate the energy each photon carries into the surface, as this is the same as the energy required to supply enough potential difference to cause the electron to move. The equation for the energy of the electron is derived as follows:
First, equate two formulae for energy:
P = E / t and P = IV, so E / t = IV
Rearrange to get:
E = IVt
We also know that:
Q = It
So, by substituting the previous equation into the equation for energy:
E = QV = eΔV (for a single electron)
where P = power, E = energy, t = time, I = current, V = potential difference, Q = charge, e = charge of 1 electron = -1.602 x 10-19 C, ΔV = potential difference produced between anode and cathode at a given frequency of radiation. This means that, given this potential difference, we can calculate the energy released, and hence the energy of the quanta which caused this energy to be released.
Plotting frequency (on the x-axis) against energy (on the y-axis) gives us an approximate straight line, with a gradient of 6.626 x 10-34. This number is known as Planck's constant, is measured in Js, and is usually denoted h. Therefore:
E = hf
In other words, the energy carried by each quantum is proportional to the frequency of the quantum. The constant of proportionality is Planck's constant.
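A short sketch of this proportionality (the frequency used is an invented, illustrative value):

h = 6.626e-34   # Planck's constant, J s

def photon_energy(frequency_hz):
    # E = h * f
    return h * frequency_hz

print(photon_energy(5.45e14))   # green light at about 545 THz: roughly 3.6e-19 J per photon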
1. How much energy does a photon with a frequency of 50kHz carry?
2. A photon carries 10-30J of energy. What is its frequency?
3. How many photons of frequency 545 THz does a 20W bulb give out each second?
4. In one minute, a bulb gives out a million photons of frequency 600 THz. What is the power of the bulb?
5. The photons in a beam of electromagnetic radiation carry 2.5μJ of energy each. How long should the phasors representing this radiation take to rotate?
So far, we have identified the fact that light travels in quanta, called photons, and that these photons carry an amount of energy which is proportional to their frequency. We also know that photons aren't waves or particles in the traditional sense of either word. Instead, they are lumps of energy. They don't behave the way we would expect them to.
In fact, what photons do when they are travelling is to take every path possible. If a photon has to travel from point A to point B it will travel in a straight line and loop the loop and go via Alpha Centauri and take every other possible path. This is the photon's so-called 'quantum state'. It is spread out across all space.
However, just because a photon could end up anywhere in space does not mean that it has an equal probability of ending up in any given place. It is far more likely that a photon from a torch I am carrying will end up hitting the ground in front of me than it is that the same photon will hit me on the back of the head. But both are possible. Light can go round corners; just very rarely!
The probability of a photon ending up at any given point in space relative to another point in space can be calculated by considering a selection of the paths the photon takes to each point. The more paths considered, the greater the accuracy of the calculation. Use the following steps when doing this (a short computational sketch follows the list):
1. Define the light source.
2. Work out the frequency of the photon.
3. Define any objects which the light cannot pass through.
4. Define the first point you wish to consider.
5. Define a set of paths from the source to the point being considered, the more, the better.
6. Work out the time taken to traverse one of the paths.
7. Work out how many phasor rotations this corresponds to.
8. Draw an arrow representing the final phasor arrow.
9. Repeat steps 6-8 for each of the paths.
10. Add all the phasor arrows together, tip-to-tail.
11. Square the amplitude of this resultant phasor arrow to gain the intensity of the light at this point. It may help to imagine a square rotating around, instead of an arrow.
12. Repeat steps 4-11 for every point you wish to consider. The more points you consider, the more accurate your probability distribution will be.
13. Compare all the resultant intensities to gain a probability distribution which describes the probabilities of a photon arriving at one point to another. For example, if the intensity of light at one point is two times the intensity of light at another, then it is twice as likely that a photon will arrive at the first point than the second.
14. If all the points being considered were on a screen, the intensities show you the relative brightnesses of light at each of the points.
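The sketch below is a deliberately stripped-down, illustrative implementation of steps 4 to 13 for a toy two-slit geometry (all distances and the wavelength are invented, and only one path per slit is considered, so it is closer to the two-slit phasor treatment earlier in the chapter than to a full path sum). For each screen point it works out the trip length of each path, converts it to phasor rotations, adds the phasor arrows tip-to-tail and squares the resultant amplitude:

import cmath, math

wavelength = 500e-9                           # m, illustrative
slits = [(-0.25e-3, 0.0), (0.25e-3, 0.0)]     # slit positions (x, y) in metres
L = 1.0                                       # distance from the slits to the screen, m

def relative_intensity(screen_x):
    total = 0 + 0j
    for sx, sy in slits:
        path = math.hypot(screen_x - sx, L - sy)      # trip length for this path
        phase = 2 * math.pi * path / wavelength       # one full turn per wavelength of path
        total += cmath.rect(1.0, phase)               # add the phasor arrows tip-to-tail
    return abs(total) ** 2                            # square the resultant amplitude

for i in range(9):
    x = i * 0.25e-3                                   # screen positions every 0.25 mm
    print(round(x * 1e3, 2), "mm:", round(relative_intensity(x), 3))

The printed values alternate between large and near-zero, reproducing the light and dark fringes described above.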
If we now take this method and apply it to several situations, we find that, in many cases, the results are similar to those obtained when treating light as a wave, with the exception that we can now reconcile this idea with the observable 'lumpiness' of light, and can acknowledge the fact that there is a certain probability that some light will not behave according to some wave laws.
Travelling from A to B
This is the simplest example to consider. If we consider a range of paths going from point A to point B, and calculate the phasor directions at the end of the paths, we get a resultant phasor arrow which gives us some amplitude at point B. Since there are no obstructions, at any point this far away from the source, we will get the same resultant amplitude.
It is important to note that different paths contribute to the resultant amplitude by different amounts. The paths closer to the straight line between the two points are more parallel to the resultant angle, whereas the paths further away vary in direction more, and so tend to cancel each other out. The conclusion: light travelling in straight lines contributes most to the resultant amplitude.
Here, we just need to consider two paths: one through each slit. We can then calculate two phasor arrows, add them together to gain a resultant phasor arrow, and square its amplitude to gain the intensity of the light at the point the two paths went to. When calculated, these intensities give a pattern of light and dark fringes, just as predicted by the wave theory.
This situation is very similar to what happens when light travels in a 'straight line'. The only difference is that we consider the paths which involve rebounding off an obstacle. The results are more or less the same, but the paths from which they were obtained are different. This means that we can assume the same conclusions about these different paths: that most of the resultant amplitude comes from the part of the mirror where the angle of incidence equals the angle of reflection. In other words, the likelihood is that a photon will behave as if mirrors work according to wave theory.
Different paths have different lengths, and so photons take different amounts of time to traverse them (these are known as trip times). In the diagram on the right, the photons again traverse all possible paths. However, the paths with the smallest difference between trip times have phasor arrows with the smallest difference in direction, so the paths with the smallest trip times contribute most to the resultant amplitude. This shortest path is given by Snell's law. Yet again, quantum physics provides a more accurate picture of something which has already been explained to some degree.
Diffraction occurs when the photons are blocked from taking every other path. This occurs when light passes through a gap less than 1 wavelength wide. The result is that, where the amplitudes would have roughly cancelled each other out, they do not, and so light spreads out in directions it would not normally spread out in. This explains diffraction, instead of just being able to observe and calculate what happens.
Electron Behaviour as a Quantum Phenomenon
So far, we have considered how quantum physics applies to photons, the quanta of light. In reality, every other particle is also a quantum, but you only need to know about photons and electrons.
The image on the right shows what happens when you fire electrons through a pair of slits: it arrives in lumps, but you get fringes due to superposition as well. The electrons are behaving as both waves and particles. Actually, they are behaving as quanta. The equations describing quantum behaviour in electrons are similar to those describing it in photons.
Frequency and Kinetic Energy
We know that, for photons:
E = hf
In suggesting that electrons are quanta, we assume that they must have a frequency at which the phasors representing them rotate. We also know that h is a constant; it does not change. So, when adapting the above equation to apply to electrons, all we need to adapt is E. In electrons, this energy is their kinetic energy. If the electron has some form of potential energy, this must first be subtracted from the kinetic energy, as this portion of the energy does not affect frequency. So:
f = (Ek − Ep) / h, where Ek is the kinetic energy and Ep any potential energy.
De Broglie Wavelength
If electrons exhibit some wavelike properties, they must also have a 'wavelength', known as the de Broglie wavelength, after its discoverer. This is necessary in order to work out a probability distribution for the position of an electron, as this is the distance the electron travels for each phasor arrow rotation. The de Broglie wavelength λ is given by the equation:
λ = h / p = h / mv
where h = Planck's constant, p = momentum, m = mass of electron = 9.1 x 10-31kg and v = velocity of electron.
Potential Difference and Kinetic Energy
Potential difference is what causes electrons to move. You already know how power is related to charge, voltage and time:
P = QV / t
Since power is the rate at which work is done:
P = W / t, so W = QV
We know that the charge on an electron equals -1.6 x 10-19 C, and that work done is energy, so the kinetic energy gained by an electron accelerated through a potential difference V is:
E = eV
Energy, in the SI system of units, is measured in Joules, but, sometimes it is measured in electronvolts, or eV. 1 eV is the kinetic energy of 1 electron accelerated by 1V of potential difference. So, 1 eV = 1.6 x 10-19 J.
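Tying the last few relationships together, here is a small illustrative sketch (the electron speed is an invented value) that computes the de Broglie wavelength, the frequency from f = E / h, and the kinetic energy in electronvolts:

h = 6.626e-34    # Planck's constant, J s
m_e = 9.1e-31    # electron mass, kg
e = 1.6e-19      # magnitude of the electron charge, C

v = 1.0e6                          # electron speed, m/s (illustrative)
momentum = m_e * v
de_broglie = h / momentum          # lambda = h / (m v)
kinetic_j = 0.5 * m_e * v ** 2     # kinetic energy in joules
frequency = kinetic_j / h          # f = E / h, ignoring any potential energy
kinetic_ev = kinetic_j / e         # convert joules to electronvolts

print(de_broglie, frequency, kinetic_ev)   # about 7.3e-10 m, 6.9e14 Hz and 2.8 eV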
1. An electron moves at 30,000 ms-1. What is its de Broglie wavelength?
2. What is its frequency?
3. What is its kinetic energy, in eV?
4. Given that it is travelling out of an electron gun, what was the potential difference between the anode and the cathode?
5. An electron is accelerated by a potential difference of 150V. What is its frequency?
What is a vector?
Two types of physical quantity are scalars and vectors. Scalar quantities are simple: they are things like speed, distance, or time. They have a magnitude, but no direction. A vector quantity consists of two parts: both a scalar and a direction. For example, the velocity of an object is made up of both the speed of an object and the direction in which it is moving. Speed is a scalar; add a direction and it becomes velocity, a vector. Similarly, take a distance and give it a direction and it becomes a displacement, such as '2 miles south-east'. Distance is a scalar, whereas displacement is a vector.
Vectors and scalars are both useful. For example, if I run around the room several times and end up back where I started, I may have covered a distance of 50m. My displacement is 0 - the null vector. The null vector is the only vector which has no direction. If I want to calculate how much work I have done, I should use the distance. If I want to know where I am, I should use the displacement.
As we shall see, the directional component of a vector can be expressed in several different ways. '2 miles south-east' is the same as saying '2 miles on a bearing of 135°', or '1.4 miles east, -1.4 miles north'. The scalar component of a vector is known as the modulus of a vector.
You need to be able to understand the following algebraic representations of vectors:
- ab written with an arrow above it: a vector from point a to point b.
- a (bold): a vector named 'a'. This is used in typed algebra.
- a (underlined): a vector named 'a'. This is used in handwritten algebra.
- |a| (using any of the above notations): the modulus of a vector.
Sometimes, it is useful to express a vector in terms of two other vectors. These two vectors are usually pointing up and right, and work similarly to the Cartesian co-ordinate system. So, for example, 'an acceleration of 3.4 ms-2 west' becomes 'a vertical acceleration of 0 ms-2 and a horizontal acceleration of -3.4 ms-2 east'. However, this is a very simple example.
Consider the diagram on the right. The vector a consists of a vertical component j and a horizontal component i. a has a modulus |a|. |i| and |j| can be calculated using |a|, the angle θ between i and a, and some basic trigonometry. We know that:
|i| = |a| cos θ and |j| = |a| sin θ.
This will be given to you in the formula booklet in the exam.
You also need to know how to add vectors together. This enables us to answer questions such as, "If I travel 5 miles north-west and then 6 miles east, where am I?", or "If I accelerate at 3 ms-2 in a northerly direction, and accelerate south-east at 1 ms-2, what is my total acceleration?". Vectors can be added 'tip-to-tail'; that is to say, the resultant vector is equal to 'travelling' the first vector, and then travelling the second vector.
This is shown in the diagram on the left. When vectors a and b are added together, the resultant vector a + b is produced, joining the tail of the first vector to the tip of the last, with the vectors joined together. In practice, the easiest way to add two vectors together is to calculate (if you do not already know them) their vertical and horizontal components, and add these together to get the total vertical and horizontal components. You can then use Pythagoras' theorem to calculate the modulus of the resultant vector, and some more basic trigonometry to calculate its direction:
|Σan| = √((Σin)² + (Σjn)²) and θ = tan⁻¹(Σjn / Σin)
where a1 ... an are the vectors to be added together, i1 ... in are their horizontal components, j1 ... jn are their vertical components, and θ is the angle between the θ=0 line and the resultant vector Σan, as shown in the diagram on the right.
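As a numerical illustration of resolving and adding vectors (all values are invented), the sketch below resolves two vectors into components, sums the components, and recovers the modulus and direction of the resultant with Pythagoras' theorem and trigonometry:

import math

def components(modulus, angle_deg):
    # Resolve a vector into (horizontal, vertical) components
    angle = math.radians(angle_deg)
    return modulus * math.cos(angle), modulus * math.sin(angle)

vectors = [(5.0, 30.0), (3.0, 120.0)]   # (modulus, angle from the horizontal in degrees)

total_i = sum(components(m, a)[0] for m, a in vectors)
total_j = sum(components(m, a)[1] for m, a in vectors)

resultant_modulus = math.hypot(total_i, total_j)             # Pythagoras
resultant_angle = math.degrees(math.atan2(total_j, total_i))
print(resultant_modulus, resultant_angle)                    # about 5.8 at about 61 degrees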
If you use a diagram to represent vectors (making the lengths of the arrows proportional to the moduli of the vectors they represent, and the directions equal), you can predict graphically the trajectory an object (such as a ball) will take. Use the following steps (a short numerical sketch follows the list):
- Draw a vector to represent the velocity of the ball (in ms-1). Since this is the number of metres travelled in 1 second, and each step of the process is 1 second long, this vector represents both the velocity and the displacement of the ball over that second, i.e. the displacement in metres is numerically equal to the velocity in ms-1.
- Copy this vector, and connect its tail to the tip of the first vector. This new vector represents the velocity and displacement that the ball would have had over the next second, if gravity did not exist.
- Draw another vector to represent the change in velocity due to gravity (9.81 ms-2) on Earth. This should be pointing downwards, and be to the same scale as the other vectors. This represents the fact that the velocity of the ball changes due to gravity (velocity is a vector, so both the speed and angle of the ball's travel change).
- Add these vectors together, as shown above, to give a new vector. This new vector represents the velocity and displacement of the ball over the second second.
- Repeat this process until the ball hits something (draw the ground, if in doubt).
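The graphical procedure above can also be followed numerically. The sketch below (the starting velocity is an invented value) steps a ball forward in one-second intervals, adding the change in velocity due to gravity each second, exactly as in the drawing method:

g = 9.81   # change in the vertical component of velocity each second, in m/s

# Starting velocity components (illustrative): 20 m/s horizontally, 30 m/s upwards
vx, vy = 20.0, 30.0
x, y = 0.0, 0.0

while y >= 0.0:
    print(round(x, 1), round(y, 1))   # position at the start of each second
    x += vx                           # horizontal displacement over the next second
    y += vy                           # vertical displacement over the next second
    vy -= g                           # gravity reduces the vertical component of velocity

The printed points trace out the same parabola-like trajectory that the vector diagram gives.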
1. Which of the following are vectors?
- 20 cm
- 9.81 ms-2 towards the centre of the earth
- 5 km south-east
- 500 ms-1 on a bearing of 285.3°
2. A displacement vector a is the resultant vector of two other vectors, 5 m north and 10 m south-east. What does a equal, as a displacement and a bearing?
3. If I travel at a velocity of 10 ms-1 on a bearing of 030°, at what velocity am I travelling north and east?
4. An alternative method of writing vectors is in a column, as follows:
where x and y are the vertical and horizontal components of the vector respectively. Express |a| and the angle between a and in terms of x and y.
5. A more accurate method of modelling the trajectory of a ball is to include air resistance as a constant force F. How would this be achieved?
There are several types of graphs of motion you need to be able to use and understand: distance-time graphs, position-time (displacement-time) graphs, and velocity-time graphs.
A distance-time graph plots the distance of an object away from a certain point, with time on the x-axis and distance on the y-axis.
Position-time Graphs or Displacement - Time Graphs
Distance-time graphs give you speed, but speed is never negative, so a distance-time graph can never have a negative slope. Position-time graphs show displacement, which has a direction, and from them you can calculate velocity. If we were to imagine the line on the position-time graph to the right as a function f(t), giving an equation for s = f(t), we could differentiate this to gain:
v = ds/dt = f'(t)
where s is displacement, and t is time. By finding f'(t) at any given time t, we can find the rate at which distance is increasing (or decreasing) with respect to t. This is the gradient of the line. A positive gradient means that distance is increasing, and a negative gradient means that distance is decreasing. A gradient of 0 means that the object is stationary. The velocity of the object is the rate of change of its displacement, which is the same as the gradient of the line on a position-time graph. This is not necessarily the same as the average velocity of the object v:
v = s / t
Here, s and t are the total changes in displacement and time over a certain period - they do not tell you exactly what was happening at any given point in time.
A velocity-time graph plots the velocity of an object, relative to a certain point, with time on the x-axis and velocity on the y-axis. We already know that velocity is the gradient (derivative) of the distance function. Since integration is the inverse process to differentiation, if we have a velocity-time graph and wish to know the distance travelled between two points in time, we can find the area under the graph between those two points in time. In general:
s = ∫ v dt, evaluated between t1 and t2
where v is velocity (in ms-1), t is time (in s), and s is the distance travelled (in m) between two points in time t1 and t2.
Also, by differentiation, we know that the gradient (or derivative of v = f(t)) is equal to the acceleration of the object at any given point in time (in ms-2) since:
a = dv/dt
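A small sketch of these two ideas (the velocity function is invented for illustration): the distance travelled is approximated by summing thin strips of area under the velocity-time graph, and the acceleration is the gradient between neighbouring points:

def velocity(t):
    # An invented velocity-time function, in m/s
    return 2.0 * t + 1.0

dt = 0.001
t_values = [i * dt for i in range(0, 2001)]   # from t = 0 s to t = 2 s

# Distance = area under the velocity-time graph (a simple strip sum)
distance = sum(velocity(t) * dt for t in t_values[:-1])

# Acceleration = gradient of the velocity-time graph at t = 1 s
a = (velocity(1.0 + dt) - velocity(1.0 - dt)) / (2 * dt)

print(round(distance, 2), round(a, 2))   # roughly 6.0 m and 2.0 ms-2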
2. What is the velocity at 12 seconds?
4. What is the object's acceleration at 8 seconds?
5. A car travels at 10ms-1 for 5 minutes in a straight line, and then returns to its original location over the next 4 minutes, travelling at a constant velocity. Draw a distance-time graph showing the distance the car has travelled from its original location.
6. Draw the velocity-time graph for the above situation.
The following question is more difficult than anything you will be given, but have a go anyway:
7. The velocity of a ball is related to the time since it was thrown by the equation . How far has the ball travelled after 2 seconds?
Kinematics is the study of how objects move. You need to be able to understand situations in which an object changes speed, accelerating or decelerating, and travels a certain distance. There are four equations you need to be able to use which relate these quantities.
Before we can understand the kinematic equations, we need to understand the variables involved. They are as follows:
- t is the length of the interval of time being considered, in seconds.
- v is the speed of the object at the end of the time interval, in ms-1.
- u is the speed of the object at the beginning of the time interval, in ms-1.
- a is the acceleration of the object during the time interval, in ms-2. It must be constant.
- s is the displacement (distance travelled) of the object during the time interval, in metres.
The four equations are as follows:
1. v = u + at
2. s = (u + v)t / 2
3. s = ut + at²/2
4. v² = u² + 2as
It is also useful to know where the above equations come from. We know that acceleration is equal to change in speed per unit time, so:
a = (v - u) / t (*)
which rearranges to give equation 1. We also know that the average speed over the time interval is equal to displacement per unit time, so:
(u + v) / 2 = s / t
which rearranges to give equation 2. If we substitute the value of v from equation 1 into equation 2, we get:
s = ut + at²/2 (equation 3)
If we take the equation for acceleration (*), we can rearrange it to get:
t = (v - u) / a
If we substitute this equation for t into equation 2, we obtain:
v² = u² + 2as (equation 4)
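As a quick check that the four equations agree with one another (the numbers are invented), the sketch below takes u, a and t, works out v and s with equations 1 and 3, and confirms that equation 4 then holds:

u, a, t = 3.0, 2.0, 4.0        # initial speed (ms-1), acceleration (ms-2), time (s)

v = u + a * t                  # equation 1
s = u * t + 0.5 * a * t ** 2   # equation 3

print(v, s)                                         # 11.0 ms-1 and 28.0 m
print(abs(v ** 2 - (u ** 2 + 2 * a * s)) < 1e-9)    # equation 4 is consistent: True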
1. A person accelerates from a speed of 1 ms-1 to 1.7 ms-1 in 25 seconds. How far has he travelled in this time?
2. A car accelerates at a rate of 18 kmh-2 to a speed of 60 kmh-1, travelling 1 km in the process. How fast was the car travelling before it travelled this distance?
3. A goose in flight is travelling at 4 ms-1. It accelerates at a rate of 1.5 ms-2 for 7 seconds. What is its new speed?
4. How far does an aeroplane travel if it accelerates from 400 kmh-1 at a rate of 40 kmh-2 for 1 hour?
Forces and Power
Forces are vectors. When solving problems involving forces, it is best to think of them as lots of people pulling ropes attached to an object. The forces are all pulling in different directions, with different magnitudes, but the effect is in only one direction, with only one magnitude. So, you have to add the forces up as vectors.
Forces cause things to happen. They cause an object to accelerate in the same direction as the force. In other words, forces cause objects to move in a direction closer to the direction they are pulling in. If the object is already moving, then they will not cause it to move in the direction of the force, as forces do not create velocities: they create accelerations.
If a force is acting on an object, it seems logical that the object will not accelerate as much as a result of the force if it has a greater mass. This gives rise to the equation:
F = ma
where F = force applied (in Newtons, denoted N), m = mass (in kg) and a = acceleration (in ms-2). If we rearrange the equation, it makes more sense:
a = F / m
In other words, the acceleration in a given direction as the result of a force is equal to the force applied per unit mass in that direction.
You should already know how to calculate some types of energy, for example kinetic energy (mv²/2) and gravitational potential energy (mgh).
The amount of energy converted by a force is equal to the work done, which is equal (as you already know) to the force multiplied by the distance the object it is acting on moves:
ΔE = work done = F × d
When answering questions about work done, you may be given a force acting in a direction other than that of the displacement. In this case, you will have to find the displacement in the direction of the force, as shown in the section on Vectors.
Power is the rate of change of energy. It is the amount of energy converted per unit time, and is measured in Js-1:
P = ΔE / Δt
where E = energy (in J) and t = time (in s). Since ΔE = work done, power is the rate at which work is done. Since work done = F × d and d / t = v, it follows that:
P = Fv
where P = power (in Watts, denoted W), F = force and v = velocity.
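The sketch below strings these relationships together for an invented example: a constant force on a mass gives an acceleration, the work done over a distance gives the energy transferred, and force times velocity gives the power at a particular instant:

force = 50.0      # N
mass = 10.0       # kg
distance = 8.0    # m
velocity = 6.0    # ms-1 at the instant considered

acceleration = force / mass        # a = F / m
work_done = force * distance       # work done = F x d
power = force * velocity           # P = F x v

print(acceleration, work_done, power)   # 5.0 ms-2, 400.0 J, 300.0 W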
Gravity is something of a special case. The acceleration due to gravity is denoted g, and is approximately equal to 9.81 ms-2. It is uniform over small distances from the Earth. The force due to gravity is equal to mg, since F = ma. Therefore:
a = F / m = mg / m = g
Therefore, when things are dropped, they all fall at the same acceleration, regardless of mass. Also, the acceleration due to gravity (in ms-2) is equal to the gravitational field strength (in Nkg-1).
1. I hit a ball of mass 5g with a cue on a billiards table with a force of 20N. If friction opposes me with a force of 14.2N, what is the resultant acceleration of the ball away from the cue?
2. A 10g ball rolls down a 1.2m high slope, and leaves it with a velocity of 4ms-1. How much work is done by friction?
3. An electric train is powered on a 30kV power supply, where the current is 100A. The train is travelling at 90 kmh-1. What is the net force exerted on it in a forwards direction?
Advances in Physics | http://en.wikibooks.org/wiki/A-level_Physics_(Advancing_Physics)/Print_Version | 13 |
75 | TenMarks teaches you how to use scale factor concepts to solve real life problems.
Learn Applications of Scale Factor In this video lesson, we’ll learn about scale factors. We’re given two problems. Let’s do them one by one. The first problem says rectangle A which is given below is similar to a rectangle B that we have to draw and the scale factor from rectangle A to B is 3. What are the dimensions? That’s what we need to find, the dimensions of rectangle B. Let’s do this. I’m going to take a little bit of space and try Problem 1. We are given rectangle A which is given above. I’m going to redraw it and it has one dimension as 7 centimeters which is the length and the width is 5 centimeters. That’s what’s were given. We are also given that there’s a rectangle B, we need to find the dimensions of this but the scale factor given to us is 3. So, here is what we know. What we know is the length of A = 7 centimeters. We know the width of rectangle A= 5 centimeters. That’s were given. We don’t know the length of B or the width of B but what we do know is the scale factor. What is the scale factor B? Scale factor is the length of A over the length of B which is the ratio of their corresponding sides. Corresponding side is the length or the width of A over the width of B equals the scale factor which in this case is 3. What are we given? Let’s substitute the values. We know the length of A is 7 centimeters, length of B, let’s call it L of B = width of A which is 5 centimeters over width of B equals scale factor which is 3. So now that we know this, we can take these two first and solve for length of B. So, 7 centimeters divided by the length of B equals 3. That’s what we’re given. By cross multiplying, this gives us length of B×3 which is one multiplication equals 7 centimeters. I’m going to divide both sides by 3 which gives me length of B = 7/3 centimeters. Let’s look at the second one which is the width of B. Width of B, I can determine by using this against 3. So, 5 centimeters divided by width of B equals 3 which means by cross multiplying, I get 5 centimeters = 3 × width of B. We can divide both sides by 3 again and we get 5/3 centimeters = width of B. So, what do we ultimately find out? The rectangle B has dimensions, length is 7/3 centimeters and the width is 5/3 centimeters. I can write this in decimals as well if you want. This will be slightly greater than 2, so this will be equals 2. 3×2 is 6, 2.33 centimeters and this will be 1.66 centimeters. Let’s look at the second problem which says two rectangles A and B are similar. We’re given two rectangles that are similar. Dimensions of rectangle A are 5 centimeters by 4 centimeters. Find the scale factor, that’s what we need to find from rectangle A to rectangle B, if the area of the rectangle B is 80 cm2. Let’s write down what we know. In the second problem we are given that there’s a rectangle A which measures 5 centimeters by 4 centimeters. This is rectangle A. We are also given a rectangle B where the area of rectangle B is 80 cm2. We need to find the scale factor. That’s what we need to determine. What is the area of rectangle A? Area of rectangle A is length × width which is 5 × 4, 5 centimeters × 4 centimeters = 20 cm2. So now, what do we know? From this, what we know is area of rectangle A = 20 cm2 and we know the area of rectangle B is given to us as 80 cm2. If area of rectangle A is 20 and the area of rectangle B is 80 then the (scale factor)2 = area B/area A because this is in cm2 and this is in cm2. What are we given? Well, area B is 80, area A is 20, centimeter on both sides. 80/20 = 4. 
So, square of the scale factor is 4. (Scale factor)2 = 4 or a scale factor = under root of 4 which is 2. So, the ultimate answer we were looking for, the scale factor is 2 and where we got that is we looked at the area of rectangle A, we looked at the area of rectangle B. Area of rectangle A, we computed, area of rectangle B was given to us. If we have the areas of two rectangles, the scale factor times itself or (scale factor)2 = ratio of the two. Ratio of the t | http://www.healthline.com/hlvideo-5min/learn-applications-of-scale-factor-285026226 | 13 |
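To summarise the arithmetic used in the two worked examples, here is a brief illustrative sketch (the function names are invented; it simply applies the ratios used in the transcript):

def scaled_dimensions(length, width, scale_factor):
    # The transcript defines the scale factor as (dimension of A) / (dimension of B)
    return length / scale_factor, width / scale_factor

def scale_factor_from_areas(area_b, area_a):
    # (scale factor) squared = area of B / area of A
    return (area_b / area_a) ** 0.5

print(scaled_dimensions(7.0, 5.0, 3.0))      # about (2.33, 1.67) cm, as in problem 1
print(scale_factor_from_areas(80.0, 20.0))   # 2.0, as in problem 2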
65 | 2.6.1. What are they?
Starbursts are galaxies (sometimes, the term also refers only to particular regions of galaxies) undergoing a large-scale star formation episode. They feature strong infrared emission originating in the high levels of interstellar extinction, strong HII-region-type emission-line spectrum (due to a large number of O and B-type stars), and considerable radio emission produced by recent SNRs. Typically, starburst regions are located close to the galactic center, in the central kiloparsec. This region alone can be orders of magnitude brighter than the center of normal spiral galaxies. From such an active region, a galactic-scale superwind is driven by the collective effect of supernovae and particular massive star winds. The enhanced supernova explosion rate creates a cavity of hot gas (~ 108 K) whose cooling time is much greater than the expansion time scale. Since the wind is sufficiently powerful, it can blow out the interstellar medium of the galaxy preventing it from remaining trapped as a hot bubble. As the cavity expands, a strong shock front is formed on the contact surface with the cool interstellar medium. The shock velocity can reach several thousands of kilometers per second and ions like iron nuclei can be efficiently accelerated in this scenario, up to ultrahigh energies, by Fermi's mechanism . If the super-GZK particles are heavy nuclei from outside our Galaxy, then the nearby (~ 3 Mpc ) starburst galaxies M82 (l = 141°, b = 41°) and NGC 253 (l = 89°, b = -88°) are prime candidates for their origin.
2.6.2. M82 and NGC253
M82 is probably the best studied starburst galaxy, located at only 3.2 Mpc. The total star formation rate in the central parts is at least ~ 10 M yr-1 . The far infrared luminosity of the inner region within 300 pc of the nucleus is ~ 4 × 1010 L . There are ~ 1 × 107 M of ionized gas and ~ 2 × 108 M of neutral gas in the IR source [304, 305]. The total dynamical mass in this region is ~ (1 - 2) × 109 M . The main observational features of the starburst can be modelled with a Salpeter IMF extending from 0.1 to 100 M. The age of the starburst is estimated in ~ (1 - 3) × 107 yr . Around ~ 2.5 × 108 M (i.e. ~ 36 % of the dynamical mass) is in the form of new stars in the burst . The central region, then, can be packed with large numbers of early-type stars.
NGC 253 has been extensively studied from radio to -rays (e.g. [306, 307, 308]). A TeV detection was reported by CANGAROO , but has been yet unconfirmed by other experiments. More than 60 individual compact radio sources have been detected within the central 200 pc , most of which are supernova remnants (SNRs) of only a few hundred years old. The supernova rate is estimated to be as high as 0.2 - 0.3 yr-1, comparable to the massive star formation rate, ~ 0.1 M yr-1 [310, 311]. The central region of this starburst is packed with massive stars. Four young globular clusters near the center of NGC 253 can account for a mass well in excess of 1.5 × 106 M [312, 313]. Assuming that the star formation rate has been continuous in the central region for the last 109 yrs, and a Salpeter IMF for 0.08-100 M, the bolometric luminosity of NGC 253 is consistent with 1.5 × 108 M of young stars . Based on this evidence, it appears likely that there are at least tens of millions of young stars in the central region of the starburst. These stars can also contribute to the -ray luminosity at high energies [314, 138]. Physical, morphological, and kinematic evidence for the existence of a galactic superwind has been found for NGC 253 . Shock interactions with low and high density clouds can produce X-ray continuum and optical line emission, respectively, both of which have been directly observed.
A region about 1 kpc of the M82 galactic center appears to be a fossil starburst, presenting a main sequence stellar cutoff corresponding to an age of 100-200 Myr and a current average extinction of 0.6 mag (compare with the extinction of the central and current starburst region, 2.2 mag) whereas, nearby globular glusters age estimations are between 2 × 108 and 109 yr . It appears possible for this galaxy, then, that a starburst (known as M82 "B") of similar amplitude than the current one was active in the past.
2.6.3. Two-step acceleration-process in starbursts
The acceleration of particles in starburst galaxies is thought to be a two-stage process . First, ions are thought to be diffusively accelerated at single SNRs within the nuclear region of the galaxy. Energies up to ~ 1014-15 eV can be achieved in this step (see, e.g. ). Due to the nature of the central region, and the presence of the superwind, the escape of the iron nuclei from the central region of the galaxy is expected to be dominated by convection. (24) Collective plasma motions of several thousands of km per second and the coupling of the magnetic field to the hot plasma forces the CR gas to stream along from the starburst region. Most of the nuclei then escape through the disk in opposite directions along the symmetry axis of the system, being the total path travelled substantially shorter than the mean free path.
Once the nuclei escape from the central region of the galaxy they are injected into the galactic-scale wind and experience further acceleration at its terminal shock. CR acceleration at superwind shocks was first proposed in Ref. in the context of our own Galaxy. The scale length of this second shock is of the order of several tens of kpc (see Ref. ), so it can be considered as locally plane for calculations. The shock velocity vsh can be estimated from the empirically determined superwind kinetic energy flux dE_sw/dt and the mass flux dM/dt generated by the starburst through dE_sw/dt = (1/2)(dM/dt) vsh². The shock radius can be approximated by r ≈ vsh τ, where τ is the starburst age. Since the age is about a few tens of million years, the maximum energy attainable in this configuration is constrained by the limited acceleration time arising from the finite lifetime of the shock. For this second step in the acceleration process, the photon field energy density drops to values of the order of the cosmic background radiation (we are now far from the starburst region), and consequently, iron nuclei are safe from photodissociation while the energy increases to ~ 1020 eV.
To estimate the maximum energy that can be reached by the nuclei, consider the superwind terminal shock propagating in a homogeneous medium with an average magnetic field B. If we work in the frame where the shock is at rest, the upstream flow velocity will be v1 (|v1| = vsh) and the downstream velocity, v2. The magnetic field turbulence is assumed to lead to isotropization and consequent diffusion of energetic particles, which then propagate according to the standard transport theory. The acceleration time scale is then t_acc = 4κ / v1², where κ is the upstream diffusion coefficient, which can be written in terms of its components parallel and perpendicular to the magnetic field and the angle θ between the (upstream) magnetic field and the direction of the shock propagation: κ = κ_∥ cos²θ + κ_⊥ sin²θ. Since strong turbulence is expected from the shock, we can take the Bohm limit for the upstream diffusion coefficient parallel to the field, i.e. κ_∥ = (1/3) E / (Z e B1), where B1 is the strength of the pre-shock magnetic field and E is the energy of the Z-ion. For the perpendicular component we shall assume, following Biermann, that the mean free path perpendicular to the magnetic field is independent of the energy and has the scale of the thickness of the shocked layer (r / 3). Then, κ_⊥ = (1/3)(r / 3)(v1 - v2) or, in the strong shock limit, κ_⊥ = r v1 / 12. The upstream time scale is t_acc ~ r / (3 v1), so that r / (3 v1) = (4 / v1²) [(E / (3 Z e B1)) cos²θ + (r v1 / 12) sin²θ]. Thus, using r = v1 τ and transforming to the observer's frame, one obtains
The predicted kinetic energy and mass fluxes of the starburst of NGC 253 derived from the measured IR luminosity are 2 × 1042 erg s-1 and 1.2 M_sun yr-1, respectively . The starburst age is estimated from numerical models that use theoretical evolutionary tracks for individual stars and make sums over the entire stellar population at each time in order to produce the galaxy luminosity as a function of time . Fitting the observational data these models provide a range of suitable ages for the starburst phase that, in the case of NGC 253, goes from 5 × 107 to 1.6 × 108 yr (also valid for M82) . These models must assume a given initial mass function (IMF), which usually is taken to be a power-law with a variety of slopes. Recent studies have shown that the same IMF can account for the properties of both NGC 253 and M82 . Finally, the radio and γ-ray emission from NGC 253 are well matched by models with B ~ 50µG . With these figures, already assuming a conservative age τ = 50 Myr, one obtains a maximum energy for iron nuclei of EmaxFe > 3.4 × 1020 eV.
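As a rough numerical illustration of the estimates quoted above (this sketch is not part of the original text; it simply inserts the NGC 253 figures into dE_sw/dt = (1/2)(dM/dt) vsh² and r ≈ vsh τ, and the conversion constants are approximate):

M_SUN_G = 1.989e33               # grams per solar mass
YEAR_S = 3.156e7                 # seconds per year
KPC_CM = 3.086e21                # centimetres per kiloparsec

E_dot = 2e42                     # superwind kinetic energy flux, erg/s
M_dot = 1.2 * M_SUN_G / YEAR_S   # mass flux, g/s
tau = 50e6 * YEAR_S              # assumed starburst age of 50 Myr, in s

v_sh = (2 * E_dot / M_dot) ** 0.5    # shock velocity in cm/s
r = v_sh * tau                       # rough terminal-shock radius in cm

print(v_sh / 1e5, "km/s")   # a few thousand km/s, as stated in the text
print(r / KPC_CM, "kpc")    # of order a hundred kpc with these particular assumptions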
2.6.4. The starburst hypothesis: UHECR-luminosity and correlations
For an extragalactic, smooth, magnetic field of 15 - 20 nG, diffusive propagation of nuclei below 1020 eV evolves to nearly complete isotropy in the CR arrival directions [324, 325]. Thus, we could use the rates at which starbursts inject mass, metals and energy into superwinds to get an estimate on the CR-injection spectra. Generalizing the procedure discussed in Sec. 2.4.3 - using equal power per decade over the interval 1018.5 eV < E < 1020.6 eV - we obtain a source CR-luminosity
where ε is the efficiency of ultra high energy CR production by the superwind kinetic energy flux. With this in mind, the energy-weighted approximately isotropic nucleus flux at 1019 eV is given by
where I* = IM82 + INGC 253. To estimate the diffusion coefficient we used BnG = 15, Mpc = 0.5, and an average Z = 20. We fix
after comparing Eq. (54) to the observed CR-flux. Note that the contribution of IM82 and INGC 253 to I* critically depends on the age of the starburst. The relation "starburst-age/superwind-efficiency" derived from Eq. (55) leads to ε ≈ 10%, if both M82 and NGC 253 were active for 115 Myr. The power requirements may be reduced assuming contributions from M82 "B" .
Above > 1020.2 eV iron nuclei do not propagate diffusively. Moreover, the CR-energies get attenuated by photodisintegration on the CMB and the intergalactic infrared background photons. However, the energy-weighted flux beyond the GZK-energy due to a single M82 flare
is easily consistent with observation. Here, R is the effective nucleon loss rate of the nucleus on the CMB.
In the non-diffusive regime (i.e., 1020.3 eV ≤ E ≤ 1020.5 eV), the accumulated deflection angle from the direction of the source in the extragalactic B-field is roughly 10° to 20°. The nuclei suffer additional deflection in the Galactic magnetic field. In particular, if the Galactic field is of the ASS type, the arrival direction of the 4 highest energy CRs can be traced backwards to one of the starbursts. Figure 8 shows the extent to which the observed arrival directions of the highest energy CRs deviate from their incoming directions at the Galactic halo because of bending in the magnetic field given in Eq. (13). The incoming CR trajectories are traced backwards up to distances of 20 kpc away from the Galactic center, where the effects of the magnetic field are negligible. The diamond at the head of each solid line denotes the observed arrival direction, and the points along these lines indicate the direction from which different nuclear species (with increasing mass) entered the Galactic halo. In particular, the tips of the arrows correspond to incoming directions at the halo for iron nuclei, whereas the circles correspond to nuclei of neon. Regions within the dashed lines comprise directions lying within 20° and 30° of the starbursts. It is seen that trajectories for CR nuclei with Z ≥ 10 can be further traced back to one of the starbursts, within the uncertainty of the extragalactic deviation.
Figure 8. Left: Directions in Galactic coordinates of the four highest energy CRs at the boundary of the Galactic halo. The diamonds represent the observed incoming directions. The circles and arrows show the directions of neon and iron nuclei, respectively, before deflection by the Galactic magnetic field. The solid line is the locus of incoming directions at the halo for other species with intermediate atomic number. The stars denote the positions of M82 and NGC 253. The dashed lines are projections in the (l, b) coordinates of angular directions within 20° and 30° of the starbursts. Right: Curves of constant probabilities in the two-dimensional parameter space defined by the size of the cone and the minimum number of events originating within the resulting effective solid angle.
The effects of the BSS configuration are completely different. Because of the averaging over the frequent field reversals, the resulting deviations of the CR trajectories are markedly smaller, and in the wrong direction for correlation of current data with the starburst sources. We note that the energy-ordered 2D correlation distribution of the AGASA data is in disagreement with expectations for positively charged particles and the BSS configuration .
We now attempt to assess to what extent these correlations are consistent with chance coincidence. We arrive at the effective angular size of the source in a two-step process. Before correcting for bias due to the coherent structure of the Galactic magnetic field, the deflections in the extragalactic and Galactic fields (regular and random components) may be assumed to add in quadrature, so that the angular sizes of the two sources are initially taken as cones with opening half-angles between 40° and 60°, which for the purpose of our numerical estimate we approximate to 50°. However, the global structure of the field will introduce a strong bias in the CR trajectories, substantially diminishing the effective solid angle. The combined deflections in the l and b coordinates mentioned above concentrate the effective angular size of the source to a considerably smaller solid angle. As a conservative estimate, we retain 25% of this cone as the effective source size. A clear prediction of this consideration is then that the incoming flux shows a strong dipole anisotropy in the harmonic decomposition.
Now, by randomly generating four CR positions in the portion of the sky accessible to the existing experiments (declination range δ > -10°), an expected number of random coincidences can be obtained. The term "coincidence" is herein used to label a synthetic CR whose position in the sky lies within an effective solid angle Ω_eff of either starburst. Ω_eff is characterized by a cone with opening half-angle reduced from 50° to 24° to account for the 75% reduction in effective source size due to the magnetic biasing discussed above. Cosmic ray positional errors were considered as circles of 1.6° radius for AGASA. For the other experiments the asymmetric directional uncertainty was represented by a circle with radius equal to the average experimental error. The random prediction for the mean number of coincidences is 0.81 ± 0.01. The Poisson probability (25) for the real result to be no more than the tail of the random distribution is 1%. Alternatively, we may analyze this in terms of confidence intervals. For the 4 observed events, with zero background, the Poisson signal mean 99% confidence interval is 0.82 - 12.23. Thus our observed mean for random events, 0.81 ± 0.01, falls at the lower edge of this interval, yielding a 1% probability for a chance occurrence. Of course, this is not compelling enough to definitively rule out chance probability as generating the correlation of the observed events with the candidate sources, but it is suggestive enough to deserve serious attention in analyses of future data.
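The 1% chance probability quoted above can be reproduced with a short numerical check. The sketch below is not part of the original analysis; it simply evaluates the Poisson tail probability of finding all 4 events within the effective cones by chance, assuming the quoted random-coincidence mean of 0.81.

    from math import exp, factorial

    def poisson_tail(mean, k_min):
        """Probability of observing k_min or more events for a given Poisson mean."""
        return 1.0 - sum(exp(-mean) * mean**k / factorial(k) for k in range(k_min))

    # Quoted mean number of random coincidences with the two starbursts
    random_mean = 0.81

    # Chance probability that all 4 observed highest-energy events fall
    # inside the effective solid angles of the starbursts
    p_chance = poisson_tail(random_mean, 4)
    print(f"P(>=4 | mean = {random_mean}) = {p_chance:.4f}")  # ~0.009, i.e. about 1%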
Assuming an extrapolation of the AGASA flux (E^3 J_obs(E)) up to 10^20.5 eV, the event rate at Pampa Amarilla (26) is given by
where E_1 = 10^20.3 eV and E_2 = 10^20.5 eV. Considering a 5-year sample of 25 events and that for this energy range the aperture of PAO is mostly receptive to cosmic rays from NGC 253, we allow for different possibilities of the effective reduction of the cone size because of the Galactic magnetic field biasing previously discussed. In Fig. 8 we plot contours of constant probabilities (P = 10^-4, 10^-5) in the two-dimensional parameter space of the size of the cone (as a fraction of the full 50° circle) and the minimum number of events originating within the resulting effective solid angle. The model predicts that after 5 years of operation, all of the highest energy events would be observed in the aperture described above. Even if only 7 or 8 are observed, this is sufficient to rule out a random fluctuation at the 10^-5 level. Thus, a clean test of the starburst hypothesis can be achieved at a very small cost: < 10^-5 out of a total 10^-3 PAO probability budget.
24 The relative importance of convection and diffusion in the escape of the CRs from a region of disk scale height h is given by the dimensionless parameter q = V_0 h / D_0, where V_0 is the convection velocity and D_0 is the CR diffusion coefficient inside the starburst. When q ≪ 1, the CR outflow is diffusion dominated, whereas when q ≫ 1 it is convection dominated. For the central region of NGC 253, a convection velocity of the order of that of the expanding SNR shells ~ 10000 km s^-1, a scale height h ~ 35 pc, and a reasonable value for the diffusion coefficient D_0 ~ 5 × 10^26 cm^2 s^-1 lead to q ~ 216. Thus, convection dominates the escape of the particles. The residence time of the iron nuclei in the starburst is then t_RES ~ h / V_0 ≈ 1 × 10^11 s.
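As a cross-check of the numbers quoted in this footnote, the sketch below evaluates q and the residence time from the stated inputs; the variable names are ours, not the authors'.

    # Convection vs. diffusion in the NGC 253 starburst core (order-of-magnitude check)
    PC_IN_CM = 3.086e18          # one parsec in centimeters

    V0 = 1.0e4 * 1.0e5           # convection velocity: 10000 km/s expressed in cm/s
    h = 35 * PC_IN_CM            # disk scale height: 35 pc in cm
    D0 = 5.0e26                  # CR diffusion coefficient in cm^2/s

    q = V0 * h / D0              # dimensionless convection/diffusion ratio
    t_res = h / V0               # residence time for convective escape, in seconds

    print(f"q ~ {q:.0f}")            # ~216, so convection dominates
    print(f"t_res ~ {t_res:.1e} s")  # ~1.1e11 s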
25 Because of constraints inherent in partitioning events among clusters, the distributions are very close to, but not precisely Poisson.
26 The Southern Site of PAO has been christened Pampa Amarilla. Recall that it has an aperture A ≈ 7000 km^2 sr for showers with incident zenith angle less than 60°.
Required math: calculus
Required physics: Newton’s law
The most common example of a system exhibiting harmonic oscillation is that of a mass on a spring. If you want to keep an image in your mind as we discuss things, imagine that the spring is horizontal, with the left end firmly fixed to a wall, and a mass fixed to the right end. The mass is free to slide horizontally while attached to the spring, so the spring alternately compresses and expands as the mass moves back and forth. As always in these simplified situations, we’re imagining the mass sliding along some horizontal track as it moves back and forth, and we stipulate that there is no friction between the sliding mass and the track. Even though the reader is probably aware of the approximations being made, it is worth stating explicitly that such a system does not exist in reality, since there will always be some friction in a system like this.
Having accepted the idealized situation, however, we can observe that the mass on the spring experiences a force that is directly proportional to its distance from the equilibrium point, and that is always directed towards the equilibrium point. That is, if the spring's right end (the end attached to the oscillating mass) lies at position x = 0 at equilibrium (the point at which, if the spring is released, nothing would move), then the force experienced by the mass is

F = -kx
where k is a positive constant, and the minus sign indicates that the force resists any extension or compression of the spring. That is, if x > 0 then the force tends to pull the mass back to the left towards the equilibrium point, while if x < 0, the force is in the positive direction and tends to push the mass back to the right.
Since we have the force law, we can invoke Newton's law in the form

F = m d²x/dt²
to get an expression for the mass's motion as a function of time. First, we can notice that the units of k must be those of force/distance (since the units of force are mass × distance / time²), so the quantity k/m has the units of 1/time². To make the calculations easier, we can introduce a quantity ω = √(k/m), which has the dimensions of 1/time and can thus be regarded as a frequency. We'll see how it fits into the solution in a minute, but we can first rewrite the equation to be solved as

d²x/dt² = -ω² x
The general solution of this equation is

x(t) = A cos(ωt) + B sin(ωt)
In order to determine A and B we need to impose some initial conditions, so let's suppose we are doing an experiment in which we pull the mass out to a position x₀ at time t = 0 and let it go. In that case, we have x(0) = x₀, so A = x₀. Since the mass starts off with zero velocity, dx/dt = 0 at t = 0, and we can find B by calculating the derivative of the general solution:

dx/dt = -Aω sin(ωt) + Bω cos(ωt)
from which we get B = 0. Thus the particular solution for our little experiment is

x(t) = x₀ cos(ωt)
We can now see the significance of the frequency

ω = √(k/m)
It is the frequency (in radians per second) of the oscillation of the mass on the spring, since it is the frequency inside the cosine function. If we make the spring stiffer so that it exerts more force per unit distance, this increases k and in turn increases the frequency of oscillation. If we increase the mass attached to the spring, this decreases the frequency.
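As an aside that is not part of the original post, the particular solution can be checked numerically by integrating Newton's second law directly and comparing with x₀ cos(ωt); the mass, spring constant and initial displacement below are arbitrary illustrative values.

    import numpy as np
    from scipy.integrate import solve_ivp

    m, k, x0 = 0.5, 2.0, 0.1            # mass (kg), spring constant (N/m), initial pull (m)
    omega = np.sqrt(k / m)

    # Newton's second law written as a first-order system: y = [x, dx/dt]
    def rhs(t, y):
        x, v = y
        return [v, -(k / m) * x]

    t = np.linspace(0, 10, 500)
    sol = solve_ivp(rhs, (0, 10), [x0, 0.0], t_eval=t, rtol=1e-9, atol=1e-12)

    analytic = x0 * np.cos(omega * t)
    print(np.max(np.abs(sol.y[0] - analytic)))   # tiny (~1e-8): numeric and analytic solutions agree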
We can also work out the kinetic and potential energies of the mass as functions of time. The kinetic energy is

K = (1/2) m (dx/dt)² = (1/2) m ω² x₀² sin²(ωt)
On the first oscillation, the kinetic energy will be maximum when sin²(ωt) = 1, and from the equation for the position above we see this occurs when x = 0, so the kinetic energy is maximum just as the mass passes through the spring's equilibrium point. Since the total energy (kinetic + potential) must be a constant in the absence of any outside forces, the total energy therefore must be this maximum kinetic energy, E = (1/2) m ω² x₀² = (1/2) k x₀², from which we can also calculate the potential energy as a function of time:

V = E - K = (1/2) k x₀² cos²(ωt)
The potential energy can be obtained another way if we consider the work done by the force. The usual convention is that if a mass is moving against the action of a force, the potential energy being stored in the mass is the negative of the work done on the mass. As the mass oscillates on the spring, it is moving against the force on those parts of each oscillation where it is moving away from the equilibrium point at x = 0 and moving with the force whenever it is moving towards the equilibrium point. If we consider part of a cycle where the mass is moving in the positive direction starting at x = 0, then the mass is moving against the force and is slowing down, so its potential energy is increasing, and the work done by the spring is negative. So we can get a measure of the potential energy by calculating how much work is done as the mass moves from x = 0 to some other point x. This motion happens in every oscillation, of course, so we can take any oscillation we like and calculate the work. The first time at which the mass passes x = 0 moving to the right is when ωt = 3π/2. Let the time when the mass reaches position x be t_x. The work done is the integral of the force times the distance, so we get

W = ∫₀^x (-k x') dx'
  = -k ∫_{3π/2ω}^{t_x} x (dx/dt) dt
  = k ω x₀² ∫_{3π/2ω}^{t_x} cos(ωt) sin(ωt) dt
  = -(1/2) k x₀² cos²(ω t_x)
In the second line, we changed the integral from one over x to one over t, since all our formulas are expressed in terms of time. We used the chain rule formula to write dx = (dx/dt) dt. Since t_x was chosen to represent any time, we can drop the suffix to get a general formula for the potential energy, which is

V = -W = (1/2) k x₀² cos²(ωt) = (1/2) k x²
which agrees with the previous formula.
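To make the work argument concrete, here is a small numerical check, not part of the original derivation, that the work done by the spring force from x = 0 out to x equals -(1/2) k x², so the stored potential energy is (1/2) k x²; the numbers are arbitrary.

    from scipy.integrate import quad

    k = 2.0      # spring constant (N/m), illustrative value
    x = 0.07     # final position (m), illustrative value

    # Work done by the spring force F = -k x' as the mass moves from 0 to x
    work, _ = quad(lambda xp: -k * xp, 0.0, x)

    potential = -work                    # potential energy stored in the spring
    print(potential, 0.5 * k * x**2)     # both give ~0.0049 J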
Note that the harmonic oscillator force is conservative, since it can be expressed as the derivative of a potential function:

F = -dV/dx
So the potential for the harmonic oscillator is

V(x) = (1/2) k x²
Displacement: A vector of physical displacement between two points in space. Displacement has direction. This typically uses the algebraic symbol "s" or sometimes "d". In our everyday lives, we express this in units of inches, feet, centimeters, meters, kilometers, miles.
Distance: Magnitude of Displacement vector. Distance has no direction, only a number. It is often misused to indicate displacement.
Velocity: A vector of change in position (displacement) over time. Velocity has direction. This typically uses the algebraic symbol "v". In our everyday lives, we typically express this in mph (miles per hour), or kph (km per hr), or meters per second (m/s).
Speed: Magnitude of a Velocity vector. Speed has no direction, but the term is often misused to indicate velocity.
Acceleration: A vector of change in velocity over time. Acceleration has direction. This typically uses the algebraic symbol "a". In our everyday lives we don't usually use these units, but they are usually expressed in units of meters/sec^2 (m/s^2).
Jerk: A vector of change in acceleration over time. Jerk has direction. Jerk denotes the "quality" of an acceleration. For instance, the hard braking of a car has a high jerk because it imparts an acceleration on the car very quickly. Please don't get this confused with acceleration itself. This is an even less commonly used term/quantity than acceleration, and is usually denoted in units of m/s^3
Newton's Three Laws of Motion:
1) The Law of Inertia. Every object in a state of uniform motion tends to remain in that state of motion unless an external force is applied to it. In other words, a body at rest stays at rest; a body in motion stays in motion. This says that in absence of any external forces, a body will move at a constant velocity. example: an astronaut drifting in space with no propulsion will keep moving at a constant velocity until an external force is applied. He will not be able to even swim in space because there is no fluid to push against.
2) F = m*a In layman's terms, this defines Force as the acceleration of a given mass. More technically, F = dP/dt, where P = momentum. When the mass stays constant, then F = m*dv/dt = m*a. Where m = mass, v = velocity, dv/dt = derivative of velocity over time, a = acceleration. Note, that in the case of cars and rocket ships, the mass of the body is changing due to burning of fuel, so it is no longer simply mass times acceleration. This law defines Force.
3) The Law of Action-Reaction. For every action there is an equal and opposite reaction. This says that if you exert a force on an object, an equal and opposite reaction force must exist. In the case of an everyday interaction with a baseball, when you throw the ball, you exert a force on it. You exert an equal and opposite force on the earth when you throw the ball. If you are in space, and you throw the ball, the force you exert to throw the ball forward will also accelerate yourself backward.
Terminology for Momentum and Energy:
Momentum: P = m*v. Momentum is a vector of mass at a given velocity. Force = change in momentum over time. A high Force will change momentum quickly, and vice versa. Momentum has direction.
Work: W = F*s. Work is Force applied over a given distance. Work is energy. It has no direction.
Kinetic Energy: E = (1/2)*m*v^2. Kinetic energy is the energy of a body as it moves at a given velocity. When you accelerate a baseball to a certain velocity, you also impart kinetic energy to it.
Frictional Force: F = u*N. u (the greek letter mu) is the coefficient of friction, and N is the normal force pressing the surfaces together. Friction is a force which acts along the surface, at 90 degrees to the normal force. It often converts kinetic energy into heat.
Law of Conservation of Momentum: The total momentum of a given system will always remain constant. The classic example of this is a bunch of billiard balls on a table. Let's say you, the cue, the balls, and the table, and the earth (ground) are one system. When you break the rack, you impart momentum to the ball with the cue (applying a force), and simultaneously apply force to the earth (newton's 3rd law), changing its momentum by that same amount. Then the cue ball hits the racked balls, it imparts momentum to first one, then other balls in a chain reaction. They impart momentum to each other but if you take the sum total of all of the momentum vectors of each ball, they will add up to the momentum of the single cue ball which you accelerated with the cue stick. At some point, frictional force between each ball and the felt/bumpers of the table will slow the balls down, and impart momentum back into the earth.
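As a toy illustration of these two conservation laws (my own example, not from the post above), consider a head-on elastic collision between a moving cue ball and a stationary object ball of equal mass; total momentum and total kinetic energy are the same before and after.

    def elastic_collision_1d(m1, v1, m2, v2):
        """Final velocities for a 1-D elastic collision (standard textbook result)."""
        v1f = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
        v2f = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
        return v1f, v2f

    m_ball = 0.17                    # kg, roughly a billiard ball
    v_cue, v_object = 2.0, 0.0       # m/s just before impact

    v1f, v2f = elastic_collision_1d(m_ball, v_cue, m_ball, v_object)

    p_before = m_ball * (v_cue + v_object)
    p_after = m_ball * (v1f + v2f)
    ke_before = 0.5 * m_ball * (v_cue**2 + v_object**2)
    ke_after = 0.5 * m_ball * (v1f**2 + v2f**2)

    print(v1f, v2f)             # 0.0 and 2.0: the cue ball stops, the object ball moves on
    print(p_before, p_after)    # equal momenta
    print(ke_before, ke_after)  # equal kinetic energies (elastic case; real balls lose a bit to sound and heat)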
Law of Conservation of Energy: The total energy of a given system will always remain constant. Energy can never be destroyed--it can only change form. Again, take the billiard example. You impart kinetic energy into the cue ball via the cue stick, simultaneously imparting kinetic energy into the earth (newton's 3rd law). Energy has no vector, so you basically can think about each ball as having a number above it indicating how much kinetic energy is in each ball, and it goes down as the ball rolls on the felt, losing energy to friction. You also lose energy to internal friction within the bumpers, and to sonic energy for every "clack" you hear for an impact between balls. At some point, all the balls stop moving, with all kinetic energy lost to friction.
In this example energy started as chemical energy in your body, changed into kinetic energy in your muscles, which then transferred into the cue ball, and then all the other billiard balls. That energy then gradually gets lost to sonic energy (on impacts), and friction (on ball-to-ball impacts, to felt, and bumpers). Total energy of the system remains constant. net effect is your body loses chemical energy, and the environment gains heat (sound becomes heat after friction)
Potential Energy: Potential energy is the energy "stored" in an object. It is energy that once released, will often express itself as kinetic energy. One of the most common forms of this is gravitational potential energy. Another form is spring potential energy, where energy is stored in a spring. The tendons/ligaments in our bodies are springs, but while we may think of our bodies like coiled springs sometimes, this form of energy is actually not going to provide nearly as much energy as your muscles. Chemical energy can also be thought of as potential energy, as it is the energy of molecular bonds that is released so we can move and think.
Gravitational Potential Energy: E = m*g*h. This is the amount of gravitational energy stored in a body. Since Work = F*s, when you apply force to the body equal to or greater than gravity, in the opposite direction of gravity's pull, then the energy you add to the ball becomes stored as gravitational potential energy. For example, when you lift a baseball off the ground, you impart kinetic energy to the ball, until you stop moving it at, say, chest height. At that moment, the baseball has gravitational potential energy. When you let go of the ball, the potential energy becomes kinetic energy, due to gravity, and it drops down. After it bounces a few times and stops moving, all that energy is converted into heat, and dissipates into the environment.
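A quick numerical illustration of the baseball example (my numbers, not the poster's): lifting the ball stores m*g*h of potential energy, and if it is dropped that energy reappears as kinetic energy just before impact, giving v = sqrt(2*g*h).

    from math import sqrt

    m = 0.145     # kg, a baseball
    g = 9.81      # m/s^2
    h = 1.3       # m, roughly chest height

    pe = m * g * h                  # stored gravitational potential energy (J)
    v_impact = sqrt(2 * g * h)      # speed just before hitting the ground (m/s)
    ke_impact = 0.5 * m * v_impact**2

    print(pe, ke_impact)   # both ~1.85 J: the potential energy has turned into kinetic energy
    print(v_impact)        # ~5.05 m/s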
Centripetal Acceleration: a = v^2 / r. This is the acceleration that is required to allow a point mass to travel in a circular trajectory. The direction of this acceleration is always towards the center of the circle. The velocity of the point mass is tangent to the circle at all times. In the stationary reference frame, if the centripetal acceleration is suddenly stopped (say because the string snapped), the point mass will continue moving in a straight line tangential to the circle, starting at the location of the point mass when the string broke. This is easily demonstrated using a sling, or any weight on a string. Spin the weight on the string, and let go. or have a friend do it, so you can see it clearly. it will launch away from the original circle in a tangent direction, starting when the string was let go.
Centrifugal Acceleration: This is a so-called virtual acceleration, that is the acceleration of a body within the rotating reference frame. In the rotating reference frame, this acceleration apparently "pushes" the body out of the circle. The true, centripetal acceleration is pulling inwards on the body to keep it rotating in a circle. Once the rotation stops, this virtual force disappears, and you fly along the tangent of the circle just like with centripetal acceleration.
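To put a number on the centripetal formula (again, my own illustrative figures): a weight swung on a 1 m string at 3 m/s needs an inward acceleration of v^2 / r, and the string must supply an inward force of m * v^2 / r.

    m = 0.2     # kg, weight on the string
    v = 3.0     # m/s, tangential speed
    r = 1.0     # m, radius of the circle

    a_c = v**2 / r          # centripetal acceleration, directed toward the center
    tension = m * a_c       # inward force the string must supply

    print(a_c)       # 9.0 m/s^2, roughly one g
    print(tension)   # 1.8 N; let go and the weight flies off along the tangent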
My personal martial musings relating to these concepts
Taoist concepts of yin and yang, true and false, solid and insubstantial resonate strongly with newton's laws. Especially, for obvious reasons, the law of action-reaction, conservation of energy, and conservation of momentum.
The law of action-reaction resonates clearly with the section of the taiji treatise: 有上及有下. 有前則有後. 有左則有右. When there is up there is down, when there is front there is back, when there is left there is right.
Conservation of energy and momentum basically tell you that the power you generate in a punch, should be directly proportional with the amount your legs and body push off the earth, if you do it right.
The law of inertia encourages you to not get in the way of an opponent's momentum, but to step aside and guide it in the same direction they already started with. Also it encourages deflection of blows instead of directly getting in the way.
The concept of centripetal motion/acceleration is the same thing, in that if you grab someone's punch as they come to you, it is much easier to guide it in a circular arc just by "pulling inward", than it would be to knock it aside.
That should be about it for this chapter. Next chapter will be rotational/angular mechanics, torque, and conservation of angular momentum. Any questions are welcome. I will pose "test" questions to those of you who are interested.
Basic Terminology and Concepts
Definition and Mathematics of Work
In the first three units of The Physics Classroom, we utilized Newton's laws to analyze the motion of objects. Force and mass information were used to determine the acceleration of an object. Acceleration information was subsequently used to determine information about the velocity or displacement of an object after a given period of time. In this manner, Newton's laws serve as a useful model for analyzing motion and making predictions about the final state of an object's motion. In this unit, an entirely different model will be used to analyze the motion of objects. Motion will be approached from the perspective of work and energy. The effect that work has upon the energy of an object (or system of objects) will be investigated; the resulting velocity and/or height of the object can then be predicted from energy information. In order to understand this work-energy approach to the analysis of motion, it is important to first have a solid understanding of a few basic terms. Thus, Lesson 1 of this unit will focus on the definitions and meanings of such terms as work, mechanical energy, potential energy, kinetic energy, and power.
When a force acts upon an object to cause a displacement of the object, it is said that work was done upon the object. There are three key ingredients to work - force, displacement, and cause. In order for a force to qualify as having done work on an object, there must be a displacement and the force must cause the displacement. There are several good examples of work that can be observed in everyday life - a horse pulling a plow through the field, a father pushing a grocery cart down the aisle of a grocery store, a freshman lifting a backpack full of books upon her shoulder, a weightlifter lifting a barbell above his head, an Olympian launching the shot-put, etc. In each case described here there is a force exerted upon an object to cause that object to be displaced.
Read the following statements and determine whether or not they represent examples of work.
A teacher applies a force to a wall and becomes exhausted.
A book falls off a table and free falls to the ground.
A waiter carries a tray full of meals above his head by one arm straight across the room at constant speed. (Careful! This is a very difficult question that will be discussed in more detail later.)
A rocket accelerates through space.
The amount of work done upon an object is given by the equation

W = F * d * cos(theta)

where F is the force, d is the displacement, and the angle (theta) is defined as the angle between the force and the displacement vector. Perhaps the most difficult aspect of the above equation is the angle "theta." The angle is not just any 'ole angle, but rather a very specific angle. The angle measure is defined as the angle between the force and the displacement. To gather an idea of its meaning, consider the following three scenarios.
- Scenario A: A force acts rightward upon an object as it is displaced rightward. In such an instance, the force vector and the displacement vector are in the same direction. Thus, the angle between F and d is 0 degrees.
- Scenario B: A force acts leftward upon an object that is displaced rightward. In such an instance, the force vector and the displacement vector are in the opposite direction. Thus, the angle between F and d is 180 degrees.
- Scenario C: A force acts upward on an object as it is displaced rightward. In such an instance, the force vector and the displacement vector are at right angles to each other. Thus, the angle between F and d is 90 degrees.
Let's consider Scenario C above in more detail. Scenario C involves a situation similar to the waiter who carried a tray full of meals above his head by one arm straight across the room at constant speed. It was mentioned earlier that the waiter does not do work upon the tray as he carries it across the room. The force supplied by the waiter on the tray is an upward force and the displacement of the tray is a horizontal displacement. As such, the angle between the force and the displacement is 90 degrees. If the work done by the waiter on the tray were to be calculated, then the results would be 0. Regardless of the magnitude of the force and displacement, F*d*cosine 90 degrees is 0 (since the cosine of 90 degrees is 0). A vertical force can never cause a horizontal displacement; thus, a vertical force does not do work on a horizontally displaced object!!
It can be accurately noted that the waiter's hand did push forward on the tray for a brief period of time to accelerate it from rest to a final walking speed. But once up to speed, the tray will stay in its straight-line motion at a constant speed without a forward force. And if the only force exerted upon the tray during the constant speed stage of its motion is upward, then no work is done upon the tray. Again, a vertical force does not do work on a horizontally displaced object.
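A short calculation (mine, not part of the original lesson) applies W = F * d * cos(theta) to the scenarios above, including the waiter's tray, to show how the angle controls the result.

    from math import cos, radians

    def work(force, displacement, theta_deg):
        """Work done by a force acting at angle theta (in degrees) to the displacement."""
        return force * displacement * cos(radians(theta_deg))

    F, d = 10.0, 2.0   # newtons and meters, illustrative values

    print(work(F, d, 0))              #  20 J: force and displacement in the same direction
    print(work(F, d, 180))            # -20 J: force opposite the displacement
    print(round(work(F, d, 90), 10))  #   0 J: an upward force does no work on a horizontally moved tray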
The equation for work lists three variables - each variable is associated with one of the three key words mentioned in the definition of work (force, displacement, and cause). The angle theta in the equation is associated with the amount of force that causes a displacement. As mentioned in a previous unit, when a force is exerted on an object at an angle to the horizontal, only a part of the force contributes to (or causes) a horizontal displacement. Let's consider the force of a chain pulling upwards and rightwards upon Fido in order to drag Fido to the right. It is only the horizontal component of the tension force in the chain that causes Fido to be displaced to the right. The horizontal component is found by multiplying the force F by the cosine of the angle between F and d. In this sense, the cosine theta in the work equation relates to the cause factor - it selects the portion of the force that actually causes a displacement.
When determining the measure of the angle in the work equation, it is important to recognize that the angle has a precise definition - it is the angle between the force and the displacement vector. Be sure to avoid mindlessly using any 'ole angle in the equation. A common physics lab involves applying a force to displace a cart up a ramp to the top of a chair or box. A force is applied to a cart to displace it up the incline at constant speed. Several incline angles are typically used; yet, the force is always applied parallel to the incline. The displacement of the cart is also parallel to the incline. Since F and d are in the same direction, the angle theta in the work equation is 0 degrees. Nevertheless, most students experienced the strong temptation to measure the angle of incline and use it in the equation. Don't forget: the angle in the equation is not just any 'ole angle. It is defined as the angle between the force and the displacement vector.
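The Fido example and the incline lab can be checked the same way (illustrative numbers only): only F * cos(theta) of an angled pull does work over a horizontal displacement, while a force applied parallel to the incline has theta = 0 regardless of the incline angle.

    from math import cos, radians

    # Fido: the chain pulls at 30 degrees above the horizontal while Fido slides 5 m across the floor
    F_chain, d_floor, theta = 12.0, 5.0, 30.0
    print(F_chain * cos(radians(theta)) * d_floor)   # ~52 J; only the horizontal component counts

    # Incline lab: 15 N applied parallel to the ramp over 0.9 m measured along the ramp
    F_ramp, d_ramp = 15.0, 0.9
    print(F_ramp * cos(radians(0)) * d_ramp)         # 13.5 J; theta is 0, not the incline angle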
On occasion, a force acts upon a moving object to hinder a displacement. Examples might include a car skidding to a stop on a roadway surface or a baseball runner sliding to a stop on the infield dirt. In such instances, the force acts in the direction opposite the object's motion in order to slow it down. The force doesn't cause the displacement but rather hinders it. These situations involve what is commonly called negative work. The "negative" in negative work refers to the numerical value that results when values of F, d and theta are substituted into the work equation. Since the force vector is directly opposite the displacement vector, theta is 180 degrees. The cosine(180 degrees) is -1 and so a negative value results for the amount of work done upon the object. Negative work will become important (and more meaningful) in Lesson 2 as we begin to discuss the relationship between work and energy.
Whenever a new quantity is introduced in physics, the standard metric units associated with that quantity are discussed. In the case of work (and also energy), the standard metric unit is the Joule (abbreviated J). One Joule is equivalent to one Newton of force causing a displacement of one meter. In other words,
The Joule is the unit of work.

1 Joule = 1 Newton * 1 meter

1 J = 1 N * m
In fact, any unit of force times any unit of displacement is equivalent to a unit of work. Some nonstandard units for work are shown below. Notice that when analyzed, each set of units is equivalent to a force unit times a displacement unit.
In summary, work is done when a force acts upon an object to cause a displacement. Three quantities must be known in order to calculate the amount of work. Those three quantities are force, displacement and the angle between the force and the displacement.
A platform mound is any earthwork or mound intended to support a structure or activity. The indigenous peoples of North America built substructure mounds for well over a thousand years, starting in the Archaic period and continuing through the Woodland period. Many different archaeological cultures (Poverty Point culture, Troyville culture, Coles Creek culture, Plaquemine culture and Mississippian culture) of North America's Eastern Woodlands are particularly well known for using platform mounds as a central aspect of their overarching religious practices and beliefs.
These platform mounds are usually four-sided truncated pyramids, steeply sided, with steps built of wooden logs ascending one side of the earthworks. When Europeans first arrived in North America, the peoples of the Mississippian culture were still using and building platform mounds. Documented uses for Mississippian platform mounds include semi-public chief's house platforms, public temple platforms, mortuary platforms, charnel house platforms, earth lodge/town house platforms, residence platforms, square ground and rotunda platforms, and dance platforms.
Many of the mounds underwent multiple episodes of mound construction, with the mound becoming larger with each event. The site of a mound was usually a site with special significance, either a pre-existing mortuary site or civic structure. This site was then covered with a layer of basket-transported soil and clay known as mound fill and a new structure constructed on its summit.
At periodic intervals, averaging about twenty years, these structures would be removed, possibly ritually destroyed as part of renewal ceremonies, and a new layer of fill added, along with a new structure on the now higher summit. Sometimes the surface of the mounds would get a coat of brightly colored clay several inches thick. These layers also incorporated layers of different kinds of clay, soil and sod, an elaborate engineering technique to forestall slumping of the mounds and to ensure their steep sides did not collapse. This pattern could be repeated many times during the life of a site. The large amounts of fill needed for the mounds left large holes in the landscape now known by archaeologists as "borrow pits". These pits were sometimes left to fill with water and stocked with fish.
Some mounds were developed with separate levels (or terraces) and aprons, such as Emerald Mound, which is one large terrace with two smaller mounds on its summit; or Monks Mound, which has four separate levels and stands close to 100 feet (30 m) in height. Monks Mound had at least ten separate periods of mound construction over a 200-year period. Some of the terraces and aprons on the mound seem to have been added to stop slumping of the enormous mound. Although the mounds were primarily meant as substructure mounds for buildings or activities, sometimes burials did occur. Intrusive burials occurred when a grave was dug into a mound and the body or a bundle of defleshed, disarticulated bones was deposited into it.
Mound C at Etowah Mounds has been found to have more than 100 intrusive burials into the final layer of the mound, with many grave goods such as Mississippian copper plates (Etowah plates), monolithic stone axes, ceremonial pottery and carved whelk shell gorgets. Also interred in this mound was a paired set of white marble Mississippian stone statues.
A long-standing interpretation of Mississippian mounds comes from Vernon James Knight, who stated that the Mississippian platform mounds were one of the three "sacra", or objects of sacred display, of the Mississippian religion - also see Earth/fertility cult and Southeastern Ceremonial Complex. His logic is based on analogy to ethnographic and historic data on related Native American tribal groups in the Southeastern United States.
Knight suggests a microcosmic ritual organization based around a "native earth" autochthony, agriculture, fertility, and purification scheme, in which mounds and the site layout replicate cosmology. Mound rebuilding episodes are construed as rituals of burial and renewal, while the four-sided construction acts to replicate the flat earth and the four quarters of the earth.
The varying cultures collectively called Mound Builders were prehistoric inhabitants of North America who, during a 5,000-year period, constructed various styles of earthen mounds for religious and ceremonial, burial, and elite residential purposes. These included the Pre-Columbian cultures of the Archaic period; Woodland period (Adena and Hopewell cultures); and Mississippian period; dating from roughly 3400 BCE to the 16th century CE, and living in regions of the Great Lakes, the Ohio River valley, and the Mississippi River valley and its tributaries. Beginning with the construction of Watson Brake about 3400 BCE in present-day Louisiana, nomadic indigenous peoples started building earthwork mounds in North America nearly 1000 years before the pyramids were constructed in Egypt.
Since the 19th century, the prevailing scholarly consensus has been that the mounds were constructed by indigenous peoples of the Americas, early cultures distinctly separate from the historical Native American tribes extant at the time of European colonization of North America. The historical Native Americans were generally not knowledgeable about the civilizations that produced the mounds. Research and study of these cultures and peoples has been based on archaeology and anthropology.
Mound Builder or Mound People is a general term referring to the Native North American peoples who constructed various styles of earthen mounds for burial, residential, and ceremonial purposes. These included Archaic, and Woodland period, and Mississippian period Pre-Columbian cultures.
The term Mound Builder was also applied to an imaginary race believed to have constructed the great earthworks of the United States, since Euro-American racial ideology of the 16th-19th centuries did not recognize that Native Americans were sophisticated enough to construct such monumental architecture.
The final blow to this myth was dealt by an official appointee of the United States Government, Cyrus Thomas of the Bureau of American Ethnology. His lengthy report (727 pages, published in 1894) concluded, on behalf of the United States Government, that the prehistoric earthworks of the eastern United States were the work of Native Americans. Thomas Jefferson was an early proponent of this view after he excavated a mound and ascertained the continuity of burial practices observed in contemporaneous native populations.
Poverty Point in what is now Louisiana is a prominent example of early archaic Mound Builder construction from about 2500 BC. While other and earlier Archaic mound centers existed, Poverty Point remains one of the best recognized centers. Throughout the United States, the Archaic period was followed by the Woodland period, and mound building continued.
Some well understood examples would be the Adena culture of Ohio and nearby states, and the subsequent Hopewell culture known from Illinois to Ohio and renowned for their geometric earthworks. The Adena and Hopewell were not, however, the only mound building peoples during this time period. There were contemporaneous mound building cultures throughout the Eastern United States.
Around 900-1000 AD the Mississippian culture developed and spread through Eastern United States, primarily along the river valleys. The major location where the Mississippian culture is clearly developed is located in Illinois, and is referred to today as Cahokia.
The namesake cultural trait of the Mound Builders was the building of mounds and other earthworks. These burial and ceremonial structures were typically flat-topped pyramids, flat-topped or rounded cones, elongated ridges, and sometimes a variety of other forms.
Some mounds took on unusual shapes, such as the outline of cosmologically significant animals. These are considered to be distinct and are known as effigy mounds.
The best known flat-topped pyramidal earthen structure, which is also the largest pre-Columbian earthwork north of Mexico at over 100 feet tall, is Monk's Mound at Cahokia. The most famous effigy mound, Serpent Mound in southern Ohio, is 5 feet tall, 20 feet wide, over 1330 feet long, and shaped as a serpent.
The most complete reference for these earthworks is Ancient Monuments of the Mississippi Valley, written by Ephraim G. Squier and Edwin H. Davis and published by the Smithsonian Institution in 1848. Since a large number of the features they documented have since been destroyed or diminished by farming and development, their surveys, sketches and descriptions are still used by modern archaeologists. A smaller regional study in 1931 by author and archaeologist Fred Dustin charted and examined the mounds and Ogemaw Earthworks near Saginaw, Michigan.
The mound builders included many different tribal groups and chiefdoms, probably involving a bewildering array of beliefs and unique cultures, united only by the shared architectural practice of mound construction. This practice, believed to be associated with a cosmology that had a cross-cultural appeal, may indicate common cultural antecedents. The first mound building is an early marker of incipient political and social complexity among the cultures in the Eastern United States.
As with other continents, the mounds and pyramids of North America vary greatly. It could be that humankind has a primal need to build fake mountains, and that there are absolutely no connections between these sites. Perhaps size and shape are irrelevant, and location is everything, and the guidelines for their placement was once universally known.
It is difficult to determine how many mounds were built in North America, for many have been destroyed by modern civilization - but there were thousands.
Poverty Point combines mounds with an aspect of ancient Rome - an amphitheatre. Consisting of concentric ridges 5-10 feet high and 150 feet wide, the construction has a diameter of 3/4 of a mile, five times the diameter of the Colosseum in Rome. The ridges were built with 530,000 cubic yards of earth (over 35 times the cubic amount of the Great Pyramid of Giza). Of the earth mounds, one has a base of 700 feet by 800 feet and is 70 feet high. It is shaped like a bird.
Poverty Point is a prehistoric earthworks of the Poverty Point culture, now a historic monument located in the Southern United States. It is 15.5 miles (24.9 km) from the current Mississippi River, and situated on the edge of Macon Ridge, near the village of Epps in West Carroll Parish, Louisiana.
Poverty Point comprises several earthworks and mounds built between 1650 and 700 BCE, during the Archaic period in the Americas by a group of Native Americans of the Poverty Point culture. The culture extended 100 miles (160 km) across the Mississippi Delta. The original purposes of Poverty Point have not been determined by archaeologists, although they have proposed various possibilities including that it was: a settlement, a trading center, and/or a ceremonial religious complex.
Mound A (The Bird Mound)
Alongside these ridges are other earthworks, primarily platform mounds. The largest of these, Mound A, is to the west of the ridges, and is roughly T-shaped when viewed from above. Many have interpreted it as being in the shape of a bird and also as an "Earth island", representing the cosmological center of the site. Scholars use the fact that Mound A is in the center of a direct alignment between Mounds B and E as an element demonstrating the complex planning exercised by the site's builders.
Researchers have learned that Mound A was constructed quickly, probably over a period of less than three months. Prior to construction, the vegetation covering the site was burned. According to radiocarbon analysis, this burning occurred between approximately 1450 and 1250 BCE. Workers immediately covered the area with a cap of silt, followed quickly by the main construction effort. There are no signs of construction phases or weathering of the mound fill even at microscopic levels, indicating that construction proceeded in a single massive effort over a short period. In total volume, Mound A is made up of approximately 238,000 cubic meters of fill, making it the second-largest earthen mound (by volume) in eastern North America. It is second in overall size to the later Mississippian-culture Monks Mound at Cahokia, built beginning about 950-1000 CE in present-day Illinois.
Mound B, a platform mound, is north-west of the rings. Below the mound was found a human bone interred with ashes, a likely indication of cremation, suggesting that this might have been a burial mound or the individual was a victim of human sacrifice. Mound B aligns in a straight north to south line with both mounds A and E.
Mound E (Ballcourt Mound)
The Ballcourt Mound, which is also a platform mound, is so called because "two shallow depressions on its flattened top reminded some archaeologists of playing areas in front of outdoor basketball goals, not because they had any revelation about Poverty Point's sports scene."
Mound E forms a north-south line with mounds A and B.
Dunbar and Lower Jackson mounds
Within the enclosure created by the curving earthworks, two additional platform mounds were located. The Dunbar Mound had various pieces of chipped precious stones upon it, indicating that people used to sit atop it and make jewelry. South of the site center is the Lower Jackson Mound, which is believed to be the oldest of all the earthworks at the site. At the southern edge of the site, the Motley Mound rises 51 ft (16 m). The conical mound is circular and reaches a height of 24.5 ft (7.5 m). These three platform mounds are much smaller than the other mounds.
Some followers of the New Age movement believe the site has spiritual qualities. John Ward, in his controversial pseudo-archaeological Ancient Archives among the Cornstalks (1984), claimed that Poverty Point was built by refugees who fled up the Mississippi River after their home, Atlantis, was destroyed in 1198 BCE. A similar connection to the legendary lost city was made by Frank Joseph, who claimed that individuals who were the reincarnation of former Atlanteans were able to unleash the psychic energies of Poverty Point by spilling purified water on the oak tree upon the main mound at the site. Erich Von Daniken has suggested a connection to extraterrestrials. He suggested that one of the mounds was a landing platform for alien aircraft.
Archaic Native Americans built massive Louisiana mound in fewer than 90 days, research confirms PhysOrg - January 30, 2013
Nominated early this year for recognition on the UNESCO World Heritage List, which includes such famous cultural sites as the Taj Mahal, Machu Picchu and Stonehenge, the earthen works at Poverty Point, La., have been described as one of the world's greatest feats of construction by an archaic civilization of hunters and gatherers. Now, new research in the current issue of the journal Geoarchaeology, offers compelling evidence that one of the massive earthen mounds at Poverty Point was constructed in less than 90 days, and perhaps as quickly as 30 days - an incredible accomplishment for what was thought to be a loosely organized society consisting of small, widely scattered bands of foragers.
The Great Serpent Mound is the largest effigy mound in the world. While there are several burial mounds around the Serpent mound site, the Serpent itself does not contain any human remains and wasn't constructed for burial purposes. It is located in Adams County, Ohio.
The mound is 1,330 feet in length along its coils and averages three feet in height. It is one of many sacred places associated with ancient wisdom identified by the serpent symbol. Nearly a quarter of a mile long, Serpent Mound apparently represents an uncoiling serpent.
The head of the serpent is aligned to the summer solstice sunset and the coils also may point to the winter solstice sunrise and the equinox sunrise. Today, visitors may walk along a footpath surrounding the serpent and experience the mystery and power of this monumental effigy. A public park for more than a century, Serpent Mound attracts visitors from all over the world. The museum contains exhibits on the effigy mound and the geology of the surrounding area.
Serpent Mound lies on a plateau overlooking the valley of Brush Creek. It is located on a plateau with a unique cryptoexplosion structure that contains faulted and folded bedrock, which is usually either produced by a meteorite or volcanic explosion.
This cryptoexplosion structure has caused Serpent Mound to become misshapen over the years. This is one of the only places in North America where such an occurrence is seen. Though the meaning is grounds for debate, the mound's placement on such an area is almost undoubtedly not by coincidence. Glotzhober & Lepper summarize the dispute in their work.
Put another way, the experts cannot agree whether the immediate geological area of Serpent Mound was created from within the earth or from without. Geologists from the Ohio Division of Natural Resources Division of Geological Survey and from the University of Glasgow (Scotland) concluded in 2003 that a meteorite strike was responsible for the formation after studying core samples collected at the site in the 1970s. Further analyses of the rock core samples recovered at the site indicated the meteorite impact occurred during the Permian Period, about 248 to 286 million years ago.
Nearby conical mounds contained burials and implements characteristic of the prehistoric Adena people (800 BC-AD 100). Many questions surround the meaning of Serpent Mound, but there is little doubt it symbolized some religious or mythical principle for its builders. The museum contains exhibits on the mound and the geology of the surrounding area.
The date and creators of the Serpent mound is still debated among archaeologists. Several legitimate attributions have been made concerning both of these questionable factors: The Adena culture and the Fort Ancient culture. Both of these sub-cultures belonged to the broader Hopewell culture, a term used to encompass all of the pre-Columbian Native American groups that resided in Southern Ohio. All of these civilizations had similar characteristics, including burial mounds and effigy mounds, such as the Serpent Mound.
Historically, the mound has been attributed to the Adena Indians (800 BC-AD 100). Many nearby mounds can be assuredly contributed to the Adena culture. The Adena are also renowned for their elaborate earthworks.
However, recent carbon dating studies place the serpent mound outside of the span of the Adenas. There are also no cultural artifacts present within the mound, a trait of most other Adena mounds. This could possibly be because the mound is not of Adena origin, or that it held a special significance above other burial mounds.
A few pieces of wood charcoal were found in the undisturbed portion of the serpent mound. When carbon dating experiments were undertaken on these artifacts, the first two yielded a date of ca. 1070 AD, with the third piece dating to the Late Archaic period.
The first two dates place the Serpent Mound within the realm of the Fort Ancient Indians, a Mississippian culture, but the third back to very early Adena or before. The Fort Ancient Indians could very well have been the erectors of the Serpent Mound. A significant symbol in the Mississippian culture is the rattlesnake, which would explain the design of the mound.
However, this mound, if built by the Fort Ancient Indians, is uncharacteristic for that group. They also buried many artifacts in their mounds, something of which the Serpent Mound is devoid. Also, the Fort Ancient Indians did not usually bury their dead in the manner which the remains have been found at the effigy.
Astronomy - The head of the serpent is aligned to the summer solstice sunset and the snake's coils align with the winter solstice sunrise and the equinox sunrise. It is thought that perhaps the mound was created as a response to astronomical occurrences.
The carbon dating attribution of 1070 coincides with two significant astronomical events - the appearance of Halley's Comet in 1066 and the light from the supernova that created the Crab Nebula in 1054. This light was visible for two weeks after it first reached earth, even during the day. There is speculation that the serpent mound was built to emulate a comet, slithering across the night sky like a snake.
The Serpent Mound was first discovered by two Chillicothe men, Ephraim G. Squier and Edwin H. Davis. During a routine surveying expedition, Squier and Davis discovered the unusual mound in 1846. They took particularly careful note of the area. When they published their book, Ancient Monuments of the Mississippi Valley, in 1848, they included a detailed description and a map of the serpent mound.
One man who it particularly intrigued was Frederick Ward Putnam of the Peabody Museum of Harvard University. Putnam was fascinated with the mounds, specifically the Serpent Mound. When he visited the mounds in 1885, Putnam found that they were gradually being destroyed by plowing. Putnam raised funds, and in 1886 purchased the land in the name of the university to be used as a public park.
Excavation of the Serpent Mound - After raising sufficient funds, Putnam returned to the site in 1886. He worked for three years excavating the contents and burial sequences of both the Serpent Mound and two nearby conical mounds. After his work was completed and his findings documented, Putnam worked on restoring the mounds to their original state. In 1900, Harvard University turned over the Serpent Mound to the Ohio Historical Society to operate as a public park.
The Serpent Mound is one of those rare loci of the planet's topography where the consummate joining of terrestrial magnetism with astronomical alignments serves to astonish one at the accomplishments of our ancestry's knowledge of Earth and Heaven.
Unless you are an experienced geologist, the unique features of the topography of the lands surrounding the Serpent Mound are not obvious. The land rises and falls, sometime in gentle slopes but often sharply with steep contours, like the outcrop of stone and earth the serpent sits upon.
Through the land, flows many streams, some maintaining their flow throughout the summer. From atop the tower constructed to give tourists an elevated view of the mound, you also see a land covered with mixed hardwoods and the occasional evergreen. The view appears little different from the rest of southern Ohio, but within recent years the land here has been found to be unique. I believe the Adena peoples knew it over two thousand years ago when they sculpted this serpent out of stone, clay, and dirt.
In 1933 W.H. Bucher published an account of this area calling it a cryptovolcanic structure. Bucher was German, and his article was published in a German publication. Perhaps it takes an outsider to see the inner qualities of a place. Bucher saw similarities in the land forms at the Serpent Mound to barely recognizable volcanic upheavals in Germany. But like so many who speculate about the mounds, he saw what he wished to see.
No volcanic materials have been found here; however, he helped people see what is hardest to see: the familiar as strange. In 1947 R.D. Dietz in Science magazine suggested that a better name to describe the land features was "cryptoexplosion" - the folded and faulted beds of landforms from different geologic eras exposed from the impact of meteors.
The central area is characterized by uplifted and faulted Silurian and Ordovician rocks that have been folded sharply into seven radiating anticlines. The forces that produced this structure caused the central area to be uplifted a minimum of 950 feet. Shatter cones - shock-produced structures - are found in moderate amounts in the central area.
This description is from a map showing a nearly circular area representing the disturbed landscape; looking closely you can see the serpent mound sitting on the circumference of the circle. There is a great appeal to Dietz's theory even if the geology does not completely support it; there is no meteoric metal here.
But there are serious suggestions that the serpent is intimately connected with the heavens. Several writers have suggested that the serpent is a model of the constellation we call the Little Dipper, its tail coiled about the north star. It is tempting to believe that the Indians knew of the meteor's explosion into the earth, and they built the mound to honor that event.
Bucher's theory and the variation of it is supported more by the evidence of the rocks and the symbolism of the mound. The explosion came from within the earth from the incredible pressure of accumulated but repressed energies, trapped, blocked, but finally exploding upward as gas forcing its way to be released through the body of the earth toward the sky above.
Old maps of the area show mounds at these places where the waterways meet, which some people consider gateways - the ways of passage, movement of consciousness between realities. This is said to signify the inner energies that the earth embodies.
This powerful energy rising from the depths of the earth-body is the energy of transformation, the energy that destroys blockages and barriers to the higher states of consciousness. It is the energy charted by shamans of every primary culture, the energy inherent in every human body.
In Ohio we also find the Decalogue Stone - Battle Creek Stone.
Wonders of geometric precision, the earthworks and mounds of the lower Mississippi were centers of life long before the Europeans arrived in America, as was the river itself. The alluvial soil of its banks yielded a bounty of beans, squash, and corn to foster burgeoning communities. Over the Mississippi's waters, from near and far, came prized pearls, copper, and mica.
Along Mississippi's scenic Natchez Trace Parkway sits an immense flat-topped platform 35 feet high, spanning eight acres.
Emerald Mound, the second-largest ceremonial earthwork in the United States, was built over two centuries before Columbus waded ashore in the Caribbean. The Mississippians erected hundreds - maybe thousands - of earthworks across the southeast while Europe was living through the Middle Ages and the Renaissance.
As the Mississippians flourished, the mounds evolved into urban centers with the common city problems of overcrowding and waste disposal. Sometimes one large flat-topped mound dominated a village or ceremonial center. More often, as at Emerald, several mounds surrounded a plaza, with the village at its edges. Structures around the plaza - temples or official residences - sat on large four-sided flat-topped mounds. A palisade of saplings surrounded the entire complex.
Periodically, the Mississippians would raze one of the wood-and-mud structures, bury the remains of a deceased leader in a fresh layer of earth, and erect a new building on top. Commonly, the well-to-do were laid to rest in specially built burial mounds, conical or round.
Crews labored periodically over generations, sometimes a century or more, before an earthwork reached its final dimensions. A mound might begin as a slight rise with an important building on it. After a time, perhaps it might burn accidentally or people would burn it down as part of a cleansing ceremony. The crews brought basket after basket of dirt to cover the old and lay a new foundation, and another building went up.
Many workers, hauling 60 pounds of soil apiece, labored to complete each stage. Some archeologists say that the culture's survival depended on a steady flow of immigrants to compensate for the high death rates. When the flow ceased, they argue, the cities collapsed.
Today, most of the moundbuilders' legacy is gone. Many of their earthworks have been plowed, pilfered, eroded, and built over. Yet evidence of the culture remains. This website is part of an effort to preserve the legacy that survives along the banks of the lower Mississippi.
The Mississippian Native American Platform Mound
Spiro Mounds is an important pre-Columbian Caddoan Mississippian culture archaeological site located in present-day eastern Oklahoma in the United States. The site is located seven miles north of Spiro, and is the only prehistoric Native American archaeological site in Oklahoma open to the public. The prehistoric Spiro people thrived and created a strong religious center and political system.
The site was eventually abandoned after several hundred years of occupation, although it is still unclear why. The Great Mortuary at the site was looted in the 1930s. Many of the looted artifacts were eventually tracked down, although many others were destroyed by the looters, who used dynamite on the mound to gain access to its contents. The mounds site has been significant to North American archaeology since the 1930s, especially in the defining of the Southeastern Ceremonial Complex. Listed on the National Register of Historic Places, the site is under the protection of the Oklahoma Historical Society and open to the public.
Spiro is the western-most known outpost of the Mississippian culture, which arose and spread along the lower Mississippi River and its tributaries between the 9th century and 16th century CE. Cahokia, a major chiefdom that built a six-mile-square city, arose east of St. Louis in present-day Illinois. Mississippian culture extended along the Ohio River and into the southeast, and the trading network ranged from the Great Lakes to the Gulf Coast and into the southeastern mountains.
The Spiro area includes twelve mounds and 150 acres of land. As in other Mississippian-culture towns, the people built a number of large, complex earthworks. These included earthen mounds surrounding a large, planned and leveled central plaza, where important religious rituals, the politically and culturally significant game of chunkey, and other important community activities were carried out. The population lived in a village that bordered the plaza. In addition, archaeologists have found more than twenty other related village sites within five miles of the main town. Other village sites linked to Spiro through culture and trade have been found up to 100 miles (160 km) away.
Spiro has been the site of human activity for at least 8000 years, but was a major settlement from 800 to 1450 CE. The cultivation of maize allowed accumulation of crop surpluses and the gathering of more dense populations. It was the headquarters town of a regional chiefdom, whose powerful leaders directed the building of eleven platform mounds and one burial mound in an 80-acre (0.32 km2) area on the south bank of the Arkansas River.
The heart of the site is a group of nine mounds surrounding an oval plaza. These mounds elevated the homes of important leaders or formed the foundations for religious structures that focused the attention of the community. Brown Mound, the largest platform mound, is located on the eastern side of the plaza. It had an earthen ramp that gave access to the summit from the north side. Here, atop Brown Mound and the other mounds, the town's inhabitants carried out complex rituals, centered especially on the deaths and burials of Spiro's powerful rulers.
Archaeologists have shown that Spiro had a large resident population until about 1250 CE. After that, most of the population moved to other towns nearby. Spiro continued to be used as a regional ceremonial center and burial ground until about 1450 CE. Its ceremonial and mortuary functions continued and seem to have grown after the main population moved away.
Craig Mound - also called "The Spiro Mound" - is the second-largest mound on the site and the only burial mound. It is located about 1,500 feet (460 m) southeast of the plaza. A cavity created within the mound, about 10 feet (3.0 m) high and 15 feet (4.6 m) wide, allowed for almost perfect preservation of fragile artifacts made of wood, conch shell, and copper. The conditions in this hollow space were so favorable that objects made of perishable materials such as basketry, woven fabric of vegetal and animal fibers, lace, fur, and feathers were preserved inside it. Such objects have traditionally been created by women in historic tribes. Also found inside were several examples of Mississippian stone statuary made from Missouri flint clay and Mill Creek chert bifaces, all thought to have originally come from the Cahokia site in Illinois.
The "Great Mortuary," as archaeologists called this hollow chamber, appears to have begun as a burial structure for Spiro's rulers. It was created as a circle of sacred cedar posts sunk in the ground and angled together at the top like a tipi. The cone-shaped chamber was covered with layers of earth to create the mound, and it never collapsed. Some scholars believe that minerals percolating through the mound hardened the chamber's log walls, making them resistant to decay and shielding the perishable artifacts inside from direct contact with the earth. No other Mississippian mound has been found with such a hollow space inside it and with such spectacular preservation of artifacts. Craig Mound has been called "an American King Tut's Tomb."
Artifact hunters looted Craig Mound between 1933 and 1935, tunneling into the mound and breaking through the Great Mortuary's log wall. They found many human burials, together with their associated grave goods. The looters discarded the human remains and the fragile artifacts, which were made of copper, shell, stone, basketry and textile, traditionally made by women of the culture. Most of these rare and historically priceless objects disintegrated before scholars could reach the site, although some were sold to collectors. The looters dynamited the burial chamber when they were finished and quickly sold the commercially valuable artifacts, made of stone, pottery, and conch shell, to collectors in the United States and overseas. Most of these valuable objects are probably lost, but some have been recovered and documented by scholars.
Cahokia Mounds State Historic Site is located on the site of an ancient Native American city (c. 600-1400 CE) situated directly across the Mississippi River from modern St. Louis, Missouri. This historic park lies in Southern Illinois between East St. Louis and Collinsville. The park, operated by the Illinois Historic Preservation Agency, is quite large, covering 2,200 acres (890 ha), or about 3.5 square miles, and containing about 80 mounds, but the ancient city was actually much larger. In its heyday, Cahokia covered about 6 square miles and included about 120 man-made earthen mounds in a wide range of sizes, shapes, and functions.
Cahokia was the largest and most influential urban settlement in the Mississippian culture which developed advanced societies across much of what is now the Southeastern United States, beginning more than 500 years before European contact. Cahokia's population at its peak in the 1200s was as large as, or larger than, any European city of that time, and its ancient population would not be surpassed by any city in the United States until about the year 1800. Today, Cahokia Mounds is considered the largest and most complex archaeological site north of the great Pre-Columbian cities in Mexico.
Cahokia began to decline after 1300 CE. It was abandoned more than a century before Europeans arrived in North America, in the early 16th century, and the area around it was largely uninhabited by indigenous tribes. Scholars have proposed environmental factors, such as over-hunting and deforestation as explanations. The houses, stockade, and residential and industrial fires would have required the annual harvesting of thousands of logs. In addition, climate change could have aggravated effects of erosion due to deforestation, and adversely affected the cultivation of maize, on which the community had depended.
Another possible cause is invasion by outside peoples, though the only evidence of warfare found so far is the wooden stockade and watchtowers that enclosed Cahokia's main ceremonial precinct. Due to the lack of other evidence for warfare, the palisade appears to have been more for ritual or formal separation than for military purposes. Diseases transmitted among the large, dense urban population are another possible cause of decline. Many recent theories propose conquest-induced political collapse as the primary reason for Cahokia’s abandonment.
Cahokia Mounds is a National Historic Landmark and designated site for state protection. In addition, it is one of only 21 World Heritage Sites within the United States. It is the largest prehistoric earthen construction in the Americas north of Mexico.
Although there is some evidence of Late Archaic period (approximately 1200 BCE) occupation in and around the site, Cahokia as it is now defined was settled around 600 CE, during the Late Woodland period. Mound building at this location began with the Emergent Mississippian cultural period, about the 9th century CE. The inhabitants left no written records beyond symbols on pottery, shell, copper, wood and stone, but the elaborately planned community, woodhenge, mounds and burials reveal a complex and sophisticated society. The city's original name is unknown.
The original site contained 120 earthen mounds over an area of six square miles, of which 80 remain today. To achieve that, thousands of workers over decades moved more than an "estimated 55 million cubic feet of earth in woven baskets to create this network of mounds and community plazas. Monks Mound, for example, covers 14 acres (5.7 ha), rises 100 ft (30 m), and was topped by a massive 5,000 sq ft (460 m2) building another 50 ft (15 m) high."
The Mounds were later named after a clan of historic Illiniwek people living in the area when the first French explorers arrived in the 17th century. As this was centuries after Cahokia was abandoned by its original inhabitants, the Cahokia were not necessarily descendants of the original Mississippian-era people. Scholars do not know which, if any, Native American groups are the living descendants of the people who originally built and lived at the Mound site, although many are plausible. Native American bands migrated through different areas, and those living in territories at the time of European encounter were often not the descendants of peoples who had lived there before.
Monks Mound is the largest Pre-Columbian earthwork in America north of Mesoamerica. Located at the Cahokia Mounds UNESCO World Heritage Site near Collinsville, Illinois, its size was calculated in 1988 as about 100 feet (30 m) high, 955 feet (291 m) long including the access ramp at the southern end, and 775 feet (236 m) wide. This makes Monks Mound roughly the same size at its base as the Great Pyramid of Giza (13.1 acres / 5.3 hectares). Its base circumference is larger than the Pyramid of the Sun at Teotihuacan.
Unlike Egyptian pyramids which were built of stone, the platform mound was constructed almost entirely of layers of basket-transported soil and clay. Because of this construction and its flattened top, over the years, it has retained rainwater within the structure. This has caused "slumping", the avalanche-like sliding of large sections of the sides at the highest part of the mound. Its designed dimensions would have been significantly smaller than its present extent, but recent excavations have revealed that slumping was a problem even while the mound was being made.
Construction of Monks Mound by the Mississippian culture began about 900-950 CE, on a site which had already been occupied by buildings. The original concept seems to have been a much smaller mound, now buried deep within the northern end of the present structure. At the northern end of the summit plateau, as finally completed around 1100 CE, is an area raised slightly higher still, on which was placed a building over 100 ft (30 m) long, the largest in the entire Cahokia Mounds urban zone.
Deep excavations in 2007 confirmed findings from earlier test borings, that several types of earth and clay from different sources had been used successively. Study of various sites suggests that the stability of the mound was improved by the incorporation of bulwarks, some made of clay, others of sods from the Mississippi flood-plain, which permitted steeper slopes than the use of earth alone.
The most recent section of the mound, added some time before 1200 CE, is the lower terrace at the south end, which was added after the northern end had reached its full height. It may partly have been intended to help minimize the slumping which by then was already under way.
Today, the western half of the summit plateau is significantly lower than the eastern; this is the result of massive slumping, beginning about 1200 CE. This also caused the west end of the big building to collapse. It may have led to the abandonment of the mound's high status, following which various wooden buildings were erected on the south terrace, and garbage was dumped at the foot of the mound. By about 1300, the urban society at Cahokia Mounds was in serious decline. When the eastern side of the mound started to suffer serious slumping, it was not repaired.
The Grand Plaza is a large open plaza that spreads out to the south of Monks Mound. Researchers originally thought the flat, open terrain in this area reflected Cahokia's location on the Mississippi's alluvial flood plain but instead soil studies have shown that the landscape was originally undulating. In one of the earliest large-scale construction projects, the site had been expertly and deliberately leveled and filled by the city's inhabitants. It is part of the sophisticated engineering displayed throughout the site. The Grand Plaza covered roughly 50 acres (20 ha) and measured over 1,600 ft (490 m) in length by over 900 ft (270 m) in width. It was used for large ceremonies and gatherings, as well as for ritual games, such as chunkey. Along with the Grand Plaza to the south, three other very large plazas surround Monks Mound in the cardinal directions to the east, west, and north.
The high-status district of Cahokia was surrounded by a long palisade that was equipped with protective bastions. Where the palisade passed, it separated neighborhoods. Archaeologists found evidence of the stockade during excavation of the area and indications that it was rebuilt several times. Its bastions showed that it was mainly built for defensive purposes.
Beyond Monks Mound, as many as 120 more mounds stood at varying distances from the city center. To date, 109 mounds have been located, 68 of which are in the park area. The mounds are divided into several different types: platform, conical, ridge-top, etc. Each appeared to have had its own meaning and function. In general terms, the city center seems to have been laid out in a diamond-shaped pattern approximately 1 mi (1.6 km) from end to end, while the entire city is 5 mi (8.0 km) across from east to west.
The reconstructed Woodhenge, erected in 1985.
Archaeologists discovered postholes during excavation of the site to the west of Monks Mound, revealing a timber circle. Noting that the placement of posts marked solstices and equinoxes, they referred to it as "an American Woodhenge", likening it to England's well-known circles at Woodhenge and Stonehenge. Detailed analytical work supports the hypothesis that the placement of these posts was by design. The structure was rebuilt several times during the urban center's roughly 300-year history. Evidence of another timber circle was discovered near Mound 72, to the south of Monks Mound.
According to Chappell, "A beaker found in a pit near the winter solstice post bore a circle and cross symbol that for many Native Americans symbolizes the Earth and the four cardinal directions. Radiating lines probably symbolized the sun, as they have in countless other civilizations." The woodhenges were significant to the timing of the agricultural cycle.
Etowah Mounds is a 54-acre (220,000 m²) archaeological site in Bartow County, Georgia, south of Cartersville, in the United States. Built and occupied in three phases, from 1000–1550 CE, the prehistoric site is located on the north shore of the Etowah River. Etowah Indian Mounds Historic Site is a designated National Historic Landmark, managed by the Georgia Department of Natural Resources. It is the most intact Mississippian culture site in the Southeastern United States.
These were made during the same Mississippian Temple Mound Building Period, as were mounds at Moundville (near Tuscaloosa, Alabama) and at Cahokia - roughly 700 AD to 1400 AD.
The six flat-topped earthen knolls and a plaza were used for rituals by several thousand Native Americans between 1000 and 1500 A.D. The largest mound has a height of 63 feet. Only nine percent of this site has been excavated, but we already know that the mounds have caves underneath them as do some Mayan and Giza pyramids.
It may also just be a coincidence, but there is a Limonite mine at Etowah. Limonite is an iron-bearing ore with a very special use - as radiation shielding for atomic bomb tests, nuclear reactors and space stations. It is also what gives Mars its red color.
Etowah has three main platform mounds and three lesser mounds. The Temple Mound, Mound A, is 63 feet (19 m) high, taller than a six-story building, and covers 3 acres (12,000 m2) at its base.
In 2005-2008 ground mapping with magnetometers revealed new information and data, showing that the site was much more complex than had previously been believed. The study team has identified a total of 140 buildings on the site. In addition, Mound A was found to have had four major structures and a courtyard at the height of the community's power.
Mounds A and B as seen from Mound C
Mound B is 25 feet (7.6 m) high; Mound C, which rises 10 feet (3.0 m), is the only one to have been completely excavated. Magnetometers enabled archaeologists to determine the location of temples of log and thatch, which were originally built on top of the mounds. Adjacent to the mounds is a raised ceremonial plaza, which was used for ceremonies, stickball and chunkey games, and as a bazaar for trade goods.
When visiting the Etowah Mounds, guests can view the "borrow pits" (which archaeologists at one time thought were moats) which were dug out to create the three large mounds in the center of the park.
Older pottery found on the site suggests that there was an earlier village (ca. 200 BCE-600 CE) associated with the Swift Creek culture. This earlier Middle Woodland period occupation at Etowah may have been related to the major Swift Creek center of Leake Mounds, approximately two miles downstream (west) of Etowah.
War was commonplace; many archaeologists believe the people of Etowah battled for hegemony over the Alabama river basin with those of Moundville, a Mississippian site in present-day Alabama. The town was protected by a sophisticated semicircular fortification system. An outer band formed by nut tree orchards prevented enemy armies from shooting masses of flaming arrows into the town. A 9 feet (2.7 m) to 10 feet (3.0 m) deep moat blocked direct contact by the enemy with the palisaded walls.
It also functioned as a drainage system during major floods, common for centuries, from this period and into the 20th century. Workers formed the palisade by setting upright 12 feet (3.7 m) high logs into a ditch approximately 12 inches (300 mm) on center and then back-filling around the timbers to form a levee. Guard towers for archers were spaced approximately 80 feet (24 m) apart.
The artifacts discovered in burials within the Etowah site indicate that its residents developed an artistically and technically advanced culture. Numerous copper tools, weapons and ornamental copper plates accompanied the burials of members of Etowah's elite class. Where proximity to copper protected the fibers from degeneration, archaeologists also found brightly colored cloth with ornate patterns. These were the remnants of the clothing of social elites. Numerous clay figurines and ten Mississippian stone statues have been found through the years in the vicinity of Etowah. Many are paired statues, which portray a man sitting cross-legged and a woman kneeling. The female figures wear wrap-around skirts and males are usually portrayed without visible clothing, although both usually have elaborate hairstyles. The pair are thought to represent lineage ancestors. Individual statues of young women also show them kneeling, but with additional characteristics such as visible sex organs, which are not visible on the paired statues. This female figure is thought to represent a fertility or Earth Mother goddess. The birdman, hand in eye, solar cross, and other symbols associated with the Southeastern Ceremonial Complex appear in many artifacts found at Etowah.
Warren K. Moorehead's excavations into Mound C at the site revealed a rich array of Mississippian culture burial goods. These artifacts, along with the collections from Cahokia, Moundville Site, Lake Jackson Mounds, and Spiro Mounds, would comprise the majority of the materials which archaeologists used to define the Southeastern Ceremonial Complex (SECC). The professional excavation of this enormous burial mound contributed major research impetus to the study of Mississippian artifacts and peoples. It greatly increased the understanding of pre-Contact Native American artwork.
Lake Jackson Mounds Archaeological State Park (8LE1) is one of the most important archaeological sites in Florida, the capital of a chiefdom and ceremonial center of the Fort Walton Culture, inhabited from about 1050 to 1500. The complex originally included seven earthwork mounds, a public plaza and numerous individual village residences.
One of several major mound sites in the Florida Panhandle, the park is located in northern Tallahassee, on the south shore of Lake Jackson. The complex has been managed as a Florida State Park since 1966. On May 6, 1971, the site was listed on the U.S. National Register of Historic Places as reference number 71000241.
The site was built and occupied between 1000 and 1500 by people of the Fort Walton culture, the southernmost expression of the Mississippian culture. The scale of the site and the number and size of the mounds indicate that this was the site of a regional chiefdom, and was thus a political and religious center.
After the abandonment of the Lake Jackson site the chiefdom seat was moved to Anhaica (rediscovered in 1987 by B. Calvin Jones and located within DeSoto Site Historic State Park), where in 1539 it was visited by the Hernando de Soto entrada, who knew the residents as the historic Muskogean-speaking Apalachee people. Other related Fort Walton sites are located at Velda Mound (also a park), Cayson Mound and Village Site and Yon Mound and Village Site.
When the site was abandoned it was a large complex (19.0 hectares (0.073 sq mi)) that included seven platform mounds, six arranged near a plaza and a seventh (Mound 1) located 250 metres (820 ft) to the north. The mounds were the result of skilled planning, knowledge of soils and organization of numerous laborers over the period of many years. The ceremonial plaza was a large flat area, constructed and leveled for this purpose, where ritual games and gatherings took place.
Diagram showing the various components of mound construction
The area around the mounds and plaza had several areas of heavy village habitation with individual residences, where artisans and workers lived. There were also communal agricultural fields in the surrounding countryside, where the people cultivated maize in the rich local soil, the major reason such a dense population and large site were possible. Only a few of the mounds in the park have been systematically excavated by archaeologists.
The site itself is oriented on an east-west axis, perpendicular to the north-south axis of the Meginnis Arm, a nearby extension of Lake Jackson. All of the mounds are laid out to reflect this alignment, although it is unclear whether this is symbolic or merely the result of the lake arm's orientation.
The layout and arrangement of the mounds in the central area of the site suggests that there may have been two large plaza areas. Mounds 2, 3, 4, and 5 form a large rectangular shape that was mostly free of debris. Mounds 2, 3, 6, and 7 also form a rectangular shape that suggests it too was a plaza. Both plazas would have had Butler's Mill Creek (a small stream that once bisected these areas, but whose course was altered in historic times) running through them. Excavations have shown that a clean area between Mounds 2 and 4 was a plaza, but not enough work has been done at the rest of the site to confirm the larger dimension suggested by the first arrangement or the existence of a plaza at the second arrangement at all.
A view of the site from the top of Mound B looking toward Mound A and the plaza.
Moundville Archaeological Site also known as the Moundville Archaeological Park, is a Mississippian culture site on the Black Warrior River in Hale County, near the town of Tuscaloosa, Alabama. Extensive archaeological investigation has shown that the site was the political and ceremonial center of a regionally organized Mississippian culture chiefdom polity between the 11th and 16th centuries.
The archaeological park portion of the site is administered by the University of Alabama Museums and encompasses 185 acres (75 ha), consisting of 29 platform mounds around a rectangular plaza. The site was declared a National Historic Landmark in 1964 and was added to the National Register of Historic Places in 1966.
Moundville is the second-largest site of the classic Middle Mississippian era, after Cahokia in Illinois. The culture was expressed in villages and chiefdoms throughout the central Mississippi River Valley, the lower Ohio River Valley, and most of the Mid-South area, including Kentucky, Tennessee, Alabama, and Mississippi as the core of the classic Mississippian culture area. The park contains a museum and an archaeological laboratory.
The site was occupied by Native Americans of the Mississippian culture from around 1000 AD to 1450 AD. Around 1150 AD it began its rise from a local to a regional center. At its height, the community took the form of a roughly 300-acre (121 ha) residential and political area protected on three sides by a bastioned wooden palisade wall, with the remaining side protected by the river bluff.
A view across the plaza from mound J to mound B, with mound A in the center.
The largest platform mounds are located on the northern edge of the plaza and become increasingly smaller going either clockwise or counter clockwise around the plaza to the south. Scholars theorize that the highest-ranking clans occupied the large northern mounds, with the smaller mounds' supporting buildings used for residences, mortuary, and other purposes.
Of the two largest mounds in the group, Mound A occupies a central position in the great plaza, and Mound B lies just to the north, a steep pyramidal mound with two access ramps that rises to a height of 58 feet (18 m). Along with both mounds, archaeologists have also found evidence of borrow pits, other public buildings, and a dozen small houses constructed of pole and thatch.
Archaeologists have interpreted this community plan as a sociogram, an architectural depiction of a social order based on ranked clans. According to this model, the Moundville community was segmented into a variety of different clan precincts, the ranked position of which was represented in the size and arrangement of paired earthen mounds around the central plaza.
By 1300, the site was being used more as a religious and political center than as a residential town. This signaled the beginning of a decline, and by 1500 most of the area was abandoned.
Crooks Mound in Louisiana is a large, conical burial mound that was built up over at least six episodes of burial. It is located in La Salle Parish in south central Louisiana. It measured about 16 ft high (4.9 m) and 85 ft wide (26 m). It contained roughly 1,150 remains that were placed however they could be fitted into the structure of the mound; sometimes body parts were removed in order to achieve that goal. Archaeologists think it was a holding house for the area that was emptied periodically in order to achieve this type of setup.
Most of the time, the people were simply placed into the mound, but a few of the burials were in log-lined tombs or, more rarely, stone-lined tombs. Only a few of the burials included copper tools as grave goods, which suggests that the area was mainly used for burying common people. The site is on private land with no public access, but it can be viewed from the roadway.
Two separate mounds make up the site. In 1938-1939 the site was completely excavated under the direction of James A. Ford. The mounds were 1,200 feet (370 m) southeast of French Fork Bayou and 450 feet (140 m) southwest of Cypress Bayou. Mound A was a conical mound that stood 21 feet high and 84 feet in diameter.
Mound B was 2 feet (0.61 m) high and 50 feet (15 m) in diameter and located 110 feet (34 m) southwest of Mound A. Excavations revealed that Mound A had been built in three stages; Mound B was a single-stage structure. The mounds held 1,175 burials: 1,159 from Mound A, and 13 from Mound B (3 unknown). Pottery accompanied some burials; the weight of mound fill apparently crushed the vessels. The mounds were used for burials around 100 BCE to 400 CE. No evidence for domestic structures exists on or near the mounds, leading archaeologists to believe they were strictly for mortuary purposes.
The Miamisburg Mound -- Miamisburg is the location of a prehistoric Indian burial mound (tumulus), believed to have been built by the Adena Culture, about 1000 to 200 BCE. Once serving as an ancient burial site, the mound has become perhaps the most recognizable historic landmark in Miamisburg. It is the largest conical burial mound in Ohio, originally nearly 70 feet tall (the height of a seven-story building) and 877 feet in circumference; it remains virtually intact from its construction perhaps 2500 years ago.
Located in a city park at 900 Mound Avenue, it has been designated an Ohio historical site. It is a popular attraction and picnic destination for area families. Visitors can climb to the top of the Mound, via the 116 concrete steps built into its side.
| http://www.crystalinks.com/pyrnorthamerica.html | 13
62 | A Gentle Introduction to Haskell, Version 98
In this section we introduce the predefined standard type classes in Haskell. We have simplified these classes somewhat by omitting some of the less interesting methods in these classes; the Haskell report contains a more complete description. Also, some of the standard classes are part of the standard Haskell libraries; these are described in the Haskell Library Report.
The classes Eq and Ord have already been discussed. The
definition of Ord in the Prelude is somewhat more complex than the
simplified version of Ord presented earlier. In particular, note the compare method:
data Ordering = LT | EQ | GT
compare :: Ord a => a -> a -> Ordering
The compare method is sufficient to define all other methods (via defaults) in this class and is the best way to create Ord instances.
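For example, a minimal Ord instance for a small wrapper type might define only compare and let the comparison operators come from the class defaults (the type here is invented for illustration):
data Version = Version Int deriving (Eq, Show)
instance Ord Version where
    compare (Version m) (Version n) = compare m n   -- (<), (<=), max and min then come from the defaults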
Class Enum has a set of operations that underlie the syntactic sugar of arithmetic sequences; for example, the arithmetic sequence expression [1,3..] stands for enumFromThen 1 3 (see §3.10 for the formal translation). We can now see that arithmetic sequence expressions can be used to generate lists of any type that is an instance of Enum. This includes not only most numeric types, but also Char, so that, for instance, ['a'..'z'] denotes the list of lower-case letters in alphabetical order. Furthermore, user-defined enumerated types like Color can easily be given Enum instance declarations. If so:
[Red .. Violet] => [Red, Green, Blue, Indigo, Violet]
Note that such a sequence is arithmetic in the sense that the increment between values is constant, even though the values are not numbers. Most types in Enum can be mapped onto fixed-precision integers; for these, the functions fromEnum and toEnum convert between Int and a type in class Enum.
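For instance (illustrative evaluations using the standard Char and Int instances):
fromEnum 'a'            =>  97
(toEnum 98) :: Char     =>  'b'
[10,8..0] :: [Int]      =>  [10,8,6,4,2,0]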
The instances of class Show are those types that can be converted
to character strings (typically for I/O). The class Read
provides operations for parsing character strings to obtain the values
they may represent. The simplest function in the class Show is
show :: (Show a) => a -> String
Naturally enough, show takes any value of an appropriate type and returns its representation as a character string (list of characters), as in show (2+2), which results in "4". This is fine as far as it goes, but we typically need to produce more complex strings that may have the representations of many values in them, as in
"The sum of " ++ show x ++ " and " ++ show y ++ " is " ++ show (x+y) ++ "."
and after a while, all that concatenation gets to be a bit inefficient. Specifically, let's consider a function to represent the binary trees of Section 2.2.1 as a string, with suitable markings to show the nesting of subtrees and the separation of left and right branches (provided the element type is representable as a string):
showTree :: (Show a) => Tree a -> String
showTree (Leaf x) = show x
showTree (Branch l r) = "<" ++ showTree l ++ "|" ++ showTree r ++ ">"
Because (++) has time complexity linear in the length of its left argument, showTree is potentially quadratic in the size of the tree.
To restore linear complexity, the function shows is provided:
shows :: (Show a) => a -> String -> String
shows takes a printable value and a string and returns that string with the value's representation concatenated at the front. The second argument serves as a sort of string accumulator, and show can now be defined as shows with the null accumulator. This is the default definition of show in the Show class definition:
show x = shows x ""
We can use shows to define a more efficient version of showTree, which also has a string accumulator argument:
showsTree :: (Show a) => Tree a -> String -> String
showsTree (Leaf x) s = shows x s
showsTree (Branch l r) s= '<' : showsTree l ('|' : showsTree r ('>' : s))
This solves our efficiency problem (showsTree has linear complexity), but the presentation of this function (and others like it) can be improved. First, let's create a type synonym:
type ShowS = String -> String
This is the type of a function that returns a string representation of something followed by an accumulator string. Second, we can avoid carrying accumulators around, and also avoid amassing parentheses at the right end of long constructions, by using functional composition:
showsTree :: (Show a) => Tree a -> ShowS
showsTree (Leaf x) = shows x
showsTree (Branch l r) = ('<':) . showsTree l . ('|':) . showsTree r . ('>':)
Something more important than just tidying up the code has come about by this transformation: we have raised the presentation from an object level (in this case, strings) to a function level. We can think of the typing as saying that showsTree maps a tree into a showing function. Functions like ('<' :) or ("a string" ++) are primitive showing functions, and we build up more complex functions by function composition.
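As a quick check of this idea, composing a few primitive showing functions and applying the result to an empty accumulator gives (an illustrative evaluation):
(('<':) . ("ab" ++) . ('>':)) ""    =>  "<ab>"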
Now that we can turn trees into strings, let's turn to the inverse
problem. The basic idea is a parser for a type a, which
is a function that takes a string and returns a list of (a, String)
pairs. The Prelude provides
a type synonym for such functions:
type ReadS a = String -> [(a,String)]
Normally, a parser returns a singleton list, containing a value of type a that was read from the input string and the remaining string that follows what was parsed. If no parse was possible, however, the result is the empty list, and if there is more than one possible parse (an ambiguity), the resulting list contains more than one pair. The standard function reads is a parser for any instance of Read:
reads :: (Read a) => ReadS a
We can use this function to define a parsing function for the string representation of binary trees produced by showsTree. List comprehensions give us a convenient idiom for constructing such parsers: (An even more elegant approach to parsing uses monads and parser combinators. These are part of a standard parsing library distributed with most Haskell systems.)
readsTree :: (Read a) => ReadS (Tree a)
readsTree ('<':s) = [(Branch l r, u) | (l, '|':t) <- readsTree s,
(r, '>':u) <- readsTree t ]
readsTree s = [(Leaf x, t) | (x,t) <- reads s]
Let's take a moment to examine this function definition in detail. There are two main cases to consider: If the first character of the string to be parsed is '<', we should have the representation of a branch; otherwise, we have a leaf. In the first case, calling the rest of the input string following the opening angle bracket s, any possible parse must be a tree Branch l r with remaining string u, subject to the following conditions: the tree l can be parsed from the beginning of the string s; the string remaining from that parse begins with '|', and its tail is t; the tree r can be parsed from the beginning of t; and the string remaining from that parse begins with '>', with u as its tail.
The second defining equation above just says that to parse the representation of a leaf, we parse a representation of the element type of the tree and apply the constructor Leaf to the value thus obtained.
We'll accept on faith for the moment that there is a Read (and Show) instance of Integer (among many other types), providing a reads that behaves as one would expect, e.g.:
(reads "5 golden rings") :: [(Integer,String)] => [(5, " golden rings")]
With this understanding, the reader should verify the following evaluations:
readsTree "<1|<2|3>>"  =>  [(Branch (Leaf 1) (Branch (Leaf 2) (Leaf 3)), "")]
There are a couple of shortcomings in our definition of readsTree.
One is that the parser is quite rigid, allowing no white space before
or between the elements of the tree representation; the other is that
the way we parse our punctuation symbols is quite different from the
way we parse leaf values and subtrees, this lack of uniformity making
the function definition harder to read. We can address both of these
problems by using the lexical analyzer provided by the Prelude:
lex :: ReadS String
lex normally returns a singleton list containing a pair of strings: the first lexeme in the input string and the remainder of the input. The lexical rules are those of Haskell programs, including comments, which lex skips, along with whitespace. If the input string is empty or contains only whitespace and comments, lex returns [("","")]; if the input is not empty in this sense, but also does not begin with a valid lexeme after any leading whitespace and comments, lex returns [].
Using the lexical analyzer, our tree parser now looks like this:
readsTree :: (Read a) => ReadS (Tree a)
readsTree s = [(Branch l r, x) | ("<", t) <- lex s,
(l, u) <- readsTree t,
("|", v) <- lex u,
(r, w) <- readsTree v,
(">", x) <- lex w ]
++ [(Leaf x, t) | (x, t) <- reads s ]
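With this version, whitespace around the tokens is accepted; for example (an illustrative evaluation):
readsTree "< 1 | 2 > rest" :: [(Tree Integer, String)]    =>  [(Branch (Leaf 1) (Leaf 2), " rest")]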
We may now wish to use readsTree and showsTree to declare
(Read a) => Tree a an instance of Read and (Show a) => Tree a an
instance of Show. This would allow us to
use the generic overloaded functions from the Prelude to parse and
display trees. Moreover, we would automatically then be able to parse
and display many other types containing trees as components, for
example, [Tree Integer]. As it turns out, readsTree and showsTree
are of almost the right types to be Show and Read methods. The showsPrec
and readsPrec methods are parameterized versions of shows and
reads. The extra parameter is a precedence level, used to properly
parenthesize expressions containing infix constructors. For types
such as Tree, the precedence can be ignored. The Show and Read
instances for Tree are:
instance Show a => Show (Tree a) where
showsPrec _ x = showsTree x
instance Read a => Read (Tree a) where
readsPrec _ s = readsTree s
Alternatively, the Show instance could be defined in terms of showTree:
instance Show a => Show (Tree a) where
show t = showTree t
This, however, will be less efficient than the ShowS version. Note that the Show class defines default methods for both showsPrec and show, allowing the user to define either one of these in an instance declaration. Since these defaults are mutually recursive, an instance declaration that defines neither of these functions will loop when called. Other classes such as Num also have these "interlocking defaults".
We refer the interested reader to §D for details of the Read and Show classes.
We can test the Read and Show instances by applying (read . show)
(which should be the identity) to some trees, where read is a
specialization of reads:
read :: (Read a) => String -> a
This function fails if there is not a unique parse or if the input contains anything more than a representation of one value of type a (and possibly, comments and whitespace).
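For example (illustrative evaluations):
read "42" :: Integer        =>  42
read " [1,2,3] " :: [Int]   =>  [1,2,3]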
Recall the Eq instance for trees we presented in Section
5; such a declaration is
simple---and boring---to produce: we require that the
element type in the leaves be an equality type; then, two leaves are
equal iff they contain equal elements, and two branches are equal iff
their left and right subtrees are equal, respectively. Any other two
trees are unequal:
instance (Eq a) => Eq (Tree a) where
(Leaf x) == (Leaf y) = x == y
(Branch l r) == (Branch l' r') = l == l' && r == r'
_ == _ = False
Fortunately, we don't need to go through this tedium every time we
need equality operators for a new type; the Eq instance can be
derived automatically from the data declaration if we so specify:
data Tree a = Leaf a | Branch (Tree a) (Tree a) deriving Eq
The deriving clause implicitly produces an Eq instance declaration just like the one in Section 5. Instances of Ord, Enum, Ix, Read, and Show can also be generated by the deriving clause. [More than one class name can be specified, in which case the list of names must be parenthesized and the names separated by commas.]
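For example, the Color type used earlier could plausibly be declared with several derived instances at once (a sketch, not a declaration taken from the report):
data Color = Red | Green | Blue | Indigo | Violet  deriving (Eq, Ord, Enum, Show, Read)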
The derived Ord instance for Tree is slightly more complicated than
the Eq instance:
instance (Ord a) => Ord (Tree a) where
    (Leaf _)     <= (Branch _ _)    = True
    (Leaf x)     <= (Leaf y)        = x <= y
    (Branch _ _) <= (Leaf _)        = False
    (Branch l r) <= (Branch l' r')  = l < l' || (l == l' && r <= r')
This specifies a lexicographic order: Constructors are ordered by the order of their appearance in the data declaration, and the arguments of a constructor are compared from left to right. Recall that the built-in list type is semantically equivalent to an ordinary two-constructor type. In fact, this is the full declaration:
data [a] = [] | a : [a] deriving (Eq, Ord) -- pseudo-code
(Lists also have Show and Read instances, which are not derived.) The derived Eq and Ord instances for lists are the usual ones; in particular, character strings, as lists of characters, are ordered as determined by the underlying Char type, with an initial substring comparing less than a longer string; for example, "cat" < "catalog".
In practice, Eq and Ord instances are almost always derived, rather than user-defined. In fact, we should provide our own definitions of equality and ordering predicates only with some trepidation, being careful to maintain the expected algebraic properties of equivalence relations and total orders. An intransitive (==) predicate, for example, could be disastrous, confusing readers of the program and confounding manual or automatic program transformations that rely on the (==) predicate's being an approximation to definitional equality. Nevertheless, it is sometimes necessary to provide Eq or Ord instances different from those that would be derived; probably the most important example is that of an abstract data type in which different concrete values may represent the same abstract value.
An enumerated type can have a derived Enum instance, and here again,
the ordering is that of the constructors in the data declaration.
data Day = Sunday | Monday | Tuesday | Wednesday
| Thursday | Friday | Saturday deriving (Enum)
Here are some simple examples using the derived instances for this type:
[Wednesday .. Friday]    =>  [Wednesday, Thursday, Friday]
[Monday, Wednesday ..]   =>  [Monday, Wednesday, Friday]
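Because the derived Enum instance numbers the constructors from 0 in declaration order, we also get, for example (illustrative evaluations that do not rely on a Show instance):
fromEnum Friday      =>  5
length [Monday ..]   =>  6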
Derived Read (Show) instances are possible for all types whose component types also have Read (Show) instances. (Read and Show instances for most of the standard types are provided by the Prelude. Some types, such as the function type (->), have a Show instance but not a corresponding Read.) The textual representation defined by a derived Show instance is consistent with the appearance of constant Haskell expressions of the type in question. For example, if we add Show and Read to the deriving clause for type Day, above, we obtain
show [Monday .. Wednesday] => "[Monday,Tuesday,Wednesday]" | http://www.haskell.org/tutorial/stdclasses.html | 13 |
58 |
The use of trigonometric functions arises from the early connection between mathematics and astronomy. Early work with spherical triangles was as important as plane triangles.
The first work on trigonometric functions related to chords of a circle. Given a circle of fixed radius (60 units were often used in early calculations), the problem was to find the length of the chord subtended by a given angle. For a circle of unit radius the length of the chord subtended by the angle x was 2 sin(x/2). The first known table of chords was produced by the Greek mathematician Hipparchus in about 140 BC. Although these tables have not survived, it is claimed that twelve books of tables of chords were written by Hipparchus. This makes Hipparchus the founder of trigonometry.
The next Greek mathematician to produce a table of chords was Menelaus in about 100 AD. Menelaus worked in Rome producing six books of tables of chords which have been lost but his work on spherics has survived and is the earliest known work on spherical trigonometry. Menelaus proved a property of plane triangles and the corresponding spherical triangle property known as the regula sex quantitatum.
Ptolemy was the next author of a book of chords, showing the same Babylonian influence as Hipparchus, dividing the circle into 360° and the diameter into 120 parts. The suggestion here is that he was following earlier practice when the approximation 3 for π was used. Ptolemy, together with the earlier writers, used a form of the relation sin²x + cos²x = 1, although of course they did not actually use sines and cosines but chords.
Similarly, in terms of chords rather than sin and cos, Ptolemy knew the formulas
sin(x + y) = sinx cos y + cosx sin y
a/sin A = b/sin B = c/sin C.
Ptolemy calculated chords by first inscribing regular polygons of 3, 4, 5, 6 and 10 sides in a circle. This allowed him to calculate the chord subtended by angles of 36°, 72°, 60°, 90° and 120°. He then found a method of finding the chord subtended by half the arc of a known chord, and this, together with interpolation, allowed him to calculate chords with a good degree of accuracy. Using these methods Ptolemy found that sin 30' (30' = half of 1°), which is half the chord of 1° in a circle of unit radius, was, as a number to base 60, 0 31' 25". Converted to decimals this is 0.0087268 which is correct to 6 decimal places, the answer to 7 decimal places being 0.0087265.
The sine of an angle first appears in the work of the Hindus. Aryabhata, in about 500, gave tables of half chords which now really are sine tables and used jya for our sin. This same table was reproduced in the work of Brahmagupta (in 628), and a detailed method for constructing a table of sines for any angle was given by Bhaskara in 1150.
The Arabs worked with sines and cosines and by 980 Abu'l-Wafa knew that
sin 2x = 2 sin x cos x
although it could easily have been deduced from Ptolemy's formula sin(x + y) = sin x cos y + cos x sin y with x = y.
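Indeed, setting y = x in Ptolemy's formula gives sin 2x = sin(x + x) = sin x cos x + cos x sin x = 2 sin x cos x.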
The Hindu word jya for the sine was adopted by the Arabs who called the sine jiba, a meaningless word with the same sound as jya. Now jiba became jaib in later Arab writings and this word does have a meaning, namely a 'fold'. When European authors translated the Arabic mathematical works into Latin they translated jaib into the word sinus meaning fold in Latin. In particular Fibonacci's use of the term sinus rectus arcus soon encouraged the universal use of sine.
Chapters of Copernicus's book giving all the trigonometry relevant to astronomy were published in 1542 by Rheticus. Rheticus also produced substantial tables of sines and cosines which were published after his death. In 1533 Regiomontanus's work De triangulis omnimodis was published. This contained work on planar and spherical trigonometry originally done much earlier in about 1464. The book is particularly strong on the sine and its inverse.
The term sine certainly was not accepted straight away as the standard notation by all authors. In times when mathematical notation was in itself a new idea many used their own notation. Edmund Gunter was the first to use the abbreviation sin in 1624 in a drawing. The first use of sin in a book was in 1634 by the French mathematician Hérigone while Cavalieri used Si and Oughtred S.
It is perhaps surprising that the second most important trigonometrical function during the period we have discussed was the versed sine, a function now hardly used at all. The versine is related to the sine by the formula
versin x = 1 - cos x.
It is just the sine turned (versed) through 90°.
The cosine follows a similar course of development in notation as the sine. Viète used the term sinus residuae for the cosine, Gunter (1620) suggested co-sinus. The notation Si.2 was used by Cavalieri, s co arc by Oughtred and S by Wallis.
Viète knew formulas for sin nx in terms of sin x and cos x. He gave explicitly the formulas (due to Pitiscus)
sin 3x = 3 cos²x sin x - sin³x
cos 3x = cos³x - 3 sin²x cos x.
The tangent and cotangent came via a different route from the chord approach of the sine. These developed together and were not at first associated with angles. They became important for calculating heights from the length of the shadow that the object cast. The length of shadows was also of importance in the sundial. Thales used the lengths of shadows to calculate the heights of pyramids.
The first known tables of shadows were produced by the Arabs around 860 and used two measures translated into Latin as umbra recta and umbra versa. Viète used the terms amsinus and prosinus. The name tangent was first used by Thomas Fincke in 1583. The term cotangens was first used by Edmund Gunter in 1620.
Abbreviations for the tan and cot followed a similar development to those of the sin and cos. Cavalieri used Ta and Ta.2, Oughtred used t arc and t co arc while Wallis used T and t. The common abbreviation used today is tan; the first occurrence of this abbreviation was by Albert Girard in 1626, but he wrote tan above the angle.
The secant and cosecant were not used by the early astronomers or surveyors. These came into their own when navigators around the 15th Century started to prepare tables. Copernicus knew of the secant which he called the hypotenusa. Viète knew the results
cosec x/sec x = cot x = 1/tan x
1/cosec x = cos x/cot x = sin x.
The abbreviations used by various authors were similar to the trigonometric functions already discussed. Cavalieri used Se and Se.2, Oughtred used se arc and sec co arc while Wallis used s and σ. Albert Girard used sec, written above the angle as he did for the tan.
The term 'trigonometry' first appears as the title of a book Trigonometria by B Pitiscus, published in 1595. Pitiscus also discovered the formulas for sin 2x, sin 3x, cos 2x, cos 3x.
The 18th Century saw trigonometric functions of a complex variable being studied. Johann Bernoulli found the relation between sin⁻¹z and log z in 1702 while Cotes, in a work published in 1722 after his death, showed that
ix = log(cos x + i sin x ).
De Moivre published his famous theorem
(cos x + i sin x)ⁿ = cos nx + i sin nx
in 1722 while Euler, in 1748, gave the formula (equivalent to that of Cotes)
exp(ix) = cos x + i sin x .
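In modern notation De Moivre's theorem follows at once from Euler's formula, since (cos x + i sin x)ⁿ = (exp(ix))ⁿ = exp(inx) = cos nx + i sin nx.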
The hyperbolic trigonometric functions were introduced by Lambert.
Article by: J J O'Connor and E F Robertson
| http://turnbull.dcs.st-and.ac.uk/~history/HistTopics/Trigonometric_functions.html | 13
165 | This page gives a background to some of the clips shown in the multimedia tutorial.
We all know about physical work, so we started the tutorial with this example, which also gives an idea of the size of the quantities involved. We begin with the calculations behind the histograms we showed. These are 20 kg bags so the weight of each is about 200 newtons down, which is the grey arrow. Normally we don't say 'down' in this context, because weight is always in a direction close to down. I did so here to remind you that weight is a vector. So let's write, for one bag,
W = − mgj = − (200 N)j.
Check that notation: weight, W, is a vector, whereas work, W, is a scalar. (Occasionally we shall also need W for the magnitude of W, but you will know from context which is which.) The motion of the bags is slow, their accelerations are small compared to g, so the force required to accelerate them is small compared to their weight. So when I lift them, I'm applying a force F ≅ − W = (200 N)j, which is the black arrow. If you remember the scalar product, you'll know that i.j = 0 but j.j = 1. So, if I apply this constant force over a displacement Δs = Δxi + Δyj, the work (W) I do is
W = F.Δs = (mgj).(Δxi + Δyj) = mg Δy
(which, as we show in the multimedia tutorial, is the increase in potential energy of a mass m in a gravitational field of magnitude g when raised a height Δy). So, for the first bag, the force is 200 N, I lift it about 0.7 m (the red arrow), so I do 140 J of work. The joule (symbol J) is the SI unit of work: one newton times one metre. The joule is not very big on a human scale: lift a small apple (weight about 1 N) through a height of 1 m and you've done 1 joule of work on the apple – but rather more than that in moving your arm! Similarly, although I only do 140 J on the bag, I do more work on moving my arms and torso. It's possible for a fit person to do a megajoule of work in an hour.
The pile is now shorter so I must lift the second bag through a larger increase in height: which reminds me that work is proportional to displacement!
Work is also proportional to the force, so lifting two bags requires twice the force and I do twice as much work as on the first bag.
Hmm, I've not planned this well and must lift the last two bags further: 400 N times 1.5 m is 600 J.
Look at the big displacement of the trolley and how easy it is. The trolley is supporting six bags, so the force it applies upwards has magnitude 1.2 kN (black arrow), and it moves several metres (red arrow). But the force is in the j (or y) direction and the displacement in the i (or x) direction: they are at right angles. Remember
i.j = 0. Or F.Δs = F Δs cos θ = 0.
So no work is done.
Lifting 20 kg bags (weight = 200 N) is not so hard. Lifting my own 70 kg mass (weight W = 700 N) requires more force. But not if we use pulleys (which are discussed in more detail in the physics of blocks and pulleys).
Here, a single rope goes from the support, down to my harness, round the pulley, back to the support, round another pulley and back to my hands. The pulleys turn easily, so the tension T in each section of the rope is the same. There are three sections pulling me upwards. From Newton's second law, the total force acting on me equals my mass times my acceleration. Compared with g, my acceleration here is negligible. So
3T + W = ma ≅ 0
So, if we neglect my (modest) acceleration, the force of the three sections pulling upwards on me equals my weight: the magnitude of the tension is T = |W|/3 = (700 N)/3.
So the force I need supply with my arms is reduced. However, to lift my body through say 2 m in altitude, I must still do (700 N)(2 m) = 1.4 kJ of work. How is this possible?
Well, each of the three sections of rope shortens by 2 m. So my hands pull the rope 6 m. I do work (6 m)(700 N)/3 = 1.4 kJ of work (plus a bit extra to overcome friction in the pulleys). Like levers, blocks and pulleys don't save you work, but they can reduce (or increase) the force, which can make a task more convenient and comfortable.
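Here is the same bookkeeping as a few lines of Python, using the values above (700 N weight, 2 m rise, three rope sections) and ignoring pulley friction:

```python
W_weight = 700.0                    # N, my weight
n_sections = 3                      # rope sections pulling upwards
rise = 2.0                          # m, increase in altitude

T = W_weight / n_sections           # tension my arms must supply, ≈ 233 N
rope_pulled = n_sections * rise     # each section shortens by `rise`, so 6 m

work_by_hands = T * rope_pulled     # what I do
work_on_body = W_weight * rise      # increase in my potential energy
print(T, rope_pulled, work_by_hands, work_on_body)   # 233 N, 6 m, 1400 J, 1400 J
```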
Kinetic energy and the work energy theorem
The multimedia tutorial presents this theorem, but perhaps you'd like to see it again here. Let's apply a constant force F to a mass m as it moves, in one dimension, a distance x. (It might, for instance, be the magnetic force that we used in our section on Newton's laws.) The force is constant over x, so the work done ∫ F.dx increases linearly with x.
Once we relate this to time and velocity, we shall have to do the integration. (Remember that there is help with calculus.) So let's consider the case – still one dimensional – in which the force is applied over a short distance dx, and that the mass m increases in velocity from v to v+dv.
The total work done on the mass is
dW = Fdx
where F is the total force acting on the mass. Substituting from Newton's second law, F = ma = m(dv/dt), gives:
dW = m(dv/dt)dx = m*dx*dv/dt
where we have written the multiplication and division explicitly.
dx, dv and dt are all small quantities, but there is no reason why we cannot change the order of multiplication. So let's write:
dW = m*dv*dx/dt = m*v*dv
The advantage of this rearrangement is that we can now do the integral easily:
W = ∫ dW = ∫ mv.dv
Suppose we start from v = 0, then the total work done to accelerate mass m from rest to a speed v is:
W = ∫ mv.dv = ½mv2
This quantity is so useful that we give it a name, the kinetic energy and write
K = ½mv2
So you don't like calculus? Let's use the equations from one-dimensional kinematics (for which there is a multimedia tutorial). Let's suppose that a body starts from rest and that we apply a constant (total) force F for a certain time T in one example, and for twice that time (2T) in a later example (the black graphs at right).
The final velocity will be v = aT = (F/m)T in the first example, and twice that value in the second example (the red graphs at right).
The distance travelled while the force is acting, i.e. the distance travelled during the acceleration, is now four times as great, as shown in the purple graphs at right. So the constant force has been applied over four times the distance, and has done four times the work. So, even though the velocity has only doubled, we have done four times as much work (blue graph at right).
That is an important consequence: at twice the speed, a mass has four times the kinetic energy. This has important implications for road safety, as we see next.
Stopping distances and the work energy theorem
If I travel twice as fast on my bicycle, how much further does it take to stop? (I include only the distance after I apply the brakes, not the time it takes me to react to danger and to apply the brakes.
At twice the speed, my kinetic energy K = ½mv2 is four times as great. So, to do four times as much (negative) work, the braking force (assumed constant) must be applied over four times the distance. Please remember this on the road.
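A short numerical sketch of this scaling; the mass and the (constant) braking force are made-up values:

```python
m = 80.0          # kg, rider plus bicycle (assumed)
F_brake = 200.0   # N, constant braking force (assumed)

for v in (5.0, 10.0):        # m/s: a speed, then twice that speed
    K = 0.5 * m * v**2       # kinetic energy
    d = K / F_brake          # work-energy theorem: F_brake * d = K
    print(v, K, d)
# 5 m/s  -> K = 1000 J, d = 5 m
# 10 m/s -> K = 4000 J, d = 20 m  (four times the energy, four times the distance)
```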
Suppose I slowly lift a mass in a gravitational field. In this clip from the multimedia tutorial, the rope, with a little assistance from me, is slowly lifting a container of water. The tension force F is doing work W on the container, but it is not increasing its kinetic energy. The reason, of course, is that the weight mg of the container is pulling the other way: it is doing negative work on the container. However, the work W is not lost: we can recover it: we can slowly lower the container, and thus lift the brick on the other end of the rope.
So where is the work done by F going when we lift the container? It is, in a sense, stored in the gravitational interaction between the container and the earth. This 'stored work' has the potential to do work for us. This is an example of potential energy – in this case gravitational potential energy. So, how much potential energy do we store in this case?
We use a force F to move an object of mass m a displacement ds in a gravitational field, so we do work
dW = F.ds ,
(where you might wish to revise vectors). Suppose that we are moving it in such a way that we do not change its velocity (and so don't change its kinetic energy). Then the total force on it is zero, so
F + mg = 0, so
dW = −mg.ds .
However, g is in the negative vertical direction, say the minus y direction, so
dW = mgdy.
For displacements on the planetary scale, we'd have to consider the variation of the gravitational field with height, which we do in the section on gravity. For more modest displacements, g is uniform and integration gives us
∫ dUgrav = ∫ dW = ∫ mg dy = mgΔy = mgΔh
where h is commonly used for the vertical coordinate. U is defined by an integral, and integrals require a constant of integration. For potential energy, this constant is the reference for the zero of potential energy. If we define Ugrav to be zero at h = 0, then we can write
Ugrav = mgh.
As we shall see, not all forces allow one to define a potential energy. However, another example is the
Potential energy of a spring.
If we slowly compress or extend a spring from its resting position, again we do work without creating kinetic energy. But again it is 'stored' – we can get it back. From Hooke's law, the force exerted by a spring is Fspring = − kx, where x is the displacement from its unstretched length, and k is the spring constant for that particular spring. Because we are not accelerating anything, we have to apply a force F = − Fspring = kx, so the work we do over a small displacement dx is dUspring = dW = kx dx, which integrates to Uspring = ½kx2 plus a constant.
Again, we have a constant of integration and a zero of potential energy to define. Usually, we set U = 0 at x = 0, so
Uspring = ½kx2.
Note that, with this reference value, Uspring is always positive: with respect to the unstressed state, both stretching (x > 0) and compressing (x < 0) require work, so the potential energy is positive in each case.
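Here is a minimal numerical check that the work done in stretching the spring equals ½kx2; the spring constant and the extension are assumed values:

```python
k = 50.0        # N/m, assumed spring constant
x_end = 0.2     # m, final extension
N = 100000      # integration steps

dx = x_end / N
# midpoint rule for the work  ∫ kx dx  from 0 to x_end
W = sum(k * (i + 0.5) * dx * dx for i in range(N))
print(W, 0.5 * k * x_end**2)    # both ≈ 1.0 J
```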
In the film clip, I do work to store potential energy in the spring, the spring then does work on the mass, giving it kinetic energy. Biochemical energy in my arm was converted into potential energy in the spring and then to kinetic energy.
Conservative and non-conservative forces
Let's look at the work that I do in moving a mass in a gravitational field. We'll pretend that I do this with accelerations so slow that the mass is always in mechanical equilibrium, i.e. that the force exerted by my hand plus the weight of the mass add to zero, so
Fhand = −mg
The work I do against gravity is ∫ Fhand.ds, which is shown as the brown coloured histogram.
As I lift the mass, Fhand is upwards (positive) and s is also positive,
so the work done by me is positive:
∫ Fhand.ds > 0.
As I lower the mass, Fhand is still upwards (positive) but now s is negative,
so the work done by me is negative:
∫ Fhand.ds < 0.
Consequently, round a complete cycle that returns the mass to its starting point, ∫ Fhand.ds =0. Similarly, the work done by gravity around the cycle is zero (because Fgrav = −Fhand). This makes gravity a conservative force:
Definition: A conservative force is one that does zero work around a closed loop in space. It follows that, for a conservative force F, we may define a potential energy as a function of position r:
U = U(r) ≡ − ∫ F.dr , i.e. minus the work done by F in bringing the object to the position r.
If the work done around a closed loop is not zero, then we cannot define such a function: its value would have to change with time if we went around such a loop. Forces with this property are called, obviously, nonconservative forces.
So, what sort of force is that exerted by an ideal spring?
Again, let's imagine that I do this so slowly that the spring is in mechanical equilibrium: Fhand = −Fspring. I move my hand to the right, stretching the spring. Fhand is positive and ds is positive. I do positive work (shown in the histogram) and the spring does negative work. Then I move my hand to the left, but still pulling to maintain the stretch in the spring. As the spring shortens, Fhand is still positive but now ds is negative. I do negative work (shown in the histogram) and the spring does positive work. For the spring, ∫ F.dr around a closed path is zero. The force exerted by an ideal spring is a conservative force.
What sort of a force is friction?
Again, let's imagine that I do this so slowly that the mass is in mechanical equilibrium: Fhand = −Ffriction. Moving to the right, I apply a force to the right and the object moves to the right: Fhand and ds are both positive: I do positive work (shown in the histogram) and friction does negative work. Moving to the left, I apply a force to the left and the object moves to the left: Fhand and ds are now both negative, so I again do positive work (shown in the histogram) and friction again does negative work. So, around a closed loop, the work done against friction is greater than zero, so friction is a nonconservative force.
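A small numerical version of the closed-loop test: integrate F.ds out to a distance d and back, once for a spring force and once for kinetic friction. The spring constant, friction coefficient, mass and distance are assumed values:

```python
m, g, k, mu = 1.0, 9.8, 50.0, 0.3      # assumed values
d, N = 0.5, 1000                       # loop: out a distance d, then back
dx = d / N

def loop_work(force):
    """Integrate force * ds around the closed loop (out, then back)."""
    W = 0.0
    for i in range(N):                          # outbound: moving in +x
        x, direction = (i + 0.5) * dx, +1
        W += force(x, direction) * direction * dx
    for i in range(N):                          # return: moving in -x
        x, direction = d - (i + 0.5) * dx, -1
        W += force(x, direction) * direction * dx
    return W

spring = lambda x, direction: -k * x                     # depends only on position
friction = lambda x, direction: -mu * m * g * direction  # always opposes the motion

print(loop_work(spring))     # ~0: the spring force is conservative
print(loop_work(friction))   # -2*mu*m*g*d ≈ -2.9 J: friction is nonconservative
```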
Conservation of mechanical energy
We saw above in the work energy theorem that the total work ΔW done on an object equals the increase ΔK in its kinetic energy. But consider the case where all of the forces that do the work
ΔW are conservative forces: here, the work done by those forces is minus one times the work done against them, in other words it is −ΔU.
So, if the only forces that act are conservative forces, then
ΔU + ΔK = 0.
Define the mechanical energy: E ≡ U + K.
So, if the only forces that act are conservative forces, mechanical energy is conserved. We shall make this stronger below but, before we do, let's look at an example in which mechanical energy is (nearly) conserved.
Kinetic and potential energy in the pendulum
This video clip shows an example of the exchange of kinetic and potential energy in a pendulum. A warning, however: for the sake of keeping the download time small, this film clip is a single cycle repeated. In the original film, the pendulum gradually loses energy: in each cycle, a small fraction of the energy is lost, partly in pushing the air out of the way.
The kinetic energy K is shown in red: as a function of x on the graph, and as a histogram that varies with time. Note that the K goes to zero at the extremes of the motion. The potential energy U is shown in purple. It has maxima at the extremes of the motion, when the mass is highest. Because the zero of potential energy is arbitrary, so is the zero of the total mechanical energy E = U + K. Here, E (shown in white) is constant.
We have seen that, if the only forces present are conservative, then mechanical energy is conserved. However, we can go further. Provided that nonconservative forces do no work, then the increase ΔK in the kinetic energy of a body is still the work done by the conservative forces, which is −ΔU. So we can conclude that
If nonconservative forces do no work then mechanical energy (E ≡ U + K) is conserved.
This statement can be written in several ways, of which here are two:
If nonconservative forces do no work, ΔU + ΔK = 0 or Ui + Ki = Uf + Kf ,
where i and f mean initial and final. I strongly advise that you always write the qualifying clause because, in general, mechanical energy is not conserved. (And never, ever, write "kinetic energy equals potential energy". That is not true, and you shouldn't tell lies.)
On a rolling wheel, friction does no work. Here, I'm travelling slowly so let's neglect air resistance and rolling resistance. There is a substantial friction force: it is friction between tires and paving that accelerates me in a circle. In this case, the frictional force is at right angles to the displacement, so friction does no work. So, while I'm not pedalling, (approximately) no work is done and my mechanical energy is (approximately) constant.
Work and power
Power is defined as the rate of doing work or the rate of transforming or transferring energy: P = dW/dt. In this example, my kinetic energy is approximately constant. However, my potential energy is increasing. Because I'm climbing, I'm not going very fast, so the rate at which I'm doing work against nonconservative forces such as air resistance is small. The equations below allow us to calculate the rate at which I'm doing work against gravity (which is an underestimate of the rate at which I'm doing work).
My altitude is increasing at 1 m.s−1, and my weight is 700 N, so
P = dW/dt ≅ dU/dt = mg(dh/dt) = 700 W.
The sliding problem
Here is the problem from the tutorial: doing work against a nonconservative force. Here I apply a force F via the tension in a string. The work dW that I do is
dW = F.ds = F ds cos θ
Now v = ds/dt, so the power I am applying, i.e. the rate at which I am doing work is:
P = dW/dt = F v cos θ
I'll leave it to the reader to draw a free body diagram. Then use Newton's second law, then relate P to m, g, v and μk.
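If you would like to check your working, here is one possible numerical sketch. It assumes the block slides at constant velocity on a horizontal surface, so both the vertical and the horizontal forces balance; the mass, speed, angle and friction coefficient are made-up values:

```python
import math

m, g, v = 4.0, 9.8, 0.5            # kg, m/s^2, m/s (assumed)
mu_k = 0.4                         # kinetic friction coefficient (assumed)
theta = math.radians(30)           # angle of the string above the horizontal

# Vertical:  N + F sinθ = mg      Horizontal (constant v):  F cosθ = μk N
F = mu_k * m * g / (math.cos(theta) + mu_k * math.sin(theta))
P = F * v * math.cos(theta)        # P = F v cosθ from above
print(F, P)                        # ≈ 14.7 N and ≈ 6.4 W
```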
The loop-the-loop problem
This is a classic problem. A small toy car runs on wheels that are assumed to turn freely and whose mass is negligible, so we can treat it as a particle. From how high must I release it so that it will loop the loop, remaining in contact with the track all the way around?
If the car retains contact with the track then, at the top of the loop, which is circular, the centripetal acceleration will be downwards and its magnitude will be v2/r.
The forces providing this acceleration are its weight mg (acting down) and the normal force N from the track, also acting down at this point.
So, while N > 0, we have v2/r > g. For the critical condition, at which the normal force falls to zero and the car just loses contact, we require
vcrit2/r = g or vcrit2 = rg
We can do this problem using the conservation of mechanical energy.
Uinitial + Kinitial = Ufinal + Kfinal
Choosing the bottom of the track as the zero for U, we could write,
mghinitial + 0 = mg.(2r) + ½mvfinal2
and, if vfinal = vcrit = √(rg)
mghinitial = 2mgr + ½mgr
So the critical height is 5r/2 above the bottom of the track.
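Here is a quick numerical check of that result, with an assumed loop radius:

```python
import math

g, r = 9.8, 0.2                     # m/s^2; r = 0.2 m is an assumed loop radius

v_crit = math.sqrt(r * g)           # speed at the top when N just reaches zero
h = 2 * r + v_crit**2 / (2 * g)     # mgh = mg(2r) + ½mv²  =>  h = 2r + v²/(2g)
print(h, 2.5 * r)                   # both 0.5 m
```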
The hydroelectric dam problem
The water level in a hydroelectric dam is 100 m above the height at which water comes out of the pipes. Assuming that the turbines and generators are 100% efficient, and neglecting viscosity and turbulence, calculate the flow of water required to produce 10 MW of power. The output pipes have a cross section of 5 m2. This problem has the work-energy theorem, uses power, and requires a bit of thought. Let's do it.
Let's consider what is happening in steady state for this system. Over a time dt, some water of mass dm exits the lower pipe at speed v. This water is delivered to the top of the dam at negligible speed. So the nett effect is to take dm of stationary water at height h and deliver it at the bottom of the dam at height zero and speed v. Looks straightforward. Let's go.
Let the flow be dm/dt. The work done by the water, dW, is minus the energy increase of the water, so dW = dm.gh − ½dm.v2, and the power delivered is P = dW/dt = (dm/dt)(gh − ½v2).
Of course the flow dm/dt depends on v. Let's see how: In time dt, the water flows a distance vdt along the pipe. The cross section of the pipe is A, so the volume of water that has passed a given point is dV = A(vdt). Using the definition of density, ρ = dm/dV, we have
dm/dt = ρdV/dt = ρA.(vdt)/dt = ρAv. Substituting in the equation above gives us
P = ρAv(gh − ½v2) or
½v3 − ghv + P/ρA = 0.
However you look at it, it's a cubic equation, which sounds like a messy solution. However, let's think of what the
terms mean. The first one came from the kinetic energy term. The second is the work done by gravity. The third is the work done on the turbines. Now, if I had designed this dam, I'd have wanted to convert as much gravitational potential energy as possible into work done on the turbines, so I'd make the pipes wide enough so that the kinetic energy lost by the water outflow would be negligible. Let's see if my guess is correct.
If the first term is negligible, then we simply have hgv = P/ρA. So v = P/ρghA = 2 m.s−1. So the first term would be 4 m3.s−3, the second would be − 2000 m3.s−3 and the third would be 2000 m3.s−3. So yes, the guess was correct and, to the precision required of this problem, the answer is v = 2 m.s−1.
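If you would rather not trust the back-of-the-envelope argument, here is a small sketch that solves the cubic numerically (by bisection, on the physically relevant root) and confirms the estimate:

```python
g, h = 9.8, 100.0
P, rho, A = 10e6, 1000.0, 5.0          # 10 MW, water, 5 m^2 pipe

f = lambda v: 0.5 * v**3 - g * h * v + P / (rho * A)

lo, hi = 0.1, 10.0                     # f(lo) > 0 and f(hi) < 0: one root in between
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid
print(0.5 * (lo + hi))                 # ≈ 2.0 m/s, close to the estimate above
```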
Bernoulli's equation is an example of the work-energy theorem. In the animation, a fluid flows at a steady rate into a pipe with cross section A1 and height h1, where it has velocity v1 and pressure P1. The fluid leaves the pipe with cross section A2 and height h2, where it has velocity v2 and pressure P2. The fluid has constant density ρ and we assume that its viscosity is negligible, and that there is no turbulence, so that nonconservative forces do no work.
What is the relation among the velocity, height and pressure?
Before doing this quantitatively, we can ask how pressure and velocity are related. At the same height, and if we have no turbulence or viscosity, then the only thing that accelerates the fluid is the difference in pressure. The fluid will accelerate from high to low pressure so, where P is high, v should be low and vice versa. Let's see:
In a short time dt, a mass dm enters the pipe at left and, because the flow is steady, an equal mass dm
flows out at right. Because the flow is steady, the total energy of the water in the pipe is unchanged. So the total work done on dm, by the work-energy theorem, is
dWtotal = ½ dm.v22 − ½ dm.v12.
The work is done by two forces: gravity, which does work − dUgrav, and the pressure. dUgrav is dm.gΔh, so
dWpressure − dUgrav = ½ dm.v22 − ½ dm.v12 , so
dWpressure = ½ dm.v22 + dm.gh2 − ½ dm.v12 − dm.gh1 .
So, how much work is done by the difference in pressure acting across the pipe? By definition, the pressure is the force per unit area, so the force exerted by P on cross sectional area A is PA. If this force is applied over a distance ds at right angles to A, it does work PAds. But the volume moved is dV = Ads, so the work done by pressure is PdV. The work done by P1 is positive and that done by P2 is negative, so
P1dV − P2dV = ½ dm.v22 + dm.gh2 − ½ dm.v12 − dm.gh1 .
Now we use the definition of density: ρ = dm/dV. So, if we divide both sides of the equation by dV and rearrange the terms,
P1 + ½ρv12 + ρgh1 = P2 + ½ρv22 + ρgh2.
Of course, we could apply this analysis to any two points in the pipe so, provided that the flow is steady, incompressible, non-viscous and non turbulent, we have Bernoulli's equation
P + ½ρv2 + ρgh = constant.
Remembering that ρ = dm/dV, we can see the significance of each of these terms: P is the work done by pressure, per unit volume, ½ρv2 is the kinetic energy per unit volume, and ρgh is the gravitational potential energy per unit volume. Bernoulli's equation is just the work energy theorem, written per unit volume.
In the absence of flow, this just gives the variation of pressure with depth:
ΔP = − ρgΔh , if there is no flow.
If the height is constant, we have
ΔP = − Δ(½ρv2) , if there is no change in height.
This last observation tells us that (at equal height), pressure will be high when velocity is low and vice versa. This makes sense: if the velocity has increased at constant h, then pressure must have acted to accelerate the fluid. The fluid of course flows from high pressure to low, so it must be slower in high pressure and faster in low pressure.
(With the reminder that we are neglecting viscosity and turbulence: there are no non-conservative forces acting.)
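Here is a minimal sketch that applies Bernoulli's equation, together with the continuity condition A1v1 = A2v2 for incompressible flow, to two points in a pipe. All the numbers are assumed illustrative values:

```python
rho, g = 1000.0, 9.8          # water
A1, A2 = 0.05, 0.02           # m^2, pipe cross sections
v1, h1, h2 = 1.0, 0.0, 2.0    # m/s and heights in m
P1 = 300e3                    # Pa at point 1

v2 = v1 * A1 / A2             # continuity: A1 v1 = A2 v2
P2 = P1 + 0.5*rho*v1**2 + rho*g*h1 - 0.5*rho*v2**2 - rho*g*h2   # Bernoulli
print(v2, P2)                 # 2.5 m/s and ≈ 278 kPa: faster and higher, so lower P
```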
This is a nice demonstration: the hose delivers a high speed jet of air. What is holding the ball up in the air?
The drag of the air jet as it passes the ball makes it rotate, so we can deduce from the direction of rotation that most of the jet passes above the ball.
The ball has weight, and the only forces acting on it are those due to the pressure of the air around it. So we can conclude that the pressure above the ball is substantially less than that below the ball.
This is not, however, a simple demonstration of the effect described by Bernoulli's equation. It is certainly true that the fast moving air coming out of the hose has a pressure somewhat less than the pressure in the stationary air. Because the jet of air coming out of the hose is mainly deflected above the ball, this makes the pressure above the ball less than atmospheric. However, in this case the jet itself is deflected by the presence of the ball, so there is also a contribution from the change in momentum of the jet. (Further, the drag that causes the rotation tells us that there is a nonconservative force present and so Bernoulli's equation would not apply accurately here.)
Centre of mass work
When we write W = ∫F.ds, for an extended object, what is F and what is ds?
F is the total external force acting on the object which, because of Newton's third law, equals the total force on the object. ds in this case is the displacement of the centre of mass, dsCoM. In this simple demonstration, the force that accelerates me is the force that the wall exerts on my hand. The wall, however, doesn't move. What does move during my acceleration is my centre of mass, so the kinetic energy associated with the motion of my centre of mass is increased by ∫Fexternal.dsCoM.
We'll leave the derivation of this to the section on centre of mass.
http://www.animations.physics.unsw.edu.au/jw/work.htm
Students are introduced to basic measurement techniques such as length measurement with a meter stick and vernier caliper, mass measurement with a triple-beam balance and time with a stopwatch. They obtain the volumes of various size cylinders from measurements of the cylinders’ lengths and diameters. After measuring the masses of the cylinders, they calculate the densities of the metals and compare them to the expected values. They also determine the density from all their data by plotting mass vs. volume of the three or four cylinders they measure and finding the slope. Students also measure the time it takes a freely falling ball to hit the ground. Errors, which are relevant to the equipment and measurement methods and their propagation, are determined.
2. Free Fall
Students measure the successive positions of a freely falling weight attached to a paper tape onto which a timer makes a mark every 25 msec. From these data, they determine the speed as a function of time, as well as the acceleration (gravity) of the motion. By repeating the experiment with different weights, the independence of the acceleration of a falling object from its mass is investigated. For small masses the paper tape provides a significant drag.
3. Motion in One and Two Dimensions
Tracing the path of a puck moving on a frictionless inclined plane, students determine the x and y components of its successive positions at constant time intervals. From these data, puck velocities are obtained. From the velocity data, the accelerations in the vertical and horizontal directions are determined. The independence of the two components of the motion is investigated.
4. Newton’s Second Law
Students measure the acceleration of two masses connected to each other by a string that runs over a pulley. One mass moves on a frictionless horizontal table. The other one is freely suspended from the string. Both masses can be varied. Therefore the dependence of the acceleration on force or mass can be studied.
5. Uniform Circular Motion
Students measure the average speed of an object in uniform circular motion. They separately measure the centripetal force, which produces this motion. The apparatus consists of a rotating gallows from which a plumb bob is suspended. A horizontal spring attached to the plumb bob provides the centripetal force. Students measure the period of rotation required to stretch the spring a given distance. Then they determine the static force that stretches the spring by an equal length. After repeating this experiment with various springs, they plot angular velocity vs. centripetal force to verify that the centripetal force increases as v2.
6. Conservation of Momentum in Collisions
Conservation of momentum is investigated in this experiment. Tracing the path of two pucks that collide with each other on the frictionless air table, students measure the momentum of each puck before and after the collision. By comparing the total momentum and the total kinetic energy of two pucks before and after the collision the conservation laws under the elastic case are investigated. Next inelastic collisions are studied. Attaching Velcro to the pucks makes them stick together in a collision. Momentum and energy conservation under these conditions are studied. Also by tracing the center of mass of the two pucks, the motion of the center of mass of a two-body system is studied.
7. Work and Energy on an Air Track
An air track with a computer is used in this experiment. By analyzing data, collected with the computer, for a glider on a tilted air track, students measure the glider’s velocities during the motion at fixed time intervals. From this, they get kinetic energy as a function of position. They measure the tilt angle, which provides the potential energy. From these data they can obtain the gravitational acceleration. Next, velocities of a glider moving on a frictionless level track under the influence of a hanging mass are measured. By plotting V2 against position, the acceleration of the motion is estimated and compared to the theoretical predictions. This experiment is similar to Exp. 10, except here the important quantities used are energies.
8. Forces and Torques in Equilibrium
The static equilibrium conditions on a weighted meter stick are investigated. First, by balancing a weighted meter stick on one’s finger, the center of mass is determined. Next, students mount one end of the meter stick on a pivot so that the stick can swing freely in a vertical plane. The other end is then supported by a string attached to a spring scale in different ways such that the meter stick and the support string make different angles with the horizontal. They measure components of forces and of torques acting on the meter stick, calculate the sum of torques and compare to the expected values. Finally, after adding an unknown force to the balanced meter stick, the students estimate the torque caused by this unknown force from the equilibrium condition and compare it to the directly calculated value and verify the equilibrium condition.
9. Maxwell’s Wheel
Maxwell’s wheel consists of a vertical aluminum disk mounted on a horizontal axle, suspended on two threads from above. When the disk is rotated, the threads wind themselves on the axle, somewhat similar to a yo-yo, and the disk rises. Once at the top the threads begin to unwind, and the disk falls. Students first measure the dimensions of the wheel plus axle and calculate the moment of inertia. Next they measure falling distances and falling times of the wheel, and determine the downward acceleration. From the motion, they obtain a moment of inertia and compare it to the value calculated in the first part of the experiment. Brass knobs are attached to the Maxwell’s wheel to change the moment of inertia in the last investigation.
10. Rolling Motion on an Inclined Track
This experiment is about one dimensional rolling motion of cylinders on a U-channel track with an adjustable ramp. It introduces the student to the Ultrasonic Measurement System, which uses a computer equipped with sonar to collect position and time data of a cylinder moving up and down the track under the influence of gravity. The computer records the cylinder’s position vs. time. The students are asked to perform investigations using transparent cylinders, which are filled with water, glycerin and silicone. They collect and analyze trajectories of the cylinders filled with materials of different viscosity, and must explain quantitative differences. The main qualitative analysis step is finding the relation between measured V2 and acceleration. As always, the students are asked to compare their results with the theoretical predictions.
11. Fluid Flow
After the definition of pressure and Pascal’s Law are explained, students are required to measure the densities of water and glycerin in three different ways. First the densities are determined by measuring the volume and the masses of liquids directly; next by measuring the pressure in a liquid using a manometer, and finally by measuring the buoyancy force of an object in a liquid. By measuring the fluid resistance, the dependence on pressure of the water flow rate through a capillary is studied. The dependence of flow on capillary diameter is also investigated.
12. The Simple Pendulum
Students measure the periods of a simple pendulum for several different lengths of string and make a plot of period squared against length. Then they determine the gravitational acceleration from their plot. By repeating the experiment with different mass, they investigate the independence of period on the mass.
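A minimal sketch of the analysis step described above: fit the square of the period against the length and recover g from the slope. The data points are made up, roughly what g ≈ 9.8 m.s−2 would give:

```python
import math

L = [0.40, 0.60, 0.80, 1.00]          # m, pendulum lengths (made-up data)
T = [1.269, 1.554, 1.795, 2.007]      # s, measured periods (made-up data)

y = [t**2 for t in T]                 # T^2 vs L should be a straight line
n = len(L)
Lbar, ybar = sum(L) / n, sum(y) / n
slope = sum((Li - Lbar) * (yi - ybar) for Li, yi in zip(L, y)) / \
        sum((Li - Lbar)**2 for Li in L)          # least-squares slope = 4*pi^2/g
print(4 * math.pi**2 / slope)                    # ≈ 9.8 m/s^2
```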
13. Simple Harmonic Motion
This is an air track experiment. A glider is connected with two springs to the track. When deflected from its equilibrium it oscillates. The motion of the glider is obtained from ultrasonic measurements collected by a computer. The computer can plot position, velocity, and acceleration vs. time. The students find the period of the wave from the position plot. They are next asked to calculate the displacement of the glider at several times and compare their results with the raw data. Next, the students calculate the angular velocity. Using this calculated value, they find the period and compare it to the measured period. Finally, the students are asked to investigate the phase relationships between the position, velocity, and acceleration plots and also show that the period is independent of the amplitude.
14. Standing Waves
Standing waves in two different media, a string and an air column, are investigated. In the first experiment, transverse waves are set up in a string, which is fixed at both ends. Near one end is a vibrator that produces the waves. Students measure the wavelength of the standing waves, which are formed when the tension in the string allows the appropriate wave velocity. They calculate the velocity and plot the velocity-squared versus the tension. They verify the relation between the tension and mass per unit length of the string on one hand, and the wavelength of the wave on the other. The students also measure the velocity of sound in air by measuring the wavelength of sound, obtained by listening to a resonance in an air column the length of which can be varied. The sound is produced by a tuning fork held over the air column.
15. Geometric Optics
In this experiment students study the law of thin lenses, the difference between real and virtual images and mirror images. In the first experiment, students determine the focal length of a thin lens by measuring the distance between the object and the lens and the distance between the lens and the image. They also examine the magnification law by plotting the ratio of the size of the image to the size of the object against the distance ratio. Then students observe a real image and a virtual image by looking at these two images at the object side of the lens and they compare these two images. Finally, they investigate the mirror images with small cylinders and a plane mirror. By tracing the position of an image, they find the law of specular reflection experimentally.
16. Electric Field and Electric Potential
The characteristics of E and V for two different two-dimensional charge configurations are investigated. For parallel Electrodes, students measure the equipotential lines, determine the electric fields and reconstruct the electric field lines near the center of the two electrodes and fringe-fields near the electrode ends. Students repeat the experiment for concentric electrodes and find the radial electric fields for this geometry. The experiment gives a good feel for the relation between E and V, and for graphical representations of E and V.
17. DC Circuits
Kirchhoff’s rules are studied for simple series and parallel circuits consisting of batteries or a power supply, a circuit element box, some patch cables, and one or two digital multimeters (DMM). The students learn how to use DMMs to measure currents, voltages, and resistances, and the effects of meters on the measurements, using basic 2-3 element circuits. Next, the students take and analyze V–I characteristics of simple resistor networks. They compare measured and calculated values of ohmic resistance of the resistors combined in various configurations.
18. R–C Circuits
Transient behavior of RC circuits is studied. A circuit consisting of a resistor and capacitor in series is connected to a 6 V power supply via a switch. As the capacitor is being charged, students measure the change in voltage first across the resistor, then across the capacitor, using a DMM and a stopwatch. The RC is large enough so that this works! They determine the time constant in both measurements and compare them to each other and the calculated value. They also explore the exponential behavior of the capacitor voltage by plotting the voltages against the time. As the capacitor is being discharged, the current in the circuit is measured and the log of the current is plotted as a function of time. The slope of this graph is compared to the time constant found earlier. Finally, the capacitance of two capacitors in a series or parallel combination is studied by measuring the new time constant of the circuit.
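A minimal sketch of the discharge analysis: the slope of the log of the current plotted against time gives −1/RC. The component values and the (noise-free) data are made up:

```python
import math

R, C = 100e3, 100e-6                           # assumed: 100 kΩ and 100 µF, RC = 10 s
I0 = 6.0 / R                                   # initial discharge current from 6 V

t = [0, 5, 10, 15, 20]                         # s
I = [I0 * math.exp(-ti / (R * C)) for ti in t] # I(t) = I0 exp(-t/RC)
lnI = [math.log(i) for i in I]

slope = (lnI[-1] - lnI[0]) / (t[-1] - t[0])    # slope of ln I vs t
print(-1.0 / slope)                            # ≈ 10 s, the time constant RC
```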
19. Magnetic Force & Lorentz’s Law
First the repulsion and attraction of two ceramic ring magnets are studied, as a qualitative indicator of the strength of magnetic forces. The main experiment is a study of Lorentz’ law F= ILB by use of a current balance. The dependence of the magnetic force on a length of wire is measured by balancing it against a known gravitational force. All three variables, I, L, and B can be varied in this experiment, and plots of the magnetic force vs. these three variables verify the linearity of the relation. The experiment also determines the magnetic field strength of the horse shoe magnets used.
20. Electromagnetic Induction
Faraday’s Law of induction is the topic of this experiment. First, the voltage produced in a pickup coil by quickly moving a permanent magnet is observed. Then the voltage produced by quickly moving the pickup coil away from a permanent magnet is found. Next Faraday’s Law is tested in integral form by using a function of the digital oscilloscope. Students are given 2 coils (with an unknown number of turns) and asked to find the number of turns in each coil. In the main part of the experiment, a function generator producing either a sinusoidal or a saw tooth shaped current in a field coil sets up a time varying field. This field induces a voltage in a pickup coil. The field generating current and the voltage of the pickup coil are displayed and measured with a dual trace scope. The dependence of induced emf on the changing magnetic flux is studied.
21. Radioactive Decay
The students observe radioactive decay as a random process and measure the half-life of a short-lived isotope. The first investigation calibrates the Geiger tube by collecting data using a computer and plotting, on the computer, count rate vs. Geiger tube voltage. The students then measure the background radiation using the computer and plot their data using a histogram to find the average background count rate. Next, the students take data from a 137Cs source and histogram it. They observe what percentage of their data lies within one and two standard deviations of the average count rate. Finally, the students investigate the effect of shielding on the 137Cs source radiation to determine the absorption coefficient and the half-value layer thickness of lead.
http://www.northeastern.edu/physics/undergraduate/introductory-physics-lab/abstracts-of-experiments/
A belt is a loop of flexible material used to mechanically link two or more rotating shafts. Belts may be used as a source of motion, to transmit power efficiently, or to track relative movement. Belts are looped over pulleys. In a two pulley system, the belt can either drive the pulleys in the same direction, or the belt may be crossed, so that the direction of the shafts is opposite. As a source of motion, a conveyor belt is one application where the belt is adapted to continuously carry a load between two points.
Power transmission
Belts are the cheapest utility for power transmission between shafts that may not be axially aligned. Power transmission is achieved by specially designed belts and pulleys. The demands on a belt drive transmission system are large and this has led to many variations on the theme. They run smoothly and with little noise, and cushion motor and bearings against load changes, albeit with less strength than gears or chains. However, improvements in belt engineering allow use of belts in systems that only formerly allowed chains or gears.
Power transmitted between a belt and a pulley is expressed as the product of the difference of tension and the belt velocity:
P = (T1 − T2) v,
where T1 and T2 are the tensions in the tight side and slack side of the belt respectively. They are related (for a belt on the point of slipping) by the belt friction equation:
T1 / T2 = e^(μα),
where μ is the coefficient of friction, and α is the angle (in radians) subtended by the contact surface at the centre of the pulley.
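A minimal numerical sketch of these two relations; the friction coefficient, wrap angle, tight-side tension and belt speed below are assumed illustrative values, not data for any particular drive:

```python
import math

mu = 0.3                          # coefficient of friction (assumed)
alpha = math.radians(160)         # angle of wrap on the smaller pulley (assumed)
T1 = 800.0                        # N, tight-side tension (assumed)
v = 15.0                          # m/s, belt speed (assumed)

T2 = T1 / math.exp(mu * alpha)    # slack-side tension from T1/T2 = e^(mu*alpha)
P = (T1 - T2) * v                 # transmitted power
print(T2, P / 1000)               # ≈ 346 N and ≈ 6.8 kW
```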
Pros and cons
Belt drive is simple, inexpensive, and does not require axially aligned shafts. It helps protect the machinery from overload and jam, and damps and isolates noise and vibration. Load fluctuations are shock-absorbed (cushioned). They need no lubrication and minimal maintenance. They have high efficiency (90-98%, usually 95%), high tolerance for misalignment, and are inexpensive if the shafts are far apart. Clutch action is activated by releasing belt tension. Different speeds can be obtained by step or tapered pulleys.
The angular-velocity ratio may not be constant or equal to that of the pulley diameters, due to slip and stretch. However, this problem has been largely solved by the use of toothed belts. Operating temperatures range from −31 °F (−35 °C) to 185 °F (85 °C). Adjustment of center distance or addition of an idler pulley is crucial to compensate for wear and stretch.
Flat belts
Flat belts were widely used in the 19th and early 20th centuries in line shafting to transmit power in factories. They were also used in countless farming, mining, and logging applications, such as bucksaws, sawmills, threshers, silo blowers, conveyors for filling corn cribs or haylofts, balers, water pumps (for wells, mines, or swampy farm fields), and electrical generators. Flat belts are still used today, although not nearly as much as in the line shaft era. The flat belt is a simple system of power transmission that was well suited for its day. It can deliver high power at high speeds (500 hp at 10,000 ft/min), in cases of wide belts and large pulleys. But these drives are bulky, requiring high tension leading to high loads, and are poorly suited to close-centers applications, so vee belts have mainly replaced flat-belts for short-distance power transmission; and longer-distance power transmission is typically no longer done with belts at all. For example, factory machines now tend to have individual electric motors.
Because flat belts tend to climb towards the higher side of the pulley, pulleys were made with a slightly convex or "crowned" surface (rather than flat) to allow the belt to self-center as it runs. Flat belts also tend to slip on the pulley face when heavy loads are applied, and many proprietary belt dressings were available that could be applied to the belts to increase friction, and so power transmission.
Flat belts were traditionally made of leather or fabric. Today some are made of rubber or polymers. Grip of leather belts is often better if they are assembled with the hair side (outer side) of the leather against the pulley, although some belts are instead given a half-twist before joining the ends (forming a Möbius strip), so that wear can be evenly distributed on both sides of the belt. Belt ends are joined by lacing the ends together with leather thonging, steel comb fasteners, or glued splices (with thonging being the oldest of the methods). Flat belts were traditionally jointed, and still usually are, but they can also be made with endless construction.
Round belts
Round belts are a circular cross section belt designed to run in a pulley with a 60 degree V-groove. Round grooves are only suitable for idler pulleys that guide the belt, or when (soft) O-ring type belts are used. The V-groove transmits torque through a wedging action, thus increasing friction. Nevertheless, round belts are for use in relatively low torque situations only and may be purchased in various lengths or cut to length and joined, either by a staple, a metallic connector (in the case of hollow plastic), glueing or welding (in the case of polyurethane). Early sewing machines utilized a leather belt, joined either by a metal staple or glued, to great effect.
Vee belts
Vee belts (also known as V-belt or wedge rope) solved the slippage and alignment problem. It is now the basic belt for power transmission. They provide the best combination of traction, speed of movement, load of the bearings, and long service life. They are generally endless, and their general cross-section shape is trapezoidal (hence the name "V"). The "V" shape of the belt tracks in a mating groove in the pulley (or sheave), with the result that the belt cannot slip off. The belt also tends to wedge into the groove as the load increases—the greater the load, the greater the wedging action—improving torque transmission and making the V-belt an effective solution, needing less width and tension than flat belts. V-belts trump flat belts with their small center distances and high reduction ratios. The preferred center distance is larger than the largest pulley diameter, but less than three times the sum of both pulleys. Optimal speed range is 1000–7000 ft/min. V-belts need larger pulleys for their larger thickness than flat belts.
For high-power requirements, two or more vee belts can be joined side-by-side in an arrangement called a multi-V, running on matching multi-groove sheaves. This is known as a multiple-V-belt drive (or sometimes a "classical V-belt drive").
V-belts may be homogeneously rubber or polymer throughout, or there may be fibers embedded in the rubber or polymer for strength and reinforcement. The fibers may be of textile materials such as cotton or polyester or, for greatest strength, of steel or aramid (such as Twaron or Kevlar).
When an endless belt does not fit the need, jointed and link V-belts may be employed. However they are weaker and only usable at speeds up to 4000 ft/min. A link v-belt is a number of rubberized fabric links held together by metal fasteners. They are length adjustable by disassembling and removing links when needed.
Vee belt history
Trade journal coverage of V-belts in automobiles from 1916 mentioned leather as the belt material, and mentioned that the V angle was not yet well standardized. The endless rubber V-belt was developed in 1917 by John Gates of the Gates Rubber Company. Multiple-V-belt drive was first arranged a few years later by Walter Geist of the Allis-Chalmers corporation, who was inspired to replace the single rope of multi-groove-sheave rope drives with multiple V-belts running parallel. Geist filed for a patent in 1925 and Allis-Chalmers began marketing the drive under the "Texrope" brand; the patent was granted in 1928 (U.S. Patent 1,662,511). The "Texrope" brand still exists, although it has changed ownership and no longer refers to multiple-V-belt drive alone.
Multi-groove belts
A multi-groove or polygroove belt is made up of usually 5 or 6 "V" shapes alongside each other. This gives a thinner belt for the same drive surface, thus it is more flexible, although often wider. The added flexibility offers an improved efficiency, as less energy is wasted in the internal friction of continually bending the belt. In practice this gain of efficiency causes a reduced heating effect on the belt and a cooler-running belt lasts longer in service.
A further advantage of the polygroove belt that makes them popular is that they can run over pulleys on the ungrooved back of the belt. Though this is sometimes done with Vee belts with a single idler pulley for tensioning, a polygroove belt may be wrapped around a pulley on its back tightly enough to change its direction, or even to provide a light driving force.
Any Vee belt's ability to drive pulleys depends on wrapping the belt around a sufficient angle of the pulley to provide grip. Where a single-Vee belt is limited to a simple convex shape, it can adequately wrap at most three or possibly four pulleys, so can drive at most three accessories. Where more must be driven, such as for modern cars with power steering and air conditioning, multiple belts are required. As the polygroove belt can be bent into concave paths by external idlers, it can wrap any number of driven pulleys, limited only by the power capacity of the belt.
This ability to bend the belt at the designer's whim allows it to take a complex or "serpentine" path. This can assist the design of a compact engine layout, where the accessories are mounted more closely to the engine block and without the need to provide movable tensioning adjustments. The entire belt may be tensioned by a single idler pulley.
Ribbed belt
A ribbed belt is a power transmission belt featuring lengthwise grooves. It operates from contact between the ribs of the belt and the grooves in the pulley. Its single-piece structure is reported to offer an even distribution of tension across the width of the pulley where the belt is in contact, a power range up to 600 kW, a high speed ratio, serpentine drives (possibility to drive off the back of the belt), long life, stability and homogeneity of the drive tension, and reduced vibration. The ribbed belt may be fitted on various applications : compressors, fitness bikes, agricultural machinery, food mixers, washing machines, lawn mowers, etc.
Film belts
Though often grouped with flat belts, they are actually a different kind. They consist of a very thin strip (0.5-15 millimeters, or 100-4000 micrometres) of plastic and occasionally rubber. They are generally intended for low-power (10 hp or 7 kW), high-speed uses, allowing high efficiency (up to 98%) and long life. These are seen in business machines, printers, tape recorders, and other light-duty operations.
Timing belts
Timing belts, (also known as toothed, notch, cog, or synchronous belts) are a positive transfer belt and can track relative movement. These belts have teeth that fit into a matching toothed pulley. When correctly tensioned, they have no slippage, run at constant speed, and are often used to transfer direct motion for indexing or timing purposes (hence their name). They are often used in lieu of chains or gears, so there is less noise and a lubrication bath is not necessary. Camshafts of automobiles, miniature timing systems, and stepper motors often utilize these belts. Timing belts need the least tension of all belts, and are among the most efficient. They can bear up to 200 hp (150 kW) at speeds of 16,000 ft/min.
Timing belts with a helical offset tooth design are available. The helical offset tooth design forms a chevron pattern and causes the teeth to engage progressively. The chevron pattern design is self-aligning. The chevron pattern design does not make the noise that some timing belts make at certain speeds, and is more efficient at transferring power (up to 98%).
Disadvantages include a relatively high purchase cost, the need for specially fabricated toothed pulleys, less protection from overloading and jamming, and the lack of clutch action.
Specialty belts
Belts normally transmit power on the tension side of the loop. However, designs for continuously variable transmissions exist that use belts that are a series of solid metal blocks, linked together as in a chain, transmitting power on the compression side of the loop.
Rolling roads
Belts used for rolling roads for wind tunnels can be capable of 250 km/h.
Flying rope
For transmission of mechanical power over distance without electrical energy, a flying rope can be used. A wire or manila rope can be used to transmit mechanical energy from a steam engine or water wheel to a factory or pump located a considerable distance (10 to 100s of meters or more) from the power source. A flying rope way could be supported on poles and pulleys similar to the cable on a chair lift or aerial tramway. Transmission efficiency is generally high.
Standards for use
The open belt drive has parallel shafts rotating in the same direction, whereas the cross-belt drive also has parallel shafts, but they rotate in opposite directions. The former is far more common, and the latter is not appropriate for timing and standard V-belts, because the pulleys contact both the inner and outer belt surfaces. Nonparallel shafts can be connected if the belt's center line is aligned with the center plane of the pulley. Industrial belts are usually reinforced rubber but sometimes leather types; non-leather, non-reinforced belts can only be used in light applications.
The pitch line is the line between the inner and outer surfaces that is neither subject to tension (like the outer surface) nor compression (like the inner). It is midway through the surfaces in film and flat belts and dependent on cross-sectional shape and size in timing and V-belts. Calculating pitch diameter is an engineering task and is beyond the scope of this article. The angular speed is inversely proportional to pulley size, so the larger the wheel, the lower its angular velocity, and vice versa. Actual pulley speeds tend to be 0.5–1% less than generally calculated because of belt slip and stretch. In timing belts, the ratio of the numbers of teeth on the pulleys gives the exact speed ratio. The speed of the belt is:
Speed = Circumference based on pitch diameter × angular speed in rpm
Selection criteria
Belt drives are built under the following required conditions: speeds of and power transmitted between drive and driven unit; suitable distance between shafts; and appropriate operating conditions. The equation for power is:
power (kW) = (torque in newton-meters) × (rpm) × (2π radians)/(60 sec × 1000 W)
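A minimal sketch evaluating the belt-speed and power formulas quoted above; the pitch diameter, shaft speed and torque are assumed illustrative values:

```python
import math

pitch_diameter = 0.20      # m (assumed)
rpm = 1450                 # driver shaft speed (assumed)
torque = 40.0              # N·m (assumed)

belt_speed = math.pi * pitch_diameter * rpm / 60        # m/s
power_kW = torque * rpm * 2 * math.pi / (60 * 1000)     # the power formula above
print(belt_speed, power_kW)                             # ≈ 15.2 m/s and ≈ 6.1 kW
```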
Factors of power adjustment include speed ratio; shaft distance (long or short); type of drive unit (electric motor, internal combustion engine); service environment (oily, wet, dusty); driven unit loads (jerky, shock, reversed); and pulley-belt arrangement (open, crossed, turned). These are found in engineering handbooks and manufacturer's literature. When corrected, the horsepower is compared to rated horsepowers of the standard belt cross sections at particular belt speeds to find a number of arrays that perform best. Now the pulley diameters are chosen. It is generally either large diameters or large cross section that are chosen, since, as stated earlier, larger belts transmit this same power at low belt speeds as smaller belts do at high speeds. To keep the driving part at its smallest, minimum-diameter pulleys are desired. Minimum pulley diameters are limited by the elongation of the belt's outer fibers as the belt wraps around the pulleys. Small pulleys increase this elongation, greatly reducing belt life. Minimum pulley diameters are often listed with each cross section and speed, or listed separately by belt cross section. After the cheapest diameters and belt section are chosen, the belt length is computed. If endless belts are used, the desired shaft spacing may need adjusting to accommodate standard length belts. It is often more economical to use two or more juxtaposed V-belts, rather than one larger belt.
In large speed ratios or small central distances, the angle of contact between the belt and pulley may be less than 180°. If this is the case, the drive power must be further increased, according to manufacturer's tables, and the selection process repeated. This is because power capacities are based on the standard of a 180° contact angle. Smaller contact angles mean less area for the belt to obtain traction, and thus the belt carries less power.
Belt friction
Belt drives depend on friction to operate, but excessive friction wastes energy and rapidly wears the belt. Factors that affect belt friction include belt tension, contact angle, and the materials used to make the belt and pulleys.
Belt tension
Power transmission is a function of belt tension. However, also increasing with tension is stress (load) on the belt and bearings. The ideal belt is that of the lowest tension that does not slip in high loads. Belt tensions should also be adjusted to belt type, size, speed, and pulley diameters. Belt tension is determined by measuring the force to deflect the belt a given distance per inch of pulley. Timing belts need only adequate tension to keep the belt in contact with the pulley.
Belt wear
Fatigue, more so than abrasion, is the culprit for most belt problems. This wear is caused by stress from rolling around the pulleys. High belt tension; excessive slippage; adverse environmental conditions; and belt overloads caused by shock, vibration, or belt slapping all contribute to belt fatigue.
To fully specify a belt, the material, length, and cross-section size and shape are required. Timing belts, in addition, require that the size of the teeth be given. The length of the belt is approximately the sum of twice the centre distance, half the circumference of each pulley, and a correction term equal to the square of the sum (if crossed) or of the difference (if open) of the two radii, divided by the centre distance. The geometry behind the correction term is essentially the Pythagorean theorem applied to the straight runs of belt between the pulleys. One important consequence is that, in an open drive, as D1 gets closer to D2 the correction term, and therefore the extra length, approaches zero.
On the other hand, in a crossed belt drive the sum rather than the difference of the radii is the basis for the length computation, so enlarging either pulley always increases the belt length.
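The description above corresponds to the usual approximate belt-length formulas; the sketch below evaluates them for assumed pitch diameters D1, D2 and centre distance C:

```python
import math

D1, D2, C = 0.10, 0.30, 0.50       # m (assumed values)

half_circumferences = math.pi * (D1 + D2) / 2
L_open    = 2*C + half_circumferences + (D2 - D1)**2 / (4*C)
L_crossed = 2*C + half_circumferences + (D1 + D2)**2 / (4*C)
print(L_open, L_crossed)           # ≈ 1.65 m and ≈ 1.71 m
```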
See also
- belt-drive turntable
- belt-driven bicycle
- Belt track
- Conveyor belt
- Gilmer belt
- Lariat chain - a science exhibit showing the effects when a belt is run 'too fast'
- Poly chain GT carbon belt drive system
- Roller chain
- Timing belt (camshaft)
- "Belt drives, IIT Kharagpur". 1. NPTEL.
- Rhys Jenkins, Newcomen Society (1971). Links in the History of Engineering and Technology from Tudor Times. Ayer Publishing. Page 34. ISBN 0-8369-2167-4
- James N. Boblenz. "How to lace a flat belt". Farm Collector. Retrieved 2010-04-04.
- "Belt lacing patterns" (PDF). North Dakota Statue Univ.
- "Flat Belt Pulleys, Belting, Splicing". Archived from the original on 17 March 2010. Retrieved 2010-04-04. Text " Hit N Miss Enterprises " ignored (help)
- Editorial staff (1916-04-15), "Radiator fans and their design", Horseless Age 37 (8): 324.
- Editorial staff (1916-04-15), "S.A.E. divisions exhibit activity", Horseless Age 37 (8): 322.
- DIN 7867
- Automotive Handbook (3rd ed.). Robert Bosch GmbH. 1993. p. 304. ISBN 0-8376-0330-7.
- "Pininfarina Aerodynamic and Aeroacoustic Research Center". Arc.pininfarina.it. Retrieved 2009-10-24.
- John Joseph Flather (1895). Rope-driving: a treatise on the transmission of power by means of fibrous ropes. | http://en.wikipedia.org/wiki/Belt_(mechanical) | 13 |
205 | Topics covered: Polar coordinates; area in polar coordinates
Instructor: Prof. David Jerison
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.
PROFESSOR: Today we're going to continue our discussion of parametric curves. I have to tell you about arc length. And let me remind you where we left off last time. This is parametric curves, continued. Last time, we talked about the parametric representation for the circle. Or one of the parametric representations for the circle. Which was this one here. And first we noted that this does parameterize, as we say, the circle. That satisfies the equation for the circle. And it's traced counterclockwise. The picture looks like this. Here's the circle. And it starts out here at t = 0 and it gets up to here at time t = pi / 2. So now I have to talk to you about arc length. In this parametric form. And the results should be the same as arc length around this circle ordinarily. And we start out with this basic differential relationship. ds ^2 is dx ^2 + dy ^2. And then I'm going to take the square root, divide by dt, so the rate of change with respect to t of s is going to be the square root. Well, maybe I'll write it without dividing. Just write it as ds. So this would be the square root of (dx / dt)^2 + (dy / dt)^2 dt.
So this is what you get formally from this equation. If you take its square roots and you divide by dt squared in the inside, the square root and you multiply by dt outside. So that those cancel. And this is the formal connection between the two. We'll be saying just a few more words in a few minutes about how to make sense of that rigorously. Alright so that's the set of formulas for the infinitesimal, the differential of arc length. And so to figure it out, I have to differentiate x with respect to t. And remember x is up here. It's defined by a cos t, so its derivative is - a sin t. And similarly, dy / dt = a cos t.
And so I can plug this in. And I get the arc length element, which is the square root of (- a sin t) ^2 + (a cos t) ^2 dt. Which just becomes the square root of a ^2 dt, or a dt. Now, I was about to divide by dt. Let me do that now. We can also write the rate of change of arc length with respect to t. And that's a, in this case. And this gets interpreted as the speed of the particle going around. So not only, let me trade these two guys, not only do we have the direction is counterclockwise, but we also have that the speed is, if you like, it's uniform. It's constant speed. And the rate is a. So that's ds / dt. Travelling around.
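A small SymPy sketch (symbol names chosen only for illustration) confirms this computation: the speed along x = a cos t, y = a sin t simplifies to the constant a, and integrating it over one revolution gives the circumference 2 pi a.

```python
import sympy as sp

t, a = sp.symbols('t a', positive=True)
x = a * sp.cos(t)
y = a * sp.sin(t)

# ds/dt = sqrt( (dx/dt)^2 + (dy/dt)^2 )
speed = sp.simplify(sp.sqrt(sp.diff(x, t)**2 + sp.diff(y, t)**2))
print(speed)                                  # a  -- constant speed
print(sp.integrate(speed, (t, 0, 2*sp.pi)))   # 2*pi*a  -- arc length of the full circle
```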
And that means that we can play around with the speed. And I just want to point out. So the standard thing, what you'll have to get used to, and this is a standard presentation, you'll see this everywhere. In your physics classes and your other math classes, if you want to change the speed, so a new speed going around this would be, if I set up the equations this way. Now I'm tracing around the same circle. But the speed is going to turn out to be, if you figure it out, there'll be an extra factor of k. So it'll be a k. That's what we'll work out to be the speed. Provided k is positive and a is positive. So we're making these conventions. The constants that we're using are positive.
Now, that's the first and most basic example. The one that comes up constantly. Now, let me just make those comments about notation that I wanted to make. And we've been treating these squared differentials here for a little while and I just want to pay attention a little bit more carefully to these manipulations. And what's allowed and what's not. And what's justified and what's not. So the basis for this was this approximate calculation that we had, that delta s ^2 was delta x ^2 + delta y ^2. This is how we justified the arc length formula before. And let me just show you that the formula that I have up here, this basic formula for arc length in the parametric form, follows just as the other one did. And now I'm going to do it slightly more rigorously.
I do the division really in disguise before I take the limit of the infinitesimal. So all I'm really doing is I'm doing this. Dividing through by this, and sorry this is still approximately equal. So I'm not dividing by something that's 0 or infinitesimal. I'm dividing by something non-0. And here I have (delta x/ delta t) ^2 + (delta y / delta t) ^2. And then in the limit, I have ds / dt equal to the square root of this guy. Or, if you like, the square of it, so. So it's legal to divide by something that's almost 0 and then take the limit as we go to 0. This is really what derivatives are all about. That we get a limit here. As the denominator goes to 0. Because the numerator's going to 0 too. So that's the notation.
And now I want to warn you, maybe just a little bit, about misuses, if you like, of the notation. We don't do absolutely everything this way. This expression that came up with the squares, you should never write it as this. This, put it on the board but very quickly, never. OK. Don't do that. We use these square differentials, but we don't do it with these ratios here. But there was another place which is slightly confusing. It looks very similar, where we did use the square of the differential in a denominator. And I just want to point out to you that it's different. It's not the same. And it is OK. And that was this one. This thing here. This is a second derivative, it's something else. And it's got a dt squared in the denominator. So it looks rather similar. But what this represents is the quantity (d / dt) ^2 applied to x. And you can see where the squares came in: the d and the dt each got squared. And then there's also an x over here.
So that's legal. Those are notations that we do use. And we can even calculate this. It has a perfectly good meaning. It's the same as the derivative with respect to t of the derivative of x, which we already know was - sine. Sorry, a sine t, I guess. Not this example, but the previous one. Up here. So the derivative is this and so I can differentiate a second time. And I guess - a cosine t. So that's a perfectly legal operation. Everything in there makes sense. Just don't use that. There's another really unfortunate thing, right which is that the 2 creeps in funny places with signs. You have sin^2. It would be out here, it comes up here for some strange reason. This is just because typographers are lazy or somebody somewhere in the history of mathematical typography decided to let the 2 migrate. It would be like putting the 2 over here. There's inconsistency in mathematics right. We're not perfect and people just develop these notations. So we have to live with them. The ones that people accept as conventions.
The next example that I want to give you is just slightly different. It'll be a non-constant speed parameterization. Here x = 2 sine t. And y = say, cosine t. And let's keep track of what this one does. Now, this is a skill which I'm going to ask you about quite a bit. And it's one of several skills. You'll have to connect this with some kind of rectangular equation. An equation for x and y. And we'll be doing a certain amount of this today. In another context. Right here, to see the pattern, we know that the relationship we're going to want to use is that sin^2 + cos^2 = 1. So in fact the right thing to do here is to take 1/4 x ^2 + y ^2. And that's going to turn out to be sin ^2 t + cos ^2 t. Which is 1. So there's the equation. Here's the rectangular equation for this parametric curve. And this describes an ellipse.
That's not the only information that we can get here. The other information that we can get is this qualitative information of where we start, where we're going, the direction. It starts out, I claim, at t = 0. That's because when t = 0, this is (2 sine 0, cosine 0), right? (2 sine 0, cosine 0) = the point (0, 1). So it starts up, up here. At (0, 1). And then the next little place, so this is one thing that certainly you want to do. t = pi / 2 is maybe the next easy point to plot. And that's going to be (2 sine pi / 2, cosine pi / 2). And that's just (2, 0). And so that's over here somewhere. This is (2, 0). And we know it travels along the ellipse. And we know the minor axis is 1, and the major axis is 2, so it's doing this.
So this is what happens at t = 0. This is where we are at t = pi / 2. And it continues all the way around, etc. To the rest of the ellipse. This is the direction. So this one happens to be clockwise.
Alright, now let's keep track of its speed. Let's keep track of the speed, and also the arc length. So the speed is the square root of the sum of the squares of the derivatives here. That would be the square root of (2 cosine t) ^2 + (sine t) ^2. And the arc length is what? Well, if we want to go all the way around, we need to know that that takes a total of 2 pi. So 0 to 2 pi. And then we have to integrate ds, which is this expression. Or ds/ dt, dt. So that's the square root of 4 cosine^2 t + sine ^2 t dt.
The bad news, if you like, is that this is not an elementary integral. In other words, no matter how long you try to figure out how to antidifferentiate this expression, no matter how many substitutions you try, you will fail. That's the bad news. The good news is this is not an elementary integral. It's not an elementary integral. Which means that this is the answer to a question. Not something that you have to work on. So if somebody asks you for this arc length, you stop here. That's the answer, so it's actually better than it looks. And we'll try to -- I mean, I don't expect you to know already what all of the integrals are that are impossible. And which ones are hard and which ones are easy. So we'll try to coach you through when you face these things. It's not so easy to decide. I'll give you a few clues, but. OK. So this is the arc length.
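Although the integral has no elementary antiderivative, it is easy to evaluate numerically. A short sketch using SciPy's quad routine (variable names are illustrative) gives the perimeter of this ellipse, roughly 9.69.

```python
import math
from scipy.integrate import quad

# speed along x = 2 sin t, y = cos t
speed = lambda t: math.sqrt(4 * math.cos(t)**2 + math.sin(t)**2)

length, err = quad(speed, 0, 2 * math.pi)
print(length)   # ~9.688, the perimeter of the ellipse (x/2)^2 + y^2 = 1
```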
Now, I want to move on to the last thing that we did. Last type of thing that we did last time. Which is the surface area. And yeah, question.
PROFESSOR: The question, this is a good question. The question is, when you draw the ellipse, do you not take into account what t is. The answer is that this is in disguise. What's going on here is we have a trouble with plotting in the plane what's really happening. So in other words, it's kind of in trouble. So the point is that we have two functions of t, not one. x ( t) and y ( t). So one thing that I can do if I plot things in the plane. In other words, the main point to make here is that we're not talking about the situation. y is a function of x. We're out of that realm now. We're somewhere in a different part of the universe in our thought. And you should drop this point of view. So this depiction is not y as a function of x. Well, that's obvious because there are two values here, as opposed to one. So we're in trouble with that. And we have that background parameter, and that's exactly why we're using it. This parameter t. So that we can depict the entire curve. And deal with it as one thing.
So since I can't really draw it, and since t is nowhere on the map, you should sort of imagine it as time, and there's some kind of trajectory which is travelling around. And then I just labelled a couple of the places. If somebody asked you to draw a picture of this, well, I'll tell you exactly where you need the picture in just one second, alright. It's going to come up right now in surface area. But otherwise, if nobody asks you to, you don't even have to put down t = 0 and t = pi / 2 here. Because nobody demanded it of you. Another question.
PROFESSOR: So, another very good question which is exactly connected to this picture. So how is it that we're going to use the picture, and how is it we're going to use the notion of the t. The question was, why is this from t = 0 to t = 2 pi? That does use the t information on this diagram. The point is, we do know that t starts here. This is pi / 2, this is pi, this is 3 pi / 2, and this is 2 pi. When you go all the way around once, it's going to come back to itself. These are periodic functions of period 2 pi. And they come back to themselves exactly at 2 pi. And so that's why we know in order to get around once, we need to go from 0 to 2 pi. And the same thing is going to come up with surface area right now. That's going to be the issue, is what range of t we're going to need when we compute the surface area.
PROFESSOR: In a question, what you might be asked is what's the rectangular equation for a parametric curve? So that would be 1/4 x^2 + y ^2 = 1. And then you might be asked, plot it. Well, that would be a picture of the ellipse. OK, those are types of questions that are legal questions.
PROFESSOR: The question is, do I need to know any specific formulas? Any formulas that you know and remember will help you. They may be of limited use. I'm not going to ask you to memorize anything except, I guarantee you that the circle is going to come up. Not the ellipse, the circle will come up everywhere in your life. So at least at MIT, your life at MIT. We're very round here. Yeah, another question.
STUDENT: I'm just a tiny bit confused back to the basics. This is more a question from yesterday, I guess. But when you have your original ds^2= dx^2 + dy ^2, and then you integrate that to get arc length, how are you, the integral has dx's and dy's. So how are you just integrating with respect to dx?
PROFESSOR: OK, the question is how are we just integrating with respect to x? So this is a question which goes back to last time. And what is it with arc length. so. I'm going to have to answer that question in connection with what we did today. So this is a subtle question. But I want you to realize that this is actually an important conceptual step here. So shhh, everybody, listen.
If you're representing one-dimensional objects, which are curves, maybe, in space. Or in two dimensions. When you're keeping track of arc length, you're going to have to have an integral which is with respect to some variable. But that variable, you get to pick. And we're launching now into this variety of choices of variables with respect to which you can represent something. Now, there are some disadvantages on the circle to representing things with respect to the variable x. Because there are two points on the circle here. On the other hand, you actually can succeed with half the circle. So you can figure out the arc length that way. And then you can set it up as an integral dx. But you can also set it up as an integral with respect to any parameter you want. And the uniform parameter is perhaps the easiest one. This one is perhaps the easiest one.
And so now the thing that's strange about this perspective, and I'm going to make this point later in the lecture as well. Is that the letters x and y, as I say, you should drop this notion that y is a function of x. This is what we're throwing away at this point. What we're thinking of is, you can describe things in terms of any coordinate you want. You just have to say what each one is in terms of the others. And these x and y over here are where we are in the Cartesian coordinate system. They're not, and in this case they're functions of some other variable. Some other variable. So they're each functions. So the letters x and y just changed on you. They mean something different. x is no longer the variable. It's the function. Right?
You're going to have to get used to that. That's because we run out of letters. And we kind of want to use all of them the way we want. I'll say some more about that later.
So now I want to do this surface area example. I'm going to just take the surface area of the ellipsoid. The surface of the ellipsoid formed by revolving this previous example, which was Example 2. Around the y axis. So we want to set up that surface area integral here for you. Now, I remind you that the area element looks like this. If you're revolving around the y axis, that means you're going around this way and you have some curve. In this case it's this piece of an ellipse. If you sweep it around you're going to get what's called an ellipsoid. And there's a little chunk here, that you're wrapping around. And the important thing you need besides this ds, this arc length piece over here, is the distance to the axis. So that's this horizontal distance here. I'll draw it in another color. And that horizontal distance now has a name. And this is, again, the virtue of this coordinate system. The t is something else. This has a name. This distance has a name. This distance is called x.
And it even has a formula. Its formula is 2 sin t. In terms of t. So the full formula up for the integral here is, I have to take the circumference when I spin this thing around. And this little arc length element. So I have here 2 pi ( 2 sin t). That's the x variable here. And then I have here ds, which is kind of a mess. So unfortunately I don't quite have room for it. Plan ahead. Square root of 4 cos^2 t + sin^2 t, is that what it was, dt? Alright, I guess I squeezed it in there. So that was the arc length, which I re-copied from this board above. That was the ds piece. It's this whole thing including the dt. That's the answer except for one thing. What else do we need? We don't just need the integrand, this is half of setting up an integral. The other half of setting up an integral is the limits. We need specific limits here. Otherwise we don't have a number that we can get out.
So we now have to think about what the limits are. And maybe somebody can see. It has something to do with this diagram of the ellipse over here. Can somebody guess what it is? 0 to pi. Well, that was quick. That's it. Because we go from the top to the bottom, but we don't want to continue around. We don't want to go from 0 to 2 pi, because that would be duplicating what we're going to get when we spin around. And we know that we start at 0. It's interesting because it descends; when you change variables to think of it in terms of the y variable, it's going the opposite way. But anyway, just one piece of this is what we want.
So that's this setup. And now I claim that this is actually a doable integral. However, it's long. I'm going to spare you, I'll just tell you how you would get started. You would use the substitution u = cos t. And then the du is going to be - sin t dt. But then, unfortunately, there's a lot more. There's another trig substitution with some other multiple of the cosine and so forth. So it goes on and on. If you want to check it yourself, you can. There's an inverse trig substitution which isn't compatible with this one. But it can be done. Calculated. In elementary terms. Yeah, another question.
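For reference, the same integral can also be evaluated numerically rather than by the substitutions just sketched. A small SciPy sketch (names illustrative) gives a surface area of about 34.7 for this ellipsoid of revolution.

```python
import math
from scipy.integrate import quad

# surface area element: 2*pi * x(t) * ds/dt, with x(t) = 2 sin t
integrand = lambda t: 2 * math.pi * 2 * math.sin(t) * math.sqrt(
    4 * math.cos(t)**2 + math.sin(t)**2)

area, err = quad(integrand, 0, math.pi)   # t runs from 0 to pi, top to bottom
print(area)                               # ~34.7
```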
PROFESSOR: So, if you get this on an exam, I'm going to have to coach you through it. Either I'm going to have to tell you don't evaluate it or, you're going to have to work really hard. Or here's the first step, and then the next step is, keep on going. Or something. I'll have to give you some cues. Because it's quite long. This is way too long for an exam, this particular one. OK. It's not too long for a problem set. This is where I would leave you off if I were giving it to you on a problem set. Just to give you an idea of the order of magnitude. Whereas one of the ones that I did yesterday, I wouldn't even give you on a problem set, it was so long.
So now, our next job is to move on to polar coordinates. Now, polar coordinates involve the geometry of circles. As I said, we really love circles here. We're very round. Just as I love 0, the rest of the Institute loves circles. So we're going to do that right now.
What we're going to talk about now is polar coordinates. Which are set up in the following way. It's a way of describing the points in the plane. Here is a point in a plane, and here's what we think of as the usual x-y axes. And now this point is going to be described by a different pair of coordinates, different pair of numbers. Namely, the distance to the origin. And the second parameter here, second number here, is this angle theta. Which is the angle of ray from origin with the horizontal axis. So that's what it is in language. And you should put this in quotation marks, because it's not a perfect match. This is geometrically what you should always think of, but the technical details involve dealing directly with formulas.
The first formula is the formula for x. And this is the fundamental, these two are the fundamental ones. Namely, x = r cos theta. The second formula is the formula for y, which is r sin theta. So these are the unambiguous definitions of polar coordinates. This is it. And this is the thing from which all other almost correct statements almost follow. But this is the one you should trust always. This is the unambiguous statement.
So let me give you an example something that's close to being a good formula and is certainly useful in its way. Namely, you can think of r as being the square root of x ^2 + y ^2. That's easy enough to derive, it's the distance to the origin. That's pretty obvious. And the formula for theta, which you can also derive, which is that it's the inverse tangent of y / x. However, let me just warn you that these formulas are slightly ambiguous. So somewhat ambiguous. In other words, you can't just apply them blindly. You actually have to look at a picture in order to get them right. In particular, r could be plus or minus here. And when you take the inverse tangent, there's an ambiguity between, it's the same as the inverse tangent of - y / - x. So these minus signs are a plague on your existence. And you're not going to get a completely unambiguous answer out of these formulas without paying attention to the diagram. On the other hand, the formula up in the box there always works. So when people mean polar coordinates, they always mean that. And then they have conventions, which sometimes match things up with the formulas over on this next board.
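One way to sidestep the sign ambiguity in practice is to use a two-argument arctangent, which picks the correct quadrant from the signs of x and y. Here is a minimal Python sketch of the conversions in both directions (the function names are illustrative).

```python
import math

def to_polar(x, y):
    """Return (r, theta) with r >= 0 and -pi < theta <= pi."""
    r = math.hypot(x, y)
    theta = math.atan2(y, x)   # atan2 resolves the quadrant ambiguity of tan^-1(y/x)
    return r, theta

def to_cartesian(r, theta):
    """The unambiguous definitions: x = r cos theta, y = r sin theta."""
    return r * math.cos(theta), r * math.sin(theta)

print(to_polar(1, -1))   # (1.414..., -0.785...) i.e. (sqrt(2), -pi/4)
```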
Let me give you various examples here first. But maybe first I should I should draw the two coordinate systems. So the coordinate system that we're used to is the rectangular coordinate system. And maybe I'll draw it in orange and green here. So these are the coordinate lines y = 0, y = 1, y = 2. That's how the coordinate system works. And over here we have the rest of the coordinate system. And this is the way we're thinking of x and y now. We're no longer thinking of y as a function of x and x as a function of y, we're thinking of x as a label of a place in a plane. And y as a label of a place in a plane.
So here we have x = 0, x = 1, x = 2, etc. Here's x = - 1. So forth. So that's what the rectangular coordinate system looks like. And now I should draw the other coordinate system that we have. Which is this guy here. Well, close enough. And these guys here. Kind of this bulls-eye or target operation. And this one is, say, theta = pi / 2. This is theta = 0. This is theta = - pi / 4. For instance, so I've just labeled for you three of the rays on this diagram. It's kind of like a radar screen. And then in pink, this is maybe r = 2, the radius 2. And inside is r = 1. So it's a different coordinate system for the plane. And again, the letter r represents measuring how far we are from the origin. The theta represents something about the angle, which ray we're on. And they're just two different variables. And this is a very different kind of coordinate system.
OK so, our main job is just to get used to this. For now. You will be using this a lot in 18.02. It's very useful in physics. And our job is just to get started with it. And so, let's try a few examples here. Tons of examples. We'll start out very slow. If you have (x, y) = (1, - 1), that's a point in the plane. I can draw that point. It's down here, right? This is - 1 and this is 1, and here's my point, (1, - 1). I can figure out what the representative is of this in polar coordinates. So in polar coordinates, there are actually a bunch of choices here.
First of all, I'll tell you one choice. If I start with the angle horizontally, I wrap all the way around. That would be to this ray here, let's do it in green again. Alright, I labeled it actually as - pi / 4, but another way of looking at over here it is that it's this angle here. So that would be r = square root of 2. Theta = 7 pi / 4. So that's one possibility of the angle and the distance. I know the distance is a square root of 2, that's not hard.
Another way of looking at it is the way which was suggested when I labeled this with a negative angle. And that would be r = square root of 2, theta = - pi / 4. And these are both legal. These are perfectly legal representatives. And that's what I meant by saying that these representations over here are somewhat ambiguous. There's more than one answer to this question, of what the polar representation is.
A third possibility, which is even more dicey but also legal, is r = - square root of 2. Theta = 3 pi / 4. Now, what that corresponds to doing is going around to here. We're pointing out 3/4 pi, direction. But then going negative square root of 2, distance. We're going backwards. So we're landing in the same place. So this is also legal. Yeah.
PROFESSOR: The question is, don't the radiuses have to be positive because they represent a distance to the origin? The answer is I lied to you here. All of these things that I said are wrong, except for this. Which is the rule for what polar coordinates mean. So it's maybe plus or minus the distance, is what it is always. I try not to lie to you too much, but I do succeed. Now, let's do a little bit more practice here.
There are some easy examples, which I will run through very quickly. r = a, we already know this is a circle. And then theta = a constant is a ray. However, this involves an implicit assumption, which I want to point out to you. So this is Example 3. Theta equal to a constant is a ray. But this implicitly assumes 0 <= r < infinity. If you really wanted to allow minus infinity < r < infinity in this example, you would get a line. Gives the whole line. It gives everything behind. So you go out on some ray, you go backwards on that ray and you get the whole line through the origin, both ways. If you allow r going to minus infinity as well.
So the typical conventions, so here are the typical conventions. And you will see people assume this without even telling you. So you need to watch out for it. The typical conventions are certainly this one, which is a nice thing to do. Pretty much all the time, although not all the time. Most of the time. And then you might have theta ranging from minus pi to pi, so in other words symmetric around 0. Or, another very popular choice is this one. Theta's >= 0 and strictly less than 2 pi. So these are the two typical ranges in which all of these variables are chosen. But not always. You'll find that it's not consistent.
As I said, our job is to get used to this. And I need to work up to some slightly more complicated examples. Some of which I'll give you on next Tuesday. But let's do a few more. So, I guess this is Example 4. Example 4, I'm going to take y = 1. That's awfully simple in rectangular coordinates. But interestingly, you might conceivably want to deal with it in polar coordinates. If you do, so here's how you make the translation. But this translation is not so terrible. What you do is, you plug in y = r sin theta. That's all you have to do. And so that's going to be equal to 1. And that's going to give us our polar equation. The polar equation is r = 1 / sin theta. There it is. And let's draw a picture of it. So here's a picture of the line y = 1. And now we see that if we take our rays going out from here, they collide with the line at various lengths.
So if you take an angle, theta, here there'll be a distance r corresponding to that and you'll hit this in exactly one spot. For each theta you'll have a different radius. And it's a variable radius. It's given by this formula here. And so to trace this line out, you actually have to realize that there's one more thing involved. Which is the possible range of theta. Again, when you're doing integrations you're going to need to know those limits of integration. So you're going to need to know this. The range here goes from theta = 0, that's sort of when it's out at infinity. That's when the denominator is 0 here. And it goes all the way to pi. Swing around just one half-turn. So the range here is 0 < theta < pi. Yeah, question.
PROFESSOR: The question is, is it typical to express r as a function of theta, or vice versa, or does it matter? The answer is that for the purposes of this course, we're almost always going to be writing things in this form. r as a function of theta. And you can do whatever you want. This turns out to be what we'll be doing in this course, exclusively. As you'll see when we get to other examples, it's the traditional sort of thing to do when you're thinking about observing a planet or something like that. You see the angle, and then you guess how far away it is. But it's not necessary. The formulas are often easier this way. For the examples that we have. Because it's usually a trig function of theta. Whereas the other way, it would be an inverse trig function. So it's an uglier expression. As you can see. The real reason is that we choose this thing that's easier to deal with.
So now let me give you a slightly more complicated example of the same type. Where we use a shortcut. This is a standard example. And it comes up a lot. And so this is an off-center circle. A circle is really easy to describe, but not necessarily if the center is on the rim of the circle. So that's a different problem. And let's do this with a circle of radius a. So this is the point (a, 0) and this is (2a, 0). And actually, if you know these two numbers, you'll be able to remember the result of this calculation. Which you'll do about five or six times and then finally you'll memorize it during 18.02 when you will need it a lot. So this is a standard calculation here. So the starting place is the rectangular equation. And we're going to pass to the polar representation. The rectangular representation is (x - a) ^2 + y ^2 = a ^2. So this is a circle centered at (a, 0) of radius a.
And now, if you like, the slow way of doing this would be to plug in x = r cos theta, y = r sin theta. The way I did in this first step. And that works perfectly well. But I'm going to do it more quickly than that. Because I can sort of see in advance how it's going to work. I'm just going to expand this out. And now I see the a ^2's cancel. And not only that, but x^2 + y ^2 = r ^2. So this becomes r ^2. That's x ^2 + y ^2 - 2ax = 0.
The r came from the fact that r ^2 = x ^2 + y ^2. So I'm doing this the rapid way. You can do it by plugging in, as I said. r equals. So now that I've simplified it, I am going to use that procedure. I'm going to plug in. So here I have r ^2 - 2a r cos theta = 0. I just plugged in for x. As I said, I could have done that at the beginning. I just simplified first. And now, this is the same thing as r ^2 = 2ar cos theta. And we're almost done. There's a boring part of this equation, which is r = 0. And then there's, if I divide by r, there's the interesting part of the equation. Which is this. So this is r = 2a cos theta, or r = 0. Which is already included in that equation anyway.
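A quick numerical check of the polar equation just derived, as a Python sketch with an illustrative value of a: every point of r = 2a cos theta should satisfy the rectangular equation (x - a)^2 + y^2 = a^2.

```python
import math

a = 3.0
for theta in [0.0, 0.5, 1.0, 1.4]:           # angles inside (-pi/2, pi/2)
    r = 2 * a * math.cos(theta)              # polar equation of the off-center circle
    x, y = r * math.cos(theta), r * math.sin(theta)
    print(round((x - a)**2 + y**2 - a**2, 12))   # ~0 for every theta
```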
So I'm allowed to divide by r because in the case of r = 0, this is represented anyway. Question.
PROFESSOR: r = 0 is just one case. That is, it's the union of these two. It's both. Both are possible. So r = 0 is one point on it. And this is all of it. So we can just ignore this. So now I want to say one more important thing. You need to understand the range of this. So wait a second and we're going to figure out the range here. The range is very important, because otherwise you'll never be able to integrate using this representation here. So this is the representation. But notice when theta = 0, we're out here at 2a. That's consistent, and that's actually how you remember this factor 2a here. Because if you remember this picture and where you land when theta = 0. So that's the theta = 0 part. But now as I tip up like this, you see that when we get to vertical, we're done. With the circle. It's gotten shorter and shorter and shorter, and at theta = pi / 2, we're down at 0. Because that's cos pi / 2 = 0. So it swings up like this. And it gets up to pi / 2. Similarly, we swing down like this. And then we're done. So the range is - pi / 2 < theta < pi / 2. Or, if you want to throw in the r = 0 case, you can throw in this, this is repeating, if you like, at the ends. So this is the range of this circle. And let's see. Next time we'll figure out area in polar coordinates. | http://ocw.mit.edu/courses/mathematics/18-01-single-variable-calculus-fall-2006/video-lectures/lecture-32-polar-coordinates/ | 13
75 | Department of Mathematics
York College (CUNY)
Jamaica, New York 11451
Given a triangle there are lines associated with the triangle, whether one is talking about a triangle in the Euclidean plane or the Taxicab plane, which are of importance for historical and practical reasons. These lines include the median, altitude, perpendicular bisector of a side, a perimeter bisector, and angle bisector. My goal here is to briefly call attention to these lines and some of their properties.
Figure 1 shows a typical triangle ABC in the Euclidean or Taxicab plane.
Note this particular triangle is an acute angle triangle but one of the angles might be an obtuse angle. One could also have a right triangle where one of the angles of the triangle is a right angle. No triangle in the Taxicab or Euclidean plane can have two right angles or two obtuse angles. This is because in these planes the sum of the angles in a triangle is 180 degrees. What is shown in Figure 1 is a line segment AM joining vertex A to the midpoint M of the side BC of the triangle. The line segment AM is known as the median from vertex A to the opposite side, in this case CB. A triangle has three medians, one from each vertex to the side opposite it. The medians of a triangle intersect in a single point which is the centroid of the triangle. If one supports a triangle and its interior with a pencil point at the centroid, it will exactly balance (but be in unstable equilibrium).
Although the median is treated as a segment, it is often convenient to think of this segment as being part of the infinite line which contains this segment. The same can be said for the other lines we will shortly define. The centroid will always be an interior point of the triangle, though this will not always be true for the other special points which we will shortly discuss. Also, the centroid point lies 2/3 of the way from each vertex to the midpoint of the opposite side along the median line.
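A small Python sketch (with arbitrarily chosen vertex coordinates) illustrates both facts: the centroid is the average of the three vertices, and it sits 2/3 of the way from a vertex to the midpoint of the opposite side.

```python
def centroid(A, B, C):
    """Centroid of a triangle: the average of the three vertex coordinates."""
    return tuple((a + b + c) / 3 for a, b, c in zip(A, B, C))

A, B, C = (0, 0), (6, 0), (0, 9)
M = ((B[0] + C[0]) / 2, (B[1] + C[1]) / 2)       # midpoint of side BC
G = centroid(A, B, C)
print(G)                                          # (2.0, 3.0)

# G lies 2/3 of the way from A to M along the median AM:
print((A[0] + 2/3 * (M[0] - A[0]), A[1] + 2/3 * (M[1] - A[1])))   # (2.0, 3.0)
```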
Another collection of important lines for a triangle are the perpendicular bisectors of the sides of the triangle. In Figure 1 we can draw a line through M which is perpendicular to the side BC of the triangle. The points on this perpendicular bisector are equidistant (Euclidean distance) from the points B and C. Similarly, there are two other lines which are the perpendicular bisectors of the sides AB and AC. These three lines meet in a single point O, which does not necessarily lie in the interior of the triangle (for an obtuse triangle the point is outside the triangle and for a right triangle it lies at the midpoint of the hypotenuse of the triangle). The point where the perpendicular bisectors of the sides of a triangle meet is the center of a circle which passes through the vertices of the triangle. This circle is known as the circumcircle for the triangle. Since, in general, three points not on a straight line form a triangle, if one wants to find a circle which passes through the three points, one can find such a circle by finding the point O where the perpendicular bisectors of the sides meet, using as the radius of the circle the distance from O to any of the three points.
Another surprise concerns the altitudes of a triangle. (It is convenient here to think of an altitude as a line rather than a segment.) Given the vertex A of triangle ABC we can draw a line through A perpendicular to the side opposite A, BC as in Figure 4. Sometimes it is necessary for the side BC to be extended beyond vertices B and C for this perpendicular line to intersect the opposite side. This happens in the case where the angle at B or C is obtuse. (If the angle at B or C is obtuse, then A must be an acute angle since there can be no more than one obtuse angle in a triangle.) The point where the three altitudes of a triangle meet is called the orthocenter of the triangle (Figure 5).
Given a triangle, one can bisect its interior angles. The resulting lines are called the angle bisectors; they are concurrent at a single point. If one takes a point on an angle bisector at vertex A and drops a perpendicular to the sides of the triangle that meet at A, one gets two equal (Euclidean distance) length segments. The point where the three angle bisectors meet is the center of a circle which will be tangent to the three sides of the triangle. This circle is known as the incircle of the triangle (Figure 6).
All of these situations create a certain sense of "wonder" because perhaps the fact that the lines involved are concurrent is unexpected. Historically, these special points were known to the Greek geometers, except for the orthocenter. The first proofs of the concurrency of the altitudes seems to date from the 18th century.
A general tool for proving concurrency of lines through the three vertices of a triangle is usually attributed to Giovanni Ceva (1647-1734).
Ceva's Theorem states that the lines CF, AD, and BE are concurrent at O if and only if the lengths of the segments obey: (AF)(DB)(CE) = (FB)(DC)(EA). (Sometimes the theorem is stated in terms of directed line segments.)
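A short Python sketch (with an arbitrarily chosen triangle) illustrates the criterion for the three medians: their feet are the midpoints of the sides, so each ratio equals 1 and the product equals 1, consistent with the medians being concurrent at the centroid.

```python
import math

def dist(P, Q):
    return math.hypot(P[0] - Q[0], P[1] - Q[1])

# Triangle and the feet of the three medians (midpoints of the sides)
A, B, C = (0, 0), (7, 1), (2, 6)
D = ((B[0] + C[0]) / 2, (B[1] + C[1]) / 2)   # on BC, foot of the cevian from A
E = ((C[0] + A[0]) / 2, (C[1] + A[1]) / 2)   # on CA, foot of the cevian from B
F = ((A[0] + B[0]) / 2, (A[1] + B[1]) / 2)   # on AB, foot of the cevian from C

# Ceva's criterion: (AF/FB) * (BD/DC) * (CE/EA) = 1 exactly when the cevians are concurrent
ratio = (dist(A, F) / dist(F, B)) * (dist(B, D) / dist(D, C)) * (dist(C, E) / dist(E, A))
print(ratio)   # 1.0 (up to rounding error)
```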
Coxeter, H. S. M. and S. Greitzer, Geometry Revisited, MAA, Washington, 1967. | http://www.york.cuny.edu/~malk/geometricstructures/triangle-centers.html | 13 |
92 | Deoxyribonucleic acid (DNA) is a nucleic acid that contains the genetic instructions used in the development and functioning of all known living organisms. The main role of DNA molecules is the long-term storage of information. DNA is often compared to a set of blueprints, since it contains the instructions needed to construct other components of cells, such as proteins and RNA molecules. The DNA segments that carry this genetic information are called genes, but other DNA sequences have structural purposes, or are involved in regulating the use of this genetic information.
Chemically, DNA is a long polymer of simple units called nucleotides, with a backbone made of sugars (deoxyribose) and phosphate groups joined by ester bonds. Attached to each sugar is one of four types of molecules called bases. It is the sequence of these four bases along the backbone that encodes information. This information is read using the genetic code, which specifies the sequence of the amino acids within proteins. The code is read by copying stretches of DNA into the related nucleic acid RNA, in a process called transcription. Most of these RNA molecules are used to synthesize proteins, but others are used directly in structures such as ribosomes and spliceosomes. RNA also serves as a genetic blueprint for certain viruses.
Within cells, DNA is organized into structures called chromosomes. These chromosomes are duplicated before cells divide, in a process called DNA replication. Eukaryotic organisms such as animals, plants, and fungi store their DNA inside the cell nucleus, while in prokaryotes such as bacteria, which lack a cell nucleus, it is found in the cell's cytoplasm. Within the chromosomes, chromatin proteins such as histones compact and organize DNA, which helps control its interactions with other proteins and thereby control which genes are transcribed. Some eukaryotic cell organelles, mitochondria and chloroplasts, also contain DNA, giving rise to the endosymbiotic theory that these organelles may have arisen from prokaryotes in a symbiotic relationship.
The identification of DNA, combined with human creativity, has been of tremendous importance not only for understanding life but for practical applications in medicine, agriculture, and other areas. Technologies have been developed using recombinant DNA to mass produce medically important proteins, such as insulin, and have found application in agriculture to make plants with desirable qualities. Through understanding the alleles that one is carrying for particular genes, one can gain an understanding of the probability that one's offspring may inherit certain genetic disorders, or one's own predisposition for a particular disease. DNA technology is used in forensics, anthropology, and many other areas as well.
DNA and the biological processes centered on its activities (translation, transcription, replication, genetic recombination, and so forth) are amazing in their complexity and coordination. The presence of DNA also reflects on the unity of life, since organisms share nucleic acids as genetic blueprints and share a nearly universal genetic code. On the other hand, the discovery of DNA has at times led to an overemphasis on DNA to the point of believing that life can be totally explained by physico-chemical processes alone.
DNA was first isolated by the Swiss physician Friedrich Miescher who, in 1869, discovered a microscopic substance in the pus of discarded surgical bandages. As it resided in the nuclei of cells, he called it "nuclein." In 1919, this discovery was followed by Phoebus Levene's identification of the base, sugar, and phosphate nucleotide unit. Levene suggested that DNA consisted of a string of nucleotide units linked together through the phosphate groups. However, Levene thought the chain was short and the bases repeated in a fixed order. In 1937, William Astbury produced the first X-ray diffraction patterns that showed that DNA had a regular structure.
In 1928, Frederick Griffith discovered that traits of the "smooth" form of the Pneumococcus bacteria could be transferred to the "rough" form of the same bacteria by mixing killed "smooth" bacteria with the live "rough" form. This system provided the first clear suggestion that DNA carried genetic information, when Oswald Theodore Avery, along with coworkers Colin MacLeod and Maclyn McCarty, identified DNA as the transforming principle in 1943. DNA's role in heredity was confirmed in 1953, when Alfred Hershey and Martha Chase, in the Hershey-Chase experiment, showed that DNA is the genetic material of the T2 phage.
In 1953, based on X-ray diffraction images taken by Rosalind Franklin and the information that the bases were paired, James D. Watson and Francis Crick suggested what is now accepted as the first accurate model of DNA structure in the journal Nature. Experimental evidence for Watson and Crick's model was published in a series of five articles in the same issue of Nature. Of these, Franklin and Raymond Gosling's paper was the first publication of X-ray diffraction data that supported the Watson and Crick model. This issue also contained an article on DNA structure by Maurice Wilkins and his colleagues. In 1962, after Franklin's death, Watson, Crick, and Wilkins jointly received the Nobel Prize in Physiology or Medicine. However, speculation continues on who should have received credit for the discovery, as it was based on Franklin's data.
In an influential presentation in 1957, Crick laid out the "Central Dogma" of molecular biology, which foretold the relationship between DNA, RNA, and proteins, and articulated the "adaptor hypothesis". Final confirmation of the replication mechanism that was implied by the double-helical structure followed in 1958 through the Meselson-Stahl experiment. Further work by Crick and coworkers showed that the genetic code was based on non-overlapping triplets of bases, called codons, allowing Har Gobind Khorana, Robert W. Holley, and Marshall Warren Nirenberg to decipher the genetic code. These findings represent the birth of molecular biology.
Physical and chemical properties
DNA is a long polymer made from repeating units called nucleotides. The DNA chain is 22 to 26 Ångströms wide (2.2 to 2.6 nanometres), and one nucleotide unit is 3.3 Ångstroms (0.33 nanometres) long. Although each individual repeating unit is very small, DNA polymers can be enormous molecules containing millions of nucleotides. For instance, the largest human chromosome, chromosome number 1, is 220 million base pairs long.
In living organisms, DNA does not usually exist as a single molecule, but instead as a tightly-associated pair of molecules. These two long strands entwine like vines, in the shape of a double helix. The nucleotide repeats contain both the segment of the backbone of the molecule, which holds the chain together, and a base, which interacts with the other DNA strand in the helix. In general, a base linked to a sugar is called a nucleoside and a base linked to a sugar and one or more phosphate groups is called a nucleotide. If multiple nucleotides are linked together, as in DNA, this polymer is referred to as a polynucleotide.
The backbone of the DNA strand is made from alternating phosphate and sugar residues. The sugar in DNA is 2-deoxyribose, which is a pentose (five-carbon) sugar. The sugars are joined together by phosphate groups that form phosphodiester bonds between the third and fifth carbon atoms of adjacent sugar rings. These asymmetric bonds mean a strand of DNA has a direction. In a double helix, the direction of the nucleotides in one strand is opposite to their direction in the other strand. This arrangement of DNA strands is called antiparallel. The asymmetric ends of DNA strands are referred to as the 5′ (five prime) and 3′ (three prime) ends. One of the major differences between DNA and RNA is the sugar, with 2-deoxyribose being replaced by the alternative pentose sugar ribose in RNA.
The DNA double helix is stabilized by hydrogen bonds between the bases attached to the two strands. The four bases found in DNA are adenine (abbreviated A), cytosine (C), guanine (G), and thymine (T). These four bases are shown below and are attached to the sugar/phosphate to form the complete nucleotide, as shown for adenosine monophosphate.
These bases are classified into two types; adenine and guanine are fused five- and six-membered heterocyclic compounds called purines, while cytosine and thymine are six-membered rings called pyrimidines. A fifth pyrimidine base, called uracil (U), usually takes the place of thymine in RNA and differs from thymine by lacking a methyl group on its ring. Uracil is not usually found in DNA, occurring only as a breakdown product of cytosine, but a very rare exception to this rule is a bacterial virus called PBS1 that contains uracil in its DNA. In contrast, following synthesis of certain RNA molecules, a significant number of the uracils are converted to thymines by the enzymatic addition of the missing methyl group. This occurs mostly on structural and enzymatic RNAs like transfer RNAs and ribosomal RNA.
Major and minor grooves
The double helix is a right-handed spiral. As the DNA strands wind around each other, they leave gaps between each set of phosphate backbones, revealing the sides of the bases inside (see animation). There are two of these grooves twisting around the surface of the double helix: one groove, the major groove, is 22 Å wide and the other, the minor groove, is 12 Å wide. The narrowness of the minor groove means that the edges of the bases are more accessible in the major groove. As a result, proteins like transcription factors that can bind to specific sequences in double-stranded DNA usually make contacts to the sides of the bases exposed in the major groove.
Each type of base on one strand forms a bond with just one type of base on the other strand. This is called complementary base pairing. Here, purines form hydrogen bonds to pyrimidines, with A bonding only to T, and C bonding only to G. This arrangement of two nucleotides binding together across the double helix is called a base pair. In a double helix, the two strands are also held together via forces generated by the hydrophobic effect and pi stacking, which are not influenced by the sequence of the DNA. As hydrogen bonds are not covalent, they can be broken and rejoined relatively easily. The two strands of DNA in a double helix can therefore be pulled apart like a zipper, either by a mechanical force or high temperature. As a result of this complementarity, all the information in the double-stranded sequence of a DNA helix is duplicated on each strand, which is vital in DNA replication. Indeed, this reversible and specific interaction between complementary base pairs is critical for all the functions of DNA in living organisms.
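Because each base pairs with exactly one partner, either strand determines the other. A minimal Python sketch of this pairing rule follows (the function name is illustrative); the result is written reversed because the two strands run antiparallel, so the complement read 5′ to 3′ is the reverse complement.

```python
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(strand):
    """Complementary strand written 5'->3' (reversed, since the strands are antiparallel)."""
    return "".join(COMPLEMENT[base] for base in reversed(strand))

print(reverse_complement("ATGCGTTA"))   # TAACGCAT
```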
The two types of base pairs form different numbers of hydrogen bonds, AT forming two hydrogen bonds, and GC forming three hydrogen bonds (see figures, left). The GC base pair is therefore stronger than the AT base pair. As a result, it is both the percentage of GC base pairs and the overall length of a DNA double helix that determine the strength of the association between the two strands of DNA. Long DNA helices with a high GC content have stronger-interacting strands, while short helices with high AT content have weaker-interacting strands. Parts of the DNA double helix that need to separate easily, such as the TATAAT Pribnow box in bacterial promoters, tend to have sequences with a high AT content, making the strands easier to pull apart. In the laboratory, the strength of this interaction can be measured by finding the temperature required to break the hydrogen bonds, their melting temperature (also called Tm value). When all the base pairs in a DNA double helix melt, the strands separate and exist in solution as two entirely independent molecules. These single-stranded DNA molecules have no single common shape, but some conformations are more stable than others.
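The dependence of melting behaviour on GC content can be illustrated with a small Python sketch. The "2 degrees per A/T, 4 degrees per G/C" Wallace rule used here is only a rough rule of thumb for very short sequences such as primers, and the example sequence is made up.

```python
def gc_fraction(seq):
    """Fraction of G and C bases in a DNA sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def wallace_tm(seq):
    """Rough melting-temperature estimate for short oligonucleotides:
    about 2 C per A/T base and 4 C per G/C base (Wallace rule)."""
    seq = seq.upper()
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2 * at + 4 * gc

primer = "ATGCGCGTTAAC"
print(gc_fraction(primer))   # 0.5
print(wallace_tm(primer))    # 36
```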
Sense and antisense
A DNA sequence is called "sense" if its sequence is the same as that of a messenger RNA copy that is translated into protein. The sequence on the opposite strand is complementary to the sense sequence and is therefore called the "antisense" sequence. Since RNA polymerases work by making a complementary copy of their templates, it is this antisense strand that is the template for producing the sense messenger RNA. Both sense and antisense sequences can exist on different parts of the same strand of DNA (that is, both strands contain both sense and antisense sequences).
In both prokaryotes and eukaryotes, antisense RNA sequences are produced, but the functions of these RNAs are not entirely clear. One proposal is that antisense RNAs are involved in regulating gene expression through RNA-RNA base pairing.
A few DNA sequences in prokaryotes and eukaryotes, and more in plasmids and viruses, blur the distinction made above between sense and antisense strands by having overlapping genes. In these cases, some DNA sequences do double duty, encoding one protein when read 5′ to 3′ along one strand, and a second protein when read in the opposite direction (still 5′ to 3′) along the other strand. In bacteria, this overlap may be involved in the regulation of gene transcription, while in viruses, overlapping genes increase the amount of information that can be encoded within the small viral genome. Another way of reducing genome size is seen in some viruses that contain linear or circular single-stranded DNA as their genetic material.
DNA can be twisted like a rope in a process called DNA supercoiling. With DNA in its "relaxed" state, a strand usually circles the axis of the double helix once every 10.4 base pairs, but if the DNA is twisted the strands become more tightly or more loosely wound. If the DNA is twisted in the direction of the helix, this is positive supercoiling, and the bases are held more tightly together. If they are twisted in the opposite direction, this is negative supercoiling, and the bases come apart more easily.
In nature, most DNA has slight negative supercoiling that is introduced by enzymes called topoisomerases. These enzymes are also needed to relieve the twisting stresses introduced into DNA strands during processes such as transcription and DNA replication.
Alternative double-helical structures
DNA exists in several possible conformations. The conformations so far identified are: A-DNA, B-DNA, C-DNA, D-DNA, E-DNA, H-DNA, L-DNA, P-DNA, and Z-DNA. However, only A-DNA, B-DNA, and Z-DNA have been observed in naturally occurring biological systems.
Which conformation DNA adopts depends on the sequence of the DNA, the amount and direction of supercoiling, chemical modifications of the bases, and also solution conditions, such as the concentration of metal ions and polyamines. Of these three conformations, the "B" form described above is most common under the conditions found in cells. The two alternative double-helical forms of DNA differ in their geometry and dimensions.
The A form is a wider right-handed spiral, with a shallow, wide minor groove and a narrower, deeper major groove. The A form occurs under non-physiological conditions in dehydrated samples of DNA, while in the cell it may be produced in hybrid pairings of DNA and RNA strands, as well as in enzyme-DNA complexes. Segments of DNA where the bases have been chemically-modified by methylation may undergo a larger change in conformation and adopt the Z form. Here, the strands turn about the helical axis in a left-handed spiral, the opposite of the more common B form. These unusual structures can be recognized by specific Z-DNA binding proteins and may be involved in the regulation of transcription.
At the ends of the linear chromosomes are specialized regions of DNA called telomeres. The main function of these regions is to allow the cell to replicate chromosome ends using the enzyme telomerase, as the enzymes that normally replicate DNA cannot copy the extreme 3′ ends of chromosomes. As a result, if a chromosome lacked telomeres it would become shorter each time it was replicated. These specialized chromosome caps also help protect the DNA ends from exonucleases and stop the DNA repair systems in the cell from treating them as damage to be corrected. In human cells, telomeres are usually lengths of single-stranded DNA containing several thousand repeats of a simple TTAGGG sequence.
These guanine-rich sequences may stabilize chromosome ends by forming very unusual structures of stacked sets of four-base units, rather than the usual base pairs found in other DNA molecules. Here, four guanine bases form a flat plate and these flat four-base units then stack on top of each other, to form a stable G-quadruplex structure. These structures are stabilized by hydrogen bonding between the edges of the bases and chelation of a metal ion in the centre of each four-base unit. The structure shown to the left is a top view of the quadruplex formed by a DNA sequence found in human telomere repeats. The single DNA strand forms a loop, with the sets of four bases stacking in a central quadruplex three plates deep. In the space at the center of the stacked bases are three chelated potassium ions. Other structures can also be formed, with the central set of four bases coming from either a single strand folded around the bases, or several different parallel strands, each contributing one base to the central structure.
In addition to these stacked structures, telomeres also form large loop structures called telomere loops, or T-loops. Here, the single-stranded DNA curls around in a long circle stabilized by telomere-binding proteins. At the very end of the T-loop, the single-stranded telomere DNA is held onto a region of double-stranded DNA by the telomere strand disrupting the double-helical DNA and base pairing to one of the two strands. This triple-stranded structure is called a displacement loop or D-loop.
The expression of genes is influenced by the chromatin structure of a chromosome, and regions of heterochromatin (low or no gene expression) correlate with the methylation of cytosine. For example, cytosine methylation, to produce 5-methylcytosine, is important for X-chromosome inactivation. The average level of methylation varies between organisms, with Caenorhabditis elegans lacking cytosine methylation, while vertebrates show higher levels, with up to 1% of their DNA containing 5-methylcytosine. Despite its biological role, 5-methylcytosine is susceptible to spontaneous deamination to leave the thymine base, and methylated cytosines are therefore mutation hotspots. Other base modifications include adenine methylation in bacteria and the glycosylation of uracil to produce the "J-base" in kinetoplastids.
- Further information: Mutation
DNA can be damaged by many different sorts of mutagens. These include oxidizing agents, alkylating agents, and also high-energy electromagnetic radiation such as ultraviolet light and x-rays. The type of DNA damage produced depends on the type of mutagen. For example, UV light mostly damages DNA by producing thymine dimers, which are cross-links between adjacent pyrimidine bases in a DNA strand. On the other hand, oxidants such as free radicals or hydrogen peroxide produce multiple forms of damage, including base modifications, particularly of guanosine, as well as double-strand breaks. It has been estimated that in each human cell, about 500 bases suffer oxidative damage per day. Of these oxidative lesions, the most dangerous are double-strand breaks, as these lesions are difficult to repair and can produce point mutations, insertions and deletions from the DNA sequence, as well as chromosomal translocations.
Many mutagens intercalate into the space between two adjacent base pairs. Intercalators are mostly aromatic and planar molecules, and include ethidium, daunomycin, doxorubicin, and thalidomide. In order for an intercalator to fit between base pairs, the bases must separate, distorting the DNA strands by unwinding the double helix. These structural changes inhibit both transcription and DNA replication, causing toxicity and mutations. As a result, DNA intercalators are often carcinogens, with benzopyrene diol epoxide, acridines, aflatoxin, and ethidium bromide being well-known examples. Nevertheless, because they inhibit DNA transcription and replication, they are also used in chemotherapy to inhibit rapidly growing cancer cells.
Overview of biological functions
DNA usually occurs as linear chromosomes in eukaryotes, and circular chromosomes in prokaryotes. The set of chromosomes in a cell makes up its genome. The human genome has approximately 3 billion base pairs of DNA arranged into 46 chromosomes.
The information carried by DNA is held in the sequence of pieces of DNA called genes. Transmission of genetic information in genes is achieved via complementary base pairing. For example, in transcription, when a cell uses the information in a gene, the DNA sequence is copied into a complementary RNA sequence through the attraction between the DNA and the correct RNA nucleotides. Usually, this RNA copy is then used to make a matching protein sequence in a process called translation, which depends on the same interaction between RNA nucleotides. Alternatively, a cell may simply copy its genetic information in a process called DNA replication. The details of these functions are covered in other articles; here we focus on the interactions between DNA and other molecules that mediate the function of the genome.
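To make the base-pairing rule concrete, here is a minimal Python sketch (not part of the original article) that transcribes a hypothetical template strand into messenger RNA; the sequence, the function name, and the pairing dictionary are illustrative assumptions rather than a description of any particular gene.

```python
# A minimal sketch of complementary base pairing during transcription.
# The template strand (read 3'->5' by RNA polymerase) is a hypothetical
# example sequence, not one taken from the article.

DNA_TO_RNA = {"A": "U", "T": "A", "G": "C", "C": "G"}

def transcribe(template_3_to_5: str) -> str:
    """Return the mRNA (5'->3') complementary to a template strand given 3'->5'."""
    return "".join(DNA_TO_RNA[base] for base in template_3_to_5)

if __name__ == "__main__":
    template = "TACGGTACT"         # hypothetical template strand, 3'->5'
    print(transcribe(template))    # -> AUGCCAUGA
```

The same pairing rule, with thymine in place of uracil, underlies DNA replication, which is revisited further below.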
Genomic DNA is located in the cell nucleus of eukaryotes, as well as small amounts in mitochondria and chloroplasts. In prokaryotes, the DNA is held within an irregularly shaped body in the cytoplasm called the nucleoid.
The genetic information in a genome is held within genes. A gene is a unit of heredity and is a region of DNA that influences a particular characteristic in an organism. Genes contain an open reading frame that can be transcribed, as well as regulatory sequences such as promoters and enhancers, which control the expression of the open reading frame.
In many species, only a small fraction of the total sequence of the genome encodes protein. For example, only about 1.5% of the human genome consists of protein-coding exons, with over 50% of human DNA consisting of non-coding repetitive sequences. The reasons for the presence of so much non-coding DNA in eukaryotic genomes and the extraordinary differences in genome size, or C-value, among species represent a long-standing puzzle known as the "C-value enigma."
However, DNA sequences that do not code for protein may still encode functional non-coding RNA molecules, which are involved in the regulation of gene expression.
Some non-coding DNA sequences play structural roles in chromosomes. Telomeres and centromeres typically contain few genes, but are important for the function and stability of chromosomes. Pseudogenes, which are copies of genes that have been disabled by mutation, are an abundant form of non-coding DNA in humans. These sequences are usually just molecular fossils, although they can occasionally serve as raw genetic material for the creation of new genes through the process of gene duplication and divergence.
Transcription and translation
A gene is a sequence of DNA that contains genetic information and can influence the phenotype of an organism. Within a gene, the sequence of bases along a DNA strand defines a messenger RNA sequence, which then defines one or more protein sequences. The relationship between the nucleotide sequences of genes and the amino-acid sequences of proteins is determined by the rules of translation, known collectively as the genetic code. The genetic code consists of three-letter "words" called codons formed from a sequence of three nucleotides (e.g. ACT, CAG, TTT).
In transcription, the codons of a gene are copied into messenger RNA by RNA polymerase. This RNA copy is then decoded by a ribosome that reads the RNA sequence by base-pairing the messenger RNA to transfer RNA, which carries amino acids. Since there are 4 bases in 3-letter combinations, there are 64 possible codons (4³ combinations). These encode the twenty standard amino acids, giving most amino acids more than one possible codon. There are also three "stop" or "nonsense" codons signifying the end of the coding region; these are the TAA, TGA and TAG codons.
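The counting argument and the codon-by-codon decoding can be sketched in a few lines of Python: the enumeration below simply confirms that 4³ = 64, and the translation function uses only a small, hand-picked subset of the standard codon table with a made-up coding sequence, so it is an illustration rather than a complete implementation of the genetic code.

```python
from itertools import product

BASES = "ACGT"
codons = ["".join(c) for c in product(BASES, repeat=3)]
print(len(codons))   # 64 = 4**3 possible three-letter codons

# A small subset of the standard genetic code (written in the DNA alphabet),
# enough for the demo; the full table assigns all 64 codons.
CODON_TABLE = {
    "ATG": "Met", "TTT": "Phe", "GGC": "Gly", "GAA": "Glu",
    "TAA": "Stop", "TGA": "Stop", "TAG": "Stop",
}

def translate(coding_sequence: str) -> list:
    """Translate a coding DNA sequence codon by codon until a stop codon."""
    protein = []
    for i in range(0, len(coding_sequence) - 2, 3):
        residue = CODON_TABLE.get(coding_sequence[i:i + 3], "?")
        if residue == "Stop":
            break
        protein.append(residue)
    return protein

print(translate("ATGTTTGGCGAATAA"))   # ['Met', 'Phe', 'Gly', 'Glu']
```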
Cell division is essential for an organism to grow, but when a cell divides it must replicate the DNA in its genome so that the two daughter cells have the same genetic information as their parent.
The double-stranded structure of DNA provides a simple mechanism for DNA replication. Here, the two strands are separated and then each strand's complementary DNA sequence is recreated by an enzyme called DNA polymerase. This enzyme makes the complementary strand by finding the correct base through complementary base pairing, and bonding it onto the original strand. As DNA polymerases can only extend a DNA strand in a 5′ to 3′ direction, different mechanisms are used to copy the antiparallel strands of the double helix. In this way, the base on the old strand dictates which base appears on the new strand, and the cell ends up with a perfect copy of its DNA.
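Because the two strands are antiparallel, the 5′ to 3′ sequence of the newly made strand is the reverse complement of the template. A minimal sketch, using a hypothetical input sequence and an invented function name, is given below.

```python
# A minimal sketch of how the base sequence of one strand dictates the other.
# The new strand is antiparallel, so its 5'->3' sequence is the reverse
# complement of the template; the input sequence is a hypothetical example.

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(strand_5_to_3: str) -> str:
    """Return the 5'->3' sequence of the complementary, antiparallel strand."""
    return "".join(COMPLEMENT[base] for base in reversed(strand_5_to_3))

print(reverse_complement("ATGCGT"))   # -> ACGCAT
```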
Interactions with proteins
All the functions of DNA depend on interactions with proteins. These protein interactions can be non-specific, or the protein can bind specifically to a single DNA sequence. Enzymes can also bind to DNA and of these, the polymerases that copy the DNA base sequence in transcription and DNA replication are particularly important.
Structural proteins that bind DNA are well-understood examples of non-specific DNA-protein interactions. Within chromosomes, DNA is held in complexes with structural proteins. These proteins organize the DNA into a compact structure called chromatin. In eukaryotes, this structure involves DNA binding to a complex of small basic proteins called histones, while in prokaryotes multiple types of proteins are involved. The histones form a disk-shaped complex called a nucleosome, which contains two complete turns of double-stranded DNA wrapped around its surface. These non-specific interactions are formed through basic residues in the histones making ionic bonds to the acidic sugar-phosphate backbone of the DNA, and are therefore largely independent of the base sequence. Chemical modifications of these basic amino acid residues include methylation, phosphorylation, and acetylation. These chemical changes alter the strength of the interaction between the DNA and the histones, making the DNA more or less accessible to transcription factors and changing the rate of transcription. Other non-specific DNA-binding proteins found in chromatin include the high-mobility group proteins, which bind preferentially to bent or distorted DNA. These proteins are important in bending arrays of nucleosomes and arranging them into more complex chromatin structures.
A distinct group of DNA-binding proteins are the single-stranded-DNA-binding proteins that specifically bind single-stranded DNA. In humans, replication protein A is the best-characterized member of this family and is essential for most processes where the double helix is separated, including DNA replication, recombination, and DNA repair. These binding proteins seem to stabilize single-stranded DNA and protect it from forming stem loops or being degraded by nucleases.
In contrast, other proteins have evolved to specifically bind particular DNA sequences. The most intensively studied of these are the various classes of transcription factors, which are proteins that regulate transcription. Each of these proteins binds to a particular set of DNA sequences and thereby activates or inhibits the transcription of genes with these sequences close to their promoters. The transcription factors do this in two ways. Firstly, they can bind the RNA polymerase responsible for transcription, either directly or through other mediator proteins; this locates the polymerase at the promoter and allows it to begin transcription. Alternatively, transcription factors can bind enzymes that modify the histones at the promoter; this will change the accessibility of the DNA template to the polymerase.
As these DNA targets can occur throughout an organism's genome, changes in the activity of one type of transcription factor can affect thousands of genes. Consequently, these proteins are often the targets of the signal transduction processes that mediate responses to environmental changes or cellular differentiation and development. The specificity of these transcription factors' interactions with DNA comes from the proteins making multiple contacts to the edges of the DNA bases, allowing them to "read" the DNA sequence. Most of these base interactions are made in the major groove, where the bases are most accessible.
Nucleases and ligases
Nucleases are enzymes that cut DNA strands by catalyzing the hydrolysis of the phosphodiester bonds. Nucleases that hydrolyze nucleotides from the ends of DNA strands are called exonucleases, while endonucleases cut within strands. The most frequently used nucleases in molecular biology are the restriction endonucleases, which cut DNA at specific sequences. For instance, the EcoRV enzyme recognizes the 6-base sequence 5′-GAT|ATC-3′ and makes a cut at the position marked by the vertical line.
In nature, these enzymes protect bacteria against phage infection by digesting the phage DNA when it enters the bacterial cell, acting as part of the restriction modification system. In technology, these sequence-specific nucleases are used in molecular cloning and DNA fingerprinting.
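As a rough illustration of sequence-specific cutting, the sketch below digests a made-up sequence at EcoRV sites, cutting between GAT and ATC as described above; the function name, the offset constant, and the input sequence are assumptions made for the example only.

```python
# A rough sketch of sequence-specific cutting by a restriction endonuclease.
# EcoRV recognizes GATATC and cuts between GAT and ATC, as described above;
# the plasmid-like input sequence here is a made-up example.

SITE = "GATATC"
CUT_OFFSET = 3  # the cut falls between GAT and ATC

def digest(sequence: str) -> list:
    """Return the fragments produced by cutting at every recognition site."""
    fragments, start, pos = [], 0, sequence.find(SITE)
    while pos != -1:
        fragments.append(sequence[start:pos + CUT_OFFSET])
        start = pos + CUT_OFFSET
        pos = sequence.find(SITE, pos + 1)
    fragments.append(sequence[start:])
    return fragments

print(digest("AAGATATCCCGATATCTT"))   # ['AAGAT', 'ATCCCGAT', 'ATCTT']
```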
Enzymes called DNA ligases can rejoin cut or broken DNA strands, using the energy from either adenosine triphosphate or nicotinamide adenine dinucleotide. Ligases are particularly important in lagging strand DNA replication, as they join together the short segments of DNA produced at the replication fork into a complete copy of the DNA template. They are also used in DNA repair and genetic recombination.
Topoisomerases and helicases
Topoisomerases are enzymes with both nuclease and ligase activity. These proteins change the amount of supercoiling in DNA. Some of these enzymes work by cutting the DNA helix and allowing one section to rotate, thereby reducing its level of supercoiling; the enzyme then seals the DNA break. Other types of these enzymes are capable of cutting one DNA helix and then passing a second strand of DNA through this break, before rejoining the helix. Topoisomerases are required for many processes involving DNA, such as DNA replication and transcription.
Helicases are proteins that are a type of molecular motor. They use the chemical energy in nucleoside triphosphates, predominantly ATP, to break hydrogen bonds between bases and unwind the DNA double helix into single strands. These enzymes are essential for most processes where enzymes need to access the DNA bases.
Polymerases are enzymes that synthesise polynucleotide chains from nucleoside triphosphates. They function by adding nucleotides onto the 3′ hydroxyl group of the previous nucleotide in the DNA strand. As a consequence, all polymerases work in a 5′ to 3′ direction. In the active site of these enzymes, the nucleoside triphosphate substrate base-pairs to a single-stranded polynucleotide template: this allows polymerases to accurately synthesise the complementary strand of this template. Polymerases are classified according to the type of template that they use.
In DNA replication, a DNA-dependent DNA polymerase makes a DNA copy of a DNA sequence. Accuracy is vital in this process, so many of these polymerases have a proofreading activity. Here, the polymerase recognizes the occasional mistakes in the synthesis reaction by the lack of base pairing between the mismatched nucleotides. If a mismatch is detected, a 3′ to 5′ exonuclease activity is activated and the incorrect base removed. In most organisms, DNA polymerases function in a large complex called the replisome that contains multiple accessory subunits, such as the DNA clamp or helicases.
RNA-dependent DNA polymerases are a specialized class of polymerases that copy the sequence of an RNA strand into DNA. They include reverse transcriptase, which is a viral enzyme involved in the infection of cells by retroviruses, and telomerase, which is required for the replication of telomeres. Telomerase is an unusual polymerase because it contains its own RNA template as part of its structure.
Transcription is carried out by a DNA-dependent RNA polymerase that copies the sequence of a DNA strand into RNA. To begin transcribing a gene, the RNA polymerase binds to a sequence of DNA called a promoter and separates the DNA strands. It then copies the gene sequence into a messenger RNA transcript until it reaches a region of DNA called the terminator, where it halts and detaches from the DNA. As with human DNA-dependent DNA polymerases, RNA polymerase II, the enzyme that transcribes most of the genes in the human genome, operates as part of a large protein complex with multiple regulatory and accessory subunits.
- Further information: Genetic recombination
A DNA helix usually does not interact with other segments of DNA, and in human cells the different chromosomes even occupy separate areas in the nucleus called "chromosome territories." This physical separation of different chromosomes is important for the ability of DNA to function as a stable repository for information, as one of the few times chromosomes interact is during chromosomal crossover when they recombine. Chromosomal crossover is when two DNA helices break, swap a section and then rejoin.
Recombination allows chromosomes to exchange genetic information and produces new combinations of genes, which adds variability to a population, thereby aiding evolution, and can be important in the rapid evolution of new proteins. Genetic recombination can also be involved in DNA repair, particularly in the cell's response to double-strand breaks.
The most common form of chromosomal crossover is homologous recombination, where the two chromosomes involved share very similar sequences. Non-homologous recombination can be damaging to cells, as it can produce chromosomal translocations and genetic abnormalities. The recombination reaction is catalyzed by enzymes known as recombinases, such as RAD51. The first step in recombination is a double-stranded break, caused either by an endonuclease or by damage to the DNA. A series of steps catalyzed in part by the recombinase then leads to joining of the two helices by at least one Holliday junction, in which a segment of a single strand in each helix is annealed to the complementary strand in the other helix. The Holliday junction is a tetrahedral junction structure that can be moved along the pair of chromosomes, swapping one strand for another. The recombination reaction is then halted by cleavage of the junction and re-ligation of the released DNA.
Evolution of DNA metabolism
DNA contains the genetic information that allows all modern living things to function, grow, and reproduce. However, it is unclear how long in the 4-billion-year history of life DNA has performed this function, as it has been proposed that the earliest forms of life may have used RNA as their genetic material. RNA may have acted as the central part of early cell metabolism as it can both transmit genetic information and carry out catalysis as part of ribozymes. This ancient RNA world, where nucleic acid would have been used for both catalysis and genetics, may have influenced the development of the current genetic code based on four nucleotide bases. This would occur since the number of unique bases in such an organism is a trade-off between a small number of bases increasing replication accuracy and a large number of bases increasing the catalytic efficiency of ribozymes.
Unfortunately, there is no direct evidence of ancient genetic systems, as recovery of DNA from most fossils is impossible. This is because DNA will survive in the environment for less than one million years and slowly degrades into short fragments in solution. Although claims for older DNA have been made, most notably a report of the isolation of a viable bacterium from a salt crystal 250-million years old, these claims are controversial and have been disputed.
Uses in technology
Modern biology and biochemistry make intensive use of recombinant DNA technology. Recombinant DNA is a man-made DNA sequence that has been assembled from other DNA sequences. Such sequences can be introduced into organisms in the form of plasmids or, in the appropriate format, by using a viral vector. The genetically modified organisms produced can be used to make products such as recombinant proteins, can be used in medical research, or can be grown in agriculture. Recombinant DNA technology allows scientists to transplant a gene for a particular protein into rapidly reproducing bacteria to mass-produce the protein. As a result of this technology, bacteria have been used to produce human insulin beginning in 1978.
Forensic scientists can use DNA in blood, semen, skin, saliva, or hair found at a crime scene to identify a perpetrator. This process is called genetic fingerprinting or, more accurately, DNA profiling. In DNA profiling, the lengths of variable sections of repetitive DNA, such as short tandem repeats and minisatellites, are compared between people. This method is usually an extremely reliable way of identifying a criminal. However, identification can be complicated if the scene is contaminated with DNA from several people. DNA profiling was developed in 1984 by the British geneticist Sir Alec Jeffreys and was first used in forensic science to convict Colin Pitchfork in the 1988 Enderby murders case. Some criminal investigations have been solved when DNA from crime scenes has matched relatives of the guilty individual, rather than the individual himself or herself.
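A highly simplified sketch of the repeat-length comparison that underlies profiling is shown below; the GATA motif, the flanking sequences, and the sample names are invented, and real casework compares allele lengths at many standardized loci under strict laboratory and statistical controls.

```python
import re

def longest_repeat_count(sequence: str, motif: str) -> int:
    """Count the motif copies in the longest uninterrupted repeat run."""
    runs = re.findall(f"(?:{motif})+", sequence)
    return max((len(run) // len(motif) for run in runs), default=0)

# Hypothetical STR locus flanked by unique sequence; real profiling compares
# repeat counts (usually for both alleles) at many standardized loci.
suspect  = "CCTA" + "GATA" * 11 + "TTGC"
evidence = "CCTA" + "GATA" * 11 + "TTGC"

match = longest_repeat_count(suspect, "GATA") == longest_repeat_count(evidence, "GATA")
print(match)   # True -> the two samples share this repeat length
```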
People convicted of certain types of crimes may be required to provide a sample of DNA for a database. This has helped investigators solve old cases where only a DNA sample was obtained from the scene. DNA profiling can also be used to identify victims of mass casualty incidents.
Bioinformatics involves the manipulation, searching, and data mining of DNA sequence data. The development of techniques to store and search DNA sequences has led to widely applied advances in computer science, especially string searching algorithms, machine learning, and database theory. String searching or matching algorithms, which find an occurrence of a sequence of letters inside a larger sequence of letters, were developed to search for specific sequences of nucleotides. In other applications such as text editors, even simple algorithms for this problem usually suffice, but DNA sequences cause these algorithms to exhibit near-worst-case behavior due to their small number of distinct characters. The related problem of sequence alignment aims to identify homologous sequences and locate the specific mutations that make them distinct.
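The simplest form of string matching over the four-letter DNA alphabet can be sketched as a straightforward scan; the genomic fragment and query below are invented, and production tools rely on far more efficient algorithms and index structures than this naive loop.

```python
def find_occurrences(text: str, pattern: str) -> list:
    """Naive string matching: return every start index where pattern occurs in text."""
    hits = []
    for i in range(len(text) - len(pattern) + 1):
        if text[i:i + len(pattern)] == pattern:
            hits.append(i)
    return hits

# Hypothetical genomic fragment and query; the small 4-letter alphabet means
# many near-matches, which is why smarter algorithms matter at genome scale.
genome_fragment = "ACGTACGTGACGATCGATCGTACG"
print(find_occurrences(genome_fragment, "GATC"))   # [11, 15]
```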
These techniques, especially multiple sequence alignment, are used in studying phylogenetic relationships and protein function. Data sets representing entire genomes' worth of DNA sequences, such as those produced by the Human Genome Project, are difficult to use without annotations, which label the locations of genes and regulatory elements on each chromosome. Regions of DNA sequence that have the characteristic patterns associated with protein- or RNA-coding genes can be identified by gene finding algorithms, which allow researchers to predict the presence of particular gene products in an organism even before they have been isolated experimentally.
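In its most stripped-down form, gene finding amounts to scanning for open reading frames, as in the Python sketch below; the sequence is invented, and real gene finders rely on statistical models of codon usage, splice sites, and homology rather than this naive rule.

```python
# A toy open-reading-frame scan: look for ATG ... stop in each of the three
# forward reading frames. The sequence here is a made-up example.

STOPS = {"TAA", "TAG", "TGA"}

def find_orfs(seq: str, min_codons: int = 2) -> list:
    """Return (start, end) index pairs of simple ORFs on the forward strand."""
    orfs = []
    for frame in range(3):
        i = frame
        while i + 3 <= len(seq):
            if seq[i:i + 3] == "ATG":
                j = i + 3
                while j + 3 <= len(seq) and seq[j:j + 3] not in STOPS:
                    j += 3
                if j + 3 <= len(seq) and (j - i) // 3 >= min_codons:
                    orfs.append((i, j + 3))   # include the stop codon
                i = j
            i += 3
    return orfs

print(find_orfs("CCATGAAATTTGGGTAACC"))   # [(2, 17)]
```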
DNA nanotechnology uses the unique molecular recognition properties of DNA and other nucleic acids to create self-assembling branched DNA complexes with useful properties. DNA is thus used as a structural material rather than as a carrier of biological information. This has led to the creation of two-dimensional periodic lattices (both tile-based and using the "DNA origami" method) as well as three-dimensional structures in the shapes of polyhedra. Nanomechanical devices and algorithmic self-assembly have also been demonstrated, and these DNA structures have been used to template the arrangement of other molecules such as gold nanoparticles and streptavidin proteins.
DNA and computation
DNA was first used in computing to solve a small version of the directed Hamiltonian path problem, an NP-complete problem. DNA computing is advantageous over electronic computers in power use, space use, and efficiency, due to its ability to compute in a highly parallel fashion. A number of other problems, including simulation of various abstract machines, the boolean satisfiability problem, and the bounded version of the traveling salesman problem, have since been analysed using DNA computing. Due to its compactness, DNA also has a theoretical role in cryptography, where in particular it allows unbreakable one-time pads to be efficiently constructed and used.
History and anthropology
Because DNA collects mutations over time, which are then inherited, it contains historical information, and by comparing DNA sequences, geneticists can infer the evolutionary history of organisms, their phylogeny. This field of phylogenetics is a powerful tool in evolutionary biology. If DNA sequences within a species are compared, population geneticists can learn the history of particular populations. This can be used in studies ranging from ecological genetics to anthropology; for example, DNA evidence is being used to try to identify the Ten Lost Tribes of Israel.
DNA has also been used to look at modern family relationships, such as establishing family relationships between the descendants of Sally Hemings and Thomas Jefferson. This usage is closely related to the use of DNA in criminal investigations detailed above.
- Calladine, C. R., H. R. Drew, B. F. Luisi, and A. A. Travers. 2003. Understanding DNA. Elsevier Academic Press. ISBN 9780121550899.
- Clayton, J. (Ed.). 2003. 50 Years of DNA. Palgrave MacMillan Press. ISBN 9781403914798.
- Judson, H. F. 1996. The Eighth Day of Creation: Makers of the Revolution in Biology. Cold Spring Harbor Laboratory Press. ISBN 9780879694784.
- Olby, R. 1974. The Path to The Double Helix: Discovery of DNA. MacMillan. ISBN 9780486681177.
- Ridley, M. 2006. Francis Crick: Discoverer of the Genetic Code (Eminent Lives). HarperCollins Publishers. ISBN 9780060823337.
- Watson, J. D. and F. H. C. Crick. 1953. "A structure for deoxyribose nucleic acid." Nature 171: 737–738.
- Watson, J. D. 2003. DNA: The Secret of Life. New York: Alfred A. Knopf. ISBN 9780375415463.
- Watson, J. D. 1980. The Double Helix: A Personal Account of the Discovery of the Structure of DNA. New York: Norton. ISBN 9780393950755.
- Watson, J. D. 2007. Avoid Boring People and Other Lessons From a Life in Science. New York: Random House. ISBN 9780375412844.
- Crick's personal papers at Mandeville Special Collections Library, Geisel Library, University of California, San Diego. Retrieved December 22, 2007.
- DNA Interactive This site from the Dolan DNA Learning Center included dozens of animations as well as interviews with James Watson and others (requires Adobe Flash). Retrieved December 22, 2007.
- DNA from the Beginning Another DNA Learning Center site on DNA, genes, and heredity from Mendel to the human genome project. Retrieved December 22, 2007.
- Double Helix 1953–2003 National Centre for Biotechnology Education. Retrieved December 22, 2007.
- Double helix: 50 years of DNA, Nature. Retrieved December 22, 2007.
- Rosalind Franklin's contributions to the study of DNA. Retrieved December 22, 2007.
- Genetic Education Modules for Teachers—DNA from the Beginning Study Guide. Retrieved December 22, 2007.
- Listen to Francis Crick and James Watson talking on the BBC in 1962, 1972, and 1974. Retrieved December 22, 2007.
- DNA under electron microscope. Retrieved December 22, 2007.
- DNA at the Open Directory Project. Retrieved December 22, 2007.
- DNA coiling to form chromosomes. Retrieved December 22, 2007.
- DISPLAR: DNA binding site prediction on protein. Retrieved December 22, 2007.
- Olby, R. (2003) "Quiet debut for the double helix" Nature 421 (January 23): 402–405. Retrieved December 22, 2007.
- DNA the Double Helix Game From the official Nobel Prize web site. Retrieved December 22, 2007.
History of Poland
The History of Poland is rooted in the arrival of the Slavs, who gave rise to permanent settlement and historic development on Polish lands. During the Piast dynasty, Christianity was adopted in 966 and a medieval monarchy was established. The Jagiellon dynasty period brought close ties with the Grand Duchy of Lithuania, cultural development and territorial expansion, culminating in the establishment of the Polish–Lithuanian Commonwealth in 1569.
The Commonwealth in its early phase constituted a continuation of the Jagiellon prosperity, with its remarkable development of a sophisticated noble democracy. From the mid-17th century, the huge state entered a period of decline caused by devastating wars and deterioration of the country's system of government. Significant internal reforms were introduced during the later part of the 18th century, but the reform process was not allowed to run its course, as the Russian Empire, the Kingdom of Prussia and the Austrian Habsburg Monarchy, through a series of invasions and partitions, terminated the Commonwealth's independent existence in 1795.
From then until 1918 there was no independent Polish state. The Poles engaged intermittently in armed resistance until 1864. After the failure of the last uprising, the nation preserved its identity through educational uplift and the program called "organic work", intended to modernize the economy and society. The opportunity for freedom appeared only after World War I, when the partitioning imperial powers were defeated by war and revolution.
The Second Polish Republic was established and existed from 1918 to 1939. It was destroyed by Nazi Germany and the Soviet Union through their invasion of Poland at the beginning of World War II. Millions of Polish citizens perished in the course of the Nazi occupation. The Polish government in exile kept functioning, and through the many Polish military formations on the western and eastern fronts, the Poles contributed to the Allied victory. Nazi Germany's forces were compelled to retreat from Poland as the Soviet Red Army advanced, which led to the creation of the People's Republic of Poland.
The country's geographic location was shifted to the west and Poland existed as a Soviet satellite state. Poland largely lost its traditional multi-ethnic character and the communist system was imposed. By the late 1980s Solidarity, a Polish reform movement, became crucial in causing a peaceful transition from a communist state to the capitalist system and parliamentary democracy. This process resulted in the creation of the modern Polish state.
Prehistory and protohistory
Members of the Homo genus have lived in the glaciation-disrupted environment of north-central Europe for a long time. In prehistoric and protohistoric times, over the period of at least 500,000 years, the area of present-day Poland went through the Stone Age, Bronze Age and Iron Age stages of development, along with the nearby regions. Settled agricultural people have lived there for the past 7500 years, since their first arrival at the outset of the Neolithic period. Following the earlier La Tène and Roman-influenced cultures, the Slavic people have been in this territory for over 1500 years. They organized first into tribal units, and then combined into larger political structures.
Piast dynasty
During the Piast dynasty rule (10th–14th century), Poland was formed and established as a state and a nation. The historically recorded Polish state begins with the rule of Mieszko I in the second half of the 10th century. Mieszko chose to be baptized in the Western Latin Rite in 966. Mieszko completed the unification of the West Slavic tribal lands fundamental to the new country's existence. Following its emergence, the Polish nation was led by a series of rulers who converted the population to Christianity, created a strong kingdom and integrated Poland into the European culture.
Mieszko's son Bolesław I Chrobry established a Polish Church province, pursued territorial conquests and was officially crowned in 1025, becoming the first King of Poland. This was followed by a collapse of the monarchy and restoration under Casimir I. Casimir's son Bolesław II the Bold became fatally involved in a conflict with the ecclesiastical authority, and was expelled from the country. After Bolesław III divided the country among his sons, internal fragmentation eroded the initial Piast monarchy structure in the 12th and 13th centuries. One of the regional Piast dukes invited the Teutonic Knights to help him fight the Baltic Prussian pagans, which caused centuries of Poland's warfare with the Knights and then with the German Prussian state. The Kingdom was restored under Władysław I the Elbow-high, strengthened and expanded by his son Casimir III the Great. The western provinces of Silesia and Pomerania were lost after the fragmentation, and Poland began expanding to the east. The consolidation in the 14th century laid the base for the new, powerful Kingdom of Poland that was to follow after the reigns of two members of the Angevin dynasty.
Jagiellon dynasty
Beginning with the Lithuanian Grand Duke Jogaila (Władysław II Jagiełło), the Jagiellon dynasty (1386–1572) formed the Polish–Lithuanian union. The partnership brought vast Lithuania-controlled Rus' areas into Poland's sphere of influence and proved beneficial for the Poles and Lithuanians, who coexisted and cooperated in one of the largest political entities in Europe for the next four centuries. In the Baltic Sea region, Poland's struggle with the Teutonic Knights continued and included the Battle of Grunwald (1410; German: Battle of Tannenberg; Lithuanian: Battle of Žalgiris) and in 1466 the milestone Peace of Thorn under King Casimir IV Jagiellon; the treaty created the future Duchy of Prussia. In the south, Poland confronted the Ottoman Empire and the Crimean Tatars, and in the east helped Lithuania fight the Grand Duchy of Moscow. Poland was developing as a feudal state, with a predominantly agricultural economy and an increasingly dominant landed nobility component. The Nihil novi act, adopted in 1505 by the Sejm, the Polish parliament, transferred most of the legislative power from the monarch to the Sejm. This event marked the beginning of the period known as "Golden Liberty", when the state was ruled by the "free and equal" Polish nobility. Protestant Reformation movements made deep inroads into Polish Christianity, which resulted in policies of religious tolerance that were unique in Europe at that time. European Renaissance currents evoked an immense cultural and scientific flowering in late Jagiellon Poland (under kings Sigismund I the Old and Sigismund II Augustus), of which the astronomer Nicolaus Copernicus is the best known representative. Poland's and Lithuania's territorial expansion included the far north region of Livonia.
Polish–Lithuanian Commonwealth
Establishment (1569–1648)
The Union of Lublin of 1569 established the Polish–Lithuanian Commonwealth, a more closely unified federal state. The Union was largely run by the nobility, through the system of central parliament and local assemblies, but led by elected kings. The formal rule of the nobility, who were proportionally more numerous than in other European countries, constituted an early democratic system ("a sophisticated noble democracy"), in contrast to the absolute monarchies prevalent at that time in the rest of Europe. The beginning of the Commonwealth coincided with the period of Poland's great power, advancements in civilization and prosperity. The Polish–Lithuanian Union had become an influential player in Europe and a vital cultural entity, spreading Western culture eastward. In the second half of the 16th and the first half of the 17th century, the Polish–Lithuanian Commonwealth was a huge state in central-eastern Europe, with an area approaching one million square kilometers. The Catholic Church embarked on an ideological counteroffensive and the Counter-Reformation claimed many converts from Protestant circles. The Union of Brest split the Eastern Christians of the Commonwealth. The Commonwealth, assertive militarily under King Stephen Báthory, suffered from dynastic distractions during the reigns of the Vasa kings Sigismund III and Władysław IV. The Commonwealth fought wars with Russia, Sweden and the Ottoman Empire and dealt with a series of Cossack uprisings. Allied with the Habsburg Monarchy, it did not directly participate in the Thirty Years' War.
Decline (1648–1764)
Beginning in the middle of the 17th century, the nobles' democracy, subjected to devastating wars, falling into internal disorder and then anarchy, gradually declined, making the once powerful Commonwealth vulnerable to foreign intervention. From 1648, the Cossack Khmelnytsky Uprising engulfed the south and east, and was soon followed by a Swedish invasion, which raged through core Polish lands. Warfare with the Cossacks and Russia left Ukraine divided, with the eastern part, lost by the Commonwealth, becoming the Tsardom's dependency. John III Sobieski, fighting protracted wars with the Ottoman Empire, revived the Commonwealth's military might once more, in the process helping decisively in 1683 to deliver Vienna from a Turkish onslaught. From that point, however, the decline resumed. The Commonwealth, subjected to almost constant warfare until 1720, suffered enormous population losses as well as massive damage to its economy and social structure. The government became ineffective because of large scale internal conflicts (e.g. Lubomirski's Rokosz against John II Casimir and rebellious confederations), corrupted legislative processes and manipulation by foreign interests. The nobility class fell under the control of a handful of powerful families with established territorial domains, the urban population and infrastructure fell into ruin, together with most peasant farms. The reigns of two kings of the Saxon Wettin dynasty, Augustus II and Augustus III, brought the Commonwealth further disintegration. The Great Northern War, a period seen by contemporaries as a passing eclipse, may have been the fatal blow destined to bring down the Noble Republic. The Kingdom of Prussia became a strong regional power and took Silesia from the Habsburg Monarchy. The Commonwealth-Saxony personal union, however, gave rise to the emergence of the reform movement in the Commonwealth, and the beginnings of the Polish Enlightenment culture.
Reforms and loss of statehood (1764–1795)
During the later part of the 18th century, the Commonwealth attempted fundamental internal reforms. The reform activity provoked a hostile reaction and eventually a military response on the part of the neighboring powers. The second half of the century brought an improved economy and significant population growth. The capital, Warsaw, by then the country's most populous city, replaced Danzig (Gdańsk) as the leading trade center, and the role of the more prosperous urban strata was increasing. The last decades of the independent Commonwealth's existence were characterized by intense reform movements and far-reaching progress in the areas of education, intellectual life, art, and, especially toward the end of the period, evolution of the social and political system.
The royal election of 1764 resulted in the elevation of Stanisław August Poniatowski, a refined and worldly aristocrat connected to a major magnate faction, but hand-picked and imposed by Empress Catherine II of Russia, who expected Poniatowski to be her obedient follower. The King accordingly spent his reign torn between his desire to implement reforms necessary to save the state, and his perceived necessity of remaining in subordinate relationship with his Russian sponsors.
The Bar Confederation of 1768 was a szlachta rebellion directed against Russia and the Polish king, fought to preserve Poland's independence and in support of szlachta's traditional causes. It was brought under control and followed in 1772 by the First Partition of the Commonwealth, a permanent encroachment on the outer Commonwealth provinces by the Russian Empire, the Kingdom of Prussia and Habsburg Austria. The "Partition Sejm", under duress, "ratified" the partition fait accompli, but in 1773 also established the Commission of National Education, a pioneering government education authority in Europe.
The long-lasting sejm convened by Stanisław August in 1788 is known as the Great, or Four-Year Sejm. The Sejm's landmark achievement was the passing of the May 3 Constitution, the first singular pronouncement of a supreme law of the state in modern Europe. The reformist but moderate document, accused by detractors of French Revolution sympathies, soon generated strong opposition from the conservative circles of the Commonwealth's upper nobility and from Catherine II, determined to prevent a rebirth of the strong Commonwealth. The nobility's Targowica Confederation appealed to the Empress for help and in May 1792 the Russian army entered the territory of the Commonwealth. The defensive war fought by the forces of the Commonwealth ended when the King, convinced of the futility of resistance, capitulated by joining the Targowica Confederation. The Confederation took over the government, but Russia and Prussia in 1793 arranged for and executed the Second Partition of the Commonwealth, which left the country with critically reduced territory, practically incapable of independent existence.
Reformers radicalized by the recent events, both in the area still nominally belonging to the Commonwealth and in exile, were soon preparing a national insurrection. Tadeusz Kościuszko was chosen as its leader; the popular general came from abroad and on March 24, 1794, in Cracow (Kraków), declared a national uprising under his supreme command. Kościuszko emancipated and enrolled in his army many peasants, but the hard-fought insurrection, strongly supported also by urban plebeian masses, proved incapable of generating the necessary foreign collaboration and aid. It was suppressed by the forces of Russia and Prussia, with Warsaw captured in November. The third and final partition of the Commonwealth was undertaken again by all three partitioning powers, and in 1795 the Polish–Lithuanian Commonwealth effectively ceased to exist.
Despite the long history of close relations between Poland and Prussia, the Prussians treated their new Polish lands as conquered territory rather than as a recovered long-lost province. The response of the Polish leadership is a matter of historical debate. Literary scholars found that the dominant emotion of the first decade was despair, producing a moral desert ruled by violence and treason. On the other hand, historians have looked for signs of resistance to foreign rule. Apart from those who went into exile, the nobility took oaths of loyalty to their new rulers, and served as officers in their armies.
Partitioned Poland
Armed resistance (1795–1864)
While there was no separate Polish state at all, the idea of Polish independence was kept alive throughout the 19th century and led to more Polish uprisings and other warfare against the partitioning powers. Military efforts after the Partitions were first based on Polish alliances with post-revolutionary France. Henryk Dąbrowski's Polish Legions fought in French campaigns outside of Poland, hoping that their involvement and contribution would result in the liberation of their Polish homeland. The Polish national anthem - Dąbrowski's Mazurka - was written in praise of his actions by Józef Wybicki in 1797. The Duchy of Warsaw, a small, semi-independent Polish state, was created in 1807 by Napoleon Bonaparte, following his defeat of Prussia. The Duchy's military forces, led by Józef Poniatowski, participated in numerous campaigns, including the Polish–Austrian War of 1809, the French invasion of Russia in 1812, and the German campaign of 1813.
After the defeat of Napoleon, a new European order was established at the Congress of Vienna. Adam Czartoryski, at one time a close associate of Emperor Alexander I of Russia, became the leading advocate for the Polish national cause. The Congress implemented a new partition scheme, which took into account some of the gains realized by the Poles during the Napoleonic period. The Duchy of Warsaw was replaced with the Kingdom of Poland, a residual Polish state in personal union with the Russian Empire, ruled by the Russian tsar. East of the Kingdom, large areas of the former Commonwealth remained directly incorporated into the Empire; together with the Kingdom they were part of the Russian partition. There was a Prussian partition, with a portion of it separated as the Grand Duchy of Posen, and an Austrian partition. The newly created Republic of Kraków was a tiny state under a joint supervision of the three partitioning powers. "Partitions" were the lands of the former Commonwealth, not actual administrative units.
The increasingly repressive policies of the partitioning powers led to Polish conspiracies, and in 1830 to the November Uprising in the Kingdom. The uprising developed into a full-scale war with Russia, but the leadership was taken over by Polish conservative circles, reluctant to challenge the Empire and hostile to broadening the independence movement's social base through measures such as land reform. Despite the significant resources mobilized and the self-sacrifice of the participants, a series of missteps by several successive unwilling or incompetent chief commanders appointed by the Polish government ultimately led to the defeat of the insurgents by the Russian army.
After the fall of the November Uprising, thousands of former Polish combatants and other activists emigrated to Western Europe, where they were initially enthusiastically received. This element, known as the Great Emigration, soon dominated the Polish political and intellectual life. Together with the leaders of the independence movement, the exile community included the greatest Polish literary and artistic minds, including the Romantic poets Adam Mickiewicz, Juliusz Słowacki, Cyprian Norwid, and composer Frédéric Chopin. In the occupied and repressed Poland, some sought progress through self-improvement activities known as organic work; others, in cooperation with emigrant circles, organized conspiracies and prepared for the next armed insurrection.
After the authorities in the partitions found out about the secret preparations, the planned national uprising ended in a fiasco in early 1846. In its most significant manifestation, the Kraków Uprising of February 1846, patriotic action was combined with revolutionary demands, but the result was the incorporation of the Republic of Kraków into the Austrian partition. Austrian authorities took advantage of peasant discontent by inciting the villagers against noble-dominated insurgent units, which led to the Galician slaughter, a violent anti-feudal rebellion that went beyond the intended scope of the provocation. A new wave of Polish military and other involvement, in the partitions and in other parts of Europe, soon took place in the context of the 1848 Spring of Nations revolutions. In particular, the events in Berlin precipitated the Greater Poland Uprising, where peasants in Prussia, who were by then largely enfranchised, played a prominent role.
Despite the limited liberalization measures allowed in the Congress Kingdom under the rule of Alexander II, a renewal of popular liberation activities took place in 1860-1861. During the large-scale demonstrations in Warsaw, the Russian forces inflicted numerous casualties on the civilian participants. The "Red", or left-wing conspiracy faction, which promoted peasant enfranchisement and cooperated with Russian revolutionaries, became involved in immediate preparations for a national uprising. The "White", or right-wing faction, inclined to cooperate with the Russian authorities, countered with partial reform proposals. Aleksander Wielopolski, the conservative leader of the Kingdom's government, in order to cripple the manpower potential of the Reds, arranged for a partial selective conscription of young Poles for the Russian army, which hastened the outbreak of the hostilities. The January Uprising, joined and led after the initial period by the Whites, was fought by partisan units against an overwhelming enemy advantage. The warfare lasted from January 1863 to the spring of 1864, when Romuald Traugutt, the dedicated last supreme commander of the insurgency, was captured by the tsarist police.
On March 2, 1864, the Russian authority — compelled by the uprising to compete for the loyalty of Polish peasants — officially published an enfranchisement decree in the Kingdom, along the lines of an earlier insurgent land reform proclamation. The act created the conditions necessary for the development of the capitalist system on central Polish lands. At the time when the futility of armed resistance without external support was realized by most Poles, the various segments of the Polish society were undergoing deep and far-reaching social, economic and cultural transformations.
Formation of modern Polish society under foreign rule (1864–1914)
Following the failure of the last of the national uprisings, the January Uprising of 1863, the Polish nation, subjected to still stricter controls and increased persecution within the territories under Russian and Prussian administration, preserved its identity in nonviolent ways. After the Uprising, Congress Poland, downgraded in official usage from the Kingdom of Poland to the Vistula Land, was more fully integrated into Russia proper, but not entirely obliterated. The Russian and German languages, respectively, were imposed in all public communication, and the Catholic Church was not spared severe repression. On the other hand, the Galicia region in western Ukraine and southern Poland, though economically and socially backward, was increasingly allowed limited autonomy under Austro-Hungarian rule and experienced a gradual relaxation of authoritarian policies and even a Polish cultural revival. Positivism replaced Romanticism as the leading intellectual, social and literary trend.
"Organic work" social activities consisted of self-help organizations that promoted economic advancement and worked on improving competitiveness of Polish-held business entities, industrial, agricultural, or other. New commercial methods and ways of generating higher productivity were discussed and implemented through trade associations and special interest groups, while Polish banking and cooperative financial institutions made necessary business loans available. The other major area of organic work concern was education and intellectual development of the common people. Many libraries and reading rooms were established in small towns and villages, and numerous printed periodicals reflected the growing interest in popular education. Scientific and educational societies were active in a number of cities.
Economic and social changes, such as land reform and industrialization, combined with the effects of foreign domination, altered the centuries-old structure of Polish society. Among the newly emergent strata were wealthy industrialists and financiers, distinct from the traditional, but still critically important landed aristocracy. The intelligentsia, an educated, professional or business middle class, often originated from gentry alienated from their rural possessions (many smaller serfdom-based agricultural enterprises had not survived the land reforms) and from urban people. The industrial proletariat, the new underprivileged class, consisted mostly of poor peasants or townspeople forced by deteriorating conditions to migrate and search for work in urban centers, whether in their countries of origin or abroad. Millions of residents of the former Commonwealth of various ethnic backgrounds worked or settled in Europe and in North and South America.
The changes were partial and gradual, and industrialization and capitalist development on Polish lands, though fast-paced in some areas, lagged behind the advanced regions of western Europe. The three partitions developed different economies, and were economically integrated with their mother states more than with each other.
In the 1870s-1890s, large scale socialist, nationalist and agrarian movements of great ideological fervor and corresponding political parties became established in partitioned Poland and Lithuania. The main minority ethnic groups of the former Commonwealth, including Ukrainians, Lithuanians, Belarusians and Jews, were getting involved in their own national movements and plans, which met with disapproval on the part of those ethnically Polish independence activists, who counted on an eventual rebirth of the Commonwealth.
Around the start of the 20th century the Young Poland cultural movement, centered on Galicia and taking advantage of the milieu there, which was conducive to liberal expression, was the source of Poland's finest artistic and literary productions. Marie Skłodowska-Curie was a pioneering radiation scientist who did her groundbreaking research in Paris.
The Revolution of 1905 gave rise to new waves of Polish unrest, political maneuvering, strikes and rebellion, with Roman Dmowski and Józef Piłsudski active as leaders of the nationalist and socialist factions respectively. As the authorities reestablished control within the Empire, the revolt in the Kingdom, placed under martial law, had withered as well, leaving tsarist concessions in the areas of national and workers' rights, including Polish representation in the newly created Russian Duma. Some of the acquired gains were however rolled back, which, coupled with intensified Germanization in the Prussian partition, left Austrian Galicia as the territory most amenable to patriotic action.
World War I
World War I and the political turbulence that was sweeping Europe in 1914 offered the Polish nation hopes for regaining independence. On the outbreak of war the Poles found themselves conscripted into the armies of Germany, Austria and Russia, and forced to fight each other in a war that was not theirs. In 1917 France formed the Blue Army comprising about 100,000 Poles, including men captured from German and Austrian units as well as 20,000 volunteers from the U.S. Dmowski, operating from Paris as head of the Polish National Committee (KNP), became the spokesman for Polish nationalism in the Allied camp.
Piłsudski's paramilitary units stationed in Galicia were turned into the Polish Legions, and as part of the Austro-Hungarian Army fought on the Russian front until 1917, when they were disbanded. Piłsudski was arrested by the Germans and became a heroic symbol of Polish nationalism.
In all about two million Poles served in the war, counting both sides, and about 450,000 died. Much of the fighting on the Eastern Front took place in Poland, and civilian casualties and devastation were high.
During the course of the war the area of Congress Poland became occupied by the Central Powers, with Warsaw captured by the Germans on 5 August 1915. In the Act of 5th November 1916, the Kingdom of Poland (Królestwo Regencyjne) was recreated by Germany and Austria on formerly Russian-controlled territory. This new puppet state existed until November 1918, when it was replaced by the newly established Republic of Poland. The independence of Poland had been campaigned for in the West by Dmowski and Ignacy Paderewski. At the initiative of Woodrow Wilson, Polish independence was officially endorsed in June 1918 by the Allies. On the ground in Poland in October–November the final upsurge of the push for independence was taking place, with Ignacy Daszyński heading a short-lived Polish government in Lublin from November 6. Germany, now defeated, was forced by the Allies to stand down its large military forces in Poland. It released imprisoned Piłsudski, who arrived in Warsaw on November 10.
Second Polish Republic (1918–1939)
Securing national borders
After more than a century of foreign rule, Poland was given its independence by the powers at the end of World War I. The rebirth of Poland was one of the outcomes of the negotiations that took place at the Paris Peace Conference of 1919. The Treaty of Versailles set up an independent nation with an outlet to the sea, but left some of the boundaries to be decided by plebiscites (the East Prussia and Upper Silesia plebiscites took place). The largely German Free City of Danzig was granted a separate status that guaranteed its use as a port by Poland.
Other boundaries were settled by warfare and subsequent treaties. Most important was the Polish–Soviet War of 1919-1921. Piłsudski had entertained far-reaching anti-Russian cooperative designs for Eastern Europe, and in 1919 the Polish forces pushed eastward into Lithuania, Belarus and Ukraine (previously a theater of the Polish–Ukrainian War), taking advantage of the Russian preoccupation with the civil war. By June 1920, the Polish armies had advanced past Vilnius and Minsk and, allied with the Directorate of Ukraine, reached Kiev, but then a massive Soviet counteroffensive pushed the Poles out of most of Ukraine and, on the northern front, arrived at the outskirts of Warsaw. A Soviet triumph and the quick end of Poland seemed inevitable. However, the Poles scored a stunning victory at the Battle of Warsaw in August 1920. The Soviets pulled back and left to Polish rule swaths of territory occupied largely by Belarusians or Ukrainians. The new eastern boundary was finalized by the Treaty of Riga in 1921.
The defeat of the Russian armies forced Lenin and the Soviet leadership to abandon for the time being their strategic objective of linking up with the German and other European revolution-minded comrades (Lenin's hope of generating support for the Red Army in Poland had already failed to materialize). Piłsudski's seizure of Vilnius (Wilno) in October 1920 poisoned Polish–Lithuanian relations for the remainder of the interwar period. Piłsudski's planned East European federation of states (inspired by the tradition of the multiethnic Polish–Lithuanian Commonwealth and including a hypothetical multinational successor state to the Grand Duchy of Lithuania) was incompatible, at the time of rising national movements, with his assumption of Polish domination and with the encroachment on the neighboring peoples' lands and aspirations; as such it was doomed to failure.[a] A larger federated structure was also opposed by Dmowski's National Democrats. Their representative at the Peace of Riga talks opted for leaving Minsk, Berdychiv, Kamianets-Podilskyi and the surrounding areas on the Soviet side of the border, not wanting to allow population shifts National Democrats considered politically undesirable, including what would be a reduced proportion of citizens who were ethnically Polish.
The Peace of Riga settled the eastern border, preserving for Poland, at the cost of partitioning the lands of the former Grand Duchy of Lithuania (Lithuania and Belarus) and Ukraine, a good portion of the old Commonwealth's eastern lands. Ukrainians ended up with no state of their own and felt betrayed by the Riga arrangements; their resentment gave rise to extreme nationalism and anti-Polish hostility. The territories in the east won by 1921 would form the basis for a swap arranged and carried out by the Soviets in 1943-1945, who at that time compensated the reemerging Polish state for its eastern lands lost to the Soviet Union with conquered areas of eastern Germany.
The successful outcome of the Polish–Soviet War gave Poland a false sense of being a major and self-sufficient military power, and the government a justification for trying to resolve international problems through imposed unilateral solutions. The interwar period's Polish territorial and ethnic policies contributed to bad relations with most of Poland's neighbors and to uneasy cooperation with the more distant centers of power, including France, Britain and the League of Nations.
Democratic politics
The rapidly growing population of Poland within the new boundaries was ¾ agricultural and ¼ urban, with Polish being the primary language of ⅔ of the inhabitants. The minorities had very little voice in the government. A constitution was adopted in 1921. Due to the insistence of the National Democrats, worried about the potential power of Piłsudski if elected, it introduced limited prerogatives for the presidency.
What followed was the Second Republic's short (1921–1926) and turbulent period of constitutional order and parliamentary democracy. The legislature remained fragmented and lacked stable majorities, governments changed frequently, and corruption was commonplace. The open-minded Gabriel Narutowicz was constitutionally elected president by the National Assembly in 1922, but, deemed by the nationalist right wing a traitor pushed through by the votes of alien minorities, he was assassinated.
Poland had suffered a plethora of economic calamities and experienced waves of strikes and a worker revolt in 1923, but there were also signs of progress and stabilization. Władysław Grabski's economically competent government accomplished critical reform of finances and lasted for almost two years. The achievements of the democratic period, such as the establishment, strengthening and expansion of the various governmental and civil society structures and integrative processes necessary for normal functioning of the reunited state and nation, were too easily overlooked. Lurking on the sidelines was the disgusted upper army officer corps, unwilling to subject itself to civilian control but ready to follow its equally dissatisfied legendary chief, at that time retired.
Piłsudski's coup and the Sanation Era
On May 12, 1926, Piłsudski staged a military overthrow of the Polish government, confronting President Stanisław Wojciechowski and overpowering the troops loyal to him. Hundreds died in fratricidal fighting. Piłsudski was supported by several leftist factions, who ensured the success of his coup by blocking the rail transport of government forces during the fighting.[l]
The authoritarian "Sanation" regime that Piłsudski was to lead for the rest of his life and that stayed in power until 1939 was neither leftist nor overtly fascist. Political institutions and parties were allowed to function, which was combined with electoral manipulation and strong-arming of those not willing to cooperate into submission. Eventually, persistent opponents of the regime, many of the leftist persuasion, were subjected to long staged trials and harsh sentences, or detained in camps for political prisoners. Rebellious peasants, striking industrial workers and nationalist Ukrainians became targets of ruthless military pacification; other minorities were also harassed. Piłsudski, conscious of Poland's precarious international situation, signed non-aggression pacts with the Soviet Union in 1932 and with Nazi Germany in 1934. Piłsudski kept personal control of the army, but it was poorly equipped, poorly trained and had poor planning. His only war plan was a defensive war against a Soviet invasion.
Social and economic trends
The mainstream of the Polish society was not affected by the repressions of the Sanation authorities; many enjoyed the relative stability and the economy improved between 1926 and 1929, when it became caught up in the global Great Depression. Independence had stimulated the development of thriving culture and intellectual achievement was high, but the Great Depression brought low prices for farmers and unemployment for workers. Social tensions increased, such as rising antisemitism. The reconstituted Polish state had had only 20 years of relative stability and uneasy peace between the two wars. A major economic transformation and national industrial development plan led by Minister Eugeniusz Kwiatkowski, the main architect of the Gdynia seaport project, was in progress at the time of the outbreak of the war.
The population grew steadily, reaching 35 million in 1939. However, the interwar period's overall economic situation was stagnant. There was little money for investment inside Poland, and few foreigners were interested in investing there. Total industrial production had barely increased between 1913 and 1939, but because of the population growth, the per capita output actually decreased by 18%.
Final years
The regime of Piłsudski's "colonels", left in power after the Marshal's death in 1935, had neither the vision nor resources to cope with the deteriorating situation in Europe. The foreign policy was the responsibility of Józef Beck. He had numerous schemes but alienated most of Poland's neighbors (though he was not blamed for the worsening relations with Germany). The government undertook opportunistic hostile actions against Lithuania and Czechoslovakia. At home, increasingly alienated minorities threatened unrest and violence and were suppressed. Extreme nationalist circles were getting more outspoken. One of the groups, the Camp of National Unity, was connected to the new strongman, Marshal Edward Rydz-Śmigły.
In March 1939, the Polish government rejected the German offer of an alliance and Hitler abrogated the Polish-German pact. Instead, Poland entered into a military alliance with Britain and France. However, the western powers were weaker than Nazi Germany, and their vocal assurances of imminent military action were a bluff that did not deter Hitler. The mid-August British-French talks with the Soviets on forming an anti-Nazi defensive military alliance had failed, in part over Warsaw's refusal to allow the Red Army to operate on Polish territory.[b] On August 23, 1939, Germany and the Soviet Union signed the Molotov–Ribbentrop non-aggression pact, which secretly provided for the dismemberment of Poland into Nazi and Soviet-controlled zones.
World War II and its violence
Invasions and resistance
On September 1, 1939, Hitler ordered his troops into Poland. Poland had signed a pact with Britain (as recently as August 25) and France and the two western powers soon declared war on Germany, but remained rather inactive and extended no aid to the attacked country. On September 17, the Soviet troops moved in and took control of most of the areas of eastern Poland having majority Ukrainian and Belarusian populations under the terms of the German-Soviet agreement. While the nation's military forces were fighting the invading armies, Poland's top government officials and military high command fled the country; both arrived at the Romanian border in mid-September.
Weinberg argues that the most significant Polish contribution to the Allied war effort was sharing its code-breaking results. This allowed the British to break "Enigma", the main German military code, giving it a major advantage in combat. However, some Polish historians have argued that fighting the initial "September Campaign" of World War II was the most significant Polish contribution to the allied war effort. The nearly one million Polish soldiers mobilized significantly delayed Hitler's attack on Western Europe, planned for 1939. When the Nazi offensive did happen, the delay caused it to be less effective, a possibly crucial factor in the case of the defense of Britain.
After Germany invaded the Soviet Union in June 1941, Poland was completely occupied by German troops.
The Poles formed an underground resistance movement and a Polish government in exile, first in Paris and later in London, which was recognized by the Soviet Union (diplomatic relations, broken since September 1939, were resumed in July 1941). During World War II, about 400,000 Poles joined the underground Polish Home Army, about 200,000 went into combat on western fronts in units loyal to the Polish government in exile, and about 300,000 fought under the Soviet command in the last stages of the war.
In April 1943, the Soviet Union broke the deteriorating relations with the Polish government in exile after the German military announced that they had discovered mass graves of murdered Polish army officers at Katyn, in the USSR. The Soviets claimed that the Poles had committed a hostile act by requesting that the Red Cross investigate these reports.
As the Jewish ghetto in occupied Warsaw was being liquidated by Nazi SS units in 1943, the city became the scene of the Warsaw Ghetto Uprising. Ghetto liquidations took place in Polish cities, and uprisings were fought there against impossible odds by desperate Jewish insurgents whose people were being deported and exterminated.
Soviet advance 1944-45, Warsaw Uprising
At the time of the western Allies' increasing cooperation with the Soviet Union, the standing and influence of the Polish government in exile were seriously diminished by the death of its most prominent leader — Prime Minister Władysław Sikorski — on July 4, 1943.
In July 1944, the Soviet Red Army and the People's Army of Poland controlled by the Soviets entered Poland, and through protracted fighting in 1944 and 1945 destroyed the German army, losing 600,000 of their soldiers.
The greatest single instance of armed struggle in the occupied Poland and a major political event of World War II was the Warsaw Uprising of 1944. The uprising, in which most of the Warsaw population participated, was instigated by the underground Armia Krajowa (Home Army) and approved by the Polish government in exile, in an attempt to establish a non-communist Polish administration ahead of the approaching Red Army. The uprising was planned with the expectation that the Soviet forces, who had arrived in the course of their offensive and were present on the other side of the Vistula River, would help in the battle for Warsaw. However, the Soviets had never agreed to do so, and they stopped their advance at the Vistula. The Germans brutally suppressed the forces of the pro-Western Polish underground.[m]
The bitterly fought uprising lasted for two months and resulted in hundreds of thousands of civilians killed and expelled. After the Poles capitulated in their hopeless situation (October 2), the Germans carried out Hitler's order to destroy the remaining infrastructure of the city. The Polish First Army, fighting alongside the Soviet Red Army, entered Warsaw on 17 January 1945.[n]
Changing boundaries, war losses, extermination of Jews
As a consequence of the war and by the decision of the Soviet leadership, agreed to by the United States and Britain beginning with the Tehran Conference (late 1943), Poland's geographic location was fundamentally altered.[c] Stalin's proposal that Poland should be moved very far to the west was readily accepted by the Polish communists, who were at that time at the early stages of forming the post-war government. In July 1944, a communist-controlled "Polish Committee of National Liberation" was established in Lublin, which caused protests by Prime Minister Stanisław Mikołajczyk and the Polish government in exile.
By the time of the Yalta Conference (February 1945), seen by many Poles as the pivotal point when the nation's fate was sealed by the Great Powers, the communists had established a provisional government in Poland. The Soviet position at the Conference was strong, corresponding to their advance on the German battlefield. The three Great Powers gave assurances that the communist provisional government would be converted by including in it democratic forces from within the country and those active abroad (the Provisional Government of National Unity and subsequent democratic elections were the agreed stated goals), but the London-based government in exile was not mentioned.
After the final (for all practical purposes) settlement at Potsdam, the Soviet Union retained most of the territories captured as a result of the 1939 German-Soviet pact (now western Ukraine, western Belarus and part of Lithuania around Vilnius). Poland was compensated with parts of Silesia including Breslau (Wrocław) and Grünberg (Zielona Góra), of Pomerania including Stettin (Szczecin), and of East Prussia, along with Danzig (Gdańsk), collectively referred to as the "Recovered Territories", which were incorporated into the reconstituted Polish state. Most of the German population there was expelled to Germany. 1.5-2 million Poles were expelled from Polish areas annexed by the Soviet Union. The vast majority of them were resettled in the former German territories.
Scientific and numerically correct estimation of the human losses suffered by Polish citizens during World War II does not seem possible because of the paucity of available data. Some conjectures can be arrived at, and they suggest that assertions made in the past have been incorrect and motivated by political needs. To begin with, the total population of 1939 Poland and the sizes of the several nationalities/ethnicities present there are not accurately known, since the last population census took place in 1931.
Modern research indicates that during the war about 5 million Polish citizens were killed, including 3 million Polish Jews. According to the Holocaust Memorial Museum, at least 1.9 to two million ethnic Poles and 3 million Polish Jews were killed. Millions were deported to Germany for forced labor or to German extermination camps such as Treblinka and Auschwitz. According to a recent estimate, between 2.35 and 2.9 million Polish Jews and about 2 million ethnic Poles were killed. The Nazis executed tens of thousands of members of the Polish intelligentsia during the AB Aktion and the Operation Tannenberg, and the Soviets did the same during the Katyn massacre.[j] Over 95% of the Polish Jewish losses (less directly also many of the rest)[d] and 90% of the ethnic Polish losses were caused by Nazi Germany; 5% of the ethnic Polish losses were caused by the Soviets and 5% by Ukrainian nationalists. This Jewish loss of life, together with the numerically much less significant waves of displacement during the war and emigration after the war, after the Polish October 1956 thaw and following the 1968 Polish political crisis, put an end to several centuries of large scale, well-established Jewish settlement and presence in Poland. The magnitudes of the (also substantial) losses of Polish citizens of German, Ukrainian, Belarusian and other nationalities are not known.
In 1940-1941, some 325,000 Polish citizens were deported by the Soviet regime. The number of Polish citizen deaths at the hands of the Soviets is estimated at less than 100,000. In 1943–1944, Ukrainian nationalists (OUN and Ukrainian Insurgent Army) massacred tens of thousands of Poles in Volhynia and Galicia.
Approximately 90% of Poland's war losses were the victims of prisons, death camps, raids, executions, annihilation of ghettos, epidemics, starvation, excessive work and ill treatment. There were one million war orphans and 590,000 war disabled. The country lost 38% of its national assets (Britain lost 0.8%, France 1.5%). Nearly half of prewar Poland was expropriated by the Soviet Union, including the two great cultural centers of Lwów and Wilno. Many Poles could not return to the country for which they had fought because they belonged to the "wrong" political group, or came from prewar eastern Poland incorporated into the Soviet Union (see Polish population transfers (1944–1946)), or, having fought in the West, were warned not to return because of the high risk of persecution. Others were pursued, arrested, tortured and imprisoned by the Soviet authorities for belonging to the Home Army (see anti-communist resistance in Poland (1944–1946)), or persecuted because of having fought on the western front.
With Germany's defeat, as the reestablished Polish state was shifted west to the area between the Oder–Neisse and Curzon lines, the Germans who had not fled were expelled. Of those who remained, many chose to emigrate to post-war Germany. According to a recently quoted estimate, of the 200-250 thousand Jews who escaped the Nazis, 40-60 thousand had survived in Poland. More had been repatriated from the Soviet Union and elsewhere, and the February 1946 population census showed ca. 300,000 Jews within the new borders.[e] Of the surviving Jews, many chose or felt compelled to emigrate. Of the Ukrainians and Lemkos living in Poland within the new borders (about 700,000), close to 95% were forcibly moved to Soviet Ukraine (see Repatriation of Ukrainians from Poland to the Soviet Union), and in 1947 to the new territories in northern and western Poland under Operation Vistula. In all the mutual violence in the 1940s (during and after the war), about 70,000 Poles and about 20,000 Ukrainians were killed.
Because of the changing borders and of mass movements of people of various nationalities, sponsored by governments and spontaneous, the emerging communist Poland ended up with a mainly homogeneous, ethnically Polish population (97.6% according to the December 1950 census). The remaining members of the minorities were not encouraged, by the authorities or by their neighbors, to emphasize their ethnic identity.
People's Republic of Poland (1945–1989)
Post-war struggle for power
In June 1945, as an implementation of the February Yalta Conference directives, according to the Soviet interpretation, a Polish Provisional Government of National Unity was formed; it was soon recognized by the United States and many other countries. A communist rule and Soviet domination were apparent from the beginning: sixteen prominent leaders of the Polish anti-Nazi underground were brought to trial in Moscow in June 1945. In the immediate post-war years, the emerging communist rule was challenged by people and groups not reconciled with it and many thousands perished in the fight or were pursued by the security forces and executed.
A national referendum arranged for by the communist Polish Workers' Party was used to legitimize its dominance in Polish politics and claim widespread support for the party's policies. Although the Yalta agreement called for free elections, those held in January 1947 were controlled by the communists. Some democratic and pro-Western elements, led by Stanisław Mikołajczyk, the former Prime Minister in Exile, participated in the Provisional National Unity Government and the 1947 elections, but were ultimately eliminated through electoral fraud, intimidation and violence. In times of radical change, they attempted to preserve some degree of mixed economy. The Polish government in exile remained in continuous existence until 1990, although its influence was degraded.
Under Stalinism
A Polish People's Republic (Polska Rzeczpospolita Ludowa) was created (so named only in the communist constitution of 1952), effectively under the communist Polish United Workers' Party rule, after the brief period of coalition "National Unity" government.
The ruling party itself was a result of the forced amalgamation (December 1948) of the communist Polish Workers' Party and the historically non-communist, more popular Polish Socialist Party (the party, reestablished in 1944 by its left wing, had been from that time allied with the communists). The ruling communists, who in post-war Poland preferred to use the term "socialism",[f] needed to include the socialist junior partner to broaden their appeal, claim greater legitimacy and eliminate competition on the left. The socialists, who were losing their organization, had to be subjected to political pressure, ideological cleansing and purges in order to become suitable for the unification on the "Workers' Party"'s terms. The socialist pro-communist leaders were the prime ministers Edward Osóbka-Morawski and Józef Cyrankiewicz.
During the most oppressive Stalinist period, terror, justified by the necessity of eliminating reactionary subversion, was widespread. Many thousands of perceived opponents of the regime were arbitrarily tried and large numbers executed. The People's Republic was led by Moscow's discredited operatives such as Bolesław Bierut, Jakub Berman and Konstantin Rokossovsky. In 1953 and later, despite a partial thaw after Stalin's death, the persecution of the independent Polish Catholic Church intensified and its head, Cardinal Stefan Wyszyński, was detained.
Larger rural estates and agricultural holdings as well as post-German property were redistributed through land reform and industry was nationalized beginning in 1944. Communist-introduced restructuring and imposition of work-space rules encountered active worker opposition already in 1945-1947. The Three-Year Plan (1947–1949) continued with the rebuilding, socialization and restructuring of the economy. The rejection of the Marshall Plan (1947), however, made the aspirations of catching-up with the West European standard of living unrealistic.
The government's economic high priority was the development of militarily useful heavy industry. State-run institutions, collectivization and cooperative entities were imposed (the last category dismantled in the 1940s as not socialist enough, later reestablished), while even small-scale private enterprises were being eradicated. Stalinism introduced heavy political and ideological indoctrination in social life, culture and education.
Great strides, however, were made in the areas of universal public education (including elimination of adult illiteracy), health care and recreational amenities for working people. Many historic sites, including central districts of war-destroyed Warsaw and Gdańsk (Danzig), were rebuilt at a great cost. A majority of Poland's urban residents still live in apartment blocks built during the communist era.
In March 1956, after the 20th Soviet Party Congress in Moscow ushered in de-Stalinization, Edward Ochab was chosen to replace the deceased Bierut as the Polish Communist Party's First Secretary. Poland was rapidly overtaken by social restlessness and reformist undertakings; thousands of political prisoners were released and many people previously persecuted were officially rehabilitated. Riots by economically distressed workers in Poznań ensued in June, giving rise to a new pattern in communist Poland's politics.
Amidst the continuing social and national upheaval, in October there was a further shakeup in the party leadership.[k] While retaining most traditional communist economic and social aims, the regime led by the new Polish Party's First Secretary Władysław Gomułka began to liberalize internal life in Poland. The dependence on the Soviet Union was somewhat loosened and the state's relationships with the Church and Catholic lay activists were put on a new footing. Collectivization efforts were abandoned and agricultural land, unlike in other Comecon countries, had mostly remained a domain of private family farmers.
A sophisticated cultural life, involved to varying degrees in the intelligentsia's opposition to the totalitarian system, developed under Gomułka and his successors. The creative process had often been compromised by state censorship. Nevertheless, significant productions were accomplished in fields such as literature, theater, cinema and music, among others. Journalism based on a veiled understanding with its readers, as well as native varieties of the popular trends and styles of Western mass culture, was well represented. Uncensored information and works generated by émigré circles (the Paris-based Kultura magazine developed a conceptual framework for dealing with the issues of borders and neighbors of a future free Poland) were conveyed by a variety of channels, with Radio Free Europe being of foremost importance.
Stagnation and crackdown
Several years of relative stabilization, accompanied by economic stagnation and curtailment of reforms and reformists, followed the legislative election of 1957. A nuclear weapon-free zone in Central Europe was proposed in 1957 by Adam Rapacki, Poland's foreign minister. Several prominent "revisionists" were expelled from the party in the 1960s.
In 1965, the Conference of Polish Bishops issued the Letter of Reconciliation of the Polish Bishops to the German Bishops. In 1966, the celebrations of the 1,000th anniversary of the Baptism of Poland led by Cardinal Stefan Wyszyński and other bishops turned into a huge demonstration of the power and popularity of the Polish Catholic Church.
The post-1956 liberalizing trend, in decline for a number of years, was reversed in March 1968, when student demonstrations were suppressed. Motivated in part by the Prague Spring movement by then in progress, Polish opposition leaders, intellectuals, academics and students used a series of Warsaw performances of a patriotic historical theater classic, and its forced termination, as a springboard for protests, which soon spread to centers of higher education and turned nationwide. The authorities responded with a major crackdown on opposition activity, which notably included reorganizations, the firing of faculty and the dismissal of students at universities and other institutions of learning. At the center of the controversy were also the few Znak Catholic deputies in the Sejm, who attempted to defend the students.
In an official speech, Gomułka raised the artificial issue of the role of Jewish activists in the ongoing events, giving ammunition to a nationalistic party faction opposed to his rule. Using the context of the 1967 military victory of Israel, some in the Polish communist leadership waged an antisemitic campaign against the remnants of the Polish Jewish community. These assimilated and secular, often well-placed people were accused of actively sympathizing with Israeli aggression (most Poles welcomed the defeat of a Soviet ally) and of being disloyal. Branded "Zionists", they became a scapegoat and were blamed for the March unrest, which eventually led to the emigration of much of Poland's remaining Jewish population (about 15,000 Polish citizens left the country).
With the Gomułka regime's active support, after the Brezhnev Doctrine was informally announced, the Polish People's Army took part in the infamous Warsaw Pact invasion of Czechoslovakia in August 1968.
In December 1970, the governments of Poland and West Germany signed a treaty which normalized their relations and made possible meaningful cooperation in a number of areas of bilateral interest. The Federal Republic recognized the post-war de facto border between Poland and East Germany.
Worker revolts and Solidarity
In December 1970, disturbances and strikes in the port cities of Gdańsk (Danzig), Gdynia, and Szczecin (Stettin), triggered by a government-announced price increase for essential consumer goods, reflected deep dissatisfaction with living and working conditions in the country. The activity was centered on the industrial shipyard areas of the three coastal cities. Dozens of protesting workers and bystanders were killed in police and military actions, generally directed by Gomułka and under the command of Minister of Defense Wojciech Jaruzelski. In the aftermath, Edward Gierek replaced Gomułka as First Secretary of the Communist Party. The new regime was seen as more modern, friendly and pragmatic and enjoyed initially a degree of popular (and foreign) support.[g][o]
Gierek's years (1970-1980) brought wide-ranging, if ultimately unsuccessful, government efforts to revitalize the economy on the one hand, and the maturation of opposition circles, emboldened by the Helsinki Conference processes, on the other. Another attempt to raise food prices resulted in the June 1976 protests. Jacek Kuroń was among the activists defending the accused rioters from Radom and other towns. The Workers' Defense Committee (KOR), established in response to the crackdown, consisted of dissident intellectuals willing to openly support the industrial workers, farmers and students who organized, struggled with and were persecuted by the authorities throughout the late 1970s.
In October 1978, the Archbishop of Kraków, Cardinal Karol Józef Wojtyła, became Pope John Paul II, head of the Roman Catholic Church. Catholics and others rejoiced at the elevation of a Pole to the papacy and greeted his June 1979 visit to Poland with an outpouring of emotion.
Fueled by large infusions of Western credit, Poland's economic growth rate was one of the world's highest during the first half of the 1970s. But much of the borrowed capital was misspent, and the centrally planned economy was unable to use the new resources effectively. The growing debt burden became insupportable in the late 1970s, and economic growth had become negative by 1979.
On July 1, 1980, with the Polish foreign debt at more than $20 billion, the government made another attempt to increase meat prices. A chain reaction of strikes virtually paralyzed the Baltic coast by the end of August and, for the first time, closed most coal mines in Silesia. Poland was entering into an extended crisis that would change the course of its future development.
On August 31, workers at the Lenin Shipyard in Gdańsk, led by an electrician named Lech Wałęsa, signed a 21-point agreement with the government that ended their strike. Similar agreements were signed at Szczecin and in Silesia. The key provision of these agreements was the guarantee of the workers' right to form independent trade unions and the right to strike. After the Gdańsk Agreement was signed, a new national union movement "Solidarity" swept Poland.
The discontent underlying the strikes was intensified by revelations of widespread corruption and mismanagement within the Polish state and party leadership. In September 1980, Gierek was replaced by Stanisław Kania as First Secretary.
Alarmed by the rapid deterioration of the Party's authority following the Gdańsk agreement, the Soviet Union proceeded with a massive military buildup along Poland's border in December 1980. In February 1981, Defense Minister Gen. Wojciech Jaruzelski assumed the position of Prime Minister, and in October 1981, was named First Secretary. At the first Solidarity national congress in September–October 1981, Lech Wałęsa was elected national chairman of the union.
Martial law and end of communism
On December 12–13, 1981, the regime declared martial law, under which the army and ZOMO riot police were used to crush the union. Virtually all Solidarity leaders and many affiliated intellectuals were arrested or detained. The United States and other Western countries responded to martial law by imposing economic sanctions against the Polish regime and against the Soviet Union. Unrest in Poland continued for several years thereafter.
Having achieved some semblance of stability, the Polish regime in several stages relaxed and then rescinded martial law. By December 1982, martial law was suspended, and a small number of political prisoners, including Wałęsa, were released. Although martial law formally ended in July 1983 and a partial amnesty was enacted, several hundred political prisoners remained in jail.
In September 1986, a general amnesty was declared and the government released nearly all political prisoners. Throughout the period, the authorities continued to harass dissidents and Solidarity activists. Solidarity remained proscribed and its publications banned; independent publications were censored. However, with the economic crisis unresolved and societal institutions dysfunctional, both the ruling establishment and the Solidarity-led opposition began looking for ways out of the stalemate, and exploratory contacts were being established.
The government's inability to forestall Poland's economic decline led to waves of strikes across the country in April, May and August 1988. Under the reformist leadership of Mikhail Gorbachev, the Soviet Union was becoming increasingly destabilized and unwilling to apply military and other pressure to prop up allied regimes in trouble. In the late 1980s, the government was forced to negotiate with Solidarity in the Polish Round Table Negotiations. The resulting Polish legislative election in 1989 was a watershed event marking the fall of communism in Poland.
Third Polish Republic (1989–today)
Transition and Solidarity government
The "round-table" talks with the opposition began in February 1989. These talks produced the Round Table Agreement in April for partly open National Assembly elections. The failure of the communists at the polls resulted in a political crisis. The agreement called for a communist president, and on July 19, the National Assembly, with the support of a number of Solidarity deputies, elected General Wojciech Jaruzelski to that office. However, two attempts by the communists to form a government failed.
On August 19, President Jaruzelski asked journalist/Solidarity activist Tadeusz Mazowiecki to form a government; on September 12, the Sejm (the national legislature) voted approval of Prime Minister Mazowiecki and his cabinet. For the first time in post-war history, Poland had a government led by noncommunists, setting a precedent to be soon followed by many other communist-ruled nations.
In December 1989, the Sejm approved the government's reform program to transform the Polish economy rapidly from centrally planned to free-market, amended the constitution to eliminate references to the "leading role" of the communist party, and renamed the country the "Republic of Poland." The communist Polish United Workers' Party dissolved itself in January 1990, creating in its place a new party, Social Democracy of the Republic of Poland.
In October 1990, the constitution was amended to curtail the term of President Jaruzelski. In November 1990, the German–Polish Border Treaty was signed.
In the early 1990s, Poland made great progress towards achieving a fully democratic government and a market economy. In November and December 1990, Lech Wałęsa was elected president for a five-year term, becoming the first popularly elected President of Poland. Poland's first free parliamentary election was held in 1991. More than 100 parties participated, and no single party received more than 13% of the total vote. In 1993 the Soviet Northern Group of Forces finally left Poland.
"Post-communist" and post-Solidarity governments; joining NATO and the European Union
In the 1997 parliamentary election, two parties with roots in the Solidarity movement — Solidarity Electoral Action (AWS) and the Freedom Union (UW) — won 261 of the 460 Sejm seats and formed a coalition government. In April 1997, the new Constitution of Poland was finalized, and in July it was put into effect, replacing the amended communist-era statute previously in use.
In the presidential election of 2000, Aleksander Kwaśniewski, the incumbent president and former leader of the SLD, was re-elected in the first round of voting. After the September 2001 parliamentary election, the SLD (a successor of the communist party) formed a coalition with the agrarian Polish People's Party (PSL) and the leftist Labor Union (UP).
Poland joined the European Union in May 2004. Both President Kwaśniewski and the government were vocal in their support for this cause. The only party decidedly opposed to EU entry was the populist right-wing League of Polish Families (LPR).
After the fall of communism, the government policy of guaranteed full employment ended, and many large unprofitable state enterprises were closed or restructured. Beginning in 1989, the Balcerowicz Plan and liberal economic policies in general were implemented with the support of the leading Solidarity figures.
The restructuring and other economic woes of the transition period at times pushed unemployment as high as 20%. After EU accession, the gradual opening of West European labor markets to Polish workers, combined with domestic economic growth, led to a marked improvement in the employment situation in Poland.
Civic Platform rivalry with Law and Justice; Civic Platform-led government from 2007
The September 2005 parliamentary election was expected to produce a coalition of two center-right parties, PiS (Law and Justice) and PO (Civic Platform). During the bitter campaign, PiS overtook PO, gaining 27% of the votes cast and becoming the largest party in the Sejm, ahead of PO with 24%. In the presidential election in October, the early favorite, Donald Tusk, leader of the PO, was beaten 54% to 46% in the second round by the PiS candidate Lech Kaczyński. Coalition talks proceeded simultaneously with the presidential elections, but the negotiations ended in a stalemate and the PO decided to go into opposition. PiS formed a minority government that relied on the support of smaller populist and agrarian parties (Samoobrona, LPR) to govern. This arrangement later became a formal coalition, but its deteriorating state made an early parliamentary election necessary.
In the 2007 parliamentary election, the Civic Platform was most successful (41.5%), ahead of Law and Justice (32%), and the government of Donald Tusk, the chairman of PO, was formed. PO governed in a parliamentary majority coalition with the smaller Polish People's Party (PSL).
During the worldwide economic downturn triggered and exemplified by the 2008 collapse and bailout of the United States banking system, the Polish economy weathered the crisis relatively unscathed in comparison with many European and other countries. However, warning signs of upcoming difficulties were present, and the European sovereign debt crisis, which was unraveling some of Europe's economies, was expected to negatively affect the economy of Poland as well, even though Poland is not a member of the eurozone.
The social price Poles paid for the implementation of liberal free-market economic policies was a sharply more inequitable distribution of wealth and the associated impoverishment of large segments of society. According to 2010 Eurostat data, 14% of Poles were "severely materially deprived", compared to the 8% EU average. This nevertheless represented the steepest drop in poverty rates of any European country over the previous several years, a decline that had taken place since Poland joined the EU.[h]
Poland's president Lech Kaczyński and all others aboard died in a plane crash on April 10, 2010, in western Russia, near Smolensk. President Kaczyński and other prominent Poles were on their way to the Katyn massacre anniversary commemoration.
In the second and final round of the Polish presidential election, on July 4, 2010, Bronisław Komorowski, the Acting President, Marshal of the Sejm and a Civic Platform politician, defeated Jarosław Kaczyński by 53% to 47%.
The Smolensk tragedy brought into the open deep divisions within the Polish society and became a destabilizing factor in Poland's politics. A marked increase in nationalistic rhetoric and activity followed in its wake.
Poland's relations with its European neighbors tended to be good or improving, with Belarus being a sore point.[i] The Eastern Partnership summit, hosted in September 2011 by Poland, the holder of the rotating Presidency of the Council of the European Union, resulted in no agreement on near future expansion of the Union to include the several considered Eastern European and Caucasus states, formerly Soviet republics. The European Union membership for at least some of those countries, including Ukraine, had been a long-standing goal of Polish diplomacy. Poland also promoted NATO membership for Ukraine and Georgia, a plan seen by Russia as threatening to its security and not supported by a majority of Ukrainian voters. For a number of years, Poland and Russia remained in disagreement regarding the planned NATO missile defense system in Europe, in which Poland sought to be an active participant, and which Russia opposed.
The 2011 parliamentary election results were generally an affirmation of the current distribution of political forces. The Civic Platform won over 39% of the votes, Law and Justice almost 30%, Palikot's Movement 10%, the Polish People's Party and the Democratic Left Alliance over 8% each. The new element was the successful debut of the left-of-center movement of Janusz Palikot, a maverick politician, which resulted in decreased electoral appeal of the Democratic Left Alliance.
Poland's foreign minister, Radosław Sikorski, delivered a speech on 28 November 2011 in Berlin, in which he emphatically appealed to Germany and other European Union countries for closer economic and political integration and coordination, to be accomplished through a more powerful central government of the Union. Sikorski felt that decisive action and substantial reform, led by Germany, were necessary to prevent a collapse of the euro and subsequent destabilization and possible demise of the European Union. His remarks, directed primarily at the German audience, encountered hostile reception in Poland from Jarosław Kaczyński and his conservative parliamentary opposition, who accused the minister of betraying Poland's sovereignty and demanded his ouster and trial. Sikorski identified in his speech several areas important to Polish traditionalists, which, he said, should permanently remain within the domain of individual national governments.
At the EU summit in Brussels on 9 December 2011, British opposition prevented any changes to the EU treaty. All the remaining EU governments, including the actively involved Polish delegation led by Prime Minister Tusk, indicated support for the new "fiscal compact" and greater-coordination agreement, intended to impose financial discipline on member states and to cure the present instability in the eurozone. The agreed reforms would be implemented in the laws of individual states.
In January 2012, Poland's Prosecutor General's office initiated investigative proceedings against Zbigniew Siemiątkowski, the former Polish intelligence chief. Siemiątkowski was charged with facilitating the alleged CIA detention operation in Poland, in which foreign suspects may have been tortured in the context of the War on Terror. The alleged violations of constitutional and international law took place when Leszek Miller, presently a member of parliament and leader of the Democratic Left Alliance, was Prime Minister (2001-2004), and he was also considered a possible subject of future legal action. Aleksander Kwaśniewski, the President of Poland at that time, acknowledged knowing about and consenting to (together with the Prime Minister) the secret "CIA prisons".
During the Russian Patriarch's visit to Poland on August 16–19, 2012, Kirill I, Primate of the Russian Orthodox Church and Archbishop Józef Michalik, President of the Catholic Conference of Polish Bishops, signed a historic Polish-Russian church message on reconciliation and mutual forgiveness.
Proposed legislation granting limited legal status and rights to same-sex civil unions was defeated in all three versions considered by the Sejm, the lower house of the Polish parliament, on January 25, 2013.
See also
- History of Austria
- History of Belarus
- History of the Czech Republic
- History of Europe
- History of the European Union
- History of Germany
- History of Lithuania
- History of Russia
- History of Slovakia
- History of Sweden
- History of Ukraine
- List of Kings of Poland
- List of Presidents of Poland
- List of Prime Ministers of Poland
- Military history of Poland
- Old Polish units of measurement
- Polish American
- Polish British
- Polish United Workers' Party
- Politics of Poland
- Poland and West-Slavs 800–950
- Poland 990–1040
- Poland 1040–1090
- Poland 1090–1140
- Poland 1140–1250
- Poland 1250–1290
- Poland 1290–1333
- Poland 1333–1350
- Poland 1350–1370
- Poland 1550
- Poland 1773
- Poland 2004
- Poland (flash version)
a.^ Piłsudski's family roots in the Polonized gentry of the Grand Duchy of Lithuania and the resulting point of view (seeing himself and people like him as legitimate Lithuanians) put him in conflict with the modern Lithuanian nationalists (who in Piłsudski's lifetime redefined the scope of the "Lithuanian" connotation), by extension with other nationalists, and also with the Polish modern nationalist movement.
c.^ The establishment of a Poland restricted to "minimal size", according to ethnographic boundaries (such as the ones shown on this 1920 map, or the lands common to both prewar Poland and postwar Poland), was planned by the Soviet People's Commissariat for Foreign Affairs in 1943-1944, and recommended by Ivan Maisky to Vyacheslav Molotov in early 1944 because of what Maisky saw as Poland's historically unfriendly disposition toward Russia and the Soviet Union. Joseph Stalin opted for a larger version, allowing a "swap" (territorial compensation for Poland), which involved the eastern lands gained by Poland at the Peace of Riga of 1921 and now lost, and eastern Germany conquered from the Nazis in 1944-1945. In regard to the several disputed areas, including Stettin, "Zakerzonia" and Białystok (Białystok was claimed by the communists of the Byelorussian SSR), the Soviet leader made determinations favorable to Poland.
Other territorial and ethnic scenarios were also possible, generally with outcomes less advantageous to Poland than its present form.
d.^ Timothy Snyder spoke of about 100,000 Jews killed by Poles during the Nazi occupation, the majority probably by members of the collaborationist Blue Police. This number would likely have been many times higher had Poland entered into an alliance with Germany in 1939, as some Polish post-war and post-1989 historians and others have argued it should have.
e.^ Some may have falsely claimed Jewish identity in the hope of obtaining permission to emigrate. The communist authorities, pursuing the concept of an ethnically homogeneous Poland (in accordance with the recent border changes and expulsions), were allowing the Jews to leave the country. For a discussion of early communist Poland's ethnic politics, see Timothy Snyder, The Reconstruction of Nations, chapters on the modern "Ukrainian Borderland".
g.^ The Soviet leadership, which had previously ordered the crushing of the uprising in East Germany, the Hungarian Revolution and the Prague Spring, now became worried about the demoralization of the Polish army, a crucial Warsaw Pact component, because it was being used against Polish workers. The Soviets withdrew their support for Gomułka, who insisted on the use of force; he and his close associates were ousted from the Polish politburo.
h.^ The shrinking of the poverty sphere was made possible by reduced unemployment and the influx of European Union funds. Over two million Poles, however, have left the country to work abroad. According to some Polish academic researchers, the Eurostat data, based on information collected by the Central Statistical Office of Poland, does not fully reflect the scope of poverty in Poland. Poverty is becoming increasingly permanent in post-industrial urban areas and is highest among children. The two articles quoted agree on the greatest poverty-rate reduction of any European country, and both invoke Eurostat, but they give widely divergent numerical figures.
i.^ Following the presidential election of Viktor Yanukovych and the subsequent persecution of Yulia Tymoshenko, Poland's and European Union's relations with Ukraine have also entered a period of difficulty.
j.^ "All the currently available documents of Nazi administration show that, together with the Jews, the stratum of the Polish intelligentsia was marked for total extermination. In fact, Nazi Germany achieved this goal almost by half, since Poland lost 50 percent of her citizens with university diplomas and 35 percent of those with a gimnazium diploma."
k.^ Decisive political events took place in Poland shortly before the Soviet intervention in Hungary. Władysław Gomułka, a reformist leader at that time, was reinstated to the Polish Politburo and the Eighth Plenum of the party's Central Committee was announced to convene on October 19, 1956, all without seeking the Soviet approval. The Soviet Union responded with military moves and intimidation and its "military-political delegation", led by Nikita Khrushchev, quickly arrived in Warsaw. Gomułka tried to convince them of his loyalty but insisted on the reforms that he considered essential, including a replacement of Poland's Soviet-trusted minister of defense, Konstantin Rokossovsky. The disconcerted Soviets returned to Moscow, the Polish Plenum elected Gomułka First Secretary and removed Rokossovsky from the Politburo. On October 21, the Soviet Presidium followed Khrushchev's lead and decided unanimously to "refrain from military intervention" in Poland, a decision likely influenced also by the ongoing preparations for the invasion of Hungary. The Soviet gamble paid off because Gomułka in the coming years turned out to be a very dependable Soviet ally and an orthodox communist.
l.^ The delayed reinforcements were coming and the government military commanders General Tadeusz Rozwadowski and Władysław Anders wanted to keep on fighting the coup perpetrators, but President Stanisław Wojciechowski decided to surrender to prevent the imminent widening of civil war. The coup brought to power the "Sanation" regime, led by Józef Piłsudski and, after Piłsudski's death, by Edward Rydz-Śmigły. The Sanation regime persecuted the opposition within the military and in general. Rozwadowski died imprisoned, according to some accounts murdered. At the time of Rydz-Śmigły's command, the Sanation camp embraced the ideology of Roman Dmowski, Piłsudski's nemesis. Rydz-Śmigły did not allow General Władysław Sikorski, an opponent of the Sanation, to participate as a soldier in the September 1939 defense of the country. During World War II in France and Britain the Polish government in exile became dominated by anti-Sanation politicians. The perceived Sanation followers were in turn persecuted (in exile) under prime ministers Sikorski and Stanisław Mikołajczyk.
m.^ General Zygmunt Berling of the Soviet-allied First Polish Army attempted in mid-September a crossing of the Vistula and landing at Czerniaków to aid the insurgents, but the operation was defeated by the Germans and its participants suffered heavy losses.
o.^ One of the party leaders, Mieczysław Rakowski, who abandoned his mentor Gomułka following the 1970 crisis, saw the demands of the demonstrating workers as "exclusively socialist" in character, because of the way they were phrased. Most people in communist Poland, including opposition activists, did not question the supremacy of "socialism" or the socialist idea; misconduct by party officials, such as not following the provisions of the constitution, was blamed instead. This assumed standard of political correctness was increasingly challenged in the years that followed, when pluralism became a frequently used concept.
- Various authors, ed. Marek Derwich and Adam Żurek, U źródeł Polski (do roku 1038) (Foundations of Poland (until year 1038)), Wydawnictwo Dolnośląskie, Wrocław 2002, ISBN 83-7023-954-4, p. 1-143
- Jerzy Wyrozumski – Historia Polski do roku 1505 (History of Poland until 1505), Państwowe Wydawnictwo Naukowe (Polish Scientific Publishers PWN), Warszawa 1986, ISBN 83-01-03732-6, p. 1-177
- Jerzy Wyrozumski – Historia Polski do roku 1505 (History of Poland until 1505), p. 178-250
- Józef Andrzej Gierowski – Historia Polski 1505–1764 (History of Poland 1505–1764), Państwowe Wydawnictwo Naukowe (Polish Scientific Publishers PWN), Warszawa 1986, ISBN 83-01-03732-6, p. 1-105
- Richard Overy (2010), The Times Complete History of the World, Eighth Edition, p. 176-177. London: Times Books. ISBN 978-0-00-788089-8.
- Norman Davies, Europe: A History, p. 555, 1998 New York, HarperPerennial, ISBN 0-06-097468-0
- Józef Andrzej Gierowski – Historia Polski 1505–1764 (History of Poland 1505–1764), p. 105-173
- Józef Andrzej Gierowski – Historia Polski 1505–1764 (History of Poland 1505–1764), p. 174-301
- Józef Andrzej Gierowski – Historia Polski 1764-1864 (History of Poland 1764-1864), Państwowe Wydawnictwo Naukowe (Polish Scientific Publishers PWN), Warszawa 1986, ISBN 83-01-03732-6, p. 1-74
- Józef Andrzej Gierowski – Historia Polski 1764-1864 (History of Poland 1764-1864), p. 74-101
- Philip W. Barker (2008). Religious Nationalism in Modern Europe: If God be for Us. Taylor & Francis US. p. 91.
- Karin Friedrich, Brandenburg-Prussia, 1466-1806: The Rise of a Composite State (2012) p 93
- Jarosław Czubaty, "'What is to be Done When the Motherland Has Died?' The Moods and Attitudes of Poles After the Third Partition, 1795–1806," Central Europe (2009) 7#2 pp 95-109.
- Józef Andrzej Gierowski – Historia Polski 1764-1864 (History of Poland 1764-1864), p. 102-181
- Józef Andrzej Gierowski – Historia Polski 1764-1864 (History of Poland 1764-1864), p. 181-231
- Józef Andrzej Gierowski – Historia Polski 1764-1864 (History of Poland 1764-1864), p. 181-311
- Jerzy Zdrada, Biel, czerwień, czerń (Whiteness, redness, blackness), Polityka www.polityka.pl, January 27, 2010
- Józef Andrzej Gierowski – Historia Polski 1764-1864 (History of Poland 1764-1864), p. 311-345
- A Concise History of Poland, by Jerzy Lukowski and Hubert Zawadzki. Cambridge: Cambridge University Press, 2nd edition 2006, ISBN 0-521-61857-6, p. 182-216
- Józef Buszko – Historia Polski 1864–1948 (History of Poland 1864–1948), Państwowe Wydawnictwo Naukowe (Polish Scientific Publishers PWN), Warszawa 1986, ISBN 83-01-03732-6, p. 84-85
- Józef Buszko – Historia Polski 1864–1948 (History of Poland 1864–1948), p. 44
- Józef Buszko – Historia Polski 1864–1948 (History of Poland 1864–1948), p. 140
- A Concise History of Poland, by Jerzy Lukowski and Hubert Zawadzki, p. 217-222
- Davies, Heart of Europe (1986) 112
- Andrzej Gawryszewski (2005). Ludność Polski w XX wieku (Population of Poland in the 20th Century). Warsaw: Polska Akademia Nauk (Polish Academy of Sciences). ISBN 83-87954-66-7.
- Margaret MacMillan (2007). Paris 1919: Six Months That Changed the World. Random House Digital, Inc. p. 207.
- A Concise History of Poland, by Jerzy Lukowski and Hubert Zawadzki, p. 224, 226-227
- Heart of Europe. A Short History of Poland by Norman Davies. Oxford: Oxford University Press paperback 1986. ISBN 0-19-285152-7 (pbk.), p. 115-121
- A Concise History of Poland, by Jerzy Lukowski and Hubert Zawadzki, p. 226-227
- M. B. Biskupski, "Paderewski, Polish Politics, and the Battle of Warsaw, 1920," Slavic Review (1987) 46#3 pp. 503-512 in JSTOR
- A Concise History of Poland, by Jerzy Lukowski and Hubert Zawadzki, p. 231
- Timothy Snyder, The Reconstruction of Nations, p. 60-65, 2003 New Haven & London, Yale University Press, ISBN 978-0-300-10586-5
- Anita J. Prażmowska – A History of Poland, 2004 Palgrave Macmillan, ISBN 0-333-97253-8, p. 164-172
- A Concise History of Poland, by Jerzy Lukowski and Hubert Zawadzki, p. 225, 230, 231
- Timothy Snyder, The Reconstruction of Nations, p. 57-60, 62
- A Concise History of Poland, by Jerzy Lukowski and Hubert Zawadzki, p. 230
- Timothy Snyder, The Reconstruction of Nations, p. 64-65, 68-69
- Timothy Snyder, The Reconstruction of Nations, p. 63-69
- Heart of Europe. A Short History of Poland by Norman Davies, p. 147
- Timothy Snyder, The Reconstruction of Nations, p. 139-144
- Heart of Europe. A Short History of Poland by Norman Davies, p. 115-121, 73-80
- A Concise History of Poland, by Jerzy Lukowski and Hubert Zawadzki, p. 232
- Heart of Europe. A Short History of Poland by Norman Davies, p. 121-123
- A Concise History of Poland, by Jerzy Lukowski and Hubert Zawadzki, p. 237-238
- Davies, Norman (2005). God's Playground: A History of Poland, Volume II. New York: Columbia University Press. ISBN 978-0-231-12819-3, p. 307, 308
- Davies, Norman (2005). God's Playground: A History of Poland, Volume II, p. 312
- Heart of Europe. A Short History of Poland by Norman Davies, p. 123-127
- A Concise History of Poland, by Jerzy Lukowski and Hubert Zawadzki, p. 248-249
- Walter M. Drzewieniecki,"The Polish Army on the Eve of World War II," Polish Review (1981) 26#3 pp 54-64 in JSTOR
- Heart of Europe. A Short History of Poland by Norman Davies, p. 126
- A Concise History of Poland, by Jerzy Lukowski and Hubert Zawadzki, p. 242
- A Concise History of Poland, by Jerzy Lukowski and Hubert Zawadzki, p. 249-250
- Józef Buszko – Historia Polski 1864–1948 (History of Poland 1864–1948), p. 360
- A Concise History of Poland, by Jerzy Lukowski and Hubert Zawadzki, p. 247-248, 251-252
- Heart of Europe. A Short History of Poland by Norman Davies, p. 127-129
- A Concise History of Poland, by Jerzy Lukowski and Hubert Zawadzki, p. 252-253
- Davies, Norman (2005). God's Playground: A History of Poland, Volume II, p. 319-320
- Nick Holdsworth (2008-10-18). "Stalin 'planned to send a million troops to stop Hitler if Britain and France agreed pact'". The Telegraph. Retrieved 2011-10-28.
- Heart of Europe. A Short History of Poland by Norman Davies, p. 155-156
- Józef Buszko – Historia Polski 1864–1948 (History of Poland 1864–1948), p. 362-369
- Halik Kochanski, The Eagle Unbowed: Poland and the Poles in the Second World War, pp. 59-93, Harvard University Press, Cambridge, Mass. 2012, ISBN 978-0-674-06814-8
- Wladyslaw Kozaczuk and Jerzy Straszak, Enigma: How the Poles Broke the Nazi Code (2004)
- Gerhard Weinberg, A World at Arms: A Global History of World War II (1994) p 50
- Czesław Brzoza, Andrzej Leon Sowa – Historia Polski 1918-1945 (History of Poland 1918-1945), Wydawnictwo Literackie, Kraków 2009, ISBN 978-83-08-04125-3, p. 693
- Heart of Europe. A Short History of Poland by Norman Davies, p. 68-69
- Józef Buszko – Historia Polski 1864–1948 (History of Poland 1864–1948), p. 375-382
- A Concise History of Poland, by Jerzy Lukowski and Hubert Zawadzki, p. 264-265.
- Czesław Brzoza, Andrzej Leon Sowa – Historia Polski 1918-1945 (History of Poland 1918-1945), p. 693-694
- Józef Buszko – Historia Polski 1864–1948 (History of Poland 1864–1948), p. 382-384
- Józef Buszko – Historia Polski 1864–1948 (History of Poland 1864–1948), p. 389-390
- Heart of Europe. A Short History of Poland by Norman Davies, p. 73-75
- Józef Buszko – Historia Polski 1864–1948 (History of Poland 1864–1948), p. 394-395
- Czesław Brzoza, Andrzej Leon Sowa – Historia Polski 1918-1945 (History of Poland 1918-1945), p. 650-663
- Poland under Communism: A Cold War History, A. Kemp-Welch. Cambridge: Cambridge University Press 2008. ISBN 978-0-521-71117-3 paperback, p. 4-5
- Czesław Brzoza – Polska w czasach niepodległości i II wojny światowej (1918-1945) (Poland in times of independence and World War II (1918-1945)), p. 386-387, 390
- Heart of Europe. A Short History of Poland by Norman Davies, p. 75, 104-105
- Poland under Communism: A Cold War History, A. Kemp-Welch, p. 1
- Holocaust: The Ignored Reality, by Timothy Snyder, The New York Review of Books, July 16, 2009
- Józef Buszko – Historia Polski 1864–1948 (History of Poland 1864–1948), p. 398-401
- Poland under Communism: A Cold War History, A. Kemp-Welch, p. 6-7
- Józef Buszko – Historia Polski 1864–1948 (History of Poland 1864–1948), p. 408-410
- Nicholas A. Robins, Adam Jones, Genocides by the Oppressed: Subaltern Genocide in Theory and Practice. Indiana University Press. 2009. pp. 59-60.
- Czesław Brzoza, Andrzej Leon Sowa – Historia Polski 1918-1945 (History of Poland 1918-1945), p. 694-695
- styl.pl, PANI 10/2011, Polskość noszę z sobą w plecaku (I carry Polishness with me in the backpack), Małgorzata Domagalik's conversation with Jan T. Gross
- Haar, Ingo (2007). ""Bevölkerungsbilanzen" und "Vertreibungsverluste"". Herausforderung Bevölkerung Part 6. VS Verlag für Sozialwissenschaften. p. 267. doi:10.1007/978-3-531-90653-9. ISBN 978-3-531-15556-2. Retrieved 2009-08-28.
- "Polish victims". United States Holocaust Memorial Museum. Retrieved 2009-08-28.
- Czesław Brzoza, Andrzej Leon Sowa – Historia Polski 1918-1945 (History of Poland 1918-1945), p. 695-696
- Norman M. Naimark. Stalin's Genocides. Princeton University Press. 2010. p. 91.
- Timothy Snyder. Bloodlands: Europe Between Hitler and Stalin. Basic Books. 2010. pp. 415, 126, 146-147.
- Poland under Communism: A Cold War History, A. Kemp-Welch, p. 157-163
- Czesław Brzoza, Andrzej Leon Sowa – Historia Polski 1918-1945 (History of Poland 1918-1945), p. 696
- Czesław Brzoza, Andrzej Leon Sowa – Historia Polski 1918-1945 (History of Poland 1918-1945), p. 695
- Józef Buszko – Historia Polski 1864–1948 (History of Poland 1864–1948), p. 410-411
- Czesław Brzoza, Andrzej Leon Sowa – Historia Polski 1918-1945 (History of Poland 1918-1945), p. 694
- Poland under Communism: A Cold War History, A. Kemp-Welch, p. 23-24
- John Radzilowski, A Traveller's History of Poland; Northampton, Massachusetts: Interlink Books, 2007, ISBN 1-56656-655-X, p. 223-225
- Gazeta Wyborcza newspaper wyborcza.pl 2011-01-17, Marcin Zaremba - Biedni Polacy na żniwach - Recenzja "Złotych Żniw" (Poor Poles at the harvest - review of "Golden Harvest")
- Józef Buszko – Historia Polski 1864–1948 (History of Poland 1864–1948), p. 410
- Anita J. Prażmowska – A History of Poland, p. 191
- Timothy Snyder, The Reconstruction of Nations, p. 179-201
- Timothy Snyder (Spring 1999). ""To resolve the Ukrainian Problem Once and for All": The Ethnic Cleansing of Ukrainians in Poland, 1943-1947". Journal of Cold War Studies. Retrieved 2012-12-08.
- Timothy Snyder, The Reconstruction of Nations, p. 204-205
- Józef Buszko – Historia Polski 1864–1948 (History of Poland 1864–1948), p. 410, 414-417
- Józef Buszko – Historia Polski 1864–1948 (History of Poland 1864–1948), p. 406-408
- Poland under Communism: A Cold War History, A. Kemp-Welch, p. 8
- The Polish Way: A Thousand-Year History of the Poles and Their Culture by Adam Zamoyski, 1994 New York: Hippocrene Books, ISBN 0-7818-0200-8, p. 369-370
- Anita J. Prażmowska – A History of Poland, p. 192
- Poland under Communism: A Cold War History, A. Kemp-Welch, p. 9
- Józef Buszko – Historia Polski 1864–1948 (History of Poland 1864–1948), p. 417-425
- Poland under Communism: A Cold War History, A. Kemp-Welch, p. 26, 32-35
- Poland under Communism: A Cold War History, A. Kemp-Welch, p. 63
- Andrzej Leon Sowa – Historia polityczna Polski 1944-1991 (Political history of Poland 1944-1991), Wydawnictwo Literackie, Kraków 2011, ISBN 978-83-08-04768-9, p. 178-179
- David Ost, Solidarity and the Politics of Anti-Politics, p. 36-38, 1990 Philadelphia, Temple University Press, ISBN 0-87722-655-5
- Józef Buszko – Historia Polski 1864–1948 (History of Poland 1864–1948), p. 442-445
- Poland under Communism: A Cold War History, A. Kemp-Welch, p. 18, 39
- A Concise History of Poland, by Jerzy Lukowski and Hubert Zawadzki, p. 285-286
- Poland under Communism: A Cold War History, A. Kemp-Welch, p. 18
- Józef Buszko – Historia Polski 1864–1948 (History of Poland 1864–1948), p. 398-399, 407
- Poland under Communism: A Cold War History, A. Kemp-Welch, p. 40
- Poland under Communism: A Cold War History, A. Kemp-Welch, p. 66-68
- Anita J. Prażmowska – A History of Poland, p. 194-195
- A Concise History of Poland, by Jerzy Lukowski and Hubert Zawadzki, p. 286-292
- Poland under Communism: A Cold War History, A. Kemp-Welch, p. 39-48, 63
- Poland under Communism: A Cold War History, A. Kemp-Welch, p. 24-26
- Poland under Communism: A Cold War History, A. Kemp-Welch, p. 12-16
- Józef Buszko – Historia Polski 1864–1948 (History of Poland 1864–1948), p. 434-440
- Poland under Communism: A Cold War History, A. Kemp-Welch, p. 27, 39
- Poland under Communism: A Cold War History, A. Kemp-Welch, p. 35-39
- Andrzej Stelmachowski – Kształtowanie się ustroju III Rzeczypospolitej (The formation of the Third Republic system), Łośgraf, Warszawa 2011, ISBN 978-83-62726-06-6, p. 189
- Anita J. Prażmowska – A History of Poland, p. 195, 196
- A Concise History of Poland, by Jerzy Lukowski and Hubert Zawadzki, p. 282
- Poland under Communism: A Cold War History, A. Kemp-Welch, p. 21-22
- Poland under Communism: A Cold War History, A. Kemp-Welch, p. 68-75
- Poland under Communism: A Cold War History, A. Kemp-Welch, p. 76-86
- Poland under Communism: A Cold War History, A. Kemp-Welch, p. 86-92
- Poland under Communism: A Cold War History, A. Kemp-Welch, p. 96-104
- Poland under Communism: A Cold War History, A. Kemp-Welch, p. 116-123
- Poland under Communism: A Cold War History, A. Kemp-Welch, p. 80, 101
- Timothy Snyder, The Reconstruction of Nations, p. 218-222
- Anita J. Prażmowska – A History of Poland, p. 198-200
- Poland under Communism: A Cold War History, A. Kemp-Welch, p. 59-60
- Poland under Communism: A Cold War History, A. Kemp-Welch, p. 124-143
- Poland under Communism: A Cold War History, A. Kemp-Welch, p. 148-163
- Poland under Communism: A Cold War History, A. Kemp-Welch, p. 163-171
- Anita J. Prażmowska – A History of Poland, p. 203
- Poland under Communism: A Cold War History, A. Kemp-Welch, p. 177-180
- Poland under Communism: A Cold War History, A. Kemp-Welch, p. 180-198
- Poland under Communism: A Cold War History, A. Kemp-Welch, p. 198-206
- Poland under Communism: A Cold War History, A. Kemp-Welch, p. 206-212
- Anita J. Prażmowska – A History of Poland, p. 205
- Poland under Communism: A Cold War History, A. Kemp-Welch, p. 212-223
- Poland under Communism: A Cold War History, A. Kemp-Welch, p. 228-229
- Poland under Communism: A Cold War History, A. Kemp-Welch, p. 325-331
- Poland under Communism: A Cold War History, A. Kemp-Welch, p. 336-348
- Poland under Communism: A Cold War History, A. Kemp-Welch, p. ix
- David Ost, Okrągły Stół nie był spiskiem (The Round Table was not a conspiracy), a Newsweek conversation, www.polska transformacja.pl
- Matthew Kaminski (2012-12-14). "Leszek Balcerowicz: The Anti-Bernanke". The Wall Street Journal. Retrieved 2012-12-19.
- "Unemployment rate Poland". Google. Retrieved 2010-12-03.
- "Poland Unemployment rate". Index Mundi. Retrieved 2010-12-03.
- "Harmonised unemployment rate". Eurostat. Retrieved 2012-02-09.
- Analytica, Oxford (June 5, 2009). "Poland Likely To Resist Global Slowdown". Forbes.com. Retrieved 2010-07-14.
- Nicholas Kulish (2012-07-17). "Economic Gloom in Europe Barely Touches Proud Poland". The New York Times. Retrieved 2012-07-18.
- M.C.K. (2012-12-18). "Learning from abroad: Don't forget Poland". The Economist. Retrieved 2012-12-20.
- "Get a move on. The government should win re-election this year. Then it can get on with eform". The Economist. 2011-01-06. Retrieved 2011-11-12.
- Jack Ewing (2011-12-14). "Poland Skirts Euro Zone Woes, for Now". The New York Times. Retrieved 2011-12-16.
- Jack Ewing (2012-12-17). "Poland Finds It’s Not Immune to Euro Crisis". The New York Times. Retrieved 2012-12-20.
- "Poles apart?". International Socialism, Jane Hardy's book review by Adam Fabry. Retrieved 2010-09-06.
- "In 2010, 23% of the population were at risk of poverty or social exclusion". Eurostat. Retrieved 2012-02-10.
- Marcin Bojanowski, W sześć lat w Polsce ubyło 8 mln biednych. Najwięcej w całej UE (The number of the poor in Poland decreased by 8 million over the past six years. The largest decrease in the entire European Union). Gazeta Wyborcza newspaper wyborcza.pl, 2012-02-09
- Katarzyna Pawłowska-Salińska, Bieda nasza powszechna (Our common poverty). Gazeta Wyborcza newspaper wyborcza.pl, 2012-02-16
- Michael Slackman (2010-11-28). "Poland, Lacking External Enemies, Turns on Itself". The New York Times. Retrieved 2010-11-30.
- Adam Easton (2011-04-09). "Division mars Poland's Smolensk plane crash anniversary". BBC News. Retrieved 2011-04-10.
- Patryk Wasilewski, Rob Strybel (2011-04-10). "Divided Poles mark crash anniversary, rap Russia". Reuters. Retrieved 2011-04-11.
- "Polish paper Rzeczpospolita fires editors over article on presidential crash". Irish Independent/Reuters. 2012-11-06. Retrieved 2012-11-09.
- "Polish nationalists rally on Smolensk air crash anniversary". Expatica, Agence France-Presse. 2012-10-04. Retrieved 2012-12-17.
- K.T. (2012-11-12). "Polish nationalism: Punching for Poland". The Economist. Retrieved 2012-12-17.
- "Viewpoint: Is Poland moving on from its turbulent past?". BBC News. 2012-04-27. Retrieved 2012-04-29.
- Paul Krugman (2011-01-16). "Can Europe Be Saved?". The New York Times. Retrieved 2011-01-23.
- Kevin Quealy, Karl Russell (2011-06-16). "Debt Rising in Europe". The New York Times. Retrieved 2011-06-17.
- Judy Dempsey (2011-05-20). "France Joins Poland and Germany on Wider Unity". The New York Times. Retrieved 2011-05-22.
- Michael Schwirtz (2012-03-01). "Belarus Warns European Union Over Withdrawal of Envoys". The New York Times. Retrieved 2012-03-01.
- Judy Dempsey (2011-09-30). "Move East Not on European Union's Agenda for the Moment". The New York Times. Retrieved 2011-10-01.
- The Associated Press (2011-11-17). "Russia:Border Conflicts Risk Nuclear War, Officer Says". The New Times. Retrieved 2011-11-18.
- The Associated Press (2011-11-17). "Russia's military chief warns that heightened risks of conflict near borders may turn nuclear". Washington Post. Retrieved 2011-11-19.
- "Russia warns on missile defence deal with Nato and US". BBC News. 2012-05-03. Retrieved 2012-05-03.
- Jaroslaw Adamowski (2011-10-11). "Poland election: In historic first, PM gets a second term". The Christian Science Monitor. Retrieved 2011-10-11.
- "Poland and the future of the European Union". Government of Poland. 2011-11-28. Retrieved 2011-12-01.
- "Statement by the Euro Area Heads of State or Government". European Council. 2011-12-09. Retrieved 2011-12-09.
- Nicholas Kulish (2011-12-08). "Europe's Debt Crisis Brings Two Former Foes Closer Than Ever". The New York Times. Retrieved 2011-12-09.
- Matthew Day (2012-03-27). "Poland ex-spy boss 'charged over alleged CIA secret prison'". The Telegraph. Retrieved 2012-03-28.
- Joanna Berendt, Nicholas Kulish (2012-03-27). "Polish Ex-Official Charged With Aiding the C.I.A.". The New York Times. Retrieved 2012-03-28.
- Wojciech Czuchnowski, Więzienia CIA w Polsce. Siedem pytań (CIA prisons in Poland. The seven questions), Gazeta Wyborcza 2012-04-02
- Ewa Siedlecka, Kwaśniewski w szarej strefie więzień CIA (Kwaśniewski in the grey area of CIA prisons), Gazeta Wyborcza 2012-05-01
- "Russian Orthodox leader to visit Poland, sign reconciliation statement". Catholic News Agency. 2012-07-17. Retrieved 2012-08-10.
- "Russian Patriarch Kirill makes historic visit to Poland". BBC News. 2012-08-16. Retrieved 2012-08-16.
- "Churches in Polish-Russian appeal for friendly ties". BBC News. 2012-08-17. Retrieved 2012-08-17.
- Marcin Sobczyk (2013-01-25). "Poland Rejects Proposed Gay Civil Unions". The Wall Street Journal. Retrieved 2013-01-27.
- Timothy Snyder, The Reconstruction of Nations, p. 40-41, 64-65, 68-69
- Heart of Europe. A Short History of Poland by Norman Davies, p. 145
- Richard Overy (2010), The Times Complete History of the World, Eighth Edition, p. 236, map
- Poland under Communism: A Cold War History, A. Kemp-Welch, p. 1-3
- Mirosław Maciorowski, Kresowianie nie mieli wyboru, musieli jechać na zachód (The Kresy inhabitants had no choice, had to move west), conversation with Professor Grzegorz Hryciuk. Gazeta Wyborcza Wrocław newspaper wroclaw.gazeta.pl, 2010-12-20
- Norman Davies: W 1939 r. Polacy się świetnie spisali (In 1939 the Poles performed exceedingly well): Włodzimierz Kalicki talks with Norman Davies, Gazeta Wyborcza, 2009-08-24
- Timothy Snyder, Polacy wobec Holocaustu (Poles and the Holocaust), Gazeta Wyborcza 2012-09-07
- Timothy Snyder, The Reconstruction of Nations, p. 89
- Poland under Communism: A Cold War History, A. Kemp-Welch, p. 23
- Poland under Communism: A Cold War History, A. Kemp-Welch, p. 18, 64-65
- Poland under Communism: A Cold War History, A. Kemp-Welch, p. 57-59, 187, 196
- Marcin Wojciechowski, Ukraina odpływa ku Rosji (Ukraine sails away toward Russia), Gazeta Wyborcza, 2012-05-12
- Aleksander Gella. Development of Class Structure in Eastern Europe: Poland and Her Southern Neighbours. SUNY Press. 1989. p. 182.
- Poland under Communism: A Cold War History, A. Kemp-Welch, p. 114-116
- Jerzy Kirchmayer – Powstanie Warszawskie (The Warsaw Uprising), 6th edition, Książka i Wiedza, Warszawa 1970, p. 381-396
- J.P. (2010-07-31). "The Warsaw Rising: Was it all worth it?". The Economist. Retrieved 2013-01-22.
- Marek Jan Chodakiewicz (2004-06-04). "The Warsaw Rising 1944: Perception and Reality". warsawuprising.com. Retrieved 2013-01-22.
- Poland under Communism: A Cold War History, A. Kemp-Welch, p. 193
- Poland under Communism: A Cold War History, A. Kemp-Welch, p. 215
Further reading
More recent general history of Poland books in English
- Biskupski, M. B. The History of Poland. Greenwood, 2000. 264 pp. online edition
- The Cambridge History of Poland, 2 vols., Cambridge: Cambridge University Press, 1941 (1697–1935), 1950 (to 1696). New York: Octagon Books, 1971 online edition vol 1 to 1696, old fashioned but highly detailed
- Davies, Norman. God's Playground. A History of Poland. Vol. 1: The Origins to 1795, Vol. 2: 1795 to the Present. Oxford: Oxford University Press, 1981. ISBN 0-19-925339-0 / ISBN 0-19-925340-4.
- Davies, Norman. Heart of Europe: A Short History of Poland. Oxford University Press, 1984. 511 pp. excerpt and text search
- Frucht, Richard. Encyclopedia of Eastern Europe: From the Congress of Vienna to the Fall of Communism Garland Pub., 2000 online edition
- Oskar Halecki. History of Poland, New York: Roy Publishers, 1942. New York: Barnes and Noble, 1993, ISBN 0-679-51087-7
- Kenney, Padraic. “After the Blank Spots Are Filled: Recent Perspectives on Modern Poland,” Journal of Modern History Volume 79, Number 1, March 2007 pp 134–61, historiography
- Stefan Kieniewicz, History of Poland, Hippocrene Books, 1982, ISBN 0-88254-695-3
- Kloczowski, Jerzy. A History of Polish Christianity. Cambridge U. Pr., 2000. 385 pp.
- Lerski, George J. Historical Dictionary of Poland, 966-1945. Greenwood, 1996. 750 pp. online edition
- Leslie, R. F. et al. The History of Poland since 1863. Cambridge U. Press, 1980. 494 pp.
- Lewinski-Corwin, Edward Henry. The Political History of Poland (1917), well-illustrated; 650pp online at books.google.com
- Lukowski, Jerzy and Zawadzki, Hubert. A Concise History of Poland. Cambridge U. Press, 2nd ed 2006. 408pp. excerpts and search
- Iwo Cyprian Pogonowski. Poland: An Illustrated History, New York: Hippocrene Books, 2000, ISBN 0-7818-0757-3
- Pogonowski, Iwo Cyprian. Poland: A Historical Atlas. Hippocrene, 1987. 321 pp.
- Anita J. Prazmowska. A History of Poland, Basingstoke: Palgrave Macmillan 2004, ISBN 0-333-97254-6
- Radzilowski, John. A Traveller's History of Poland, Northampton, Massachusetts: Interlink Books, 2007, ISBN 1-56656-655-X
- Roos, Hans. A History of Modern Poland (1966)
- Sanford, George. Historical Dictionary of Poland. Scarecrow Press, 2003. 291 pp.
- Wróbel, Piotr. Historical Dictionary of Poland, 1945-1996. Greenwood, 1998. 397 pp.
- Zamoyski, Adam. The Polish Way. A Thousand-Year History of the Poles and their Culture. J. Murray, 1987. 422 pp.; heavily illustrated excerpt and text search
Published in Poland
- History of Poland, Aleksander Gieysztor et al. Warsaw: PWN, 1968
- History of Poland, Stefan Kieniewicz et al. Warsaw: PWN, 1979
- An Outline History of Poland, by Jerzy Topolski. Warsaw: Interpress Publishers, 1986, ISBN 83-223-2118-X
- An Illustrated History of Poland, by Dariusz Banaszak, Tomasz Biber, Maciej Leszczyński. Poznań: Publicat, 2008, ISBN 978-83-245-1587-5
- Poland: History of Poland, by Stanisław Kołodziejski, Roman Marcinek, Jakub Polit. Kraków: Wydawnictwo Ryszard Kluszczyński, 2005, 2009, ISBN 83-7447-018-6
- Movie (on-line)
- Halecki, Oscar. "BORDERLANDS OF WESTERN CIVILIZATION A History of East Central Europe" (PDF). Oscar Halecki. Retrieved 2010-08-08.
- History of Poland, in paintings
- History of Poland on Historycy.org forum
- Commonwealth of Diverse Cultures: Poland's Heritage
- "Poland, Christianity in" The New Schaff-Herzog Encyclopedia of Religious Knowledge (1910) vol 9 pp 104-8 | http://en.wikipedia.org/wiki/History_of_Poland | 13 |
Important Mathematics Vocabulary for Praxis II ParaPro Test Prep Study Guide
The practice quiz for this study guide can be found at:
The math section of the ParaPro Assessment will test your knowledge of important math terms. You will be expected to know the bolded words in the following paragraphs. Make sure you are familiar with every one of these terms before taking the test.
Area is the amount of space inside a two-dimensional shape.
The average (arithmetic mean) of a set of values is the number found when all the values are added together and divided by the number of values. For example, the average of 1, 2, and 6 is 3 because 1 + 2 + 6 = 9 and 9 ÷ 3 = 3.
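To make the arithmetic concrete, here is a minimal Python sketch (not part of the original guide) that reuses the example above:

```python
# Average (arithmetic mean): add the values, then divide by how many there are.
values = [1, 2, 6]
average = sum(values) / len(values)  # (1 + 2 + 6) / 3
print(average)  # 3.0
```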
A bar graph is a chart or graph that compares amounts for different categories.
A circle is a curved, two-dimensional figure where every point is the same distance from the center.
A circle graph is a diagram in the shape of a circle which shows the parts of a whole.
The circumference is the total distance around a circle.
A composite number is a number that has more than two factors. For example, the numbers 4, 9, and 100 are all composite numbers.
A coordinate plane is a grid created by a horizontal x-axis and a vertical y-axis.
The denominator is the bottom number of a fraction. For example, in the fraction 1/2, the denominator is 2.
The diameter is a line that goes directly through the center of a circle—the longest line segment that can be drawn in a circle.
The difference is the solution to a subtraction problem.
A digit is a number from 0 through 9. For example, the number 123 has three digits: 1, 2, and 3.
An equation is a mathematical statement that states the equality of two expressions, and uses an equals sign, =. For example, 4 + 5 = 9 is an equation.
An equilateral triangle is a triangle that has three sides with the same length.
An expression is a mathematical statement that does not use an equals sign, =, or inequality symbol, such as < or >. For example, 3 + 1 is an expression.
A factor of a number is any integer that divides evenly into another integer without a remainder. For example, the factors of 6 are –6, –3, –2, –1, 1, 2, 3, and 6.
A fraction is a part of a whole, represented with one number over another number. For example, 1/2 and 3/4 are fractions.
A hexagon is a polygon with six sides and six angles.
The hundredths digit is the digit two places to the right of the decimal point. For example, in the number 12.34, the digit 4 is in the hundredths place.
An integer is a positive or negative whole number or 0. For example, the numbers –3, –1, 0, and 128 are all integers.
An isosceles triangle is a triangle that has two sides with the same length.
A line graph is a diagram that uses a line to show a change over time.
The mean of a data set is the average found by adding all of the numbers together and dividing by the quantity of numbers in the set. For example, the mean of 2, 4, 6, 8, and 10 is 6. (2 + 4 + 6 + 8 + 10 = 30; and 30 ÷ 5 = 6.)
The median of a data set is the center number, if the values are in ascending or descending order. For example, the median of 3, 5, 7, 9, 11 is 7, because 7 is the digit in the middle of the set.
The mode of a data set is the number that appears the greatest number of times. For example, the mode of 3, 8, 3, 9, 16 is 3, because it appears the most out of the available numbers.
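The example sets used in the three definitions above can be checked with Python's built-in statistics module; this is only an illustrative sketch, not part of the original guide:

```python
import statistics

print(statistics.mean([2, 4, 6, 8, 10]))    # 6  -> the mean (average)
print(statistics.median([3, 5, 7, 9, 11]))  # 7  -> the middle value
print(statistics.mode([3, 8, 3, 9, 16]))    # 3  -> the value that appears most often
```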
A multiple of a number is the product of an integer and another integer. For example, the numbers 3, 6, 9, and 12 are all multiples of 3.
The numerator is the top number of a fraction. In the fraction 1/2, the numerator is 1.
An octagon is a polygon with eight sides and eight angles. It may help to remember the meaning of this polygon by seeing the prefix of the word, oct–, which means eight (like octopus).
A pattern is a series of figures or numbers that repeat in a predictable way.
A percent is a way to express a number as a part of a whole, where the whole (1) is equal to 100%.
The perimeter is the total distance around the edges of a polygon.
A pictograph is a diagram or chart that uses pictures, or graphics, to show the level of occurrence for different categories.
A polygon is a two-dimensional object with straight lines that create a closed figure.
A prime number is a number that is only evenly divisible by itself and the number 1. For example, 3, 7, and 29 are all prime numbers.
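A short Python sketch (illustrative only; the helper function is not from the guide) showing how the prime/composite distinction follows from counting factors:

```python
def is_prime(n):
    """Return True if n is evenly divisible only by 1 and itself."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False  # found another factor, so n is composite
    return True

print(is_prime(29), is_prime(9))  # True (prime), False (9 is composite: its factors are 1, 3, 9)
```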
The product of two or more numbers is the result when they are multiplied together.
A quadrilateral is a polygon with four sides and four angles. It may help to remember the meaning of this polygon by seeing the prefix of the word, quad–, which means four.
The quotient is the solution to a division problem.
The radius is a line segment from the center of a circle to a point on the circle (half of the diameter).
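Radius, diameter, circumference, and area are tied together by standard circle formulas; the sketch below uses an assumed radius of 3 purely for illustration:

```python
import math

radius = 3                          # assumed example value
diameter = 2 * radius               # the diameter is twice the radius
circumference = math.pi * diameter  # the distance around the circle
area = math.pi * radius ** 2        # the space inside the circle
print(diameter, round(circumference, 2), round(area, 2))  # 6 18.85 28.27
```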
A rectangle is a four-sided polygon with four right angles. All rectangles have two pairs of parallel sides.
A right triangle is a triangle that has one 90-degree (right) angle.
A scalene triangle is a triangle that has no sides that are the same length.
A square is a four-sided polygon with four right angles and four equal sides. All squares have two pairs of parallel sides.
The sum of two or more numbers is the result when they are added together.
The tenths digit is the digit one place to the right of the decimal point. For example, in the number 12.34, the digit 3 is in the tenths place.
The thousandths digit is the digit three places to the right of the decimal point. For example, in the number 12.345, the digit 5 is in the thousandths place.
A triangle is a polygon with three sides and three angles. It may help to remember the meaning of this polygon by seeing the prefix of the word, tri–, which means three (like tripod).
A variable is a letter that represents an unknown number.
Volume is the amount of space inside a three-dimensional shape.
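A final illustrative sketch (with assumed side lengths) tying together perimeter, area, and volume for a rectangle and the box built on it:

```python
length, width, height = 4, 3, 2     # assumed example dimensions

perimeter = 2 * (length + width)    # distance around the rectangle
area = length * width               # space inside the rectangle
volume = length * width * height    # space inside the three-dimensional box
print(perimeter, area, volume)      # 14 12 24
```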
A whole number is zero or a positive number that is neither a decimal nor a fraction. The numbers 0, 3, 19, and 1,218 are all whole numbers.
Source: http://www.education.com/reference/article/important-mathematics-vocabulary/
Kinetic Molecular Theory
Ideal gases don't exist, but if they did, they would fit the following descriptions:
- Full of tiny particles that are far apart
- Neither attract nor repel each other
- Are constantly and randomly moving, creating pressure
- Do not lose energy when colliding.
Pressure is measured with a barometer (for atmospheric pressure) or a manometer (for sealed containers of gases).
As the result of many different scientists and experiments, several gas laws have been discovered. These laws relate the various state variables of a gas.
- State Variables of a Gas
- Pressure (P)
- Volume (V)
- Temperature (T)
- Amount of substance in moles (n)
These gas laws can be used to compare two different gases, or determine the properties of a gas after one of its state variables have changed.
- Avogadro's Law states that equal volumes of all ideal gases (at the same temperature and pressure) contain the same number of molecules; in symbols, V is proportional to n.
- Boyle's Law states that pressure is inversely proportional to volume when temperature is constant: PV = k (P is proportional to 1/V).
- Charles' Law states that volume is proportional to temperature when pressure is constant: V/T = k. Remember that temperature must be measured in Kelvin.
- Gay-Lussac's Law states that pressure is proportional to temperature when volume is constant: P/T = k.
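As a numerical illustration of one of these laws (Boyle's Law), here is a minimal Python sketch with assumed example values:

```python
# Boyle's Law: P1 * V1 = P2 * V2 at constant temperature and amount of gas.
P1, V1 = 100.0, 4.0   # kPa, L (assumed initial state)
V2 = 2.0              # L: the gas is compressed to half its volume

P2 = P1 * V1 / V2
print(P2, "kPa")      # 200.0 kPa -- halving the volume doubles the pressure
```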
Combined Gas Law
Combining Charles' Law, Boyle's Law, and Gay-Lussac's Law gives us the combined gas law.
PV/T = k: for a fixed amount of gas, the three other state variables are interrelated.
P1V1/T1 = P2V2/T2: the Combined Gas Law can be used for comparisons between two states of a gas.
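A small Python sketch of such a comparison, using assumed example values (they are not from the original text):

```python
# Combined Gas Law: P1*V1/T1 = P2*V2/T2 for a fixed amount of gas, T in Kelvin.
P1, V1, T1 = 101.3, 2.0, 273.0   # kPa, L, K  (assumed starting state)
P2, T2 = 96.0, 313.0             # kPa, K     (assumed final state)

V2 = P1 * V1 * T2 / (T1 * P2)    # solve for the unknown volume
print(round(V2, 2), "L")         # ~2.42 L
```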
Ideal Gas Law
When Avogadro's Law is considered, all four state variables can be combined into one equation. Furthermore, the "constant" that is used in the above gas laws becomes the Universal Gas Constant (R).
To better understand the Ideal Gas Law, you should first see how it is derived from the above gas laws.
V ∝ n and PV/T = k: this is simply a restatement of Avogadro's Law and the Combined Gas Law.
PV/(nT) = k: we can now combine the laws together.
PV/(nT) = R: let R be a constant, and write the proportion in the form of an equation.
PV = nRT: rearranging the fraction gives one form of the ideal gas law.
The ideal gas law is the most useful law, and it should be memorized. If you know the ideal gas law, you do not need to know any other gas laws, for it is a combination of all the other laws. If you know any three of the four state variables of a gas, the unknown can be found with this law. If you have two gases with different state variables, they can be compared.
There are three ways of writing the ideal gas law, but all of them are simply algebraic rearrangements of each other.
PV = nRT: this is the most common form.
PV/(nT) = R: this form is useful for predicting the effects of changing a state variable. To maintain a constant value of R, any change in the numerator must result in a proportional change in the denominator, and vice versa. If, for example, the pressure is decreased in a constant-volume container, you can use this form to easily predict that the temperature must decrease.
P1V1/(n1T1) = P2V2/(n2T2): because R is the same constant for all gases, this equation can be used to relate two gases to each other.
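A minimal Python sketch of the most common use of PV = nRT, solving for one unknown; the numbers are assumed example values:

```python
R = 8.314   # L*kPa/(mol*K), numerically the same as J/(mol*K)

# How many moles of gas occupy 10.0 L at 300 K and 150 kPa?
P, V, T = 150.0, 10.0, 300.0
n = P * V / (R * T)
print(round(n, 3), "mol")   # ~0.601 mol
```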
Kinetic Molecular Theory
The Kinetic Molecular Theory attempts to explain the gas laws. It describes the behavior of microscopic gas molecules to explain the macroscopic behavior of gases. According to this theory, an ideal gas is composed of continually moving molecules of negligible volume. The molecules move in straight lines unless they collide into each other or the walls of their container.
P = F/A: the pressure of the gas on the container is explained as the force the molecules exert on the walls during collisions. Pressure is equal to the average force of the collisions divided by the total surface area of the container.
Ek = (3/2) kB T: the temperature of the gas is proportional to the average kinetic energy of the molecules. Ek denotes the average kinetic energy of the molecules, and kB is the Boltzmann constant (about 1.38 x 10^-23 J/K).
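To put numbers on the temperature relation, here is a small sketch; the only inputs are standard constants plus an assumed temperature and gas (N2), so treat it as illustration only:

```python
import math

k_B = 1.380649e-23    # Boltzmann constant, J/K
T = 298.0             # assumed temperature, K

E_avg = 1.5 * k_B * T                  # average kinetic energy per molecule
m_N2 = 28.0e-3 / 6.022e23              # mass of one N2 molecule in kg (assumed example gas)
v_rms = math.sqrt(3 * k_B * T / m_N2)  # typical molecular speed implied by that energy
print(E_avg, round(v_rms), "m/s")      # ~6.2e-21 J and ~515 m/s
```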
The gas laws are now explained by the microscopic behavior of gas molecules:
- Boyle's Law: The pressure of a gas is inversely proportional to its volume. A container's volume and surface area are obviously proportional. Based on the pressure equation, an increase in volume (and thus surface area) will decrease pressure.
- Charles' Law: The volume of a gas is proportional to its temperature. As the volume (and surface area) increases, the pressure will decrease unless the force also increases. When pressure is constant, the volume and temperature must be proportional. The temperature equation above explains why: the energy of the molecules (and their collision force) is proportional to temperature.
- Gay-Lussac's Law: The temperature of a gas is directly proportional to its pressure. An increase in temperature will increase the kinetic energy of the molecules (shown by the temperature equation). Greater kinetic energy causes the molecules to move faster. Their collisions with the container will have more force, which increases pressure.
- Avogadro's Law: Equal volumes of all ideal gases (at the same temperature and pressure) contain the same number of molecules. According to the Kinetic Molecular Theory, the size of individual molecules is negligible compared to distances between molecules. Even though different gases have different sized molecules, the size difference is negligible, and the volumes are the same.
Derivation of Ideal Gas Law
|Suppose there are N molecules, each with mass m, in a cubic container with side length L. Even though the molecules are moving in all directions, we may assume, on average, that one third of the molecules are moving along the x-axis, one third along the y-axis, and one third along the z-axis. We may assume this because the motion of the molecules is random, so no direction is preferred.|
|Suppose the average speed of the molecules is v. Let a specific wall of the container be labeled A. Because the collisions in Kinetic Molecular Theory are perfectly elastic, a molecule hitting wall A rebounds with the same speed v in the opposite direction. Therefore, the average change in momentum (the product of mass and velocity) per collision is 2mv.|
Each molecule, on average, travels a distance of 2L between two consecutive collisions with wall A. Therefore, it will collide v/(2L) times per second with wall A.
|The average change in momentum per molecule per second is therefore 2mv × v/(2L) = mv²/L.|
|The N/3 molecules moving along the x-axis all collide with wall A, so the total change in momentum per second is (N/3) × mv²/L = Nmv²/(3L). This is the momentum per second exerted onto wall A. Because force equals the change in momentum over time, this value is the force exerted on wall A: F = Nmv²/(3L).|
|Pressure is defined as force per unit area, so the pressure of the gas is P = F/L² = Nmv²/(3L³).|
|Because the volume of the container is V = L³, we can rearrange the equation: PV = Nmv²/3.|
|The kinetic energy of a single particle is given by E_k = (1/2)mv².|
|Substituting the kinetic energy into the equation gives PV = (2/3)·N·E_k.|
|Substituting the temperature equation from the previous section, E_k = (3/2)·k_B·T, gives PV = N·k_B·T.|
|Avogadro's number N_A is the number of molecules per mole, so for n moles N = n·N_A and PV = n·N_A·k_B·T.|
|By definition, the ideal gas constant is equal to the Boltzmann constant times Avogadro's number: R = k_B·N_A.|
|Therefore PV = nRT: the ideal gas law is derived from the Kinetic Molecular Theory.|
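The algebra above can be checked numerically. The sketch below uses the same simplified average-speed treatment as the derivation; all numbers are illustrative assumptions (roughly nitrogen-like molecules), not values from the text:

```python
# Numerical check of the kinetic-theory result P*V = N*m*v^2/3 = N*k_B*T.
k_B = 1.38e-23        # J/K, Boltzmann constant
N   = 1.0e23          # number of molecules (assumed)
m   = 4.65e-26        # kg, mass of one molecule (assumed)
v   = 500.0           # m/s, average speed (assumed)
V   = 0.01            # m^3, container volume (assumed)

P_kinetic = N * m * v**2 / (3 * V)    # pressure from the derivation above
E_k = 0.5 * m * v**2                  # kinetic energy per molecule
T   = 2 * E_k / (3 * k_B)             # temperature from E_k = (3/2) k_B T
P_ideal = N * k_B * T / V             # ideal gas law with that temperature

print(P_kinetic, P_ideal)             # the two pressures agree
```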
Deviations from the Ideal Gas Law
In an ideal gas, there are no intermolecular attractions, and the volume of the gas particles is negligible. However, no real gas fits this behavior perfectly, so the Ideal Gas Law only approximates the behavior of gases. This approximation is very good at high temperatures and low pressures.
At high temperature the molecules have high kinetic energy, so intermolecular attractions are minimized. At low pressure the gas occupies more volume, making the size of the individual molecules negligible. These two factors make the gas behave ideally.
At low temperature or high pressure, the size of the individual molecules and the intermolecular attractions become significant, and the ideal gas approximation becomes inaccurate.
Eudiometers and Water Vapor
|In calculations for a gas above a liquid, the vapor pressure of the liquid must be considered.|
A eudiometer is a device that measures the downward displacement of a gas. The apparatus for this procedure involves an inverted container or jar filled with water and submerged in a water basin. The lid of the jar has an opening for a tube through which the gas to be collected can pass. As the gas enters the inverted container, it forces water to leave the jar (displacing it downward). To fill the entire container with gas, there must be enough gas pumped into the container to expel all of the water.
Because the gas is collected by the downward displacement of water, the collection container also holds unwanted water vapor. To account for the water vapor, subtract the pressure of the water vapor from the total pressure of the gases in the container to find the pressure of the collected gas. This is simply a restatement of Dalton's Law of Partial Pressures: P(collected gas) = P(total) - P(water vapor).
The vapor pressure of water at a given temperature can be found in a standard vapor-pressure table.
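A small sketch of this correction; the vapor-pressure value is an assumed table entry, not a number from the text:

```python
# Pressure of a gas collected over water: subtract the water vapor pressure
# (Dalton's Law of Partial Pressures).  Values are illustrative assumptions.
P_total       = 101.3   # kPa, total pressure inside the eudiometer
P_water_vapor = 3.17    # kPa, vapor pressure of water at 25 C (assumed table value)

P_dry_gas = P_total - P_water_vapor
print(f"Pressure of the collected gas: {P_dry_gas:.2f} kPa")
```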
Gas Laws Practice Questions
- Between the Combined Gas Law and the Ideal Gas Law, which one accounts for chemical change? Explain.
- Calculate the density of hydrogen at a temperature of 298 K and pressure of 100.0 kPa.
- What volume does 5.3 moles of oxygen take up at 313 K and 96.0 kPa?
- Hydrogen and sulfur chemically combine to form the gas hydrogen sulfide, according to the reaction: H2 (g) + S(s) → H2S(g). How many liters of hydrogen are required to form 7.4 L of hydrogen sulfide (at STP: 273 K, 101.3 kPa)?
One mole of gas particles at STP takes up 22.4L
Ideal Gas Equation
PV=nRT, where R = 0.0821 L*atm/K*mol = ideal gas constant. Note how if P is in atm, V is in L, n is in moles, and T is in Kelvin, the units cancel out.
Remember that these gas laws only work in Kelvin.
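As one possible way to work questions 2 and 3 above, here is a hedged sketch using PV = nRT with R = 8.314 L*kPa/(K*mol), since those questions give pressures in kPa:

```python
# Worked sketch for practice questions 2 and 3 using PV = nRT.
R = 8.314  # L*kPa/(K*mol)

# Q2: density of hydrogen (H2, molar mass ~2.016 g/mol) at 298 K and 100.0 kPa.
#     n/V = P/(R*T); density = (n/V) * molar mass
P, T, M = 100.0, 298.0, 2.016
density = P / (R * T) * M                  # g/L
print(f"H2 density ~ {density:.3f} g/L")   # ~0.081 g/L

# Q3: volume of 5.3 mol of O2 at 313 K and 96.0 kPa.
n, T, P = 5.3, 313.0, 96.0
V = n * R * T / P
print(f"O2 volume ~ {V:.0f} L")            # ~144 L
```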
For every x times heavier a gas is, it travels √x times slower (Graham's law of effusion): rate1/rate2 = √(M2/M1), where M is the molar mass.
When gases are polar, massive, at high pressure and low temperature, they do not behave like ideal gases. They may even condense into liquids or freeze into solids. | http://en.m.wikibooks.org/wiki/AP_Chemistry/Gases | 13 |
72 | Introduction to SAT Math
The Math section of the SAT is designed to assess your ability to reason
and think about high school level mathematical problems. Your SAT Math score
is based on your performance on 3 timed math sections:
- one 25-minute section with 20 multiple choice questions
- one 25-minute section with 8 multiple choice, and 10 grid-in questions
- one 20-minute section with 16 multiple choice questions
Multiple choice questions require you to select the correct answers from
among five answer choices, and the ‘grid in’ questions require
you to calculate the correct answer and enter it into the answer sheet.
Breakdown of the Math Topics Covered on the SAT with Sample Questions
Each SAT Math section includes questions drawn from four main topic areas,
which are discussed below:
- Number and Operations
- Algebra and Functions
- Geometry and Measurement
- Data Analysis, Statistics, and Probability
1. Number and Operations Concepts
Mathematical concepts you need to know for these types of questions include:
- Basic properties of numbers and their terminology (e.g. negative numbers,
prime numbers, factors, integers, sets, sequences)
- Squares, square roots, and exponents
- Order of Operations
- Fractions and decimals
- Ratios and Proportions
- Arithmetic Word Problems
Number and Operations Sample Questions
Here are some examples of sample questions from this topic area that you might
see in SAT Math sections:
- If the average of 45, 70, 80, and a number x is 55, what is the value of x?
- If 25% of x is 80, what is 10% of x?
- The ratio of 1.4 to 2 is equal to which of the following ratios?
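For reference, a quick numerical check of these three samples (a sketch; the answer choices themselves are not reproduced in this copy):

```python
# Sample question checks (Number and Operations).

# Q1: the average of 45, 70, 80 and x is 55  ->  x = 4*55 - (45 + 70 + 80)
x = 4 * 55 - (45 + 70 + 80)
print(x)            # 25

# Q2: 25% of x is 80  ->  x = 320, so 10% of x is 32
x = 80 / 0.25
print(0.10 * x)     # 32.0

# Q3: the ratio 1.4 : 2 equals 0.7, i.e. 7 : 10
print(1.4 / 2)      # 0.7
```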
2. Algebra and Functions Concepts
To answer SAT Math questions on Algebra and Functions you need to be familiar
with the following topics and skills:
- Simplifying algebraic expressions
- Operations on algebraic expressions (including factoring of quadratic expressions)
- Equations and inequalities involving roots, exponents, and absolute values
- Solving systems of equations and inequalities
- Direct and inverse variation
- Algebraic word problems
- Functions (their domain, range, translations of graphs, functions using
- Equations of lines (slope, intercept)
Algebra and Functions Sample Questions
Algebra and function questions that are representative of what you might see
on SAT Math sections are below:
- If x and y are inversely proportional and x = 10 when y = 5, what is y
when x is 30?
- If 22x = 50x-12, what is the value of x?
E) it cannot be determined from the information given
- If x•y = 2x - 3y, and 1•2 = 2•z, what is the value of z?
C) 2 2/3
D) 3 2/3
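A sketch checking two of these algebra samples (the exponent question above is left alone because its statement appears garbled in this copy):

```python
# Inverse proportion: x*y is constant.  x = 10 when y = 5  ->  constant = 50.
k = 10 * 5
print(k / 30)        # y = 5/3 when x = 30

# Custom operation: x•y = 2x - 3y.
def op(x, y):
    return 2 * x - 3 * y

# 1•2 = 2•z  ->  solve 2*2 - 3*z = op(1, 2)  ->  z = (4 - op(1, 2)) / 3
z = (4 - op(1, 2)) / 3
print(op(1, 2), z)   # -4 and 2.666..., i.e. 2 2/3 (choice C)
```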
3. Geometry and Measurement Concepts
SAT Math questions in this topic area will require you to know about:
- The properties of parallel and perpendicular lines
- Coordinate Geometry (slopes, distance between points, midpoints of lines)
- Triangles (area, angles, properties of equilateral, isosceles, right and
special triangles, the congruency and similarity of triangles, the Pythagorean theorem)
- Quadrilaterals and other polygons (including area, interior and exterior angles)
- Circles (area, circumference)
- Solid geometry (volume and surface area of solids)
- Transformations (translations, rotations, reflections)
Geometry and Measurement Sample Questions
Below are sample questions on Geometry and Measurement that you might see
in SAT Math sections:
- A triangle has two angles that are equal. If the lengths of two of the sides
of the triangle are 40 and 20, what is the least possible value for the perimeter
of the triangle?
- Three line segments meet at a point to form three angles. One angle is
equal to 2x°, the second angle is equal to 3x°, and the third is equal to 4x°.
What is the value of x?
- In the xy-coordinate plane, the distance between A(7,14) and B(x,2) is
13. What is one possible value of x?
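Sketched checks for these geometry samples, assuming the three angles in the second question together make a full turn (360°) around the point:

```python
import math

# Isosceles triangle with sides 40 and 20: the equal sides must both be 40,
# since 20 + 20 = 40 violates the triangle inequality, so the least perimeter is 100.
print(40 + 40 + 20)          # 100

# Three angles 2x, 3x, 4x around a point: 9x = 360  ->  x = 40.
print(360 / 9)               # 40.0

# Distance between A(7, 14) and B(x, 2) is 13: the vertical leg is 12, so the
# horizontal leg is 5, giving x = 7 - 5 = 2 or x = 7 + 5 = 12.
dy = 14 - 2
dx = math.sqrt(13**2 - dy**2)
print(7 - dx, 7 + dx)        # 2.0, 12.0
```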
4. Data Analysis, Statistics, and Probability Concepts
The topics you need to know to master questions on this section are:
- Data interpretation (reading tables and line, bar, and other graphs)
- Descriptive statistics (mean, median, mode, weighted average)
- Probability (of one event or two or more independent or dependent events)
- Geometric probability (the probability that a random point chosen will fall within
a particular geometric figure)
Data Analysis, Statistics, and Probability Sample Questions
Below are example SAT Math questions on this topic area:
- In the following figure, the large circle has a radius of 9 and the small square
has sides of length 4. If a point is chosen at random from inside the large circle,
what is the probability that the point chosen will fall in the small square?
- According to the graph below, between what two consecutive months was
there the greatest change in auto sales?
- The complete cycle of a fly’s life is 80 days. It spends 10 days
as an egg, 40 days in larval form, and lives as a mature adult for 30 days.
At a randomly chosen time, what is the probability that a given fly will NOT
be in its adult form?
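Sketched checks for two of these (the auto-sales question depends on a graph that is not reproduced here, and the square is assumed to lie entirely inside the circle):

```python
import math

# Geometric probability: a point chosen at random inside a circle of radius 9,
# probability that it lands inside a square of side 4.
p_square = 4**2 / (math.pi * 9**2)
print(p_square)              # ~0.063

# Fly life cycle: 80 days total, 30 days as an adult.
p_not_adult = (80 - 30) / 80
print(p_not_adult)           # 0.625, i.e. 5/8
```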
Study Strategies for SAT Math
- Get familiar with the SAT Math section instructions
Make sure you know how to correctly enter answers into the answer sheet
for the grid-in questions. Don't waste time on test day figuring out how
to enter fractions and decimals into the grid; know this in advance.
Also be aware of the formulas that are provided at the beginning of the test
(e.g. the formula for the volume of a cylinder or the circumference of a
circle). Although you are never required to memorize these formulas, your
work will go much quicker during the test if you do not have to refer to
this information, so take the time to memorize it now. It will always be
there if you need to refer to it on the test, but try not to rely on it.
- Practice taking timed math sections
One of the things that makes the SAT Math section challenging is the time
limit. The College Board offers an SAT preparation book with real practice
tests designed by the test-maker and these are the best bet for getting an
idea of the pace you need to be going at to get to every question. Remember
that more difficult questions are not worth more points, so if a question
seems difficult on first reading, skip it, move on to easier questions, and
come back to try and make an educated guess if you have time left over.
- Check your work
When taking practice tests, make sure to go through your answers carefully
and note what questions you got wrong and why. Keep track of what types of
questions you are missing consistently (e.g. algebra, geometry, etc.) and
then do a focused review of material in that area, using the practice drills
found in SAT prep books.
- Try questions from many different sources
Although the practice tests from the College Board are going to be the most
representative of what real SAT questions will look like, if you try the math
exercises from many different SAT prep books you will build up your general
skill level and be more prepared for new question types you haven’t seen before. | http://www.ivyglobal.com/SAT/sat_math.asp | 13
53 | W: [IS.3 - Struggling Learners] “Today’s lesson is going to continue to focus on the concept of volume, along with mass and weight. We need to understand that there is a difference between mass and weight. After one of the activities, we will be able to discuss the similarities and differences between volume and how much an object weighs.”
H: “If I weighed each item on the table, which item would have the least amount of mass? Which item on the table would have the greatest amount of mass? Whenever I use the word ‘weigh’ in this lesson, I am referring to determining the amount of mass of an object or group of objects. Take a few minutes with the students around you to discuss your ideas about which items might have the least mass and most mass. Then record your answers on the Which Has the Greatest Mass? ranking sheet I passed out to you (5-4-3_Hook Activity- Ranking Sheet.doc). Be prepared to explain your thinking.” Once students are finished ranking the objects, have students share which object they think is the heaviest and why. Collect the student ranking sheets. This can inform you of the readiness levels of your students and their prior knowledge of mass. “Some of us agree about which item we think is the heaviest. Some of us disagree. I like how you supported your predictions with good reasoning. How could we show if our predictions are accurate or not?” (Allow students to respond. One response might be to use a scale to weigh the objects. Give students time to offer other suggestions.) Weigh the objects on the balance and record the mass for each object on a class chart for students to see. If a scale is not available, choose items that have their mass listed on them or weigh the items before the activity and share the mass of each during the activity. Ask students to rank the objects from lightest to heaviest while you arrange them in that order on the table. “Is this the order you ranked the items in originally?”
“I have a statement for you to think about. [IS.4 - All Students] If you think the answer is true give me thumbs up; if you think the answer is false give me thumbs down. Do weight and mass refer to the same thing?” Quickly scanning student responses to this question can give you a better awareness of your students’ level of understanding. Adjust the pacing of the lesson based on student readiness and need. The difference between weight and mass can become very scientific. The purpose is simply to expose students to the idea that indeed the words mass and weight are used interchangeably by many people, yet they are technically different. “It looks as though we have some disagreement. The correct answer is false. Do any volunteers who gave thumbs down want to share how they arrived at their thinking?”Allow students time to share their thoughts. Then using an overhead or document camera, show students the Weight and Mass Chart (5-4-3_Weight and Mass Chart.doc) and discuss it. To bring clarity to the difference between weight and mass, explain to students that if they went to the moon or traveled to Mars, they would still have the same mass, but their weight would be different because of the different gravitational pull on the moon and Mars.
Review the units used to measure mass in the customary English system of measurement and the metric system of measurement. If possible, have some items on hand so students can “feel” the difference in their mass. “Many people use the words weight and mass interchangeably, yet these two words mean something different. For the purpose of this lesson, we will be looking at the mass of different objects.”
“When measuring mass, we can use pounds and ounces or grams and kilograms. For the next activity, we are going to start with the customary English system of measurement. In a few moments, you will be assigned to a small group. There are stations set up around the room that you will be assigned to. Each station has a set of objects weighing one pound. Even though the objects weigh the same, the quantity may look different.”
E: “Now we are going to introduce volume into the discussion and see how that fits with what we already know. I would like you and your group members to look at the items; you can confirm that the objects do weigh one pound using the balance at each station if you like.” (If balances are not available, students will have to be reminded that the items weigh approximately one pound.) “Your task is to compare and contrast the different volumes of the objects. Remember from a previous lesson, volume is the amount of space occupied by an object or material. Your task will be to complete Part I of the activity sheet Is One Pound Always the Same?” (5-4-3_Is One Pound Always the Same.doc). If alternate items like school supplies (box of paper clips, pencils, erasers, chalk) are used, a blank template is available (5-4-3_Is One Pound Always the Same-Blank Template.doc).
While students are in groups, [IS.5 - All Students] monitor student interaction and responses. For those groups who are not as fluent in generating similarities and differences, ask questions to guide thinking.
- “Remember volume is the amount of space objects take up. [IS.6 - All Students] Which object has the biggest volume?”
- “Remember mass refers to the amount of matter an object has. Which object has the greatest mass?”
- “Are the answers to these two questions the same?”
- “Does an object with a larger mass always have a larger volume? Smaller volume? Give evidence to support your thinking with the objects at your station.”
- “What causes an object to have more volume?”
- “What causes an object to have more mass?”
R: “The objects used for this activity were measured using the customary English system of measurement. For your next task, using the balance at your station, I would like your group to measure the [IS.7 - All Students] objects in kilograms.” (Only ask students to measure the items in kilograms if balance scales are available. If balance scales are not available, refer to the chart and skip the actual measuring.) “Remember from the chart I shared with you earlier (put the Weight and Mass Chart back on the overhead or document camera), a kilogram is equivalent to a whole pineapple or a major league baseball. Mass can be measured in pounds or kilograms. Is a kilogram more than a pound? Or is a kilogram less than a pound? Talk with the students around you and discuss your thinking.”Allow students time to share their reasoning. (Some may say a kilogram is more than a pound because a pineapple is bigger than a pound of butter. Some may say a kilogram is less than a pound because they may not think a major league baseball weighs more than a pound.)
(If balance scales are not available, students can use a converter calculator listed in the Related Resources to determine how many kilograms are in a pound. Students should recognize that an object that weighs one kilogram is heavier than an object that weighs one pound.) “When you use the balance, be sure you measure the mass of each object in the set. Record the results you get on the Is One Pound Always the Same activity sheet, Part II. Once you are finished, complete the questions under the chart.” While groups are working, monitor student dialogue and the procedures used by the groups to weigh the mass of each set of objects in kilograms. If group measurements are inaccurate, ask the groups to repeat the procedure while you guide and monitor the group for understanding. Students may not come up with the exact conversion of 1 kilogram = 2.2 pounds for each set of objects due to errors like inaccuracy of the balance used or the original weight of an object may not have been “exactly” one pound. “I see that most groups are finished. Would you please discuss your findings with other groups and compare your results? Are you close?” (Give groups some time to dialogue with each other and then bring the class back together.) “What did you discover about the relationship between pounds and kilograms?” (1 kilogram is 2.2 pounds.)
E: The ranking sheet given to students at the beginning of the lesson can be a quick formative assessment that gives you some insight into how well students understand mass. Listening to student interaction and responses to questions is another way to informally assess student understanding throughout the lesson. When misconceptions become evident, be sure to clarify student thinking. Monitoring groups for accuracy during the weighing of objects and asking groups to repeat the process if they don’t determine the correct answer can rectify any misconceptions a group may have. This immediate feedback can facilitate student learning. “In today’s lesson, we briefly discussed the difference between weight and mass. We also looked at the relationship between mass and volume and the relationship between pounds and kilograms. Do objects with the same mass have to have the same volume? (no) Do objects with the same volume have to have the same mass? (no) Is a pound more or less than a kilogram?” (less; 1 pound is a little less than half a kilogram because 1 kilogram = 2.2 pounds) | http://pdesas.org/module/content/resources/4650/view.ashx | 13 |
54 | Grades K - 3
This lesson will help students use different units of measure
and learn about the relationship between sizes of measuring units and the
results of measuring. They will compare different units of measure as they
use them, thus learning their relative sizes through use.
It Figures #2: Deciding How Close to Measure.
Students will be able to:
- compare common objects in terms of length
- use non-standard units and standard units of measurement
- measure different objects
- define measurement procedure as a critical part of data collection
For each group of five students:
- Chart paper
- laminated tagboard cards (see Post Viewing Activities for what to
write on the cards; you will need to prepare a set of cards for each student group)
- marker (washable)
- set of laminated cards
- five rulers
- paper clips
- old shoes
- pipe cleaners
- tape measurer
For each student:
- stuffed animal(brought from home)
- construction paper
Teacher: "Today we are going to play a game called Giant
Teacher: "I want you to estimate, and then count, distance in the room
in giant steps." Select a student to be the giant. The giant is to
stand at the front of the classroom, take two giant steps, and freeze.
Teacher: Ask the other students to estimate the whole length of the classroom
in those giant steps.
Teacher: "Make a picture in your head and try to imagine how many giant
steps it would take to get to the far wall. "
Write the estimates on the board. Next, as the giant paces the whole length
of the room, have the students count out loud and record the answer.
Teacher: "If you each measure the length of the room in your own giant
steps, will all your answers be the same? What do you think?"
Allow students to talk about whether they think there will be any variation
in their results. Primary students have a wide range of theories about how
and why measurement result might vary.
Next select student pairs that will pace the length of the classroom in
giant steps. While one partner paces, the other counts. Students then switch
Teacher: Ask students to record their results on the board.
Teacher: "Look at the number on the board. Did everyone get the same
results? Does this surprise you?"
Teacher: "Now you will measure the same distance, but you'll use baby
steps. Do you think you'll get different results?" Now have a student
demonstrate baby steps.
Teacher: "Imagine, now, how many baby steps like that will it take
to get to the other wall. Imagine the baby steps in a straight path across
the room. How many will it take?"
Now pair up students to pace and count. Then have students enter the data
on a line plot on the board.
Baby steps produce much larger numbers than giant steps.
Teacher: "Why should the littlest steps give us the biggest number?"
Allow for student answers.
Teacher: "Now we have measured the length of our classroom in giant
steps and baby steps. We are now going to study other units to measure with."
Teacher: "Can you think of something we can use to measure with in
the classroom?" (record responses on board)
Teacher: "Now we are going to learn about measurement by
actually watching a group of boys and girls building a tree house. Each
time the boys and girls use a type of unit of measurement, I want you to
show me the inch on your finger. (The inch on your finger is in the middle
of your pointer finger.)
START at the beginning after the opening credits.
PAUSE when the Chinese boy says: "Well, let's go."
Teacher: "Did the boys and girls plan what size boards they needed?"
(No) "Did they plan how many boards they needed?" (No)
PAUSE when the boy says: "What do you think Zig, will this part
work for part of the floor?"
Teacher: "What type of measurement is Zig using to determine if the
board will fit for the floor?" (eyes)
PAUSE when the blonde boy says: "Don't be so picky Nancy."
Teacher: "Do you think Nancy was right, that they should have used
a tape measure?" (yes, to make sure it fit)
PAUSE when Zig tells Lisa he can measure with that pencil.
Teacher: "How can you measure with a pencil?" (you can measure
the length, as one unit)
PAUSE when the spider says, "I know."
Teacher: "Do you use the same unit of measure when you measure big
and small things?" (no, there are different types of measuring units.)
Stop when Zig says, "Lets' think about putting on a second story on
our tree house."
Teacher: "We just saw several examples of how we can use
different types of units to measure. What were some of the units the boys
and girls used to measure their tree house?" (a pencil, tape measurer,
a centimeter ruler, and a millimeter ruler)
Teacher: "Today we are going to work in groups and measure the items
in our measurement boxes which have been set on your table."
Teacher: Divide the students into groups of five. In each box have an assortment
of ribbons lengths of yarn, old ties, straws, paper clips, pencils, crayons,
old shoes, belts, comb, pipe cleaners, rulers and tape measurer. Write questions
on tagboard cards that have been laminated. Have students record response
on card with a washable marker.
EXAMPLE of questions you might put in each box.
Teacher: Circulate among group, checking answers.
- Which is longer, the blue ribbon or the red ribbon?
- What color are the two crayons that have the same length?
- How many paper clips long is the yellow pencil?
- Name three things in the box that are longer than the eraser?
- How many inches long is the shoe?
- Which one is longer the shoe or the straw?
- How many paper clips does it take to measure the comb?
- How many crayons long is the tie?
- How many paper clips long is the shoe?
- Which is shorter, the belt or the blue ribbon? Other similar questions
can be used.
Teacher: Wrap up activity, prepare for tomorrow by writing letters to parents
asking if they may bring a stuffed animal to school. The students should
explain in the letter that his/her stuffed animal will have to spend the
night at school.
Day 2 Background: Today we are going to spend some time investigating
our stuffed animals. Each child needs a stuffed animal to measure. If your
students can't bring their own stuffed animals to school, you can set up
a center with a box of stuffed animals. This activity could be done as a
Teacher: Begin the day by reading the story Inch by Inch by Leo
Lionni. The story is about how a quick-thinking inchworm saves his life
by offering to measure the birds who want to eat him. Inch by inch, he measures
the robin's tail, the flamingo's neck, the toucan's beak, the heron's legs,
the pheasant's tail, and the hummingbird's body. But, when he agrees to
measure the nightingale's song, he takes the opportunity to inch away to
freedom. The birds in Inch by Inch all had something they wanted
measured. They had an inchworm do the measuring for them using his body
length as one unit of measurement (an inchworm is the caterpillar larva
of a geometrid moth). Many things can be used as units of measurement from
the time we are born and as we grow. We measure things around us all our lives.
Teacher: Start lesson by reading the story, Inch by Inch.
Teacher: "The birds in Inch by Inch wanted to be measured. The
inchworm measured them in inches."
Teacher: "Can you name the animals that inchworm measured?" (list
Teacher: Now let's color and cut out our own inchworm to measure our stuffed
animals. (see page )
Teacher: After students have completed page , have the students introduce
their stuffed animal to the class and estimate how long he/she is in length
in inchworm parts.
Teacher: "Do you see different ways we can sort our animals?"
For example by color, by types of animals, whether or not the animal has
clothes, or length of the animal?"
Teacher: Have your students draw pictures of their stuffed animals and write
down the measurements. Listing leg measurement, arms, head or special features
that the animal possess. See page , worksheet 2.
Teacher: "Remember when you measure, you must start at one end of the
thing being measured and measure to the other end." *stress that measuring
is continuous - there are no spaces between the units.
Day 3: Ask the students to compare their stuffed animal to the one
you brought in. Have the students measure your animal, and then graph if
their animal measures bigger, smaller or about the same size as the teacher's
stuffed animal. Using a venn diagram would work great for this activity.
Teacher: "How many were bigger than my stuffed animal? How many were
smaller? How many animals were the same in measurement?"
Teacher: "Now let's write a story about your stuffed animal and what
he/she learned and experienced in school.
Day 4: Have students graph their stuffed animal measurements on a class graph.
Teacher: "Did our animals all measure the same?"
Teacher: "Now let's use the information and have fun writing word problems."
Have your students write math stories about the stuffed animals. For example,
"There are two pink bunnies, one blue bunny and five brown bunnies
in our classroom. How many bunnies are visiting our room?" Share problems
Students will visit a hardware store like Scotty's or Home Depot.
Have manager explain how important measurement is to his customers when
they are planning some type of project. Have students prepare questions
before your visit. Invite a contractor in to explain how important measurement
is when planning to build a building.
Let's Measure Center!
Place a math center in the room, which may be filled with objects students
can measure. Some suggestions for objects include pencils, books, crayons,
straws etc. Provide workmats divided in half with the word Longer
at the top of one side and the word Shorter at the top of the other
side. Have students take two objects out of the box, compare their lengths,
and place the objects on the proper side of the workmat. A partner could check the placements.
How Tall and Wide are You?
Have students work in pairs. One partner uses yarn or string to measure
the other's arm span from fingertip to fingertip. Cut the yarn to this measured
length. Using this same piece of yarn, the partner then measures the other's
height from the top of the head to the floor. They will be surprised to see
that the result is the same! To check the validity of this concept have
partners switch roles and repeat the procedure. Now have the partner measure
the other's foot using a strip of construction paper and then switch to
measure the other's foot. Then, hold the paper between the elbow and wrist!
The length of the forearm and the foot will be the same!
Have students conduct some research and collect more data about foot size.
Is there another group whose feet they might measure? They may be interested
in some tall peoples' foot sizes. Robert Pershing Wadlow, the world's tallest
man at 8 feet 11.1 inches, wore size 37AA shoes. His feet were 18-1/2 inches
long. Zeng Jinlian, the world's tallest woman, had 14-inch-long feet. These
facts come from the Guinness Book of World Records, New York: Bantam Books.
Longer or Shorter?
Take two hula hoops and have them overlap. Place all of the materials to
be measured in the center. Students compare the objects with the length
of the ruler. Shorter objects are placed on the left of the Venn diagram.
Longer objects go on the right, and objects that are the same length go
in the center. Encourage students to record their finding by drawing a picture
of their Venn diagram on a large sheet of drawing paper and recording the
length of each object.
Students may use the computer language Logo. Using the Logo language is a logical
way to help reinforce measurement. Giving and responding to directions about
turns and distances is similar to moving the Logo turtle around the screen.
You can organize the task so that one student points to a place on the screen
and the other moves or programs the turtles to get to the spot. You can
save the pictures or the programs and have students share and compare their results.
What Size Bed?
The first person will trace around her partner's foot. Cut out the pattern.
Next, use the foot pattern to measure the right-sized bed for your partner.
With your partner lying on the floor, use their foot pattern to determine
how big a bed made for them would be. Then the second person repeats step
1 and 2. Then complete your worksheet.
Name What Size Bed?
Length Length Width Width
Two of the Same Scavenger Hunt.
Have students help you find something in the room that is about the same
length as their desk. Brainstorm a list of possibilities then have students
cut a piece of string the length of their desk and compare it with the objects
listed. Then have students find two objects similar in size and measure
them with string. Then post the names of the objects on index cards and
attach the measurement on the strings. Allow students to use nonstandard
measures or standard metric measurement.
How Big Is a Foot? Give each group of students at least 10 foot cutouts,
all of the same size. Then have each group measure the teacher's desk,
the reading table, desk, or any other specified objects by using the feet
cutouts as the unit of measure. Ask students why they think the measurements
were different. Read the story How Big Is a Foot? by Rolf Myller,
and ask students to write a letter to the apprentice telling him how to
build a bed to fit the queen.
Myller, Rolf. How Big Is a Foot? Macmillan, 1922.
Wylie, Joanne and David. A Big Fish Story. Children's Press, 1983.
The Snake: A Very Long Story. Houghton Mifflin, 1978.
Lionni, Leo. Inch by Inch. Astor-Honor, 1960.
Farmer Mack Measures His Pig. HarperCollins, 1986.
Jim and the Beanstalk. Putnam, 1970.
dePaola, Tomie. Now One Foot, Now the Other. Putnam, 1980.
Put your best foot forward with this fun art project. Make foot prints to
measure your many activities to help make measurement fun. Have your students
scuff their sneakers on the floor before stepping on pieces of white paper.
(sneakers with patterns on the soles are most effective) Each student outlines
the edge and sole design with fine-tipped markers before shaking off the
dust from the paper. After coloring the designs, have students cut out scraps
of construction paper to glue onto their pictures.
Measure off meter lengths on a sidewalk or in the parking lot. Draw a big
ladder. Make the steps a meter length apart. Write the meter progression
in each box. Divide your class into two teams. Each player tosses a beanbag.
Alternate turns. The team scoring the highest total wins.
Master Teacher: Kathy Raiford
NOTE TO TEACHER
Before using rulers or an inch worm (inch stick), children need much practice
with the concept of measurement. Comparing the lengths of common objects
using the term longer and shorter is a good beginning point. After several
sessions of hands-on comparing, children can then use pieces of string,
unifix cubes, paper clips, tooth picks, popsicle sticks, etc., to find the
length of an object.
Thirteen Ed Online | http://www.thirteen.org/edonline/nttidb/lessons/jx/measjx.html | 13 |
50 | Calculation of displacement in motion along a straight line
For simplicity we will analyze motion along the x axis of the coordinate system. The general formula for displacement of an object in motion with constant velocity is defined in Physics as displacement = velocity x time,
or, in the form of an equation,
x = v·t      (Formula M1.16)
If the object is moving with an acceleration the velocity is different at any moment of time and calculation of total displacement during the time of motion cannot be performed with Formula M1.16. In such a case we apply a special procedure by calculating the total displacement as a sum of a large number of displacements, each calculated for a very small period of time. One such displacement Δxi can be written as
Δxi = v(ti)·Δt
where v(ti) is the velocity at a given instant of time ti and Δt is a short lapse of time. During such a short time the velocity is nearly constant so we can use Formula M1.16 to calculate this small displacement. The approximate total displacement x will be equal to the sum of all displacements Δxi created during the time of motion,
x ≈ v(t1)·Δt + v(t2)·Δt + ... + v(tn)·Δt      (Equation M1.18)
t1 is the moment of time when the motion starts, tn – the time the motion stops. To get the exact value of displacement we have to find the value of the right hand side of Equation M1.18 in the limiting case, when Δt → 0,
x = lim(Δt→0) [ v(t1)·Δt + v(t2)·Δt + ... + v(tn)·Δt ]      (Equation M1.19)
and this limit is known as an integral
x = ∫ v(t)dt + C
Note that Δt → 0 in Equation M1.19 implies that the number of intervals of time Δt becomes infinite - n → ∞.
The C is a constant of integration and can be found from the following reasoning. At time instant t=0, that is at the moment we start the calculation of the displacement, the already created displacement has some value x0. In other words, at the time instant we start calculation this object may have already traveled a distance x0, which is called the initial displacement.
Substituting at time t=0, x = x0, and ∫v(t)dt = 0, one gets C = x0,
and the formula for displacement is now
x = x0 + ∫ v(t)dt      (Formula M1.21)
It cannot be used directly to find the distance traveled by an object. We must know the explicit form of velocity v(t) as a function of time.
The simplest case is when velocity is constant and does not depend on time, v(t) = v. Substituting this constant velocity into Formula M1.21 we get
x = x0 + v·t      (Formula M1.23)
If you are not familiar with calculus and don't know how to calculate integrals, just remember this final formula. The motion can be along any direction, not only along the x axis, so the formula is usually written in the form
d = d0 + v·t
which does not imply any specific direction of motion.
The next simplest case is when the velocity changes, but is a linear function of time,
v(t) = a·t      (Equation M1.25)
where a is acceleration. We do not use a vector notation for velocity and acceleration, because all the time we are discussing motion along a straight line. There is one direction of displacement, velocity, and acceleration. Equation M1.25 tells us that an object starts from rest at time t=0 and gains velocity uniformly with time. Substituting Equation M1.25 into M1.21 we get
x = x0 + ∫ a·t dt
Solving the integral in this equation leads to the formula for the displacement
x = x0 + (1/2)·a·t²      (Formula M1.27)
If you are not familiar with integration, don't worry; just remember formula M1.27, which may be used in solving problems.
A more complicated case is when at time t=0 an object has already traveled a distance x0 and has the velocity v0. From that moment (t=0) on it starts moving with constant acceleration a. Velocities are additive (as long as they are much smaller than the velocity of light = 300,000 km/s), therefore the velocity of our object is given by
v(t) = v0 + a·t
and after substituting it into Equation M1.21, we get the formula for displacement
x = x0 + ∫ (v0 + a·t) dt
which after solving the integral and rearranging reads
x = x0 + v0·t + (1/2)·a·t²      (Formula M1.30)
If you are not familiar with calculus, don't pay attention to all the equations with integrals or derivatives. In solving most of the problems you will need only equations without these "fancy" symbols. But do not mistake the symbol of a derivative such as dx/dt with the operation of dividing dx by dt.
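As a check of Formula M1.30, here is a short sketch (illustrative numbers, SI units assumed) that integrates the velocity numerically with a sum of small displacements, exactly in the spirit of Equation M1.18, and compares the result with the closed form:

```python
# Numerical check of x = x0 + v0*t + 0.5*a*t**2 (Formula M1.30)
# by summing v(t_i)*dt over many small steps, as in Equation M1.18.
x0, v0, a, t_total = 5.0, 3.0, 2.0, 10.0   # illustrative values

n = 1_000_000
dt = t_total / n
x = x0
for i in range(n):
    t_i = i * dt
    x += (v0 + a * t_i) * dt               # v(t_i) * dt

x_exact = x0 + v0 * t_total + 0.5 * a * t_total**2
print(x, x_exact)                          # the two values agree closely
```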
In the general case one may want to calculate the displacement of an object moving with velocity given by an arbitrary function, for example
v(t) = v0·|sin(2·π·n·t)|      (Formula M1.31)
where n is the frequency in cycles per second and the vertical lines denote the absolute value of the expression inserted between them.
Absolute value: |x| is defined as “unsigned” portion of x.
For example, |3| = 3, |-5| = 5.
Formula M1.31 tells us that the velocity of a moving object changes from 0 to v0 periodically with frequency n. The calculation of displacement for such a motion requires the knowledge of calculus. You can find an example of such a calculation among the problems attached to this paragraph. You can smoothly go through this tutorial, even if you omit the parts of material involving math you are not familiar with. So don't worry when you see some strange equations or symbols.
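For a velocity of this periodic form the same numerical summation works even though the closed-form integral is less obvious. The sketch below uses assumed values (v0 = 2 m/s, n = 0.5 Hz) and estimates the displacement over 10 s:

```python
import math

# Displacement for v(t) = v0 * |sin(2*pi*n*t)| by numerical summation.
v0, n_freq, t_total = 2.0, 0.5, 10.0       # assumed values
steps = 1_000_000
dt = t_total / steps

x = 0.0
for i in range(steps):
    t = i * dt
    x += v0 * abs(math.sin(2 * math.pi * n_freq * t)) * dt

# The average of |sin| over whole periods is 2/pi, so x ~ v0 * (2/pi) * t_total.
print(x, v0 * (2 / math.pi) * t_total)     # ~12.73 in both cases
```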
Equations M1.23 and M1.27 are special cases of Equation M1.30: substituting a = 0 into Equation M1.30 gives M1.23, and substituting v0 = 0 gives M1.27.
Therefore the basic equation for solving most of the problems in Physics involving the calculation of displacement is Formula M1.30.
In more general cases one can have acceleration changing with time a = f(t) - a is a function of time. Such specific motions will be considered in some problems. | http://www.physics-tutorial.net/M1-4-calculate-displacement.html | 13 |
53 | theory and principles of waves, how they work and what causes them
|waves and environment||Waves have a major influence on the marine environment and ultimately on the planet's climate.|
|wave motion||Waves travel effortlessly along the water's surface. This is made possible by small movements of the water molecules. This chapter looks at how the motion is brought about and how waves can change speed, frequency and depth.|
|waves and wind||The wind blows over the water, changing its surface into ripples and waves. As waves grow in height, the wind pushes them along faster and higher. Waves can become unexpectedly strong and destructive.|
|waves in shallow water||As waves enter shallow water, they become taller and slow down, eventually breaking on the shore.|
|wave groups||In the real world, waves are not of an idealised, harmonious shape but irregular. They are composed of several interfering waves of different frequency and speed.|
|wave reflection||Water waves bounce off denser objects such as sandy or rocky shores. Very long waves such as tsunamis bounce off the continental slope.|
|Waves in the environment
Without waves, the world would be a different place. Waves cannot exist by themselves for they are caused by winds. Winds in turn are caused by differences in temperature on the planet, mainly between the hot tropics and the cold poles but also due to temperature fluctuations of continents relative to the sea.
Without waves, the winds would have only a very small grip on the water and would not be able to move it as much. The waves allow the wind to transfer its energy to the water's surface and to make it move. At the surface, waves promote the exchange of gases: carbon dioxide into the oceans and oxygen out. Currents and eddies mix the layers of water which would otherwise become stagnant and less conducive to life. Nutrients are thus circulated and re-used.
For the creatures in the sea, ocean currents allow their larvae to be dispersed and to be carried great distances. Many creatures spawn only during storms when large waves can mix their gametes effectively.
Coastal creatures living in shallow water experience the brunt of the waves directly. In order to survive there, they need to be robust and adaptable. Thus waves maintain a gradient of biodiversity all the way from the surface, down to depths of 30m or more. Without waves, there would not be as many species living in the sea.
Waves pound rocks and make them erode faster, but sea organisms covering
these rocks delay this process. Waves make beaches by transporting sand
from deeper down towards the shore and by washing the sand and removing
fine particles. Waves stir and suspend the sand so that currents or gravity
can transport it.
Anyone having watched water waves rippling outward from the point where a stone was thrown in, should have noticed how effortlessly waves can propagate along the water's surface. Wherever we see water, we see its surface stirred by waves. Indeed, witnessing a lake or sea flat like a mirror, is rather unusual. Yet, as familiar we are with waves, we are unfamiliar with how water particles can join forces to make such waves.
Waves are oscillations in the water's surface. For oscillations to exist and to propagate, like the vibrating of a guitar string or the standing waves in a flute, there must be a returning force that brings equilibrium. The tension in a string and the pressure of the air are such forces. Without these, neither the string nor the flute could produce tones. The standing waves in musical instruments bounce their energy back and forth inside the string or the flute's cavity. The oscillations that are passed to the air are different in that they travel in widening spheres outward. These travelling waves have a direction and speed in addition to their tone or timbre. In air their returning force is the compression of the air molecules. In surface waves, the returning force is gravity, the pull of the Earth. Hence the name 'gravity waves' for water waves.
In solids, the molecules are tightly connected together, which prevents
them from moving freely, but they can vibrate. Water is a liquid and its
molecules are allowed to move freely although they are placed closely together.
In gases, the molecules are surrounded by vast expanses of vacuum space,
which allows them to move freely and at high speed. In all these media,
waves are propagated by compression of the medium. However, the surface
waves between two media (water and air) behave very differently and solely
under the influence of gravity, which is much weaker than that of elastic
compression, the method by which sound propagates.
|The specific volume of
sea water changes by only about 4 thousandths of 1 percent (4E-5) under a
pressure change of one atmosphere (1 kg/cm2).
This may seem insignificant, but the Pacific Ocean would stand about 50m
higher, except for compression of the water by virtue of its own weight,
or about 22cm higher in the absence of the atmosphere. Since an atmosphere
is about equal to a column of water 10m high, the force of gravity is about
43 times weaker than that of elastic compression.
Surface tension (which forms droplets) exerts a stress parallel to the surface, equivalent to only one 74 millionth (1.4E-8) of an atmosphere. Its restoring force depends on the curvature of the surface and is still smaller. Nevertheless it dominates the behaviour of small ripples (capillary waves), whose presence greatly contributes to the roughness (aerodynamic drag) of the sea surface, and hence, to the efficiency with which the wind can generate larger waves and currents. (Van Dorn, 1974)
If each water particle makes small oscillations around its spot, relative
to its neighbours, waves can form if all water particles move at the same
time and in directions that add up to the wave's shape and direction. Because
water has a vast number of molecules, the height of waves is theoretically
unlimited. In practice, surface waves can be sustained as high as 70% of
the water's depth or some 3000m in a 4000m deep sea (Van Dorn, 1974).
Note that the water particles do not travel but only their collective energy does! Waves that travel far and fast, undulate slowly, requiring the water particles to make slow oscillations, which reduces friction and loss of energy.
|In the diagram some familiar terms are shown. A floating object is observed to move in perfect circles when waves oscillate harmoniously sinus-like in deep water. If that object hovered in the water, like a water particle, it would be moving along diminishing circles, when placed deeper in the water. At a certain depth, the object would stand still. This is the wave's base, precisely half the wave's length. Thus long waves (ocean swell) extend much deeper down than short waves (chop). Waves with 100 metres between crests are common and could just stir the bottom down to a depth of 50m. Note that the depth of a wave has little to do with its height! But a wave's height contains the wave's energy, which is unrelated to the wave's length. Long surface waves travel faster and further than short ones. Note also that the forward movement of the water under a crest in shallow water is faster than the backward movement under its trough. By this difference, sand is swept forward towards the beach.|
Water waves can store or dissipate much energy. Like other waves (alternating electric currents, e.g.), a wave's energy is proportional to the square of its height (potential). Thus a 3m high wave has 3x3=9 times more energy than a 1m high wave. When fine-weather waves of about 1m height pound on the beach, they dissipate an average of 10kW (ten one-bar heaters) per metre of beach or the power of a small car at full throttle, every five metres. (Ref Douglas L Inman in Oceanography, the last frontier, 1974). Attempts to harness the energy from waves have failed because they require large structures over large areas and these structures should be capable of surviving storm conditions with energies hundreds of times larger than they were designed to capture.
Waves have a direction and speed. Sound waves propagate by compressing
the medium. They can travel in water about 4.5 times faster than in air,
about 1500m per second (5400 km/hr, or mach-4.5, depending on temperature
and salinity). Such waves can travel in all directions and reach the bottom
of the ocean (about 4km) in a few seconds. Surface waves, however,
are limited by the density of water and the pull of gravity. They can travel
only along the surface and their wave lengths can at most be about twice
the average depth of the ocean (2 x 4 km). The fastest surface waves observed,
are those caused by tsunamis. The 'tidal wave' caused by an under-sea earthquake
in Chile in May 1960, covered the 6000 nautical miles (11,000km) to New
Zealand in about 12 hours, travelling at a speed of about 900 km/hr!
When it arrived, it caused an oscillation in water level of 0.6m at various
places along the coast, 1.4m in Tauranga Harbour and 2.4m in Whitianga
harbour. Note that tsunamis reach their minimum at about 6000 km distance.
Beyond that, the curvature of the Earth bends the wave fronts to focus
them again at a distance of about 12,000 km, where they can still cause
|The relationship between wave
speed (phase velocity) and depth of long surface waves in shallow water
is given by the formula
c x c = g x d x (p2 - p1) / p2, or approximately c x c = g x d, since the density of air (p1) is negligible compared with that of water (p2).
For an ocean depth of 4000m, a wave's celerity or speed would be about SQR(10 x 4000) = 200 m/s = 720 km/hr. Surface waves could theoretically travel much faster on larger planets, in media denser than water.
For deep water, the relationship between speed and wavelength is given by the formula:
l = g x t x t / (2 x pi)
where l is the wave length and t the wave period. Thus waves with a period of 10 seconds travel at 56 km/hr with a wave length of about 156m. A 60 knot (110 km/hr) gale can produce in 24 hours waves with periods of 17 seconds and wave lengths of 450m. Such waves travel close to the wind's speed (97 km/hr). A tsunami travelling at 200 m/s has a wave period of 128 s, and a wave length of 25,600 m.
The two diagrams show the relationships between wave speed and period for various
depths (left), and wave length and period (right), for periodic, progressive
surface waves. (Adapted from Van Dorn, 1974) Note that the
term phase velocity is more precise than wave speed.
The period of waves is easy to measure using a stopwatch,
whereas wave length and speed are not. In the left picture, the red line
gives the linear relationship between wave speed and wave period. A 12
second swell in deep water travels at about 20m/s or 72 km/hr. From the
red line in the right diagram, we can see that such swell has a wave length
between crests of about 250m.
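A small sketch of these two relationships (deep-water wavelength and speed from the period, and the shallow-water limit c² ≈ g·d), approximately reproducing the numbers quoted above; the text rounds g to 10 m/s² in the shallow-water example:

```python
import math
g = 9.8  # m/s^2

def deep_water(period_s):
    """Deep-water wavelength and phase speed from the wave period."""
    wavelength = g * period_s**2 / (2 * math.pi)   # l = g*t*t/(2*pi)
    speed = wavelength / period_s                  # c = l/t = g*t/(2*pi)
    return wavelength, speed

def shallow_speed(depth_m):
    """Shallow-water (long-wave) phase speed, c = sqrt(g*d)."""
    return math.sqrt(g * depth_m)

print(deep_water(10))       # ~ (156 m, 15.6 m/s ~ 56 km/hr)
print(deep_water(17))       # ~ (450 m, 26.5 m/s ~ 95 km/hr)
print(shallow_speed(4000))  # ~ 198 m/s ~ 713 km/hr (tsunami speed)
```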
|Waves and wind
How wind causes water to form waves is easy to understand although many intricate details still lack a satisfactory theory. On a perfectly calm sea, the wind has practically no grip. As it slides over the water surface film, it makes it move. As the water moves, it forms eddies and small ripples. Ironically, these ripples do not travel exactly in the direction of the wind but as two sets of parallel ripples, at angles 70-80º to the wind direction. The ripples make the water's surface rough, giving the wind a better grip. The ripples, starting at a minimum wave speed of 0.23 m/s, grow to wavelets and start to travel in the direction of the wind. At wind speeds of 4-6 knots (7-11 km/hr), these double wave fronts travel at about 30º from the wind. The surface still looks glassy overall but as the wind speed increases, the wavelets become high enough to interact with the air flow and the surface starts to look rough. The wind becomes turbulent just above the surface and starts transferring energy to the waves. Strong winds are more turbulent and make waves more easily.
The rougher the water becomes, the easier it is for the wind to transfer its energy. The waves become steep and choppy. Further away from the shore, the water's surface is not only stirred by the wind but also by waves arriving with the wind. These waves influence the motion of the water particles such that opposing movements gradually cancel out, whereas synchronising movements are enhanced. The waves start to become more rounded and harmonious. Depending on duration and distance (fetch), the waves develop into a fully developed sea.
Anyone familiar with the sea, knows that waves never assume a uniform,
harmonious shape. Even when the wind has blown strictly from one direction
only, the resulting water movement is made up of various waves, each with
a different speed and height. Although some waves are small, most waves
have a certain height and sometimes a wave occurs which is much higher.
When trying to be more precise about waves, difficulties arise: how do we measure
waves objectively? When is a wave a wave and should be counted? Scientists
do this by introducing a value E which is derived from the energy
component of the compound wave. In the left part of the drawing is shown
how the value E is derived entirely mathematically from the shape
of the wave. Instruments can also measure it precisely and objectively.
The wave height is now proportional to the square root of E.
The sea state E is two times the average of the sum of the squared amplitudes of all wave samples. The right part of the diagram illustrates the probability of waves exceeding a certain height. The vertical axis gives height relative to the square root of the average energy state of the sea: h / SQR( E ). For understanding the graph, one can take the average wave height at 50% probability as reference.
Fifty percent of all waves exceed the average wave height, and an equal number are smaller. The highest one-tenth of all waves are twice as high as the average wave height (and four times more powerful). Towards the left, the probability curve keeps rising off the scale: one in 5000 waves is three times higher and so on. The significant wave height H3 is twice the most probable height and occurs about 15% or once in seven waves, hence the saying "Every seventh wave is highest".
When the wind blows sufficiently long from the same direction, the waves it
creates, reach maximum size, speed and period beyond a certain distance
(fetch) from the shore. This is called a fully developed sea. Because
the waves travel at speeds close to that of the wind, the wind is no longer
able to transfer energy to them and the sea state has reached its maximum.
In the picture the wave spectra of three different fully developed seas
are shown. The bell curve for a 20 knot wind (green) is flat and low and
has many high frequency components (wave periods 1-10 seconds). As the
wind speed increases, the wave spectrum grows rapidly while also expanding
to the low frequencies (to the right). Note how the bell curve rapidly
cuts off for long wave periods, to the right. Compare the size of the red
bell, produced by 40 knot winds, with that of the green bell, produced
by winds of half that speed. The energy in the red bell is 16 times larger!
Important to remember is that the energy of the sea (maximum sea condition) increases very rapidly with wind speed, proportional to its fourth power. The amplitude of the waves increases to the third power of wind speed. This property makes storms so unexpectedly destructive.
|The biggest waves on the planet are found where strong winds consistently blow in a constant direction. Such a place is found south of the Indian Ocean, at latitudes of -40º to -60º, as shown by the yellow and red colours on this satellite map. Waves here average 7m, with the occasional waves twice that height! Directly south of New Zealand, wave heights exceeding 5m are also normal. The lowest waves occur where wind speeds are lowest, around the equator, particularly where the wind's fetch is limited by islands, indicated by the pink colour on this map. However, in these places, the sea water warms up, causing the birth of tropical cyclones, typhoons or hurricanes, which may send large waves in all directions, particularly in the direction they are travelling.|
|Waves entering shallow water
As waves enter shallow water, they slow down, grow taller and change shape. At a depth of half its wave length, the rounded waves start to rise and their crests become shorter while their troughs lengthen. Although their period (frequency) stays the same, the waves slow down and their overall wave length shortens. The 'bumps' gradually steepen and finally break in the surf when depth becomes less than 1.3 times their height. Note that waves change shape in depths depending on their wave length, but break in shallows relating to their height!
How high a wave will rise, depends on its wave length (period) and the
beach slope. It has been observed that a swell of 6-7m height in open sea,
with a period of 21 seconds, rose to 16m height off Manihiki Atoll, Cook
Islands, on 2 June, 1967. Such swell could have arisen from a 60 knot storm.
The photo shows waves entering shallow water at Piha, New Zealand. Notice how the wave crests rise from an almost invisible swell in the far distance. As they enter shallow water, they also change shape and are no longer sinusoidal. Although their period remains the same, the distance between their crests and their speed diminish.
Not quite visible on this scale are the many surfers in the water near the centre of the picture. They favour this spot because, as the waves bend around the rocks and gradually break in a 'peeling' motion, they can ride them almost all the way back to the beach.
Going back to the 'wave motion and depth' diagram showing how water particles move, we can see that all particles make a circular movement in the same direction. They move up on the wave's leading edge, forward on its crest, down on its trailing slope and backward in its trough. In shallow water, the particles close to the bottom are restricted in their up and down movements and move along the bottom instead. As the diagram shows, the particles' amplitude of movement does not decrease with depth. The forward/backward movement over the sand creates ripples and disturbs it.
Since shallow long waves have short crests and long troughs, the sand's forward movement is much more brisk than its backward movement, resulting in sand being dragged towards the shore. This is important for sandy beaches.
Note that a sandy bottom is just another medium, potentially capable of guiding gravity waves. It is about 1.8 times denser than water and contains about 30-40% liquid. Yet it behaves neither like a liquid nor entirely like a solid. It resists downward and sideways movements, but not upward movements as much. So waves cannot propagate over the sand's surface as they do along the water's surface, but divers can observe the sand 'jumping up' on the leading edge of a wave crest passing overhead (when the water particles move upward). This may help explain why sand is so easily stirred up by waves and why burrowing organisms are washed up so readily.
Surf breakers are classified into three types: spilling, plunging and surging breakers.
Photos Van Dorn, 1974
When waves break, their energy is absorbed and converted to heat. The gentler the slope of the beach, the more energy is converted. Steep slopes such as rocky shores do not break waves as much but reflect them back to sea, which 'shelters' marine life.
Part of the irregularity of waves can be explained by treating them as formed by interference between two or more wave trains of different periods, moving in the same direction. It explains why waves often occur in groups. The diagram shows how two wave trains (dots and thin line) interfere, producing a wave group of larger amplitude (thick line). Such a wave group moves at half the average speed of its component waves. The wave's energy spectrum, discussed earlier, does not move at the speed of the waves but at the group speed. When distant storms send long waves out over great distances, they arrive at a time that corresponds to the group speed, not the wave speed. Thus a group of waves with a period of 14 s would travel at a group velocity of 11 m/s (not 22 m/s) and take about 24 hours (not 12 hr) to reach the shore from a cyclone 1000 km distant. A group of waves with half the period (7 s) would take twice as long, and would arrive a day later. (Harris, 1985)
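A minimal sketch of these numbers, assuming the standard deep-water relations (phase speed g·T / 2π, group speed half of that); the 1000 km distance is the example used above:

import math

G = 9.81  # gravitational acceleration, m/s^2

def deep_water_speeds(period_s):
    """Phase speed and group speed of deep-water waves of a given period."""
    phase_speed = G * period_s / (2 * math.pi)   # ~22 m/s for T = 14 s
    group_speed = phase_speed / 2                # wave groups travel at half the phase speed
    return phase_speed, group_speed

distance_m = 1000e3  # storm 1000 km away
for period in (14, 7):
    c, cg = deep_water_speeds(period)
    hours = distance_m / cg / 3600
    print(f"T={period:2d} s: phase {c:4.1f} m/s, group {cg:4.1f} m/s, arrival after ~{hours:.0f} h")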
Most wave systems at sea are composed not just of two, but of many component wave trains, with generally different amplitudes as well as periods. This does not alter the group concept, but has the effect of making the groups (and the waves within them) more irregular.
Anyone who has observed waves arriving at a beach will have noticed that they are loosely grouped in periods of high waves, alternating with periods of low waves.
Adapted from Van Dorn, 1974.
Like sound waves, surface waves can be bent (refracted) or bounced back (reflected) by solid objects. Waves do not propagate in a strict line but tend to spread outward while becoming smaller. Where a wave front is large, such spreading cancels out and the parallel wave fronts are seen travelling in the same direction. Where a lee shore exists, such as inside a harbour or behind an island, waves can be seen to bend towards where no waves are. In the lee of islands, waves can create an area where they interfere, causing steep and hazardous seas.
When approaching a gently sloping shore, waves are slowed down and bent towards the shore.
When approaching a steep rocky shore, waves are bounced back, creating a 'confused sea' of interfering waves with twice the height and steepness. Such places may become hazardous to shipping in otherwise acceptable sea conditions.
This drawing shows how waves are bent around an island, which should be at least 2-3 wave lengths wide in order to offer some shelter. Immediately in the lee of the island (A) there is a wave shadow zone, but further out to sea a confused sea (B) of interfering but weakened waves, which at some point (C) focuses almost the full wave energy from two directions, resulting in unpredictable and dangerous seas. When seeking shelter, avoid navigating through this area.
Recent research has shown that underwater sand banks can act as wave lenses, refracting the waves and focussing them some distance farther. It may suddenly accelerate coastal erosion in localised places along the coast.
Drawings from Van Dorn, 1974.
x² − 5x + 6 = 0
x = ?
This article/section deals with mathematical concepts appropriate for a student in mid to late high school.
The complex numbers are a set of numbers which have important applications in the analysis of periodic, oscillatory, or wavelike phenomena. Mathematicians denote the set of complex numbers with an ornate capital letter: ℂ. They are the 5th item in this hierarchy of types of numbers:
- The "natural numbers", 1, 2, 3, ... (There is controversy about whether zero should be included. It doesn't matter.)
- The "integers"—positive, negative, and zero
- The "rational numbers", or fractions, like 355/113
- The "real numbers", including irrational numbers
- The "complex numbers", which give solutions to polynomial equations
A complex number is composed of two parts, or components—a real component and an imaginary component. Each of these components is an ordinary (that is, real) number. The complex numbers form an "extension" of the real numbers: If the imaginary component of a complex number is zero, that number is essentially identical to the real number that is its real component.
The complex numbers are defined as a 2-dimensional vector space over the real numbers. That is, a complex number is an ordered pair of numbers: (a, b). The familiar real numbers constitute the complex numbers with second component zero. That is, x corresponds to (x, 0).
The second component is called the imaginary part. Its unit basis vector is called i. The first component is called the real part. Its unit basis vector is just 1. Thus, the complex number (a, b) can also be written a + bi. Numbers with real part of zero are sometimes called "pure imaginary", with the term "complex" reserved for numbers with both components nonzero.
While the "invisible" nature of the imaginary component may be disconcerting at first (and the word "imaginary" may be an unfortunate term for it), the complex numbers are just as genuine as the Dedekind cuts and Cauchy sequences that are used in the definition of the "real" numbers.
The complex numbers form a field, with the mathematical operations defined as shown below.
The field operations are defined as follows:
Addition is just the standard addition on the 2-dimensional vector space. That is, add the real parts (first components), and add the imaginary parts (second components).
Letting the complex numbers w and z be defined by their respective components, w = (a, b) = a + bi and z = (c, d) = c + di, their sum is
w + z = (a + c, b + d) = (a + c) + (b + d)i.
Multiplication has a special definition:
w × z = (a, b) × (c, d) = (ac − bd, ad + bc).
It is this definition that gives the complex numbers their important properties.
- When we multiply (0, 1) by (0, 1), this definition gives (0·0 − 1·1, 0·1 + 1·0) = (−1, 0). That is, i × i = −1.
Writing out the product in the obvious way, we get the same answer:
(a + bi)(c + di) = ac + adi + bci + bdi² = (ac − bd) + (ad + bc)i.
This means that, using i² = −1, one can perform arithmetic operations in a completely natural way.
Division requires a special trick. We have:
w / z = (a + bi) / (c + di).
To get the individual components, the denominator needs to be real. This can be accomplished by multiplying both numerator and denominator by the conjugate of the denominator, c − di. We get:
w / z = ((ac + bd) + (bc − ad)i) / (c² + d²).
This division will fail if and only if c² + d² = 0, that is, c and d are both zero, that is, the complex denominator is exactly zero (both components zero). This is exactly analogous to the rule that real division fails if the denominator is exactly zero.
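These component rules can be checked against Python's built-in complex type; a small sketch with arbitrary example numbers:

def add(w, z):
    (a, b), (c, d) = w, z
    return (a + c, b + d)

def mul(w, z):
    (a, b), (c, d) = w, z
    return (a * c - b * d, a * d + b * c)

def div(w, z):
    (a, b), (c, d) = w, z
    denom = c * c + d * d            # fails only when c and d are both zero
    return ((a * c + b * d) / denom, (b * c - a * d) / denom)

w, z = (3.0, 2.0), (1.0, -4.0)        # 3 + 2i and 1 - 4i
print(mul(w, z), complex(*w) * complex(*z))   # same result: (11, -10) and (11-10j)
print(div(w, z), complex(*w) / complex(*z))   # same result in both representations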
Fundamental theorem of algebra
The complex numbers form an algebraically closed field. This means that any degree-n polynomial can be factored into n first degree (linear) polynomials. Equivalently, such a polynomial has n roots (though one has to count all multiple occurrences of repeated roots). This statement is the Fundamental Theorem of Algebra, first proved by Carl Friedrich Gauss around 1800. The theorem is not true if the roots are required to be real. (This failure is what led to the development of complex numbers in the first place.) But when the roots are allowed to be complex, the theorem applies even to polynomials with complex coefficients.
The simplest polynomial with no real roots is x² + 1, since −1 has no real square root. But if we look for roots of the form a + bi, we have:
(a + bi)² + 1 = (a² − b² + 1) + 2ab·i = 0.
For the imaginary part to be zero, one of a or b must be zero. For the real part to be zero, a must be zero and b² must be 1. This means that b = ±1, so the roots are 0 + i and 0 − i, or ±i.
These two numbers, i and −i, are the square roots of −1.
Similar analysis shows that, for example, 1 has three cube roots: 1, −1/2 + (√3/2)i, and −1/2 − (√3/2)i.
One can verify that cubing each of these numbers gives 1.
For a given field, the field containing it (strictly speaking, the smallest such field) that is algebraically closed is called its algebraic closure. The field of real numbers is not algebraically closed; its closure is the field of complex numbers.
Polar coordinates, modulus, and phase
Complex numbers are often depicted in 2-dimensional Cartesian analytic geometry; this is called the complex plane. The real part is the x-coordinate, and the imaginary part is the y-coordinate. When these points are analyzed in polar coordinates, some very interesting properties become apparent. The representation of complex numbers in this way is called an Argand diagram.
If a complex number is represented as z = a + bi, its polar coordinates are given by:
r = √(a² + b²),  θ = arctan(b / a) (choosing the quadrant according to the signs of a and b).
Transforming the other way, we have:
a = r cos θ,  b = r sin θ,  so z = r (cos θ + i sin θ).
The radial distance from the origin, r, is called the modulus. It is the complex equivalent of the absolute value for real numbers. It is zero if and only if the complex number is zero.
The angle, θ, is called the phase. (Some older books refer to it as the argument.) It is zero for positive real numbers, and π radians (180 degrees) for negative ones.
The multiplication of complex numbers takes a particularly interesting and useful form when represented this way. If two complex numbers z₁ = r₁(cos θ₁ + i sin θ₁) and z₂ = r₂(cos θ₂ + i sin θ₂) are represented in modulus/phase form, we have:
z₁ z₂ = r₁ r₂ [(cos θ₁ cos θ₂ − sin θ₁ sin θ₂) + i (sin θ₁ cos θ₂ + cos θ₁ sin θ₂)].
But that is just the addition rule for sines and cosines!
So the rule for multiplying complex numbers on the Argand diagram is just:
- Multiply the moduli.
- Add the phases.
One can use this property to find all of the nth roots of a number geometrically. For example, using the values cos 120° = −1/2 and sin 120° = √3/2:
The three cube roots of 1, listed above, can all be seen to have moduli of 1 and phases of 0 degrees, 120 degrees, and 240 degrees respectively. When raised to the third power, the phases are tripled, obtaining 0, 360, and 720 degrees. But they are all the same angle—zero. So the cubes of these numbers are all just one. Similarly, i and −i have phases of 90 degrees and 270 degrees. When those numbers are squared, the phases are 180 and 540, both of which are the same angle—180 degrees. So their squares are both −1.
One can apply this property to the nth roots of any complex number. They lie equally spaced on a circle. This is DeMoivre's theorem.
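A short sketch of this root-finding recipe (take the nth root of the modulus, divide the phase by n, and step the phase by 360/n degrees), using Python's cmath module:

import cmath

def nth_roots(z, n):
    """All n-th roots of z: n-th root of the modulus, phases spaced 2*pi/n apart."""
    r, theta = cmath.polar(z)
    return [cmath.rect(r ** (1 / n), (theta + 2 * cmath.pi * k) / n)
            for k in range(n)]

for root in nth_roots(1, 3):               # the three cube roots of 1
    print(root, "-> cubed:", root ** 3)    # cubes come back to 1 (up to rounding error)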
A careful analysis of the power series for the exponential, sine, and cosine functions reveals the marvelous Euler formula:
e^(iθ) = cos θ + i sin θ,
of which there is the famous case (for θ = π):
e^(iπ) + 1 = 0.
The complex conjugate, or just conjugate, of a complex number is the result of negating its imaginary part. The conjugate is written with a bar over the quantity: the conjugate of z = a + bi is z̄ = a − bi. All real numbers are their own conjugates.
All arithmetic operations work naturally with conjugates—the sum of the conjugates is the conjugate of the sum, and so on.
It follows that, if P is a polynomial with real coefficients (so that its coefficients are their own conjugates), then P(z̄) is the conjugate of P(z) for every complex number z.
If z is a root of a real polynomial, then, since zero is its own conjugate, z̄ is also a root. This is often expressed as "Non-real roots of real polynomials come in conjugate pairs." We saw that above for the cube roots of 1—two of the roots are complex and are conjugates of each other. The third root is its own conjugate.
The higher mathematical functions (often called "transcendental functions"), like exponential, log, sine, cosine, etc., can be defined in terms of power series (Taylor series). They can be extended to handle complex arguments in the completely natural way, so these functions are defined over the complex plane. They are in fact "complex analytic functions". Just about any normal function one can think of can be extended to the complex numbers, and is complex analytic. Since the power series coefficients of the common functions are real, they work naturally with conjugates. For example, the conjugate of e^z is e^(z̄).
The general study of functions that take a complex argument and return a complex result is an extremely rich and useful area of mathematics, known as complex analysis. When such a function is differentiable, it is called a complex analytic function, or just analytic function.
Complex numbers are extremely important in many areas of pure and applied mathematics and physics. Basically, any description of oscillatory phenomena can benefit from a formulation in terms of complex numbers.
The applications include these:
- Theoretical mathematics:
- algebra (including finding roots of polynomials)
- linear algebra—vector spaces, inner products, Hermitian and unitary operators, Hilbert spaces, etc.
- eigenvalue/eigenvector problems
- number theory—This seems improbable, but it is true. The Riemann zeta function, and the Riemann hypothesis, are very important in number theory. For a long time, the only proofs of the prime number theorem used complex numbers. The Wiles/Taylor proof of Fermat's Last Theorem uses modular forms, which use complex numbers.
- Applied mathematics
- eigenvalue/eigenvector problems
- Fourier and Laplace transforms
- linear differential equations
- stability theory
- Electrical engineering
- alternating current circuit analysis
- filter design
- antenna design
- Theoretical physics
- quantum mechanics
- fundamental particle physics
- electromagnetic radiation
In the previous section, you learned how to compute descriptive statistics and frequency analyses from our questionnaire data. In this section, you will learn how to analyze the relationship between several variables in the questionnaire data, in particular cross tabulation (pivot table), sometimes called a contingency table, and the chi-square test of independence.
4. Does the existence of a children's playground in the park lead to higher visitor satisfaction?
5. Is there any relationship between the activities of families and their activity time in the park?
6. Is there any relationship between activity time and the mode of transport to the park?
Because those research problems are about the relationship between two variables, we need to do what is called Cross Tabulation. Cross Tabulation (or CrossTab for short) is a frequency table of two or more variables. For readability, it normally involves fewer than 4 variables. The cross tabulation table goes by many names, but they all refer to the same thing: some statisticians call it a Contingency Table, while MS Excel calls it a Pivot Table.
You may want to go directly to the interactive programs that I made, or you can read the explanation on this page.
In Microsoft Excel, CrossTabs can be automated using a Pivot Table. You may use either the Pivot Table icon in the toolbar or the MS Excel menu Data – Pivot Table and Pivot Chart Report.
When you click the toolbar icon or the menu, the Pivot Table wizard will pop up; click Next.
In step 2 of the wizard, highlight the data, including the labels at the top of the data, as shown in the following figure.
In step 3 of the Pivot Table Wizard, select Layout button
To answer the question about the relationship between the variables Playground and Satisfaction, drag and drop the variable names on the right into the diagram. Put the Satisfaction button in the row, the Playground button in the column, and drop Satisfaction once more into the Data area. It will appear as Sum of Satisfaction. After that, double click the last button (Sum of Satisfaction) and the Pivot Table Field dialog will appear. Select Summarize by Count and then click the OK button twice.
When you go back to the Step 3 of Pivot table wizard, click Finish button.
MS Excel will automatically create the cross tabulation table. Personally, I don't like to use it directly because it may contain very long formulas. Thus, I prefer to highlight this Pivot Table and use menu Edit - Copy (Ctrl-C).
Then select another cell, and use menu Edit - Paste Special . Click Values options and click OK button
After we reformat, we need to compute the independent values of the table (the values expected if the variables were independent). We need to do so because we want to know whether variable Playground has a relationship with variable Satisfaction or not. We will perform a simple test called the Chi-square test. If the result of the chi-square test shows that variable Playground is independent of variable Satisfaction, then we cannot conclude any relationship between the two variables. Otherwise, we can conclude there is a relationship. Sounds easy, doesn't it?
To get the independent values, we need to compute:
Independent(i, j) = (total of row i × total of column j) / total of all data.
Don't bother with the formula if you don't understand the Σ notation; it is just a symbol for summation. The meaning is this: to get a cell of the independent table, we multiply the total of its row by the total of its column and then divide by the total of all data.
For example, for Satisfaction = 1 and Playground = 1 we have data from 2 respondents. The total of that row is 7, the total of that column is 4, and the total of all data is 12. The independent value for that cell is therefore 7 × 4 / 12 = 2.333. Doing the same for all cells in the table, we get the table of independent values. This table means that if variable Playground were 100% independent of variable Satisfaction, then the contents of the cells would be equal to these values.
To make sure that variable Playground has a relationship with (is not independent of) variable Satisfaction, we need to keep the degree of independence as small as possible, say less than 5%. This 5% is the error or mistake that may happen by chance (we cannot rule out getting the result out of pure luck). Some people call it the significance level.
The problem now is how to get an index that indicates the degree of independence. Mathematicians are very smart: they invented probability to represent that "degree of independence". To determine that probability, we compute the difference between the observed values (from the Pivot table) and the expected values (from the independent table), square this difference, divide by the expected value (to get the same unit back), and sum over all entries of the table. In short:
χ² = Σ (Observed − Expected)² / Expected.
To get the probability, we need to compute the degrees of freedom (df), that is:
df = (number of rows − 1) × (number of columns − 1).
The probability can be obtained using the MS Excel function =CHIDIST(χ², df). Now the Chi-square test reads like this:
If the probability is lower than 0.05, the two variables have a relationship; otherwise we cannot conclude any relationship between the two variables in the contingency table.
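The same steps can be sketched from scratch in Python; the observed table below is a small made-up example (not the survey's actual data), chosen so that its first cell reproduces the 7 × 4 / 12 = 2.333 expected value computed above:

observed = [
    [2, 5],   # Satisfaction = 1: playground present / absent (hypothetical counts)
    [1, 1],   # Satisfaction = 2
    [0, 2],   # Satisfaction = 3
    [1, 0],   # Satisfaction = 4
]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand_total = sum(row_totals)

# Independent (expected) values: row total x column total / total of all data
expected = [[r * c / grand_total for c in col_totals] for r in row_totals]

# Chi-square statistic: sum of (observed - expected)^2 / expected over all cells
chi_square = sum((o - e) ** 2 / e
                 for obs_row, exp_row in zip(observed, expected)
                 for o, e in zip(obs_row, exp_row))

df = (len(observed) - 1) * (len(observed[0]) - 1)
print("chi-square =", round(chi_square, 3), "df =", df)
print("relationship?", chi_square > 7.8147)   # 5% critical value for df = 3 (the 7.8147 mentioned below)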
As shown in the figure above, the probability value is 0.048; thus we may conclude that there is a relationship between the existence of a playground and the satisfaction level of the park's visitors. This answers the research question "Does the existence of a children's playground in the park lead to higher visitor satisfaction?" positively.
Note that if you follow an old statistics book, it will ask you to compare the chi-square value (χ²) with the chi-square value from the table. You can get the chi-square value from the table in MS Excel using the function =CHIINV(probability, degrees of freedom). If you put in probability 0.05 and 3 degrees of freedom, you will get 7.8147 (check your chi-square table, if you have one, to see whether it is the same number). Because you have already put in the probability here, you should compare using the chi-square value itself. The chi-square test then reads: if the computed chi-square value is larger than the value from the table, the two variables have a relationship; otherwise we cannot conclude any relationship.
Another note: MS Excel also provides the function =CHITEST(actual_range, expected_range) to simplify the computation of the probability. In this case, you don't need to create the third table to compute the chi-square value. The result is directly the probability, which you can compare with the significance level (i.e. 0.05).
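A Python analogue of =CHITEST, assuming SciPy is available, is scipy.stats.chi2_contingency; it returns the statistic, the probability, the degrees of freedom and the expected table in one call (using the same hypothetical table as in the sketch above):

from scipy.stats import chi2_contingency

observed = [[2, 5], [1, 1], [0, 2], [1, 0]]   # the same hypothetical table as above
chi_square, probability, df, expected = chi2_contingency(observed)
print("chi-square =", round(chi_square, 3), "df =", df, "probability =", round(probability, 3))
print("relationship at the 5% level?", probability < 0.05)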
For the next research questions, we can do the same steps as above in the new worksheet.
To get the Pivot table of Activity and activity time, drag and drop the Time button into the column area and all the activity (1 to 6) buttons into the data area. Leave them in the Sum field, because our activity data are binary.
Following the same steps as above, we obtain a probability of 46.2% that variable Activity is independent of Time. Since this is larger than the 5% required by the chi-square test, we conclude that there is no relationship between the activities of families and their activity time in the park. This answers research question 5.
For the last research question, we use variable Time in the column of the Pivot table and Count of Mode in the data area. The result is shown in the figure below.
Since the chi-square probability is larger than 0.05, we conclude that there is no relationship between the mode of transport to the park and the activity time in the park. This answers research question 6 negatively. Try the interactive online program to compute the chi-square independence test.
You have seen that simple data analysis using MS Excel Data Analysis – Descriptive Statistics and Pivot Tables can be used to analyze your data from a questionnaire survey. In the next section, you can play around with the online interactive program for data analysis.
The preferred reference for this tutorial is:
Teknomo, Kardi. Data Analysis from Questionnaires. http://people.revoledu.com/kardi/tutorial/Questionnaire/
Topics covered: Concepts covered in this lecture include Hydrostatics, Archimedes' Principle, Fluid Dynamics, What makes your Boat Float?, and Bernoulli's Equation.
Instructor/speaker: Prof. Walter Lewin
Date recorded: November 17, 1999
Stability of Floating Objects
Fluid Dynamics, Bernoulli's Equation
Fluid Mechanic Magic
Today we're going to continue with playing with liquids.
If I have an object that floats, a simple cylinder that floats in some liquid, the area is A here, the mass of the cylinder is M.
The density of the cylinder is rho and its length is l and the surface area is A.
So this is l.
And let the liquid line be here, and the fluid has a density rho fluid.
I call this level y1, this level y2.
The separation is h, and right on top here, there is the atmospheric pressure P2, which is the same as it is here on the liquid.
And here we have a pressure P1 in the liquid.
For this object to float we need equilibrium between, on the one hand, the force Mg and the buoyant force.
There is a force up here which I call F1, and there is a force down here which I call F2--
The force is always perpendicular to the surface.
There couldn't be any tangential component because then the air starts to flow, and it's static.
And here we have F1, which contains the hydrostatic pressure.
So P1 minus P2--
as we learned last time from Pascal--
equals rho of the fluid g to the minus y2 minus y1, which is h.
So that's the difference between the pressure P1 and P2.
For this to be in equilibrium, F1 minus F2 minus Mg has to be zero, and this we call the buoyant force.
And "buoyant" is spelt in a very strange way: b-u-o-y-a-n-t.
I always have to think about that.
It's the buoyant force.
F1 equals the area times P1 and F2 is the area times P2, so it is the area times P1 minus P2, and that is rho fluids times g times h.
And when you look at this, this is exactly the weight of the displaced fluid.
The area times h is the volume of the fluid which is displaced by this cylinder, and you multiply it by its density, that gives it mass.
Multiply it by g, that gives it weight.
So this is the weight of the displaced fluids.
And this is a very special case of a general principle which is called Archimedes' principle.
Archimedes' principle is as follows: The buoyant force on an immersed body has the same magnitude as the weight of the fluid which is displaced by the body.
According to legend Archimedes thought about this while he was taking a bath, and I have a picture of that here--
I don't know from when that dates--
but you see him there in his bath, but what you also see are there are two crowns.
And there is a reason why those crowns are there.
Archimedes lived in the third century B.C.
Archimedes had been given the task to determine whether a crown that was made for King Hieron II was pure gold.
The problem for him was to determine the density of this crown--
which is a very irregular-shaped object--
without destroying it.
And the legend has it that as Archimedes was taking a bath, he found the solution.
He rushed naked through the streets of Syracuse and he shouted, "Eureka! Eureka! Eureka!" which means, "I found it! I found it!" What did he find? What did he think of? He had the great vision to do the following: You take the crown and you weigh it in a normal way.
So the weight of the crown--
I call it W1--
is the volume of the crown times the density of which it is made.
If it is gold, it should be 19.3, I believe, and so this is the mass of the crown and this is the weight of the crown.
Now he takes the crown and he immerses it in water.
And he has a spring balance, and he weighs it again.
And he finds that the weight is less and so now we have the weight immersed in water.
So what you get is the weight of the crown minus the buoyant force, which is the weight of the displaced fluid.
And the weight of the displaced fluid is the volume of the crown--
because the crown is where...
the water has been removed where the crown is--
times the density of the fluid--
which is water, which he knew very well--
And so this part here is weight loss.
That's the loss of weight.
You can see that, you can measure that with a spring.
It's lost weight, because of the buoyant force.
And so now what he does, he takes W1 and divides that by the weight loss and that gives you this term divided by this term, which immediately gives you rho of the crown divided by rho of the water.
And he knows rho of the water, so he can find rho of the crown.
It's an amazing idea; he was a genius.
I don't know how the story ended, whether it was gold or not.
It probably was, because chances are that if it hadn't been gold that the king would have killed him--
for no good reason, but that's the way these things worked in those days.
This method is also used to measure the percentage of fat in persons' bodies, so they immerse them in water and then they weigh them and they compare that with their regular weight.
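Lewin's weighing trick can be written down in a couple of lines; the two readings below are invented example numbers, not values from the lecture:

RHO_WATER = 1.0   # g/cm^3

def density_from_weighing(weight_in_air, weight_in_water):
    """Archimedes' trick: rho_object / rho_water = W_air / (W_air - W_water)."""
    weight_loss = weight_in_air - weight_in_water   # equals the buoyant force
    return RHO_WATER * weight_in_air / weight_loss

# Invented example readings, in the same units (say, newtons):
print(density_from_weighing(10.00, 9.48))   # about 19.2, consistent with gold (19.3 g/cm^3)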
Let's look at an iceberg.
Here is an iceberg.
Here is the water--
it's floating in water.
It has mass M, it has a total volume V total, and the density of the ice is rho ice, which is 0.92 in grams per cubic centimeter.
It's less than water.
This is floating, and so there's equilibrium between Mg and the buoyant force.
So Mg must be equal to the buoyant force.
Now, Mg is the total volume times rho ice times g, just like the crown.
The buoyant force is the volume underwater, which is this part, times the density of water, rho water, times g.
You lose your g, and so you find that the volume underwater divided by the total volume equals rho ice divided by rho of water, which is 0.92.
That means 92% of the iceberg is underwater, and this explains something about the tragedy on April 15, 1912, when the Titanic hit an iceberg.
When you encounter an iceberg, you literally only see the tip of the iceberg.
That's where the expression comes from.
92% is underwater.
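A one-line check of the floating-fraction argument, using the densities quoted in the lecture:

RHO_ICE = 0.92    # g/cm^3, as used in the lecture
RHO_WATER = 1.00  # fresh water, as used in the lecture

submerged_fraction = RHO_ICE / RHO_WATER
print(f"{submerged_fraction:.0%} of the iceberg is under water")   # 92%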
I want to return now to my cylinder, and I want to ask myself the question, when does that cylinder float? What is the condition for floating? Well, clearly, for that cylinder to float the buoyant force must be Mg, and the buoyant force is the area times h--
that's the volume underwater--
multiplied by the density of the fluid times g must be the total volume of the cylinder, which is the area times l, because that was the length of the cylinder, times the density of the object itself times g.
I lose my A, I lose my g, but I know that h must be less than l; otherwise it wouldn't be floating, right? The part below the water has to be smaller than the length of the cylinder.
And if h is less than l, that means that the density of the fluid must be larger than the density of the object, and this is a necessary condition for floating.
And therefore, if an object sinks then the density of the object is larger than the density of the fluid.
And the amazing thing is that this is completely independent of the dimensions of the object.
The only thing that matters is the density.
If you take a pebble and you throw it in the water, it sinks, because the density of a pebble is higher than water.
If you take a piece of wood, which has a density lower than water, and you throw it on water, it floats independent of its shape.
Whether it sinks or whether it floats, the buoyant force is always identical to the weight of the displaced fluid.
And this brings up one of my favorite questions that I have for you that I want you to think about.
And if you have a full understanding now of Archimedes' principle, you will be able to answer it, so concentrate on what I am going to present you with.
I am in a swimming pool, and I'm in a boat.
Here is the swimming pool and here is the boat, and I am sitting in the boat and I have a rock here in my boat.
I'm sitting in the swimming pool, nice rock in my boat.
I mark the waterline of the swimming pool very carefully.
I take the rock and I throw it overboard.
Will the waterline go up, or will the waterline go down, or maybe the waterline will stay the same? Now, use your intuition--
don't mind being wrong.
At home you have some time to think about it, and I am sure you will come up with the right answer.
Who thinks that the waterline will go up the swimming pool? Who thinks that the waterline will go down? Who thinks that it will make no difference, that the waterline stays the same?
Well, the waterline will change, but you figure it out.
Okay, you apply Archimedes' principle and you'll get the answer.
I want to talk about stability, particularly stability of ships, which is a very important thing--
Suppose I have an object here which is floating in water.
Here is the waterline, and let here be the center of mass of that object.
Could be way off center.
It could be an iceberg, it could be boulders, it could be rocks in there, right? It doesn't have to be uniform density.
The center of mass could be off the center...
of the geometric center.
So if this object has a certain mass, then this is the gravitational force.
But now look at the center of mass fluid that is displaced.
That's clearly more here, somewhere here, the displaced fluid.
That is where the buoyant force acts.
And so now what you have...
You have a torque on this object relative to any point that you choose.
It doesn't matter where you pick a point, you have a torque.
And so what's going to happen, this object is clearly going to rotate in this direction.
And the torque will only be zero when the buoyant force and the gravitational force are on one line.
Then the torque becomes zero, and then it is completely happy.
Now, there are two ways that you can get them on one line.
We discussed that earlier in a different context.
You can either have the center of mass of the object below the center of mass of the displaced fluid or above.
In both cases would they be on one line.
However, in one case, there would be stable equilibrium.
In the other, there would not be a stable equilibrium.
I have here an object which has its center of mass very low.
You can't tell that--
no way of knowing.
All you know is that the weight of the displaced fluid that you see here is the same as the weight of the object.
That's all you know.
If I took this object and I tilt it a little with the center of mass very low--
so here is Mg and here is somewhere the waterline--
so the center of mass of the displaced fluid is somewhere here, so Fb is here, the buoyant force, you can see what's going to happen.
It's going to rotate towards the right--
it's a restoring torque, and so it's completely stable.
I can wobble it back and forth and it is stable.
If I would turn it over, then it's not stable, because now I would have the center of mass somewhere here, high up, so now I have Mg.
And the center of the buoyant force, the displaced water, is about here, so now I have the buoyant force up, and now you see what's going to happen.
I tilt it to the side, and it will rotate even further.
This torque will drive it away from the vertical.
And that's very important, therefore, with ships, that you always build the ship such that the center of mass of the ship is as low as you can get it.
That gives you the most stable configuration.
If you bring the center of mass of ships very high--
in the 17th century, they had these very massive cannons which were very high on the deck--
then the ship can capsize, and it has happened many times because the center of mass was just too high.
So here... the center of mass is somewhere here.
Very heavy, this part.
And so now, if I lower it in the water notice it goes into the water to the same depth, because the buoyant force is, of course, the same, so the amount of displaced water is the same in both cases.
But now the center of mass is high and this is very unstable.
When I let it go, it flips over.
So the center of mass of the object was higher than the center of mass of the displaced fluid.
And so with ships, you have to be very careful about that.
Let's talk a little bit about balloons.
If I have a balloon, the situation is not too dissimilar from having an object floating in a liquid.
Let the balloon have a mass M.
That is the mass of the gas in the balloon plus all the rest, and what I mean by "all the rest"...
That is the material of the balloon and the string--
everything else that makes up the mass.
It has a certain volume v, and so there is a certain rho of the gas inside and there is rho of air outside.
And I want to evaluate what the criterion is for this balloon to rise.
Well, for it to rise, the buoyant force will have to be larger than Mg.
What is the buoyant force? That is the weight of the displaced fluid.
The fluid, in this case, is air.
So the weight of the displaced fluid is the volume times the density of the air--
that's the fluid in which it is now--
times g, that is the buoyant force.
That's... the weight of the displaced fluid has to be larger than Mg.
Now, Mg is the mass of the gas, which is the volume of the gas times the density of the gas.
That's the mass times g--
because we have to convert it to a force--
plus all the rest, times g.
I lose my g, and what you see...
that this, of course, is always larger than zero.
There's always some mass associated with the skin and in this case with the string.
But you see, the only way that this balloon can rise is that the density of the gas must be smaller than the density of air.
Density of the gas must be less than the density of the air.
This is a necessary condition for this to hold.
It is not a sufficient condition, because I can take a balloon, put a little bit of helium in there--
so the density of the gas is lower than the density of air--
but it may not rise, and that's because of this term.
But it is a necessary condition but not a sufficient condition.
Now I'm going to make you see a demonstration which is extremely nonintuitive, and I will try, step by step, to explain to you why you see what you see.
What you're going to see, very nonintuitive, so try to follow closely why you see what you will see.
I have here a pendulum with an apple, and here I have a balloon filled with helium.
I cut this string and I cut this string.
Gravity is in this direction.
The apple will fall, the balloon will rise.
The balloon goes in the opposite direction than the gravitational acceleration.
If there were no gravity, this balloon would not rise and the apple would not fall.
Do we agree so far? Without gravity, apple would not fall, balloon would not rise.
Now we go in outer space.
Here is a compartment and here is an apple.
I'm here as well.
None of us have weight, there's no gravity, and here is a helium-filled object, a balloon, and there's air inside.
We're in outer space, there's no gravity.
Nothing has any weight.
We're all floating.
Now I'm going to accelerate.
I have a rocket--
I'm going to accelerate it in this direction with acceleration a.
We all perceive, now, a perceived gravity in this direction.
I call it g.
So the apple will fall.
I'm standing there, I see this apple fall.
I'm in this compartment, closed compartment.
I see the apple go down.
A little later, the apple will be here.
I myself fall; a little later, I'm there.
I can put a bathroom scale here and weigh myself on the bathroom scale.
My weight will be M times this a, M being my mass, a being this acceleration.
I really think that it is gravity in this direction.
The air wants to fall, but the balloon wants to go against gravity.
The balloon will rise.
The air wants to fall, so inside here you create a differential pressure between the bottom, P1, and the top of the air, P2, inside here.
Just like the atmosphere on earth--
the atmosphere is pushing down on us--
the pressure is here higher than there.
So you get P1 is higher than P2.
So you create yourself an atmosphere, and the balloon will rise.
The balloon goes in the opposite direction of gravity.
If there were no air in there, then clearly all of us would fall: The apple would fall, I would fall, and the helium balloon would fall.
The only reason why the helium balloon rises is because the air is there and because you build up this differential pressure.
Now comes my question to you: Instead of accelerating it upwards and creating perceived gravity down, I'm now going to accelerate it in this direction, something that I'm going to do shortly in the classroom.
I'm going to accelerate all of us in this direction a.
In which direction will the apple go? In which direction will the balloon go? What do you think? The apple will go in the direction that it perceives gravity.
The apple will go like this.
I will go like this.
The air wants to go like this.
goes in the opposite direction of gravity, so helium goes in this direction.
In fact, what you're doing, you're building here an atmosphere where pressure P1 here will be higher than the pressure P2 there.
The air wants to go in this direction.
The pressure here is higher than the pressure there--
larger than zero.
If there's no air in there, we would all fall.
Helium would fall...
helium balloon would fall, apple would fall, and I would fall.
I have here an apple on a string in a closed compartment, not unlike what we have there except I can't take you out to an area where we have no gravity.
So here is that closed compartment, and here is the apple.
There is gravity in this direction.
It wants to fall in that direction of gravity if I cut the wire.
Now I'm going to accelerate it in this direction, and when I do that, I add a perceived component of gravity in the opposite direction.
So I add a perceived component of gravity in this direction.
So this apple wants to fall down because of the gravity that I cannot avoid, and it wants to fall in this direction.
So what will the string do? It's very clear, very intuitive, no one has any problem with that--
the string will do this.
Now I have a balloon here.
There is gravity in this direction.
That's why the balloon wants to go up.
It opposes gravity.
I'm going to accelerate the car in this direction.
I introduce perceived gravity in this direction.
What does the balloon want to do? It wants to go against gravity.
I build up in here, and it must be a closed compartment...
I must build up there a pressure differential.
The air wants to fall in this direction.
I build up a pressure here which is larger than the pressure there.
That's why it has to be a closed compartment.
What will the helium balloon do? It will go like that.
That is very nonintuitive.
So I accelerate this car.
As I will do, the apple will go back, which is completely consistent with all our intuition, but the helium balloon will go forward.
Let's first do it with the apple, which is totally consistent with anyone's intuition.
I'm going to make sure that the apple is not swinging too much.
Now, it only happens during the acceleration, so it's only during the very short portion that I accelerate that you see the apple go back, and then of course it starts to swing--
forget that part.
So watch closely--
only the moment that I accelerate the apple will come this way.
It goes in the direction of the extra component of perceived gravity.
Boy, it almost hit this glass here.
Everyone could see that, right? Okay.
Now we're going to do it with the balloon.
We're going to take this one off.
And now let's take one of our beautiful balloons.
We're going to put a balloon in here.
Has to be a closed compartment so that the air can build up the pressure differential.
There's always problems with static charges on these systems.
Only as long as I accelerate will the balloon go in a forward direction, so I accelerate in this direction, and what you're going to see is really very nonintuitive.
Every time I see it, I say to myself, "I can reason it, but do I understand it?" I don't know, what is the difference between reasoning and understanding? There we go.
The balloon went this way.
You can do this in your car with your parents.
It's really fun to do it.
Have a string with an apple or something else and have a helium balloon.
Close the windows.
They don't have to be totally closed, but more or less, and ask your dad or your mom to slam the brakes.
If you slam the brakes, what will happen? The apple will go...
what do you think? If you slam the brakes, the apple will go forwards, balloon will go backward.
If you accelerate the car all of a sudden, the apple will go backwards and the balloon will go forward.
You can do that at home.
You can enjoy... entertain your parents at Thanksgiving.
They'll get some of their $25,000 tuition back.
When fluids are moving, situations are way more complicated than when they are static.
And this leads to, again, very nonintuitive behavior of fluids.
I will derive in a short-cut way a very famous equation which is called Bernoulli's equation, which relates kinetic energy with potential energy and pressure.
Suppose I have a fluid, noncompressible, like so.
This cross-sectional area is A2 and the pressure here is P2.
And I have a velocity of that liquid which is v2 and this level is y2.
Here I have a cross-sectional area A1.
I have a pressure P1.
My level is y1; this is increasing y.
And I have a much larger velocity because the cross-section is substantially smaller there.
Now, if this fluid were completely static, if it were not moving--
so forget about the v1 and forget about the v2; it's just sitting still--
then P1 minus P2 would be rho g times y2 minus y1 if rho is the density of the fluid.
That's Pascal's Law.
So it would just be sitting still, and we know that the pressure here would be lower than the pressure there.
This is also, if you want to, rho gh if you call this distance h.
that reminds me of mgh, and mgh is gravitational potential energy.
When I divide m by volume, I get density.
So this is really a term which is gravitational potential energy per unit volume.
That makes the m divided by volume become density.
Therefore, pressure itself must also have the dimension of energy per unit volume.
And if we now set this whole machine in motion, then there are three players: There is, on the one hand, kinetic energy--
I take it, per unit volume.
There is gravitational potential energy...
I will take it, per unit volume.
And then there is pressure.
They're equal partners.
And if I apply the conservation of energy, the sum of these three should remain constant.
That's the idea behind Bernoulli's law, Bernoulli's equation.
When I take a fluid element and I move it from one position in the tube to another position, it trades speed for either height or for pressure.
What is the kinetic energy per unit volume? Well, the kinetic energy is one-half mv squared.
I divide by volume, I get one-half rho v squared.
What is gravitational potential energy? That is mgy.
I divide by volume, and so I get rho gy plus the pressure at that location y, and that must be a constant.
And this, now, is Bernoulli's equation.
It is a conservation of energy equation.
And as I will show you, it has very remarkable consequences.
First I will show you an example whereby I keep y constant.
So I have a tube which changes diameter, but the tube is not changing with level y, as I do there.
So I come in here, cross-sectional area A1.
I widen it, cross-sectional area A2.
This is y--
it's the same for both.
I have here inside pressure P1 and here inside I have pressure P2 and this is the density of the fluid.
There is here a velocity v2, and there is here a velocity v1.
And clearly v1 is way larger than v2 because A1 times v1 must be A2 times v2 because the fluid is incompressible.
So the same amount of matter that flows through here in one second must flow through here in one second.
And so these have to be the same, and since A1 is much smaller than A2, this velocity is much larger than v2.
Now I'm going to apply Bernoulli's equation.
So the first term tells me that one-half rho v1 squared...
I can forget the second term because I get the same term here as I get there because I measure the pressure here and I measure the pressure there.
They have the same level of y.
So I can ignore the second term.
Plus P1 must be one-half rho v2 squared plus P2.
That's what Bernoulli's equation tells me.
Now, v1 is larger than v2.
The only way that this can be correct, then, is that P1 must be less than P2.
So you will say, "Big deal." Well, it's a big deal, because I would have guessed exactly the other way around, and so would you, because here is where the highest velocity is, and all our instincts would say, "Oh, if the velocity is high, there's a lot of pressure." It's exactly the other way around.
Here is the low pressure, and here is the high pressure, which is one quite bizarre consequence of Bernoulli's equation.
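A small sketch of this constant-height case, with made-up areas and speed, combining Bernoulli's equation with the continuity condition A1·v1 = A2·v2:

RHO = 1000.0   # kg/m^3, water

def pressure_rise(v2, a1, a2):
    """Return P2 - P1 for flow through areas A1 (narrow) and A2 (wide) at the same height.
    Continuity gives v1 = v2 * A2 / A1; Bernoulli gives P2 - P1 = 1/2 rho (v1^2 - v2^2)."""
    v1 = v2 * a2 / a1
    return 0.5 * RHO * (v1 ** 2 - v2 ** 2)

# Narrow section 10 cm^2, wide section 40 cm^2, 0.5 m/s in the wide section:
print(pressure_rise(0.5, 10e-4, 40e-4), "Pa higher where the flow is slower")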
You must all have encountered in your life what we call a siphon.
They were used in the medieval and they're still used today.
You have here...
A bucket in general is used with water--
We have water here, but it could be any liquid.
And I stick in here a tube which is small in diameter, substantially smaller than this area here.
And there will be water in here up to this level--
this level P2, y2.
This is y1, increasing value of y.
This height difference is h.
P2 is one atmosphere.
I put a one there--
And here, if it's open, then P1 is also one atmosphere.
So there's air in here and there's liquid in here.
I take this open end in my mouth and I suck the water in so that it's filled with this water, full with this water.
And strange as it may be, it's like making a hole in this tank.
If I take my finger off here, the water will start to run out, and I will show you that.
And you have here a velocity v1.
The water will stream down into this here and the velocity here is approximately zero, because this area is so much larger than this cross-sectional area that to a good approximation this water is going down extremely slowly.
Let's call this height difference d.
I apply Bernoulli's law.
So now we have a situation where the y's are different but the pressure is the same, because right here at this point of the liquid I have one atmosphere, which is barometric pressure, and since this is open with the outside world, P1 is also one atmosphere.
So now I lose my P term.
There I lost my y term; now I lose my P term.
So now I have that one-half rho... rho--
this is rho of the liquid--
v1 squared plus rho g times y1 must be one-half rho v2 squared, but we agreed that that was zero, so I don't have that term.
So I only have rho gy2.
I lose my g's...
no, I don't lose my g's.
One-half rho v squared--
no, that's fine.
And so... I lose my rho.
This is one-half.
I lose my rho.
And so you get that one-half v1 squared equals g times y2 minus y1, which is h.
And so what do you find? That the speed with which this water is running out here, v1, is the square root of 2gh.
And you've seen that before.
If you take a pebble and you release a pebble from this level and you let it fall, it will reach this point here, this level with the speed the square root of 2gh.
We've seen that many times.
So what is happening here--
since the pressure terms are the same here and there, now there's only a conversion.
Gravitational potential energy--
which is higher here than there--
is now converted to kinetic energy.
This siphon would only work if d is less than ten meters.
Because of the barometric pressure you can never suck up this water--
no one can; a vacuum pump can't either--
to a level that is higher than ten meters.
When I did the experiment there with the cranberry juice, I was able to get it up to five meters, but ten meters would have been the theoretical maximum.
So this has to be less than ten meters that you go up.
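The outflow speed derived above, v1 = sqrt(2gh), evaluated for a few example height differences (the heights are arbitrary illustrations):

import math

G = 9.81   # m/s^2

def efflux_speed(h):
    """Outflow speed of the siphon, v1 = sqrt(2 g h), with h the height of the liquid
    surface above the outlet; the same speed a pebble gains falling through h."""
    return math.sqrt(2 * G * h)

for h in (0.5, 1.0, 2.0):   # arbitrary example height differences in metres
    print(f"h = {h:.1f} m  ->  v = {efflux_speed(h):.1f} m/s")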
If I would have made a hole in this tank here, just like this, down to exactly this level, and I would have asked you to calculate with what speed the water is running out, you would have found exactly the same if you had applied Bernoulli's equation.
This is a way that people...
I've seen people steal other people's gasoline in the time that gasoline was very scarce and that there were no locks yet on the gasoline caps.
You would put a hose in the gasoline tank and you would have to suck on it a little--
you have to sacrifice a little bit--
you get a little bit of gasoline in your mouth, and then you can just empty someone's gasoline tank by having a canister or by having a jerrican and fill it with gasoline.
And I'm going to show that now to you by emptying...
That's still cranberry juice, by the way, from our last lecture.
So let's put this up on a stool.
So there is the hose--
it's that thing--
and I'm going to transfer this liquid from here to here.
So first I have to fill it with cranberry juice.
And there it goes.
And as long as this level is below that level, it keeps running.
Not so intuitive.
I remember, I was at a summer camp when I was maybe six or seven years old.
I couldn't believe it when I saw this for the first time.
We had these outdoor sinks where we washed ourselves and brushed our teeth, and the sink was clogged, it was full with water.
And one of the camp leaders took a hose, sucked up and it emptied itself.
And I really thought, you know, you'd have to take spoonfuls of water or maybe buckets and scoop it out.
This is the way you do it.
The nonintuitive part is that it runs up against gravity there.
So we can let it sit there and we have a transfer, mass transfer of cranberry juice.
Last time I was testing my lungs to see how strong I was.
I wasn't very good, right? I could only blow up one meter of water and only suck one meter water.
Differential pressure only one-tenth of an atmosphere.
Today I would like to test one of the students who, no doubt, is more powerful than I am.
And I have here a funnel...
with a Ping-Pong ball here, very lightweight, and we're going to have a contest to see who can blow it the highest.
I have two funnels, so it's very hygienic.
I will try it with this one.
They're clean, they just...
We just got them from the chemistry department.
And so I would like to see a volunteer--
woman or man, it doesn't matter.
You want to try it, see whether you can reach the ceiling? You don't want to try it? Come on! You want to try it? You're shy? You don't want to? Can I persuade you? I can.
Okay, come along.
Come right here.
You think you can make it to the ceiling? It's only a very light Ping-Pong ball.
So, you go like this, blow as hard as you can.
LEWIN: Try it, don't be nervous.
STUDENT: All right.
LEWIN: Straight up.
LEWIN: Blow as hard as you can--
get it out.
Amazing! Do it again.
Come on, there must have been something wrong.
LEWIN: You're not sick today, are you? Blow.
Harder! STUDENT: Is this a trick? LEWIN: No, there's nothing, there's no trick in here.
I mean, my goodness--
this is a Ping-Pong ball, I'm not a magician.
LEWIN: Come on, blow it up! Hey, it doesn't work.
Why don't you sit down?
LEWIN: Why doesn't it work? Why doesn't it work? The harder you blow, the least it will work.
Air is flowing here...
and right here, where there is very little room, the air will have very high speed, way higher than it has where it has lots of room.
And so at the highest speed, you get the lowest pressure.
And so the Ping-Pong ball is sucked in while you're blowing it.
And to give you the conclusive proof of that I will do it this way.
I will put the Ping-Pong ball like so, and I'm going to blow like this, and if I blow hard enough, the Ping-Pong ball will stay in there because I generate a lower pressure right here where the passage is the smallest, but I have to blow quite hard.
[inhales deeply, blowing hard]
You see that? Isn't that amazing? That's the reason why she couldn't get it up.
[inhales deeply, blowing hard]
That's what Bernoulli does for you.
Not so intuitive, is it?
I have here an air flow, a hose with air coming out, and I can show you there something that is equally nonintuitive.
Let's start the air flow.
It's coming out.
I take a Ping-Pong ball.
It stays there.
Is that due to Mr. Bernoulli? No.
No, that's more complicated physics, because it has to do with turbulence.
It has to do with vortices, which is very difficult.
What is happening here is that as the air flows, you get turbulence above here and the turbulence creates a lower pressure.
So the vortices, which are the turbulence, are keeping this up, because there's a lower pressure here and there.
But why is it so stable? I can see that I have...
because of this turbulence, that it's held up.
Why is it so stable? If I give it a little push it doesn't...
it's sucked back in again.
It's very stable--
that is Bernoulli.
Because if I blow air, like so...
then the velocity here is the highest, because it's diverging the air as it's coming out, but in the center, it is the highest, and so when this Ping-Pong ball goes to this side, it clearly has a lower pressure here than there and so it's being sucked back in again.
So the stability is due to Bernoulli, but the fact that it is held up is more difficult physics.
It is so stable that I can even tilt this...
and it will still stay there.
Now I have something that I want you to show your parents on Thanksgiving.
It's a little present for them, and that is something that you can very easily do at home.
You take a glass and you fill it with cranberry juice--
not all the way, up to here.
Take a thin piece of cardboard, the kind of stuff that you have on the back of pads.
You put it on top.
The table is beautifully set--
turkey, everything is there--
and you suggest to your parents that you turn this over.
Your mother will scream bloody murder, because she would think that the cranberry juice will fall out.
In fact, it may actually fall out.
I can't guarantee you that it won't.
LEWIN: But it may not, in which case you now have all the tools to explain that.
Please do invite me to your Thanksgiving dinner and I'll show it to your parents. | http://ocw.mit.edu/courses/physics/8-01-physics-i-classical-mechanics-fall-1999/video-lectures/lecture-28/ | 13 |
71 | Floating-point numbers are represented in computer hardware as base 2 (binary) fractions. For example, the decimal fraction 0.125
has value 1/10 + 2/100 + 5/1000, and in the same way the binary fraction 0.001
has value 0/2 + 0/4 + 1/8. These two fractions have identical values, the only real difference being that the first is written in base 10 fractional notation, and the second in base 2.
Unfortunately, most decimal fractions cannot be represented exactly as binary fractions. A consequence is that, in general, the decimal floating-point numbers you enter are only approximated by the binary floating-point numbers actually stored in the machine.
The problem is easier to understand at first in base 10. Consider the fraction 1/3. You can approximate that as a base 10 fraction: 0.3, or better, 0.33, or better, 0.333,
and so on. No matter how many digits you’re willing to write down, the result will never be exactly 1/3, but will be an increasingly better approximation of 1/3.
In the same way, no matter how many base 2 digits you’re willing to use, the decimal value 0.1 cannot be represented exactly as a base 2 fraction. In base 2, 1/10 is the infinitely repeating fraction 0.0001100110011001100110011001100110011001100110011...
Stop at any finite number of bits, and you get an approximation. On most machines today, floats are approximated using a binary fraction with the numerator using the first 53 bits starting with the most significant bit and with the denominator as a power of two. In the case of 1/10, the binary fraction is 3602879701896397 / 2 ** 55 which is close to but not exactly equal to the true value of 1/10.
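If you want to confirm this on your own machine, the following quick check should work on any build of CPython that uses IEEE-754 doubles (the common case); it is only an illustration:
# Verify that 0.1 is stored as the binary fraction 3602879701896397 / 2**55
num, den = (0.1).as_integer_ratio()
print(num, den)                           # 3602879701896397 36028797018963968
print(den == 2 ** 55)                     # True
print(0.1 == 3602879701896397 / 2 ** 55)  # True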
Many users are not aware of the approximation because of the way values are displayed. Python only prints a decimal approximation to the true decimal value of the binary approximation stored by the machine. On most machines, if Python were to print the true decimal value of the binary approximation stored for 0.1, it would have to display
>>> 0.1
0.1000000000000000055511151231257827021181583404541015625
That is more digits than most people find useful, so Python keeps the number of digits manageable by displaying a rounded value instead
>>> 1 / 10
0.1
Just remember, even though the printed result looks like the exact value of 1/10, the actual stored value is the nearest representable binary fraction.
Interestingly, there are many different decimal numbers that share the same nearest approximate binary fraction. For example, the numbers 0.1 and 0.10000000000000001 and 0.1000000000000000055511151231257827021181583404541015625 are all approximated by 3602879701896397 / 2 ** 55. Since all of these decimal values share the same approximation, any one of them could be displayed while still preserving the invariant eval(repr(x)) == x.
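A small experiment, using only the behavior described above, makes this concrete:
# Three different decimal literals, one underlying binary fraction.
values = [0.1,
          0.10000000000000001,
          0.1000000000000000055511151231257827021181583404541015625]
print({v.as_integer_ratio() for v in values})  # a single ratio: (3602879701896397, 36028797018963968)
print({repr(v) for v in values})               # {'0.1'} on Python 3.1 and later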
Historically, the Python prompt and built-in repr() function would choose the one with 17 significant digits, 0.10000000000000001. Starting with Python 3.1, Python (on most systems) is now able to choose the shortest of these and simply display 0.1.
Note that this is in the very nature of binary floating-point: this is not a bug in Python, and it is not a bug in your code either. You’ll see the same kind of thing in all languages that support your hardware’s floating-point arithmetic (although some languages may not display the difference by default, or in all output modes).
Python’s built-in str() function produces only 12 significant digits, and you may wish to use that instead. It’s unusual for eval(str(x)) to reproduce x, but the output may be more pleasant to look at:
>>> import math
>>> str(math.pi)
'3.14159265359'
>>> repr(math.pi)
'3.141592653589793'
>>> format(math.pi, '.2f')
'3.14'
It’s important to realize that this is, in a real sense, an illusion: you’re simply rounding the display of the true machine value.
One illusion may beget another. For example, since 0.1 is not exactly 1/10, summing three values of 0.1 may not yield exactly 0.3, either:
>>> .1 + .1 + .1 == .3
False
Also, since 0.1 cannot get any closer to the exact value of 1/10 and 0.3 cannot get any closer to the exact value of 3/10, pre-rounding with the round() function cannot help:
>>> round(.1, 1) + round(.1, 1) + round(.1, 1) == round(.3, 1)
False
Though the numbers cannot be made closer to their intended exact values, the round() function can be useful for post-rounding so that results with inexact values become comparable to one another:
>>> round(.1 + .1 + .1, 10) == round(.3, 10)
True
Binary floating-point arithmetic holds many surprises like this. The problem with “0.1” is explained in precise detail below, in the “Representation Error” section. See The Perils of Floating Point for a more complete account of other common surprises.
As that says near the end, “there are no easy answers.” Still, don’t be unduly wary of floating-point! The errors in Python float operations are inherited from the floating-point hardware, and on most machines are on the order of no more than 1 part in 2**53 per operation. That’s more than adequate for most tasks, but you do need to keep in mind that it’s not decimal arithmetic and that every float operation can suffer a new rounding error.
While pathological cases do exist, for most casual use of floating-point arithmetic you’ll see the result you expect in the end if you simply round the display of your final results to the number of decimal digits you expect. str() usually suffices, and for finer control see the str.format() method’s format specifiers in Format String Syntax.
For use cases which require exact decimal representation, try using the decimal module which implements decimal arithmetic suitable for accounting applications and high-precision applications.
Another form of exact arithmetic is supported by the fractions module which implements arithmetic based on rational numbers (so the numbers like 1/3 can be represented exactly).
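As a rough illustration of both modules (standard library only; exact display details can vary slightly between Python versions):
from decimal import Decimal
from fractions import Fraction

# Exact decimal arithmetic: no binary representation error for 0.1
print(Decimal("0.1") + Decimal("0.1") + Decimal("0.1") == Decimal("0.3"))  # True

# Exact rational arithmetic: 1/3 is represented exactly
print(Fraction(1, 3) + Fraction(1, 3) + Fraction(1, 3) == 1)               # True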
If you are a heavy user of floating point operations you should take a look at the Numerical Python package and many other packages for mathematical and statistical operations supplied by the SciPy project. See <http://scipy.org>.
Python provides tools that may help on those rare occasions when you really do want to know the exact value of a float. The float.as_integer_ratio() method expresses the value of a float as a fraction:
>>> x = 3.14159
>>> x.as_integer_ratio()
(3537115888337719, 1125899906842624)
Since the ratio is exact, it can be used to losslessly recreate the original value:
>>> x == 3537115888337719 / 1125899906842624
True
The float.hex() method expresses a float in hexadecimal (base 16), again giving the exact value stored by your computer:
>>> x.hex()
'0x1.921f9f01b866ep+1'
This precise hexadecimal representation can be used to reconstruct the float value exactly:
>>> x == float.fromhex('0x1.921f9f01b866ep+1')
True
Since the representation is exact, it is useful for reliably porting values across different versions of Python (platform independence) and exchanging data with other languages that support the same format (such as Java and C99).
Another helpful tool is the math.fsum() function which helps mitigate loss-of-precision during summation. It tracks “lost digits” as values are added onto a running total. That can make a difference in overall accuracy so that the errors do not accumulate to the point where they affect the final total:
>>> sum([0.1] * 10) == 1.0
False
>>> math.fsum([0.1] * 10) == 1.0
True
Representation Error

This section explains the “0.1” example in detail, and shows how you can perform an exact analysis of cases like this yourself. Basic familiarity with binary floating-point representation is assumed.
Representation error refers to the fact that some (most, actually) decimal fractions cannot be represented exactly as binary (base 2) fractions. This is the chief reason why Python (or Perl, C, C++, Java, Fortran, and many others) often won’t display the exact decimal number you expect.
Why is that? 1/10 is not exactly representable as a binary fraction. Almost all machines today (November 2000) use IEEE-754 floating point arithmetic, and almost all platforms map Python floats to IEEE-754 “double precision”. 754 doubles contain 53 bits of precision, so on input the computer strives to convert 0.1 to the closest fraction it can of the form J/2**N where J is an integer containing exactly 53 bits. Rewriting
1 / 10 ~= J / (2**N)
as
J ~= 2**N / 10
and recalling that J has exactly 53 bits (is >= 2**52 but < 2**53), the best value for N is 56:
>>> 2**52 <= 2**56 // 10 < 2**53
True
That is, 56 is the only value for N that leaves J with exactly 53 bits. The best possible value for J is then that quotient rounded:
>>> q, r = divmod(2**56, 10)
>>> r
6
Since the remainder is more than half of 10, the best approximation is obtained by rounding up:
>>> q+1
7205759403792794
Therefore the best possible approximation to 1/10 in 754 double precision is:
7205759403792794 / 2 ** 56
Dividing both the numerator and denominator by two reduces the fraction to:
3602879701896397 / 2 ** 55
Note that since we rounded up, this is actually a little bit larger than 1/10; if we had not rounded up, the quotient would have been a little bit smaller than 1/10. But in no case can it be exactly 1/10!
So the computer never “sees” 1/10: what it sees is the exact fraction given above, the best 754 double approximation it can get:
>>> 0.1 * 2 ** 55
3602879701896397.0
If we multiply that fraction by 10**55, we can see the value out to 55 decimal digits:
>>> 3602879701896397 * 10 ** 55 // 2 ** 55
1000000000000000055511151231257827021181583404541015625
meaning that the exact number stored in the computer is equal to the decimal value 0.1000000000000000055511151231257827021181583404541015625. Instead of displaying the full decimal value, many languages (including older versions of Python), round the result to 17 significant digits:
>>> format(0.1, '.17f')
'0.10000000000000001'
>>> from decimal import Decimal
>>> from fractions import Fraction
>>> Fraction.from_float(0.1)
Fraction(3602879701896397, 36028797018963968)
>>> (0.1).as_integer_ratio()
(3602879701896397, 36028797018963968)
>>> Decimal.from_float(0.1)
Decimal('0.1000000000000000055511151231257827021181583404541015625')
>>> format(Decimal.from_float(0.1), '.17')
'0.10000000000000001' | http://docs.python.org/release/3.1.3/tutorial/floatingpoint.html | 13
113 | How the ENIAC Took a Square Root
Abstract: The ENIAC (Electronic Numerical Integrator and Computer) is the world's first electronic computer. However it could only store twenty 10-digit decimal numbers and was programmed by wiring the computational units together. These limitations made it very unlike today's stored-program computers. The ENIAC had hardware to add, subtract, multiply, divide and take a square root. This last operation is interesting since computers normally don't do square roots in hardware. So given the limited capabilities of the ENIAC, how did it take a square root? A slightly updated version of How the ENIAC took a Square Root (revised 2009) in .pdf format is also available.
History: The ENIAC was a wartime effort by the University of Pennsylvania's Moore School of Electrical Engineering for the Army's Ballistics Research Lab at Aberdeen, Maryland. Its purpose was to compute "firing tables" for artillery, information that gunners would use to properly aim and fire their guns. During World War II such computational work for firing tables was being done using the Moore School's Differential Analyzer, an analog device that could solve differential equations. In 1942 John Mauchly, a physics professor working at the Moore School who had a long-time interest in scientific computing, submitted a proposal for using vacuum tube devices for high speed computing. Discussions with J. Presper Eckert, a graduate student at the Moore School, convinced him that such a device was possible. In 1943, when the need for more firing tables became more acute, Mauchly's proposal was brought up again, with the result that the Army awarded a contract to the Moore School to build what we know today as the ENIAC. Mauchly was the principal consultant and Eckert the chief engineer. Work on the ENIAC began in the summer of 1943, but the ENIAC was not completed until after the war ended; the ENIAC was officially unveiled in February 1946.
Overview of the ENIAC: The ENIAC was "built" around twenty 10 decimal digit Accumulators which could add and subtract at electronic speeds. To add or subtract two numbers, the contents of one accumulator were sent to a second. Accumulators could "receive" a number, transmit their contents "additively" (for addition) or transmit "subtractively" (for subtraction). The ENIAC was capable of performing 5000 additions/subtractions per second!
An accumulator contained ten decade counters. Each decade counter (designed by Eckert) was a ten-state circuit that could store a single decimal digit, much like a ten-position "counter wheel" from a mechanical calculator. An electronic "pulse" (a "square wave" __|---|__ ) would advance the decade counter one position. Digits were sent as a "train of pulses", so if a decade counter was in the "4" state, upon receiving a train of 3 pulses ( __|---|_|---|_|---|__ ) it would advance to the "7" state. If it received a train of 8 pulses it would advance with "wrap-around" to the "2" state while generating a "carry-pulse" to the next decade. Subtraction was done by using 9's complement digits (i.e. -7 was sent as 9 - 7 = 2 pulses) with an extra pulse added to the units digit (essentially tens-complement notation). The ten digit "pulse trains" plus a sign "pulse" were sent over eleven parallel wires. An eleventh two-state plus-minus counter was used for the sign.
The ENIAC also had a high-speed Multiplier unit for multiplication. The Multiplier contained the logic to multiply a ten digit number by a single digit to obtain a partial product. The partial products were then added together (using accumulators) to obtain the final product. The Multiplier made use of four accumulators to multiply (six for a 20 digit product).
The final computational unit was a Divider/Square Rooter. Division and taking a square root were orchestrated as a series of subtractions/additions and shifts which, like the Multiplier, made use of a number of accumulators but, unlike the Multiplier, contained no special computational hardware to do either; in other words it used accumulators to do the needed addition and subtraction. All work was done in decimal. Division was done by repeated subtractions followed by repeated additions etc., using a technique called "non-restoring division". As we shall see, taking a square root used a similar technique, which is probably why the two operations were combined in one unit.
Input/Output was provided by a mechanical IBM card Reader and card Punch which were connected to an electronic Constant Transmitter used to store constants. The Constant Transmitter provided the interface between the slow-speed mechanical I/O devices and the rest of the high-speed electronic ENIAC. There were also three Function Table units, essentially 100 by 10 digit ROM memories which were set by switches.
The units of the ENIAC were connected by two buses: a data bus used to transmit the "ten digits plus sign" over parallel wires and a control bus. Program control for the ENIAC was distributed, not centralized. Each accumulator contained control logic that would allow it to "work" with other accumulators to perform a sequence of calculations. Programming was accomplished by setting switches on the various units and wiring the connections between them using the control bus for control signals and the data bus for data. A Master Control unit was used to "loop" the various sequence of calculations set up between accumulators. A Cycling Unit synchronized the various units over a third cycle bus (not shown above). There was also an Initializing Unit.
The ENIAC did not use a "fetch-decode-execute" cycle to execute its program since there was no memory to store instructions. And the ENIAC was not "programmed" using paper tape, unlike Zuse's Z3 (a device unknown to Mauchly and Eckert) or Aiken's Automatic Sequence Controlled Calculator (Harvard Mark I) completed in the summer of 1944, both of which read their instructions from paper tape readers. The reason for not using paper tape readers was the slow speed of such mechanical devices. If the ENIAC was to be truly fast, both instructions and calculations had to be executed at electronic speeds. The only way to effectively do the former was to "wire" the program into the machine. The idea was not completely new; IBM punch card equipment could be programmed in a limited way using plug boards.
All of this was packaged into 40 panels, each 2 feet wide by 8 feet high, arranged in a U shape in a 30 by 60 foot area. A diagram of the ENIAC appears in the "ENIAC Progress Report" of June 30, 1945.
A Method for Taking a Square Root
The method used by the ENIAC Square Rooter to take a square root required only addition and subtraction. It was based on the formula that the sum of the first n odd integers is n squared: 1 + 3 + 5 + ... + (2n-1) = n^2.
To calculate the square root of m, find the smallest integer n such that the sum of the first n odd integers exceeds m. This can be done by subtracting the consecutive odd integers from m until a negative result is obtained. If n is the smallest integer such that m - (1 + 3 + 5 + ... + (2n-1)) < 0 then (n-1)^2 <= m < n^2. Letting a equal the nth odd integer 2n-1, solving the double inequality n-1 <= sqrt(m) < n in terms of a gives (a-1)/2 <= sqrt(m) < (a+1)/2.
Example: To estimate the square root of 7251, subtract the consecutive odd integers until a negative result is obtained. Note that 7251 - (1 + 3 + 5 + ... + 169) = 26 and 7251 - (1 + 3 + 5 + ... + 169 + 171) = -145, so a = 171. Thus 85 <= sqrt(7251) < 86.
Aside: By comparing the 26 and the -145 observe that 85 is the closer approximation. Using linear interpolation, 26/171 = 0.1520 so 85.152 is a better approximation. (compare with 85.1528). - End of Example
Additional precision is possible if you first multiply m by a power of 100, take the square root, then divide by the same power of 10. For example, if we multiply 7251 by 100^2 (one hundred squared) to obtain 72,510,000 and take the square root using the above technique, then 72,510,000 - (1 + 3 + 5 + ... + 17031) = -12256. Thus 8515 <= sqrt(72,510,000) < 8516,
so 85.15 <= sqrt(7251) < 85.16 (after dividing by ten squared).
Reality Check: The ENIAC could do 5000 additions/subtractions per second. Therefore doing roughly twice 8516 additions/subtractions would take 3.4 seconds! This is not a very efficient way to find the square root.
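For illustration, here is a minimal Python sketch of this naive method (my own code, not ENIAC code); it returns the odd integer a and the number of subtractions performed:
def naive_sqrt_bound(m):
    """Return (a, steps) where a is the smallest odd integer with
    m - (1 + 3 + ... + a) < 0, so that (a-1)/2 <= sqrt(m) < (a+1)/2."""
    odd, remainder, steps = 1, m, 0
    while remainder >= 0:
        remainder -= odd     # subtract the next odd integer
        odd += 2
        steps += 1
    a = odd - 2              # the last odd integer actually subtracted
    return a, steps

a, steps = naive_sqrt_bound(72_510_000)
print(a, steps)                    # 17031 8516 -- over 3 seconds at 5000 operations per second
print((a - 1) // 2, (a + 1) // 2)  # 8515 8516, i.e. 8515 <= sqrt(72,510,000) < 8516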
A More Efficient Approach
The above algorithm can be made more efficient. It's a two step process.
First observe that the sum of the consecutive odd multiples of 100 is also a square. That is, 100 + 300 + 500 + ... + (2n-1)*100 = n^2*100. Thus we can calculate the square root of m to the nearest tens by finding the smallest integer n such that m - (100 + 300 + ... + (2n-1)*100) < 0, which implies (n-1)^2*100 <= m < n^2*100. Again, if we let a equal the odd integer 2n-1, then solving the double inequality yields 5(a-1) <= sqrt(m) < 5(a+1).
However every odd multiple N = (2n-1)*100 is the sum of ten consecutive odd integers. Let
N = (2n-1)*100 = 200(n-1) + 100
Now 200(n-1) can be rewritten as 10*20(n-1) and 100 can be expressed as the sum 1 + 3 + 5 + ... + 19. Hence N can be written as the sum of the ten consecutive odd integers
N = [20(n-1) + 1] + [20(n-1) + 3] + ... [20(n-1) + 19]
Even better, since a little algebra shows (n-1) = (N - 100)/200 and 20(n-1) = N/10 - 10
N = [N/10 - 9] + [N/10 - 7] + ... + [N/10 + 7] + [N/10 + 9]
Thus we can add back the odd integers starting with N/10 + 9 until the result is positive. If N/10 + j (where j is an odd integer between -9 and +9) was the last (smallest) odd integer added back, then N/10 + j is the smallest odd integer such that
m - (1 + 3 + 5 + ... + (N/10 + j)) < 0
Thus if a = N/10 + j then (a-1)/2 <= sqrt(m) < (a+1)/2.
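A quick, purely illustrative check of this decomposition in Python:
# Any odd multiple N of 100 is the sum of the ten consecutive odd integers
# N/10 - 9, N/10 - 7, ..., N/10 + 9.
N = 170_300                      # (2n-1)*100 with n = 852
terms = [N // 10 + j for j in range(-9, 10, 2)]
print(terms[0], terms[-1])       # 17021 17039
print(sum(terms) == N)           # True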
Example: Find the square root of 72,510,000
1. Subtract odd multiples of 100
72,510,000 - (100 + 300 + 500 + ... + 170100) = 89900
72,510,000 - (100 + 300 + 500 + ... + 170100 + 170300) = -80400
so N = 170300. The last (largest) odd integer subtracted was 170300/10 + 9 = 17039
2. Add back the decreasing sequence of odd integers starting with 17039
-80400 + (17039 + 17037 + ... + 17033) = -12256
-80400 + (17039 + 17037 + ... + 17033 + 17031) = 4775
so a = 17031, which agrees with the previous result (i.e. 8515 <= sqrt(72,510,000) < 8516).
End of Example
Thus by combining subtracting odd multiples of 100 then adding back the odd integers whose sum equals the last odd multiple of 100, we can reduce the number of calculations by an order of magnitude.
Reality Check: With a reduction of the number of additions/subtractions by a factor of 10, the ENIAC could now calculate the square root in approximately 0.34 seconds - which is still slow.
A Generalization of the More Efficient Approach
We can extend this technique to subtracting odd multiples of powers of 100 by observing that the sum of the first n odd multiples of 100^k is a square and that if N is any odd multiple of 100^k, it can be written as a sum of ten odd multiples of 100^(k-1). These results are stated formally below.
For k >= 0 the sum of the first n odd multiples of 100^k is a square: 1*100^k + 3*100^k + ... + (2n-1)*100^k = n^2*100^k.
Thus we can calculate the square root of m to the nearest 10^k by finding the smallest integer n such that m - (1*100^k + 3*100^k + ... + (2n-1)*100^k) < 0, which implies (n-1)^2*100^k <= m < n^2*100^k. If we let N = (2n-1)*100^k then solving the double inequality obtains a range for twice the root: N/10^k - 10^k <= 2*sqrt(m) < N/10^k + 10^k.
Observe that this is not a very good estimate!
Any odd multiple of 100^k can be written as the sum of ten consecutive odd multiples of 100^(k-1). Specifically, if N is an odd multiple of 100^k, then N = [N/10 - 9*100^(k-1)] + [N/10 - 7*100^(k-1)] + ... + [N/10 + 7*100^(k-1)] + [N/10 + 9*100^(k-1)].
Proof: Simply do the algebra. The case k = 1 is the result that any odd multiple of 100 is the sum of ten consecutive odd integers.
Given these two results we present an algorithm (used by the ENIAC) to obtain a square root
Square Root Algorithm
1. Choose a starting power 100^k (the example below starts with 100^3; the ENIAC itself always started with 100^4 = 10^8). Starting with 1*100^k, subtract the increasing sequence of odd multiples of 100^k from m until a negative result is obtained. N, the last odd multiple of 100^k subtracted, is the sum of ten consecutive odd multiples of 100^(k-1), so continue on to step 2.
2. Starting with N/10 + 9*100^(k-1), add back the decreasing sequence of odd multiples of 100^(k-1) until a positive result is obtained. Replace k-1 with k and let N be the last odd multiple of 100^k added back (remember k-1 is now k). If k = 0 then you're done, so go to step 4; otherwise N is the sum of ten consecutive odd multiples of 100^(k-1), so continue on to step 3.
3. Starting with N/10 - 9*100^(k-1), subtract the increasing sequence of odd multiples of 100^(k-1) until a negative result is obtained. Replace k-1 with k and let N be the last odd multiple of 100^k subtracted (remember k-1 is now k). If k = 0 then you're done, so go to step 4; otherwise N is the sum of the ten consecutive odd multiples of 100^(k-1), so go back to step 2.
4. The ENIAC used N for twice the square root: N - 1 <= 2*sqrt(m) < N + 1.
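The following Python sketch is my own reconstruction of this algorithm for illustration (it is not ENIAC code, and it ignores the scaling trick described later); it returns the odd integer N with N - 1 <= 2*sqrt(m) < N + 1:
import math

def double_sqrt_by_odd_multiples(m, k=4):
    """Return the odd integer N with N - 1 <= 2*sqrt(m) < N + 1, using only
    additions and subtractions of odd multiples of powers of 100."""
    step = 100 ** k              # current power of 100
    odd = step                   # 1 * 100**k
    remainder = m
    # Step 1: subtract increasing odd multiples of 100**k until the result is negative.
    while remainder >= 0:
        remainder -= odd
        odd += 2 * step
    odd -= 2 * step              # N = last odd multiple actually subtracted
    while k > 0:
        k -= 1
        step //= 100
        # Step 2: add back decreasing odd multiples of 100**k, starting at N/10 + 9*100**k.
        odd = odd // 10 + 9 * step
        while remainder < 0:
            remainder += odd
            odd -= 2 * step
        odd += 2 * step          # N = last odd multiple added back
        if k == 0:
            break
        k -= 1
        step //= 100
        # Step 3: subtract increasing odd multiples of 100**k, starting at N/10 - 9*100**k.
        odd = odd // 10 - 9 * step
        while remainder >= 0:
            remainder -= odd
            odd += 2 * step
        odd -= 2 * step          # N = last odd multiple subtracted
    return odd

print(double_sqrt_by_odd_multiples(72_510_000))  # 17031, so 17030 <= 2*sqrt(m) < 17032
print(2 * math.sqrt(72_510_000))                 # 17030.56... for comparison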
Example: Find the square root of 72,510,000.
1. Starting with 100^3 subtract the increasing sequence of odd multiples of 100^3 until a negative result is obtained
72,510,000 - (1*100^3 + 3*100^3 + ... + 15*100^3 + 17*100^3) = -8490000
Therefore the last odd multiple of 100^2 subtracted was 17*100^3/10 + 9*100^2 = 179*100^2
2. Starting with 179*100^2 add back the decreasing sequence of odd multiples of 100^2 until a positive result is obtained.
-8490000 + (179*100^2 + 177*100^2 + ... + 173*100^2 + 171*100^2) = 260000
Therefore the last odd multiple of 100 added back was 171*100^2/10 - 900 = 1701*100.
3. Starting with 1701*100 subtract the increasing sequence of odd multiples of 100 until a negative result is obtained
260000 - (1701*100 + 1703*100) = -80400
Therefore the last odd integer subtracted was 1703*100/10 + 9 = 17039
4. Starting with 17039 add back the decreasing sequence of odd integers until a positive result is obtained
-80400 + (17039 + 17037 + ... + 17033 + 17031) = 4775
Therefore 17031 is the smallest odd integer such that 72,510,000 - (1 + 3 + 5 + ... + 17031) < 0. Therefore
17030 <= 2*sqrt(72,510,000) < 17032
End of Example
How this Algorithm was Implemented on the ENIAC
On the ENIAC the hardware to take a square root was combined with the divider, since the sequence of operations used is similar to those used to divide (a non-restoring division technique was used). Taking a square root was done using a couple of accumulators to execute the series of subtractions and additions discussed. The divider/square rooter essentially orchestrated the sequence of steps needed; unlike the high-speed multiplier unit it contained no special circuits to perform arithmetic.
The square rooter actually performed calculations to obtain twice the square root. To obtain 2*sqrt(m), the value m was deposited into an accumulator which we shall call the numerator. Then a second accumulator, the denominator, was initialized to 10^8 (100^4) by setting the 9th digit to 1.
For example, to calculate twice the square root of 72,510,000 two accumulators were initialized as follows
Numerator: 0,072,510,000 Denominator: 0,100,000,000
Step 1: Subtract increasing odd multiples of 10^8 (100^4) from the numerator until a negative result is obtained.
The denominator was subtracted from the numerator and the denominator was incremented by 2 in the 9th digit. This was repeated until there was a sign change in the numerator. Note that the denominator is incremented after the subtraction but before the sign of the numerator is tested so that the denominator "overshoots" the "correct" value.
Numerator: -0,027,490,000 Denominator: 0,300,000,000
At this point we know that m - 1*100^4 < 0, and if N = 1*100^4 then N/10 + 9*100^3 was the last odd multiple of 100^3 subtracted. Therefore starting with N/10 + 9*100^3 = 019,000,000, add back the decreasing sequence of odd multiples of 100^3 until a sign change is obtained. The value to be added back could be obtained by right shifting the denominator and setting the 7th position to 9.
Numerator: -0,027,490,000 Denominator: 0,019,000,000
However, this was not done by the ENIAC square rooter!
To increase the precision of the final answer the ENIAC scaled its calculations by left shifting the numerator and not right shifting the denominator and setting the 8th digit to 9 instead of the 7th digit. This scaling trick would eventually add 4 digits of accuracy to the final answer.
Numerator: -0,274,900,000 Denominator: 0,190,000,000
But there was one further complication! The denominator contains the wrong value since it was incremented by 2 before the sign of the numerator was tested. So instead of setting the 8th digit to 9, it subtracts 11 from the 8th and 9th digits
Denominator: 0,190,000,000 after subtracting 11
Aside: If N = (2n-1)*100^4 is the largest odd multiple of 100^4 (10^8) such that m - (1*100^4 + 3*100^4 + ... + (2n-1)*100^4) < 0, then the denominator contains N + 2*10^8 instead of N. Subtracting 11*10^7 yields N + 9*10^7, which is what the denominator should contain for the next iteration.
Step 2: Add back!
Add the denominator back into the numerator and decrement the denominator by 2 in the 8th digit until a sign change is obtained (numerator is positive).
If N = (2n-1)*100^k was the last odd multiple of 100^k added back, then N/10 - 9*100^(k-1) was the last (smallest) odd multiple of 100^(k-1) added back. Thus beginning with N/10 - 9*100^(k-1) subtract the increasing sequence of odd multiples of 100^(k-1) until a sign change is obtained (to negative). Again to increase precision the ENIAC scaled by left shifting the numerator and by not right shifting the denominator and adding 11 to the 7th and 8th digits.
At this point we repeat alternately subtracting/adding the denominator from/to the numerator and incrementing/decrementing the denominator by 2 in the pth position until a sign change is obtained. After the repeated subtraction/addition sequence terminates with a sign change, we left shift the numerator and subtract/add 11 in the (p-1)th position of the denominator.
Step 3: Subtract
Step 4: Add Back
Step 5: Subtract
Step 6: Add Back
Step 7: Subtract
Step 8: Add Back
Step 9: Subtract
The last odd integer subtracted was a = 170,305,607, which made the numerator negative. Thus after scaling we have 17,030.5606 <= 2*sqrt(72,510,000) < 17,030.5608.
If you compare the magnitudes of the last two numerator values, 1,412,243,192 and -29,062,416, the better approximation would be 17,030.5607. The ENIAC square rooter has an optional round off mechanism. It left shifted the numerator one more time and then subtracted the denominator 5 times from it. If there is no sign change the last position in the doubled root was incremented or decremented by 2 depending on whether the denominator was previously decremented or incremented by 2. Thus
Numerator: -0,290,624,160 Denominator: 0,170,305,609
Numerator: 0,560,903,885 <- sign change!
Since there is a sign change, 17,030.5607 is the best approximation for the doubled square root. The decimal point is between the 4th and 5th positions so the doubled root is approximately 17,030.5607.
This method also works for small numbers. Here we demonstrate how the ENIAC would take the square root of 2 to an accuracy of 4 digits below the decimal point.
Step 1: Subtract increasing odd multiples of 10^8 from the numerator until a negative result is obtained. Remember that the ENIAC always increments the denominator by 2 in the pth digit.
Step 2: Left shift the numerator and subtract 11*10^7 from the denominator. Add back decreasing odd multiples of 10^7 to the numerator until a positive result is obtained.
Step 3: Left shift the numerator and add 11*10^6 to the denominator. Subtract increasing odd multiples of 10^6 from the numerator until a negative result is obtained.
Step 4: Left shift the numerator and subtract 11*10^5 from the denominator. Add back decreasing odd multiples of 10^5 to the numerator until a positive result is obtained.
Step 5: Left shift the numerator and add 11*10^4 to the denominator. Subtract increasing odd multiples of 10^4 from the numerator until a negative result is obtained.
Step 6: Left shift the numerator and subtract 11*10^3 from the denominator. Add back decreasing odd multiples of 10^3 to the numerator until a positive result is obtained.
Step 7: Left shift the numerator and add 11*10^2 to the denominator. Subtract increasing odd multiples of 10^2 from the numerator until a negative result is obtained.
Step 8: Left shift the numerator and subtract 11*10 from the denominator. Add back decreasing odd multiples of 10 to the numerator until a positive result is obtained.
Step 9: Left shift the numerator and add 11 to the denominator. Subtract increasing odd integers from the numerator until a negative result is obtained.
Thus twice the square root of 2 is between 2.8284 and 2.8286. If we round off by adding 5 times the denominator to the numerator we get a sign change so we use 2.8285 as our best approximation. 2.8285/2 = 1.41425 agrees favorably with sqrt(2) = 1.4142135.
Summary: To my knowledge, only two of the "early" computers ever implemented a square root operation in hardware: Zuse's Z3 and the ENIAC, both of which had limited memory and/or programming capabilities. In the "First Draft of the Report on the EDVAC" written in 1945, von Neumann incorporates the design for a square rooter (which von Neumann notes is similar to the "divider network"). Yet in his later 1946 paper "Preliminary discussion of the logical design of an electronic computing instrument", which von Neumann authored with Arthur Burks and Herman Goldstine, he leaves out the design of a square rooter "because such a device would involve more equipment than we feel desirable in first model". Hardware was expensive in the early computers and designs reflected a "keep it simple" philosophy. (I should point out that neither the EDVAC nor the EDSAC, computers which were heavily influenced by the "First Draft" paper, had a square root operation. And neither did the IAS machine which was influenced by the "Preliminary discussion" paper.) Besides, the increased memory capacity and the flexibility of programming found in later stored program computers allowed square roots to be easily done in software. The Z3 and ENIAC were not stored program computers.
So given that square roots are hard to compute (how many of us can take one by hand?), how did a relatively primitive computer like the ENIAC take a square root? The ENIAC implemented a well known square root algorithm that could be carried out by anyone with a desk calculator with addition, subtraction, and shift capabilities. Since the ENIAC was very good at addition and subtraction (it had 20 accumulators that could do both at electronic speeds), taking a square root turned out just to be a matter of sequencing their operations in the correct way.
1946 Technical Report on The ENIAC <http://ftp.arl.army.mil/~mike/comphist/45eniac_report>: This is the first four chapters of the June 1, 1946 Report on the ENIAC. Excellent primary source material from the U.S. Army Research Laboratory at the Aberdeen Proving Grounds, MD.
History of Computing Information <http://ftp.arl.army.mil/~mile/comphist> : web page with other links to ENIAC materials also maintained at the ARL at Aberdeen
How the ENIAC took a Square Root (revised 2009) is a slightly updated version in .pdf format of this .html file.
ENIAC: The Triumphs and Tragedies of the World's First Computer; Scott McCartney : This is an excellent general history of the ENIAC
From ENIAC to UNIVAC: An Appraisal of the Early Eckert-Mauchly Computers, Nancy Stern: Another excellent general history of the ENIAC.
"The ENIAC: History, Operation, and Reconstruction in VLSI" by Jan Van der Spiegel, James F. Tau, Titiimaea F. Ala'ilima Lin Ping Ang; from The First Computers: History and Architectures edited by R. Rojas and U. Hashagen : This paper contains many of the technical details of the ENIAC. The square root algorithm was obtained from this source.
Return to Brian Shelburne's Home Page | http://www4.wittenberg.edu/academics/mathcomp/bjsdir/ENIACSquareRoot.htm | 13
159 | The Korean War (Korean: 6·25전쟁; 25 June 1950 – 27 July 1953)[a] was a war between the Republic of Korea (South Korea), supported by the United Nations, and the Democratic People's Republic of Korea (North Korea), at one time supported by the People's Republic of China and the Soviet Union. It was primarily the result of the political division of Korea by an agreement of the victorious Allies at the conclusion of the Pacific War at the end of World War II. The Korean Peninsula was ruled by the Empire of Japan from 1910 until the end of World War II. Following the surrender of the Empire of Japan in September 1945, American administrators divided the peninsula along the 38th parallel, with U.S. military forces occupying the southern half and Soviet military forces occupying the northern half.
The failure to hold free elections throughout the Korean Peninsula in 1948 deepened the division between the two sides; the North established a communist government, while the South established a right-wing government. The 38th parallel increasingly became a political border between the two Korean states. Although reunification negotiations continued in the months preceding the war, tension intensified. Cross-border skirmishes and raids at the 38th Parallel persisted. The situation escalated into open warfare when North Korean forces invaded South Korea on 25 June 1950. In 1950, the Soviet Union boycotted the United Nations Security Council, in protest at representation of China by the Kuomintang/Republic of China government, which had taken refuge in Taiwan following defeat in the Chinese Civil War. In the absence of a dissenting voice from the Soviet Union, who could have vetoed it, the United States and other countries passed a Security Council resolution authorizing military intervention in Korea.
The United States of America provided 88% of the 341,000 international soldiers which aided South Korean forces in repelling the invasion, with twenty other countries of the United Nations offering assistance. Suffering severe casualties within the first two months, the defenders were pushed back to a small area in the south of the Korean Peninsula, known as the Pusan perimeter. A rapid U.N. counter-offensive then drove the North Koreans past the 38th Parallel and almost to the Yalu River, when the People's Republic of China (PRC) entered the war on the side of North Korea. Chinese intervention forced the Southern-allied forces to retreat behind the 38th Parallel. While not directly committing forces to the conflict, the Soviet Union provided material aid to both the North Korean and Chinese armies. The fighting ended on 27 July 1953, when the armistice agreement was signed. The agreement restored the border between the Koreas near the 38th Parallel and created the Korean Demilitarized Zone (DMZ), a 2.5-mile (4.0 km)-wide fortified buffer zone between the two Korean nations. Minor incidents still continue today.
From a military science perspective, it combined strategies and tactics of World War I and World War II: it began with a mobile campaign of swift infantry attacks followed by air bombing raids, but became a static trench war by July 1951.
In the United States, the war was initially described by President Harry S. Truman as a "police action" as it was conducted under the auspices of the United Nations. It has been referred to as "The Forgotten War" or "The Unknown War" because of the lack of public attention it received both during and after the war, and in relation to the global scale of World War II, which preceded it, and the subsequent angst of the Vietnam War, which succeeded it.
In South Korea, the war is usually referred to as "625" or the 6–2–5 Upheaval (yook-i-o dongnan), reflecting the date of its commencement on 25 June.
In China the war was officially called the War to Resist U.S. Aggression and Aid Korea (simplified Chinese: 抗美援朝战争; traditional Chinese: 抗美援朝戰爭; pinyin: Kàngměiyuáncháo zhànzhēng), although the term "Chaoxian War" (simplified Chinese: 朝鲜战争; traditional Chinese: 朝鮮戰爭; pinyin: Cháoxiǎn zhànzhēng) is also used in unofficial contexts, along with the term "Korean Conflict" (simplified Chinese: 韩战; traditional Chinese: 韓戰; pinyin: Hán Zhàn).
Imperial Japanese rule (1910–1945)
Upon defeating the Qing Dynasty in the First Sino-Japanese War (1894–95), the Empire of Japan occupied the Korean Empire – a peninsula strategic to its sphere of influence. A decade later, defeating Imperial Russia in the Russo-Japanese War (1904–05), Japan made Korea its protectorate with the Eulsa Treaty in 1905, then annexed it with the Japan–Korea Annexation Treaty in 1910.
Korean nationalists and the intelligentsia fled the country, and some founded the Provisional Korean Government in 1919, which was headed by Syngman Rhee in Shanghai. This government-in-exile was recognized by few countries. From 1919 to 1925 and beyond, Korean communists led and were the primary agents of internal and external warfare against the Japanese.
Korea under Japanese rule was considered to be part of the Empire of Japan as an industrialized colony along with Taiwan, and both were part of the Greater East Asia Co-Prosperity Sphere. In 1937, the colonial Governor-General, General Jirō Minami, commanded the attempted cultural assimilation of Korea's 23.5 million people by banning the use and study of the Korean language, literature, and culture, replacing them with the mandatory use and study of their Japanese counterparts. Starting in 1939, the populace was required to use Japanese names under the Sōshi-kaimei policy. In 1938, the Colonial Government established labor conscription.
In China, the Nationalist National Revolutionary Army and the Communist People's Liberation Army helped organize refugee Korean patriots and independence fighters against the Japanese military, which had also occupied parts of China. The Nationalist-backed Koreans, led by Yi Pom-Sok, fought in the Burma Campaign (December 1941 – August 1945). The Communists, led by Kim Il-sung among others, fought the Japanese in Korea and Manchuria.
During World War II, the Japanese used Korea's food, livestock, and metals for their war effort. Japanese forces in Korea increased from 46,000 soldiers in 1941 to 300,000 in 1945. Japanese Korea conscripted 2.6 million forced laborers controlled with a collaborationist Korean police force; some 723,000 people were sent to work in the overseas empire and in metropolitan Japan. By 1942, Korean men were being conscripted into the Imperial Japanese Army. By January 1945, Koreans comprised 32% of Japan's labor force. In August 1945, when the United States dropped atomic bombs on Hiroshima and Nagasaki, around 25% of those killed were Koreans. At the end of the war, other world powers did not recognize Japanese rule in Korea and Taiwan.
Meanwhile, at the Cairo Conference (November 1943), the Republic of China, the United Kingdom, and the United States decided "in due course Korea shall become free and independent". Later, the Yalta Conference (February 1945) granted to the Soviet Union European "buffer zones"—satellite states accountable to Moscow—as well as an expected Soviet pre-eminence in China and Manchuria, in return for joining the Allied Pacific War effort against Japan.
Soviet invasion of Manchuria (1945)
As agreed with the Allies at the Tehran Conference (November 1943) and the Yalta Conference (February 1945), the Soviet Union declared war against Japan within three months of the end of the war in Europe, on 9 August 1945. By 10 August, the Red Army occupied the northern part of the Korean peninsula as agreed, and on 26 August halted at the 38th parallel for three weeks to await the arrival of US forces in the south.
On 10 August 1945, with the Japanese surrender near, the Americans doubted whether the Soviets would honor their part of the Joint Commission, the US-sponsored Korean occupation agreement. Earlier, Colonel Dean Rusk and Colonel Charles H. Bonesteel III had divided the Korean peninsula at the 38th parallel after hurriedly deciding that the US Korean Zone of Occupation had to have a minimum of two ports.
Explaining why the occupation zone demarcation was positioned at the 38th parallel, Rusk observed, "even though it was further north than could be realistically reached by US forces, in the event of Soviet disagreement ... we felt it important to include the capital of Korea in the area of responsibility of American troops", especially when "faced with the scarcity of US forces immediately available, and time and space factors, which would make it difficult to reach very far north, before Soviet troops could enter the area." The Soviets agreed to the US occupation zone demarcation to improve their negotiating position regarding the occupation zones in Eastern Europe, and because each would accept Japanese surrender where they stood.
Chinese Civil War (1945–1949)
After the end of the Second Sino-Japanese War, the Chinese Civil War resumed between the Chinese Communists and the Chinese Nationalists. While the Communists were struggling for supremacy in Manchuria, they were supported by the North Korean government with matériel and manpower. According to Chinese sources, the North Koreans donated 2,000 railway cars worth of matériel while thousands of Koreans served in the Chinese People's Liberation Army (PLA) during the war. North Korea also provided the Chinese Communists in Manchuria with a safe refuge for non-combatants and communications with the rest of China.
The North Korean contributions to the Chinese Communist victory were not forgotten after the creation of the People's Republic of China in 1949. As a token of gratitude, between 50,000 and 70,000 Korean veterans who had served in the PLA were sent back along with their weapons, and they would later play a significant role in the initial invasion of South Korea. China promised to support the North Koreans in the event of a war against South Korea. The Chinese support created a deep division among the Korean Communists, and Kim Il-Sung's authority within the Communist party was challenged by the Chinese faction led by Pak Il-yu, who was later purged by Kim.
After the formation of the People's Republic of China in 1949, the Chinese government named the Western nations, led by the United States, as the biggest threat to its national security. Basing this judgment on China's century of humiliation beginning in the early 19th century, American support for the Nationalists during the Chinese Civil War, and the ideological struggles between revolutionaries and reactionaries, the Chinese leadership believed that China would become a critical battleground in the United States' crusade against Communism. As a countermeasure and to elevate China's standing among the worldwide Communist movements, the Chinese leadership adopted a foreign policy that actively promoted Communist revolutions throughout territories on China's periphery.
Korea divided (1945–1949)
On 8 September 1945, Lt. Gen. John R. Hodge of the United States arrived in Incheon to accept the Japanese surrender south of the 38th parallel. Appointed as military governor, General Hodge directly controlled South Korea as head of the United States Army Military Government in Korea (USAMGIK 1945–48). He established control by restoring to power the key Japanese colonial administrators and their Korean police collaborators. The USAMGIK refused to recognise the provisional government of the short-lived People's Republic of Korea (PRK) because Hodge suspected it was communist. These policies, voiding popular Korean sovereignty, provoked civil insurrections and guerrilla warfare. On 3 September 1945, Lieutenant General Yoshio Kozuki, Commander, Japanese Seventeenth Area Army, contacted Hodge, telling him that the Soviets were south of the 38th parallel at Kaesong. Hodge trusted the accuracy of the Japanese Army report.
In December 1945, Korea was administered by a United States–Soviet Union Joint Commission, as agreed at the Moscow Conference (1945). The Koreans were excluded from the talks. The commission decided the country would become independent after a five-year trusteeship action facilitated by each régime sharing its sponsor's ideology. The Korean populace revolted; in the south, some protested, and some rose in arms; to contain them, the USAMGIK banned strikes on 8 December 1945 and outlawed the PRK Revolutionary Government and the PRK People's Committees on 12 December 1945.
On 23 September 1946, an 8,000-strong railroad worker strike began in Pusan. Civil disorder spread throughout the country in what became known as the Autumn uprising. On 1 October 1946, Korean police killed three students in the Daegu Uprising; protesters counter-attacked, killing 38 policemen. On 3 October, some 10,000 people attacked the Yeongcheon police station, killing three policemen and injuring some 40 more; elsewhere, some 20 landlords and pro-Japanese South Korean officials were killed. The USAMGIK declared martial law.
The right-wing Representative Democratic Council, led by nationalist Syngman Rhee, opposed the Soviet–American trusteeship of Korea, arguing that after 35 years (1910–45) of Japanese colonial rule most Koreans opposed another foreign occupation. The USAMGIK decided to forego the five-year trusteeship agreed upon in Moscow, given the 31 March 1948 United Nations election deadline to achieve an anti-communist civil government in the US Korean Zone of Occupation.
On 3 April 1948, what began as a demonstration commemorating Korean resistance to Japanese rule ended with the Jeju Uprising where between 14,000 and 60,000 citizens were killed by South Korean soldiers.
On 10 May, South Korea convoked its first national general elections that the Soviets first opposed, then boycotted, insisting that the US honor the trusteeship agreed to at the Moscow Conference.
The resultant anti-communist South Korean government promulgated a national political constitution on 17 July 1948 and elected as president the American-educated strongman Syngman Rhee on 20 July 1948. The elections were marred by terrorism and sabotage resulting in 600 deaths. The Republic of Korea (South Korea) was established on 15 August 1948. In the Russian Korean Zone of Occupation, the Soviet Union established a Communist North Korean government led by Kim Il-sung. President Rhee's régime expelled communists and leftists from southern national politics. Disenfranchised, they headed for the hills, to prepare for guerrilla war against the US-sponsored ROK Government.
As nationalists, both Syngman Rhee and Kim Il-Sung were intent upon reunifying Korea under their own political system. The North Koreans gained support from both the Soviet Union and the People's Republic of China. They escalated the continual border skirmishes and raids and then prepared to invade. South Korea, with limited matériel, could not match them. During this era, the US government assumed that all communists (regardless of nationality) were controlled or directly influenced by Moscow; thus the US portrayed the civil war in Korea as a Soviet hegemonic maneuver.
The Soviet Union withdrew as agreed from Korea in 1948. U.S. troops withdrew from Korea in 1949, leaving the South Korean army relatively ill-equipped. On 24 December 1949, South Korean forces killed 86 to 88 people in the Mungyeong massacre and blamed the crime on communist marauding bands. By early 1950, Syngman Rhee had about 30,000 alleged communists in jails and about 300,000 suspected sympathisers enrolled in the Bodo League re-education movement.
Course of the war
The Korean War begins (June 1950)
In the first half of 1950, Kim Il-sung travelled to Moscow and Beijing to secure support for reunification with the South by force. The Soviet military became extensively involved in North Korea's war planning. There are differing accounts of the degree of Soviet support, ranging from support if the North was attacked, to approval, to actually initiating the war. Similarly, some accounts indicate that Chinese support was stronger than Soviet support, and some say it was reluctant.
Declassified documents from the Soviet Foreign Ministry and Presidential Archives now show a much clearer, but complex picture of the interactions between Kim, Soviet leader Josef Stalin, and Chinese leader Mao Zedong regarding the decision to invade South Korea. By 1949, South Korean forces had reduced the active number of communist guerrillas in the South from 5,000 to 1,000. However, Kim Il-Sung believed that the guerrillas had weakened the South Korean military and that a North Korean invasion would be welcomed by much of the South Korean population. Kim began seeking Stalin's support for an invasion in March 1949.
Initially, Stalin did not think the time was right for a war in Korea. Chinese Communist forces still were fighting in China. American forces were still stationed in South Korea (they would complete their withdrawal in June 1949) and Stalin did not want the Soviet Union to become embroiled in a war with the US. But by 1950, Stalin believed the strategic situation had changed. The Soviets had detonated their first nuclear bomb in September 1949. Americans had fully withdrawn from Korea. The Americans had not intervened to stop the communist victory in China, and Stalin calculated that the Americans would be even less willing to fight in Korea - which had much less strategic significance. Stalin began a more aggressive strategy in Asia based on these developments, including promising economic and military aid to China through the Sino-Soviet Friendship, Alliance and Mutual Assistance Treaty.
Throughout 1949 and 1950 the Soviets continued to arm North Korea. After the Communist victory in China, ethnic Korean units in the Chinese People's Liberation Army (PLA) were released to North Korea. The combat experienced veterans from China, the tanks, artillery and aircraft supplied by the Soviets, and rigorous training increased North Korea's military superiority over the South, who had been armed by the American military.
In April 1950, Stalin gave Kim permission to invade the South under the condition that Mao would agree to send reinforcements if they became needed. Stalin made it clear that Soviet forces would not directly engage in combat to avoid a war with the Americans. Kim met with Mao in May 1950. Mao was concerned that the Americans would intervene but agreed to support the North Korean invasion. China desperately needed the economic and military aid promised by the Soviets. At that time, the Chinese were in the process of demobilizing half of the PLA's 5.6 million soldiers. However, Mao sent more ethnic Korean PLA veterans to Korea and promised to move an Army closer to the Korean border. Once Mao's commitment was secured, preparations for war accelerated.
Soviet generals who had extensive combat experience in World War II were sent to the Soviet Advisory Group in North Korea. These generals completed plans for the attack by May. The original plans were to start with a skirmish in the Ongjin peninsula on the west coast of Korea. The North Koreans would then launch a "counterattack" that would capture Seoul and encircle and destroy the South Korean army. The final stage would involve destroying South Korean government remnants, capturing the rest of South Korea, including the ports.
On 7 June 1950, Kim Il-sung called for a Korea-wide election on 5–8 August 1950 and a consultative conference in Haeju on 15–17 June 1950. On 11 June, the North sent three diplomats to the South, as part of a planned peace overture that the South Koreans were certain to reject. On 21 June, Kim Il-Sung requested permission to start a general attack across the 38th parallel, rather than a limited operation in the Ongjin peninsula. Kim was concerned that South Korean agents had learned about the plans and South Korean forces were strengthening their defenses. Stalin agreed to this change of plan.
South Korean and American intelligence officers had in fact predicted an attack, but they had incorrectly made such predictions many times before. The Central Intelligence Agency noted the southward movement of North Korean forces, but said it was a "defensive measure" and concluded an invasion was "unlikely". South Korean and American forces were unprepared. On 23 June, UN observers had inspected the border and failed to notice the imminent attack.
The KPA crossed the 38th parallel behind artillery fire at dawn on Sunday 25 June 1950. The KPA claimed that Republic of Korea Army (ROK Army) troops, under command of the régime of the "bandit traitor Syngman Rhee", had attacked first, and that they would arrest and execute Rhee. There had been frequent skirmishes along the 38th parallel. Fighting began on the strategic Ongjin peninsula in the west. There were initial South Korean claims that they had captured the city of Haeju, and this sequence of events had led some scholars to argue that the South Koreans actually fired first. For South Koreans, the Korean war is sometimes called the "June 25th incident".
Whoever fired the first shots in Ongjin, within an hour, North Korean forces attacked all along the 38th parallel. The North Koreans had a combined arms force including tanks supported by heavy artillery. The South Koreans did not have any tanks, anti-tank weapons, or heavy artillery that could stop such an attack. In addition, the South Koreans deployed their outgunned forces piecemeal and were routed within the first few days. On 27 June, Rhee secretly evacuated from Seoul with government officials. On 28 June, at 2am, the South Korean Army blew up the highway bridge across the Han River in an attempt to stop the North Korean army. The bridge was detonated while 4,000 refugees were crossing it, and hundreds were killed. Destroying the bridge also trapped many South Korean military units north of the Han River. Despite such desperate measures, Seoul fell that same day. A number of South Korean National Assemblymen remained in Seoul when it fell. Forty-eight of them subsequently pledged allegiance to the North.
The South Korean forces, which had 95,000 men on 25 June, could account for fewer than 22,000 men by the end of June. In early July, when American forces arrived, South Korean forces were placed under the operational command of the American-led United Nations Command (Korea).
There were numerous massacres of civilians and atrocities throughout the Korean War. Both sides began killing civilians even during the first days of the war. On 28 June, Rhee ordered the Bodo League massacre.
Factors in U.S. intervention
The Truman Administration was caught at a crossroads. Before the invasion, Korea was not included in the strategic Asian Defense Perimeter outlined by Secretary of State Acheson. Military strategists were more concerned with the security of Europe against the Soviet Union than East Asia. At the same time, the Administration was worried that a war in Korea could quickly widen into another world war should the Chinese or Soviets decide to get involved as well.
One facet of the changing attitude toward Korea and whether to get involved was Japan. Especially after the fall of China to the Communists, "...Japan itself increasingly appeared as the major East Asian prize to be protected". U.S. East Asian experts saw Japan as the critical counterweight to the Soviet Union and China in the region. While there was no United States policy that dealt with South Korea directly as a national interest, its proximity to Japan pushed South Korea to the fore. "The recognition that the security of Japan required a non-hostile Korea led directly to President Truman's decision to intervene... The essential point... is that the American response to the North Korean attack stemmed from considerations of US policy toward Japan." The United States was working to shore up Japan, which was its protectorate.
The other important consideration in committing to intervention lay in speculation about Soviet action in the event that the United States intervened. The Truman administration was fearful that a war in Korea was a diversionary assault that would escalate to a general war in Europe once the United States committed in Korea. At the same time, "[t]here was no suggestion from anyone that the United Nations or the United States could back away from [the conflict]". In Truman's mind, this aggression, if left unchecked, would start a chain reaction that would destroy the United Nations and give the go-ahead to further Communist aggression elsewhere. Korea was where a stand had to be made; the difficult part was how. The UN Security Council approved the use of force to help the South Koreans, and the US immediately began using air and naval forces in the area to that end. The Administration still refrained from committing forces on the ground because some advisors believed the North Koreans could be stopped by air and naval power alone.
Also, it was still uncertain whether this was a clever ploy by the Soviet Union to catch the U.S. unawares or just a test of U.S. resolve. The decision to commit ground troops and to intervene eventually became viable when a communiqué received on 27 June from the Soviet Union intimated that it would not move against U.S. forces in Korea. "This opened the way for the sending of American ground forces, for it now seemed less likely that a general war—with Korea as a preliminary diversion—was imminent". With the Soviet Union's tacit agreement that intervention would not cause an escalation, the United States could now intervene with confidence that its other commitments would not be jeopardized.
United Nations Security Council Resolutions
On 25 June 1950, the United Nations Security Council unanimously condemned the North Korean invasion of the Republic of Korea, with United Nations Security Council Resolution 82. The Soviet Union, a veto-wielding power, had boycotted the Council meetings since January 1950, protesting that the Republic of China (Taiwan), not the People's Republic of China, held a permanent seat in the UN Security Council. After debating the matter, the Security Council, on 27 June 1950, published Resolution 83 recommending member states provide military assistance to the Republic of Korea. On 27 June President Truman ordered U.S. air and sea forces to help the South Korean régime. On 4 July the Soviet Deputy Foreign Minister accused the U.S. of starting armed intervention on behalf of South Korea.
The Soviet Union challenged the legitimacy of the war for several reasons. The ROK Army intelligence upon which Resolution 83 was based came from U.S. Intelligence; North Korea was not invited as a sitting temporary member of the UN, which violated UN Charter Article 32; and the Korean conflict was beyond UN Charter scope, because the initial north–south border fighting was classed as a civil war. The Soviet representative boycotted the UN to prevent Security Council action, and to challenge the legitimacy of the UN action; legal scholars posited that deciding upon an action of this type required the unanimous vote of the five permanent members.
Comparison of military forces
By mid-1950, North Korean forces numbered between 150,000 and 200,000 troops, organized into 10 infantry divisions, one tank division, and one air force division, with 210 fighter planes and 280 tanks; this force captured scheduled objectives and territory, among them Kaesong, Chuncheon, Uijeongbu, and Ongjin. Their forces included 274 T-34-85 tanks, some 150 Yak fighters, 110 attack bombers, 200 artillery pieces, 78 Yak trainers, and 35 reconnaissance aircraft. In addition to the invasion force, the KPA had 114 fighters, 78 bombers, 105 T-34-85 tanks, and some 30,000 soldiers stationed in reserve in North Korea. Although each navy consisted of only several small warships, the North Korean and South Korean navies fought in the war as sea-borne artillery for their in-country armies.
In contrast, the ROK Army defenders were vastly unprepared, and the political establishment in the south, while well aware of the threat from the north, was unable to convince American administrators of the reality of that threat. In South to the Naktong, North to the Yalu (1961), R.E. Appleman reports the ROK forces' low combat readiness as of 25 June 1950. The ROK Army had 98,000 soldiers (65,000 combat, 33,000 support), no tanks (they had been requested from the US military, but the requests were denied), and a 22-plane air force comprising 12 liaison-type and 10 AT-6 advanced-trainer airplanes. There were no large foreign military garrisons in Korea at the time of the invasion, but there were large US garrisons and air forces in Japan.
United Nations response (July – August 1950)
Despite the rapid post–Second World War Allied demobilizations, there were substantial U.S. forces occupying Japan; under General Douglas MacArthur's command, they could be made ready to fight the North Koreans. Only the British Commonwealth had comparable forces in the area.
On Saturday, 24 June 1950, U.S. Secretary of State Dean Acheson informed President Truman by telephone, "Mr. President, I have very serious news. The North Koreans have invaded South Korea." Truman and Acheson discussed a response to the invasion with defense department principals, who agreed that the United States was obligated to repel military aggression, paralleling it with Adolf Hitler's aggressions in the 1930s, and said that the mistake of appeasement must not be repeated. In his autobiography, President Truman acknowledged that fighting the invasion was essential to the American goal of the global containment of communism as outlined in the National Security Council Report 68 (NSC-68) (declassified in 1975):
Communism was acting in Korea, just as Hitler, Mussolini and the Japanese had ten, fifteen, and twenty years earlier. I felt certain that if South Korea was allowed to fall, Communist leaders would be emboldened to override nations closer to our own shores. If the Communists were permitted to force their way into the Republic of Korea without opposition from the free world, no small nation would have the courage to resist threat and aggression by stronger Communist neighbors.
President Truman announced that the U.S. would counter "unprovoked aggression" and "vigorously support the effort of the [UN] security council to terminate this serious breach of the peace." In Congress, Joint Chiefs of Staff Chairman General Omar Bradley warned against appeasement, saying that Korea was the place "for drawing the line" against communist expansion. In August 1950, the President and the Secretary of State obtained the consent of Congress to appropriate $12 billion for military expenses.
Acting on Secretary of State Acheson's recommendation, President Truman ordered General MacArthur to transfer matériel to the Army of the Republic of Korea while giving air cover to the evacuation of U.S. nationals. The President disagreed with advisers who recommended unilateral U.S. bombing of the North Korean forces, and ordered the US Seventh Fleet to protect the Republic of China (Taiwan), whose government asked to fight in Korea. The U.S. denied the ROC's request for combat, lest it provoke a communist Chinese retaliation. Because the U.S. had sent the Seventh Fleet to "neutralize" the Taiwan Strait, Chinese premier Zhou Enlai criticized both the UN and U.S. initiatives as "armed aggression on Chinese territory."
The Battle of Osan, the first significant American engagement of the Korean War, involved the 540-soldier Task Force Smith, which was a small forward element of the 24th Infantry Division. On 5 July 1950, Task Force Smith attacked the North Koreans at Osan but without weapons capable of destroying the North Koreans' tanks. They were unsuccessful; the result was 180 dead, wounded, or taken prisoner. The KPA progressed southwards, pushing back the US force at Pyongtaek, Chonan, and Chochiwon, forcing the 24th Division's retreat to Taejeon, which the KPA captured in the Battle of Taejon; the 24th Division suffered 3,602 dead and wounded and 2,962 captured, including the Division's Commander, Major General William F. Dean. Overhead, the KPAF shot down 18 USAF fighters and 29 bombers; the USAF shot down five KPAF fighters.
By August, the KPA had pushed back the ROK Army and the Eighth United States Army to the vicinity of Pusan, in southeast Korea. In their southward advance, the KPA purged the Republic of Korea's intelligentsia by killing civil servants and intellectuals. On 20 August, General MacArthur warned North Korean leader Kim Il-Sung that he was responsible for the KPA's atrocities. By September, the UN Command controlled the Pusan perimeter, enclosing about 10% of Korea, in a line partially defined by the Nakdong River.
Although Kim's early successes had led him to predict that he would end the war by the end of August, Chinese leaders were more pessimistic. To counter the possibility of American invasion, Zhou Enlai secured a Soviet commitment to have the Soviet Union support Chinese forces with air cover, and deployed 260,000 soldiers along the Korean border, under the command of Gao Gang. Zhou commanded Chai Chengwen to conduct a topographical survey of Korea, and directed Lei Yingfu, Zhou's military advisor in Korea, to analyze the military situation in Korea. Lei concluded that MacArthur would most likely attempt a landing at Incheon. After conferring with Mao that this would be MacArthur's most likely strategy, Zhou briefed Soviet and North Korean advisers of Lei's findings, and issued orders to Chinese army commanders deployed on the Korean border to prepare for American naval activity in the Korea Strait.
Escalation (August – September 1950)
In the resulting Battle of Pusan Perimeter (August–September 1950), the U.S. Army withstood KPA attacks meant to capture the city at the Naktong Bulge, P'ohang-dong, and Taegu. The United States Air Force (USAF) interrupted KPA logistics with 40 daily ground support sorties that destroyed 32 bridges, halting most daytime road and rail traffic. KPA forces were forced to hide in tunnels by day and move only at night. To deny matériel to the KPA, the USAF destroyed logistics depots, petroleum refineries, and harbors, while the U.S. Navy air forces attacked transport hubs. Consequently, the over-extended KPA could not be supplied throughout the south.
Meanwhile, U.S. garrisons in Japan continually dispatched soldiers and matériel to reinforce defenders in the Pusan Perimeter. Tank battalions deployed to Korea directly from the United States mainland from the port of San Francisco to the port of Pusan, the largest Korean port. By late August, the Pusan Perimeter had some 500 medium tanks battle-ready. In early September 1950, ROK Army and UN Command forces outnumbered the KPA 180,000 to 100,000 soldiers. The UN forces, once prepared, counterattacked and broke out of the Pusan Perimeter.
Battle of Inchon (September 1950)
Against the rested and re-armed Pusan Perimeter defenders and their reinforcements, the KPA were undermanned and poorly supplied; unlike the UN Command, they lacked naval and air support. To relieve the Pusan Perimeter, General MacArthur recommended an amphibious landing at Inchon (now known as Incheon), well over 100 miles (160 km) behind the KPA lines. On 6 July, he ordered Major General Hobart R. Gay, Commander, 1st Cavalry Division, to plan the division's amphibious landing at Incheon; on 12–14 July, the 1st Cavalry Division embarked from Yokohama, Japan to reinforce the 24th Infantry Division inside the Pusan Perimeter.
Soon after the war began, General MacArthur had begun planning a landing at Incheon, but the Pentagon opposed him. When authorized, he activated a combined United States Army, United States Marine Corps, and ROK Army force. The X Corps, led by General Edward Almond, consisted of 40,000 men of the 1st Marine Division, the 7th Infantry Division, and around 8,600 ROK Army soldiers. By the 15 September attack date, the amphibious assault force faced few KPA defenders at Incheon: military intelligence, psychological warfare, guerrilla reconnaissance, and protracted bombardment facilitated a relatively light battle. However, the bombardment destroyed most of the city of Incheon.
After the Incheon landing, the 1st Cavalry Division began its northward advance from the Pusan Perimeter. "Task Force Lynch" (after Lieutenant Colonel James H. Lynch) —3rd Battalion, 7th Cavalry Regiment, and two 70th Tank Battalion units (Charlie Company and the Intelligence–Reconnaissance Platoon)— effected the "Pusan Perimeter Breakout" through 106.4 miles (171.2 km) of enemy territory to join the 7th Infantry Division at Osan. The X Corps rapidly defeated the KPA defenders around Seoul, thus threatening to trap the main KPA force in Southern Korea.
On 18 September, Stalin dispatched General H.M. Zakharov to Korea to advise Kim Il-sung to halt his offensive around the Pusan perimeter and to redeploy his forces to defend Seoul. Chinese commanders were not briefed on North Korean troop numbers or operational plans. As the overall commander of Chinese forces, Zhou Enlai suggested that the North Koreans should attempt to eliminate the enemy forces at Inchon only if they had reserves of at least 100,000 men; otherwise, he advised the North Koreans to withdraw their forces north.
On 25 September, Seoul was recaptured by South Korean forces. American air raids caused heavy damage to the KPA, destroying most of its tanks and much of its artillery. North Korean troops in the south, instead of effectively withdrawing north, rapidly disintegrated, leaving Pyongyang vulnerable. During the general retreat only 25,000 to 30,000 soldiers managed to rejoin the Northern KPA lines. On 27 September, Stalin convened an emergency session of the Politburo, in which he condemned the incompetence of the KPA command and held Soviet military advisers responsible for the defeat.
UN forces cross partition line (September – October 1950)
On 27 September, MacArthur received the top secret National Security Council Memorandum 81/1 from Truman reminding him that operations north of the 38th parallel were authorized only if "at the time of such operation there was no entry into North Korea by major Soviet or Chinese Communist forces, no announcements of intended entry, nor a threat to counter our operations militarily..." On 29 September MacArthur restored the government of the Republic of Korea under Syngman Rhee. On 30 September, Defense Secretary George Marshall sent an eyes-only message to MacArthur: "We want you to feel unhampered tactically and strategically to proceed north of the 38th parallel." During October, the ROK police executed people suspected of being sympathetic to North Korea, and similar massacres were carried out until early 1951.
On 30 September, Zhou Enlai warned the United States that China was prepared to intervene in Korea if the United States crossed the 38th parallel. Zhou attempted to advise North Korean commanders on how to conduct a general withdrawal by using the same tactics that had allowed Chinese communist forces to successfully escape Chiang Kai-shek's Encirclement Campaigns in the 1930s, but by some accounts North Korean commanders did not use these tactics effectively. Bruce Cumings argues, however, that the KPA's rapid withdrawal was strategic, with troops melting into the mountains from where they could launch guerrilla raids on the UN forces spread out on the coasts.
By 1 October 1950, the UN Command repelled the KPA northwards, past the 38th parallel; the ROK Army crossed after them, into North Korea. MacArthur made a statement demanding the KPA's unconditional surrender. Six days later, on 7 October, with UN authorization, the UN Command forces followed the ROK forces northwards. The X Corps landed at Wonsan (in southeastern North Korea) and Riwon (in northeastern North Korea), already captured by ROK forces. The Eighth United States Army and the ROK Army drove up western Korea and captured Pyongyang city, the North Korean capital, on 19 October 1950. The 187th Airborne Regimental Combat Team ("Rakkasans") made their first of two combat jumps during the Korean War on 20 October 1950 at Sunchon and Sukchon. The missions of the 187th were to cut the road north going to China, preventing North Korean leaders from escaping from Pyongyang; and to rescue American prisoners of war. At month's end, UN forces held 135,000 KPA prisoners of war.
Taking advantage of the UN Command's strategic momentum against the communists, General MacArthur believed it necessary to extend the Korean War into China to destroy depots supplying the North Korean war effort. President Truman disagreed, and ordered caution at the Sino-Korean border.
China intervenes (October – December 1950)
On 27 June 1950, two days after the KPA invaded and three months before the Chinese entered the war, President Truman dispatched the United States Seventh Fleet to the Taiwan Strait, to prevent hostilities between the Nationalist Republic of China (Taiwan) and the People's Republic of China (PRC). On 4 August 1950, with the PRC invasion of Taiwan aborted, Mao Zedong reported to the Politburo that he would intervene in Korea when the People's Liberation Army's (PLA) Taiwan invasion force was reorganized into the PLA North East Frontier Force. China justified its entry into the war as a response to "American aggression in the guise of the UN".
On 20 August 1950, Premier Zhou Enlai informed the United Nations that "Korea is China's neighbor... The Chinese people cannot but be concerned about a solution of the Korean question". Thus, through neutral-country diplomats, China warned that in safeguarding Chinese national security, they would intervene against the UN Command in Korea. President Truman interpreted the communication as "a bald attempt to blackmail the UN", and dismissed it.
1 October 1950, the day that UN troops crossed the 38th parallel, was also the first anniversary of the founding of the People's Republic of China. On that day the Soviet ambassador forwarded a telegram from Stalin to Mao and Zhou requesting that China send five to six divisions into Korea, and Kim Il-sung sent frantic appeals to Mao for Chinese military intervention. At the same time, Stalin made it clear that Soviet forces themselves would not directly intervene.
In a series of emergency meetings that lasted from 2–5 October, Chinese leaders debated whether to send Chinese troops into Korea. There was considerable resistance among many leaders, including senior military leaders, to confronting the United States in Korea. Mao strongly supported intervention, and Zhou was one of the few Chinese leaders who firmly supported him. After General Lin Biao refused Mao's offer to command Chinese forces in Korea (citing poor health), Mao called General Peng Dehuai to Beijing to hear his views. After listening to both sides' arguments, Peng supported Mao's position, and the Politburo agreed to intervene in Korea. Later, the Chinese claimed that US bombers had violated PRC national airspace on three separate occasions and attacked Chinese targets before China intervened. On 8 October 1950, Mao Zedong redesignated the PLA North East Frontier Force as the Chinese People's Volunteer Army (PVA).
In order to enlist Stalin's support, Zhou traveled to Stalin's summer resort on the Black Sea on 10 October. Stalin initially agreed to send military equipment and ammunition, but warned Zhou that the Soviet Union's air force would need two or three months to prepare any operations. In a subsequent meeting, Stalin told Zhou that he would only provide China with equipment on a credit basis, and that the Soviet air force would only operate over Chinese airspace, and only after an undisclosed period of time. Stalin did not agree to send either military equipment or air support until March 1951. Mao did not find Soviet air support especially useful, as the fighting was going to take place on the south side of the Yalu. Soviet shipments of matériel, when they did arrive, were limited to small quantities of trucks, grenades, machine guns, and the like.
Immediately on his return to Beijing on 18 October 1950, Zhou met with Mao Zedong, Peng Dehuai, and Gao Gang, and the group ordered two hundred thousand Chinese troops to enter North Korea, which they did on 25 October. After consulting with Stalin, on 13 November, Mao appointed Zhou the overall commander and coordinator of the war effort, with Peng as field commander. Orders given by Zhou were delivered in the name of the Central Military Commission.
UN aerial reconnaissance had difficulty sighting PVA units in daytime, because their march and bivouac discipline minimized aerial detection. The PVA marched "dark-to-dark" (19:00–03:00), and aerial camouflage (concealing soldiers, pack animals, and equipment) was deployed by 05:30. Meanwhile, daylight advance parties scouted for the next bivouac site. During daylight activity or marching, soldiers were to remain motionless if an aircraft appeared, until it flew away; PVA officers were under order to shoot security violators. Such battlefield discipline allowed a three-division army to march the 286 miles (460 km) from An-tung, Manchuria to the combat zone in some 19 days. Another division night-marched a circuitous mountain route, averaging 18 miles (29 km) daily for 18 days.
Meanwhile, on 10 October 1950, the 89th Tank Battalion was attached to the 1st Cavalry Division, increasing the armor available for the Northern Offensive. On 15 October, after moderate KPA resistance, the 7th Cavalry Regiment and Charlie Company, 70th Tank Battalion captured Namchonjam city. On 17 October, they flanked rightwards, away from the principal road (to Pyongyang), to capture Hwangju. Two days later, on 19 October 1950, the 1st Cavalry Division captured Pyongyang, the North's capital city.
On 15 October 1950, President Truman and General MacArthur met at Wake Island in the mid-Pacific Ocean. This meeting was much publicized because of the General's discourteous refusal to meet the President on the continental US. To President Truman, MacArthur speculated there was little risk of Chinese intervention in Korea, and that the PRC's opportunity for aiding the KPA had lapsed. He believed the PRC had some 300,000 soldiers in Manchuria, and some 100,000–125,000 soldiers at the Yalu River. He further concluded that, although half of those forces might cross south, "if the Chinese tried to get down to Pyongyang, there would be the greatest slaughter" without air force protection.
After secretly crossing the Yalu River on 19 October, the PVA 13th Army Group launched the First Phase Offensive on 25 October, attacking the advancing U.N. forces near the Sino-Korean border. This decision by China changed the attitude of the Soviet Union: twelve days after Chinese troops entered the war, Stalin allowed the Soviet Air Force to provide air cover and supported more aid to China. After decimating the ROK II Corps at the Battle of Onjong, the first confrontation between the Chinese and U.S. militaries occurred on 1 November 1950; deep in North Korea, thousands of soldiers from the PVA 39th Army encircled and attacked the US 8th Cavalry Regiment with three-pronged assaults—from the north, northwest, and west—and overran the defensive position flanks in the Battle of Unsan. The surprise assault resulted in the U.N. forces retreating back to the Ch'ongch'on River, while the Chinese unexpectedly disappeared into mountain hideouts following their victory. It is unclear why the Chinese did not press the attack and follow up on their victory.
The UN Command, however, was unconvinced that the Chinese had openly intervened, owing to the sudden Chinese withdrawal. On 24 November, the Home-by-Christmas Offensive was launched, with the U.S. Eighth Army advancing in northwest Korea while the US X Corps attacked along the Korean east coast. But the Chinese were waiting in ambush with their Second Phase Offensive.
On 25 November at the Korean western front, the PVA 13th Army Group attacked and overran the ROK II Corps at the Battle of the Ch'ongch'on River, and then decimated the US 2nd Infantry Division on the UN forces' right flank. The UN Command retreated; the U.S. Eighth Army's retreat (the longest in US Army history) was made possible by the Turkish Brigade's successful but very costly rear-guard delaying action near Kunuri, which slowed the PVA attack for two days (27–29 November). On 27 November at the Korean eastern front, a US 7th Infantry Division Regimental Combat Team (3,000 soldiers) and the U.S. 1st Marine Division (12,000–15,000 marines) were unprepared for the PVA 9th Army Group's three-pronged encirclement tactics at the Battle of Chosin Reservoir, but they managed to escape under Air Force and X Corps support fire—albeit with some 15,000 collective casualties.
By 30 November, the PVA 13th Army Group managed to expel the U.S. Eighth Army from northwest Korea. Retreating from the north faster than they had counter-invaded, the Eighth Army crossed the 38th parallel border in mid December. U.N. morale hit rock bottom when commanding General Walton Walker of the U.S. Eighth Army was killed on 23 December 1950 in an automobile accident. In northeast Korea by 11 December, the U.S. X Corps managed to cripple the PVA 9th Army Group while establishing a defensive perimeter at the port city of Hungnam. The X Corps were forced to evacuate by 24 December in order to reinforce the badly depleted U.S. Eighth Army to the south.
During the Hungnam evacuation, about 193 shiploads of U.N. Command forces and matériel (approximately 105,000 soldiers, 98,000 civilians, 17,500 vehicles, and 350,000 tons of supplies) were evacuated to Pusan. The SS Meredith Victory was noted for evacuating 14,000 refugees, the largest rescue operation by a single ship, even though it was designed to hold 12 passengers. Before escaping, the U.N. Command forces razed most of Hungnam city, especially the port facilities; and on 16 December 1950, President Truman declared a national emergency with Presidential Proclamation No. 2914, 3 C.F.R. 99 (1953), which remained in force until 14 September 1978.[b]
Fighting around the 38th parallel (January – June 1951)
With Lieutenant-General Matthew Ridgway assuming command of the U.S. Eighth Army on 26 December, the PVA and the KPA launched their Third Phase Offensive (also known as the "Chinese New Year's Offensive") on New Year's Eve of 1950. They employed night attacks in which U.N. Command fighting positions were encircled and then assaulted by numerically superior troops who had the element of surprise; the attacks were accompanied by loud trumpets and gongs, which served the double purpose of facilitating tactical communication and mentally disorienting the enemy. UN forces initially had no familiarity with this tactic, and as a result some soldiers panicked, abandoning their weapons and retreating to the south. The Chinese New Year's Offensive overwhelmed UN forces, allowing the PVA and KPA to capture Seoul for the second time on 4 January 1951.
These setbacks prompted General MacArthur to consider using nuclear weapons against the Chinese or North Korean interiors, with the intention that radioactive fallout zones would interrupt the Chinese supply chains. However, upon the arrival of the charismatic General Ridgway, the esprit de corps of the bloodied Eighth Army immediately began to revive.
U.N. forces retreated to Suwon in the west, Wonju in the center, and the territory north of Samcheok in the east, where the battlefront stabilized and held. The PVA had outrun its logistics capability and thus was unable to press on beyond Seoul, as food, ammunition, and matériel had to be carried nightly, on foot and by bicycle, from the border at the Yalu River to the three battle lines. In late January, upon finding that the PVA had abandoned their battle lines, General Ridgway ordered a reconnaissance-in-force, which became Operation Roundup (5 February 1951). This grew into a full-scale X Corps advance that fully exploited the UN Command's air superiority, concluding with UN forces reaching the Han River and recapturing Wonju.
After ceasefire negotiations failed in January, the United Nations General Assembly passed Resolution 498 on 1 February, condemning the PRC as an aggressor and calling upon its forces to withdraw from Korea.
In early February, the South Korean 11th Division ran an operation to destroy guerrillas and their sympathizers in southern Korea; during the operation, the division and police carried out the Geochang massacre and the Sancheong–Hamyang massacre. In mid-February, the PVA counterattacked with the Fourth Phase Offensive and achieved initial victory at Hoengseong. But the offensive was soon blunted by the IX Corps positions at Chipyong-ni in the center. Units of the U.S. 2nd Infantry Division and the French Battalion fought a short but desperate battle that broke the attack's momentum. The battle is sometimes known as the Gettysburg of the Korean War: 5,600 South Korean, American, and French troops defeated a numerically superior Chinese force. Surrounded on all sides, the US 2nd Infantry Division's 23rd Regimental Combat Team, with an attached French battalion, was hemmed in by more than 25,000 Chinese troops. United Nations forces had previously retreated in the face of large Communist formations rather than be cut off, but this time they stood and fought at odds of roughly 15 to 1.
In the last two weeks of February 1951, Operation Roundup was followed by Operation Killer, carried out by the revitalized Eighth Army. It was a full-scale, battlefront-length attack staged for maximum exploitation of firepower to kill as many KPA and PVA troops as possible. Operation Killer concluded with I Corps re-occupying the territory south of the Han River, and IX Corps capturing Hoengseong. On 7 March 1951, the Eighth Army attacked with Operation Ripper, expelling the PVA and the KPA from Seoul on 14 March 1951. This was the city's fourth conquest in a year's time, leaving it a ruin; the 1.5 million pre-war population was down to 200,000, and people were suffering from severe food shortages.
On 1 March 1951 Mao sent a cable to Stalin, in which he emphasized the difficulties faced by Chinese forces and the urgent need for air cover, especially over supply lines. Apparently impressed by the Chinese war effort, Stalin finally agreed to supply two air force divisions, three anti-aircraft divisions, and six thousand trucks. PVA troops in Korea continued to suffer severe logistical problems throughout the war. In late April Peng Dehuai sent his deputy, Hong Xuezhi, to brief Zhou Enlai in Beijing. What Chinese soldiers feared, Hong said, was not the enemy, but that they had nothing to eat, no bullets to shoot, and no trucks to transport them to the rear when they were wounded. Zhou attempted to respond to the PVA's logistical concerns by increasing Chinese production and improving methods of supply, but these efforts were never completely sufficient. At the same time, large-scale air defense training programs were carried out, and the Chinese Air Force began to participate in the war from September 1951 onward.
On 11 April 1951, Commander-in-Chief Truman relieved the controversial General MacArthur, the Supreme Commander in Korea. There were several reasons for the dismissal. MacArthur had crossed the 38th parallel in the mistaken belief that the Chinese would not enter the war, leading to major allied losses. He believed that whether or not to use nuclear weapons should be his own decision, not the President's. MacArthur threatened to destroy China unless it surrendered. While MacArthur felt total victory was the only honorable outcome, Truman was more pessimistic about his chances once involved in a land war in Asia, and felt a truce and orderly withdrawal from Korea could be a valid solution. MacArthur was the subject of congressional hearings in May and June 1951, which determined that he had defied the orders of the President and thus had violated the US Constitution. A popular criticism of MacArthur was that he never spent a night in Korea, and directed the war from the safety of Tokyo.
General Ridgway was appointed Supreme Commander, Korea; he regrouped the UN forces for successful counterattacks, while General James Van Fleet assumed command of the US Eighth Army. Further attacks slowly depleted the PVA and KPA forces; Operations Courageous (23–28 March 1951) and Tomahawk (23 March 1951) were a joint ground and airborne infiltration meant to trap Chinese forces between Kaesong and Seoul. UN forces advanced to "Line Kansas," north of the 38th parallel. The 187th Airborne Regimental Combat Team ("Rakkasans") made the second of their two combat jumps on Easter Sunday 1951 at Munsan-ni, South Korea, codenamed Operation Tomahawk; the mission was to get behind Chinese forces and block their movement north. The 60th Indian Parachute Field Ambulance provided medical cover for the operations, dropping an ADS and a surgical team and treating over 400 battle casualties, in addition to the civilian casualties that formed the core of their objective, as the unit was on a humanitarian mission.
The Chinese counterattacked in April 1951, with the Fifth Phase Offensive (also known as the "Chinese Spring Offensive") with three field armies (approximately 700,000 men). The offensive's first thrust fell upon I Corps, which fiercely resisted in the Battle of the Imjin River (22–25 April 1951) and the Battle of Kapyong (22–25 April 1951), blunting the impetus of the offensive, which was halted at the "No-name Line" north of Seoul. On 15 May 1951, the Chinese commenced the second impulse of the Spring Offensive and attacked the ROK Army and the US X Corps in the east at the Soyang River. After initial success, they were halted by 20 May. At month's end, the US Eighth Army counterattacked and regained "Line Kansas," just north of the 38th parallel. The UN's "Line Kansas" halt and subsequent offensive action stand-down began the stalemate that lasted until the armistice of 1953.
Stalemate (July 1951 – July 1953)
For the remainder of the Korean War the UN Command and the PVA fought, but exchanged little territory; the stalemate held. Large-scale bombing of North Korea continued, and protracted armistice negotiations began 10 July 1951 at Kaesong. On the Chinese side, Zhou Enlai directed peace talks, and Li Kenong and Qiao Guanghua headed the negotiation team. Combat continued while the belligerents negotiated; the UN Command forces' goal was to recapture all of South Korea and to avoid losing territory. The PVA and the KPA attempted similar operations, and later effected military and psychological operations in order to test the UN Command's resolve to continue the war.
The principal battles of the stalemate include the Battle of Bloody Ridge (18 August – 15 September 1951), the Battle of Heartbreak Ridge (13 September – 15 October 1951), the Battle of Old Baldy (26 June – 4 August 1952), the Battle of White Horse (6–15 October 1952), the Battle of Triangle Hill (14 October – 25 November 1952), the Battle of Hill Eerie (21 March – 21 June 1952), the sieges of Outpost Harry (10–18 June 1953), the Battle of the Hook (28–29 May 1953), the Battle of Pork Chop Hill (23 March – 16 July 1953), and the Battle of Kumsong (13–27 July 1953).
Chinese troops suffered from deficient military equipment, serious logistical problems, overextended communication and supply lines, and the constant threat of UN bombers. These factors generally led to a rate of Chinese casualties that was far greater than that suffered by UN troops. The situation became so serious that, in November 1951, Zhou Enlai called a conference in Shenyang to discuss the PVA's logistical problems. At the meeting it was decided to accelerate the construction of railways and airfields in the area, to increase the number of trucks available to the army, and to improve air defense by any means possible. These commitments did little to directly address the problems confronting PVA troops.
In the months after the Shenyang conference, Peng Dehuai went to Beijing several times to brief Mao and Zhou about the heavy casualties suffered by Chinese troops and the increasing difficulty of keeping the front lines supplied with basic necessities. Peng was convinced that the war would be protracted and that neither side would be able to achieve victory in the foreseeable future. On 24 February 1952, the Military Commission, presided over by Zhou, discussed the PVA's logistical problems with members of various government agencies involved in the war effort. After the government representatives emphasized their inability to meet the demands of the war, Peng, in an angry outburst, shouted: "You have this and that problem... You should go to the front and see with your own eyes what food and clothing the soldiers have! Not to speak of the casualties! For what are they giving their lives? We have no aircraft. We have only a few guns. Transports are not protected. More and more soldiers are dying of starvation. Can't you overcome some of your difficulties?" The atmosphere became so tense that Zhou was forced to adjourn the conference. Zhou subsequently called a series of meetings, where it was agreed that the PVA would be divided into three groups, to be dispatched to Korea in shifts; that the training of Chinese pilots would be accelerated; that more anti-aircraft guns would be provided to the front lines; that more military equipment and ammunition would be purchased from the Soviet Union; that the army would be provided with more food and clothing; and that responsibility for logistics would be transferred to the central government.
Armistice (July 1953 – November 1954)
The on-again, off-again armistice negotiations continued for two years, first at Kaesong (in southern North Korea), then relocated to Panmunjom (on the border between the two Koreas). A major, problematic negotiation point was prisoner of war (POW) repatriation: the PVA, KPA, and UN Command could not agree on a system of repatriation because many PVA and KPA soldiers refused to be repatriated to the north, which was unacceptable to the Chinese and North Koreans. In the final armistice agreement, signed on 27 July 1953, a Neutral Nations Repatriation Commission was set up to handle the matter.
In 1952, the US elected a new president, and on 29 November 1952, the president-elect, Dwight D. Eisenhower, went to Korea to learn what might end the Korean War. With the United Nations' acceptance of India's proposed Korean War armistice, the KPA, the PVA, and the UN Command ceased fire with the battle line approximately at the 38th parallel. Upon agreeing to the armistice, the belligerents established the Korean Demilitarized Zone (DMZ), which has since been patrolled by the KPA and ROKA, US, and Joint UN Commands.
The Demilitarized Zone runs northeast of the 38th parallel; to the south, it travels west. The old Korean capital city of Kaesong, site of the armistice negotiations, originally lay in the pre-war ROK, but is now in the DPRK. The United Nations Command (supported by the United States), the North Korean Korean People's Army, and the Chinese People's Volunteers signed the Armistice Agreement on 27 July 1953 to end the fighting. The Armistice also called upon the governments of South Korea, North Korea, China, and the United States to participate in continued peace talks. The war is considered to have ended at this point, even though there was no peace treaty. North Korea nevertheless claims that it won the Korean War.
After the war, Operation Glory (July–November 1954) was conducted to allow combatant countries to exchange their dead. The remains of 4,167 US Army and US Marine Corps dead were exchanged for 13,528 KPA and PVA dead, and the remains of 546 civilians who died in UN prisoner-of-war camps were delivered to the ROK government. After Operation Glory, 416 Korean War unknown soldiers were buried in the National Memorial Cemetery of the Pacific (The Punchbowl), on the island of Oahu, Hawaii. Defense Prisoner of War/Missing Personnel Office (DPMO) records indicate that the PRC and the DPRK transmitted 1,394 names, of which 858 were correct. From 4,167 containers of returned remains, forensic examination identified 4,219 individuals; of these, 2,944 were identified as American, and all but 416 were identified by name. From 1996 to 2006, the DPRK recovered 220 remains near the Sino-Korean border.
Division of Korea (1954–present)
The Korean Armistice Agreement provided for monitoring by an international commission. Since 1953, the Neutral Nations Supervisory Commission (NNSC), composed of members from the Swiss and Swedish Armed Forces, has been stationed near the DMZ.
In April 1975, South Vietnam's capital was captured by the North Vietnamese army. Encouraged by the success of the Communist revolution in Indochina, Kim Il-sung saw it as an opportunity to liberate the South. Kim visited China in April of that year and met with Mao Zedong and Zhou Enlai to ask for military aid. Despite Pyongyang's expectations, however, Beijing refused to support North Korea in another war in Korea.
Since the armistice, there have been numerous incursions and acts of aggression by North Korea. In 1976, the axe murder incident was widely publicized. Since 1974, four incursion tunnels leading to Seoul have been uncovered. In 2010, a North Korean submarine torpedoed and sank the South Korean corvette ROKS Cheonan, resulting in the deaths of 46 sailors. Again in 2010, North Korea fired artillery shells on Yeonpyeong island, killing two military personnel and two civilians.
After a new wave of U.N. sanctions, on 11 March 2013, North Korea claimed that it had invalidated the 1953 armistice. On 13 March 2013, North Korea confirmed it had ended the 1953 Armistice and declared that North Korea "is not restrained by the North-South declaration on non-aggression." On 30 March 2013, North Korea stated that it had entered a "state of war" with South Korea and declared that "The long-standing situation of the Korean peninsula being neither at peace nor at war is finally over." Speaking on 4 April 2013, United States Secretary of Defense Chuck Hagel informed the press that Pyongyang had "formally informed" the Pentagon that it had "ratified" the potential use of a nuclear weapon against South Korea, Japan, and the United States of America, including Guam and Hawaii. Hagel also stated that the US would deploy the Terminal High Altitude Area Defense anti-ballistic missile system to Guam because of a credible and realistic nuclear threat from North Korea.
According to the data from the U.S. Department of Defense, the United States suffered 33,686 battle deaths, along with 2,830 non-battle deaths during the Korean War and 8,176 missing in action. South Korea reported some 373,599 civilian and 137,899 military deaths. Western sources estimate the PVA suffered about 400,000 killed and 486,000 wounded, while the KPA suffered 215,000 killed and 303,000 wounded.
Data from official Chinese sources, on the other hand, reported that the PVA had suffered 114,000 battle deaths, 34,000 non-battle deaths, 340,000 wounded, 7,600 missing, and 21,400 captured during the war. Among those captured, about 14,000 defected to Taiwan while the other 7,110 were repatriated to China. Chinese sources also reported that North Korea had suffered 290,000 casualties, 90,000 captured, and a "large" number of civilian deaths. In turn, the Chinese and North Koreans estimated that about 390,000 soldiers from the United States, 660,000 soldiers from South Korea, and 29,000 other UN soldiers were "eliminated" from the battlefield.
Recent scholarship has put the full death toll on all sides at just over 1.2 million.
Initially, North Korean armor dominated the battlefield with Soviet T-34-85 medium tanks designed during the Second World War. The KPA's tanks confronted a tankless ROK Army armed with few modern anti-tank weapons, including American World War II–model 2.36-inch (60 mm) M9 bazookas, effective only against the 45 mm side armor of the T-34-85 tank. The US forces arriving in Korea were equipped with light M24 Chaffee tanks (on occupation duty in nearby Japan) that also proved ineffective against the heavier KPA T-34 tanks.
During the initial hours of warfare, some under-equipped ROK Army border units used American 105 mm howitzers as anti-tank guns to stop the tanks heading the KPA columns, firing high-explosive anti-tank ammunition (HEAT) over open sights to good effect; at the war's start, the ROK Army had 91 howitzers, but lost most to the invaders.
Countering the initial combat imbalance, the UN Command reinforcement matériel included heavier US M4 Sherman, M26 Pershing, M46 Patton, and British Cromwell and Centurion tanks that proved effective against North Korean armor, ending its battlefield dominance. Unlike in the Second World War (1939–45), in which the tank proved a decisive weapon, the Korean War featured few large-scale tank battles. The mountainous, heavily forested terrain prevented large masses of tanks from maneuvering. In Korea, tanks served largely as infantry support and mobile artillery pieces.
The Korean War was the first war in which jet aircraft played a central role. Once-formidable fighters such as the P-51 Mustang, F4U Corsair, and Hawker Sea Fury—all piston-engined, propeller-driven, and designed during World War II—relinquished their air superiority roles to a new generation of faster, jet-powered fighters arriving in the theater. For the initial months of the war, the P-80 Shooting Star, F9F Panther, and other jets under the UN flag dominated North Korea's prop-driven air force of Soviet Yakovlev Yak-9 and Lavochkin La-9s. The balance would shift with the arrival of the swept wing Soviet MiG-15 Fagot.
The Chinese intervention in late October 1950 bolstered the Korean People's Air Force (KPAF) of North Korea with the MiG-15 Fagot, one of the world's most advanced jet fighters. The fast, heavily armed MiG outflew first-generation UN jets such as the American F-80 and the Australian and British Gloster Meteors, posing a real threat to B-29 Superfortress bombers even under fighter escort. Soviet Air Force pilots flew missions for the North to learn the West's aerial combat techniques. This direct Soviet participation was a casus belli that the UN Command deliberately overlooked, lest the war for the Korean peninsula expand, as the US initially feared, to include three communist countries—North Korea, the Soviet Union, and China—and so escalate to atomic warfare.
The USAF moved quickly to counter the MiG-15, with three squadrons of its most capable fighter, the F-86 Sabre, arriving in December 1950. Although the MiG's higher service ceiling—50,000 feet (15,000 m) vs. 42,000 feet (13,000 m)—could be advantageous at the start of a dogfight, in level flight, both swept wing designs attained comparable maximum speeds of around 660 mph (1,100 km/h). The MiG climbed faster, but the Sabre turned and dived better. The MiG was armed with one 37 mm and two 23 mm cannons, while the Sabre carried six .50 caliber (12.7 mm) machine guns aimed with radar-ranged gunsights.
By early 1951, the battle lines were established and changed little until 1953. In summer and autumn 1951, the outnumbered Sabres of the USAF's 4th Fighter Interceptor Wing—only 44 at one point—continued seeking battle in MiG Alley, where the Yalu River marks the Chinese border, against Chinese and North Korean air forces capable of deploying some 500 aircraft. Following Colonel Harrison Thyng's communication with the Pentagon, the 51st Fighter-Interceptor Wing finally reinforced the beleaguered 4th Wing in December 1951; for the next year-and-a-half stretch of the war, aerial warfare continued.
UN forces gained air superiority in the Korean theater after the initial months of the war and maintained it for the duration. This was decisive for the UN: first for attacking into the peninsular north, and second for resisting the Chinese intervention. North Korea and China also had jet-powered air forces, but their pilots' limited training and experience made it strategically untenable to risk them against the better-trained UN air forces. Thus, the United States and the Soviet Union fed matériel into the war, battling by proxy and finding themselves virtually matched technologically when the USAF deployed the F-86F against the MiG-15 late in 1952.
Unlike the Vietnam War, in which the Soviet Union officially sent only 'advisers', in the Korean aerial war Soviet forces participated directly via the 64th Fighter Aviation Corps. Soviet pilots were officially credited with downing 1,106 enemy airplanes; 52 of them earned the title of 'ace' with more than 5 confirmed kills. Since the Soviet system of confirming air kills erred on the conservative side – a pilot's word was never taken into account without corroboration from other witnesses, and enemy airplanes falling into the sea were not counted – the number might exceed 1,106.
After the war, and to the present day, the USAF reports an F-86 Sabre kill ratio in excess of 10:1, with 792 MiG-15s and 108 other aircraft shot down by Sabres, and 78 Sabres lost to enemy fire. The Soviet Air Force reported some 1,100 air-to-air victories and 335 MiG combat losses, while China's People's Liberation Army Air Force (PLAAF) reported 231 combat losses, mostly MiG-15s, and 168 other aircraft lost. The KPAF reported no data, but the UN Command estimates some 200 KPAF aircraft lost in the war's first stage, and 70 additional aircraft after the Chinese intervention. The USAF disputes Soviet and Chinese claims of 650 and 211 downed F-86s, respectively. However, one unconfirmed source claims that the US Air Force has more recently cited 230 losses out of 674 F-86s deployed to Korea. The differing tactical roles of the F-86 and MiG-15 may have contributed to the disparity in losses: MiG-15s primarily targeted B-29 bombers and ground-attack fighter-bombers, while F-86s targeted the MiGs.
The Korean War marked a major milestone not only for fixed-wing aircraft, but also for rotorcraft, featuring the first large-scale deployment of helicopters for medical evacuation (medevac). In 1944–1945, during the Second World War, the YR-4 helicopter saw limited ambulance duty, but in Korea, where rough terrain trumped the jeep as a speedy medevac vehicle, helicopters like the Sikorsky H-19 helped reduce fatal casualties to a dramatic degree when combined with complementary medical innovations such as Mobile Army Surgical Hospitals. The limitations of jet aircraft for close air support highlighted the helicopter's potential in the role, leading to development of the AH-1 Cobra and other helicopter gunships used in the Vietnam War (1965–75).
Bombing North Korea
On 12 August 1950, the USAF dropped 625 tons of bombs on North Korea; two weeks later, the daily tonnage increased to some 800 tons. U.S. warplanes dropped more napalm and bombs on North Korea than they did during the whole Pacific campaign of World War II.
As a result, almost every substantial building in North Korea was destroyed. The war's highest-ranking American POW, US Major General William F. Dean, reported that most of the North Korean cities and villages he saw were either rubble or snow-covered wastelands. US Air Force General Curtis LeMay commented, "we burned down every town in North Korea and South Korea, too."
Because neither Korea had a large navy, the Korean War featured few naval battles; mostly the combatant navies served as naval artillery for their in-country armies. A skirmish between North Korea and the UN Command occurred on 2 July 1950; the US Navy cruiser USS Juneau, the Royal Navy cruiser HMS Jamaica, and the frigate HMS Black Swan fought four North Korean torpedo boats and two mortar gunboats, and sank them.
During most of the war, the UN navies patrolled the west and east coasts of North Korea and sank supply and ammunition ships to deny the sea to North Korea. Aside from very occasional gunfire from North Korean shore batteries, the main threat to US and UN navy ships was from magnetic mines the North Koreans employed for defensive purposes. During the war, five U.S. Navy ships were lost (two minesweepers, two minesweeper escorts, and one ocean tug) all of them to mines, while 87 other warships suffered from slight to moderate damage from North Korean coastal artillery.
The USS Juneau sank ammunition ships that had been present in her previous battle. The last sea battle of the Korean War occurred at Inchon, days before the Battle of Incheon; the ROK ship PC 703 sank a North Korean mine layer in the Battle of Haeju Island, near Inchon. Three other supply ships were sunk by PC-703 two days later in the Yellow Sea.
U.S. threat of atomic warfare
On 5 April 1951, the Joint Chiefs of Staff (JCS) issued orders for the retaliatory atomic bombing of Manchurian PRC military bases, if either their armies crossed into Korea or if PRC or KPA bombers attacked Korea from there. The President ordered the transfer of nine Mark 4 nuclear bombs "to the Air Force's Ninth Bomb Group, the designated carrier of the weapons ... [and] signed an order to use them against Chinese and Korean targets", which he never transmitted.
Many American officials viewed the deployment of nuclear-capable (but not nuclear-armed) B-29 bombers to Britain as helping to resolve the Berlin Blockade of 1948–49. Truman and Eisenhower both had military experience and viewed nuclear weapons as potentially usable components of their military. During Truman's first meeting to discuss the war on 25 June 1950, he ordered plans be prepared for attacking Soviet forces if they entered the war. By July, Truman approved another B-29 deployment to Britain, this time with bombs (but without their cores), to remind the Soviets of American offensive ability. Deployment of a similar fleet to Guam was leaked to The New York Times. As United Nations forces retreated to Pusan, and the CIA reported that mainland China was building up forces for a possible invasion of Taiwan, the Pentagon believed that Congress and the public would demand using nuclear weapons if the situation in Korea required them.
As Chinese forces pushed back the United States forces from the Yalu River, Truman stated during a 30 November 1950 press conference that the use of nuclear weapons had "always been [under] active consideration", with control under the local military commander. The Indian Ambassador, K. Madhava Panikkar, reports "that Truman announced that he was thinking of using the atom bomb in Korea. But the Chinese seemed totally unmoved by this threat ... The propaganda against American aggression was stepped up. The 'Aid Korea to resist America' campaign was made the slogan for increased production, greater national integration, and more rigid control over anti-national activities. One could not help feeling that Truman's threat came in very useful to the leaders of the Revolution, to enable them to keep up the tempo of their activities."
After his statement caused concern in Europe, Truman met on 4 December 1950 with UK prime minister and Commonwealth spokesman Clement Attlee, French Premier René Pleven, and Foreign Minister Robert Schuman to discuss their worries about atomic warfare and its likely continental expansion. The US's forgoing atomic warfare was not because of "a disinclination by the Soviet Union and People's Republic of China to escalate" the Korean War, but because UN allies—notably from the UK, the Commonwealth, and France—were concerned about a geopolitical imbalance rendering NATO defenseless while the US fought China, who then might persuade the Soviet Union to conquer Western Europe. The Joint Chiefs of Staff advised Truman to tell Attlee that the United States would only use nuclear weapons if necessary to protect an evacuation of UN troops, or to prevent a "major military disaster".
On 6 December 1950, after the Chinese intervention repelled the UN Command armies from northern North Korea, General J. Lawton Collins (Army Chief of Staff), General MacArthur, Admiral C. Turner Joy, General George E. Stratemeyer, and staff officers Major General Doyle Hickey, Major General Charles A. Willoughby, and Major General Edwin K. Wright met in Tokyo to plan strategy countering the Chinese intervention; they considered three potential atomic warfare scenarios encompassing the next weeks and months of warfare.
- In the first scenario: if the PVA continued attacking in full and the UN Command were forbidden to blockade and bomb China, without ROC reinforcements and without an increase in US forces until April 1951 (when four National Guard divisions were due to arrive), then atomic bombs might be used in North Korea.
- In the second scenario: if the PVA continued full attacks and the UN Command had blockaded China, had effective aerial reconnaissance and bombing of the Chinese interior, maximally exploited the ROC soldiers, and had tactical atomic bombing to hand, then the UN forces could hold positions deep in North Korea.
- In the third scenario: if the PRC agreed not to cross the 38th parallel border, General MacArthur recommended UN acceptance of an armistice disallowing PVA and KPA troops south of the parallel, and requiring PVA and KPA guerrillas to withdraw northwards. The US Eighth Army would remain to protect the Seoul–Incheon area, while X Corps would retreat to Pusan. A UN commission should supervise implementation of the armistice.
Both the Pentagon and the State Department were nonetheless cautious about using nuclear weapons due to the risk of general war with China and the diplomatic ramifications. Truman and his senior advisors agreed, and never seriously considered using them in early December 1950 despite the poor military situation in Korea.
The US came closest to using atomic weapons in Korea in 1951. Because the PRC had deployed new armies to the Sino-Korean frontier, pit crews at Kadena Air Base, Okinawa, assembled atomic bombs for Korean warfare, "lacking only the essential pit nuclear cores." In October 1951, the US effected Operation Hudson Harbor to establish nuclear weapons capability. USAF B-29 bombers practised individual bombing runs from Okinawa to North Korea (using dummy nuclear or conventional bombs), coordinated from Yokota Air Base in east-central Japan. Hudson Harbor tested "actual functioning of all activities which would be involved in an atomic strike, including weapons assembly and testing, leading, ground control of bomb aiming". The bombing run data indicated that atomic bombs would be tactically ineffective against massed infantry, because the "timely identification of large masses of enemy troops was extremely rare."
Ridgway was authorized to use nuclear weapons if a major air attack originated from outside Korea. An envoy was sent to Hong Kong to deliver a warning to China. The message likely caused Chinese leaders to be more cautious about potential American use of nuclear weapons, but whether they learned about the B-29 deployment is unclear, and the failure of the two major Chinese offensives that month was likely what caused them to shift to a defensive strategy in Korea. The B-29s returned to the United States in June.
When Eisenhower succeeded Truman in early 1953 he was similarly cautious about using nuclear weapons in Korea, including for diplomatic purposes to encourage progress in the ongoing truce discussions. The administration prepared contingency plans for using them against China, but like Truman, the new president feared that doing so would result in Soviet attacks on Japan. The war ended as it had begun, without American nuclear weapons deployed near battle.
Civilian deaths and massacres
There were numerous atrocities and massacres of civilians throughout the Korean War committed by both the North and South Koreans. Many of these began in the first days of the war. South Korean President Syngman Rhee ordered the Bodo League massacre on 28 June, beginning numerous killings of more than 100,000 suspected leftist sympathizers and their families by South Korean officials and right-wing groups. During the massacre, the British protested to their allies and saved some citizens.
In occupied areas, North Korean Army political officers purged South Korean society of its intelligentsia by executing every educated person—academic, governmental, religious—who might lead resistance against the North; the purges continued during the KPA retreat.
R. J. Rummel estimated that the North Korean Army executed at least 500,000 civilians in their drive to conscript South Koreans into their war effort. When the North Koreans retreated north in September 1950, they abducted tens of thousands of South Korean men. The reasons are not clear, but many of the victims had skills or had been arrested as right-wing activists.
In addition to conventional military operations, North Korean soldiers fought the UN forces by infiltrating guerrillas among refugees. These soldiers, disguised as refugees, would approach UN forces asking for food and help, then open fire and attack. U.S. troops acted under a "shoot-first-ask-questions-later" policy against any civilian refugee approaching U.S. battlefield positions, a policy that led U.S. soldiers to kill an estimated 400 civilians at No Gun Ri (26–29 July 1950) in central Korea because they believed some of the refugees to be North Korean soldiers in disguise. The South Korean Truth and Reconciliation Commission defended this policy as a "military necessity".
Beginning in 2005, the South Korean Truth and Reconciliation Commission has investigated numerous atrocities committed by the Japanese Colonial government and the authoritarian South Korean governments that followed it. It has investigated atrocities before, during and after the Korean War.
Some of the worst pre-Korean War violence involved the Jeju Uprising (1948–49). The Commission has verified that over 14,000 civilians were killed in the brutal fighting between South Korean military and paramilitary units and pro-North Korean guerrillas. Although most of the fighting had subsided by 1949, it continued until 1950. The Commission estimates that 86% of the civilians were killed by South Korean forces. The Americans on the island documented the events but never intervened.
Recently declassified US documents show that the South Koreans massacred entire families of leftists near Daejeon. Many of the victims were members of the Bodo League. The Truth and Reconciliation Commission estimates that at least 100,000 people, and possibly more, were executed in the summer of 1950. The victims include political prisoners, civilians who were killed by US forces, civilians who allegedly collaborated with communist North Korea or local communist groups, and civilians killed by communist insurgents. Disturbingly, the Commission has found evidence that both the South Korean government and the leftists murdered the children of their enemies.
Prisoners of war
The KPA killed POWs at the battles for Hill 312, Hill 303, the Pusan Perimeter, and Daejeon—discovered during early after-battle mop-up actions by the UN forces. Later, a US Congress war crimes investigation, the United States Senate Subcommittee on Korean War Atrocities of the Permanent Subcommittee of the Investigations of the Committee on Government Operations, reported that "... two-thirds of all American prisoners of war in Korea died as a result of war crimes".
Although the Chinese rarely executed prisoners like their Korean counterparts, mass starvation and disease swept through the Chinese-run POW camps during the winter of 1950–51. About 43 percent of all US POWs died during this period. The Chinese defended their actions by stating that all Chinese soldiers during this period were suffering mass starvation and disease due to logistical difficulties. The UN POWs pointed out that most of the Chinese camps were located near the easily supplied Sino-Korean border, and that the Chinese withheld food to force the prisoners to accept communist indoctrination programs.
North Korea may have detained up to 50,000 South Korean POWs after the ceasefire.:141 Over 88,000 South Korean soldiers were missing and the Communists themselves claimed to have captured 70,000 South Koreans.:142 However, when ceasefire negotiations began in 1951, the Communists reported they held only 8,000 South Koreans. The UN Command protested the discrepancies and alleged the Communists were forcing South Korean POWs to join the KPA.
The Communists denied such allegations. They claimed their POW rosters were small because many POWs were killed in UN air raids and they had released ROK soldiers at the front. They insisted that only volunteers were allowed to serve in the KPA.:143 By early 1952, UN negotiators gave up trying to get back the missing South Koreans. The POW exchange proceeded without access to South Korean POWs not on the Communist rosters.
North Korea continued to claim that any South Korean POW who stayed in the North did so voluntarily. However, since 1994, South Korean POWs have been escaping North Korea on their own after decades of captivity. As of 2010, the South Korean Ministry of Unification reported that 79 ROK POWs had escaped the North. The South Korean government estimates 500 South Korean POWs continue to be detained in North Korea.
The escaped POWs have testified about their treatment and written memoirs about their lives in North Korea. They report that they were not told about the POW exchange procedures, and were assigned to work in mines in the remote northeastern regions near the Chinese and Russian border.:31 Declassified Soviet Foreign Ministry documents corroborate such testimony.
The Korean Central News Agency reported that the UN forces killed some 33,600 KPA POWs; that on 19 July 1951, in POW Camp No. 62, some 100 POWs were killed as machine-gunnery targets; and that on 27 May 1952, in the 77th Camp, Koje Island (now in Geoje), the ROK Army incinerated with flamethrowers some 800 KPA POWs who rejected "voluntary repatriation" south, and instead demanded repatriation north.
In 1997 the Geoje POW Camp in South Korea was turned into a memorial.
In December 1950, the South Korean National Defense Corps was founded; its soldiers were 406,000 drafted citizens. In the winter of 1951, 50,000 to 90,000 South Korean National Defense Corps soldiers starved to death while marching southward under the Chinese offensive, after their commanding officers embezzled funds earmarked for their food. This event is called the National Defense Corps Incident. There is no evidence that Syngman Rhee was personally involved in or benefited from the corruption.
In 1950, Secretary of Defense George C. Marshall and Secretary of the Navy Francis P. Matthews called on the USO, which had been disbanded by 1947, to provide support for U.S. servicepersons. By the end of the war, more than 113,000 American USO volunteers were working on the home front and abroad. Many stars came to Korea to perform for the troops. Throughout the Korean War, UN Comfort Stations were operated by South Korean officials for UN soldiers.
Mao Zedong's decision to involve China in the Korean War was a conscious effort to confront the most powerful country in the world, undertaken at a time when the regime was still consolidating its own power after winning the Chinese Civil War. Mao primarily supported intervention not to save North Korea or to appease the Soviet Union, but because he believed that a military conflict with the United States was inevitable after UN forces crossed the 38th parallel. Mao's secondary motive was to improve his own prestige inside the communist international community by demonstrating that his Marxist concerns were international. In his later years Mao believed that Stalin only gained a positive opinion of him after China's entrance into the Korean War. Inside China, the war improved the long-term prestige of Mao, Zhou, and Peng.
China emerged from the Korean War united by a sense of national pride, despite the war's enormous costs. The Chinese public widely view the war as having been initiated by the United States and South Korea. In Chinese media, the Chinese war effort is portrayed as an example of China's engaging the strongest power in the world with an under-equipped army, forcing it to retreat, and fighting it to a military stalemate. These successes were contrasted with China's historical humiliations by Japan and by Western powers over the previous hundred years, highlighting the abilities of the PLA and the CCP. The most significant negative long-term consequence of the war for China was that it led the United States to guarantee the safety of Chiang Kai-shek's regime in Taiwan, effectively ensuring that Taiwan would remain outside of PRC control until the present day.
Racial integration efforts in the U.S. military began during the Korean War, where African Americans fought in integrated units for the first time. Among the 1.8 million American soldiers who fought in the Korean War there were more than 100,000 African Americans.
Post-war recovery was different in the two Koreas. South Korea stagnated in the first post-war decade. In 1953, South Korea and the United States concluded a Mutual Defense Treaty. In 1960, the April Revolution occurred: students joined anti-Syngman Rhee demonstrations, 142 were killed by police, and in consequence Syngman Rhee resigned and went into exile in the United States. Park Chung-hee's May 16 coup enabled social stability. In the 1960s, "western princesses" (sex workers serving U.S. soldiers) earned 25 percent of South Korean GNP with the help of their military government. During 1965–1973, South Korea dispatched troops to Vietnam and received $235,560,000 in allowances and military procurement from the US. GNP increased fivefold during the Vietnam War. South Korea industrialized and modernized. Contemporary North Korea remains underdeveloped, but its external debt is 30 times lower than that of South Korea. South Korea had one of the world's fastest growing economies from the early 1960s to the late 1990s. In 1957 South Korea had a lower per capita GDP than Ghana, and by 2010 it was ranked thirteenth in the world (Ghana was 86th).
Post-war, about 100,000 North Koreans were executed in purges. According to Rummel, forced labor and concentration camps were responsible for over one million deaths in North Korea from 1945 to 1987; others have estimated 400,000 deaths in concentration camps alone. Estimates based on the most recent North Korean census suggest that 240,000 to 420,000 people died as a result of the 1990s North Korean famine and that there were 600,000 to 850,000 unnatural deaths in North Korea from 1993 to 2008. The North Korean government has been accused of "crimes against humanity" for its alleged culpability in creating and prolonging the 1990s famine. A study by South Korean anthropologists of North Korean children who had defected to China found that 18-year-old males were 5 inches shorter than South Koreans their age due to malnutrition.
Korean anti-Americanism after the war was fueled by the presence and behavior of American military personnel (USFK) and U.S. support for the authoritarian regime, a fact still evident during the country's democratic transition in the 1980s. However, anti-Americanism has declined significantly in South Korea in recent years, from 46% favorable in 2003 to 74% favorable in 2011, making South Korea one of the most pro-American countries in the world.
In addition, a large number of mixed-race 'G.I. babies' (offspring of U.S. and other UN soldiers and Korean women) were filling the country's orphanages. Korean traditional society places significant weight on paternal family ties, bloodlines, and purity of race. Children of mixed race or those without fathers are not easily accepted in South Korean society. International adoption of Korean children began in 1954. The U.S. Immigration Act of 1952 legalized the naturalization of non-whites as American citizens and made possible the entry of military spouses and children from South Korea after the Korean War. With the passage of the Immigration Act of 1965, which substantially changed U.S. immigration policy toward non-Europeans, Koreans became one of the fastest growing Asian groups in the United States.
- As per the armistice agreement of 1953, the opposing sides had to "insure a complete cessation of hostilities and of all acts of armed force in Korea until a final peaceful settlement is achieved".
- See 50 U.S.C. § 1601: "All powers and authorities possessed by the President, any other officer or employee of the Federal Government, or any executive agency... as a result of the existence of any declaration of national emergency in effect on 14 September 1976 are terminated two years from 14 September 1976."; Jolley v. INS, 441 F.2d 1245, 1255 n.17 (5th Cir. 1971).
- "Cinnost CSLA za valky v Koreji... | Ross Hedvicek ... Nastenka AgitProp" (in Czech). Hedvicek.blog.cz. 27 July 1953. Retrieved 7 November 2011.
- "Romania’s "Fraternal Support" to North Korea during the Korean War, 1950-1953". Wilson Centre. Retrieved 24 January 2013.
- Millett, Allan Reed, ed. (2001). The Korean War, Volume 3. Korea Institute of Military History. U of Nebraska Press. p. 692. ISBN 9780803277960. Retrieved 16 February 2013. "Total Strength 602,902 troops"
- Tim Kane (27 October 2004). "Global U.S. Troop Deployment, 1950-2003". Reports. The Heritage Foundation. Retrieved 15 February 2013.
Ashley Rowland (22 October 2008). "U.S. to keep troop levels the same in South Korea". Stars and Stripes. Retrieved 16 February 2013.
Colonel Tommy R. Mize, United States Army (12 March 2012). "U.S. Troops Stationed in South Korea, Anachronistic?". United States Army War College. Defense Technical Information Center. Retrieved 16 February 2013.
Louis H. Zanardi; Barbara A. Schmitt; Peter Konjevich; M. Elizabeth Guran; Susan E. Cohen; Judith A. McCloskey (August 1991). "Military Presence: U.S. Personnel in the Pacific Theater". Reports to Congressional Requesters. United States General Accounting Office. Retrieved 15 February 2013.
- USFK Public Affairs Office. "United Nations Command". United States Forces Korea. United States Department of Defense. Retrieved 17 February 2013. "Republic of Korea -- 590,911
Colombia -- 1,068
United States -- 302,483
Belgium -- 900
United Kingdom -- 14,198
South Africa -- 826
Canada -- 6,146
The Netherlands -- 819
Turkey -- 5,453
Luxembourg -- 44
Australia -- 2,282
Philippines -- 1,496
New Zealand -- 1,385
Thailand -- 1,204
Ethiopia -- 1,271
Greece -- 1,263
France -- 1,119"
- Rottman, Gordon L. (2002). Korean War Order of Battle: United States, United Nations, and Communist Ground, Naval, and Air Forces, 1950-1953. Greenwood Publishing Group. p. 126. ISBN 9780275978358. Retrieved 16 February 2013. "A peak strength of 14,198 British troops was reached in 1952, with over 40 total serving in Korea."
"UK-Korea Relations". British Embassy Pyongyang. Foreign and Commonwealth Office. 9 February 2012. Retrieved 16 February 2013. "When war came to Korea in June 1950, Britain was second only to the United States in the contribution it made to the UN effort in Korea. 87,000 British troops took part in the Korean conflict, and over 1,000 British servicemen lost their lives"
Jack D. Walker. "A Brief Account of the Korean War". Information. Korean War Veterans Association. Retrieved 17 February 2013. "Other countries to furnish combat units, with their peak strength, were: Australia (2,282), Belgium/Luxembourg (944), Canada (6,146), Colombia (1,068), Ethiopia (1,271), France (1,119), Greece (1,263), Netherlands (819), New Zealand (1,389), Philippines (1,496), Republic of South Africa (826), Thailand (1,294), Turkey (5,455), and the United Kingdom (Great Britain 14,198)."
- "Land of the Morning Calm: Canadians in Korea 1950 - 1953". Veterans Affairs Canada. Government of Canada. 7 January 2013. Retrieved 22 February 2013. "Peak Canadian Army strength in Korea was 8,123 all ranks."
- Edwards, Paul M. (2006). Korean War Almanac. Almanacs of American wars. Infobase Publishing. p. 517. ISBN 9780816074679. Retrieved 22 February 2013.
- "Casualties of Korean War". Ministry of National Defense of Republic of Korea. Retrieved 14 February 2007.
- Zhang 1995, p. 257.
- Shrader, Charles R. (1995). Communist Logistics in the Korean War. Issue 160 of Contributions in Military Studies. Greenwood Publishing Group. p. 90. ISBN 9780313295096. Retrieved 17 February 2013. "NKPA strength peaked in October 1952 at 266,600 men in eighteen divisions and six independent brigades."
- Kolb, Richard K. (1999). "In Korea we whipped the Russian Air Force". VFW Magazine (Veterans of Foreign Wars) 86 (11). Retrieved 17 February 2013. "Soviet involvement in the Korean War was on a large scale. During the war, 72,000 Soviet troops (among them 5,000 pilots) served along the Yalu River in Manchuria. At least 12 air divisions rotated through. A peak strength of 26,000 men was reached in 1952."
- "U.S. Military Casualties - Korean War Casualty Summary". Defense Casualty Analysis System. United States Department of Defense. 5 February 2013. Retrieved 6 February 2013.
- "Summary Statistics". Defense POW/Missing Personnel Office. United States Department of Defense. 24 January 2013. Retrieved 6 February 2013.
- "Records of American Prisoners of War During the Korean War, created, 1950 - 1953, documenting the period 1950 - 1953". Access to Archival Databases. National Archives and Records Administration. Retrieved 6 February 2013. "This series has records for 4,714 U.S. military officers and soldiers who were prisoners of war (POWs) during the Korean War and therefore considered casualties."
- Office of the Defence Attaché (30 September 2010). "Korean war". British Embassy Seoul. Foreign and Commonwealth Office. Retrieved 16 February 2013.
- Australian War Memorial Korea MIA Retrieved 17 March 2012
- "Korean War WebQuest". Veterans Affairs Canada. Government of Canada. 11 October 2011. Retrieved 28 May 2013. "In Brampton, Ontario, there is a 60 metre long "Memorial Wall" of polished granite, containing individual bronze plaques which commemorate the 516 Canadian soldiers who died during the Korean War."
"Canada Remembers the Korean War". Veterans Affairs Canada. Government of Canada. 1 March 2013. Retrieved 27 May 2013. "The names of 516 Canadians who died in service during the conflict are inscribed in the Korean War Book of Remembrance located in the Peace Tower in Ottawa."
- Aiysha Abdullah; Kirk Fachnie (6 December 2010). "Korean War veterans talk of "forgotten war"". Canadian Army. Government of Canada. Retrieved 28 May 2013. "Canada lost 516 military personnel during the Korean War and 1,042 more were wounded."
"Canadians in the Korean War". kvacanada.com. Korean Veterans Association of Canada Inc. Retrieved 28 May 2013. "Canada's casualties totalled 1,558 including 516 who died."
"2013 declared year of Korean war veteran". MSN News. The Canadian Press. 8 January 013. Retrieved 28 May 2013. "The 1,558 Canadian casualties in the three-year conflict included 516 people who died."
- Ted Barris (1 July 2003). "Canadians in Korea". legionmagazine.com. Royal Canadian Legion. Retrieved 28 May 2013. "Not one of the 33 Canadian PoWs imprisoned in North Korea signed the petitions."
"Behind barbed wire". CBC News. 29 September 2003. Retrieved 28 May 2013.
- Sandler, Stanley, ed. (2002). Ground Warfare: H-Q. Volume 2 of Ground Warfare: An International Encyclopedia. ABC-CLIO. p. 160. ISBN 9781576073445. Retrieved 19 March 2013. "Philippines: KIA 92; WIA 299; MIA/POW 97
New Zealand: KIA 34; WIA 299; MIA/POW 1"
- "Two War Reporters Killed". The Times (London, United Kingdom). 14 August 1950. ISSN 0140-0460.
- Rummel, Rudolph J. (1997). Statistics of Democide: Genocide and Murder Since 1900. Chapter 10, Statistics Of North Korean Democide Estimates, Calculations, And Sources. ISBN 978-3-8258-4010-5.
- Hickey, Michael. "The Korean War: An Overview". Retrieved 31 December 2011.
- Li, Xiaobing (2007). A History of the Modern Chinese Army. Lexington, KY: University Press of Kentucky. p. 111. ISBN 978-0-8131-2438-4.
- Krivošeev, Grigorij F. (1997). Soviet Casualties and Combat Losses in the Twentieth Century. London: Greenhill. ISBN 1-85367-280-7.
- "US State Department statement regarding 'Korea: Neutral Nations Supervisory Commission' and the Armistice Agreement 'which ended the Korean War'". FAS. Retrieved 4 January 2011.
- "Text of the Korean War Armistice Agreement". FindLaw. 27 July 1953. Retrieved 26 November 2011.[dead link]
- "North Korea enters 'state of war' with South". BBC News. 30 March 2013. Retrieved 30 March 2013.
- Boose, Donald W. (Winter 1995–96). "Portentous Sideshow: The Korean Occupation Decision". Parameters: US Army War College Quarterly (US Army War College) 5 (4): 112–129. OCLC 227845188.
- Devine, Robert A.; Breen, T.H; Frederickson, George M; Williams, R Hal; Gross, Adriela J; Brands, H.W (2007). America Past and Present. II: Since 1865 (8th ed.). Pearson Longman. pp. 819–821. ISBN 0-321-44661-5.
- Truman, Harry S. (29 June 1950). "The President's News Conference of June 29, 1950". Teachingamericanhistory.org. Retrieved 4 January 2011.
- Halberstam 2007, p. 2.
- Pratt, Keith L.; Rutt, Richard; Hoare, James (1999). Korea: A Historical and Cultural Dictionary. Richmond, Surrey: Curzon. p. 239. ISBN 978-0-7007-0464-4.
- Kim, Ilpyong J. (2003). Historical Dictionary of North Korea. Lanham, Maryland: Scarecrow Press. p. 79. ISBN 978-0-8108-4331-8.
- "War to Resist U.S. Aggression and Aid Korea Commemorated in Henan". China Radio International. 25 October 2008. Retrieved 16 December 2011.
- "War to Resist US Aggression and Aid Korea Marked in DPRK". Xinhua News Agency. 26 October 2000. Retrieved 16 December 2011.
- Stokesbury 1990.
- Schnabel, James F. (1972). Policy and Direction: The First Year. United States Army in the Korean War 3. Washington, DC: Center of Military History, United States Army. pp. 3, 18. ISBN 0-16-035955-4.
- Stokesbury 1990, p. 23.
- Dear & Foot 1995, p. 516.
- Cumings 1997, pp. 160-161, 195-196.
- Early, Stephen (1943). "Cairo Communiqué". Japan: National Diet Library.
- Goulden 1983, p. 17.
- Whelan, Richard (1991). Drawing the Line: the Korean War 1950–53. Boston: Little, Brown and Company. p. 22. ISBN 0-316-93403-8.
- Stokesbury 1990, pp. 24, 25.
- McCullough, David (1992). Truman. Simon & Schuster Paperbacks. pp. 785, 786. ISBN 0-671-86920-5.
- Appleman 1998.
- McCune, Shannon Boyd Bailey (1946). "Physical Basis for Korean Boundaries". Far Eastern Quarterly 5: 286–7. OCLC 32463018.
- Grajdanzev, Andrew J (1945). "Korea Divided". Far Eastern Survey 14 (20): 282. ISSN 0362-8949. OCLC 482287795.
- Stokesbury 1990, p. 25.
- Chen 1994, p. 110.
- Chen 1994, pp. 110–111.
- Chen 1994, p. 111.
- Chen 1994, pp. 110, 162.
- Chen 1994, p. 26.
- Chen 1994, p. 22.
- Chen 1994, p. 41.
- Chen 1994, p. 21.
- Chen 1994, p. 19.
- Chen 1994, pp. 25–26, 93.
- Stokesbury 1990, pp. 24-25.
- Appleman 1998, pp. 24–25.
- Cumings 1981, p. 25.
- Becker 2005, p. 52.
- Halberstam 2007, p. 63.
- Hermes, Walter, Jr. (2002) . Truce Tent and Fighting Front. United States Army in the Korean War. United States Army Center of Military History. pp. 2, 6–9.
- Stokesbury 1990, pp. 25–26.
- Becker 2005, p. 53.
- Cumings 1981, chapter 3, 4.
- Johnson, Chalmers. Blowback: The Costs and Consequences of American Empire (2000, rev. 2004 ed.). Owl Book. pp. 99–101. ISBN 0-8050-6239-4. According to Chalmers Johnson, the death toll is 14,000–30,000.
- "Ghosts Of Cheju". Newsweek. 19 June 2000. Retrieved 6 December 2011. More than one of
- Stokesbury 1990, p. 26.
- "Korea: For Freedom". Time. 20 May 1946. Retrieved 16 December 2011.
- Malkasian 2001, p. 13.
- Stueck, William (2004). The Korean War in World History. Lexington: University Press of Kentucky. p. 38. ISBN 0-8131-2306-2.
- Stewart, Richard W., ed. (2005). "The Korean War, 1950–1953". American Military History, Volume 2. United States Army Center of Military History. CMH Pub 30-22. Retrieved 20 August 2007.
- Stokesbury 1990, p. 27.
- Wainstock, Dennis (1999). Truman, MacArthur, and the Korean War. p. 137.
- "439 civilians confirmed dead in Yeosu-Suncheon Uprising of 1948 New report by the Truth Commission places blame on Syngman Rhee and the Defense Ministry, advises government apology". Hankyoreh. 8 January 2009. Retrieved 16 July 2010.
- "'문경학살사건' 유족 항소심도 패소". Chosun Ilbo (in Korean). 6 August 2009. Retrieved 16 July 2010.
- "두 민간인 학살 사건, 상반된 판결 왜 나왔나?'울산보도연맹' – ' 문경학살사건' 판결문 비교분석해 봤더니...". OhmyNews (in Korean). 17 February 2009. Retrieved 16 July 2010.
- "South Korea owns up to brutal past". The Sydney Morning Herald. 2007. Retrieved 2013-04-05.
- Cumings 1997, p. 263.
- David Dallin, Soviet Foreign Policy After Stalin (J. B. Lippincott, 1961), p60.
- Douglas J. Macdonald, "Communist Bloc Expansion in the Early Cold War," International Security, Winter 1995-6, p180.
- Sergei N. Goncharov, John W. Lewis and Xue Litai, Uncertain Partners: Stalin, Mao and the Korean War (Stanford University Press, 1993), p213
- Cumings 1997, p. 251.
- William Stueck, The Korean War: An International History (Princeton University Press, 1995), pp31,69.
- John Lewis Gaddis, We Know Now: Rethinking Cold War History (Oxford University Press, 1997), p71.
- Cumings 1997, pp. 251, 253.
- Weathersby 2002.
- Weathersby 1993.
- Millett 2007, pp. 14-19.
- Weathersby 2002, pp. 3-4.
- Weathersby 2002, p. 3.
- Weathersby 2002, pp. 9,10.
- Weathersby 2002, pp. 11.
- Millett 2007, p. 14.
- Millett 2007, p. 15.
- Weathersby 2002, p. 10.
- Barnouin & Yu 2006, pp. 139–140.
- Weathersby 1993, p. 29.
- Weathersby 2002, p. 13.
- Mark O'Neill, "Soviet Involvement in the Korean War: A New View from the Soviet-Era Archives," OAH Magazine of History, Spring 2000, p21.
- Weathersby 1993, pp. 29-30.
- Weathersby 2002, p. 14.
- Weathersby 2002, p. 15.
- Millett 2007, p. 17.
- Tom Gjelten (25 June 2010). "CIA Files Show U.S. Blindsided By Korean War". National Public Radio. Retrieved 16 February 2013.
- Seth, Michael J. (2010). A history of Korea : from antiquity to the present. Lanham, Md.: Rowman & Littlefield. p. 324. ISBN 978-0742567160.
- Stokesbury 1990, p. 14.
- Cumings 1997, pp. 247-253.
- Cumings 1997, pp. 260-263.
- Seth, Michael J. (2010). A history of Korea : from antiquity to the present. Lanham, Md.: Rowman & Littlefield. ISBN 978-0742567160.
- Millett 2007, pp. 18-19.
- "만물상 6•25 한강다리 폭파의 희생자들". Chosun Ilbo (in Korean). 29 June 2010. Retrieved 15 July 2010.
- Johnston, William. A war of patrols: Canadian Army operations in Korea. Univ of British Columbia Pr. p. 20. ISBN 0-7748-1008-4.
- Cumings 1997, pp. 269-270.
- Webb, William J. "The Korean War: The Outbreak". United States Army Center for Military History. Retrieved 16 December 2011.
- Edwards, Paul. Historical Dictionary of the Korean War. Scarecrow Press. p. 32. ISBN 0810867737.
- Kim 1973, p. 30.
- Kim 1973, p. 46.
- Rees 1964, p. 22.
- Rees 1964, p. 23.
- Rees 1964, p. 26.
- Malkasian 2001, p. 16.
- Gromyko, Andrei A. (4 July 1950). "On American Intervention In Korea, 1950". Modern History Sourcebook. New York: Fordham University. Retrieved 16 December 2011.
- Gross, Leo (February 1951). "Voting in the Security Council: Abstention from Voting and Absence from Meetings". The Yale Law Journal 60 (2): 209–57. doi:10.2307/793412. JSTOR 793412.
- Schick, F. B (September 1950). "Videant Consules". The Western Political Quarterly 3 (3): 311–325. doi:10.2307/443348. JSTOR 443348.
- Stokesbury 1990, p. 42.
- Goulden 1983, p. 48.
- Hess, Gary R. (2001). Presidential Decisions for War : Korea, Vietnam and the Persian Gulf. Baltimore: Johns Hopkins University Press. ISBN 0-8018-6515-8.
- Graebner, Norman A.; Trani, Eugene P. (1979). The Age of Global Power: The United States Since 1939. V3641. New York: John Wiley & Sons. OCLC 477631060.
- Truman, Harry S.; Ferrell, Robert H. (1980). The Autobiography of Harry S. Truman. Boulder: University Press of Colorado. ISBN 0-87081-090-1.
- Rees 1964, p. 27.
- Barnouin & Yu 2006, p. 140.
- Stokesbury 1990, p. 45.
- Stokesbury 1990, p. 48.
- Stokesbury 1990, p. 53.
- Stokesbury 1990, p. 56.
- Barnouin & Yu 2006, p. 141.
- Stokesbury 1990, pp. 47–48, 66.
- Stokesbury 1990, p. 58.
- Stokesbury 1990, pp. 59–60.
- Stokesbury 1990, p. 61.
- Appleman 1998, p. 61.
- Stokesbury 1990, pp. 58, 61.
- Stokesbury 1990, p. 67.
- "History of the 1st Cavalry Division and Its Subordinate Commands". Cavalry Outpost Publications. Retrieved 27 March 2010.
- Stokesbury 1990, p. 68.
- Stokesbury 1990, p. 70.
- Hoyt, Edwin P. (1984). On To The Yalu. New York: Stein and Day. p. 104.
- Stokesbury 1990, pp. 71–72.
- Barnouin & Yu 2006, p. 143.
- Schnabel, James F (1992) . United States Army in the Korean War: Policy And Direction: The First Year. United States Army Center of Military History. pp. 155–92, 212, 283–4, 288–9, 304. ISBN 0-16-035955-4. CMH Pub 20-1-1.
- Korea Institute of Military History (2000). The Korean War: Korea Institute of Military History. 3-volume set 1, 2. Bison Books, University of Nebraska Press. pp. 730, 512–29. ISBN 0-8032-7794-6.
- Weintraub, Stanley (2000). MacArthur's War: Korea and the Undoing of an American Hero. New York: Simon & Schuster. pp. 157–58. ISBN 0-684-83419-7.
- "Goyang Geumjeong Cave Massacre memorial service". Hankyoreh. 9 February 2010. Retrieved 20 January 2012.
- Charles J. Hanley and Jae-Soon Chang (6 December 2008). "Children 'executed' in 1950 South Korean killings". U-T San Diego. Associated Press. Retrieved 1 September 2012.
- Barnouin & Yu 2006, pp. 143–144.
- Cumings 1997, pp. 278–281.
- Stokesbury 1990, pp. 79–94.
- Barnouin & Yu 2006, p. 144.
- Stokesbury 1990, p. 81.
- Stokesbury 1990, pp. 87–88.
- Stokesbury 1990, p. 90.
- Stokesbury 1990, p. 83.
- US Department of Defense (1950). Classified Teletype Conference, dated 27 June 1950, between the Pentagon and General Douglas MacArthur regarding authorization to use naval and air forces in support of South Korea. Papers of Harry S. Truman: Naval Aide Files. Truman Presidential Library and Museum. p. 1 and 4. "Page 1: In addition 7th Fleet will take station so as to prevent invasion of Formosa and to insure that Formosa not be used as base of operations against Chinese mainland." Page 4: "Seventh Fleet is hereby assigned to operational control CINCFE for employment in following task hereby assigned CINCFE: By naval and air action prevent any attack on Formosa, or any air or sea offensive from Formosa against mainland of China."
- Halberstam 2007, p. 319.
- Chinese Military Science Academy (September 2000). History of War to Resist America and Aid Korea (抗美援朝战争史) I. Beijing: Chinese Military Science Academy Publishing House. pp. 35–36. ISBN 7-80137-390-1.
- Offner, Arnold A. (2002). Another Such Victory: President Truman and the Cold War, 1945–1953. Stanford, CA: Stanford University Press. p. 390. ISBN 0-8047-4774-1.
- Barnouin & Yu 2006, pp. 144–146.
- Weng, Byron (Autumn 1966). "Communist China's Changing Attitudes Toward the United Nations". International Organization (Cambridge: MIT Press) 20 (4): 677–704. doi:10.1017/S0020818300012935. OCLC 480093623.
- Chinese Military Science Academy (September 2000). History of War to Resist America and Aid Korea (抗美援朝战争史) I. Beijing: Chinese Military Science Academy Publishing House. pp. 86–89. ISBN 7-80137-390-1.
- Chinese Military Science Academy (September 2000). History of War to Resist America and Aid Korea (抗美援朝战争史) I. Beijing: Chinese Military Science Academy Publishing House. p. 160. ISBN 7-80137-390-1.
- Barnouin & Yu 2006, p. 146, 149.
- Halberstam 2007, p. 361.
- Cumings 2005, p. 266.
- Barnouin & Yu 2006, pp. 147–148.
- Stokesbury 1990, p. 102.
- Stokesbury 1990, p. 88.
- Stokesbury 1990, p. 89.
- Donovan, Robert J (1996). Tumultuous Years: The Presidency of Harry S. Truman 1949–1953. University of Missouri Press. p. 285. ISBN 0-8262-1085-6.
- Shen Zhihua, "China and the Dispatch of the Soviet Air Force: The Formation of the Chinese-Soviet-Korean Alliance in the Early Stage of the Korean War", The Journal of Strategic Studies, vol. 33, no. 2, pp. 211–230
- Stewart, Richard W (ed.). "The Korean War: The Chinese Intervention". history.army.mil. U.S. Army Center of Military History. Retrieved 17 December 2011.
- Stokesbury 1990, pp. 98–99.
- Cohen, Eliot A.; Gooch, John (2006). Military Misfortunes: The Anatomy of Failure in War. New York: Free Press. pp. 165–95. ISBN 0-7432-8082-2.
- Hopkins, William B. (1986). One Bugle No Drums: The Marines at Chosin Reservoir. Chapel Hill, N.C: Algonquin. ISBN 978-0-912697-45-1.
- Mossman 1990, p. 160.
- Stokesbury 1990, p. 111.
- Roe, Patrick C. (August 1996). "The Chinese Failure at Chosin". Dallas, TX: Korean War Project. Retrieved 17 December 2011.
- Stokesbury 1990, pp. 104–111.
- Mossman 1990, p. 158.
- Stokesbury 1990, p. 110.
- Doyle, James H; Mayer, Arthur J (April 1979). "December 1950 at Hungnam". U.S. Naval Institute Proceedings 105 (4): 44–65.
- Espinoza-Castro v. I.N.S., 242 F.3d 1181, 30 (2001).
- Stokesbury 1990, p. 117.
- MacArthur, Douglas. Reminiscences.
- Stokesbury 1990, p. 113.
- Stokesbury 1990, p. 118.
- Stokesbury 1990, p. 121.
- Stokesbury 1990, p. 120.
- "Resolution 498(V) Intervention of the Central People's Government of People's Republic of China in Korea". United Nations. 1951-2-1.
- "Cold War International History Project's Cold War Files". Wilson Center.
- "SURVIVOR Hundreds were killed in a 1951 massacre. One man is left to remember.". JoongAng Daily. 2003-02-10. Retrieved 2013-04-06.
- Timmons, Robert. "Allies mark 60th anniversary of Chipyong-ni victory". 8tharmy.korea.army.mil. US Eighth Army. Retrieved 22 December 2011.
- Stokesbury 1990, p. 122.
- Barnouin & Yu 2006, p. 149.
- Stokesbury 1990, pp. 123–127.
- Stein 1994, p. 69.
- Halberstam 2007, p. 600.
- Stein 1994, p. 79.
- Halberstam 2007, p. 498.
- Stokesbury 1990, p. 127.
- Stokesbury 1990, p. 130.
- Stokesbury 1990, p. 131.
- Stokesbury 1990, p. 131, 132.
- Stokesbury 1990, pp. 133–134.
- Stokesbury 1990, pp. 136–137.
- Stokesbury 1990, pp. 137–138.
- Stokesbury 1990, pp. 145, 175–177.
- Stokesbury 1990, p. 159.
- Stokesbury 1990, p. 160.
- Stokesbury 1990, p. 161–162.
- Barnouin & Yu 2006, p. 148.
- Barnouin & Yu 2006, pp. 148–149.
- Stokesbury 1990, pp. 144–153.
- Stokesbury 1990, p. 147.
- Stokesbury 1990, pp. 187–199.
- Boose, Donald W., Jr. (Spring 2000). "Fighting While Talking: The Korean War Truce Talks". OAH Magazine of History. Organization of American Historians. Archived from the original on 12 July 2007. Retrieved 7 November 2009. "... the UNC advised that only 70,000 out of over 170,000 North Korean and Chinese prisoners desired repatriation."
- Stokesbury 1990, pp. 189–190.
- Stokesbury 1990, pp. 242–245.
- Stokesbury 1990, p. 240.
- Harrison, Lieutenant Colonel William T. "Military Armistice in Korea: A Case Study for Strategic Leaders". Retrieved 11 April 2013.
- Ho, Jong Ho (1993). The US Imperialists started the Korean War. Pyongyang: Foreign Languages Publishing House. p. 230. ASIN B0000CP2AZ.
- "War Victory Day of DPRK Marked in Different Countries". KCNA. 1 August 2011. Retrieved 22 December 2011.
- "Operation Glory". Fort Lee, Virginia: Army Quartermaster Museum, US Army. Retrieved 16 December 2007.
- US Department of Defense. "DPMO White Paper: Punch Bowl 239" (PDF). Retrieved 22 December 2011.
- "Remains from Korea identified as Ind. soldier". Army News. 1 March 2008. Retrieved 25 December 2011.
- "NNSC in Korea" (PDF). Swiss Armed Forces, International Command. Retrieved 22 December 2011.
- "Korea – NSCC". Forsvarsmakten.se. Swedish Armed Forces. 1 November 2007. Retrieved 22 December 2011.
- Ria Chae (May 2012). "NKIDP e-Dossier No. 7: East German Documents on Kim Il Sung’s April 1975 Trip to Beijing". North Korea International Documentation Project. Woodrow Wilson International Center for Scholars. Retrieved 30 May 2012.
- "'North Korean torpedo' sank South's navy ship – report". BBC News. 20 May 2010. Retrieved 22 December 2011.
- Kim, Jack; Lee, Jae-won (23 November 2010). "North Korea shells South in fiercest attack in decades". Reuters. Retrieved 22 December 2011.
- Park, Madison (11 March 2013). "North Korea declares 1953 armistice invalid". CNN. Retrieved 11 March 2013.
- Chang-Won, Lim. "North Korea confirms end of war armistice". AFP. Retrieved 23 March 2013.
- "North Korea threatens pre-emptive nuclear strike against US". The Guardian. 7 March 2013. Retrieved 4 April 2013.
- "North Korea threats: US to move missiles to Guam". BBC News. 3 April 2013. Retrieved 4 April 2013.
- Rhem, Kathleen T. (8 June 2000). "Defense.gov News Article: Korean War Death Stats Highlight Modern DoD Safety Record". defense.gov. US Department of Defense. Retrieved 22 December 2011.
- Xu, Yan (29 July 2003). "Korean War: In the View of Cost-effectiveness". Consulate General of the People's Republic of China in New York. Retrieved 12 August 2007.
- Bethany Lacina and Nils Petter Gleditsch, Monitoring Trends in Global Combat: A New Dataset of Battle Deaths, European Journal of Population (2005) 21: 145–166.
- Stokesbury 1990, pp. 14, 43.
- Stokesbury 1990, p. 39.
- Stein 1994, p. 25.
- Stein 1994, p. 18.
- Goulden 1983, p. 51.
- Stokesbury 1990, pp. 182–184.
- Stokesbury 1990, p. 174.
- Stokesbury 1990, p. 182.
- Werrell 2005, p. 71.
- Stokesbury 1990, p. 183.
- Werrell 2005, pp. 76–77.
- Sherman, Stephen (March 2000). "Korean War Aces: USAF F-86 Sabre jet pilots". acepilots.com. Retrieved 22 December 2011.
- Davis, Larry; Thyng, Harrison R.. "The Bloody Great Wheel: Harrison R. Thyng". Sabre Pilots Association. Retrieved 22 December 2011.
- "Soviet pilots in Korea" (in Russian). airwar.ru. 29 January 2010. Retrieved 5 March 2012.
- Puckett, Allen L. (1 April 2005). "Say 'hello' to the bad guy". af.mil. US Air Force. Retrieved 22 December 2011.
- Kreisher, Otto (16 January 2007). "The Rise of the Helicopter During the Korean War". historynet.com. Weider History Group. Retrieved 22 December 2011.
- "WW II Helicopter Evacuation". Olive Drab. Retrieved 22 December 2011.
- Day, Dwayne A. "M.A.S.H./Medevac Helicopters". CentennialOfFlight.gov. US Centennial of Flight Commission. Retrieved 22 December 2011.
- Cumings, Bruce (2006). "Korea: Forgotten Nuclear Threats". In Constantino, Renato Redentor. The Poverty of Memory: Essays on History and Empire. Quezon City, Philippines: Foundation for Nationalist Studies. p. 63. ISBN 978-971-8741-25-2. OCLC 74818792. Archived from the original on 22 September 2007. Retrieved 24 July 2009.
- Walkom, Thomas (25 November 2010). "Walkom: North Korea's unending war rages on". Toronto Star. Retrieved 22 December 2011.
- Cumings 1997, pp. 297–298.
- Witt, Linda; Bellafaire, Judith; Granrud, Britta; Binker, Mary Jo (2005). A Defense Weapon Known to be of Value: Servicewomen of the Korean War Era. University Press of New England. p. 217. ISBN 978-1-58465-472-8.
- Cumings, Bruce (10 December 2004). "Napalm über Nordkorea" (in German). Le Monde diplomatique. Retrieved 22 December 2011.
- William F Dean (1954) General Dean's Story, (as told to William L Worden), Viking Press, pp. 272–273.
- Cumings 1997, p. 298.
- Hogan, Michael, ed. (1995). America in the World: The Historiography of American Foreign Relations since 1941. New York: Cambridge University Press. p. 290. ISBN 978-0-521-49807-4.
- Marolda, Edward (26 August 2003). "Naval Battles". US Navy. Retrieved 22 December 2011.
- Cumings 1997, pp. 289–292.
- Dingman, Roger (1988–1989). "Atomic Diplomacy during the Korean War". International Security 13 (3): 50–91.
- Knightley, Phillip (1982). The First Casualty: The War Correspondent as Hero, Propagandist and Myth-maker. Quartet. p. 334. ISBN 0-8018-6951-X.
- Panikkar, Kavalam Madhava (1981). In Two Chinas: Memoirs of a Diplomat. Hyperion Press. ISBN 0-8305-0013-8.
- Truman, Harry S (1955–1956). Memoirs (2 volumes). Doubleday. vol. II, pp. 394–5. ISBN 1-56852-062-X.
- Hasbrouck, S. V (1951). memo to file (November 7, 1951), G-3 Operations file, box 38-A. Library of Congress.
- Army Chief of Staff (1951). memo to file (November 20, 1951), G-3 Operations file, box 38-A. Library of Congress.
- Watson, Robert J; Schnabel, James F. (1998). The Joint Chiefs of Staff and National Policy, 1950–1951, The Korean War and 1951–1953, The Korean War. History of the Joint Chiefs of Staff, Volume III, Parts I and II. Office of Joint History, Office of the Chairman of the Joint Chiefs of Staff. part 1, p. v; part 2, p. 614.
- Commanding General, Far East Air Force (1951). Memo to 98th Bomb Wing Commander, Okinawa.
- Far East Command G-2 Theater Intelligence (1951). Résumé of Operation, Record Group 349, box 752.
- "60년 만에 만나는 한국의 신들러들". Hankyoreh (in Korean). 25 June 2010. Retrieved 15 July 2010.
- ""보도연맹 학살은 이승만 특명에 의한 것" 민간인 처형 집행했던 헌병대 간부 최초증언 출처 : "보도연맹 학살은 이승만 특명에 의한 것" – 오 마이뉴스". Ohmynews (in Korean). 4 July 2007. Retrieved 15 July 2010.
- "Unearthing proof of Korea killings". BBC. 18 August 2008. Retrieved 2013-04-05.
- "U.S. Allowed Korean Massacre In 1950". CBS News. 2009-02-11. Retrieved 2013-04-05. More than one of
- Choe, Sang-Hun (25 June 2007). "A half-century wait for a husband abducted by North Korea". The New York Times. Retrieved 25 December 2011.
- Hanley, Charles J.; Mendoza, Martha (29 May 2006). "U.S. Policy Was to Shoot Korean Refugees". The Washington Post. Associated Press. Retrieved 25 December 2011.
- Hanley, Charles J.; Mendoza, Martha (13 April 2007). "Letter reveals US intent at No Gun Ri". The Asia-Pacific Journal: Japan Focus. Associated Press. Retrieved 25 December 2011.
- Charles J. Hanley & Hyung-Jin Kim (10 July 2010). "Korea bloodbath probe ends; US escapes much blame". U-T San Diego. Associated Press. Retrieved 23 May 2011.
- Hanley, Charles J.; Chang, Jae-Soon (18 May 2008). "Thousands Killed in 1950 by US's Korean Ally". GlobalResearch.ca. Retrieved 4 April 2012.
- Kim Dong‐choon (5 March 2010). "The Truth and Reconciliation Commission of Korea: Uncovering the Hidden Korean War". jinsil.go.kr. Retrieved 24 December 2011.
- Charles J. Hanley and Jae-Soon Chang, "Children 'Executed' in 1950 South Korean Killings: ROK and US responsibility" The Asia-Pacific Journal, Vol 49-5-08, 7 December 2008. http://japanfocus.org/-J_S_-Chang/2979
- "서울대병원, 6.25전쟁 참전 용사들을 위한 추모제 가져". Seoul National University Hospital. 4 June 2010. Retrieved 19 July 2012.
- Potter, Charles (3 December 1953). "Korean War Atrocities" (PDF). United States Senate Subcommittee on Korean War Atrocities of the Permanent Subcommittee of the Investigations of the Committee on Government Operations (US Government Printing Office). Retrieved 25 December 2011.
- Carlson, Lewis H (2003). Remembered Prisoners of a Forgotten War: An Oral History of Korean War POWs. St. Martin's Griffin. ISBN 0-312-31007-2.
- Lakshmanan, Indira A.R (1999). "Hill 303 Massacre". Retrieved 25 December 2011.
- Van Zandt, James E (February 2003). "You are about to die a horrible death". VFW Magazine. Retrieved 25 December 2011.
- Skelton, William Paul (April 2002). "American Ex-Prisoners of War" (PDF). Department of Veterans Affairs. OCLC 77563074. Retrieved 31 December 2011.
- Lech, Raymond B. (2000). Broken Soldiers. Chicago: University of Illinois Press. pp. 2, 73. ISBN 0-252-02541-5.
- Heo, Man-ho (2002). "North Korea’s Continued Detention of South Korean POWs since the Korean and Vietnam Wars". The Korean Journal of Defense Analysis 14 (2).
- Lee, Sookyung (2007). "Hardly Known, Not Yet Forgotten, South Korean POWs Tell Their Story". Radio Free Asia. Archived from the original on 7 October 2007. Retrieved 22 August 2007.
- Hermes 1992, p. 136.
- Hermes 1992, p. 143.
- Hermes 1992, p. 149.
- Hermes 1992, p. 514.
- "S Korea POW celebrates escape". BBC News. 19 January 2004. Retrieved 22 December 2011.
- "S Korea 'regrets' refugee mix-up". BBC News. 18 January 2007. Retrieved 25 December 2011.
- Republic of Korea Ministry of Unification Initiatives on South Korean Prisoners of War and Abductees, http://eng.unikorea.go.kr/CmsWeb/viewPage.req?idx=PG0000000581#nohref
- Yoo, Young-Bok (2012). Tears of Blood: A Korean POW's Fight for Freedom, Family and Justice. Korean War POW Affairs-USA. ISBN 978-1479383856.
- Alena Volokhova, Armistice Talks in Korea (1951-1953) Based on Documents from the Russian Foreign Policy Archives. FAR EASTERN AFFAIRS, No. 2, 2000, at 74, 86, 89-90 http://dlib.eastview.com/browse/doc/2798784
- "DPRK Foreign Ministry memorandum on GI mass killings". Kcna.co.jp. KCNA. Retrieved 22 December 2011.
- ""국민방위군 수만명 한국전때 허망한 죽음" 간부들이 군수품 착 복...굶어죽거나 전염병 횡사 진실화해위, 매장지 등 확인...국가에 사과 권고" (in Korean). Hankyoreh. 7 September 2010.
- "국민방위군 사건" (in Korean). National Archives of Korea. Retrieved 20 July 2010.
- "50,000 Koreans die in camps in south; Government Inquiry Confirms Abuse of Draftees—General Held for Malfeasance". The New York Times (US). 12 June 1951. p. 3. Retrieved 23 July 2010.
- "'국민방위군' 희생자 56년만에 '순직' 인정". Newsis (in Korean). 30 October 2007. Retrieved 18 July 2010.
- Roehrig, Terence (2001). The Prosecution of Former Military Leaders in Newly Democratic Nations: The Cases of Argentina, Greece, and South Korea. McFarland & Company. p. 139. ISBN 978-0-7864-1091-0.
- Sandler, Stanley (1 October 1999). The Korean War: No Victors, No Vanquished. University Press of Kentucky. p. 224. ISBN 0-8131-0967-1.
- "South Korean Aide Quits; Defense Minister Says He Was Implicated in Scandals.". The New York Times. 4 June 1951. Retrieved 23 July 2010.
- Terence Roehrig (2001). Prosecution of Former Military Leaders in Newly Democratic Nations: The Cases of Argentina, Greece, and South Korea. McFarland & Company. p. 139. ISBN 978-0-7864-1091-0.
- Paul M. Edwards (2006). Prosecution of Former Military Leaders in Newly Democratic Nations: The Cases of Argentina, Greece, and South Korea. Greenwood. pp. 123–124. ISBN 0313332487.
- Höhn, Maria (2010). Over There: Living with the U.S. Military Empire from World War Two to the Present. Duke University Press. pp. 51–52. ISBN 0822348276.
- Barnouin & Yu 2006, p. 150.
- "Turkey". State.gov. US Department of State. 9 December 2011. Retrieved 24 December 2011.
- "Revue de la presse turque 26.06.2010". turquie-news.fr (in French). 26 June 2010. Retrieved 24 December 2011.
- Congressional Record, V. 146, Pt. 18, November 1, 2000 to January 2, 2001. US Government Printing Office. p. 27262.
- Savada, Andrea, ed. (1997). South Korea: A Country Study. Diane Pub Co. p. 34. ISBN 078814619X. Retrieved 5 April 2013.
- Park, Soo-mee (2008-10-30). "Former sex workers in fight for compensation". Joongang Daily. Retrieved 2013-04-10.
- "1965년 전투병 베트남 파병 의결". Dong-a Ilbo (in Korean). 2008-07-02. Retrieved 2011-09-24.
- "Leading article: Africa has to spend carefully". The Independent (London: INM). 13 July 2006. ISSN 0951-9467. OCLC 185201487. Retrieved 24 December 2011.
- "Country Comparison: GDP (purchasing power parity)". The World Factbook. CIA. 2011. Retrieved 24 December 2011.
- Courtois, Stephane, The Black Book of Communism, Harvard University Press, 1999, pg. 564.
- Rummel, R.J., Statistics Of North Korean Democide: Estimates, Calculations, And Sources, Statistics of Democide, 1997.
- Omestad, Thomas, "Gulag Nation", U.S. News & World Report, 23 June 2003.
- Spoorenberg, Thomas; Schwekendiek, Daniel. "Demographic Changes in North Korea: 1993–2008", Population and Development Review, 38(1), pp. 133-158.
- Noland, Marcus (2004). "Famine and Reform in North Korea". Asian Economic Papers 3 (2): 1–40. doi:10.1162/1535351044193411.
- Haggard, Nolan, Sen (2009). Famine in North Korea: Markets, Aid, and Reform. p. 209. ISBN 978-0-231-14001-0. "This tragedy was the result of a misguided strategy of self-reliance that only served to increase the country's vulnerability to both economic and natural shocks ... The state's culpability in this vast misery elevates the North Korean famine to a crime against humanity"
- "North Korea: A terrible truth". The Economist. 17 April 1997. Retrieved 2011-09-24.
- "The unpalatable appetites of Kim Jong-il". 8 October 2011. Retrieved 8 October 2011.
- Kristof, Nicholas D. (12 July 1987). "Anti-Americanism Grows in South Korea". The New York Times. Retrieved 11 April 2008.
- "Global Unease With Major World Powers". Pew Research Center. June 27, 2007.
- Views of US Continue to Improve in 2011 BBC Country Rating Poll, 7 March 2011.
- Jang, Jae-il (11 December 1998). "Adult Korean Adoptees in Search of Roots". The Korea Times. Retrieved 24 December 2011.
- Choe, Yong-Ho; Kim, Ilpyong J.; Han, Moo-Young (2005). "Annotated Chronology of the Korean Immigration to the United States: 1882 to 1952". Duke.edu. Retrieved 24 December 2011.
- Appleman, Roy E (1998) . South to the Naktong, North to the Yalu. United States Army Center of Military History. pp. 3, 15, 381, 545, 771, 719. ISBN 0-16-001918-4.
- Barnouin, Barbara; Yu, Changgeng (2006). Zhou Enlai: A Political Life. Hong Kong: Chinese University Press. ISBN 962-996-280-2.
- Becker, Jasper (2005). Rogue Regime: Kim Jong Il and the Looming Threat of North Korea. New York: Oxford University Press. ISBN 0-19-517044-X.
- Chen, Jian (1994). China's Road to the Korean War: The Making of the Sino-American Confrontation. New York: Columbia University Press. ISBN 978-0-231-10025-0.
- Cumings, Bruce (1997). Korea's Place in the Sun: A Modern History. WW Norton & Company. ISBN 0-393-31681-5.
- Cumings, Bruce (2005). Korea's Place in the Sun : A Modern History. New York: W. W. Norton & Company. ISBN 0-393-32702-7.
- Cumings, Bruce (1981). "3, 4". Origins of the Korean War. Princeton University Press. ISBN 89-7696-612-0.
- Dear, Ian; Foot, M.R.D. (1995). The Oxford Companion to World War II. Oxford, New York: Oxford University Press. p. 516. ISBN 0-19-866225-4.
- Goulden, Joseph C (1983). Korea: The Untold Story of the War. New York: McGraw-Hill. p. 17. ISBN 0-07-023580-5.
- Halberstam, David (2007). The Coldest Winter: America and the Korean War. New York: Hyperion. ISBN 978-1-4013-0052-4.
- Hermes, Walter G. (1992), Truce Tent and Fighting Front, Washington, DC: Center of Military History, United States Army, ISBN 0-16-035957-0
- Kim, Yǒng-jin (1973). Major Powers and Korea. Silver Spring, MD: Research Institute on Korean Affairs. OCLC 251811671.
- Malkasian, Carter (2001). The Korean War, 1950–1953. Essential Histories. London; Chicago: Fitzroy Dearborn. ISBN 1-57958-364-4.
- Millett, Allan R. (2007). The Korean War: The Essential Bibliography. The Essential Bibliography Series. Dulles, VA: Potomac Books Inc. ISBN 978-1-57488-976-5.
- Mossman, Billy C. (1990). Ebb and Flow, November 1950 – July 1951. United States Army in the Korean War 5. Washington, DC: Center of Military History, United States Army. OCLC 16764325.
- Rees, David (1964). Korea: The Limited War. New York: St Martin's. OCLC 1078693.
- Shen, Zhihua (2012). Mao, Stalin and the Korean War : trilateral communist relations in the 1950s. Milton Park, Abington; New York: Routledge. ISBN 9780415516457.
- Stein, R. Conrad (1994). The Korean War: "The Forgotten War". Hillside, NJ: Enslow Publishers. ISBN 0-89490-526-0.
- Stokesbury, James L (1990). A Short History of the Korean War. New York: Harper Perennial. ISBN 0-688-09513-5.
- Thomas, Nigel; Abbott, Peter (1986), The Korean War 1950-53, Osprey Publishing, ISBN 0-85045-685-1
- Weathersby, Kathryn (1993), Soviet Aims in Korea and the Origins of the Korean War, 1945-50: New Evidence From the Russian Archives, Cold War International History Project: Working Paper No. 8
- Weathersby, Kathryn (2002), "Should We Fear This?" Stalin and the Danger of War with America, Cold War International History Project: Working Paper No. 39
- Werrell, Kenneth P. (2005). Sabres Over MiG Alley. Annapolis: Naval Institute Press. ISBN 978-1-59114-933-0.
- Yoo, Young-Bok (2012), Tears of Blood: A Korean POW's Fight for Freedom, Family and Justice, Los Angeles, CA: Korean War POW Affairs-USA, ISBN 978-1479383856
- Zhang, Shu Guang (1995), Mao's Military Romanticism: China and the Korean War, 1950–1953, Lawrence, KS: University Press of Kansas, ISBN 0-7006-0723-4
- Korean War resources, Dwight D. Eisenhower Presidential Library
- North Korea International Documentation Project
- Grand Valley State University Veteran's History Project digital collection
- The Forgotten War, Remembered – four testimonials in The New York Times
- Collection of Books and Research Materials on the Korean War an online collection of the United States Army Center of Military History
- The Korean War at History.com
- The short film Film No. 927 is available for free download at the Internet Archive
- The Korean War You Never Knew & Life in the Korean War – slideshows by Life magazine
- QuickTime sequence of 27 maps adapted from the West Point Atlas of American Wars
- Animation for operations in 1950
- Animation for operations in 1951
- US Army Korea Media Center official Korean War online image archive
- Rare pictures of the Korean War from the U.S. Library of Congress and National Archives
- Land of the Morning Calm Canadians in Korea – multimedia project including veteran interviews
- Pathé Online newsreel archive featuring films on the war
- CBC Digital Archives—Forgotten Heroes: Canada and the Korean War
- Korea Defense Veterans of America
- Korean War Ex-POW Association
- Korean War Veterans Association
- The Center for the Study of the Korean War
- UN Memorial Cemetery, Busan
- War Memorial of Korea, Seoul – the memorial's official website
- Korean Children's War Memorial
- Chinese 50th Anniversary Korean War Memorial | http://en.wikipedia.org/wiki/Korean_War | 13 |
61 | Forces & Newton's First Law of Motion
Force is what enables us to do any work.
Whenever we do anything, we either push or pull an object.
Force is therefore defined as a push or a pull.
Example – to open a door, we either push it or pull it. A drawer is pulled to open and pushed to close.
Effect of Force:
Force can set a stationary body in motion – For example, by applying force you can move a ball.
Force can stop a moving body – For example, by applying brakes you can stop a bicycle or a vehicle which is in motion.
Force can change the direction of a moving object – By applying force, i.e. by turning the handlebar, you can change the direction of a moving bicycle.
Similarly, by turning the steering wheel, the direction of a moving vehicle can be changed.
Force can change the speed of a moving body – By accelerating, the speed of a moving vehicle can be increased.
Force can change the shape and size of an object – By hammering, a block of metal can be turned into a thin sheet, and a stone can be broken into pieces.
Forces are of two types:
• Balanced Force
• Unbalanced Force
Balanced Forces – When forces applied on an object have a resultant of zero, the applied forces are called balanced forces.
Example – In a tug of war, when both teams apply a similar force from each side, the rope does not move in either direction, i.e. the resultant is zero. Hence, this is a case of balanced forces.
Balanced forces do not cause any change in the state of motion of an object. Balanced forces
are equal in magnitude and opposite in direction.
Balanced forces can change the shape and size of an object. When you apply forces from both sides on a balloon, its size and shape change.
Unbalanced Forces – When we apply force on an object and the object moves, i.e. the resultant is not equal to zero, the forces are called unbalanced forces. An object at rest can be set in motion by applying an unbalanced force.
Unbalanced forces can do the following:
• Move a stationary object.
• Increase the speed of a moving object.
• Decrease the speed of a moving object.
• Stop a moving object.
• Change the shape and size of an object.
Laws of Motion:
Galileo Galilei: Galileo was the first to state that objects move with a constant speed when no force acts on them. This means that if an object is moving on a frictionless path and no other force is acting upon it, the object will keep moving forever – that is, no unbalanced force is acting on the object.
He proposed this idea after observing many moving objects.
In practice, however, this is not possible for any object, because the condition of zero unbalanced force can never be attained: the force of friction, air resistance and many other forces are always acting upon an object.
Newton’s Laws of Motion:
Newton studied the ideas of Galileo and gave the three laws of motion. These laws
are popularly known as Newton’s Laws of Motion.
Newton's First Law of Motion: Any object remains in its state of rest or of uniform motion in a straight line until it is compelled to change that state by an external force applied to it.
Explanation of Newton's First Law of Motion:
According to Newton's First Law of Motion, if a body is at rest it will remain at rest unless an unbalanced force compels it to move; and if a body is in motion it will remain in motion unless an unbalanced force compels it to come to rest.
This means that all objects resist a change in their state. The state of an object can be changed only by applying an external force.
Newton’s First Law of Motion in Everyday Life:
(a) If an object is kept on the ground at a certain place, it will remain there unless a force is applied to move it.
(b) A person standing in a bus falls backward when the bus suddenly starts moving. This happens because the person and the bus are both at rest while the bus is stationary; when the bus starts moving, the person's feet start moving along with the bus, but the rest of his body tends to remain at rest. Because of this the person falls backward if he is not alert.
(c) A person standing in a moving bus falls forward if the driver applies the brakes suddenly. This also follows from Newton's First Law of Motion. When the bus is moving, the person standing in it is in motion along with the bus. When the driver applies the brakes, the bus slows down or stops suddenly; the person's feet, which are in contact with the bus, come to rest suddenly, while the rest of his body tends to remain in motion at the same speed. Because of this the person falls forward if he is not alert.
(d) Before hanging wet clothes on a laundry line, they are usually given several jerks to help them dry quickly. Because of the jerks, droplets of water from the pores of the cloth fall to the ground. This happens because, when the clothes are suddenly set in motion by the jerks, the water droplets in them tend to remain at rest; they are separated from the clothes and fall to the ground.
(e) When a pile of coins on a carrom board is hit by a striker, only the coin at the bottom moves away, leaving the rest of the pile in the same place. This also follows from Newton's First Law of Motion. When the pile is struck with the striker, the bottom coin is suddenly set in motion by the force applied by the striker, while the rest of the coins in the pile tend to remain at rest; they drop vertically onto the carrom board and remain at the same place.
(f) Seat belts are used in cars and other vehicles to prevent passengers from being thrown forward during sudden braking or other emergencies. Under sudden braking or in an accident, the speed of the vehicle decreases or the vehicle stops suddenly; in that situation the passengers may be thrown in the direction of the vehicle's motion because of their tendency to remain in the state of motion.
(g) The head of a hammer is tightened onto its wooden handle by banging the handle against a hard surface.
(h) A head rest is provided with a car seat to prevent whiplash injury to the head in the case of an accident.
Mass and Inertia:
The property of an object because of which it resists any change in its state is called inertia. The inertia of an object is measured by its mass; a heavy object has more inertia than a lighter one. In other words, inertia is the natural tendency of an object to resist a change in its state of rest or of uniform motion.
1) A constant force acts on an object of mass 5 kg for a duration of 2 s. It increases the object's velocity from 3 m/s to 7 m/s. Find the magnitude of the applied force. Now, if the force was applied for a duration of 5 s, what would be the final velocity of the object?
We have been given that u = 3 m/s, v = 7 m/s, t = 2 s and m = 5 kg.
The applied force is F = m(v – u)/t = 5 kg × (7 – 3) m/s ÷ 2 s = 10 N.
Now, if this force is applied for a duration of 5 s (t = 5 s), then the final velocity can be calculated from v = u + Ft/m.
On substituting the values of u, F, m and t, we get the final velocity,
v = 3 m/s + (10 N × 5 s)/5 kg = 13 m/s.
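As a quick numerical check of Problem 1, here is a minimal Python sketch (the function names are illustrative, not from the original lesson):

```python
def force_from_velocity_change(mass, u, v, t):
    """Newton's second law in the form F = m * (v - u) / t."""
    return mass * (v - u) / t

def final_velocity(u, force, mass, t):
    """v = u + (F / m) * t, i.e. constant acceleration a = F / m."""
    return u + (force / mass) * t

F = force_from_velocity_change(mass=5, u=3, v=7, t=2)   # 10.0 N
v5 = final_velocity(u=3, force=F, mass=5, t=5)          # 13.0 m/s
print(F, v5)
```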
2) Which would require a greater force – accelerating a 2 kg mass at 5 m/s² or a 4 kg mass at 2 m/s²?
We have F = ma.
Here we have m1 = 2 kg, a1 = 5 m/s² and m2 = 4 kg, a2 = 2 m/s².
Thus, F1 = m1a1 = 2 kg × 5 m/s² = 10 N
and F2 = m2a2 = 4 kg × 2 m/s² = 8 N.
So F1 > F2.
Thus, accelerating a 2 kg mass at 5 m/s² would require a greater force.
3) A motorcar is moving with a velocity of 108 km/h and it takes 4 s to
stop after the brakes are applied. Calculate the force exerted by the brakes on
the motorcar if its mass along with the passengers is 1000 kg.
The initial velocity of the motorcar is
u = 108 km/h = 108 × 1000 m / (60 × 60 s) = 30 m/s
and the final velocity of the motorcar is
v = 0 m/s.
The total mass of the motorcar along with its passengers = 1000 kg
and the time taken to stop the motorcar, t = 4 s.
The magnitude of the force applied by the brakes is F = m(v – u)/t.
On substituting the values, we get
F = 1000 kg × (0 – 30) m/s ÷ 4 s = –7500 kg·m/s² = –7500 N.
The negative sign tells us that the force exerted by the brakes is opposite to
the direction of motion of the motorcar.
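The same braking-force calculation, including the km/h to m/s conversion, can be sketched in Python (the helper name is illustrative):

```python
def kmh_to_ms(speed_kmh):
    """Convert km/h to m/s: multiply by 1000 m per km, divide by 3600 s per hour."""
    return speed_kmh * 1000 / 3600

u = kmh_to_ms(108)        # 30.0 m/s
v = 0.0                   # the car comes to rest
mass, t = 1000, 4         # kg, s
F = mass * (v - u) / t    # -7500.0 N; the minus sign means the force opposes the motion
print(u, F)
```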
4) A force of 5 N gives a mass m1 an acceleration of 10 m/s² and a mass m2 an acceleration of 20 m/s². What acceleration would it give if both the masses were tied together?
We have m1 = F/a1 and m2 = F/a2. Here, a1 = 10 m/s², a2 = 20 m/s² and F = 5 N.
Thus, m1 = 5 N / 10 m/s² = 0.50 kg and m2 = 5 N / 20 m/s² = 0.25 kg.
If the two masses were tied together, the total mass would be
m = 0.50 kg + 0.25 kg = 0.75 kg.
The acceleration produced in the combined mass by the 5 N force would be
a = F/m = 5 N / 0.75 kg = 6.67 m/s².
Newton’s 2nd Law of Motion: The acceleration a of a body is parallel and directly proportional to the net force ‘F’ acting on the body, is in the direction of the net force, and is inversely proportional to the mass m of the body, i.e., F = ma.
Third Law of Motion: To every action there is always an equal and opposite reaction; or, the forces of two bodies on each other are always equal and are directed in opposite directions. | http://www.excellup.com/classnine/sciencenine/forcenine.aspx | 13
69 | [History of Astronomy]
Measuring The Stars
Brightness : Distance : Luminosity : Surface Temperatures : Mass : Size : Density
In this series of essays we shall examine how this was done. We will look at how the following can be determined for a star:
We will also touch on other stellar properties that can be determined:
The magnitude scale was developed by the Ancient Greek astronomer, Hipparchus, around 120 BC. It was originally a rough visual system. The brightest stars were said to be of the first magnitude, the next brightest were second magnitude stars. This continued down to sixth magnitude stars at the limit of naked eye visibility. Note that the brighter stars are associated with the smaller magnitudes.
The six magnitudes do not represent a linear scale. A sixth magnitude star is not one sixth as bright as a first magnitude star. The scale is actually a logarithmic one. A star of the first magnitude is about 100 times brighter than a star of sixth magnitude. This system was made exact in 1854. A star of magnitude 1 was defined as being exactly 100 times as bright as a star of magnitude 6.
There are five magnitudes between 1 and 6. Since 100 is 10², a little mathematics shows that each whole number value on the magnitude scale differs from the next by a factor of 10^(2/5) (the fifth root of 10 squared). 10^(2/5) is roughly equal to 2.512.
In other words, a first magnitude star is 2.512 times brighter than a second magnitude star. A second magnitude star is 2.512 times brighter than a third magnitude star, and so on. The difference between two magnitudes is 2.512 x 2.512 (approximately 6.310). So a first magnitude star is 6.310 times brighter than a third magnitude star.
Modern stellar magnitudes are often given to two decimal places. For example the magnitude of the star, Deneb is given as 1.25. Aldebaran has a magnitude of 0.85. When the magnitudes of stars were measured accurately using this new definition, some stars were found to be brighter than first magnitude. Arcturus, for example is found to have a magnitude of 0.00. Sirius, the brightest star, has its magnitude given as -1.46.
In the other direction, Polaris, the northern hemisphere Pole Star, has a magnitude of 2.00, Merak (in The Plough) +2.40. The planet Uranus, on the limit of naked eye visibility, has a magnitude of 5.7.
The other planets can also have their brightness measured on the magnitude scale. Jupiter has a magnitude (at its brightest) of -2.6, Venus, the brightest planet, can reach magnitude -4.4. The Full Moon has an apparent magnitude of -12.5 while that of the Sun is -27. The faintest stars visible to the naked eye on a clear Moon-less dark night are of magnitude six. The minor planet Pluto has a magnitude of +14, far too faint to be visible without a powerful telescope.
A formula links the brightness of two stars (b1 and b2) with their magnitudes (m1 and m2): b1 / b2 = 2.512^(m2 − m1).
Question: How much brighter does Arcturus (magnitude, m = 0.00) appear than Deneb (m = +1.25)?
Answer: Arcturus appears just over 3 times as bright as Deneb.
Question: How much brighter does Sirius, the brightest star (m = -1.46), appear than the barely visible planet Uranus (m = +5.7)?
Answer: Sirius appears over 730 times as bright as Uranus.
Question: The most brilliant planet is Venus which can reach a magnitude of -4.4 at its brightest. How much brighter than Sirius (m = -1.46) can it shine?
Answer: At its brightest, Venus shines 15 times brighter than Sirius.
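The three answers above follow from the same relation; here is a short Python sketch (the function name is mine, not the essay's):

```python
def brightness_ratio(m_bright, m_faint):
    """How many times brighter a star of magnitude m_bright appears than one
    of magnitude m_faint (remember: smaller magnitude = brighter)."""
    return 2.512 ** (m_faint - m_bright)

print(brightness_ratio(0.00, 1.25))    # Arcturus vs Deneb  -> just over 3
print(brightness_ratio(-1.46, 5.7))    # Sirius vs Uranus   -> about 730
print(brightness_ratio(-4.4, -1.46))   # Venus vs Sirius    -> about 15
```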
For more information refer to the list of the 20 brightest stars.
Originally, apparent magnitudes were measured with the naked eye, comparing the star being studied with certain standard stars with known magnitudes. Later, photography was used, being more accurate and less subjective. In addition, large numbers of stars could be dealt with quickly. Modern methods use photoelectric devices that actually measure the quantity of light reaching the Earth from a star.
This involves measuring the position of an object from two different locations. The two locations form a baseline. In general, the longer the baseline, the further the distance that can be measured. On the Earth, this technique is used in surveying.
The diameter of the Earth and the distances to the Moon and Sun were worked out by simple trigonometry after observations from different points on the Earth's surface. However, the stars are too distant for a terrestrial baseline to be of any use.
To measure stellar distances a larger baseline is required. In the 1830s, stellar distances were measured by using twice the distance between the Earth and the Sun as the baseline. The diagram below indicates the procedure.
The nearby star whose distance is to be measured is observed from Earth 1. Its position against the further background stars is noted either visually or photographically. Six months later, the Earth, as it orbits the Sun, will be at Earth 2. The star is observed again. If the star is closer than the surrounding stars, its position should be different relative to these background stars.
This change in position, measured as an angle, is twice the parallax, 2p. In the diagram above, A is the (known) distance between the Earth and the Sun, while d is the distance to the star.
The value of A is 149 million km (or 93 million miles). Even with this large distance as a baseline, the parallax of even the nearest star is very small. The diameter of the Sun or Moon as seen from the Earth is half a degree. A degree (°) is made up of 60 minutes of arc. A minute (') is made up of 60 seconds of arc. A second ('') is thus 1 / 3600 of a degree. A small coin seen from 10km will subtend an angle of 1''.
The parallax of even the nearest star is less than a second of arc.
The largest parallax figure ever found is for the third brightest star, Alpha Centauri, and has a value of 0.76''.
The diagram below shows the trigonometry of parallax measurements.
The Earth, Sun and star make up a right-angled triangle with the Sun at the right angle. The distance between the Earth and the Sun, A, is known. The angle on the right is half of the parallax, p, which has been measured. The distance to the star, d, can be calculated by trigonometry: tan p = A / d,
which rearranges (using the small-angle approximation, with p expressed in seconds of arc) to
d = 206,265 × A / p,
where p is in seconds of arc. The units of the distance, d, are whatever units are used to measure A.
For the star, Alpha Centauri, the distance calculated from the parallax of 0.76'' is (in km): d = 206,265 × 149,000,000 km / 0.76 ≈ 4.0 × 10^13 km.
This is 40 million million km. In miles the distance is 2.52 × 10^13 (or 25 million million). In either case, normal units are clearly inadequate for stellar distances.
The distance of a star can be expressed in Astronomical Units (AU), the distance between the Earth and Sun (A above). The formula then becomes: d = 206,265 / p AU.
For Alpha Centauri, d = 206,265 / 0.76 = 271,401 AU. In other words, the nearest star is over 270,000 times further away than the Sun. Even the AU is an inadequate unit for stellar distances.
Astronomers routinely use a larger unit called the Light Year (LY). This is the distance that light travels in one year. Light travels at 300,000 km (186,000 miles) per second. A Light Year is therefore 9,460,800 million km (approximately 6 million million miles). In Light Years the formula can be written: d = 3.26 / p LY.
For Alpha Centauri, d = 3.26 / 0.76 = 4.30 LY. Light takes four and a third years to travel from this star, the nearest star, to the Earth. This compares with the 8 minutes required for light to cross the distance between the Earth and the Sun.
A new distance unit can be introduced if the formula is simplified to: d = 1 / p.
A star with a parallax of 1 second of arc (1'') is at a distance of 1 parallax second, which is better known as a Parsec (pc). Alpha Centauri is 1 / 0.76 = 1.32 pc distant.
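A small Python helper (illustrative only) collects the three versions of the parallax formula used above:

```python
def distance_from_parallax(p_arcsec):
    """Distance of a star from its parallax p (in seconds of arc)."""
    return {
        "AU": 206265 / p_arcsec,   # astronomical units
        "LY": 3.26 / p_arcsec,     # light years
        "pc": 1.0 / p_arcsec,      # parsecs
    }

print(distance_from_parallax(0.76))   # Alpha Centauri: ~271,400 AU, ~4.3 LY, ~1.3 pc
```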
The 20 brightest stars vary in distance from 4.30 LY (for Alpha Centauri) to 1,800 LY (for Deneb).
Below is a table of the nearest stars:
|Proxima Centauri||+10.7||0.763||1.31||4.27||A companion to Alpha Centauri - this is the nearest star but is nearly 5 magnitudes too faint to be visible with the naked eye.|
|Alpha Centauri||0.0 and 1.4||0.752||1.33||4.33||A binary star consisting of two stars that appear as one to the naked eye. This is the third brightest star in the sky.|
|Barnard's Star||+9.5||0.545||1.83||5.98||Another star too faint to be seen with the naked eye.|
|Wolf 359||+13.7||0.425||2.35||7.67||This is the faintest star in the table.|
|Lalande 21185||+7.5||0.398||2.51||8.19||Yet another faint star.|
|Sirius||-1.5 and 8.7||0.375||2.67||8.69||The brightest star is also one of the closest. It has a faint companion.|
Two interesting points can be made from this table. Of the six stars closest to the Sun, only 2 are visible to the naked eye. Furthermore, two are binary systems consisting of more than a single star. This is a very different list to the brightest stars.
Trigonometric parallax is accurate to about 200 pc (650 LY). Several tens of thousands of stars have had their distances measured directly using this technique. Beyond that distance other, indirect, methods have to be used. Some of these are summarised below and will be discussed in other essays. Many of these indirect methods involve determining a star's luminosity (the amount of light emitted by a star) which can then be compared to its brightness (the amount of light that reaches the Earth) to give a distance.
Absolute Magnitude is the magnitude a star would have at a standard distance, which has been set to 10 parsecs (about 32 Light Years).
The brightness of an object decreases with distance in line with the inverse square law. This means that if the distance is doubled, the brightness decreases by four. A simple formula links Absolute Magnitude (M) with apparent magnitude (m) and distance (d) in parsecs: M = m + 5 − 5·log10(d).
The table below shows apparent and absolute magnitudes for selected stars:
From a distance of 10 parsecs, Deneb would be a brilliant object appearing three magnitudes brighter than Venus, while our Sun would be one of the less prominent stars. The nearby star, Wolf 359, would be nearly 11 magnitudes fainter than naked eye visibility (apparent magnitude 6).
Luminosity is the amount of energy given out by a star. A simple formula links the star's Luminosity in Suns (L) with its Absolute Magnitude (M):
Question: What is the Luminosity of the star Deneb?
Deneb has an Absolute Magnitude of -7.39. Putting this value into the formula gives:
Answer: Deneb has the luminosity of nearly 70,000 Suns.
Question: What is the Luminosity of the nearby star Wolf 359?
Wolf 359 has an Absolute Magnitude of +16.8. Putting this value into the formula gives:
Answer: Wolf 359 has a luminosity nearly 150,000 times less than the Sun.
From the above two examples, it can be seen that stars vary greatly in luminosity. The Sun turns out to be of average luminosity. Some stars can be over half a million times more luminous than the Sun; others radiate a tiny fraction of the Sun's light.
Luminosity can also be estimated by other properties of a star, especially its spectrum. This allows astronomers to reverse the procedures above and determine a star's distance.
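Going the other way – from apparent magnitude and distance to absolute magnitude, and from there to luminosity – can be sketched in Python. This assumes the standard distance-modulus relation and takes the Sun's absolute magnitude as roughly +4.8; the essay's exact constant and distances are not shown, so the printed values may differ slightly from the figures quoted above:

```python
import math

def absolute_magnitude(m, d_parsecs):
    """Distance modulus: M = m + 5 - 5*log10(d), with d in parsecs."""
    return m + 5 - 5 * math.log10(d_parsecs)

def luminosity_in_suns(M, m_sun=4.8):
    """Each magnitude step is a factor of about 2.512 in light output."""
    return 2.512 ** (m_sun - M)

# Deneb: m = 1.25, distance roughly 1,800 LY (about 550 pc) -- assumed values
M_deneb = absolute_magnitude(1.25, 550)
print(M_deneb, luminosity_in_suns(M_deneb))   # about -7.4 and tens of thousands of Suns
```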
Stars glow because they generate energy. To study the properties of a glowing body, physicists assume that it is a perfect radiator and absorber of radiation. Such an object is called a Black Body.
If we assume that a star is a black body, it is possible to measure its surface temperature by studying the radiation given out. A star is not an exact Black Body but it is very close. There are three main methods of measuring stellar temperatures.
A glowing body normally emits radiation over a range of different wavelengths (or colours). The chart below shows the theoretical radiation profile for three different temperatures.
There are two points to note from the above graphs.
Firstly, the higher the temperature, the more total radiation is emitted (as shown by the curve being higher for all wavelengths).
Secondly, the curves peak at a different wavelength for different temperatures. For higher temperatures, the peak of the radiation is at shorter (bluer) wavelengths. The peak of the radiation curve for a body glowing at a temperature of 3000 K is at λ1. For a body at a temperature of 4500 K the radiation peak is at λ2, which is a shorter (bluer) wavelength. The curve for 6000 K peaks at λ3, the shortest wavelength of the three.
Longer wavelengths mean redder light. Shorter wavelengths imply bluer light. In physical terms, relatively cool objects glow red. As they heat up they glow orange, then yellow, then white and finally blue. In addition, as a glowing body gets hotter the total amount of energy it emits increases.
Wien's Law provides a formula that relates the peak radiation wavelength (λT, in nanometers, nm) to the temperature, T, of a body in degrees Kelvin (K): λT = 2,900,000 / T (equivalently, T = 2,900,000 / λT).
When the light of a star is plotted on a graph (Wavelength against Output), it should peak at a particular wavelength. This wavelength can then be inserted into the above formula to give the surface temperature. The table below shows Wien's Law for three stars.
The radiation peak for Rigel is in the blue part of the spectrum. For the Sun it is in the yellow. Betelgeux peaks in the red. The temperature of a glowing object is clearly related to its colour.
Using Wien's Law to determine stellar temperatures used to be time-consuming, as each star has to have its energy output analysed over a range of wavelengths. Modern digital devices can obtain the radiation curve and display it on a screen within seconds.
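A one-line Python version of Wien's Law, with the constant expressed in nm·K as above:

```python
def temperature_from_peak(wavelength_nm):
    """Wien's Law: T (K) = 2,900,000 / peak wavelength (nm)."""
    return 2_900_000 / wavelength_nm

print(temperature_from_peak(500))   # a Sun-like star peaking near 500 nm -> about 5,800 K
```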
The apparent magnitude of the stars can be measured photographically or digitally. The values obtained depend on the type of film, photographic plate or digital light cell used. Different devices are sensitive to different colours. For example, some films are at their most sensitive to daylight (which tends to have a lot of blue in it). Other films react best to artificial light (which contains more yellow).
Astronomers can use this variation to accurately measure the colour of a star.
If a star is photographed using a standard daylight photographic film, the magnitude obtained is called the Photographic Magnitude (mph). If a star is photographed with film or plates sensitive to light like the human eye (to yellow), the magnitude obtained is called the Visual Magnitude (mvs).
Colour Index (I) is defined as the difference between these two magnitudes: I = mph − mvs.
Colour Index is zero for white stars which appear the same magnitude in both types of film. Colour Index is negative for bluish stars and positive for yellow, orange and red stars. The table below shows how Colour Index is related to the colour of a star:
|The Sun, Capella||+0.81||yellow|
The colour index of a large group of stars can be easily obtained by taking two photographs and then comparing magnitudes. Modern digital devices can measure specific wavelengths.
Once the colour index (I) of a star is known, there is a formula relating it to the star's temperature, T: T = 7200 / (I + 0.64).
Question: What is the temperature of the star, Sirius?
Answer: Sirius has a colour index of 0.00, therefore T = 7200 / 0.64 = 11,250 K
Question: What is the temperature of the star, Arcturus?
Answer: Arcturus has a colour index of +1.24, therefore T = 7200 / (1.24 + 0.64) = 7200 / 1.88 = 3,830 K
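The same colour-index formula in code form (the function name is illustrative):

```python
def temperature_from_colour_index(I):
    """Approximate surface temperature in K: T = 7200 / (I + 0.64)."""
    return 7200 / (I + 0.64)

print(temperature_from_colour_index(0.00))   # Sirius   -> 11,250 K
print(temperature_from_colour_index(1.24))   # Arcturus -> about 3,830 K
```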
Temperature can be calculated very quickly by using a star's colour index but the values are only approximate, especially for the hottest stars. The third technique gives the most accurate measurements of temperature as well as many other stellar properties.
During the 19th century, astronomers classified the different spectra using letters of the alphabet. Some letters were later dropped. The image below shows the most common spectral types.
Stars are classified into seven broad Spectral Types, given the letters O, B, A, F, G, K and M.
The differences in the spectral types is mainly due to differences in stellar temperature. In the spectral sequence above, O stars are the hottest, M the coolest. For more accuracy, each spectral type is sub-classified into 10 subtypes. Between G and K we have G0, G1, G2, G3 all the way to G9. Subtype G0 is slightly hotter than G1. Other spectral types exist but these will not be discussed here.
The existence of spectral lines can be explained by atomic physics and especially Quantum Mechanics.
The lines are produced when atoms (or ions - atoms which have lost electrons) close to the star's surface absorb precise wavelengths of light. The electrons in the atoms change energy levels in a well defined manner when atoms absorb light. The amount of light absorbed by a particular atom is dependent on a number of factors like the abundance of the atom, temperature and pressure. These factors all affect the intensity of a particular spectral line.
Some spectral lines are caused by atomic transitions that only begin to occur at a higher temperature. These lines will not be present in cooler stars but will begin to appear in the hotter stars. Lines due to neutral and ionised Helium behave in this way.
Other lines are due to atomic transitions that occur at a lower temperature. These lines will be present in the cooler stars. In the hotter stars, the electron causing the line may have been lost and less atoms are producing them. The lines will not be present in hotter stars. Examples are lines of metals like Calcium or Iron or molecular lines (Titanium Monoxide).
Some lines appear in the middle spectral types but not in the very hottest (Type O) or coolest (Type M). Lines of ionised metals are very common in these types of stars but the best example is lines of Hydrogen.
The diagram below shows the intensities of various selected spectral lines for stars of different spectral types.
Molecular lines are strongest in the cool, red M type stars. Neutral metals are strongest in orange K stars. Lines of neutral and ionised metals are about equally intense in the yellow, sun-like G stars. Ionised metals are very strong in pale yellow F stars. Hydrogen lines are strongest in the white A stars. Hydrogen and neutral Helium are of equal intensities in the bluish white hot B stars. Ionised Helium is very strong in the hottest type O stars.
The properties of spectral lines can be established in the laboratory by experiment and by theoretical calculation. This makes it possible to look at each type of stellar spectrum and apply a temperature to it.
On this spectral classification, the Sun is of Spectral Type G2. That means it is two tenths of the way between type G and K. The spectrum contains lines of both neutral and ionised metals which are more intense than lines of Hydrogen. This spectral type indicates a temperature of 5,700 K and a yellow colour for the star.
The following table shows the properties of stars of different spectral types.
|O||He+, He, H, O2+, N2+, C2+, Si3+||Blue||-0.45||45,000||Zeta Puppis|
|B||He, H, C+, O+, N+, Fe2+, Mg2+||Bluish White||-0.20||30,000||Rigel|
|A||H, ionised metals||White||0.00||12,000||Sirius|
|F||H, Ca+, Ti+, Fe+||Yellowish White||+0.40||8,000||Procyon|
|G||Ca+, Fe, Ti, Mg, H, some molecular bands||Yellow||+0.60||6,500||Capella|
|K||Ca+, H, molecular bands||Orange||+1.00||5,000||Aldebaran|
|M||TiO, Ca, molecular bands||Red||+1.5||3,500||Betelgeux|
The list of brightest stars contains spectral types.
Spectra give very accurate stellar temperatures. Many stars can be photographed to produce large numbers of stellar spectra for analysis.
Apart from surface temperature, much more information can be obtained from the powerful technique of Spectroscopy (the study of spectra).
Some stars appear close together in the sky but are actually at different distances. These unrelated stars are called Optical Binaries. More interesting are the Visual Binaries. These are two stars that are in orbit about each other forming a single stellar system.
It is even possible to have systems with more than two stars. The star Castor (one of the twins in Gemini) is a complex system of six stars in orbit around each other.
Visual Binary stars can be observed over many years and the orbits of the component stars can be plotted. These stars orbit each other following the laws of gravitation discovered by Johannes Kepler and Isaac Newton. By applying the laws of gravitation to a pair of Visual Binary stars, it is possible to calculate their masses. The formula is: M1 + M2 = a³ / (p³ Y²),
where M1 and M2 are the masses of the two stars (in Suns), a is the mean (average) angular separation of the two stars (in seconds of arc, ''), p is the parallax of the system (in seconds of arc, '') and Y is the orbital period of the stars (in years).
The formula gives the sum of the masses of the two stars once a complete orbital revolution has been observed.
Question: The two components of the Alpha Centauri system have a mean separation of 17.6'' and an orbital period of 80.1 years. What are the combined masses of the binary?
The parallax (p) of Alpha Centauri is 0.752''. Putting these figures in to the formula above gives,
M1 + M2 = a³ / (p³Y²) = 17.6³ / (0.752³ × 80.1²) = 5451.8 / (0.42526 × 6416.01) = 1.998.
Answer: The combined masses of the two stars in the Alpha Centauri system is nearly twice that of the Sun.
Generally, stars in a binary system will be of unequal masses. If one of the stars is more massive than the other, the centre of gravity is closer to the more massive star. In the diagram below, the star on the left is more massive.
The masses of the two stars (M1 and M2) can be calculated by measuring the ratio of the two stars' distances from the centre of gravity (x and y, in any units): M1 / M2 = y / x.
In other words, if star B is twice as far from the centre of gravity as star A, then star A is twice as massive as star B.
Question: The centre of gravity of the Alpha Centauri binary system divides the distance between the two stars in the ratio 1 to 1.25. What are the individual masses of the two stars?
|From before, the sum of the two masses is known:||M1 + M2 = 1.998||......Equation I|
|From the position of the centre of gravity, the ratio is known:||M1 / M2 = 1.25||......Equation II|
By using simultaneous equations the two masses are found to have values of M1 = 1.11 and M2 = 0.89.
Answer: One member of the Alpha Centauri binary system is about 10% more massive than the Sun, the other is roughly 10% less massive than the Sun.
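Both steps – Kepler's relation for the combined mass, then the centre-of-gravity ratio to split it – can be sketched in Python (the function names are illustrative):

```python
def combined_mass(a_arcsec, p_arcsec, period_years):
    """Mass sum of a visual binary (in Suns): M1 + M2 = a**3 / (p**3 * Y**2)."""
    return a_arcsec**3 / (p_arcsec**3 * period_years**2)

def split_masses(total, ratio_m1_over_m2):
    """Split the combined mass using the centre-of-gravity ratio M1 / M2."""
    m2 = total / (1 + ratio_m1_over_m2)
    return ratio_m1_over_m2 * m2, m2

total = combined_mass(17.6, 0.752, 80.1)   # about 2.0 solar masses
print(split_masses(total, 1.25))           # about (1.11, 0.89)
```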
The following table shows the masses of selected binary stars.
|Alpha Centauri||80.1||1.11 and 0.89|
|Sirius||49.9||2.28 and 0.98|
|Procyon||40.6||1.76 and 0.65|
Question: What would be the luminosity of a star with a mass of 2 suns?
Answer: Using the formula, L = M^3.5 = 2^3.5 = 11.3. A star twice the Sun's mass would be more than 11 times as luminous.
Because of the Mass-Luminosity Law, if either the mass or luminosity is measured the other can be calculated.
Note that the luminosity of stars increases very rapidly with increases in mass because of the 3.5 power.
If two stars have the same temperature but differ in luminosity, they must possess a different number of square meters.
Stefan's Law allows astronomers to measure the difference in surface area of stars if the Luminosity (L, in suns) and Temperature (T, in units of the Sun's surface temperature) are known. These are related to the star's Radius (R, also in suns) by the formula R = √L / T². Note that the Diameter of a star is twice its Radius.
Question: Sirius has a Temperature twice that of the Sun and a Luminosity of 40 suns. What is its Radius (in suns)?
Answer: From the formula, R = √L / T² = √40 / 2² = 6.4 / 4 = 1.6.
Sirius has a radius that is 1.6 times larger than the Sun. The Sun's diameter is 1,390,000 km, so the diameter of Sirius is 2,224,000 km.
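A Python sketch of this use of Stefan's Law, working throughout in solar units as the example does:

```python
import math

SUN_DIAMETER_KM = 1_390_000

def radius_in_suns(luminosity_suns, temperature_suns):
    """Stefan's Law rearranged: R = sqrt(L) / T**2, all in solar units."""
    return math.sqrt(luminosity_suns) / temperature_suns**2

r = radius_in_suns(luminosity_suns=40, temperature_suns=2)   # Sirius A -> about 1.6
print(r, r * SUN_DIAMETER_KM)                                # diameter roughly 2.2 million km
```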
The Diameter of the star depends on the wavelength of light used and on the precise nature of the interference pattern. Stars like Betelgeux, Antares and Aldebaran have had their diameters measured in this way. The values obtained agree very well with those calculated using Stefan's Law.
For single stars, it is not possible to distinguish between a red shift caused by the Doppler Effect and the Gravitational Red Shift. However, if the star is part of a binary, the Doppler Effect can be accounted for since it will be the same for both stars. This is assuming that the other star is of normal size.
Once the Gravitational Red Shift is known, the Radius can be calculated. Again, the values obtained agree with those calculated using Stefan's Law.
The velocities of the stars in their orbits can be measured by looking at the Doppler Effect in their spectra. The duration of the eclipses can be measured by looking in detail at how the brightness of the binary varies with time (the Light Curve).
These two pieces of information can be used to calculate the diameters of the two stars.
The following table lists the diameters of selected stars. These have been determined by a variety of methods.
|Alpha Centauri||1.23 and 0.87|
|Sirius||1.6 and 0.022|
|VY Canis Majoris||2600|
Question: The two stars in the Sirius system (called Sirius A and Sirius B) have masses of 2.28 and 0.98 respectively and their radii are 1.6 and 0.022. What are their densities relative to the Sun?
Answer: Using the formula for Sirius A gives ρ = M / R³ = 2.28 / 1.6³ = 2.28 / 4.096 = 0.56.
For Sirius B, ρ = 0.98 / 0.022³ = 0.98 / 0.000011 = 92,000.
Sirius A is half as dense as the Sun, but Sirius B is over 90,000 times denser. Sirius B is a dense White Dwarf star. It is so dense, with such a strong gravitational field, that it shows a Gravitational Red Shift.
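The density comparison in Python, again in solar units (a sketch only):

```python
def density_in_suns(mass_suns, radius_suns):
    """Relative density: rho = M / R**3, with mass and radius in solar units."""
    return mass_suns / radius_suns**3

print(density_in_suns(2.28, 1.6))      # Sirius A -> about 0.56 (half the Sun's density)
print(density_in_suns(0.98, 0.022))    # Sirius B -> about 92,000 (a white dwarf)
```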
In the H-R Diagram, Temperature is plotted along the x-axis. Spectral Type or Colour Index can also be used. For historical reasons, the higher temperatures are on the left of the x-axis. Luminosity (or Absolute Magnitude) is plotted along the y-axis.
Another smaller band runs almost horizontally on the right of the Main Sequence. These stars are cool and are, on average, about 100 times more luminous than the Sun. They are called Red Giants. Above them are the rarer Supergiants which shine with the power of at least 10,000 suns. Finally, at the lower left are the small, dense, hot White Dwarf stars, less than a hundredth the luminosity of the Sun.
This diagram is very important as it groups together different types of stars and allows them to be understood better. For example, the Mass-Luminosity Law is only obeyed by Main Sequence stars. Gaps in the diagram are often regions of instability. The region between the Main Sequence and White Dwarfs is inhabited by stars that vary in brightness and are unstable in other ways.
Some of the uses of the H-R Diagram are listed below.
© 2004, 2009 KryssTal
This essay is dedicated to Premika Rajamooni, Gavin Bryan, Tim Hale, Darren Toomer, Kimberley Smith.
| http://www.krysstal.com/thestars.html | 13
84 | You were introduced to the standard normal distribution in the section on relative scores. In this section, you will learn how to use the concepts of probability to compute percentile ranks using the standard normal distribution. You will also learn about the concept of sampling distributions, the variables that affect them, and how to use them. This section will give you the last of the tools that you need to understand inferential statistics.
As you learned earlier, the normal curve is determined by a mathematical equation, and any normal distribution can be converted to a standard normal distribution by subtracting the mean from each score and dividing by the standard deviation. This transformation turned each score into a standard score or Z-score. In this section we will see how the concept of probability can be applied to the standard normal distribution and, by extension, to any distribution.
To review, the standard normal curve has a mean of zero and a standard deviation of 1. The distances between the mean and any point on the standard normal curve can be computed mathematically. In fact, the computations have already been done and are tabled in something called the standard normal table. Shown below is a small part of the standard normal table, which is divided into three sets of three columns. In the columns labeled (a) are the Z-scores. The standard normal table included on this website has every Z-score from 0 to 3.25 in .01 increments, and from 3.25 to 4.00 in .05 increments. This is normally more than enough detail for solving the typical problems using this table. The columns labeled (b) show the area under the curve from the mean to that Z-score, and the columns labeled (c) show the area from that Z-score to to end of the distribution. For example, find the Z-score of .60 in the abbreviated table below. Note that the area from the mean to that Z-score is .2257 and the area from that Z-score to the end of the tail is .2776. These two numbers will always add to .5000, because exactly 50% (a proportion of .5000) of the curve falls above the mean and 50% below the mean in a normal curve. The figure below illustrates this Z-score and the areas that we read from the table.
How can we use this information to determine the percentile rank for the person whose Z-score is .60? Remember that the percentile rank is the percent of individuals who score below a given score. If you look at the figure above, you will see that the area below the mean is .5000 and the area between the mean and a Z-score of .60 is an additional .2257. Therefore, the total area under the standard normal curve that falls below a Z of .60 is equal to .5000 plus .2257, which is .7257. We are essentially using the addition rule of probability in that we are computing the probability of getting a Z-score of +.60 or lower as the probability of getting a Z-score below 0 plus the probability of getting a Z-score between 0 and +.60, as shown below. The value of .7257 is a proportion. The total area under the curve is 1.00 expressed as a proportion and 100% expressed as a percent. To convert a proportion to a percent, you multiply by 100, which is the same as moving the decimal point to the right two places. Therefore, the proportion of .7257 becomes a 72.57%. This approach can be used with any Z-score to compute the percentile associated with that Z-score.
Let's work another example, this time with a negative Z-score. What is the percentile rank for a Z-score of a -1.15. The first step is to always draw the picture. Quickly sketch a normal curve and place the score in approximately the right space. Remember, the sketch of a normal curve stretches a bit more than two standard deviations above and below the mean, so a Z-score of -1.15 will be about half way between the mean and the end of the lower tail, as shown in the figure below. (Technically, the normal curve stretches from a minus infinity to a plus infinity, never quite touching the X-axis. However, virtually the entire area of the curve is in the section from a Z = -2.00 to a Z = +2.00.) Again, we can use the abbreviated standard normal table above to determine the areas of that section of the curve.
The most common mistake made by students with negative Z-scores is to assume that the area listed in column (b) of the standard normal table is the area below the Z-score. It is always the area between the mean and the Z-score. It is the area below the Z-score down to the mean for positive Z-scores, but it is an area above the Z-score up to the mean for negative Z-scores. Again, if you draw the picture and carefully interpret the meaning of the standard normal curve, you can avoid these common mistakes. In this case, the area under the curve and below a Z-score of -1.15 is .1251. Therefore, the percentile rank for this score is 12.51%. Just over 12% of the sample scored lower than this score.
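Both percentile calculations can be checked numerically. This sketch uses the exact normal cumulative distribution (via the error function) instead of the printed table, so the results agree with the table to rounding:

```python
import math

def percentile_from_z(z):
    """Percent of a normal distribution that falls below a given Z-score."""
    return 100 * 0.5 * (1 + math.erf(z / math.sqrt(2)))

print(percentile_from_z(0.60))    # about 72.57
print(percentile_from_z(-1.15))   # about 12.51
```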
Let's assume for the sake of argument that we have a large population, say a million people, and that we have scores on all of the people in that population. It is unlikely that we really would have scores on everyone in a population that large, but for the sake of argument, we will say that we do. Furthermore, we have graphed the distribution for that population and found that it has a moderate positive skew, as shown in the curve below. The mean for this population is also shown with a vertical line. Because the distribution is skewed, the mean is pulled a bit from the peak of the distribution. If we take a random sample of 100 people from that population and produce a frequency polygon for the sample, we will get a graph that resembles this curve. With just 100 people, the graph will not be as smooth, but you will probably be able to tell by looking at the graph that there is a slight positive skew. If you take a bigger sample, say 500 people, the resulting graph will be very similar in shape to the population distribution shown here. The reason is that random samples tend to produce representative samples, so the range and distribution of the scores in a distribution drawn randomly from a population should provide a reasonably accurate representation of the shape of the distribution. The larger the sample size, generally the more likely that the sample will resemble the distribution of the population. This is the first concept to remember about sampling: The larger the sample, the more likely that the sample will accurately represent the population, provided that the sample is unbiased. A random sample is the best way to produce an unbiased sample.
We can think of a sample of 500 people that we select from the population and graph as 500 samples of size N=1. In other words, we are drawing 500 individual samples of one person. The mean for a sample of one person is the score for that person, because there is only 1 score to sum and we are dividing by N, which is equal to 1. If we think of the distribution not as a distribution of scores, but as a distribution of means from multiple samples of just one person, the distribution becomes a sampling distribution of the mean. Technically, a sampling distribution of the means is a theoretical concept, which is based on having all possible samples of a given sample size. But if we had 500 samples, we could get a pretty good idea what the distribution would look like. A sampling distribution of means is an important concept in statistics, because all inferential statistics is based on the concept.
If our samples only contain one person, they may each provide an unbiased estimate of the population mean, and the one person could be drawn from anywhere in the distribution. But what if we draw 500 samples of two people, compute the mean for each sample, and then graph the results. This would be a sampling distribution of the means for samples of size N=2. What do you think will happen? Suppose in the first sample, we happen to draw a person who scored in the tail as our first participant. What will the second participant look like? Remember that in random sampling, the selection of any one person does not affect the selection of any other person, so the initial selection will not influence the selection of the second person in the sample, but on average, it is unlikely that we would sample two people that are both in one tail of the distribution. If one is in the tail and the other is near the middle of the distribution, the mean will be halfway between the two. What will happen is that the distribution of means for a sample size of 2 will be a little less variable than for a sample size of 1. This is our second concept about sampling: A larger sample size will tend to give a better (i.e., more accurate) estimate of the population mean than a smaller sample size. For the sake of illustration, we have graphed the likely distribution of means for 500 samples of size N=2 below.
Note how the sampling distribution of the means for a sample size of 2 is somewhat narrower and a little less skewed than for a sample size of 1. The mean of this distribution of means is still the population mean, so that value has not shifted. You can see that the skew has decreased a bit in two ways. The first is that the tail to the right of the distribution is not quite as long. The second is that the mean is now closer to the peak of the distribution. What would happen if our sample size were larger still? Let's assume that we take 500 samples of 10 people each, computer their mean, and then graph those means to see what the distribution looks like. The figure below is what you can expect to find.
Again, the mean of this distribution of means has not shifted, because the mean of a sample is an unbiased estimate of the population mean. What that means is that if we took all possible samples of a given sample size and computed the mean for each sample, and then we computed the mean of all those means, the result would be the population mean. That is quite a mouthful to say and most students have to read that last sentence two or three times to let the idea settle in. This is a theoretical concept, although it can be proved mathematically. However, in practice, it is pretty close to impossible to actually achieve the goal of identifying every possible sample of a given size. Nevertheless, it is possible to predict exactly what will happen with this sampling distribution of the mean for any sample size. As the sample size increases, two things will occur. First the variability of the sampling distribution will get smaller as the sample size increases. This is just another way of saying that the estimate of the population mean tends to be more accurate (that is, closer to the population mean) as the sample increases. Therefore, the means for larger samples cluster more tightly around the population mean. The second thing that happens is that the shape of the distribution of means gets closer to a normal distribution as the sample size increases, regardless of the shape of the distribution of scores in the population. If you look at the three graphs above (samples sizes of 1, 2, and 10, respectively), we went from a moderate and clearly visible skew to a barely noticeable skew by the time we reached a sample size of 10. With a sample size of 25 or 30, it would be virtually impossible to detect the skew. This movement toward a normal distribution of the means as the sample size increases will always occur. Mathematicians have actually proved this principle in something called the central limit theorem. For all practical purposes, the distribution of means will be so close to normal by the time the sample size reaches 30, that we can treat the sampling distribution of means as if it were normal.
Mathematicians have also determined what the theoretical variability of a distribution of means should be for any given sample size. If we knew the population standard deviation, we could easily compute the standard deviation for the theoretical distribution of means using the formula σ_M = σ / √N. The only new terminology in this equation is that we have subscripted the population standard deviation symbol with the symbol for a mean to indicate that, instead of talking about the standard deviation of a distribution of scores, we are talking about the standard deviation of a theoretical distribution of means. This standard deviation is referred to as the standard error of the mean (sometimes just called standard error), although it would be just as legitimate to call it the standard deviation of the means, because that is what it is.
Of course, we do not know population parameters like the standard deviation. The best we can do is to estimate the population standard deviation with the standard deviation from our sample. Remember that the formula for the standard deviation had to be corrected to avoid producing a biased estimate of the population standard deviation. It is this unbiased estimate formula that should be used when we compute the standard error estimate. The equation can be written in three forms: s_M = s / √N = √(Σ(X − X̄)² / (N − 1)) / √N = √(Σ(X − X̄)² / (N(N − 1))). The first indicates the definition, which is identical to the above equation. In the second form, we have substituted the formula for the unbiased estimate of the population standard deviation for s in the numerator. The third form was simplified from the second form with a little algebra.
Once again, we can now pull together several separate concepts and create a new statistical procedure that you can use, this time called the confidence interval. If we draw a single sample of participants from a population and compute the mean for that sample, we are essentially estimating the mean for the population. We would like to know how close our estimate of that population mean really is. There is, unfortunately, no way of knowing the answer to that question from just a single sample, but we can approach the question in a different way. Instead of simply saying that the sample mean is our best estimate of the population mean, we can give people an idea of how good an estimate it is by computing a confidence interval. A confidence interval is a range of scores in which we can predict that the mean falls a given percentage of the time. For example, a 95% confidence interval is a range in which we expect the population mean to fall 95% of the time. How wide or narrow that interval of scores is will give people an objective indication of how precise our estimate of the population is. Now remember the principle that you learned earlier that the larger the sample size, assuming the sample is unbiased, the more precise your estimate of the population mean. That suggests that with larger sample sizes, we can have a narrower confidence interval, because our mean from a large sample is likely to be close to the population mean.
The secret to creating the confidence interval is to know not only the mean and standard deviation of your sample, but also the shape of the distribution. If we are talking about a confidence interval about a mean, our distribution is the sampling distribution of the means, and the central limit theorem tells us that is will be normal in shape if the sample size is reasonably large (say 30 or more). So we can use the standard normal table and a relatively simple formula to compute our confidence interval.
When we introduced the normal curve earlier, we noted that the distance from the mean almost to the end of the tail was two standard deviations, and that about 2% of the area under the curve lies beyond two standard deviations in each tail. If we want to do a 95% confidence interval, we want to know the exact value that will cut off 2.5% in each tail. Remember that the figures are listed as proportions in the standard normal table, so 2.5% is .0250 in the tail. Looking this value up in the standard normal table shows that a Z of 1.96 cuts off the last 2.5% in the tail.
You now have all of the elements that you need to construct a confidence interval for the mean (your sample mean, standard deviation, sample size, and this cutoff point in the normal curve). So let's set up the logic and then introduce you to the formulas. Focus on the logic, because you can always look up the formulas when you need them. If we have a reasonably large sample, with its associated sample mean, standard deviation, and sample size, we have the ingredients for not only estimating the population mean, but also estimating how close our estimate is likely to be. The steps and logic are as follows: compute the sample mean and the unbiased estimate of the standard deviation; use them to compute the standard error of the mean; multiply the standard error by 1.96 (for a 95% interval); and add and subtract that product from the sample mean to obtain the upper and lower limits of the interval.
The easiest way to understand this process is to work through an example. Let's assume that we have sampled 100 people and computed the sample mean as 104.37 and the sample standard deviation as 14.59. We used the formula for the standard deviation that gives us an unbiased estimate of the population value. In fact, from now on, we will only be using that formula (the one in which we divide by the square root of N-1). We can use this information to compute the standard error of the means: s_M = s / √N = 14.59 / √100 = 1.459.
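Putting the steps together in a short Python sketch – the 1.96 multiplier is the two-tailed Z cutoff found above, and the interval limits it produces are worked out in the text below:

```python
import math

def confidence_interval_95(mean, sd, n):
    """95% CI for the population mean: mean +/- 1.96 * (sd / sqrt(n))."""
    se = sd / math.sqrt(n)                    # standard error of the mean
    return mean - 1.96 * se, mean + 1.96 * se

print(confidence_interval_95(104.37, 14.59, 100))   # roughly (101.51, 107.23)
```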
Now before we plug the numbers into the confidence interval formula, let's first draw the figure to show what we are doing. We know that the sampling distribution of the means will be normal and will have an estimated mean of 104.37 and an estimated standard error (the standard deviation of that distribution of means) of 1.459, which we just calculated. So our confidence interval will look like the following.
This figure shows the theoretical distribution of the means for a sample of size 100, based on the mean and standard deviation from that sample. Again, our sample mean is our best estimate, but the population mean may be a little lower or higher than this particular sample mean. However, we now have a way to say just how much lower or higher it is likely to be. The lower limit is 104.37 − 1.96(1.459), which equals 101.51 (rounding to two decimal points). The upper limit is 104.37 + 1.96(1.459), which equals 107.23. Therefore we can say that we are 95% confident that the population mean lies between 101.51 and 107.23. | http://www.ablongman.com/graziano6e/text_site/MATERIAL/statconcepts/sampling.htm | 13
53 | Common Core State Standards for Mathematics - Grade 6
|Ratios and Proportional Relationships|
Understand ratio concepts and use ratio reasoning to solve problems.
6.RP.1 Understand the concept of a ratio and use ratio language to describe a ratio relationship between two quantities.
6.RP.3 Use ratio and rate reasoning to solve real-world and mathematical problems, e.g., by reasoning about tables of equivalent ratios, tape diagrams, double number line diagrams, or equations.
|Thinking Blocks Ratios||Modeling Tool||Scale Factor X||Ratio Stadium||Ratio Blaster|
|Ratio Martian||Dirt Bike Proportions|
|The Number System|
Apply and extend previous understandings of multiplication and division to divide fractions by fractions.
6.NS.1 Interpret and compute quotients of fractions, and solve word problems involving division of fractions by fractions, e.g., by using visual fraction models and equations to represent the problem.
6.NS.3 Fluently add, subtract, multiply, and divide multi-digit decimals using the standard algorithm for each operation.
6.NS.4 Find the greatest common factor of two whole numbers less than or equal to 100 and the least common multiple of two whole numbers less than or equal to 12. Use the distributive property to express a sum of two whole numbers 1–100 with a common factor as a multiple of a sum of two whole numbers with no common factor.
6.NS.6 Understand a rational number as a point on the number line. Extend number line diagrams and coordinate axes familiar from previous grades to represent points on the line and in the plane with negative number coordinates.
6.NS.7 Understand ordering and absolute value of rational numbers.
6.NS.8 Solve real-world and mathematical problems by graphing points in all four quadrants of the coordinate plane. Include use of coordinates and absolute value to find distances between points with the same first coordinate or the same second coordinate.
|Orbit Integers||Integer Warp||Spider Match||The X Detectives|
|Expressions and Equations|
Apply and extend previous understandings of arithmetic to algebraic expressions.
6.EE.1 Write and evaluate numerical expressions involving whole-number exponents.
6.EE.2 Write, read, and evaluate expressions in which letters stand for numbers.
6.EE.3 Apply the properties of operations to generate equivalent expressions.
6.EE.4 Identify when two expressions are equivalent (i.e., when the two expressions name the same number regardless of which value is substituted into them).
6.EE.5 Understand solving an equation or inequality as a process of answering a question: which values from a specified set, if any, make the equation or inequality true? Use substitution to determine whether a given number in a specified set makes an equation or inequality true.
6.EE.6 Use variables to represent numbers and write expressions when solving a real-world or mathematical problem; understand that a variable can represent an unknown number, or, depending on the purpose at hand, any number in a specified set.
6.EE.7 Solve real-world and mathematical problems by writing and solving equations of the form x + p = q and px = q for cases in which p, q and x are all nonnegative rational numbers.
6.EE.8 Write an inequality of the form x > c or x < c to represent a constraint or condition in a real-world or mathematical problem. Recognize that inequalities of the form x > c or x < c have infinitely many solutions; represent solutions of such inequalities on number line diagrams.
6.EE.9 Use variables to represent two quantities in a real-world problem that change in relationship to one another; write an equation to express one quantity, thought of as the dependent variable, in terms of the other quantity, thought of as the independent variable. Analyze the relationship between the dependent and independent variables using graphs and tables, and relate these to the equation. For example, in a problem involving motion at constant speed, list and graph ordered pairs of distances and times, and write the equation d = 65t to represent the relationship between distance and time.
|Weigh the Wangdoodles||Algebra Puzzle||Math on Planet Zog||Algebraic Reasoning||Modeling Tool|
|Swimming Otters||Otter Rush|
|Geometry|
Solve real-world and mathematical problems involving area, surface area, and volume.
6.G.1 Find the area of right triangles, other triangles, special quadrilaterals, and polygons by composing into rectangles or decomposing into triangles and other shapes; apply these techniques in the context of solving real-world and mathematical problems.
6.G.2 Find the volume of a right rectangular prism with fractional edge lengths by packing it with unit cubes of the appropriate unit fraction edge lengths, and show that the volume is the same as would be found by multiplying the edge lengths of the prism. Apply the formulas V = l w h and V = b h to find volumes of right rectangular prisms with fractional edge lengths in the context of solving real-world and mathematical problems.
6.G.3 Draw polygons in the coordinate plane given coordinates for the vertices; use coordinates to find the length of a side joining points with the same first coordinate or the same second coordinate. Apply these techniques in the context of solving real-world and mathematical problems.
6.G.4 Represent three-dimensional figures using nets made up of rectangles and triangles, and use the nets to find the surface area of these figures. Apply these techniques in the context of solving real-world and mathematical problems.
|Statistics and Probability|
Develop understanding of statistical variability.
Summarize and describe distributions.
6.SP.1 Recognize a statistical question as one that anticipates variability in the data related to the question and accounts for it in the answers.
6.SP.3 Recognize that a measure of center for a numerical data set summarizes all of its values with a single number, while a measure of variation describes how its values vary with a single number.
6.SP.4 Display numerical data in plots on a number line, including dot plots, histograms, and box plots.
6.SP.5 Summarize numerical data sets in relation to their context.
|Copyright © 2013 MathPlayground.com All rights reserved.| | http://www.mathplayground.com/common_core_state_standards_for_mathematics_grade_6.html | 13 |
126 | Lossless data compression is a class of data compression algorithms that allows the exact original data to be reconstructed from the compressed data. The term lossless is in contrast to lossy data compression, which only allows constructing an approximation of the original data, in exchange for better compression rates.
Lossless data compression is used in many applications. For example, it is used in the ZIP file format and in the Unix tool gzip. It is also often used as a component within lossy data compression technologies (e.g. lossless mid/side joint stereo preprocessing by the LAME MP3 encoder and other lossy audio encoders).
Lossless compression is used in cases where it is important that the original and the decompressed data be identical, or where deviations from the original data could be deleterious. Typical examples are executable programs, text documents, and source code. Some image file formats, like PNG or GIF, use only lossless compression, while others like TIFF and MNG may use either lossless or lossy methods. Lossless audio formats are most often used for archiving or production purposes, while smaller lossy audio files are typically used on portable players and in other cases where storage space is limited or exact replication of the audio is unnecessary.
Lossless compression techniques
Most lossless compression programs do two things in sequence: the first step generates a statistical model for the input data, and the second step uses this model to map input data to bit sequences in such a way that "probable" (e.g. frequently encountered) data will produce shorter output than "improbable" data.
The primary encoding algorithms used to produce bit sequences are Huffman coding (also used by DEFLATE) and arithmetic coding. Arithmetic coding achieves compression rates close to the best possible for a particular statistical model, which is given by the information entropy, whereas Huffman compression is simpler and faster but produces poor results for models that deal with symbol probabilities close to 1.
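To make the coding step concrete, here is a minimal sketch of Huffman code construction in Python. It is an illustrative toy, not the scheme used by DEFLATE or any production encoder:

```python
import heapq
from collections import Counter

def huffman_code(data):
    """Build a Huffman code (symbol -> bitstring) for a byte sequence."""
    freq = Counter(data)
    # Heap entries: (frequency, tie_breaker, tree); a tree is a symbol or a (left, right) pair.
    heap = [(f, i, sym) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)   # two least-frequent subtrees
        f2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, count, (t1, t2)))
        count += 1
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            codes[tree] = prefix or "0"   # single-symbol edge case
    walk(heap[0][2], "")
    return codes

codes = huffman_code(b"abracadabra")
# Frequent symbols (such as ord('a')) receive shorter bitstrings than rare ones.
```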
There are two primary ways of constructing statistical models: in a static model, the data is analyzed and a model is constructed, then this model is stored with the compressed data. This approach is simple and modular, but has the disadvantage that the model itself can be expensive to store, and also that it forces using a single model for all data being compressed, and so performs poorly on files that contain heterogeneous data. Adaptive models dynamically update the model as the data is compressed. Both the encoder and decoder begin with a trivial model, yielding poor compression of initial data, but as they learn more about the data, performance improves. Most popular types of compression used in practice now use adaptive coders.
Lossless compression methods may be categorized according to the type of data they are designed to compress. While, in principle, any general-purpose lossless compression algorithm (general-purpose meaning that it can accept any bitstring) can be used on any type of data, many are unable to achieve significant compression on data that are not of the form they were designed to compress. Many of the lossless compression techniques used for text also work reasonably well for indexed images.
Text and image
Statistical modeling algorithms for text (or text-like binary data such as executables) include:
- Context tree weighting method (CTW)
- Burrows–Wheeler transform (block sorting preprocessing that makes compression more efficient)
- LZ77 (used by DEFLATE)
Techniques for images take advantage of their specific characteristics, such as the common phenomenon of contiguous 2-D areas of similar tones. A simple example: every pixel but the first is replaced by the difference to its left neighbor. This leads to small values having a much higher probability than large values. This is often also applied to sound files, and can compress files that contain mostly low frequencies and low volumes. For images, this step can be repeated by taking the difference to the top pixel, and then in videos, the difference to the pixel in the next frame can be taken.
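A minimal sketch of the left-neighbor prediction filter described above (illustrative only; real formats such as PNG choose among several predictors per row):

```python
def delta_filter_row(row):
    """Replace each pixel (after the first) with the difference to its left neighbor."""
    return [row[0]] + [row[i] - row[i - 1] for i in range(1, len(row))]

def undo_delta_filter_row(filtered):
    """Invert the filter by cumulative summation."""
    out = [filtered[0]]
    for d in filtered[1:]:
        out.append(out[-1] + d)
    return out

row = [100, 101, 101, 103, 102, 102]
assert undo_delta_filter_row(delta_filter_row(row)) == row
# The filtered row is dominated by small values, which an entropy coder handles well.
```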
A hierarchical version of this technique takes neighboring pairs of data points, stores their difference and sum, and on a higher level with lower resolution continues with the sums. This is called discrete wavelet transform. JPEG2000 additionally uses data points from other pairs and multiplication factors to mix them into the difference. These factors must be integers, so that the result is an integer under all circumstances. So the values are increased, increasing file size, but hopefully the distribution of values is more peaked.
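One level of this pair-wise sum/difference idea can be sketched with a reversible integer transform. This is a simplified stand-in for the lifting transforms actually used in JPEG 2000, not their implementation:

```python
def pair_transform(samples):
    """One level of a reversible sum/difference (S-) transform on an even-length list."""
    sums, diffs = [], []
    for a, b in zip(samples[0::2], samples[1::2]):
        d = a - b
        s = b + (d >> 1)      # integer average, recoverable from s and d
        sums.append(s)
        diffs.append(d)
    return sums, diffs

def pair_inverse(sums, diffs):
    out = []
    for s, d in zip(sums, diffs):
        b = s - (d >> 1)
        a = b + d
        out.extend([a, b])
    return out

x = [10, 12, 12, 11, 9, 9, 200, 4]
assert pair_inverse(*pair_transform(x)) == x
# The next level would apply the same transform to the list of sums.
```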
The adaptive encoding uses the probabilities from the previous sample in sound encoding, from the left and upper pixel in image encoding, and additionally from the previous frame in video encoding. In the wavelet transformation, the probabilities are also passed through the hierarchy.
Historical legal issues
Many of these methods are implemented in open-source and proprietary tools, particularly LZW and its variants. Some algorithms are patented in the USA and other countries and their legal usage requires licensing by the patent holder. Because of patents on certain kinds of LZW compression, and in particular licensing practices by patent holder Unisys that many developers considered abusive, some open source proponents encouraged people to avoid using the Graphics Interchange Format (GIF) for compressing still image files in favor of Portable Network Graphics (PNG), which combines the LZ77-based deflate algorithm with a selection of domain-specific prediction filters. However, the patents on LZW expired on June 20, 2003.
Many of the lossless compression techniques used for text also work reasonably well for indexed images, but there are other techniques that do not work for typical text that are useful for some images (particularly simple bitmaps), and other techniques that take advantage of the specific characteristics of images (such as the common phenomenon of contiguous 2-D areas of similar tones, and the fact that color images usually have a preponderance of a limited range of colors out of those representable in the color space).
As mentioned previously, lossless sound compression is a somewhat specialised area. Lossless sound compression algorithms can take advantage of the repeating patterns shown by the wave-like nature of the data – essentially using autoregressive models to predict the "next" value and encoding the (hopefully small) difference between the expected value and the actual data. If the difference between the predicted and the actual data (called the error) tends to be small, then certain difference values (like 0, +1, −1 etc. on sample values) become very frequent, which can be exploited by encoding them in few output bits.
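A minimal sketch of the predict-and-encode-the-error idea, using a simple two-point linear predictor. Real codecs such as FLAC fit higher-order predictors and then entropy-code the residuals:

```python
def predict_residuals(samples):
    """Encode each sample as the error of a constant-slope predictor (2*prev - prev2)."""
    residuals = list(samples[:2])            # first two samples stored verbatim
    for i in range(2, len(samples)):
        prediction = 2 * samples[i - 1] - samples[i - 2]
        residuals.append(samples[i] - prediction)
    return residuals

def reconstruct(residuals):
    out = list(residuals[:2])
    for r in residuals[2:]:
        out.append(r + 2 * out[-1] - out[-2])
    return out

wave = [0, 30, 59, 86, 110, 130, 145]        # slowly varying, wave-like data
assert reconstruct(predict_residuals(wave)) == wave
# The residuals cluster around 0, so they cost far fewer bits than the raw samples.
```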
It is sometimes beneficial to compress only the differences between two versions of a file (or, in video compression, of successive images within a sequence). This is called delta encoding (from the Greek letter Δ, which in mathematics, denotes a difference), but the term is typically only used if both versions are meaningful outside compression and decompression. For example, while the process of compressing the error in the above-mentioned lossless audio compression scheme could be described as delta encoding from the approximated sound wave to the original sound wave, the approximated version of the sound wave is not meaningful in any other context.
Lossless compression methods
By operation of the pigeonhole principle, no lossless compression algorithm can efficiently compress all possible data. For this reason, many different algorithms exist that are designed either with a specific type of input data in mind or with specific assumptions about what kinds of redundancy the uncompressed data are likely to contain.
Some of the most common lossless compression algorithms are listed below.
General purpose
- Run-length encoding (RLE) – a simple scheme that provides good compression of data containing lots of runs of the same value (a minimal sketch appears just after this group of algorithms)
- Lempel-Ziv 1978 (LZ78), Lempel-Ziv-Welch (LZW) – used by GIF images and compress among many other applications
- DEFLATE – used by gzip, ZIP (since version 2.0), and as part of the compression process of Portable Network Graphics (PNG), Point-to-Point Protocol (PPP), HTTP, SSH
- bzip2 – using the Burrows–Wheeler transform, this provides slower but higher compression than DEFLATE
- Lempel–Ziv–Markov chain algorithm (LZMA) – used by 7zip, xz, and other programs; higher compression than bzip2 as well as much faster decompression.
- Lempel–Ziv–Oberhumer (LZO) – designed for compression/decompression speed at the expense of compression ratios
- Statistical Lempel Ziv – a combination of statistical method and dictionary-based method; better compression ratio than using single method.
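A minimal sketch of the run-length encoding scheme listed at the top of this group (byte-oriented, with run lengths capped at 255; real RLE variants differ in how they mark runs):

```python
def rle_encode(data):
    """Encode a byte string as (count, value) pairs."""
    out = bytearray()
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        out += bytes([run, data[i]])
        i += run
    return bytes(out)

def rle_decode(encoded):
    out = bytearray()
    for count, value in zip(encoded[0::2], encoded[1::2]):
        out += bytes([value]) * count
    return bytes(out)

sample = b"WWWWWWWWWWWWBWWWWWWWWWWWWBBB"
assert rle_decode(rle_encode(sample)) == sample
```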
Audio
- Apple Lossless (ALAC - Apple Lossless Audio Codec)
- Adaptive Transform Acoustic Coding (ATRAC)
- apt-X Lossless
- Audio Lossless Coding (also known as MPEG-4 ALS)
- Direct Stream Transfer (DST)
- Dolby TrueHD
- DTS-HD Master Audio
- Free Lossless Audio Codec (FLAC)
- Meridian Lossless Packing (MLP)
- Monkey's Audio (Monkey's Audio APE)
- MPEG-4 SLS (also known as HD-AAC)
- Original Sound Quality (OSQ)
- RealPlayer (RealAudio Lossless)
- Shorten (SHN)
- TTA (True Audio Lossless)
- WavPack (WavPack lossless)
- WMA Lossless (Windows Media Lossless)
Graphics
- ILBM – (lossless RLE compression of Amiga IFF images)
- JBIG2 – (lossless or lossy compression of B&W images)
- WebP – (high-density lossless or lossy compression of RGB and RGBA images)
- JPEG-LS – (lossless/near-lossless compression standard)
- JPEG 2000 – (includes lossless compression method, as proven by Sunil Kumar, Prof San Diego State University)
- JPEG XR – formerly WMPhoto and HD Photo, includes a lossless compression method
- PGF – Progressive Graphics File (lossless or lossy compression)
- PNG – Portable Network Graphics
- TIFF – Tagged Image File Format
- Gifsicle (GPL) – Optimize gif files
- Jpegoptim (GPL) – Optimize jpeg files
3D Graphics
- OpenCTM – Lossless compression of 3D triangle meshes
Video
See this list of lossless video codecs.
Cryptosystems often compress data before encryption for added security; compression prior to encryption helps remove redundancies and patterns that might facilitate cryptanalysis. However, many ordinary lossless compression algorithms introduce predictable patterns (such as headers, wrappers, and tables) into the compressed data that may actually make cryptanalysis easier. One possible solution to this problem is to use Bijective Compression that has no headers or additional information. Also using bijective whole file transforms such as bijective BWT greatly increase the Unicity Distance. Therefore, cryptosystems often incorporate specialized compression algorithms specific to the cryptosystem—or at least demonstrated or widely held to be cryptographically secure—rather than standard compression algorithms that are efficient but provide potential opportunities for cryptanalysis.
Genetics compression algorithms are the latest generation of lossless algorithms that compress data (typically sequences of nucleotides) using both conventional compression algorithms and genetic algorithms adapted to the specific datatype. In 2012, a team of scientists from Johns Hopkins University published the first genetic compression algorithm that does not rely on external genetic databases for compression. HAPZIPPER was tailored for HapMap data and achieves over 20-fold compression (95% reduction in file size), providing 2- to 4-fold better compression, in much less time, than the leading general-purpose compression utilities.
Lossless compression benchmarks
Lossless compression algorithms and their implementations are routinely tested in head-to-head benchmarks. There are a number of better-known compression benchmarks. Some benchmarks cover only the compression ratio, so winners in these benchmarks may be unsuitable for everyday use due to the slow speed of the top performers. Another drawback of some benchmarks is that their data files are known, so some program writers may optimize their programs for best performance on a particular data set. The winners on these benchmarks often come from the class of context-mixing compression software.
The benchmarks listed in the 5th edition of the Handbook of Data Compression (Springer, 2009) are:
- The Maximum Compression benchmark, started in 2003 and frequently updated, includes over 150 programs. Maintained by Werner Bergmans, it tests on a variety of data sets, including text, images, and executable code. Two types of results are reported: single file compression (SFC) and multiple file compression (MFC). Not surprisingly, context mixing programs often win here; programs from the PAQ series and WinRK often are in the top. The site also has a list of pointers to other benchmarks.
- UCLC (the ultimate command-line compressors) benchmark by Johan de Bock is another actively maintained benchmark including over 100 programs. The winners in most tests usually are PAQ programs and WinRK, with the exception of lossless audio encoding and grayscale image compression where some specialized algorithms shine.
- Squeeze Chart by Stephan Busch is another frequently updated site.
- The EmilCont benchmarks by Berto Destasio are somewhat outdated having been most recently updated in 2004. A distinctive feature is that the data set is not public, to prevent optimizations targeting it specifically. Nevertheless, the best ratio winners are again the PAQ family, SLIM and WinRK.
- The Archive Comparison Test (ACT) by Jeff Gilchrist included 162 DOS/Windows and 8 Macintosh lossless compression programs, but it was last updated in 2002.
- The Art Of Lossless Data Compression by Alexander Ratushnyak provides a similar test performed in 2003.
- The Calgary Corpus dating back to 1987 is no longer widely used due to its small size, although Leonid A. Broukhis still maintains The Calgary Corpus Compression Challenge, which started in 1996.
- The Large Text Compression Benchmark and the similar Hutter Prize both use a trimmed Wikipedia XML UTF-8 data set.
- The Generic Compression Benchmark, maintained by Mahoney himself, tests compression on random data.
- Sami Runsas (author of NanoZip) maintains Compression Ratings, a benchmark similar to Maximum Compression multiple file test, but with minimum speed requirements. It also offers a calculator that allows the user to weight the importance of speed and compression ratio. The top programs here are fairly different due to speed requirement. In January 2010, the top programs were NanoZip followed by FreeArc, CCM, flashzip, and 7-Zip.
- The Monster of Compression benchmark by N. F. Antonio tests compression on 1Gb of public data with a 40 minute time limit. As of Dec. 20, 2009 the top ranked archiver is NanoZip 0.07a and the top ranked single file compressor is ccmx 1.30c, both context mixing.
Limitations
Lossless data compression algorithms cannot guarantee compression for all input data sets. In other words, for any lossless data compression algorithm, there will be an input data set that does not get smaller when processed by the algorithm, and for any lossless data compression algorithm that makes at least one file smaller, there will be at least one file that it makes larger. This is easily proven with elementary mathematics using a counting argument, as follows:
- Assume that each file is represented as a string of bits of some arbitrary length.
- Suppose that there is a compression algorithm that transforms every file into an output file that is no longer than the original file, and that at least one file will be compressed into an output file that is shorter than the original file.
- Let M be the least number such that there is a file F with length M bits that compresses to something shorter. Let N be the length (in bits) of the compressed version of F.
- Because N<M, every file of length N keeps its size during compression. There are 2^N such files. Together with F, this makes 2^N + 1 files that all compress into one of the 2^N files of length N.
- But 2^N is smaller than 2^N + 1, so by the pigeonhole principle there must be some file of length N that is simultaneously the output of the compression function on two different inputs. That file cannot be decompressed reliably (which of the two originals should that yield?), which contradicts the assumption that the algorithm was lossless.
- We must therefore conclude that our original hypothesis (that the compression function makes no file longer) is necessarily untrue.
Any lossless compression algorithm that makes some files shorter must necessarily make some files longer, but it is not necessary that those files become very much longer. Most practical compression algorithms provide an "escape" facility that can turn off the normal coding for files that would become longer by being encoded. In theory, only a single additional bit is required to tell the decoder that the normal coding has been turned off for the entire input; however, most encoding algorithms use at least one full byte (and typically more than one) for this purpose. For example, DEFLATE compressed files never need to grow by more than 5 bytes per 65,535 bytes of input.
In fact, if we consider files of length N, if all files were equally probable, then for any lossless compression that reduces the size of some file, the expected length of a compressed file (averaged over all possible files of length N) must necessarily be greater than N. So if we know nothing about the properties of the data we are compressing, we might as well not compress it at all. A lossless compression algorithm is useful only when we are more likely to compress certain types of files than others; then the algorithm could be designed to compress those types of data better.
Thus, the main lesson from the argument is not that one risks big losses, but merely that one cannot always win. To choose an algorithm always means implicitly to select a subset of all files that will become usefully shorter. This is the theoretical reason why we need to have different compression algorithms for different kinds of files: there cannot be any algorithm that is good for all kinds of data.
The "trick" that allows lossless compression algorithms, used on the type of data they were designed for, to consistently compress such files to a shorter form is that the files the algorithms are designed to act on all have some form of easily modeled redundancy that the algorithm is designed to remove, and thus belong to the subset of files that that algorithm can make shorter, whereas other files would not get compressed or even get bigger. Algorithms are generally quite specifically tuned to a particular type of file: for example, lossless audio compression programs do not work well on text files, and vice versa.
In particular, files of random data cannot be consistently compressed by any conceivable lossless data compression algorithm: indeed, this result is used to define the concept of randomness in algorithmic complexity theory.
It's provably impossible to create an algorithm that can losslessly compress any data. While there have been many claims through the years of companies achieving "perfect compression" where an arbitrary number N of random bits can always be compressed to N − 1 bits, these kinds of claims can be safely discarded without even looking at any further details regarding the purported compression scheme. Such an algorithm contradicts fundamental laws of mathematics because, if it existed, it could be applied repeatedly to losslessly reduce any file to length 0. Allegedly "perfect" compression algorithms are usually called derisively "magic" compression algorithms.
On the other hand, it has also been proven that there is no algorithm to determine whether a file is incompressible in the sense of Kolmogorov complexity. Hence it's possible that any particular file, even if it appears random, may be significantly compressed, even including the size of the decompressor. An example is the digits of the mathematical constant pi, which appear random but can be generated by a very small program. However, even though it cannot be determined whether a particular file is incompressible, a simple theorem about incompressible strings shows that over 99% of files of any given length cannot be compressed by more than one byte (including the size of the decompressor).
Mathematical background
Any compression algorithm can be viewed as a function that maps sequences of units (normally octets) into other sequences of the same units. Compression is successful if the resulting sequence is shorter than the original sequence plus the map needed to decompress it. For a compression algorithm to be lossless, there must be a reverse mapping from compressed bit sequences to original bit sequences. That is to say, the compression method must encapsulate a bijection between "plain" and "compressed" bit sequences.
The sequences of length N or less are clearly a strict superset of the sequences of length N − 1 or less. It follows that there are more sequences of length N or less than there are sequences of length N − 1 or less. It therefore follows from the pigeonhole principle that it is not possible to map every sequence of length N or less to a unique sequence of length N − 1 or less. Therefore it is not possible to produce an algorithm that reduces the size of every possible input sequence.
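In bit terms, the counts behind this pigeonhole step are as follows (a worked restatement, not part of the original article):

$$\#\{\text{bit strings of length} \le N\} = \sum_{k=0}^{N} 2^k = 2^{N+1} - 1 \;>\; 2^{N} - 1 = \#\{\text{bit strings of length} \le N-1\},$$

so no injective (losslessly invertible) mapping from the first set into the second can exist.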
Psychological background
Most everyday files are relatively 'sparse' in an information entropy sense, and thus, most lossless algorithms a layperson is likely to apply on regular files compress them relatively well. This may, through misapplication of intuition, lead some individuals to conclude that a well-designed compression algorithm can compress any input, thus, constituting a magic compression algorithm.
Points of application in real compression theory
Real compression algorithm designers accept that streams of high information entropy cannot be compressed, and accordingly, include facilities for detecting and handling this condition. An obvious way of detection is applying a raw compression algorithm and testing if its output is smaller than its input. Sometimes, detection is made by heuristics; for example, a compression application may consider files whose names end in ".zip", ".arj" or ".lha" uncompressible without any more sophisticated detection. A common way of handling this situation is quoting input, or uncompressible parts of the input in the output, minimising the compression overhead. For example, the zip data format specifies the 'compression method' of 'Stored' for input files that have been copied into the archive verbatim.
The Million Random Number Challenge
Mark Nelson, frustrated over many cranks appearing in comp.compression claiming to have invented a magic compression algorithm, constructed a 415,241-byte binary file of highly entropic content, and issued a public challenge of $100 to anyone to write a program that, together with its input, would be smaller than his provided binary data yet be able to reconstitute ("decompress") it without error.
The FAQ for the comp.compression newsgroup contains a challenge by Mike Goldman offering $5,000 for a program that can compress random data. Patrick Craig took up the challenge, but rather than compressing the data, he split it up into separate files all of which ended in the number 5, which was not stored as part of the file. Omitting this character allowed the resulting files (plus, in accordance with the rules, the size of the program that reassembled them) to be smaller than the original file. However, no actual compression took place, and the information stored in the names of the files was necessary to reassemble them in the correct order in the original file, and this information was not taken into account in the file size comparison. The files themselves are thus not sufficient to reconstitute the original file; the file names are also necessary. A full history of the event, including discussion on whether or not the challenge was technically met, is on Patrick Craig's web site.
See also
- Audio compression (data)
- Comparison of file archivers
- David A. Huffman
- Entropy (information theory)
- Kolmogorov complexity
- Data compression
- Lossy compression
- Lossless Transform Audio Compression (LTAC)
- List of codecs
- Information theory
- Universal code (data compression)
- Grammar induction
- Unisys | LZW Patent and Software Information
- Chanda, Elhaik, and Bader (2012). "HapZipper: sharing HapMap populations just got easier". Nucleic Acids Res: 1–7. doi:10.1093/nar/gks709. PMID 22844100.
- David Salomon, Giovanni Motta, (with contributions by David Bryant), Handbook of Data Compression, 5th edition, Springer, 2009, ISBN 1-84882-902-7, pp. 16–18.
- Lossless Data Compression Benchmarks (links and spreadsheets)
- http://nishi.dreamhosters.com/u/dce2010-02-26.pdf, pp. 3–5
- Visualization of compression ratio and time
- comp.compression FAQ list entry #9: Compression of random data (WEB, Gilbert and others)
- ZIP file format specification by PKWARE, chapter V, section J
- Nelson, Mark (2006-06-20). "The Million Random Digit Challenge Revisited".
- Craig, Patrick. "The $5000 Compression Challenge". Retrieved 2009-06-08.
- Comparison of Lossless Audio Compressors at Hydrogenaudio Wiki
- Comparing lossless and lossy audio formats for music archiving
- data-compression.com's overview of data compression and its fundamental limitations
- comp.compression's FAQ item 73, What is the theoretical compression limit?
- c10n.info's overview of US patent #7,096,360, a "Frequency-Time Based Data Compression Method" supporting the compression, encryption, decompression, and decryption and persistence of many binary digits through frequencies where each frequency represents many bits.
- "LZF compression format" | http://en.wikipedia.org/wiki/Lossless | 13 |
57 | Since special relativity demonstrates that space and time are variable concepts from different frames of reference, then velocity (which is space divided by time) becomes a variable as well. If velocity changes from reference frame to reference frame, then concepts that involve velocity must also be relative. One such concept is momentum, motion energy.
Momentum, as defined in Newtonian mechanics, cannot be conserved from frame to frame under special relativity. A new parameter had to be defined, called relativistic momentum, which is conserved, but only if the mass of the object is added to the momentum equation.
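The page does not reproduce the formula; as a sketch, the standard textbook definition multiplies the Newtonian momentum by the Lorentz factor gamma = 1/sqrt(1 - v^2/c^2):

```python
import math

def relativistic_momentum(mass_kg, velocity_ms, c=2.998e8):
    """Relativistic momentum p = gamma * m * v (standard textbook definition)."""
    gamma = 1.0 / math.sqrt(1.0 - (velocity_ms / c) ** 2)
    return gamma * mass_kg * velocity_ms

# At 10% of light speed the correction is only ~0.5%; at 90% it is a factor of ~2.3.
print(relativistic_momentum(1.0, 0.1 * 2.998e8) / (1.0 * 0.1 * 2.998e8))
print(relativistic_momentum(1.0, 0.9 * 2.998e8) / (1.0 * 0.9 * 2.998e8))
```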
This redefinition has a big impact on classical physics because it means there is an equivalence between mass and energy, summarized by the famous Einstein equation:
E = mc^2
The implications of this were not realized for many years. For example, the production of energy in nuclear reactions (i.e. fission and fusion) was shown to be the conversion of a small amount of atomic mass into energy. This led to the development of nuclear power and weapons.
As an object is accelerated close to the speed of light, relativistic effects begin to dominate. In particular, adding more energy to an object will not make it go faster since the speed of light is the limit. The energy has to go somewhere, so it is added to the mass of the object, as observed from the rest frame. Thus, we say that the observed mass of the object goes up with increased velocity. So a spaceship would appear to gain the mass of a city, then a planet, then a star, as its velocity increased.
Likewise, the equivalence of mass and energy allowed Einstein to predict that the photon has momentum, even though its mass is zero. This allows the development of light sails and photoelectric detectors.
Spacetime and Energy:
Special relativity and E=mc2 led to the most powerful unification of physical concepts since the time of Newton. The previously separate ideas of space, time, energy and mass were linked by special relativity, although without a clear understanding of how they were linked.
The how and why remained to the domain of what is called general relativity, a complete theory of gravity using the geometry of spacetime. The origin of general relativity lies in Einstein's attempt to apply special relativity in accelerated frames of reference. Remember that the conclusions of relativity were formulated for inertial frames, i.e. ones that move only at a uniform velocity. Adding acceleration was a complication that took Einstein 10 years to formulate.
The equivalence principle was Einstein's `Newton's apple' insight to gravitation. His thought experiment was the following: imagine two elevators, one at rest on the Earth's surface, one accelerating in space. To an observer inside the elevator (no windows) there is no physical experiment that he/she could perform to differentiate between the two scenarios.
An immediate consequence of the equivalence principle is that gravity bends light. To visualize why this is true imagine a photon crossing the elevator accelerating into space. As the photon crosses the elevator, the floor is accelerated upward and the photon appears to fall downward. The same must be true in a gravitational field by the equivalence principle.
The principle of equivalence renders the gravitational field fundamentally different from all other force fields encountered in nature. The new theory of gravitation, the general theory of relativity, adopts this characteristic of the gravitational field as its foundation.
There were two classical tests of general relativity; the first was that light should be deflected by passing close to a massive body. The first opportunity occurred during a total eclipse of the Sun in 1919.
Measurements of stellar positions near the darkened solar limb proved Einstein was right. Direct confirmation of gravitational lensing was obtained by the Hubble Space Telescope last year.
The second test is that general relativity predicts a time dilation in a gravitational field, so that, relative to someone outside of the field, clocks (or atomic processes) go slowly. This was confirmed with atomic clocks flown on airplanes in the mid-1970's.
General Relativity:
The second part of relativity is the theory of general relativity, which rests on two empirical findings that Einstein elevated to the status of basic postulates. The first postulate is the relativity principle: local physics is governed by the theory of special relativity. The second postulate is the equivalence principle: there is no way for an observer to distinguish locally between gravity and acceleration.
Einstein discovered that there is a relationship between mass, gravity and spacetime. Mass distorts spacetime, causing it to curve.
Gravity can be described as motion caused in curved spacetime.
Thus, the primary result from general relativity is that gravitation is a purely geometric consequence of the properties of spacetime. Special relativity destroyed classical physics' view of absolute space and time; general relativity dismantles the idea that spacetime is described by Euclidean or plane geometry. In this sense, general relativity is a field theory, relating Newton's law of gravity to the field nature of spacetime, which can be curved.
Gravity in general relativity is described in terms of curved spacetime. The idea that spacetime is distorted by motion, as in special relativity, is extended to gravity by the equivalence principle. Gravity comes from matter, so the presence of matter causes distortions or warps in spacetime. Matter tells spacetime how to curve, and spacetime tells matter how to move (orbits).
The general theory of relativity is constructed so that its results are approximately the same as those of Newton's theories as long as the velocities of all bodies interacting with each other gravitationally are small compared with the speed of light--i.e., as long as the gravitational fields involved are weak. The latter requirement may be stated roughly in terms of the escape velocity. A gravitational field is considered strong if the escape velocity approaches the speed of light, weak if it is much smaller. All gravitational fields encountered in the solar system are weak in this sense.
Notice that at low speeds and weak gravitational fields, general and special relativity reduce to Newtonian physics, i.e. everyday experience.
The fact that light is bent by a gravitational field brings up the following thought experiment. Imagine adding mass to a body. As the mass increases, so does the gravitational pull and objects require more energy to reach escape velocity. When the mass is high enough that the velocity needed to escape is greater than the speed of light we say that a black hole has been created.
Another way of defining a black hole is that for a given mass, there is a radius where if all the mass is compressed within this radius the curvature of spacetime becomes infinite and the object is surrounded by an event horizon. This radius is called the Schwarzschild radius and varies with the mass of the object (large mass objects have large Schwarzschild radii, small mass objects have small Schwarzschild radii).
The Schwarzschild radius is easy to determine for an object of mass M. It is simply the radius where a test particle of mass m must move at the speed of light to exceed the gravitational energy of the primary object. So, we equate the kinetic energy and the gravitational potential energy such that:
1/2 mc^2 = GMm/R
which can be written as
R = 2GM/c^2
where G = 6.668x10^-11 and c = 3x10^8 meters per second and mass is in kilograms.
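A small numeric check of this formula, using the constants quoted above (the solar mass value of about 2x10^30 kg is an added assumption, not from the text):

```python
def schwarzschild_radius(mass_kg, G=6.668e-11, c=3.0e8):
    """Schwarzschild radius R = 2GM/c^2, using the constants quoted in the text."""
    return 2.0 * G * mass_kg / c**2

# Roughly 3 km for one solar mass (~2e30 kg), consistent with the ~5 km scale
# quoted below for stellar-mass black holes of a few solar masses.
print(schwarzschild_radius(2.0e30))
```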
The Schwarzschild radius marks the point where the event horizon forms, below this radius no light escapes. The visual image of a black hole is one of a dark spot in space with no radiation emitted. Any radiation falling on the black hole is not reflected but rather absorbed, and starlight from behind the black hole is lensed.
Even though a black hole is invisible, it has properties and structure. The boundary surrounding the black hole at the Schwarzschild radius is called the event horizon; events below this limit are not observed. Since the forces of matter cannot overcome the force of gravity, all the mass of a black hole is compressed to infinite density at the very center, called the singularity.
A black hole can come in any size. Stellar mass black holes are thought to form from supernova events, and have radii of 5 km. Galactic black holes in the cores of some galaxies, with millions of solar masses and the radius of a solar system, are built up over time by cannibalizing stars. Mini black holes, formed in the early Universe (due to tremendous pressures), range down to the masses of asteroids with radii the size of a grain of sand.
Note that a black hole is the ultimate entropy sink since all information or objects that enter a black hole never return. If an observer entered a black hole to look for the missing information, he/she would be unable to communicate their findings outside the event horizon. | http://abyss.uoregon.edu/~js/ast123/lectures/lec09.html | 13 |
112 | Applications of Circular Motion
Newton's Second Law - Revisited
Newton's second law states that the acceleration of an object is directly proportional to the net force acting upon the object and inversely proportional to the mass of the object. The law is often expressed in the form of the following two equations.
Fnet = m • a
a = Fnet / m
In Unit 2 of The Physics Classroom, Newton's second law was used to analyze a variety of physical situations. The idea was that if any given physical situation is analyzed in terms of the individual forces that are acting upon an object, then those individual forces must add up as vectors to the net force. Furthermore, the net force must be equal to the mass times the acceleration. Subsequently, the acceleration of an object can be found if the mass of the object and the magnitudes and directions of each individual force are known. And the magnitude of any individual force can be determined if the mass of the object, the acceleration of the object, and the magnitude of the other individual forces are known. The process of analyzing such physical situations in order to determine unknown information is dependent upon the ability to represent the physical situation by means of a free-body diagram. A free-body diagram is a vector diagram that depicts the relative magnitude and direction of all the individual forces that are acting upon the object.
In this Lesson, we will use Unit 2 principles (free-body diagrams, Newton's second law equation, etc.) and circular motion concepts in order to analyze a variety of physical situations involving the motion of objects in circles or along curved paths. The mathematical equations discussed in Lesson 1 and the concept of a centripetal force requirement will be applied in order to analyze roller coasters and other amusement park rides and various athletic movements.
To illustrate how circular motion principles can be combined with Newton's second law to analyze a physical situation, consider a car moving in a horizontal circle on a level surface. The diagram below depicts the car on the left side of the circle.
Applying the concept of a centripetal force requirement, we know that the net force acting upon the object is directed inwards. Since the car is positioned on the left side of the circle, the net force is directed rightward. An analysis of the situation would reveal that there are three forces acting upon the object - the force of gravity (acting downwards), the normal force of the pavement (acting upwards), and the force of friction (acting inwards or rightwards). It is the friction force that supplies the centripetal force requirement for the car to move in a horizontal circle. Without friction, the car would turn its wheels but would not move in a circle (as is the case on an icy surface). This analysis leads to the free-body diagram shown at the right. Observe that each force is represented by a vector arrow that points in the specific direction that the force acts; also notice that each force is labeled according to type (Ffrict, Fnorm, and Fgrav). Such an analysis is the first step of any problem involving Newton's second law and a circular motion.
Now consider the following two problems pertaining to this physical scenario of the car making a turn on a horizontal surface.
The maximum speed with which a 945-kg car makes a 180-degree turn is 10.0 m/s. The radius of the circle through which the car is turning is 25.0 m. Determine the force of friction and the coefficient of friction acting upon the car.
The coefficient of friction acting upon a 945-kg car is 0.850. The car is making a 180-degree turn around a curve with a radius of 35.0 m. Determine the maximum speed with which the car can make the turn.
Sample problem #1 provides kinematic information (v and R) and requests the value of an individual force. As such the solution of the problem will demand that the acceleration and the net force first be determined; then the individual force value can be found by use of the free-body diagram. Sample problem #2 provides information about the individual force values (or at least information that allows for the determination of the individual force values) and requests the value of the maximum speed of the car. As such, its solution will demand that individual force values be used to determine the net force and acceleration; then the acceleration can be used to determine the maximum speed of the car. The two problems will be solved using the same general principles. Yet because the given and requested information is different in each, the solution method will be slightly different.
The known information and requested information in sample problem #1 are:
Given: m = 945 kg; v = 10.0 m/s; R = 25.0 m
Requested: Ffrict = ??? and μ = ???
The mass of the object can be used to determine the force of gravity acting in the downward direction. Use the equation
Fgrav = m • g
where g is 9.8 m/s/s. Knowing that there is no vertical acceleration of the car, it can be concluded that the vertical forces balance each other. Thus, Fgrav = Fnorm= 9261 N. This allows us to determine two of the three forces identified in the free-body diagram. Only the friction force remains unknown.
Since the force of friction is the only horizontal force, it must be equal to the net force acting upon the object. So if the net force can be determined, then the friction force is known. To determine the net force, the mass and the kinematic information (speed and radius) must be substituted into the following equation:
Fnet = m • v^2 / R
Substituting the given values yields a net force of 3780 Newton. Thus, the force of friction is 3780 N.
Finally the coefficient of friction (μ) can be determined using the equation that relates the coefficient of friction to the force of friction and the normal force.
μ = Ffrict / Fnorm
Substituting 3780 N for Ffrict and 9261 N for Fnorm yields a coefficient of friction of 0.408.
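The arithmetic for sample problem #1 can be collected into a short script (a restatement of the steps above, not part of the original page):

```python
m = 945.0        # kg
v = 10.0         # m/s
R = 25.0         # m
g = 9.8          # m/s^2

F_grav = m * g                 # 9261 N
F_norm = F_grav                # vertical forces balance
F_net = m * v**2 / R           # 3780 N, supplied entirely by friction
F_frict = F_net
mu = F_frict / F_norm          # ~0.408
print(F_frict, round(mu, 3))
```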
Once again, the problem begins by identifying the known and requested information. The known information and requested information in sample problem #2 are:
Given: m = 945 kg; μ = 0.850; R = 35.0 m
Requested: v = ???
The mass of the car can be used to determine the force of gravity acting in the downward direction. Use the equation
Fgrav = m • g
where g is 9.8 m/s/s. Knowing that there is no vertical acceleration of the car, it can be concluded that the vertical forces balance each other. Thus, Fgrav = Fnorm = 9261 N. Since the coefficient of friction (μ) is given, the force of friction can be determined using the following equation:
Ffrict = μ • Fnorm
This allows us to determine all three forces identified in the free-body diagram.
The net force acting upon any object is the vector sum of all individual forces acting upon that object. So if all individual force values are known (as is the case here), the net force can be calculated. The vertical forces add to 0 N. Since the force of friction is the only horizontal force, it must be equal to the net force acting upon the object. Thus, Fnet = 7872 N.
Once the net force is determined, the acceleration can be quickly calculated using the following equation.
a = Fnet / m
Substituting the given values yields an acceleration of 8.33 m/s/s. Finally, the speed at which the car could travel around the turn can be calculated using the equation for centripetal acceleration:
a = v^2 / R, which rearranges to v = SQRT(a • R)
Substituting the known values for a and R into this equation and solving algebraically yields a maximum speed of 17.1 m/s.
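And the same for sample problem #2 (again just restating the worked steps):

```python
import math

m = 945.0        # kg
mu = 0.850
R = 35.0         # m
g = 9.8          # m/s^2

F_norm = m * g                 # 9261 N
F_frict = mu * F_norm          # ~7872 N, which equals the net (centripetal) force
a = F_frict / m                # ~8.33 m/s^2
v_max = math.sqrt(a * R)       # ~17.1 m/s
print(round(v_max, 1))
```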
The method prescribed above will serve you well as you approach circular motion problems. However, one caution is in order. Every physics problem differs from the previous problem. As such, there is no magic formula for solving every one. Using an appropriate approach to solving such problems (which involves constructing a FBD, identifying known information, identifying the requested information, and using available equations) will never eliminate the need to think, analyze and problem-solve. For this reason, make an effort to develop an appropriate approach to every problem; yet always engage your critical analysis skills in the process of the solution. If physics problems were a mere matter of following a foolproof, 5-step formula or using some memorized algorithm, then we wouldn't call them "problems."
Use your understanding of Newton's second law and circular motion principles to determine the unknown values in the following practice problems.
1. A 1.50-kg bucket of water is tied by a rope and whirled in a circle with a radius of 1.00 m. At the top of the circular loop, the speed of the bucket is 4.00 m/s. Determine the acceleration, the net force and the individual force values when the bucket is at the top of the circular loop.
m = 1.5 kg
a = ________ m/s/s
Fnet = _________ N
2. A 1.50-kg bucket of water is tied by a rope and whirled in a circle with a radius of 1.00 m. At the bottom of the circular loop, the speed of the bucket is 6.00 m/s. Determine the acceleration, the net force and the individual force values when the bucket is at the bottom of the circular loop.
m = 1.5 kg
a = ________ m/s/s
Fnet = _________ N | http://www.physicsclassroom.com/Class/circles/U6L2a.cfm | 13 |
92 | Random variables and probability distributions
Probability Density Functions
Probability density functions for continuous random variables.
- In the last video, I introduced you to the notion of-- well,
- really we started with the random variable.
- And then we moved on to the two types of random variables.
- You had discrete, that took on a finite number of values.
- And these, I was going to say that they tend to be
- integers, but they don't always have to be integers.
- You have discrete, so finite meaning you can't have an
- infinite number of values for a discrete random variable.
- And then we have the continuous, which can take
- on an infinite number.
- And the example I gave for continuous is, let's
- say random variable x.
- And people do tend to use-- let me change it a little bit, just
- so you can see it can be something other than an x.
- Let's have the random variable capital Y.
- They do tend to be capital letters.
- Is equal to the exact amount of rain tomorrow.
- And I say rain because I'm in northern California.
- It's actually raining quite hard right now.
- We're short right now, so that's a positive.
- We've been having a drought, so that's a good thing.
- But the exact amount of rain tomorrow.
- And let's say I don't know what the actual probability
- distribution function for this is, but I'll draw one and
- then we'll interpret it.
- Just so you can kind of think about how you can think about
- continuous random variables.
- So let me draw a probability distribution, or they call
- it its probability density function.
- And we draw like this.
- And let's say that there is-- it looks something like this.
- Like that.
- All right, and then I don't know what this height is.
- So the x-axis here is the amount of rain.
- Where this is 0 inches, this is 1 inch, this is 2 inches,
- this is 3 inches, 4 inches.
- And then this is some height.
- Let's say it peaks out here at, I don't know,
- let's say this 0.5.
- So the way to think about it, if you were to look at this and
- I were to ask you, what is the probability that Y-- because
- that's our random variable-- that Y is exactly
- equal to 2 inches?
- That Y is exactly equal to two inches.
- What's the probability of that happening?
- Well, based on how we thought about the probability
- distribution functions for the discrete random variable,
- you'd say OK, let's see.
- 2 inches, that's the case we care about right now.
- Let me go up here.
- You'd say it looks like it's about 0.5.
- And you'd say, I don't know, is it a 0.5 chance?
- And I would say no, it is not a 0.5 chance.
- And before we even think about how we would interpret it
- visually, let's just think about it logically.
- What is the probability that tomorrow we have exactly
- 2 inches of rain?
- Not 2.01 inches of rain, not 1.99 inches of rain.
- Not 1.99999 inches of rain, not 2.000001 inches of rain.
- Exactly 2 inches of rain.
- I mean, there's not a single extra atom, water molecule
- above the 2 inch mark.
- And not as single water molecule below the 2 inch mark.
- It's essentially 0, right?
- It might not be obvious to you, because you've probably heard,
- oh, we had 2 inches of rain last night.
- But think about it, exactly 2 inches, right?
- Normally if it's 2.01 people will say that's 2.
- But we're saying no, this does not count.
- It can't be 2 inches.
- We want exactly 2.
- 1.99 does not count.
- Normally our measurements, we don't even have tools that
- can tell us whether it is exactly 2 inches.
- No ruler you can even say is exactly 2 inches long.
- At some point, just the way we manufacture things, there's
- going to be an extra atom on it here or there.
- So the odds of actually anything being exactly a
- certain measurement to the exact infinite decimal
- point is actually 0.
- The way you would think about a continuous random variable,
- you could say what is the probability that Y is almost 2?
- So if we said that the absolute value of Y minus 2 is
- less than some tolerance?
- Is less than 0.1.
- And if that doesn't make sense to you, this is essentially
- just saying what is the probability that Y is greater
- than 1.9 and less than 2.1?
- These two statements are equivalent.
- I'll let you think about it a little bit.
- But now this starts to make a little bit of sense.
- Now we have an interval here.
- So we want all Y's between 1.9 and 2.1.
- So we are now talking about this whole area.
- And area is key.
- So if you want to know the probability of this occurring,
- you actually want the area under this curve from this
- point to this point.
- And for those of you who have studied your calculus, that
- would essentially be the definite integral of this
- probability density function from this point to this point.
- So from-- let me see, I've run out of space down here.
- So let's say if this graph-- let me draw it
- in a different color.
- If this line was defined by, I'll call it f of x.
- I could call it p of x or something.
- The probability of this happening would be equal to the
- integral, for those of you who've studied calculus, from
- 1.9 to 2.1 of f of x dx.
- Assuming this is the x-axis.
- So it's a very important thing to realize.
- Because when a random variable can take on an infinite number
- of values, or it can take on any value between an interval,
- to get an exact value, to get exactly 1.999, the
- probability is actually 0.
- It's like asking you what is the area under a
- curve on just this line.
- Or even more specifically, it's like asking you
- what's the area of a line?
- An area of a line, if you were to just draw a line,
- you'd say well, area is height times base.
- Well the height has some dimension, but the base,
- what's the width of a line?
- As far as the way we've defined a line, a line has no width,
- and therefore no area.
- And it should make intuitive sense.
- That the probability of a very super-exact thing happening
- is pretty much 0.
- That you really have to say, OK, what's the probability
- that we'll get close to 2?
- And then you can define an area.
- And if you said oh, what's the probability that we get
- someplace between 1 and 3 inches of rain, then of course
- the probability is much higher.
- The probability is much higher.
- It would be all of this kind of stuff.
- You could also say what's the probability we have
- less than 0.1 of rain?
- Then you would go here and if this was 0.1, you would
- calculate this area.
- And you could say what's the probability that we have more
- than 4 inches of rain tomorrow?
- Then you would start here and you'd calculate the area in the
- curve all the way to infinity, if the curve has area all
- the way to infinity.
- And hopefully that's not an infinite number, right?
- Then your probability won't make any sense.
- But hopefully if you take this sum it comes to some number.
- And we'll say there's only a 10% chance that you have more
- than 4 inches tomorrow.
- And all of this should immediately lead to one light
- bulb in your head, is that the probability of all of the
- events that might occur can't be more than 100%.
- All the events combined-- there's a probability of 1 that
- one of these events will occur.
- So essentially, the whole area under this curve
- has to be equal to 1.
- So if we took the integral of f of x from 0 to infinity, this
- thing, at least as I've drawn it, dx should be equal to 1.
- For those of you who've studied calculus.
- For those of you who haven't, an integral is just the
- area under a curve.
- And you can watch the calculus videos if you want to learn a
- little bit more about how to do them.
- And this also applies to the discrete probability distribution.
- Let me draw one.
- The sum of all of the probabilities has
- to be equal to 1.
- And that example with the dice-- or let's say, since it's
- faster to draw, the coin-- the two probabilities have
- to be equal to 1.
- So this is 1, 0, where x is equal to 1 if we're heads
- or 0 if we're tails.
- Each of these have to be 0.5.
- Or they don't have to be 0.5, but if one was 0.6, the
- other would have to be 0.4.
- They have to add to 1.
- If one of these was-- you can't have a 60% probability of
- getting a heads and then a 60% probability of getting
- a tails as well.
- Because then you would have essentially 120% probability
- of either of the outcomes happening, which makes
- no sense at all.
- So it's important to realize that a probability distribution
- function, in this case for a discrete random variable, they
- all have to add up to 1.
- So 0.5 plus 0.5.
- And in this case the area under the probability
- density function also has to be equal to 1.
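A quick check of both normalization conditions, reusing the f and area_under helpers from the sketch above (the density is again just an assumed example):

```python
# Discrete case: a fair coin's probabilities must sum to 1
coin = {"heads": 0.5, "tails": 0.5}
print(sum(coin.values()))        # 1.0

# Continuous case: the total area under the assumed density should be (close to) 1
print(area_under(f, -10, 10))    # approximately 1.0 for the example bell curve
```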
- Anyway, I'm out of time for now.
- In the next video I'll introduce you to the idea
- of an expected value.
- See you soon.
| http://www.khanacademy.org/math/probability/random-variables-topic/random_variables_prob_dist/v/probability-density-functions | 13
68 |
Lesson 3A: Geometry and Graphing
1. Units of Measure
b. Triangles, angles and lines
In Lesson 3 we will look at geometry and coordinate geometry (graphing).
During the test, you will have a page that contains many useful formulas for making geometry calculations. We will be referring to those formulas here.
UNITS OF MEASURE
The test will ask you to work with measurements that are expressed in units such as feet, kilometers, gallons or pounds.
Distance and length are measured along one dimension, in a straight line from one point to another.
Common units and conversions:
1 foot = 12 inches
1 yard = 3 feet
1 meter = 100 centimeters
1 kilometer = 1,000 meters
1 centimeter = 10 millimeters
Abbreviations: inch(es): in.; foot/feet: ft.; yard(s): yd.; meter(s): m; centimeter(s): cm; millimeter(s): mm; kilometer(s): km
Area is a measure of how much space a shape takes up in two dimensions.
A rectangle measures 2 feet by 3 feet. You are asked for the area in square inches.
- The area for a rectangle is length x width:
= 2 x 3
= 6 square feet
- To convert feet to inches, recall that there are 12 inches in a foot so:
1 square foot = (12 x 12) square inches = 144 sq. in.
We convert by multiplying the area (in square feet) times 144 (square inches per square foot):
= (6 x 144) square inches
= 864 sq. in.
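The same conversion can be sketched in a few lines of Python (the 2 ft by 3 ft rectangle is the example above):

```python
# Area of a 2 ft by 3 ft rectangle, converted to square inches
length_ft, width_ft = 2, 3
area_sq_ft = length_ft * width_ft     # 6 square feet
area_sq_in = area_sq_ft * 12 * 12     # 144 square inches per square foot
print(area_sq_ft, area_sq_in)         # 6 864
```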
Volume is a measure of how much space an object takes up in three dimensions.
A rectangular solid measures 2 feet by 3 feet by 4 feet
It has a volume given by the formula:
“Volume = length x width x height”
= 2 x 3 x 4 = 6 x 4
= 24 cubic feet
Angles are measured in degrees (º). A degree is 1/360th of a full circle.
Some examples of common angles:
We will talk more about angles in our discussion of geometric principles.
The diameter is the distance across the circle.
The radius, which is half the length of the diameter, is the distance from the center to the edge.
The circumference is the distance all the way around the circle.
- In the list of formulas you have:
“Circumference = π x diameter; π is approximately 3.14” or
Circumference = 3.14 x diameter, which can simply be remembered as: C = πd
or, if you have the radius, C = 2πr since d, the diameter, is 2 x r, the radius.
The list of formulas also has this formula for circles:
“Area = π x radius²” or Area = 3.14 x (radius squared)
or A = πr²
You are told that a wheel has a radius of 20 cm. You are asked for the area and circumference of that wheel.
Area = πr²
= 3.14 x (20)²
= 3.14 x (20 x 20)
= 3.14 x 400
= 1,256 square centimeters
Circumference = 2πr
= 2 x 3.14 x 20
= 6.28 x 20
= 125.6 centimeters
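A quick check of both results in Python; note that math.pi is slightly more precise than 3.14, so the answers differ a little from the hand calculation:

```python
import math

radius_cm = 20
area = math.pi * radius_cm ** 2          # about 1256.6 sq cm (1,256 using pi = 3.14)
circumference = 2 * math.pi * radius_cm  # about 125.7 cm (125.6 using pi = 3.14)
print(round(area, 1), round(circumference, 1))
```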
b. Triangles, angles and lines
A triangle has three sides and three vertices (corners).
The angle measures of all three vertices in a triangle always add up to 180º
This can be seen if you cut a triangle apart and bring the vertices together.
They form a straight edge which shows that, together, the three angles span a total angle measure of 180º.
Perpendicular lines intersect to form a 90º corner or right angle: ┼
The right angle symbol always means that an angle is 90º:
Parallel lines are oriented at the same angle: ║
Any angle is formed by two intersecting lines.
A line intersecting another line splits it into two angles at the point of intersection. The angles add up to 180º because the
original straight line forms a 180º angle.
We call the two angles supplementary angles:
a + b = 180º
A bisected right angle creates two angles that add up to 90º.
We call those angles complementary angles:
a + b =90º
When a line crosses another line, it creates 4 angles.
Opposite angles have the same degree measure, as indicated by the matching colors
Whenever you have an unknown angle, check to see if it is:
- an opposite angle
- a complementary/supplementary angle
- or, a part of a triangle
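Here is a small Python sketch of those three checks, using made-up angle values:

```python
def supplementary(angle):
    # Two angles on a straight line add to 180 degrees
    return 180 - angle

def complementary(angle):
    # Two angles forming a right angle add to 90 degrees
    return 90 - angle

def third_angle_of_triangle(a, b):
    # The three angles of a triangle add to 180 degrees
    return 180 - a - b

print(supplementary(110))               # 70
print(complementary(35))                # 55
print(third_angle_of_triangle(60, 80))  # 40
```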
| http://www.gedforfree.com/free-ged-course/math/math-lesson-3a.html | 13
59 | In mathematics, a conic section (or just conic) is a curve that can be formed by intersecting a cone (more precisely, a right circular conical surface) with a plane. The conic sections were named and studied as long ago as 200 B.C.E., when Apollonius of Perga undertook a systematic study of their properties.
Two well-known conics are the circle and the ellipse. They arise when the intersection of the cone and plane is a closed curve. The circle is a special case of the ellipse in which the plane is perpendicular to the axis of the cone. If the plane is parallel to a generator line of the cone, the conic is called a parabola. Finally, if the intersection is an open curve and the plane is not parallel to generator lines of the cone, the figure is a hyperbola. (In this case the plane will intersect both halves of the cone, producing two separate curves, though often one is ignored.)
Conic sections are observed in the paths taken by celestial bodies. When two massive objects interact according to Newton's law of universal gravitation, their orbits are conic sections if their common center of mass is considered to be at rest. If they are bound together, they will both trace out ellipses; if they are moving apart, they will both follow parabolas or hyperbolas.
The study of conic sections is important not only for mathematics, physics, and astronomy, but also for a variety of engineering applications. The smoothness of conic sections is an important property for applications such as aerodynamics, where a smooth surface is needed to ensure laminar flow and prevent turbulence.
There are a number of degenerate cases, in which the plane passes through the apex of the cone. The intersection in these cases can be a straight line (when the plane is tangential to the surface of the cone); a point (when the angle between the plane and the axis of the cone is larger than this); or a pair of intersecting lines (when the angle is smaller). There is also a degenerate where the cone is a cylinder (the vertex is at infinity), which can produce two parallel lines.
The four defining conditions above can be combined into one condition that depends on a fixed point F (the focus), a line L (the directrix) not containing F, and a nonnegative real number e (the eccentricity). The corresponding conic section consists of all points whose distance to F equals e times their distance to L. For 0 < e < 1 an ellipse is obtained; for e = 1, a parabola; and for e > 1, a hyperbola.
For an ellipse and a hyperbola, two focus-directrix combinations can be taken, each giving the same full ellipse or hyperbola. The distance from the center to the directrix is a/e, where a is the semi-major axis of the ellipse, or the distance from the center to the tops of the hyperbola. The distance from the center to a focus is ae.
In the case of a circle, the eccentricity e = 0, and one can imagine the directrix to be infinitely far removed from the center. However, the statement that the circle consists of all points whose distance is e times the distance to L is not useful, because it yields zero times infinity.
The eccentricity of a conic section is thus a measure of how far it deviates from being circular.
For a given semi-major axis a, the closer e is to 1, the smaller is the semi-minor axis.
In the Cartesian coordinate system, the graph of a quadratic equation in two variables is always a conic section, and all conic sections arise in this way. The equation will be of the form
- Ax² + Bxy + Cy² + Dx + Ey + F = 0, with A, B, C not all zero.
- if B² − 4AC < 0, the equation represents an ellipse (unless the conic is degenerate, for example x² + y² + 1 = 0, which has no real points);
- if also A = C and B = 0, the equation represents a circle;
- if B² − 4AC = 0, the equation represents a parabola;
- if B² − 4AC > 0, the equation represents a hyperbola;
- if we also have A + C = 0, the equation represents a rectangular hyperbola.
Note that A and B are just polynomial coefficients, not the lengths of semi-major/minor axis as defined in the previous sections.
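A short Python sketch of the classification rules above; it looks only at the quadratic coefficients A, B, C and does not try to detect degenerate conics:

```python
def classify_conic(A, B, C):
    # Classify Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0 by the sign of B^2 - 4AC
    disc = B * B - 4 * A * C
    if disc < 0:
        return "circle" if A == C and B == 0 else "ellipse"
    if disc == 0:
        return "parabola"
    return "rectangular hyperbola" if A + C == 0 else "hyperbola"

print(classify_conic(1, 0, 1))   # circle:    x^2 + y^2 - 1 = 0
print(classify_conic(1, 0, 4))   # ellipse:   x^2 + 4y^2 - 1 = 0
print(classify_conic(1, 0, 0))   # parabola:  x^2 - y = 0
print(classify_conic(1, 0, -1))  # rectangular hyperbola: x^2 - y^2 - 1 = 0
print(classify_conic(2, 0, -1))  # hyperbola: 2x^2 - y^2 - 1 = 0
```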
Through change of coordinates these equations can be put in standard forms:
- Circle: x² + y² = r²
- Ellipse: x²/a² + y²/b² = 1
- Parabola: y² = 4ax
- Hyperbola: x²/a² − y²/b² = 1
- Rectangular Hyperbola: xy = c²
Such forms will be symmetrical about the x-axis and for the circle, ellipse, and hyperbola, symmetrical about the y-axis.
The rectangular hyperbola however is only symmetrical about the lines y = x and y = −x. Therefore its inverse function is exactly the same as its original function.
These standard forms can be written as parametric equations,
- Circle: (r cos θ, r sin θ),
- Ellipse: (a cos θ, b sin θ),
- Parabola: (at², 2at),
- Hyperbola: (a sec θ, b tan θ) or (±a cosh u, b sinh u).
- Rectangular Hyperbola: (ct, c/t)
In homogeneous coordinates a conic section can be represented as:
- A₁x² + A₂y² + A₃z² + 2B₁xy + 2B₂xz + 2B₃yz = 0.
Or in matrix notation:
(x, y, z) M (x, y, z)ᵀ = 0, where M = [[A₁, B₁, B₂], [B₁, A₂, B₃], [B₂, B₃, A₃]].
The matrix M is called "the matrix of the conic section."
Δ = det M is called the determinant of the conic section. If Δ = 0 then the conic section is said to be degenerate; this means that the conic section is in fact a union of two straight lines. A conic section that intersects itself is always degenerate, however not all degenerate conic sections intersect themselves; if they do not, they are straight lines.
For example, the degenerate conic section x² − y² = 0 reduces to the union of two lines:
{x + y = 0} ∪ {x − y = 0}.
Similarly, a conic section sometimes reduces to a (single, repeated) line:
x² = 0 is the line x = 0, counted twice.
δ = A₁A₂ − B₁², the determinant of the upper-left 2×2 submatrix of M, is called the discriminant of the conic section. If δ = 0 then the conic section is a parabola, if δ < 0 it is a hyperbola, and if δ > 0 it is an ellipse. A conic section is a circle if δ > 0 and A₁ = A₂; it is a rectangular hyperbola if δ < 0 and A₁ = −A₂. It can be proven that in the complex projective plane CP² two conic sections have four points in common (if one accounts for multiplicity), so there are never more than 4 intersection points and there is always at least 1 intersection point (possibilities: 4 distinct intersection points, 2 single intersection points and 1 double intersection point, 2 double intersection points, 1 single intersection point and 1 with multiplicity 3, 1 intersection point with multiplicity 4). If there exists at least one intersection point with multiplicity > 1, then the two conic sections are said to be tangent. If there is only one intersection point, which has multiplicity 4, the two conic sections are said to be osculating.
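A small numerical sketch (assuming the matrix layout reconstructed above) that builds M for the unit circle and evaluates both the determinant Δ and the discriminant δ:

```python
import numpy as np

# Conic in homogeneous form: A1*x^2 + A2*y^2 + A3*z^2 + 2*B1*x*y + 2*B2*x*z + 2*B3*y*z = 0
# Example: the unit circle x^2 + y^2 - z^2 = 0 (i.e. x^2 + y^2 = 1 with z = 1)
A1, A2, A3 = 1.0, 1.0, -1.0
B1, B2, B3 = 0.0, 0.0, 0.0

M = np.array([[A1, B1, B2],
              [B1, A2, B3],
              [B2, B3, A3]])

delta_big = np.linalg.det(M)       # determinant of the conic (nonzero: not degenerate)
delta_small = A1 * A2 - B1 ** 2    # discriminant (positive: an ellipse/circle)
print(delta_big, delta_small)      # -1.0 1.0
```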
Furthermore each straight line intersects each conic section twice. If the intersection point is double, the line is said to be tangent and it is called the tangent line. Because every straight line intersects a conic section twice, each conic section has two points at infinity (the intersection points with the line at infinity). If these points are real, the conic section must be a hyperbola, if they are imaginary conjugated, the conic section must be an ellipse, if the conic section has one double point at infinity it is a parabola. If the points at infinity are (1,i,0) and (1,-i,0), the conic section is a circle. If a conic section has one real and one imaginary point at infinity or it has two imaginary points that are not conjugated it is neither a parabola nor an ellipse nor a hyperbola.
The semi-latus rectum of a conic section, usually denoted l, is the distance from the single focus, or one of the two foci, to the conic section itself, measured along a line perpendicular to the major axis.
In polar coordinates, a conic section with one focus at the origin and, if any, the other on the x-axis, is given by the equation
r = l / (1 + e cos θ).
As above, e = 0 yields a circle, for 0 < e < 1 one obtains an ellipse, for e = 1 a parabola, and for e > 1 a hyperbola.
Conic sections are important in astronomy. The orbits of two massive objects that interact according to Newton's law of universal gravitation are conic sections if their common center of mass is considered to be at rest. If they are bound together, they will both trace out ellipses; if they are moving apart, they will both follow parabolas or hyperbolas.
Conic sections are always "smooth;" more precisely, they contain no inflection points. This is important for many applications, such as aerodynamics, where a smooth surface is required to ensure laminar flow and prevent turbulence.
In projective geometry, the conic sections in the projective plane are equivalent to each other up to projective transformations.
- ↑ E.J. Wilczynski, 1916, Some remarks on the historical development and the future prospects of the differential geometry of plane curves. Bull. Amer. Math. Soc. 22:317-329.
- Arnone, Wendy. 2001. Geometry for Dummies. Hoboken, NJ: For Dummies (Wiley). ISBN 0764553240.
- Hartshorne, Robin. 2002. Geometry: Euclid and Beyond. Undergraduate Texts in Mathematics. New York: Springer. ISBN 0387986502.
- Research and Education Association. 1999. Math Made Nice-n-Easy Books #7: Trigonometric Identities & Equations, Straight Lines, Conic Sections. Piscataway, N.J.: Research & Education Association. ISBN 0878912061.
- Smith, Karen E. 2000. An Invitation to Algebraic Geometry. New York: Springer. ISBN 0387989803.
- Stillwell, John. 1998. Numbers and Geometry. Undergraduate Texts in Mathematics. New York: Springer. ISBN 0387982892.
- Stillwell, John. 2006. Yearning for the Impossible: The Surprising Truths of Mathematics. Wellesley, MA: A. K. Peters. ISBN 156881254X.
All links retrieved June 13, 2013.
- Derivations of Conic Sections
- Conic sections
- Eric W. Weisstein. Conic Section. MathWorld.
- Determinants and Conic Section Curves.
- Occurrence of the conics.
Note: Some restrictions may apply to use of individual images which are separately licensed. | http://www.newworldencyclopedia.org/entry/Conic_section | 13 |
172 | Mechanics: Vectors and Forces in Two-Dimensions
Vectors and Forces in 2-D: Problem Set Overview
This set of 27 problems targets your ability to determine the vector sum of two or more forces (which are not at right angles to each other), analyze situations in which forces are applied at angles to the horizontal to move an object along a horizontal surface, analyze equilibrium situations to determine an unknown quantity, and to analyze the motion of objects along an inclined plane. The more difficult problems are color-coded as blue problems.
Vector Addition and Vector Components
Two or more vectors can be added together in order to determine the resultant vector. The resultant vector is simply the result of adding two or more vectors. Vectors which make right angles to one another are easily added using the Pythagorean theorem; Trigonometric functions can be used to determine the direction of the resultant vector. Vectors which are not at right angles to each other can be resolved into components which lie along the east-west and north-south coordinate axes. Sine and cosine functions can be used to determine these components. Once all components have been determined, they can be simplified into a single east-west and a single north-south vector; then the Pythagorean theorem and trigonometric functions can be used to determine the magnitude and direction of the resultant vector.
Counter-Clockwise Convention and Vector Components
A vector which is directed at an angle Θ to one of the coordinate axes is said to have components directed along the axes. These components describe the effect of the vector in the direction of the axes. The direction of a vector is often expressed using the counter clockwise (CCW) from east convention. By such a convention, the direction of a vector is represented as the counter-clockwise angle of rotation which the vector makes with due East. When this convention is used, the components of the vector along the east-west and north-south axes can be determined quite easily using the sine and cosine functions. If a vector has a magnitude of A and a direction of Θ (by the CCW convention), then the horizontal and vertical components can be determined using the following equations:
Ax = A • cos(Θ)
Ay = A • sin(Θ)
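As a sketch, the same component equations can be used to add two vectors numerically; the 40 N and 60 N vectors below are made-up values, not from the problem set:

```python
import math

def components(magnitude, angle_deg):
    # Resolve a vector into east-west (x) and north-south (y) components,
    # with the angle measured counter-clockwise from due East
    theta = math.radians(angle_deg)
    return magnitude * math.cos(theta), magnitude * math.sin(theta)

# Add two vectors that are not at right angles: 40 N at 30 degrees and 60 N at 135 degrees
ax, ay = components(40, 30)
bx, by = components(60, 135)
rx, ry = ax + bx, ay + by

resultant = math.hypot(rx, ry)                      # Pythagorean theorem
direction = math.degrees(math.atan2(ry, rx)) % 360  # CCW from East
print(round(resultant, 1), round(direction, 1))
```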
Newton's Second Law
The acceleration (a) of objects is caused by an unbalanced or net force (Fnet). The magnitude of the acceleration is equal to the ratio of net force to mass: a = Fnet / m. Typical Newton's second law problems are centered around determining the net force, the mass or the magnitude of individual forces acting upon an object.
There are typically two types of these problems in this set of problems:
- Determine the individual force value: If the acceleration of an object is known, then the magnitude of the net force can usually be determined. This net force value is related to the vector sum of all individual force values; as such, the magnitude of an individual force can often be found if the net force can be calculated.
- Determine the acceleration value: If the values of all individual force values are known, then the net force can be calculated as the vector sum of all the forces. The mass is often stated or determined from the weight of the object. The acceleration of the object can then be found as the ratio of the net force to the mass.
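A minimal sketch of the second type of problem, with made-up values:

```python
# Made-up example: a 5.0 kg object pulled right with 20 N against 8 N of friction
mass = 5.0        # kg
applied = 20.0    # N, rightward
friction = 8.0    # N, leftward

f_net = applied - friction   # net force is the vector sum of the individual forces
a = f_net / mass             # Newton's second law: a = Fnet / m
print(f_net, a)              # 12.0 N, 2.4 m/s^2
```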
Mass is a quantity which is dependent upon the amount of matter present within an object; it is expressed in kilograms. Weight, on the other hand, is the force of gravity which acts upon an object. Being a force, weight is expressed in the metric unit as Newtons. Every location in the universe is characterized by a gravitational constant represented by the symbol g (sometimes referred to as the acceleration of gravity). Weight (or Fgrav) and mass are related by the equation: Fgrav = m • g.
An object which is moving (or even attempting to move) across a surface encounters a force of friction. Friction force results from the two surfaces being pressed together closely, causing intermolecular attractive forces between molecules of different surfaces. As such, friction depends upon the nature of the two surfaces and upon the degree to which they are pressed together. The friction force can be calculated using the equation:
Ffrict = µ • Fnorm
The symbol µ (pronounced mew) represents of the coefficient of friction and will be different for different surfaces.
The Acceleration of Objects by Forces at Angles
Several of the problems in this set target your ability to analyze objects which are moving across horizontal surfaces and acted upon by forces directed at angles to the horizontal. Previously, Newton's second law has been applied to analyze objects accelerated across horizontal surfaces by horizontal forces. When the applied force is at an angle to the horizontal, the approach is very similar. The first task involves the construction of a free-body diagram and the resolution of the angled force into horizontal and vertical components. Once done, the problem becomes like the usual Newton's second law problem in which all forces are directed either horizontally or vertically.
The free-body diagram above shows the presence of a friction force. This force may or may not be present in the problems you solve. If present, its value is related to the normal force and the coefficient of friction (see above). There is a slight complication related to the normal force. As always, an object which is not accelerating in the vertical direction must be experiencing a balance of all vertical forces. That is, the sum of all up forces is equal to the sum of all down forces. But now there are two up forces - the normal force and the Fy force (vertical component of the applied force). As such, the normal force plus the vertical component of the applied force is equal to the downward gravity force. That is,
Fnorm + Fy = Fgrav
There are other instances in which the applied force is exerted at an angle below the horizontal. Once resolved into its components, there are two downward forces acting upon the object - the gravity force and the vertical component of the applied force (Fy). In such instances, the gravity force plus the vertical component of the applied force is equal to the upward normal force. That is,
Fnorm = Fgrav + Fy
As always, the net force is the vector sum of all the forces. In this case, the vertical forces sum to zero; the remaining horizontal forces will sum together to equal the net force. Since the friction force is leftward (in the negative direction), the vector sum equation can be written as
Fnet = Fx - Ffrict = m • a
The general strategy for solving these problems involves first using trigonometric functions to determine the components of the applied force. If friction is present, a vertical force analysis is used to determine the normal force; and the normal force is used to determine the friction force. Then the net force can be computed using the above equation. Finally, the acceleration can be found using Newton's second law of motion.
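That strategy can be sketched directly in Python; the crate, force, angle, and coefficient of friction below are made-up example values:

```python
import math

# Made-up example: a 10 kg crate pulled along a floor by a 50 N force at 30 degrees
# above the horizontal, with a coefficient of friction of 0.20
m, g = 10.0, 9.8
F, angle = 50.0, math.radians(30)
mu = 0.20

Fx = F * math.cos(angle)    # horizontal component of the applied force
Fy = F * math.sin(angle)    # vertical component of the applied force
F_grav = m * g
F_norm = F_grav - Fy        # vertical balance: Fnorm + Fy = Fgrav
F_frict = mu * F_norm       # friction from the normal force
a = (Fx - F_frict) / m      # Fnet = Fx - Ffrict = m * a
print(round(a, 2))          # acceleration in m/s^2
```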
The Hanging of Signs and Other Objects at Equilibrium
Several of the problems in this set target your ability to analyze objects which are suspended at equilibrium by two or more wires, cables, or strings. In each problem, the object is attached by a wire, cable or string which makes an angle to the horizontal. As such, there are two or more tension forces which have both a horizontal and a vertical components. The horizontal and vertical components of these tension forces is related to the angle and the tension force value by a trigonometric function (see above). Since the object is at equilibrium, the vector sum of all horizontal force components must add to zero and the vector sum of all vertical force components must add to zero. In the case of the vertical analysis, there is typically one downward force - the force of gravity - which is related to the mass of the object. There are two or more upward force components which are the result of the tension forces. The sum of these upward force components is equal to the downward force of gravity.
The unknown quantity to be solved for could be the tension, the weight or the mass of the object; the angle is usually known. The graphic above illustrates the relationship between these quantities. Detailed information and examples of equilibrium problems is available online at The Physics Classroom Tutorial.
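For the common symmetric case (two cables at the same angle), the vertical force balance gives the tension directly; a sketch with made-up values:

```python
import math

# Made-up example: a 20 kg sign hung symmetrically from two cables,
# each making 30 degrees with the horizontal
m, g = 20.0, 9.8
angle = math.radians(30)

F_grav = m * g
# Vertical equilibrium: 2 * T * sin(angle) = Fgrav, so
T = F_grav / (2 * math.sin(angle))
print(round(T, 1))   # tension in each cable, in newtons (196.0 here)
```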
Inclined Plane Problems
Several problems in this set of problems will target your ability to analyze objects positioned on inclined planes, either accelerating along the incline or at equilibrium. As in all problems in this set, the analysis begins with the construction of a free-body diagram in which forces acting upon the object are drawn. This is shown below on the left. Note that the force of friction is directed parallel to the incline, the normal force is directed perpendicular to the incline, and the gravity force is neither parallel nor perpendicular to the incline. It is common practice in Fnet = m•a problems to analyze the forces acting upon an object in terms of those which are along the same axis of the acceleration and those which are perpendicular to it. On horizontal surfaces, we would look at all horizontal forces separate from those which are vertical. But on inclined surfaces, we would analyze the forces parallel to the incline (along the axis of acceleration) separate from those which are perpendicular to the incline. Since the force of gravity is neither parallel nor perpendicular to the inclined plane, it is imperative that it be resolved into two components of force which are directed parallel and perpendicular to the incline. This is shown on the diagram below in the middle. The formulas for determining the components of the gravity force parallel and perpendicular to the inclined plane (have an incline angle of theta) are:
Fparallel = m•g•sin(theta)
Fperpendicular = m•g•cos(theta)
Once the components are found, the gravity force can be ignored since it has been substituted for by its components; this is illustrated in the diagram below on the right.
Once the gravity force has been resolved into its perpendicular and parallel components, the problem is approached like any Fnet = m•a problem. The net force is determined by adding all the forces as vectors. The forces directed perpendicular to the incline balance each other and add to zero. For the more common cases in which there are only two forces perpendicular to the incline, one might write this as:
Fnorm = Fperpendicular
The net force is therefore the result of the forces directed parallel to the incline. As always, the net force is found by adding the forces in the direction of acceleration and subtracting the forces directed opposite of the acceleration. In the specific case shown above for an object sliding down an incline in the presence of friction,
Fnet = Fparallel - Ffrict
Once the net force is determined, the acceleration can be calculated from the ratio of net force to mass.
There are a variety of situations that could occur for the motion of objects along inclined planes. There are situations in which a force is applied to the object upward and parallel to the incline to either hold the object at rest or to accelerate it upward along the incline. Whenever there is a motion up the inclined plane, friction would oppose that motion and be directed down the incline. The net force is still determined by adding the forces in the direction of acceleration and subtracting the forces directed opposite of the acceleration. In this case, the net force is given by the following equation:
Fnet = Fapp - Fparallel - Ffrict
Specific discussions of each of the myriad of possibilities is not as useful as one might think. Most often such discussions cause physics students to focus on the specifics and to subsequently miss the big ideas which underlay every analysis regardless of the specific situation. Every problem can be (and should be) approached in the same manner: by drawing the free-body diagram showing all the forces acting upon the object, resolving the gravity force into components parallel and perpendicular to the incline, and writing the Fnet expression by adding the forces in the direction of acceleration and subtracting the forces directed opposite of the acceleration.
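A sketch of the simplest case, an object sliding down the incline against friction, with made-up values:

```python
import math

# Made-up example: a 4.0 kg box sliding down a 25 degree incline with mu = 0.15
m, g = 4.0, 9.8
theta = math.radians(25)
mu = 0.15

F_parallel = m * g * math.sin(theta)   # component of gravity along the incline
F_perp = m * g * math.cos(theta)       # component of gravity into the incline
F_norm = F_perp                        # perpendicular forces balance
F_frict = mu * F_norm                  # friction opposes the motion (up the incline)
a = (F_parallel - F_frict) / m         # Fnet = Fparallel - Ffrict
print(round(a, 2))                     # acceleration down the incline, m/s^2
```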
Habits of an Effective Problem-Solver
An effective problem solver by habit approaches a physics problem in a manner that reflects a collection of disciplined habits. While not every effective problem solver employs the same approach, they all have habits which they share in common. These habits are described briefly here. An effective problem-solver...
- ...reads the problem carefully and develops a mental picture of the physical situation. If needed, they sketch a simple diagram of the physical situation to help visualize it.
- ...identifies the known and unknown quantities in an organized manner, often times recording them on the diagram iteself. They equate given values to the symbols used to represent the corresponding quantity (e.g., m = 1.25 kg, µ = 0.459, vo = 0.0 m/s, Ø = 41.6º, vf = ???).
- ...plots a strategy for solving for the unknown quantity; the strategy will typically center around the use of physics equations and be heavily dependent upon an understanding of physics principles.
- ...identifies the appropriate formula(s) to use, often times writing them down. Where needed, they perform the needed conversion of quantities into the proper unit.
- ...performs substitutions and algebraic manipulations in order to solve for the unknown quantity.
Additional Readings/Study Aids:
The following pages from The Physics Classroom tutorial may serve to be useful in assisting you in the understanding of the concepts and mathematics associated with these problems.
- Vectors and Direction
- Vector Addition
- Vector Components
- Vector Resolution
- Mass and Weight
- Newton's First Law
- Drawing Free Body Diagrams
- Newton's Second Law
- Determining Acceleration From Force
- Determining Individual Force Values
- Friction Force
- Kinematic Equations
- Vector Components
- Vector Resolution
- Net Force Problems Revisited
- Inclined Planes | http://www.physicsclassroom.com/calcpad/vecforce/index.cfm | 13 |
69 | Shape Change and Density
Date: Fall 2009
Why does changing the shape of an object have no effect
on the density of that object?
In a uniform, unchanging material, density is a measure of how much mass
there is per unit volume of the material. Density is an "intrinsic"
property of the material, which means it's not affected by changes in
the amount of the material. You can add, remove, or reshape the material
all you want, and its density will be unaffected. If the material is *not*
constant (like if you squished bread), then changing shape may change the density.
Hope this helps,
Density is a measure of the amount of mass (weight) that exists in a given volume.
Mathematically, density equals mass divided by volume. Density is commonly reported
in grams per cubic centimeter.
Volume is reported in units of cubic length (that is, cubic feet, cubic centimeters,
cubic meters…). One could easily define objects that have different shapes, yet
similar volumes. Consider, for example, a right rectangular prism that measures 5
(long) x 5 (wide) x 4 (height); this object has 100 cubic units of volume. An
object that measures 10 (long) x 10 (wide) x 1 (height) also has 100 cubic units
of volume. The two units have different shapes, yet equivalent volumes. If the same
mass (weight) of material exists in each of the two objects, then the density would
be the same in each object. Density is independent of shape.
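A quick sketch of that comparison, assuming a made-up mass of 500 grams for each block:

```python
# Two differently shaped blocks of the same material: same mass, same volume
mass_g = 500.0

volume_1 = 5 * 5 * 4      # 100 cubic units (5 x 5 x 4 prism)
volume_2 = 10 * 10 * 1    # 100 cubic units (10 x 10 x 1 slab)

print(mass_g / volume_1)  # 5.0 g per cubic unit
print(mass_g / volume_2)  # 5.0 g per cubic unit: same density, different shape
```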
The equation for density is a ratio, so density does not depend on the size of the
material under study, only the ratio between the objects mass and its volume.
Certainly a larger object will weigh more, but it will also have a larger volume.
Think about this: How can two objects of similar volumes have different densities?
I suppose it makes a difference how the shape of the object is changed.
Let us imagine the object is a lead brick. If the shape is changed
by cutting off a piece of the brick, the density of the pieces is the
same as the density of the original brick. However, if the lead
brick is compressed in a powerful vise in order to change its
volume, the same mass is being forced to occupy a smaller volume,
which therefore causes its density to increase.
There are two basic values that control what the density is: mass
(the amount of stuff), and volume (how much space the amount of stuff takes).
Now imagine a sample of water. Whether that water is in a tall, thin
container (such as a glass) or in a flat, wide container (such as a
bowl) - the amount of stuff (the mass) has not changed, you still
have the same quantity of water. Also, the volume has not changed,
whether the 1 cup of water is in a glass or in a bowl, does not
change the fact that there is 1 cup of water. So since the mass and
the volume has not changed, and since mass and volume are the only
two factors in determining density, then the density has not changed.
Having said that, there is a "change of shape" that does change
density. Imagine a sponge (like the kind used to wash dishes). If
the sponge is squeezed it takes up less volume, even if the mass
(the amount of sponge) has not changed. So in this case, the change
in shape did cause a change in density. BUT, it does so because the
air pockets in the sponge has been squeezed out, if you just
consider the actual plastic that makes up the sponge - without
considering the air pockets in the sponge, then the volume occupied
by just the plastic has not changed. So while the sponge object
changed in density with the squeezing, the plastic in the sponge has not.
Greg (Roberto Gregorius)
The units of "density" is "mass" per "volume". Assuming the object is
uniform, at rest,
Mass = Density x Volume. So the increase (or decrease) in the Volume of the
object results in a proportional corresponding change in the Mass of the
object. There are of course ways to "trick" the object, e.g. changing the
temperature, applied pressure and the like, but in the "simple" case the
Mass and Volume will vary in the same proportion.
Density is the amount of matter in the space an object takes up
(called its volume).
Unless a body changes volume or the amount of matter changes, its
density stays the same.
R. W. "Bob" Avakian
B.S. Earth Sciences; M.S. Geophysics
Oklahoma State Univ. Inst. of Technology
Update: June 2012 | http://www.newton.dep.anl.gov/askasci/gen06/gen06792.htm | 13 |
202 | Derivation of the Navier–Stokes equations
Basic assumptions
The Navier–Stokes equations are based on the assumption that the fluid, at the scale of interest, is a continuum, in other words is not made up of discrete particles but rather a continuous substance. Another necessary assumption is that all the fields of interest like pressure, velocity, density, temperature and so on are differentiable, weakly at least.
The equations are derived from the basic principles of conservation of mass, momentum, and energy. For that matter, sometimes it is necessary to consider a finite arbitrary volume, called a control volume, over which these principles can be applied. This finite volume is denoted by $\Omega$ and its bounding surface $\partial\Omega$. The control volume can remain fixed in space or can move with the fluid.
The material derivative
Changes in properties of a moving fluid can be measured in two different ways. One can measure a given property by either carrying out the measurement on a fixed point in space as particles of the fluid pass by, or by following a parcel of fluid along its streamline. The derivative of a field with respect to a fixed position in space is called the Eulerian derivative while the derivative following a moving parcel is called the advective or material derivative.
The material derivative is defined as the operator:

$$\frac{D}{Dt} \equiv \frac{\partial}{\partial t} + \mathbf{v}\cdot\nabla$$

where $\mathbf{v}$ is the velocity of the fluid. The first term on the right-hand side of the equation is the ordinary Eulerian derivative (i.e. the derivative on a fixed reference frame, representing changes at a point with respect to time) whereas the second term represents changes of a quantity with respect to position (see advection). This "special" derivative is in fact the ordinary derivative of a function of many variables along a path following the fluid motion; it may be derived through application of the chain rule in which all independent variables are checked for change along the path (i.e. the total derivative).
For example, the measurement of changes in wind velocity in the atmosphere can be obtained with the help of an anemometer in a weather station or by mounting it on a weather balloon. The anemometer in the first case is measuring the velocity of all the moving particles passing through a fixed point in space, whereas in the second case the instrument is measuring changes in velocity as it moves with the fluid.
Conservation laws
This is done via the Reynolds transport theorem, an integral relation stating that the sum of the changes of some intensive property (call it $\varphi$) defined over a control volume must be equal to what is lost (or gained) through the boundaries of the volume plus what is created/consumed by sources and sinks inside the control volume. This is expressed by the following integral equation:

$$\frac{d}{dt}\int_{\Omega} \varphi \, dV = -\oint_{\partial\Omega} \varphi\,\mathbf{v}\cdot d\mathbf{A} - \int_{\Omega} Q \, dV$$

where $\mathbf{v}$ is the velocity of the fluid and $Q$ represents the sources and sinks in the fluid. Recall that $\Omega$ represents the control volume and $\partial\Omega$ its bounding surface.
Applying Leibniz's rule to the integral on the left and then combining all of the integrals:

$$\int_{\Omega}\left(\frac{\partial \varphi}{\partial t} + \nabla\cdot(\varphi\,\mathbf{v}) + Q\right) dV = 0$$

The integral must be zero for any control volume; this can only be true if the integrand itself is zero, so that:

$$\frac{\partial \varphi}{\partial t} + \nabla\cdot(\varphi\,\mathbf{v}) + Q = 0$$
From this valuable relation (a very generic continuity equation), three important concepts may be concisely written: conservation of mass, conservation of momentum, and conservation of energy. Validity is retained if $\varphi$ is a vector, in which case the vector-vector product in the second term will be a dyad.
Conservation of momentum
The most elemental form of the Navier–Stokes equations is obtained when the conservation relation is applied to momentum. Writing momentum as $\rho\mathbf{v}$ gives:

$$\frac{\partial}{\partial t}(\rho\mathbf{v}) + \nabla\cdot(\rho\,\mathbf{v}\otimes\mathbf{v}) + Q = 0$$

where $\mathbf{v}\otimes\mathbf{v}$ is a dyad, a special case of tensor product, which results in a second rank tensor; the divergence of a second rank tensor is again a vector (a first rank tensor). Noting that a body force (notated $\mathbf{b}$) is a source or sink of momentum (per volume), so that $Q = -\mathbf{b}$, and expanding the derivatives completely:

$$\mathbf{v}\frac{\partial\rho}{\partial t} + \rho\frac{\partial\mathbf{v}}{\partial t} + \mathbf{v}\,(\mathbf{v}\cdot\nabla\rho) + \rho\,\mathbf{v}\,(\nabla\cdot\mathbf{v}) + \rho\,(\mathbf{v}\cdot\nabla)\mathbf{v} = \mathbf{b}$$

Note that the gradient of a vector is a special case of the covariant derivative, the operation results in second rank tensors; except in Cartesian coordinates, it's important to understand that this isn't simply an element by element gradient. Rearranging and recognizing that $\mathbf{v}\cdot\nabla\rho + \rho\,\nabla\cdot\mathbf{v} = \nabla\cdot(\rho\mathbf{v})$:

$$\mathbf{v}\left(\frac{\partial\rho}{\partial t} + \nabla\cdot(\rho\mathbf{v})\right) + \rho\left(\frac{\partial\mathbf{v}}{\partial t} + \mathbf{v}\cdot\nabla\mathbf{v}\right) = \mathbf{b}$$

The leftmost expression enclosed in parentheses is, by mass continuity (shown in a moment), equal to zero. Noting that what remains is the convective derivative:

$$\rho\left(\frac{\partial\mathbf{v}}{\partial t} + \mathbf{v}\cdot\nabla\mathbf{v}\right) = \rho\frac{D\mathbf{v}}{Dt} = \mathbf{b}$$
This appears to simply be an expression of Newton's second law (F = ma) in terms of body forces instead of point forces. Each term in any case of the Navier–Stokes equations is a body force. A shorter though less rigorous way to arrive at this result would be the application of the chain rule to acceleration:

$$\rho\frac{d\mathbf{v}}{dt} = \rho\left(\frac{\partial\mathbf{v}}{\partial t} + \frac{\partial\mathbf{v}}{\partial x}\frac{dx}{dt} + \frac{\partial\mathbf{v}}{\partial y}\frac{dy}{dt} + \frac{\partial\mathbf{v}}{\partial z}\frac{dz}{dt}\right) = \mathbf{b}$$

where $\left(\frac{dx}{dt}, \frac{dy}{dt}, \frac{dz}{dt}\right) = \mathbf{v}$. The reason why this is "less rigorous" is that we haven't shown that picking $\frac{d\mathbf{x}}{dt} = \mathbf{v}$ is correct; however it does make sense since with that choice of path the derivative is "following" a fluid "particle", and in order for Newton's second law to work, forces must be summed following a particle. For this reason the convective derivative is also known as the particle derivative.
Conservation of mass
Mass may be considered also. Taking $Q = 0$ (no sources or sinks of mass) and putting in density:

$$\frac{\partial\rho}{\partial t} + \nabla\cdot(\rho\,\mathbf{v}) = 0$$

where $\rho$ is the mass density (mass per unit volume), and $\mathbf{v}$ is the velocity of the fluid. This equation is called the mass continuity equation, or simply "the" continuity equation. This equation generally accompanies the Navier–Stokes equation.

In the case of an incompressible fluid, $\rho$ is a constant and the equation reduces to:

$$\nabla\cdot\mathbf{v} = 0$$
which is in fact a statement of the conservation of volume.
General form of the equations of motion
The generic body force seen previously is made specific first by breaking it up into two new terms, one to describe forces resulting from stresses and one for "other" forces such as gravity. By examining the forces acting on a small cube in a fluid, it may be shown that

$$\rho\frac{D\mathbf{v}}{Dt} = \nabla\cdot\boldsymbol{\sigma} + \mathbf{f}$$

where $\boldsymbol{\sigma}$ is the Cauchy stress tensor, and $\mathbf{f}$ accounts for other body forces present. This equation is called the Cauchy momentum equation and describes the non-relativistic momentum conservation of any continuum that conserves mass. $\boldsymbol{\sigma}$ is a rank two symmetric tensor given by its covariant components:

$$\boldsymbol{\sigma} = \begin{pmatrix} \sigma_{xx} & \tau_{xy} & \tau_{xz} \\ \tau_{yx} & \sigma_{yy} & \tau_{yz} \\ \tau_{zx} & \tau_{zy} & \sigma_{zz} \end{pmatrix}$$

where the $\sigma$ are normal stresses and the $\tau$ are shear stresses.
It is conventional to split this stress tensor into an isotropic pressure part and a deviatoric part, $\boldsymbol{\sigma} = -p\,\mathbb{I} + \mathbb{T}$. The motivation for doing this is that pressure is typically a variable of interest, and also this simplifies application to specific fluid families later on since the rightmost tensor in the equation above must be zero for a fluid at rest. Note that $\mathbb{T}$ is traceless. The Navier–Stokes equation may now be written in the most general form:

$$\rho\frac{D\mathbf{v}}{Dt} = -\nabla p + \nabla\cdot\mathbb{T} + \mathbf{f}$$
This equation is still incomplete. For completion, one must make hypotheses on the forms of $\mathbb{T}$ and $p$, that is, one needs a constitutive law for the stress tensor which can be obtained for specific fluid families and on the pressure; additionally, if the flow is assumed compressible an equation of state will be required, which will likely further require a conservation of energy formulation.
Application to different fluids
The general form of the equations of motion is not "ready for use", the stress tensor is still unknown so that more information is needed; this information is normally some knowledge of the viscous behavior of the fluid. For different types of fluid flow this results in specific forms of the Navier–Stokes equations.
Newtonian fluid
Compressible Newtonian fluid
The formulation for Newtonian fluids stems from an observation made by Newton that, for most fluids, the shear stress is proportional to the velocity gradient:

$$\tau \propto \frac{\partial u}{\partial y}$$
In order to apply this to the Navier–Stokes equations, three assumptions were made by Stokes:
- The stress tensor is a linear function of the strain rates.
- The fluid is isotropic.
- For a fluid at rest, must be zero (so that hydrostatic pressure results).
Applying these assumptions will lead to:

$$\sigma_{ij} = -p\,\delta_{ij} + \mu\left(\frac{\partial v_i}{\partial x_j} + \frac{\partial v_j}{\partial x_i}\right) + \delta_{ij}\,\lambda\,\nabla\cdot\mathbf{v}$$
That is, the deviatoric of the deformation rate tensor is identified to the deviatoric of the stress tensor, up to a factor μ.
$\delta_{ij}$ is the Kronecker delta. μ and λ are proportionality constants associated with the assumption that stress depends on strain linearly; μ is called the first coefficient of viscosity (usually just called "viscosity") and λ is the second coefficient of viscosity (related to bulk viscosity). The value of λ, which produces a viscous effect associated with volume change, is very difficult to determine, not even its sign is known with absolute certainty. Even in compressible flows, the term involving λ is often negligible; however it can occasionally be important even in nearly incompressible flows and is a matter of controversy. When taken nonzero, the most common approximation is λ ≈ - ⅔ μ.
A straightforward substitution of $\sigma_{ij}$ into the momentum conservation equation will yield the Navier–Stokes equations for a compressible Newtonian fluid, written compactly in vector form as:

$$\rho\left(\frac{\partial\mathbf{v}}{\partial t} + \mathbf{v}\cdot\nabla\mathbf{v}\right) = -\nabla p + \nabla\cdot\left[\mu\left(\nabla\mathbf{v} + (\nabla\mathbf{v})^{\mathsf{T}}\right) + \lambda\,(\nabla\cdot\mathbf{v})\,\mathbb{I}\right] + \rho\mathbf{g}$$
where the transpose has been used. Gravity has been accounted for as "the" body force, i.e. $\mathbf{f} = \rho\mathbf{g}$. The associated mass continuity equation is:

$$\frac{\partial\rho}{\partial t} + \nabla\cdot(\rho\,\mathbf{v}) = 0$$
In addition to this equation, an equation of state and an equation for the conservation of energy is needed. The equation of state to use depends on context (often the ideal gas law); the conservation of energy will read:

$$\rho\frac{Dh}{Dt} = \frac{Dp}{Dt} + \nabla\cdot(k\,\nabla T) + \Phi$$

where $h$ is the enthalpy per unit mass, $T$ the temperature, $k$ the thermal conductivity, and $\Phi$ a function representing the dissipation of energy due to viscous effects.
With a good equation of state and good functions for the dependence of parameters (such as viscosity) on the variables, this system of equations seems to properly model the dynamics of all known gases and most liquids.
Incompressible Newtonian fluid
For the special (but very common) case of incompressible flow, the momentum equations simplify significantly. Taking into account the following assumptions:
- Viscosity $\mu$ will now be a constant
- The second viscosity effect $\lambda = 0$
- The simplified mass continuity equation $\nabla\cdot\mathbf{v} = 0$
then looking at the viscous terms of the $x$ momentum equation for example we have:

$$\frac{\partial}{\partial x}\left(2\mu\frac{\partial u}{\partial x}\right) + \frac{\partial}{\partial y}\left[\mu\left(\frac{\partial u}{\partial y} + \frac{\partial v}{\partial x}\right)\right] + \frac{\partial}{\partial z}\left[\mu\left(\frac{\partial u}{\partial z} + \frac{\partial w}{\partial x}\right)\right] = \mu\,\nabla^2 u + \mu\frac{\partial}{\partial x}\left(\nabla\cdot\mathbf{v}\right) = \mu\,\nabla^2 u$$

Similarly for the $y$ and $z$ momentum directions we have $\mu\,\nabla^2 v$ and $\mu\,\nabla^2 w$, so the incompressible Navier–Stokes equation becomes

$$\rho\left(\frac{\partial\mathbf{v}}{\partial t} + \mathbf{v}\cdot\nabla\mathbf{v}\right) = -\nabla p + \mu\,\nabla^2\mathbf{v} + \rho\mathbf{g}$$
Non-Newtonian fluids
A non-Newtonian fluid is a fluid whose flow properties differ in any way from those of Newtonian fluids. Most commonly the viscosity of non-Newtonian fluids is not independent of shear rate or shear rate history. However, there are some non-Newtonian fluids with shear-independent viscosity, that nonetheless exhibit normal stress-differences or other non-Newtonian behaviour. Many salt solutions and molten polymers are non-Newtonian fluids, as are many commonly found substances such as ketchup, custard, toothpaste, starch suspensions, paint, blood, and shampoo. In a Newtonian fluid, the relation between the shear stress and the shear rate is linear, passing through the origin, the constant of proportionality being the coefficient of viscosity. In a non-Newtonian fluid, the relation between the shear stress and the shear rate is different, and can even be time-dependent. The study of the non-Newtonian fluids is usually called rheology. A few examples are given here.
Bingham fluid
In Bingham fluids, the situation is slightly different; these are materials capable of bearing some shear stress before they begin to flow:

$$\frac{\partial u}{\partial y} = \begin{cases} 0, & \tau < \tau_0 \\ \dfrac{\tau - \tau_0}{\mu}, & \tau \geq \tau_0 \end{cases}$$

where $\tau_0$ is the yield stress.
Power-law fluid
A power-law fluid is an idealised fluid for which the shear stress, $\tau$, is given by

$$\tau = K\left(\frac{\partial u}{\partial y}\right)^n$$

This form is useful for approximating all sorts of general fluids, including shear thinning (such as latex paint) and shear thickening (such as corn starch water mixture).
Stream function formulation
In the analysis of a flow, it is often desirable to reduce the number of equations or the number of variables being dealt with, or both. The incompressible Navier-Stokes equation with mass continuity (four equations in four unknowns) can, in fact, be reduced to a single equation with a single dependent variable in 2D, or one vector equation in 3D. This is enabled by two vector calculus identities:

$$\nabla\times(\nabla\phi) = 0 \qquad\qquad \nabla\cdot(\nabla\times\mathbf{A}) = 0$$
for any differentiable scalar $\phi$ and vector $\mathbf{A}$. The first identity implies that any term in the Navier-Stokes equation that may be represented as the gradient of a scalar will disappear when the curl of the equation is taken. Commonly, pressure and gravity are what are eliminated, resulting in (this is true in 2D as well as 3D):

$$\nabla\times\left(\frac{\partial\mathbf{v}}{\partial t} + \mathbf{v}\cdot\nabla\mathbf{v}\right) = \nu\,\nabla\times\left(\nabla^2\mathbf{v}\right)$$
where it's assumed that all body forces are describable as gradients (true for gravity), and the equation has been divided through by density so that viscosity becomes the kinematic viscosity $\nu$.
The second vector calculus identity above states that the divergence of the curl of a vector field is zero. Since the (incompressible) mass continuity equation specifies the divergence of velocity being zero, we can replace the velocity with the curl of some vector $\boldsymbol{\psi}$ so that mass continuity is always satisfied:

$$\mathbf{v} = \nabla\times\boldsymbol{\psi} \qquad\Rightarrow\qquad \nabla\cdot\mathbf{v} = \nabla\cdot(\nabla\times\boldsymbol{\psi}) = 0$$
So, as long as velocity is represented through $\mathbf{v} = \nabla\times\boldsymbol{\psi}$, mass continuity is unconditionally satisfied. With this new dependent vector variable, the Navier-Stokes equation (with curl taken as above) becomes a single fourth order vector equation, no longer containing the unknown pressure variable and no longer dependent on a separate mass continuity equation:

$$\nabla\times\left(\frac{\partial}{\partial t}(\nabla\times\boldsymbol{\psi}) + (\nabla\times\boldsymbol{\psi})\cdot\nabla\,(\nabla\times\boldsymbol{\psi})\right) = \nu\,\nabla\times\left(\nabla^2(\nabla\times\boldsymbol{\psi})\right)$$
Apart from containing fourth order derivatives, this equation is fairly complicated, and is thus uncommon. Note that if the cross differentiation is left out, the result is a third order vector equation containing an unknown vector field (the gradient of pressure) that may be determined from the same boundary conditions that one would apply to the fourth order equation above.
2D flow in orthogonal coordinates
The true utility of this formulation is seen when the flow is two dimensional in nature and the equation is written in a general orthogonal coordinate system, in other words a system where the basis vectors are orthogonal. Note that this by no means limits application to Cartesian coordinates, in fact most of the common coordinates systems are orthogonal, including familiar ones like cylindrical and obscure ones like toroidal.
The 3D velocity is expressed as (note that the discussion has been coordinate free up till now):

$$\mathbf{v} = v_1\,\mathbf{e}_1 + v_2\,\mathbf{e}_2 + v_3\,\mathbf{e}_3$$

where $\mathbf{e}_1$, $\mathbf{e}_2$, $\mathbf{e}_3$ are basis vectors, not necessarily constant and not necessarily normalized, and $v_1$, $v_2$, $v_3$ are velocity components; let also the coordinates of space be $(x_1, x_2, x_3)$.
Now suppose that the flow is 2D. This doesn't mean the flow is in a plane, rather it means that the component of velocity in one direction is zero and the remaining components are independent of the same direction. In that case (take component 3 to be zero):

$$\mathbf{v} = v_1\,\mathbf{e}_1 + v_2\,\mathbf{e}_2, \qquad \frac{\partial v_1}{\partial x_3} = \frac{\partial v_2}{\partial x_3} = 0$$
The vector function $\boldsymbol{\psi}$ is still defined via:

$$\mathbf{v} = \nabla\times\boldsymbol{\psi}$$
but this must simplify in some way also since the flow is assumed 2D. If orthogonal coordinates are assumed, the curl takes on a fairly simple form, and the equation above expanded becomes:

$$\mathbf{v} = \frac{\mathbf{e}_1}{h_2 h_3}\left[\frac{\partial}{\partial x_2}(h_3\psi_3) - \frac{\partial}{\partial x_3}(h_2\psi_2)\right] + \frac{\mathbf{e}_2}{h_3 h_1}\left[\frac{\partial}{\partial x_3}(h_1\psi_1) - \frac{\partial}{\partial x_1}(h_3\psi_3)\right] + \frac{\mathbf{e}_3}{h_1 h_2}\left[\frac{\partial}{\partial x_1}(h_2\psi_2) - \frac{\partial}{\partial x_2}(h_1\psi_1)\right]$$

where $h_1$, $h_2$, $h_3$ are the scale factors of the coordinate system.
Examining this equation shows that we can set $\psi_1 = \psi_2 = 0$ and retain equality with no loss of generality, so that:

$$\mathbf{v} = \frac{\mathbf{e}_1}{h_2 h_3}\frac{\partial}{\partial x_2}(h_3\psi_3) - \frac{\mathbf{e}_2}{h_3 h_1}\frac{\partial}{\partial x_1}(h_3\psi_3)$$
the significance here is that only one component of $\boldsymbol{\psi}$ remains, so that 2D flow becomes a problem with only one dependent variable. The cross differentiated Navier–Stokes equation becomes two 0 = 0 equations and one meaningful equation.
The remaining component $\psi_3 = \psi$ is called the stream function. The equation for $\psi$ can simplify since a variety of quantities will now equal zero, for example $\nabla\cdot\boldsymbol{\psi} = 0$ when the scale factors $h_1$ and $h_2$ are independent of $x_3$.
Manipulating the cross differentiated Navier–Stokes equation using the above two equations and a variety of identities will eventually yield the 1D scalar equation for the stream function:

$$\frac{\partial}{\partial t}\left(\nabla^2\psi\right) + (\mathbf{v}\cdot\nabla)\left(\nabla^2\psi\right) = \nu\,\nabla^4\psi$$
where $\nabla^4$ is the biharmonic operator. This is very useful because it is a single self contained scalar equation that describes both momentum and mass conservation in 2D. The only other equations that this partial differential equation needs are initial and boundary conditions.
Derivation of the scalar stream function equation
Distributing the curl:
Replacing curl of the curl with the Laplacian and expanding convection and viscosity:
Above, the curl of a gradient is zero, and the divergence of $\boldsymbol{\psi}$ is zero. Negating:
Expanding the curl of the cross product into four terms:
Only one of four terms of the expanded curl is nonzero. The second is zero because it is the dot product of orthogonal vectors, the third is zero because it contains the divergence of velocity, and the fourth is zero because the divergence of a vector with only component three is zero (since it's assumed that nothing (except maybe $h_3$) depends on component three).
This vector equation is one meaningful scalar equation and two 0 = 0 equations.
The assumptions for the stream function equation are listed below:
- The flow is incompressible and Newtonian.
- Coordinates are orthogonal.
- Flow is 2D: $v_3 = 0$ and $\frac{\partial v_1}{\partial x_3} = \frac{\partial v_2}{\partial x_3} = 0$
- The first two scale factors of the coordinate system are independent of the last coordinate: $\frac{\partial h_1}{\partial x_3} = \frac{\partial h_2}{\partial x_3} = 0$, otherwise extra terms appear.
The stream function has some useful properties:
- Since $\nabla\times\mathbf{v} = \nabla\times(\nabla\times\boldsymbol{\psi}) = \nabla(\nabla\cdot\boldsymbol{\psi}) - \nabla^2\boldsymbol{\psi}$ and $\nabla\cdot\boldsymbol{\psi} = 0$, the vorticity of the flow is just the negative of the Laplacian of the stream function.
- The level curves of the stream function are streamlines.
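A minimal numerical sketch of these properties in the Cartesian special case (h1 = h2 = h3 = 1), with an assumed example stream function:

```python
import numpy as np

# Derive u = d(psi)/dy, v = -d(psi)/dx from an assumed stream function and
# check that the resulting velocity field is (numerically) divergence-free.
n = 101
x = np.linspace(0.0, 2 * np.pi, n)
y = np.linspace(0.0, 2 * np.pi, n)
X, Y = np.meshgrid(x, y, indexing="ij")

psi = np.sin(X) * np.sin(Y)         # assumed example stream function

u = np.gradient(psi, y, axis=1)     # u =  dpsi/dy
v = -np.gradient(psi, x, axis=0)    # v = -dpsi/dx

div = np.gradient(u, x, axis=0) + np.gradient(v, y, axis=1)
print(np.abs(div).max())            # close to zero (finite-difference error only)
```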
The stress tensor
The derivation of the Navier-Stokes equation involves the consideration of forces acting on fluid elements, so that a quantity called the stress tensor appears naturally in the Cauchy momentum equation. Since the divergence of this tensor is taken, it is customary to write out the equation fully simplified, so that the original appearance of the stress tensor is lost.
However, the stress tensor still has some important uses, especially in formulating boundary conditions at fluid interfaces. Recalling that $\boldsymbol{\sigma} = -p\,\mathbb{I} + \mathbb{T}$, for a Newtonian fluid the stress tensor is:

$$\sigma_{ij} = -p\,\delta_{ij} + \mu\left(\frac{\partial v_i}{\partial x_j} + \frac{\partial v_j}{\partial x_i}\right) + \delta_{ij}\,\lambda\,\nabla\cdot\mathbf{v}$$
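A small numpy sketch that assembles this tensor from a made-up velocity gradient, using the common λ ≈ -⅔μ approximation mentioned earlier:

```python
import numpy as np

# Assemble the Newtonian stress tensor sigma_ij from an assumed (made-up)
# velocity gradient, with grad_v[i, j] = d(v_i)/d(x_j).
mu = 1.0e-3             # first coefficient of viscosity (Pa*s, roughly water)
lam = -2.0 / 3.0 * mu   # common approximation for the second coefficient of viscosity
p = 101325.0            # pressure, Pa

grad_v = np.array([[0.1,  2.0, 0.0],
                   [0.5, -0.1, 0.0],
                   [0.0,  0.0, 0.0]])

div_v = np.trace(grad_v)                 # divergence of velocity
sigma = (-p * np.eye(3)
         + mu * (grad_v + grad_v.T)
         + lam * div_v * np.eye(3))

print(np.allclose(sigma, sigma.T))       # True: the stress tensor is symmetric
```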
If the fluid is assumed to be incompressible, the tensor simplifies significantly:

$$\sigma_{ij} = -p\,\delta_{ij} + 2\mu\, e_{ij}$$
where $e_{ij}$ is the strain rate tensor, by definition $e_{ij} = \tfrac{1}{2}\left(\frac{\partial v_i}{\partial x_j} + \frac{\partial v_j}{\partial x_i}\right)$. | http://en.wikipedia.org/wiki/Derivation_of_the_Navier%e2%80%93Stokes_equations | 13
95 | The first chapter of this book dealt with the topic of kinematics — the mathematical description of motion. With the exception of falling bodies and projectiles (which involve some mysterious thing called gravity) the factors affecting this motion were never discussed. It is now time to expand our studies to include the quantities that affect motion — mass and force. The mathematical description of motion that includes these quantities is called dynamics.
Many introductory textbooks often define a force as "a push or a pull". This is a reasonable informal definition to help you conceptualize a force, but it is a terrible operational definition. What exactly is "a push or a pull"? How would you measure such a thing? Most importantly, how does "a push or a pull" relate to the other quantities already defined in this book?
Physics, like mathematics, is axiomatic. Each new topic begins with elemental concepts, called axioms, that are so simple that they cannot be made any simpler or are so generally well understood that an explanation would not help people to understand them any better. The two quantities that play this role in kinematics are distance and time. No real attempt was made to define either of these quantities formally in this book (so far) and none was needed. Nearly everyone on the planet knows what distance and time mean.
How about we build up the concept of force with real world examples? Here we go …
Physics is a simple subject taught by simpleminded folk. When physicists look at an object, their first instinct is to simplify that object. A book isn't made up of pages of paper bound together with glue and twine, it's a box. A car doesn't have rubber tires that rotate, six-way adjustable seats, ample cup holders, and a rear window defogger; it's a box. A person doesn't have two arms, two legs, and a head; they aren't made of bone, muscle, skin, and hair; they're a box. This is the beginning of a type of drawing used by physicists and engineers called a free body diagram.
Physics is built on the logical process of analysis — breaking complex situations down into a set of simpler ones. This is how we generate our initial understanding of a situation. In many cases this first approximation of reality is good enough. When it isn't, we add another layer to our analysis. We keep repeating the process until we reach a level of understanding that suits our needs.
Just drawing a box is not going to tell us anything. Objects don't exist in isolation. They interact with the world around them. A force is one type of interaction. The forces acting on an object are represented by arrows coming out of the box — out of the center of the box. This means that in essence, every object is a point — a thing with no dimensions whatsoever. The box we initially drew is just a place to put a dot and the dot is just a place to start the arrows. This process is called point approximation and results in the simplest type of free body diagram.
Let's apply this technique to a series of examples. Draw a free body diagram of …
First example: Let's start with the archetypal example that all physics teachers begin with — a demonstration so simple it requires no preparation. Reach into the drawer, pull out the textbook, and lay it on top in a manner befitting its importance. Behold! A book lying on a level table. Is there anything more grand? Now watch as we reduce it to its essence. Draw a box to represent the book. Draw a horizontal line under the box to represent the table if you're feeling bold. Then identify the forces acting on it.
Something keeps the book down. We need to draw an arrow coming out of the center pointing down to represent that force. Thousands of years ago, there was no name for that force. "Books lie on tables because that's what they do," was the thinking. We now have a more sophisticated understanding of the world. Books lie on tables because gravity pulls them down. We could label this arrow Fg for "force of gravity" or W for its more prosaic name, weight. (Prosaic means non-poetic, by the way. Prosaic is a very poetic way to say common. Prosaic is a non-prosaic word. Back to the diagram.)
Gravity pulls the book down, but it doesn't fall down. Therefore there has to be some force that also pushes the book up. What do we call this force? The "table force"? No that sounds silly and besides, it's not the act of being a table that makes the force. It's some characteristic the table has. Place a book in water or in the air and down it goes. The thing about a table that makes it work is that it's solid. So what do we call this force? The "solid force"? That actually doesn't sound half bad, but it's not the name that's used. Think about it this way. Rest on a table and there's an upward force. Lean against a wall and there's a sideways force. Jump on a trampoline high enough to hit your head on the ceiling and you'll feel a downward force. The direction of the force always seems to be coming out of the solid surface. A direction which is perpendicular to the plane of a surface is said to be normal. The force that a solid surface exerts on anything in the normal direction is called the normal force.
Calling a force "normal" may seem a little odd since we generally think of the word normal as meaning ordinary, usual, or expected. If there's a normal force, shoudn't there also be an abnormal force? The origin of the Modern English word normal is the Latin word for a carpenter's square — norma. The word didn't acquire its current meaning until the Nineteenth Century. Normal force is closer to the original meaning of the word normal than normal behavior (behavior at a right angle?), normal use (use only at a right angle?), or normal body temperature (take your temperature at a right angle?).
Are we done? Well in terms of identifying forces, yes we are. This is a pretty simple problem. You've got a book, a table, and the earth. The earth exerts a force on the book called gravity or weight. The table exerts a force on the book called normal or the normal force. What else is there? Forces come from the interaction between things. When you run out of things, you run out of forces.
The last word for this simple problem is about length. How long should we draw the arrow representing each force? There are two ways to answer this question. One is, "Who cares?" We've identified all the forces and got their directions right, let's move on and let the algebra take care of the rest. This is a reasonable reply. Directions are what really matter since they determine the algebraic sign when we start combining forces. The algebra really will take care of it all. The second answer is, "Who cares is not an acceptable answer." We should make an effort and determine which force is greater given the situation described. Knowing the relative size of the forces may tell us something interesting or useful and help us understand what's going on.
So what is going on? In essence, a whole lot of nothing. Our book isn't going anywhere or doing anything physically interesting. Wait long enough and the paper will decompose (that's chemistry) and decomposers will help decompose it (that's biology). Given the lack of any activity, I think it's safe to say that the downward gravitational force is balanced by the upward normal force.
W = N
In summary, draw a box with two arrows of equal lengths coming out of the center, one pointing up and one pointing down. Label the one pointing down weight (or use the symbol W or Fg) and label the one pointing up normal (or use the symbol N or Fn).
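If you want to attach numbers to this picture, here is a tiny sketch assuming a 1 kg book and g = 9.8 N/kg; both values are made up for the example and are not part of the original text.

```python
g = 9.8          # gravitational field strength (N/kg), assumed
m = 1.0          # assumed mass of the book (kg)

W = m * g        # weight, pointing down
N = W            # the book is in equilibrium, so the normal force balances the weight
print(f"Weight = {W:.1f} N down, Normal = {N:.1f} N up, net vertical force = {N - W:.1f} N")
```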
It may seem like I've said a lot for such a simple question, but I rambled with a reason. There were quite a few concepts that needed to be explained: identifying the forces of weight and normal, determining their directions and relative sizes, knowing when to quit drawing, and knowing when to quit adding forces.
Second example: a person floating in still water. We could draw a stick figure, but that has too much unnecessary detail. Remember, analysis is about breaking up complex situations into a set of simple things. Draw a box to represent the person. Draw a wavy line to represent water if you feel like being fancy. Identify the forces acting on the person. They're on earth and they have mass, therefore they have weight. But we all know what it's like to float in water. You feel weightless. There must be a second force to counteract the weight. The force experienced by objects immersed in a fluid is called buoyancy. The person is pulled down by gravity and buoyed up by buoyancy. Since the person is neither rising nor sinking nor moving in any other direction, these forces must cancel.
W = B
In summary, draw a box with two arrows of equal lengths coming out of the center, one pointing up and one pointing down. Label the one pointing down weight (or W or Fg) and the one pointing up buoyancy (or B or Fb).
Buoyancy is the force that objects experience when they are immersed in a fluid. Fluids are substances that can flow. All liquids and gases are fluids. Air is a gas, therefore air is a fluid. But wait, wasn't the book in the previous example immersed in the air? I said there were only three objects in that problem: the book, the table, and the earth. What about the air? Shouldn't we draw a second upward arrow on the book to represent the buoyant force of the air on the book?
The air does indeed exist and it does indeed exert an upward force on the book, but does adding an extra arrow to the previous example really help us understand the situation in any way? Probably not. People float in water and even when they sink they feel lighter in water. The buoyant force in this example is significant. It's what the problem's probably all about. Books in the air just feel like books. Whatever buoyant force is exerted on them is imperceptible and quite difficult to measure.
Analysis is a skill. It isn't a set of procedures one follows. When you reduce a situation to its essence you have to make a judgment call. Sometimes small effects are worth studying and sometimes they aren't. An observant person deals with the details that are significant and quietly ignores the rest. An obsessive person pays attention to all details equally. The former are mentally healthy. The latter are mentally ill.
Third example: a wrecking ball hanging vertically from a cable. Start by drawing a box. No wait, that's silly. Draw a circle. It's a simple shape and it's the shape of the actual thing itself. Draw a line coming out the top if you feel so inclined. Keep it light, however. You don't want to be distracted by it when you add in the forces.
The wrecking ball has mass. It's on the earth (in the earth's gravitational field to be more precise). Therefore it has weight. Weight points down. One vector done.
The wrecking ball is suspended. It isn't falling. Therefore something is acting against gravity. That thing is the cable which suspends the ball. The force it exerts is called tension. The cable is vertical. Therefore the force is vertical. Gravity down. Tension up. Size?
Nothing's going anywhere. This sounds like the previous two questions. Tension and weight cancel.
W = T
In summary, draw a circle with two arrows of equal length coming out of the center, one pointing up and one pointing down. Label the one pointing down weight (or W or Fg) and the one pointing up tension (or T or Ft).
Fourth example: a helicopter hovering in place. How do you draw a helicopter? A box. What if you're tired of drawing boxes? A circle is a good alternative. What if even that's too much effort? Draw a small circle, I suppose. What if I want to try drawing a helicopter? Extra credit will not be awarded.
You know the rest of the story. All objects have weight. Draw an arrow pointing down and label it. The helicopter is neither rising nor falling. What keeps it up? The rotor. What force does the rotor apply? A rotor is a kind of wing and wings provide lift. Draw an arrow pointing up and label it.
The helicopter isn't sitting on the ground, so there is no normal force. It's not a hot air balloon or a ship at sea, so buoyancy isn't significant. There are no strings attached, so tension is nonexistent. In other words, stop drawing forces. Have I mentioned that knowing when to quit is an important skill? If not, I probably should have.
Once again, we have an object going nowhere fast. When this happens it should be somewhat obvious that the forces must cancel.
W = L
In summary, draw a rectangle with two arrows of equal lengths coming out of the center, one pointing up and one pointing down. Label the one pointing down weight (or W or Fg) and the one pointing up lift (or L or Fℓ).
Let's do one more free body diagram for practice: a child pushing a wagon across level ground.
First, establish what the problem is about. This is somewhat ambiguous. Are we being asked to draw the child or the wagon or both? The long answer is, "it depends." The short answer is, "I am telling you that I want you to deal with the wagon." Draw a rectangle to represent the wagon.
Next, identify the forces. Gravity pulls everything down, so draw an arrow pointing down and label it weight (or W or Fg according to your preference). It is not falling, but lies on solid ground. That means a normal force is present. The ground is level (i.e., horizontal), so the normal force points up. Draw an arrow pointing up and label it normal (or N or Fn). The wagon is not moving vertically so these forces are equal. Draw the arrows representing normal and weight with equal length.
W = N
The child is pushing the wagon. We have to assume he's using the wagon for its intended purpose and is pushing it horizontally. I read left to right, which means I prefer using right for the forward direction on paper, blackboards, whiteboards, and computer displays. Draw an arrow to the right coming out of the center of the block. I see no reason to give this force a technical name so let's just call it push (P). If you disagree with me, there is an option. You could call it the applied force (Fa). That has the benefit of making you sound well-educated, but also has the drawback of being less precise. Calling a force an applied force says nothing about it since all forces have to be applied to exist. The word push is also a bit vague since all forces are a kind of push or pull, but pushing is something we generally think of as being done by hands. Since there is no benefit to using technobabble and the plain word push actually describes what the child is doing, we'll use the word push.
Motion on the earth does not take place in a vacuum. When one thing moves, it moves through or across another. When a wheel turns on an axle, the two surfaces rub against one another. This is called dry friction. Grease can be used to separate the solid metal parts, but this just reduces the problem to layers within the grease sliding past one another. This is called viscous friction. Pushing a wagon forward means pushing the air out of the way. This is another kind of viscous friction called drag. Round wheels sag when loaded, which makes them difficult to rotate. This is called rolling resistance. These resistive forces are often collectively called friction and they are everywhere. A real world analysis of any situation that involves motion must include friction. Draw an arrow to the left (opposite the assumed direction of motion) and label it friction (or f or Ff).
Now for the tricky part. How do the horizontal forces compare? Is the push greater than or less than the friction? To answer this question, we first need to do something that physicists are famous for. We are going to exit the real world and enter a fantasy realm. We are going to pretend that friction doesn't exist.
Watch the swinging pendulum. Your eyes are getting heavy. You are getting sleepy. Sleepy. I am going to count to three. When I say the word three you will awake in a world without friction. One. Two. Three. Welcome to the real world. No wait, that's a line from the Matrix.
Assuming hypnosis worked, you should now slide off whatever it is you're sitting on and fall to the ground. While you're down there I'd like you to answer this seemingly simple question. What does it take to make something move? More precisely, what does it take to make something move with a constant velocity?
In the real world where friction is everywhere, motion winds down. Hit the brakes of your car and you'll come to a stop rather quickly. Turn the engine of your car off and you'll come to a stop gradually. Bowl a bowling ball down your lane and you probably won't perceive much of a change in speed. (If you're a good bowler, however, you're probably used to seeing the ball curve into the pocket. Remember, velocity is speed plus direction. Whenever either one changes, velocity changes.) Slap a hockey puck with a hockey stick and you'll basically see it move with one speed in one direction. I've chosen these examples and presented them in this order for a reason. There's less friction in coasting to a stop than braking to a stop. There's less friction in a hockey puck on ice than a bowling ball on a wooden lane.
How about an example that's a little less everyday? Push a railroad car on a level track. Think you can't do it? Well think again. I'm not asking you to push an entire train or even a locomotive — just a nice empty boxcar or subway car. I'm also not saying it's going to be easy. You may need a friend or two to help. This is something that is routinely done by railroad maintenance crews.
Galileo worked this out four hundred years ago with a thought experiment. Roll a ball down one ramp and up a second one and it will climb back to nearly the height it started from. Make the second ramp less steep and the ball rolls farther before it gets back to that height. Make the second ramp perfectly horizontal and, with no friction to slow it down, the ball would just keep rolling at a constant speed forever. Take friction away and a moving object keeps on moving. That realization sets the stage for the first of Newton's laws of motion.
IMAGE OF WORKERS PUSHING A SUBWAY CAR
Heaven is a place where nothing ever happens.
Isaac Newton (1642-1727) England. Did most of the work during the plague years of 1665 & 1666. Philosophiæ Naturalis Principia Mathematica (The Mathematical Principles of Natural Philosophy) published in 1687 (20+ year lag!) at Halley's expense.
Lex. I. Corpus omne perſeverare in ſtatu ſuo quieſcendi vel movendi uniformiter in directum, niſi quatenus illud a viribus impreſſis cogitur ſtatum suum mutare. Projectilia perſeverant in motibus ſuis, niſi quatenus a reſiſtentia aëris retardantur, & vi gravitatis impelluntur deorſum. Trochus, cujus partes cohærendo perpetuo retrahunt ſeſe a motibus rectilineis, non ceſſat rotari, niſi quatenus ab aëre retardantur. Majora autem planetarum & cometarum corpora motus ſuos & progreſſivos & circulares in ſpatiis minus reſiſtentibus factos conſervant diutius.

Law I. Every body perseveres in its state of rest, or of uniform motion in a right line, unless it is compelled to change that state by forces impressed thereon. Projectiles continue in their motions, so far as they are not retarded by the resistance of the air, or impelled downwards by the force of gravity. A top, whose parts by their cohesion are continually drawn aside from rectilinear motions, does not cease its rotations, otherwise than it is retarded by the air. The greater bodies of the planets and comets, meeting with less resistance in freer spaces, persevere in their motions both progressive and circular for a much longer time.
(Newton, interpreted by Elert)
An object at rest tends to remain at rest and an object in motion tends to continue moving with constant velocity unless compelled by a net external force to act otherwise.
This rather complicated sentence says quite a bit. A common misconception is that moving objects contain a quantity called "go" (or something like that — in the old days they called it "impetus") and they eventually stop since they run out of "go".
If no forces act on a body, its speed and direction of motion remain constant.
Motion is just as natural a state as is rest.
Motion (or the lack of motion) doesn't need a cause, but a change in motion does.
Definitio. III. Materiæ vis insita est potentia resistendi, qua corpus unumquodque, quantum in se est, perseverat in statu suo vel quiescendi vel movendi uniformiter in directum.

Definition III. The vis insita, or innate force of matter, is a power of resisting, by which every body endeavours to persevere in its present state, whether it be of rest, or of moving uniformly forward in a right line. …

Definitio. IV. Vis impressa est actio in corpus exercita, ad mutandum ejus statum vel quiescendi vel movendi uniformiter in directum. Consistit hæc vis in actione sola, neque post actionem permanet in corpore. Perseverat enim corpus in statu omni novo per solam vim inertiæ. Est autem vis impressa diversarum originum, ut ex ictu, ex pressione, ex vi centripeta.

Definition IV. An impressed force is an action exerted upon a body, in order to change its state, either of rest, or of moving uniformly forward in a right line. This force consists in the action only; and remains no longer in the body when the action is over. For a body maintains every new state it acquires, by its vis inertiæ only. Impressed forces are of different origins as from percussion, from pressure, from centripetal force.
In general, inertia is resistance to change. In mechanics, inertia is the resistance to change in velocity or, if you prefer, the resistance to acceleration.
In general, a force is an interaction that causes a change. In mechanics, a force is that which causes a change in velocity or, if you prefer, that which causes an acceleration.
When more than one force acts on an object it is the net force that is important. Since force is a vector quantity, use geometry instead of arithmetic when combining forces.
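Here is a small sketch of what "geometry instead of arithmetic" means in practice, adding two invented forces component by component; the 3 N and 4 N values are assumptions chosen only to make the arithmetic tidy.

```python
import math

# Two assumed forces acting on the same object, given as (x, y) components in newtons
F1 = (3.0, 0.0)    # 3 N to the right
F2 = (0.0, 4.0)    # 4 N upward

net = (F1[0] + F2[0], F1[1] + F2[1])
magnitude = math.hypot(*net)                          # 5 N, not 3 + 4 = 7 N
direction = math.degrees(math.atan2(net[1], net[0]))  # about 53 degrees above the horizontal

print(f"Net force: {magnitude:.1f} N at {direction:.0f} degrees")
```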
External force: For a force to accelerate an object it must come from outside it. You can't pull yourself up by your own bootstraps. Anyone who says you can is literally wrong. | http://physics.info/newton-first/ | 13 |
51 | Use our NMR service for your NMR experiments.
Pulse sequences are used to excite signals that are observed in an NMR spectrometer. They range from general purpose single-pulse experiments to complex, highly sophisticated experiments that select specifically interacting nuclei. This page introduces the theory and conventions used to describe pulse sequences.
What you should already know before continuing to read this
Before you start reading about pulse sequences, you should have some knowledge of:
If you want to read about this subject first, please go to the link above.
Pulse sequences & the vector model
A spinning nucleus, such as a proton, is charged and therefore has a magnetic moment. When it enters an external magnetic field it may either align itself with the field or oppose the field. If the nuclei were in random orientations it would be impossible to observe any net effect. However, it takes energy to oppose the field so slightly fewer nuclei choose to oppose rather than align with the field. This sets up a bulk magnetic moment vector that is the sum of the individual magnetic moments and is the basis of NMR. The overall magnetic moment is called the bulk magnetization of the sample. It is possible to represent the magnetization by a vector in the direction of the magnetic field. The magnetization vector can be shown against coordinates x,y,z with it on the z-axis of a fixed or laboratory frame (fig. 1).
Fig. 1. The laboratory or stationary frame showing the bulk magnetization vector at equilibrium along the z-axis
Say that we somehow managed to move the magnetization vector by an angle β from the z-axis. (How this is done will be explained later.) In this case the magnetic vector will move around the z-axis describing a cone with the z-axis at its center. This motion occurs at a characteristic frequency of the nucleus and is called Larmor precession.
If the magnetic field is B0 then the Larmor precession frequency is given by equation 1: ν0 = γB0/2π, where γ is the gyromagnetic ratio of the nucleus.
The frequency, ν0 is the same frequency that will be observed in the NMR spectrum. In the magnet there is a detector (electric coil) that measures the Larmor frequency. The measurement is carried out with a detector in the x,y-plane and when the magnetic vector crosses it during precession it induces a measurable electric current in the coil. The process is similar to the way electric current is generated by a rotating magnet in a coil.
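As a rough numerical check of equation 1 (the field strength below is an assumed value and the gyromagnetic ratio is the usual textbook figure for 1H), the Larmor frequency can be evaluated like this:

```python
import math

gamma_1H = 2.675e8      # gyromagnetic ratio of 1H in rad s^-1 T^-1 (approximate)
B0 = 11.7               # assumed static field in tesla

nu0 = gamma_1H * B0 / (2 * math.pi)   # Larmor frequency, equation 1
print(f"1H Larmor frequency at {B0} T: {nu0/1e6:.0f} MHz")   # about 500 MHz
```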
We will now deal with how to move the magnetization vector away from equilibrium. The magnetic field lies along the z-axis and, at equilibrium, so does the magnetization vector. The vector is moved by transmitting a signal (a radiofrequency signal – rf) from a coil in the detector in the x,y-plane. The problem we have to deal with is that the static magnetic field is much stronger than any electrical signal that can be transmitted through the probe coil of the spectrometer. Instead of using a fixed frame of reference, from here on we use a coordinate framework in which the x,y-plane rotates about the z-axis at the observation frequency (usually hundreds of millions of times per second), close to the Larmor precession rate of the material under study (Fig. 2).
Fig. 2. The rotating frame, rotating at or near the Larmor frequency, showing the bulk magnetization vector at equilibrium along the z-axis
In the rotating frame, the residual field along the z-axis is greatly reduced and becomes comparable to the rf signal, which appears as a fixed field vector in the x,y-plane relative to the rotating frame. The correct choice of the rf signal frequency allows the movement of the magnetization vector from the z-axis to the x,y-plane.
The magnetization vector is moved by an angle proportional to the length and intensity of the pulse. If the vector is moved by 90° then the process is called a 90° pulse (fig. 3). It is possible to arrange a 180° pulse so that the magnetization vector goes from +z to –z.
Fig. 3. Effect of a 90°x pulse. The magnetization vector is rotated to the y-axis.
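The usual quantitative form of this statement is that the flip angle is the product of the gyromagnetic ratio, the rf field strength and the pulse duration, β = γ B1 tp. A minimal sketch with assumed values:

```python
import math

gamma_1H = 2.675e8      # rad s^-1 T^-1, approximate 1H gyromagnetic ratio
B1 = 5.9e-4             # assumed rf field strength in tesla
tp = 10e-6              # assumed pulse duration: 10 microseconds

beta = math.degrees(gamma_1H * B1 * tp)   # flip angle in degrees
print(f"Flip angle: {beta:.0f} degrees")  # close to a 90 degree pulse for these values
```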
Most NMR spectroscopic measurements are concerned with measuring more than one signal and each of them has a different Larmor frequency. A sufficiently strong rf pulse is needed so that all the signals, despite their different Larmor frequencies, are moved away from equilibrium together. This is called a hard pulse.
When the pulse is weaker it is possible to excite a single signal within the spectrum. This is done by choosing a radio frequency identical to the selected signal and reducing its power, thereby reducing the influence of the pulse on other signals. Such pulses are known as selective pulses or soft pulses and may be specially shaped in order to tailor their excitation profile.
It is possible to apply a magnetic field gradient to the sample. While the field gradient is applied the resonant frequency, which is proportional to the magnetic field, is different in different parts of the NMR tube. If the magnetic field gradient is applied for a short period of time, the phase (direction) of the magnetization will change differently in different parts of the tube such that the overall sum of the magnetization will be zero, making the NMR signal disappear. The application of a gradient in the opposite direction allows the signal to be seen again (fig. 4). In combination with rf pulses that act as quantum filters it is possible to observe correlations between nuclei. Likewise, it is possible to measure physical movement in the sample such as diffusion.
Fig. 4. Effect of a magnetic field gradient pulse. The magnetization vector rotates differently at different positions in the tube, cancelling out the total signal. A refocusing gradient pulse can make the total signal reappear depending on its sign and intervening rf pulses.
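A toy calculation, with all numbers assumed, shows how a gradient pulse dephases the signal across the tube and how an equal and opposite gradient brings it back:

```python
import numpy as np

gamma = 2.675e8                      # rad s^-1 T^-1, approximate 1H value
G = 0.1                              # assumed gradient strength, T/m
tau = 1e-3                           # assumed gradient duration, s
z = np.linspace(-0.01, 0.01, 1001)   # positions along an assumed 2 cm sample (m)

phase = gamma * G * z * tau                       # phase picked up at each position
dephased = np.abs(np.mean(np.exp(1j * phase)))    # close to 0: the signal has disappeared
refocused = np.abs(np.mean(np.exp(1j * phase) * np.exp(-1j * phase)))  # opposite gradient undoes it
print(dephased, refocused)                        # ~0.0 and 1.0
```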
When the magnetization is not at equilibrium, it returns slowly (typically over a period of seconds) to the equilibrium magnetization along the z-axis (fig. 5). In the process, radiofrequency radiation is emitted. This is acquired at the end of the pulse sequence and is called the Free Induction Decay (FID).
Fig. 5. Free induction decay of an excited magnetization vector towards equilibrium in the rotating frame. The magnetization precesses around the z-axis while approaching it.
Pulse sequences are represented diagrammatically (fig. 6). Throughout this website, radiofrequency pulses are represented as blue rectangles whose widths represent their duration and whose heights represent their intensity (not to scale). Magnetic field pulses are represented as red rectangles whose heights represent their intensities. Decoupling is represented as a grey rectangle. The free induction decay (FID) is represented as a green decaying sinusoid.
Fig. 6. Symbols used in pulse sequences
Fig. 7 shows the regular pulse sequence for 1D acquisition using the symbols described above. The sequence starts with a period of time to allow magnetic equilibration known as the relaxation time. There follows a hard radiofrequency pulse that excites the nucleus, transferring the magnetization into the x,y-plane. As the nuclei relax towards equilibrium they emit a radiofrequency signal known as the free induction decay (FID). This sequence relates to only one type of nucleus (such as 1H) which is detected in the observed channel in the experiment.
Fig. 7. Basic 1D-NMR pulse sequence
Many experiments involve more than one nucleus and therefore require the use of more than one radiofrequency channel. Note that each of the rf channels is at a different frequency, matched to a different nucleus. One of the simplest is the decoupled experiment such as 13C decoupled from 1H. In this experiment (fig. 8), 13C is the observed channel and 1H is the coupled channel (that is decoupled continuously throughout the experiment).
Fig. 8. Decoupling pulse sequence when protons are decoupled from carbons
The coupled channel may be used to selectively excite the observed channel in a two-dimensional (2D) experiment. The evolution time, t1, is the time that is incremented between each row of a 2D acquisition and makes up the extra dimension. In a 2D pulse sequence the symbol for the acquisition time axis is t2. For example in a simple old-fashioned 1H-13C 2D correlation, the coupled protons are excited and their magnetization allowed to precess for a short period before being used to modulate the carbon signals (fig. 9).
Fig. 9. Basic heteronuclear correlation sequence showing excitation in two channels
Many modern pulse sequences include a magnetic field gradient channel that is used to selectively refocus signals. For example Fig. 10 shows an example of a three-channel pulse program. In two of the channels, radiofrequency pulses are transmitted and in the third, magnetic gradient pulses are applied. The proton channel is the observed channel while the second channel is that of the 13C nucleus coupled to proton. Transmission on all the channels is in parallel. The diagram shows a pulse sequence for a 2D experiment, HMBC that measures proton-carbon correlation.
Under each rf pulse its pulse angle and phase are written. Under the rf channels there is a time-scale (Δ and t1). The grey lines on the diagram are not usually drawn but have been added here (and only here) for clarity. The relative gradient intensities are shown under the gradient channel. Note that the 180° pulse is drawn wider than the others indicating its longer duration but remember that the times are not drawn to scale. One of the pulse phases is shown as ±x indicating that this sequence is acquired twice and added once with phase +x and once with phase –x.
Fig. 10. Three channel pulse sequence for the 2D HMBC experiment | http://chem.ch.huji.ac.il/nmr/techniques/1d/pulseq.htm | 13 |
132 | Coordinate system that allows description of time and position of points relative to a body. The axes, or lines, emanate from a position called the origin. As a point moves, its velocity can be described in terms of changes in displacement and direction. Reference frames are chosen arbitrarily. For example, if a person is sitting in a moving train, the description of the person's motion depends on the chosen frame of reference. If the frame of reference is the train, the person is considered to be not moving relative to the train; if the frame of reference is the Earth, the person is moving relative to the Earth.
One of these fictitious forces invariably points directly outward from the axis of rotation, with magnitude proportional to the square of the rotation rate of the frame. In much of the literature on classical dynamics, this term is called centrifugal force.
The apparent motion that may be ascribed to centrifugal force is sometimes called the centrifugal effect.
It is sometimes convenient to treat the first term on the right hand side as if it actually were the absolute acceleration, and not merely the acceleration in the rotating frame. That is, we pretend the rotating frame is an inertial frame, and move the other terms over to the force side of the equation, and treat them as fictitious forces. When this is done, the equation of motion has the form: m arot = F − m Ω × (Ω × r) − 2m Ω × vrot (for a frame rotating at a constant rate Ω).
The centrifugal force is conservative and has a potential energy of the form U = −½ m ω² r², where r is the radius from the axis of rotation. This result can be verified by taking the gradient of the potential to obtain the radially outward force: F = −∂U/∂r = m ω² r.
The potential energy is useful, for example, in calculating the form of the water surface in a rotating bucket. Let the height of the water be h(r): then the potential energy per unit mass contributed by gravity is g h(r) (g = acceleration due to gravity) and the total potential energy per unit mass on the surface is g h(r) − ½ ω² r². In a static situation (no motion of the fluid in the rotating frame), this energy is constant independent of position r. Requiring the energy to be constant, we obtain the parabolic form: h(r) = h(0) + (ω²/2g) r²
where h(0) is the height at r = 0 (the axis). See Figure 1.
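A short sketch, with an assumed rotation rate and bucket radius, evaluates the parabolic surface just derived:

```python
import numpy as np

g = 9.81              # m/s^2
omega = 5.0           # assumed rotation rate, rad/s
r = np.linspace(0.0, 0.10, 6)   # radii across an assumed 10 cm bucket (m)

h = omega**2 * r**2 / (2 * g)   # height of the surface above its value at the axis
for ri, hi in zip(r, h):
    print(f"r = {ri:.2f} m  ->  rise = {1000*hi:.1f} mm")
```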
Similarly, the potential energy of the centrifugal force is a minor contributor to the complex calculation of the height of the tides on the Earth (where the centrifugal force is included to account for the rotation of the Earth around the Earth-Moon center of mass).
The principle of operation of the centrifuge also can be simply understood in terms of this expression for the potential energy, which shows that it is favorable energetically when the volume far from the axis of rotation is occupied by the heavier substance.
It has been mentioned that to deal with motion in a rotating frame of reference, one alternative to a solution based upon translating everything into an inertial frame instead is to apply Newton's laws of motion in the rotating frame by adding pseudo-forces, and then working directly in the rotating frame. Next is a simple example of this method.
Figure 3 illustrates that a body that is stationary relative to the non-rotating inertial frame S' appears to be rotating when viewed from the rotating frame S, which is rotating at angular rate Ω. Therefore, application of Newton's laws to what looks like circular motion in the rotating frame S at a radius R, requires an inward centripetal force of −m Ω2 R to account for the apparent circular motion. According to observers in S, this centripetal force in the rotating frame is provided as a net force that is the sum of the radially outward centrifugal pseudo force m Ω2 R and the Coriolis force −2m Ω × vrot. To evaluate the Coriolis force, we need the velocity as seen in the rotating frame, vrot. According to the formulas in the Derivation section, this velocity is given by −Ω × R. Hence, the Coriolis force (in this example) is inward, in the opposite direction to the centrifugal force, and has the value −2m Ω2 R. The combination of the centrifugal and Coriolis force is then m Ω2 R−2m Ω2 R = −m Ω2 R, exactly the centripetal force required by Newton's laws for circular motion.
For further examples and discussion, see below, and see Taylor.
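A quick numerical check of this bookkeeping may help; the mass, radius and rotation rate below are made-up values, and radially outward is taken as positive:

```python
m = 2.0        # kg, assumed mass
R = 1.5        # m, assumed distance from the rotation axis
Omega = 3.0    # rad/s, assumed rotation rate of the frame

centrifugal = m * Omega**2 * R          # outward pseudo force
coriolis = -2 * m * Omega**2 * R        # the apparent speed is Omega*R, so this force points inward
net_fictitious = centrifugal + coriolis

centripetal_needed = -m * Omega**2 * R  # required for the circular motion seen in the rotating frame
print(net_fictitious, centripetal_needed)   # both -27.0 N: the forces account for the apparent motion
```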
Figure 4 shows a simplified version of an apparatus for studying centrifugal force called the "whirling table". The apparatus consists of a rod that can be whirled about an axis, causing a bead to slide on the rod under the influence of centrifugal force. A cord ties a weight to the sliding bead. By observing how the equilibrium balancing distance varies with the weight and the speed of rotation, the centrifugal force can be measured as a function of the rate of rotation and the distance of the bead from the center of rotation.
From the viewpoint of an inertial frame of reference, equilibrium results when the bead is positioned to select the particular circular orbit for which the weight provides the correct centripetal force.
The whirling table is a lab experiment, and standing there watching the table you have a detached viewpoint. It seems pretty much arbitrary whether to deal with centripetal force or centrifugal force. But if you were the bead, not the lab observer, and if you wanted to stay at a particular position on the rod, the centrifugal force would be how you looked at things. Centrifugal force would be pushing you around. Maybe the centripetal interpretation would come to you later, but not while you were coping with matters. Centrifugal force is not just mathematics.
Figure 5 shows two identical spheres rotating about the center of the string joining them. This sphere example is one used by Newton himself to discuss the detection of rotation relative to absolute space. (A more practical experiment is to observe the isotropy of the cosmic background radiation.) The axis of rotation is shown as a vector Ω with direction given by the right-hand rule and magnitude equal to the rate of rotation: |Ω| = ω. The angular rate of rotation ω is assumed independent of time (uniform circular motion). Because of the rotation, the string is under tension. (See reactive centrifugal force.) The description of this system next is presented from the viewpoint of an inertial frame and from a rotating frame of reference.
where uR is a unit vector pointing from the axis of rotation to one of the spheres, and Ω is a vector representing the angular rotation, with magnitude ω and direction normal to the plane of rotation given by the right-hand rule, m is the mass of the ball, and R is the distance from the axis of rotation to the spheres (the magnitude of the displacement vector, |xB| = R, locating one or the other of the spheres). According to the rotating observer, shouldn't the tension in the string be twice as big as before (the tension from the centrifugal force plus the extra tension needed to provide the centripetal force of rotation)? The reason the rotating observer sees zero tension is because of yet another fictitious force in the rotating world, the Coriolis force, which depends on the velocity of a moving object. In this zero-tension case, according to the rotating observer the spheres now are moving, and the Coriolis force (which depends upon velocity) is activated. According to the article fictitious force, the Coriolis force is: FCoriolis = −2m Ω × vB
where R is the distance to the object from the center of rotation, and vB is the velocity of the object subject to the Coriolis force, |vB| = ωR.
In the geometry of this example, this Coriolis force has twice the magnitude of the ubiquitous centrifugal force and is exactly opposite in direction. Therefore, it cancels out the ubiquitous centrifugal force found in the first example, and goes a step further to provide exactly the centripetal force demanded by uniform circular motion, so the rotating observer calculates there is no need for tension in the string − the Coriolis force looks after everything.
In the inertial frame, the string supplies the centripetal force needed to keep the spheres on their circular path, a tension of magnitude m ωI² R directed inward. This force also is the force due to tension seen by the rotating observers. The rotating observers see the spheres in circular motion with angular rate ωS = ωI − ωR (S = spheres). That is, if the frame rotates more slowly than the spheres, ωS > 0 and the spheres advance counterclockwise around a circle, while for a more rapidly moving frame, ωS < 0, and the spheres appear to retreat clockwise around a circle. In either case, the rotating observers see circular motion and require a net inward centripetal force: Fcentripetal = −m ωS² R uR
However, this force is not the tension in the string. So the rotational observers conclude that a force exists (which the inertial observers call a fictitious force) so that: Ffict = −m ωS² R uR + m ωI² R uR = m (ωI² − ωS²) R uR
The fictitious force changes sign depending upon which of ωI and ωS is greater. The reason for the sign change is that when ωI > ωS, the spheres actually are moving faster than the rotating observers measure, so they measure a tension in the string that actually is larger than they expect; hence, the fictitious force must increase the tension (point outward). When ωI < ωS, things are reversed so the fictitious force has to decrease the tension, and therefore has the opposite sign (points inward). Incidentally, checking the fictitious force needed to account for the tension in the string is one way for an observer to decide whether or not they are rotating – if the fictitious force is zero, they are not rotating. (Of course, in an extreme case like the gravitron amusement ride, you do not need much convincing that you are rotating, but standing on the Earth's surface, the matter is more subtle.)
The general fictitious force in a rotating frame is Ffict = −2m Ω × vB − m Ω × (Ω × xB) − m dΩ/dt × xB. The subscript B refers to quantities referred to the non-inertial coordinate system. Full notational details are in Fictitious force. For constant angular rate of rotation the last term is zero. To evaluate the other terms we need the position of one of the spheres: xB = R uR
and the velocity of this sphere as seen in the rotating frame: vB = ωS R uθ
where uθ is a unit vector perpendicular to uR pointing in the direction of motion.
The vector of rotation Ω = ωR uz (uz a unit vector in the z-direction), and Ω × uR = ωR (uz × uR) = ωR uθ ; Ω × uθ = −ωR uR. The centrifugal force is then: Fcf = −m Ω × (Ω × xB) = m ωR² R uR
The Coriolis force is FCor = −2m Ω × vB = 2m ωR ωS R uR, and has the ability to change sign, being outward when the spheres move faster than the frame (ωS > 0 ) and being inward when the spheres move slower than the frame (ωS < 0 ). Combining the terms: Ffict = m ωR² R uR + 2m ωR ωS R uR = m ωR (ωR + 2 ωS) R uR = m (ωI² − ωS²) R uR, which agrees with the fictitious force found above.
Figure 7 shows a ball dropping vertically (parallel to the axis of rotation Ω of the rotating frame). For simplicity, suppose it moves downward at a fixed speed in the inertial frame, occupying successively the vertically aligned positions numbered one, two, three. In the rotating frame it appears to spiral downward, and the right side of Figure 7 shows a top view of the circular trajectory of the ball in the rotating frame. Because it drops vertically at a constant speed, from this top view in the rotating frame the ball appears to move at a constant speed around its circular track. A description of the motion in the two frames is next.
To the rotating observer, the ball travels a circular path and therefore appears to require a net inward (centripetal) force of magnitude m ω² R, where ω is the angular rate of rotation, m is the mass of the ball, and R is the radius of the spiral in the horizontal plane. Because there is no apparent source for such a force (hence the label "fictitious"), the rotating observer concludes it is just "a fact of life" in the rotating world that there exists an inward force with this behavior. Inasmuch as the rotating observer already knows there is a ubiquitous outward centrifugal force in the rotating world, how can there be an inward force? The answer is again the Coriolis force: the component of velocity tangential to the circular motion seen in the right panel of Figure 7 activates the Coriolis force, which cancels the centrifugal force and, just as in the zero-tension case of the spheres, goes a step further to provide the centripetal force demanded by the calculations of the rotating observer. Some details of evaluation of the Coriolis force are shown in Figure 8.
Because the Coriolis force and centrifugal forces combine to provide the centripetal force the rotating observer requires for the observed circular motion, the rotating observer does not need to apply any additional force to the object, in complete agreement with the inertial observer, who also says there is no force needed. One way to express the result: the fictitious forces look after the "fictitious" situation, so the ball needs no help to travel the perceived trajectory: all observers agree that nothing needs to be done to make the ball follow its path.
To show a different frame of reference, let's revisit the dropping ball example in Figure 7 from the viewpoint of a parachutist falling at constant speed to Earth (the rotating platform). The parachutist aims to land upon the point on the rotating ground directly below the drop-off point. Figure 9 shows the vertical path of descent seen in the rotating frame. The parachutist drops at constant speed, occupying successively the vertically aligned positions one, two, three.
In the stationary frame, let us suppose the parachutist jumps from a helicopter hovering over the destination site on the rotating ground below, and therefore traveling at the same speed as the target below. The parachutist starts with the necessary speed tangential to his path (ωR) to track the destination site. If the parachutist is to land on target, the parachute must spiral downward on the path shown in Figure 9. The stationary observer sees a uniform circular motion of the parachutist when the motion is projected downward, as in the left panel of Figure 9. That is, in the horizontal plane, the stationary observer sees a centripetal force at work, -m ω2 R, as is necessary to achieve the circular path. The parachutist needs a thruster to provide this force. Without thrust, the parachutist follows the dashed vertical path in the left panel of Figure 9, obeying Newton's law of inertia.
The stationary observer and the observer on the rotating ground agree that there is no vertical force involved: the parachutist travels vertically at constant speed. However, the observer on the ground sees the parachutist simply drop vertically from the helicopter to the ground, following the vertically aligned positions one, two, three. There is no force necessary. So how come the parachutist needs a thruster?
The ground observer has this view: there is always a centrifugal force in the rotating world. Without a thruster, the parachutist would be carried away by this centrifugal force and land far off the mark. From the parachutist's viewpoint, trying to keep the target directly below, the same appears true: a steady thrust radially inward is necessary, just to hold a position directly above target. Unlike the dropping ball case, where the fictitious forces conspired to produce no need for external agency, in this case they require intervention to achieve the trajectory. The basic rule is: if the inertial observer says a situation demands action or does not, the fictitious forces of the rotational frame will lead the rotational observer to the same conclusions, albeit by a different sequence.
Notice that there is no Coriolis force in this discussion, because the parachutist has zero horizontal velocity from the viewpoint of the ground observer.
There is evidence that Sir Isaac Newton originally conceived circular motion as being caused by a balance between an inward centripetal force and an outward centrifugal force.
The modern conception of centrifugal force appears to have its origins in Christiaan Huygens' paper De Vi Centrifuga, written in 1659. It has been suggested that the idea of circular motion as caused by a single force was introduced to Newton by Robert Hooke.
Newton described the role of centrifugal force upon the height of the oceans near the equator in the Principia:
The effect of centrifugal force in countering gravity, as in this behavior of the tides, has led centrifugal force sometimes to be called "false gravity" or "imitation gravity" or "quasi-gravity".
Later scientists found this view unwarranted: they pointed out (as did Newton) that the laws of mechanics were the same for all observers that differed only by uniform translation; that is, all observers that differed in motion only by a constant velocity. Hence, the "fixed stars" or "absolute space" was not preferred, but only one of a set of frames related by Galilean transformations. The inadequacy of the notion of "absolute space" in Newtonian mechanics is spelled out by Blagojević:
Ultimately this notion of the transformation properties of physical laws between frames played a more and more central role. It was noted that accelerating frames exhibited "fictitious forces" like the centrifugal force. These forces did not behave under transformation like other forces, providing a means of distinguishing them. This peculiarity of these forces led to the names inertial forces, pseudo-forces or fictitious forces. In particular, fictitious forces did not appear at all in some frames: those frames differing from that of the fixed stars by only a constant velocity. Thus, the preferred frames, called "inertial frames", were identifiable by the absence of fictitious forces.
The idea of an inertial frame was extended further in the special theory of relativity. This theory posited that all physical laws should appear of the same form in inertial frames, not just the laws of mechanics. In particular, Maxwell's equations should apply in all frames. Because Maxwell's equations implied the same speed of light in the vacuum of free space for all inertial frames, inertial frames now were found to be related not by Galilean transformations, but by Poincaré transformations, of which a subset is the Lorentz transformations. That posit led to many ramifications, including Lorentz contractions and relativity of simultaneity. Einstein succeeded, through many clever thought experiments, in showing that these apparently odd ramifications in fact had very natural explanation upon looking at just how measurements and clocks actually were used. That is, these ideas flowed from operational definitions of measurement coupled with the experimental confirmation of the constancy of the speed of light.
Later the general theory of relativity further generalized the idea of frame independence of the laws of physics, and abolished the special position of inertial frames, at the cost of introducing curved space-time. Following an analogy with centrifugal force (sometimes called "artificial gravity" or "false gravity"), gravity itself became a fictitious force, as enunciated in the principle of equivalence.
In short, centrifugal force played a key early role in establishing the set of inertial frames of reference and the significance of fictitious forces, even aiding in the development of general relativity.
Nevertheless, all of these systems can also be described without requiring the concept of centrifugal force, in terms of motions and forces in an inertial frame, at the cost of taking somewhat more care in the consideration of forces and motions within the system. | http://www.reference.com/browse/reference+frame | 13
69 | Download PDF (1027 KB)
This is a level 3 number and measurement activity from the Figure It Out series.
use multiplication to solve perimeter and area problems.
Number Framework links
Use this activity to encourage transition from advanced additive strategies (stage 6) to advanced multiplicative strategies (stage 7).
This activity is based on the perimeter and area of rectangles. As a general introduction, have your students look at this rectangle where the side lengths are given as l and w.
The area can be found by: area = l x w.
The perimeter can be found by: perimeter = 2 x l + 2 x w.
Problems that involve maximising or minimising one measurement while either holding the other constant or minimising it are common in the real world. Fred’s fence is typical of constrained maximisation or minimisation problems.
Students exploring question 1 are likely to try different side lengths that will result in an area of 80 square metres. The problem requires a systematic approach, so encourage your students to organise their results in a table or organised list:
|Side A (m)|Side B (m)|Area (m²)|Perimeter (m)|
|1|80|80|162|
|2|40|80|84|
|4|20|80|48|
|5|16|80|42|
|8|10|80|36|
In this way, the students can find all the solutions with whole-number side measurements and calculate the perimeters at the same time. They may notice that the closer the side measurements become to each other, the smaller the perimeter becomes.
Encourage your students to explore the minimum perimeters for rectangles with the areas 16, 36, and 64 (square numbers). They will find that the perimeter is minimised when the rectangle is a square. In this situation, the length of each side is the square root of the area. They can then go back to question 1 with the knowledge that the solution is the closest whole number to √80 = 8.944 (to 4 significant figures). Students are likely to argue that the question asked for a rectangle and that a rectangle is not a square. It is worth stopping to discuss this reasonable view. In everyday use, a rectangle and a square are different shapes, but in mathematics, a square is just a special case of a rectangle.
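Teachers who want to check the whole-number possibilities quickly could use a few lines of code such as the sketch below (a spreadsheet does the same job); the area of 80 square metres comes from the activity.

```python
area = 80
for a in range(1, int(area**0.5) + 1):
    if area % a == 0:
        b = area // a
        print(f"{a} m x {b} m  ->  perimeter {2*(a + b)} m")
# The perimeter shrinks as the sides get closer to the square root of 80,
# that is, as the rectangle gets closer to a square.
```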
Provide the students with a set of rectangles and squares and ask them to describe the attributes of these shapes. Encourage them to come up with minimal definitions, listing just the attributes that are absolutely necessary to define the shape. Students will typically say that a rectangle has:
• 4 sides
• 4 right-angled corners
• 2 pairs of parallel sides.
If you ask them to draw a 4-sided polygon that has right-angled corners but does not have 2 pairs of parallel sides, they will find that this is impossible. So it is not necessary to state that opposite sides must be parallel. This gives us the minimal definition for a rectangle. The minimal definition of a square is “a 4-sided polygon with right-angled corners and equal sides”. Squares are therefore a subclass of rectangles.
In the Investigation, students try to find rectangles that have the same number for the measurement of their perimeter as they do for the measurement of their area.
One solution is a square with sides of 4 metres. Its perimeter is 16 metres, and its area is 16 square metres. If they are systematic, students should be able to establish the existence of two other whole-number solutions.
They could begin by setting the length (at, say, 2 metres) and exploring what widths might work. They will discover that no whole-number solution will work for a side length of 2. But if they then try 3, they will find that a 3 x 6 rectangle has an area of 18 square metres and a perimeter of 18 metres. 6 x 3 is a third solution, but this is not a genuinely different rectangle.
Having got this far, your students may guess that there are other rectangles that meet the requirement but that they do not have whole-number sides. There are in fact an infinite number of such rectangles. In the table below, there are six rectangles that happen to have a whole-number measurement for one of their two dimensions. You could give your students the length of side b and challenge them to find the length of side a (in bold in the table), using a trial-and-improvement strategy.
|Side a||Side b||Area||Perimeter|
There is an algebraic relationship between the pairs of values of a and b that satisfy the requirement that the number of perimeter units must be equal to the number of units of area. The relationship can be expressed in this way: a = 2b ÷ (b − 2)
(To find the length of the second side, double the length of the first and divide by its length less 2.) Students who are developing an understanding of symbolic notation may like to try using this formula to find other pairs for a and b with the help of a calculator or spreadsheet program such as that shown.
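The spreadsheet mentioned above is not reproduced here; an equivalent few lines of code, with arbitrary sample values of b, would be:

```python
for b in [3, 4, 5, 6, 8, 10, 12]:        # length of one side (must be greater than 2)
    a = 2 * b / (b - 2)                  # length of the other side, from the formula above
    print(f"b = {b}, a = {a:.2f}: area = {a*b:.2f}, perimeter = {2*(a + b):.2f}")
# In every case the area and the perimeter come out as the same number.
```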
Numeracy Project materials (see http://www.nzmaths.co.nz/numeracy-projects)
• Book 9: Teaching Number through Measurement, Geometry, Algebra and Statistics Investigating Area, page 11
Figure It Out
• Number: Book Three, Years 7–8, Level 4 Orchard Antics, page 23
• Number Sense and Algebraic Thinking: Book One, Levels 3–4 Tile the Town, Tiny!, pages 20–21
1. a. 5 different rectangular shapes are: 1 m by 80 m, 2 m by 40 m, 4 m by 20 m, 5 m by 16 m, and 8 m by 10 m. Only the last two shapes would suit the dodgems (the other three would be too narrow).
b. The 8 m by 10 m rectangle would use 36 panels and cost $108, which is cheaper than the other options. It is one of the shapes that would suit the dodgems.
2. a. There are 12 different-sized rectangles that could be made.
3. 50 m. The length must be 15 m because 10 x 15 = 150. 2 x (10 + 15) = 2 x 25 = 50 gives the perimeter.
Answers may vary. There are three whole-number solutions: 4 x 4, 3 x 6, and 6 x 3 (which is the same as 3 x 6). There is an infinite number of solutions if rectangles with only one whole-number side or no whole-number sides are included. | http://nzmaths.co.nz/resource/freds-rent-fence | 13 |