This article gives an overview of the various ways to multiply matrices.
Ordinary matrix product
By far the most important way to multiply matrices is the usual matrix multiplication. It is defined between two matrices only if the number of columns of the first matrix is the same as the number of rows of the second matrix. If A is an m-by-n matrix and B is an n-by-p matrix, then their product A×B is an m-by-p matrix given by
(AB)[i,j] = A[i,1]B[1,j] + A[i,2]B[2,j] + ... + A[i,n]B[n,j]
for each pair i and j. The algebraic system of "matrix units" summarises the abstract properties of this kind of multiplication.
The following picture shows how to calculate the (AB)12 element of A×B if A is a 2×4 matrix, and B is a 4×3 matrix. Elements from each matrix are paired off in the direction of the arrows; each pair is multiplied and the products are added. The location of the resulting number in AB corresponds to the row and column that were considered.
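To make the definition concrete, here is a small Python sketch (my own illustration; the entries of A and B below are made up for the example) that multiplies a 2-by-4 matrix by a 4-by-3 matrix using the sum formula directly.

```python
def matmul(A, B):
    """Ordinary matrix product of an m-by-n matrix A and an n-by-p matrix B."""
    m, n, p = len(A), len(B), len(B[0])
    assert all(len(row) == n for row in A), "columns of A must equal rows of B"
    C = [[0] * p for _ in range(m)]
    for i in range(m):
        for j in range(p):
            C[i][j] = sum(A[i][k] * B[k][j] for k in range(n))  # (AB)[i,j]
    return C

A = [[1, 2, 3, 4],
     [5, 6, 7, 8]]            # 2-by-4
B = [[1, 0, 2],
     [0, 1, 1],
     [3, 1, 0],
     [2, 2, 1]]               # 4-by-3
print(matmul(A, B))           # 2-by-3 result
```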
The proportions-vectors method
There is a simpler concept for multiplying matrices. Suppose you want to mix vectors together in different proportions. The matrix on the left is the list of proportions. The matrix on the right is the list of vectors. Here's how it works:
Use vector notation:
The first proportion in each mix tells how much of the first vector to use, and so on:
Remove the vector notation:
Matrix multiplication is not commutative (in general A×B ≠ B×A), except in special cases. It's easy to see why: you can't expect to switch the proportions with the vectors and get the same result. It's also easy to see why the number of columns in the proportions matrix has to be the same as the number of rows in the vectors matrix: they have to represent the same number of vectors.
This notion of multiplication is important because if A and B are interpreted as linear transformations (which is almost universally done), then the matrix product AB corresponds to the composition of the two linear transformations, with B being applied first.
The complexity of matrix multiplication, if carried out naively, is O(n^3), but more efficient algorithms do exist. Strassen's algorithm, devised by Volker Strassen in 1969 and often referred to as "fast matrix multiplication", uses a clever mapping of bilinear combinations to reduce the complexity to O(n^(log2 7)), approximately O(n^2.807). In practice, though, it is rarely used, since it is awkward to implement and lacks numerical stability. The constant factor involved is about 4.695 asymptotically; Winograd's method improves on this slightly by reducing it to an asymptotic 4.537.
The best algorithm currently known, which was presented by Don Coppersmith and S. Winograd in 1990, has an asymptotic complexity of O(n^2.376). It has been shown that the leading exponent must be at least 2.
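For concreteness, here is a compact sketch of Strassen's recursion for square matrices whose size is a power of two; it is a simplified illustration of the seven-product structure (the cutoff value, the use of NumPy, and the fallback to ordinary multiplication are my own choices, not something prescribed by the original paper).

```python
import numpy as np

def strassen(A, B, cutoff=64):
    """Multiply two n x n matrices (n a power of two) with Strassen's recursion."""
    n = A.shape[0]
    if n <= cutoff:                      # fall back to ordinary multiplication
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # The seven Strassen products
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    # Reassemble the four blocks of the result
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C

A = np.random.rand(128, 128)
B = np.random.rand(128, 128)
assert np.allclose(strassen(A, B), A @ B)   # agrees with ordinary multiplication
```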
For two matrices of the same dimensions, we have the Hadamard product or entrywise product. The Hadamard product of two m-by-n matrices A and B, denoted by A · B, is an m-by-n matrix given by
(A·B)[i,j] = A[i,j]B[i,j] for each pair i and j. For instance, the Hadamard product of the 2-by-2 matrices with rows (1, 2), (3, 4) and (5, 6), (7, 8) is the 2-by-2 matrix with rows (5, 12), (21, 32).
Note that the Hadamard product is a submatrix of the Kronecker product (see below). The Hadamard product is studied by matrix theorists, but it is virtually untouched by linear algebraists.
For any two arbitrary matrices A=(aij) and B, we have the direct product or Kronecker product A ⊗ B, defined as the block matrix obtained by replacing each entry aij of A with the block aij·B.
Note that if A is m-by-n and B is p-by-r, then A ⊗ B is an mp-by-nr matrix. Again, this multiplication is not commutative.
If A and B represent linear transformations V1 → W1 and V2 → W2, respectively, then A ⊗ B represents the tensor product of the two maps, V1 ⊗ V2 → W1 ⊗ W2.
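Both products are available in NumPy, and the remark above that the Hadamard product is a submatrix of the Kronecker product is easy to verify for square matrices; the sketch below (my own, using the same 2-by-2 matrices as the earlier example) does so.

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 6],
              [7, 8]])

hadamard = A * B            # entrywise (Hadamard) product, shape 2x2
kron = np.kron(A, B)        # Kronecker product, shape 4x4 (mp-by-nr)

# The Hadamard product sits inside the Kronecker product: select the rows and
# columns with index i*n + i, since (A kron B)[i*n+k, j*n+l] = A[i,j]*B[k,l].
n = A.shape[0]
idx = [i * n + i for i in range(n)]
assert np.array_equal(kron[np.ix_(idx, idx)], hadamard)

print(hadamard)
print(kron.shape)   # (4, 4)
```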
For an excellent treatment of Hadamard products and advanced matrix analysis, see Horn and Johnson, Topics in Matrix Analysis (Cambridge).
All three notions of matrix multiplication are associative and distributive over matrix addition:
- A(BC) = (AB)C
- A(B + C) = AB + AC
- (A + B)C = AC + BC
and compatible with scalar multiplication:
- c(AB) = (cA)B = A(cB)
The scalar multiplication of a matrix A=(aij) and a scalar r gives the product
- rA=(r aij).
If we are concerned with matrices over a ring, then the above multiplication
is sometimes called the left multiplication while the right multiplication is defined to be
- Ar=(aij r).
When the underlying ring is commutative, for example, the real or complex number field, the two multiplications are the same. However, if the ring is not commutative, such as the quaternions, they may be different. For example, for the 1-by-1 matrix A=(i) over the quaternions, left multiplication gives jA = (ji) = (-k), while right multiplication gives Aj = (ij) = (k).
- Strassen, Volker. "Gaussian Elimination is not Optimal." Numerische Mathematik 13, pp. 354-356, 1969.
- Coppersmith, D.; Winograd, S. "Matrix multiplication via arithmetic progressions." Journal of Symbolic Computation 9, pp. 251-280, 1990.
- Horn, Roger; Johnson, Charles. Topics in Matrix Analysis. Cambridge University Press, 1994.
Chapter 13 - Inverse Functions
In the second part of this book on calculus, we shall devote our study to another type of function, the exponential function, and its close relative the sine function. Before we immerse ourselves in this complex and analytical study, we first need to understand something about inverse functions.

The inverse function is, by definition, a function whose output becomes the input, or whose dependent variable becomes the independent variable. For example, consider Newton's second law, which gives the force acting on a body of mass m as a function of the acceleration given to it:

F(a) = ma

We are free to input any acceleration a, and what we get back is the force. The inverse of this force function, according to the definition, will give us the acceleration as a function of force. This is done by solving for the independent variable, a:

a(F) = F/m

Now we can let F be anything and then find the acceleration as a function of it.
The inverse of a function f(x) is commonly written as f^-1(x). Now we will look at the more general case of graphing a function and its inverse in the same coordinate plane. Given a function y = f(x), to calculate its inverse we only have to solve this equation for x, which gives x = f^-1(y). Notice that we have not really changed the function at all; we have only solved for the independent variable, so the graph of these two equations would be exactly the same. Our definition of the inverse therefore has to be slightly modified: after finding the inverse of a function, we interchange x and y to get y = f^-1(x). What does this do to the inverse function? It essentially flips the graph of f(x) about the line y = x, so that for every point (x, y) on f(x) there is a corresponding point (y, x) on the graph of the inverse function. Now both functions can be graphed in the same x-y plane.
Remember that if we just solve for the dependent variable, we are not changing the equation but merely rewriting it; for this reason its graph is the same. By interchanging x and y, we get another function of x, whose relation to f(x) is that it has been graphed as if the x-axis were the y-axis and vice versa. It is best to look at the two graphs together.

Notice how every point (x, y) on f(x) has a corresponding point (y, x) on the inverse function. The graph of the inverse function is therefore exactly the same as that of the original, except that the x-axis and y-axis have been switched.

Since every point (x, y) has a corresponding point (y, x), any output y of the original function, when input into the inverse function, should yield x back again:

f^-1(f(x)) = x

Remember that a function and its inverse are both functions of x. The way they are related is that the inverse function represents the original function with the dependent and independent variables switched around. As you can see from the first graph, when the two functions are graphed together, the inverse function contains all the points (x, y) of the first function, plotted as (y, x), with the difference that y is given as a function of x. For this reason, the important thing to understand about the inverse function is that it is obtained by solving for the independent variable and then replacing it with y, to create a function that is also a function of x and can be graphed along with the original.
Now that we know how a function and its inverse function are closely related, it brings us to the question: how are their derivatives related? Logic might tell us that instead of differentiating f^-1(x) directly, we should just take the reciprocal of the derivative of f(x). For example, if we had

f(x) = x^2, whose derivative is 2x,

then the derivative of the inverse function, f^-1(x) = √x, might be expected to be 1/(2x). But this is not the case; the derivative is

d/dx [√x] = 1/(2√x).

Let us examine the graph of f(x) and its inverse function to see what exactly is going on. Note that at x = 2 the slopes of the two graphs are not reciprocals of one another. The slopes are reciprocals only at corresponding points, that is, at (x, f(x)) on the original function and (f(x), x) on the inverse. For example, the point (3, 9) on f(x) = x^2 will have a reciprocal slope at (9, 3) on the inverse, since at this point x and y are reversed; the slope of 6 at (3, 9) becomes the reciprocal slope of 1/6 at (9, 3).

This is the important point to understand about a function and its inverse: they behave as opposites at the points (a, b) and (b, a), so at the same x-value something different is going on. The question is then: how can we find the derivative of the inverse function with respect to the x-axis? Looking at the derivative of the original function, dy/dx = 2x, we first write x in terms of y (since y = x^2, x = √y), so that dy/dx = 2√y. Taking the reciprocal gives dx/dy = 1/(2√y). By replacing x with y and y with x in this last expression we get

d/dx [f^-1(x)] = 1/(2√x).
What we have just done is calculate the derivative of the inverse function only by looking at the original function and its derivative. The reason our first guess came out as just the reciprocal of dy/dx = 2x, namely 1/(2x), is that we forgot to do the following two things:

1) Replace x with its equivalent expression in terms of y.

2) After taking the reciprocal, interchange x and y.

The slope in the following graph is dy/dx = 2x, so at x = 2 the slope is 4. By replacing x with √y, we can find the derivative with respect to the same x-axis but instead with a y-value: dy/dx = 2√y. At y = 4 the slope is 2√4 = 4, which is the answer we got using x = 2 instead.
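A quick numerical check of this relationship can be done with a few lines of Python (my own sketch; the step size h is an arbitrary choice): it estimates the slope of f(x) = x^2 at x = 2 and the slope of its inverse √x at the corresponding point x = 4, and confirms that they are reciprocals.

```python
import math

def slope(f, x, h=1e-6):
    """Estimate f'(x) with a central difference."""
    return (f(x + h) - f(x - h)) / (2 * h)

f = lambda x: x ** 2            # original function
f_inv = lambda x: math.sqrt(x)  # its inverse for x >= 0

m1 = slope(f, 2)       # slope of f at x = 2            -> about 4
m2 = slope(f_inv, 4)   # slope of f^-1 at x = f(2) = 4  -> about 0.25
print(m1, m2, 1 / m1)  # m2 should match 1/m1
```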
Since the inverse function is graphed in the same x-y plane as f(x), we can find the derivative of the inverse function with respect to the x-axis by taking the reciprocal of the expression for dy/dx written in terms of y, and then replacing every y with x and vice versa. The resulting expression is the derivative of the function's inverse with respect to x.

To summarize, we can state the following procedure for finding the derivative of the inverse function:

1) Recall that an inverse function is related to the main function in that if you reflect it over the line y = x, you will land on the main function.

2) Find the derivative of f(x).

3) Replace any x in the derivative with its y-equivalent, so as to be able to find the derivative at any given y-value.

4) Take the reciprocal of the derivative to get dx/dy, so as to be able to find the derivative with respect to the y-axis.

5) Since the inverse function is graphed with respect to x, replace every y with x and every x with y to find the derivative of the inverse function.
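The same five steps can be carried out symbolically. The sketch below uses the SymPy library (my own choice of tool; the chapter itself works everything by hand) to run the procedure on f(x) = x^2 and then checks the result against differentiating the inverse √x directly.

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)

f = x**2                              # original function, y = f(x)
dydx = sp.diff(f, x)                  # step 2: derivative of f(x) -> 2*x
x_of_y = sp.solve(sp.Eq(y, f), x)[0]  # solve y = x**2 for x -> sqrt(y)
dydx_in_y = dydx.subs(x, x_of_y)      # step 3: 2*sqrt(y)
dxdy = 1 / dydx_in_y                  # step 4: reciprocal -> 1/(2*sqrt(y))
inv_derivative = dxdy.subs(y, x)      # step 5: swap variables -> 1/(2*sqrt(x))

# Direct check: differentiate the inverse function sqrt(x) itself
assert sp.simplify(inv_derivative - sp.diff(sp.sqrt(x), x)) == 0
print(inv_derivative)                 # 1/(2*sqrt(x))
```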
To summarize further: for any function f(x), the derivative of its inverse can be written as (f^-1)'(x) = 1 / f'(f^-1(x)).
Electrical impedance is the measure of the opposition that a circuit presents to the passage of a current when a voltage is applied. In quantitative terms, it is the complex ratio of the voltage to the current in an alternating current (AC) circuit. Impedance extends the concept of resistance to AC circuits, and possesses both magnitude and phase, unlike resistance, which has only magnitude. When a circuit is driven with direct current (DC), there is no distinction between impedance and resistance; the latter can be thought of as impedance with zero phase angle.
It is necessary to introduce the concept of impedance in AC circuits because there are other mechanisms impeding the flow of current besides the normal resistance of DC circuits. There are an additional two impeding mechanisms to be taken into account in AC circuits: the induction of voltages in conductors self-induced by the magnetic fields of currents (inductance), and the electrostatic storage of charge induced by voltages between conductors (capacitance). The impedance caused by these two effects is collectively referred to as reactance and forms the imaginary part of complex impedance whereas resistance forms the real part.
The symbol for impedance is usually Z and it may be represented by writing its magnitude and phase in the form |Z|∠θ. However, complex number representation is often more powerful for circuit analysis purposes. The term impedance was coined by Oliver Heaviside in July 1886. Arthur Kennelly was the first to represent impedance with complex numbers in 1893.
Impedance is defined as the frequency domain ratio of the voltage to the current. In other words, it is the voltage–current ratio for a single complex exponential at a particular frequency ω. In general, impedance will be a complex number, with the same units as resistance, for which the SI unit is the ohm (Ω). For a sinusoidal current or voltage input, the polar form of the complex impedance relates the amplitude and phase of the voltage and current. In particular,
- The magnitude of the complex impedance is the ratio of the voltage amplitude to the current amplitude.
- The phase of the complex impedance is the phase shift by which the current lags the voltage.
Complex impedance
The impedance of a two-terminal circuit element can be represented as a complex quantity Z. The polar form conveniently captures both magnitude and phase:
Z = |Z| e^(jθ)
where the magnitude |Z| represents the ratio of the voltage difference amplitude to the current amplitude, while the argument θ gives the phase difference between voltage and current. j is the imaginary unit, and is used instead of i in this context to avoid confusion with the symbol for electric current. In Cartesian form,
Z = R + jX
where the real part of impedance is the resistance R and the imaginary part is the reactance X.
Where it is required to add or subtract impedances the cartesian form is more convenient, but when quantities are multiplied or divided the calculation becomes simpler if the polar form is used. A circuit calculation, such as finding the total impedance of two impedances in parallel, may require conversion between forms several times during the calculation. Conversion between the forms follows the normal conversion rules of complex numbers.
Ohm's law
The magnitude of the impedance |Z| acts just like resistance, giving the drop in voltage amplitude across an impedance Z for a given current I. The phase factor tells us that the current lags the voltage by a phase of θ (i.e., in the time domain, the current signal is shifted later with respect to the voltage signal).
Just as impedance extends Ohm's law to cover AC circuits, other results from DC circuit analysis such as voltage division, current division, Thevenin's theorem, and Norton's theorem can also be extended to AC circuits by replacing resistance with impedance.
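As a small illustration of treating an AC circuit with complex arithmetic (the component values and frequency below are assumptions of mine, not taken from the article), the following Python snippet lumps a series resistor and capacitor into a single complex impedance and applies Ohm's law V = IZ to find the current phasor.

```python
import cmath
import math

f = 50.0                      # frequency in Hz (assumed)
omega = 2 * math.pi * f
R = 100.0                     # ohms (assumed)
C = 10e-6                     # farads (assumed)

Z = R + 1 / (1j * omega * C)  # series R-C impedance, Z = R + 1/(jwC)
V = 10.0                      # 10 V amplitude, zero phase (reference phasor)

I = V / Z                     # Ohm's law with complex impedance
mag, phase = cmath.polar(I)
print(f"|Z| = {abs(Z):.1f} ohm, arg(Z) = {math.degrees(cmath.phase(Z)):.1f} deg")
print(f"|I| = {mag*1000:.1f} mA, current phase = {math.degrees(phase):.1f} deg")
```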
Complex voltage and current
In order to simplify calculations, sinusoidal voltage and current waves are commonly represented as complex-valued functions of time, V = |V| e^(j(ωt + φV)) and I = |I| e^(j(ωt + φI)). Impedance is defined as the ratio of these quantities: Z = V/I.
Substituting these into Ohm's law we have
|V| e^(j(ωt + φV)) = |I| e^(j(ωt + φI)) |Z| e^(jθ)
Noting that this must hold for all t, we may equate the magnitudes and phases to obtain |V| = |I||Z| and φV = φI + θ.
The magnitude equation is the familiar Ohm's law applied to the voltage and current amplitudes, while the second equation defines the phase relationship.
Validity of complex representation
This representation using complex exponentials may be justified by noting that (by Euler's formula):
cos(ωt + φ) = (1/2) [ e^(j(ωt + φ)) + e^(-j(ωt + φ)) ]
The real-valued sinusoidal function representing either voltage or current may be broken into two complex-valued functions. By the principle of superposition, we may analyse the behaviour of the sinusoid on the left-hand side by analysing the behaviour of the two complex terms on the right-hand side. Given the symmetry, we only need to perform the analysis for one right-hand term; the results will be identical for the other. At the end of any calculation, we may return to real-valued sinusoids by further noting that
cos(ωt + φ) = Re{ e^(j(ωt + φ)) }
A phasor is a constant complex number, usually expressed in exponential form, representing the complex amplitude (magnitude and phase) of a sinusoidal function of time. Phasors are used by electrical engineers to simplify computations involving sinusoids, where they can often reduce a differential equation problem to an algebraic one.
The impedance of a circuit element can be defined as the ratio of the phasor voltage across the element to the phasor current through the element, as determined by the relative amplitudes and phases of the voltage and current. This is identical to the definition from Ohm's law given above, recognising that the factors of e^(jωt) cancel.
Device examples
The impedance of an ideal resistor is purely real and is referred to as a resistive impedance:
Z_R = R
In this case, the voltage and current waveforms are proportional and in phase.
Ideal inductors and capacitors have a purely imaginary reactive impedance: the impedance of an inductor increases as frequency increases (Z_L = jωL); the impedance of a capacitor decreases as frequency increases (Z_C = 1/(jωC)).
In both cases, for an applied sinusoidal voltage, the resulting current is also sinusoidal, but in quadrature, 90 degrees out of phase with the voltage. However, the phases have opposite signs: in an inductor, the current is lagging; in a capacitor the current is leading.
Note the following identities for the imaginary unit and its reciprocal:
j = e^(jπ/2), 1/j = -j = e^(-jπ/2)
Thus the inductor and capacitor impedance equations can be rewritten in polar form:
Z_L = ωL e^(jπ/2), Z_C = (1/(ωC)) e^(-jπ/2)
The magnitude gives the change in voltage amplitude for a given current amplitude through the impedance, while the exponential factors give the phase relationship.
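A short numerical sketch (the inductance and capacitance values are my own assumptions) evaluates these device impedances over a range of frequencies, showing the growing inductive magnitude, the shrinking capacitive magnitude, and the constant +90 and -90 degree phases.

```python
import cmath
import math

L = 10e-3    # 10 mH inductor (assumed)
C = 1e-6     # 1 uF capacitor (assumed)

for f in (50, 500, 5000):                  # frequencies in Hz
    w = 2 * math.pi * f
    Z_L = 1j * w * L                       # inductor: magnitude grows with f
    Z_C = 1 / (1j * w * C)                 # capacitor: magnitude shrinks with f
    print(f"{f:>5} Hz  |Z_L| = {abs(Z_L):8.2f} ohm at {math.degrees(cmath.phase(Z_L)):+.0f} deg   "
          f"|Z_C| = {abs(Z_C):8.2f} ohm at {math.degrees(cmath.phase(Z_C)):+.0f} deg")
```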
Deriving the device-specific impedances
What follows below is a derivation of impedance for each of the three basic circuit elements: the resistor, the capacitor, and the inductor. Although the idea can be extended to define the relationship between the voltage and current of any arbitrary signal, these derivations will assume sinusoidal signals, since any arbitrary signal can be approximated as a sum of sinusoids through Fourier analysis.
For a resistor, there is the relation:
v(t) = i(t) R
This is Ohm's law.
Considering the voltage signal to be
v(t) = V_p sin(ωt),
it follows that
i(t) = v(t)/R = (V_p/R) sin(ωt).
This says that the ratio of AC voltage amplitude to alternating current (AC) amplitude across a resistor is R, and that the AC voltage leads the current across a resistor by 0 degrees.
This result is commonly expressed as
Z_resistor = R.
For a capacitor, there is the relation:
i(t) = C dv(t)/dt
Considering the voltage signal to be
v(t) = V_p sin(ωt),
it follows that
i(t) = ωC V_p cos(ωt) = ωC V_p sin(ωt + π/2).
This says that the ratio of AC voltage amplitude to AC current amplitude across a capacitor is 1/(ωC), and that the AC voltage lags the AC current across a capacitor by 90 degrees (or the AC current leads the AC voltage across a capacitor by 90 degrees).
This result is commonly expressed in polar form, as
Z_capacitor = (1/(ωC)) e^(-jπ/2),
or, by applying Euler's formula, as
Z_capacitor = 1/(jωC) = -j/(ωC).
For the inductor, we have the relation:
v(t) = L di(t)/dt
This time, considering the current signal to be
i(t) = I_p sin(ωt),
it follows that
v(t) = ωL I_p cos(ωt) = ωL I_p sin(ωt + π/2).
This says that the ratio of AC voltage amplitude to AC current amplitude across an inductor is ωL, and that the AC voltage leads the AC current across an inductor by 90 degrees.
This result is commonly expressed in polar form, as
Z_inductor = ωL e^(jπ/2),
or, using Euler's formula, as
Z_inductor = jωL.
Generalised s-plane impedance
Impedance defined in terms of jω can strictly only be applied to circuits which are energised with a steady-state AC signal. The concept of impedance can be extended to a circuit energised with any arbitrary signal by using complex frequency instead of jω. Complex frequency is given the symbol s and is, in general, a complex number. Signals are expressed in terms of complex frequency by taking the Laplace transform of the time domain expression of the signal. The impedance of the basic circuit elements in this more general notation is as follows: resistor R, inductor sL, and capacitor 1/(sC).
For a DC circuit this simplifies to s = 0. For a steady-state sinusoidal AC signal s = jω.
Resistance vs reactance
Resistance and reactance together determine the magnitude and phase of the impedance through the following relations:
|Z| = √(R^2 + X^2), θ = arctan(X/R)
In many applications the relative phase of the voltage and current is not critical so only the magnitude of the impedance is significant.
Resistance is the real part of impedance; a device with a purely resistive impedance exhibits no phase shift between the voltage and current.
Reactance is the imaginary part of the impedance; a component with a finite reactance induces a phase shift between the voltage across it and the current through it.
A purely reactive component is distinguished by the sinusoidal voltage across the component being in quadrature with the sinusoidal current through the component. This implies that the component alternately absorbs energy from the circuit and then returns energy to the circuit. A pure reactance will not dissipate any power.
Capacitive reactance
At low frequencies a capacitor is open circuit, as no charge flows in the dielectric. A DC voltage applied across a capacitor causes charge to accumulate on one side; the electric field due to the accumulated charge is the source of the opposition to the current. When the potential associated with the charge exactly balances the applied voltage, the current goes to zero.
Driven by an AC supply, a capacitor will only accumulate a limited amount of charge before the potential difference changes sign and the charge dissipates. The higher the frequency, the less charge will accumulate and the smaller the opposition to the current.
Inductive reactance
An inductor consists of a coiled conductor. Faraday's law of electromagnetic induction gives the back emf (voltage opposing current) due to a rate-of-change of magnetic flux density through a current loop.
For an inductor consisting of a coil with N loops this gives
ε = -N dΦ_B/dt.
The back-emf is the source of the opposition to current flow. A constant direct current has a zero rate-of-change, and sees an inductor as a short-circuit (it is typically made from a material with a low resistivity). An alternating current has a time-averaged rate-of-change that is proportional to frequency; this causes the increase in inductive reactance with frequency.
Total reactance
The total reactance is given by
X = X_L - X_C
(with X_L = ωL and X_C = 1/(ωC)), so that the total impedance is
Z = R + jX.
Combining impedances
The total impedance of many simple networks of components can be calculated using the rules for combining impedances in series and parallel. The rules are identical to those used for combining resistances, except that the numbers in general will be complex numbers. In the general case however, equivalent impedance transforms in addition to series and parallel will be required.
Series combination
For components connected in series, the current through each circuit element is the same; the total impedance is the sum of the component impedances:
Z_eq = Z_1 + Z_2 + ... + Z_n
Or explicitly in real and imaginary terms:
R_eq = R_1 + R_2 + ... + R_n and X_eq = X_1 + X_2 + ... + X_n
Parallel combination
For components connected in parallel, the voltage across each circuit element is the same; the ratio of currents through any two elements is the inverse ratio of their impedances.
Hence the inverse total impedance is the sum of the inverses of the component impedances:
1/Z_eq = 1/Z_1 + 1/Z_2 + ... + 1/Z_n
or, when n = 2:
Z_eq = Z_1 Z_2 / (Z_1 + Z_2)
The equivalent impedance Z_eq can be calculated in terms of the equivalent series resistance R_eq and reactance X_eq: Z_eq = R_eq + j X_eq.
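The series and parallel rules translate directly into code. The helpers below are my own sketch (the RLC values and frequency are assumed for illustration); they combine arbitrary lists of complex impedances and evaluate an example network.

```python
import cmath
import math

def series(*impedances):
    """Total impedance of elements in series: sum of the impedances."""
    return sum(impedances)

def parallel(*impedances):
    """Total impedance of elements in parallel: reciprocal of summed reciprocals."""
    return 1 / sum(1 / z for z in impedances)

# Example: series RLC at 1 kHz (assumed values)
f = 1000.0
w = 2 * math.pi * f
R, L, C = 50.0, 10e-3, 2.2e-6

Z = series(R, 1j * w * L, 1 / (1j * w * C))
print(f"|Z| = {abs(Z):.1f} ohm, phase = {math.degrees(cmath.phase(Z)):.1f} deg")

# Same R and C in parallel, then in series with L
Z2 = series(parallel(R, 1 / (1j * w * C)), 1j * w * L)
print("Z2 =", Z2, "ohm")
```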
The measurement of the impedance of devices and transmission lines is a practical problem in radio technology and others. Measurements of impedance may be carried out at one frequency, or the variation of device impedance over a range of frequencies may be of interest. The impedance may be measured or displayed directly in ohms, or other values related to impedance may be displayed; for example in a radio antenna the standing wave ratio or reflection coefficient may be more useful than the impedance alone. Measurement of impedance requires measurement of the magnitude of voltage and current, and the phase difference between them. Impedance is often measured by "bridge" methods, similar to the direct-current Wheatstone bridge; a calibrated reference impedance is adjusted to balance off the effect of the impedance of the device under test. Impedance measurement in power electronic devices may require simultaneous measurement and provision of power to the operating device.
The impedance of a device can be calculated by complex division of the voltage and current. The impedance of the device can be calculated by applying a sinusoidal voltage to the device in series with a resistor, and measuring the voltage across the resistor and across the device. Performing this measurement by sweeping the frequencies of the applied signal provides the impedance phase and magnitude.
The LCR meter (Inductance (L), Capacitance (C), and Resistance (R)) is a device commonly used to measure the inductance, resistance and capacitance of a component; from these values the impedance at any frequency can be calculated.
Variable impedance
In general, neither impedance nor admittance can be time varying as they are defined for complex exponentials for –∞ < t < +∞. If the complex exponential voltage–current ratio changes over time or amplitude, the circuit element cannot be described using the frequency domain. However, many systems (e.g., varicaps that are used in radio tuners) may exhibit non-linear or time-varying voltage–current ratios that appear to be linear time-invariant (LTI) for small signals over small observation windows; hence, they can be roughly described as having a time-varying impedance. That is, this description is an approximation; over large signal swings or observation windows, the voltage–current relationship is non-LTI and cannot be described by impedance.
See also
- Impedance matching
- Impedance cardiography
- Impedance bridging
- Characteristic impedance
- Negative impedance converter
- Resistance distance
- Electrical characteristics of dynamic loudspeakers
- Science, p. 18, 1888
- Oliver Heaviside, The Electrician, p. 212, 23 July 1886, reprinted as Electrical Papers, p 64, AMS Bookstore, ISBN 0-8218-3465-7
- Kennelly, Arthur. Impedance (AIEE, 1893)
- Alexander, Charles; Sadiku, Matthew (2006). Fundamentals of Electric Circuits (3, revised ed.). McGraw-Hill. pp. 387–389. ISBN 978-0-07-330115-0
- AC Ohm's law, Hyperphysics
- Horowitz, Paul; Hill, Winfield (1989). "1". The Art of Electronics. Cambridge University Press. pp. 32–33. ISBN 0-521-37095-7.
- Complex impedance, Hyperphysics
- Horowitz, Paul; Hill, Winfield (1989). "1". The Art of Electronics. Cambridge University Press. pp. 31–32. ISBN 0-521-37095-7.
- Parallel Impedance Expressions, Hyperphysics
- Lewis Jr., George; George K. Lewis Sr. and William Olbricht (August 2008). "Cost-effective broad-band electrical impedance spectroscopy measurement circuit and signal analysis for piezo-materials and ultrasound transducers". Measurement Science and Technology 19 (10): 105102. Bibcode:2008MeScT..19j5102L. doi:10.1088/0957-0233/19/10/105102. PMC 2600501. PMID 19081773. Retrieved 2008-09-15.
Surface tension is a contractive tendency of the surface of a liquid that allows it to resist an external force. It is revealed, for example, in the floating of some objects on the surface of water, even though they are denser than water, and in the ability of some insects (e.g. water striders) to run on the water surface. This property is caused by cohesion of similar molecules, and is responsible for many of the behaviors of liquids.
Surface tension has the dimension of force per unit length, or of energy per unit area. The two are equivalent—but when referring to energy per unit of area, people use the term surface energy—which is a more general term in the sense that it applies also to solids and not just liquids.
The cohesive forces among liquid molecules are responsible for the phenomenon of surface tension. In the bulk of the liquid, each molecule is pulled equally in every direction by neighboring liquid molecules, resulting in a net force of zero. The molecules at the surface do not have other molecules on all sides of them and therefore are pulled inwards. This creates some internal pressure and forces liquid surfaces to contract to the minimal area.
Surface tension is responsible for the shape of liquid droplets. Although easily deformed, droplets of water tend to be pulled into a spherical shape by the cohesive forces of the surface layer. In the absence of other forces, including gravity, drops of virtually all liquids would be perfectly spherical. The spherical shape minimizes the necessary "wall tension" of the surface layer according to Laplace's law.
Another way to view surface tension is in terms of energy. A molecule in contact with a neighbor is in a lower state of energy than if it were alone (not in contact with a neighbor). The interior molecules have as many neighbors as they can possibly have, but the boundary molecules are missing neighbors (compared to interior molecules) and therefore have a higher energy. For the liquid to minimize its energy state, the number of higher energy boundary molecules must be minimized. The minimized quantity of boundary molecules results in a minimized surface area.
As a result of surface area minimization, a surface will assume the smoothest shape it can (mathematical proof that "smooth" shapes minimize surface area relies on use of the Euler–Lagrange equation). Since any curvature in the surface shape results in greater area, a higher energy will also result. Consequently the surface will push back against any curvature in much the same way as a ball pushed uphill will push back to minimize its gravitational potential energy.
Effects of surface tension
Several effects of surface tension can be seen with ordinary water:
A. Beading of rain water on a waxy surface, such as a leaf. Water adheres weakly to wax and strongly to itself, so water clusters into drops. Surface tension gives them their near-spherical shape, because a sphere has the smallest possible surface area to volume ratio.
B. Formation of drops occurs when a mass of liquid is stretched. The animation shows water adhering to the faucet gaining mass until it is stretched to a point where the surface tension can no longer bind it to the faucet. It then separates and surface tension forms the drop into a sphere. If a stream of water were running from the faucet, the stream would break up into drops during its fall. Gravity stretches the stream, then surface tension pinches it into spheres.
C. Flotation of objects denser than water occurs when the object is nonwettable and its weight is small enough to be borne by the forces arising from surface tension. For example, water striders use surface tension to walk on the surface of a pond. The surface of the water behaves like an elastic film: the insect's feet cause indentations in the water's surface, increasing its surface area.
D. Separation of oil and water (in this case, water and liquid wax) is caused by a tension in the surface between dissimilar liquids. This type of surface tension is called "interface tension", but its physics are the same.
E. Tears of wine is the formation of drops and rivulets on the side of a glass containing an alcoholic beverage. Its cause is a complex interaction between the differing surface tensions of water and ethanol; it is induced by a combination of surface tension modification of water by ethanol together with ethanol evaporating faster than water.
C. Water striders stay atop the liquid because of surface tension
D. Lava lamp with interaction between dissimilar liquids; water and liquid wax
E. Photo showing the "tears of wine" phenomenon.
Surface tension is visible in other common phenomena, especially when surfactants are used to decrease it:
- Soap bubbles have very large surface areas with very little mass. Bubbles in pure water are unstable. The addition of surfactants, however, can have a stabilizing effect on the bubbles (see Marangoni effect). Notice that surfactants actually reduce the surface tension of water by a factor of three or more.
- Emulsions are a type of solution in which surface tension plays a role. Tiny fragments of oil suspended in pure water will spontaneously assemble themselves into much larger masses. But the presence of a surfactant provides a decrease in surface tension, which permits stability of minute droplets of oil in the bulk of water (or vice versa).
Surface tension, represented by the symbol γ, is defined as the force along a line of unit length, where the force is parallel to the surface but perpendicular to the line. One way to picture this is to imagine a flat soap film bounded on one side by a taut thread of length L. The thread will be pulled toward the interior of the film by a force equal to 2γL (the factor of 2 is because the soap film has two sides, hence two surfaces). Surface tension is therefore measured in force per unit length. Its SI unit is newton per meter but the cgs unit of dyne per cm is also used. One dyn/cm corresponds to 0.001 N/m.
An equivalent definition, one that is useful in thermodynamics, is work done per unit area. As such, in order to increase the surface area of a mass of liquid by an amount δA, a quantity of work γδA is needed. This work is stored as potential energy. Consequently surface tension can be also measured in the SI system as joules per square meter and in the cgs system as ergs per cm2. Since mechanical systems try to find a state of minimum potential energy, a free droplet of liquid naturally assumes a spherical shape, which has the minimum surface area for a given volume.
Surface curvature and pressure
If no force acts normal to a tensioned surface, the surface must remain flat. But if the pressure on one side of the surface differs from pressure on the other side, the pressure difference times surface area results in a normal force. In order for the surface tension forces to cancel the force due to pressure, the surface must be curved. The diagram shows how surface curvature of a tiny patch of surface leads to a net component of surface tension forces acting normal to the center of the patch. When all the forces are balanced, the resulting equation is known as the Young–Laplace equation:
Δp = γ (1/Rx + 1/Ry)
where:
- Δp is the pressure difference.
- γ is the surface tension.
- Rx and Ry are radii of curvature in each of the axes that are parallel to the surface.
The quantity in parentheses on the right hand side is in fact (twice) the mean curvature of the surface (depending on normalisation).
Solutions to this equation determine the shape of water drops, puddles, menisci, soap bubbles, and all other shapes determined by surface tension (such as the shape of the impressions that a water strider's feet make on the surface of a pond).
The table below shows how the internal pressure of a water droplet increases with decreasing radius. For not very small drops the effect is subtle, but the pressure difference becomes enormous when the drop sizes approach the molecular size. (In the limit of a single molecule the concept becomes meaningless.)
|Δp for water drops of different radii at STP|
|Droplet radius||1 mm||0.1 mm||1 μm||10 nm|
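The pressure differences that the table above refers to follow directly from the Young–Laplace equation for a sphere, Δp = 2γ/r. The short Python sketch below evaluates it for the listed radii; the surface tension value used for water (roughly the 20 °C figure) is an assumption of mine, so the numbers are illustrative.

```python
# Young-Laplace pressure jump for a spherical droplet: delta_p = 2*gamma/r.
gamma = 0.0728  # N/m, water near 20 C (assumed value)

radii = {"1 mm": 1e-3, "0.1 mm": 1e-4, "1 um": 1e-6, "10 nm": 1e-8}
for label, r in radii.items():
    dp = 2 * gamma / r          # pressure difference in pascals
    print(f"{label:>6}: {dp:12.1f} Pa  ({dp / 101325:.4f} atm)")
```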
To find the shape of the minimal surface bounded by some arbitrary shaped frame using strictly mathematical means can be a daunting task. Yet by fashioning the frame out of wire and dipping it in soap-solution, a locally minimal surface will appear in the resulting soap-film within seconds.
The reason for this is that the pressure difference across a fluid interface is proportional to the mean curvature, as seen in the Young-Laplace equation. For an open soap film, the pressure difference is zero, hence the mean curvature is zero, and minimal surfaces have the property of zero mean curvature.
The surface of any liquid is an interface between that liquid and some other medium.[note 1] The top surface of a pond, for example, is an interface between the pond water and the air. Surface tension, then, is not a property of the liquid alone, but a property of the liquid's interface with another medium. If a liquid is in a container, then besides the liquid/air interface at its top surface, there is also an interface between the liquid and the walls of the container. The surface tension between the liquid and air is usually different (greater than) its surface tension with the walls of a container. And where the two surfaces meet, their geometry must be such that all forces balance.
Where the two surfaces meet, they form a contact angle, θ, which is the angle the tangent to the surface makes with the solid surface. The diagram to the right shows two examples. Tension forces are shown for the liquid-air interface, the liquid-solid interface, and the solid-air interface. The example on the left is where the difference between the liquid-solid and solid-air surface tension, γls − γsa, is less than the liquid-air surface tension, γla, but is nevertheless positive, that is
γla > γls − γsa > 0
The more telling balance of forces, though, is in the vertical direction. The vertical component of fla must exactly cancel the difference of the forces along the solid surface, fls − fsa.
|Some liquid-solid contact angles|
|Liquid||Solid||Contact angle|
|methyl iodide||soda-lime glass||29°|
Since the forces are in direct proportion to their respective surface tensions, we also have:
γls − γsa = −γla cos θ
This means that although the difference between the liquid-solid and solid-air surface tension, γls − γsa, is difficult to measure directly, it can be inferred from the liquid-air surface tension, γla, and the equilibrium contact angle, θ, which is a function of the easily measurable advancing and receding contact angles (see main article contact angle).
This same relationship exists in the diagram on the right. But in this case we see that because the contact angle is less than 90°, the liquid-solid/solid-air surface tension difference must be negative:
γla > 0 > γls − γsa
Special contact angles
Observe that in the special case of a water-silver interface where the contact angle is equal to 90°, the liquid-solid/solid-air surface tension difference is exactly zero.
Another special case is where the contact angle is exactly 180°. Water with specially prepared Teflon approaches this. Contact angle of 180° occurs when the liquid-solid surface tension is exactly equal to the liquid-air surface tension.
Methods of measurement
Because surface tension manifests itself in various effects, it offers a number of paths to its measurement. Which method is optimal depends upon the nature of the liquid being measured, the conditions under which its tension is to be measured, and the stability of its surface when it is deformed.
- Du Noüy Ring method: The traditional method used to measure surface or interfacial tension. Wetting properties of the surface or interface have little influence on this measuring technique. Maximum pull exerted on the ring by the surface is measured.
- Du Noüy-Padday method: A minimized version of Du Noüy method uses a small diameter metal needle instead of a ring, in combination with a high sensitivity microbalance to record maximum pull. The advantage of this method is that very small sample volumes (down to few tens of microliters) can be measured with very high precision, without the need to correct for buoyancy (for a needle or rather, rod, with proper geometry). Further, the measurement can be performed very quickly, minimally in about 20 seconds. First commercial multichannel tensiometers [CMCeeker] were recently built based on this principle.
- Wilhelmy plate method: A universal method especially suited to check surface tension over long time intervals. A vertical plate of known perimeter is attached to a balance, and the force due to wetting is measured.
- Spinning drop method: This technique is ideal for measuring low interfacial tensions. The diameter of a drop within a heavy phase is measured while both are rotated.
- Pendant drop method: Surface and interfacial tension can be measured by this technique, even at elevated temperatures and pressures. Geometry of a drop is analyzed optically. For details, see Drop.
- Bubble pressure method (Jaeger's method): A measurement technique for determining surface tension at short surface ages. Maximum pressure of each bubble is measured.
- Drop volume method: A method for determining interfacial tension as a function of interface age. Liquid of one density is pumped into a second liquid of a different density and time between drops produced is measured.
- Capillary rise method: The end of a capillary is immersed into the solution. The height at which the solution reaches inside the capillary is related to the surface tension by the equation discussed below.
- Stalagmometric method: A method of weighting and reading a drop of liquid.
- Sessile drop method: A method for determining surface tension and density by placing a drop on a substrate and measuring the contact angle (see Sessile drop technique).
- Vibrational frequency of levitated drops: The natural frequency of vibrational oscillations of magnetically levitated drops has been used to measure the surface tension of superfluid 4He. This value is estimated to be 0.375 dyn/cm at T = 0 K.
Liquid in a vertical tube
An old style mercury barometer consists of a vertical glass tube about 1 cm in diameter partially filled with mercury, and with a vacuum (called Torricelli's vacuum) in the unfilled volume (see diagram to the right). Notice that the mercury level at the center of the tube is higher than at the edges, making the upper surface of the mercury dome-shaped. The center of mass of the entire column of mercury would be slightly lower if the top surface of the mercury were flat over the entire cross-section of the tube. But the dome-shaped top gives slightly less surface area to the entire mass of mercury. Again the two effects combine to minimize the total potential energy. Such a surface shape is known as a convex meniscus.
The reason we consider the surface area of the entire mass of mercury, including the part of the surface that is in contact with the glass, is because mercury does not adhere at all to glass. So the surface tension of the mercury acts over its entire surface area, including where it is in contact with the glass. If instead of glass, the tube were made out of copper, the situation would be very different. Mercury aggressively adheres to copper. So in a copper tube, the level of mercury at the center of the tube will be lower than at the edges (that is, it would be a concave meniscus). In a situation where the liquid adheres to the walls of its container, we consider the part of the fluid's surface area that is in contact with the container to have negative surface tension. The fluid then works to maximize the contact surface area. So in this case increasing the area in contact with the container decreases rather than increases the potential energy. That decrease is enough to compensate for the increased potential energy associated with lifting the fluid near the walls of the container.
If a tube is sufficiently narrow and the liquid adhesion to its walls is sufficiently strong, surface tension can draw liquid up the tube in a phenomenon known as capillary action. The height to which the column is lifted is given by:
h = 2 γla cos θ / (ρ g r)
where:
- h is the height the liquid is lifted,
- γla is the liquid-air surface tension,
- ρ is the density of the liquid,
- r is the radius of the capillary,
- g is the acceleration due to gravity,
- θ is the angle of contact described above. If θ is greater than 90°, as with mercury in a glass container, the liquid will be depressed rather than lifted.
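As a quick worked example of the formula above (the values are my own assumptions: clean water in glass, so a contact angle of about zero, and a surface tension of roughly 0.0728 N/m), the rise height can be computed for a few capillary radii:

```python
import math

gamma = 0.0728      # N/m, water-air surface tension near 20 C (assumed)
rho = 1000.0        # kg/m^3, density of water
g = 9.8             # m/s^2
theta = 0.0         # contact angle in radians, water wetting clean glass (assumed)

for r_mm in (2.0, 0.5, 0.1):                  # capillary radii in millimetres
    r = r_mm / 1000.0
    h = 2 * gamma * math.cos(theta) / (rho * g * r)
    print(f"r = {r_mm:4.1f} mm  ->  rise h = {h*1000:6.1f} mm")
```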
Puddles on a surface
Pouring mercury onto a horizontal flat sheet of glass results in a puddle that has a perceptible thickness. The puddle will spread out only to the point where it is a little under half a centimeter thick, and no thinner. Again this is due to the action of mercury's strong surface tension. The liquid mass flattens out because that brings as much of the mercury to as low a level as possible, but the surface tension, at the same time, is acting to reduce the total surface area. The result is the compromise of a puddle of a nearly fixed thickness.
The same surface tension demonstration can be done with water, lime water or even saline, but only on a surface made of a substance that the water does not adhere to. Wax is such a substance. Water poured onto a smooth, flat, horizontal wax surface, say a waxed sheet of glass, will behave similarly to the mercury poured onto glass.
The thickness of a puddle of liquid on a surface whose contact angle is 180° is given by:
h = 2 √( γ / (g ρ) )
where:
- h is the depth of the puddle in centimeters or meters.
- γ is the surface tension of the liquid in dynes per centimeter or newtons per meter.
- g is the acceleration due to gravity and is equal to 980 cm/s2 or 9.8 m/s2.
- ρ is the density of the liquid in grams per cubic centimeter or kilograms per cubic meter.
In reality, the thicknesses of the puddles will be slightly less than what is predicted by the above formula because very few surfaces have a contact angle of 180° with any liquid. When the contact angle is less than 180°, the thickness is given by:
h = √( 2 γ (1 − cos θ) / (g ρ) )
For mercury on glass, γHg = 487 dyn/cm, ρHg = 13.5 g/cm3 and θ = 140°, which gives hHg = 0.36 cm. For water on paraffin at 25 °C, γ = 72 dyn/cm, ρ = 1.0 g/cm3, and θ = 107° which gives hH2O = 0.44 cm.
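The two figures quoted above are easy to check numerically with the general formula; the small sketch below simply plugs in the CGS values given in the text.

```python
import math

def puddle_depth(gamma, rho, theta_deg, g=980.0):
    """Puddle depth h = sqrt(2*gamma*(1 - cos(theta)) / (g*rho)) in CGS units (cm)."""
    theta = math.radians(theta_deg)
    return math.sqrt(2 * gamma * (1 - math.cos(theta)) / (g * rho))

print("mercury on glass :", round(puddle_depth(487.0, 13.5, 140.0), 2), "cm")  # ~0.36 cm
print("water on paraffin:", round(puddle_depth(72.0, 1.0, 107.0), 2), "cm")    # ~0.44 cm
```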
The formula also predicts that when the contact angle is 0°, the liquid will spread out into a micro-thin layer over the surface. Such a surface is said to be fully wettable by the liquid.
The breakup of streams into drops
In day-to-day life we all observe that a stream of water emerging from a faucet will break up into droplets, no matter how smoothly the stream is emitted from the faucet. This is due to a phenomenon called the Plateau–Rayleigh instability, which is entirely a consequence of the effects of surface tension.
The explanation of this instability begins with the existence of tiny perturbations in the stream. These are always present, no matter how smooth the stream is. If the perturbations are resolved into sinusoidal components, we find that some components grow with time while others decay with time. Among those that grow with time, some grow at faster rates than others. Whether a component decays or grows, and how fast it grows is entirely a function of its wave number (a measure of how many peaks and troughs per centimeter) and the radii of the original cylindrical stream.
Thermodynamics
In thermodynamic terms, surface tension can be defined as the Gibbs free energy required per unit increase in surface area:
γ = (∂G/∂A) at constant T, P and composition
where G is Gibbs free energy and A is the area.
Thermodynamics requires that all spontaneous changes of state are accompanied by a decrease in Gibbs free energy.
From this it is easy to understand why decreasing the surface area of a mass of liquid is always spontaneous (ΔG < 0), provided it is not coupled to any other energy changes. It follows that in order to increase surface area, a certain amount of energy must be added.
Gibbs free energy is defined by the equation G = H − TS, where H is enthalpy and S is entropy. Based upon this and the fact that surface tension is Gibbs free energy per unit area, it is possible to obtain the following expression for entropy per unit area:
S_A = −(∂γ/∂T)_P
Kelvin's equation for surfaces arises by rearranging the previous equations. It states that surface enthalpy or surface energy (different from surface free energy) depends both on surface tension and its derivative with temperature at constant pressure by the relationship
H_A = γ − T (∂γ/∂T)_P
Thermodynamics of soap bubbles
The pressure inside an ideal (one surface) soap bubble can be derived from thermodynamic free energy considerations. At constant temperature and particle number N, the differential Helmholtz energy is given by
dF = −ΔP dV + γ dA
where ΔP is the difference in pressure inside and outside of the bubble, and γ is the surface tension. In equilibrium, dF = 0, and so,
ΔP dV = γ dA.
For a spherical bubble, the volume and surface area are given simply by
V = (4/3) π R^3 and A = 4 π R^2,
so that dV = 4 π R^2 dR and dA = 8 π R dR. Substituting these relations into the previous expression, we find
ΔP = 2 γ / R
which is equivalent to the Young–Laplace equation when Rx = Ry. For real soap bubbles, the pressure is doubled due to the presence of two interfaces, one inside and one outside.
Influence of temperature
Surface tension is dependent on temperature. For that reason, when a value is given for the surface tension of an interface, temperature must be explicitly stated. The general trend is that surface tension decreases with the increase of temperature, reaching a value of 0 at the critical temperature. For further details see Eötvös rule. There are only empirical equations to relate surface tension and temperature. One is the Eötvös equation:
γ V^(2/3) = k (T_C − T)
Here V is the molar volume of a substance, T_C is the critical temperature and k is a constant valid for almost all substances. A typical value is k = 2.1 × 10^−7 J K^−1 mol^−2/3. For water one can further use V = 18 ml/mol and T_C = 374 °C.
A variant on Eötvös is described by Ramay and Shields:
γ V^(2/3) = k (T_C − T − 6 K)
where the temperature offset of 6 kelvins provides the formula with a better fit to reality at lower temperatures.
Another empirical relation is the Guggenheim-Katayama equation:
γ = γ° (1 − T/T_C)^n
Here γ° is a constant for each liquid and n is an empirical factor, whose value is 11/9 for organic liquids. This equation was also proposed by van der Waals, who further proposed that γ° could be given by the expression K₂ T_C^(1/3) P_C^(2/3), where K₂ is a universal constant for all liquids, and P_C is the critical pressure of the liquid (although later experiments found K₂ to vary to some degree from one liquid to another).
Both Guggenheim-Katayama and Eötvös take into account the fact that surface tension reaches 0 at the critical temperature, whereas Ramay and Shields fails to match reality at this endpoint.
Influence of solute concentration
Solutes can have different effects on surface tension depending on their structure:
- Little or no effect, for example sugar
- Increase surface tension, inorganic salts
- Decrease surface tension progressively, alcohols
- Decrease surface tension and, once a minimum is reached, no more effect: surfactants
What complicates the effect is that a solute can exist in a different concentration at the surface of a solvent than in its bulk. This difference varies from one solute/solvent combination to another.
Gibbs' isotherm relates this surface excess to the change of surface tension with concentration, Γ = −(1/RT) (∂γ/∂ln C) at constant T and P, where:
- Γ is known as the surface concentration; it represents the excess of solute per unit area of the surface over what would be present if the bulk concentration prevailed all the way to the surface. It has units of mol/m2.
- C is the concentration of the substance in the bulk solution.
- R is the gas constant and T the temperature.
Certain assumptions are taken in its deduction, therefore Gibbs isotherm can only be applied to ideal (very dilute) solutions with two components.
Influence of particle size on vapor pressure
The Clausius–Clapeyron relation leads to another equation also attributed to Kelvin, as the Kelvin equation. It explains why, because of surface tension, the vapor pressure for small droplets of liquid in suspension is greater than standard vapor pressure of that same liquid when the interface is flat. That is to say that when a liquid is forming small droplets, the equilibrium concentration of its vapor in its surroundings is greater. This arises because the pressure inside the droplet is greater than outside.
The Kelvin equation can be written P_v = P_v° exp( 2 γ V / (R T r_k) ), where:
- P_v° is the standard vapor pressure for that liquid at that temperature and pressure.
- V is the molar volume.
- R is the gas constant.
- r_k is the Kelvin radius, the radius of the droplets.
The effect explains supersaturation of vapors. In the absence of nucleation sites, tiny droplets must form before they can evolve into larger droplets. This requires a vapor pressure many times the vapor pressure at the phase transition point.
The effect can be viewed in terms of the average number of molecular neighbors of surface molecules (see diagram).
The table shows some calculated values of this effect for water at different drop sizes:
|P/P0 for water drops of different radii at STP|
|Droplet radius (nm)||1000||100||10||1|
The effect becomes clear for very small drop sizes, as a drop of 1 nm radius has about 100 molecules inside, which is a quantity small enough to require a quantum mechanics analysis.
|Liquid||Temperature °C||Surface tension, γ|
|Acetic acid (40.1%) + Water||30||40.68|
|Acetic acid (10.0%) + Water||30||54.56|
|Ethanol (40%) + Water||25||29.63|
|Ethanol (11.1%) + Water||25||46.03|
|Hydrochloric acid 17.7M aqueous solution||20||65.95|
|Liquid helium II||-273||0.37|
|Sodium chloride 6.0M aqueous solution||20||82.55|
|Sucrose (55%) + water||20||76.45|
- Capillary wave — short waves on a water surface, governed by surface tension and inertia
- Cheerio effect — the tendency for small wettable floating objects to attract one another.
- Dimensionless numbers
- Dortmund Data Bank — contains experimental temperature-dependent surface tensions.
- Electrodipping force
- Eötvös rule — a rule for predicting surface tension dependent on temperature.
- Fluid pipe
- Hydrostatic equilibrium—the effect of gravity pulling matter into a round shape.
- Meniscus — surface curvature formed by a liquid in a container.
- Mercury beating heart — a consequence of inhomogeneous surface tension.
- Sessile drop technique
- Sow-Hsin Chen
- Specific surface energy — same as surface tension in isotropic materials.
- Spinning drop method
- Stalagmometric method
- Surface pressure
- Surface tension values
- Surfactants — substances which reduce surface tension.
- Szyszkowski equation — Calculating surface tension of aqueous solutions
- Tears of wine — the surface tension induced phenomenon seen on the sides of glasses containing alcoholic beverages.
- Tolman length — leading term in correcting the surface tension for curved surfaces.
- Wetting and dewetting
Gallery of effects
Surface tension prevents a coin from sinking: the coin is indisputably denser than water, so it must be displacing a volume greater than its own for buoyancy to balance mass.
- In a mercury barometer, the upper liquid surface is an interface between the liquid and a vacuum containing some molecules of evaporated liquid.
- White, Harvey E. (1948). Modern College Physics. van Nostrand. ISBN 0-442-29401-8.
- John W. M. Bush (May 2004). "MIT Lecture Notes on Surface Tension, lecture 5" (PDF). Massachusetts Institute of Technology. Retrieved April 1, 2007.
- John W. M. Bush (May 2004). "MIT Lecture Notes on Surface Tension, lecture 3" (PDF). Massachusetts Institute of Technology. Retrieved April 1, 2007.
- Sears, Francis Weston; Zemanski, Mark W. University Physics 2nd ed. Addison Wesley 1955
- John W. M. Bush (April 2004). "MIT Lecture Notes on Surface Tension, lecture 1" (PDF). Massachusetts Institute of Technology. Retrieved April 1, 2007.
- Pierre-Gilles de Gennes; Françoise Brochard-Wyart; David Quéré (2002). Capillary and Wetting Phenomena—Drops, Bubbles, Pearls, Waves. Alex Reisinger. Springer. ISBN 0-387-00592-7.
- Aaronson, Scott. "NP-Complete Problems and physical reality.". SIGACT News.
- "Surface Tension by the Ring Method (Du Nouy Method)" (PDF). PHYWE. Retrieved 2007-09-08.
- "Surface and Interfacial Tension". Langmuir-Blodgett Instruments. Retrieved 2007-09-08.
- "Surfacants at interfaces" (PDF). lauda.de. Retrieved 2007-09-08.
- Calvert, James B. "Surface Tension (physics lecture notes)". University of Denver. Retrieved 2007-09-08.
- "Sessile Drop Method". Dataphysics. Archived from the original on August 8, 2007. Retrieved 2007-09-08.
- Vicente, C.; Yao, W.; Maris, H.; Seidel, G. (2002). "Surface tension of liquid 4He as measured using the vibration modes of a levitated drop". Physical Review B 66 (21). Bibcode:2002PhRvB..66u4504V. doi:10.1103/PhysRevB.66.214504.
- Moore, Walter J. (1962). Physical Chemistry, 3rd ed. Prentice Hall.
- Adam, Neil Kensington (1941). The Physics and Chemistry of Surfaces, 3rd ed. Oxford University Press.
- "Physical Properties Sources Index: Eötvös Constant". Retrieved 2008-11-16.
- G. Ertl, H. Knözinger and J. Weitkamp; Handbook of heterogeneous catalysis, Vol. 2, page 430; Wiley-VCH; Weinheim; 1997 ISBN 3-527-31241-2
- Lange's Handbook of Chemistry (1967) 10th ed. pp 1661–1665 ISBN 0-07-016190-9 (11th ed.)
- On the Surface Tension of Liquid Helium II
- On surface tension and interesting real-world cases
- MIT Lecture Notes on Surface Tension
- Surface Tensions of Various Liquids
- Calculation of temperature-dependent surface tensions for some common components
- Surface Tension Calculator For Aqueous Solutions Containing the Ions H+, NH4+, Na+, K+, Mg2+, Ca2+, SO42–, NO3–, Cl–, CO32–, Br– and OH–.
- The Bubble Wall (Audio slideshow from the National High Magnetic Field Laboratory explaining cohesion, surface tension and hydrogen bonds)
- C. Pfister: Interface Free Energy. Scholarpedia 2010 (from first principles of statistical mechanics)
Now that we have solved equations in one variable, we will now work on solving equations in two variables and graphing equations on the coordinate plane. Graphs are very important for giving a visual representation of the relationship between two variables in an equation.
First, let's get acquainted with the coordinate plane (or Cartesian graph).
The coordinate plane was created by the French mathematician Rene Descartes in order to geometrically represent algebraic equations. This is why the coordinate plane is often called the Cartesian plane, or graph. When working with equations in more than one variable, the Cartesian graph can be an important tool for making equations easier to visualize and understand.
The horizontal number line is the x-axis and the vertical number line is the y-axis. The point where both lines intersect is called the origin.
Each point on the graph is depicted by an ordered pair, where x is always the first value and y is always the second value in the ordered pair (x,y). This is because x is the independent variable, which means that it is the variable being changed. This makes y the dependent variable, which means that it depends on how x is being changed. We will explore this as we start to graph equations in terms of x and y. Now, even though there are two values in an ordered pair, they correspond to only one point on the graph.
Let's plot the points A(0,0), B(1,2), C(-4,2), D(-3,-4), and E(4,-2).
Notice that A(0,0) is the origin because both its x and y values are 0. For B(1,2), the x value would be 1 and the y value would be 2. To plot the point, we would go in the positive direction on the x axis until we hit 1, then we would go up 2 units in the positive y direction. This is where the point is located. We get our points by just lining up the x value and y value to find their locations, and we can do this for any coordinate pair.
On the coordinate plane, we know that each point must have an x and a y value. When we solved equations in one variable, it was easy to see that we had an x value. What we didn't realize is that we also had a y value as well. In fact, we had infinitely many y values. Similarly, if we were to solve a one variable equation in terms of y, we would have infinitely many x values. These equations do not form a point, but rather a horizontal or vertical line.
When x equals a number, y can take on any value and it would not change the equality. We could think of the equation as having a y value with 0 as a coefficient, so no matter what value y takes, it will always multiply by 0. This will form a vertical line.
Similarly, when y equals a number, x can take on any value and it would not affect the equality. We could think of the equation having value of 0x, so x can be any number and it would not affect the equation. This graph will be a horizontal line.
This makes sense, because the x axis is y = 0 and the y axis is x = 0.
We have seen expressions and equations of one variable, mainly x. Here is an expression
When we plug in different values of x, we also yield a different output as well. Since this output varies depending on x, we can also use a variable to represent the output of x.
When dealing with equations in two variables, the solutions consist of x and y values that make the equation true when plugged into the equation. These solutions will turn out to be ordered pairs, and we will see that equations in 2 variables can have more than one solution, and often infinitely many solutions.
Given the equation y = 2x + 3,
Determine whether the coordinates (1,5), (2,6), and (-1,1) are solutions to the equation.
Let's start with (1,5) and plug it into the equation for x and y.
This is true! This means that this point is a solution.
Let's try (2,6)
This is false because 6 does not equal 7, therefore it is not a solution.
Finally, let's plug in (-1,1).
This is another true statement, so (-1,1) is a solution to the equation.
Let's plot both of the solutions we found to y = 2x+3 on the coordinate plane
These are not the only solutions to this equation. One method we could use to find other solutions to our equation is make a table of x and y values. We can do this by plugging in different x values and find their corresponding y values.
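To see how such a table can be generated, here is a short Python sketch (my own addition, not part of the original lesson); the equation y = 2x + 3 and the sample points are taken from the example above.

```python
# A minimal sketch: build a table of (x, y) values for y = 2x + 3 and
# check whether candidate points are solutions.

def y_of(x):
    """Right-hand side of the equation y = 2x + 3."""
    return 2 * x + 3

# Build a small table of solutions by plugging in different x values.
table = [(x, y_of(x)) for x in range(-2, 3)]
print(table)  # [(-2, -1), (-1, 1), (0, 3), (1, 5), (2, 7)]

# Check the candidate points from the example.
for point in [(1, 5), (2, 6), (-1, 1)]:
    x, y = point
    print(point, "is a solution" if y == y_of(x) else "is not a solution")
```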
Now that we have a few coordinate points, let's plot them on the graph.
We can see that the points form a straight line, so we can draw a line through them. Any point on this line is a solution to the equation y = 2x+3. It is safe to say that the line we have drawn on the graph is the solution set to our equation.
For any two variable equation, we can attempt to graph the function by plugging in random x values to get our corresponding y values. This way, we have many points that we can graph. Some equations are easier to graph because they have noticeable patterns. We should keep in mind that most of the equations we work with will be in terms of x and y, because the coordinate plane is formed by the x and y axes.
Let's look at linear equations.
Linear equations are equations of two variables that form a line on the graph. A linear equation is defined where each term is either a constant or a product of a constant and a single variable. There are many different ways that linear equations can be represented algebraically and plotted graphically.
Here are different forms of Linear Equations
A linear equation in two variables can be written in standard form, Ax + By = C, where A, B, and C are constants. This form is beneficial because we can easily obtain the x and y intercepts by plugging in 0 for one of the variables. An intercept is the intersection of the line and either the x or y axis. We will see that these intercepts will help in plotting linear equations.
Write the following equation in standard form and plot the line on the graph.
First we multiply both sides by 3 to get rid of the fraction.
Then we subtract 2x from both sides to get the x and y on the same side
Let's rearrange it so our x value is first.
This is the standard form of our original equation. Since our original equation can be written in standard form, we know it is a linear equation (if an equation cannot be written in standard form, it is not linear).
Now let's plot the graph of the equation by finding our intercepts. First, let's find our y intercept by plugging in 0 for x.
We plugged in 0 for x and got -4 for y. Our coordinate would be (0,-4), which we call our y intercept. This is called our y intercept because it is the point where the graph of the equation intersects the y axis.
Let's find the x intercept by plugging in 0 for y.
When we plugged in y = 0, we got x = 6, so our coordinate is (6,0). This is the x intercept because it is the point where the graph crosses the x axis. Since we have our x and y intercepts and we know the equation is linear (we put it in Standard Form), we can graph the equation.
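As a quick check, here is a hypothetical Python sketch (not from the lesson) that finds both intercepts of a line in standard form Ax + By = C; the values A = 2, B = -3, C = 12 are my assumption for this worked example, chosen to be consistent with the intercepts (6, 0) and (0, -4) found above.

```python
# A sketch of finding intercepts from the standard form Ax + By = C,
# assuming A and B are both nonzero.

def intercepts(A, B, C):
    x_intercept = (C / A, 0)  # set y = 0
    y_intercept = (0, C / B)  # set x = 0
    return x_intercept, y_intercept

print(intercepts(2, -3, 12))  # ((6.0, 0), (0, -4.0))
```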
This line is the solution set of our equation. We should note that if we know an equation is linear, it only takes two points to construct the line on a graph. Just to make sure, it is always good to plot more than two points to check if the points are collinear (If they form a line). If we do not know it is linear, it is beneficial to plot a number of points to clearly see the curve of the graph. If we were given this graph without the algebraic representation, it would be hard to come up with the standard form of the equation, so we can use the following general forms of linear equations to find them.
This form is the most commonly used to represent linear equations. This form is the best way to find the slope and y intercept of a linear equation, where m is the slope and b is the y intercept.
Let's plot this equation using the slope-intercept form.
Comparing to our general slope-intercept equation, we can see that m = 2/3 and b = -4. Plotting this on a graph, we can obtain our line.
Since we have our y intercept and our slope, we can plot our y intercept and find other points on the line using the slope. Since m = 2/3, we can go up positive 2 and right positive 3 to obtain our next point on the line. We can repeat this process to get the line of our equation.
Finally, we have point-slope form. We can use this representation if we have any point on the line (it doesn't have to be an intercept) and the slope, or if we have any two points on the line.
Find the equation of the line through the point (3,-2) with slope m = 2/3. Let's plug the values into our equation.
We plug in (3,-2) for (x1,y1) and let m = 2/3
We have our equation! Now let's try it given two points.
Find the equation of the line through the points (-3,-6) and (3,-2). If we know the equation is linear, we can just plot the points and draw a line through them, but in this case we want to find the equation of the line. Let's plug them into the general point slope form and see what we get.
Since we don't know the slope but we have two points, we can plug our two points into the slope formula: m = (y2 - y1)/(x2 - x1) = (-2 - (-6))/(3 - (-3)) = 4/6 = 2/3. With the slope in hand, we can use either point in the point-slope form, just as we did above.
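Here is a small Python sketch (my own, not part of the lesson) that carries out the same steps: compute the slope from the two points, then solve for the y intercept.

```python
# A sketch of finding the slope and slope-intercept equation of the line
# through two points, here (-3, -6) and (3, -2).

def line_through(p1, p2):
    (x1, y1), (x2, y2) = p1, p2
    m = (y2 - y1) / (x2 - x1)   # slope formula
    b = y1 - m * x1             # solve y = mx + b for b using one point
    return m, b

m, b = line_through((-3, -6), (3, -2))
print(m, b)  # 0.666... and -4.0, i.e. y = (2/3)x - 4
```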
Even though we have three different forms of linear equations, they are all the same. The reason we have these different forms is because they are each beneficial for different geometric representations and ways of working with the information we have. The various forms of linear equations can be converted from one form to another.
(1) When converting from Standard form (Ax + By = C) to Slope-intercept form (y = mx + b), we solve for y to get y = (-A/B)x + C/B, so the slope is m = -A/B and the y intercept is b = C/B.
(2) When converting from Standard form (Ax + By = C) to Point-slope form [(y - y1) = m(x - x1)], we use the slope m = -A/B together with any point (x1, y1) that satisfies the equation.
If we are given a graph of a line and we want to find its equation (or algebraic representation), we can find it a number of ways.
(1) Given two points on the line, we can plug them into the slope formula to find the slope and then use the point-slope form.
(2) Given any point on the line and the y intercept, we can plug them into the slope formula to find the slope and then use either the point-slope or slope-intercept form.
(3) Given any point on the line and the slope, use the point-slope form.
(4) Given the y intercept and the slope, use the slope-intercept form.
Given any type of equation (it doesn't have to be linear), we can plug in a random x value and obtain a y value. We could plot points this way, but it is a tedious process and not completely necessary. Here are other ways of constructing the graph of a linear equation.
(1) Given an equation in any form, plug in any x value to the equation and find the y value. Plot the point on the graph and do this again for at least one more point. After we have two points, draw a line through the points for all solutions to the linear equation. If the points are noncollinear (they do not line up), then either the equation is not linear or there was an arithmetic mistake in finding them.
(2) Given the Standard Form of a linear equation (Ax + By = C), set the x value to 0 to find the y intercept and then the y value to 0 for the x intercept. Plot the points on the graph and draw a line through them.
(3) Given the Slope-Intercept Form (y = mx + b), plot the y intercept (0,b) and use the slope m to find the rest of the points on the graph.
(4) Given the Point-Slope Form [(y - y1) = m(x - x1)], plot the point (x1,y1) and use the slope m to find the rest of the points on the graph.
When we graph more than one linear equation at once, we are considered to have a system of linear equations. Solving these systems will give us the point at which the lines intersect, which is quite relevant in various real life applications and is executed often in economics and higher level math. | http://www.wyzant.com/help/math/algebra/Graphing_Linear_Equations | 13 |
56 | Fifth Grade Math Vocabulary
VocabularySpellingCity provides these fifth grade math word lists to enable teachers and parents to supplement the fifth grade math curriculum with interactive, educational math vocabulary games. Fifth graders can choose the area they are currently studying to see the related math word lists and then select any of 25 learning activities to practice their words. The material is specifically designed to be used in a 5th grade math class.
The math vocabulary lists are based on the Common Core Fifth Grade Math Standards. VocabularySpellingCity ensures that these academic vocabulary lists are level-appropriate for 5th graders. Teachers can import these lists into their own accounts, and edit or add to them for their own use.
Common Core State Standards Overview for Fifth Grade Math
Click for more information on Math Vocabulary and the Common Core Standards in general. For information pertaining to 5th grade in particular, please refer to the chart above. To preview the definitions, select a list and use the Flash Cards. For help on using the site, watch one of our short videos on how to use the site.
Vocabulary instruction is one of the pivotal means of gaining subject comprehension. Math is no exception to this rule. By using challenging elementary math vocabulary in drill and practice, students' comprehension of math grows by leaps and bounds. With fun online math vocabulary games that fifth graders enjoy playing both in school and at home, students come to gain proficiency in math through 5th grade math vocabulary instruction that extends beyond the classroom. Students can listen to the fifth grade math terms being read to them, play an exciting interactive game, or take an online test.
The math vocabulary lists are based on the Common Core Fifth Grade Math Standards. Homeschool parents and classroom teachers alike appreciate the manner in which 5th grade math words are organized in themed lists to make learning and teaching 5th grade math definitions easier. Similarly, students appreciate being able to put away their fifth grade math glossary and enjoy learning elementary math words in a refreshing and fun way as they prepare to enter the world of middle school math.
Fifth Grade Math Vocabulary Words at a Glance:
Operations & Algebraic Thinking: equivalent, inequality, pattern, variable, expression, order of operations, evaluate, equation, forms, relationship, factoring, pair, squared, coefficient, solution, square root, inverse, vertices, exponent, point, braces, sequence, symbol, ordered pairs, rule, coordinate plane, parentheses, numerical expression, numerical pattern, brackets
Base Ten Operations
Number & Operations in Base Ten: decimal number, divisible, digit, dividend, billion, operation, natural numbers, consecutive, cardinal number, calculate, sum, product, multiplicand, percent, subtrahend, estimation, million, difference, quotient, prime number
Number & Operations - Fractions: prime factorization, ordinal number, least common multiple, divisible, reduce, equivalent, remainder, divisor, quotient, simplify, whole, percent, half, estimation, quarter, ratio, part, greatest common factor, fraction, dividend
Measurement & Data
Units & Coordinates: units of measure, unit conversion, coordinates, plot, unit, square unit, cubic units, y-axis, x-axis, coordinate system
Data Collection: data collection, unorganized data, arrangement, input, labels, increments, location, survey, data, organize
Measurement: Celsius, Fahrenheit, mass, quantity, scale, capacity, volume, estimate, measure , area
Problem Solving: predict, likely, probability, certainty, verify, less likely, collection, chosen, array, analysis
Interpretation: interpret, mean, ratio, bar graph, data, median, mode, line graph, circle graph, pie chart
Representation: randomly, function, stem and leaf plot, diagram, grid, scale, Venn diagram, double-bar graph, tree diagram, data
Angles: semicircle, acute angle, obtuse angle, perpendicular, degrees, congruent, right angle, straight angle, parallel lines, line
Lines: coordinates, diameter, distance, line of symmetry, intersection, side, diagonal, line segment, horizontal, vertical
Measurement: diameter, circumference, radius, horizontal, turn, translation, reflection, transformation, rotation, symmetry
Shapes: semicircle, rectangular, trapezoid, two-dimensional, tessellation, quadrilateral, symmetry, parallelogram, polygon, prism
For a complete online Math curriculum in Kindergarten Math, First Grade Math, Second Grade Math, Third Grade Math, Fourth Grade Math, Fifth Grade Math, Sixth Grade Math, Seventh Grade Math, or Eighth Grade Math visit Time4Learning.com.
Here are some fun Math Games from LearningGamesForKids by grade level: Kindergarten Math Games, First Grade Math Games, Second Grade Math Games, Third Grade Math Games, Fourth Grade Math Games, Fifth Grade Math Games, Addition Math Games, Subtraction Math Games, Multiplication Math Games, or Division Math Games. | http://www.spellingcity.com/5th-grade-math-vocabulary.html | 13 |
77 | Mental Math - A Guide to Effective Mental Calculations
Note about Notation: This book generally uses the English/U.S. styles of notation. This includes using commas to divide up the thousands in long numbers (e.g. 32,000 = thirty-two thousand) and full stops (periods) as decimal points.
Calculating things in your head can be a difficult task. If you can't remember what you've worked out or simply don't know how to solve a problem then it can be very challenging and frustrating. But by learning and practicing the methods of using mathematical patterns, you can dramatically improve the speed and accuracy of your arithmetic. These methods are often called "High Speed Mental Math".
Mental Math is a valuable skill to have, even in the computerized world we live in:
- With good mental math skills you can save yourself time by not needing to pull out a calculator (or cell phone) every time you want to do a task.
- Mental math skills will improve your ability to estimate results, thus having a better ability to catch errors from computer-derived results. For example, while a calculator will generally give the right answer, based upon what was typed in, if you accidentally typed the wrong number, you might not catch your error if you didn't have good mental arithmetic skills.
The foundation of all arithmetic is addition, also known as summing. Similar to all mental math calculations, you can improve your ability to add numbers by learning to use some basic patterns.
Changing the Order of Addition
Often when looking for a pattern to help you quickly do an addition problem, it can help to change the order that you add things. For example, 8+1 is the same as 1+8, and in both cases you just count up one from 8, to get the answer 9. If you get stumped on an addition problem, try changing the order of which one you add first, and see if it helps.
Adding Zero, One, or Two
Unless you are completely new to addition, you surely know the pattern for adding zero. Anything plus zero is equal to the original number. Thus when you see a zero in an addition problem, you can basically ignore it, as it won't have any effect upon the answer. For those who are interested in math trivia, this property of zero not changing things when you add it, is called the "identity property" in arithmetic. Also, keep in mind, that you can only ignore zero in addition, and sometimes in subtraction. In multiplication and division, having a zero in a problem always changes things.
You probably also know the rule of adding 1, which is simply count up one number. This can also be used quickly with 2, where you can either count up 2 numbers, or if you know your even and odd numbers well, you can skip to the next even or odd.
To understand using the even and odd pattern, you may be familiar with the cheer that has the even numbers: "2,4,6,8, who do we appreciate?" If we add 6+2 we will go up to the next even number, which is 8. Similarly, the odd numbers go 1,3,5,7 so 2+7 will be the next odd number, which is 9.
Adding Nine or Eight: Counting Down from 10
Another arithmetic pattern you surely know, is how to add 10 to a number. An example of this is 2+10 is 12, or 6+10 is 16. We can use this pattern to help us add 9 or 8. Since 9 is one less than 10, you can always add nine to something by adding 10 to the number instead and then counting down one. For instance to find 9+7, you can add 10+7, which is 17, and then count down to 16.
Similarly with adding 8 to a number, you can also add 10, and then count down 2 numbers. So for instance, 8+7 can be found by adding 10+7 which is 17, and then counting down 2, which is 15. If you are good with knowing your even and odd numbers, you can also use a similar pattern as was explained with adding 2, by just going down one even or odd number. Alternatively, when adding 9 to a single-digit number you know the answer will be in the teens, so the tens digit is 1 and the ones digit is one less than the other number: for 9+8, subtract one from 8 to get 7, and put them together to get 17.
Doubling and Nearly Doubling
You may already be good at doubling numbers, such as 2+2. When doubling a number, you are doing the same thing as multiplying by 2. This also means, that if you have learned to count by numbers, such as counting by 4's, you know that it is 4, 8, 12, 16... And thus 4+4 is 8.
Once you have become good at doubling, you can use this knowledge to add numbers that are nearly double of each other, by just counting up or down by one. So for instance, 8+7 can be found by adding 8+8, which is 16, and then counting down by 1, which is 15. (If it is easier, you could also have done this by doing 7+7, which is 14, and then counting up 1 to 15.)
Adding Fives
This technique is a little trickier than the others we have so far talked about, but with some practice it might help you out. When adding 5 to another number, the goal is to "find the five inside" the other number, and then add or subtract what is left. For instance when adding 5+8, you could say that 8 is 5+3, so 5+8 is 5+5+3, or 13. Don't worry if you can't get this method, because most of the time you can use one of the other methods that have been taught to find the answer.
To Ten or Close
It is useful to memorize the patterns that add to 10, which are: 1+9, 2+8, 3+7, 4+6, and 5+5.
By knowing these patterns, if you see a pattern a little different you will know to go one higher or lower. For instance, 4+7 has the 7 one higher than 6, so this must add up to 11, because 4+6=10 so 4+6+1=11.
Summing Groups of Numbers by Finding Numbers that Add to Multiples of 10
A useful trick when adding lots of small numbers is to clump together the ones that add up to multiples of 10. For example, if you have to add 2 + 3 + 5 + 7 + 9 + 11 + 8, that can be rearranged as (3 + 7) + (9 + 11) + (2 + 8) + 5 = 10 + 20 + 10 + 5 = 45.
This method is also useful when performing column addition with more than two numbers. For example, in the problem:
  56
  35
  47
  21
  12
  32
 +23
 ---
Column addition is generally performed by adding the digits in the ones place, carrying them over, and then adding the digits in the tens place., and so on. A way to make this task easier is to group the digits in the ones place in groups of ten, and mark them on your paper like this:
 5 6
 3 5
 4 7\
 2 1  \
 1 2    -- 10
 3 2  /
+2 3 /
 ---
Similarly the 6, 2, and 2 would be crossed off, yielding another 10. Therefore the digits in the ones place add up to 10+10+5+1 (what's left) or 26.
A useful trick when subtracting numbers is to begin with the smaller value and mentally skip your way up the difference, with jumping points at recognizable boundaries, such as powers of 10. For example, to subtract 67 from 213 I would start with 67, then add 3 + 30 + 100 + 13. Try this once and you see how easy it is. Sounding out your thoughts it would be "three, thirty-three, one hundred thirty-three plus the remaining 13 is one hundred forty-six".
Subtraction from Numbers consisting of 1 followed by zeros: 100; 1,000; 10,000; etc
For example 1,000 - 258 We simply subtract each digit in 258 from 9 and the last digit from 10.
   2        5        8
   from 9   from 9   from 10
   7        4        2
So the answer is 1,000 - 258 = 742
And thats all there is to it!
This always works for subtractions from numbers consisting of a 1 followed by zeroes: 100; 1000; 10,000 etc.
A second method is to break up the number that you are subtracting. So instead of doing 1000-258 you would do 1000-250 and then subtract 8.
Another way of easily thinking of this method is to always subtract from 999 if subtracting from 1,000, and then adding 1 back. Same for 10,000, subtract from 9999 and add 1. For example, 1000-555 = 999 - 555 + 1= 444 + 1 = 445
Similarly 10,000 - 1068 = (9999-1068)+1 = (8931)+1 =8932 So the answer is 10,000 - 1068 = 8932
For 1,000 - 86, in which we have more zeros than figures in the numbers being subtracted, we simply suppose 86 is 086. So 1,000 - 86 becomes 1,000 - 086 = 914
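A small Python sketch (not part of the original book) of the "all from 9, the last from 10" rule; the function name is my own invention.

```python
# Subtract n from a power of ten (1000, 10000, ...) by taking each digit
# from 9 and the final digit from 10. Assumes n does not end in 0, as in
# the examples above.

def subtract_from_power_of_ten(power_of_ten, n):
    digits = str(n).zfill(len(str(power_of_ten)) - 1)  # pad, e.g. 86 -> "086"
    result = "".join(str(9 - int(d)) for d in digits[:-1])
    result += str(10 - int(digits[-1]))
    return int(result)

print(subtract_from_power_of_ten(1000, 258))    # 742
print(subtract_from_power_of_ten(10000, 1068))  # 8932
print(subtract_from_power_of_ten(1000, 86))     # 914
```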
Multiplication Facts for 0 through 10
Patterns for 0, 1 and 10
Chances are you already know the pattern for multiplying by 0, 1 and 10. But in case you don't, anything times 0 is 0. Anything times 1, is itself still, and anything times 10 has a zero added to the end, so 29x10 is 290.
Patterns for 2, 4, and 8
Multiplying by 2 is simply doubling a number, and so the pattern for doubling with addition is the same.
Multiplying by 4 is doubling twice. So 12x4, can be found by going 12+12 which is 24 and then 24+24 which is 48.
Similarly, multiplying by 8 is doubling 3 times, so 12x8 is 12+12 which is 24, then 24+24 which is 48, and then 48+48 which is 96.
Patterns for 3 and 6
Multiplying by 3 can be done by doubling and then adding the number to itself, so 12x3 is 12+12 which is 24, and then add 12 to that which is 36.
Multiplying by 6 is similar in that it is doubling first, and then multiplying by 3.
Patterns for 9
Multiplying by 9 has a special pattern. When multiplying a single digit by 9, the answer will always start with a digit one less than the number, and then the other number will add to 9. This may sound complex, but lets look at an example. If we want to do 9x6, simply have your first digit being one less than 6, so we know the answer will start with a 5, then then next digit must add up to 9, so the number that when added to 5 will equal 9, is 4, so the answer of 9x6 is 54.
Patterns for 5
Any number multiplied by 5 will end in either a 5 or a 0. One way of finding the answer is to multiply by 10 first, and then divide your answer in half. Another is to count by fives.
"Number Neighbors"
If you don't know the answer to a problem, you may know the answer to a problem where one of the numbers is one more or less than the one in the problem. For instance, if you didn't know the answer to 7 x 6, you might know 6 x 6 is 36, and then you could just add one more 6 to get to 42.
Multiplying Larger Numbers
When multiplying larger numbers it is very important to pick the correct sums to do. If you multiply 251 by 323 straight off it can be very difficult, but it is actually a very easy sum if approached in the right way. 251x3 + 251x20 + 251x300 is a scary prospect, so you have to work out the simplest method.
One of the first things to do is to look if the numbers are near anything easy to work out. In this example there is, very conveniently, the number 251, which is next to 250. So all you have to do is 323x250 + 323 - much easier, but 323x250 still doesn't look too simple. There is, however, an easy way of multiplying by 250 which can also apply to other numbers. You multiply by 1000 then divide by 4. So 323x1000 = 323,000, divide by two and you get 161,500, divide by 2 again and you get 80,750. Now this may not seem easy, but once you've gotten used to it, dividing by four (or other low numbers) in that way becomes natural and takes only a fraction of a second. 80,750+323 = 81,073 , so you've got the answer with a minimum of effort compared to what you would otherwise have done. You can't always do it this easily, but it is always useful to look for the more obvious shortcuts in this style.
An even more effective way in some circumstances is to know a simple rule for a set of circumstances. There are a large number of rules which can be found, some of which are explained below.
If you recognize that one or both numbers are easily divisible, this is one way to make the problem much easier. For example, 72 x 39 may seem daunting, but if taken as 8 x 9 x 3 x 13, it becomes much easier.
First, rearrange the numbers in the hardest to multiply order. In this case, I'd go with 13 x 8 x 9 x 3. Then multiply them one at a time.
- 13 x 8 = 10 x 8 + 3 x 8 = 80 + 24 = 104
- 104 x 9 = 936
- 936 x 3 = 2808, which is the final answer: 72 x 39 = 2808.
Multiplication by 11
To multiply any 2-digit number by 11 we just put the total of the two digits between the 2 figures.
for example: 27x11 can be written as 2[2+7]7. Thus, 27x11 = 297
Another example: 33x11 can be written as 3[3+3]3. Thus, 33x11 = 363. To visualise:
  330
 + 33
 ----
  363
77 x 11 = 847. This involves a carry figure because 7 + 7 = 14: we get 77 x 11 = 7[7+7]7 = 7[14]7. We add the 1 from 14 as a carry to the leading 7 and get 77x11 = 847
Similarly, 84x11 can be written as 8[8+4]4 = 8[12]4. The 1 from 12 carries over, giving 84x11 = 924
For 3 digit numbers multiplied by 11:
254 x 11 = 2794. We put the 2 and the 4 at the ends. We add the first pair: 2 + 5 = 7, and we add the last pair: 5 + 4 = 9. So we can write 254 x 11 as 2[2+5][5+4]4, i.e. 254x11 = 2794
Similarly, 909x11 can be written as 9[9+0][0+9]9, i.e. 909x11 = 9999
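Here is a short Python sketch (my own) of the times-11 pattern, including the carries; summing the parts with their place values at the end resolves any carries automatically.

```python
# Multiply by 11: keep the outer digits and put the sum of each neighbouring
# pair of digits between them; sums of 10 or more carry to the left.

def times_11(n):
    d = [int(c) for c in str(n)]
    raw = [d[0]] + [d[i] + d[i + 1] for i in range(len(d) - 1)] + [d[-1]]
    # Adding the entries with place value resolves any carries automatically.
    return sum(x * 10 ** i for i, x in enumerate(reversed(raw)))

print(times_11(27), times_11(77), times_11(254), times_11(909))
# 297 847 2794 9999
```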
Multiplication by 99, 999, 9999, etc
To multiply a number A by 99, you can multiply A by 100 and then subtract A from the result. When the number A is a two digit or one digit number, the result would be (A - 1) followed by (100 - A). For example, when we multiply 65 by 99, we get 6435.
Similarly, to multiply a number A by 999, you can multiply A by 1000 and then subtract A from the result. When the number A is a three digit, two digit or one digit number, the result would be (A - 1) followed by (1000 - A). For example, when we multiply 611 by 999, we get 610389.
This same idea can be used for multiplication by any large number consisting only 9s.
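A small Python sketch (my own) of this idea; the helper name and the `nines` parameter are my own invention.

```python
# Multiply A by a string of nines: A x 99 = A x 100 - A, so for a small A
# the answer reads as (A - 1) followed by (100 - A), and likewise for 999.

def times_nines(a, nines=2):
    base = 10 ** nines          # 99 -> 100, 999 -> 1000, ...
    return (a - 1) * base + (base - a)

print(times_nines(65, 2))    # 65 x 99  = 6435
print(times_nines(611, 3))   # 611 x 999 = 610389
```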
Same First Digit, Second Digits Add to 10
Let's say you are multiplying two numbers, just two two-digit numbers for now (though the rules could be adapted for others) which start with the same digit and the sum of their unit digits is 10. For example, 87×83 (sum of unit digits: 7+3=10). You multiply the first digit by one more than itself (8×9 = 72). Then multiply the second digits together (7×3 = 21). Then stick the first answer at the start of the second to get the answer (7221). A simple proof of how this works is given in the Wikipedia article on Swami Bharati Krishna Tirtha's Vedic mathematics.
If the result from the multiplication of the unit digits is less than 10, simply add a zero in front of the number (i.e., 9 becomes 09). For example, 59×51 is equal to [5×6][9×1], which equals [30][09]. Thus 59×51 = 3009.
Squaring a Number That Ends with 5
This is a special case of the previous method. Discard the 5, and multiply the remaining number by itself plus one. Then tack on a 25 (which, as in the previous section, is 5x5). For example, 65x65. Discarding the 5 from 65 leaves us with 6. Multiplying 6 by itself plus one gives us 42 (6x7 = 42). Tacking on a 25 yields 4225, so 65x65 = 4225. Similarly, 45x45 can be written as [4x5][5x5], thus 45x45 = 2025.
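A brief Python sketch (not from the original) of this rule; squares of numbers ending in 5 are handled as the special case where both unit digits are 5.

```python
# Two two-digit numbers with the same tens digit whose unit digits add to 10:
# multiply the tens digit by one more than itself, then append the product
# of the unit digits (as two digits).

def same_tens_units_to_ten(a, b):
    ta, ua = divmod(a, 10)
    tb, ub = divmod(b, 10)
    assert ta == tb and ua + ub == 10
    return ta * (ta + 1) * 100 + ua * ub

print(same_tens_units_to_ten(87, 83))  # 7221
print(same_tens_units_to_ten(59, 51))  # 3009
print(same_tens_units_to_ten(65, 65))  # 4225
print(same_tens_units_to_ten(45, 45))  # 2025
```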
Squaring a two-digit number
Rather than doing 14^2 or 47^2 as 14x14 or 47x47, the alternative is:
14^2 = 10 x 1 x (14 + 4) + (4 x 4) = 10(18) + 16 = 180 + 16 = 196
In other words, add what's in the ones place to the number, multiply it by what was originally in the tens place (sometimes you'll get a sum with the next number up in the tens i.e. 47 + 7 = 54 so use 4 not 5 in this example) tack a zero at the end, then add the square of the ones place. So:
47^2 = 10 × 4 × (47 + 7) + (7 × 7) = 10 × 4(54) + 49 = 10 × 216 + 49 = 2160 + 49 = 2209
So now we know that 47^2 is 2209.
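Here is a short Python sketch (not from the original) of this rule, written as the formula n^2 = 10 × tens × (n + units) + units^2.

```python
# Square a two-digit number using its tens and units digits.

def square_two_digit(n):
    tens, units = divmod(n, 10)
    return 10 * tens * (n + units) + units ** 2

print(square_two_digit(14), square_two_digit(47))  # 196 2209
```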
When squaring two digit numbers that are only 1 away from a number ending in zero you can also use the basic algebraic formulas A^2 - (A-1)^2 = 2A - 1 and (A+1)^2 - A^2 = 2A + 1. For example, when squaring 99 you can set A = 100. Then 100^2 = 10000 and 2 × 100 = 200, so the answer is (10000 - 200) + 1 = 9801.
To square 91, use the second formula with A = 90. Then 90^2 = 8100 and 2 × 90 = 180, so the answer is 8100 + 180 + 1 = 8281.
Squaring a number when you know the square of a number adjacent or in proximity
This is useful if you want to quickly calculate the square of a number when you know the square of the adjacent number. For example, take the square of 46: using the "5" rule above you know that 45 squared is 2025. Leverage this number and add 45+46 (91) to 2025, which equals 2116. While adding 91 to 2025 in your head isn't exactly easy, it is certainly easier than trying to calculate the square of 46 directly. Doing this with an adjacent known square that is below is a bit more challenging depending on how you feel about doing subtraction in your head. For subtraction, using 45 as our base, and trying to figure out what 44 squared is, we would take the known value of 2025 and subtract 44 and 45 to get 1936. This can be leveraged to try to determine squares that are not directly adjacent to the known square, but it gets a bit more complex (in your head!). Symbolically: if b = a + 1 and a and b are integers then b^2 = a^2 + a + b.
Just Over 100
This trick works for two numbers that are just over 100, as long as the last two digits of both numbers multiplied together is less than 100. For example, for 103 x 124, 3 x 24 = 72 < 100, so this trick will work. For 117 x 112, 17 x 12 = 204 > 100, so it will not.
If the first test works, then the answer is:
- 1[sum of last two digits][product of last two digits]
- 108 x 109 = 1[8+9][8x9] = 1[17][72] = 11,772
- 105 x 115 = 1[5+15][5x15] = 1[20][75] = 12,075
- 132 x 103 = 1[32+3][32x3] = 1[35][96] = 13,596
If the addition or multiplication of the last 2 digits < 10, then add a 0 infront of the number, example if the addition is 4, it should be 04. Example shown below:
- 102 x 103 = 1[2+3][2x3] = 1[05][06] = 10,506
This trick works for numbers just over 200, 300, 400, etc. with one simple change:
- [product of first digits][(sum of last two digits) x first digit][product of last two digits]
- 215 x 204 = [2x2][(15+4)x2][15x4] = [4][19x2][60] = [4][38][60] = 43,860
If the addition or multiplication of the last 2 digits < 10, then add a 0 infront of the number, example if the addition is 4, it should be 04. Example shown below:
- 201 x 202 = [2x2][(2+1)x2][2x1] = [4][06][02] = 40,602
For numbers just over 1000, 2000, etc., use the following:
- [product of first digits]0[(sum of last two digits) x first digit]0[product of last two digits]
- 2008 x 2009 = [2x2]0[(8+9)x2]0[8x9] = [4]0[17x2]0[72] = [4]0[34]0[72] = 4,034,072
- 2008 x 2009 = 4,034,072
For each order of magnitude (x10), add two zeroes to the middle.
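As a check, here is a Python sketch (my own) of the "just over 100" pattern; it also prints the exact product for comparison.

```python
# For two numbers between 100 and 199 the result is
# 10000 + 100*(sum of the parts over 100) + (their product),
# which matches the digit pattern 1[sum][product] described above.

def just_over_100(a, b):
    la, lb = a - 100, b - 100
    # The digit-concatenation reading needs the product of the last parts
    # to stay below 100, as noted in the text.
    assert 0 <= la * lb < 100
    return 10000 + (la + lb) * 100 + la * lb

for a, b in [(108, 109), (105, 115), (132, 103), (102, 103)]:
    print(a, "x", b, "=", just_over_100(a, b), "(exact:", a * b, ")")
```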
Again there are many possible techniques, but you can make do with the following or research your own. All numbers are products of primes (you can make them by multiplying together prime numbers). If you are dividing, you can divide by each of the prime factors of the number you are dividing by, one at a time, to get the answer. This means that 100/24 = (((100/2)/2)/2)/3. Although this means you have a lot more stages to do, they are all much simpler: 100/2 = 50, 50/2 = 25, 25/2 = 12.5, 12.5/3 = 4 5/30 = 4 1/6 = 4.1666... (recurring)
Also, another helpful trick is, when you have to muliply and then divide by a number, always divide first, until you've reached numbers that are relatively prime, and then multiply. This keeps numbers from being too large. For example, if you must do (18 * 115)/15, it is much easier to divide 115 by 5 and 18 by 3, and then multiply them together to get 23 * 6 = 138.
Multiply by the Reciprocal
Division is equivalent to multiplying by the reciprocal. For instance, division by 5 is the same as multiplication by 0.2 (1/5=0.2). To multiply by 0.2, simply double the number and then divide by 10.
Division by 7
The number 1/7 is a special number, equal to 0.142857142857.... Note that there are six digits that repeat: 142857. A beautiful thing happens when we consider integer multiples of this number:
Note that these six fractions of seven contain the same six digits repeating in the same order ad infinitum, but starting with a different digit: 1/7 = 0.142857..., 2/7 = 0.285714..., 3/7 = 0.428571..., 4/7 = 0.571428..., 5/7 = 0.714285..., and 6/7 = 0.857142.... But how is this useful when dividing by seven? Consider the problem 207/7. First, we can convert this to 200/7 + 7/7. We know that 7/7 equals one, so the answer will be 200/7 + 1. But what is 200/7? It is simply 2/7 times 100, and from the above, we know that 2/7 = 0.285714..., so by moving the decimal point, we know that 200/7 = 28.5714.... All that remains is to add the one from 7/7, giving us 29.5714....
Division by 9
The fraction 1/9 and its integer multiples are fairly straightforward - they are simply equal to a decimal point followed by the one-digit numerator repeating to infinity: 1/9 = 0.111..., 2/9 = 0.222..., and so on up to 8/9 = 0.888....
To solve a problem such as 367/9, we reduce it to 360/9 + 7/9.
First add 360/9 = 40. Then add 7/9 = 0.777..., giving 40.777....
Division by 11
The fraction 1/11 and its integer multiples are fairly straightforward - they are simply equal to a decimal point followed by the two-digit product of nine and the integer multiple repeating to infinity: 1/11 = 0.0909..., 2/11 = 0.1818..., 3/11 = 0.2727..., and so on.
The best way to make estimation quickly in mental math is to round to one or two significant digits (that is, round it to the nearest place of the highest order(s) of magnitude), and then proceed with typical operations. Thus, 1242 * 15645 is approximately equal to 1200 * 16000 = 19200000, which is reasonably close to the correct answer of 19431090. In certain cases, one can even round to simply the nearest power of ten (which is useful when making estimations with much error and large numbers).
Range Search
It is sometimes easier to make a calculation in the opposite direction from the one you want, and this can be used to quickly estimate the value you want.
Square root is a good example. It is easier to square numbers than to take the square root. So you take any number that will square to a little larger than the one you want, take another that squares to less than the one you want and use an average of the two.
The trick now is to apply a general technique ( the Bisection Method ).
We create a new estimate using the average of our first two numbers. Square this. If it's value is higher than we want, we use it as the upper value of our range. If it is lower than we want, we use it as the lower value of our range.
We now have a new range that must contain the square root we want. We can apply the same process again to get a more accurate value ( this is known as Iteration ).
This technique is widely applied in computing, but also very handy for some mental mathematics.
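Here is a minimal Python sketch (not part of the original text) of the bisection idea applied to square roots; the starting range must be chosen so that it brackets the answer.

```python
# Range search / bisection for square roots.

def bisect_sqrt(n, low, high, steps=20):
    """Narrow down sqrt(n), given low**2 < n < high**2."""
    for _ in range(steps):
        mid = (low + high) / 2
        if mid * mid > n:
            high = mid   # mid squares too high: use it as the new upper bound
        else:
            low = mid    # mid squares too low: use it as the new lower bound
    return (low + high) / 2

print(bisect_sqrt(10, 3, 4))  # close to 3.1622...
```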
Other mental maths
Perhaps one of the more useful tricks to mental math is memorization. Although it may seem an annoyance to need to memorize certain math facts, such as perfect squares and cubes (especially powers of two), prime factorizations of certain numbers, or the decimal equivalents of common fractions (such as 1/7 = .1428...), having them at your fingertips speeds up your calculations enormously, because you don't have to do the division or multiplication in your head. Many are simple, such as 1/3 = .3333... and 2^5 = 32. For example, trying to figure out 1024/32 is much easier knowing that it is the same as 2^10/2^5, which, subtracting exponents, gives 2^5, or 32. Many of these are memorized simply by frequent use; so, the best way to get good is much practice.
It's a good idea to memorize 3 x 17 = 51. We can extend this to 6 x 17 = 102. If we round these numbers 3 x 17 is approx 50, 6 x 17 is approx 100, 9 x 17 is approx 150, and so on. These are very helpful in estimating since 3, 6, 9, 50, 100 ... are common numbers.
Advanced Mathematics
It is possible to make good estimates of quite complex formulae. Range search is a useful technique and in addition to this you can also exploit some additional mathematical rules.
Binomial Expansion
The Binomial Theorem means we can relatively easily expand an expression like (1 + x)^n = 1 + nx + [n(n-1)/2]x^2 + ...
This is very useful if x is less than 1. In that case the powers of x get smaller and smaller, and we can ignore them.
Even for whole numbers and integers, this is useful: we can split the problem into a large and a small number added together and then raised to a power, (a + b)^n. The small number b's powers will become less important and we can neglect the terms of the expansion that involve them to get a reasonable estimate.
If the value of b is much smaller than a, we can make a good estimate even if we don't add the b^2 or b^3 terms.
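As a rough check, here is a tiny Python sketch (my own) comparing the exact value of (1 + x)^n with the one- and two-term binomial estimates; the numbers are illustrative only.

```python
# Compare (1 + x)**n with its binomial estimates for a small x.

x, n = 0.03, 4
exact = (1 + x) ** n
first_order = 1 + n * x
second_order = first_order + n * (n - 1) / 2 * x ** 2
print(exact, first_order, second_order)
# roughly 1.1255, 1.12, 1.1254
```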
Compound Interest
The Binomial Theorem again comes to our help. Here we usually want to work out (1 + x)^n:
- where x is the rate of interest (e.g. if the rate is 5% then x = 0.05) and n is the period.
Using the theorem we can roughly estimate this as 1 + nx.
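A short Python sketch (my own, with illustrative numbers) applying the estimate to compound interest.

```python
# The growth factor (1 + x)**n is roughly 1 + n*x for a small rate x, so
# 1000 at 5% for 3 periods grows to roughly 1000 * 1.15.

principal, x, n = 1000, 0.05, 3
exact = principal * (1 + x) ** n
estimate = principal * (1 + n * x)
print(exact, estimate)  # about 1157.6 vs 1150.0
```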
Although this is only a simple approximation it works quite well for small x. | http://en.wikibooks.org/wiki/Mental_Math | 13 |
64 | ANALYSIS OF SPOKEN INTERACTION
Unit 1 - Introduction
This unit sets out to introduce you to the ways in which language varies according to contextual factors such as setting, participants and purpose. In order to illustrate such variation, it focuses on address forms, and in particular on alternation and co-occurrence. More generally, it discusses the concept of community as used in linguistic contexts and compares the related concepts of speech community and discourse community, commenting on their relevance to language teaching.
By the end of this unit you should be able to:
• explain why and how language varies;
• give examples illustrating the importance of context in influencing language choice;
• provide examples of alternation and co-occurrence other than those provided in the unit;
• state the rules relating to the use of address forms in a situation with which you are familiar;
• outline the definitional problems associated with the concept of a speech community;
• explain how the concept of a discourse community differs from that
of a speech community.
Language and choice
If using language boiled down to simply applying a set of precisely formulated rules, language teaching would be fairly straightforward — and monumentally boring. Fortunately, life is more complicated than that, and one of the challenges which faces us is that of trying to establish what the relevant ‘rules’ and considerations of language use might be. Consider the following utterances, for example:
Would you mind passing the salt, please.
It isn’t necessary to spell out the contexts in which these occur or the rules which influence their form. The surgeon’s request for a scalpel in the operating theatre is the most efficient way of getting the job done, and it reflects a perfectly proper professional relationship with the theatre nurse. However, what’s professionally acceptable around the operating table is socially disastrous around the dinner table.
The richness of language means that there is always more than one way of saying something, and our choices are never random; the first TEXT (‘A Japanese woman...’) provides an example of very precise rules governing the choice of linguistic form. Language is rarely used to convey only propositional information; what we say and the way we say it provides clues to how we position ourselves relative to specific groups in society and to those we are addressing.
When people use language, they do more than just try to get another person to understand the speaker’s thoughts and feelings. At the same time, both people are using language in subtle ways to define their relationship to each other, to identify themselves as part of a social group, and to establish the kind of speech event they are in. (Fasold, 1990: 1)
In addition, the choices we make play their own small part in the evolution of linguistic practice. When the surgeon chooses “Scalpel” this not only reflects an understanding of the relevant rules but reinforces them, but if enough surgeons started to say “Could you pass the scalpel, please” this might eventually become the norm. We often see this in action at a social level. When I arrive at work, walk into the office and say “Hi, Sue” this utterance not only arises from the context (this is the appropriate thing to do) but reinforces the context (the more I do it, the more significant it would be if I suddenly failed to do it). In Heritage’s terms, “the significance of any speaker’s communicative action is doubly contextual in being both context-shaped and context-renewing” (1984: 242).
There are all sorts of interactional ends to which language might be put, and the existence of recognised ‘rules’ or norms allows for the possibility of exploitation. Consider the following statement, made by Margaret Thatcher during her period as prime minister:
“We are a grandmother.”
The use of the plural form of the first person here is not accidental (she repeated it) and, since this is a form reserved for the reigning monarch, it is not insignificant. Coming at a time when Thatcher’s behaviour was becoming increasingly ‘regal’ and was recognised as such by contemporary satirists, this statement confirmed a view of herself which offended many but surprised few. In fact, she provides us here with an excellent example of the way in which our linguistic choices help to define us, our situation, and our relationship to those we are addressing. Of course, the ‘definition’ is not binding — I saw no evidence of people throwing themselves onto one knee and crying “Vivat Regina!” in response to her announcement.
The important relationship between the form an utterance takes and the circumstances in which it is used is captured by Hymes (1971:15) in his by now widely known aphorism that there are “rules of use without which the rules of grammar would be useless.” As teachers, we know that we must find a place for the ‘rules of use’ in our teaching because, as Thomas (1983: 96-97) has pointed out, people will readily forgive us for our grammatical errors, but we risk being branded as just plain rude if we express ourselves inappropriately. Unfortunately, this is easier said than done. Because of the complexity, extent and subtlety of such rules, they can be very hard to pin down, and a great deal of research energy in linguistics over the past quarter of a century has been expended in trying to understand them. This module serves as an introduction to some of that research and, more importantly, as an introduction to your own investigations in this area. With this in mind, I should now like to discuss briefly the key elements that underlie the module.
Before moving on, you might like to exchange examples of situations where an interactant or interactants have not settled quickly into the expected relationship. Here’s an example from my own experience:
I remember visiting my daughter’s primary school on the ‘new parents afternoon’ and sitting alongside other parents in a tiny chair which thrust my knees up to my chin. Just before the head teacher arrived, his deputy provided a short introduction. She bent forward and addressed us in her classroom voice, finishing by saying “And if any of you need to go, there’s a......(pointing and with a whisper) over there.” I found myself, in the company of all the other parents, nodding slowly and deliberately, with all the solemnity of a five year old.
Analysing Spoken Interaction
There are three key elements which underlie this module and are reflected in the assessment associated with it. So in going on to discuss these elements, I will be seeking not so much to offer definitions in the abstract as to explore how they relate to the aims of the module.
The emphasis on interaction reflects the essentially dynamic orientation of the module. As the title of the module suggests, we shall be concerned for the most part with spoken interaction, but this does not necessarily exclude non-verbal elements.
There is a close relationship between interaction and investigation, for the reasons already mentioned, but awareness is also important. We can learn about features of interaction from the literature and we can investigate various aspects of it for ourselves, but one of the most effective ways of developing an understanding of it is by increasing our own awareness of the interaction taking place around us. This is not quite as straightforward as it seems, but you can begin by reflecting on aspects of your own interactional experience. Try at least once a day to tune into the interaction around you — bearing in mind social sanctions on prying. Once you develop a ‘listening ear’ you’ll be surprised how interesting and varied everyday interaction can be. Below I’ve provided a couple of personal experiences to illustrate the sort of thing I have in mind.
Asking for coffee in Spanish cafés is always a trial for me: I know that ‘please’ is not required, but somehow it always seems to creep in. A participant on this course told me once that her Spanish husband was almost driven mad by her excessive use of ‘please’ and ‘thank you’.
An English speciality this, but even by our exaggerated standards something of an affliction in my case (the other day I found myself apologising when someone sat down next to me and dropped her car keys). Unfortunately, I’ve passed it on to my elder daughter, who is now in the process of apologising me to death.
There is a rich literature available to us in this field, but it can do no more than map the general territory and offer more detailed descriptions of selected areas. If we want to understand areas of language use particularly relevant to our own circumstances, we have no choice but to investigate them for ourselves.
Much of what passes for sociolinguistic enquiry is easy since it is only native speaker intuition. While there are areas of research where intuitions serve linguistics, one place where they serve nothing is the areas of direct, objective language use. ... Curriculum specialists, textbook authors, methodologists, and teachers, native speakers or not, have little justification in making unsupported judgements about actual occurrences of language in context. (Preston, 1989:3)
The advantages of investigation extend beyond the merely instrumental, for in the process of investigation itself we are likely to uncover new areas of interest and unforeseen perspectives which will inform our work and enrich our understanding. The process of research, if undertaken with proper commitment, is also a process of personal growth.
Research is often exhilarating but never easy: it is a messy and frustrating business which is deceptively represented by neatly packaged academic papers. If you want to understand it, you have to do it, and this module will provide you with the tools you need. I have chosen the term ‘investigating’ rather than ‘researching’ simply because the former has a practical and small-scale orientation which seems appropriate to the work which you will be expected to do when you come to tackle your project.
The importance of context in determining the form of the two utterances quoted at the beginning of this unit is clear enough, and it is equally clear that there is a relationship between any utterance and the context in which it is delivered. The problem lies in pinning this down, and in seeking to do so we must begin by recognising that there is no generally accepted notion of context:
Although there is no explicit theory of context, and the notion is used by different scholars with a wide variety of meanings, we may briefly define it as the structure of all properties of the social situation that are relevant for the production or the reception of discourse. (Van Dijk, 1997: 19)
Van Dijk’s definition will do as well as any, but it’s far too general to serve as a practical starting point for inquiry. This module will introduce you to modes of investigation designed to throw light on the relationship between context and linguistic choice and there’s a useful brief discussion of the concept of ‘context’ in Schiffrin (1994: 365-378), but we cannot hope ever to arrive at a complete description. The notion of context must remain throughout a subject for exploration.
In this introductory unit I’d like to offer two different perspectives on this relationship between language and context. The first explores it from the perspective of the choices which an individual might make. I’ll focus here on address forms as an illustration of the ways in which social rules operate. The second perspective is much broader, and here we’ll look at the relationship from the perspective of the social group. Membership of particular groups will constrain linguistic choice, and I’ll introduce two different descriptive systems for representing the membership of sociolinguistic communities.
Write down as many different forms of address as you can think of for ‘Deborah Talbot’. Try to relate these to situations in which they might be used and relationships which they might reflect.
When we talk, we usually have to settle on a way of addressing one another — that is, unless we make very special and usually awkward steps to avoid this — and the choice we make can reveal a great deal about our relationship. In what follows I’d like to take some time to explore this issue of address forms as an example of the sort of choice we find ourselves making virtually every day. First, though, we need to make a couple of straightforward distinctions.
We need to distinguish address forms, the terms we use to address people when we’re talking to them, from the way we refer to people and the way we summons them. We may use the same term for all these, but not necessarily. For example, I might refer to Sue Garton as ‘my colleague’, ‘the MSc Programmes Tutor’, or ‘Doctor Garton’, but when I address her directly I use her first name. Similarly, I might be summoned from the doctor’s waiting room as ‘Keith Richards’, but I would find it very odd to be addressed in this way during the consultation.
(At this point, it would be a good idea to go back to your list to see whether you’ve included examples which would be more likely to serve as forms of summons or reference than as forms of address.)
Your list of address forms for Deborah Talbot will probably look something like this:
It is extremely unlikely that it will look exactly like this, but the main elements should be the same. The examples at the top and bottom are possible but have a much more restricted range than the others, so I’ll begin by discussing the central group in the context of work which has been done on address forms.
FN and TLN
Even though your list of possibilities may not be the same as the range from ‘Dr Talbot’ to ‘Debs’ above, it is likely that it will reflect the basic distinction to be found there: that between the use of a title and last name (usually abbreviated to TLN), and the use of a first name (FN). The additional forms included in the list above are simply variations on the FN alternative. They may be important variations, of course, so that while acquaintances (and perhaps parents) use ‘Deborah’, close friends use ‘Debbie’, and ‘Debs’ is reserved exclusively for use by her partner. I have heard of parents who would respond to a request over the telephone to speak to ‘Debbie’ with feigned incomprehension followed by “Oh, you mean Deborah!” It’s a losing battle, of course, because although they chose the name on the birth certificate they do not have the right to insist on the use of this form in all social contexts.
Power and Solidarity
The importance of social relationships in determining address forms is the subject of a paper by Brown and Gilman (1960) which is widely recognised as a classic in the field. The paper makes use of the authors’ distinction between ‘T’ and ‘V’ forms, the T form being taken from the Latin familiar pronoun tu and the V form from the deferential vos (it is worth noting that the distinction between the two forms, not available in English, is roughly analogous to that between FN and TLN). Brown and Gilman’s fundamental point is that pronoun usage is governed by two semantics: power and solidarity. The power semantic, which the authors believe to have been the original one, is non-reciprocal because two people cannot have power over each other in the same area at the same time. Where it applies, the powerful person says T to the non-powerful one and receives the deferential (and non-reciprocal) V in return. Where there is no difference in power, the same pronoun is used reciprocally.
Solidarity and Address
A Swiss participant on this course who teaches adults in her native country offered an interesting example of the relevance of situation to pronoun choice. Her students seemed happy enough with a T-T relationship in class, but insisted on shifting to V-V once outside. When the teacher once used a T form in the corridor outside the classroom, it was made clear to her that this was not acceptable. In Brown and Gilman’s terms, the shift here is from solidary to non-solidary, which may to some extent reflect the special classroom situation, although we must also recognise the teacher’s power within that context to use T and ask for T in return.
(My description of Brown and Gilman’s position is necessarily brief, but a fuller summary is available in Fasold, 1990:Chp 1, which also offers illuminating examples of other research in this area).
Although power is an important factor in determining address forms, and was dominant at least up to the beginning of the last century, it is not the only one. In some cases there will be no power difference but a considerable difference in the extent to which speakers have things in common, and here the solidarity semantic will determine the choice of form. Where there is no power difference, and hence no basis for establishing a T-V relationship, the choice of T-T or V-V will be made depending on the degree of solidarity which applies, with T-T used where two people are close (or ‘solidary’) and V-V where they are distant.
Address forms are an important part of a larger semantic system relating to social relationships, and the address form ‘Aunt Deborah’ provides a good example of this. One of the things which marks the transition from youth to adulthood in this country is the dropping of kin terms: ‘Aunt Deborah’ becomes just plain ‘Deborah’. Sometimes the transition is invited by the recipient or requested by the speaker, but often the transition just ‘happens’ — what was socially unacceptable a few months ago is now perfectly legitimate. This shift marks a new relationship which has all sorts of social implications, and once it is made, as with any other rite of passage, it cannot be ‘unmade’.
‘Aunt Deborah’ is also interesting because it offers a good example of how power and solidarity can conflict. The use of ‘aunt’ reflects a power distance between the speakers, which will be particularly marked when the addressor is young and which therefore requires T-V. However, there is also a sense in which, as aunt and nephew/niece, the speakers are very ‘close’, so T-T might be thought to be more appropriate.
Brown and Gilman show that since the middle of this century the solidarity semantic has been more or less established as the dominant one. However, they recognise, and subsequent studies have confirmed, that there is considerable variation in pronoun use according to the background of the speaker. Different societies will have different rules about what constitutes solidarity, and even within one society there will be a range of factors influencing choice. Researchers also recognise that it’s possible to violate the rules in order to make a linguistic point. Recently, for example, I shifted from ‘Lou’ (my normal form of address) to ‘Louisa’ to make the point to my younger daughter that, contrary to her assumption, an issue between us had not yet been resolved. This prompted an apology from her and a return to our normal social (and linguistic) relationship.
So far, our examples have concentrated on what address forms most noticeably reflect: the relationship between the speakers involved. However, the setting may also be relevant to which address form is selected — as the examples at the top and bottom of my list illustrate.
The use of ‘Talbot’ as a reflection of a highly asymmetrical relationship, for example, is characteristic of certain institutional settings (e.g. public schools or the armed forces). It’s also interesting to note, as an example of historical change, that in the last century in England this form was also used between male friends (Holmes and Watson being a case in point). I chose to include ‘Wupsy-pups’ as an example of an address form which is not meant to be overheard. I came across it in a film, and although I assume that it is invented, there exists a class of address forms which are exclusive to two speakers when they are alone (i.e. in ‘private’ settings). In the film, the shift from ‘Debs’ to ‘Wupsy-pups’ corresponded with a move from the kitchen to the bedroom — different form of address, different place, different activity.
The extract from Foley (1997) in the TEXT section summarises many of the points made in this section and makes a useful connection with the work of Brown and Levinson (1987) on politeness.
What makes forms of address so interesting to the sociolinguist is this power to reflect relationships, even to the extent that the choice of a particular form of address can determine the status of a relationship. The following extract offers a forceful demonstration of this. You might like to develop your own analysis of it before reading the discussion which follows. You should be able to predict the salient facts about the interactants without being told (a full discussion of the exchange is to be found in Ervin-Tripp, 1986).
“What’s your name, boy?” the policeman asked. ...
“Dr Pouissaint. I’m a physician. ...”
“What’s your first name, boy? ...”
(Pouissaint, 1967. Quoted in Ervin-Tripp 1986)
On the surface, this is no more than an initial exchange in which an ‘acceptable’ form of address is established, but at a deeper level much, much more is happening. It opens, for example, with a direct insult. Dr Pouissaint is black, the policeman white, and the term ‘boy’ is used here as a marker of race, implicitly denying the recipient the normal rights associated with adult status in this community. At the same time, it establishes an asymmetrical relationship between the policeman and the doctor. Dr Pouissaint’s reply represents an implicit rejection of the policeman’s position since the use of the term ‘boy’ is not consistent with the use of a title and last name as a form of address. In fact, in reinforcing his claim with an explanation of why the title is appropriate, Dr Pouissaint is reversing the asymmetry, at least in so far as the term ‘Doctor’ is deferential (on this subject, it’s interesting to note that while members of the medical profession are addressed directly as ‘Doctor’, this does not extend to academics, where the title is always used with the last name). It is most certainly not appropriate for a stranger to address a doctor by his or her first name.
The policeman’s response is a blunt rejection of this: his explicit demand for a first name, made more emphatic by the repetition of ‘boy’, represents a denial of the doctor’s right to claim occupational or adult status. The reply, ‘Alvin’, is an acceptance of the policeman’s formulation of the situation and the status of the parties involved. The effect on the speaker is profound:
As my heart palpitated, I muttered in profound humiliation. ... For the moment, my manhood had been ripped from me. ... No amount of self-love could have salvaged my pride or preserved my integrity. (ibid.)
Linguistic choice may be largely determined by social factors, but it also derives its power from these, and its impact — as we see here — may be personally devastating.
‘They call me Mr Tibbs’
In the week that I write this, the above film is due to be shown on television. The film is set in the sixties, the same period as the exchange discussed here, and the title refers to a black policeman working in one of the Southern states of the USA.
Alternation and Co-occurrence
So far, we’ve looked at the choice of particular address forms, but linguistic selection doesn’t stop here; the decision to use a particular form is the product of a process which will also influence our other linguistic choices. There would be something downright odd, for example, about talking to ‘Dr Talbot’ (“Good morning, Dr Talbot.”) in the way that we would talk to ‘Debs’ (“Hi Debs, howzit goin’?”), and exceptions are likely to be funny or embarrassing. I well remember picking up the phone and confusing my new boss, who introduced himself as ‘Frank’, with my Liverpudlian brother-in-law, Frank, and delicately ‘renegotiating’ the casual exchanges which I had initially established. You might like to reflect on similar examples from your own experience.
The relationship between linguistic choice and social context formed the subject of a paper by Ervin-Tripp (1986) from which the heading of this section is taken. In fact, this early paper (its original, longer, version appeared in 1969) embraces the same territory as this course and much more besides, and although it raises many more questions than it answers and some of its speculations lead to dead ends, the result is an endlessly stimulating piece of work which is of more than historical interest. I mention this because in selecting only one aspect for discussion here, I’m hardly doing justice to the scope and penetration of the original.
The distinction at the heart of the paper offers a useful way of describing the selection process we have been discussing. If you look back at your list of address forms for Deborah Talbot, what this represents is a set of alternatives only one of which will be chosen in any particular situation. This choice among alternatives is what Ervin-Tripp refers to as alternation. She discusses the ‘alternation rules’ for the selection of address forms, drawing attention to the range of factors which influence such choice and to the importance of shared norms. As we have seen, the significance of any particular choice will depend on the context in which it is selected and the social rules which are relevant to this. The exchange involving Dr Pouissaint, for example, Ervin-Tripp refers to as ‘perfect’, because the impact of the policeman’s selection depends on the fact that both participants fully understand the address system in operation.
Once the choice is made, however, co-occurrence rules apply. These are the rules which determine that once ‘Debs’ has been selected the language used will be different from that accompanying ‘Dr Talbot’. There may, of course, be violations of such co-occurrence, sometimes quite crude and deliberate (“You really screwed up, Dr Talbot”) and at other times relatively minor and unintentional. “How’s it going” is a good example of the latter. Here, as Ervin-Tripp points out, a phrase from casual speech ends with the formal suffix ‘-ing’, which is less appropriate than the informal ‘-in’.
In fact, Ervin-Tripp further distinguishes the sequential ordering of items (which she calls ‘horizontal co-occurrence’) from the specific lexical and phonological choices which are made ( ‘vertical co-occurrence’), but this distinction seems to me to be a merely technical one. What matters is that violations of co-occurrence rules may be socially as well as linguistically relevant.
We can determine whether particular choices are relevant or not by identifying the rules of use which apply to them. One way of working out such rules is to collect lots of examples in order to see what patterns emerge. We might then find, for example, that, unless special dispensation has been granted, the kin term ‘aunt’ is used when the speaker is under the age of 16 and is addressing an older female relative who stands in this blood relationship. We may then be able to identify the full range of address forms available and to represent diagrammatically the system of choices available and the rules relating to them.
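To make the idea of such a rule system more concrete, here is a minimal sketch in Python of how a few of the ‘rules of use’ discussed above might be represented as a decision procedure. It is purely illustrative: the categories, thresholds and parameter names (speaker_age, solidary, dispensation and so on) are my own invented simplifications, not part of Ervin-Tripp’s flowcharts or of any published analysis, and a real system of choices would of course be far richer.

# A toy sketch of an address-form 'alternation rule' system, loosely in the
# spirit of Ervin-Tripp's flowcharts. All categories and thresholds here are
# invented simplifications for illustration only.

from dataclasses import dataclass

@dataclass
class Addressee:
    first_name: str
    last_name: str
    title: str = "Mr"         # e.g. "Dr", "Mrs", "Professor"
    is_kin: bool = False      # blood relative of the speaker
    is_female: bool = False
    older_than_speaker: bool = False

def address_form(speaker_age: int, addressee: Addressee,
                 solidary: bool, dispensation: bool = False) -> str:
    """Return a plausible address form given a handful of social facts.

    - Young speakers use a kin term ('Aunt Deborah') for older female
      relatives, unless 'special dispensation' has been granted.
    - Where the relationship is distant (non-solidary), title + last name
      (TLN) is selected.
    - Otherwise the first name (FN) is used.
    """
    if (addressee.is_kin and addressee.is_female
            and addressee.older_than_speaker
            and speaker_age < 16 and not dispensation):
        return f"Aunt {addressee.first_name}"
    if not solidary:
        return f"{addressee.title} {addressee.last_name}"
    return addressee.first_name

if __name__ == "__main__":
    deborah = Addressee("Deborah", "Talbot", title="Dr",
                        is_kin=True, is_female=True, older_than_speaker=True)
    print(address_form(12, deborah, solidary=True))    # Aunt Deborah
    print(address_form(30, deborah, solidary=False))   # Dr Talbot
    print(address_form(30, deborah, solidary=True))    # Deborah

Even a toy sketch like this makes the point that address-form choice can be modelled as a set of ordered, context-sensitive rules, which is essentially what a diagrammatic representation of the system of choices would capture.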
Any such representation will relate to a particular group, because the relevant rules are not universal, and this takes us to the second of our two perspectives, that of the group rather than the individual. Perhaps it is possible to characterise such groups in linguistic terms which will enable us to specify relevant rules of use. In fact, there have been at least two attempts to pin down the idea of community in linguistic terms, and these we shall now explore.
Decide at the beginning of a particular day that you are going to take note of the different ways in which you are addressed and the language choices associated with them. You should try, wherever possible, to note down examples, supporting them with as much relevant detail as possible (setting, speaker, topic etc.). At the end of the day, review your notes and reflect on the range of situations in which you have found yourself. Did any of them require particularly delicate negotiation? Were some of them routine and predictable? Were there any violations? What does this tell you about the different groups with which you interact? etc.
Before reading on, look at the following task.
Do you think it’s possible to identify communities solely in terms of the language they use? Can you foresee any problems with this?
At the most basic level, it seems fairly obvious that specific groups will have their own ways of speaking, elements of vocabulary which will be typical of them, and perhaps preferred topics. Perhaps, then, such groups can be identified in terms of their speech. This is the idea which lies behind the concept of a speech community, a term which appears often enough in the sociolinguistics literature, but usually as a fairly general reference.
The concept itself is based on the assumption that since language reflects society it should be possible to identify particular communities in terms of their talk. To put it more generally, linguistic rather than social criteria should provide an adequate basis for establishing social boundaries. However, efforts to pin down the concept more precisely have not been successful, and there seem to be fundamental difficulties associated with it.
Many of the problems arise from the fact that the effectiveness of any particular identification will depend on the extent to which specific groups in society can be identified, but as Bolinger (1975:333) notes, there is almost no limit to the criteria for, or range of, such groupings:
There is no limit to the ways in which human beings league themselves together for self-identification, security, gain, amusement, worship, or any of the other purposes that are held in common; consequently there is no limit to the number and variety of speech communities that are to be found in society.
Nevertheless, the concept of a speech community has proved to be an attractive one, and you’re probably familiar with the linguistic experience of moving from one community to another. I’ll therefore begin with a personal example of this, based on two very different environments, one associated with my childhood and the other with my post-university life.
I grew up in the fifties in what is normally described as a “solidly working class environment” and I now live in a town house in the middle of Stratford-upon-Avon, commuting to work at a university. The groups associated with these two very different environments rarely come into close proximity, but when they do I am often aware of the differences between them. A few years ago, for example, at my brother’s engagement party, I found myself sitting in a room with (male) friends and acquaintances from one group, clutching a can of beer on my knee, sniffing audibly and throwing my own crude contributions into what I can only describe as the communal ‘wit pit’ in the centre of the room. My wife was in the next room with other women and the children, and though I was occasionally conscious of my own feelings about this, I was also aware that any attempt to flout convention would lead only to embarrassment. When we left the house as a family and got into the car, I noticed that my accent and vocabulary had changed to a significant extent.
The situation is slightly problematic, however, because a distinction is often drawn between membership of a speech community and participation in it. In order to be a member, it is argued, one must share the normative system of the group. The fact that certain topics are not introduced into the conversation of the ‘childhood’ group when I am present is therefore significant. I am never told racist or sexist jokes, for example, because my reaction to them in the past has been noted, and it seems clear that I do not share the norms of the group in this respect. Perhaps, then, I am no more than a participant. As we shall now see, the situation is further complicated by the assumptions I am making here about the relationship between a group and a community.
Above I described an experience of moving between two very different groups. Try to think of the different groups to which you belong and, if possible, the linguistic elements which distinguish them. Next time you move from one to the other, note the changes which this involves. You might have the opportunity to study this discreetly if you spend time in a staffroom which has sufficient diversity.
If Bolinger is right then there might be a strong case for arguing that within the ‘childhood’ group my brother and I form a smaller group. If our partners are to be believed, we do have a particular way of talking to one another which is distinctive, and this must represent a prima facie case for arguing that we are in some sense a distinct group. The consequences of such a conclusion are clear enough, and it should therefore come as no surprise that many sociolinguists prefer to think in terms of speech networks rather than speech communities. Crudely put, these are essentially maps of who interacts with whom; so although my interaction with my brother has no significance in itself, as the map develops it will become clear that there are people we both interact with, and that some of these will interact with one another while others will not, etc. This seems straightforward enough, although some of the conclusions drawn on the basis of it have been challenged.
The idea of speech networks as an alternative to speech communities points towards a fairly fundamental objection which has been laid at the door of the concept of speech community. In my discussion of the two groups to which I belong, I assumed that group and community are more or less synonymous, but most sociolinguists would probably wish to deny this. It certainly makes sense to talk of specific groups within a wider community, but this merely compounds the problem of pinning down the entities we wish to deal with. The challenge facing sociolinguists who wish to work with the concept of a speech community lies in finding a way of identifying such communities in linguistic terms, which is not the same thing as beginning with a particular community and exploring linguistic features of interaction within that community. It is perfectly legitimate, of course, to begin with a defined community and work from there, but the whole idea behind the concept of a speech community is that the defining should be in linguistic terms.
I don’t think it’s worth pursuing the idea of speech networks any further, but if you want to have a look at the sorts of criticisms they’ve attracted, you could read Romaine (1982) or Williams (1992), who also criticises the concept of speech community.
Examine the range of definitions of speech community provided in the TEXT section. Try to identify any common features and decide whether these represent the essential features of any definition.
Most definitions of speech community seem to agree that the basic requirements are a shared language and shared norms of speaking (acceptable topics, forms of address etc.), but there is little agreement beyond this, and sometimes associated concepts are invoked in order to offer a fuller picture. Your own consideration of the problems associated with identifying a speech community in terms of a shared language will have given you a sense of the sort of difficulty involved in trying to pin down this concept. There are, for example, problems of community (Australians, Canadians, New Zealanders, Americans and the British all share the same language, but are distinct communities) and problems of language (historically, dialects marking a particular community may become absorbed into the dominant language, but at what point do they cease to exist as something distinct?).
It could be argued that if we can use things like a shared language and shared rules of speaking in order to identify distinct communities, we have the makings of an effective descriptive system. However, we still have to deal with individual variations and any exceptions we might meet, and at the moment all attempts to do so have had to fall back on other ways of characterising such groups, relying ultimately on the fact that rules of speaking are the product of group norms. Saville-Troike has offered a useful distinction between ‘hard-shelled’ and ‘soft-shelled’ communities, the former being communities which outsiders find it very difficult to penetrate. Drawing on this distinction, perhaps the best that can be said is that where communities are particularly hard-shelled definitional issues are less problematic. If you’re interested in reading more, Hudson (1996) offers an excellent brief discussion of the problems associated with the definition of a speech community.
Some pedagogic considerations
My aim in this rather selective presentation of the issues has been to show that attempts to bring together ‘speech’ and ‘community’ are fraught with problems. At the local level, we can analyse particular exchanges in terms of the factors influencing linguistic choice, and we can seek to identify norms and patterns, but large scale descriptions are more problematic. This module will provide you with the tools to analyse interaction at the local level and the background which will enable you to connect this with existing knowledge, but I wanted to spend time at this early stage showing how an apparently innocent and easy-to-grasp concept can raise more questions than it answers.
The point of all this is that such general concepts are naturally attractive, and as teachers we often take them for granted. Coursebooks assume, for example, that they are preparing students for entry into a particular community (or ‘communities’ if there are English and American versions), as though this is a straightforward business. So instead of preparing them for the challenge of responding to the many communities they may encounter, such books offer simple ‘representative’ examples from their hypothetical community. The false confidence which this can engender only adds to the difficulties students face when they try to interact with native speakers. This module offers no simple solution to such problems, but it is intended to act as a useful antidote to comfortable generalisations and as a spur to investigation which might lead to the production of more accurate and precisely focused materials.
This emphasis on the need to look closely at the particular should not be taken as a denial of the value of generalisations or of the possibility of identifying particular communities. I should like to conclude the unit by considering the concept of a discourse community, even though this concept has roots which are very different from those of the speech community. I’ve chosen it because it has been presented as a concept which has significant pedagogic relevance and because I can use it to make a point about the composition of this module.
In terms of understanding how this module is put together, it’s worth pausing for a moment to consider why the concept of discourse community doesn’t feature in the literature of sociolinguistics or the ethnography of communication.
At first sight speech community and discourse community seem to share very similar concerns, and the relatively close relationship between them has been recognised by writers discussing the concept of a discourse community. However, the traditions on which they draw are very different, and at a fundamental level it is difficult to reconcile the two concepts. The roots of the idea of a speech community, as we have seen, are sociolinguistic, and the connection between particular groups and their ways of speaking must be at the core of any definition. The idea of a discourse community, on the other hand, derives from studies of rhetoric, where the focus is firmly on the text, rather than on the group which produces the text. The insight which the concept of discourse community represents is that particular texts might in themselves be representative of a particular group, whose membership relies on certain forms of discourse in order to further its aims. We shall see in the next section that there are a number of very obvious differences between speech and discourse community, but these should not be allowed to obscure the deeper issue which divides the two.
The implications of this for your understanding of this module have to do with the traditions on which it draws. I’ve stated explicitly that I’m not interested in confining myself to a particular tradition because my aim is the essentially practical one of introducing you to ways of undertaking research relevant to your work. However, it would be foolish to assume that it is possible to do this properly without at least acknowledging the existence of fundamental differences between traditions. All I wish to argue is that this should not preclude our drawing on such traditions if this is appropriate, and I draw strength from the position which Fasold (1990:viii) adopts in his introduction to one of the standard introductions to sociolinguistics:
I present sociolinguistics as a series of topics with some connections between them, as was done in the companion book. The reason for this is that I am not able to detect an overall theory, even of the portion of sociolinguistics that is addressed here ...
If there is indeed no overall theory of sociolinguistics upon which to base a selection of topics, it seems to me even more apparent that neither is there a theoretical basis for the contents of this module. All I would wish to claim is that the principled selection offered here represents an acceptable picture of an important general area.
As with most definitions, we need to recognise that the more we try to pin down a particular concept, the more likely we are to find those who disagree with aspects of our position. This is a necessary qualification because in what follows I intend to work with the definition which Swales offers in his book on genre analysis. Not everyone would agree with all aspects of it, but it seems to me to capture the essential elements of the concept in a way that makes its practical relevance clear.
My approach will be to take each of the elements which Swales identifies and discuss it, with the intention of (a) offering explication of Swales’ claim, (b) presenting any reservations I have, and (c) pointing to any differences between the concepts of speech and discourse community. You might like to approach what follows by pausing between Swales’ point and my own discussion of it to consider these aspects for yourself. (If you prefer, you can simply refer to the summary of Swales’ points in the TEXT section before reading on.) Finally, you might reflect on any general reservations which you have about the concept, before going on to the next section.
If you have a copy of Swales (1990), you might also find it useful to compare my discussion of the relationship between speech and discourse communities with your own ideas and with Swales’ comparison (ibid.:23-24).
A discourse community has a broadly agreed set of common public goals.
The important element here is the idea of goals. As we shall see, it underlies assumptions which are made about the texts produced by any particular community, in so far as these are assumed to be goal-directed. In a sense, if we accept this idea of declared goals, it becomes much easier to go along with subsequent claims which are made on behalf of the discourse characteristic of a particular community. However, the extent to which such goals are ‘common’ and ‘public’ is open to question, and it is worth noting the use of the hedge ‘broadly’. It’s probably fair to say that it is possible to identify goals which are associated with particular discourse communities, but how far these are recognised by the members is another matter, and even more open to question is the extent to which the discourse of the community actually reflects these.
The introduction of goals as a defining element opens up an immediate distinction between this and the speech community. Even though such communities may have goals (and in most cases it would probably be hard to go beyond the very general goal of maintaining community identity), such goals are not likely to be publicly agreed. For this to be possible they would need to be made explicit. It is also worth noting that working from the idea of goals sidesteps the challenge of subjectivity which has been levelled at the concept of speech community.
A discourse community has mechanisms of intercommunication among its members.
The interesting feature here is the use of the term ‘mechanism’. Loosely interpreted, the description could apply to both speech and discourse communities, but I think the term captures well the essentially utilitarian nature of these mechanisms, which are designed — or have evolved — to serve the ends of the community. It’s also true that speech is the ‘mechanism’ on which sociolinguists focus, whereas the suggestion here is that for the discourse community there is a range of relevant mechanisms.
A discourse community uses its participatory mechanisms primarily to provide information and feedback.
This is where we see the difference between the two concepts emerging most strongly. A speech community will use its mechanisms for a variety of purposes, perhaps primarily for the maintenance of the social bonds which hold the community together, whereas the mechanisms of the discourse community are much more goal-directed. This difference derives from the first of Swales’ criteria and points to an element of design which is missing from the speech community. As we shall see, it is precisely this element of design which offers a basis for analysis which is likely to be of pedagogic value.
A discourse community utilizes and hence possesses one or more genres in the communicative furtherance of its aims.
This analysis is likely to be based on the genres which the community draws on when using its mechanisms of intercommunication (Swales’ initial work in the area involved the analysis of introductions to research articles, a specific element in an easily identifiable genre). The term genre is not particularly precise, and while some genres are easy enough to identify, others are much vaguer, so it would be a mistake to assume some sort of hierarchical arrangement where mechanisms can be broken down into genres and genres into structural elements and specific lexis (see below). At even the most superficial level this would not work because the same genre may be utilised in different discourse communities.
It’s interesting to note that the element of conscious design which I’ve already noted is to be found in this claim and that again aims feature prominently. It seems to me that there is a strong suggestion here of the deliberate exploitation of genres for defined communicative ends. Although it’s a minor point, I’m not happy about the use of ‘possesses’ here, because I can’t see any sense in which a genre can be possessed. It is in the nature of genres that they are available to be exploited, but the idea of possession suggests an exclusivity which seems to me to be inappropriate. Perhaps what Swales is trying to capture is the sense that there is something distinctive in the way in which these genres are exploited by particular communities (he does talk about assimilating ‘borrowed’ genres).
In order to decide whether I’m being fair to Swales here, you could read his own discussion of the subject in Genre Analysis, pp. 24-27.
In addition to owning genres, a discourse community has acquired some specific lexis.
That the use of ‘possess’ is not accidental is confirmed here, where we find the claim that communities own genres. Leaving this aside, though, this claim seems sound enough. Speech communities will also have their own lexis, although the specific lexis of a discourse community is perhaps more likely to be labelled jargon.
A discourse community has a threshold level of members with a suitable degree of relevant content and discoursal expertise.
This, perhaps more than anything else, highlights the formal element in the constitution of a discourse community which is missing from the speech community. True, there may be an informal ‘apprenticeship’ in a speech community, if the process of finding out what is and what is not linguistically acceptable can be so described. However, this is not a formal process and it is hard to imagine a newcomer undergoing the sort of explicit criticism and correction which Swales himself had to face when he contributed to a stamp magazine before he had grasped the formal rules relating to discussion in that forum.
This idea of membership, then, is a much more formal business, explicitly tied to matters not just of discourse but of relevant knowledge; and the right to participate — the right to membership — is dependent on recognised expertise. It might be said that this expertise has to be formally displayed, and it is the form of its representation which offers the researcher such a sound object for analysis.
If you wish to read more about issues of definition, membership etc., I’ve included some relevant extracts in the TEXT section.
Advantages and limitations
The products of a formally constituted group (however explicit that constitution may be), identified in terms of commonly agreed goals and underpinned by an explicit knowledge base, offer an attractive object of analysis. Part of the reason for this is that if the community is dependent on formal mechanisms of communication, analysis of these will offer insights into the community itself. At a fairly trivial level, one of the biggest differences between speech and discourse communities is that the basis for exchange in the former is speech and in the latter it tends to be writing (although not exclusively so). This provides a rich and accessible database for both community member and analyst, who share a common interest in understanding its construction.
Go through Swales’ points and for each one provide specific examples from the TESOL discourse community. Finally, decide how you would describe your place in the discourse community. There’s a summary of my own response in the Resources (page 32) at the end of the unit.
In fact, because of the important part that writing plays, it is possible to be a member of a discourse community without ever having met any other members face to face. Let’s take a hypothetical example, to give flesh to the bones of our description of a discourse community. Despite living on a small island in the South Pacific, where there is no railway, Harry has been obsessed with steam trains since he saw ‘The Night Mail’. He has never seen one in the flesh, although he plans a trip to Europe in a couple of years, when visits to historic railways and railway museums will remedy that. Since nobody else on the island has the slightest interest in his odd hobby, he’s had to look elsewhere for people who are willing to share the delights of different types of signal box on the Great Western Railway. As well as subscribing to railway magazines, he’s joined a number of railway societies and is a regular contributor to exchanges in their newsletters. With the advent of the internet he was quick to log on to the relevant lists.
In short, even though he has never met another enthusiast face to face, Harry is a member of the railway enthusiasts discourse community (and the smaller community with a special interest in signal boxes). The aims of the community are to keep alive the ‘spirit of steam’, promote the renovation of old lines and trains, exchange news about recent discoveries and developments in the field, etc. (apologies to any train enthusiast reading this if I misrepresent the situation — I’m drawing on a brief flirtation with trainspotting when I was about twelve). Participatory mechanisms include specially arranged trips, magazines, newsletters and conferences, and the genres utilised include the magazine article, the letter, the research article etc. Face-to-face encounters have so far eluded Harry, but when he goes to Europe he will meet people he has already come to know well, and they will settle quickly into the language of ‘bogeys’ and ‘double-headers’.
Bringing this closer to home, Swales’ insight that the concept of discourse community and, more specifically, genre analysis have considerable practical potential in the field of ESP is an important one, as his work on article introductions demonstrated. However, while it is also fair to say that things haven’t stopped there (his book includes other examples from the field), none of the work which has followed has quite lived up to the promise of his own pioneering exploration. The rate at which contributions continue to be made suggests that this is still a rich area, thanks to the formal aspects we have already discussed, but there is a limit to the extent to which specific genres can be pinned down. Swales’ work on article introductions has earned a place on any general EAP course worthy of the name, but the magnitude of his achievement serves only to emphasise the limitations of subsequent contributions. Even within Swales’ chosen genre, academic articles, once we move beyond the Introduction the pedagogic utility of what analysis can offer diminishes significantly.
If at least one researcher in this field (Rafoth, 1990:144) is to be believed, this limitation may arise at least in part from a weakness which we have already identified in the context of the speech community:
The problems identified in defining a speech community help to illustrate some of the obstacles in linking discourse community to any particular variety of writing or speech, except perhaps in the most immediate situations and localized contexts. In order to claim the existence of a discourse community, it may be argued, some set of features of the text or discourse — the conventional language — must be bounded.
It seems to me that this limitation has other consequences for the ESP teacher. In a nutshell, even though the teacher might be able to provide valuable help in precisely specified areas, the student still feels helplessly at sea outside these. So the business manager says thank you for your offer of lessons on presentations, but what he or she really needs is preparation for chat in the bar or over a meal. We are back into the vague and sticky realm of context, where the explicit goals of the discourse community are no longer the determinants of linguistic choice. And that, for good or ill, is where most of us are forced to live for most of the time.
Although I haven’t drawn any overall conclusions from the discussions in this unit, I hope that what I have offered provides the basis for a general position on the subject of interaction and context. I’ve tried to show that, although the two are intimately related, there are no simple rules or formulae for determining the ways in which they connect. However, I regard this as an incentive to investigation rather than as a cause for despair.
There is a rich field to be explored here, and there are a variety of ways of approaching this exploration. The aim of this module will be to introduce you to these approaches and the techniques of analysis associated with them, in the expectation that you will draw on the knowledge gained in order to explore spoken interaction in context for yourself. Such exploration has the potential not only to enrich your own professional environment but to contribute to our developing understanding of an important field.
In adopting this stance and in arguing that the composition of the module isn’t determined by any single theoretical perspective, I’m not for a moment advocating an anti-theoretical position. Just because there is no overarching theory which underpins all the areas we will explore and the approaches we will adopt, this doesn’t mean that these approaches are not informed by their own theories. This module is not designed to encourage you to scratch about on the surface of interaction in the expectation that descriptive accounts will translate easily into pedagogic currency. On the contrary, it should encourage you — if I’ve pitched it right — to investigate interactional practice sufficiently deeply to generate theoretical insights into the relationship between interaction and context. If it succeeds in this it will therefore contribute to the process of ‘becoming theoretical’ which began in the Foundation Module.
This has been a fairly wide ranging unit, but I think it was essential at the outset to clear the ground for what follows. These are the things I’ve tried to do. If any of them aren’t clear to you, go back to the relevant sections and work through them again. It may be that you have missed something or that I’ve failed to get my point across clearly enough.
• I started with the very simple point that language varies and that much research has been directed towards finding out the factors which affect choice.
• I moved on to some of the key elements in the module, emphasising the importance of research, suggesting that you try to increase your awareness of interaction, and indicating some of the problems of defining context.
• A discussion of address forms provided us with a concrete example of ways in which this choice (and the rules relating to it) operates.
• Alternation and co-occurrence extended this beyond address forms to a more general statement about choice among alternatives and the implications once selection has taken place.
• The shift to speech community focused attention on an important factor in language choice, but also revealed that attempts at general description may be fraught with difficulty at the conceptual level.
• Finally, I introduced the concept of discourse community because I wanted to show how concepts from very different academic traditions can have valuable things to contribute to our understanding of language choice.
A Japanese woman offers tea
1 Ocha?
(to own children)
2 Ocha do?
(to own children, friends younger than self, own younger brothers and sisters)
3 Ocha ikaga?
[tea how-about (polite)]
(to friends of the same age, own older brothers and sisters)
4 Ocha ikaga desu ka?
[tea how-about (polite) is Q]
(to husband (h), own parents, own aunts and uncles, h’s younger brothers and sisters)
5 Ocha wa ikaga desu ka?
[tea topic how-about (polite) is Q]
(to own grandparents)
6 Ocha ikaga desho ka?
[tea how-about (polite) is (polite) Q]
(to h’s elder brothers and sisters)
7 Ocha wa ikaga desho ka?
[tea topic how-about (polite) is (polite) Q]
(to teachers, h’s parents, h’s boss, h’s grandparents)
(to a guest of very high position in society)
Saville-Troike M. 1989. The Ethnography of Communication. Second Edition. Oxford: Blackwell. Page 53.
If you’re interested in this area and the wider issue of what Foley calls ‘social deixis’, it would be worth reading Chapter 16 in Foley (1997), which includes a discussion of Japanese honorifics (pp. 318-323).
“Mutual FN is the most common form of address in American English; Americans try to get on a “first name basis” as soon as sufficient common interests and common background are established to make a reasonable assertion of solidarity. Mutual TLN is typically used only between newly introduced adults (although even here the relationship may start with mutual FN if the interactants are roughly equal in age and occupational background so as to suggest a presupposition of common interests). Newly introduced American adults will try to find a basis for solidarity in common interests and background in the early stages of their interaction so as to switch as quickly as possible to mutual FN. Interlocutors of the same generation and sex find this easiest to do, so they are the most rapid in their transition to mutual FN, but any variable based on shared life history and values, like religious affiliation, kinship, school or university attended, nationality or ethnicity, and even common experiences may do. If, on the other hand, two newly introduced people have a clear differential in occupation and status entitlement, like a doctor and his male patient, quick transition to mutual FN may not occur. Rather, the superior may address the inferior with FN, but continue to receive TLN. So, when the doctor and the patient first introduce themselves mutual TLN are used: Doctor Wilson — Mr Barrett. After the professional relationship has been established, the superior may shift to FN to indicate increased common background and familiarity, in short, solidarity but continue to receive TLN (such shifts to solidarity forms are properly the initiative of the superior person; initiatives from the inferior person may be rebuffed, if the superior feels solidarity is not sufficiently established, causing embarrassment to both parties). Only later, if ever, will mutual FN be adopted, when the inferior feels the relationship is sufficiently solidary. ...
“Although FN and TLN are the most common forms of address in American English they do not exhaust the repertoire of individual Americans. For example, in addition to TLN, T alone is also an option: Doctor, Professor, Mister, Madam. This is typically used with occupations or positions of high status or when the last name is unknown, so that extreme social distance is needed. The generalized title has an impersonalizing effect on the addressee (see Brown and Levinson, 1987:190-205), so that absolutely no claim of solidarity based on shared personal interests is possible. At the opposite extreme, there are many alternatives to FN to express claims of extremely high solidarity. Thus with very close friends, nicknames are commonplace: Scotty, Geordie, Will. And with our intimates, the options are truly amazing: sweetheart, honey, darling, among hundreds of other, often very idiosyncratic, forms.
It is worthwhile pointing out the similarities between this discussion of T/FN and V/TLN address forms and Brown and Levinson’s (1987) concern with positive and negative face/politeness. The T/FN forms are associated with positive face/politeness, suggesting closeness and solidarity between the interlocutors. The V/LN forms, on the other hand, are linked to negative face/politeness, expressing lack of intrusion on the individual’s space and rights, in a word, social distance. The asymmetrical use indicates that the inferior attends to the superior’s negative face by using V/TLN, indicating his perceived higher status and consequent power, but receives from the superior the T/FN forms, not so much to suggest closeness and solidarity, but as a suggestion of dependence — of the inferior to the superior’s discretion in his use of power to pursue his own interests.”
Foley W A. 1997. Anthropological Linguistics: An Introduction. Oxford: Blackwell.
If you are in any doubt about the power of address forms to define a relationship, consider the following extract from a newspaper report on the proceedings in an industrial tribunal:
“Feelings of bad blood between Alison Halford, the senior police officer who claims that sex discrimination blocked her promotion, and her chief constable were disclosed in a letter read to an industrial tribunal yesterday.
“Eldred Tabachnik, QC, representing Miss Halford, said that an initial honeymoon period when she arrived as assistant chief constable of Merseyside evaporated after six months and she wrote to James Sharples, chief constable, accusing him of a ‘vitriolic and unfounded attack’ on her. The letter to Mr Sharples said: ‘When I tried to defend myself, you became more vehement in your attitude and we moved from Alison to Miss Halford to madam ... your attitude seems mercurial, inconsistent and unpredictable.’” The Times 14.5.92
Contrary to the claim in her letter, this “attitude” seems anything but mercurial and inconsistent: the address forms chart the decline of a relationship in sadly predictable terms.
Here is an even more blatant example of using a form of address in order to convey a message:
“Just before he [Ray Illingworth] left Yorkshire [cricket club] (the first time) he received a letter from the secretary saying that they did not intend to offer him a contract, which began ‘Dear Ray Illingworth’ — but the ‘Ray’ had been crossed out. As Illingworth wryly observed: ‘They couldn’t even bring themselves to call me by my first name or use a fresh piece of paper.’” The Observer 5.6.94
The same article contains the following gem from an announcement at a cricket match, this time related to a form of reference:
“No longer is it necessary for the public address system to crackle into life as it did in 1950, when Fred Titmus made his county debut: ‘Ladies and gentlemen, a correction to your scorecard; for F.J. Titmus, read Titmus, F.J.’” The Observer 5.6.94
In order to understand the subtle message which this was intended to convey you need to know that at this time there was a straightforward division in cricket between amateur ‘gentlemen’ and professional ‘players’ (although the distinction was nearing its end). The former, who were addressed in terms of TLN by the latter, were accorded the respect associated with their ‘superior’ social status and enjoyed privileges which were denied to the players. The latter were addressed by their last name only and expected to make do with relatively crude facilities. With typical English subtlety, this social division was reflected by the way names appeared on the programme: initials followed by name indicated a ‘gentleman’, while players were entered last name first. The coded message in this announcement is, “We’ve made an awful mistake — Titmus is a player!”
In this final example (originally used to illustrate a different interactional feature), a police constable under review has just complained about a statement the reviewing inspector has made. Note that the inspector’s argument derives much of its force from the use of TLN:
Inspector: Yeah well yes well what you’re basically saying is that um Detective Inspector Jenkins is wrong, Detective Inspector er Miller is wrong er Acting Superintendent until recently Chief Inspector Butler is wrong Chief Inspector Walker is wrong all these people are wrong but Barry you are right.
Constable: You know I can’t take them on sir.
Thomas J. 1984. Cross-cultural discourse as unequal encounter: towards a pragmatic analysis. Applied Linguistics 5(3) 228-235.
Definitions of Speech Community
1. "A speech community is a group of people who interact by means of speech." (Bloomfield, 1935:42. Quoted in Hudson, 1980).
2. "the speech community: any human aggregate characterised by regular and frequent interaction by means of a shared body of verbal signs and set off from similar aggregates by significant differences in language use." (Gumperz J J. 1962. Types of linguistic communities. Anthropological Linguistics 4(1) 28-40. Page 31).
3. “The speech community is not defined by any marked agreement in the use of language elements, so much as by participation in a set of shared norms; these norms may be observed in overt types of evaluative behaviour .... and by the uniformity of abstract patterns of variation which are invariant in respect of particular levels of usage ..." (Labov, 1972:120. Quoted in Hudson, 1980).
4. "A speech community is defined, then, tautologically but radically, as a community sharing knowledge of rules for the conduct and interpretation of speech. Such sharing comprises knowledge of at least one form of speech, and knowledge also of its patterns of use. Both conditions are necessary. Since both kinds of knowledge may be shared apart from common membership in a community, an adequate theory of language requires additional notions, such as language field, speech field, and speech network, and requires the contribution of social science in characterising the notions of community, and of membership in a community. (Hymes D. 1977. Foundations in Sociolinguistics. London: Tavistock. Page 51).
5. "There is no limit to the ways in which human beings league themselves together for self-identification, security, gain, amusement, worship, or any of the other purposes that are held in common; consequently there is no limit to the number and variety of speech communities that are to be found in society.” (Bolinger, 1975:333. Quoted in Hudson, 1980).
6. “members of the same speech community need not all speak the same language nor use the same linguistic forms on similar occasions. All that is required is that there be at least one language in common and that the rules governing basic communicative strategies be shared so that the speakers can decode the social meanings carried by alternative modes of communication.” (Gumperz J J. 1972. Introduction. In J J Gumperz & D Hymes (Eds) Directions in Sociolinguistics. Oxford: Blackwell. P. 16).
7. “To participate in a speech community is not quite the same as to be a member of it. Here we encounter the limitation of any conception of speech community in terms of knowledge alone, even knowledge of patterns of speaking as well as grammar, and of course any definition in terms of interaction alone. Just the matter of accent may erect a barrier between participation and membership in one case, although be ignored in another. Obviously membership in a community depends upon criteria which in the given case may not even saliently involve language and speaking, as when birthright is considered indelible.” (Hymes D. 1977. Foundations in Sociolinguistics. London: Tavistock. Pages 50-51).
8. “Individuals, particularly in complex societies, may thus participate in a number of discrete or overlapping speech communities, just as they participate in a variety of social settings. Which one or ones a person orients himself or herself to at any moment — which set of rules he or she uses — is part of the strategy of communication. To understand this phenomenon, it is necessary to recognize that each member of a community has a repertoire of social identities, and each identity in a given context is associated with a number of appropriate verbal and nonverbal forms of expression.” (Saville-Troike M. 1989. The Ethnography of Communication (2nd Ed). Oxford: Blackwell. Page 20).
1. A discourse community has a broadly agreed set of common public goals.
2. A discourse community has mechanisms of intercommunication among its members.
3. A discourse community uses its participatory mechanisms primarily to provide information and feedback.
4. A discourse community utilizes and hence possesses one or more genres in the communicative furtherance of its aims.
5. In addition to owning genres, a discourse community has acquired some specific lexis.
6. A discourse community has a threshold level of members with a suitable degree of relevant content and discoursal expertise.
Swales J M. 1990. Genre Analysis. Cambridge: Cambridge University Press.
Aspects of the Discourse Community
“The use of the term ‘discourse community’ testifies to the increasingly common assumption that discourse operates within conventions defined by communities, be they academic disciplines or social groups. The pedagogies associated with writing across the curriculum and academic English now use the notion of ‘discourse communities’ to signify a cluster of ideas: that language use in a group is a form of social behaviour, that discourse is a means of maintaining and extending the group’s knowledge and of initiating new members into the group, and that discourse is epistemic or constitutive of the group’s knowledge.”
Herzberg B. 1986. The politics of discourse communities. Paper presented at the CCC Convention, New Orleans, La, March, 1986. (Quoted in J M Swales, 1990, Genre Analysis, Cambridge: CUP.)
Institutional vs Interdisciplinary/Social
“Studies in scientific and technical communication that identify the discourse community with particular institutions, either disciplinary or organizational, suggest the utility of studying, and teaching, the assumptions, norms, and practices, including the communication practices, of these institutions. For example, researchers in scientific and technical communication might study established principles of readability such as the use of headings or topic sentences as they apply to scientific journal articles, and they might study variations and deviations from these principles such as the omission of headings and other formatting devices in a journal ... In contrast, studies in scientific and technical communication that identify the discourse community with the larger interdisciplinary and social community suggest the need to study, and teach, modes of communication that cut across the boundaries that separate disciplines and organizations from each other and from the public. For example, researchers might study the organizational or social criteria that apply to research when it is reported in an applied research journal or in a proposal to the National Science Foundation....
“Studies such as these, both actual and potential, suggest the need to teach students not only to communicate within the context of several discourse communities but also, and especially, to develop the ability to step outside the boundaries of particular discourse communities and to participate in conversations with others on problems of mutual interest and concern.”
Zappen J P. 1989. The discourse community in scientific and technical communication: institutional and social views. Journal of Technical Writing and Communication 19(1) 1-11. Pages 8-9.
‘Descriptive’ and ‘Explanatory’
“To the extent that language functions not only to reproduce the dominant order but to resist it as well, we have two types of uses for the concept of discourse community. The one — descriptive — is based on models of linguistic reproduction and is by now fairly familiar: The conventions of a discourse community, to the extent that they serve established interests of a particular group, are deliberately or tacitly imposed by members of this community on initiates or outsiders.... Carried further, explanatory adequacy serves an even greater purpose: Opposition to the conventions of a discourse community, insofar as it reflects minority or underrepresented interests, emerges to resist the established interests and bring about change in (or toward) the values and behaviour of those in power. Here, discourse community helps to provide students with the critical perspective needed to develop, in Giroux’s words, a self-managed existence.”
Rafoth B A. 1990. The concept of discourse community: descriptive and explanatory adequacy. In G Kirsch & D H Roen (Eds) A Sense of Audience in Written Communication pp140-152. Newbury Park: Sage. Page 149.
In what follows I offer only the barest outline of a response. As long as your own response is along the same lines, you have nothing to worry about.
A discourse community has a broadly agreed set of common public goals.
I suppose that the most general formulation of the goal of our discourse community is the promotion of effective English language teaching. Within that it’s possible to identify a number of subordinate goals (the dissemination and promotion of good practice in TESOL, the dissemination of the latest research findings in the field, etc.).
A discourse community has mechanisms of intercommunication among its members.
The list is long and would certainly include teachers’ groups, seminars, conferences, newsletters, professional and academic journals, e-mail lists, and professional and academic courses.
A discourse community uses its participatory mechanisms primarily to provide information and feedback.
Any of the above would provide examples of this. For example, a newsletter might include news about a forthcoming event, a report on a recent workshop, a call for papers for a conference, an exchange of letters on a topic of interest, a review of recent publications, an article on a teacher’s action research project, an article on the pedagogic implications of some recent findings in the field of lexis, an advertisement for an academic course, some tips for ending a lesson with a bang, a social calendar, etc.
A discourse community utilizes and hence possesses one or more genres in the communicative furtherance of its aims.
Again, the list is long. Prominent in any consideration would be such genres as the academic paper, the conference paper, the news article and the (discussion) letter.
In addition to owning genres, a discourse community has acquired some specific lexis.
The examples you provide here are likely to depend on which module you have already completed. It’s unlikely, for example, that you started this course with any idea of what anaphoric reference meant, but it will be standard fare after the AWD module. Similarly, trained ESOL (note the acronym — another example of specific lexis) teachers will talk happily of open and closed questions but the difference may need to be explained to an outsider.
A discourse community has a threshold level of members with a suitable degree of relevant content and discoursal expertise.
It’s hard to say what a threshold might be in our field, but given its size the issue is unlikely to arise. Your own position is an interesting one, though. I would argue (and I freely admit that this is open to challenge) that you are close to full membership of the community as a whole. Your professional qualifications and experience are more than adequate for participation in local and perhaps national teachers’ groups and the mechanisms of communication associated with them (newsletters, conferences, etc.), but you have yet to move to a point where you’re ready to publish in leading international journals. Progress to that point is certainly possible on the basis of this course, so in discourse community terms that’s where you’re headed.
Having said all that, there are communities and communities, and I know that there are people who would claim that while I might have the right to claim membership of the applied linguistics discourse community, the teachers’ discourse community is closed to me. I happen to think it’s a fruitless debate, but it is about community and membership.
Extra Task 1
Read the passage below and identify features in it which relate to the discussion of address forms and alternation and co-occurrence in the unit. You could begin by identifying where the interaction ‘goes wrong’ and compare the ‘before’ and ‘after’ situations. There is a discussion of the passage on the next page, but you should not read this until you are satisfied that you have identified as many relevant features as you can.
Participants: Paul Roberts Marketing Manager
Deborah Talbot Administrative Assistant
Scene: Corridor outside Paul Roberts’ office
Time: 8.25 (work starts at 9.00)
PR: So how are you settling into the new flat then, Deborah?
DT: Fine thanks, Mr Roberts.
PR: Redecorating from top to bottom I suppose.
DT: No, not yet — it’ll take weeks to sort out the unpacking.
PR: Still living out of boxes then?
DT: Well, there always seems something more important to do.
PR: You ought to get it done, you know. The sooner it’s out of the way the better.
DT: Well you can talk.
PR: Pardon?
DT: I mean, well, I was just thinking of the move here. You know, the boxes in your office. They took ages to move, them, didn’t they?
PR: Oh, I see.
DT: They do, though, don’t they?
PR: I think we had things pretty well under control. Which reminds me, Ms Talbot, could you make sure the minutes of that last board meeting are out by 11 o’clock this morning. Make that a priority.
DT: Yes, Mr Roberts.
Before beginning my analysis, I think it’s important to stress that this extract is not authentic. In future units all extracts will be authentic, but in this case I have found it impossible to track down a suitable example and have therefore produced the text myself. It is adequate for the purposes of this task, but you should not regard it as anything more than a convenient heuristic. It is not a reliable guide to authentic interaction.
You probably spotted fairly quickly where the turning point is. In fact, if this were a recording of an authentic exchange, there would almost certainly be a pause between “Well you can talk” and “Pardon”. The “Pardon” may well not have occurred at all. Whatever the case, it is clear that a change takes place in the interaction following Deborah’s comment. Let’s begin by looking at what has happened up to that point.
The exchange takes place in the workplace, but in a neutral area and outside work time, so the social topic seems appropriate. Notice, though, that the asymmetrical relationship between the two interactants is clearly marked. The fact that Paul uses FN to address Deborah but receives TLN in reply provides sufficiently clear evidence of this, but there are other aspects of the interaction which are also worth noting. For example, Paul initiates the exchange, and Deborah’s role is essentially responsive, so there is a sense in which he might be said to be in control. He also feels free to offer advice on her actions in her private life (something which would normally be the privilege of friends), and some would argue that in doing so he transgresses the boundaries of what is acceptable. At any rate, this advice is what prompts Deborah’s, “Well you can talk.”
Paul’s response to this indicates that it is inappropriate, that Deborah has violated the norms which apply in this asymmetrical relationship by challenging his own actions in an unacceptable way (although this would be appropriate if she were responding to the advice of friends). Deborah immediately seeks to repair the damage by explaining what prompted her comment and attempting to shift the topic to an earlier move. Paul withholds agreement but acknowledges the explanation. (As we shall see in the unit on conversation analysis, “Oh” often serves as a ‘change of state token’ which indicates that the hearer’s state of knowledge has changed as a result of the speaker’s utterance.) Deborah once more attempts to involve him in a discussion of the problems of moving (“They do...”) but receives a curt and quite formal response which closes the topic.
The initiative is now once more with Paul, and we can see a marked contrast between the exchange which follows and the earlier one. Here are the features which seem to me to be significant:
1. By comparison with the earlier ‘social’ exchanges, Paul’s turn is long. Generally speaking, long turns are not features of brief social exchanges (although they may be, depending on the interactants and the topic), but they often feature in instruction-giving.
2. Paul switches the topic abruptly to business. There are two things worth noting here. The first is the abruptness of the switch, which is not characteristic of ordinary conversation, where more gradual shifts are common (Jefferson has identified what she calls a ‘step-wise transition’ from one topic to another). There is some concession to the need to mark such sudden shifts, however, in the use of “Which reminds me...”. The second aspect of interest is the topic itself. Particular topics are appropriate to particular situations and particular relationships, and by shifting to business (perhaps inappropriately, given the time) Paul firmly re-establishes the asymmetrical relationship which Deborah’s “Well you can talk” had violated.
3. The asymmetry is confirmed by the switch to TLN when addressing Deborah. This effectively marks the move from a ‘social’ relationship to an ‘office’ relationship, where Paul is the boss. Such shifts are the norm in some cultures, where colleagues switch from TLN to FN when they leave the workplace and switch back on their return (I have seen German lunches quoted as an example of this).
4. The imperative which concludes Paul’s turn is stronger than the “ought” which featured in his advice on unpacking. Again, the former may be characteristic of their office relationship.
The passage ends with Deborah’s explicit assent to Paul’s instruction and implicit acceptance of the office relationship.
At this point you may be worrying about differences between your analysis and mine. Such differences are inevitable, and it is important to regard my analyses of any text as no more than an example. If you have identified the important points (which in this case boil down to the asymmetrical relationship and the change of address form and topic following Deborah’s “Well you can talk.”), this is all that really matters — there is no definitive analysis. There may be aspects of the interaction mentioned above which you have not considered, or there may be features which I have ignored and which you consider important. In either case, this represents a useful basis for further discussion.
Extra Task 2
This is a task which you might consider undertaking when you have finished the unit. It’s the sort of task which you can undertake quite informally if you wish. I’ve laid it out below on a step-by-step basis, but there is no reason why you should complete every step. The amount of time you dedicate to this task, if any, is up to you. This will depend on your approach, which could range from simply keeping your ears peeled to setting up and following through a detailed study.
• Identify a context with which you are familiar (e.g. your staffroom, university department, office).
• Collect examples of the address forms used there.
• Try to work out the rules which apply to the use of these forms and summarise these.
• If possible, represent the rules diagrammatically (there are examples in the chapter in Fasold discussed in the Study section below).
• Discuss your findings with other participants on the course.
• Choose your setting carefully, and be aware that some situations are very sensitive. If you have any doubts about the wisdom of focusing on a particular setting (e.g. social or professional repercussions), choose another.
• Don’t be tempted to tell anyone what you’re doing: they may decide to ‘help’ you by putting on a show.
• Take notes on site if possible (i.e., if you can do so privately), but in any case as soon afterwards as you can.
• Don’t jump to conclusions. Watch carefully and build up as detailed a picture as you can before developing your description.
Address forms
These are the terms used to address individuals (‘Mr Smith’, ‘Steve’, etc.). Forms of address need to be distinguished from forms used to refer to people (‘my colleague, Mr Smith’) and to summon people (‘Stephen Smith (report to reception).’)
Alternation
This refers to the choice among linguistic alternatives. Taking address forms as an example, there is more than one way of addressing an individual, and what begins as a ‘Mr Smith’ relationship may move to a ‘Steve’ relationship in which a sudden reversion to ‘Mr Smith’ may be communicatively significant. Once a choice has been made, this has significance for other linguistic choices (see co-occurrence).
Context
This is an extremely difficult concept to tie down. Broadly speaking, we shall take it to refer to the (structure of) features of a social situation which influence the nature of interaction in that situation. This is very similar to a definition offered by Van Dijk and quoted in the unit.
Co-occurrence
Once a choice among linguistic alternatives has been made (see alternation), this will have implications for the form of the rest of the discourse. For example, ‘Mr Smith’ is likely to co-occur with “Can I just pass this to you?” and not “’ere, cop ‘old of this.”
Discourse community
The idea of a discourse community is based on the recognition that it is possible to identify specific (often professional or academic) groups with shared public aims, whose discourse reflects those aims and serves to represent the shared knowledge of the community. Mechanisms of intercommunication within the community (e.g. academic journals, conferences, newsletters) can be identified and the discourse represented in them analysed.
Kin terms
These are terms used to address members of one’s family. For example, ‘mummy’ or the ‘uncle’ in ‘uncle Fred’ are kin terms. These are also examples of address forms.
Speech community
There are numerous definitions of this term, but all of them are underpinned by the idea that specific communities, consisting of people who interact regularly together, can be identified (i.e. distinguished from other groups) in terms of the way they use language. It is usually assumed that the minimal requirements for participation in a speech community are a shared language and shared rules of use. The sociolinguistic roots of this concept are different from the rhetorical tradition from which the concept of the more formally constituted discourse community derives.
Schiffrin D. 1994. Approaches to Discourse. Oxford: Basil Blackwell. Chapter 1 (pp5-19) and Chapter 10 (pp362-383).
Kumatoridani, T. 1999. ‘Alternation and co-occurrence in Japanese thanks’. Journal of Pragmatics.
There are other introductions to the general field (e.g. that offered in Van Dijk 1997), and if you find these more amenable there’s no reason why you shouldn’t also read them. Debates about what fits in where and which definitions are most accurate are reasonably common, so the more you read the more balanced your view is likely to be. However, it’s best to let this develop over the course rather than trying to force it into some sort of definite shape now.
Bolinger D. 1975. Aspects of Language. 2nd Edition. New York: Harcourt Brace Jovanovich.
Brown R & Gilman A. 1960. The pronouns of power and solidarity. In T Sebeok (Ed) Style in Language pp 253-276. Cambridge Mass: Massachusetts Institute of Technology.
Brown P & Levinson S. 1987. Politeness. Cambridge: Cambridge University Press.
Ervin-Tripp S. 1986. On sociolinguistic rules: alternation and co-occurrence. In J Gumperz & D Hymes (Eds) Directions in Sociolinguistics pp 213-250. Oxford: Basil Blackwell.
Fasold R. 1990. The Sociolinguistics of Language. Oxford: Basil Blackwell.
Foley W A. 1997. Anthropological Linguistics: An Introduction. Oxford: Blackwell. (Chapter 16)
Gumperz, J. J. 1962. Types of linguistic communities. Anthropological Linguistics 4(1) 28-40.
Gumperz J & D Hymes (Eds). 1986. Directions in Sociolinguistics. Oxford: Blackwell.
Heritage J. 1984. Garfinkel and Ethnomethodology. Oxford: Basil Blackwell.
Hudson R A. 1996. Sociolinguistics (2nd Ed). Cambridge: Cambridge University Press.
Hymes D. 1971. On Communicative Competence. Philadelphia: University of Pennsylvania Press.
Hymes D. 1977. Foundations in Sociolinguistics. London: Tavistock.
Preston D. 1989. Sociolinguistics and Second Language Acquisition. Oxford: Blackwell.
Rafoth B A. 1990. The concept of discourse community: descriptive and explanatory adequacy. In G Kirsch & D H Roen (Eds) A Sense of Audience in Written Communication pp 140-152. Newbury Park: Sage.
Romaine S. 1982. Sociolinguistic Variation in Speech Communities. London: Arnold.
Saville-Troike M. 1989. The Ethnography of Communication. Second Edition. Oxford: Basil Blackwell.
Schiffrin D. 1994. Approaches to Discourse. Oxford: Basil Blackwell.
Swales J M. 1990. Genre Analysis: English in Academic and Research Settings. Cambridge: Cambridge University Press.
Thomas J. 1983. Cross-cultural pragmatic failure. Applied Linguistics 4(2) 91-112.
Thomas J. 1984. Cross-cultural discourse as unequal encounter: towards a pragmatic analysis. Applied Linguistics 5(3) 228-235.
Van Dijk T A. 1997. Discourse as interaction in society. In Van Dijk 1997b.
Van Dijk T A (Ed). 1997a. Discourse as Structure and Process. London: Sage.
Van Dijk T A (Ed) 1997b. Discourse as Social Interaction. London: Sage.
Williams G. 1992. Sociolinguistics: A Sociological Critique. London: Routledge.
Zappen J P. 1989. The discourse community in scientific and technical communication: institutional and social views. Journal of Technical Writing and Communication 19(1) 1-11.
Journey to the center of the earth: Discovery sheds light on mantle formation
Uncovering a rare, two-billion-year-old window into the Earth’s mantle, a University of Houston professor and his team have found our planet’s geological history is more complex than previously thought.
Jonathan Snow, assistant professor of geosciences at UH, led a team of researchers in a North Pole expedition, resulting in a discovery that could shed new light on the mantle, the vast layer that lies beneath the planet’s outer crust. These findings are described in a paper titled “Ancient, highly heterogeneous mantle beneath Gakkel Ridge, Arctic Ocean,” appearing recently in Nature.
These two-billion-year-old rocks that time forgot were found on the Arctic Ocean floor, unearthed during research voyages in 2001 and 2004 to the Gakkel Ridge, an approximately 1,000-mile-long underwater mountain range between Greenland and Siberia. This massive underwater mountain range forms the border between the North American and Eurasian plates beneath the Arctic Ocean, where the two plates diverge.
These were the first major expeditions ever undertaken to the Gakkel Ridge, and these latest published findings are the fruit of several years of research and millions of dollars spent to retrieve and analyze these rocks.
The mantle, the rock layer that comprises about 70 percent of the Earth’s mass, sits several miles below the planet’s surface. Mid-ocean ridges like Gakkel, where mantle rock slowly pushes upward to form new volcanic crust as the tectonic plates move apart, are among the places geologists look for clues about the mantle. Gakkel Ridge is unique because it features – at some locations – the least volcanic activity and most mantle exposure ever discovered on a mid-ocean ridge, allowing Snow and his colleagues to recover many mantle samples.
“I just about fell off my chair,” Snow said. “We can’t exaggerate how important these rocks are – they’re a window into that deep part of the Earth.”
Venturing out aboard a 400-foot-long research icebreaker, Snow and his team sifted through thousands of pounds of rocks scooped up from the ocean floor by the ship’s dredging device. The samples were labeled and cataloged and then cut into slices thinner than a human hair to be examined under a microscope. That is when Snow realized he had found something that, for many geologists, is as rare and fascinating as moon rocks – mantle rocks devoid of sea floor alteration. Analysis of the isotopes of osmium, a noble metal rarer than platinum, within the mantle rocks indicated that they were two billion years old. The use of osmium isotopes underscores the significance of the results, because using them for this type of analysis is still a new, innovative and difficult technique.
Since the mantle is slowly moving and churning within the Earth, geologists believe the mantle is a layer of well-mixed rock. Fresh mantle rock wells up at mid-ocean ridges to create new crust. As the tectonic plates move, this crust slowly makes its way to a subduction zone, a plate boundary where one plate slides underneath another and the crust is pushed back into the mantle from which it came.
Because this process takes about 200 million years, it was surprising to find rocks that had not been remixed inside the mantle for two billion years. The discovery of the rocks suggests the mantle is not as well-mixed or homogenous as geologists previously believed, revealing that the Earth’s mantle preserves an older and more complex geologic history than previously thought. This opens the possibility of exploring early events on Earth through the study of ancient rocks preserved within the Earth’s mantle.
The rocks were found during two expeditions Snow and his team made to the Arctic, each lasting about two months. The voyages were undertaken while Snow was a research scientist at the Max Planck Institute in Germany, and the laboratory study was done by his research team that now stretches from Hawaii to Houston to Beijing.
Since Snow came to UH in 2005, his work stemming from the Gakkel Ridge samples has continued, with more research needed to determine exactly why these rocks remained unmixed for so long. Further study using a laser microprobe technique for osmium analysis available only in Australia is planned for next year.
Source: University of Houston
Geologists Discover New Way of Estimating Size and Frequency of Meteorite Impacts
Scientists have developed a new way of determining the size and frequency of meteorites that have collided with Earth.
Their work shows that the size of the meteorite that likely plummeted to Earth at the time of the Cretaceous-Tertiary (K-T) boundary 65 million years ago was four to six kilometers in diameter. The meteorite was the trigger, scientists believe, for the mass extinction of dinosaurs and other life forms.
François Paquay, a geologist at the University of Hawaii at Manoa (UHM), used variations (isotopes) of the rare element osmium in sediments at the ocean bottom to estimate the size of these meteorites. The results are published in this week's issue of the journal Science.
When meteorites collide with Earth, they carry a different osmium isotope ratio than the levels normally seen throughout the oceans.
"The vaporization of meteorites carries a pulse of this rare element into the area where they landed," says Rodey Batiza of the National Science Foundation (NSF)'s Division of Ocean Sciences, which funded the research along with NSF's Division of Earth Sciences. "The osmium mixes throughout the ocean quickly. Records of these impact-induced changes in ocean chemistry are then preserved in deep-sea sediments."
Paquay analyzed samples from two sites, Ocean Drilling Program (ODP) site 1219 (located in the Equatorial Pacific), and ODP site 1090 (located off of the tip of South Africa) and measured osmium isotope levels during the late Eocene period, a time during which large meteorite impacts are known to have occurred.
"The record in marine sediments allowed us to discover how osmium changes in the ocean during and after an impact," says Paquay.
The scientists expect that this new approach to estimating impact size will become an important complement to a more well-known method based on iridium.
Paquay, along with co-author Gregory Ravizza of UHM and collaborators Tarun Dalai from the Indian Institute of Technology and Bernhard Peucker-Ehrenbrink from the Woods Hole Oceanographic Institution, also used this method to make estimates of impact size at the K-T boundary.
Even though this method works well for the K-T impact, it would break down for an event larger than that: the meteorite contribution of osmium to the oceans would overwhelm existing levels of the element, researchers believe, making it impossible to sort out the osmium's origin.
Under the assumption that all the osmium carried by meteorites is dissolved in seawater, the geologists were able to use their method to estimate the size of the K-T meteorite as four to six kilometers in diameter.
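To make the logic of such an estimate concrete, here is a minimal sketch of the two-step calculation involved: an isotope mass balance to infer how much osmium an impactor delivered to the oceans, followed by a conversion to impactor size. Every number in it (the pre- and post-impact 187Os/188Os ratios, the ocean's osmium inventory, and the osmium content and density assumed for a chondritic impactor) is an illustrative assumption rather than a figure from the study, and the mixing is deliberately simplified.

    # Illustrative osmium mass-balance sketch; all numerical values are assumed, not taken from the paper.
    import math

    R_seawater  = 1.06   # assumed pre-impact 187Os/188Os of seawater
    R_meteorite = 0.13   # assumed chondritic 187Os/188Os
    R_observed  = 0.50   # hypothetical post-impact ratio read from the sediment record

    # Fraction of the ocean's osmium that would need to be meteoritic to shift the
    # ratio from R_seawater to R_observed (simplified two-endmember mixing).
    f_meteoritic = (R_seawater - R_observed) / (R_seawater - R_meteorite)

    ocean_os_grams   = 1.0e10        # assumed dissolved osmium inventory of the oceans, in grams
    os_from_impactor = f_meteoritic * ocean_os_grams

    impactor_os_g_per_g = 500e-9     # assumed osmium content of a chondritic impactor (500 ppb)
    impactor_mass_g     = os_from_impactor / impactor_os_g_per_g

    density_g_per_m3 = 2.5e6         # assumed bulk density of the impactor (2.5 g/cm3)
    volume_m3 = impactor_mass_g / density_g_per_m3
    radius_m  = (3 * volume_m3 / (4 * math.pi)) ** (1.0 / 3.0)
    print(f"implied impactor diameter: {2 * radius_m / 1000:.1f} km")

With these made-up inputs the sketch yields a diameter of roughly 2 km; the published four-to-six-kilometre estimate rests on the real measured ratios and inventories.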
The potential for recognizing previously unknown impacts is an important outcome of this research, the scientists say.
"We know there were two big impacts, and can now give an interpretation of how the oceans behaved during these impacts," says Paquay. "Now we can look at other impact events, both large and small."
Source: National Science Foundation
ScienceDaily (Apr. 26, 2008) — Geologists studying deposits of volcanic glass in the western United States have found that the central Sierra Nevada largely attained its present elevation 12 million years ago, roughly 8 or 9 million years earlier than commonly thought.
The finding has implications not only for understanding the geologic history of the mountain range but for modeling ancient global climates. "All the global climate models that are currently being used strongly rely on knowing the topography of the Earth," said Andreas Mulch, who was a postdoctoral scholar at Stanford when he conducted the research. He is the lead author of a paper published recently in the online Early Edition of the Proceedings of the National Academy of Sciences.
A variety of studies over the last five years have shown that the presence of the Sierra Nevada and Rocky Mountains in the western United States has direct implications for climate patterns extending into Europe, Mulch said. "If we did not have these mountains, we would completely change the climate on the North American continent, and even change mean annual temperatures in central Europe," he said. "That's why we need to have some idea of how mountains were distributed over planet Earth in order to run past climate models reliably." Mulch is now a professor of tectonics and climate at the University of Hannover in Germany.
Mulch and his colleagues, including Page Chamberlain, a Stanford professor of environmental earth system science, reached their conclusion about the timing of the uplift of the Sierra Nevada by analyzing hydrogen isotopes in water incorporated into volcanic glass.
Because so much of the airborne moisture falls as rain on the windward side of the mountains, land on the leeward side gets far less rain—an effect called a "rain shadow"—which often produces a desert.
The higher the mountain, the more pronounced the rain shadow effect is and the greater the decrease in the number of heavy hydrogen isotopes in the water that makes it across the mountains and falls on the leeward side of the range. By determining the ratio of heavier to lighter hydrogen isotopes preserved in volcanic glass and comparing it with today's topography and rainwater, researchers can estimate the elevation of the mountains at the time the ancient water crossed them.
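As a rough illustration of that reasoning, the sketch below converts a hypothetical difference in hydrogen isotope composition (expressed as deltaD values) into an elevation estimate using an assumed isotopic lapse rate. All three numbers are invented for the example and are not values from the study.

    # Paleo-elevation sketch from hydrogen isotopes in volcanic glass; all values are illustrative assumptions.
    dD_windward_rain  = -80.0    # hypothetical deltaD (per mil) of rain at low elevation on the windward side
    dD_leeward_glass  = -140.0   # hypothetical deltaD (per mil) preserved in glass on the leeward side
    lapse_rate_per_km = -20.0    # assumed change in deltaD per kilometre of elevation gained (per mil/km)

    # The extra isotopic depletion recorded on the leeward side implies a proportionally higher barrier upwind.
    paleo_elevation_km = (dD_leeward_glass - dD_windward_rain) / lapse_rate_per_km
    print(f"implied paleo-elevation of the range: about {paleo_elevation_km:.1f} km")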
Volcanic glass is an excellent material for preserving ancient rainfall. The glass forms during explosive eruptions, when tiny particles of molten rock are ejected into the air. "These glasses were little melt particles, and they cooled so rapidly when they were blown into the atmosphere that they just froze, basically," Mulch said. "They couldn't crystallize and form minerals."
Because glass has an amorphous structure, as opposed to the ordered crystalline structure of minerals, there are structural vacancies in the glass into which water can diffuse. Once the glass has been deposited on the surface of the Earth, rainwater, runoff and near-surface groundwater are all available to interact with it. Mulch said the diffusion process continues until the glass is effectively saturated with water.
The samples they studied ranged from slightly more than 12 million years old to as young as 600,000 years old, a time span when volcanism was rampant in the western United States owing to the ongoing subduction of the Pacific plate under the continental crust of the North American plate.
Until now, researchers have been guided largely by "very good geophysical evidence" indicating that the range reached its present elevation approximately 3 or 4 million years ago, owing to major changes in the subsurface structure of the mountains, Mulch said.
"There was a very dense root of the Sierra Nevada, rock material that became so dense that it actually detached and sank down into the Earth's mantle, just because of density differences," Mulch said. "If you remove a very heavy weight at the base of something, the surface will rebound."
The rebound of the range after losing such a massive amount of material should have been substantial. But, Mulch said, "We do not observe any change in the surface elevation of the Sierra Nevada at that time, and that's what we were trying to test in this model."
However, Mulch said he does not think his results refute the geophysical evidence. It could be that the Sierra Nevada did not evolve uniformly along its 400-mile length, he said. The geophysical data indicating the loss of the crustal root is from the southern Sierra Nevada; Mulch's study focused more on the northern and central part of the range. In the southern Sierra Nevada, the weather patterns are different, and the rain shadow effect that Mulch's approach hinges on is less pronounced.
"That's why it's important to have information that's coming from deeper parts of the Earth's crust and from the surface and try to correlate these two," Mulch said. To really understand periods in the Earth's past where climate conditions were markedly different from today, he said, "you need to have integrated studies."
The research was funded by the National Science Foundation.
Adapted from materials provided by Stanford University
Addendum: This article was reproduced in part; the full text can be accessed at www.sciencedaily.com/ h
Rocks under the northern ocean are found to resemble ones far south
Scientists probing volcanic rocks from deep under the frozen surface of the Arctic Ocean have discovered a special geochemical signature until now found only in the southern hemisphere. The rocks were dredged from the remote Gakkel Ridge, which lies under 3,000 to 5,000 meters of water; it is Earth’s most northerly undersea spreading ridge. The study appears in the May 1 issue of the leading science journal Nature.
The Gakkel extends some 1,800 kilometers beneath the Arctic ice between Greenland and Siberia. Heavy ice cover prevented scientists from getting at it until the 2001 Arctic Mid-Ocean Ridge Expedition, in which U.S. and German icebreakers cooperated. This produced data showing that the ridge is divided into robust eastern and western volcanic zones, separated by an anomalously deep segment. That abrupt boundary contains exposed unmelted rock from Earth’s mantle, the layer that underlies the planet’s hardened outer shell, or lithosphere.
By studying chemical trace elements and isotope ratios of the elements lead, neodymium, and strontium, the paper’s authors showed that the eastern lavas, closer to Siberia, display a typical northern hemisphere makeup. However, the western lavas, closer to Greenland, show an isotopic signature called the Dupal anomaly. The Dupal anomaly, whose origin is intensely debated, is found in the southern Indian and Atlantic oceans, but until now was not known from spreading ridges of the northern hemisphere. Lead author Steven Goldstein, a geochemist at Columbia University’s Lamont-Doherty Earth Observatory (LDEO), said that this did not suggest the rocks came from the south. Rather, he said, they might have formed in similar ways. “It implies that the processes at work in the Indian Ocean might have an analog here,” said Goldstein. Possible origins debated in the south include upwelling of material from the deep earth near the core, or shallow contamination of southern hemispheric mantle with certain elements during subduction along the edges of the ancient supercontinent of Pangea.
At least in the Arctic, the scientists say they know what happened. Some 53 million years ago, what are now Eurasia and Greenland began separating, with the Gakkel as the spreading axis. Part of Eurasia’s “keel”—a relatively stable layer of mantle pasted under the rigid continent and enriched in certain elements that are also enriched in the continental crust—got peeled away. As the spreading continued, the keel material got mixed with “normal” mantle that was depleted in these same elements. This formed a mixture resembling the Dupal anomaly. The proof, said Goldstein, is that the western Gakkel lavas appear to be mixtures of “normal” mantle and lavas coming from volcanoes on the Norwegian/Russian island of Spitsbergen. Although Spitsbergen is an island, it is attached to the Eurasian continent, and its volcanoes are fueled by melted keel material.
“This is unlikely to put an end to the debate about the origin of the southern hemisphere Dupal signature, as there may be other viable explanations for it,” said Goldstein. “On the other hand, this study nails it in the Arctic. Moreover, it delineates an important process within Earth’s system, where material associated with the continental lithospheric keel is transported to the deeper convecting mantle.”
Source: The Earth Institute at Columbia University
How deep is Europe?
The Earth's crust is, on global average, around 40 kilometres deep. In relation to the Earth's total diameter of approximately 12,800 kilometres this appears rather shallow, but it is precisely these upper kilometres of the crust, the human habitat, that are of special interest to us.
Europe's crust shows astonishing diversity: for example, the crust under Finland is as deep as one would only expect under a mountain range such as the Alps. It is also surprising that the crust under Iceland and the Faroe Islands is considerably deeper than typical oceanic crust. This is explained by M. Tesauro and M. Kaban from GeoForschungsZentrum Potsdam (GFZ) and S. Cloetingh from the Vrije Universiteit in Amsterdam in a recent publication in the renowned scientific journal "Geophysical Research Letters". GFZ is the German Research Centre for Geosciences and a member of the Helmholtz Association.
Intensive investigation of the Earth's crust has been under way for many years. However, different research groups in Europe have mostly concentrated on individual regions, so a high-resolution, consistent overall picture has not been available to date. The present study fills this gap: by incorporating the latest seismological results, a digital model of the European crust has been created. This new, detailed picture also makes it possible to minimize the interfering effects of the crust when examining the deeper interior of the Earth.
A detailed model of the Earth's crust, i.e. from the upper layers to a depth of approximately 60 km, is essential for understanding the many millions of years of development of the European continent. This knowledge supports the discovery of commercially important ore deposits or crude oil on the continental shelf and, more generally, the use of the subsurface, for example for the sequestration of CO2. It also contributes to the identification of geological hazards such as earthquakes.
Citation: Tesauro, M., M. K. Kaban, and S. A. P. L. Cloetingh (2008), EuCRUST-07: A new reference model for the European crust, Geophys. Res. Lett., 35, L05313, doi:10.1029/2007GL032244.
Source: Helmholtz Association of German Research Centres
by Barry Ray
Tallahassee FL (SPX) May 02, 2008
Working with colleagues from NASA, a Florida State University researcher has published a paper that calls into question three decades of conventional wisdom regarding some of the physical processes that helped shape the Earth as we know it today.
Munir Humayun, an associate professor in FSU's Department of Geological Sciences and a researcher at the National High Magnetic Field Laboratory, co-authored a paper, "Partitioning of Palladium at High Pressures and Temperatures During Core Formation," that was recently published in the peer-reviewed science journal Nature Geoscience.
The paper provides a direct challenge to the popular "late veneer hypothesis," a theory which suggests that all of our water, as well as several so-called "iron-loving" elements, were added to the Earth late in its formation by impacts with icy comets, meteorites and other passing objects.
"For 30 years, the late-veneer hypothesis has been the dominant paradigm for understanding Earth's early history, and our ultimate origins," Humayun said. "Now, with our latest research, we're suggesting that the late-veneer hypothesis may not be the only way of explaining the presence of certain elements in the Earth's crust and mantle."
To illustrate his point, Humayun points to what is known about the Earth's composition.
"We know that the Earth has an iron-rich core that accounts for about one-third of its total mass," he said. "Surrounding this core is a rocky mantle that accounts for most of the remaining two-thirds," with the thin crust of the Earth's surface making up the rest.
"According to the late-veneer hypothesis, most of the original iron-loving, or siderophile, elements" -- those elements such as gold, platinum, palladium and iridium that bond most readily with iron -- "would have been drawn down to the core over tens of millions of years and thereby removed from the Earth's crust and mantle. The amounts of siderophile elements that we see today, then, would have been supplied after the core was formed by later meteorite bombardment. This bombardment also would have brought in water, carbon and other materials essential for life, the oceans and the atmosphere."
To test the hypothesis, Humayun and his NASA colleagues -- Kevin Righter and Lisa Danielson -- conducted experiments at Johnson Space Center in Houston and the National High Magnetic Field Laboratory in Tallahassee. At the Johnson Space Center, Righter and Danielson used a massive 880-ton press to expose samples of rock containing palladium -- a metal commonly used in catalytic converters -- to extremes of pressure and temperature equal to those found more than 300 miles inside the Earth.
The samples were then brought to the magnet lab, where Humayun used a highly sensitive analytical tool known as an inductively coupled plasma mass spectrometer, or ICP-MS, to measure the distribution of palladium within the sample.
"At the highest pressures and temperatures, our experiments found palladium in the same relative proportions between rock and metal as is observed in the natural world," Humayun said. "Put another way, the distribution of palladium and other siderophile elements in the Earth's mantle can be explained by means other than millions of years of meteorite bombardment."
The potential ramifications of his team's research are significant, Humayun said.
"This work will have important consequences for geologists' thinking about core formation, the core's present relation to the mantle, and the bombardment history of the early Earth," he said. "It also could lead us to rethink the origins of life on our planet."
Ancient mineral shows early Earth climate tough on continents
A new analysis of ancient minerals called zircons suggests that a harsh climate may have scoured and possibly even destroyed the surface of the Earth's earliest continents.
Zircons, the oldest known materials on Earth, offer a window in time back as far as 4.4 billion years ago, when the planet was a mere 150 million years old. Because these crystals are exceptionally resistant to chemical changes, they have become the gold standard for determining the age of ancient rocks, says UW-Madison geologist John Valley.
Valley previously used these tiny mineral grains — smaller than a speck of sand — to show that rocky continents and liquid water formed on the Earth much earlier than previously thought, about 4.2 billion years ago.
In a new paper published online this week in the journal Earth and Planetary Science Letters, a team of scientists led by UW-Madison geologists Takayuki Ushikubo, Valley and Noriko Kita show that rocky continents and liquid water existed at least 4.3 billion years ago and were subjected to heavy weathering by an acrid climate.
Ushikubo, the first author on the new study, says that atmospheric weathering could provide an answer to a long-standing question in geology: why no rock samples have ever been found dating back to the first 500 million years after the Earth formed.
"Currently, no rocks remain from before about 4 billion years ago," he says. "Some people consider this as evidence for very high temperature conditions on the ancient Earth."
Previous explanations for the missing rocks have included destruction by barrages of meteorites and the possibility that the early Earth was a red-hot sea of magma in which rocks could not form.
The current analysis suggests a different scenario. Ushikubo and colleagues used a sophisticated new instrument called an ion microprobe to analyze isotope ratios of the element lithium in zircons from the Jack Hills in western Australia. By comparing these chemical fingerprints to lithium compositions in zircons from continental crust and primitive rocks similar to the Earth's mantle, they found evidence that the young planet already had the beginnings of continents, relatively cool temperatures and liquid water by the time the Australian zircons formed.
"At 4.3 billion years ago, the Earth already had habitable conditions," Ushikubo says.
The zircons' lithium signatures also hold signs of rock exposure on the Earth's surface and breakdown by weather and water, identified by low levels of a heavy lithium isotope. "Weathering can occur at the surface on continental crust or at the bottom of the ocean, but the [observed] lithium compositions can only be formed from continental crust," says Ushikubo.
The findings suggest that extensive weathering may have destroyed the Earth's earliest rocks, he says.
"Extensive weathering earlier than 4 billion years ago actually makes a lot of sense," says Valley. "People have suspected this, but there's never been any direct evidence."
Carbon dioxide in the atmosphere can combine with water to form carbonic acid, which falls as acid rain. The early Earth's atmosphere is believed to have contained extremely high levels of carbon dioxide — maybe 10,000 times as much as today.
"At [those levels], you would have had vicious acid rain and intense greenhouse [effects]. That is a condition that will dissolve rocks," Valley says. "If granites were on the surface of the Earth, they would have been destroyed almost immediately — geologically speaking — and the only remnants that we could recognize as ancient would be these zircons."
by David Tenenbaum
for Astrobiology Magazine
Moffett Field (SPX) Jul 15, 2008
The oldest rocks so far identified on Earth are one-half billion years younger than the planet itself, so geologists have relied on certain crystals as micro-messengers from ancient times. Called zircons (for their major constituent, zirconium), these crystals "are the kind of mineral that a geologist loves," says Stephen Mojzsis, an associate professor of geological sciences at the University of Colorado at Boulder.
"They capture chemical information about the melt from which they crystallize, and they preserve that information very, very well," even under extreme heat and pressure.
The most ancient zircons yet recovered date back 4.38 billion years. They provide the first direct data on the young Earth soon after the solar system coalesced from a disk of gas and dust 4.57 billion years ago. These zircons tend to refute the conventional picture of a hot, volcanic planet under constant assault by asteroids and comets.
One modern use for the ancient zircons, Mojzsis says, is to explore the late heavy bombardment, a cataclysmic, 30- to 100-million-year period of impacts that many scientists think could have extinguished any life that may have been around 4 billion years ago.
With support from a NASA Exobiology grant, Mojzsis has begun examining the effect of impacts on a new batch of zircons found in areas that have been hit by more recent impacts. Some will come from the Sudbury, Ontario impact zone, which was formed 1.8 billion years ago.
"We know the size, velocity and temperature distribution, so we will be looking at the outer shell of the zircons," which can form during the intense heat and pressure of an impact, he says. A second set of zircons was chosen to span the Cretaceous-Tertiary (KT) impact of 65 million years ago, which exterminated the dinosaurs.
"The point is to demonstrate that the Hadean zircons show the same type of impact features as these younger ones," Mojzsis says. The Hadean Era, named for the hellish conditions that supposedly prevailed on Earth, ended about 3.8 billion years ago.
The oldest zircons indicate that Earth already had oceans and arcs of islands 4.45 to 4.5 billion years ago, just 50 million years after the gigantic collision that formed the moon. At that time, Mojzsis says, "Earth had more similarities than differences with today. It was completely contrary to the old assumption, based on no data, that Earth's surface was a blasted, lunar-like landscape."
Zircons are natural timekeepers because, during crystallization, they incorporate radioactive uranium and thorium, but exclude lead. As the uranium and thorium decay, they produce lead isotopes that get trapped within the zircons.
By knowing the half-lives of the decay of uranium and thorium to lead, and the amount of these elements and their isotopes in the mineral, it's possible to calculate how much time has elapsed since the zircon crystallized.
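The age calculation described in that paragraph reduces to the standard decay equation, t = ln(1 + D/P) / lambda, where D/P is the measured ratio of daughter atoms to remaining parent atoms and lambda is the decay constant. Here is a minimal sketch for the 238U-206Pb system; the half-life is a well-established constant, but the measured ratio is a hypothetical value chosen to give an age close to that of the oldest zircons.

    # U-Pb age sketch: the half-life is a known constant; the measured ratio below is hypothetical.
    import math

    HALF_LIFE_U238_YEARS = 4.468e9
    decay_constant = math.log(2) / HALF_LIFE_U238_YEARS   # per year

    # Hypothetical measured ratio of radiogenic 206Pb atoms to remaining 238U atoms in a zircon.
    pb206_per_u238 = 0.95

    age_years = math.log(1 + pb206_per_u238) / decay_constant
    print(f"apparent 238U-206Pb age: {age_years / 1e9:.2f} billion years")   # about 4.31 billion years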
Zircons carry other information as well. Those that contain a high concentration of the heavier oxygen isotope O-18, compared to the more common O-16, crystallized in magma containing material that had interacted with liquid water.
A new "titanium thermometer," developed by Bruce Watson of Rensselaer Polytechnic Institute and Mark Harrison of the University of California at Los Angeles, can determine the temperature of crystallization based on the titanium concentration.
Both these analyses showed that zircons from as far back as 4.38 billion years ago crystallized in relatively cool conditions, such as at subduction zones where water and magma interact at the intersection of tectonic plates.
To Mojzsis, the message from the most ancient zircons is this: just 50 million years after a mammoth impact formed the moon, Earth had conditions we might recognize today, not the hellish conditions long favored by the conventional viewpoint.
For reasons related to the orbital dynamics of the solar system, that bucolic era was brutally interrupted about 3.96 billion years ago by the "late heavy bombardment," a period of intense asteroid impacts that churned the planet's surface.
The zircons record this period in the form of a narrow, 2-micron-thick zone that most likely formed during a brief exposure to very high temperature. Careful radioactive dating shows that these zones formed essentially simultaneously, even in Hadean zircons of different ages, Mojzsis says. "We found the most amazing thing. These zircons, even if the core ages are different, all share a common 3.96 billion year age for this overgrowth."
The zones also record "massive loss of lead, which happens when the system is heated quite catastrophically and then quenched," Mojzsis adds.
"So it looks like these zircons were sort of cauterized by some process" that both built up the zone and allowed the lead to escape. The cause, he says, was likely "some extremely energetic event" at 3.96 billion years ago, a date that "correlates very nicely to other estimates of the beginning of the late heavy bombardment."
The intense impacts of this period would seem to have exterminated any life that had formed previously. And yet Mojzsis says this conclusion may be overturned by the zircon data.
"From the Hadean zircons we can understand further what the thermal consequences for the crust were, and test our models for habitability during the late heavy bombardment. Most people think it sterilized Earth's surface, but our analysis says that is not the case at all. For a microbial biosphere at some depth in crustal rocks and sediments, impact at the surface zone did not matter," he says.
Indeed, University of Colorado post-doctoral student Oleg Abramov has calculated that the habitable volume of Earth's crust actually increased by a factor of 10 for heat-loving thermophiles and hyperthermophiles during the impacts, Mojzsis says.
This raises the possibility that life survived the period of heavy impacts. "The bombing, however locally devastating, creates quite an ample supply of hydrothermal altered rock and hydrothermal systems, worldwide," says Mojzsis.
Although that's bad for organisms that require cool conditions, "thermophiles do not even notice," he says.
"This goes back to an old idea, maybe the late heavy bombardment pruned the tree of life, and selected for thermophiles. Whatever the diversity of life was like before the late heavy bombardment, afterwards it was diminished, and all life henceforth is derived from these survivors."
Columbus OH (SPX) Jul 29, 2008
A single typhoon in Taiwan buries as much carbon in the ocean -- in the form of sediment -- as all the other rains in that country all year long combined.
That's the finding of an Ohio State University study published in a recent issue of the journal Geology.
The study -- the first ever to examine the chemistry of stream water and sediments that were being washed out to sea while a typhoon was happening at full force -- will help scientists develop better models of global climate change.
Anne Carey, associate professor of earth sciences at Ohio State, said that she and her colleagues have braved two typhoons since starting the project in 2004. The Geology paper details their findings from a study of Taiwan's Choshui River during Typhoon Mindulle in July of that year.
Carey's team analyzes water and river sediments from around the world in order to measure how much carbon is pulled from the atmosphere as mountains weather away.
They study two types of weathering: physical and chemical. Physical weathering happens when organic matter containing carbon adheres to soil that is washed into the ocean and buried.
Chemical weathering happens when silicate rock on the mountainside is exposed to carbon dioxide and water, and the rock disintegrates. The carbon washes out to sea, where it eventually forms calcium carbonate and gets deposited on the ocean floor.
If the carbon gets buried in the ocean, Carey explained, it eventually becomes part of sedimentary rock, and doesn't return to the atmosphere for hundreds of millions of years.
Though the carbon buried in the ocean by storms won't solve global warming, knowing how much carbon is buried offshore of mountainous islands such as Taiwan could help scientists make better estimates of how much carbon is in the atmosphere -- and help them decipher its effect on global climate change.
Scientists have long suspected that extreme storms such as hurricanes and typhoons bury a lot of carbon, because they wash away so much sediment. But since the sediment washes out to sea quickly, samples had to be captured during a storm to answer the question definitively.
"We discovered that if you miss sampling these storms, then you miss truly understanding the sediment and chemical delivery of these rivers," said study coauthor and Ohio State doctoral student Steve Goldsmith.
The researchers found that, of the 61 million tons of sediment carried out to sea by the Choshui River during Typhoon Mindulle, some 500,000 tons consisted of particles of carbon created during chemical weathering. That's about 95 percent as much carbon as the river transports during normal rains over an entire year, and it equates to more than 400 tons of carbon being washed away for each square mile of the watershed during the storm.
Carey's collaborators from Academia Sinica -- a major research institute in Taiwan -- happened to be out collecting sediments for a long-term study of the region when Mindulle erupted in the Pacific.
"I don't want to say that a typhoon is serendipity, but you take what the weather provides," Carey said. "Since Taiwan has an average of four typhoons a year, in summer you pretty much can't avoid them. It's not unusual for some of us to be out in the field when one hits."
As the storm neared the coast, the geologists drove to the Choshui River watershed near the central western portion of the country.
Normally, the river is very shallow. But during a typhoon, it swells with water from the mountains. It's not unusual to see boulders the size of cars -- or actual cars -- floating downstream.
Mindulle gave the geologists their first chance to test some new equipment they designed for capturing water samples from storm runoff.
The equipment consisted of one-liter plastic bottles wedged inside a weighted Teflon case that would sink beneath the waves during a storm. They suspended the contraption from bridges above the river as the waters raged below. At the height of the storm, they tied themselves to the bridges for safety.
They did this once every three hours, taking refuge in a nearby storm shelter in between.
Four days later, after the storm had passed, they filtered the water from the bottles and analyzed the sediments for particulate organic carbon. Then they measured the amount of silica in the remaining water sample in order to calculate the amount of weathering occurring with the storm.
Because they know that two molecules of carbon dioxide are required to weather one molecule of silica, they could then calculate how much carbon washed out to sea. Carey and Goldsmith did those calculations with study coauthor Berry Lyons, professor of earth sciences at Ohio State.
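As a minimal sketch of that calculation, the lines below apply the 2:1 molar ratio quoted above to a dissolved-silica figure that is invented purely for illustration and is not a number from the study.

    # Silicate-weathering stoichiometry: 2 moles of CO2 consumed per mole of dissolved silica.
    MOLAR_MASS_SIO2 = 60.08   # g/mol
    MOLAR_MASS_C    = 12.01   # g/mol

    silica_flux_tons = 100000.0                    # hypothetical dissolved SiO2 delivered during a storm, metric tons
    moles_sio2  = silica_flux_tons * 1e6 / MOLAR_MASS_SIO2
    moles_co2   = 2 * moles_sio2                   # the 2:1 ratio quoted in the text
    carbon_tons = moles_co2 * MOLAR_MASS_C / 1e6   # carbon drawn down, in metric tons
    print(f"carbon consumed by weathering: about {carbon_tons:,.0f} metric tons")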
Carey cautioned that this is the first study of its kind, and more data are needed to put the Mindulle numbers into a long-term perspective. She and Goldsmith are still analyzing the data from Typhoon Haitang, which struck when the two of them happened to be in Taiwan in 2005, so it's too early to say how much carbon runoff occurred during that storm.
"But with two to four typhoons happening in Taiwan per year, it's not unreasonable to think that the amount of carbon sequestered during these storms could be comparable to the long-term annual carbon flux for the country," she said.
The findings could be useful to scientists who model global climate change, Goldsmith said. He pointed to other studies that suggest that mountainous islands such as Taiwan, New Zealand, and Papua New Guinea produce one third of all the sediments that enter the world oceans annually.
As scientists calculate Earth's carbon "budget" -- how much carbon is being added to the atmosphere and how much is being taken away -- they need to know how much is being buried in the oceans.
"What is the true budget of carbon being sequestered in the ocean per year? If the majority of sediment and dissolved constituents are being delivered during these storms, and the storms aren't taken into account, those numbers are going to be off," Goldsmith said.
As weathering pulls carbon from the atmosphere, the planet cools. For instance, other Ohio State geologists recently determined that the rise and weathering of the Appalachians preceded an ice age 450 million years ago.
If more carbon is being buried in the ocean than scientists once thought, does that mean we can worry less about global warming?
"I wouldn't go that far," Goldsmith said. "But if you want to build an accurate climate model, you need to understand how much CO2 is taken out naturally every year. And this paper shows that those numbers could be off substantially."
Carey agreed, and added that weathering rocks is not a practical strategy for reversing global warming, either.
"You'd have to weather all the volcanic rocks in the world to reduce the CO2 level back to pre-industrial times," she said. "You'd have to grind the rock into really fine particles, and you'd consume a lot of energy -- fossil fuels -- to do that, so there probably wouldn't be any long-term gain."
X-rays use diamonds as a window to the center of the Earth
Diamonds from Brazil have provided the answers to a question that Earth scientists have been trying to understand for many years: how is oceanic crust that has been subducted deep into the Earth recycled back into volcanic rocks?
A team of researchers, led by the University of Bristol, working alongside colleagues at the STFC Daresbury Laboratory, has gained a deeper insight into how the Earth recycles itself in the deep-earth tectonic cycle, way beyond the depths that can be accessed by drilling. The full paper on this research was published on 31 July in the scientific journal Nature.
The Earth's oceanic crust is constantly renewed in a cycle which has been occurring for billions of years. It is replenished from below by magma from the Earth's mantle that has been forced up at mid-ocean ridges. This crust is eventually returned to the mantle, sinking down at subduction zones that extend deep beneath the continents. Seismic imaging suggests that the oceanic crust can be subducted to depths of almost 3000km below the Earth's surface, where it can remain for billions of years, during which time the crust material develops its own unique 'flavour' in comparison with the surrounding magmas. Exactly how this happens is a question that has baffled Earth scientists for years.
The Earth's oceanic crust lies under seawater for millions of years, and over time reacts with the seawater to form carbonate minerals, such as limestone. When subducted, these carbonate minerals have the effect of lowering the melting point of the crust material compared to that of the surrounding magma. It is thought that this melt is loaded with elements that carry the crustal 'flavour'.
This team of researchers have now proven this theory by looking at diamonds from the Juina area of Brazil. As the carbonate-rich magma rises through the mantle, diamonds crystallise, trapping minute quantities of minerals in the process. They form at great depths and pressures and therefore can provide clues as to what is happening at the Earth's deep interior, down to several hundred kilometres - way beyond the depths that can be physically accessed by drilling. Diamonds from the Juina area are particularly renowned for these mineral inclusions.
At the Synchrotron Radiation Source (SRS) at the STFC Daresbury Laboratory, the team used an intense beam of x-rays to look at the conditions of formation for the mineral perovskite which occurs in these diamonds but does not occur naturally near the Earth's surface. With a focused synchrotron X-ray beam less than half the width of a human hair, they used X-ray diffraction techniques to establish the conditions at which perovskite is stable, concluding that these mineral inclusions were formed up to 700km into the Earth in the mantle transition zone.
These results, backed up by further experiments carried out at the University of Edinburgh, the University of Bayreuth in Germany, and the Advanced Light Source in the USA, enabled the research team to show that the diamonds and their perovskite inclusions had indeed crystallised from very small-degree melts in the Earth's mantle. Upon heating, oceanic crust forms carbonatite melts, super-concentrated in trace elements with the 'flavour' of the Earth's oceanic crust. Furthermore, such melts may be widespread throughout the mantle and may have been 'flavouring' the mantle rocks for a very long time.
Dr Alistair Lennie, a research scientist at STFC Daresbury Laboratory, said: "Using X-rays to find solutions to Earth science questions is an area that has been highly active on the SRS at Daresbury Laboratory for some time. We are very excited that the SRS has contributed to answering such long standing questions about the Earth in this way."
Dr. Michael Walter, Department of Earth Sciences, University of Bristol, said: "The resources available at Daresbury's SRS for high-pressure research have been crucial in helping us determine the origin of these diamonds and their inclusions."
Source: Science and Technology Facilities Council
Moffett Field CA (SPX) Aug 25, 2008
For the last few years, astronomers have faced a puzzle: The vast majority of asteroids that come near the Earth are of a type that matches only a tiny fraction of the meteorites that most frequently hit our planet. Since meteorites are mostly pieces of asteroids, this discrepancy was hard to explain, but a team from MIT and other institutions has now found what it believes is the answer to the puzzle.
The smaller rocks that most often fall to Earth, it seems, come straight in from the main asteroid belt out between Mars and Jupiter, rather than from the near-Earth asteroid (NEA) population.
The puzzle gradually emerged from a long-term study of the properties of asteroids carried out by MIT professor of planetary science Richard Binzel and his students, along with postdoctoral researcher P. Vernazza, who is now with the European Space Agency, and A.T. Tokunaga, director of the University of Hawaii's Institute of Astronomy.
By studying the spectral signatures of near-Earth asteroids, they were able to compare them with spectra obtained on Earth from the thousands of meteorites that have been recovered from falls. But the more they looked, the more they found that most NEAs -- about two-thirds of them -- match a specific type of meteorites called LL chondrites, which only represent about 8 percent of meteorites. How could that be?
"Why do we see a difference between the objects hitting the ground and the big objects whizzing by?" Binzel asks. "It's been a head-scratcher." As the effect became gradually more and more noticeable as more asteroids were analyzed, "we finally had a big enough data set that the statistics demanded an answer. It could no longer be just a coincidence."
Way out in the main belt, the population is much more varied, and approximates the mix of types that is found among meteorites. But why would the things that most frequently hit us match this distant population better than they match the stuff that's right in our neighborhood? That's where the idea emerged of a fast track all the way from the main belt to a "splat!" on Earth's surface.
This fast track, it turns out, is caused by an obscure effect that was discovered long ago, but only recently recognized as a significant factor in moving asteroids around, called the Yarkovsky effect.
The Yarkovsky effect causes asteroids to change their orbits as a result of the way they absorb the sun's heat on one side and radiate it back later as they rotate around. This causes a slight imbalance that slowly, over time, alters the object's path. But the key thing is this: The effect acts much more strongly on the smallest objects, and only weakly on the larger ones.
"We think the Yarkovsky effect is so efficient for meter-size objects that it can operate on all regions of the asteroid belt," not just its inner edge, Binzel says.
Thus, for chunks of rock from boulder-size on down -- the kinds of things that end up as typical meteorites -- the Yarkovsky effect plays a major role, moving them with ease from throughout the asteroid belt on to paths that can head toward Earth. For larger asteroids a kilometer or so across, the kind that we worry about as potential threats to the Earth, the effect is so weak it can only move them small amounts.
Binzel's study concludes that the largest near-Earth asteroids mostly come from the asteroid belt's innermost edge, where they are part of a specific "family" thought to all be remnants of a larger asteroid that was broken apart by collisions.
With an initial nudge from the Yarkovsky effect, kilometer-sized asteroids from the Flora region can find themselves "over the edge" of the asteroid belt and sent on a path to Earth's vicinity through the perturbing effects of the planets called resonances.
The new study is also good news for protecting the planet. One of the biggest problems in figuring out how to deal with an approaching asteroid, if and when one is discovered on a potential collision course, is that they are so varied. The best way of dealing with one kind might not work on another.
But now that this analysis has shown that the majority of near-Earth asteroids are of this specific type -- stony objects, rich in the mineral olivine and poor in iron -- it's possible to concentrate most planning on dealing with that kind of object, Binzel says.
"Odds are, an object we might have to deal with would be like an LL chondrite, and thanks to our samples in the laboratory, we can measure its properties in detail," he says. "It's the first step toward 'know thy enemy'."
The study not only yields information about impactors that might arrive at Earth in the future, but also provides new information about the types of materials delivered to Earth from extraterrestrial sources. Many scientists believe that impacts could have delivered important materials for the origin of life on early Earth.
The research is reported in the journal Nature. In addition to Binzel, Vernazza and Tokunaga, the co-authors are MIT graduate students Christina Thomas and Francesca DeMeo, S.J. Bus of the University of Hawaii, and A.S. Rivkin of Johns Hopkins University. The work was supported by NASA and the NSF.
Team finds Earth's 'oldest rocks'
By James Morgan
Science reporter, BBC News
Earth's most ancient rocks, with an age of 4.28 billion years, have been found on the shore of Hudson Bay, Canada.
Writing in Science journal, a team reports that a sample of Nuvvuagittuq greenstone is 250 million years older than any rocks previously known.
It may even hold evidence of activity by ancient life forms.
If so, it would be the earliest evidence of life on Earth - but co-author Don Francis cautioned that this had not been established.
"The rocks contain a very special chemical signature - one that can only be found in rocks which are very, very old," he said.
The professor of geology, who is based at McGill University in Montreal, added: "Nobody has found that signal any place else on the Earth."
"Originally, we thought the rocks were maybe 3.8 billion years old.
"Now we have pushed the Earth's crust back by hundreds of millions of years. That's why everyone is so excited."
Ancient rocks act as a time capsule - offering chemical clues to help geologists solve longstanding riddles of how the Earth formed and how life arose on it.
But the majority of our planet's early crust has already been mashed and recycled into Earth's interior several times over by plate tectonics.
Before this study, the oldest whole rocks were from a 4.03 billion-year-old body known as the Acasta Gneiss, in Canada's Northwest Territories.
The only things known to be older are mineral grains called zircons from Western Australia, which date back 4.36 billion years.
Professor Francis was looking for clues to the nature of the Earth's mantle 3.8 billion years ago.
He and colleague Jonathan O'Neil, from McGill University, travelled to remote tundra on the eastern shore of Hudson Bay, in northern Quebec, to examine an outcrop of the Nuvvuagittuq greenstone belt.
They sent samples for chemical analysis to scientists at the Carnegie Institution of Washington, who dated the rocks by measuring isotopes of the rare earth elements neodymium and samarium, which decay over time at a known rate.
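As an aside, the general idea behind any such radiometric clock can be sketched in a few lines. The snippet below shows only the underlying decay-age principle, with an approximate 147Sm decay constant; it is not the samarium-neodymium isochron procedure the Carnegie team actually applied, and the ratio used in the example is hypothetical.

import math

# Generic decay-clock sketch: the longer a rock sits, the more radiogenic
# daughter (D*) accumulates relative to its remaining parent isotope (P),
# so t = (1/lambda) * ln(1 + D*/P).

LAMBDA_147SM = 6.54e-12  # per year, approximate decay constant of 147Sm -> 143Nd

def decay_age_years(daughter_parent_ratio, decay_constant=LAMBDA_147SM):
    """Model age implied by an accumulated radiogenic daughter/parent ratio."""
    return math.log(1.0 + daughter_parent_ratio) / decay_constant

# Hypothetical ratio chosen only to show the magnitude of the numbers involved
print(decay_age_years(0.0284) / 1e9, "billion years")  # roughly 4.3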
The oldest rocks, termed "faux amphibolite", were dated within the range from 3.8 to 4.28 billion years old.
"4.28 billion is the figure I favour," says Francis.
"It could be that the rock was formed 4.3 billion years ago, but then it was re-worked into another rock form 3.8bn years ago. That's a hard distinction to draw."
The same unit of rock contains geological structures which might only have been formed if early life forms were present on the planet, Professor Francis suggested.
The material displays a banded iron formation - fine ribbon-like bands of alternating magnetite and quartz.
This feature is typical of rock precipitated in deep sea hydrothermal vents - which have been touted as potential habitats for early life on Earth.
"These ribbons could imply that 4.3 billion years ago, Earth had an ocean, with hydrothermal circulation," said Francis.
"Now, some people believe that to make precipitation work, you also need bacteria.
"If that were true, then this would be the oldest evidence of life.
"But if I were to say that, people would yell and scream and say that there is no hard evidence."
Fortunately, geologists have already begun looking for such evidence, in similar rocks found in Greenland, dated 3.8 billion years.
"The great thing about our find, is it will bring in people here to Lake Hudson to carry out specialised studies and see whether there was life here or not," says Francis.
"Regardless of that, or the exact date of the rocks, the exciting thing is that we've seen a chemical signature that's never been seen before. That alone makes this an exciting discovery."
Birth of a new ocean
In a remote part of northern Ethiopia, the Earth’s crust is being stretched to breaking point, providing geologists with a unique opportunity to watch the birth of what may eventually become a new ocean. Lorraine Field, a PhD student, and Dr James Hammond, both from the Department of Earth Sciences, are two of the many scientists involved in documenting this remarkable event.
The African continent is slowly splitting apart along the East African Rift, a 3,000 kilometre-long series of deep basins and flanking mountain ranges. An enormous plume of hot, partially molten rock is rising diagonally from the core-mantle boundary, some 2,900 kilometres beneath Southern Africa, and erupting at the Earth’s surface, or cooling just beneath it, in the Afar region of Ethiopia. It is the rise of this plume that is stretching the Earth’s crust to breaking point.
In September 2005, a series of fissures suddenly opened up along a 60-kilometre section as the plate catastrophically responded to the forces pulling it apart. The rapidity and immense length of the rupture – an event unprecedented in scientific history – greatly excited geologists, who rushed to this very remote part of the world to start measuring what was going on. It began with a big earthquake and continued with a swarm of moderate tremors. About a week into the sequence, eruption of the Dabbahu Volcano threw ash and rocks into the air, causing the evacuation of 6,300 people from the region, while cracks appeared in the ground, some of them more than a metre wide. The only fatality was a camel that fell into a fissure. While these movements are only the beginnings of what would be needed to create a new ocean – the complete process taking millions of years – the Afar event has given geologists a unique opportunity to study the rupture process which normally occurs on the floor of deep oceans. In order to do this research, a consortium of universities was formed and divided into five interdisciplinary working groups. Each group has its own aims and experimental programme whilst linking with, and providing results to, the other groups.
Lorraine Field is studying the Dabbahu volcano, located close to where the rifting event occurred, which had never been known to erupt before it woke up in September 2005. Following a very strong earthquake, locals reported a dark column of ‘smoke’ that rose high into the atmosphere and spread out to form an umbrella-shaped cloud. Emissions darkened the area for three days and three nights.

Many of the lava flows on the mountain are made of obsidian, a black volcanic glass, and the fissure which opened in 2005 emits fumes and steam with a very strong smell of bad eggs. Water being extremely scarce, the local Afaris have devised an ingenious method of capturing it. They build a pit next to a fumarole that is emitting steam and gases. A low circular retaining wall is then built around the fumarole and topped with branches and grasses. These provide a condensing surface for the vapour which collects in the pit or ‘boina’.

Of some concern, however, is the level of contamination in the water from the various chemicals and minerals found in volcanic areas. Occasionally goats have died from drinking this water, so in order to test its quality the locals hold a shiny piece of obsidian over the fumarole. If a milky deposit forms, this indicates a ‘bad’ boina, so they move on to the next. Members of the consortium have brought back some water to analyse in the hope of developing a device, similar to the Aquatest kit reported in the last issue of re:search, but which tests for toxic metals rather than bacteria.
Field’s base was in a small village called Digdigga, which comprises a long main street with a mix of square houses built of wood and traditional round Afar houses, made of a lattice framework of sticks covered in thatch, skins and sacking. Digdigga has a concrete school building, the grounds of which became Field’s base camp for nearly three weeks in January this year. The village is situated on an immense, flat, windy plain surrounded by volcanic mountains and cinder cones. Due to the lack of any vegetation, everything quickly becomes covered in a layer of dust, but the bare rocks mean that satellite images can be used to measure the way the Earth’s surface changes as faults move and as molten rock moves up and along the fissures within the rift valley.
Conditions are still too extreme for normal field mapping and so representative rock samples from key locations have been collected. In order to access Dabbahu mountain, the team hired eight camels to carry supplies, taking enough food and water for six days (and an emergency day), and keeping in touch with the base camp by satellite phone. The rocks Field collected will be analysed to determine how the chemistry of the magmas varies at different locations and how it changes over time. This in turn gives information about the depth of the magma chambers within the crust and the relationship between rifting and volcanism in this area.
James Hammond is using a variety of seismological techniques to image the crust and mantle beneath Afar. For example, seismic waves are generated during earthquakes, so a network of 40 seismometers has been set up across the plate boundary zone to record seismic activity. One of the seismic stations was placed in the chief’s house, close to the summit of Erta Ale. This extraordinary volcano is essentially an open conduit right down into the mantle. By comparing the arrival times of seismic waves at the seismometers, Hammond and his team will be able to generate a three-dimensional image of the crust, crust-mantle boundary, mantle structure and base of the lithosphere across the study area. This will allow some constraints to be placed on the location of melt in this region, enabling the team to obtain information on the mechanisms of break-up involved in the rifting process. In a nutshell, the consortium has the best array of imaging equipment deployed anywhere in the world to help it ‘see’ into an actively rifting continent.
But all this work will not just benefit the scientific community; it will also have an immediate impact on understanding and mitigating natural hazards in Afar. Consequently, the teams work closely with Ethiopian scientists and policy makers in the region. In addition, the project will provide training for Ethiopian doctoral students and postdoctoral researchers, and Ethiopian scientists will be trained in the techniques used by the consortium. Over the next five years, scientists from the UK, Ethiopia and many other countries will all come together to further our understanding of the processes involved in shaping the surface of the Earth.
Provided by University of Bristol
University of Minnesota geology and geophysics researchers, along with their colleagues from China, have uncovered surprising effects of climate patterns on social upheaval and the fall of dynasties in ancient China.
Their research identifies a natural phenomenon that may have been the last straw for some Chinese dynasties: a weakening of the summer Asian Monsoons. Such weakening accompanied the fall of three dynasties and now could be lessening precipitation in northern China.
The study, led by researchers from the University of Minnesota and Lanzhou University in China, appears in Science.
The work rests on climate records preserved in the layers of stone in a 118-millimeter-long stalagmite found in Wanxiang Cave in Gansu Province, China. By measuring amounts of the elements uranium and thorium throughout the stalagmite, the researchers could tell the date each layer was formed.
And by analyzing the "signatures" of two forms of oxygen in the stalagmite, they could match amounts of rainfall--a measure of summer monsoon strength--to those dates.
The stalagmite was formed over 1,810 years; stone at its base dates from A.D. 190, and stone at its tip was laid down in A.D. 2003, the year the stalagmite was collected.
"It is not intuitive that a record of surface weather would be preserved in underground cave deposits. This research nicely illustrates the promise of paleoclimate science to look beyond the obvious and see new possibilities," said David Verardo, director of the U.S. National Science Foundation's Paleoclimatology Program, which funded the research.
"Summer monsoon winds originate in the Indian Ocean and sweep into China," said Hai Cheng, corresponding author of the paper and a research scientist at the University of Minnesota. "When the summer monsoon is stronger, it pushes farther northwest into China."
These moisture-laden winds bring rain necessary for cultivating rice. But when the monsoon is weak, the rains stall farther south and east, depriving northern and western parts of China of summer rains. A lack of rainfall could have contributed to social upheaval and the fall of dynasties.
The researchers discovered that periods of weak summer monsoons coincided with the last years of the Tang, Yuan, and Ming dynasties, which are known to have been times of popular unrest. Conversely, the research group found that a strong summer monsoon prevailed during one of China's "golden ages," the Northern Song Dynasty.
The ample summer monsoon rains may have contributed to the rapid expansion of rice cultivation from southern China to the midsection of the country. During the Northern Song Dynasty, rice first became China's main staple crop, and China's population doubled.
"The waxing and waning of summer monsoon rains are just one piece of the puzzle of changing climate and culture around the world," said Larry Edwards, Distinguished McKnight University Professor in Geology and Geophysics and a co-author on the paper. For example, the study showed that the dry period at the end of the Tang Dynasty coincided with a previously identified drought halfway around the world, in Meso-America, which has been linked to the fall of the Mayan civilization.
The study also showed that the ample summer rains of the Northern Song Dynasty coincided with the beginning of the well-known Medieval Warm Period in Europe and Greenland. During this time--the late 10th century--Vikings colonized southern Greenland. Centuries later, a series of weak monsoons prevailed as Europe and Greenland shivered through what geologists call the Little Ice Age.
In the 14th and early 15th centuries, as the cold of the Little Ice Age settled into Greenland, the Vikings disappeared from there. At the same time, on the other side of the world, the weak monsoons of the 14th century coincided with the end of the Yuan Dynasty.
A second major finding concerns the relationship between temperature and the strength of the monsoons. For most of the last 1,810 years, as average temperatures rose, so, too, did the strength of the summer monsoon. That relationship flipped, however, around 1960, a sign that the late 20th century weakening of the monsoon and drying in northwestern China was caused by human activity.
If carbon dioxide is the culprit, as some have proposed, the drying trend may well continue in Inner Mongolia, northern China and neighboring areas on the fringes of the monsoon's reach, as society is likely to continue adding carbon dioxide to the atmosphere for the foreseeable future.
If, however, the culprit is man-made soot, as others have proposed, the trend could be reversed, the researchers said, by reduction of soot emissions.
Washington DC (SPX) Nov 14, 2008
Evolution isn't just for living organisms. Scientists at the Carnegie Institution have found that the mineral kingdom co-evolved with life, and that up to two thirds of the more than 4,000 known types of minerals on Earth can be directly or indirectly linked to biological activity. The finding, published in American Mineralogist, could aid scientists in the search for life on other planets.
Robert Hazen and Dominic Papineau of the Carnegie Institution's Geophysical Laboratory, with six colleagues, reviewed the physical, chemical, and biological processes that gradually transformed about a dozen different primordial minerals in ancient interstellar dust grains to the thousands of mineral species on the present-day Earth. (Unlike biological species, each mineral species is defined by its characteristic chemical makeup and crystal structure.)
"It's a different way of looking at minerals from more traditional approaches," says Hazen."Mineral evolution is obviously different from Darwinian evolution-minerals don't mutate, reproduce or compete like living organisms. But we found both the variety and relative abundances of minerals have changed dramatically over more than 4.5 billion years of Earth's history."
All the chemical elements were present from the start in the Solar System's primordial dust, but they formed comparatively few minerals. Only after large bodies such as the Sun and planets congealed did there exist the extremes of temperature and pressure required to forge a large diversity of mineral species. Many elements were also too dispersed in the original dust clouds to be able to solidify into mineral crystals.
As the Solar System took shape through "gravitational clumping" of small, undifferentiated bodies -- fragments of which are found today in the form of meteorites -- about 60 different minerals made their appearance. Larger, planet-sized bodies, especially those with volcanic activity and bearing significant amounts of water, could have given rise to several hundred new mineral species.
Mars and Venus, which Hazen and coworkers estimate to have at least 500 different mineral species in their surface rocks, appear to have reached this stage in their mineral evolution.
However, only on Earth -- at least in our Solar System -- did mineral evolution progress to the next stages. A key factor was the churning of the planet's interior by plate tectonics, the process that drives the slow shifting of continents and ocean basins over geological time.
Unique to Earth, plate tectonics created new kinds of physical and chemical environments where minerals could form, and thereby boosted mineral diversity to more than a thousand types.
What ultimately had the biggest impact on mineral evolution, however, was the origin of life, approximately 4 billion years ago. "Of the approximately 4,300 known mineral species on Earth, perhaps two thirds of them are biologically mediated," says Hazen.
"This is principally a consequence of our oxygen-rich atmosphere, which is a product of photosynthesis by microscopic algae." Many important minerals are oxidized weathering products, including ores of iron, copper and many other metals.
Microorganisms and plants also accelerated the production of diverse clay minerals. In the oceans, the evolution of organisms with shells and mineralized skeletons generated thick layered deposits of minerals such as calcite, which would be rare on a lifeless planet.
"For at least 2.5 billion years, and possibly since the emergence of life, Earth's mineralogy has evolved in parallel with biology," says Hazen. "One implication of this finding is that remote observations of the mineralogy of other moons and planets may provide crucial evidence for biological influences beyond Earth."
Stanford University geologist Gary Ernst called the study "breathtaking," saying that "the unique perspective presented in this paper may revolutionize the way Earth scientists regard minerals."
Plate tectonics started over 4 billion years ago, geochemists report
(PhysOrg.com) -- A new picture of the early Earth is emerging, including the surprising finding that plate tectonics may have started more than 4 billion years ago — much earlier than scientists had believed, according to new research by UCLA geochemists reported Nov. 27 in the journal Nature.
"We are proposing that there was plate-tectonic activity in the first 500 million years of Earth's history," said geochemistry professor Mark Harrison, director of UCLA's Institute of Geophysics and Planetary Physics and co-author of the Nature paper. "We are reporting the first evidence of this phenomenon."
"Unlike the longstanding myth of a hellish, dry, desolate early Earth with no continents, it looks like as soon as the Earth formed, it fell into the same dynamic regime that continues today," Harrison said. "Plate tectonics was inevitable, life was inevitable. In the early Earth, there appear to have been oceans; there could have been life — completely contradictory to the cartoonish story we had been telling ourselves."
"We're revealing a new picture of what the early Earth might have looked like," said lead author Michelle Hopkins, a UCLA graduate student in Earth and space sciences. "In high school, we are taught to see the Earth as a red, hellish, molten-lava Earth. Now we're seeing a new picture, more like today, with continents, water, blue sky, blue ocean, much earlier than we thought."
The Earth is 4.5 billion years old. Some scientists think plate tectonics — the geological phenomenon involving the movement of huge crustal plates that make up the Earth's surface over the planet's molten interior — started 3.5 billion years ago, others that it began even more recently than that.
The research by Harrison, Hopkins and Craig Manning, a UCLA professor of geology and geochemistry, is based on their analysis of ancient mineral grains known as zircons found inside molten rocks, or magmas, from Western Australia that are about 3 billion years old. Zircons are heavy, durable minerals related to the synthetic cubic zirconium used for imitation diamonds and costume jewelry. The zircons studied in the Australian rocks are about twice the thickness of a human hair.
Hopkins analyzed the zircons with UCLA's high-resolution ion microprobe, an instrument that enables scientists to date and learn the exact composition of samples with enormous precision. The microprobe shoots a beam of ions, or charged atoms, at a sample, releasing from the sample its own ions, which are then analyzed in a mass spectrometer. Scientists can aim the beam of ions at specific microscopic areas of a sample and conduct a high-resolution isotope analysis of them without destroying the object.
"The microprobe is the perfect tool for determining the age of the zircons," Harrison said.
The analysis determined that some of the zircons found in the magmas were more than 4 billion years old. They were also found to have been formed in a region with heat flow far lower than the global average at that time.
"The global average heat flow in the Earth's first 500 million years was thought to be about 200 to 300 milliwatts per meter squared," Hopkins said. "Our zircons are indicating a heat flow of just 75 milliwatts per meter squared — the figure one would expect to find in subduction zones, where two plates converge, with one moving underneath the other."
"The data we are reporting are from zircons from between 4 billion and 4.2 billion years ago," Harrison said. "The evidence is indirect, but strong. We have assessed dozens of scenarios trying to imagine how to create magmas in a heat flow as low as we have found without plate tectonics, and nothing works; none of them explain the chemistry of the inclusions or the low melting temperature of the granites."
Evidence for water on Earth during the planet's first 500 million years is now overwhelming, according to Harrison.
"You don't have plate tectonics on a dry planet," he said.
Strong evidence for liquid water at or near the Earth's surface 4.3 billion years ago was presented by Harrison and colleagues in a Jan. 11, 2001, cover story in Nature.
"Five different lines of evidence now support that once radical hypothesis," Harrison said. "The inclusions we found tell us the zircons grew in water-saturated magmas. We now observe a surprisingly low geothermal gradient, a low rate at which temperature increases in the Earth. The only mechanism that we recognize that is consistent with everything we see is that the formation of these zircons was at a plate-tectonic boundary. In addition, the chemistry of the inclusions in the zircons is characteristic of the two kinds of magmas today that we see at place-tectonic boundaries."
"We developed the view that plate tectonics was impossible in the early Earth," Harrison added. "We have now made observations from the Hadean (the Earth's earliest geological eon) — these little grains contain a record about the conditions under which they formed — and the zircons are telling us that they formed in a region with anomalously low heat flow. Where in the modern Earth do you have heat flow that is one-third of the global average, which is what we found in the zircons? There is only one place where you have heat flow that low in which magmas are forming: convergent plate-tectonic boundaries."
Three years ago, Harrison and his colleagues applied a technique to determine the temperature of ancient zircons.
"We discovered the temperature at which these zircons formed was constant and very low," Harrison said. "You can't make a magma at any lower temperature than what we're seeing in these zircons. You look at artists' conceptions of the early Earth, with flying objects from outer space making large craters; that should make zircons hundreds of degrees centigrade hotter than the ones we see. The only way you can make zircons at the low temperature we see is if the melt is water-saturated. There had to be abundant water. That's a big surprise because our longstanding conception of the early Earth is that it was dry."
Source: University of California - Los Angeles
A very interesting article, and I'm sure it will be added to when the area to the far north-west of the Flinders Ranges, well past Arcaroola, is studied in more depth.
As Ice Melts, Antarctic Bedrock Is on the Move
As ice melts away from Antarctica, parts of the continental bedrock are rising in response -- and other parts are sinking, scientists have discovered.
The finding will give much needed perspective to satellite instruments that measure ice loss on the continent, and help improve estimates of future sea level rise.
"Our preliminary results show that we can dramatically improve our estimates of whether Antarctica is gaining or losing ice," said Terry Wilson, associate professor of earth sciences at Ohio State University.
Wilson reported the research in a press conference Monday, December 15, 2008 at the American Geophysical Union meeting in San Francisco.
These results come from a trio of global positioning system (GPS) sensor networks on the continent.
Wilson leads POLENET, a growing network of GPS trackers and seismic sensors implanted in the bedrock beneath the West Antarctic Ice Sheet (WAIS). POLENET is reoccupying sites previously measured by the West Antarctic GPS Network (WAGN) and the Transantarctic Mountains Deformation (TAMDEF) network.
In separate sessions at the meeting, Michael Bevis, Ohio Eminent Scholar in geodynamics and professor of earth sciences at Ohio State, presented results from WAGN, while doctoral student Michael Willis presented results from TAMDEF.
Taken together, the three projects are yielding the best view yet of what's happening under the ice.
When satellites measure the height of the WAIS, scientists calculate ice thickness by subtracting the height of the earth beneath it. They must take into account whether the bedrock is rising or falling. Ice weighs down the bedrock, but as the ice melts, the earth slowly rebounds.
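In essence, the correction is a subtraction. The sketch below, using purely hypothetical numbers, shows how a GPS-derived bedrock-motion term changes an ice-thickness trend inferred from satellite altimetry.

# Sketch of the altimetry correction described above, with hypothetical values.
# Satellite altimetry sees the change in ice-surface height; GPS stations on
# exposed rock (as in POLENET) measure how the bedrock itself is moving, and
# that motion must be removed to recover the true change in ice thickness.

surface_height_change = -20.0  # mm/yr of apparent surface lowering (hypothetical)
bedrock_uplift = 5.0           # mm/yr of uplift measured on rock (hypothetical)

# Thickness change = surface change minus bedrock change: if the floor is rising
# while the surface falls, the ice column is thinning faster than the raw
# altimetry alone would suggest.
ice_thickness_change = surface_height_change - bedrock_uplift
print(f"Estimated ice-thickness change: {ice_thickness_change} mm/yr")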
Gravity measurements, too, rely on knowledge of the bedrock. As the crust under Antarctica rises, the mantle layer below it flows in to fill the gap. That mass change must be subtracted from Gravity Recovery and Climate Experiment (GRACE) satellite measurements in order to isolate gravity changes caused by the thickening or thinning of the ice.
Before POLENET and its more spatially limited predecessors, scientists had few direct measurements of the bedrock. They had to rely on computer models, which now appear to be incorrect.
"When you compare how fast the earth is rising, and where, to the models of where ice is being lost and how much is lost -- they don't match," Wilson said. "There are places where the models predict no crustal uplift, where we see several millimeters of uplift per year. We even have evidence of other places sinking, which is not predicted by any of the models."
A few millimeters may sound like a small change, but it's actually quite large, she explained. Crustal uplift in parts of North America is measured on the scale of millimeters per year.
POLENET's GPS sensors measure how much the crust is rising or falling, while the seismic sensors measure the stiffness of the bedrock -- a key factor for predicting how much the bedrock will rise in the future.
"We're pinning down both parts of this problem, which will improve the correction made to the satellite data, which will in turn improve what we know about whether we're gaining ice or losing ice," Wilson said. Better estimates of sea level rise can then follow.
POLENET scientists have been implanting sensors in Antarctica since December 2007. The network will be complete in 2010 and will record data into 2012. Selected sites may remain as a permanent Antarctic observational network.
Source: Ohio State University
Ancient Magma 'Superpiles' May Have Shaped The Continents
Two giant plumes of hot rock deep within the earth are linked to the plate motions that shape the continents, researchers have found.
The two superplumes, one beneath Hawaii and the other beneath Africa, have likely existed for at least 200 million years, explained Wendy Panero, assistant professor of earth sciences at Ohio State University.
The giant plumes -- or "superpiles" as Panero calls them -- rise from the bottom of Earth's mantle, just above our planet's core. Each is larger than the continental United States. And each is surrounded by a wall of plates from Earth's crust that have sunk into the mantle.
She and her colleagues reported their findings at the American Geophysical Union meeting in San Francisco.
Computer models have connected the piles to the sunken former plates, but it's currently unclear which one spawned the other, Panero said. Plates sink into the mantle as part of the normal processes that shape the continents. But which came first, the piles or the plates, the researchers simply do not know.
"Do these superpiles organize plate motions, or do plate motions organize the superpiles? I don't know if it's truly a chicken-or-egg kind of question, but the locations of the two piles do seem to be related to where the continents are today, and where the last supercontinent would have been 200 million years ago," she said.
That supercontinent was Pangea, and its breakup eventually led to the seven continents we know today.
Scientists first proposed the existence of the superpiles more than a decade ago. Earthquakes offer an opportunity to study them, since they slow the seismic waves that pass through them. Scientists combine the seismic data with what they know about Earth's interior to create computer models and learn more.
But to date, the seismic images have created a mystery: they suggest that the superpiles have remained in the same locations, unchanged for hundreds of millions of years.
"That's a problem," Panero said. "We know that the rest of the mantle is always moving. So why are the piles still there?"
Hot rock constantly migrates from the base of the mantle up to the crust, she explained. Hot portions of the mantle rise, and cool portions fall. Continental plates emerge, then sink back into the earth.
But the presence of the superpiles and the location of subducted plates suggest that the two superpiles have likely remained fixed to the Earth's core while the rest of the mantle has churned around them for millions of years.
Unlocking this mystery is the goal of the Cooperative Institute for Deep Earth Research (CIDER) collaboration, a group of researchers from across the United States who are attempting to unite many different disciplines in the study of Earth's interior.
Panero provides CIDER her expertise in mineral physics; others specialize in geodynamics, geomagnetism, seismology, and geochemistry. Together, they have assembled a new model that suggests why the two superpiles are so stable, and what they are made of.
As it turns out, just a tiny difference in chemical composition can keep the superpiles in place, they found.
The superpiles contain slightly more iron than the rest of the mantle; their composition likely consists of 11-13 percent iron instead of 10-12 percent. But that small change is enough to make the superpiles denser than their surroundings.
"Material that is more dense is going to sink to the base of the mantle," Panero said. "It would normally spread out at that point, but in this case we have subducting plates that are coming down from above and keeping the piles contained."
CIDER will continue to explore the link between the superpiles and the plates that surround them. The researchers will also work to explain the relationship between the superpiles and other mantle plumes that rise above them, which feed hotspots such as those beneath Hawaii and mid-ocean ridges. Ultimately, they hope to determine whether the superpiles may have contributed to the breakup of Pangea.
Provided by Ohio State University
Coastal bluffs reveal secrets of past
By Dave Schwab - La Jolla Light
It's a favored surf spot off La Jolla's shoreline today, but millions of years ago it was a volcanic "hot spot."
"It" is the stretch of beach from Scripps Pier north to Torrey Pines that has a very special geology.
"It's a vertical, volcanic intrusion," noted Thomas A. Demere, Ph.D., curator of paleontology at the San Diego Natural History Museum. "Distinctively black basaltic rocks deposited there, right out in the surf zone, are 10 to 12 million years old."
Demere added that this remnant volcanic formation lies just beneath the cliff bluffs where the National Marine Fisheries Service Science Center sits on UCSD's campus. At low tide, standing on the beach in that area and looking south toward La Jolla, the linear nature of the volcanic deposit is obvious.
"It's really quite striking," Demere added, "quite different from the light brown sandstones that compose the cliffs."
Geologic "sleuths" like Demere are piecing together the geologic riddle of San Diego's paleontological history. Evidence buried in, or uncovered by, natural erosion reveals a past topography much different than today, when an ancient oceanic crustal tectonic plate created an archipelago of volcanic islands producing massive volumes of magma that later congealed into rock.
Also recorded in the historical record of coastal San Diego are periods of higher rainfall and subtropical climates that supported coastal rain forests with exotic plants and animals. With the coming and going of worldwide ice ages, San Diego's coastline endured periods of "drowning," as well as widespread earthquake faulting.
La Jolla's downtown Village has its own unique geologic pedigree, Demere said.
"La Jolla is built on a series of sea floors that are related to climatic fluctuations over the last 120,000 years," he said. "Scripps Park down by the Cove on that nice broad, flat surface is a sea floor 85,000 years old. The flat surface on Prospect Street, the central portion of La Jolla Village, is another sea floor 120,000 years old."
Terraced sea floors like those in La Jolla are the consequence of ice ages and intervening periods of global warming, in roughly 100,000-year cycles that caused wide variations in sea level.
"The peak of the last ice age, 18,000 years ago, sea level was up to 400 feet lower than it is today," noted Demere.
Natural wave action led to the carving out of platforms resulting in the current topography.
Did Earth's Twin Cores Spark Plate Tectonics?
Michael Reilly, Discovery News
Jan. 6, 2009 -- It's a classic image from every youngster's science textbook: a cutaway image of Earth's interior. The brown crust is paper-thin; the warm mantle orange, the seething liquid of the outer core yellow, and at the center the inner core, a ball of solid, red-hot iron.
Now a new theory aims to rewrite it all by proposing the seemingly impossible: Earth has not one but two inner cores.
The idea stems from an ancient, cataclysmic collision that scientists believe occurred when a Mars-sized object hit Earth about 4.45 billion years ago. The young Earth was still so hot that it was mostly molten, and debris flung from the impact is thought to have formed the moon.
Haluk Cetin and Fugen Ozkirim of Murray State University think the core of the Mars-sized object may have been left behind inside Earth, and that it sank down near the original inner core. There the two may still remain, either separate or as conjoined twins, locked in a tight orbit.
Their case is largely circumstantial and speculative, Cetin admitted.
"We have no solid evidence yet, and we're not saying 100 percent that it still exists," he said. "The interior of Earth is a very hard place to study."
The ancient collision is a widely accepted phenomenon. But most scientists believe the incredible pressure at the center of the planet would've long since pushed the two cores into each other.
Still, the inner core is a mysterious place. Recently, scientists discovered that it rotates faster than the rest of the planet. And a study last year of how seismic waves propagate through the iron showed that the core is split into two distinct regions.
Beyond that, little is known. But Cetin and Ozkirim think a dual inner core can explain the rise of plate tectonics, and help explain why the planet remains hotter today than it should be, given its size.
"If this is true, it would change all Earth models as we know them," Cetin said. "If not, and these two cores coalesced early on, we would have less to say, but it could still be how plate tectonics got started."
Based on models of Earth's interior, Cetin thinks the two cores rotate in opposite directions, like the wheels of a pasta maker. Their motion would suck in magma from behind and spit it out in front. If this motion persisted for long enough, it could set up a giant current of circulation that would push plates of crust apart in front, and suck them down into the mantle in back.
Friction generated by the motion would keep the planet hot.
Scientists asked to comment on this hypothesis were extremely skeptical. Some asked not to be quoted, citing insufficient evidence to make a well-reasoned critique of the study, which the authors presented last month at the fall meeting of the American Geophysical Union in San Francisco.
"In terms of its volume, and even its mass, the Earth's inner core is quite small relative to the whole planet, about 1 percent," Paul Richards of Columbia University said. "I seriously doubt that inner core dynamics could play a significant role in moving the tectonic plates."
I think with that theory the scenario might go something like this: Light the blue touch paper and stand well back!
Two rare meteorites found in Antarctica two years ago are from a previously unknown, ancient asteroid with an outer layer or crust similar in composition to the crust of Earth's continents, reports a research team primarily composed of geochemists from the University of Maryland.
Published in the January 8 issue of the journal Nature, this is the first ever finding of material from an asteroid with a crust like Earth's. The discovery also represents the oldest example of rock with this composition ever found.
These meteorites point "to previously unrecognized diversity" of materials formed early in the history of the Solar System, write authors James Day, Richard Ash, Jeremy Bellucci, William McDonough and Richard Walker of the University of Maryland; Yang Liu and Lawrence Taylor of the University of Tennessee and Douglas Rumble III of the Carnegie Institution for Science.
"What is most unusual about these rocks is that they have compositions similar to Earth's andesite continental crust -- what the rock beneath our feet is made of," said first author Day, who is a research scientist in Maryland's department of geology. "No meteorites like this have ever been seen before."
Day explained that his team focused their investigations on how such different Solar System bodies could have crusts with such similar compositions. "We show that this occurred because of limited melting of the asteroid, and thus illustrate that the formation of andesite crust has occurred in our solar system by processes other than plate tectonics, which is the generally accepted process that created the crust of Earth."
The two meteorites (numbered GRA 06128 and GRA 06129) were discovered in the Graves Nunatak Icefield during the US Antarctic Search for Meteorites (ANSMET) 2006/2007 field season. Day and his colleagues immediately recognized that these meteorites were unusual because of elevated contents of a light-colored feldspar mineral called oligoclase. "Our age results point to these rocks being over 4.52 billion years old and that they formed during the birth of the Solar System. Combined with the oxygen isotope data, this age points to their origin from an asteroid rather than a planet," he said.
There are a number of asteroids in the asteroid belt that may have properties like the GRA 06128 and GRA 06129 meteorites including the asteroid (2867) Steins, which was studied by the European Space Agency's Rosetta spacecraft during a flyby this past September. These so-called E-type asteroids reflect the Sun's light very brightly, as would be predicted for a body with a crust made of feldspar.
According to Day and his colleagues, finding pieces of meteorites with andesite compositions is important because they not only point to a previously unrecognized diversity of Solar System materials, but also to a new mechanism to generate andesite crust. On the present-day Earth, this occurs dominantly through plates colliding and subduction - where one plate slides beneath another. Subduction forces water back into the mantle aiding melting and generating arc volcanoes, such as the Pacific Rim of Fire - in this way new crust is formed.
"Our studies of the GRA meteorites suggest similar crust compositions may be formed via melting of materials in planets that are initially volatile- and possibly water-rich, like the Earth probably was when if first formed" said Day." A major uncertainty is how evolved crust formed in the early Solar System and these meteorites are a piece in the puzzle to understanding these processes."
Note: This story has been adapted from a news release issued by the University of Maryland
Talk about deep, dark secrets. Rare "ultra-deep" diamonds are valuable - not because they look good twinkling on a newlywed's finger - but because of what they can tell us about conditions far below the Earth's crust.
Now a find of these unusual gems in Australia has provided new clues to how they were formed.
The diamonds, which are white and a few millimetres across, were found by a mineral exploration company just outside the village of Eurelia, some 300 kilometres north of Adelaide, in southern Australia. From there, they were sent to Ralf Tappert, a diamond expert at the University of Adelaide.
Tappert and colleagues say minerals found trapped inside the Eurelia diamonds could only have formed more than 670 kilometres (416 miles) beneath the surface of the Earth - a distance greater than that between Boston and Washington, DC.
Clues from the deep
"The vast majority of diamonds worldwide form at depths between 150 km and 250 km, within the mantle roots of ancient continental plates," says Tappert. "These diamonds formed in the Earth's lower mantle at depths greater than 670 km, which is much deeper than 'normal' diamonds."
Fewer than a dozen ultra-deep diamonds have been found in various corners of the globe since the 1990s. Sites range from Canada and Brazil to Africa - and now Australia.
"Deep diamonds are important because they are the only natural samples that we have from the lower mantle," says Catherine McCammon, a geologist at the University of Bayreuth in Germany. "This makes them an invaluable set of samples - much like the lunar rocks are to our studies of the moon."
The Eurelia gems contain information about the carbon they were made from. Their heavy carbon isotope signatures suggest the carbon was once contained in marine carbonates lying on the ocean floor.
Location, though, provides researchers with a common thread for the Brazilian, African and Australian deep diamonds, which could explain how they were born. All six groups of diamonds were found in areas that would once have lined the edge of the ancient supercontinent Gondwana.
"Deep diamonds have always been treated like oddball diamonds," says Tappert. "We don't really know what their origin is. With the discovery of the ones in Australia we start to get a pattern."
Their geographic spread suggests that all these ultra-deep diamonds were formed in the same way: as the oceanic crust dived down beneath Gondwana - a process known as subduction - it would have dragged carbon down to the lower mantle, transforming it into graphite and then diamond along the way.
Eventually, kimberlites - volcanic rocks named after the town of Kimberley in South Africa - are propelled to the surface during rapid eruptions, bringing the gems up to the surface.
According to John Ludden of the British Geological Survey, if the theory were proven true, it would mean the Eurelia diamonds are much younger than most diamonds are thought to be.
"Many of the world's diamonds are thought to have been sampled from subducted crust in the very early Earth, 3 billion years ago," says Ludden.
Yet Tappert's theory suggests these diamonds would have been formed about 300 million years ago. "This may well result in a revision of exploration models for kimberlites and the diamonds they host, as to date exploration has focused on very old rock units of the early Earth," Ludden told New Scientist.
McCammon says Tappert's theory is "plausible" but just "one among possible models". She says not all deep diamonds fit the Gondwana model, but adds that the new gems "provide a concrete idea that can be tested by others in the community".
Journal reference: Geology (vol 37, p 43)
ScienceDaily (Feb. 28, 2009) — The argument over whether an outcrop of rock in South West Greenland contains the earliest known traces of life on Earth has been reignited, in a study published in the Journal of the Geological Society. The research, led by Martin J. Whitehouse at the Swedish Museum of Natural History, argues that the controversial rocks "cannot host evidence of Earth’s oldest life," reopening the debate over where the oldest traces of life are located.
The small island of Akilia has long been the centre of attention for scientists looking for early evidence of life. Research carried out in 1996 argued that a five metre wide outcrop of rock on the island contained graphite with depleted levels of 13C. Carbon isotopes are frequently used to search for evidence of early life, because the lightest form of carbon, 12C (atomic weight 12), is preferred in biological processes as it requires less energy to be used by organisms. This results in heavier forms, such as 13C, being less concentrated, which might account for the depleted levels found in the rocks at Akilia.
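For context, such carbon-isotope measurements are conventionally reported as delta-13C values relative to a reference standard. The snippet below is a minimal sketch of that convention; the sample ratio used in the example is hypothetical.

# The "depleted 13C" signal discussed above is normally reported as a delta value
# relative to the VPDB reference standard. Hypothetical sample ratio only.

R_VPDB = 0.011237  # approximate 13C/12C ratio of the VPDB reference standard

def delta_13c_permil(r_sample, r_standard=R_VPDB):
    """delta-13C in per mil: ((R_sample / R_standard) - 1) * 1000."""
    return (r_sample / r_standard - 1.0) * 1000.0

# Biologically processed carbon is typically more negative than marine carbonate
# because organisms preferentially take up the lighter 12C.
print(round(delta_13c_permil(0.010950), 1))  # hypothetical ratio, roughly -25.5 per mil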
Crucial to the dating of these traces was analysing the cross-cutting intrusions made by igneous rocks into the outcrop. Whatever is cross-cut must be older than the intruding rocks, so obtaining a date for the intrusive rock was vital. When these were claimed to be at least 3.85 billion years old, it seemed that Akilia did indeed hold evidence of the oldest traces of life on Earth.
Since then, many critics have cast doubt on the findings. Over billions of years, the rocks have undergone countless changes to their structure, being folded, distorted, heated and compressed to such an extent that their mineral composition is very different now to what it was originally. The dating of the intrusive rock has also been questioned. Nevertheless, in July 2006, an international team of scientists, led by Craig E. Manning at UCLA, published a paper claiming that they had proved conclusively that the traces of life were older than 3.8 billion years, after having mapped the area extensively. They argued that the rocks formed part of a volcanic stratigraphy, with igneous intrusions, using the cross-cutting relationships between the rocks as an important part of their theory.
The new research, led by Martin J. Whitehouse at the Swedish Museum of Natural History and Nordic Center for Earth Evolution, casts doubt on this interpretation. The researchers present new evidence demonstrating that the cross-cutting relationships are instead caused by tectonic activity, and represent a deformed fault or unconformity. If so, the age of the intrusive rock is irrelevant to the dating of the graphite, and it could well be older. Because of this, the scientists turned their attention to dating the graphite-containing rocks themselves, and found no evidence that they are any older than c. 3.67 billion years.
"The rocks of Akilia provide no evidence that life existed at or before c. 3.82 Ga, or indeed before 3.67 Ga," they conclude.
The age of the Earth itself is around 4.5 billion years. If life complex enough to fractionate carbon existed 3.8 billion years ago, this would suggest that life originated even earlier. The Hadean eon, 3.8 – 4.5 billion years ago, is thought to have been an environment extremely hostile to life. In addition to surviving this period, such early life would have had to contend with the ‘Late Heavy Bombardment’ between 3.8 and 4.1 billion years ago, when a large number of impact craters on the Moon suggest that both the Earth and the Moon underwent significant bombardment, probably by collision with asteroids.
M J Whitehouse, J S Myers & C M Fedo. The Akilia Controversy: field, structural and geochronological evidence questions interpretations of >3.8 Ga life in SW Greenland. Journal of the Geological Society, 2009; 166 (2): 335-348 DOI: 10.1144/0016-76492008-070
Adapted from materials provided by Geological Society of London, via AlphaGalileo.
ScienceDaily (Mar. 8, 2009) — A Monash geoscientist and a team of international researchers have discovered the remains of an ocean floor that was destroyed between 50 and 20 million years ago, showing that New Caledonia and New Zealand were once geographically connected.
Using new computer modelling programs, Wouter Schellart and the team reconstructed the prehistoric cataclysm that took place when a tectonic plate between Australia and New Zealand was subducted 1100 kilometres into the Earth's interior, at the same time forming a long chain of volcanic islands at the surface.
Mr Schellart conducted the research, published in the journal Earth and Planetary Science Letters, in collaboration with Brian Kennett from ANU (Canberra) and Wim Spakman and Maisha Amaru from Utrecht University in the Netherlands.
"Until now many geologists have only looked at New Caledonia and New Zealand separately and didn't see a connection, Mr Schellart said.
"In our new reconstruction, which looked at a much larger region including eastern Australia, New Zealand, Fiji, Vanuatu, New Caledonia and New Guinea, we saw a large number of similarities between New Caledonia and northern New Zealand in terms of geology, structure, volcanism and timing of geological events.
"We then searched deep within the Earth for proof of a connection and found the evidence 1100 km below the Tasman Sea in the form of a subducted tectonic plate.
"We combined reconstructions of the tectonic plates that cover the Earth's surface with seismic tomography, a technique that allows one to look deep into the Earth's interior using seismic waves that travel through the Earth's interior to map different regions.
"We are now able to say a tectonic plate about 70 km thick, some 2500 km long and 700 km wide was subducted into the Earth's interior.
"The discovery means there was a geographical connection between New Caledonia and New Zealand between 50 and 20 million years ago by a long chain of volcanic islands. This could be important for the migration of certain plant and animal species at that time," Mr Schellart said.
Mr Schellart said the new discovery defuses the debate about whether the continents and micro-continents in the Southwest Pacific have been completely separated since 100 million years ago and helps to explain some of the mysteries surrounding evolution in the region.
"As geologists present more data, and computer modelling programs become more hi-tech, it is likely we will learn more about our Earth's history and the processes of evolution."
Washington DC (SPX) Mar 26, 2009
Earth's crust melts more easily than previously thought, scientists have discovered. In a paper published in this week's issue of the journal Nature, geologists report results of a study of how well rocks conduct heat at different temperatures.
They found that as rocks get hotter in Earth's crust, they become better insulators and poorer conductors.
The findings provide insights into how magmas are formed, the scientists say, and will lead to better models of continental collision and the formation of mountain belts.
"These results shed important light on a geologic question: how large bodies of granite magma can be formed in Earth's crust," said Sonia Esperanca, a program director in the National Science Foundation (NSF)'s Division of Earth Sciences, which funded the research.
"In the presence of external heat sources, rocks heat up more efficiently than previously thought," said geologist Alan Whittington of the University of Missouri.
"We applied our findings to computer models that predict what happens to rocks when they get buried and heat up in mountain belts, such as the Himalayas today or the Black Hills in South Dakota in the geologic past.
"We found that strain heating, caused by tectonic movements during mountain belt formation, easily triggers crustal melting."
In the study, the researchers used a laser-based technique to determine how long it took heat to conduct through different rock samples. In all their samples, thermal diffusivity - a measure of how quickly heat spreads through a material - decreased rapidly with increasing temperature.
The thermal diffusivity of hot rocks and magmas was half that of what had been previously assumed.
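To see why this matters, a toy one-dimensional heat-conduction model helps. The sketch below is not the researchers' model: it is a minimal explicit finite-difference example with an assumed diffusivity law (roughly halving between 300 K and 1300 K), meant only to illustrate how a buried hot zone cools more slowly when hot rock conducts heat less well.

```python
import numpy as np

def simulate(kappa_of_T, L=10_000.0, nx=200, t_end=3.15e12):
    """Explicit 1-D heat conduction with temperature-dependent diffusivity.

    kappa_of_T: function giving diffusivity (m^2/s) for a temperature array.
    L: domain length in metres; t_end: total time in seconds (~100,000 yr).
    """
    dx = L / nx
    x = np.linspace(0.0, L, nx)
    # A 1000 K hot zone, 2 km wide, embedded in 300 K country rock.
    T = np.where(np.abs(x - L / 2) < L / 10, 1000.0, 300.0)
    t = 0.0
    while t < t_end:
        kappa = kappa_of_T(T)
        kf = 0.5 * (kappa[1:] + kappa[:-1])      # diffusivity at cell faces
        flux = kf * (T[1:] - T[:-1]) / dx        # conductive heat flux
        dt = 0.4 * dx**2 / kappa.max()           # explicit stability limit
        T[1:-1] += dt * (flux[1:] - flux[:-1]) / dx
        t += dt
    return T

# Assumed, illustrative diffusivity laws (not the measured values):
constant = lambda T: np.full_like(T, 1.0e-6)                   # classical assumption
decreasing = lambda T: 1.0e-6 / (1.0 + (T - 300.0) / 1000.0)   # halves by 1300 K

print("peak T after 100 kyr, constant kappa:   %.0f K" % simulate(constant).max())
print("peak T after 100 kyr, decreasing kappa: %.0f K" % simulate(decreasing).max())
```

In a sketch like this the hot zone governed by the temperature-dependent law stays hotter for longer, which is the qualitative behaviour the Missouri group describes.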
"Most crustal melting on Earth comes from intrusions of hot basaltic magma from the Earth's mantle," said Peter Nabelek, also a geologist at the University of Missouri. "The problem is that during continental collisions, we don't see intrusions of basaltic magma into continental crust."
These experiments suggest that because of low thermal diffusivity, strain heating is much faster and more efficient. Once rocks get heated, they stay hotter for much longer, Nabelek said.
The processes take millions of years to happen, and scientists can only simulate them on a computer. The new data will allow them to create computer models that more accurately represent processes that occur during continental collisions.
GARY ANDERSON was not around to see a backhoe tear up the buffalo grass at his ranch near Akron, Colorado. But he was watching a few weeks later when the technicians came to dump instruments and insulation into their 2-metre-deep hole.
What they left behind didn't look like much: an anonymous mound of dirt and, a few paces away, a spindly metal framework supporting a solar panel. All Anderson knew was that he was helping to host some kind of science experiment. It wouldn't be any trouble, he'd been told, and it wouldn't disturb the cattle. After a couple of years the people who installed it would come and take it away again.
He had in fact become part of what is probably the most ambitious seismological project ever conducted. Its name is USArray and its aim is to run what amounts to an ultrasound scan over the 48 contiguous states of the US. Through the seismic shudders and murmurs that rack Earth's innards, it will build up an unprecedented 3D picture of what lies beneath North America.
It is a mammoth undertaking, during which USArray's scanner - a set of 400 transportable seismometers - will sweep all the way from the Pacific to the Atlantic. Having started off in California in 2004, it is now just east of the Rockies, covering a north-south swathe stretching from Montana's border with Canada down past El Paso on the Texas-Mexico border. By 2013, it should have reached the north-east coast, and its mission end.
Though not yet at the halfway stage, the project is already bringing the rocky underbelly of the US into unprecedented focus. Geologists are using this rich source of information to gain new understanding of the continent's tumultuous past - and what its future holds.
For something so fundamental, our idea of what lies beneath our feet is sketchy at best. It is only half a century since geologists firmed up the now standard theory of plate tectonics. This is the notion that Earth's uppermost layers are segmented like a jigsaw puzzle whose pieces - vast "plates" carrying whole continents or chunks of ocean - are constantly on the move. Where two plates collide, we now know, one often dives beneath the other. That process, known as subduction, can create forces strong enough to build up spectacular mountain ranges such as the still-growing Andes in South America or the Rocky mountains of the western US and Canada.
In the heat and pressure of the mantle beneath Earth's surface, the subducted rock deforms and slowly flows, circulating on timescales of millions of years. Eventually, it can force its way back to the surface, prising apart two plates at another tectonic weak point. The mid-Atlantic ridge, at the eastern edge of the North American plate, is a classic example of this process in action.
What we don't yet know is exactly what happens to the rock during its tour of Earth's interior. How does its path deep underground relate to features we can see on the surface? Is the diving of plates a smoothly flowing process or a messy, bitty, stop-start affair?
USArray will allow geologists to poke around under the hood, inspecting Earth's internal workings right down to where the mantle touches the iron-rich core 2900 kilometres below the surface - and perhaps even further down. "It is our version of the Hubble Space Telescope. With it, we'll be able to view Earth in a fundamentally different way," says Matthew Fouch, a geophysicist at Arizona State University in Tempe.
College Park MD (SPX) May 11, 2009
An international team of geologists may have uncovered the answer to an age-old question - an ice-age-old question, that is. It appears that Earth's earliest ice age may have been due to the rise of oxygen in Earth's atmosphere, which consumed atmospheric greenhouse gases and chilled the earth.
Scientists from the University of Maryland, including post-doctoral fellows Boswell Wing and Sang-Tae Kim, graduate student Margaret Baker, and professors Alan J. Kaufman and James Farquhar, along with colleagues in Germany, South Africa, Canada and the United States, uncovered evidence that the oxygenation of Earth's atmosphere - generally known as the Great Oxygenation Event - coincided with the first widespread ice age on the planet.
"We can now put our hands on the rock library that preserves evidence of irreversible atmospheric change," said Kaufman. "This singular event had a profound effect on the climate, and also on life."
Using sulfur isotopes to determine the oxygen content of ~2.3-billion-year-old rocks in the Transvaal Supergroup in South Africa, they found evidence of a sudden increase in atmospheric oxygen that broadly coincided with physical evidence of glacial debris, and geochemical evidence of a new world order for the carbon cycle.
"The sulfur isotope change we recorded coincided with the first known anomaly in the carbon cycle. This may have resulted from the diversification of photosynthetic life that produced the oxygen that changed the atmosphere," Kaufman said.
Two and a half billion years ago, before the Earth's atmosphere contained appreciable oxygen, photosynthetic bacteria gave off oxygen that first likely oxygenated the surface of the ocean, and only later the atmosphere.
The first formed oxygen reacted with iron in the oceans, creating iron oxides that settled to the ocean floor in sediments called banded iron-formations - layered deposits of red-brown rock that accumulated in ocean basins worldwide. Later, once the iron was used up, oxygen escaped from the oceans and started filling up the atmosphere.
Once oxygen made it into the atmosphere, the scientists suggest that it reacted with methane, a powerful greenhouse gas, to form carbon dioxide, which is 62 times less effective at warming the surface of the planet. "With less warming potential, surface temperatures may have plummeted, resulting in globe-encompassing glaciers and sea ice," said Kaufman.
In addition to its effect on climate, the rise in oxygen stimulated the rise in stratospheric ozone, our global sunscreen. This gas layer, which lies between 12 and 30 miles above the surface, decreased the amount of damaging ultraviolet sunrays reaching the oceans, allowing photosynthetic organisms that previously lived deeper down to move up to the surface, and hence increase their output of oxygen, further building up stratospheric ozone.
"New oxygen in the atmosphere would also have stimulated weathering processes, delivering more nutrients to the seas, and may have also pushed biological evolution towards eukaryotes, which require free oxygen for important biosynthetic pathways," said Kaufman.
The result of the Great Oxidation Event, according to Kaufman and his colleagues, was a complete transformation of Earth's atmosphere, of its climate, and of the life that populated its surface. The study is published in the May issue of Geology.
Panama, Panama (SPX) May 19, 2009
The geologic faults responsible for the rise of the eastern Andes mountains in Colombia became active 25 million years ago - 18 million years before the previously accepted start date for the Andes' rise, according to researchers at the Smithsonian Tropical Research Institute in Panama, the University of Potsdam in Germany and Ecopetrol in Colombia.
"No one had ever dated mountain-building events in the eastern range of the Colombian Andes," said Mauricio Parra, a former doctoral candidate at the University of Potsdam (now a postdoctoral fellow with the University of Texas) and lead author.
"This eastern sector of America's backbone turned out to be far more ancient here than in the central Andes, where the eastern ranges probably began to form only about 10 million years ago."
The team integrated new geologic maps that illustrate tectonic thrusting and faulting, information about the origins and movements of sediments and the location and age of plant pollen in the sediments, as well as zircon-fission track analysis to provide an unusually thorough description of basin and range formation.
As mountain ranges rise, rainfall and erosion wash minerals like zircon from rocks of volcanic origin into adjacent basins, where they accumulate to form sedimentary rocks. Zircon contains traces of uranium. As the uranium decays, trails of radiation damage accumulate in the zircon crystals.
At high temperatures, fission tracks disappear like the mark of a knife disappears from a soft block of butter. By counting the microscopic fission tracks in zircon minerals, researchers can tell how long ago sediments formed and how deeply they were buried.
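The arithmetic behind that statement is ordinary radioactive decay. The snippet below is only a schematic sketch with approximate, assumed decay constants: it ignores the etching-efficiency and geometry corrections that real fission-track dating handles empirically through calibration against age standards (the zeta method).

```python
import math

# Approximate decay constants for 238U (per year); assumed values for illustration.
LAMBDA_TOTAL = 1.551e-10    # total decay, dominated by alpha decay
LAMBDA_FISSION = 8.5e-17    # rare spontaneous-fission branch that leaves tracks

def schematic_track_age(fissions_per_u238_atom):
    """Schematic age in years from the fraction of present-day 238U atoms
    that have undergone spontaneous fission, as inferred from track counts.

    Accumulated fissions per surviving 238U atom:
        F = (lambda_f / lambda_total) * (exp(lambda_total * t) - 1)
    which inverts to the expression returned below.
    """
    ratio = LAMBDA_TOTAL / LAMBDA_FISSION
    return math.log(1.0 + ratio * fissions_per_u238_atom) / LAMBDA_TOTAL

# Example: roughly 2e-9 fissions per 238U atom corresponds to about 25 Myr,
# the age reported here for the onset of faulting in the eastern Andes.
print("%.0f Myr" % (schematic_track_age(2.1e-9) / 1e6))
```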
Classification of nearly 17,000 pollen grains made it possible to clearly delimit the age of sedimentary layers.
The use of these complementary techniques led the team to postulate that the rapid advance of a sinking wedge of material as part of tectonic events 31 million years ago may have set the stage for the subsequent rise of the range.
"The date that mountain building began is critical to those of us who want to understand the movement of ancient animals and plants across the landscape and to engineers looking for oil and gas," said Carlos Jaramillo, staff scientist from STRI. "We are still trying to put together a big tectonic jigsaw puzzle to figure out how this part of the world formed
Tempe AZ (SPX) May 28, 2009
There are very few places in the world where dynamic activity taking place beneath Earth's surface goes undetected. Volcanoes, earthquakes, and even the sudden uplifting or sinking of the ground are all visible results of restlessness far below, but according to research by Arizona State University (ASU) seismologists, dynamic activity deep beneath us isn't always expressed on the surface.
The Great Basin in the western United States is a desert region largely devoid of major surface changes. The area consists of small mountain ranges separated by valleys and includes most of Nevada, the western half of Utah and portions of other nearby states.
For tens of millions of years, the Great Basin has been undergoing extension--the stretching of Earth's crust.
While studying the extension of the region, geologist John West of ASU was surprised to find that something unusual existed beneath this area's surface.
West and colleagues found that portions of the lithosphere--the crust and uppermost mantle of the Earth--had sunk into the more fluid upper mantle beneath the Great Basin and formed a large cylindrical blob of cold material far below the surface of central Nevada.
It was an extremely unexpected finding in a location that showed no corresponding changes in surface topography or volcanic activity, West says.
West compared his unusual results with tomography models of the area--CAT scans of the inside of Earth--done by geologist Jeff Roth, also of ASU. West and Roth are graduate students; working with their advisor, Matthew Fouch, the team concluded that they had found a lithospheric drip.
Results of their research, funded by the National Science Foundation (NSF), were published in the May 24 issue of the journal Nature Geoscience.
"The results provide important insights into fine-scale mantle convection processes, and their possible connections with volcanism and mountain-building on Earth's surface," said Greg Anderson, program director in NSF's Division of Earth Sciences.
A lithospheric drip can be envisioned as honey dripping off a spoon, where an initial lithospheric blob is followed by a long tail of material.
When a small, high-density mass is embedded near the base of the crust and the area is warmed up, the high-density piece will be heavier than the area around it and it will start sinking. As it drops, material in the lithosphere starts flowing into the newly created conduit.
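An order-of-magnitude feel for how fast such a blob can sink comes from the Stokes settling formula for a dense sphere in a viscous fluid. The numbers below (density contrast, blob size, mantle viscosity) are assumed, illustrative values rather than the team's results, and a real drip is far from spherical.

```python
SECONDS_PER_MYR = 3.156e13

def stokes_sink_rate(delta_rho, radius_m, viscosity):
    """Stokes terminal velocity (m/s) of a dense sphere in a viscous fluid:
    v = 2 * delta_rho * g * r**2 / (9 * mu) -- a crude stand-in for a drip."""
    g = 9.8
    return 2.0 * delta_rho * g * radius_m**2 / (9.0 * viscosity)

# Assumed parameter choices: a blob ~100 km across (50 km radius), density
# excesses of 50-100 kg/m^3, and mantle viscosities of 5e20-1e21 Pa s.
for drho, mu in [(50.0, 1e21), (100.0, 5e20)]:
    v = stokes_sink_rate(drho, 5.0e4, mu)
    print(f"drho={drho:.0f} kg/m3, mu={mu:.0e} Pa s -> {v * SECONDS_PER_MYR / 1e3:.0f} km/Myr")
```

Rates of roughly ten to a few tens of kilometres per million years are at least broadly compatible with a drip that began 15-20 million years ago now extending several hundred kilometres down, though the answer depends strongly on the assumed viscosity and density contrast.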
Seismic images of mantle structure beneath the region provided additional evidence, showing a large cylindrical mass 100 km wide and at least 500 km tall (about 60 by 300 miles).
"As a general rule, I have been anti-drip since my early days as a scientist," admits Fouch. "The idea of a lithospheric drip has been used many times over the years to explain things like volcanism, surface uplift, surface subsidence, but you could never really confirm it--and until now no one has caught a drip in the act, so to speak."
Originally, the team didn't think any visible signs appeared on the surface.
"We wondered how you could have something like a drip that is drawing material into its center when the surface of the whole area is stretching apart," says Fouch.
"But it turns out that there is an area right above the drip, in fact the only area in the Great Basin, that is currently undergoing contraction. John's finding of a drip is therefore informing geologists to develop a new paradigm of Great Basin evolution."
Scientists have known about the contraction for some time, but have been arguing about its cause.
As a drip forms, surrounding material is drawn in behind it; this means that the surface should be contracting toward the center of the basin. Since contraction is an expected consequence of a drip, a lithospheric drip could well be the answer to what is being observed in the Great Basin.
"Many in the scientific community thought it couldn't be a drip because there wasn't any elevation change or surface manifestation, and a drip has historically always been connected with major surface changes," says West.
"But those features aren't required to have the drip. Under certain conditions, like in the Great Basin, drips can form with little or no corresponding changes in surface topography or volcanic activity."
All the numerical models computed by the team suggest that the drip isn't going to cause things to sink down or pop up quickly, or cause lots of earthquakes.
There would likely be little or no impact on the people living above the drip. The team believes that the drip is a transient process that started some 15-20 million years ago, and probably recently detached from the overlying plate.
"This finding would not have been possible without the incredible wealth of seismic data captured by EarthScope's Transportable Array (TA) as it moved across the western United States," says West.
"We had access to data from a few long-term stations in the region, but the excellent data and 75-km grid spacing of the TA is what made these results possible."
"This is a great example of science in action," says Fouch.
"We went in not expecting to find this. Instead, we came up with a hypothesis that was not what anyone had proposed previously for the area, and then we tested the hypothesis with as many different types of data as we could find.
"In all cases so far it has held up. We're excited to see how this discovery plays a role in the development of new ideas about the geologic history of the western U.S."
Washington DC (SPX) Jul 29, 2009
A new analysis of jade found along the Motagua fault that bisects Guatemala is underscoring the fact that this region has a more complex geologic history than previously thought.
Because jade and other associated metamorphic rocks are found on both sides of the fault, and because the jade to the north is younger by about 60 million years, a team of geologists posits in a new research paper that the North American and Caribbean plates have done more than simply slide past each other: they have collided. Twice.
"Now we understand what has happened in Guatemala, geologically," says one of the authors, Hannes Brueckner, Professor of Geology at Queens College, City University of New York. "Our new research is filling in information about plate tectonics for an area of the world that needed sorting."
Jade is a cultural term for two rare metamorphic rocks known as jadeitite (as discussed in the current research) and nephrite that are both extremely tough and have been used as tools and talismans throughout the world. The jadeitite (or jadeite jade) is a sort of scar tissue from some collisions between Earth's plates.
As ocean crust is pushed under another block, or subducted, pressure increases with only modest rise in temperature, squeezing and drying the rocks without melting them. Jade precipitates from fluids flowing up the subduction channel and into the chilled, overlying mantle that becomes serpentinite.
The serpentinite assemblage, which includes jade and has a relatively low density, can be uplifted during subsequent continental collisions and extruded along the band of the collision boundary, such as those found in the Alps, California, Iran, Russia, and other parts of the world.
The Motagua fault is one of three subparallel left-lateral strike-slip faults (with horizontal motion) in Guatemala and forms the boundary between the North American and Caribbean tectonic plates.
In an earlier paper, the team of authors found evidence of two different collisions by dating mica found in collisional rocks (including jade) from the North American side of the fault to about 70 million years ago and from the southern side (or the Caribbean plate) to between 120 and 130 million years ago.
But mica dates can be "reset" by subsequent heating. Now, the authors have turned to eclogite, a metamorphic rock that forms from ocean floor basalt in the subduction channel. Eclogite dates are rarely reset, and the authors found that eclogite from both sides of the Motagua dates to roughly 130 million years old.
The disparate dating of rocks along the Motagua can be explained by the following scenario: a collision 130 million years ago created a serpentinite belt that was subsequently sliced into segments.
Then, after plate movement changed direction about 100 million years ago, a second collision between one of these slices and the North American plate reset the mica clocks in jadeitite found on the northern side of the fault to 70 million years. Finally, plate motion in the last 70 million years juxtaposed the southern serpentinites with the northern serpentinites, which explains why there are collisional remnants on both sides of the Motagua.
"All serpentinites along the fault line formed at the same time, but the northern assemblage was re-metamorphosed at about 70 million year ago. There are two collision events recorded in the rocks observed today, one event on the southern side and two on the northern," explains author George Harlow, Curator in the Division of Earth and Planetary Sciences at the American Museum of Natural History. "Motion between plates is usually not a single motion-it is a series of motions.
Rich Ore Deposits Linked To Ancient Atmosphere
Washington DC (SPX) Nov 27, 2009
Much of our planet's mineral wealth was deposited billions of years ago when Earth's chemical cycles were different from today's. Using geochemical clues from rocks nearly 3 billion years old, a group of scientists including Andrey Bekker and Doug Rumble from the Carnegie Institution have made the surprising discovery that the creation of economically important nickel ore deposits was linked to sulfur in the ancient oxygen-poor atmosphere.
These ancient ores - specifically iron-nickel sulfide deposits - yield 10% of the world's annual nickel production. They formed for the most part between two and three billion years ago when hot magmas erupted on the ocean floor. Yet scientists have puzzled over the origin of the rich deposits. The ore minerals require sulfur to form, but neither seawater nor the magmas hosting the ores were thought to be rich enough in sulfur for this to happen.
"These nickel deposits have sulfur in them arising from an atmospheric cycle in ancient times. The isotopic signal is of an anoxic atmosphere," says Rumble of Carnegie's Geophysical Laboratory, a co-author of the paper appearing in the November 20 issue of Science.
Rumble, with lead author Andrey Bekker (formerly Carnegie Fellow and now at the University of Manitoba), and four other colleagues used advanced geochemical techniques to analyze rock samples from major ore deposits in Australia and Canada. They found that to help produce the ancient deposits, sulfur atoms made a complicated journey from volcanic eruptions, to the atmosphere, to seawater, to hot springs on the ocean floor, and finally to molten, ore-producing magmas.
The key evidence came from a form of sulfur known as sulfur-33, an isotope in which atoms contain one more neutron than "normal" sulfur (sulfur-32). Both isotopes act the same in most chemical reactions, but reactions in the atmosphere in which sulfur dioxide gas molecules are split by ultraviolet light (UV) rays cause the isotopes to be sorted or "fractionated" into different reaction products, creating isotopic anomalies.
"If there is too much oxygen in the atmosphere then not enough UV gets through and these reactions can't happen," says Rumble. "So if you find these sulfur isotope anomalies in rocks of a certain age, you have information about the oxygen level in the atmosphere."
By linking the rich nickel ores with the ancient atmosphere, the anomalies in the rock samples also answer the long-standing question regarding the source of the sulfur in the ore minerals. Knowing this will help geologists track down new ore deposits, says Rumble, because the presence of sulfur and other chemical factors determine whether or not a deposit will form.
"Ore deposits are a tiny fraction of a percent of the Earth's surface, yet economically they are incredibly important.
Corvallis OR (SPX) Feb 02, 2010
Researchers have discovered that some of the most fundamental assumptions about how water moves through soil in a seasonally dry climate such as the Pacific Northwest are incorrect - and that a century of research based on those assumptions will have to be reconsidered.
A new study by scientists from Oregon State University and the Environmental Protection Agency showed - much to the surprise of the researchers - that soil clings tenaciously to the first precipitation after a dry summer, and holds it so tightly that it almost never mixes with other water.
The finding is so significant, researchers said, that they aren't even sure yet what it may mean. But it could affect our understanding of how pollutants move through soils, how nutrients get transported from soils to streams, how streams function and even how vegetation might respond to climate change.
The research was just published online in Nature Geoscience, a professional journal.
"Water in mountains such as the Cascade Range of Oregon and Washington basically exists in two separate worlds," said Jeff McDonnell, an OSU distinguished professor and holder of the Richardson Chair in Watershed Science in the OSU College of Forestry. "We used to believe that when new precipitation entered the soil, it mixed well with other water and eventually moved to streams. We just found out that isn't true."
"This could have enormous implications for our understanding of watershed function," he said. "It challenges about 100 years of conventional thinking."
What actually happens, the study showed, is that the small pores around plant roots fill with water that gets held there until it's eventually used up in plant transpiration back to the atmosphere. Then new water becomes available with the return of fall rains, replenishes these small localized reservoirs near the plants and repeats the process. But all the other water moving through larger pores is essentially separate and almost never intermingles with that used by plants during the dry summer.
The study found in one test, for instance, that after the first large rainstorm in October, only 4 percent of the precipitation entering the soil ended up in the stream - 96 percent was taken up and held tightly by soil around plants to recharge soil moisture.
A month later when soil moisture was fully recharged, 55 percent of precipitation went directly into streams. And as winter rains continue to pour moisture into the ground, almost all of the water that originally recharged the soil around plants remains held tightly in the soil - it never moves or mixes.
"This tells us that we have a less complete understanding of how water moves through soils, and is affected by them, than we thought we did," said Renee Brooks, a research plant physiologist with the EPA and courtesy faculty in the OSU Department of Forest Ecosystems and Society.
"Our mathematical models of ecosystem function are based on certain assumptions about biological processes," Brooks said. "This changes some of those assumptions. Among the implications is that we may have to reconsider how other things move through soils that we are interested in, such as nutrients or pollutants."
The new findings were made possible by advances in the speed and efficiency of stable isotope analyses of water, which allowed scientists to essentially "fingerprint" water and tell where it came from and where it moved to. Never before was it possible to make so many isotopic measurements and get a better view of water origin and movement, the researchers said.
The study also points out the incredible ability of plants to take up water that is so tightly bound to the soil, with forces nothing else in nature can match.
Earth's robust magnetic field protects the planet and its inhabitants from the full brunt of the solar wind, a torrent of charged particles that on less shielded planets such as Venus and Mars has over the ages stripped away water reserves and degraded their upper atmospheres. Unraveling the timeline for the emergence of that magnetic field and the mechanism that generates it—a dynamo of convective fluid in Earth's outer core—can help constrain the early history of the planet, including the interplay of geologic, atmospheric and astronomical processes that rendered the world habitable.
An interdisciplinary study published in the March 5 Science attempts to do just that, presenting evidence that Earth had a dynamo-generated magnetic field as early as 3.45 billion years ago, just a billion or so years after the planet had formed. The new research pushes back the record of Earth's magnetic field by at least 200 million years; a related group had presented similar evidence from slightly younger rocks in 2007, arguing for a strong terrestrial magnetic field 3.2 billion years ago.
University of Rochester geophysicist John Tarduno and his colleagues analyzed rocks from the Kaapvaal Craton, a region near the southern tip of Africa that hosts relatively pristine early Archean crust. (The Archean eon began about 3.8 billion years ago and ended 2.5 billion years ago.)
In 2009 Tarduno's group had found that some of the rocks were magnetized 3.45 billion years ago—roughly coinciding with the direct evidence for Earth's first life, at 3.5 billion years ago. But an external source for the magnetism—such as a blast from the solar wind—could not be ruled out. Venus, for instance, which lacks a strong internal magnetic field of its own, does have a feeble external magnetic field induced by the impact of the solar wind into the planet's dense atmosphere.
The new study examines the magnetic field strength required to imprint magnetism on the Kaapvaal rocks; it concludes that the field was 50 percent to 70 percent of its present strength. That value is many times greater than would be expected for an external magnetic field, such as the weak Venusian field, supporting the presence of an inner-Earth dynamo at that time.
With the added constraints on the early magnetic field, the researchers were able to extrapolate how well that field could keep the solar wind at bay. The group found that the early Archean magnetopause, the boundary in space where the magnetic field meets the solar wind, was about 30,000 kilometers or less from Earth. The magnetopause is about twice that distance today but can shift in response to extreme energetic outbursts from the sun. "Those steady-state conditions three and a half billion years ago are similar to what we see during severe solar storms today," Tarduno says. With the magnetopause so close to Earth, the planet would not have been totally shielded from the solar wind and may have lost much of its water early on, the researchers say.
Clues for finding habitable exoplanets
As researchers redouble their efforts to find the first truly Earth-like planet outside the solar system, Tarduno says the relationship between stellar wind, atmospheres and magnetic fields should come into play when modeling a planet's potential habitability. "This is clearly a variable to think about when looking at exoplanets," he says, adding that a magnetic field's impact on a planet's water budget seems particularly important.
One scientist in the field agrees that the results are plausible but has some lingering questions. "I think the work that Tarduno and his co-authors are doing is really exciting," says Peter Selkin, a geologist at the University of Washington Tacoma. "There's a lot of potential to use the tools that they've developed to look at rocks that are much older than anybody has been able to do paleomagnetism on before."
But he notes that even the relatively pristine rocks of the Kaapvaal Craton have undergone low-grade mineralogical and temperature changes over billions of years. "They're not exactly in the state they were in initially," Selkin says, "and that's exactly what has made a lot of paleomagnetists stay away from rocks like these." Selkin credits Tarduno and his co-authors for doing all they can to show that the magnetized samples have been minimally altered, but he would like to see more petrologic and mineralogical analysis. "I think that there are still things that we need to know about the minerals that Tarduno and his co-authors used in this study in order to be able to completely buy the results," he says.
David Dunlop, a geophysicist at the University of Toronto, is more convinced, calling the work a "very careful demonstration." The field strengths, he says, "can be assigned quite confidently" to the time interval 3.4 billion to 3.45 billion years ago. "It would be exciting to push back the curtain shadowing [the] onset of the geodynamo still further, but this seems unlikely," Dunlop says. "Nowhere else has nature been so kind in preserving nearly pristine magnetic remanence carriers."
Geologists have found evidence that sea ice extended to the equator 716.5 million years ago, bringing new precision to a "snowball Earth" event long suspected to have taken place around that time.
Funded by the National Science Foundation (NSF) and led by scientists at Harvard University, the team reports on its work this week in the journal Science.
The new findings--based on an analysis of ancient tropical rocks that are now found in remote northwestern Canada--bolster the theory that our planet has, at times in the past, been ice-covered at all latitudes.
"This is the first time that the Sturtian glaciation has been shown to have occurred at tropical latitudes, providing direct evidence that this particular glaciation was a 'snowball Earth' event," says lead author Francis Macdonald, a geologist at Harvard University.
"Our data also suggest that the Sturtian glaciation lasted a minimum of five million years."
According to Enriqueta Barrera, program director in NSF's Division of Earth Sciences, which supported the research, the Sturtian glaciation, along with the Marinoan glaciation right after it, are the greatest ice ages known to have taken place on Earth. "Ice may have covered the entire planet then," says Barrera, "turning it into a 'snowball Earth.'"
The survival of eukaryotes--life forms other than microbes such as bacteria--throughout this period suggests that sunlight and surface water remained available somewhere on Earth's surface. The earliest animals arose at roughly the same time.
Even in a snowball Earth, Macdonald says, there would be temperature gradients, and it is likely that sea ice would be dynamic: flowing, thinning and forming local patches of open water, providing refuge for life.
"The fossil record suggests that all of the major eukaryotic groups, with the possible exception of animals, existed before the Sturtian glaciation," Macdonald says. "The questions that arise from this are: If a snowball Earth existed, how did these eukaryotes survive? Did the Sturtian snowball Earth stimulate evolution and the origin of animals?"
"From an evolutionary perspective," he adds, "it's not always a bad thing for life on Earth to face severe stress."
The rocks Macdonald and his colleagues analyzed in Canada's Yukon Territory showed glacial deposits and other signs of glaciation, such as striated clasts, ice-rafted debris, and deformation of soft sediments.
The scientists were able to determine, based on the magnetism and composition of these rocks, that 716.5 million years ago the rocks were located at sea-level in the tropics, at about 10 degrees latitude.
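The latitude part of that determination rests on dipole geometry: for a geocentric axial dipole field, the magnetic inclination I frozen into a rock relates to the latitude at which it formed through tan I = 2 tan(latitude). A small sketch with a hypothetical inclination shows the arithmetic:

```python
import math

def paleolatitude_deg(inclination_deg):
    """Geocentric axial dipole relation: tan(I) = 2 * tan(latitude)."""
    return math.degrees(math.atan(math.tan(math.radians(inclination_deg)) / 2.0))

# A shallow remanent inclination of about 20 degrees (a hypothetical value,
# not the published measurement) corresponds to roughly 10 degrees latitude.
print(round(paleolatitude_deg(20.0), 1))   # ~10.3 degrees
```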
"Climate modeling has long predicted that if sea ice were ever to develop within 30 degrees latitude of the equator, the whole ocean would rapidly freeze over," Macdonald says. "So our result implies quite strongly that ice would have been found at all latitudes during the Sturtian glaciation."
Scientists don't know exactly what caused this glaciation or what ended it, but Macdonald says its age of 716.5 million years closely matches the age of a large igneous province--made up of rocks formed by magma that has cooled--stretching more than 1,500 kilometers (932 miles) from Alaska to Ellesmere Island in far northeastern Canada.
This coincidence could mean the glaciation was either precipitated or terminated by volcanic activity.
A thousand years after the last ice age ended, the Northern Hemisphere was plunged back into glacial conditions. For 20 years, scientists have blamed a vast flood of meltwater for causing this 'Younger Dryas' cooling, 13,000 years ago. Picking through evidence from Canada's Mackenzie River, geologists now believe they have found traces of this flood, revealing that cold water from North America's dwindling ice sheet poured into the Arctic Ocean, from where it ultimately disrupted climate-warming currents in the Atlantic.
The researchers scoured tumbled boulders and gravel terraces along the Mackenzie River for signs of the meltwater's passage. The flood "would solve a big problem if it actually happened", says oceanographer Wally Broecker of Columbia University's Lamont-Doherty Earth Observatory in Palisades, New York, who was not part of the team.
Writing in Nature, the geologists present evidence confirming that the flood occurred (J. B. Murton et al. Nature 464, 740–743; 2010). But their findings raise questions about exactly how the flood chilled the planet. Many researchers thought the water would have poured down what is now the St Lawrence River into the North Atlantic Ocean, where the currents form a sensitive climate trigger. Instead, the Mackenzie River route would have funnelled the flood into the Arctic Ocean.
The Younger Dryas was named after the Arctic wild flower Dryas octopetala that spread across Scandinavia as the big chill set in. At its onset, temperatures in northern Europe suddenly dropped 10 °C or more in decades, and tundra replaced the forest that had been regaining its hold on the land. Broecker suggested in 1989 that the rapid climate shift was caused by a slowdown of surface currents in the Atlantic Ocean, which carry warm water north from the Equator to high latitudes (W. S. Broecker et al. Nature 341, 318-321; 1989). The currents are part of the 'thermohaline' ocean circulation, which is driven as the cold and salty — hence dense — waters of the far North Atlantic sink, drawing warmer surface waters north.
Broecker proposed that the circulation was disrupted by a surge of fresh water that overflowed from Lake Agassiz, a vast meltwater reservoir that had accumulated behind the retreating Laurentide Ice Sheet in the area of today's Great Lakes. The fresh water would have reduced the salinity of the surface waters, stopping them from sinking.
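Whether surface water sinks comes down to its density, which temperature and salinity push in opposite directions. A minimal sketch with a linearised equation of state (the coefficients below are assumed, typical values, not output from an ocean model) shows how a modest freshening can offset the density gained by cooling:

```python
def seawater_density(temp_c, salinity_psu,
                     rho0=1027.0, t0=10.0, s0=35.0,
                     alpha=2.0e-4, beta=7.6e-4):
    """Linearised seawater density in kg/m^3:
    rho0 * (1 - alpha*(T - T0) + beta*(S - S0)),
    with assumed thermal-expansion (alpha) and haline-contraction (beta)
    coefficients."""
    return rho0 * (1.0 - alpha * (temp_c - t0) + beta * (salinity_psu - s0))

cold_salty = seawater_density(2.0, 35.0)   # cold surface water, dense enough to sink
cold_fresh = seawater_density(2.0, 33.0)   # same temperature, freshened by 2 psu
print(round(cold_salty - cold_fresh, 2), "kg/m3 less dense after freshening")
```

In this picture, adding a large pulse of fresh water lowers the surface density and makes it harder for the water to sink, which is the disruption Broecker proposed.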
The theory is widely accepted. However, scientists never found geological evidence of the assumed flood pathway down the St Lawrence River into the North Atlantic; or along a possible alternative route southwards through the Mississippi basin. Now it is clear why: the flood did occur; it just took a different route.
The team, led by Julian Murton of the University of Sussex in Brighton, UK, dated sand, gravel and boulders from eroded surfaces in the Athabasca Valley and the Mackenzie River delta in northwestern Canada. The shapes of the geological features there suggest that the region had two major glacial outburst floods, the first of which coincides with the onset of the Younger Dryas. If the western margins of the Laurentide Ice Sheet lay just slightly east of their assumed location, several thousand cubic kilometres of water would have been able to flood into the Arctic Ocean.
"Geomorphic observations and chronology clearly indicate a northwestern flood route down the Mackenzie valley," says James Teller, a geologist at the University of Manitoba in Winnipeg, Canada, who took part in the study. But he thinks that the route raises questions about the climatic effects of the Lake Agassiz spill. "We're pretty sure that the water, had it flooded the northern Atlantic, would have been capable of slowing the thermohaline ocean circulation and produce the Younger Dryas cooling," he says. "The question is whether it could have done the same in the Arctic Ocean."
Broecker, however, says that the Arctic flood is just what his theory needed. He says that flood waters heading down the St Lawrence River might not have affected the thermohaline circulation anyway, because the sinking takes place far to the north, near Greenland. A pulse of fresh water into the Arctic, however, would ultimately have flowed into the North Atlantic and pulled the climate trigger there. "There's no way for that water to go out of the Arctic without going into the Atlantic," he says.
Santa Barbara, Calif. (UPI) Apr 6, 2010
A U.S. geologist says she's discovered a pattern that connects regular changes in the Earth's orbital cycle to changes in the planet's climate.
University of California-Santa Barbara Assistant Professor Lorraine Lisiecki performed her analysis of climate by examining ocean sediment cores taken from 57 locations around the world and linking that climate record to the history of the Earth's orbit.
The researchers said it's known the Earth's orbit around the sun changes shape every 100,000 years, becoming either more round or more elliptical. The shape of the orbit is known as its "eccentricity" and a related aspect is the 41,000-year cycle in the tilt of the Earth's axis.
Glaciation of the Earth also occurs every 100,000 years and Lisiecki found the timing of changes in climate and eccentricity coincided.
"The clear correlation between the timing of the change in orbit and the change in the Earth's climate is strong evidence of a link between the two," Lisiecki said. She also said she discovered the largest glacial cycles occurred during the weakest changes in the eccentricity of Earth's orbit -- and vice versa, with the stronger changes in orbit correlating to weaker changes in climate.
"This may mean that the Earth's climate has internal instability in addition to sensitivity to changes in the orbit," she said.
The research is reported in the journal Nature Geoscience.
An international team of scientists including Mark Williams and Jan Zalasiewicz of the Geology Department of the University of Leicester, and led by Dr. Thijs Vandenbroucke, formerly of Leicester and now at the University of Lille 1 (France), has reconstructed the Earth's climate belts of the late Ordovician Period, between 460 and 445 million years ago.
The findings have been published online in the Proceedings of the National Academy of Sciences -- and show that these ancient climate belts were surprisingly like those of the present.
The researchers state: "The world of the ancient past had been thought by scientists to differ from ours in many respects, including having carbon dioxide levels much higher -- over twenty times as high -- than those of the present. However, it is very hard to deduce carbon dioxide levels with any accuracy from such ancient rocks, and it was known that there was a paradox, for the late Ordovician was known to include a brief, intense glaciation -- something difficult to envisage in a world with high levels of greenhouse gases."
The team of scientists looked at the global distribution of common, but mysterious fossils called chitinozoans -- probably the egg-cases of extinct planktonic animals -- before and during this Ordovician glaciation. They found a pattern that revealed the position of ancient climate belts, including such features as the polar front, which separates cold polar waters from more temperate ones at lower latitudes. The position of these climate belts changed as the Earth entered the Ordovician glaciation -- but in a pattern very similar to that which happened in oceans much more recently, as they adjusted to the glacial and interglacial phases of our current (and ongoing) Ice Age.
This 'modern-looking' pattern suggests that those ancient carbon dioxide levels could not have been as high as previously thought, but were more modest, at about five times current levels (they would have had to be somewhat higher than today's, because the sun in those far-off times shone less brightly).
"These ancient, but modern-looking oceans emphasise the stability of Earth's atmosphere and climate through deep time -- and show the current man-made rise in greenhouse gas levels to be an even more striking phenomenon than was thought," the researchers conclude.
Aug 19, 2010
Scientists have discovered a new window into the Earth's violent past. Geochemical evidence from volcanic rocks collected on Baffin Island in the Canadian Arctic suggests that beneath it lies a region of the Earth's mantle that has largely escaped the billions of years of melting and geological churning that has affected the rest of the planet.
Researchers believe the discovery offers clues to the early chemical evolution of the Earth.
The newly identified mantle "reservoir," as it is called, dates from just a few tens of millions of years after the Earth was first assembled from the collisions of smaller bodies. This reservoir likely represents the composition of the mantle shortly after formation of the core, but before the 4.5 billion years of crust formation and recycling modified the composition of most of the rest of Earth's interior.
"This was a key phase in the evolution of the Earth," says co-author Richard Carlson of the Carnegie Institution's Department of Terrestrial Magnetism. "It set the stage for everything that came after. Primitive mantle such as that we have identified would have been the ultimate source of all the magmas and all the different rock types we see on Earth today."
Carlson and lead author Matthew Jackson (a former Carnegie postdoctoral fellow, now at Boston University), working with colleagues and using samples collected by coauthor Don Francis of McGill University, targeted the Baffin Island rocks because a previous study of helium isotopes had shown them to have anomalously high ratios of helium-3 to helium-4. The rocks are the earliest expression of the mantle hotspot that now feeds volcanic eruptions on Iceland.
Helium-3 is generally extremely rare within the Earth; most of the mantle's supply has been outgassed by volcanic eruptions and lost to space over the planet's long geological history. In contrast, helium-4 has been constantly replenished within the Earth by the decay of radioactive uranium and thorium.
The high proportion of helium-3 suggests that the Baffin Island lavas came from a reservoir in the mantle that had never previously outgassed its original helium-3, implying that it had not been subjected to the extensive chemical differentiation experienced by most of the mantle.
The researchers confirmed this conclusion by analyzing the lead isotopes in the lava samples, which date the reservoir to between 4.55 and 4.45 billion years old. This age is only slightly younger than the Earth itself.
The early age of the mantle reservoir implies that it existed before melting of the mantle began to create the magmas that rose to form Earth's crust and before plate tectonics allowed that crust to be mixed back into the mantle.
Many researchers have assumed that before continental crust formed the mantle's chemistry was similar to that of meteorites called chondrites, but that the formation of continents altered its chemistry, causing it to become depleted in the elements, called incompatible elements, that are extracted with the magma when melting occurs in the mantle.
"Our results question this assumption," says Carlson. "They suggest that before continent extraction, the mantle already was depleted in incompatible elements compared to chondrites, perhaps because of an even earlier Earth differentiation event, or perhaps because the Earth originally formed from building blocks depleted in these elements."
Of the two possibilities, Carlson favors the early differentiation model, which would involve a global magma ocean on the newly-formed Earth. This magma ocean produced a crust that predated the crust that exists today.
"In our model, the original crust that formed by the solidification of the magma ocean was buoyantly unstable at Earth's surface because it was rich in iron," he says. "This instability caused it to sink to the base of the mantle, taking the incompatible elements with it, where it remains today."
Some of this deep material may have remained liquid despite the high pressures, and Carlson points out that seismological studies of the deep mantle reveal certain areas, one beneath the southern Pacific and another beneath Africa, that appear to be molten and possibly chemically different from the rest of the mantle.
"I'm holding out hope that these seismically imaged areas might be the compositional complement to the "depleted" primitive mantle that we sample in the Baffin Island lavas," he says
Computational scientists and geophysicists at the University of Texas at Austin and the California Institute of Technology (Caltech) have developed new computer algorithms that for the first time allow for the simultaneous modeling of the Earth's mantle flow, large-scale tectonic plate motions, and the behavior of individual fault zones, to produce an unprecedented view of plate tectonics and the forces that drive it.
A paper describing the whole-earth model and its underlying algorithms will be published in the August 27 issue of the journal Science and also featured on the cover.
The work "illustrates the interplay between making important advances in science and pushing the envelope of computational science," says Michael Gurnis, the John E. and Hazel S. Smits Professor of Geophysics, director of the Caltech Seismological Laboratory, and a coauthor of the Science paper.
To create the new model, computational scientists at Texas's Institute for Computational Engineering and Sciences (ICES) - a team that included Omar Ghattas, the John A. and Katherine G. Jackson Chair in Computational Geosciences and professor of geological sciences and mechanical engineering, and research associates Georg Stadler and Carsten Burstedde - pushed the envelope of a computational technique known as Adaptive Mesh Refinement (AMR).
Partial differential equations such as those describing mantle flow are solved by subdividing the region of interest (such as the mantle) into a computational grid. Ordinarily, the resolution is kept the same throughout the grid. However, many problems feature small-scale dynamics that are found only in limited regions.
"AMR methods adaptively create finer resolution only where it's needed," explains Ghattas. "This leads to huge reductions in the number of grid points, making possible simulations that were previously out of reach."
"The complexity of managing adaptivity among thousands of processors, however, has meant that current AMR algorithms have not scaled well on modern petascale supercomputers," he adds. Petascale computers are capable of one million billion operations per second. To overcome this long-standing problem, the group developed new algorithms that, Burstedde says, "allows for adaptivity in a way that scales to the hundreds of thousands of processor cores of the largest supercomputers available today."
With the new algorithms, the scientists were able to simulate global mantle flow and how it manifests as plate tectonics and the motion of individual faults. According to Stadler, the AMR algorithms reduced the size of the simulations by a factor of 5,000, permitting them to fit on fewer than 10,000 processors and run overnight on the Ranger supercomputer at the National Science Foundation (NSF)-supported Texas Advanced Computing Center.
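The idea behind AMR can be shown with a deliberately simple, serial, one-dimensional sketch. This is not the ICES code, which is parallel and scales to hundreds of thousands of cores; it only illustrates the principle of splitting cells where an error indicator flags small-scale structure, such as a narrow fault-like feature.

```python
import math

def refine_1d(f, a, b, tol, max_level=12):
    """Toy adaptive mesh refinement on [a, b].

    Cells are split recursively wherever the sampled function f deviates from
    its linear interpolant by more than tol, so resolution concentrates where
    the field has small-scale structure. Returns a list of (left, right) cells.
    """
    cells = []

    def split(lo, hi, level):
        mid = 0.5 * (lo + hi)
        indicator = abs(f(mid) - 0.5 * (f(lo) + f(hi)))   # local error estimate
        if indicator > tol and level < max_level:
            split(lo, mid, level + 1)
            split(mid, hi, level + 1)
        else:
            cells.append((lo, hi))

    split(a, b, 0)
    return cells

# A smooth field with one sharp, fault-like jump near x = 0.7.
field = lambda x: math.tanh(200.0 * (x - 0.7))
mesh = refine_1d(field, 0.0, 1.0, tol=1e-3)
print(len(mesh), "cells; smallest cell width:", min(r - l for l, r in mesh))
```

A uniform grid at the smallest cell width would need thousands of cells; the adaptive mesh reaches the same local resolution with a small fraction of that, which is the kind of saving the ICES team pushed to extreme scale in three dimensions.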
A key to the model was the incorporation of data on a multitude of scales. "Many natural processes display a multitude of phenomena on a wide range of scales, from small to large," Gurnis explains.
For example, at the largest scale - that of the whole earth - the movement of the surface tectonic plates is a manifestation of a giant heat engine, driven by the convection of the mantle below. The boundaries between the plates, however, are composed of many hundreds to thousands of individual faults, which together constitute active fault zones.
"The individual fault zones play a critical role in how the whole planet works," he says, "and if you can't simulate the fault zones, you can't simulate plate movement"-and, in turn, you can't simulate the dynamics of the whole planet.
In the new model, the researchers were able to resolve the largest fault zones, creating a mesh with a resolution of about one kilometer near the plate boundaries.
Included in the simulation were seismological data as well as data pertaining to the temperature of the rocks, their density, and their viscosity - or how strong or weak the rocks are, which affects how easily they deform. That deformation is nonlinear - with simple changes producing unexpected and complex effects.
"Normally, when you hit a baseball with a bat, the properties of the bat don't change - it won't turn to Silly Putty. In the earth, the properties do change, which creates an exciting computational problem," says Gurnis. "If the system is too nonlinear, the earth becomes too mushy; if it's not nonlinear enough, plates won't move. We need to hit the 'sweet spot.'"
After crunching through the data for 100,000 hours of processing time per run, the model returned an estimate of the motion of both large tectonic plates and smaller microplates - including their speed and direction. The results were remarkably close to observed plate movements.
In fact, the investigators discovered that anomalous rapid motion of microplates emerged from the global simulations. "In the western Pacific," Gurnis says, "we have some of the most rapid tectonic motions seen anywhere on Earth, in a process called 'trench rollback.' For the first time, we found that these small-scale tectonic motions emerged from the global models, opening a new frontier in geophysics."
One surprising result from the model relates to the energy released from plates in earthquake zones. "It had been thought that the majority of energy associated with plate tectonics is released when plates bend, but it turns out that's much less important than previously thought," Gurnis says.
"Instead, we found that much of the energy dissipation occurs in the earth's deep interior. We never saw this when we looked on smaller scales."
ScienceDaily (Sep. 17, 2010) — Earth's mantle and its core meet in a mysterious zone 2,900 kilometers beneath our feet. A team of geophysicists has just verified that partial melting of the mantle is possible in this region when the temperature reaches 4200 Kelvin. This reinforces the hypothesis of the presence of a deep magma ocean.
The originality of this work, carried out by scientists of the Institut de minéralogie et de physique des milieux condensés (UPMC/Université Paris Diderot/Institut de Physique du Globe/CNRS/IRD), lies in the use of X-ray diffraction at the European Synchrotron Radiation Facility in Grenoble (France). The results will improve our understanding of the dynamics, composition and formation of the depths of our planet.
On top of Earth's core, which consists of liquid iron, lies the solid mantle, made up essentially of oxides of magnesium, iron and silicon. The boundary between the core and the mantle, located 2,900 km below Earth's surface, is highly intriguing to geophysicists. With a pressure of around 1.4 million times atmospheric pressure and a temperature of more than 4000 Kelvin, this zone is home to chemical reactions and changes in the state of matter that are still unknown. Seismologists studying this region have observed abrupt reductions in seismic wave speeds - sometimes by as much as 30% - close to this boundary. For the last 15 years, this observation has led scientists to hypothesize partial melting of Earth's mantle at the core-mantle boundary. Today, this hypothesis has been confirmed.
In order to probe the depths of our planet, scientists have not only seismological images but also a precious experimental technique: diamond anvil cells coupled with laser heating. This instrument allows scientists to re-create, on samples of a few microns, the same pressure and temperature conditions as those in Earth's interior. The researchers of the Institut de minéralogie et de physique des milieux condensés used this technique on natural samples representative of Earth's mantle, putting them under pressures of more than 140 gigapascals (1.4 million times atmospheric pressure) and temperatures of more than 5000 Kelvin.
A new aspect of this study was the use of X-ray diffraction at the European synchrotron (ESRF). This allowed the scientists to determine which mineral phases melt first, and to establish, without extrapolation, melting curves for the deep mantle -- i.e., to characterize the passage from a solid state to a partially molten state. Their observations show that partial melting of the mantle is possible when the temperature approaches 4200 Kelvin. These experiments also show that the liquid produced during this partial melting is dense and that it can host many chemical elements, among which are important markers of the dynamics of Earth's mantle. These studies will allow geophysicists and geochemists to achieve a deeper knowledge of the mechanisms of Earth's differentiation and the history of its formation, which started around 4.5 billion years ago.
Seattle WA (SPX) Dec 06, 2010
For years, geologists have argued about the processes that formed steep inner gorges in the broad glacial valleys of the Swiss Alps.
The U-shaped valleys were created by slow-moving glaciers that behaved something like road graders, eroding the bedrock over hundreds or thousands of years. When the glaciers receded, rivers carved V-shaped notches, or inner gorges, into the floors of the glacial valleys. But scientists disagreed about whether those notches were erased by subsequent glaciers and then formed all over again as the second round of glaciers receded.
New research led by a University of Washington scientist indicates that the notches endure, at least in part, from one glacial episode to the next. The glaciers appear to fill the gorges with ice and rock, protecting them from being scoured away as the glaciers move.
When the glaciers receded, the resulting rivers returned to the gorges and easily cleared out the debris deposited there, said David Montgomery, a UW professor of Earth and space sciences.
"The alpine inner gorges appear to lay low and endure glacial attack. They are topographic survivors," Montgomery said.
"The answer is not so simple that the glaciers always win. The river valleys can hide under the glaciers and when the glaciers melt the rivers can go back to work."
Montgomery is lead author of a paper describing the research, published online Dec. 5 in Nature Geoscience. Co-author is Oliver Korup of the University of Potsdam in Germany, who did the work while with the Swiss Federal Research Institutes in Davos, Switzerland.
The researchers used topographic data taken from laser-based (LIDAR) measurements to determine that, if the gorges were erased with each glacial episode, the rivers would have had to erode the bedrock from one-third to three-quarters of an inch per year since the last glacial period to get gorges as deep as they are today.
"That is screamingly fast. It's really too fast for the processes," Montgomery said. Such erosion rates would exceed those in all areas of the world except the most tectonically active regions, the researchers said, and they would have to maintain those rates for 1,000 years.
Montgomery and Korup found other telltale evidence, sediment from much higher elevations and older than the last glacial deposits, at the bottom of the river gorges. That material likely was pushed into the gorges as glaciers moved down the valleys, indicating the gorges formed before the last glaciers.
"That means the glaciers aren't cutting down the bedrock as fast as the rivers do. If the glaciers were keeping up, each time they'd be able to erase the notch left by the river," Montgomery said.
"They're locked in this dance, working together to tear the mountains down."
The work raises questions about how common the preservation of gorges might be in other mountainous regions of the world.
"It shows that inner gorges can persist, and so the question is, 'How typical is that?' I don't think every inner gorge in the world survives multiple glaciations like that, but the Swiss Alps are a classic case. That's where mountain glaciation was first discovered."
I find this article symptomatic of the fact that LIDAR has found yet ANOTHER use. A very useful tool, really. The USGS office in Rolla, Missouri has several specialists who have provided LIDAR expertise on occasion, targeted at various ends, mainly answering questions of topographic accuracy at scale.
But it's so much more.
It can be used to define slope so well, and define individual block sizes and shapes, that men need no longer go up and down on ropes to make the measurements to determine rockfall likelihood, bounce heights and velocities in the Colorado Rockfall Simulation Program, and even monitor changes in slope configuration.
I've seen it used for the mapping of underground mines.
Thanks for posting. Overall, a good article.
Berkeley CA (SPX) Dec 20, 2010
A University of California, Berkeley, geophysicist has made the first-ever measurement of the strength of the magnetic field inside Earth's core, 1,800 miles underground.
The magnetic field strength is 25 Gauss, or 50 times stronger than the magnetic field at the surface that makes compass needles align north-south. Though this number is in the middle of the range geophysicists predict, it puts constraints on the identity of the heat sources in the core that keep the internal dynamo running to maintain this magnetic field.
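The comparison is simple to verify. In the Python snippet below, the 0.5-gauss surface value is an assumed typical magnitude rather than a number from Buffett's study.

```python
# Quick unit check on the "50 times stronger" comparison. The surface-field
# value is an assumed typical magnitude, not a figure from the paper.
core_field_gauss = 25.0      # inferred field inside the outer core
surface_field_gauss = 0.5    # typical field at Earth's surface (assumed)

print(f"core/surface ratio: {core_field_gauss / surface_field_gauss:.0f}x")  # ~50x
print(f"core field in SI units: {core_field_gauss * 1e-4} T")  # 1 gauss = 1e-4 tesla
```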
"This is the first really good number we've had based on observations, not inference," said author Bruce A. Buffett, professor of earth and planetary science at UC Berkeley. "The result is not controversial, but it does rule out a very weak magnetic field and argues against a very strong field."
A strong magnetic field inside the outer core means there is a lot of convection and thus a lot of heat being produced, which scientists would need to account for, Buffett said. The presumed sources of energy are the residual heat from 4 billion years ago when the planet was hot and molten, release of gravitational energy as heavy elements sink to the bottom of the liquid core, and radioactive decay of long-lived elements such as potassium, uranium and thorium.
A weak field - 5 Gauss, for example - would imply that little heat is being supplied by radioactive decay, while a strong field, on the order of 100 Gauss, would imply a large contribution from radioactive decay.
"A measurement of the magnetic field tells us what the energy requirements are and what the sources of heat are," Buffett said.
About 60 percent of the power generated inside the earth likely comes from the exclusion of light elements from the solid inner core as it freezes and grows, he said. This constantly builds up crud in the outer core.
The Earth's magnetic field is produced in the outer two-thirds of the planet's iron/nickel core. This outer core, about 1,400 miles thick, is liquid, while the inner core is a frozen iron and nickel wrecking ball with a radius of about 800 miles - roughly the size of the moon. The core is surrounded by a hot, gooey mantle and a rigid surface crust.
The cooling Earth originally captured its magnetic field from the planetary disk in which the solar system formed. That field would have disappeared within 10,000 years if not for the planet's internal dynamo, which regenerates the field thanks to heat produced inside the planet. The heat makes the liquid outer core boil, or "convect," and as the conducting metals rise and then sink through the existing magnetic field, they create electrical currents that maintain the magnetic field. This roiling dynamo produces a slowly shifting magnetic field at the surface.
"You get changes in the surface magnetic field that look a lot like gyres and flows in the oceans and the atmosphere, but these are being driven by fluid flow in the outer core," Buffett said.
Buffett is a theoretician who uses observations to improve computer models of the earth's internal dynamo. Now at work on a second generation model, he admits that a lack of information about conditions in the earth's interior has been a big hindrance to making accurate models.
He realized, however, that the tug of the moon on the tilt of the earth's spin axis could provide information about the magnetic field inside. This tug would make the inner core precess - that is, make the spin axis slowly rotate in the opposite direction - which would produce magnetic changes in the outer core that damp the precession. Radio observations of distant quasars - extremely bright, active galaxies - provide very precise measurements of the changes in the earth's rotation axis needed to calculate this damping.
"The moon is continually forcing the rotation axis of the core to precess, and we're looking at the response of the fluid outer core to the precession of the inner core," he said.
By calculating the effect of the moon on the spinning inner core, Buffett discovered that the precession makes the slightly out-of-round inner core generate shear waves in the liquid outer core. These waves of molten iron and nickel move within a tight cone only 30 to 40 meters thick, interacting with the magnetic field to produce an electric current that heats the liquid. This serves to damp the precession of the rotation axis. The damping causes the precession to lag behind the moon as it orbits the earth. A measurement of the lag allowed Buffett to calculate the magnitude of the damping and thus of the magnetic field inside the outer core.
Buffett noted that the calculated field - 25 Gauss - is an average over the entire outer core. The field is expected to vary with position.
"I still find it remarkable that we can look to distant quasars to get insights into the deep interior of our planet," Buffett said.
Palo Alto CA (SPX) Dec 20, 2010
To answer the big questions, it often helps to look at the smallest details. That is the approach Stanford mineral physicist Wendy Mao is taking to understanding a major event in Earth's inner history.
Using a new technique to scrutinize how minute amounts of iron and silicate minerals interact at ultra-high pressures and temperatures, she is gaining insight into the biggest transformation Earth has ever undergone - the separation of its rocky mantle from its iron-rich core approximately 4.5 billion years ago.
The technique, called high-pressure nanoscale X-ray computed tomography, is being developed at SLAC National Accelerator Laboratory. With it, Mao is getting unprecedented detail - in three-dimensional images - of changes in the texture and shape of molten iron and solid silicate minerals as they respond to the same intense pressures and temperatures found deep in the Earth.
Mao will present the results of the first few experiments with the technique at the annual meeting of the American Geophysical Union in San Francisco.
Tomography refers to the process that creates a three-dimensional image by combining a series of two-dimensional images, or cross-sections, through an object. A computer program interpolates between the images to flesh out a recreation of the object.
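As a rough illustration of that stacking-and-interpolation step, the sketch below assembles synthetic 2D slices into a 3D volume with NumPy and SciPy; it is a generic toy, not the reconstruction software used at SLAC or the APS.

```python
# Minimal illustration of the tomography step described above: stack a series
# of 2D cross-sections into a 3D volume and interpolate between slices.
import numpy as np
from scipy.ndimage import zoom

# Stand-in data: 16 cross-sections, each a 64 x 64 image of the sample.
slices = np.random.rand(16, 64, 64)

# Interpolate along the slice axis (factor 4) to fill in the volume between
# the measured cross-sections; in-plane resolution is left unchanged.
volume = zoom(slices, zoom=(4, 1, 1), order=1)

print(slices.shape, "->", volume.shape)  # (16, 64, 64) -> (64, 64, 64)
```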
Through experiments at SLAC's Stanford Synchrotron Radiation Lightsource and Argonne National Laboratory's Advanced Photon Source, researchers have developed a way to combine a diamond anvil cell, which compresses tiny samples between the tips of two diamonds, with nanoscale X-ray computed tomography to capture images of material at high pressure.
The pressures deep in the Earth are so high - millions of times atmospheric pressure - that only diamonds can exert the needed pressure without breaking under the force.
At present, the SLAC researchers and their collaborators from HPSync, the High Pressure Synergetic Consortium at Argonne's Advanced Photon Source, are the only group using this technique.
"It is pretty exciting, being able to measure the interactions of iron and silicate materials at very high pressures and temperatures, which you could not do before," said Mao, an assistant professor of geological and environmental sciences and of photon science.
"No one has ever imaged these sorts of changes at these very high pressures."
It is generally agreed that the initially homogenous ball of material that was the very early Earth had to be very hot in order to differentiate into the layered sphere we live on today. Since the crust and the layer underneath it, the mantle, are silicate-rich, rocky layers, while the core is iron-rich, it's clear that silicate and iron went in different directions at some point.
But how they separated out and squeezed past each other is not clear. Silicate minerals, which contain silica, make up about 90 percent of the crust of the Earth.
If the planet got hot enough to melt both elements, it would have been easy enough for the difference in density to send iron to the bottom and silicates to the top.
If the temperature was not hot enough to melt silicates, it has been proposed that molten iron might have been able to move along the boundaries between grains of the solid silicate minerals.
"To prove that, though, you need to know whether the molten iron would tend to form small spheres or whether it would form channels," Mao said. "That would depend on the surface energy between the iron and silicate."
Previous experimental work has shown that at low pressure, iron forms isolated spheres, similar to the way water beads up on a waxed surface, Mao said, and spheres could not percolate through solid silicate material.
Mao said the results of her first high-pressure experiments using the tomography apparatus suggest that at high pressure, since the silicate transforms into a different structure, the interaction between the iron and silicate could be different than at low pressure.
"At high pressure, the iron takes a more elongate, platelet-like form," she said. That means the iron would spread out on the surface of the silicate minerals, connecting to form channels instead of remaining in isolated spheres.
"So it looks like you could get some percolation of iron at high pressure," Mao said. "If iron could do that, that would tell you something really significant about the thermal history of the Earth."
But she cautioned that she only has data from the initial experiments.
"We have some interesting results, but it is the kind of measurement that you need to repeat a couple times to make sure," Mao said.
A team of University of Nevada, Reno and University of Nevada, Las Vegas researchers has devised a new model for how Nevada's gold deposits formed, which may help in exploration efforts for new gold deposits.
The deposits, known as Carlin-type gold deposits, are characterized by extremely fine-grained nanometer-sized particles of gold adhered to pyrite over large areas that can extend to great depths. More gold has been mined from Carlin-type deposits in Nevada in the last 50 years - more than $200 billion worth at today's gold prices - than was ever mined during the California gold rush of the 1800s.
This current Nevada gold boom started in 1961 with the discovery of the Carlin gold mine, near the town of Carlin, at a spot where the early westward-moving prospectors missed the gold because it was too fine-grained to be readily seen. Since the 1960s, geologists have found clusters of these "Carlin-type" deposits throughout northern Nevada. They constitute, after South Africa, the second largest concentration of gold on Earth. Despite their importance, geologists have argued for decades about how they formed.
"Carlin-type deposits are unique to Nevada in that they represent a perfect storm of Nevada's ideal geology - a tectonic trigger and magmatic processes, resulting in extremely efficient transport and deposition of gold," said John Muntean, a research economic geologist with the Nevada Bureau of Mines and Geology at the University of Nevada, Reno and previously an industry geologist who explored for gold in Nevada for many years.
"Understanding how these deposits formed is important because most of the deposits that cropped out at the surface have likely been found. Exploration is increasingly targeting deeper deposits. Such risky deep exploration requires expensive drilling.
"Our model for the formation of Carlin-type deposits may not directly result in new discoveries, but models for gold deposit formation play an important role in how companies explore by mitigating risk. Knowing how certain types of gold deposits form allows one to be more predictive by evaluating whether ore-forming processes operated in the right geologic settings. This could lead to identification of potential new areas of discovery."
Muntean collaborated with researchers from the University of Nevada, Las Vegas: Jean Cline, a professor of geology at UNLV and a leading authority on Carlin-type gold deposits; Adam Simon, an assistant professor of geoscience who provided new experimental data and his expertise on the interplay between magmas and ore deposits; and Tony Longo, a post-doctoral fellow who carried out detailed microanalyses of the ore minerals.
The team combined decades of previous studies by research and industry geologists with new data of their own to reach their conclusions, which were written about in the Jan. 23 early online issue of Nature Geoscience magazine and will appear in the February printed edition. The team relates formation of the gold deposits to a change in plate tectonics and a major magma event about 40 million years ago. It is the most complete explanation for Carlin-type gold deposits to date.
"Our model won't be the final word on Carlin-type deposits," Muntean said. "We hope it spurs new research in Nevada, especially by people who may not necessarily be ore deposit geologists."
The work was funded by grants from the National Science Foundation, the United States Geological Survey, Placer Dome Exploration and Barrick Gold Corporation.
In one of his songs Bob Dylan asks "How many years can a mountain exist before it is washed to the sea?", and thus poses an intriguing geological question for which an accurate answer is not easily provided. Mountain ranges are in a constant interplay between climatically controlled weathering processes on the one hand and the tectonic forces that cause folding and thrusting and thus thickening of the Earth's crust on the other hand.
While erosion eventually erases any geological obstacles, tectonic forces are responsible for piling- and lifting-up rocks and thus for forming spectacular mountain landscapes such as the European Alps.
In reality, climate, weathering and mountain uplift interact in a complex manner and quantifying rates for erosion and uplift, especially for the last couple of millions of years, remains a challenging task.
In a recent Geology paper Michael Meyer (University of Innsbruck) et al. report on ancient cave systems discovered near the summits of the Allgau Mountains (Austria) that preserved the oldest radiometrically dated dripstones currently known from the European Alps.
"These cave deposits formed ca. 2 million years ago and their geochemical signature and biological inclusions are vastly different from other cave calcites in the Alps" says Meyer, who works at the Institute of Geology and Paleontology at the University of Innsbruck, Austria.
By carefully analysing these dripstones and using an isotopic modelling approach, the authors were able to back-calculate both the depth of the cave and the altitude of the corresponding summit area at the time of calcite formation. Meyer et al. thus derived erosion and uplift rates for the northern rim of the Alps and - most critically - for a geological time period that is characterized by recurring ice ages and hence by intensive glacial erosion.
"Our results suggest that 2 million years ago the cave was situated ~1500 meters below its present altitude and the mountains were probably up to 500 meters lower compared to today", states Meyer. These altitudinal changes were significant and much of this uplift can probably be attributed to the gradual unloading of the Alps due to glacial erosion.
Dripstones have been used to reconstruct past climate and environmental change in a variety of ways. The study of Meyer et al. is novel, however, as it highlights the potential of caves and their deposits to quantitatively constrain mountain evolution on a timescale of millions of years and further shows how the interplay of tectonic and climatic processes can be understood. Key to success is an accurate age control provided by Uranium-Lead dating.
This method is commonly used to constrain the age of much older rocks and minerals but has only rarely been applied to dripstones - i.e. only those with high Uranium concentrations - and luckily this is the case for the samples from the Allgau Mountains.
Geologists debate epoch to mark effects of Homo sapiens.
Humanity's profound impact on this planet is hard to deny, but is it big enough to merit its own geological epoch? This is the question facing geoscientists gathered in London this week to debate the validity and definition of the 'Anthropocene', a proposed new epoch characterized by human effects on the geological record.
"We are in the process of formalizing it," says Michael Ellis, head of the climate-change programme of the British Geological Survey in Nottingham, who coordinated the 11 May meeting. He and others hope that adopting the term will shift the thinking of policy-makers. "It should remind them of the global and significant impact that humans have," says Ellis.
But not everyone is behind the idea. "Some think it premature, perhaps hubristic, perhaps nonsensical," says Jan Zalasiewicz, a stratigrapher at the University of Leicester, UK, and a co-convener of the meeting. Zalasiewicz, who declares himself "officially very firmly sitting on the fence", also chairs a working group investigating the proposal for the International Commission on Stratigraphy (ICS) — the body that oversees designations of geological time.
The term Anthropocene was first coined in 2000 by Nobel laureate Paul Crutzen, now at the Max Planck Institute for Chemistry in Mainz, Germany, and his colleagues. It then began appearing in peer-reviewed papers as if it were a technical term rather than scientific slang.
The "evidence for the prosecution", as Zalasiewicz puts it, is compelling. Through food production and urbanization, humans have altered more than half of the planet's ice-free land mass [1] (see 'Transformation of the biosphere'), and are moving as much as an order of magnitude more rock and soil around than are natural processes [2]. Rising carbon dioxide levels in the atmosphere are expected to make the ocean 0.3–0.4 pH points more acidic by the end of this century. That will dissolve light-coloured carbonate shells and sea-floor rocks for about 1,000 years, leaving a dark band in the sea-floor sediment that will be obvious to future geologists. A similar dark stripe identifies the Palaeocene–Eocene Thermal Maximum about 55 million years ago, when global temperatures rose by some 6 °C in 20,000 years. A similar temperature jump could happen by 2100, according to some high-emissions scenarios [3].
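Because pH is a base-10 logarithmic scale, a drop of 0.3 to 0.4 pH points corresponds to roughly a doubling or more of the hydrogen-ion concentration, as this purely illustrative snippet shows.

```python
# What a 0.3-0.4 drop in ocean pH means for hydrogen-ion concentration
# (pH is a base-10 logarithmic scale); purely illustrative arithmetic.
for delta_ph in (0.3, 0.4):
    factor = 10 ** delta_ph
    print(f"pH drop of {delta_ph}: [H+] rises by a factor of ~{factor:.1f}")
```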
The fossil record will show upheavals too. Some 20% of species living in large areas are now invasive, says Zalasiewicz. "Globally that's a completely novel change." And a review published in Nature in March [4] concluded that the disappearance of the species now listed as 'critically endangered' would qualify as a mass extinction on a level seen only five times in the past 540 million years — and all of those mark transitions between geological time periods.
Some at the ICS are wary of formalizing a new epoch. "My main concern is that those who promote it have not given it the careful scientific consideration and evaluation it needs," says Stan Finney, chair of the ICS and a geologist at California State University in Long Beach. He eschews the notion of focusing on the term simply to "generate publicity".
Others point out that an epoch typically lasts tens of millions of years. Our current epoch, the Holocene, began only 11,700 years ago. Declaring the start of a new epoch would compress the geological timeline to what some say is a ridiculous extent. Advocates of the Anthropocene, however, say that it is natural to divide recent history into smaller, more detailed chunks. A less controversial alternative would be to declare the Anthropocene a new 'age': a subdivision of an epoch.
If scientists can agree in principle that a new time division is justified, they will have to settle on a geological marker for its start. Some suggest the pollen of cultivated plants, arguing that mankind's fingerprint can be seen 5,000–10,000 years ago with the beginnings of agriculture. Others support the rise in the levels of greenhouse gases and air pollution in the latter part of the eighteenth century, as industrialization began. A third group would start with the flicker of radioactive isotopes in 1945, marking the invention of nuclear weapons.
Should the working group decide that the Anthropocene epoch has merit, it will go to an ICS vote. But the whole process will take time — defining other geological periods has sometimes taken decades. In the meantime, Zalasiewicz says, "the formalization is the excuse to try to do some very interesting science", comparing Earth's current changes to those of the past.
Leeds UK (SPX) May 23, 2011
The inner core of the Earth is simultaneously melting and freezing due to circulation of heat in the overlying rocky mantle, according to new research from the University of Leeds, UC San Diego and the Indian Institute of Technology.
The findings, published tomorrow in Nature, could help us understand how the inner core formed and how the outer core acts as a 'geodynamo', which generates the planet's magnetic field.
"The origins of Earth's magnetic field remain a mystery to scientists," said study co-author Dr Jon Mound from the University of Leeds. "We can't go and collect samples from the centre of the Earth, so we have to rely on surface measurements and computer models to tell us what's happening in the core."
"Our new model provides a fairly simple explanation to some of the measurements that have puzzled scientists for years. It suggests that the whole dynamics of the Earth's core are in some way linked to plate tectonics, which isn't at all obvious from surface observations.
"If our model is verified it's a big step towards understanding how the inner core formed, which in turn helps us understand how the core generates the Earth's magnetic field."
The Earth's inner core is a ball of solid iron about the size of our moon. This ball is surrounded by a highly dynamic outer core of a liquid iron-nickel alloy (and some other, lighter elements), a highly viscous mantle and a solid crust that forms the surface where we live.
Over billions of years, the Earth has cooled from the inside out causing the molten iron core to partly freeze and solidify. The inner core has subsequently been growing at the rate of around 1mm a year as iron crystals freeze and form a solid mass.
The heat given off as the core cools flows from the core to the mantle to the Earth's crust through a process known as convection. Like a pan of water boiling on a stove, convection currents move warm mantle to the surface and send cool mantle back to the core. This escaping heat powers the geodynamo and coupled with the spinning of the Earth generates the magnetic field.
Scientists have recently begun to realise that the inner core may be melting as well as freezing, but there has been much debate about how this is possible when overall the deep Earth is cooling. Now the research team believes they have solved the mystery.
Using a computer model of convection in the outer core, together with seismology data, they show that heat flow at the core-mantle boundary varies depending on the structure of the overlying mantle. In some regions, this variation is large enough to force heat from the mantle back into the core, causing localised melting.
The model shows that beneath the seismically active regions around the Pacific 'Ring of Fire', where tectonic plates are undergoing subduction, the cold remnants of oceanic plates at the bottom of the mantle draw a lot of heat from the core. This extra mantle cooling generates down-streams of cold material that cross the outer core and freeze onto the inner core.
Conversely, in two large regions under Africa and the Pacific where the lowermost mantle is hotter than average, less heat flows out from the core. The outer core below these regions can become warm enough that it will start melting back the solid inner core.
Co-author Dr Binod Sreenivasan from the Indian Institute of Technology said: "If Earth's inner core is melting in places, it can make the dynamics near the inner core-outer core boundary more complex than previously thought.
"On the one hand, we have blobs of light material being constantly released from the boundary where pure iron crystallizes. On the other hand, melting would produce a layer of dense liquid above the boundary. Therefore, the blobs of light elements will rise through this layer before they stir the overlying outer core.
"Interestingly, not all dynamo models produce heat going into the inner core. So the possibility of inner core melting can also place a powerful constraint on the regime in which the Earth's dynamo operates."
Co-author Dr Sebastian Rost from the University of Leeds added: "The standard view has been that the inner core is freezing all over and growing out progressively, but it appears that there are regions where the core is actually melting. The net flow of heat from core to mantle ensures that there's still overall freezing of outer core material and it's still growing over time, but by no means is this a uniform process.
"Our model allows us to explain some seismic measurements which have shown that there is a dense layer of liquid surrounding the inner core. The localised melting theory could also explain other seismic observations, for example why seismic waves from earthquakes travel faster through some parts of the core than others."
Stanford CA (SPX) May 27, 2011
The magnitude 9 earthquake and resulting tsunami that struck Japan on March 11 were like a one-two punch - first violently shaking, then swamping the islands - causing tens of thousands of deaths and hundreds of billions of dollars in damage. Now Stanford researchers have discovered the catastrophe was caused by a sequence of unusual geologic events never before seen so clearly.
"It was not appreciated before this earthquake that this size of earthquake was possible on this plate boundary," said Stanford geophysicist Greg Beroza. "It was thought that typical earthquakes were much smaller."
The earthquake occurred in a subduction zone, where one great tectonic plate is being forced down under another tectonic plate and into the Earth's interior along an active fault.
The fault on which the Tohoku-Oki earthquake took place slopes down from the ocean floor toward the west. It first ruptured mainly westward from its epicenter - 32 kilometers (about 20 miles) below the seafloor - toward Japan, shaking the island of Honshu violently for 40 seconds.
Surprisingly, the fault then ruptured eastward from the epicenter, up toward the ocean floor along the sloping fault plane for about 30 or 35 seconds.
As the rupture neared the seafloor, the movement of the fault grew rapidly, violently deforming the seafloor sediments sitting on top of the fault plane, punching the overlying water upward and triggering the tsunami.
"When the rupture approached the seafloor, it exploded into tremendously large slip," said Beroza. "It displaced the seafloor dramatically.
"This amplification of slip near the surface was predicted in computer simulations of earthquake rupture, but this is the first time we have clearly seen it occur in a real earthquake.
"The depth of the water column there is also greater than elsewhere," Beroza said. "That, together with the slip being greatest where the fault meets the ocean floor, led to the tsunami being outlandishly big."
Beroza is one of the authors of a paper detailing the research, published online last week in Science Express.
"Now that this slip amplification has been observed in the Tohoku-Oki earthquake, what we need to figure out is whether similar earthquakes - and large tsunamis - could happen in other subduction zones around the world," he said.
Beroza said the sort of "two-faced" rupture seen in the Tohoku-Oki earthquake has not been seen in other subduction zones, but that could be a function of the limited amount of data available for analyzing other earthquakes.
There is a denser network of seismometers in Japan than any other place in the world, he said. The sensors provided researchers with much more detailed data than is normally available after an earthquake, enabling them to discern the different phases of the March 11 temblor with much greater resolution than usual.
Prior to the Tohoku-Oki earthquake, Beroza and Shuo Ma, who is now an assistant professor at San Diego State University, had been working on computer simulations of what might happen during an earthquake in just such a setting. Their simulations had generated similar "overshoot" of sediments overlying the upper part of the fault plane.
Following the Japanese earthquake, aftershocks as large as magnitude 6.5 slipped in the opposite direction to the main shock. This is a symptom of what is called "extreme dynamic overshoot" of the upper fault plane, Beroza said, with the overextended sediments on top of the fault plane slipping during the aftershocks back in the direction they came from.
"We didn't really expect this to happen because we believe there is friction acting on the fault" that would prevent any rebound, he said. "Our interpretation is that it slipped so much that it sort of overdid it. And in adjusting during the aftershock sequence, it went back a bit.
"We don't see these bizarre aftershocks on parts of the fault where the slip is less," he said.
The damage from the March 11 earthquake was so extensive in part simply because the earthquake was so large. But the way it ruptured on the fault plane, in two stages, made the devastation greater than it might have been otherwise, Beroza said.
The deeper part of the fault plane, which sloped downward to the west, was bounded by dense, hard rock on each side. The rock transmitted the seismic waves very efficiently, maximizing the amount of shaking felt on the island of Honshu.
The shallower part of the fault surface, which slopes upward to the east and surfaces at the Japan Trench - where the overlying plate is warped downward by the motion of the descending plate - had massive slip. Unfortunately, this slip was ideally situated to efficiently generate the gigantic tsunami, with devastating consequences.
Nuclear fission powers the movement of Earth's continents and crust, a consortium of physicists and other scientists is now reporting, confirming long-standing thinking on this topic. Using neutrino detectors in Japan and Italy—the Kamioka Liquid-Scintillator Antineutrino Detector (KamLAND) and the Borexino Detector—the scientists arrived at their conclusion by measuring the flow of the antithesis of these neutral particles as they emanate from our planet. Their results are detailed July 17 in Nature Geoscience. (Scientific American is part of the Nature Publishing Group.)
Neutrinos and antineutrinos, which travel through mass and space freely due to their lack of charge and other properties, are released by radioactive materials as they decay. And Earth is chock full of such radioactive elements—primarily uranium, thorium and potassium. Over the billions of years of Earth's existence, the radioactive isotopes have been splitting, releasing energy as well as these antineutrinos—just like in a man-made nuclear reactor. That energy heats the surrounding rock and keeps the elemental forces of plate tectonics in motion. By measuring the antineutrino emissions, scientists can determine how much of Earth's heat results from this radioactive decay.
How much heat? Roughly 20 terawatts of heat—or nearly twice as much energy as used by all of humanity at present—judging by the number of such antineutrino particles emanating from the planet, dubbed geoneutrinos by the scientists. Combined with the 4 terawatts from decaying potassium, it's enough energy to move mountains, or at least cause the collisions that create them.
The precision of the new measurements made by the KamLAND team was made possible by an extended shutdown of the Kashiwazaki-Kariwa nuclear reactor in Japan, following an earthquake there back in 2007. Particles released by the nearby plant would otherwise mix with naturally released geoneutrinos and confuse measurements; the closure of the plant allowed the two to be distinguished. The detector hides from cosmic rays—broadly similar to the neutrinos and antineutrinos it is designed to register—under Mount Ikenoyama nearby. The detector itself is a 13-meter-diameter balloon of transparent film filled with a mix of special liquid hydrocarbons, itself suspended in a bath of mineral oil contained in an 18-meter-diameter stainless steel sphere, covered on the inside with detector tubes. All that to capture the telltale mark of some 90 geoneutrinos over the course of seven years of measurements.
The new measurements suggest radioactive decay provides more than half of Earth's total heat, estimated at roughly 44 terawatts based on temperatures found at the bottom of deep boreholes into the planet's crust. The rest is leftover from Earth's formation or other causes yet unknown, according to the scientists involved. Some of that heat may have been trapped in Earth's molten iron core since the planet's formation, while the nuclear decay happens primarily in the crust and mantle. But with fission still pumping out so much heat, Earth is unlikely to cool—and thereby halt the collisions of continents—for hundreds of millions of years thanks to the long half-lives of some of these elements. And that means there's a lot of geothermal energy—or natural nuclear energy—to be harvested.
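Combining the figures quoted above gives the "more than half" share directly; the snippet below simply restates the article's numbers and adds no new measurement.

```python
# Rough share of Earth's ~44 TW surface heat flow attributable to radioactive
# decay, combining the figures quoted in the article.
uranium_thorium_TW = 20.0   # inferred from the geoneutrino flux
potassium_TW = 4.0
total_heat_TW = 44.0

fraction = (uranium_thorium_TW + potassium_TW) / total_heat_TW
print(f"radiogenic fraction: {fraction:.0%}")  # roughly 55%, i.e. "more than half"
```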
ScienceDaily (July 23, 2011) — Fool's gold is providing scientists with valuable insights into a turning point in Earth's evolution, which took place billions of years ago.
Scientists are recreating ancient forms of the mineral pyrite -- dubbed fool's gold for its metallic lustre -- that reveal details of past geological events.
Detailed analysis of the mineral is giving fresh insight into Earth before the Great Oxygenation Event, which took place 2.4 billion years ago. This was a time when oxygen released by early forms of bacteria gave rise to new forms of plant and animal life, transforming Earth's oceans and atmosphere.
Studying the composition of pyrite provides a geological snapshot of the conditions under which it formed. In particular, the proportions of different forms of iron in fool's gold give scientists clues as to how conditions such as atmospheric oxygen influenced the processes that formed the compound.
The latest research shows that bacteria -- which would have been an abundant life form at the time -- did not influence the early composition of pyrite. This result, which contrasts with previous thinking, gives scientists a much clearer picture of the process.
More broadly, their discovery enables a better understanding of geological conditions at the time, which in turn informs how the oceans and atmosphere evolved.
The research, funded by the Natural Environment Research Council and the Edinburgh Collaborative of Subsurface Science and Engineering, was published in Science.
Dr Ian Butler, who led the research, said: "Technology allows us to trace scientific processes that we can't see from examining the mineral composition alone, to understand how compounds were formed. This new information about pyrite gives us a much sharper tool with which to analyse the early evolution of the Earth, telling us more about how our planet was formed."
Dr Romain Guilbaud, investigator on the study, said: "Our discovery enables a better understanding of how information on the Earth's evolution, recorded in ancient minerals, can be interpreted."
Geological history has periodically featured giant lava eruptions that coat large swaths of land or ocean floor with basaltic lava, which hardens into rock formations called flood basalt. New research from Matthew Jackson and Richard Carlson proposes that the remnants of six of the largest volcanic events of the past 250 million years contain traces of the ancient Earth's primitive mantle -- which existed before the largely differentiated mantle of today -- offering clues to the geochemical history of the planet.
Scientists recently discovered that an area in northern Canada and Greenland composed of flood basalt contains traces of ancient Earth's primitive mantle. Carlson and Jackson's research expanded these findings, in order to determine if other large volcanic rock deposits also derive from primitive sources.
Information about the primitive mantle reservoir -- which came into existence after Earth's core formed but before Earth's outer rocky shell differentiated into crust and depleted mantle -- would teach scientists about the geochemistry of early Earth and how our planet arrived at its present state.
Until recently, scientists believed that Earth's primitive mantle, such as the remnants found in northern Canada and Greenland, originated from a type of meteorite called carbonaceous chondrites. But comparisons of isotopes of the element neodymium between samples from Earth and samples from chondrites didn't produce the expected results, which suggested that modern mantle reservoirs may have evolved from something different.
Carlson, of Carnegie's Department of Terrestrial Magnetism, and Jackson, a former Carnegie fellow now at Boston University, examined the isotopic characteristics of flood basalts to determine whether they were created by a primitive mantle source, even if it wasn't a chondritic one.
They used geochemical techniques based on isotopes of neodymium and lead to compare basalts from the previously discovered 62-million-year-old primitive mantle source in northern Canada's Baffin Island and West Greenland to basalts from the South Pacific's Ontong-Java Plateau, which formed in the largest volcanic event in geologic history. They discovered minor differences in the isotopic compositions of the two basaltic provinces, but not beyond what could be expected in a primitive reservoir.
They compared these findings to basalts from four other large accumulations of lava-formed rocks in Botswana, Russia, India, and the Indian Ocean, and determined that lavas that have interacted with continental crust the least (and are thus less contaminated) have neodymium and lead isotopic compositions similar to an early-formed primitive mantle composition.
The presence of these early-earth signatures in the six flood basalts suggests that a significant fraction of the world's largest volcanic events originate from a modern mantle source that is similar to the primitive reservoir discovered in Baffin Island and West Greenland. This primitive mantle is hotter, due to a higher concentration of radioactive elements, and more easily melted than other mantle reservoirs. As a result, it could be more likely to generate the eruptions that form flood basalts.
Start-up funding for this work was provided by Boston University.
It's well known that Earth's most severe mass extinction occurred about 250 million years ago. What's not well known is the specific time when the extinctions occurred. A team of researchers from North America and China have published a paper in Science this week which explicitly provides the date and rate of extinction.
"This is the first paper to provide rates of such massive extinction," says Dr. Charles Henderson, professor in the Department of Geoscience at the University of Calgary and co-author of the paper: Calibrating the end-Permian mass extinction.
"Our information narrows down the possibilities of what triggered the massive extinction and any potential kill mechanism must coincide with this time."
About 95 percent of marine life and 70 percent of terrestrial life became extinct during what is known as the end-Permian, a time when continents were all one land mass called Pangea. The environment ranged from desert to lush forest.
Four-limbed vertebrates were becoming diverse and among them were primitive amphibians, reptiles and a group that would, one day, include mammals.
Through the analysis of various types of dating techniques on well-preserved sedimentary sections from South China to Tibet, researchers determined that the mass extinction peaked about 252.28 million years ago and lasted less than 200,000 years, with most of the extinction lasting about 20,000 years.
"These dates are important as it will allow us to understand the physical and biological changes that took place," says Henderson. "We do not discuss modern climate change, but obviously global warming is a biodiversity concern today.
The geologic record tells us that 'change' happens all the time, and from this great extinction life did recover."
There is ongoing debate over whether the death of both marine and terrestrial life coincided, as well as over kill mechanisms, which may include rapid global warming, hypercapnia (a condition where there is too much CO2 in the blood stream), continental aridity and massive wildfires.
The conclusion of this study says extinctions of most marine and terrestrial life took place at the same time. And the trigger, as suggested by these researchers and others, was the massive release of CO2 from volcanic flows known as the Siberian traps, now found in northern Russia.
Henderson's conodont research was integrated with other data to establish the study's findings. Conodonts are extinct, soft-bodied eel-like creatures with numerous tiny teeth that provide critical information on topics ranging from hydrocarbon deposits to global extinctions.
US History/Civil War
Politics Before The War
In the presidential election of 1860 the Republican Party nominated Abraham Lincoln as its candidate. Many Republicans believed that Lincoln's election would prevent any further spread of slavery. The party also promised a tariff for the protection of industry and pledged the enactment of a law granting free homesteads to settlers who would help in the opening of the West. The Democrats were not united. Southerners split from the party and nominated Vice President John C. Breckenridge of Kentucky for president. Stephen A. Douglas was the nominee of northern Democrats. Diehard Whigs from the border states formed the Constitutional Union Party and nominated John C. Bell of Tennessee. Lincoln and Douglas competed for electoral votes in the North, and Breckenridge and Bell competed in the South. Although Lincoln won only 39 percent of the popular vote, he won a clear majority of 180 electoral votes. Lincoln won all 18 free states. Bell won Tennessee, Kentucky and Virginia; Breckenridge took the other slave states except for Missouri, which was won by Douglas. Despite his poor electoral showing, Douglas trailed only Lincoln in the popular vote. Lincoln's election, along with the fact that southerners now believed they no longer had a political voice in Washington, ensured South Carolina's secession. Other southern states followed suit, claiming that they were no longer bound by the Union because the northern states had in effect broken a constitutional contract by not honoring southerners' right to own slaves as property. Historians would later characterize the Civil War as our nation's true revolution and eventual fulfillment of the Declaration of Independence's promise that "all men are created equal."
Causes of the Civil War
The top five causes of the Civil War were:
- The fundamental disagreement between advocates of slave ownership and abolitionists
- The conflict between the North and South over the extent of each state's rights within the Union
- Social and economic differences between the North and South
- The question of whether it was constitutional to secede from the Union
- The election of Abraham Lincoln
Dixie's Constitution
By the end of March, 1861, the Confederacy had created a constitution and elected its first and only president, Jefferson Davis. The Constitution of the Confederate States of America was the supreme law of the Confederate States of America, as adopted on March 11, 1861 and in effect through the conclusion of the American Civil War. The Confederacy also operated under a Provisional Constitution from February 8, 1861 to March 11, 1861.
In regard to most articles of the Constitution, the document is a word-for-word duplicate of the United States Constitution. The original, hand-written document is currently located in the University of Georgia archives at Athens, Georgia. The major differences between the two constitutions were the Confederacy's greater emphasis on the rights of individual member states, and its explicit support of slavery.
Fort Sumter and the Beginning of the War
Several federal forts were seized and converted to Confederate strongholds. By the time of Lincoln's inauguration, only two major forts had not been taken. On April 11, Confederate General P. G. T. Beauregard demanded that Union Major Robert Anderson surrender Fort Sumter in Charleston, South Carolina, an important fort because of its strategic position defending Charleston's harbor. The besieged fort's supplies would last only a few weeks, and the Union unsuccessfully sent ships to resupply it. Beauregard's troops surrounded the fort, which was located on an island at the mouth of the harbor, and opened fire. A tremendous cannon firefight ensued that remarkably claimed no casualties. By April 14, Anderson was forced to surrender the fort, and tragically the first casualties of the war occurred when a Union cannon misfired while the flag was being lowered.
On the very next day, President Lincoln declared formally that the US faced a rebellion. Lincoln called up state militias and requested volunteers to enlist in the Army. In response to this call and to the surrender of Fort Sumter, four more states - Virginia, Arkansas, Tennessee, and North Carolina - seceded. The Civil War had begun.
Each side proceeded to determine its strategies. The Confederate Army had a defensive-offensive strategy: the Confederacy needed only to defend itself and prevail to gain independence, but occasionally, when conditions were right, it would strike offensively into the North. Three people who had important roles in Confederate planning had different strategies. General Robert E. Lee argued that they had to fight the Union head on. Davis, however, argued for a purely defensive war. Jackson claimed that they needed to invade the Union's important cities first and defeat any enemy that tried to reclaim them.
Meanwhile, the strategy of aging Union General Winfield Scott became popularly known as the Anaconda Plan. The Anaconda Plan, so named after the South American snake that strangles its victims to death, aimed to defeat the Confederacy by surrounding it on all sides with a blockade of Southern ports and the swift capture of the Mississippi River.
First Battle of Bull Run and the Early Stages of the War
Four slave states remained in the Union: Delaware, Maryland, Kentucky, and Missouri. The four border states were all important, and Lincoln did not want them to join the Confederacy. Missouri controlled parts of the Mississippi River, Kentucky controlled the Ohio River, and Delaware was close to the important city of Philadelphia. Perhaps the most important border state was Maryland. It was close to the Confederate capital, Richmond, Virginia, and the Union capital, Washington, was located between pro-Confederate sections of Maryland and seceded Virginia. Lincoln knew that he had to be cautious if he did not want these states to join the Confederacy. Despite strong secessionist sentiment, particularly in Maryland and Missouri, none of the four ultimately joined the Confederacy.
Both sides had advantages and weaknesses. The North had a greater population, more factories, more supplies, and more money than the South. The South had more experienced military leadership, better-trained armies, and the advantage of fighting on familiar territory. Robert E. Lee is a good example: at the start of the war he was asked, on President Lincoln's behalf, to lead the Union army, but he refused and joined the Confederate army because he could not fight against his home state of Virginia after it seceded.
However, the Confederacy faced considerable problems. Support for secession and the war was not unanimous, and most of the southern states provided considerable numbers of troops for the Union armies. Moreover, the presence of slavery acted as a drain on southern manpower, as adult males who might otherwise have joined the army were needed to police the slaves and guard against slave uprisings.
On July 21, 1861, the armies of General Beauregard and Union General Irvin McDowell met at Manassas, Virginia. At the Battle of Bull Run, the North originally had the upper hand, but Confederate General Thomas Jackson and his troops blocked Northern progress; while other Confederate troops were falling back, Jackson stood his ground "as a stone wall" (the origin of the nickname "Stonewall Jackson"). As Confederate reinforcements arrived, McDowell's army began to retreat in confusion and was thoroughly defeated, causing the North to discard its overly optimistic hopes for a quick victory over the Confederacy. Even though the Confederates achieved victory, General Beauregard did not pursue the retreating Federals, and he was later transferred to the western theater; command of the Confederate army in Virginia eventually passed to General Robert E. Lee. McDowell, who was defeated by the Confederates, was replaced by McClellan.
The Union even faced the threat of complete defeat early in the war. The Confederacy appointed two persons as representatives to the United Kingdom and France. Both of them decided to travel to Europe on a British ship, the Trent. A Union Captain, Charles Wilkes, seized the ship and forced the Confederate representatives to board the Union ship. However, Wilkes had violated the neutrality of the United Kingdom. The British demanded apologies, and Lincoln eventually complied, even releasing the Confederate representatives. Had he failed to do so, the United Kingdom might have joined with the Confederacy and the Union might have faced a much more difficult fight.
Technology and the Civil War
The Civil War was hallmarked by technological innovations that changed the nature of battle.
The most lethal change was the introduction of rifling to muskets. In previous wars, the maximum effective range of a musket was between 70 and 110 meters. Muskets, which were smooth-bore firearms, were not accurate beyond that. Tactics involved moving masses of troops to musket range, firing a volley, and then charging the opposing force with the bayonet, a sword-like blade attached to a firearm. However, a round (bullet) from an aimed rifled musket could hit a soldier more than 1,300 meters away. This drastically changed the nature of warfare to the advantage of defenders: massed attacks became less effective because defenders could break them up from much greater distances.
The other key changes on land dealt with logistics (the art of military supply) and communications. By 1860, there were approximately 48,000 kilometers (30,000 miles) of railroad track, mostly in the Northern states. The railroads meant that supplies need not be obtained from local farms and cities, which meant armies could operate for extended periods of time without fear of starvation. In addition, armies could be moved across the country quickly, within days, without marching.
The telegraph is the third of the key technologies that changed the nature of the war. Washington City and Richmond, the capitals of the two opposing sides, could stay in touch with commanders in the field, passing on updated intelligence and orders. President Lincoln used the telegraph frequently, as did his chief general, Halleck, and field commanders such as Grant.
At sea, the greatest innovation was the introduction of ironclad warships. In 1862, the Confederate Navy completed the CSS Virginia on the half-burned hull of the USS Merrimack. This ship, with iron armor, was impervious to cannon fire that would drive off or sink a wooden ship. The Virginia sank the U.S. frigate Cumberland and could have broken the blockade of the Federal fleet had it not been for the arrival of the ironclad USS Monitor, designed by the Swedish-American John Ericsson. The two met in March 1862 at Hampton Roads, Virginia. The battle was a draw, but this sufficed for the Union to continue its blockade of the Confederacy: the Virginia retreated into a bay where it could not be of much use, and the Confederacy later burned it to prevent its capture by the Union.
Things the Civil War had first
This is a list of innovations that appeared for the first time in the U.S. Civil War.
- Railroad artillery
- A successful submarine
- A "snorkel" breathing device
- The periscope, for trench warfare
- Land-mine fields
- Field trenches on a greater scale
- Flame throwers
- Wire entanglements
- Military telegraph
- Naval torpedoes
- Aerial reconnaissance
- Antiaircraft fire
- Repeating rifles
- Telescopic sights for rifles (Snipers)
- Long-range rifles for general use
- Fixed ammunition
- Ironclad navies
- A steel ship
- Revolving gun turrets
- Military railroads
- Organized medical and nursing corps
- Hospital ships
- Army ambulance corps
- A workable machine gun
- Legal voting for servicemen
- U.S. Secret Service
- The income tax
- Withholding tax
- Tobacco tax
- Cigarette tax
- American conscription
- American bread lines
- The Medal of Honor
- A wide-range corps of press correspondents in war zones aka battlefield correspondents
- Photography of battles and soldiers wounded and not wounded
- The bugle call, "Taps"
- African-American U.S. Army Officer (Major M.R. Delany)
- American President assassinated
- Department of Justice (Confederate)
- Commissioned American Army Chaplains
- U.S. Navy admiral
- Electrically detonated bombs and torpedoes
- The wigwag signal code in battle
- Wide-scale use of anesthetics for wounded
- Blackouts and camouflage under aerial observation
Shiloh and Ulysses Grant
While Union military efforts in the East were frustrated and even disastrous, west of the Appalachians the war developed differently, resulting in the first significant battlefield successes for the North.
Kentucky, on the border between the Union and Confederacy, was divided in its sentiments toward the two sides and politically attempted to pursue a neutral course. By autumn 1861, the state government decided to support the Union despite being a slave state. Kentucky's indecision and the divided loyalties of that state's population greatly influenced the course of military operations in the West as neither side wished to alienate Kentucky.
Below the confluence of the Ohio and Mississippi Rivers, where the Kentucky, Tennessee and Missouri borders come together, Union Brigadier General Ulysses S. Grant, under the command of Major General Henry W. Halleck, conducted a series of operations that would bring him national recognition. It was at Belmont, Missouri, just across the Mississippi from Columbus, Kentucky, that Grant, later President of the United States, fought his first major battle.
The western campaigns continued into 1862 under Halleck's overall direction, with Grant continuing into western Tennessee along the Mississippi. In February, Grant attacked and captured Fort Donelson in Tennessee, providing a significant (though not necessarily major) victory for the North.
About two months after the victory at Fort Donelson, Grant fought an even more important battle at Shiloh.
Confederate generals A. S. Johnston and P. G. T. Beauregard launched a surprise attack on the Union army. The attack was initially successful, but the Union counterattacked, and the Confederate army was defeated in the end.
After the Union took Fort Donelson, Grant wanted to push on into Charleston and Memphis, but General Halleck denied the request. If they had pushed on and held the area, they would have gained control of the eastern railroad.
Grant's troops killed Confederate General Albert Sidney Johnston and defeated the Confederate forces, but at a steep price: approximately thirteen thousand Union soldiers and eleven thousand Confederate soldiers were killed, wounded, or missing, and Grant lost the chance to capture the West quickly.
Peninsular Campaign
General Stonewall Jackson threatened to invade Washington. To prevent Jackson from doing so, Union General George McClellan left over fifty-thousand men in Washington. Little did he know that the deceptive Jackson did not even have 5000 men in his army. McClellan's unnecessary fear caused him to wait over half a year before continuing the war in Virginia, earning him the nickname "Tardy George" and allowing enough time for the Confederates to strengthen their position. Jackson's deceptions succeeded when General McClellan led Union troops in the Peninsular Campaign, the attempt to take the Confederate capital Richmond, without the aid of the force remaining in Washington.
In early April 1862, McClellan began the Peninsular Campaign. His troops traveled by sea to the peninsula formed by the mouths of the York and James Rivers, which included Yorktown and Williamsburg and led straight to Richmond. (The Union strategy for a quick end to the war was capturing Richmond, which appeared easy since it was close to Washington.) In late May, McClellan was a few miles from Richmond when Robert E. Lee took command of the Confederate army defending the city. After several battles, it appeared that McClellan could march on Richmond, but McClellan refused to attack, citing a lack of reinforcements; the forces that he wanted were instead defending Washington. During the last week of June, Lee initiated the Seven Days' Battles, which forced McClellan to retreat. By July, McClellan had lost over fifteen thousand men with little to show for it; there was little consolation in the fact that Lee had lost even more.
During the Peninsular Campaign, other military actions occurred. Flag Officer David Farragut of the Union Navy took control of the lower Mississippi River when he captured the key port of New Orleans in April, providing a key advantage to the Union and going a long way toward depriving the Confederacy of the river.
Total War
If Richmond had indeed been captured quickly and the war had ended, slavery and the Southern lifestyle would probably not have changed significantly. After the unsuccessful Union attacks in Virginia, Lincoln began to think about the Emancipation Proclamation, and the Union changed its strategy from a quick capture of Richmond to the destruction of the South through total war. Total war is a strategy in which both military and non-military resources that are important to a state's ability to make war are destroyed by the opposing power; it may involve attacks on civilians or the destruction of civilian property. General William Sherman used total war in his "March to the Sea" in November and December 1864, which destroyed so much of the South's resources that it could no longer make war.
The Union strategy finally emerged with six parts:
- blockade the Confederate coastlines, preventing trade;
- free the slaves, destroying the domestic economy;
- disconnect the Trans-Mississippi by controlling the Mississippi River;
- further split the Confederacy by attacking the Southeast coast (Georgia, South Carolina, and North Carolina), denying it access to foreign supplies;
- capture the capital of Richmond, which would severely incapacitate the Confederacy; and
- engage the enemy everywhere, weakening the armies through attrition.
Second Bull Run and Antietam
Meanwhile, a new Union army under General John Pope was organized. Pope attempted to combine his army with McClellan's to create a powerful force. Stonewall Jackson moved to strike Pope's army near Manassas before the two could unite. The armies fought on August 29 and 30, and the Confederates won against a much larger Union force.
Pope's battered army did eventually combine with McClellan's, but the Second Battle of Bull Run had encouraged General Lee to invade Maryland. At Sharpsburg, Maryland, McClellan and Lee led their armies against each other. On September 17, 1862, the Battle of Antietam (named for a nearby creek) produced over ten thousand casualties on each side; no other one-day battle was costlier, and it is still called the bloodiest day in American history. McClellan's scouts had found Lee's battle plans wrapped around a discarded packet of cigars, but he did not act on the intelligence immediately. The Union technically won a Pyrrhic victory: McClellan lost about one-sixth of his army, but Lee lost around one-third of his. Even though a determined pursuit might have ended the war, McClellan did not go forward because he thought he had already lost too many soldiers. This was the victory Lincoln needed for the Emancipation Proclamation, so that it did not appear to be an act of desperation.
The Emancipation Proclamation
Meanwhile, General McClellan seemed too defensive to Lincoln, who replaced McClellan with General Ambrose Burnside. Burnside decided to go on the offensive against Lee. In December 1862, at Fredericksburg, Virginia, Burnside's Army of the Potomac assaulted built-up Confederate positions and suffered terrible casualties at the hands of Lee's Army of Northern Virginia. The Federal superiority in numbers was matched by Lee's use of terrain and modern firepower. "Burnside's Slaughter Pen" resulted in over ten thousand Union casualties, largely due to the ill-considered use of Napoleonic tactics against entrenched riflemen and massed artillery. Burnside then made another attempt to move on Richmond, but the movement was foiled by winter weather. The "Mud March" forced the Army of the Potomac to return to winter quarters.
President Lincoln had not campaigned on the abolition of slavery; he intended only to prevent its extension into new states and territories. By August 22, 1862, Lincoln was coming to the decision that abolishing slavery might help the Union. In a letter from that time he wrote, "My paramount object in this struggle is to save the Union, and is not either to save or destroy slavery. If I could save the Union without freeing any slave, I would do it; and if I could save it by freeing all the slaves, I would do it; and if I could do it by freeing some and leaving others alone, I would also do that." Freeing the slaves would also disrupt the Confederate economy. In September 1862, after the Battle of Antietam, Lincoln and his Cabinet agreed to emancipate, or free, southern slaves. On January 1, 1863, Lincoln issued the Emancipation Proclamation, which declared all slaves in rebel states "forever free."
The Emancipation Proclamation rested on the president's war powers, and it did not abolish slavery everywhere; it was restricted to states "still in rebellion" against the Union on the day it took effect. The Proclamation, technically, was part of a military strategy against states that had rebelled; this framing was intended to prevent internal conflict with the border states. Eventually, all the border states except Kentucky and Delaware abolished slavery on their own. Naturally, the Proclamation could not be enforced at once: it applied only to territory that Union armies were still fighting to retake. Nonetheless, many slaves who had heard of the Proclamation escaped when Union forces approached.
The Proclamation also had another profound effect on the war: it changed the objective from forcing the Confederacy to rejoin the Union to eliminating slavery throughout the United States. The South had been trying to woo Great Britain (which relied on its agricultural exports, especially cotton, for manufacturing) into an alliance; now all hopes for one were eliminated. Great Britain was firmly against the institution of slavery, which had been abolished throughout the British Empire since 1833. In fact, many slaves freed via the Underground Railroad were taken to Britain, since it was safe from bounty hunters (Canada was too close to the U.S. for some).
Although the Union initially did not accept black freedmen for combat, it hired them for other jobs. When troops became scarce, the Union began enlisting blacks. By the end of the war, the 180,000 blacks who had enlisted made up about 10% of the Union Army, and roughly 29,500 more had enlisted in the Navy. Until 1864, the South refused to recognize captured black soldiers as prisoners of war, and killed a number of them, as at Fort Pillow, treating them as escaped slaves. Lincoln believed in the necessity of black soldiers: in August 1864, he said that if the black soldiers of the Union army all joined the Confederacy, "we would be compelled to abandon the war in three weeks." See Black Americans and the Civil War below for more on this subject.
Fredericksburg and Chancellorsville
In 1863, Lincoln again changed leadership, replacing Burnside with General Joseph Hooker. Hooker had a reputation for aggressiveness; his nickname was "Fighting Joe". From May 1 to May 4, 1863, near Chancellorsville, Virginia, General Lee, again outnumbered, used audacious tactics — he divided his smaller force in two in the face of superior numbers, sending Stonewall Jackson to the Union's flank, and defeated Hooker. Again, the Confederacy won, but at a great cost. Stonewall Jackson was accidentally shot by Confederate soldiers who didn't recognize him in the poor evening light and died shortly after the battle of Chancellorsville.
The North already held New Orleans. If it could take control of the entire Mississippi River, the Union could divide the Confederacy in two, making the transportation of weapons and troops by the Confederates more difficult. Vicksburg and Port Hudson were the only remaining strongholds from which the Confederacy could control the Mississippi River. General Winfield Scott's strategic "Anaconda Plan" was based on control of the Mississippi; however, planning for control was easier than gaining it.
The city of Vicksburg, Mississippi, was located on high bluffs on the eastern bank of the river. At the time, the Mississippi River went through a 180-degree U shaped bend by the city. (It has since shifted course westward and the bend no longer exists.) Guns placed there could prevent Federal steamboats from crossing. Vicksburg was also on one of the major railroads running east-west through the Confederacy. Vicksburg was therefore the key point under Confederate control.
Major General Ulysses Grant marched on land from Memphis, Tennessee, while Union General William Tecumseh Sherman and his troops traveled by water. Both intended to converge on Vicksburg. Both failed, at least for the time being in December, 1862, when Grant's supply line was disrupted and Sherman had to attack alone.
Since Vicksburg did not fall to a frontal assault, the Union forces made several attempts to bypass Vicksburg by building canals to divert the Mississippi River, but these failed.
Grant decided to attack Vicksburg again in April. Instead of approaching from the north, as had been done before, his army approached Vicksburg from the south. Grant's Army of the Tennessee crossed from the western bank to the eastern bank at Bruinsburg on April 30, 1863, and then, in a series of battles including Raymond and Champion's Hill, defeated Confederate forces coming to the relief of Confederate General Pemberton. Sherman and Grant together besieged Vicksburg. Two major assaults were repelled by the defenders of Vicksburg, including one in which a giant land mine was set off under the Confederate fortifications.
From May to July, Vicksburg remained in Confederate hands, but on July 4, 1863, Independence Day, General Pemberton finally capitulated. Thirty thousand Confederates were taken prisoner, but they were released after taking an oath not to fight against the United States again unless properly exchanged (a practice called parole).
This victory cut the Confederate States in two, accomplishing one of the Union total war goals. Confederate forces would not be able to draw on the food and horses previously supplied by Texas.
This victory was very important in many ways.
- The Union now controlled all of the Mississippi River.
- Controlling the Mississippi meant that the Union had now split the Confederacy into two, depriving Confederate forces of the food and supplies of Texas.
The people of Vicksburg would not celebrate Independence Day on July 4th for another 81 years.
Concurrent with the opening of the Vicksburg Campaign, General Lee decided to march his troops into Pennsylvania for several reasons:
- He intended to win a major victory on Northern soil, increasing Southern morale, encouraging Northern peace activists, and increasing the likelihood of political recognition by England and France.
- He intended to feed his army on Northern supplies, reducing the burden on the Confederate economy.
- He intended to pressure Washington, DC, forcing the recall of Federal troops from the Western Theater and relieving some of the pressure on Vicksburg.
Using the Blue Ridge Mountains to screen his movements, Lee advanced up the Shenandoah Valley into West Virginia and Maryland before ultimately marching into south-central Pennsylvania. The Union forces moved north on roads to Lee's east. However, Lee did not know of the Federal movement, because his cavalry commander and chief scout, Jeb Stuart, had launched a raid eastward intending to "ride around" the Union army. On July 1, 1863, a Confederate division (Henry Heth's) ran into a Federal cavalry unit (Buford's) west of the city of Gettysburg. Buford's two brigades held their ground for several hours, until the arrival of the Union 1st Corps, and then withdrew through the town. The Confederates occupied Gettysburg, but by then the Union forces had formed a strong defensive line on the hills south of the town.
For the next three days, the Confederate Army of Northern Virginia faced the Union Army of the Potomac, now under the command of General George G. Meade, a Pennsylvanian who replaced Hooker, who had resigned as commander. (Hooker was given a corps command in the Army of the Cumberland, then in eastern Tennessee, where he performed satisfactorily for the remainder of the war.)
South of Gettysburg are high hills shaped like an inverted letter "J". At the end of the first day, the Union held this important high ground, partially because the Confederate left wing had dawdled moving into position. On July 2, Lee planned to attack up the Emmitsburg Road from the south and west, hoping to force the Union troops to abandon the important hills and ridges. The attack went awry, and some Confederate forces, including Law's Alabama Brigade, attempted to force a gap in the Federal line between the two Round Tops, dominant heights at the extreme southern end of the Union's fish-hook-shaped defensive line. Colonel Joshua Lawrence Chamberlain, commander of the 20th Maine Regiment, anchored this gap. He and the rest of his brigade, commanded by Colonel Strong Vincent, held the hill despite several hard-pressed attacks, eventually launching a bayonet charge when the regiment was low on ammunition.
Meanwhile, north of the Round Tops, a small ridge immediately to the west of the Federal line drew the attention of Union General Daniel Sickles, a former New York congressman, who commanded the Third Corps. He ordered his corps to advance to the peach-orchard crested ridge, which led to hard fighting around the "Devil's Den," Wheatfield, and Peach Orchard. Sickles lost a leg in the fight.
On the third day of the Battle of Gettysburg, Lee decided to try a direct attack on the Union and "virtually destroy their army." Putting Lieutenant General James Longstreet in charge of the three-division main assault, he wanted his men, including the division of Major General George Pickett, to march across a mile and a half up a gradual slope to the center of the Union line. Lee promised artillery support, but any trained soldier who looked across those fields knew that they would be an open target for the Union soldiers--much the reverse of the situation six months before in Fredericksburg. However, the choice was either to attack or withdraw, and Lee was a naturally aggressive soldier.
By the end of the attack, half of Longstreet's force was dead, wounded or captured and the position was not taken. George Pickett never forgave Lee for "slaughtering" his men. Pickett's Charge, called the "High Water Mark of the Confederacy," was practically the last hope of the Southern cause at Gettysburg.
Lee withdrew across the Potomac River. Meade did not pursue quickly, and Lee was able to reestablish himself in Virginia. He offered to Confederate President Jefferson Davis to resign as commander of the Army of Northern Virginia, saying, "Everything, therefore, points to the advantages to be derived from a new commander, and I the more anxiously urge the matter upon Your Excellency from my belief that a younger and abler man than myself can readily be attained." Davis did not relieve Lee; neither did Lincoln relieve Meade, though he wrote a letter of censure, saying "Again, my dear general, I do not believe you appreciate the magnitude of the misfortune involved in Lee's escape. He was within your easy grasp, and to have closed upon him would, in connection with our other late successes, have ended the war. As it is, the war will be prolonged indefinitely."
The battle of Gettysburg lasted three days. Both sides lost nearly twenty-five thousand men each. After Gettysburg, the South remained on the defensive.
On November 19, 1863, Lincoln delivered his most famous speech, the Gettysburg Address, in the wake of this battle. It reads as follows:
"Four score and seven years ago our fathers brought forth on this continent, a new nation, conceived in Liberty, and dedicated to the proposition that all men are created equal. Now we are engaged in a great civil war, testing whether that nation, or any nation so conceived and so dedicated, can long endure. We are met on a great battle-field of that war. We have come to dedicate a portion of that field, as a final resting place for those who here gave their lives that that nation might live. It is altogether fitting and proper that we should do this. But, in a larger sense, we can not dedicate -- we can not consecrate -- we can not hallow -- this ground. The brave men, living and dead, who struggled here, have consecrated it, far above our poor power to add or detract. The world will little note, nor long remember what we say here, but it can never forget what they did here. It is for us the living, rather, to be dedicated here to the unfinished work which they who fought here have thus far so nobly advanced. It is rather for us to be here dedicated to the great task remaining before us -- that from these honored dead we take increased devotion to that cause for which they gave the last full measure of devotion -- that we here highly resolve that these dead shall not have died in vain -- that this nation, under God, shall have a new birth of freedom -- and that government of the people, by the people, for the people, shall not perish from the earth."
Black Americans and the Civil War
The view of the Union towards blacks had changed during the previous two years. At the beginning of hostilities, the war was seen as an effort to save the Union, not to free slaves. Several black slaves who reached Federal lines were returned to their owners. This stopped when Major General Benjamin F. Butler, a Massachusetts lawyer and prominent member of the Democratic party, announced that slaves, being the property of persons in rebellion against the United States, would be seized as "contraband of war" and that the Fugitive Slave Act could not apply. "Contrabands" were, if not always welcomed by white soldiers, not turned away.
However, as the struggle grew more intense, abolition became a more popular option. Frederick Douglass, a former slave, urged that the war aims of the Union include the emancipation of slaves and the enlistment of black soldiers in the Union Army. This was done on a nationwide basis in 1863, though the state of Massachusetts had raised two regiments (the 54th and 55th Massachusetts) before this.
The 54th Massachusetts Regiment was the first black regiment recruited in the North. Col. Robert Gould Shaw, the 25-year-old son of very wealthy abolitionist parents, was chosen to command. On May 28, the well-equipped and well-drilled 54th paraded through the streets of Boston and then boarded ships bound for the coast of South Carolina. Their first conflict with Confederate soldiers came on July 16, when the regiment repelled an attack on James Island. But on July 18 came the supreme test of the courage and valor of the black soldiers; they were chosen to lead the assault on Battery Wagner, a Confederate fort on Morris Island at Charleston. In addressing his soldiers before leading them in the charge across the beach, Colonel Shaw said, "I want you to prove yourselves. The eyes of thousands will look on what you do tonight."
While some blacks chose to join the military fight, others contributed in other ways. An American teacher named Mary S. Peake worked to educate the freedmen and "contraband." She spent her days teaching under a large oak tree near Fort Monroe in Virginia. (This giant tree is now over 140 years old and is called the Emancipation Oak.) Since Fort Monroe remained under Union control, the area was somewhat of a safe location for refugees and runaways. Soon Mary began teaching in the Brown Cottage. This endeavor, sponsored by the American Missionary Association, became the basis from which Hampton University would grow. Mary's school housed around 50 children during the day and 20 adults at night. This remarkable American died of tuberculosis on Washington's birthday in 1862.
Confederate President Jefferson Davis reacted to the raising of black regiments by passing General Order No. 111, which stated that captured black Federal soldiers would be returned into slavery (whether born free or not) and that white officers who led black soldiers would be tried for abetting servile rebellion. The Confederate Congress codified this into law on May 1, 1863. President Lincoln's order of July 30, 1863 responded:
It is therefore ordered that for every soldier of the United States killed in violation of the laws of war, a rebel soldier shall be executed; and for every one enslaved by the enemy or sold into slavery, a rebel soldier shall be placed at hard labor on the public works and continued at such labor until the other shall be released and receive the treatment due to a prisoner of war.
Eventually the Federal forces had several divisions' worth of black soldiers. Their treatment was not equal to white soldiers: at first, for example, black privates were paid $10 a month, the same as laborers, while white privates earned $13 a month. In addition, blacks could not be commissioned officers. The pay difference was settled retroactively in 1864.
The Confederate States also recruited and fielded black troops. It has been estimated that over 65,000 Southern blacks were in the Confederate ranks, and that over 13,000 of these met the enemy in combat. Frederick Douglass reported, "There are at the present moment many Colored men in the Confederate Army doing duty not only as cooks, servants and laborers, but real soldiers, having musket on their shoulders, and bullets in their pockets, ready to shoot down any loyal troops and do all that soldiers may do to destroy the Federal government and build up that of the rebels."
The issue of black prisoners of war was a continual contention between the two sides. In the early stages of the war, prisoners of war would be exchanged rank for rank. However, the Confederates refused to exchange any black prisoner. The Union response was to stop exchanging any prisoner of war. The Confederate position changed to allowing blacks who were born free to be exchanged, and finally to exchange all soldiers, regardless of race. By then, the Federal leadership understood that the scarcity of white Confederates capable of serving as soldiers was an advantage, and there were no mass exchanges of prisoners, black or white, until the Confederate collapse.
Chickamauga and Chattanooga
In September 1863, Union Major General William Rosecrans decided to attempt the takeover of Chattanooga, a Confederate rail center in the eastern part of Tennessee. Controlling Chattanooga would provide a base from which to attack Georgia. The Confederates originally gave up Chattanooga, thinking that they could launch a devastating attack as the Union army attempted to take control of it. Rosecrans did not, in the end, fall into such a trap. However, on September 19 and 20, 1863, the Union and Confederate armies met at Chickamauga Creek, south of Chattanooga, near which a rail line passed into Georgia.
The battle of Chickamauga was a Confederate victory. The Army of the Cumberland was forced to withdraw to Chattanooga, but Union General George Thomas, "the Rock of Chickamauga," and his troops prevented total defeat by standing their ground.
After Rosecrans withdrew to Chattanooga, the Confederates under General Braxton Bragg decided to besiege the city. Rosecrans was relieved of command; Lincoln's comment was that he appeared "stunned and confused, like a duck hit on the head." Meanwhile, by great effort, the Federal forces kept a "cracker line" open to supply Chattanooga with food and forage. Ulysses Grant replaced Rosecrans.
Grant's forces began to attack on November 23, 1863. On November 24 came the Battle of Lookout Mountain, an improbable victory in which Union soldiers, without the initiative of higher command, advanced up this mountain, which overlooks Chattanooga, and captured it. One of the authors of this text had an ancestor in the Confederate forces there; his comment was when the battle started, he was on top of the hill throwing rocks at the Yankees, and when it was over, the Yankees were throwing rocks at him.
By the end of November, Grant and his troops had pushed the Confederates out of East Tennessee and begun operations in Georgia.
Ulysses Grant As General-in-Chief
Lincoln recognized the great victories won by Ulysses Grant. In March, 1864, the President made Grant the general-in-chief of Union Forces, with the rank of Lieutenant General (a rank only previously held by George Washington). Grant decided on a campaign of continual pressure on all fronts, which would prevent Confederate forces from reinforcing each other.
He went east and made his headquarters with General Meade's Army of the Potomac (although Grant never took direct command of this army). The Army of the Potomac's chief mission would be to whittle down the manpower of Lee's Army of Northern Virginia. In May 1864, the two sides met in Virginia near the site of the previous year's Battle of Chancellorsville. The terrain was heavily wooded, and movement to attack or reinforce was particularly difficult.
During the Battle of the Wilderness, the Union lost eighteen thousand soldiers, while the Confederates lost eleven thousand. Nevertheless, the Union pushed on. The two armies fought each other again at Spotsylvania Court House and at Cold Harbor, and in each case the Union again lost large numbers of soldiers. Grant then hatched a plan to go around rather than through the Confederate army in order to capture Richmond. At the last second, due to a hesitation by Major General "Baldy" Smith, the Army of Northern Virginia blocked the Union troops at Petersburg. Grant then decided to besiege the city (and Lee's forces) and force it to surrender; if Lee could not move, he could not help other Confederate armies.
The siege took almost one year.
The Georgia Campaign
Battles for Atlanta
The capture of Atlanta in September 1864 had a significant impact on the election of 1864. Without this victory, there might have been more support for Lincoln's opponent, General McClellan, and the Copperhead peace faction.
The March to the Sea
Once Atlanta was taken, General Sherman and four army corps disconnected themselves from any railroad or telegraphic communications with the Union and headed through the state of Georgia. Their objective was Savannah, Georgia, a major seaport. Sherman's strategy was to inflict as much damage as possible on the war-making resources of Georgia's civilian population, short of killing people. This strategy was known as "total war." To accomplish this, he issued orders to "forage liberally on the country." Many of his soldiers saw this as a license to loot any food or valuable property they could; Sherman officially disapproved of this.
Sherman's army destroyed public buildings and railroad tracks wherever it went. One method produced "Sherman's neckties," made by heating a section of rail to red heat and twisting it around a tree. Sherman carved a path of destruction 300 miles long and over 60 miles wide from Atlanta to the coastal city of Savannah. His technique not only supported his regiments without supply lines, but also destroyed supply caches for Confederate forces in the area.
The Confederate forces were unable to take on Sherman's forces, which, though separated from the Union army, had plenty of arms and ammunition. He reached the city of Savannah on December 24, 1864, and telegraphed President Lincoln "I present to you the city of Savannah as a Christmas present."
Moving through the Carolinas
Sherman's forces then moved north into South Carolina, while faking an approach on Augusta, Georgia; the general's eventual goal was to coordinate his forces with those of General Grant in Virginia and entrap and destroy Lee's Army of Northern Virginia. The pattern of destruction by the Union soldiers continued, often with a more personal feeling of vengeance. A Federal soldier said to his comrades, "Here is where treason began and, by God, here is where it will end!"
On February 17, 1865, Sherman's forces reached Columbia, the capital of South Carolina. After a brief bombardment, the city surrendered. However, a large stock of whiskey was left behind as the Confederates retreated. Drunken soldiers broke discipline; convicts were let loose from the city jail, and somehow fires broke out, destroying much of the city.
Hood's Invasion of Tennessee and the Battle of Nashville
Spring Hill
The Battle of Spring Hill was fought on November 29, 1864, at Spring Hill, Tennessee. The Confederates attacked the Union army as it retreated from Columbia, but they were not able to inflict significant damage on the retreating force, and the Union army made it safely north to Franklin during the night. The following day the Confederates decided to pursue and attack a much more heavily fortified position at the Battle of Franklin. This did not prove to be a wise decision, as the Confederates suffered many casualties.
The Battle of Franklin was fought on November 30, 1864, at Franklin, Tennessee. This battle was a devastating loss for the Confederate Army and crippled its leadership: fourteen Confederate generals were casualties, with six killed, seven wounded, and one captured, and fifty-five regimental commanders were casualties as well. After this battle the Confederate army in this area was effectively handicapped.
In one of the decisive battles of the war, two brigades of black troops helped crush one of the Confederacy's finest armies at the Battle of Nashville on December 15-16, 1864. Black troops opened the battle on the first day and successfully engaged the right of the rebel line. On the second day Col. Charles R. Thompson's black brigade made a brilliant charge up Overton Hill. The 13th US Colored Troops sustained more casualties than any other regiment involved in the battle.
Fort Pillow
The Battle of Fort Pillow was fought on April 12, 1864, at Fort Pillow on the Mississippi River at Henning, Tennessee. The battle ended with a massacre of surrendered Union African-American troops under the direction of Confederate Brigadier General Nathan Bedford Forrest.
The End of the Confederacy
The Siege of Petersburg
The Siege of Petersburg, also known as the Richmond-Petersburg Campaign, began on June 15, 1864, with the intent by the Union Army to take control of Petersburg, which was Virginia's second-largest city and the supply center for the Confederate capital at Richmond. The campaign lasted 292 days and concluded with the Union occupation of the city on April 3, 1865. Thirty-two black infantry and cavalry regiments took part in the siege.
First Battle of Deep Bottom
The First Battle of Deep Bottom is also known as Darbytown, Strawberry Plains, New Market Road, and Gravel Hill. It was part of The Siege of Petersburg, and was fought July 27-29, 1864, at Deep Bottom in Henrico County, Virginia.
The Crater
The Battle of the Crater was part of the Siege of Petersburg and took place on July 30, 1864. The battle was fought between the Confederate Army of Northern Virginia and the Union Army of the Potomac. It was an unusual attempt by the Union to penetrate the Confederate defenses south of Petersburg, Virginia, and it proved to be a Union disaster. The Union went into battle with 16,500 troops under the overall command of Ulysses S. Grant; the Confederates were commanded by Robert E. Lee and entered battle with 9,500 troops. Pennsylvania miners in Union general Ambrose E. Burnside's Ninth Corps worked for several weeks digging a long tunnel under the Confederate lines and packing it with explosives. The explosives were detonated in the early morning hours of July 30, 1864. Burnside originally wanted to send a fresh division of black troops against the breach, but his superiors ruled against it. The job, chosen by drawing lots, went to James H. Ledlie's division. Ledlie watched from behind the lines as his white soldiers, rather than go around, piled into the deep crater, which was 170 feet long, 60 feet across, and 30 feet deep. Unable to climb out, the Union soldiers became easy targets for the Confederates. The battle was also marked by the cruel treatment of the black soldiers who took part in the fight; many of them were murdered after being captured. The battle ended in a Confederate victory: the Union suffered 3,798 casualties, while the Confederates lost 1,491. The United States Colored Troops suffered the most, with 1,327 casualties, including about 450 men captured.
Second Deep Bottom
The Second Battle of Deep Bottom was fought August 14-20, 1864, at Deep Bottom in Henrico County, Virginia; it was part of the Siege of Petersburg. The battle is also known as Fussell's Mill, Kingsland Creek, White's Tavern, Bailey's Creek, and Charles City Road. General Winfield Scott Hancock crossed the James River at Deep Bottom to threaten Richmond, Virginia, a move intended to draw Confederate forces away from the Petersburg trenches and the Shenandoah Valley.
Retreat from Richmond
Sherman did not stop in Georgia. As he marched North, he burnt several towns in South Carolina, including Columbia, the capital. (Sherman's troops felt more anger towards South Carolina, the first state to secede and in their eyes responsible for the war.) In March 1865, Lincoln, Sherman, and Grant all met outside Petersburg. Lincoln called for a quick end to the Civil War. Union General Sheridan said to Lincoln, "If the thing be pressed I think Lee will surrender." Lincoln responded, "Let the thing be pressed."
On April 2, 1865, the Confederate lines at Petersburg, Richmond's defense, which had been extended steadily to the west for nine months, broke. General Lee informed President Davis he could no longer hold the lines; the Confederate government then evacuated Richmond. Lee pulled his forces out of the lines and moved west; Federal forces chased Lee's army, annihilated a Confederate rear-guard defense, and finally trapped the Army of Northern Virginia. General Lee requested terms. The two senior army officers met near Appomattox Court House in Virginia on April 9, 1865, at the home of Wilmer McLean, and the meeting lasted about two and a half hours. Grant offered extremely generous terms, requiring only that Lee's troops surrender their arms and pledge not to take them up again for the remainder of the war. This meeting effectively ended the bloodiest war in American history.
General Sherman met with Confederate General Joseph E. Johnston to discuss the surrender of Confederate troops in the South. Sherman initially allowed even more generous terms than Grant had. However, the Secretary of War refused to accept the terms because of the assassination of Abraham Lincoln by the Confederate sympathizer John Wilkes Booth. By killing Lincoln at Ford's Theater, Booth made things worse for the Confederacy: Sherman was forced to offer harsher terms of surrender than he originally proposed, and General Johnston surrendered on April 26 under the Appomattox terms. All Confederate armies had surrendered by the end of May, ending the Civil War.
Side note: A Virginian named Wilmer McLean had no luck escaping the Civil War. The first battle of the war, Bull Run, was fought right in front of his house, and the generals slept there, too. Hoping to get away from the war, he then moved to Appomattox. It was in his parlor that Lee surrendered to Grant.
Besides the Fighting
Not all the important events of the Civil War took place on the battlefield.
On May 20, 1862, the United States Congress passed the Homestead Act, which had been delayed by Southern legislators before secession. According to the provisions of the Act, any adult American citizen, or a person intending to become an American citizen, who was the head of a household, could qualify for a grant of 160 acres (67 hectares) of land by paying a small fee and living on the land continuously for 5 years. If a person was willing to pay $1.25 an acre, the time of occupation dwindled to six months.
Other vital legislation included the Pacific Railway Acts of 1862 and 1864, which enabled the United States Government to make a direct grant of land to railway companies for a transcontinental railroad, as well as government loans worth up to $48,000 for every mile of track completed, depending on the terrain, for any railway company that would build such a railway. Two railways, the Central Pacific and the Union Pacific, began to construct lines. The two railways finally met four years after the war, at Promontory Summit, Utah, in 1869.
The federal government started a draft lottery in July, 1863. Men could avoid the draft by paying $300, or hiring another man to take their place. This caused resentment amongst the lower classes as they could not afford to dodge the draft. On Monday, July 13, 1863, between 6 and 7 A.M., the Civil War Draft Riots began in New York City. Rioters attacked the Draft offices, the Bull's Head Hotel on 44th Street, and more upscale residences near 5th Avenue. They lynched black men, burned down the Colored Orphan Asylum on 5th Avenue between 43rd and 44th Streets, and forced hundreds of blacks out of the city. Members of the 7th New York Infantry and 71st New York Infantry subdued the riot.
On April 22, 1864, the U.S. Congress passed the Coinage Act of 1864, which first placed the inscription "In God We Trust" on United States coinage.
In 1864, Dr. Rebecca Lee Crumpler became the first black woman in the United States to receive a medical degree.
Under the Morrill Act of 1862, the federal government granted land to the states remaining in the Union, on which they were to build educational institutions; the states that had seceded were excluded. These schools were required to teach military tactics, agriculture, and engineering.
In the 1860s, schools were small, and multiple grades were normally taught in one classroom at the same time. When giving a test, teachers would have the students recite the answers orally, and many of the lessons were memorized by the children and recited. Discipline took the form of corporal punishment, which parents generally applauded, believing it would make their children better behaved.
Students did not attend school very long because they had to work in the fields. Even so, reading levels during this time were quite high: by the fifth grade, students were expected to read books that would today be considered college level. There were also academies that provided education for children between the ages of thirteen and twenty. These academies offered an array of classes, and most kept the boys and girls separate.
Another group discriminated against in schooling was women. Some of the women who stood out and fought for the education rights of women were Susan B. Anthony, Emma Willard, Jane Addams, and Mary McLeod Bethune. These women helped to establish higher-education institutions where women were able to take classes otherwise not offered to them. The first coeducational college was Oberlin College, established in 1833. The first all-women's college was Vassar College, founded in 1861. | http://en.wikibooks.org/wiki/US_History/Civil_War | 13
60 | A previous article about the problems with pie charts received plenty of attention. This time I'm taking on Venn diagrams. As numeric graphs, they share many of the same failings as pie charts: they require the observer to estimate relative areas of abstract shapes.
In this example, you can see two overlapping circles. Firstly, did you know one circle is double the area of the other? Probably not, because we’re not very good at judging relative areas of shapes. Can you determine the value of the overlap area? The equation is complex on paper, let alone in your head!
You need to take the chord that cuts the circle, find the height h, the length of the chord L and the radius of the circle R.
That gives you the area of the circular segment, which you'd need to repeat for the other circle, then add the two values together. See, not so easy now, is it?
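For the curious, here is a minimal Python sketch of that calculation; the radii and centre distance below are illustrative numbers of my own choosing, not values taken from the figure:

    import math

    def segment_area(R, h):
        # Area of a circular segment: the part of a circle of radius R cut off
        # by a chord whose height (sagitta) is h. Valid for 0 <= h <= 2*R.
        return R**2 * math.acos((R - h) / R) - (R - h) * math.sqrt(2*R*h - h**2)

    def overlap_area(r1, r2, d):
        # Area of the lens where two circles overlap, given their radii and the
        # distance d between their centres (assumes |r1 - r2| < d < r1 + r2).
        # The common chord cuts a segment out of each circle; the overlap is
        # simply the sum of the two segment areas.
        h1 = r1 - (d**2 + r1**2 - r2**2) / (2 * d)  # sagitta inside circle 1
        h2 = r2 - (d**2 + r2**2 - r1**2) / (2 * d)  # sagitta inside circle 2
        return segment_area(r1, h1) + segment_area(r2, h2)

    # Example: one circle with double the area of the other (radius ratio
    # sqrt(2)), centres 1.2 units apart.
    r_big = 1.0
    r_small = r_big / math.sqrt(2)
    print(round(overlap_area(r_big, r_small, d=1.2), 4))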
To "improve" this as a graph, we'd need to add the values for each area. In doing so, we're back to the same issue as with the pie chart: if you need labels, why isn't this just a table? What additional information do you learn from the circles?
As a graph to convey some percentage of overlap Venn diagrams are just eye-candy, but they do have a strong reason to exist.
Venn Diagrams for Abstract Concepts
Venn diagrams excel when they explain logical concepts such as philosophical ideas or arguments.
If we take the assertions "all mammals are part of the animal kingdom" and "humans are mammals", we can determine that humans are animals. If we represent this as a Venn diagram, there is one large circle representing all animals. Inside of this, there'd be another circle representing all mammals, and finally, within that, a smaller circle representing humans. The circles get smaller because each inner set contains only part of the one enclosing it: humans are mammals, but so are dogs and cats, so humans are just a subset and therefore don't take up the whole area of the circle. If we remove the circle representing mammals, we'd be left with the assertion that humans are animals.
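The same chain of reasoning can be sanity-checked with ordinary sets; a tiny sketch, with arbitrary example members rather than a real taxonomy, might look like this:

    # Subset relations mirror the nested circles of the diagram.
    animals = {"human", "dog", "cat", "sparrow", "salmon"}
    mammals = {"human", "dog", "cat"}
    humans = {"human"}

    assert mammals <= animals  # all mammals are animals
    assert humans <= mammals   # humans are mammals
    assert humans <= animals   # therefore humans are animals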
From this, we can begin to build up logical statements, also known as propositional calculus. Take for instance the following assertions: "it is raining", "the ground is wet". In propositional calculus we need to be aware of several logical forms and their fallacies. Take for instance Modus Tollens: if P implies Q, and Q is false, then P must be false.
If we substitute our earlier assertions, "it is raining" and "the ground is wet", into this form we get: if it is raining, then the ground is wet; the ground is not wet; therefore, it is not raining.
The logical fallacy comes in when you invert the assertions: it is not raining, therefore the ground is not wet.
While the first part might be true, the second part does not logically follow from it. There could be other reasons why the ground is wet even though it is not raining. People could be outside watering plants or washing their car. We can build these assertions as a Venn diagram to help explain why the inverse is not true.
The state of the world is either the ground being wet or not. So the largest circle is for the ground being in the wet state. A small circle inside of this represents “it is raining”. This is wholly within the circle “the ground is wet” because every time it rains the ground must be wet. We know this from our original assertion. But as you can see from the diagram, there are other reasons why the ground can be wet. For instance, “watering plants” can make the ground wet or may not. Therefore, that circle is partly inside and partly outside the circle “the ground is wet”.
Now if we go back to our assertions: if it's raining, the ground is wet. If we examine every point inside the circle "it is raining", we'll find that it is contained by the circle "the ground is wet". When we negate the statement (the ground is not wet, therefore it is not raining), "ground not wet" is represented by all the space outside the largest circle, and nowhere in that space is any part of the circle "it is raining".
The inverse does not hold up. If the ground is wet, it is raining: if we look at the circle "the ground is wet", some of the area inside of it contains the circle "it is raining". So this statement is not immediately false, but there are other reasons why the ground could be wet. The last part we need to address is the inverted negation: "it is not raining", therefore "the ground is not wet". "It is not raining" is described by the area outside the circle "it is raining", and as we can plainly see, some of that area contains the circle "the ground is wet".
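If you prefer to check this mechanically rather than visually, a brute-force sketch of my own over the four possible truth assignments reaches the same conclusion:

    from itertools import product

    # Keep only the worlds consistent with "if it is raining (P), the ground
    # is wet (Q)", i.e. the implication P -> Q.
    worlds = [(p, q) for p, q in product([False, True], repeat=2)
              if (not p) or q]

    # Modus tollens: wherever Q is false, P is false too.
    print(all(not p for p, q in worlds if not q))  # True

    # The inverted claim: does a wet ground force rain? No, there is a
    # consistent world where the ground is wet but it is not raining.
    print(all(p for p, q in worlds if q))          # False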
Venn diagrams modeling logical statements allow us to visualize larger constructs in ways that are easier than written sentences without having to deal with exact numeric values.
First Order Logic
First order logic, truth tables and logical operators sound complicated, but they're really not. We make statements in these forms all the time without even realizing it. When preparing dinner, you look through the fridge and see what's available. Maybe you have some rice; then you need to ask yourself, "I can cook some chicken or beef to go along with this." Next you might think, "if I have chicken, I need to cook some vegetables." This builds up to statements such as "(chicken AND vegetables) OR beef".
The basics behind first order logic revolve around truth tables of AND, OR and NOT. This is a table that shows you the results if you were to look at all the possibilities and their resulting operations.
If there are two options, "Chicken" and "Beef", the truth tables based on AND and OR are as follows:

Chicken | Beef | AND | OR
No | No | No | No
No | Yes | No | Yes
Yes | No | No | Yes
Yes | Yes | Yes | Yes
Having this truth table, we can ask questions such as "Are you having beef or chicken for dinner?" You can quickly look at the first two columns and begin to fill in the blanks. The first row represents Chicken=No, Beef=No, so the answer to the question would be the corresponding OR column: "No, we are not having beef or chicken for dinner." Now, if we look at the next row, Chicken=No, Beef=Yes, you can answer, "Yes, we are having beef or chicken for dinner." Presumably, you don't get the answer you wanted, which is "what is on the menu", but using these truth tables you can build up chains of these statements to determine answers to other questions. Plenty of board games such as Clue/Cluedo benefit from this sort of logic: "Do you have Professor Plum OR the Observatory OR the Revolver", and then a card gets shown. So you know at least one of those was a yes! Over time, with enough truth tables, you can win the game. But be careful: if you get too good, all your friends will call you a dork and never play board games with you again; there is an art to bluffing and knowing when to be a gracious loser.
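As an aside, the table above is easy to generate programmatically; a small sketch:

    from itertools import product

    def yn(value):
        return "Yes" if value else "No"

    # Print the AND / OR truth table for the two dinner options.
    print(f"{'Chicken':9}{'Beef':6}{'AND':5}{'OR':4}")
    for chicken, beef in product([False, True], repeat=2):
        print(f"{yn(chicken):9}{yn(beef):6}{yn(chicken and beef):5}{yn(chicken or beef):4}")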
Performing a Web Search with Logical Operators
With this knowledge of logical operators it is possible to plug them into search engines to get more refined results, but it is highly unlikely that you have done this, and if you have, I bet you don't do it very often. The reason is that it's too difficult to write out these long strings: the syntax for ANDs and ORs, plus all the nested parentheses for grouping, feels like a programming language when all you wanted to do is search.
Let’s take the three terms: Peanut Butter, Jelly, Bananas and examine a few different combinations which would allow us to search for vastly different documents.
Sites like Google use something similar to the truth table when determining whether to return a document. The engine asks each document, "do you have the term 'Peanut Butter'?", and saves the answer, Yes or No. It then repeats this for the other two terms for all the documents, and then looks in the corresponding OR column. If the result is a Yes, then the page is returned as a search result matching the query. Now, computers are very good at this sort of logic, so the truth table is oversimplifying what actually happens.
Let’s look at some other possible queries and think about what the truth table might look like and if the returned documents make sense.
This query will find all documents which contain all three terms, because for the AND operator to return Yes, all the individual items must also be Yes.
This query will find all documents with the term “Peanut butter” as well as one of the other two remaining terms. So you’ll be reading articles about Peanut Butter and Jelly sandwiches next to article about Peanut Butter and Banana sandwiches. The key is that Peanut Butter will be present in every article returned.
This query will find all documents that contain the terms Peanut Butter and Jelly, or documents that contain the term Bananas.
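To see those three queries in action, here is a rough Python sketch; the mini document collection and the naive substring check are invented for illustration and only stand in for a real search index:

    documents = {
        "doc1": "peanut butter and jelly sandwich recipe",
        "doc2": "peanut butter and banana sandwich",
        "doc3": "banana bread, no nuts involved",
        "doc4": "grape jelly on toast",
    }

    def has(text, term):
        """Crude stand-in for a search index: does the document contain the term?"""
        return term.lower() in text.lower()

    for name, text in documents.items():
        pb = has(text, "peanut butter")
        jelly = has(text, "jelly")
        banana = has(text, "banana")

        q1 = pb and jelly and banana   # "Peanut Butter" AND "Jelly" AND "Bananas"
        q2 = pb and (jelly or banana)  # "Peanut Butter" AND ("Jelly" OR "Bananas")
        q3 = (pb and jelly) or banana  # ("Peanut Butter" AND "Jelly") OR "Bananas"
        print(name, q1, q2, q3)

Only doc1 and doc2 match the second query (Peanut Butter is present in both), while the third query also picks up doc3, which mentions bananas but no peanut butter.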
This doesn’t express all the permutations or even touch on the concept of NOTs, such as: show me all articles about Peanut Butter and Banana sandwiches not involving Elvis.
Sites like Google have created advanced search forms which make it easier to build-up boolean logic queries, but they aren’t intuitive.
This form has the ability to conduct AND and OR searches, echoing back the query string to help teach people how they can create their own searches without having to visit this page.
Google will argue that their search algorithm is good enough to get around edge cases which require monolithically complex queries. In Google’s case, they are probably right: the general public will never need to make use of such specific searches, but that doesn’t mean other, more niche systems won’t need a solution.
Venn Diagrams as GUI Controls
One thing I have been experimenting with, and I’ll be the first to admit it might be a failure, is to map complex AND and OR queries to sections of a clickable Venn diagram. This is no longer a data graph, and it is bordering on no longer being a visualization, but instead a GUI component with the same first class status as the checkbox or radio button.
A Venn GUI control needs to be intuitive, easy to interact with, and immediately usable. At the moment this prototype isn’t there yet, but with further development, testing and tweaking, I think it shows promise in a niche sub-set of search applications.
There are several technical goals of this new GUI component which need to be addressed.
- The control needs to look like a control, look clickable and react in some form like a checkbox remembering state
- The clickable zones need to be large enough to be distinct from each other and so that they can be acquired with ease, as defined by Fitts’s law
- Accessible in a manner that there is some logical tab ordering or ability to activate zones without a pointing device
- Associate each zone with the concept or concepts it represents
- Usable on a monochrome display
Beyond the technical requirements, there needs to be an association with what the Venn diagram is trying to convey. Will people understand the concept of a union of multiple terms?
- The current design can easily handle 2 and 3 term Venn diagrams, but it does not easily scale to N>3
- At the moment this requires some scripting language to handle the click-behavior. This could be moved into a proprietary format such as Flash. If this were ever to become a standard GUI control, any behavior would natively be handled by the application, but until then, there are some external dependencies.
As you can see, it is modeled after the RGB color spectrum. The feedback from this test focused around not really understanding the colors or what was happening—back to the drawing board.
This first implementation didn’t look like a GUI control, it looked like a corporate logo. It didn’t “smell” like a widget so people were unaware of how to treat it.
I proceeded to add some 3Dness to the shapes by beveling the edges and adding a slight gradient to mimic an external light source. This is an example of how the control might look in situ.
The next step was to make the selected state more obvious without the RGB colors. The main color can be swapped out based on your theme, but for this I am using red.
If we have 3 search terms, Peanut Butter, Jelly and Bananas, the following Venn diagram would equate to the search:
Show me all articles that contain the word ("Banana" OR ("Banana" AND "Peanut Butter")) AND NOT "Jelly".
The circle as a whole represents all articles that contain the word “Banana”, but we have only selected the outermost part, which represents documents that contain just the word “Banana” and not the other two terms. We have also selected the small overlapping section between “Banana” and “Peanut Butter”, so it will also return those corresponding documents.
In this example, the Venn diagram represents a slightly more complex search query, yet it’s just as easy to click on and understand. We have selected the small wedge between “Peanut Butter” and “Bananas” as well as the small wedge between “Jelly” and “Bananas”. The third section is the intersection of all three terms. So the search query written out would look like this:
Show me all documents that contain: ("Bananas" AND "Peanut Butter") OR ("Bananas" AND "Jelly") OR ("Peanut Butter" AND "Jelly" AND "Bananas")
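One plausible way to implement the mapping from clicked regions to a query like the one above (sketched below with invented names; this is not the prototype's actual code) is to treat each clickable region as an exact combination of the three terms and to OR the selected regions together:

    from itertools import product

    TERMS = ("Peanut Butter", "Jelly", "Bananas")

    def region_matches(doc_terms, region):
        """A region fixes, for every term, whether the document is inside that circle."""
        return all(doc_terms[t] == inside for t, inside in region.items())

    # The three selected regions from the example: Peanut Butter/Bananas only,
    # Jelly/Bananas only, and the centre where all three terms overlap.
    selected_regions = [
        {"Peanut Butter": True,  "Jelly": False, "Bananas": True},
        {"Peanut Butter": False, "Jelly": True,  "Bananas": True},
        {"Peanut Butter": True,  "Jelly": True,  "Bananas": True},
    ]

    def query(doc_terms):
        """A document matches if it falls in any selected region (an OR of the regions)."""
        return any(region_matches(doc_terms, r) for r in selected_regions)

    # Enumerate every combination of the three terms and show which ones match.
    for values in product([False, True], repeat=3):
        doc = dict(zip(TERMS, values))
        if query(doc):
            print(doc)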
Clicking the different regions to adjust your advanced search seems to be much faster and easier than writing the long and egregious query strings, but the advantage is only gained when people understand the control without an explanation. This is what remains to be tested.
I am planning on using this control in some internal projects as an advanced search option. This is a controlled environment to test the awareness and usage so adjustments can be made or it can be scrapped entirely.
As with any new idea, the creator has more vested in seeing it succeed than others. I would love nothing more than for this idea to take-off, but I’m sure there are plenty of reasons why it won’t. I’m open to any suggestions on ways to improve the interface and idea, as well as all the reasons why this might be the worst GUI component ever thought-up. I think it has great potential, but then again, I’m biased. | http://optional.is/required/2009/09/23/venn-diagrams-as-ui-tools/ | 13 |
53 | Battle of Timor
The Battle of Timor occurred in Portuguese Timor and Dutch Timor during the Second World War. Japanese forces invaded the island on 20 February 1942 and were resisted by a small, under-equipped force of Allied military personnel—known as Sparrow Force—predominantly from Australia, the United Kingdom, and the Netherlands East Indies. Following a brief but stout resistance, the Japanese succeeded in forcing the surrender of the bulk of the Allied force after three days of fighting; however, several hundred Australian commandos continued to wage an unconventional raiding campaign. They were resupplied by aircraft and vessels, based mostly in Darwin, Australia, about 650 km (400 mi) to the southeast, across the Timor Sea. During the subsequent fighting the Japanese suffered heavy casualties, but they were eventually able to contain the Australians.
The campaign lasted until 10 February 1943, when the final remaining Australians were evacuated, making them the last Allied land forces to leave South East Asia, following the Japanese offensives of 1941–1942. As a result, an entire Japanese division was tied up on Timor for more than six months, preventing its deployment elsewhere. Although Portugal was not a combatant, many East Timorese civilians and Portuguese European colonists fought with the Allies, or provided food, shelter and other assistance. Some Timorese continued a resistance campaign following the Australian withdrawal. For this, they paid a heavy price and tens of thousands of Timorese civilians died as a result of the Japanese occupation, which lasted until the end of the war in 1945.
By late-1941, the island of Timor was divided politically between two colonial powers: the Portuguese in the east with a capital at Dili, and the Dutch in the west with an administrative centre at Kupang. A Portuguese enclave at Ocussi was also within the Dutch area.2 The Dutch defence included a force of 500 troops centred on Kupang, while the Portuguese force at Dili numbered just 150.3 In February the Australian and Dutch governments had agreed that in the event Japan entered the Second World War on the Axis side, Australia would provide aircraft and troops to reinforce Dutch Timor. Portugal—under pressure from Japan—maintained their neutrality however.145 As such, following the Japanese attack on Pearl Harbor, a small Australian force—known as Sparrow Force—arrived at Kupang on 12 December 1941.4 Meanwhile, two similar forces, known as Gull Force and Lark Force, were sent by the Australians to reinforce Ambon and Rabaul.6
Sparrow Force was initially commanded by Lieutenant Colonel William Leggatt, and included the 2/40th Battalion, a commando unit—the 2nd Independent Company—under Major Alexander Spence, and a battery of coastal artillery. There were in total around 1,400 men.25 The force reinforced Royal Netherlands East Indies Army troops under the command of Lieutenant Colonel Nico van Straten, including the Timor and Dependencies Garrison Battalion, a company from the VIII Infantry Battalion, a reserve infantry company, a machine-gun platoon from the XIII Infantry Battalion and an artillery battery.7 12 Lockheed Hudson light bombers of No. 2 Squadron, Royal Australian Air Force (RAAF).48 Sparrow Force was initially deployed around Kupang, and the strategic airfield of Penfui in the south-west corner of the island, although other units were based at Klapalima, Usapa Besar and Babau, while a supply base was also established further east at Champlong.8
Up to this point, the government of Portugal had declined to co-operate with the Allies, relying on its claim of neutrality and plans to send an 800-strong force from Mozambique to defend the territory in the event of any Japanese invasion. However, this refusal left the Allied flank severely exposed, and a 400-man combined Dutch-Australian force subsequently occupied Portuguese Timor on 17 December. In response, the Portuguese Prime-Minister, António de Oliveira Salazar, protested to the Allied governments, while the governor of Portuguese Timor declared himself a prisoner in order to preserve the appearance of neutrality. No resistance was offered by the small Portuguese garrison however, and the local authorities tacitly co-operated, while the population itself generally welcomed the Allied force. Most of the Dutch troops and the whole of the 2/2nd Independent Company, were subsequently transferred to Portuguese Timor and they were distributed in small detachments around the territory.1
In January 1942, the Allied forces on Timor became a key link in the so-called "Malay Barrier", defended by the short-lived American-British-Dutch-Australian Command under the overall command of General Sir Archibald Wavell. Additional Australian support staff arrived at Kupang on 12 February, including Brigadier William Veale, who had been made the Allied commanding officer on Timor. By this time, many members of Sparrow Force—most of whom were unused to tropical conditions—were suffering from malaria and other illnesses.1 The airfield at Penfui in Dutch Timor also become a key air link between Australia and American forces fighting in the Philippines under General Douglas MacArthur.3 Penfui came under attack from Japanese aircraft on 26 and 30 January 1942, however the raids were hampered by the British anti-aircraft gunners and, to a lesser degree, by P-40 fighters of the 33rd Pursuit Squadron, United States Army Air Forces, 11 of which were based in Darwin.5 Later, another 500 Dutch troops and the British 79th Light Anti-Aircraft Battery arrived to reinforce Timor, while an additional Australian-American force was scheduled to arrive in February.34
Meanwhile, Rabaul fell to the Japanese on 23 January, followed by Ambon on 3 February, and both Gull Force and Lark Force were destroyed.9 Later, on 16 February, an Allied convoy carrying reinforcements and supplies to Kupang—escorted by the heavy cruiser USS Houston, the destroyer USS Peary, and the sloops HMAS Swan and Warrego—came under intense Japanese air attack and was forced to return to Darwin without landing.5 The reinforcements had included an Australian pioneer battalion—the 2/4th Pioneer Battalion—and the 49th American Artillery Battalion.710 Sparrow Force could not be reinforced further and as the Japanese moved to complete their envelopment of the Netherlands East Indies, Timor was seemingly the next logical target.3
On the night of 19/20 February 1,500 troops from the Imperial Japanese Army's 228th Regimental Group, 38th Division, XVI Army, under the command of Colonel Sadashichi Doi, began landing in Dili. Initially the Japanese ships were mistaken for vessels carrying Portuguese reinforcements, and the Allies were caught by surprise. Nevertheless, they were well-prepared, and the garrison began an orderly withdrawal, covered by the 18-strong Australian Commando No. 2 Section stationed at the airfield. According to Australian accounts of the resistance to the Japanese landings at Dili, the commandos had killed an estimated 200 Japanese in the first hours of the battle, although the Japanese army recorded its casualties as including only seven men.11 Native accounts of the landings support the Australian claims, however.8
Another group of Australian commandos, No. 7 Section, was less fortunate, driving into a Japanese roadblock by chance. Despite surrendering, it is believed that all but one were massacred by the Japanese.8 Outnumbered, the surviving Australians then withdrew to the south and to the east, into the mountainous interior. Van Straten and 200 Dutch East Indies troops headed southwest toward the border.4
On the same night, Allied forces in Dutch Timor also came under extremely intense air attacks, which had already caused the small RAAF force to be withdrawn to Australia. The bombing was followed up by the landing of the main body of the 228th Regimental Group—two battalions totalling around 4,000 men—on the undefended southwest side of the island, at the Paha River. Five Type 94 tankettes were landed to support the Japanese infantry, and the force advanced north, cutting off the Dutch positions in the west and attacking the 2/40th Battalion positions at Penfui. A Japanese company thrust north-east to Usua, aiming to cut off the Allied retreat. In response Sparrow Force HQ was immediately moved further east, towards Champlong.8
Leggatt ordered the destruction of the airfield, but the Allied line of retreat towards Champlong had been cut off by the dropping of about 300 Japanese marine paratroopers, from the 3rd Yokosuka Special Naval Landing Force, near Usua, 22 km (14 mi) east of Kupang.38 Sparrow Force HQ moved further eastward, and Leggatt's men launched a sustained and devastating assault on the paratroopers, culminating in a bayonet charge. By the morning of 23 February, the 2/40th Battalion had killed all but 78 of the paratroopers, but had been engaged from the rear by the main Japanese force once again. With his soldiers running low on ammunition, exhausted, and carrying many men with serious wounds, Leggatt accepted a Japanese invitation to surrender at Usua. The 2/40th Battalion had suffered 84 killed and 132 wounded in the fighting, while more than twice that number would die as prisoners of war during the next two and a half years.8
Veale and the Sparrow Force HQ force—including about 290 Australian and Dutch troops—continued eastward across the border, to link up with the 2/2 Independent Company.7
By the end of February, the Japanese controlled most of Dutch Timor and the area around Dili in the northeast. However, they could not move into the south and east of the island without fear of attack. The 2/2nd Independent Company was specially trained for commando-style, stay-behind operations and it had its own engineers and signalers, although it lacked heavy weapons and vehicles.3 The commandos were hidden throughout the mountains of Portuguese Timor, and they commenced raids against the Japanese, assisted by Timorese guides, native carriers and mountain ponies.3 Although Portuguese officials—under Governor Manuel de Abreu Ferreira de Carvalho—remained officially neutral and in charge of civil affairs, both the Portuguese and the indigenous East Timorese were usually sympathetic to the Allies, who were able to use the local telephone system to communicate among themselves and to gather intelligence on Japanese movements. Initially, however, the commandos did not have functioning radio equipment and were unable to contact Australia to inform them of their continued resistance.12
Doi sent the Australian honorary consul, David Ross, also the local Qantas agent, to find the commandos and pass on a demand to surrender. Spence responded: "Surrender? Surrender be fucked!" Ross gave the commandos information on the disposition of Japanese forces and also provided a note in Portuguese, stating that anyone supplying them would be later reimbursed by the Australian government.13 In early March, Veale and Van Straten's forces linked up with the 2/2nd Company. A replacement radio—nicknamed "Winnie the War Winner"—was cobbled together and contact was made with Darwin.4 By May, Australian aircraft were dropping supplies to the commandos and their allies.14
The Japanese high command sent a highly-regarded veteran of the Malayan campaign and the Battle of Singapore, a major known as the "Tiger of Singapore" (or "Singapore Tiger"; his real name is unknown), to Timor. On 22 May, the "Tiger"—mounted on a white horse—led a Japanese force towards Remexio. An Australian patrol, with Portuguese and Timorese assistance, staged an ambush and killed four or five of the Japanese soldiers. During a second ambush, an Australian sniper shot and killed the "Tiger". Another 24 Japanese soldiers were also killed, and the force retreated to Dili.14 On 24 May, Veale and Van Straten were evacuated from the south east coast by an RAAF Catalina and Spence was appointed commanding officer, after being promoted to Lieutenant Colonel. On 27 May, Royal Australian Navy (RAN) launches successfully completed the first supply and evacuation missions to Timor.14
In June, General Douglas MacArthur—now the Supreme Allied Commander in the South West Pacific Area—was advised by General Thomas Blamey—Allied land force commander—that a full-scale Allied offensive in Timor would require a major amphibious assault, including at least one infantry division (at least 10,000 personnel). Because of this requirement and the overall Allied strategy of recapturing areas to the east, in New Guinea and the Solomon Islands, Blamey recommended that the campaign in Timor should be sustained for as long as possible, but not expanded. This suggestion was ultimately adopted.14
Relations between Ferreira de Carvalho and the Japanese deteriorated. His telegraph link with the Portuguese Government in Lisbon was cut. In June 1942, a Japanese official complained that the Governor had rejected Japanese demands to punish Portuguese officials and Timorese and civilians who had assisted the "invading army" (the Australians). On 24 June, the Japanese formally complained to Lisbon, but did not take any action against Ferreira de Carvalho.15 Meanwhile, Doi once again sent Ross with a message, complimenting Sparrow Force on its campaign so far, and again asking that it surrender. The Japanese commander drew a parallel with the efforts of Afrikaner commandos of the Second Boer War and said that he realized it would take a force 10 times that of the Allies to win. Nevertheless Doi said he was receiving reinforcements, and would eventually assemble the necessary units. This time Ross did not return to Dili, and he was evacuated to Australia on 16 July.14
In August, the Japanese 48th Division—commanded by Lieutenant General Yuitsu Tsuchihashi—began arriving from the Philippines and garrisoned Kupang, Dili and Malacca, relieving the Ito detachment.16 Tsuchihashi then launched a major counter-offensive in an attempt to push the Australians into a corner on the south coast of the island.17 Strong Japanese columns moved south—two from Dili and one from Manatuto on the northeast coast. Another moved eastward from Dutch Timor to attack Dutch positions in the central south of the island. The offensive ended on 19 August when the main Japanese force was withdrawn to Rabaul, but not before they secured the central town of Maubisse and the southern port of Beco. The Japanese were also recruiting significant numbers of Timorese civilians, who provided intelligence on Allied movements.1418 Meanwhile, also in late-August, a parallel conflict began when the Maubisse rebelled against the Portuguese.19
During September the main body of the Japanese 48th Division began arriving to take over the campaign. The Australians also sent reinforcements, in the form of the 450-strong 2/4th Independent Company—known as "Lancer Force"—which arrived on 23 September. The destroyer HMAS Voyager ran aground at the southern port of Betano while landing the 2/4th, and had to be abandoned after it came under air attack. The ship's crew was safely evacuated by HMAS Kalgoorlie and Warrnambool on 25 September 1942 and the ship destroyed by demolition charges.20 On 27 September, the Japanese mounted a thrust from Dili towards the wreck of Voyager, but without any significant success.14
By October, the Japanese had succeeded in recruiting significant numbers of Timorese civilians, who suffered severe casualties when used in frontal assaults against the Allies. The Portuguese were also being pressured to assist the Japanese, and at least 26 Portuguese civilians were killed in the first six months of the occupation, including local officials and a Catholic priest. On 1 November, the Allied high command approved the issuing of weapons to Portuguese officials, a policy which had previously been carried out on an informal basis. At around the same time, the Japanese ordered all Portuguese civilians to move to a "neutral zone" by 15 November. Those who failed to comply were to be considered accomplices of the Allies. This succeeded only in encouraging the Portuguese to cooperate with the Allies, whom they lobbied to evacuate some 300 women and children.14
Spence was evacuated to Australia on 11 November, and the 2/2nd commander, Major Bernard Callinan was appointed Allied commander in Timor. On the night of 30 November -1 December, the Royal Australian Navy mounted a major operation to land fresh Dutch troops at Betano, while evacuating 190 Dutch soldiers and 150 Portuguese civilians. The launch HMAS Kuru was used to ferry the passengers between the shore and two corvettes, HMAS Armidale and Castlemaine. However, Armidale—carrying the Dutch reinforcements—was sunk by Japanese aircraft and almost all of those on board were lost.14 Also during November, the Australian Army's public relations branch arranged to send the Academy Award-winning documentary filmmaker Damien Parer, and a war correspondent named Bill Marien, to Timor. Parer's film, Men of Timor, was later greeted with enthusiasm by audiences in Allied countries.21
By the end of 1942, the chances of the Allies re-taking Timor were remote, as there were now 12,000 Japanese troops on the island and the commandos were coming into increasing contact with the enemy. The Australian chiefs of staff estimated that it would take at least three Allied divisions, with strong air and naval support to recapture the island.14 Indeed, as the Japanese efforts to wear down the Australians and to separate them from their native support became more effective, the commandos had found their operations becoming increasingly untenable. Likewise, with the Australian Army fighting a number of costly battles against the Japanese beachheads around Buna in New Guinea, there were currently insufficient resources to continue operations in Timor. As such from early December Australian operations on Timor would be progressively wound down.18
On 11–12 December, the remainder of the original Sparrow Force—except for a few officers—was evacuated with Portuguese civilians, by the Dutch destroyer HNLMS Tjerk Hiddes.22 Meanwhile, in the first week of January the decision was made to withdraw Lancer Force. On the night of 9/10 January 1943, the bulk of the 2/4th and 50 Portuguese were evacuated by the destroyer HMAS Arunta. A small intelligence team known as S Force was left behind, but its presence was soon detected by the Japanese. With the remnants of Lancer Force, S Force made its way to the eastern tip of Timor, where the Australian-British Z Special Unit was also operating. They were evacuated by the American submarine USS Gudgeon on 10 February.14 Forty Australian commandos were killed during this phase of the fighting, while 1,500 Japanese were believed to have died.12
Overall, while the campaign on Timor had little strategic value, the Australian commandos had prevented an entire Japanese division from being used in the earlier phases of the New Guinea campaign14 while at the same time inflicting a disproportionate level of casualties on them. In contrast to those in Java, Ambon or Rabaul, Australian operations in Timor had been far more successful, even if they were also largely a token effort in the face of overwhelming Japanese strength. Likewise, they had proved that in favorable circumstances, unconventional operations could be both versatile and more economic than conventional operations, for which the resources were not available to the Allies at that time.23 Regardless, this success came at a high price, including the deaths of between 40,000 and 70,000 Timorese and Portuguese civilians during the Japanese occupation.1 Total Allied casualties included around 450 killed, while more than 2,000 Japanese were believed to have died in the fighting.
Ultimately, Japanese forces remained in control of Timor until their surrender in August 1945, following the destruction of Hiroshima and Nagasaki.1 On 5 September 1945, the Japanese commanding officer met Portuguese Governor Manuel de Abreu Ferreira de Carvalho, effectively returning power to him and placing the Japanese forces under Portuguese authority. On 11 September, the Australian Timorforce arrived in Kupang harbor and accepted the surrender of all Japanese forces on Timor from the senior Japanese officer on Timor, Col. Kaida Tatsuichi of the 4th Tank Regiment. The commander of the Timorforce, Brig. Lewis Dyke, a senior diplomat, W. D. Forsyth, and "as many ships as possible" were dispatched to Dili, arriving on 23 September. Ceremonies were then held with Australians, Portuguese and other local residents. Australian troops then supervised the disposal of arms by Japanese work parties before returning to West Timor for the surrender of the commander of the 48th Division, Lt. Gen. Yamada Kunitaro.24 On 27 September, a Portuguese naval and military force of more than 2,000 troops arrived to an impressive ceremony of welcome by the Timorese people. These troops included three engineering companies along with substantial supplies of food and construction materials for the reconstruction of Timor.25
- "A Short History of East Timor". Department of Defence. 2002. Archived from the original on 3 January 2006. Retrieved 3 January 2007.
- Dennis 2008, p. 528.
- Dennis 2008, p. 529.
- "Fighting in Timor, 1942". Australian War Memorial.
- "Fall of Timor". Australian Department of Veteran Affairs. 2005. Archived from the original on 27 July 2008. Retrieved 18 August 2008.
- Henning 1995, p. 47.
- Klemen, L. "Dutch West Timor Island in 1942".
- Remembering 1942
- Dennis 2008, p. 25 and 529.
- "2/4th Pioneer Battalion". The Australian War Memorial. Retrieved 5 January 2010.
- 防衛研修所戦史室, 戦史叢書 蘭印攻略作戦, Tokyo:Asagumo-Shimbun, 1967. (Japanese official military history by National Institute for Defense Studies)
- Callinan 1953, p. xxviii.
- "David Ross". The Airways Museum & Civil Aviation Historical Society. Archived from the original on 11 February 2010. Retrieved 5 January 2010.
- Klemen, L (2000). "The fightings on the Portuguese East Timor Island, 1942". Retrieved 18 August 2008.
- Geoffrey Gunn, 1999, History of Timor (Centro de Estudos sobre África e do Desenvolvimento; Universidade Técnica de Lisboa), p.13
- Rottmann 2002, p. 211.
- White 2002, p. 92.
- Dennis 2008, p. 530.
- "History of Timor". Retrieved 5 January 2010.
- "HMAS Voyager (I)". Royal Australian Navy. Retrieved 23 August 2008.
- "Damien Peter Parer". The Australian War Memorial. Retrieved 5 January 2010.
- Wheeler 2004, p. 152.
- Dennis 2008, pp. 529–530.
- For details about these and other postwar events, see Horton 2009.
- Geoffrey Gunn, 1999, History of Timor (Centro de Estudos sobre África e do Desenvolvimento; Universidade Técnica de Lisboa), p.129
- Callinan, Bernard (1953). Independent Company: The Australian Army in Portuguese Timor 1941–43. London: William Heinemann. ISBN 0-85859-339-8.
- Campbell, Archie (1994). The Double Reds of Timor. Swanbourne: John Burridge Military Antiques. ISBN 0-646-25825-7.
- Dennis, Peter; et al (2008). The Oxford Companion to Australian Military History (Second ed.). Melbourne: Oxford University Press Australia & New Zealand. ISBN 978-0-19-551784-2.
- Doig, Colin (1986). A history of the 2nd Independent Company and 2/2 Commando Squadron. Perth: Selbstverlag. ISBN 0-7316-0668-X.
- Henning, Peter (1995). Doomed Battalion: The Australian 2/40th Battalion 1940–45. Mateship & Leadership in War & Captivity. St Leonards: Allen and Unwin. ISBN 1-86373-763-4.
- Horton, William Bradley (2009). "Through the Eyes of Australians: The Timor Area in the Early Postwar Period" Ajitaiheiyotokyu 12: 251–277.
- Rottman, George (2002). World War II Pacific Island Guide: A Geo-Military Study. Westport: Greenwood Press. ISBN 0-313-31395-4.
- Wheeler, Tony (2004). East Timor. Lonely Planet Publications. ISBN 1-74059-644-7.
- White, Ken (2002). Criado: A story of East Timor. Briar Hill: Indra Publishing. ISBN 0-9578735-4-9.
- Wray, Christopher (1987). Timor 1942. Australian Commandos at war with the Japanese. Hawthorn: Hutchinson Australia. ISBN 0-09-157480-3.
- Australian Department of Veterans Affairs, 2005, "Fall of Timor"
- The Japan Times, 28.04.2007, East Timor former sex slaves start speaking out | http://www.territorioscuola.com/wikipedia/en.wikipedia.php?title=Battle_of_Timor | 13 |
82 | Binary decision diagram
In the field of computer science, a binary decision diagram (BDD) or branching program, like a negation normal form (NNF) or a propositional directed acyclic graph (PDAG), is a data structure that is used to represent a Boolean function. On a more abstract level, BDDs can be considered as a compressed representation of sets or relations. Unlike other compressed representations, operations are performed directly on the compressed representation, i.e. without decompression.
A Boolean function can be represented as a rooted, directed, acyclic graph, which consists of several decision nodes and terminal nodes. There are two types of terminal nodes called 0-terminal and 1-terminal. Each decision node N is labeled by a Boolean variable x and has two child nodes called low child and high child. The edge from node N to a low (or high) child represents an assignment of x to 0 (resp. 1). Such a BDD is called 'ordered' if different variables appear in the same order on all paths from the root. A BDD is said to be 'reduced' if the following two rules have been applied to its graph: merge any isomorphic subgraphs, and eliminate any node whose two children are isomorphic (i.e. lead to the same node).
In popular usage, the term BDD almost always refers to Reduced Ordered Binary Decision Diagram (ROBDD in the literature, used when the ordering and reduction aspects need to be emphasized). The advantage of an ROBDD is that it is canonical (unique) for a particular function and variable order. This property makes it useful in functional equivalence checking and other operations like functional technology mapping.
A path from the root node to the 1-terminal represents a (possibly partial) variable assignment for which the represented Boolean function is true. As the path descends to a low (or high) child from a node, then that node's variable is assigned to 0 (resp. 1).
The left figure below shows a binary decision tree (the reduction rules are not applied), and a truth table, each representing the function f (x1, x2, x3). In the tree on the left, the value of the function can be determined for a given variable assignment by following a path down the graph to a terminal. In the figures below, dotted lines represent edges to a low child, while solid lines represent edges to a high child. Therefore, to find f (x1=0, x2=1, x3=1), begin at x1, traverse down the dotted line to x2 (since x1 has an assignment to 0), then down two solid lines (since x2 and x3 each have an assignment to 1). This leads to the terminal 1, which is the value of f (x1=0, x2=1, x3=1).
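As a rough illustration of that traversal (not any particular BDD library's API), the walk can be written in a few lines of Python; the diagram built here is for an invented function, not necessarily the one shown in the figure:

    class Node:
        """A decision node: 'var' names the variable, low/high are the child nodes."""
        def __init__(self, var, low, high):
            self.var, self.low, self.high = var, low, high

    # Terminal nodes are simply the Booleans False (0-terminal) and True (1-terminal).

    def evaluate(node, assignment):
        """Follow the low edge for a 0-assignment and the high edge for a 1-assignment."""
        while isinstance(node, Node):
            node = node.high if assignment[node.var] else node.low
        return node

    # A small hand-built diagram for f(x1, x2, x3) = x1 OR (x2 AND x3).
    n3 = Node("x3", False, True)
    n2 = Node("x2", False, n3)
    root = Node("x1", n2, True)

    print(evaluate(root, {"x1": 0, "x2": 1, "x3": 1}))   # prints True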
The binary decision tree of the left figure can be transformed into a binary decision diagram by maximally reducing it according to the two reduction rules. The resulting BDD is shown in the right figure.
The basic idea from which the data structure was created is the Shannon expansion. A switching function is split into two sub-functions (cofactors) by assigning one variable (cf. if-then-else normal form). If such a sub-function is considered as a sub-tree, it can be represented by a binary decision tree. Binary decision diagrams (BDD) were introduced by Lee, and further studied and made known by Akers and Boute.
The full potential for efficient algorithms based on the data structure was investigated by Randal Bryant at Carnegie Mellon University: his key extensions were to use a fixed variable ordering (for canonical representation) and shared sub-graphs (for compression). Applying these two concepts results in an efficient data structure and algorithms for the representation of sets and relations. By extending the sharing to several BDDs, i.e. one sub-graph is used by several BDDs, the data structure Shared Reduced Ordered Binary Decision Diagram is defined. The notion of a BDD is now generally used to refer to that particular data structure.
In his video lecture Fun With Binary Decision Diagrams (BDDs), Donald Knuth calls BDDs "one of the only really fundamental data structures that came out in the last twenty-five years" and mentions that Bryant's 1986 paper was for some time one of the most-cited papers in computer science.
BDDs are extensively used in CAD software to synthesize circuits (logic synthesis) and in formal verification. There are several lesser known applications of BDD, including Fault tree analysis, Bayesian Reasoning, Product Configuration, and Private information retrieval .
Every arbitrary BDD (even if it is not reduced or ordered) can be directly implemented by replacing each node with a 2 to 1 multiplexer; each multiplexer can be directly implemented by a 4-LUT in an FPGA. It is not so simple to convert from an arbitrary network of logic gates to a BDD (unlike the and-inverter graph).
Variable ordering
The size of the BDD is determined both by the function being represented and by the chosen ordering of the variables. There exist Boolean functions for which, depending upon the ordering of the variables, we would end up getting a graph whose number of nodes would be linear (in n) at best and exponential in the worst case (e.g., a ripple carry adder). Let us consider the Boolean function f(x1, ..., x2n) = x1x2 + x3x4 + ... + x(2n-1)x(2n). Using the variable ordering x1 < x3 < ... < x(2n-1) < x2 < x4 < ... < x(2n), the BDD needs 2^(n+1) nodes to represent the function. Using the ordering x1 < x2 < x3 < x4 < ... < x(2n-1) < x(2n), the BDD consists of 2n+2 nodes.
It is of crucial importance to care about variable ordering when applying this data structure in practice. The problem of finding the best variable ordering is NP-hard. For any constant c > 1 it is even NP-hard to compute a variable ordering resulting in an OBDD with a size that is at most c times larger than an optimal one. However there exist efficient heuristics to tackle the problem.
There are functions for which the graph size is always exponential — independent of variable ordering. This holds, e.g., for the multiplication function (an indication as to the apparent complexity of factorization).
Researchers have of late suggested refinements on the BDD data structure giving way to a number of related graphs, such as BMD (Binary Moment Diagrams), ZDD (Zero Suppressed Decision Diagram), FDD (Free Binary Decision Diagrams), PDD (Parity decision Diagrams), and MTBDDs (Multiple terminal BDDs).
Logical operations on BDDs
Many logical operations on BDDs can be implemented by polynomial-time graph manipulation algorithms.
However, repeating these operations several times, for example forming the conjunction or disjunction of a set of BDDs, may in the worst case result in an exponentially big BDD. This is because any of the preceding operations for two BDDs may result in a BDD with a size proportional to the product of the BDDs' sizes, and consequently for several BDDs the size may be exponential.
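As a sketch of how such a polynomial-time operation can work, the classic "apply" algorithm below combines two ordered BDDs with an arbitrary binary Boolean operator; memoising on pairs of sub-diagrams is what keeps the work roughly proportional to the product of the two diagram sizes. The node representation is invented for the example, and the result is left unreduced (a real package would also maintain a unique table to enforce the reduction rules):

    class Node:
        """Decision node of an ordered BDD: 'var' is the variable's position in the ordering."""
        def __init__(self, var, low, high):
            self.var, self.low, self.high = var, low, high

    def apply_op(op, u, v, memo=None):
        """Combine two ordered BDDs u and v with a binary Boolean operator op."""
        if memo is None:
            memo = {}
        key = (id(u), id(v))
        if key in memo:
            return memo[key]

        if isinstance(u, bool) and isinstance(v, bool):
            result = op(u, v)
        else:
            # Branch on the earliest variable, per the shared fixed ordering.
            u_var = u.var if isinstance(u, Node) else float("inf")
            v_var = v.var if isinstance(v, Node) else float("inf")
            var = min(u_var, v_var)
            u_low, u_high = (u.low, u.high) if u_var == var else (u, u)
            v_low, v_high = (v.low, v.high) if v_var == var else (v, v)
            result = Node(var,
                          apply_op(op, u_low, v_low, memo),
                          apply_op(op, u_high, v_high, memo))

        memo[key] = result
        return result

    # Example: build the diagram for x1 AND x2 from two single-variable diagrams.
    x1 = Node(1, False, True)
    x2 = Node(2, False, True)
    conjunction = apply_op(lambda a, b: a and b, x1, x2)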
See also
- Boolean satisfiability problem
- L/poly, a complexity class that captures the complexity of problems with polynomially sized BDDs
- Model checking
- Radix tree
- Binary key – a method of species identification in biology using binary trees
- Barrington's theorem
- Graph-Based Algorithms for Boolean Function Manipulation, Randal E. Bryant, 1986
- C. Y. Lee. "Representation of Switching Circuits by Binary-Decision Programs". Bell Systems Technical Journal, 38:985–999, 1959.
- Sheldon B. Akers. Binary Decision Diagrams, IEEE Transactions on Computers, C-27(6):509–516, June 1978.
- Raymond T. Boute, "The Binary Decision Machine as a programmable controller". EUROMICRO Newsletter, Vol. 1(2):16–22, January 1976.
- Randal E. Bryant. "Graph-Based Algorithms for Boolean Function Manipulation". IEEE Transactions on Computers, C-35(8):677–691, 1986.
- R. E. Bryant, "Symbolic Boolean Manipulation with Ordered Binary Decision Diagrams", ACM Computing Surveys, Vol. 24, No. 3 (September, 1992), pp. 293–318.
- Karl S. Brace, Richard L. Rudell and Randal E. Bryant. "Efficient Implementation of a BDD Package". In Proceedings of the 27th ACM/IEEE Design Automation Conference (DAC 1990), pages 40–45. IEEE Computer Society Press, 1990.
- R.M. Jensen. "CLab: A C+ + library for fast backtrack-free interactive product configuration". Proceedings of the Tenth International Conference on Principles and Practice of Constraint Programming, 2004.
- H.L. Lipmaa. "First CPIR Protocol with Data-Dependent Computation". ICISC 2009.
- Beate Bollig, Ingo Wegener. Improving the Variable Ordering of OBDDs Is NP-Complete, IEEE Transactions on Computers, 45(9):993–1002, September 1996.
- Detlef Sieling. "The nonapproximability of OBDD minimization." Information and Computation 172, 103–138. 2002.
- R. Ubar, "Test Generation for Digital Circuits Using Alternative Graphs (in Russian)", in Proc. Tallinn Technical University, 1976, No.409, Tallinn Technical University, Tallinn, Estonia, pp. 75–81.
Further reading
- D. E. Knuth, "The Art of Computer Programming Volume 4, Fascicle 1: Bitwise tricks & techniques; Binary Decision Diagrams" (Addison–Wesley Professional, March 27, 2009) viii+260pp, ISBN 0-321-58050-8. Draft of Fascicle 1b available for download.
- H. R. Andersen "An Introduction to Binary Decision Diagrams," Lecture Notes, 1999, IT University of Copenhagen.
- Ch. Meinel, T. Theobald, "Algorithms and Data Structures in VLSI-Design: OBDD – Foundations and Applications", Springer-Verlag, Berlin, Heidelberg, New York, 1998. Complete textbook available for download.
- Rüdiger Ebendt; Görschwin Fey; Rolf Drechsler (2005). Advanced BDD optimization. Springer. ISBN 978-0-387-25453-1.
- Bernd Becker; Rolf Drechsler (1998). Binary Decision Diagrams: Theory and Implementation. Springer. ISBN 978-1-4419-5047-5.
|Wikimedia Commons has media related to: Binary decision diagrams|
Available OBDD Packages
- ABCD: The ABCD package by Armin Biere, Johannes Kepler Universität, Linz.
- CMU BDD, BDD package, Carnegie Mellon University, Pittsburgh
- BuDDy: A BDD package by Jørn Lind-Nielsen
- Biddy: Academic multiplatform BDD package, University of Maribor
- CUDD: BDD package, University of Colorado, Boulder
- JavaBDD, a Java port of BuDDy that also interfaces to CUDD, CAL, and JDD
- JDD is a pure java implementation of BDD and ZBDD. JBDD by the same author has a similar API but is a Java interface to BuDDy and CUDD
- The Berkeley CAL package which does breadth-first manipulation
- DDD: A C++ library with support for integer valued and hierarchical decision diagrams.
- JINC: A C++ library developed at University of Bonn, Germany, supporting several BDD variants and multi-threading. | http://en.wikipedia.org/wiki/Binary_decision_diagram | 13 |
55 | Signal-to-noise ratio (often abbreviated SNR or S/N) is a measure used in science and engineering that compares the level of a desired signal to the level of background noise. It is defined as the ratio of signal power to the noise power. A ratio higher than 1:1 indicates more signal than noise. While SNR is commonly quoted for electrical signals, it can be applied to any form of signal (such as isotope levels in an ice core or biochemical signaling between cells).
Signal-to-noise ratio is sometimes used informally to refer to the ratio of useful information to false or irrelevant data in a conversation or exchange. For example, in online discussion forums and other online communities, off-topic posts and spam are regarded as "noise" that interferes with the "signal" of appropriate discussion.
SNR = P_signal / P_noise, where P is average power. Both signal and noise power must be measured at the same or equivalent points in a system, and within the same system bandwidth. If the signal and the noise are measured across the same impedance, then the SNR can be obtained by calculating the square of the amplitude ratio:
SNR = P_signal / P_noise = (A_signal / A_noise)^2, where A is root mean square (RMS) amplitude (for example, RMS voltage). Because many signals have a very wide dynamic range, SNRs are often expressed using the logarithmic decibel scale. In decibels, the SNR is defined as SNR_dB = 10 log10(P_signal / P_noise),
which may equivalently be written using amplitude ratios as SNR_dB = 20 log10(A_signal / A_noise).
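As a quick sketch of these formulas in practice (using NumPy; the signal and the noise below are synthetic examples, not measured data):

    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 1000, endpoint=False)
    signal = np.sin(2 * np.pi * 50 * t)            # a synthetic 50 Hz tone
    noise = 0.1 * rng.standard_normal(t.size)      # synthetic additive noise

    p_signal = np.mean(signal ** 2)                # average signal power
    p_noise = np.mean(noise ** 2)                  # average noise power

    snr_db = 10 * np.log10(p_signal / p_noise)     # power (10 log) form

    # The equivalent RMS-amplitude (20 log) form gives the same number of decibels.
    a_signal, a_noise = np.sqrt(p_signal), np.sqrt(p_noise)
    assert np.isclose(snr_db, 20 * np.log10(a_signal / a_noise))
    print(round(snr_db, 1), "dB")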
The concepts of signal-to-noise ratio and dynamic range are closely related. Dynamic range measures the ratio between the strongest un-distorted signal on a channel and the minimum discernable signal, which for most purposes is the noise level. SNR measures the ratio between an arbitrary signal level (not necessarily the most powerful signal possible) and noise. Measuring signal-to-noise ratios requires the selection of a representative or reference signal. In audio engineering, the reference signal is usually a sine wave at a standardized nominal or alignment level, such as 1 kHz at +4 dBu (1.228 VRMS).
SNR is usually taken to indicate an average signal-to-noise ratio, as it is possible that (near) instantaneous signal-to-noise ratios will be considerably different. The concept can be understood as normalizing the noise level to 1 (0 dB) and measuring how far the signal 'stands out'.
Difference from conventional power
In physics, the power of an AC signal is defined as P = V_rms^2 / R. But in signal processing and communication we usually assume that R = 1 ohm, so the resistance term is not included when measuring the power or energy of a signal. This may cause some confusion, but the resistance term is not significant for operations performed in signal processing. In most cases the power of a signal would be P = A^2 / 2, where A is the amplitude of the AC signal. In some places people just use P = A^2, as the constant term doesn't affect the calculations much.
Alternative definition
An alternative definition is the ratio of mean to standard deviation: SNR = μ / σ, where μ is the signal mean or expected value and σ is the standard deviation of the noise, or an estimate thereof.[note 2] Notice that such an alternative definition is only useful for variables that are always non-negative (such as photon counts and luminance). Thus it is commonly used in image processing, where the SNR of an image is usually calculated as the ratio of the mean pixel value to the standard deviation of the pixel values over a given neighborhood. Sometimes SNR is defined as the square of the alternative definition above.
The Rose criterion (named after Albert Rose) states that an SNR of at least 5 is needed to be able to distinguish image features at 100% certainty. An SNR less than 5 means less than 100% certainty in identifying image details.
SNR for various modulation systems
Amplitude modulation
Channel signal-to-noise ratio is given by
where W is the bandwidth and ka is modulation index
Output signal-to-noise ratio (of AM receiver) is given by
Frequency modulation
Channel signal-to-noise ratio is given by
Output signal-to-noise ratio is given by
Improving SNR in practice
All real measurements are disturbed by noise. This includes electronic noise, but can also include external events that affect the measured phenomenon — wind, vibrations, gravitational attraction of the moon, variations of temperature, variations of humidity, etc., depending on what is measured and of the sensitivity of the device. It is often possible to reduce the noise by controlling the environment. Otherwise, when the characteristics of the noise are known and are different from the signals, it is possible to filter it or to process the signal.
For example, it is sometimes possible to use a lock-in amplifier to modulate and confine the signal within a very narrow bandwidth and then filter the detected signal to the narrow band where it resides, thereby eliminating most of the broadband noise. When the signal is constant or periodic and the noise is random, it is possible to enhance the SNR by averaging the measurement. In this case the noise goes down as the square root of the number of averaged samples.
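A small, purely illustrative simulation of that square-root improvement:

    import numpy as np

    rng = np.random.default_rng(1)
    true_value = 1.0    # the constant quantity being measured
    sigma = 0.5         # standard deviation of the measurement noise

    for n in (1, 4, 16, 64, 256):
        # 10,000 independent trials, each averaging n noisy measurements.
        measurements = true_value + sigma * rng.standard_normal((10_000, n))
        averaged = measurements.mean(axis=1)
        print(f"n={n:4d}  residual noise ~ {averaged.std():.4f}"
              f"  (sigma/sqrt(n) = {sigma / np.sqrt(n):.4f})")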
Digital signals
When a measurement is digitised, the number of bits used to represent the measurement determines the maximum possible signal-to-noise ratio. This is because the minimum possible noise level is the error caused by the quantization of the signal, sometimes called Quantization noise. This noise level is non-linear and signal-dependent; different calculations exist for different signal models. Quantization noise is modeled as an analog error signal summed with the signal before quantization ("additive noise").
This theoretical maximum SNR assumes a perfect input signal. If the input signal is already noisy (as is usually the case), the signal's noise may be larger than the quantization noise. Real analog-to-digital converters also have other sources of noise that further decrease the SNR compared to the theoretical maximum from the idealized quantization noise, including the intentional addition of dither.
Although noise levels in a digital system can be expressed using SNR, it is more common to use Eb/No, the energy per bit per noise power spectral density.
The modulation error ratio (MER) is a measure of the SNR in a digitally modulated signal.
Fixed point
Assuming a uniform distribution of input signal values, the quantization noise is a uniformly distributed random signal with a peak-to-peak amplitude of one quantization level, making the amplitude ratio 2^n/1. The formula is then: SNR = 20 log10(2^n) ≈ 6.02 · n dB.
This relationship is the origin of statements like "16-bit audio has a dynamic range of 96 dB". Each extra quantization bit increases the dynamic range by roughly 6 dB.
Assuming a full-scale sine wave signal (that is, the quantizer is designed such that it has the same minimum and maximum values as the input signal), the quantization noise approximates a sawtooth wave with peak-to-peak amplitude of one quantization level and uniform distribution. In this case, the SNR is approximately SNR ≈ 20 log10(2^n · sqrt(3/2)) ≈ 6.02 · n + 1.76 dB.
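A short check of this approximation (the same 6.02n + 1.76 dB rule cited in the references below):

    import math

    def quantization_snr_db(bits):
        """Approximate SNR of an ideal n-bit quantizer driven by a full-scale sine wave."""
        return 20 * math.log10(2 ** bits * math.sqrt(1.5))   # about 6.02*bits + 1.76 dB

    for bits in (8, 16, 24):
        print(bits, round(quantization_snr_db(bits), 2))     # 16 bits -> about 98.09 dB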
Floating point
Note that the dynamic range is much larger than fixed-point, but at a cost of a worse signal-to-noise ratio. This makes floating-point preferable in situations where the dynamic range is large or unpredictable. Fixed-point's simpler implementations can be used with no signal quality disadvantage in systems where dynamic range is less than 6.02m. The very large dynamic range of floating-point can be a disadvantage, since it requires more forethought in designing algorithms.
Optical SNR
Optical signals have a carrier frequency that is much higher than the modulation frequency (about 200 THz and more). This way the noise covers a bandwidth that is much wider than the signal itself. The resulting signal influence relies mainly on the filtering of the noise. To describe the signal quality without taking the receiver into account, the optical SNR (OSNR) is used. The OSNR is the ratio between the signal power and the noise power in a given bandwidth. Most commonly a reference bandwidth of 0.1 nm is used. This bandwidth is independent of the modulation format, the frequency and the receiver. For instance an OSNR of 20 dB/0.1 nm could be given, even though the signal of a 40 GBit/s DPSK system would not fit in this bandwidth. OSNR is measured with an optical spectrum analyzer.
See also
- The connection between optical power and voltage in an imaging system is linear. This usually means that the SNR of the electrical signal is calculated by the 10 log rule. With an interferometric system, however, where interest lies in the signal from one arm only, the field of the electromagnetic wave is proportional to the voltage (assuming that the intensity in the second, the reference arm is constant). Therefore the optical power of the measurement arm is directly proportional to the electrical power and electrical signals from optical interferometry are following the 20 log rule.
- The exact methods may vary between fields. For example, if the signal data are known to be constant, then can be calculated using the standard deviation of the signal. If the signal data are not constant, then can be calculated from data where the signal is zero or relatively constant.
- Often special filters are used to weight the noise: DIN-A, DIN-B, DIN-C, DIN-D, CCIR-601; for video, special filters such as comb filters may be used.
- Maximum possible full scale signal can be charged as peak-to-peak or as RMS. Audio uses RMS, Video P-P, which gave +9 dB more SNR for video.
- Michael A. Choma, Marinko V. Sarunic, Changhuei Yang, Joseph A. Izatt. Sensitivity advantage of swept source and Fourier domain optical coherence tomography. Optics Express, 11(18). Sept 2003.
- D. J. Schroeder (1999). Astronomical optics (2nd ed.). Academic Press. p. 433. ISBN 978-0-12-629810-9.
- Bushberg, J. T., et al., The Essential Physics of Medical Imaging, (2e). Philadelphia: Lippincott Williams & Wilkins, 2006, p. 280.
- Rafael C. González, Richard Eugene Woods (2008). Digital image processing. Prentice Hall. p. 354. ISBN 0-13-168728-X.
- Tania Stathaki (2008). Image fusion: algorithms and applications. Academic Press. p. 471. ISBN 0-12-372529-1.
- Jitendra R. Raol (2009). Multi-Sensor Data Fusion: Theory and Practice. CRC Press. ISBN 1-4398-0003-0.
- John C. Russ (2007). The image processing handbook. CRC Press. ISBN 0-8493-7254-2.
- Defining and Testing Dynamic Parameters in High-Speed ADCs — Maxim Integrated Products Application note 728
- Fixed-Point vs. Floating-Point DSP for Superior Audio — Rane Corporation technical library
- Taking the Mystery out of the Infamous Formula,"SNR = 6.02N + 1.76dB," and Why You Should Care. Analog Devices
- ADC and DAC Glossary – Maxim Integrated Products
- Understand SINAD, ENOB, SNR, THD, THD + N, and SFDR so you don't get lost in the noise floor – Analog Devices
- The Relationship of dynamic range to data word size in digital audio processing
- Calculation of signal-to-noise ratio, noise voltage, and noise level
- Learning by simulations – a simulation showing the improvement of the SNR by time averaging
- Dynamic Performance Testing of Digital Audio D/A Converters
- Fundamental theorem of analog circuits: a minimum level of power must be dissipated to maintain a level of SNR | http://en.wikipedia.org/wiki/Signal_to_noise_ratio | 13 |
88 | Area Art: Ethnomathematics For The Twenty-First Century
Answer these essential questions to check for your own understanding.
1. Can you find basic shapes (triangle, rectangle, square) in each of the following figures?
Using the IT'S HIP TO BE SQUARE HANDOUT, identify as many combinations of basic shapes as you can find inscribed within the borders of figures A through F. In some cases, you may choose to identify other polygons like trapezoids or parallelograms inscribed in the same figures. Name each of the more complex shapes.
2. What is Area?
Access the Standard Deviants School Geometry: Figuring Out Area, Segment 3 "Area of Combined Shapes & Review" video through EdVideo Online at http://www.thirteen.org/edonline/edvideo/index.html. Once downloaded, FAST FORWARD the video to the image of a young woman explaining the "Area of Polygons" (you will see a chalkboard in the background with white text; time = 2:55). PLAY the video which reviews ways to calculate the area of a square (time = 2:55-3:05), a rectangle (time = 3:06-3:08), a triangle (time = 3:09-3:10), a trapezoid (time = 3:11-3:16) and a circle (time = 3:17-3:22). Record the formulas necessary to calculate area for each of the figures.
Area of a square = s², where s = the measure of the sides
Area of a rectangle = bh, where b = the measure of the base and h = the measure of the height
Area of a triangle = 1/2 bh *Note: A right triangle is basically a square or rectangle divided in half.
Area of a trapezoid = 1/2(b1 + b2)h
Area of a circle = πr², where r = the radius of the circle and π is approximately equal to 3.14
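These formulas translate directly into a few small Python functions, which can be handy for checking your own answers; a minimal sketch:

    import math

    def area_square(s):
        return s ** 2

    def area_rectangle(b, h):
        return b * h

    def area_triangle(b, h):
        return 0.5 * b * h

    def area_trapezoid(b1, b2, h):
        return 0.5 * (b1 + b2) * h

    def area_circle(r):
        return math.pi * r ** 2      # math.pi is more precise than 3.14

    print(area_circle(3))            # about 28.27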
Log onto the Area Explained Web site at http://www.mathleague.com/help/geometry/area.htm. For each of the shapes, make sure that you can follow the steps used to calculate area and perimeter. If necessary, take notes in a notebook or on a separate sheet of paper. The same formulas are available on the STANDARD DEVIANTS GEOMETRY 2 NOTE CARD HANDOUT.
Go to the Math Playground Web site at http://www.mathplayground.com/geometryMovie.html. Calculate the area and perimeter of at least four rectangles. Once you are satisfied that you have mastered these calculations, you are ready to consider more challenging questions.
3. How do you find the area of combined (compound) shapes?
Look at the figures shown below.
A trapezoid may be conceptualized simply as a rectangle with two right triangles on the sides.
The area of many combined shapes like those identified in the IT'S HIP TO BE SQUARE HANDOUT can be calculated simply by breaking the complex figures into smaller parts. PLAY the video segment used to answer "Introductory Activity Question 2: What is area?" from the beginning. The narrator explains how the shape of a typical mailbox found in the United States looks like the combination of a rectangle and a semicircle. Watch the segment to review strategies that can be used to calculate area of such figures.
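For example, the mailbox shape can be treated as a rectangle plus a semicircle whose diameter equals the rectangle's width; the dimensions in this small sketch are made up purely for illustration:

    import math

    def mailbox_area(width, rect_height):
        """Rectangle topped by a semicircle of diameter equal to the width."""
        rectangle = width * rect_height
        semicircle = 0.5 * math.pi * (width / 2) ** 2
        return rectangle + semicircle

    print(mailbox_area(6, 10))   # 60 plus half a circle of radius 3, about 74.14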
Apply principles of geometry to other types of math problems
1. Shaded region practice problem
Try to solve the problem on the NCTM Weekly Problem Web site (http://www.nctm.org/high/asolutions.asp?ID=216). In the problem, a rectangle is inscribed within a quarter circle. Use your knowledge of the area of a circle and the area of a rectangle to finish the problem. Since you only have ¼ of the full circle, the shaded region is represented by the difference of the ¼-circle area and the rectangle. If you are stumped, the site provides a detailed explanation of the problem.
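A sketch of the general approach (the radius and rectangle dimensions below are placeholders chosen so that the rectangle's far corner lies on the arc; they are not the actual numbers from the NCTM problem):

    import math

    def shaded_region(radius, rect_w, rect_h):
        """Area inside a quarter circle but outside an inscribed rectangle."""
        quarter_circle = 0.25 * math.pi * radius ** 2
        rectangle = rect_w * rect_h
        return quarter_circle - rectangle

    print(shaded_region(10, 8, 6))   # an 8-6-10 right triangle puts the corner on the arc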
2. GED Practice
Several good examples of questions on the GED Math test are available on this site. Complete the questions and get a summary of your performance. As you learn more about geometry, you should be able to complete all of the problems in the set. There are a few simple rules to remember: the interior angles of a triangle must add to 180 degrees, the interior angles of a square or rectangle add to 360 degrees, and a full circle contains 360 degrees. These facts will get you well on your way to answering the questions if you remember that in most regular shapes, you can find either a square, rectangle, triangle or circle hidden within its area.
Identify geometry in art and culture
1. Islamic Art: Tiling and Geometric Patterns
Print out pages 22 and 24 of the GEOMETRIC PATTERNS HANDOUT (Available online at http://www.projects.ex.ac.uk/trol/trol/trolna.pdf). How do these patterns compare geometrically? (Answer: The patterns shown on page 22 radiate from the center of circles and polygons, while the patterns shown on page 24, commonly called knot patterns, are based on grids.)
Log onto the Islamic Art Web site at http://www.maths2art.co.uk/islamicart.htm
to examine patterns common in Islamic art forms. Click on the link to "Area and Perimeter" to get additional practice calculating these properties of two-dimensional shapes. Click on the "Construction" link to practice drawing some of these patterns on your own using GRAPH PAPER
, a ruler and a compass.
2. Quilting: World influences and American traditions
Log onto Quilted Math at http://www.riverdeep.net/current/2001/11/112601_quiltedmath.jhtml
. Look at the intricate "Heart and Gizzards" pattern; try to answer some of the questions provided on the site. An interesting problem is posed under the "Irish Chain Pattern": If the dimensions of this quilt are 60 in x 72 in, and 60 equal blocks were used to make the quilt, what were the dimensions of each block? (Answer: If the blocks are arranged in a 6 x 10 grid, each block measures 60 in. ÷ 6 = 10 in. by 72 in. ÷ 10 = 7.2 in.)
What is the area of a quilt with these dimensions? (Answer: 4,320 in.²; multiply 60 in. by 72 in.)
3. Geometric Patterns: Create your own virtual quilt
For those of us who like the idea of creating a virtual quilt, log onto the Anna Grossnickle Hines Quilting Page at http://www.aghines.com/Quilt/interactive/grid/grid.htm
. Try making different designs using the various colored triangles; recreate the virtual quilt on GRAPH PAPER. If you are up to the challenge, create an actual quilt square out of fabric shapes.
Make more life connections between math, art and culture
Identify various shapes in buildings by exploring images available online at sites like http://stockphotography.smugmug.com/gallery/710985/1/31039469
; you could also try a more specific math connection using tangrams
Check out the tangram exercises available at http://www.projects.ex.ac.uk/trol/trol/trolxk.pdf
to see how fast you can create the various figures using different parallelograms.
World Aids Day is Friday, December 1st. Gather a group of friends or family and commit to each creating at least one square for an AIDS memorial quilt. Combine your pieces with squares of other members of your community and donate your shapes or your quilt to a quilting group working on a larger project or an organization sponsoring programs to raise funds and awareness of the worldwide AIDS crisis.
Homeowners and homebuyers sometimes have questions about the square footage (area) of their property. In most cases, homes and construction lots are not perfectly rectangular or triangular (1/2 of a quadrilateral); because of this, the calculations are much more involved than simply adding the areas of various familiar shapes. In this lesson, we used triangles and quadrilaterals that all had 90 degree angles. In real life, that is most often not the case. Check out the Math Forum's (http://mathforum.org/library/drmath/sets/select/dm_area_irreg.html
) responses to several real-life questions involving area and perimeter of some "not so regular" spaces. For a less complicated (more interactive) view of the same kind of activity, complete the "Plot Plans & Silhouettes" activities available through Annenberg/CPB Math and Science Project at http://www.learner.org/teacherslab/math/geometry/space/plotplan/index.html
For more practice with trapezoids, visit the Open Reference Web site at http://www.mathopenref.com/trapezoidarea.html. Visit the Illuminations Web sites (For circles http://illuminations.nctm.org/ActivityDetail.aspx?ID=87
and for angle sums http://illuminations.nctm.org/ActivityDetail.aspx?ID=9
) to learn more about circumference and interior angle sums of various regular and irregular polygons.
Math concepts can often be traced to cultural roots. Did you know African cornrow curves are related to fractal geometry and that graffiti art is often reproduced using principles of coordinate geometry? Teaching math through culture is just one way of potentially meeting the needs of a multicultural, multiethnic, multilingual society. Visit any of the Web sites linked on the Culturally situated design tools Web site at http://www.rpi.edu/~eglash/csdt.html
to find new ways of engaging students using the texts of the real world.
Islam originated in the 7th century in the Middle East (http://worldatlas.com/Web image/countrys/me.htm
). Islamic art and architecture has been influential in Europe, Asia and the United States for hundreds of years. Visit the Antiques Roadshow "Tips of the Trade" Web site (http://www.pbs.org/wgbh/pages/roadshow/tips/tiles/tiles.html
) to learn about the global influence of Islamic tiles. | http://www.thirteen.org/edonline/adulted/lessons/lesson54_activities.html | 13 |
51 | Section 5: Testing the Law of Universal Gravitation
Early successes of the law of universal gravitation included an explanation for Kepler's laws of planetary orbits and the discovery of the planet Neptune. Like any physical law, however, its validity rests on its agreement with experimental observations. Although the theory of general relativity has replaced the law of universal gravitation as our best theory of gravity, the three elements of the universal law—the universal constant of gravitation, the equality of gravitational and inertial mass, and the inverse square law—are also key elements of general relativity. To test our understanding of gravity, physicists continue to examine these elements of the universal law of gravitation with ever-increasing experimental sensitivity.
Figure 10: Mass of a helium atom is not equal to the sum of the individual masses of its constituent parts.
The mass of an object does not equal the sum of the masses of its constituents. For example, the mass of a helium atom is a bit less than one percent smaller than the sum of the masses of the two neutrons, two protons, and two electrons that comprise it. The mass of the Earth is about five parts in 10¹⁰ smaller than the sum of the masses of the atoms that make up our planet. This difference arises from the nuclear and electrostatic binding—or potential—energy that holds the helium atom together and the gravitational binding (potential) energy that holds the Earth together.
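The "five parts in 10¹⁰" figure for the Earth can be checked with a back-of-the-envelope estimate. Treating the Earth as a uniform-density sphere, its gravitational binding energy is roughly 3GM²/(5R); dividing by the rest energy Mc² gives the fractional mass deficit. The sketch below is only that rough uniform-density model, not a detailed Earth model:

```python
G = 6.674e-11       # gravitational constant, N m^2 / kg^2
c = 2.998e8         # speed of light, m/s
M_earth = 5.972e24  # kg
R_earth = 6.371e6   # m

# Gravitational binding energy of a uniform-density sphere: U = 3 G M^2 / (5 R)
U = 3 * G * M_earth**2 / (5 * R_earth)

fraction = U / (M_earth * c**2)
print(f"binding-energy fraction ~ {fraction:.1e}")  # roughly 4e-10, i.e. a few parts in 10^10
```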
The inertial mass of an object therefore has contributions from the masses of the constituents and from all forms of binding energy that act within the object. If gravitational mass is to equal inertial mass, gravity must act equally on the constituent masses and the nuclear, electrostatic, and gravitational binding energies. Is this indeed the case? Does the Sun's gravity act on both the atoms in the Earth and the gravitational binding energy that holds the Earth together? These are questions that have to be answered by experimental measurements. Modern tests of the universality of free fall tell us that the answer to these questions is yes, at least to within the precision that the measurements have achieved to date.
Tests of the universality of free fall
To test the universality of free fall (UFF), experimentalists compare the accelerations of different materials under the influence of the gravitational force of a third body, called the "source." Many of the most sensitive tests have come from torsion balance measurements. A recent experiment used eight barrel-shaped test bodies attached to a central frame, with four made of beryllium (Be) on one side and four of titanium (Ti) on the other. The denser titanium bodies were hollowed out to make their masses equal to those of the beryllium bodies while preserving the same outer dimensions. All surfaces on the pendulum were coated by a thin layer of gold. The vacuum vessel that surrounded and supported the torsion fiber and pendulum rotated at a slow uniform rate about the tungsten fiber axis. Any differential acceleration of the two types of test bodies toward an external source would have led to a twist about the fiber that changed in sign as the apparatus rotated through 180°. Essential to the experiment was the removal of all extraneous (nongravitational) forces acting on the test bodies.
Figure 11: Torsion pendulum to test the universality of free fall.
Source: © Blayne Heckel.
For source masses, experiments have used locally constructed masses within the laboratory, local topographic features such as a hillside, the Earth itself, the Sun, and the entire Milky Way galaxy. Comparing the differential acceleration of test bodies toward the galactic center is of particular interest. Theorists think that dark matter causes roughly 30 percent of our solar system's acceleration about the center of the galaxy. The same dark matter force that helps to hold the solar system in orbit about the galactic center acts on the test bodies of a torsion pendulum. A dark matter force that acts differently on different materials would then lead to an apparent breakdown of the UFF. Because physicists have observed no differential acceleration in the direction of the galactic center, they conclude that dark matter interacts with ordinary matter primarily through gravity.
No violation of the UFF has yet been observed. Physicists use tests of the UFF to search for very weak new forces that may act between objects. Such forces would lead to an apparent violation of the UFF and would be associated with length scales over which the new forces act. Different experimental techniques have been used to test the UFF (and search for new forces) at different length scales. For example, there is a region between 10³ meters and 10⁵ meters over which torsion balances fail to produce reliable constraints on new weak forces. This is because over this length scale, we do not have sufficient knowledge of the density homogeneity of the Earth to calculate reliably the direction of the new force—it might point directly parallel to the fiber axis and not produce a torque on the pendulum. In this length range, the best limits on new forces come from modern "drop tower" experiments that directly compare the accelerations of different materials in free fall at the Earth's surface.
UFF tests in space
The future for tests of the UFF may lie in space-based measurements. In a drag-free satellite, concentric cylinders of different composition can be placed in free fall in the Earth's gravitational field. Experimentalists can monitor the relative displacement (and acceleration) of the two cylinders with exquisite accuracy for long periods of time using optical or superconducting sensors. Satellite-based measurements might achieve a factor of 1,000 times greater sensitivity to UFF violation than ground-based tests.
Figure 12: Apollo mission astronauts deploy corner cube reflectors.
Source: © NASA.
One source of space-based tests of the UFF already exists. The Apollo space missions left optical corner mirror reflectors on the Moon that can reflect Earth-based laser light. Accurate measurements of the time of flight of a laser pulse to the Moon and back provide a record of the Earth-Moon separation to a precision that now approaches 1 millimeter. Because both the Earth and the Moon are falling in the gravitational field of the Sun, this lunar laser ranging (LLR) experiment provides a test of the relative accelerations of the Earth and Moon toward the Sun with precision of 2 × 10⁻¹³ of their average accelerations. Gravitational binding energy provides a larger fraction of the Earth's mass than it does for the Moon. Were the UFF to be violated because gravity acts differently on gravitational binding energy than other types of mass or binding energy, then one would expect a result about 2,000 times larger than the experimental limit from LLR.
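That factor of roughly 2,000 can be estimated with the same uniform-density approximation: compute the gravitational self-energy fraction for the Earth and for the Moon, take the difference, and compare it with the 2 × 10⁻¹³ precision quoted above. The masses and radii below are standard textbook values, and the model is only a rough sketch:

```python
G = 6.674e-11
c = 2.998e8

def self_energy_fraction(mass_kg, radius_m):
    """Gravitational binding energy of a uniform sphere as a fraction of its rest energy."""
    return 3 * G * mass_kg**2 / (5 * radius_m) / (mass_kg * c**2)

f_earth = self_energy_fraction(5.972e24, 6.371e6)   # ~ 4e-10
f_moon  = self_energy_fraction(7.342e22, 1.737e6)   # ~ 2e-11

difference = f_earth - f_moon
llr_limit = 2e-13
print(f"Earth fraction: {f_earth:.1e}")
print(f"Moon fraction:  {f_moon:.1e}")
print(f"difference / LLR limit ~ {difference / llr_limit:.0f}")  # on the order of 2000
```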
Validating the inverse square law
Physicists have good reason to question the validity of the inverse square law at both large and short distances. Short length scales are the domain of the quantum world, where particles become waves and we can no longer consider point particles at rest. Finding a theory that incorporates gravity within quantum mechanics has given theoretical physicists a daunting challenge for almost a century; it remains an open question. At astronomical length scales, discrepancies between observations and the expectations of ordinary gravity require dark matter and dark energy to be the dominant constituents of the universe. How sure are we that the inverse square law holds at such vast distances?
Figure 13: Torsion pendulum to test the inverse square law of gravity at submillimeter distances.
Source: © Blayne Heckel.
The inverse square law has been tested over length scales ranging from 5 × 10⁻⁵ to 10¹⁵ meters. For the large lengths, scientists monitor the orbits of the planets, Moon, and spacecraft with high accuracy and compare them with the orbits calculated for a gravitational force that obeys the inverse square law (including small effects introduced by the theory of general relativity). Adding an additional force can lead to measurable modifications of the orbits. For example, general relativity predicts that the line connecting the perihelia and aphelia of an elliptical gravitational orbit (the points of closest and furthest approach to the Sun for planetary orbits, respectively) should precess slowly. Any violation of the inverse square law would change the precession rate of the ellipse's semi-major axis. So far, no discrepancy has been found between the observed and calculated orbits, allowing scientists to place tight limits on deviations of the inverse square law over solar system length scales.
Figure 14: Experimental limits on the universality of free fall.
Source: © Blayne Heckel.
At the shortest distances, researchers measure the gravitational force between plates separated by about 5 × 10⁻⁵ meters, a distance smaller than the diameter of a human hair. A thin conducting foil stretched between the plates eliminates any stray electrical forces. Recent studies using a torsion pendulum have confirmed the inverse square law at submillimeter distances. To probe even shorter distances, scientists have etched miniature (micro) cantilevers and torsion oscillators from silicon wafers. These devices have measured forces between macroscopic objects as close as 10⁻⁸ meters, but not yet with enough sensitivity to isolate the gravitational force.
Does the inverse square law hold at the tiny distances of the quantum world and at the large distances where dark matter and dark energy dominate? We don't know the answer to that question. Definitive tests of gravity at very small and large length scales are difficult to perform. Scientists have made progress in recent years, but they still have much to learn. | http://www.learner.org/courses/physics/unit/text.html?unit=3&secNum=5 | 13 |
69 | In computer networking, a packet is a formatted unit of data carried by a packet mode computer network. Computer communications links that do not support packets, such as traditional point-to-point telecommunications links, simply transmit data as a series of bytes, characters, or bits alone. When data is formatted into packets, the bitrate of the communication medium can be better shared among users than if the network were circuit switched.
Packet framing
A packet consists of two kinds of data: control information and user data (also known as payload). The control information provides data the network needs to deliver the user data, for example: source and destination addresses, error detection codes like checksums, and sequencing information. Typically, control information is found in packet headers and trailers, with payload data in between.
Different communications protocols use different conventions for distinguishing between the elements and for formatting the data. In Binary Synchronous Transmission, the packet is formatted in 8-bit bytes, and special characters are used to delimit the different elements. Other protocols, like Ethernet, establish the start of the header and data elements by their location relative to the start of the packet. Some protocols format the information at a bit level instead of a byte level.
A good analogy is to consider a packet to be like a letter: the header is like the envelope, and the data area is whatever the person puts inside the envelope. A difference, however, is that some networks can break a larger packet into smaller packets when necessary (note that these smaller data elements are still formatted as packets).
A network design can achieve two major results by using packets: error detection and multiple host addressing.
Error detection
The packet trailer often contains error checking data to detect errors that occur during transmission.
Host addressing
Modern networks usually connect three or more host computers together; in such cases the packet header generally contains addressing information so that the packet is received by the correct host computer. In complex networks constructed of multiple routing and switching nodes, like the ARPANET and the modern Internet, a series of packets sent from one host computer to another may follow different routes to reach the same destination. This technology is called packet switching.
In the seven-layer OSI model of computer networking, 'packet' strictly refers to a data unit at layer 3, the Network Layer. The correct term for a data unit at the Data Link Layer—Layer 2 of the seven-layer OSI model—is a frame, and at Layer 4, the Transport Layer, the correct term is a segment or datagram. Hence, e.g., a TCP segment is carried in one or more IP Layer packets, which are each carried in one or more Ethernet frames—though the mapping of TCP, IP, and Ethernet, to the layers of the OSI model is not exact.
In general, the term packet applies to any message formatted as a packet, while the term datagram is reserved for packets of an "unreliable" service. A "reliable" service is one that notifies the user if delivery fails, while an "unreliable" one does not notify the user if delivery fails. For example, IP provides an unreliable service. Together, TCP and IP provide a reliable service, whereas UDP and IP provide an unreliable one. All these protocols use packets, but UDP packets are generally called datagrams.
When the ARPANET pioneered packet switching, it provided a reliable packet delivery procedure to its connected hosts via its 1822 interface. A host computer simply arranged the data in the correct packet format, inserted the address of the destination host computer, and sent the message across the interface to its connected Interface Message Processor. Once the message was delivered to the destination host, an acknowledgement was delivered to the sending host. If the network could not deliver the message, it would send an error message back to the sending host.
Meanwhile, the developers of CYCLADES and of ALOHAnet demonstrated that it was possible to build an effective computer network without providing reliable packet transmission. This lesson was later embraced by the designers of Ethernet.
If a network does not guarantee packet delivery, then it becomes the host's responsibility to provide reliability by detecting and retransmitting lost packets. Subsequent experience on the ARPANET indicated that the network itself could not reliably detect all packet delivery failures, and this pushed responsibility for error detection onto the sending host in any case. This led to the development of the end-to-end principle, which is one of the Internet's fundamental design assumptions.
Example: IP packets
- 4 bits that contain the version, that specifies if it's an IPv4 or IPv6 packet,
- 4 bits that contain the Internet Header Length, which is the length of the header in multiples of 4 bytes (e.g., 5 means 20 bytes).
- 8 bits that contain the Type of Service, also referred to as Quality of Service (QoS), which describes what priority the packet should have,
- 16 bits that contain the length of the packet in bytes,
- 16 bits that contain an identification tag to help reconstruct the packet from several fragments,
- 3 bits. The first contains a zero, followed by a flag that says whether the packet is allowed to be fragmented or not (DF: Don't fragment), and a flag to state whether more fragments of a packet follow (MF: More Fragments)
- 13 bits that contain the fragment offset, a field to identify position of fragment within original packet
- 8 bits that contain the Time to live (TTL), which is the number of hops (router, computer or device along a network) the packet is allowed to pass before it dies (for example, a packet with a TTL of 16 will be allowed to go across 16 routers to get to its destination before it is discarded),
- 8 bits that contain the protocol (TCP, UDP, ICMP, etc.)
- 16 bits that contain the Header Checksum, a number used in error detection,
- 32 bits that contain the source IP address,
- 32 bits that contain the destination address.
After those 160 bits, optional flags can be added of varied length, which can change based on the protocol used, then the data that packet carries is added. An IP packet has no trailer. However, an IP packet is often carried as the payload inside an Ethernet frame, which has its own header and trailer.
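The 160-bit header layout described above maps onto a fixed 20-byte binary structure. The sketch below uses Python's struct module to build and then re-parse a minimal IPv4 header with no options; the field values are made up for illustration, and the header checksum is simply left at zero rather than computed:

```python
import socket
import struct

# Pack a minimal 20-byte IPv4 header (no options), with the fields described above.
version_ihl = (4 << 4) | 5             # version 4, header length 5 x 4 = 20 bytes
tos         = 0                        # type of service
total_len   = 20 + 32                  # header plus a hypothetical 32-byte payload
ident       = 0x1C46                   # identification tag
flags_frag  = 0x4000                   # DF flag set, fragment offset 0
ttl         = 64
proto       = 17                       # 17 = UDP
checksum    = 0                        # left as 0 in this sketch
src         = socket.inet_aton("192.0.2.1")
dst         = socket.inet_aton("192.0.2.2")

header = struct.pack("!BBHHHBBH4s4s", version_ihl, tos, total_len,
                     ident, flags_frag, ttl, proto, checksum, src, dst)

# Unpack it again and pull a few fields back out.
fields = struct.unpack("!BBHHHBBH4s4s", header)
print("version:", fields[0] >> 4, "IHL:", fields[0] & 0x0F)
print("TTL:", fields[5], "protocol:", fields[6])
print("source:", socket.inet_ntoa(fields[8]))
```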
Delivery not guaranteed
Many networks do not provide guarantees of delivery, nonduplication of packets, or in-order delivery of packets, e.g., the UDP protocol of the Internet. However, it is possible to layer a transport protocol on top of the packet service that can provide such protection; TCP and UDP are the best examples of layer 4, the Transport Layer, of the seven layered OSI model.
The header of a packet specifies the data type, packet number, total number of packets, and the sender's and receiver's IP addresses.
The term frame is sometimes used to refer to a packet exactly as transmitted over the wire or radio.
Example: the NASA Deep Space Network
The Consultative Committee for Space Data Systems (CCSDS) packet telemetry standard defines the protocol used for the transmission of spacecraft instrument data over the deep-space channel. Under this standard, an image or other data sent from a spacecraft instrument is transmitted using one or more packets.
CCSDS packet definition
A packet is a block of data with length that can vary between successive packets, ranging from 7 to 65,542 bytes, including the packet header.
- Packetized data is transmitted via frames, which are fixed-length data blocks. The size of a frame, including frame header and control information, can range up to 2048 bytes.
- Packet sizes are fixed during the development phase.
Because packet lengths are variable but frame lengths are fixed, packet boundaries usually do not coincide with frame boundaries.
Telecom processing notes
Data in a frame is typically protected from channel errors by error-correcting codes.
- Even when the channel errors exceed the correction capability of the error-correcting code, the presence of errors nearly always is detected by the error-correcting code or by a separate error-detecting code.
- Frames for which uncorrectable errors are detected are marked as undecodable and typically are deleted.
Handling data loss
Deleted undecodable whole frames are the principal type of data loss that affects compressed data sets. In general, there would be little to gain from attempting to use compressed data from a frame marked as undecodable.
- When errors are present in a frame, the bits of the subband pixels that were decoded before the first bit error will remain intact, but all subsequent decoded bits in the segment usually will be completely corrupted; a single bit error is often just as disruptive as many bit errors.
- Furthermore, compressed data usually are protected by powerful, long-blocklength error-correcting codes, which are the types of codes most likely to yield substantial fractions of bit errors throughout those frames that are undecodable.
Thus, frames with detected errors would be essentially unusable even if they were not deleted by the frame processor.
This data loss can be compensated for with the following mechanisms.
- If an erroneous frame escapes detection, the decompressor will blindly use the frame data as if they were reliable, whereas in the case of detected erroneous frames, the decompressor can base its reconstruction on incomplete, but not misleading, data.
- However, it is extremely rare for an erroneous frame to go undetected.
- For frames coded by the CCSDS Reed–Solomon code, fewer than 1 in 40,000 erroneous frames can escape detection.
- All frames not employing the Reed–Solomon code use a cyclic redundancy check (CRC) error-detecting code, which has an undetected frame-error rate of less than 1 in 32,000.
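As an illustration of how a cyclic redundancy check of this kind works in general, the sketch below implements a bit-by-bit CRC-16-CCITT (a common 16-bit CRC chosen here for illustration; the exact polynomial and conventions of a given telemetry standard may differ). A receiver recomputes the CRC over the received frame and compares it with the transmitted value; a mismatch marks the frame as erroneous:

```python
def crc16_ccitt(data: bytes, poly: int = 0x1021, crc: int = 0xFFFF) -> int:
    """Bitwise CRC-16 using the CCITT polynomial x^16 + x^12 + x^5 + 1."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ poly) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

frame = b"example telemetry frame payload"   # made-up frame contents
tx_crc = crc16_ccitt(frame)

# Simulate a single corrupted bit in transit.
corrupted = bytearray(frame)
corrupted[3] ^= 0x04

print(crc16_ccitt(frame) == tx_crc)             # True: clean frame passes the check
print(crc16_ccitt(bytes(corrupted)) == tx_crc)  # False: the error is detected
```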
Example: Radio and TV Broadcasting
MPEG packetized stream
Packetized Elementary Stream (PES) is a specification defined by the MPEG communication protocol (see the MPEG-2 standard) that allows an elementary stream to be divided into packets. The elementary stream is packetized by encapsulating sequential data bytes from the elementary stream inside PES packet headers.
A typical method of transmitting elementary stream data from a video or audio encoder is to first create PES packets from the elementary stream data and then to encapsulate these PES packets inside MPEG transport stream (TS) packets or an MPEG program stream (PS). The TS packets can then be multiplexed and transmitted using broadcasting techniques, such as those used in ATSC and DVB.
PES packet header
| Name | Size | Description |
| --- | --- | --- |
| Packet start code prefix | 3 bytes | 0x000001 |
| Stream id | 1 byte | Examples: Audio streams (0xC0-0xDF), Video streams (0xE0-0xEF) |
| PES Packet length | 2 bytes | Can be zero, as in "not specified," for video streams in MPEG transport streams |
| Optional PES header | variable length | |
| Stuffing bytes | variable length | |
| Data | | See elementary stream. In the case of private streams, the first byte of the payload is the sub-stream number. |

Note: The first 4 bytes (packet start code prefix plus stream id) are called the 32-bit start code.
Optional PES header
| Name | Number of bits | Description |
| --- | --- | --- |
| Marker bits | 2 | 10 binary or 0x2 hex |
| Scrambling control | 2 | 00 implies not scrambled |
| Data alignment indicator | 1 | 1 indicates that the PES packet header is immediately followed by the video start code or audio syncword |
| Copyright | 1 | 1 implies copyrighted |
| Original or Copy | 1 | 1 implies original |
| PTS DTS indicator | 2 | 11 = both present, 10 = only PTS |
| ES rate flag | 1 | |
| DSM trick mode flag | 1 | |
| Additional copy info flag | 1 | |
| PES header length | 8 | Gives the length of the remainder of the PES header |
| Optional fields | variable length | Presence is determined by the flag bits above |
| Stuffing Bytes | variable length | 0xff |
NICAM
In order to provide mono "compatibility", the NICAM signal is transmitted on a subcarrier alongside the sound carrier. This means that the FM or AM regular mono sound carrier is left alone for reception by monaural receivers.
A NICAM-based stereo-TV infrastructure can transmit a stereo TV programme as well as the mono "compatibility" sound at the same time, or can transmit two or three entirely different sound streams. This latter mode could be used to transmit audio in different languages, in a similar manner to that used for in-flight movies on international flights. In this mode, the user can select which soundtrack to listen to when watching the content by operating a "sound-select" control on the receiver.
NICAM offers the following possibilities. The mode is auto-selected by the inclusion of a 3-bit type field in the data-stream
- One digital stereo sound channel.
- Two completely different digital mono sound channels.
- One digital mono sound channel and a 352 kbit/s data channel.
- One 704 kbit/s data channel.
The four other options could be implemented at a later date. Only the first two of the ones listed are known to be in general use however.
NICAM packet transmission
The NICAM packet (except for the header) is scrambled with a nine-bit pseudo-random bit-generator before transmission.
- The topology of this pseudo-random generator yields a bitstream with a repetition period of 511 bits.
- The pseudo-random generator's polynomial is: x^9 + x^4 + 1.
- The pseudo-random generator is initialized with: 111111111.
Making the NICAM bitstream look more like white noise is important because this reduces signal patterning on adjacent TV channels.
- The NICAM header is not subject to scrambling. This is necessary so as to aid in locking on to the NICAM data stream and resynchronisation of the data stream at the receiver.
- At the start of each NICAM packet the pseudo-random bit generator's shift-register is reset to all-ones.
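The scrambler described above is a nine-bit linear-feedback shift register. A minimal sketch is below; it seeds the register with all ones and XORs the generator's output with the payload bits. Tap-numbering conventions for LFSRs vary, so treat the exact bit ordering here as illustrative, but any maximal-length register built from the x^9 + x^4 + 1 polynomial repeats every 511 bits, as stated above:

```python
def prbs_x9_x4(n_bits, seed=0b111111111):
    """Return n_bits from a 9-bit Fibonacci LFSR seeded with all ones."""
    state = seed
    out = []
    for _ in range(n_bits):
        out.append(state & 1)                          # take the low bit as output
        feedback = ((state >> 0) ^ (state >> 4)) & 1   # taps corresponding to x^9 + x^4 + 1
        state = (state >> 1) | (feedback << 8)         # shift down, feed back into the top bit
    return out

# Check the repetition period: the state first returns to the seed after 511 steps.
state, steps = 0b111111111, 0
while True:
    feedback = ((state >> 0) ^ (state >> 4)) & 1
    state = (state >> 1) | (feedback << 8)
    steps += 1
    if state == 0b111111111:
        break
print(steps)  # 511

# Scrambling is just an XOR of payload bits with the generator output (header bits excluded).
payload = [1, 0, 1, 1, 0, 0, 1, 0]          # hypothetical packet bits
keystream = prbs_x9_x4(len(payload))
scrambled = [p ^ k for p, k in zip(payload, keystream)]
print(scrambled)
```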
See also
- DHCP server
- Fast packet switching
- Louis Pouzin
- Mangled packet
- Packet analyzer
- Packet generation model
- Protocol data unit
- Statistical multiplexing
- Tail drop
- TCP segment
125 | |This article/section deals with mathematical concepts appropriate for a student in late high school or early university.|
An integral is a mathematical construction used in calculus to represent the area of a region in a plane bounded by the graph of a function in one real variable. Definite integrals use the notation ∫[a,b] f(x) dx,
where a and b represent the lower and upper bounds of the interval being integrated over, f(x) represents the function being integrated (the integrand), and dx represents a dummy variable given various definitions, depending on the context of the integral.
Indefinite integrals use the notation ∫ f(x) dx.
A definite integral can be defined as a limit of Riemann sums: ∫[a,b] f(x) dx = lim (as n → ∞) of ∑ f(xᵢ*) Δx, where the interval [a,b] is divided into n pieces of width Δx = (b − a)/n and each xᵢ* is a sample point chosen from the i-th piece. The fact that f is of bounded variation implies that this Riemann sum converges and is independent of the choices of the sample points xᵢ*. For integration of measurable functions which may not have bounded variation, the Lebesgue integral must be used. It is easy to see that ∫[a,b] f(x) dx is the area under the curve f(x) between the vertical line x=a and the vertical line x=b. a and b are called the limits of the integral, and f(x) is called the integrand. This type of integral is referred to as a definite integral, or an integral between definite limits.
Indefinite integrals are related to definite integrals by the second part of the Fundamental Theorem of Calculus.
Note that, because the derivative of any constant function is 0, an antiderivative is determined only up to an added constant. Therefore, ∫ 2x dx does not simply equal x², but rather x² + C for any constant real number C. The above-mentioned functions are the family of antiderivatives for 2x.
The concept of integration can be extended to functions in more than one real variable, as well as functions defined over the complex numbers.
Integration has many physical applications. The indefinite integral of an acceleration function with respect to time gives the velocity function defined to within a constant, while the definite integral of an acceleration function with respect to time gives the change in velocity between the upper and lower limits of integration. Likewise, the indefinite integral of a time function of velocity with respect to time gives the position function defined to within a constant, and the definite integral of this velocity function will give the change in position between the two limits of integration.
Methods of Integration
For a more detailed treatment, see Methods of integration.
There are many different ways to integrate functions. Sometimes, when it is impossible to directly integrate a function, an approximation is used, such as the Riemann integral. Or, there may be a process to integrate the function using a rule, such as with Integration by parts.
Types of Integrals
There are several types of integrals. Definite integrals are integrals that are evaluated over limits of integration. Indefinite integrals are not evaluated over limits of integration. Evaluating an indefinite integral yields the antiderivative of the integrand plus a constant of integration.
A third type - an improper integral - is an integral in which one of the limits of integration is infinity. Evaluating an improper integral requires taking the limit of the definite integral as the appropriate limit of integration approaches infinity.
A fourth type of integral is known as the Lebesgue integral. The Lebesgue integral is a generalization of the Riemann integral that allows traditionally non-integrable functions, such as the indicator function on the rational numbers, to be integrated.
Properties of integrals
Integration has the following properties:
If an integral contains two functions added together, it may be rewritten as two separate integrals, each containing one function: ∫[a,b] (f(x) + g(x)) dx = ∫[a,b] f(x) dx + ∫[a,b] g(x) dx.
Constant Multiplicative Distribution
If an integral contains a function multiplied by a constant, it may be rewritten as the constant times the integral of the function: ∫[a,b] c·f(x) dx = c·∫[a,b] f(x) dx.
Any integral that contains the same upper and lower bounds is equal to zero: ∫[a,a] f(x) dx = 0.
An integral may be rewritten as the sum of two integrals with adjacent bounds: ∫[a,b] f(x) dx = ∫[a,c] f(x) dx + ∫[c,b] f(x) dx.
Any integral is equal to the additive inverse of the same integral with reversed upper and lower bounds: ∫[a,b] f(x) dx = −∫[b,a] f(x) dx.
Antiderivative vs Integration
There are important differences between the anti-derivative and integration. An anti-derivative of a function f(x) is a function F(x) such that F′(x) = f(x).
The integral of a function can be evaluated using its antiderivative: ∫[a,b] f(x) dx = F(b) − F(a).
This works for the kind of functions encountered in late high school and early university mathematics. It is, however, an incomplete method. For example, one cannot write the anti-derivative of a function such as e^(−x²) in terms of familiar functions (such as trigonometric functions, exponentials, and logarithms) and function operations.
For a more detailed treatment, see Riemann Integral.
As a geometric interpretation of the integral as the area under a curve, the Riemann integral consists of dividing the area under the curve of the function into slices. The domain of the function is partitioned into N segments of width Δx = (b − a)/N. The height of each rectangular slice depends on how it is chosen: the lower sum uses the smallest value of the function on each segment, the upper sum the largest. In the limit N → ∞, these two sums converge toward the integral. If they approach the same value, then the integral exists; otherwise it is undefined.
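The lower-sum/upper-sum picture is easy to reproduce numerically. The sketch below uses f(x) = x² on [0, 1] purely as an example and shows the two sums squeezing toward the exact value 1/3 as N grows:

```python
def lower_upper_sums(f, a, b, n):
    """Lower and upper Riemann sums for an increasing function f on [a, b]."""
    dx = (b - a) / n
    xs = [a + i * dx for i in range(n + 1)]
    lower = sum(f(xs[i]) * dx for i in range(n))       # left endpoints (minimum for increasing f)
    upper = sum(f(xs[i + 1]) * dx for i in range(n))   # right endpoints (maximum for increasing f)
    return lower, upper

f = lambda x: x ** 2
for n in (10, 100, 1000):
    lo, up = lower_upper_sums(f, 0.0, 1.0, n)
    print(n, round(lo, 5), round(up, 5))   # both approach 1/3 ≈ 0.33333
```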
For a more detailed treatment, see Simpson's rule.
Simpson's Rule is an extension of the Riemann integral. Instead of using shapes such as rectangles or trapezoids, Simpson's Rule allows for the use of parabolas or other higher order polynomials to approximate integrals.
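A minimal composite Simpson's rule looks like the sketch below (the number of subintervals must be even); the test integrand is again just an example:

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson's rule with an even number of subintervals n."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        weight = 4 if i % 2 else 2
        total += weight * f(a + i * h)
    return total * h / 3

# Example: the integral of sin(x) from 0 to pi is exactly 2.
print(simpson(math.sin, 0.0, math.pi, 10))   # ≈ 2.000007, already very close
```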
Multiple integrals are integrals extended to higher dimensions. Just like a definite integral gives the area under a 2-dimensional function, a double integral gives the volume under a three-dimensional function, and a triple integral gives the four-dimensional volume under a four-dimensional function. An ordinary integral is integrated over a single variable, such as x; similarly, a double integral is integrated over a two-dimensional area, usually written A; and a triple integral is integrated over a three-dimensional volume, usually written V.
- A double integral: ∫∫_A f(x, y) dA
- A triple integral: ∫∫∫_V f(x, y, z) dV
Fubini's Theorem states that, for double integrals, ∫∫_A f(x, y) dA can be evaluated as an iterated integral, ∫ ( ∫ f(x, y) dx ) dy,
and for triple integrals, ∫∫∫_V f(x, y, z) dV can be evaluated as ∫ ( ∫ ( ∫ f(x, y, z) dx ) dy ) dz.
These functions can be integrated with respect to the variables x, y, and z in any order. For example, the double integral above is also equivalent to ∫ ( ∫ f(x, y) dy ) dx.
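Fubini's theorem can be illustrated numerically by iterating two one-dimensional quadratures in either order and checking that the results agree. The sketch below uses a midpoint rule and the example integrand f(x, y) = x·y² over a rectangle chosen for illustration:

```python
def integrate_1d(g, a, b, n=400):
    """Midpoint-rule approximation of a one-variable integral of g on [a, b]."""
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

f = lambda x, y: x * y ** 2   # example integrand

# Integrate in x first, then in y ...
inner_x = lambda y: integrate_1d(lambda x: f(x, y), 0.0, 2.0)
result_xy = integrate_1d(inner_x, 0.0, 3.0)

# ... and in y first, then in x.
inner_y = lambda x: integrate_1d(lambda y: f(x, y), 0.0, 3.0)
result_yx = integrate_1d(inner_y, 0.0, 2.0)

print(round(result_xy, 3), round(result_yx, 3))   # both ≈ 18.0 (the exact value is 18)
```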
The Lebesgue integral is usually introduced in late university or early postgraduate mathematics. It is naively described as rotating the Riemann integral, in that it is the range instead of the domain that is partitioned. An understanding of measure theory is required to understand this technique.
The Lebesgue integral is defined for every function for which the Riemann integral is defined, as well as for an even larger class of functions: this possibility of integrating functions which are not Riemann integrable is a large part of the motivation for the Lebesgue theory. A typical example of a function which is Lebesgue integrable but not Riemann integrable is the characteristic function of the rationals, 1_Q, defined by 1_Q(x) = 1 if x is rational and 1_Q(x) = 0 if x is irrational.
Because the rationals are only countable, this function is zero "almost everywhere": there are far more irrationals than rationals, and the function is 0 at all of these. Thus we would expect that its integral over any interval should be 0.
Unfortunately, this function has so many discontinuities that its Riemann integral is not defined. However, if we use the Lebesgue integral instead, the function is integrable as hoped, and has the expected value 0. | http://conservapedia.com/Integration | 13 |
90 | Just as we have rotational counterparts for displacement,
velocity, and acceleration, so do we have rotational counterparts
for force, mass, and Newton’s Laws. As with angular kinematics,
the key here is to recognize the striking similarity between rotational
and linear dynamics, and to learn to move between the two quickly.
If a net force is applied to an object’s center
of mass, it will not cause the object to rotate. However, if a net
force is applied to a point other than the center of mass, it will
affect the object’s rotation. Physicists call the effect of force
on rotational motion torque.
Consider a lever mounted on a wall so that the lever is
free to move around an axis of rotation O.
In order to lift the lever, you apply a force F to
point P, which is a distance r away from
the axis of rotation, as illustrated below.
Suppose the lever is very heavy and resists your efforts
to lift it. If you want to put all you can into lifting this lever,
what should you do? Simple intuition would suggest, first of all, that
you should lift with all your strength. Second, you should grab
onto the end of the lever, and not a point near its axis of rotation.
Third, you should lift in a direction that is perpendicular to the
lever: if you pull very hard away from the wall or push very hard toward
the wall, the lever won’t rotate at all.
Let’s summarize. In order to maximize torque, you need
Maximize the magnitude of the force, F,
that you apply to the lever.
the distance, r, from the axis of
rotation of the point on the lever to which you apply the force.
the force in a direction perpendicular to the lever.
We can apply these three requirements to an equation for torque: τ = rF sin θ. In this equation, θ is the angle made between the vector for the applied force and the lever.
Torque Defined in Terms of Perpendicular Components
There’s another way of thinking about torque that may
be a bit more intuitive than the definition provided above. Torque
is the product of the distance of the applied force from the axis
of rotation and the component of the applied force that is perpendicular
to the lever arm. Or, alternatively, torque is the product of the
applied force and the component of the length of the lever arm that
runs perpendicular to the applied force.
We can express these relations mathematically as follows: τ = rF⊥ and τ = r⊥F, where F⊥ is the component of the applied force perpendicular to the lever arm and r⊥ is the component of the lever arm perpendicular to the applied force.
Torque Defined as a Vector Quantity
Torque, like angular velocity and angular acceleration,
is a vector quantity. Most precisely, it is the cross product of
the displacement vector, r,
from the axis of rotation to the point where the force is applied,
and the vector for the applied force, F.
To determine the direction of the torque vector, use the
right-hand rule, curling your fingers around from the r vector
over to the F vector.
In the example of lifting the lever, the torque would be represented
by a vector at O pointing out of the page.
A student exerts a force of 50 N on a lever at a distance 0.4 m from its axis of rotation. The student pulls at an angle that is 60º above the lever arm. What is the torque experienced by the lever arm?
Let's plug these values into the first equation we saw: τ = rF sin θ = (0.4 m)(50 N)(sin 60º) ≈ 17.3 N·m.
This vector has its tail at the axis of rotation, and,
according to the right-hand rule, points out of the page.
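The arithmetic in this example is a one-liner; the sketch below simply confirms the r·F·sin θ computation and shows how quickly the torque drops when the pull is far from perpendicular:

```python
import math

def torque(r, force, angle_deg):
    """Torque magnitude for a force applied at distance r, at angle_deg to the lever."""
    return r * force * math.sin(math.radians(angle_deg))

print(round(torque(0.4, 50, 60), 1))   # ≈ 17.3 N·m, the value found above
print(round(torque(0.4, 50, 90), 1))   # 20.0 N·m: a perpendicular pull is the best you can do
print(round(torque(0.4, 50, 10), 1))   # ≈ 3.5 N·m: pulling nearly along the lever wastes effort
```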
Newton’s First Law and Equilibrium
Newton’s Laws apply to torque just as they apply to force.
You will find that solving problems involving torque is made a great
deal easier if you’re familiar with how to apply Newton’s Laws to
them. The First Law states:
If the net torque acting on a rigid object is
zero, it will rotate with a constant angular velocity.
The most significant application of Newton’s First Law
in this context is with regard to the concept of equilibrium.
When the net torque acting on a rigid object is zero, and that object is
not already rotating, it will not begin to rotate.
When SAT II Physics tests you on equilibrium, it will
usually present you with a system where more than one torque is
acting upon an object, and will tell you that the object is not
rotating. That means that the net torque acting on the object is
zero, so that the sum of all torques acting in the clockwise direction
is equal to the sum of all torques acting in the counterclockwise
direction. A typical SAT II Physics question will ask you to determine the
magnitude of one or more forces acting on a given object that is
in equilibrium.
Two masses are balanced on the scale pictured above. If the bar connecting
the two masses is horizontal and massless, what is the weight of
mass m in terms of M?
Since the scale is not rotating, it is in equilibrium,
and the net torque acting upon it must be zero. In other words,
the torque exerted by mass M must
be equal and opposite to the torque exerted by mass m.
Because m is three times
as far from the axis of rotation as M,
it applies three times as much torque per mass. If the two masses
are to balance one another out, then M must
be three times as heavy as m.
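A quick numerical check of this balance argument: set the clockwise and counterclockwise torques equal and solve. The distances below are placeholders; only their 3-to-1 ratio matters, and the factor of g cancels from both sides:

```python
def balancing_mass(m, r_m, r_M):
    """Mass M needed at distance r_M to balance mass m at distance r_m."""
    # Equilibrium: M * g * r_M = m * g * r_m, so M = m * r_m / r_M (g cancels).
    return m * r_m / r_M

m = 2.0                                       # kg, hypothetical value for the far mass
print(balancing_mass(m, r_m=0.9, r_M=0.3))    # 6.0 kg: M must be three times as heavy as m
```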
Newton’s Second Law
We have seen that acceleration has a rotational equivalent
in angular acceleration, α, and that force has a rotational equivalent in torque, τ. Just as the familiar version of Newton's Second
Law tells us that the acceleration of a body is proportional to
the force applied to it, the rotational version of Newton’s Second
Law tells us that the angular acceleration of a body is proportional
to the torque applied to it.
Of course, force is also proportional to mass, and there
is also a rotational equivalent for mass: the moment of inertia, I, which represents an object's resistance to being rotated. Using the three variables, τ, I, and α, we can arrive at a rotational equivalent for Newton's Second Law: τ = Iα.
As you might have guessed, the real challenge involved
in the rotational version of Newton’s Second Law is sorting out
the correct value for the moment of inertia.
Moment of Inertia
What might make a body more difficult to rotate? First
of all, it will be difficult to set in a spin if it has a great
mass: spinning a coin is a lot easier than spinning a lead block.
Second, experience shows that the distribution of a body’s mass
has a great effect on its potential for rotation. In general, a
body will rotate more easily if its mass is concentrated near the
axis of rotation, but the calculations that go into determining
the precise moment of inertia for different bodies is quite complex.
Moment of inertia for a single particle
Consider a particle of mass m that
is tethered by a massless string of length r to
point O, as pictured below:
The torque that produces the angular acceleration of the particle is τ = rF, and it is directed out of the page. From the linear version of Newton's Second Law, we know that F = ma. If we multiply both sides of this equation by r, we get rF = mra; since the particle's tangential acceleration is a = αr, this becomes τ = rF = mr²α. If we compare this equation to the rotational version of Newton's Second Law, τ = Iα, we see that the moment of inertia of our particle must be mr².
Moment of inertia for rigid bodies
Consider a wheel, where every particle in the wheel moves
around the axis of rotation. The net torque on the wheel is the
sum of the torques exerted on each particle in the wheel. In its
most general form, the rotational version of Newton’s Second Law
takes into account the moment of inertia of each individual particle
in a rotating system: τ_net = (∑ mᵢrᵢ²)α.
Of course, adding up the radius and mass of every particle
in a system is very tiresome unless the system consists of only
two or three particles. The moment of inertia for more complex systems
can only be determined using calculus. SAT II Physics doesn’t expect you
to know calculus, so it will give you the moment of inertia for
a complex body whenever the need arises. For your own reference,
however, here is the moment of inertia for a few common shapes.
In these figures, M is the
mass of the rigid body, R is the radius
of round bodies, and L is the distance
on a rod between the axis of rotation and the end of the rod. Note
that the moment of inertia depends on the shape and mass of the
rigid body, as well as on its axis of rotation, and that for most
objects, the moment of inertia is a multiple of MR².
A record of mass M and radius R is
free to rotate around an axis through its center, O.
A tangential force F is
applied to the record. What must one do to maximize the angular acceleration?
(A) Make F and M as large as possible and R as small as possible.
(B) Make M as large as possible and F and R as small as possible.
(C) Make F as large as possible and M and R as small as possible.
(D) Make R as large as possible and F and M as small as possible.
(E) Make F, M, and R as large as possible.
To answer this question, you don’t need to know exactly
what a disc’s moment of inertia is—you just need to be familiar
with the general principle that it will be some multiple of MR².
The rotational version of Newton’s Second Law tells us
, and so
we don’t know what I
is, but we know
that it is some multiple of MR2
That’s enough to formulate an equation telling us all we need to
As we can see, the angular acceleration increases with
greater force, and with less mass and radius; therefore C is
the correct answer.
Alternately, you could have answered this question by
physical intuition. You know that the more force you exert on a
record, the greater its acceleration. Additionally, if you exert
a force on a small, light record, it will accelerate faster than
a large, massive record.
The masses in the figure above are initially held at rest and are then released. If the mass of the pulley is M, what is the angular acceleration of the pulley? The moment of inertia of a disk spinning around its center is (1/2)MR².
This is the only situation on SAT II Physics where you
may encounter a pulley that is not considered massless. Usually
you can ignore the mass of the pulley block, but it matters when
your knowledge of rotational motion is being tested.
In order to solve this problem, we first need to determine
the net torque acting on the pulley, and then use Newton’s
Second Law to determine the pulley’s angular acceleration. The weight
of each mass is transferred to the tension in the rope, and the
two forces of tension on the pulley block exert torques in opposite
directions as illustrated below:
To calculate the net torque, one must take into account the tension in the ropes, the inertial resistance to motion of the hanging masses, and the inertial resistance of the pulley itself. The net torque on the pulley is the difference between the two rope tensions multiplied by the pulley's radius, and Newton's Second Law for rotation sets this equal to Iα.
Solve for the tensions by applying Newton's Second Law to each hanging mass (for each one, the difference between its weight and the rope tension equals its mass times a). Substitute these expressions, together with a = Rα, into the torque equation and solve for the angular acceleration. Because the result is positive, we know that the pulley will spin in the counterclockwise direction and the heavier block will drop. | http://www.sparknotes.com/testprep/books/sat2/physics/chapter10section4.rhtml | 13
84 | Sections 8.4 - 8.6
We've looked at the rotational equivalents of displacement, velocity, and acceleration; now we'll extend the parallel between straight-line motion and rotational motion by investigating the rotational equivalent of force, which is torque.
To get something to move in a straight-line, or to deflect an object traveling in a straight line, it is necessary to apply a force. Similarly, to start something spinning, or to alter the rotation of a spinning object, a torque must be applied.
A torque is a force exerted at a distance from the axis of rotation; the easiest way to think of torque is to consider a door. When you open a door, where do you push? If you exert a force at the hinge, the door will not move; the easiest way to open a door is to exert a force on the side of the door opposite the hinge, and to push or pull with a force perpendicular to the door. This maximizes the torque you exert.
I will state the equation for torque in a slightly different way than the book does. Note that the symbol for torque is the Greek letter tau. Torque is the product of the distance from the point of rotation to where the force is applied × the force × the sine of the angle between the line you measure distance along and the line of the force: τ = r F sin θ
In a given situation, there are usually three ways to determine the torque arising from a particular force. Consider the example of the torque exerted by a rope tied to the end of a hinged rod, as shown in the diagram.
The first thing to notice is that the torque is a counter-clockwise torque, as it tends to make the rod spin in a counter-clockwise direction. The rod does not spin because the rope's torque is balanced by a clockwise torque coming from the weight of the rod itself. We'll look at that in more detail later; for now, consider just the torque exerted by the rope.
There are three equivalent ways to determine this torque, as shown in the diagram below.
Method 1 - In method one, simply measure r from the hinge along the rod to where the force is applied, multiply by the force, and then multiply by the sine of the angle between the rod (the line you measure r along) and the force.
Method 2 - For method two, set up a right-angled triangle, so that there is a 90° angle between the line you measure the distance along and the line of the force. This is the way the textbook does it; done in this way, the line you measure distance along is called the lever arm. If we give the lever arm the symbol l, from the right-angled triangle it is clear that l = r sin θ.
Using this to calculate the torque gives: τ = l F = r F sin θ.
Method 3 - In this method, split the force into components, perpendicular to the rod and parallel to the rod. The component parallel to the rod is along a line passing through the hinge, so it is not trying to make the rod spin clockwise or counter-clockwise; it produces zero torque. The perpendicular component (F sin θ) gives plenty of torque, the size of which is given by: τ = (F sin θ) r = r F sin θ.
Any force that is along a line which passes through the axis of rotation produces no torque. Note that torque is a vector quantity, and, like angular displacement, angular velocity, and angular acceleration, is in a direction perpendicular to the plane of rotation. The same right-hand rule used for angular velocity, etc., can be applied to torque; for convenience, though, we'll probably just talk about the directions as clockwise and counter-clockwise.
We've looked at the rotational equivalents of several straight-line motion variables, so let's extend the parallel a little more by discussing the rotational equivalent of mass, which is something called the moment of inertia.
Mass is a measure of how difficult it is to get something to move in a straight line, or to change an object's straight-line motion. The more mass something has, the harder it is to start it moving, or to stop it once it starts. Similarly, the moment of inertia of an object is a measure of how difficult it is to start it spinning, or to alter an object's spinning motion. The moment of inertia depends on the mass of an object, but it also depends on how that mass is distributed relative to the axis of rotation: an object where the mass is concentrated close to the axis of rotation is easier to spin than an object of identical mass with the mass concentrated far from the axis of rotation.
The moment of inertia of an object depends on where the axis of rotation is. The moment of inertia can be found by breaking up the object into little pieces, multiplying the mass of each little piece by the square of the distance it is from the axis of rotation, and adding all these products up: I = ∑ mᵢrᵢ².
Fortunately, for common objects rotating about typical axes of rotation, these sums have been worked out, so we don't have to do it ourselves. A table of some of these moments of inertia can be found on page 223 in the textbook. Note that for an object where the mass is all concentrated at the same distance from the axis of rotation, such as a small ball being swung in a circle on a string, the moment of inertia is simply MR². For objects where the mass is distributed at different distances from the axis of rotation, there is some multiplying factor in front of the MR².
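For a handful of point masses, the "mass times distance squared" sum is easy to spell out directly; for a continuous body the sum becomes an integral, which is where the tabulated multiples of MR² come from. A small sketch, with made-up masses and distances:

```python
def moment_of_inertia(point_masses):
    """Sum of m * r^2 over a list of (mass_kg, distance_m) pairs."""
    return sum(m * r ** 2 for m, r in point_masses)

# A hypothetical dumbbell: two 2.0 kg masses, each 0.5 m from the rotation axis.
dumbbell = [(2.0, 0.5), (2.0, 0.5)]
print(moment_of_inertia(dumbbell))   # 1.0 kg·m^2

# Moving the same mass closer to the axis makes the body much easier to spin.
compact = [(2.0, 0.1), (2.0, 0.1)]
print(moment_of_inertia(compact))    # 0.04 kg·m^2
```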
You can figure out the rotational equivalent of any straight-line motion equation by substituting the corresponding rotational variables for the straight-line motion variables (angular displacement for displacement, angular velocity for velocity, angular acceleration for acceleration, torque for force, and moment of inertia for mass). Try this for Newton's second law: ΣF = ma.
Replace force by torque, m by I, and acceleration by angular acceleration and you get: Στ = Iα.
We've dealt with this kind of problem before, but we've never accounted for the pulley. Now we will. There are two masses, one sitting on a table, attached to the second mass which is hanging down over a pulley. When you let the system go, the hanging mass is pulled down by gravity, accelerating the mass sitting on the table. When you looked at this situation previously, you treated the pulley as being massless and frictionless. We'll still treat it as frictionless, but now let's work with a real pulley.
A 111 N block sits on a table; the coefficient of kinetic friction between the block and the table is 0.300. This block is attached to a 258 N block by a rope that passes over a pulley; the second block hangs down below the pulley. The pulley is a solid disk with a mass of 1.25 kg and an unknown radius. The rope passes over the pulley on the outer edge. What is the acceleration of the blocks?
As usual, the first place to start is with a free-body diagram of each block and the pulley. Note that because the pulley has an angular acceleration, the tensions in the two parts of the rope have to be different, so there are different tension forces acting on the two blocks.
Then, as usual, the next step is to apply Newton's second law and write down the force and/or torque equations. Writing m1 = (111 N)/g for the block on the table and m2 = (258 N)/g for the hanging block, the force equations for block 1 look like this: N − m1·g = 0 in the vertical direction, and T1 − μk·N = m1·a in the horizontal direction.
For block 2, the force equation is: m2·g − T2 = m2·a.
The pulley is rotating, not moving in a straight line, so do the sum of the torques: (T2 − T1)r = Iα.
Just a quick note about positive directions...you know that the system will accelerate so that block 2 accelerates down, so make down the positive direction for block 2. Block 1 accelerates right, so make right the positive direction for block 1, and for the pulley, which will have a clockwise angular acceleration, make clockwise the positive direction.
We have three equations, with a bunch of unknowns, the two tensions, the moment of inertia, the acceleration, and the angular acceleration. The moment of inertia is easy to calculate, because we know what the pulley looks like (a solid disk) and we have the mass and radius: I = (1/2)Mr², with M = 1.25 kg.
The next step is to make the connection between the angular acceleration of the pulley and the acceleration of the two blocks. Assume the rope does not slip on the pulley, so a point on the pulley which is in contact with the rope has a tangential acceleration equal to the acceleration of any point on the rope, which is equal to the acceleration of the blocks. Recalling the relationship between the angular acceleration and the tangential acceleration gives: α = a/r.
Plugging this, and the expression for the moment of inertia, into the torque equation gives: (T2 − T1)r = (1/2)Mr²(a/r).
All the factors of r, the radius of the pulley, cancel out, leaving: T2 − T1 = (1/2)Ma.
Substituting the expressions derived above for the two tensions gives: (m2·g − m2·a) − (μk·m1·g + m1·a) = (1/2)Ma.
This can be solved for the acceleration: a = g(m2 − μk·m1) / (m1 + m2 + M/2).
Accounting for the mass of the pulley just gives an extra term in the denominator. Plugging in the numbers and solving for the acceleration gives: a ≈ 5.87 m/s².
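The final formula is easy to check numerically. The sketch below converts the given weights to masses, applies a = g(m2 − μk·m1)/(m1 + m2 + M/2) from the derivation above, and also shows the massless-pulley value quoted next:

```python
g = 9.8                  # m/s^2
W1, W2 = 111.0, 258.0    # block weights in newtons
mu_k = 0.300
M_pulley = 1.25          # kg, solid disk

m1, m2 = W1 / g, W2 / g  # convert weights to masses

def acceleration(pulley_mass):
    # a = g (m2 - mu_k m1) / (m1 + m2 + M/2)
    return g * (m2 - mu_k * m1) / (m1 + m2 + pulley_mass / 2)

print(round(acceleration(M_pulley), 2))  # ≈ 5.87 m/s^2 with the real pulley
print(round(acceleration(0.0), 2))       # ≈ 5.97 m/s^2 if the pulley's mass is neglected
```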
Neglecting the mass of the pulley gives a = 5.97 m/s². | http://physics.bu.edu/~duffy/py105/Torque.html | 13
58 | Measures of Central Tendency for Numerical Data Study Guide (page 3)
Introduction to Measures of Central Tendency for Numerical Data
We need a greater statistical vocabulary if we are to describe the distributions of dotplots and stem-and-leaf plots. In this lesson, we will begin to think about measures of the middle of the distribution.
Population Measures of Central Tendency
Measures of central tendency attempt to quantify the middle of the distribution. If we are working with the population, these measures are parameters. If we have a sample, the measures are statistics, which are estimates of the population parameters. There are many ways to measure the center of a distribution, and we will learn about the three most common: mean, median, and mode.
The mean, denoted by μ, is the most common measure of central tendency. The mean is the average of all population values. If a population has N members, the mean is
μ = E(X) = (1/N) ∑ xᵢ, with the sum running from i = 1 to i = N,
where xᵢ is the value of the variable associated with the i-th unit in the population. We have used two different notations to symbolize the mean. The first is the Greek letter μ, which has become a conventional representation of the mean. The other is E(X), representing the "expected value of X." The average of a random variable across the whole population is the mean or the expected value of that variable. Notice that the symbol for a capital sigma (∑) is a shorthand way of saying to add a set of numbers. The terms to be added are those beginning with i = 1 (because "i = 1" is below the sigma) to i = N (because "N" is at the top of the sigma). The index, here i, is incremented by one in each subsequent term.
Find the mean height of the 62 high school orchestra members given in Dotplots and Stem-and-Leaf Plots: Study Guide.
The mean height for this population is μ = 66.4 inches.
Although we have presented a formal equation for the mean, it is important to remember that the population mean is simply the average of all population values.
The median, another measure of central tendency, is the middle value of the population distribution. To find the median, order all of the values in the population from largest to smallest and find the middle value. For example, suppose the following five values constituted the population distribution:
The middle value is the 7. Now suppose the following four values represent the population distribution:
Here, the middle value is somewhere between the 6 and the 9. The median is any value between 6 and 9; however, usually, the average of the two values, (6 + 9)/2 = 7.5, is taken as the median value. Through these two illustrations, we can see that we find the median in a slightly different manner when there is an even number of observations in the distribution than when there is an odd number of observations. This can be written generally as follows.
If N, the number of values in the population, is odd, the median is the ((N + 1)/2)th value in the ordered list of population values. If N is even, the median is any value between the (N/2)th and the (N/2 + 1)th values in the ordered list of population values; usually, the average of those two values is taken as the median. Note that this definition of population median is appropriate only if the number of population units is finite. For a continuous random variable, the median is still the middle value in the population, but we must use other methods to define it.
Find the median height of the 62 high school orchestra members. Compare the mean and median.
Because an even number of orchestra members exists, the median height is the average of the 31st and 32nd values in the ordered list of orchestra members' heights. The stem-and-leaf plot makes these values easy to find. Simply start at the top of the plot and count the number of leaves, always working from the stem out. Continue until the 31st and 32nd values have been identified. The 31st value is 66.6 inches, and the 32nd value is 67.1 inches. Any value between 66.6 and 67.1 is a median value. However, we will follow tradition and average the two: (66.6 + 67.1)/2 = 66.85. That is, the median orchestra member height is 66.85 inches.
Notice that in this case, the mean and median are close, but not identical. For the median, exactly half of the population values are less than 66.85 inches, and half are greater than 66.85 inches. For the mean, 29 of the values are less than the mean, and 33 are greater than the mean, which is still close to half of the values. Sometimes, the mean and median are much further apart. We will consider what differences in the mean and median indicate about the distribution in the next lesson.
The mode is the most frequently occurring value in the population and is another measure of central tendency. This measure tends to be most useful for discrete random variables. For the orchestra members' heights, three members have heights of 59.7 inches. Similarly, we have three other groups, each with three members, with heights of 68.7, 68.9, and 70.5 inches. Thus, there are four modes.
Which measure of central tendency is the best? Each provides a little different information. The mean is the most common measure, but it is influenced by extreme values. One extreme value can have a big impact on the mean, especially if the population does not have many members. An unusually small population value may cause the mean to be quite a bit smaller than it would have been if that value was not in the population. Similarly, an unusually large population value may tend to inflate the mean. In contrast, the median and mode are not affected by these unusual values. The mode is often not useful because it may not be unique.
Sample Measures of Central Tendency
The population mean, median, and mode are parameters. To find their values, we must know all of the population values. Unless the population is small, as was the case when our population of interest was the orchestra members at one specific high school, this rarely happens. In practice, we cannot usually find the population mean, median, or mode. We estimate these parameters by finding the sample mean, sample median, and sample mode, respectively. The mean, median, and mode of the sample values are the sample mean, sample median, and sample mode. These statistics are each an estimator of their population counterpart. That is, the sample mean is an estimate of the population mean; the sample median is an estimate of the population median; and the sample mode is an estimate of the population mode. We will learn more about the characteristics of these estimates and their uses later. For now, let's concentrate on how to find them.
Find the sample mean, sample median, and sample mode for the difference in the number of blinks while playing video games and the number of blinks during normal conversation. Which one do you think is the best measure of central tendency for these data?
The sample mean is the average of the sample values, that is, the average of the sample differences: (13 + 13 + 17 + 17 + 18 + 19 + 21 + 24 + 26 + 26 + 27 + 29 + 32 + 34 + 40)/15 = 356/15 ≈ 23.7.
To find the median, we first order the sample values from smallest to largest:
13 13 17 17 18 19
21 24 26 26 27 29
32 34 40
Because there are 15 values in the sample, the middle value is the (15 + 1)/2 = 8th value in the list; that is, the sample median is 24.
The sample mode is the most frequently occurring value in the sample. Here, the values of 13, 17, and 26 were each observed twice. In these cases, the sample mode is of little value in measuring central tendency.
The mean and median provide good measures of the center of distribution, but the mode does not.
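A short Python sketch that recomputes these sample statistics directly from the ordered differences listed above; nothing here is assumed beyond the data already shown.

from collections import Counter
from statistics import mean, median

# Sample differences in blink counts, as listed above.
data = [13, 13, 17, 17, 18, 19, 21, 24, 26, 26, 27, 29, 32, 34, 40]

sample_mean = mean(data)        # about 23.7
sample_median = median(data)    # 24, the 8th of the 15 ordered values
counts = Counter(data)
top = max(counts.values())
sample_modes = sorted(v for v, c in counts.items() if c == top)  # [13, 17, 26]
print(sample_mean, sample_median, sample_modes)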
Measures of Central Tendency for Numerical Data In Short
Mean, median, and mode are three measures of central tendency. Each provides a measure of the middle value in the population. The mean is the average of all population values. The median is the middle of the population values. The mode is the most frequently occurring population values. The sample mean, sample median, and sample mode are sample estimates of the corresponding population parameters.
Find practice problems and solutions for these concepts at Measures of Central Tendency for Numerical Data Practice Exercises.
| http://www.education.com/study-help/article/measures-central-tendency-numerical-data/?page=3 | 13
When we think about volume from an intuitive point of view, we typically think of it as the amount of "space" an item occupies. Unfortunately assigning a number that measures this amount of space can prove difficult for all but the simplest geometric shapes. Calculus provides a new tool that can greatly extend our ability to calculate volume. In order to understand the ideas involved it helps to think about the volume of a cylinder. The volume of a cylinder is calculated using the formula V = πr²h. The base of the cylinder is a circle whose area is given by A = πr². Notice that the volume of a cylinder is derived by taking the area of its base and multiplying by the height h. For more complicated shapes, we could think of approximating the volume by taking the area of some cross section at some height and multiplying by some small change in height, then adding up all of these approximations from the bottom to the top of the object. This would appear to be a Riemann sum. Keeping this in mind, we can develop a more general formula for the volume of solids in R³ (3-dimensional space).
Formal Definition
Formally the ideas above suggest that we can calculate the volume of a solid by calculating the integral of the cross-sectional area along some dimension. In the above example of a cylinder, every cross section was given by the same circle, so the cross-sectional area is therefore a constant function, and the dimension of integration was vertical (although it could have been any one we desired). Generally, if S is a solid that lies in between x = a and x = b, let A(x) denote the area of a cross section taken in the plane perpendicular to the x direction, and passing through the point x. If the function A(x) is continuous on [a, b], then the volume V of the solid is given by: V = ∫_a^b A(x) dx.
Example 1: A right cylinder
Now we will calculate the volume of a right cylinder using our new ideas about how to calculate volume. Since we already know the formula for the volume of a cylinder this will give us a "sanity check" that our formulas make sense. First, we choose a dimension along which to integrate. In this case, it will greatly simplify the calculations to integrate along the height of the cylinder, so this is the direction we will choose; call the vertical direction x (see Figure 1). Now we find the function, A(x), which will describe the cross-sectional area of our cylinder at a height of x. The cross-sectional area of a cylinder is simply a circle. Now simply recall that the area of a circle is πr², and so A(x) = πr². Before performing the computation, we must choose our bounds of integration. In this case, we simply define x = 0 to be the base of the cylinder, and so we will integrate from x = 0 to x = h, where h is the height of the cylinder. Finally, we integrate: V = ∫_0^h A(x) dx = ∫_0^h πr² dx = πr²h.
This is exactly the familiar formula for the volume of a cylinder.
Example 2: A right circular cone
For our next example we will look at an example where the cross sectional area is not constant. Consider a right circular cone. Once again the cross sections are simply circles. But now the radius varies from the base of the cone to the tip. Once again we choose x to be the vertical direction, with the base at x = 0 and the tip at x = h, and we will let R denote the radius of the base. While we know the cross sections are just circles, we cannot calculate the area of the cross sections unless we find some way to determine the radius of the circle at height x.
Luckily in this case it is possible to use some of what we know from geometry. We can imagine cutting the cone perpendicular to the base through some diameter of the circle all the way to the tip of the cone. If we then look at the flat side we just created, we will see simply a triangle, whose geometry we understand well. The right triangle from the tip to the base at height x is similar to the right triangle from the tip to the base at height 0. This tells us that r(x)/(h − x) = R/h, so we see that the radius of the circle at height x is r(x) = R(h − x)/h. Now using the familiar formula for the area of a circle we see that A(x) = πR²(h − x)²/h².
Now we are ready to integrate.
By u-substitution we may let u = h − x, then du = −dx, and our integral V = ∫_0^h πR²(h − x)²/h² dx becomes V = (πR²/h²) ∫_0^h u² du = (πR²/h²)(h³/3) = πR²h/3.
Example 3: A sphere
In a similar fashion, we can use our definition to prove the well known formula for the volume of a sphere. First, we must find our cross-sectional area function, A(x). Consider a sphere of radius r which is centered at the origin in R³. If we again integrate vertically then x will vary from −r to r. In order to find the area of a particular cross section it helps to draw a right triangle whose points lie at the center of the sphere, the center of the circular cross section, and at a point along the circumference of the cross section. As shown in the diagram the side lengths of this triangle will be r, |x|, and ρ, where ρ is the radius of the circular cross section. Then by the Pythagorean theorem ρ² = r² − |x|², and we find that A(x) = π(r² − x²). It is slightly helpful to notice that |x|² = x², so we do not need to keep the absolute value.
So we have that V = ∫_{−r}^{r} π(r² − x²) dx = π[r²x − x³/3] evaluated from −r to r = 4πr³/3.
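A quick numerical check of this cross-section method (a sketch, assuming a unit radius): integrating the disk areas π(r² − x²) with a simple midpoint rule should land close to 4πr³/3 ≈ 4.18879.

import math

r = 1.0
n = 100_000
dx = 2 * r / n

# Midpoint Riemann sum of the cross-sectional areas A(x) = pi * (r^2 - x^2).
volume = sum(math.pi * (r**2 - (-r + (i + 0.5) * dx)**2) * dx for i in range(n))

print(volume, 4 * math.pi * r**3 / 3)   # both approximately 4.18879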
Extension to Non-trivial Solids
Now that we have shown our definition agrees with our prior knowledge, we will see how it can help us extend our horizons to solids whose volumes are not possible to calculate using elementary geometry. | http://en.wikibooks.org/wiki/Calculus/Volume | 13 |
The mass-to-charge ratio, m/Q, is a physical quantity that is widely used in the electrodynamics of charged particles, e.g. in electron optics and ion optics. It appears in the scientific fields of lithography, electron microscopy, cathode ray tubes, accelerator physics, nuclear physics, Auger spectroscopy and mass spectrometry. The importance of the mass-to-charge ratio, according to classical electrodynamics, is that two particles with the same mass-to-charge ratio move in the same path in a vacuum when subjected to the same electric and magnetic fields.
When charged particles move in electric and magnetic fields the following two laws apply:
- F = Q(E + v × B) (Lorentz force law)
- F = ma (Newton's second law of motion)
where F is the force applied to the ion, m is the mass of the ion, a is the acceleration, Q is the ionic charge, E is the electric field, and v x B is the vector cross product of the ion velocity and the magnetic field.
Combining these two equations yields: (m/Q) a = E + v × B.
This differential equation is the classic equation of motion of a charged particle in vacuum. Together with the particle's initial conditions it determines the particle's motion in space and time. It immediately reveals that two particles with the same m/Q behave the same. This is why the mass-to-charge ratio is an important physical quantity in those scientific fields where charged particles interact with magnetic (B) or electric (E) fields.
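A small sketch of the equation of motion above, integrating a = (Q/m)(E + v × B) for two particles that share the same m/Q but have different masses and charges; by the result just stated their trajectories should coincide. The field values, initial velocity, and step size are arbitrary choices for illustration.

import numpy as np

def trajectory(m, Q, E, B, v0, steps=1000, dt=1e-9):
    # Simple Euler integration of a = (Q/m) * (E + v x B).
    r = np.zeros(3)
    v = np.array(v0, dtype=float)
    path = []
    for _ in range(steps):
        a = (Q / m) * (E + np.cross(v, B))
        v = v + a * dt
        r = r + v * dt
        path.append(r.copy())
    return np.array(path)

E = np.array([0.0, 1e3, 0.0])      # V/m, arbitrary
B = np.array([0.0, 0.0, 0.01])     # T, arbitrary
v0 = [1e5, 0.0, 0.0]               # m/s, arbitrary

p1 = trajectory(m=1e-26, Q=1e-19, E=E, B=B, v0=v0)   # same m/Q ...
p2 = trajectory(m=2e-26, Q=2e-19, E=E, B=B, v0=v0)   # ... double the mass and the charge
print(np.allclose(p1, p2))   # True: the two paths coincide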
There are non-classical effects that derive from quantum mechanics, such as the Stern–Gerlach effect that can diverge the path of ions of identical m/Q.
Symbols & Units
The official symbol for mass is m. The official symbol for electric charge is Q, though q is also very common. Charge is a scalar property, meaning that it can be either positive (+ symbol) or negative (- symbol). Sometimes, however, the sign of the charge is indicated indirectly. The coulomb (C) is the SI unit of charge; however, other units are not uncommon.
The SI unit of the physical quantity m/Q is kilograms per coulomb:
- m/Q = kg/C
The units and notation above are used in the field of mass spectrometry, while the unitless m/z notation is used for the independent variable that the mass spectrometer measures. These notations are closely related through the unified atomic mass unit and the elementary charge. See Mass spectrum.
In the 19th century the mass-to-charge ratios of some ions were measured by electrochemical methods. In 1897 the mass-to-charge ratio of the electron was first measured by J. J. Thomson. By doing this he showed that the electron, which was postulated earlier in order to explain electricity, was in fact a particle with a mass and a charge, and that its mass-to-charge ratio was much smaller than that of the hydrogen ion H+. In 1898 Wilhelm Wien separated ions (canal rays) according to their mass-to-charge ratio with an ion optical device with superimposed electric and magnetic fields (the Wien filter). In 1901 Walter Kaufmann measured the relativistic mass increase of fast electrons. In 1913 J. J. Thomson measured the mass-to-charge ratio of ions with an instrument he called a parabola spectrograph. Today, an instrument that measures the mass-to-charge ratio of charged particles is called a mass spectrometer.
The charge-to-mass ratio (Q/m) of an object is, as its name implies, the charge of an object divided by the mass of the same object. This quantity is generally useful only for objects that may be treated as particles. For extended objects, total charge, charge density, total mass, and mass density are often more useful.
In some experiments, the charge-to-mass ratio is the only quantity that can be measured directly. Often, the charge can be inferred from theoretical considerations, so that the charge-to-mass ratio provides a way to calculate the mass of a particle.
Often, the charge-to-mass ratio can be determined from observing the deflection of a charged particle in an external magnetic field. The cyclotron equation, combined with other information such as the kinetic energy of the particle, will give the charge-to-mass ratio. One application of this principle is the mass spectrometer. The same principle can be used to extract information in experiments involving the Wilson cloud chamber.
The ratio of electrostatic to gravitational forces between two particles will be proportional to the product of their charge-to-mass ratios. It turns out that gravitational forces are negligible on the subatomic level.
The electron charge-to-mass quotient, e/m, is a quantity in experimental physics. It matters because the electron mass is difficult to measure directly, and is instead derived from measurements of the elementary charge e and of e/m. It also has historical significance: Thomson's measurement of e/m convinced him that cathode rays were particles, which we know as electrons.
The 2006 CODATA recommended value is approximately −1.75882 × 10^11 C/kg. CODATA refers to this as the electron charge-to-mass quotient, but ratio is still commonly used.
The "q/m" of an electron was successfully calculated by J. J. Thomson in 1897 and more successfully by Dunnington's Method which involves the angular momentum and deflection due to a perpendicular magnetic field.
There are two other common ways of measuring the charge-to-mass ratio of an electron, apart from J. J. Thomson's method and the Dunnington method.
1. The Magnetron Method - Using a GRD7 Valve (Ferranti valve), electrons are expelled from a hot tungsten-wire filament towards an anode. The electron is then deflected using a solenoid. From the current in the solenoid and the current in the Ferranti Valve, e/m can be calculated
2. Fine Beam Tube Method - Electrons are accelerated from a cathode to a cap-shaped anode. The electron is then expelled into a helium-filled ray tube, producing a luminous circle. From the radius of this circle, e/m is calculated (a small numerical sketch of this relation follows the list).
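A sketch of the fine beam tube calculation, assuming the standard relations eV = (1/2)mv² for the accelerating voltage and evB = mv²/r for the circular path, which combine to e/m = 2V/(B²r²); the voltage, field, and radius below are made-up illustrative values, not measurements from the text.

# Hypothetical readings from a fine beam tube experiment.
V = 250.0      # accelerating voltage, volts
B = 1.0e-3     # magnetic field, tesla
r = 0.05       # radius of the luminous circle, metres

e_over_m = 2 * V / (B**2 * r**2)
print(e_over_m)   # order of 1e11 C/kg for realistic readings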
The charge-to-mass ratio of an electron may also be measured with the Zeeman effect, which gives rise to energy splittings in the presence of a magnetic field B: ΔE = g_J (eħ/2m) B (m_j,final − m_j,initial).
Here the m_j (final and initial) are quantum numbers ranging in integer steps from −j to j, with j the eigenvalue of the total angular momentum operator J = L + S (S is the spin operator with eigenvalue s and L the orbital angular momentum operator with eigenvalue l). g_J is the Landé g-factor, calculated as g_J = 1 + [j(j + 1) + s(s + 1) − l(l + 1)] / [2j(j + 1)].
The shift in energy is also given in terms of frequency ν and wavelength λ as ΔE = h Δν = (hc/λ²) Δλ.
Measurements of the Zeeman effect commonly involve the use of a Fabry–Pérot interferometer, with light from a source (placed in a magnetic field) being passed between the two mirrors of the interferometer. If Δd is the change in mirror separation required to bring the m-th order ring of one wavelength into coincidence with that of the second wavelength, and to bring the second wavelength's ring into coincidence with the (m + 1)-th order ring, then the wavelength splitting Δλ can be determined, and rearranging, it is possible to solve for the charge-to-mass ratio of the electron.
| http://www.reference.com/browse/relativistic-mass | 13
60 | Derivatives can be used to gather information about the graph of a function. Since the derivative represents the rate of change of a function, to determine when a function is increasing, we simply check where its derivative is positive. Similarly, to find when a function is decreasing, we check where its derivative is negative.
The points where the derivative is equal to 0 are called critical points. At these points, the function is instantaneously constant and its graph has horizontal tangent line. For a function representing the motion of an object, these are the points where the object is momentarily at rest.
A local minimum (resp. local maximum) of a function f is a point (x0, f(x0)) on the graph of f such that f(x0) ≤ f(x) (resp. f(x0) ≥ f(x)) for all x in some interval containing x0. Such a point is called a global minimum (resp. global maximum) of a function f if the appropriate inequality holds for all points in the domain. In particular, any global maximum (minimum) is also a local maximum (minimum).
It is intuitively clear that the tangent line to the graph of a function at a local minimum or maximum must be horizontal, so the derivative at the point is 0, and the point is a critical point. Therefore, in order to find the local minima/maxima of a function, we simply have to find all its critical points and then check each one to see whether it is a local minimum, a local maximum, or neither. If the function has a global minimum or maximum, it will be the least (resp. greatest) of the local minima (resp. maxima), or the value of the function on an endpoint of its domain (if any such points exist).
Clearly, the behavior near a local maximum is that the function increases, levels off, and begins decreasing. Therefore, a critical point is a local maximum if the derivative is positive just to the left of it, and negative just to the right. Similarly, a critical point is a local minimum if the derivative is negative just to the left and positive to the right. These criteria are collectively called the first derivative test for maxima and minima.
There may be critical points of a function that are neither local maxima nor minima, where the derivative attains the value zero without crossing from positive to negative. For instance, the function f(x) = x³ has a critical point at 0 which is of this type. The derivative f'(x) = 3x² is zero here, but everywhere else f' is positive. This function and its derivative are sketched below.
Once we have found the critical points, one way to determine if they are local minima or maxima is to apply the first derivative test. Another way uses the second derivative of f. Suppose x0 is a critical point of the function f(x), that is, f'(x0) = 0. We have the following three cases: if f''(x0) > 0, then f has a local minimum at x0; if f''(x0) < 0, then f has a local maximum at x0; and if f''(x0) = 0, the test is inconclusive (x0 may be a local minimum, a local maximum, or neither).
The first and second derivative tests employ essentially the same logic, examining what happens to the derivative f'(x) near a critical point x0. The first derivative test says that maxima and minima correspond to f' crossing zero from one direction or the other, which is indicated by the sign of f' near x0. The second derivative test is just the observation that the same information is encoded in the slope of the tangent line to f'(x) at x0.
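A small sketch of the second derivative test applied to f(x) = x³ − 3x (an assumed example function, not one from the text), whose critical points at x = ±1 are classified by the sign of f''.

def f(x):
    return x**3 - 3 * x

def fprime(x):
    return 3 * x**2 - 3

def fsecond(x):
    return 6 * x

for x0 in (-1.0, 1.0):                # the two critical points, where f'(x0) = 0
    assert abs(fprime(x0)) < 1e-12
    if fsecond(x0) > 0:
        kind = "local minimum"
    elif fsecond(x0) < 0:
        kind = "local maximum"
    else:
        kind = "test inconclusive"
    print(x0, f(x0), kind)            # x = -1 is a local maximum, x = +1 a local minimum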
A function f(x) is called concave up at x0 if f''(x0) > 0, and concave down if f''(x0) < 0. Graphically, this represents which way the graph of f is "turning" near x0. A function that is concave up at x0 lies above its tangent line in a small interval around x0 (touching but not crossing at x0). Similarly, a function that is concave down at x0 lies below its tangent line near x0.
The remaining case is a point x0 where f''(x0) = 0, which is called an inflection point. At such a point the function f holds closer to its tangent line than elsewhere, since the second derivative represents the rate at which the function turns away from the tangent line. Put another way, a function usually has the same value and derivative as its tangent line at the point of tangency; at an inflection point, the second derivatives of the function and its tangent line also agree. Of course, the second derivative of the tangent line function is always zero, so this statement is just that f''(x0) = 0.
Inflection points are the critical points of the first derivative f'(x). At an inflection point, a function may change from being concave up to concave down (or the other way around), or momentarily "straighten out" while having the same concavity to either side. These three cases correspond, respectively, to the inflection point x0 being a local maximum or local minimum of f'(x), or neither. | http://www.sparknotes.com/math/calcbc1/applicationsofthederivative/section2.rhtml | 13
64 | Epsilon Delta Limit Definition 1 Introduction to the Epsilon Delta Definition of a Limit.
- Let me draw a function that would be interesting
- to take a limit of.
- And I'll just draw it visually for now, and we'll do some
- specific examples a little later.
- So that's my y-axis, and that's my x-axis.
- And let's say the function looks something like--
- I'll make it a fairly straightforward function
- --let's say it's a line, for the most part.
- Let's say it looks just like that, except it has a
- hole at some point.
- x is equal to a, so it's undefined there.
- Let me black that point out so you can see that
- it's not defined there.
- And that point there is x is equal to a.
- This is the x-axis, this is the y is equal f of x-axis.
- Let's just say that's the y-axis.
- And let's say that this is f of x, or this is
- y is equal to f of x.
- Now we've done a bunch of videos on limits.
- I think you have an intuition on this.
- If I were to say what is the limit as x approaches a,
- and let's say that this point right here is l.
- We know from our previous videos that-- well first of all
- I could write it down --the limit as x approaches
- a of f of x.
- What this means intuitively is as we approach a from either
- side, as we approach it from that side, what does
- f of x approach?
- So when x is here, f of x is here.
- When x is here, f of x is there.
- And we see that it's approaching this l right there.
- And when we approach a from that side-- and we've done
- limits where you approach from only the left or right side,
- but to actually have a limit it has to approach the same thing
- from the positive direction and the negative direction --but as
- you go from there, if you pick this x, then this is f of x.
- f of x is right there.
- If x gets here then it goes here, and as we get closer and
- closer to a, f of x approaches this point l, or this value l.
- So we say that the limit of f of x as x approaches
- a is equal to l.
- I think we have that intuition.
- But this was not very, it's actually not rigorous at all
- in terms of being specific in terms of what we
- mean is a limit.
- All I said so far is as we get closer, what does
- f of x get closer to?
- So in this video I'll attempt to explain to you a definition
- of a limit that has a little bit more, or actually a lot
- more, mathematical rigor than just saying you know, as x gets
- closer to this value, what does f of x get closer to?
- And the way I think about it's: kind of like a little game.
- The definition is, this statement right here means that
- I can always give you a range about this point-- and when I
- talk about range I'm not talking about it in the whole
- domain range aspect, I'm just talking about a range like you
- know, I can give you a distance from a as long as I'm no
- further than that, I can guarantee you that f of x is
- not going to be any further than a given distance from l
- --and the way I think about it is, it could be viewed
- as a little game.
- Let's say you say, OK Sal, I don't believe you.
- I want to see you know, whether f of x can get within 0.5 of l.
- So let's say you give me 0.5 and you say Sal, by this
- definition you should always be able to give me a range
- around a that will get f of x within 0.5 of l, right?
- So the values of f of x are always going to be right in
- this range, right there.
- And as long as I'm in that range around a, as long as I'm in
- the range around a you give me, f of x will always be at least
- that close to our limit point.
- Let me draw it a little bit bigger, just because I think
- I'm just overwriting the same diagram over and over again.
- So let's say that this is f of x, this is the hole point.
- There doesn't have to be a hole there; the limit could equal
- actually a value of the function, but the limit is more
- interesting when the function isn't defined there
- but the limit is.
- So this point right here-- that is, let me draw the axes again.
- So that's x-axis, y-axis x, y, this is the limit point
- l, this is the point a.
- So the definition of the limit, and I'll go back to this in
- second because now that it's bigger I want explain it again.
- It says this means-- and this is the epsilon delta definition
- of limits, and we'll touch on epsilon and delta in a second,
- is I can guarantee you that f of x, you give me any
- distance from l you want.
- And actually let's call that epsilon.
- And let's just hit on the definition right
- from the get go.
- So you say I want to be no more than epsilon away from l.
- And epsilon can just be any number greater, any real
- number, greater than 0.
- So that would be, this distance right here is epsilon.
- This distance there is epsilon.
- And for any epsilon you give me, any real number-- so this
- is, this would be l plus epsilon right here, this would
- be l minus epsilon right here --the epsilon delta definition
- of this says that no matter what epsilon one you give me, I
- can always specify a distance around a.
- And I'll call that delta.
- I can always specify a distance around a.
- So let's say this is delta less than a, and this
- is delta more than a.
- This is the letter delta.
- Where as long as you pick an x that's within a plus delta and
- a minus delta, as long as the x is within here, I can guarantee
- you that the f of x, the corresponding f of x is going
- to be within your range.
- And if you think about it this makes sense right?
- It's essentially saying, I can get you as close as you want to
- this limit point just by-- and when I say as close as you
- want, you define what you want by giving me an epsilon; on
- it's a little bit of a game --and I can get you as close as
- you want to that limit point by giving you a range around the
- point that x is approaching.
- And as long as you pick an x value that's within this range
- around a, long as you pick an x value around there, I can
- guarantee you that f of x will be within the range
- you specify.
- Just make this a little bit more concrete, let's say you
- say, I want f of x to be within 0.5-- let's just you know, make
- everything concrete numbers.
- Let's say this is the number 2 and let's say this is number 1.
- So we're saying that the limit as x approaches 1 of f of x-- I
- haven't defined f of x, but it looks like a line with the hole
- right there, is equal to 2.
- This means that you can give me any number.
- Let's say you want to try it out for a couple of examples.
- Let's say you say I want f of x to be within point-- let me do
- a different color --I want f of x to be within 0.5 of 2.
- I want f of x to be between 2.5 and 1.5.
- Then I could say, OK, as long as you pick an x within-- I
- don't know, it could be arbitrarily close but as long
- as you pick an x that's --let's say it works for this function
- that's between, I don't know, 0.9 and 1.1.
- So in this case the delta from our limit point is only 0.1.
- As long as you pick an x that's within 0.1 of this point, or 1,
- I can guarantee you that your f of x is going to
- lie in that range.
- So hopefully you get a little bit of a sense of that.
- Let me define that with the actual epsilon delta, and this
- is what you'll actually see in your mat textbook, and then
- we'll do a couple of examples.
- And just to be clear, that was just a specific example.
- You gave me one epsilon and I gave you a delta that worked.
- But by definition if this is true, or if someone writes
- this, they're saying it doesn't just work for one specific
- instance, it works for any number you give me.
- You can say I want to be within one millionth of, you know, or
- ten to the negative hundredth power of 2, you know, super
- close to 2, and I can always give you a range around this
- point where as long as you pick an x in that range, f of x will
- always be within this range that you specify, within that
- were you know, one trillionth of a unit away from
- the limit point.
- And of course, the one thing I can't guarantee is what
- happens when x is equal to a.
- I'm just saying as long as you pick an x that's within my
- range but not on a, it'll work.
- Your f of x will show up to be within the range you specify.
- And just to make the math clear-- because I've been
- speaking only in words so far --and this is what we see in the
- textbook: it says look, you give me any epsilon
- greater than 0.
- Anyway, this is a definition, right?
- If someone writes this they mean that you can give them any
- epsilon greater than 0, and then they'll give you a delta--
- remember your epsilon is how close you want f of x to be
- to your limit point, right?
- It's a range around f of x --they'll give you a delta
- which is a range around a, right?
- Let me write this.
- So limit as approaches a of f of x is equal to l.
- So they'll give you a delta where as long as x is no more
- than delta-- So the distance between x and a, so if we pick
- an x here-- let me do another color --if we pick an x here,
- the distance between that value and a, as long as one, that's
- greater than 0 so that x doesn't show up on top of a,
- because its function might be undefined at that point.
- But as long as the distance between x and a is greater
- than 0 and less than this x range that they gave you,
- it's less than delta.
- So as long as you take an x, you know if I were to zoom the
- x-axis right here-- this is a and so this distance right here
- would be delta, and this distance right here would be
- delta --as long as you pick an x value that falls here-- so as
- long as you pick that x value or this x value or this x value
- --as long as you pick one of those x values, I can guarantee
- you that the distance between your function and the limit
- point, so the distance between you know, when you take one of
- these x values and you evaluate f of x at that point, that the
- distance between that f of x and the limit point is
- going to be less than the number you gave them.
- And if you think of, it seems very complicated, and I have
- mixed feelings about where this is included in most
- calculus curriculums.
- It's included in like the, you know, the third week before you
- even learn derivatives, and it's kind of this very mathy
- and rigorous thing to think about, and you know, it tends
- to derail a lot of students and a lot of people I don't think
- get a lot of the intuition behind it, but it is
- mathematically rigorous.
- And I think it is very valuable once you study you know, more
- advanced calculus or become a math major.
- But with that said, this does make a lot of sense
- intuitively, right?
- Because before we were talking about, look you know, I can get
- you as close as x approaches this value f of x is going
- to approach this value.
- And the way we mathematically define it is, you say Sal,
- I want to be super close.
- I want the distance to be f of x [UNINTELLIGIBLE].
- And I want it to be 0.000000001, then I can always
- give you a distance around x where this will be true.
- And I'm all out of time in this video.
- In the next video I'll do some examples where I prove the
- limits, where I prove some limit statements using
- this definition.
- And hopefully you know, when we use some tangible numbers, this
- definition will make a little bit more sense.
- See you in the next video.
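A quick numeric illustration of the epsilon-delta "game" described in the video, using an assumed function f(x) = 2x for x ≠ 1 (a line through (1, 2) with a hole at x = 1, so the limit is 2): for epsilon = 0.5 the choice delta = 0.1 keeps f(x) within epsilon of the limit, matching the concrete numbers discussed above.

import random

def f(x):
    # Assumed example: a line through (1, 2) with a hole at x = 1.
    assert x != 1
    return 2 * x

L = 2.0          # claimed limit as x approaches 1
epsilon = 0.5    # the challenge: stay within 0.5 of L
delta = 0.1      # the response: for this f, |f(x) - 2| = 2|x - 1| <= 0.2 < 0.5

ok = True
for _ in range(10000):
    x = 1 + random.uniform(-delta, delta)
    if x == 1:
        continue                      # the function is not defined at the hole itself
    if abs(f(x) - L) >= epsilon:
        ok = False
print(ok)   # True: every sampled x within delta of 1 lands within epsilon of the limit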
| http://www.khanacademy.org/math/calculus/limits_topic/epsilon_delta/v/epsilon-delta-limit-definition-1 | 13
107 | Suppose we are given a function and would like to determine the area underneath its graph over an interval. We could guess, but how could we figure out the exact area? Below, using a few clever ideas, we actually define such an area and show that by using what is called the definite integral we can indeed determine the exact area underneath a curve.
Definition of the Definite Integral
The rough idea of defining the area under the graph of is to approximate this area with a finite number of rectangles. Since we can easily work out the area of the rectangles, we get an estimate of the area under the graph. If we use a larger number of smaller-sized rectangles we expect greater accuracy with respect to the area under the curve and hence a better approximation. Somehow, it seems that we could use our old friend from differentiation, the limit, and "approach" an infinite number of rectangles to get the exact area. Let's look at such an idea more closely.
Suppose we have a function f that is positive on the interval [a, b] and we want to find the area S under f between a and b. Let's pick an integer n and divide the interval into n subintervals of equal width (see Figure 1). As the interval has width b − a, each subinterval has width Δx = (b − a)/n. We denote the endpoints of the subintervals by x_0, x_1, …, x_n. This gives us x_i = a + iΔx for i = 0, 1, …, n.
Now for each i = 1, 2, …, n pick a sample point x_i^* in the interval [x_{i−1}, x_i] and consider the rectangle of height f(x_i^*) and width Δx (see Figure 2). The area of this rectangle is f(x_i^*)Δx. By adding up the area of all the rectangles for i = 1, 2, …, n we get that the area S is approximated by S ≈ f(x_1^*)Δx + f(x_2^*)Δx + … + f(x_n^*)Δx.
A more convenient way to write this is with summation notation: S ≈ Σ_{i=1}^{n} f(x_i^*) Δx.
For each number n we get a different approximation. As n gets larger the width of the rectangles gets smaller which yields a better approximation (see Figure 3). In the limit as n tends to infinity we get the area S: S = ∫_a^b f(x) dx = lim_{n→∞} Σ_{i=1}^{n} f(x_i^*) Δx.
It is a fact that if f is continuous on [a, b] then this limit always exists and does not depend on the choice of the points x_i^*. For instance they may be evenly spaced, or distributed arbitrarily throughout the interval. The proof of this is technical and is beyond the scope of this section.
One important feature of this definition is that we also allow functions which take negative values. If f(x) < 0 for all x then each term f(x_i^*)Δx < 0, so the sum is negative and the definite integral of f will be strictly negative. More generally, if f takes on both positive and negative values then ∫_a^b f(x) dx will be the area under the positive part of the graph of f minus the area above the graph of the negative part of the graph (see Figure 4). For this reason we say that ∫_a^b f(x) dx is the signed area under the graph.
Independence of Variable
It is important to notice that the variable x did not play an important role in the definition of the integral. In fact we can replace it with any other letter, so the following are all equal: ∫_a^b f(x) dx = ∫_a^b f(t) dt = ∫_a^b f(u) du.
Each of these is the signed area under the graph of f between a and b. Such a variable is often referred to as a dummy variable or a bound variable.
Left and Right Handed Riemann Sums
The following methods are sometimes referred to as L-RAM and R-RAM, RAM standing for "Rectangular Approximation Method."
We could have decided to choose all our sample points x_i^* to be on the right hand side of the interval (see Figure 5). Then x_i^* = x_i for all i and the approximation that we called S for the area becomes S ≈ Σ_{i=1}^{n} f(x_i) Δx.
This is called the right-handed Riemann sum, and the integral is the limit ∫_a^b f(x) dx = lim_{n→∞} Σ_{i=1}^{n} f(a + iΔx) Δx.
Alternatively we could have taken each sample point on the left hand side of the interval. In this case x_i^* = x_{i−1} (see Figure 6) and the approximation becomes S ≈ Σ_{i=1}^{n} f(x_{i−1}) Δx.
Then the integral of f is ∫_a^b f(x) dx = lim_{n→∞} Σ_{i=1}^{n} f(a + (i − 1)Δx) Δx.
The key point is that, as long as is continuous, these two definitions give the same answer for the integral.
In this example we will calculate the area under the curve given by the graph of for between 0 and 1. First we fix an integer and divide the interval into subintervals of equal width. So each subinterval has width
To calculate the integral we will use the right-handed Riemann sum. (We could have used the left-handed sum instead, and this would give the same answer in the end). For the right-handed sum the sample points are
Notice that . Putting this into the formula for the approximation,
Now we use the formula
To calculate the integral of between and we take the limit as tends to infinity,
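A short sketch of a right-handed Riemann sum converging to its integral, using an assumed example f(x) = x² on [0, 1] (exact value 1/3); the function and interval are illustrative choices, not necessarily those of the worked example above.

def right_riemann_sum(f, a, b, n):
    dx = (b - a) / n
    # Right-handed sum: evaluate f at the right endpoint of each subinterval.
    return sum(f(a + i * dx) * dx for i in range(1, n + 1))

f = lambda x: x**2
for n in (10, 100, 1000, 10000):
    print(n, right_riemann_sum(f, 0.0, 1.0, n))   # approaches 1/3 as n grows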
Next we show how to find the integral of the function between and . This time the interval has width so
Once again we will use the right-handed Riemann sum. So the sample points we choose are
We have to calculate each piece on the right hand side of this equation. For the first two,
For the third sum we have to use a formula
Putting this together
Taking the limit as tend to infinity gives
Basic Properties of the Integral
From the definition of the integral we can deduce some basic properties. For all the following rules, suppose that f and g are continuous on [a,b].
The Constant Rule
When f is positive, the height of the function cf at a point x is c times the height of the function f. So the area under cf between a and b is c times the area under f. We can also give a proof using the definition of the integral, using the constant rule for limits: ∫_a^b c f(x) dx = c ∫_a^b f(x) dx.
We saw in the previous section that
Using the constant rule we can use this to calculate that
We saw in the previous section that
We can use this and the constant rule to calculate that
There is a special case of this rule used for integrating constants: ∫_a^b c dx = c(b − a).
When c > 0 and a < b this integral is the area of a rectangle of height c and width b − a, which equals c(b − a).
The addition and subtraction rule
The addition rule states that ∫_a^b (f(x) + g(x)) dx = ∫_a^b f(x) dx + ∫_a^b g(x) dx. As with the constant rule, it follows from the addition rule for limits:
∫_a^b (f(x) + g(x)) dx = lim_{n→∞} Σ_{i=1}^{n} (f(x_i^*) + g(x_i^*)) Δx = lim_{n→∞} Σ_{i=1}^{n} f(x_i^*) Δx + lim_{n→∞} Σ_{i=1}^{n} g(x_i^*) Δx = ∫_a^b f(x) dx + ∫_a^b g(x) dx
The subtraction rule can be proved in a similar way.
From above and so
The Comparison Rule
If f(x) ≥ 0 on [a, b] then each of the rectangles in the Riemann sum to calculate the integral of f will be above the y axis, so the area will be non-negative. If f(x) ≥ g(x) on [a, b] then f(x) − g(x) ≥ 0, and by the first property we get the second property, ∫_a^b f(x) dx ≥ ∫_a^b g(x) dx. Finally, if m ≤ f(x) ≤ M on [a, b] then the area under the graph of f will be greater than the area of the rectangle with height m and less than the area of the rectangle with height M (see Figure 7). So m(b − a) ≤ ∫_a^b f(x) dx ≤ M(b − a).
Linearity with respect to endpoints
If a ≤ c ≤ b then ∫_a^b f(x) dx = ∫_a^c f(x) dx + ∫_c^b f(x) dx. Again suppose that f is positive. Then this property should be interpreted as saying that the area under the graph of f between a and b is the area between a and c plus the area between c and b (see Figure 8).
Even and odd functions
Recall that a function f is called odd if it satisfies f(−x) = −f(x) and is called even if f(−x) = f(x).
Suppose f is an odd function and consider first just the integral from −a to 0. We make the substitution u = −x so du = −dx. Notice that if x = −a then u = a and if x = 0 then u = 0. Hence ∫_{−a}^{0} f(x) dx = −∫_{a}^{0} f(−u) du = ∫_{0}^{a} f(−u) du. Now as f is odd, f(−u) = −f(u), so the integral becomes −∫_{0}^{a} f(u) du. Now we can replace the dummy variable u with any other variable. So we can replace it with the letter x to give ∫_{−a}^{0} f(x) dx = −∫_{0}^{a} f(x) dx.
Now we split the integral into two pieces: ∫_{−a}^{a} f(x) dx = ∫_{−a}^{0} f(x) dx + ∫_{0}^{a} f(x) dx = −∫_{0}^{a} f(x) dx + ∫_{0}^{a} f(x) dx = 0.
The proof of the formula for even functions is similar. | http://en.m.wikibooks.org/wiki/Calculus/Definite_integral | 13 |
This HTML version of Think Bayes is provided for convenience, but it is not the best format for the book. In particular, some of the symbols are not rendered correctly.
You might prefer to read the PDF version.
Chapter 5 Odds and addends
One way to represent a probability is with a number between 0 and 1, but that’s not the only way. If you have ever bet on a football game or a horse race, you have probably encountered another representation of probability, called “odds.”
You might have heard expressions like “the odds are three to one against,” but you might not know what they mean. The odds in favor of an event are the ratio of the probability it will occur to the probability that it will not.
So if I think my team has a 75% chance of winning, I would say that the odds in their favor are three to one, because the chance of winning is three times the chance of losing.
You can write odds in decimal form, but it is most common to write them as a ratio of integers. So “three to one” is written 3:1.
When probabilities are low, it is more common to report the “odds against” rather than the odds in favor. For example, if I think my horse has a 10% chance of winning, I would say that the odds against are 9:1.
Probabilities and odds are different representations of the same information. Given a probability, you can compute the odds like this:
def Odds(p): return p / (1-p)
Given the odds in favor, in decimal form, you can convert to probability like this:
def Probability(o): return o / (o+1)
If you represent odds with a numerator and denominator, you can convert to probability like this:
def Probability2(yes, no): return yes / (yes + no)
When I work with odds in my head, I find it helpful to picture people at the track. If 20% of them think my horse will win, then 80% of them don’t, so the odds in favor are 20:80 or 1:4.
If the odds are 5:1 against my horse, then five out of six people think she will lose, so the probability of winning is 1/6.
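A quick check of these conversions using the functions defined above (assuming true division, as in Python 3 or with from __future__ import division in Python 2):

print(Odds(0.75))            # 3.0, i.e. odds of 3:1 in favor
print(Probability(3))        # 0.75
print(Probability2(1, 5))    # about 0.167, matching odds of 5:1 against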
5.2 The odds form of Bayes’s Theorem
In Chapter 1 I wrote Bayes's Theorem in the "probability form": p(H|D) = p(H) p(D|H) / p(D).
If we define ¬H as "hypothesis H is false", we can write the odds in favor of H like this: o(H) = p(H) / p(¬H).
Or writing o(H) for the odds in favor of H, the update becomes: o(H|D) = o(H) p(D|H) / p(D|¬H).
In words, this says that the posterior odds are the prior odds times the likelihood ratio. This is the “odds form” of Bayes’s Theorem.
This form is most convenient for computing a Bayesian update on paper, or in your head. For example, let’s go back to the cookie problem:
Suppose there are two bowls of cookies. Bowl 1 contains 30 vanilla cookies and 10 chocolate cookies. Bowl 2 contains 20 of each. You choose one of the bowls at random and, without looking, draw a cookie at random. It turns out to be a vanilla cookie. What is the probability that it came from Bowl 1?
The prior probability is 50%, so the prior odds are 1:1, or just 1. The likelihood ratio is 3/4 / 1/2, or 3/2. So the posterior odds are 3:2, which corresponds to probability 3/5.
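The same update, written out numerically (a sketch in plain Python rather than the thinkbayes classes):

prior_odds = 1.0                            # 1:1, since each bowl is equally likely
likelihood_ratio = (30 / 40) / (20 / 40)    # p(vanilla | Bowl 1) / p(vanilla | Bowl 2) = 3/2
posterior_odds = prior_odds * likelihood_ratio
posterior_prob = posterior_odds / (posterior_odds + 1)
print(posterior_odds, posterior_prob)       # 1.5 and 0.6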
5.3 Oliver’s blood
Here is another problem from MacKay’s Information Theory, Inference, and Learning Algorithms:
Two people have left traces of their own blood at the scene of a crime. A suspect, Oliver, is tested and found to have type ’O’ blood. The blood groups of the two traces are found to be of type ’O’ (a common type in the local population, having frequency 60%) and of type ’AB’ (a rare type, with frequency 1%). Do these data [the traces found at the scene] give evidence in favor of the proposition that Oliver was one of the people [who left blood at the scene]?
To answer this question, we need to think about what it means for data to give evidence in favor of (or against) a hypothesis. Intuitively, we might say that data favor a hypothesis if the hypothesis is more likely in light of the data than it was before.
In the cookie problem, the prior odds are 1:1, or probability 50%. The posterior odds are 3:2, or probability 60%. So we could say that the vanilla cookie is evidence in favor of Bowl 1.
The odds form of Bayes’s Theorem provides a way to make this intuition more precise. Again
Or dividing through by o(H):
The term on the left is the ratio of the posterior and prior odds. The term on the right is the likelihood ratio, also called the “Bayes factor”.
If the Bayes factor is greater than 1, that means that the data were more likely under H than under ¬H. And since the odds ratio is also greater than 1, that means that the odds are greater, in light of the data, than they were before.
If the Bayes factor is less than 1, that means the data were less likely under H than under ¬H, so the odds in favor of H go down.
Finally, if the Bayes factor is exactly 1, the data are equally likely under either hypothesis, so the odds do not change.
Now we can get back to the Oliver’s blood problem. If Oliver is one of the people who left blood at the crime scene, then he accounts for the ’O’ sample, so the probability of the data is just the probability that a random member of the population has type ’AB’ blood, which is 1%.
If Oliver did not leave blood at the scene, then we have two samples to account for. If we choose two random people from the population, what is the chance of finding one with type ’O’ and one with type ’AB’? Well, there are two ways it might happen: the first person we choose might have type ’O’ and the second ’AB’, or the other way around. So the total probability is 2 (0.6) (0.01) = 1.2%.
The likelihood of the data is slightly higher if Oliver is not one of the people who left blood at the scene, so the blood data is actually evidence against Oliver’s guilt.
This example is a little contrived, but it is an example of the counterintuitive result that data consistent with a hypothesis are not necessarily in favor of the hypothesis.
If this result is so counterintuitive that it bothers you, this way of thinking might help: the data consist of a common event, type ’O’ blood, and a rare event, type ’AB’ blood. If Oliver accounts for the common event, that leaves the rare event still unexplained. If Oliver doesn’t account for the ’O’ blood, then we have two chances to find someone in the population with ’AB’ blood. And that factor of two makes the difference.
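And the corresponding Bayes factor for the Oliver's blood problem, again as a plain-Python sketch of the numbers in the text:

p_data_given_oliver = 0.01                 # only the 'AB' trace needs explaining
p_data_given_not_oliver = 2 * 0.6 * 0.01   # one random 'O' and one random 'AB', in either order
bayes_factor = p_data_given_oliver / p_data_given_not_oliver
print(bayes_factor)   # about 0.83 -- less than 1, so the data are (weak) evidence against Oliver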
The fundamental operation of Bayesian statistics is Update, which takes a prior distribution and a set of data, and produces a posterior distribution. But solving real problems usually involves a number of other operations, including scaling, addition and other arithmetic operations, max and min, and mixtures.
This chapter presents addition and max; I will present other operations as we need them.
Dungeons and Dragons is a role-playing game where the results of players’ decisions are usually determined by rolling dice. In fact, before game play starts, players generate each attribute of their characters—strength, intelligence, wisdom, dexterity, constitution, and charisma—by rolling three six-sided dice and adding them up.
So you might be curious to know the distribution of this sum. There are two ways you might compute it: by simulation, using a random sample of dice rolls, or by enumeration, working out every possible outcome. Either way, it helps to start with a Pmf that represents a single die:
class Die(thinkbayes.Pmf):
    def __init__(self, sides):
        d = dict((i, 1) for i in xrange(1, sides+1))
        thinkbayes.Pmf.__init__(self, d)
        self.Normalize()
Now I can create a six-sided die:
d6 = Die(6)
dice = [d6] * 3
three = thinkbayes.SampleSum(dice, 1000)
def RandomSum(dists):
    total = sum(dist.Random() for dist in dists)
    return total

def SampleSum(dists, n):
    pmf = MakePmfFromList(RandomSum(dists) for i in xrange(n))
    return pmf
The drawback of estimating this distribution by simulation is that it is only approximately correct. As the number of samples increases the approximation improves, but it converges slowly. The other approach is to enumerate all pairs of values and compute the sum and probability of each pair. This is implemented in Pmf.__add__:
def __add__(self, other):
    pmf = Pmf()
    for v1, p1 in self.Items():
        for v2, p2 in other.Items():
            pmf.Incr(v1+v2, p1*p2)
    return pmf
self and other can be Pmfs, or anything else that provides Items. The result is a new Pmf.
And here’s how it’s used:
three_exact = d6 + d6 + d6
When you apply the + operator to a Pmf, Python invokes __add__.
The approximate and exact distributions are shown in Figure 5.1.
The code from this section is available from http://thinkbayes.com/dungeons.py.
Having generated a Dungeons and Dragons character, I would be particularly interested in the character’s best attributes, so I might wonder what is the chance of getting an 18 in one or more attributes, or more generally what is the distribution of the best attribute.
There are three ways to compute the distribution of a maximum: by simulation, by enumeration, or by exponentiating the cumulative distribution function. All three appear below.
The code to simulate maxima is almost identical to the code for simulating sums:
def RandomMax(dists):
    total = max(dist.Random() for dist in dists)
    return total

def SampleMax(dists, n):
    pmf = MakePmfFromList(RandomMax(dists) for i in xrange(n))
    return pmf
All I did was replace “sum” with “max”. And the code for enumeration is almost identical, too:
def PmfMax(pmf1, pmf2):
    res = thinkbayes.Pmf()
    for v1, p1 in pmf1.Items():
        for v2, p2 in pmf2.Items():
            res.Incr(max(v1, v2), p1*p2)
    return res
In fact, you could generalize this function by taking the appropriate operator as a parameter.
The only problem with this algorithm is that if each Pmf has n values, the run time is proportional to n².
If we convert the Pmfs to Cdfs, we can do the same calculation in linear time! The key is to remember the definition of the cumulative distribution function: CDF(x) = p(X ≤ x),
where X is a random variable that means “a value chosen randomly from this distribution.” So, for example, CDF(5) is the probability that a value from this distribution is less than or equal to 5.
If I draw X from CDF1 and Y from CDF2, and compute the maximum Z = max(X, Y), what is the chance that Z is less than or equal to 5? Well, in that case both X and Y must be less than or equal to 5.
If the selections of X and Y are independent, CDF3(5) = CDF1(5) CDF2(5),
where CDF3 is the distribution of Z. I chose the value 5 because I think it makes the formulas easy to read, but we can generalize for any value of z: CDF3(z) = CDF1(z) CDF2(z).
In the special case where we draw n values from the same distribution, the distribution of the maximum is CDFmax(z) = CDF(z)^n.
So to find the distribution of the maximum of n values,
we can enumerate the probabilities in the given Cdf
and raise them to the nth power.
def Max(self, n):
    cdf = self.Copy()
    cdf.ps = [p**n for p in cdf.ps]
    return cdf
Finally, here’s an example that computes the distribution of your character’s best attribute:
best_attr_cdf = three_exact.Max(6)
best_attr_pmf = best_attr_cdf.MakePmf()
| http://www.greenteapress.com/thinkbayes/html/thinkbayes006.html | 13
56 | Problem: Given two circles and a point on one of the circles. Construct a circle tangent to the two circles with one point of tangency being the designated point.
First, we will begin with a large circle and a smaller circle inside the large circle as shown below.
We will explore the tangent circle to these two circles by keeping the smaller circle external to the tangent circle and the tangent circle internal to the large circle. The directions for the construction in Geometer's Sketch Pad are in the Assignment instructions.
Here is the resulting product for the tangent circle of two given circles. When exploring, notice the path of the center of the tangent circle as it moves around the smaller circle. The red circle is the circle tangent to both of the given circles. Click the link below to start your exploration.
As you can see from GSP, the locus of the center of the tangent circle creates an ellipse. The foci of the ellipse are the center of the two original circles. Did you have any other observations? Below are a few observations.
Extension: The blue dotted line in the link below is always tangent to the ellipse we explored above. The line is part of our original construction. Click below to see the tangency lines.
Secondly, we will look at the tangent circle to two intersecting circles as shown below.
The construction is the same as our first construction, with the exception of moving the circles to be intersecting. You can explore below. Notice the locus of the center of the tangent circle.
The locus of the center of the tangent circle formed an ellipse again. Did you observe that the ellipse passes through the two points of intersection of the original two circles? The foci were still the centers of the two original circles. Did you also notice that the tangent circle is internal to the larger circle and then becomes internal to the smaller circle? Below are more observations.
Thirdly, we will look at the tangent circle (if it exists) to circles that are disjoint as shown below.
The construction is the same as our first construction, with the exception of the circles being disjoint. You can explore below. Notice the locus of the center of the tangent circle when it exists.
Did you notice what path the locus of the center of the tangent circle took? You should have noticed that it formed a hyperbola; here the difference of the distances from the tangent circle's center to the two given centers stays constant, which is the defining property of a hyperbola. Also, notice that there was not always a tangent circle. The radius of the tangent circle increased rapidly when the circle got close to becoming the tangent line between the two circles. At one point, the "tangent circle" becomes a tangent line. The asymptotes of the hyperbola are where the tangent circle becomes the tangent line to both of the original circles.
Also, notice that as the original circles get closer in size, the two sections of the hyperbola get farther apart. Did you have any other observations?
| http://jwilson.coe.uga.edu/EMT668/EMAT6680.2002/Coffman/Assignment%207/write-up.html | 13
68 | Force is the effort one body exerts upon another and is directly related to motion. Motion is produced only through the application of force. Conversely, motion is reduced or stopped by the application of force.
Body force is generated by muscle contraction. The amount of force available for use varies inversely with the speed of muscle movement. The faster a muscle contracts during a movement, the less force is available to overcome a resistance. For example, you can pick up a heavier weight by lifting slowly. If you try to lift it quickly, too much muscle tension is used for rapid contraction and too little tension is available to lift the weight. When executing a Taekwondo technique, quick muscle contraction is used initially to develop speed of movement, but, as the blow nears the target, muscle tension is shifted toward overcoming the resistance of the target. If only a light resistance is expected, such as in light-contact free-sparring, more muscle tension may be reserved for speed. To transfer maximum force to the target, the body is tensed just as contact is made, and the tension is maintained for a split second afterward.
If a muscle is stretched, it will contract more forcibly than if it had not been stretched, but the stretching must occur immediately preceding the contraction. Fortunately, this naturally occurs as different muscle groups come into play.
Here is the math (a short numerical example follows the list):
- v = s/t Constant or average velocity equals distance divided by time. When people talk about speed (how fast your fist or foot is moving) they are talking about this.
- M = mv Momentum equals mass times velocity. This is how hard you hit.
- a = (V2-V1)/t Acceleration is the change in velocity over time. Acceleration equals final velocity minus initial velocity divided by time.
- F = ma Force equals mass times acceleration.
- P = Fv Power equals force times the constant velocity.
- KE = (1/2)mv^2 Kinetic energy equals one-half of mass times velocity squared. This is a favorite of the proponents of velocity as it places much greater value on an increase in velocity than on an increase in mass.
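As a quick numerical sketch of these formulas, the Python snippet below uses made-up, order-of-magnitude values for a punch rather than measurements from the text:

# Hypothetical punch: effective striking mass 2.0 kg, fist travelling 0.5 m in 0.1 s.
m = 2.0      # mass of fist and forearm in kg (assumed)
s = 0.5      # distance travelled in m (assumed)
t = 0.1      # time taken in s (assumed)

v = s / t                 # average velocity (m/s)
momentum = m * v          # M = mv (kg*m/s)
a = (v - 0.0) / t         # acceleration from rest (m/s^2)
F = m * a                 # force (newtons)
P = F * v                 # power (watts)
KE = 0.5 * m * v**2       # kinetic energy (joules)
# (For simplicity, v is treated as both the average and the final velocity.)

print(v, momentum, a, F, P, KE)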
Newton's Laws of Motion
- First Law of Motion. A body at rest tends to remain at rest and a body in motion tends to remain in motion, unless acted upon by external forces. This tendency to resist a change in state is called inertia. Since an opponent who is in motion tends to remain in motion, it is easier for a defender to use that motion in his or her favor rather than trying to stop the motion, such as pulling an opponent who is charging you. Since an opponent who is at rest tends to remain at rest, it is difficult for the opponent to avoid an attack quickly.
- Second Law of Motion. When a force acts upon a mass, the mass acquires a certain acceleration proportional to, and in the direction of, the force acting upon it, and the acceleration is inversely proportional to the magnitude of the mass. In other words, a larger, heavier person has an advantage over a smaller, lighter person.
- Third Law of Motion. For every action there is an equal and opposite reaction. Thus, when you punch an opponent with a certain force, an equal but opposite force is applied against you by the opponent's body. Therefore, you must have a tensed body and a firm, stable stance so you may withstand the force. Hopefully, the opponent will not be prepared and thus must absorb the full force of the punch. The third law also applies to technique execution. When one arm is pulled back quickly, an equal but opposite action occurs in the opposite arm. If that arm is executing a punch, the force will combine to increase the force of the punch.
Centripetal and Centrifugal Force
Two other forces that come into play during the practice of Taekwondo are centripetal and centrifugal force. Centripetal force is the force that draws objects into a spinning whirlpool. Centrifugal force is the force that throws objects off a spinning top. These forces come into play when using spinning kicks and releases that use a spinning motion.
Potential and Kinetic Energy
Potential energy is energy at rest; it is stored and available for use. Kinetic energy is energy in motion; it is consumed as it is used. When throwing an opponent, you use your kinetic energy to lift the opponent off the floor against gravity. At the peak of the throw, potential energy is stored within the opponent. To complete the throw, you release the opponent and the opponent's potential energy changes to kinetic energy as gravity forces the opponent to the floor.
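A small sketch of this energy conversion, assuming (purely for illustration) an opponent of mass 70 kg lifted 1.0 m:

import math

g = 9.81      # gravitational acceleration, m/s^2
m = 70.0      # opponent's mass in kg (assumed)
h = 1.0       # height of the lift in m (assumed)

PE = m * g * h                   # potential energy stored at the peak of the throw (J)
v_impact = math.sqrt(2 * g * h)  # speed when that energy has become kinetic at the floor
print(PE, v_impact)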
An object in motion tends to remain in motion until acted upon by another force. An attack must be stopped, deflected, or avoided. The attack may come in either a straight line or an arc. Stopping the attack by meeting it head-on may be painful and you may not always be able to get out of its way. Therefore, the best strategy is to deflect the attack. A small force that cannot stop a large force may easily deflect the large force. Deflections should be used in a circular motion.
An unskilled person will hurl their body, more or less uncontrolled in one direction, with their strike. They are functioning more or less like a falling rock. A skilled martial artist who understands the principles of energy will not move in this manner. If one part of their body goes forward, another goes backward (Yin/Yang) and the forces are balanced. Therefore, pulling this person in the direction of their strike is difficult, because their inertia is balanced and not focused in one direction.
For greater striking force, you should strike the opponent on the same line as his/her inertia. If a strong part of your body is striking a weaker part of the opponent, there is no problem. However, if your weapon is not strong enough, relative to the target, this method will result in injury to yourself. If you meet force on a line perpendicular to it, your small force can deflect a larger force.
For every action there is an equal and opposite reaction. Taekwondo makes use of this principle to develop powerful techniques.
One way reaction force is used is when the pivot foot thrusts against the ground when shifting. The reaction force of this thrust is returned through the leg and hip to drive the body forward.
Another way reaction force is used is through body rotation. Body rotation is created by anchoring one side of the body and using it as a pivot. The other side of the body is driven forward by the rear leg and hip as a reaction of the supporting leg's thrusting against the ground.
Hip snap may also generate a reaction force. In hip snap, the hip and leg motion, and the corresponding reaction force, are a short-term, small-scale pulse of power immediately followed by recoil in the opposite direction. Power is applied in a sharp pulse of energy of very short duration. Hip recoil also helps maintain stability.
These methods of generating reaction force are based on the fact that energy is being directed into the floor, which bounces almost all of the force back into the body rather than absorbing it. Another way to generate a reaction force is when one hand performs a technique while the other hand is withdrawn in the opposite direction. The speed and scale of the movements of both hands are matched. This makes use of reaction force in two different ways. First, the pull back hand helps rotation to occur, because the force of the hand and arm being pulled back forcefully creates a forward movement on the other side of the body. Second, the pull back serves as a counter-balance for the technique being extended, so that if it misses the target, stability may be maintained.
Reaction force is also applied when actually striking a target with a technique. When a technique is finely focused, the body is so firmly connected to the ground that little or no force is accepted back into the body. If a technique quickly recoils after impact, none of the reaction force from the target can transfer back into the striking arm or leg and the impact duration is shortened, both of which increase impact force. | http://tkdtutor.com/articles/topics/techniques/199-power/1213-force?showall=&start=5 | 13 |
55 | The path of a particle moving in a plane need not trace out the graph of a function, hence we cannot describe the path by expressing y directly in terms of x. An alternate way to describe the path of the particle is to express the coordinates of its points as functions of a third variable using a pair of equations
x = f(t),    y = g(t).
Equations of this form are called parametric equations for x and y, and the unknown t is called a parameter. The parameter t may represent time in some instances, an angle in other situations, or the distance a particle has traveled along the path from a designated starting point.
An easy example of a parametric representation of a curve is obtained by using basic trigonometry to obtain parametric equations of a circle of radius 1 centered at the origin. We have the relationships between a point (x,y) on the circle and an angle t as shown in the following figure. By elementary trigonometry we have the parametric equations
x = cos(t),    y = sin(t).
As t goes from 0 to 2π the corresponding points trace out the circle in a counter clockwise direction. The following animation illustrates the process. (For a related demonstration for generation of sine and cosine curves see the Circular Functions demo.)
(Animation: generating a circle.)
Curves based on circles
A famous curve that was named by Galileo in 1599 is called a cycloid. A cycloid is the path traced out by a point on the circumference of a circle as the circle rolls (without slipping) along a straight line. A cycloid can be drawn by a pencil (chalk or marker) attached to a circular lid which is rolled along a ruler. The following animation illustrates the generation of a cycloid.
If the circle that is rolled has radius a, then the parametric equations of the cycloid are
x = a(t - sin(t)),    y = a(1 - cos(t)),
where the parameter t is the angle through which the circle was rolled. As in the case of the circle, these parametric equations can be derived using elementary trigonometry. To see the basics of the derivation click on the following: The Equations of a Cycloid.
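As a small side sketch (not part of the original demo, which uses DERIVE and Matlab files; this version is Python with NumPy and matplotlib, and the radius and parameter range are arbitrary choices), the cycloid can be generated directly from these equations:

import numpy as np
import matplotlib.pyplot as plt

a = 1.0                               # radius of the rolling circle (arbitrary)
t = np.linspace(0, 4 * np.pi, 400)    # roll the circle through two full turns

x = a * (t - np.sin(t))               # parametric equations of the cycloid
y = a * (1 - np.cos(t))

plt.plot(x, y)
plt.axis("equal")
plt.title("Cycloid traced by a point on a rolling circle")
plt.show()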
For some history related to cycloids click on the following: St.Andrews-cycloid
Take a large circle centered at the origin. Place a smaller circle tangent to the original circle at the point where it crosses the positive x-axis and outside of the original circle. Identify the point of tangency. See the next figure.
Next we roll the smaller circle around the larger circle and follow the path of the point of tangency. The resulting curve is called an epicycloid. The shape of the curve generated in this manner depends on the relationship between the radius of the large circle and the small circle. The following animations illustrate three cases.
With a careful analysis we can show that the parametric equations of an epicycloid using a large circle of radius a and a small circle of radius b, where a > b, are
x = (a+b)cos(t) - b cos((a+b)t/b),    y = (a+b)sin(t) - b sin((a+b)t/b).
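A similar Python sketch (again an illustration, not the demo's own DERIVE or Matlab file; the radii are arbitrary integers) also shows the closure question raised later in the teaching notes: for integer radii the curve closes after b/gcd(a, b) trips of the parameter t around the large circle.

import math
import numpy as np
import matplotlib.pyplot as plt

a, b = 5, 2                      # hypothetical integer radii, a > b
k = b // math.gcd(a, b)          # trips around the big circle needed for the curve to close
t = np.linspace(0, 2 * np.pi * k, 2000)

x = (a + b) * np.cos(t) - b * np.cos((a + b) * t / b)
y = (a + b) * np.sin(t) - b * np.sin((a + b) * t / b)

plt.plot(x, y)
plt.axis("equal")
plt.title(f"Epicycloid, a={a}, b={b} (closes after {k} revolutions)")
plt.show()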
The epicycloid has been studied by such luminaries as Leibniz, Euler, Halley, Newton and the Bernoullis. The epicycloid curve is of special interest to astronomers and the design of cog-wheels with minimum friction. To experiment with epicycloids see the files available at the end of this module. For more information and an on-line animator click on the following link: animator for epicycloid.
Here take a large circle centered at the origin. Place a smaller circle tangent to the original circle at the point where it crosses the positive x-axis and inside the original circle. Identify the point of tangency. See the next figure.
Roll the smaller circle around the larger circle and follow the path of the point of tangency. The resulting curve is called an hypocycloid. The shape of the curve generated in this manner depends on the relationship between the radius of the large circle and the small circle. The following two animations illustrate the generation of hypocycloids. (Also see the animation at the beginning of this demo.)
Again with a careful analysis we can show that the parametric equations of a hypocycloid using a large circle of radius a and a small circle of radius b, where a > b, are
x = (a-b)cos(t) + b cos((a-b)t/b),    y = (a-b)sin(t) - b sin((a-b)t/b).
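The corresponding Python sketch for the hypocycloid (same assumptions as above; the choice a = 4b below is the classic case that produces the four-cusped astroid):

import numpy as np
import matplotlib.pyplot as plt

a, b = 4.0, 1.0                  # a = 4b gives the astroid
t = np.linspace(0, 2 * np.pi, 1000)

x = (a - b) * np.cos(t) + b * np.cos((a - b) * t / b)
y = (a - b) * np.sin(t) - b * np.sin((a - b) * t / b)

plt.plot(x, y)
plt.axis("equal")
plt.title("Hypocycloid (astroid when a = 4b)")
plt.show()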
I have used demos of this type in several settings.
In calculus class we define parametric equations and have the students plot a few by hand before we do the demonstration. The demonstration then wows them as they realize how difficult it would be to plot these objects by hand. For the epicycloid I usually use two rolls of tape with different radii, identify a point on the outer one with a magic marker, and then roll it around the other and ask the students what they think the path would look like. Then using DERIVE (see downloads available below) we plot epicycloids for various radii and ask questions about what they would expect and (as I already pointed out in discussion) the numbers of times we need to rotate (i.e. what the parameters should be) in order to obtain a closed epicycloid curve. We try to get them to state a little theorem about this.
Each semester for calculus we have four computer labs. Our students must go to the lab, perform the exercises or experiments on the computer using DERIVE, and then write up their results. One of these labs requires students to plot a collection of functions using parametric equations and polar coordinates. The main purpose of the lab is to familiarize students with the parametric plotting capabilities of DERIVE. This lab is assigned after students have seen some of the demo material shown above.
I have also used this demo or a variation thereof during various admission (student recruiting) days. Generally, here, the audience consists of prospective students and their parents. It is relatively easy to explain what the cycloids are and then it is exciting and informative to see them plotted.
For DERIVE the following file was supplied by Anthony Berard and can be downloaded by clicking on the file name. (See the imbedded instructions concerning change of scale.)
For Matlab the following files were written for this demo and can be downloaded by clicking on the file name: epicycloid.m, hypocycloid.m.
This demo was submitted by
Department of Mathematics
and is included in Demos with Positive Impact with his permission. | http://mathdemos.org/mathdemos/cycloid-demo/ | 13
95 | Trees are the largest plants. They are not a single taxon (unit of biological classification) but include members of many plant taxa. A tree can be defined as a large, perennial (living more than one or two years), woody plant. Although there is no set definition regarding minimum size, the term generally applies to plants at least 6 meters (20 feet) high at maturity and, more importantly, having secondary branches supported on a single, woody main stem or trunk.
Compared with most other plant forms, trees are tall and long-lived. A few species of trees grow to 100 meters tall, and some can live for several thousand years.
Trees are important components of the natural landscape and significant elements in landscaping and agriculture, supplying orchard crops (such as apples and pears). Trees are important for other plants, for animals, and for the entire web of life on earth, including humans. Trees also play an important role in many of the world's religions and mythologies.
As plants that span many different orders and families of plants, trees show a wide variety of growth form, leaf type and shape, bark characteristics, reproductive structures, and so forth.
The basic parts of a tree are the roots, trunk(s), branches, twigs, and leaves. Tree stems consist mainly of support and transport tissues (xylem and phloem). Xylem is the principal water conducting tissue, and phloem is the tissue that carries organic materials, such as sucrose. Wood consists of xylem cells, and bark is made of phloem and other tissues external to the vascular cambium.
Trees may be broadly grouped into exogenous and endogenous trees according to the way in which their stem diameter increases. Exogenous trees, which comprise the great majority of modern trees (all conifers and broadleaf trees), grow by the addition of new wood outwards, immediately under the bark. Endogenous trees, mainly in the monocotyledons (e.g. palms), grow by addition of new material inwards.
As an exogenous tree grows, it creates growth rings. In temperate climates, these are commonly visible due to changes in the rate of growth with temperature variation over an annual cycle. These rings can be counted to determine the age of the tree, and used to date cores or even wood taken from trees in the past; this practice is known as the science of dendrochronology. In some tropical regions with constant year-round climate, growth is continuous and distinct rings are not formed, so age determination is impossible. Age determination is also impossible in endogenous trees.
The roots of a tree are generally embedded in earth, providing anchorage for the above-ground biomass and absorbing water and nutrients from the soil. Above ground, the trunk gives height to the leaf-bearing branches, aiding in competition with other plant species for sunlight. In many trees, the arrangement of the branches optimizes exposure of the leaves to sunlight.
Not all trees have all the plant organs or parts mentioned above. For example, most palm trees are not branched, the saguaro cactus of North America has no functional leaves, and tree ferns do not produce bark. Based on their general shape and size, all of these are nonetheless generally regarded as trees.
Indeed, sometimes size is the more important consideration. A plant form that is similar to a tree, but generally has smaller, multiple trunks and/or branches that arise near the ground, is called a shrub. However, no sharp differentiation between shrubs and trees is possible. Given their small size, bonsai plants would not technically be "trees," but one should not confuse reference to the form of a species with the size or shape of individual specimens. A spruce seedling does not fit the definition of a tree, but all spruces are trees. Bamboos by contrast, do show most of the characteristics of trees, yet are rarely called trees.
Types of trees
The earliest trees were tree ferns and horsetails, which grew in vast forests in the Carboniferous Period; tree ferns still survive, but the only surviving horsetails are not of tree form. Later, in the Triassic Period, conifers, ginkgos, cycads, and other gymnosperms appeared, and subsequently flowering plants (or angiosperms) appeared in the Cretaceous Period. Angiosperms (such as an apple tree) have their reproductive organs in flowers and cover their seeds in a true fruit, whereas gymnosperms bear their seeds on the scales of a cone or cone-like structure (such as a spruce tree).
Most trees today are classified as either broadleaf or conifer. Broadleafs (Dicotyledons or "dicots") are flowering plants which bear two-lobed seeds inside of fruits or seed cases. They include oaks, willows, apple trees, magnolia, eucalyptus, and many others. Broadleafs grow mainly from the tropics through the temperate zones in both the Southern and the Northern hemispheres. Most in the tropics and subtropics are evergreen, keeping their leaves until new ones replace them; while most in colder regions are deciduous, losing their leaves in fall and growing new ones in spring each year.
Conifers are gymnosperms. They do not have true flowers and bear their single-lobed seeds "naked," not covered in a fruit or seed case. In most cases, their leaves are small and needle-like. They include pines, firs, cypresses, and others. Most conifers grow in the Northern Hemisphere, from the temperate zone north to around the Arctic Circle. Almost all of them are evergreen.
Palms are the third largest tree group. They are also a type of angiosperm or flowering plant, and specifically Monocotyledons or monocots, meaning they have one cotyledon, or embryonic leaf, in their seeds (unlike Dicotyledons, which typically have two cotyledons). They grow mostly in the tropics and are distinctive for their lack of branches and the large leaves growing directly from the top of the trunk, as well as for growing new material inward.
Smaller tree groups include members of the Agave family and the Cycad family and the ginkgo and tree ferns. The saguaro cactus and some species of bamboo (a grass) are sometimes considered to be trees because of their size.
Deciduous versus evergreen
In botany, deciduous plants, principally trees and shrubs, are those that lose all of their foliage for part of the year. In some cases, the foliage loss coincides with the incidence of winter in temperate or polar climates, while others lose their leaves during the dry season in climates with seasonal variation in rainfall. The converse of deciduous is evergreen.
Many deciduous plants flower during the period when they are leafless, as this increases the effectiveness of pollination. The absence of leaves improves wind transmission of pollen in the case of wind-pollinated plants, and increases the visibility of the flowers to insects in insect-pollinated plants. This strategy is not without risks, as the flowers can be damaged by frost, or in dry season areas, result in water stress on the plant.
An evergreen plant is a plant that retains its foliage all year round. Leaf persistence in evergreen plants may vary from a few months (with new leaves constantly being grown and old ones shed), to only just over a year (shedding the old leaves very soon after the new leaves appear), up to a maximum of several decades, such as 45 years in Great Basin Bristlecone Pine Pinus longaeva (Ewers and Schmid 1981). However, very few species show leaf persistence of over 5 years.
In tropical regions, most rainforest plants are evergreen, replacing their leaves gradually throughout the year as the leaves age and fall, whereas species growing in seasonally arid climates may be either evergreen or deciduous. Most warm temperate climate plants are also evergreen. In cool temperate climates, fewer plants are evergreen, with a predominance of conifers, as few evergreen broadleaf plants can tolerate severe cold below about -25°C.
A small group of trees growing together is called a grove or copse, and a landscape covered by a dense growth of trees, in which they are the dominant influence, is called a forest. Several biotopes (an area of uniform environmental, physical conditions providing habitat for a specific assemblage of plants and animals) are defined largely by the trees that inhabit them; examples are rainforest and taiga. A landscape of trees scattered or spaced across grassland (usually grazed or burned over periodically) is called a savanna.
Most trees grow in forests. There are different types of forests around the world, mainly depending on the climate. Some main forests are identified below.
Tropical rainforests grow near the equator, where the climate is constantly warm and the rainfall is heavy all year round. Almost all of the trees in tropical rain forests are evergreen broadleafs. They have a much larger variety of trees than the other types of forests and also support many other types of plants and animals. The largest tropical rain forests are found in South America, Central America, Africa, and Southeast Asia.
Tropical seasonal forests
Tropical seasonal forests grow in regions of the tropics and subtropics that have a definite wet and dry season each year and a somewhat cooler climate than the tropical rainforests. Most of their trees are broadleafs with some being evergreen and some deciduous, shedding their leaves in the dry season. Tropical seasonal forests are found in Central America, South America, Africa, India, China, Australia, and on many islands in the Pacific Ocean.
Temperate deciduous forests
Temperate deciduous forests grow in regions that have a temperate climate with warm summers and cold winters. Most of the trees shed their leaves in the fall. Temperate deciduous forests are found in North America, Europe, and Northeast Asia.
Temperate evergreen forests
Temperate evergreen forests grow in some coastal and mountain regions. In most cases, their trees are conifers, but in Australia and New Zealand they are broadleafs. Temperate evergreen forests are also found in Europe, Asia, and North and South America.
In the temperate evergreen forests, there are almost always some deciduous trees, and in the deciduous forests there are almost always some evergreens. Some forests are classified as mixed deciduous-evergreen if the numbers of each are close to the same.
Boreal forests grow in northern (the word boreal means northern) regions with very cold winters and short growing seasons. Most of their trees are evergreen conifers, with a few broadleafs such as aspen. Boreal forests are found in northern North America, Europe, and Asia.
Savannas occur in a geographic region where there is not enough moisture to support a large density of trees. In savannas, trees grow individually or in small clumps with most of the land covered in grass or other low vegetation. Savannas are found in both tropical and temperate zones worldwide.
The importance of trees
Trees, like all plants, capture the energy of sunlight and through the process of photosynthesis convert it into chemical energy, which they use for their own growth and life processes. This energy is passed on, supporting a large community of living things. Many animals eat the fruits, seeds, leaves, sap, or even the wood of trees. On the forest floor, the fallen leaves decompose, thus supporting microorganisms, mushrooms, worms, insects, and other plants and animals. A layer of soil is built up and protected by the trees' roots. Besides food, trees also provide many species of animals with habitat, nesting space, and protection from predators.
Trees help to modify the climate, providing shade in hot weather and shelter from the wind. In some places, they help to cause more rainfall and condensation of fog. The forest floor holds water from rain and snow, helping to lessen the effects of flooding and drought. Trees can also hold snowfall in place to prevent avalanches and slow the spring melt.
Trees and humans
From the beginning of humankind, trees have provided people with food, in the form of fruits and nuts, and wood for fires, tools, and shelters. Trees also shade homes and act as windbreaks to protect homes, and they help prevent soil erosion. Many useful products come from trees, including rubber, cork, turpentine, tannic acid (used for making leather), and medicines such as quinine.
In the Old Testament or the Hebrew Bible ("Tanakh"), trees provide symbolism in the form of the Tree of Life and the Tree of the Knowledge of Good and Evil. In Buddhism, the Bodhi tree is the one under which Siddhartha Gautama (Buddha) received enlightenment. The Bodhi tree belongs to the Sacred Figs (Ficus religiosa), which are sacred to Hindus, Jains, and Buddhists. In some religions, such as Hinduism, trees are said to be the homes of tree spirits.
Trees of mythology include the Norse world tree Yggdrasil and the Austras Koks of Latvian mythology. In Norse mythology, the first humans were made from an ash and an elm. In Chinese mythology, there is a peach tree which grows one fruit every three thousand years, and the eating of the peach is to grant immortality. In Greek mythology, Eros makes Apollo fall in love with a nymph, Daphne, who hates him. As she runs away from him, she runs to the river and tells it to turn her into a tree. She becomes a bay tree.
Human Effect on Trees
Cultivation. From ancient days, people have planted and protected trees that they have found useful. Over time, many tree species have been modified by artificial selection and new varieties have come into being. Trees have also been planted in new places. Some of the first trees to be cultivated were the apple from central Asia, the fig and the date palm from western Asia, the mango from India, and the olive from the Mediterranean. The origins of the coconut are unknown, but it was spread worldwide by the Polynesians and other sea-faring peoples. Cacao and avocado trees were first cultivated in the New World. This process has greatly accelerated in modern times and many species of trees that people find useful or beautiful have been transplanted and are now growing far from their origins. (See Redwood for an example of a tree that has been planted in different regions.)
Deforestation. Since about the time of the beginning of agriculture and the domestication of animals, forests have suffered "deforestation," the loss of trees and conversion to non-forest, due to human activities. Forests have been cut down or burned to make room for farmland and villages. The grazing of sheep, goats, and other domestic animals killed young trees and turned forest into grassland or desert. As the human population increased, more trees were cut down for lumber and for fuel. By the 1800s, a large part of the world's forests had been lost. The process of deforestation is still going on in many parts of the world. About half of the world's forested area has been lost to deforestation.
Conservation and reforestation. In the second half of the nineteenth century, the conservation movement began in the United States and other countries calling for the preservation of forests, along with other natural resources. In 1872, Yellowstone National Park was established as the world's first national park. The conservation movement spread over the world and today there are over 7,000 national parks, nature reserves, and wilderness areas worldwide, protecting an area about the size of the mainland United States, much of it forest. The effort to protect forests is ongoing especially for the tropical rainforests, which are mostly located in poorer countries, where there is much pressure to utilize forested areas for the needy and growing populations.
Besides preservation, there is also a movement to replant trees and restore forests for both their environmental and economic benefits. This is being carried out by governments, by the United Nations, by non-profit organizations, by private landowners, and by concerned individuals in both rich and poor countries.
Major tree genera
Flowering plants (Magnoliophyta/Angiosperms)
Dicotyledons (Magnoliopsida; broadleaf or hardwood trees)
- Altingiaceae (Sweetgum family)
- Sweetgum, Liquidambar species
- Anacardiaceae (Cashew family)
- Cashew, Anacardium occidentale
- Mango, Mangifera indica
- Pistachio, Pistacia vera
- Sumac, Rhus species
- Lacquer tree, Toxicodendron verniciflua
- Annonaceae (Custard apple family)
- Cherimoya Annona cherimola
- Custard apple Annona reticulata
- Pawpaw Asimina triloba
- Soursop Annona muricata
- Apocynaceae (Dogbane family)
- Pachypodium Pachypodium species
- Aquifoliaceae (Holly family)
- Holly, Ilex species
- Araliaceae (Ivy family)
- Kalopanax, Kalopanax pictus
- Betulaceae (Birch family)
- Alder, Alnus species
- Birch, Betula species
- Hornbeam, Carpinus species
- Hazel, Corylus species
- Bignoniaceae (family)
- Catalpa, Catalpa species
- Cactaceae (Cactus family)
- Saguaro, Carnegiea gigantea
- Cannabaceae (Cannabis family)
- Hackberry, Celtis species
- Cornaceae (Dogwood family)
- Dogwood, Cornus species
- Dipterocarpaceae family
- Garjan Dipterocarpus species
- Sal Shorea species
- Ericaceae (Heath family)
- Arbutus, Arbutus species
- Eucommiaceae (Eucommia family)
- Eucommia Eucommia ulmoides
- Fabaceae (Pea family)
- Acacia, Acacia species
- Honey locust, Gleditsia triacanthos
- Black locust, Robinia pseudoacacia
- Laburnum, Laburnum species
- Brazilwood, Caesalpinia echinata
- Fagaceae (Beech family)
- Chestnut, Castanea species
- Beech, Fagus species
- Southern beech, Nothofagus species
- Tanoak, Lithocarpus densiflorus
- Oak, Quercus species
- Fouquieriaceae (Boojum family)
- Boojum, Fouquieria columnaris
- Hamamelidaceae (Witch-hazel family)
- Persian Ironwood, Parrotia persica
- Juglandaceae (Walnut family)
- Walnut, Juglans species
- Hickory, Carya species
- Wingnut, Pterocarya species
- Lauraceae (Laurel family)
- Cinnamon Cinnamomum zeylanicum
- Bay Laurel Laurus nobilis
- Avocado Persea americana
- Lecythidaceae (Paradise nut family)
- Brazil Nut Bertholletia excelsa
- Lythraceae (Loosestrife family)
- Crape-myrtle Lagerstroemia species
- Magnoliaceae (Magnolia family)
- Tulip tree, Liriodendron species
- Magnolia, Magnolia species
- Malvaceae (Mallow family; including Tiliaceae and Bombacaceae)
- Baobab, Adansonia species
- Silk-cotton tree, Bombax species
- Bottletrees, Brachychiton species
- Kapok, Ceiba pentandra
- Durian, Durio zibethinus
- Balsa, Ochroma lagopus
- Cacao, (cocoa), Theobroma cacao
- Linden (Basswood, Lime), Tilia species
- Meliaceae (Mahogany family)
- Neem, Azadirachta indica
- Bead tree, Melia azedarach
- Mahogany, Swietenia mahagoni
- Moraceae (Mulberry family)
- Fig, Ficus species
- Mulberry, Morus species
- Myristicaceae (Nutmeg family)
- Nutmeg, Mysristica fragrans
- Myrtaceae (Myrtle family)
- Eucalyptus, Eucalyptus species
- Myrtle, Myrtus species
- Guava, Psidium guajava
- Nyssaceae (Tupelo) family; sometimes included in Cornaceae
- Tupelo, Nyssa species
- Dove tree, Davidia involucrata
- Oleaceae (Olive family)
- Olive, Olea europaea
- Ash, Fraxinus species
- Paulowniaceae (Paulownia family)
- Foxglove Tree, Paulownia species
- Platanaceae (Plane family)
- Plane, Platanus species
- Rhizophoraceae (Mangrove family)
- Red Mangrove, Rhizophora mangle
- Rosaceae (Rose family)
- Rowans, Whitebeams, Service Trees Sorbus species
- Hawthorn, Crataegus species
- Pear, Pyrus species
- Apple, Malus species
- Almond, Prunus dulcis
- Peach, Prunus persica
- Plum, Prunus domestica
- Cherry, Prunus species
- Rubiaceae (Bedstraw family)
- Coffee, Coffea species
- Rutaceae (Rue family)
- Citrus, Citrus species
- Cork-tree, Phellodendron species
- Euodia, Tetradium species
- Salicaceae (Willow family)
- Aspen, Populus species
- Poplar, Populus species
- Willow, Salix species
- Sapindaceae (including Aceraceae, Hippocastanaceae) (Soapberry family)
- Maple, Acer species
- Buckeye, Horse-chestnut, Aesculus species
- Mexican Buckeye, Ungnadia speciosa
- Lychee, Litchi sinensis
- Golden rain tree, Koelreuteria
- Sapotaceae (Sapodilla family)
- Argan, Argania spinosa
- Gutta-percha, Palaquium species
- Tambalacoque, or "dodo tree", Sideroxylon grandiflorum, previously Calvaria major
- Simaroubaceae family
- Tree of heaven, Ailanthus species
- Theaceae (Camellia family)
- Gordonia, Gordonia species
- Stuartia, Stuartia species
- Thymelaeaceae (Thymelaea family)
- Ramin, Gonystylus species
- Ulmaceae (Elm family)
- Elm, Ulmus species
- Zelkova, Zelkova species
- Verbenaceae family
- Teak, Tectona species
- Agavaceae (Agave family)
- Cabbage tree, Cordyline australis
- Dragon tree, Dracaena draco
- Joshua tree, Yucca brevifolia
- Arecaceae (Palmae) (Palm family)
- Areca Nut, Areca catechu
- Coconut Cocos nucifera
- Date Palm, Phoenix dactylifera
- Chusan Palm, Trachycarpus fortunei
- Poaceae (grass family)
- Bamboos Poaceae, subfamily Bambusoideae
Conifers (Pinophyta; softwood trees)
- Araucariaceae (Araucaria family)
- Araucaria, Araucaria species
- Kauri, Agathis species
- Wollemia, Wollemia nobilis
- Cupressaceae (Cypress family)
- Cypress, Cupressus species
- Cypress, Chamaecyparis species
- Juniper, Juniperus species
- Alerce or Patagonian cypress, Fitzroya cupressoides
- Sugi, Cryptomeria japonica
- Coast Redwood, Sequoia sempervirens
- Giant Sequoia, Sequoiadendron giganteum
- Dawn Redwood, Metasequoia glyptostroboides
- Western Redcedar Thuja plicata
- Bald Cypress, Taxodium species
- Pinaceae (Pine family)
- White pine, Pinus species
- Pinyon pine, Pinus species
- Pine, Pinus species
- Spruce, Picea species
- Larch, Larix species
- Douglas-fir, Pseudotsuga species
- Fir, Abies species
- Cedar, Cedrus species
- Podocarpaceae (Yellowwood family)
- African Yellowwood, Afrocarpus falcatus
- Totara, Podocarpus totara
- Miro, Prumnopitys ferruginea
- Kahikatea, Dacrycarpus dacrydioides
- Rimu, Dacrydium cupressinum
- Kusamaki, Sciadopitys species
- Taxaceae (Yew family)
- Yew, Taxus species
- Ginkgoaceae (Ginkgo family)
- Ginkgo, Ginkgo biloba
- Cycadaceae family
- Ngathu cycad, Cycas angulata
- Zamiaceae family
- Wunu cycad, Lepidozamia hopei
- Cyatheaceae and Dicksoniaceae families
- Tree ferns, Cyathea, Alsophila, Dicksonia (not a monophyletic group)
The world's champion trees in terms of height, trunk diameter or girth, total size, and age, according to species, are all conifers. In most measures, the second to fourth places are also held by species of conifers.
- Tallest trees
The heights of the tallest trees in the world have been the subject of considerable dispute and much (often wild) exaggeration. Modern verified measurement with laser rangefinders combined with tape drop measurements made by tree climbers, carried out by the U.S. Eastern Native Tree Society, has shown that most older measuring methods and measurements are unreliable, often producing exaggerations of 5 to 15 percent above the real height. Historical claims of trees of 114 m, 117 m, 130 m, and even 150 m, are now largely disregarded as unreliable, fantasy, or fraudulent. The following are now accepted as the top five tallest reliably measured species, with the listing of the tallest one of that species:
- Coast Redwood Sequoia sempervirens: 112.83 m, Humboldt Redwoods State Park, California (Gymnosperm Database)
- Coast Douglas-fir Pseudotsuga menziesii: 100.3 m, Brummit Creek, Coos County, Oregon (Gymnosperm Database)
- Sitka Spruce Picea sitchensis: 96.7 m, Prairie Creek Redwoods State Park, California (Gymnosperm Database)
- Giant Sequoia Sequoiadendron giganteum: 93.6 m, Redwood Mountain Grove, California (Gymnosperm Database)
- Australian Mountain-ash Eucalyptus regnans: 92.0 m, Styx Valley, Tasmania (Forestry Tasmania [pdf file])
- Stoutest trees
As a general standard, tree girth (circumference) is taken at “breast height”; this is defined differently in different situations, with most foresters measuring girth at 1.3 m above ground, while ornamental tree measurers usually measure at 1.5 m above ground. In most cases this makes little difference to the measured girth. On sloping ground, the "above ground" reference point is usually taken as the highest point on the ground touching the trunk, but some use the average between the highest and lowest points of ground. Some of the inflated old measurements may have been taken at ground level. Some past exaggerated measurements also result from measuring the complete next-to-bark measurement, pushing the tape in and out over every crevice and buttress.
Modern trends are to cite the tree's diameter rather than the circumference; this is obtained by dividing the measured circumference by π. It assumes the trunk is circular in cross-section (an oval or irregular cross-section would result in a mean diameter slightly greater than the assumed circle). This is cited as dbh (diameter at breast height) in tree literature.
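As a tiny illustration of the conversion (the girth value below is made up), in Python:

import math

girth = 31.4                 # measured circumference at breast height, in metres (hypothetical)
dbh = girth / math.pi        # diameter at breast height, assuming a circular trunk
print(round(dbh, 2))         # -> 9.99 m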
The stoutest species in diameter, excluding baobabs whose trunks change in size at various times during the season due to the storage of water, are:
- Montezuma Cypress Taxodium mucronatum: 11.42 m, Árbol del Tule, Santa Maria del Tule, Oaxaca, Mexico (A. F. Mitchell, International Dendrology Society Year Book 1983: 93, 1984).
- Giant Sequoia Sequoiadendron giganteum: 8.85 m, General Grant tree, Grant Grove, California (Gymnosperm Database)
- Coast Redwood Sequoia sempervirens: 7.44 m, Prairie Creek Redwoods State Park, California (Gymnosperm Database)
- Largest trees
The largest trees in total volume are those that are tall, of large diameter, and in particular, which hold a large diameter high up the trunk. Measurement is very complex, particularly if branch volume is to be included as well as the trunk volume, so measurements have only been made for a small number of trees, and generally only for the trunk. No attempt has ever been made to include root volume.
The top four species measured so far are (Gymnosperm Database):
- Giant Sequoia Sequoiadendron giganteum: 1489 m³, General Sherman tree
- Coast Redwood Sequoia sempervirens: 1045 m³, Del Norte Titan tree
- Western Redcedar Thuja plicata: 500 m³, Quinault Lake Redcedar
- Kauri Agathis australis: 400 m³, Tane Mahuta tree (total volume, including branches, 516.7 m³)
However, the Alerce Fitzroya cupressoides, as yet unmeasured, may well slot in at third or fourth place, and Montezuma Cypress Taxodium mucronatum is also likely to be high in the list. The largest broadleaf tree is an Australian Mountain Ash, the “El Grande” tree of about 380 m³ in Tasmania.
- Oldest trees
The oldest trees are determined by growth rings, which can be seen if the tree is cut down or in cores taken from the edge to the center of the tree. Accurate determination is only possible for trees which produce growth rings, generally those which occur in seasonal climates. Trees in uniform, non-seasonal, tropical climates grow continuously and do not have distinct growth rings. It is also only possible to measure age for trees that are solid to the center; many very old trees become hollow as the dead heartwood decays away. For some of these species, age estimates have been made on the basis of extrapolating current growth rates, but the results are usually little better than guesswork or wild speculation.
The verified oldest measured ages are (Gymnosperm Database):
- Great Basin Bristlecone Pine Pinus longaeva: 4,844 years
- Alerce Fitzroya cupressoides: 3,622 years
- Giant Sequoia Sequoiadendron giganteum: 3,266 years
- Huon-pine Lagarostrobos franklinii: 2,500 years
- Rocky Mountains Bristlecone Pine Pinus aristata: 2,435 years
Other species suspected of reaching exceptional age include European Yew Taxus baccata (probably over 3,000 years) and Western Redcedar Thuja plicata.
The oldest verified age for a broadleaf tree is 2,293 years for the Sri Maha Bodhi Sacred Fig (Ficus religiosa) planted in 288 B.C.E. at Anuradhapura, Sri Lanka; this is also the oldest human-planted tree with a known planting date.
- Aerts, R. 1995. The advantages of being evergreen. Trends in Ecology and Evolution 10(10): 402-407.
- Ewers, F. W., and R. Schmid. 1981. Longevity of needle fascicles of Pinus longaeva (Bristlecone Pine) and other North American pines. Oecologia 51:107-115.
- Matyssek, R. 1986. Carbon, water and nitrogen relations in evergreen and deciduous conifers. Tree Physiology 2:177–187.
- Pakenham, T. 2002. Remarkable Trees of the World. Norton. ISBN 0297843001
- Pakenham, T. 1996. Meetings with Remarkable Trees. Weidenfeld & Nicolson. ISBN 0297832557
- Pizzetti, M., S. Schuler, and F. De Marco. (Eds.) 1977. Simon & Schuster's Guide to Trees. Simon & Schuster. ISBN 0671241257
- Sobrado, M. A. 1991. Cost-benefit relationships in deciduous and evergreen leaves of tropical dry forest species. Functional Ecology 5(5):608-616.
- Stone, Christopher D. 1996. Should Trees Have Standing? And Other Essays on Law, Morals and the Environment. Oxford University Press. ISBN 0379213818
New World Encyclopedia writers and editors rewrote and completed the Wikipedia article in accordance with New World Encyclopedia standards. This article abides by terms of the Creative Commons CC-by-sa 3.0 License (CC-by-sa), which may be used and disseminated with proper attribution. Credit is due under the terms of this license that can reference both the New World Encyclopedia contributors and the selfless volunteer contributors of the Wikimedia Foundation. To cite this article click here for a list of acceptable citing formats.The history of earlier contributions by wikipedians is accessible to researchers here:
- Tree (May 13, 2006) history
- Deciduous (May 13, 2006) history
- Evergreen (May 13, 2006) history
- Trees_in_mythology (May 13, 2006) history
Note: Some restrictions may apply to use of individual images which are separately licensed. | http://www.newworldencyclopedia.org/entry/Tree | 13 |
113 | 2.1.1. The visibility function
"An interferometer is a device for measuring the spatial coherence function" (Clark 1999). This dry statement pretty much captures what interferometry is all about, and the rest of this chapter will try to explain what lies beneath it, how the measured spatial coherence function is turned into images and how properties of the interferometer affect the images. We will mostly abstain from equations here and give a written description instead, however, some equations are inevitable. The issues explained here have been covered in length in the literature, in particular in the first two chapters of Taylor et al. (1999) and in great detail in Thompson et al. (2001).
The basic idea of an interferometer is that the spatial intensity distribution of electromagnetic radiation produced by an astronomical object at a particular frequency, I, can be reconstructed from the spatial coherence function measured at two points with the interferometer elements, V(r1, r2).
Let the (monochromatic) electromagnetic field arriving at the observer's location r be denoted by E(r). It is the sum of all waves emitted by celestial bodies at that particular frequency. A property of this field is the correlation function at two points, V(r1, r2) = <E(r1) E*(r2)>, where the superscript * denotes the complex conjugate. V(r1, r2) describes how similar the electromagnetic field measured at two locations is. Think of two corks thrown into a lake in windy weather. If the corks are very close together, they will move up and down almost synchronously; however, as their separation increases their motions will become less and less similar, until they move completely independently when several meters apart.
Radiation from the sky is largely spatially incoherent, except over very small angles on the sky, and these assumptions (with a few more) then lead to the spatial coherence function
V(r1, r2) ≈ ∫ I(s) exp(-2πi ν s·(r1 - r2)/c) dΩ.    (2)
Here s is the unit vector pointing towards the source and dΩ is the surface element of the celestial sphere. The interesting point of this equation is that it is a function of the separation and relative orientation of two locations. An interferometer in Europe will measure the same thing as one in Australia, provided the separation and orientation of the interferometer elements are the same. The relevant parameters here are the coordinates of the antennas when projected onto a plane perpendicular to the line of sight (Figure 1). This plane has the axes u and v, hence it is called the (u, v) plane. Now let us further introduce units of wavelengths to measure positions in the (u, v) plane. One then gets
V(u, v) = ∫∫ I(l, m) exp(-2πi(ul + vm)) dl dm.    (3)
This equation is a Fourier transform between the spatial coherence function and the (modified) intensity distribution in the sky, I, and can be inverted to obtain I. The coordinates u and v are the components of a vector pointing from the origin of the (u, v) plane to a point in the plane, and describe the projected separation and orientation of the elements, measured in wavelengths. The coordinates l and m are direction cosines towards the astronomical source (or a part thereof). In radio astronomy, V is called the visibility function, but a factor, A, is commonly included to describe the sensitivity of the interferometer elements as a function of angle on the sky (the antenna response).
The visibility function is the quantity all interferometers measure and which is the input to all further processing by the observer.
Figure 1. Sketch of how a visibility measurement is obtained from a VLBI baseline. The source is observed in direction of the line-of-sight vector, s, and the sky coordinates are the direction cosines l and m. The projection of the station coordinates onto the (u, v) plane, which is perpendicular to s, yields the (u, v) coordinates of the antennas, measured in units of the observing wavelength. The emission from the source is delayed at one antenna by an amount τ = b · s / c, where b is the baseline vector between the two stations. At each station, the signals are intercepted with antennas, amplified, and then mixed down to a low frequency where they are further amplified and sampled. The essential difference between a connected-element interferometer and a VLBI array is that each station has an independent local oscillator, which provides the frequency normal for the conversion from the observed frequency to the recorded frequency. The sampled signals are written to disk or tape and shipped to the correlator. At the correlator, the signals are played back, the geometric delay is compensated for, and the signals are correlated and Fourier transformed (in the case of an XF correlator).
2.1.2. The (u, v) plane
We introduced a coordinate system such that the line connecting the interferometer elements, the baseline, is perpendicular to the direction towards the source, and this plane is called the (u, v) plane for obvious reasons. However, the baseline in the (u, v) plane is only a projection of the vector connecting the physical elements. In general, the visibility function will not be the same at different locations in the (u, v) plane, an effect arising from structure in the astronomical source. It is therefore desirable to measure it at as many points in the (u, v) plane as possible. Fortunately, the rotation of the earth continuously changes the relative orientation of the interferometer elements with respect to the source, so that the point given by (u, v) slowly rotates through the plane, and so an interferometer which is fixed on the ground samples various aspects of the astronomical source as the observation progresses. Almost all contemporary radio interferometers work in this way, and the technique is then called aperture synthesis. Furthermore, one can change the observing frequency to move (radially) to a different point in the (u, v) plane. This is illustrated in Figure 2. Note that the visibility measured at (-u, -v) is the complex conjugate of that measured at (u, v), and therefore does not add information. Hence sometimes in plots of (u, v) coverage such as Figure 2, one also plots those points mirrored across the origin. A consequence of this relation is that after 12 h the aperture synthesis with a given array and frequency is complete.
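The elliptical (u, v) track of a single baseline can be illustrated with a short sketch (not from the original text; the baseline components, wavelength and declination below are made-up numbers, and the rotation uses one common convention for the hour-angle/declination transformation):

import numpy as np

lam = 0.18                          # observing wavelength in metres (1.7 GHz)
Lx, Ly, Lz = 4.0e6, 2.0e6, 3.0e6    # hypothetical equatorial baseline components, metres
dec = np.radians(40.0)              # assumed source declination

H = np.linspace(0, 2 * np.pi, 200)  # hour angle over a full day
u = (Lx * np.sin(H) + Ly * np.cos(H)) / lam
v = (-Lx * np.cos(H) * np.sin(dec) + Ly * np.sin(H) * np.sin(dec)
     + Lz * np.cos(dec)) / lam
# Plotting u against v traces the offset ellipse described in Figure 2 (top left).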
Figure 2. Points in the (u, v) plane sampled during a typical VLBI observation at a frequency of 1.7 GHz (λ = 0.18 m). The axis units are kilolambda, and the maximum projected baseline length corresponds to 60000 kλ = 10800 km. The source was observed four times for 25 min each, over a time range of 7.5 h. For each visibility only one point has been plotted, i.e., the locations of the complex conjugate visibilities are not shown. Top left: The (u, v) track of a single baseline. It describes part of an ellipse the centre of which does not generally coincide with the origin (this is only the case for an east-west interferometer). Top right: The (u, v) track of all interferometer pairs of all participating antennas. Bottom left: The same as the top right panel, but the (u, v) points of all four frequency bands used in the observation have been plotted, which broadened the tracks. Bottom right: This plot displays a magnified portion of the previous diagram. It shows that at any one time the four frequency bands sample different (u, v) points which lie on concentric ellipses. As the Earth rotates, the (u, v) points progress tangentially.
2.1.3. Image reconstruction
After a typical VLBI observation of one source, the (u, v) coverage will not look too different from the one shown in Figure 2. These are all the data needed to form an image by inverting Equation 3. However, the (u, v) plane has been sampled only at relatively few points, and the purely Fourier-transformed image (the "dirty image") will look poor. This is because the true brightness distribution has been convolved with the instrument's point-spread function (PSF). In the case of aperture synthesis, the PSF B(l, m) is the Fourier transform of the (u, v) coverage:
B(l, m) = ∫∫ S(u, v) exp(2πi(ul + vm)) du dv.
Here S(u, v) is unity where measurements have been made, and zero elsewhere. Because the (u, v) coverage is mostly unsampled, B(l, m) has very high artefacts ("sidelobes").
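A minimal sketch of this relation computes a dirty beam with NumPy; the random sampling mask below is purely illustrative, not a realistic (u, v) coverage:

import numpy as np

n = 256
rng = np.random.default_rng(1)
S = np.zeros((n, n))                         # sampling function: 1 where measured, 0 elsewhere
idx = rng.integers(0, n, size=(2000, 2))
S[idx[:, 0], idx[:, 1]] = 1.0

beam = np.fft.ifft2(S)
# Taking the real part implicitly adds the conjugate (-u, -v) samples.
dirty_beam = np.fft.fftshift(beam.real)
dirty_beam /= dirty_beam.max()               # normalise the central peak to 1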
Figure 3. The PSF B(l, m) of the (u, v) coverage shown in the bottom left panel of Figure 2. Contours are drawn at 5%, 15%, 25%, ... of the peak response in the image centre. Patches where the response is higher than 5% are scattered all over the image, sometimes reaching 15%. In the central region, the response outside the central peak reaches more than 25%. Without further processing, the achievable dynamic range with this sort of PSF is of the order of a few tens.
To remove the sidelobes requires interpolating the visibilities into the empty regions of the (u, v) plane, and the standard method in radio astronomy to do that is the "CLEAN" algorithm.
2.1.4. The CLEAN algorithm
The CLEAN algorithm (Högbom 1974) is a non-linear, iterative mechanism to rid interferometry images of artefacts caused by insufficient (u, v) coverage. Although a few varieties exist, the basic structure of all CLEAN implementations is the same:
- Find the position and strength of the brightest pixel in the residual image (initially the dirty image itself).
- Subtract the dirty beam, centred on that position and scaled by the peak strength times a small loop gain (typically of order 0.1), from the residual image.
- Record the subtracted flux and position as a delta-function "clean component".
- Repeat these steps until a stopping criterion is reached.
CLEAN can be stopped when the sidelobes of the sources in the residual image are much lower than the image noise. A corollary of this is that in the case of very low signal-to-noise ratios CLEAN will essentially have no effect on the image quality, because the sidelobes are below the noise limit introduced by the receivers in the interferometer elements. The "clean image" is iteratively built up out of delta components, and the final image is formed by convolving it with the "clean beam". The clean beam is a two-dimensional Gaussian which is commonly obtained by fitting a Gaussian to the centre of the dirty beam image. After the convolution the residual image is added.
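The loop described above can be written compactly. The following is a bare-bones, one-dimensional Högbom-style sketch (real implementations work on 2-D images and include windowing, weighting and the clean-beam restoration step described in the text):

import numpy as np

def hogbom_clean(dirty, dirty_beam, gain=0.1, niter=200, threshold=0.0):
    # dirty_beam is assumed to be normalised to 1 at its central pixel.
    residual = dirty.astype(float).copy()
    components = np.zeros_like(residual)
    half = len(dirty_beam) // 2
    for _ in range(niter):
        peak = int(np.argmax(np.abs(residual)))
        if np.abs(residual[peak]) <= threshold:
            break
        flux = gain * residual[peak]
        components[peak] += flux
        # subtract the dirty beam, centred on the peak and scaled by the flux
        for i in range(len(residual)):
            j = i - peak + half
            if 0 <= j < len(dirty_beam):
                residual[i] -= flux * dirty_beam[j]
    return components, residual

The clean image would then be formed by convolving the components with a Gaussian fitted to the central lobe of the dirty beam and adding the residual.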
Figure 4. Illustration of the effects of the CLEAN algorithm. Left panel: The Fourier transform of the visibilities already used for illustration in Figures 2 and 3. The image is dominated by artefacts arising from the PSF of the interferometer array. The dynamic range of the image (the image peak divided by the rms in an empty region) is 25. Right panel: The "clean image", made by convolving the model components with the clean beam, which in this case has a size of 3.0 × 4.3 mas. The dynamic range is 144. The contours in both panels start at 180 µJy and increase by factors of two.
The way in which images are formed in radio interferometry may seem difficult and laborious (and it is), but it also adds great flexibility. Because the image is constructed from typically thousands of interferometer measurements one can choose to ignore measurements from, e.g., the longest baselines to emphasize sensitivity to extended structure. Alternatively, one can choose to weight down or ignore short spacings to increase resolution. Or one can convolve the clean model with a Gaussian which is much smaller than the clean beam, to make a clean image with emphasis on fine detail ("superresolution").
2.1.5. Generating a visibility measurement
The previous chapter has dealt with the fundamentals of interferometry and image reconstruction. In this chapter we will give a brief overview of more technical aspects of VLBI observations and the signal processing involved in generating visibility measurements.
It may have become clear by now that an interferometer array really is only a collection of two-element interferometers, and only at the imaging stage is the information gathered by the telescopes combined. In an array of N antennas, the number of pairs which can be formed is N(N - 1) / 2, and so an array of 10 antennas can measure the visibility function at 45 locations in the (u, v) plane simultaneously. Hence, technically, obtaining visibility measurements with a global VLBI array consisting of 16 antennas is no more complex than doing it with a two-element interferometer - it is just logistically more challenging.
Because VLBI observations involve telescopes at widely separated locations (and can belong to different institutions), VLBI observations are fully automated. The entire "observing run" (a typical VLBI observations lasts around 12 h) including setting up the electronics, driving the antenna to the desired coordinates and recording the raw antenna data on tape or disk, is under computer control and requires no interaction by the observer. VLBI observations generally are supervised by telescope operators, not astronomers.
It should be obvious that each antenna needs to point towards the direction of the source to be observed, but as VLBI arrays are typically spread over thousands of kilometres, a source which just rises at one station can be in transit at another 2. Then the electronics need to be set up, which involves a very critical step: tuning the local oscillator. In radio astronomy, the signal received by the antenna is amplified many times (in total the signal is amplified by factors of the order of 10^8 to 10^10), and to avoid receiver instabilities the signal is "mixed down" to much lower frequencies after the first amplification. The mixing involves injecting a locally generated signal (the local oscillator, or LO, signal) into the signal path with a frequency close to the observing frequency. This yields the signal at a frequency which is the difference between the observing frequency and the LO frequency (see, e.g., Rohlfs 1986 for more details). The LO frequency must be extremely stable (1 part in 10^15 per day or better) and accurately known (to the sub-Hz level) to ensure that all antennas observe at the same frequency. Interferometers with connected elements such as the VLA or ATCA only need to generate a single LO, the output of which can be sent to the individual stations, and any variation in its frequency will affect all stations equally. This is not possible in VLBI, and so each antenna is equipped with a maser (mostly hydrogen masers) which provides a frequency standard to which the LO is phase-locked. After downconversion, the signal is digitized at the receiver output and either stored on tape or disk, or, more recently, directly sent to the correlator via fast network connections ("eVLBI").
The correlator is sometimes referred to as the "lens" of VLBI observations, because it produces the visibility measurements from the electric fields sampled at the antennas. The data streams are aligned, appropriate delays and phases are introduced, and then two operations are performed on segments of the data: the cross-multiplication of each pair of stations and a Fourier transform, to go from the temporal domain into the spectral domain. Note that Eq. 3 is strictly valid only at a particular frequency. Averaging in frequency is a source of error, and so the observing band is divided into frequency channels to reduce averaging, and the VLBI measurand is a cross-power spectrum.
The cross-correlation and Fourier transform can be interchanged, and correlator designs exist which carry out the cross-correlation first and then the Fourier transform (the "lag-based", or "XF", design such as the MPIfR's Mark IV correlator and the ATCA correlator), and also vice versa (the "FX" correlator such as the VLBA correlator). The advantages and disadvantages of the two designs are mostly in technical details and computing cost, and of little interest to the observer once the instrument is built. However, the response to spectral lines is different. In a lag-based correlator the signal is Fourier transformed once after cross-correlation, and the resulting cross-power spectrum is the intrinsic spectrum convolved with the sinc function. In an FX correlator, the input streams of both interferometer elements are Fourier transformed, which includes a convolution with the sinc function, and the subsequent cross-correlation produces a spectrum which is convolved with the square of the sinc function. Hence the FX correlator has a finer spectral resolution and lower sidelobes (see Romney 1999).
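As a rough illustration of the FX approach, each sampled voltage stream is cut into segments, every segment is Fourier transformed, and the two spectra are cross-multiplied (one of them conjugated) and averaged into a cross-power spectrum. The sketch below assumes two ideal, already-aligned, real-valued data streams and a 256-channel segment length; delay tracking, fringe rotation, windowing and quantisation corrections are all omitted.

```python
import numpy as np

def fx_cross_power_spectrum(stream1, stream2, n_channels=256):
    """Averaged cross-power spectrum of two aligned voltage streams (FX-style sketch)."""
    n_segments = min(len(stream1), len(stream2)) // n_channels
    accumulator = np.zeros(n_channels, dtype=complex)
    for k in range(n_segments):
        seg1 = stream1[k * n_channels:(k + 1) * n_channels]
        seg2 = stream2[k * n_channels:(k + 1) * n_channels]
        # "F": Fourier transform each segment into the spectral domain.
        spec1 = np.fft.fft(seg1)
        spec2 = np.fft.fft(seg2)
        # "X": cross-multiply (conjugating one spectrum) and accumulate.
        accumulator += spec1 * np.conj(spec2)
    return accumulator / n_segments
```

A lag-based (XF) correlator would instead multiply delayed copies of the time series first and Fourier transform the averaged lag spectrum afterwards.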
The vast amount of data which needs to be processed in interferometry observations has always been processed on purpose-built computers (except for the very first observations, where bandwidths of the order of several hundred kHz were processed on general purpose computers). Only recently has the power of off-the-shelf PCs reached a level which makes it feasible to carry out the correlation in software. Deller et al. (2007) describe a software correlator which can efficiently run on a cluster of PC-architecture computers. Correlation is an "embarrassingly parallel" problem, which can be split up in time, frequency, and by baseline, and hence is ideally suited to run in a cluster environment.
The result of the correlation stage is a set of visibility measurements. These are stored in various formats along with auxiliary information about the array such as receiver temperatures, weather information, pointing accuracy, and so forth. These data are sent to the observer who has to carry out a number of calibration steps before the desired information can be extracted.
2.2. Sources of error in VLBI observations
VLBI observations generally are subject to the same problems as observations with connected-element interferometers, but the fact that the individual elements are separated by hundreds and thousands of kilometres adds a few complications.
The largest source of error in typical VLBI observations is phase errors introduced by the earth's atmosphere. Variations in the atmosphere's electric properties cause varying delays of the radio waves as they travel through it. The consequence of phase errors is that the measured flux of individual visibilities will be scattered away from the correct locations in the image, reducing the SNR of the observations or, in fact, prohibiting a detection at all. Phase errors arise from tiny variations in the electric path lengths from the source to the antennas. The bulk of the ionospheric and tropospheric delays is compensated in correlation using atmospheric models, but the atmosphere varies on very short timescales so that there are continuous fluctuations in the measured visibilities.
At frequencies below 5 GHz, changes in the total electron content (TEC) of the ionosphere along the line of sight are the dominant source of phase errors. The ionosphere is the uppermost part of the earth's atmosphere which is ionised by the sun, and hence undergoes diurnal and seasonal changes. At low GHz frequencies the ionosphere's plasma frequency is sufficiently close to the observing frequency to have a noticeable effect. Unlike tropospheric and most other errors, which have a linear dependence on frequency, the impact of the ionosphere is proportional to the inverse of the frequency squared, and so fades away rather quickly as one goes to higher frequencies. Whilst the TEC is regularly monitored 3 and the measurements can be incorporated into the VLBI data calibration, the residual errors are still considerable.
At higher frequencies changes in the tropospheric water vapour content have the largest impact on radio interferometry observations. Water vapour 4 does not mix well with air, and thus the integrated amount of water vapour along the line of sight varies considerably as the wind blows over the antenna. Measuring the amount of water vapour along the line of sight is possible and has been implemented at a few observatories (Effelsberg, CARMA, Plateau de Bure); however, it is difficult and not yet regularly used in the majority of VLBI observations.
Compact arrays generally suffer less from atmospheric effects because most of the weather is common to all antennas. The closer two antennas are together, the more similar the atmosphere is along the lines of sight, and the delay difference between the antennas decreases.
Other sources of error in VLBI observations are mainly uncertainties in the geometry of the array and instrumental errors. The properties of the array must be accurately known in correlation to introduce the correct delays. As one tries to measure the phase of an electromagnetic wave with a wavelength of a few centimetres, the array geometry must be known to a fraction of that. And because the earth is by no means a solid body, many effects have to be taken into account, from large effects like precession and nutation to smaller effects such as tectonic plate motion, post-glacial rebound and gravitational delay. For an interesting and still controversial astrophysical application of the latter, see Fomalont and Kopeikin (2003). For a long list of these effects including their magnitudes and timescales of variability, see Walker (1999).
2.3. The problem of phase calibration: self-calibration
Due to the aforementioned errors, VLBI visibilities directly from the correlator will never be used to make an image of astronomical sources. The visibility phases need to be calibrated in order to recover the information about the source's location and structure. However, how does one separate the unknown source properties from the unknown errors introduced by the instrument and atmosphere? The method commonly used to do this is called self-calibration and works as follows.
In simple words, in self-calibration one uses a model of the source (if a model is not available a point source is used) and tries to find phase corrections for the antennas to make the visibilities comply with that model. This won't work perfectly unless the model was a perfect representation of the source, and there will be residual, albeit smaller, phase errors. However the corrected visibilities will allow one to make an improved source model, and one can find phase corrections to make the visibilities comply with that improved model, which one then uses to make an even better source model. This process is continued until convergence is reached.
The assumption behind self-calibration is that the errors in the visibility phases, which are baseline-based quantities, can be described as the result of antenna-based errors. Most of the errors described in Sec 2.2 are antenna-based: e.g. delays introduced by the atmosphere, uncertainties in the location of the antennas, and drifts in the electronics all are antenna-based. The phase error of a visibility is the combination of the antenna-based phase errors 5. Since the number of unknown station phase errors is less than the number of visibilities, there is additional phase information which can be used to determine the source structure.
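A highly simplified, phase-only sketch of the gain-solving step is shown below. The layout of the visibilities as a Hermitian n_ant x n_ant matrix, the fixed iteration count and the choice of antenna 0 as phase reference are assumptions made for illustration; real packages solve for gains per time interval, weight by SNR and handle flagged data.

```python
import numpy as np

def phase_selfcal(vis_obs, vis_model, n_iter=20):
    """Estimate one phase per antenna so that g_i * conj(g_j) * model ~ observed.

    vis_obs, vis_model: complex Hermitian (n_ant, n_ant) arrays holding the
    visibility of baseline (i, j) in element [i, j]; model entries are non-zero.
    """
    n_ant = vis_obs.shape[0]
    gains = np.ones(n_ant, dtype=complex)
    for _ in range(n_iter):
        new_gains = np.empty_like(gains)
        for i in range(n_ant):
            # Least-squares update of antenna i's gain with all other gains held fixed.
            num, den = 0.0 + 0.0j, 0.0
            for j in range(n_ant):
                if j == i:
                    continue
                model_ij = gains[j].conj() * vis_model[i, j]
                num += vis_obs[i, j] * model_ij.conj()
                den += np.abs(model_ij) ** 2
            new_gains[i] = num / den
        # Keep only the phases (phase-only self-calibration) and reference them
        # to antenna 0, whose phase is set to zero.
        gains = np.exp(1j * np.angle(new_gains))
        gains *= np.exp(-1j * np.angle(gains[0]))
    return gains
```

The corrected visibilities would then be vis_obs[i, j] / (gains[i] * conj(gains[j])), from which an improved source model is made and the cycle repeated.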
However, self-calibration contains some traps. The most important is making a model of the source, which is usually accomplished by making a deconvolved image with the CLEAN algorithm. If a source has been observed during an earlier epoch and the structural changes are expected to be small, then one can use an existing model for a start. If in making that model one includes components which are not real (e.g., by cleaning regions of the image which in fact do not contain emission), then they will be included in the next iteration of self-calibration and will re-appear in the next image. Although it is not easy to generate fake sources or source parts which are strong, weaker source structures are easily affected. The authors have witnessed a radio astronomy PhD student producing a map of a colleague's name using a data set of pure noise, although the SNR was of the order of only 3.
It is also important to note that for self-calibration the SNR of the visibilities needs to be of the order of 5 or higher within the time it takes the fastest error component to change by a few tens of degrees ("atmospheric coherence time") (Cotton 1995). Thus the integration time for self-calibration usually is limited by fluctuations in the tropospheric water vapour content. At 5 GHz, one may be able to integrate for 2 min without the atmosphere changing too much, but this can drop to as little as 30 s at 43 GHz. Because radio antennas used for VLBI are less sensitive at higher frequencies, observations at tens of GHz require brighter and brighter sources to calibrate the visibility phases. Weak sources well below the detection threshold within the atmospheric coherence time can only be observed using phase referencing (see Sec. 2.3.1).
Another boundary condition for successful self-calibration is that for a given number of array elements the source must not be too complex. The more antennas, the better, because the ratio of the number of constraints to the number of antenna gains to be determined goes as N/2 (the number of constraints is the number of visibilities, N(N - 1) / 2; the number of gains is the number of stations, N, minus one, because the phase of one station is a free parameter and set to zero). Thus self-calibration works very well at the VLA, even with complex sources, whereas for an east-west interferometer with few elements such as the ATCA, self-calibration is rather limited. In VLBI observations, however, the sources are typically simple enough to make self-calibration work even with a modest number (N > 5) of antennas.
2.3.1. Phase referencing
It is possible to obtain phase-calibrated visibilities without self-calibration by measuring the phase of a nearby, known calibrator. The assumption is that all errors for the calibrator and the target are sufficiently similar to allow calibration of the target with phase corrections derived from the calibrator. While this assumption is justified for the more slowly varying errors such as clock errors and array geometry errors (provided target and calibrator are close), it is only valid under certain circumstances when atmospheric errors are considered. The critical ingredients in phase referencing observations are the target-calibrator separation and the atmospheric coherence time. The separation within which the phase errors for the target and calibrator are considered to be the same is called the isoplanatic patch, and is of the order of a few degrees at 5 GHz. The switching time must be shorter than the atmospheric coherence time to prevent undersampling of the atmospheric phase variations. At high GHz frequencies this can result in observing strategies where one spends half the observing time on the calibrator.
Phase-referencing not only allows one to observe sources too weak for self-calibration, but it also yields precise astrometry for the target relative to the calibrator. A treatment of the attainable accuracy can be found in Pradel et al. (2006).
2.4. Polarization

The polarization of radio emission can yield insights into the strength and orientation of magnetic fields in astrophysical objects and the associated foregrounds. As a consequence, and because the calibration has become easier and more streamlined, it has become increasingly popular in the past 10 years to measure polarization.
Most radio antennas can record two orthogonal polarizations, conventionally in the form of dual circular polarization. In correlation, one can correlate the right-hand circular polarization signal (RCP) of one antenna with the left-hand circular polarization (LCP) of another and vice versa, to obtain the cross-polarization products RL and LR. The parallel-hand circular polarization cross-products are abbreviated as RR and LL. The four correlation products are converted into the four Stokes parameters in the following way:
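One common convention for circular feeds (signs and normalisation factors differ between references) is

I = (RR + LL) / 2
Q = (RL + LR) / 2
U = (RL − LR) / (2i)
V = (RR − LL) / 2

with the polarized intensity and the electric vector position angle then given by P = (Q^2 + U^2)^1/2 and χ = 1/2 arctan(U / Q).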
From the Stokes images one can compute images of polarized intensity and polarization angle.
Most of the calibration in polarization VLBI observations is identical to conventional observations, where one either records only data in one circular polarization or does not form the cross-polarization data at the correlation stage. However, two effects need to be taken care of: the relative phase relation between RCP and LCP, and the leakage of emission from RCP and LCP into the cross-products.
The relative phase orientation of RCP and LCP needs to be calibrated to obtain the absolute value for the electric vector position angle (EVPA) of the polarized emission in the source. This is usually accomplished by observing a calibrator which is known to have a stable EVPA with a low-resolution instrument such as a single dish telescope or a compact array.
Calibration of the leakage is more challenging. Each radio telescope has polarization impurities arising from structural asymmetries and errors in manufacturing, resulting in "leakage" of emission from one polarization to the other. The amount of leakage typically is of the order of a few percent and thus is of the same order as the typical degree of polarization in the observed sources and so needs to be carefully calibrated. The leakage is a function of frequency but can be regarded as stable over the course of a VLBI observation.
Unfortunately, sources which are detectable with VLBI are extremely small and hence mostly variable. It is therefore not possible to calibrate the leakage by simply observing a polarization calibrator, and the leakage needs to be calibrated by every observer. At present the calibration scheme exploits the fact that the polarized emission arising from leakage does not change its position angle in the course of the observations. The EVPA of the polarized emission coming from the source, however, will change with respect to the antenna and its feed horns, because most antennas have alt-azimuth mounts and so the source seems to rotate on the sky as the observation progresses 6. One can think of this situation as the sum of two vectors, where the instrumental polarization is a fixed vector and the astronomical polarization is added to this vector and rotates during the observation. Leakage calibration is about separating these two contributions, by observing a strong source at a wide range of position angles. The method is described in Leppänen et al. (1995), and a more detailed treatment of polarization VLBI is given by Kemball (1999).
2.5. Spectral line VLBI
In general a set of visibility measurements consists of cross-power spectra. If a continuum source has been targeted, the number of spectral points is commonly of the order of a few tens. If a spectral line has been observed, the number of channels can be as high as a few thousand, and is limited by the capabilities of the correlator. The high brightness temperatures (Section 3.1.3) needed to yield a VLBI detection restrict observations to masers, or relatively large absorbers in front of non-thermal continuum sources. The setup of frequencies requires the same care as for short baseline interferometry, but an additional complication is that the antennas have significant differences in their Doppler shifts towards the source. See Westpfahl (1999), Rupen (1999), and Reid et al. (1999) for a detailed treatment of spectral-line interferometry and VLBI.
2.6. Pulsar gating
If pulsars are to be observed with a radio interferometer it is desirable to correlate only those times where a pulse is arriving (or where it is absent, Stappers et al. 1999). This is called pulsar gating and is an observing mode available at most interferometers.
2.7. Wide-field limitations
The equations in Sec. 2.1 are strictly correct only for a single frequency and a single point in time, but radio telescopes must observe a finite bandwidth, and in correlation a finite integration time must be used, to be able to detect objects. Hence a point in the (u, v) plane always represents the result of averaging across a bandwidth, Δν, and over a time interval, Δt (the points in Fig. 2 actually represent a continuous frequency band in the radial direction and a continuous observation in the time direction).
The errors arising from averaging across the bandwidth are referred to as bandwidth smearing, because the effect is similar to chromatic aberration in optical systems, where the light from one single point of an object is distributed radially in the image plane. In radio interferometry, the images of sources away from the observing centre are smeared out in the radial direction, reducing the signal-to-noise ratio. The effect of bandwidth smearing increases with the fractional bandwidth, Δν / ν, with the distance to the observing centre, (l^2 + m^2)^1/2, and with 1 / b, where b is the FWHM of the synthesized beam. Interestingly, however, the frequency dependencies of the fractional bandwidth and of b cancel one another, and so for any given array and bandwidth, bandwidth smearing is independent of the observing frequency (see Thompson et al. 2001). The effect can be avoided if the observing band is subdivided into a sufficiently large number of frequency channels for all of which one calculates the locations in the (u, v) plane separately. This technique is sometimes deliberately chosen to increase the (u, v) coverage, predominantly at low frequencies where the fractional bandwidth is large. It is then called multi-frequency synthesis.
By analogy, the errors arising from time averaging are called time smearing, and they smear out the images approximately tangentially to the (u, v) ellipse. It occurs because each point in the (u, v) plane represents a measurement during which the baseline vector rotated through ω_E Δt, where ω_E is the angular velocity of the earth. Time smearing also increases as a function of (l^2 + m^2)^1/2 and can be avoided if Δt is chosen small enough for the desired field of view.
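A commonly used order-of-magnitude criterion is to keep the smearing displacement at the edge of the desired field smaller than about one synthesized beam, which limits the channel width and the averaging time roughly as sketched below. The numbers in the example are made up, and the exact limits depend on how much amplitude loss is acceptable and on the weighting scheme.

```python
OMEGA_EARTH = 7.29e-5  # Earth's angular rotation rate in rad/s

def max_channel_width(obs_freq_hz, beam_mas, field_radius_mas):
    """Rough channel width keeping bandwidth smearing below ~1 beam at the field edge."""
    return obs_freq_hz * beam_mas / field_radius_mas

def max_averaging_time(beam_mas, field_radius_mas):
    """Rough averaging time (s) keeping time smearing below ~1 beam at the field edge."""
    return (beam_mas / field_radius_mas) / OMEGA_EARTH

# Example: 5 GHz observation, 3 mas beam, 1 arcsecond (1000 mas) field radius.
print(max_channel_width(5e9, 3.0, 1000.0) / 1e6, "MHz")  # ~15 MHz channels
print(max_averaging_time(3.0, 1000.0), "s")              # ~40 s averaging
```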
VLBI observers generally are concerned with fields of view (FOV) of no more than about one arcsecond, and consequently most VLBI observers are not particularly bothered by wide field effects. However, wide-field VLBI has gained momentum in the last few years as the computing power to process finely time- and bandwidth-sampled data sets has become widely available. Recent examples of observations with fields of view of 1' or more are reported on in McDonald et al. (2001), Garrett et al. (2001), Garrett et al. (2005), Lenc and Tingay (2006) and Lenc et al. (2006). The effects of primary beam attenuation, bandwidth smearing and time smearing on the SNR of the observations can be estimated using the calculator at http://astronomy.swin.edu.au/~elenc/Calculators/wfcalc.php.
2.8. VLBI at mm wavelengths
In the quest for angular resolution VLBI helps to optimize one part of the equation which approximates the separation of the finest details an instrument is capable of resolving, θ ≈ λ / D. In VLBI, D approaches the diameter of the earth, and larger instruments are barely possible, although "space-VLBI" can extend an array beyond earth. However, it is straightforward to attempt to decrease λ to push the resolution further up.
However, VLBI observations at frequencies above 20 GHz (λ = 15 mm) become progressively harder towards higher frequencies. Many effects contribute to the difficulties at mm wavelengths: the atmospheric coherence time is shorter than one minute, telescopes are less efficient, receivers are less sensitive, sources are weaker than at cm wavelengths, and tropospheric water vapour absorbs the radio emission. All of these effects limit mm VLBI observations to comparatively few bright continuum sources or sources hosting strong masers. Hence also the number of possible phase calibrators for phase referencing drops. Nevertheless VLBI observations at 22 GHz (λ = 13 mm), 43 GHz (λ = 7 mm) and 86 GHz (λ = 3 mm) are routinely carried out with the world's VLBI arrays. For example, of all projects observed in 2005 and 2006 with the VLBA, 16% were made at 22 GHz, 23% at 43 GHz, and 4% at 86 GHz 7.
Although observations at higher frequencies are experimental, a convincing demonstration of the feasibility of VLBI at wavelengths shorter than 3 mm was made at 2 mm (147 GHz) in 2001 and 2002 (Krichbaum et al. 2002a, Greve et al. 2002). These first 2 mm-VLBI experiments resulted in detections of about one dozen quasars on the short continental and long transatlantic baselines (Krichbaum et al. 2002a). In an experiment in April 2003 at 1.3 mm (230 GHz) a number of sources were detected on the 1150 km long baseline between Pico Veleta and Plateau de Bure (Krichbaum et al. 2004). On the 6.4 Gλ long transatlantic baseline between Europe and Arizona, USA, fringes for the quasar 3C 454.3 were clearly seen. This detection marks a new record in angular resolution in astronomy (size < 30 µas). It indicates the existence of ultra compact emission regions in AGN even at the highest frequencies (for 3C 454.3 at z = 0.859, the rest frame frequency is 428 GHz). So far, no evidence for a reduced brightness temperature of the VLBI cores at mm wavelengths has been found (Krichbaum et al. 2004). These are the astronomical observations with the highest angular resolution possible today at any wavelength.
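As a rough consistency check of the θ ≈ λ / D scaling: a 6.4 Gλ baseline corresponds to D / λ = 6.4 × 10^9, so θ ≈ 1 / (6.4 × 10^9) rad ≈ 1.6 × 10^-10 rad ≈ 32 µas (using 1 rad ≈ 2.06 × 10^11 µas), which is indeed of the same order as the quoted source size limit of < 30 µas.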
2.9. The future of VLBI: eVLBI, VLBI in space, and the SKA
One of the key drawbacks of VLBI observations has always been that the raw antenna signals are recorded and the visibilities formed only later when the signals are combined in the correlator. Thus there has never been an immediate feedback for the observer, who has had to wait several weeks or months until the data are received for investigation. With the advent of fast computer networks this has changed in the last few years. First, small pieces of raw data were sent from the antennas to the correlator, to check the integrity of the VLBI array; then data were directly streamed onto disks at the correlator; then visibilities were produced in real-time from the data sent from the antennas to the correlator. A brief description of the transition from tape-based recording to real-time correlation of the European VLBI Network (EVN) is given in Szomoru et al. (2004). The EVN now regularly performs so-called "eVLBI" observing runs, and this is likely to be the standard mode of operation in the near future. The Australian Long Baseline Array (LBA) completed a first eVLBI-only observing run in March 2007 8.
It has been indicated in Sec. 2.8 that resolution can be increased not only by observing at higher frequencies with ground-based arrays but also by using a radio antenna in earth orbit. This has indeed been accomplished with the Japanese satellite "HALCA" (Highly Advanced Laboratory for Communications and Astronomy, Hirabayashi et al. 2000) which was used for VLBI observations at 1.6 GHz, 5 GHz and 22 GHz (albeit with very low sensitivity) between 1997 and 2003. The satellite provided the collecting area of an 8 m radio telescope and the sampled signals were directly transmitted to ground-based tracking stations. The satellite's elliptical orbit provided baselines between a few hundred km and more than 20000 km, yielding a resolution of up to 0.3 mas ( Dodson et al. 2006, Edwards and Piner 2002). Amazingly, although HALCA only provided left-hand circular polarization, it has been used successfully to observe polarized emission (e.g., Kemball et al. 2000, Bach et al. 2006a). But this was only possible because the ground array observed dual circular polarization. Many of the scientific results from VSOP are summarized in two special issues of Publications of the Astronomical Society of Japan (PASJ, Vol. 52, No. 6, 2000 and Vol. 58, No. 2, 2006). The successor to HALCA, ASTRO-G, is under development and due for launch in 2012. It will have a reflector with a diameter of 9 m and receivers for observations at 8 GHz, 22 GHz, and 43 GHz.
The Russian mission RadioAstron is a similar project to launch a 10 m radio telescope into a high apogee orbit. It will carry receivers for frequencies between 327 MHz and 25 GHz, and is due for launch in October 2008.
The design goals for the Square Kilometre Array (SKA), a large, next-generation radio telescope to be built by an international consortium, include interferometer baselines of at least 3000 km. At the same time, the design envisions the highest observing frequency to be 25 GHz, and so one would expect a maximum resolution of around 1 mas. However, most of the baselines will be shorter than 3000 km, and so a weighted average of all visibilities will yield a resolution of a few mas, and of tens of mas at low GHz frequencies. The SKA's resolution will therefore be comparable to current VLBI arrays. Its sensitivity, however, will be orders of magnitude higher (sub-µJy in 1 h). The most compelling consequence of this is that the SKA will allow one to observe thermal sources with brightness temperatures of the order of a few hundred Kelvin with a resolution of a few mas. Current VLBI observations are limited to sources with brightness temperatures of the order of 10^6 K and so to non-thermal radio sources and coherent emission from masers. With the SKA one can observe star and black hole formation throughout the universe, stars, water masers at significant redshifts, and much more. Whether or not the SKA can be called a VLBI array in the original sense (an array of isolated antennas the visibilities of which are produced later on) is a matter of taste: correlation will be done in real time and the local oscillator signals will be distributed from the same source. Still, the baselines will be "very long" when compared to 20th century connected-element interferometers. A short treatment of "VLBI with the SKA" can be found in Carilli (2005). Comprehensive information about the current state of the SKA is available on the project's web page (see the table in Sec. 2.10); prospects of the scientific outcomes of the SKA are summarized in Carilli and Rawlings (2004); and engineering aspects are treated in Hall (2005).
2.10. VLBI arrays around the world and their capabilities
This section gives an overview of presently active VLBI arrays which are available to all astronomers and which are predominantly used for astronomical observations. Antennas of these arrays are frequently used in other arrays' observations, either to add more long baselines or to increase the sensitivity of the observations. Joint observations including the VLBA and EVN antennas are quite common; also, observations with the VLBA plus two or more of the phased VLA, the Green Bank, Effelsberg and Arecibo telescopes (then known as the High Sensitivity Array) have recently been made easier through a common application process. Note that each of these four telescopes has more collecting area than the VLBA alone, and hence the sensitivity improvement is considerable.
Square Kilometre Array: http://www.skatelescope.org
High Sensitivity Array: http://www.nrao.edu/HSA
European VLBI Network: http://www.evlbi.org
Very Long Baseline Array: http://www.vlba.nrao.edu
Long Baseline Array: http://www.atnf.csiro.au/vlbi
2.10.1. The European VLBI Network (EVN)
The EVN is a collaboration of 14 institutes in Europe, Asia, and South Africa and was founded in 1980. The participating telescopes are used in many independent radio astronomical observations, but are scheduled three times per year for several weeks together as a VLBI array. The EVN provides frequencies in the range of 300 MHz to 43 GHz, though due to its inhomogeneity not all frequencies can be observed at all antennas. The advantage of the EVN is that it includes several relatively large telescopes such as the 76 m Lovell telescope, the Westerbork array, and the Effelsberg 100 m telescope, which provide high sensitivity. Its disadvantage is a relatively poor frequency agility during the observations, because not all telescopes can change their receivers at the flick of a switch. EVN observations are mostly correlated on the correlator at the Joint Institute for VLBI in Europe (JIVE) in Dwingeloo, the Netherlands, but sometimes are processed at the Max-Planck-Institute for Radio Astronomy in Bonn, Germany, or the National Radio Astronomy Observatory in Socorro, USA.
2.10.2. The U.S. Very Long Baseline Array (VLBA)
The VLBA is a purpose-built VLBI array across the continental USA and islands in the Caribbean and Hawaii. It consists of 10 identical antennas with a diameter of 25 m, which are remotely operated from Socorro, New Mexico. The VLBA was constructed in the early 1990s and began full operations in 1993. It provides frequencies between 300 MHz and 86 GHz at all stations (except two which are not worth equipping with 86 GHz receivers due to their humid locations). Its advantages are excellent frequency agility and its homogeneity, which makes it very easy to use. Its disadvantages are its comparatively small antennas, although the VLBA is frequently used in conjunction with the phased VLA and the Effelsberg and Green Bank telescopes.
2.10.3. The Australian Long Baseline Array (LBA)
The LBA consists of six antennas in Ceduna, Hobart, Parkes, Mopra, Narrabri, and Tidbinbilla. Like the EVN it has been formed from existing antennas and so the array is inhomogeneous. Its frequency range is 1.4 GHz to 22 GHz, but not all antennas can observe at all available frequencies. Stretched out along Australia's east coast, the LBA extends in a north-south direction which limits the (u, v) coverage. Nevertheless, the LBA is the only VLBI array which can observe the entire southern sky, and the recent technical developments are remarkable: the LBA is at the forefront of development of eVLBI and at present is the only VLBI array correlating all of its observations using the software correlator developed by Deller et al. (2007) on a computer cluster of the Swinburne Centre for Astrophysics and Supercomputing.
2.10.4. The Korean VLBI Network (KVN)
The KVN is a dedicated VLBI network of three antennas which is currently under construction in Korea. It will be able to observe at up to four widely separated frequencies (22 GHz, 43 GHz, 86 GHz, and 129 GHz), but will also be able to observe at 2.3 GHz and 8.4 GHz. The KVN has been designed to observe H2O and SiO masers in particular and can observe these transitions simultaneously. Furthermore, the antennas can slew quickly for improved performance in phase-referencing observations.
2.10.5. The Japanese VERA network
VERA (VLBI Exploration of Radio Astrometry) is a purpose-built VLBI network of four antennas in Japan. The scientific goal is to measure the annual parallax towards galactic masers (H2O masers at 22 GHz and SiO masers at 43 GHz), to construct a map of the Milky Way. Nevertheless, VERA is open for access to carry out any other observations. VERA can observe two sources separated by up to 2.2° simultaneously, intended for an extragalactic reference source and for the galactic target. This observing mode is a significant improvement over the technique of phase-referencing where the reference source and target are observed in turn. The positional accuracy is expected to reach 10 µas, and recent results seem to reach this (Hirota et al. 2007b). VERA's frequency range is 2.3 GHz to 43 GHz.
2.10.6. The Global mm-VLBI Array (GMVA)
The GMVA is an inhomogeneous array of 13 radio telescopes capable of observing at a frequency of 86 GHz. Observations with this network are scheduled twice a year for about a week. The array's objective is to provide the highest angular resolution on a regular schedule.
1 The fields of view in VLBI observations are typically so small that the dependence of A on (l, m) can be safely ignored. A can then be set to unity and disappears. Back.
2 This can be neatly observed with the VLBA's webcam images available at http://www.vlba.nrao.edu/sites/SITECAM/allsites.shtml. The images are updated every 5 min. Back.
3 http://iono.jpl.nasa.gov Back.
4 Note that clouds do not primarily consist of water vapour, but of condensed water in droplets. Back.
5 Baseline-based errors exist, too, but are far less important, see Cornwell and Fomalont (1999) for a list. Back.
6 The 26 m antenna at the Mount Pleasant Observatory near Hobart, Australia, has a parallactic mount and thus there is no field rotation. Back.
7 e.g., ftp://ftp.aoc.nrao.edu/pub/cumvlbaobs.txt Back.
8 http://www.atnf.csiro.au/vlbi/evlbi/ Back. | http://ned.ipac.caltech.edu/level5/March12/Middelberg/Middelberg2.html | 13 |
83 | Identifying the Graphs of Polynomial Functions
Many of the functions on the Math IIC are polynomial functions.
Although they can be difficult to sketch and identify, there are
a few tricks to make it easier. If you can find the roots of a function,
identify the degree, or understand the end behavior of a polynomial function,
you will usually be able to pick out the graph that matches the
function and vice versa.
The roots (or zeros) of a function are the x values
for which the function equals zero, or, graphically, the values
where the graph intersects the x-axis (f(x) =
0). To solve for the roots of a function, set the function equal
to 0 and solve for x.
A question on the Math IIC that tests your knowledge of
roots and graphs will give you a function like f(x)
= x^2 + x –
12 along with five graphs and ask you to determine which graph is
that of f(x). To approach a question
like this, you should start by identifying the general shape of
the graph of the function. For f(x) = x^2 + x – 12,
you should recognize that the graph of the function in the paragraph
above is a parabola that opens upward because of its positive leading coefficient.
This basic analysis should immediately eliminate several
possibilities but might still leave two or three choices. Solving
for the roots of the function will usually get you to the one right
answer. To solve for the roots, factor the function:
f(x) = x^2 + x – 12 = (x + 4)(x – 3)
The roots are –4 and 3, since those are the values at
which the function equals 0. Given this additional information,
you can choose the answer choice with the upward-opening parabola
that intersects the x-axis at –4 and 3.
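The same roots can also be checked numerically. A short sketch (using NumPy here purely as one convenient tool; a graphing calculator does the same job):

```python
import numpy as np

# Coefficients of f(x) = x^2 + x - 12, listed from the highest power down.
print(np.roots([1, 1, -12]))  # roots at x = -4 and x = 3 (order of output may vary)
```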
The degree of a polynomial function is the highest exponent
to which the independent variable is raised. For example, f(x)
= 4x^5 – x^2 +
5 is a fifth-degree polynomial, because its highest exponent is 5.
A function’s degree can give you a good idea of its shape.
The graph produced by an n-degree function can
have as many as n – 1 “bumps” or “turns.” These
“bumps” or “turns” are technically called “extreme points.”
Once you know the degree of a function, you also know
the greatest number of extreme points a function can have. A fourth-degree
function can have at most three extreme points; a tenth-degree function
can have at most nine extreme points.
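One way to see why is that extreme points occur where the derivative equals zero, and the derivative of an n-degree polynomial has degree n – 1, so it can have at most n – 1 real zeros. A small numerical sketch (the example polynomial is made up; NumPy assumed):

```python
import numpy as np

f = np.poly1d([1, 0, -3, 0, 1])   # f(x) = x^4 - 3x^2 + 1, a fourth-degree polynomial
df = f.deriv()                    # the derivative has degree 3
real_critical_points = [r for r in df.roots if np.isreal(r)]
print(len(real_critical_points))  # 3 -> this fourth-degree function has 3 extreme points
```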
If you are given the graph of a function, you can simply
count the number of extreme points. Once you’ve counted the extreme
points, you can figure out the smallest degree that the function
can be. For example, if a graph has five extreme points, the function
that defines the graph must have at least degree six. If the function
has two extreme points, you know that it must be at least third
degree. The Math IIC will ask you questions about degrees and graphs
that may look like this:
If the graph above represents a portion of
the function g(x), then which
of the following could be g(x)?
ax^2 + bx + c
ax^3 + bx^2 + cx + d
ax^4 + bx^3 + cx^2 + dx + e
To answer this question, you need to use the graph to
learn something about the degree of the function. Since the graph
has three extreme points, you know the function must be at least
of the fourth degree. The only function that fits that description
is E. Note that the answer could have been any function
of degree four or higher; the Math IIC test will never present you
with more than one right answer, but you should know that even if
answer choice E had read ax^7 + bx^6 + cx^5 + dx^4 + ex^3 + fx^2 + gx + h it
still would have been the right answer.
Function Degree and Roots
The degree of a function is based on the largest exponent
found in that function. For instance, the function f(x)
= x^2 + 3x +
2 is a second-degree function because its largest exponent is a
2, while the function g(x) = x^4 +
2 is a fourth-degree function because
its largest exponent is a 4.
If you know the degree of a function,
you can tell how many roots that function will have. A second-degree
function will have two roots, a third-degree function will have three
roots, and a ninth-degree function will have nine roots. Easy, right?
Right, but with one complication.
In some cases, all the roots of a function will be distinct.
Take the function g(x) = x^2 + 3x + 2:
The factors of g(x)
are (x + 2) and
(x + 1),
which means that its roots occur when x equals –2 or –1. In contrast,
look at the function h(x) = x^2 + 4x + 4 = (x + 2)^2.
While h(x) is a second-degree
function and has two roots, both roots occur when x equals
–2. In other words, the two roots of h(x)
are not distinct.
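Numerically, a repeated root simply shows up as the same value listed more than once. Using the two quadratics above (NumPy assumed):

```python
import numpy as np

print(np.roots([1, 3, 2]))  # g(x) = x^2 + 3x + 2 -> two distinct roots, -1 and -2
print(np.roots([1, 4, 4]))  # h(x) = x^2 + 4x + 4 -> the root -2 appears twice
```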
The Math IIC may occasionally present you with a function
and ask you how many distinct roots the function has. As long as
you are able to factor out the function and see how many of the
factors overlap, you can figure out the right answer. Whenever you
see a question that asks about the roots in a function, make sure
you determine whether the question is asking about roots or distinct
The end behavior of a function is a description of what
happens to the value of f(x) as x approaches
infinity and negative infinity. Think about what happens to a polynomial
containing x if you let x equal
a huge number, like 1,000,000,000. The polynomial is going to end
up being an enormous positive or negative number.
The point is that every polynomial function
either approaches infinity or negative infinity as x approaches
positive and negative infinity. Whether a function will
approach positive or negative infinity in relation to x is
called the function’s end behavior.
There are rules of end behavior that can allow you to
use a function’s end behavior to figure out its algebraic characteristics
or to figure out its end behavior based on its definition:
- If the degree of the polynomial is even,
the function behaves the same way as x approaches
both positive and negative infinity. If the coefficient of the term with
the greatest exponent is positive, f(x)
approaches positive infinity at both ends. If the leading coefficient
is negative, f(x) approaches negative
infinity at both ends.
- If the degree of the polynomial function is odd, the function
exhibits opposite behavior as x approaches positive
and negative infinity. If the leading coefficient is positive, the
function increases as x increases and decreases
as x decreases. If the leading coefficient is negative,
the function decreases as x increases and increases
as x decreases.
For the Math IIC, you should be able to determine a function’s
end behavior by simply looking at either its graph or definition.
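A quick way to convince yourself of these rules is to evaluate a polynomial at a large positive and a large negative value of x, where the leading term dominates everything else. A small sketch with made-up polynomials (NumPy assumed):

```python
import numpy as np

odd_poly = np.poly1d([2, 0, -5, 1])      # 2x^3 - 5x + 1: odd degree, positive leading coefficient
even_poly = np.poly1d([-1, 0, 4, 0, 2])  # -x^4 + 4x^2 + 2: even degree, negative leading coefficient

print(odd_poly(-1e6), odd_poly(1e6))     # huge negative, then huge positive (opposite ends)
print(even_poly(-1e6), even_poly(1e6))   # huge negative at both ends (same behavior)
```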
Another type of question you might see on the Math IIC
involves identifying a function’s symmetry. Some functions have
no symmetry whatsoever. Others exhibit one of two types of symmetry
and are classified as either even functions or odd functions.
An even function is a function for which f(x)
= f(–x). Even functions are symmetrical
with respect to the y-axis. This means that a line
segment connecting f(x) and f(–x)
is a horizontal line. Some examples of even functions are f(x) = cos x, f(x) = x^2,
and f(x) = |x|.
Here is a figure with an even function:
An odd function is a function for which f(x)
= –f(–x). Odd functions are symmetrical
with respect to the origin. This means that a line segment connecting f(x)
and f(–x) contains the origin.
Some examples of odd functions are f(x)
= sin x and f(x) = x^3.
Here is a figure with an odd function:
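The same symmetries can be checked numerically by comparing f(x) with f(–x) at a handful of sample points, as in this small sketch (NumPy assumed):

```python
import numpy as np

x = np.linspace(-5, 5, 11)

def is_even(f):
    return np.allclose(f(x), f(-x))

def is_odd(f):
    return np.allclose(f(x), -f(-x))

print(is_even(np.cos), is_odd(np.cos))  # True  False
print(is_even(np.sin), is_odd(np.sin))  # False True
print(is_even(abs), is_odd(abs))        # True  False
```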
Symmetry Across the x-Axis
No function can have symmetry across the x-axis,
but the Math IIC will occasionally include a graph that is symmetrical
across the x-axis to fool you. A quick check with
the vertical line test proves that the equations that produce such
lines are not functions: | http://www.sparknotes.com/testprep/books/sat2/math2c/chapter10section7.rhtml | 13 |
63 | Teacher's Guide for ODYSSEY™ Stats: Talking Numbers
"Statistics: Digging Into Data," pg. 6
Statistics helps scientists collect, organize, and interpret information. Statistical analysis uses small sample groups to infer something about a large population. An example of a statistical study of teen weight highlights that process.
Vocabulary, Inductive Reasoning
"Will Santa See Snow?", pg. 10
The probability of a white Christmas increases if an inch of snow lies on the ground on Dec. 18. The article explains how correlations are determined and used.
Calculating, Drawing Conclusions
"Random or Not? Find Out With a Scatterplot," pg. 12
Statistical analysis can reveal a relationship between two variables, and a scatterplot can provide a picture of it. The article explains how to organize and interpret a scatterplot, and a sidebar (pg. 14) lets readers test their skills in spotting correlations.
Inductive Reasoning, Following Directions
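For teachers who want a hands-on companion to this article, a few lines of Python can draw a scatterplot and report a correlation coefficient; the data below are invented purely for demonstration (NumPy and matplotlib assumed):

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical data: daily study hours vs. quiz scores for 20 students.
rng = np.random.default_rng(0)
hours = rng.uniform(0, 4, 20)
scores = 60 + 8 * hours + rng.normal(0, 5, 20)  # built-in positive relationship plus noise

print("correlation coefficient r =", np.corrcoef(hours, scores)[0, 1])

plt.scatter(hours, scores)
plt.xlabel("Hours studied per day")
plt.ylabel("Quiz score")
plt.title("Is there a relationship?")
plt.show()
```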
"Dr. Brain's Omega Sphere" (Brain Strain), pg. 15
How can you place three darts in the same hemisphere of a spinning globe? Can statistics help?
"Got a New Solution for Asthma? Test It!", pg. 16
Medical trials use statistics to determine the effectiveness of potential new drugs. Student's t-test and the chi-square test are highlighted. Sidebars distinguish hypothesis and interval testing (pg. 18) and offer a real-world example of statistics at work (pg. 19).
Problem Solving, Practical Application
"Slugging Out Sports Stats: An Interview with Steve Byrd" (People to Discover), pg. 20
The senior vice president of STATS, Inc., explains how his company collects, processes, and disseminates sports statistics. A sidebar (pg. 22) challenges readers to calculate four sports stats.
Process Analysis, Calculating
"Using Numbers to Save the World," pg. 23
In 1973, two scientists used statistical analysis to discover the harmful effects of CFCs on the Earth's ozone layer. Their mathematical modeling (explained in a sidebar, pg. 25) warned of an impending environmental disaster.
Inductive Reasoning, Problem Solving
"Who's Your Daddy?". pg. 26
Cladistics is a statistically based method of determining relationships among living things. Cladograms illustrate those relationships, sometimes surprising the scientific community and overturning traditional evolutionary theories.
Vocabulary, Drawing Conclusions
"Statistics in the Courtroom," pg. 29
The importance of statistical analysis in legal matters is revealed using actual courtroom events. Statistical testing may not convict a suspect, but it can complement other forms of evidence. Statistical terminology is highlighted.
"How Many in Your Bag?" (Activity), pg. 32
Do the candies in your bag of M&Ms meet manufacturer's specifications? Use statistics to find out.
Following Directions, Data Collection and Analysis
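One way to run the numbers for this activity is a chi-square goodness-of-fit test comparing the colour counts in a bag with the counts expected from the manufacturer's stated percentages. The counts and percentages in this sketch are placeholders, not real specifications (SciPy assumed):

```python
from scipy.stats import chisquare

observed = [12, 10, 9, 14, 8, 7]                         # candies counted per colour in one bag
stated_fractions = [0.24, 0.20, 0.16, 0.14, 0.13, 0.13]  # hypothetical manufacturer mix
expected = [f * sum(observed) for f in stated_fractions]

statistic, p_value = chisquare(observed, f_exp=expected)
print(f"chi-square = {statistic:.2f}, p-value = {p_value:.3f}")
# A small p-value would suggest the bag's mix differs from the stated percentages.
```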
"Belle of the Mall" (Activity), pg. 34
Fill in the blanks with statistical terms to tell this tale of a shopping trip.
Vocabulary, Context Clues
"Statistics Never Lie. . . ," pg. 37
This question-and-answer article shows how statistics can mislead. Avoid pitfalls by asking questions, thinking about details, and being skeptical.
Making Inferences, Inductive Reasoning
"You Can Do Astronomy: A Living Solar System!" (What's Up and Planet Watch), pg. 40
The winter solstice shares the evening stage with two meteor showers and spectacular views of Jupiter, Saturn, and their largest moons. On cloudy nights, gather your friends to form a living, clapping solar system.
Observation, Following Directions
"How Fast Do You Eat Ice Cream?" (Fantastic Journeys), pg. 45
An enterprising and persistent ninth grader used statistical analysis to get her research published in a medical journal.
Hypothesis Formation, Data Analysis and Interpretation
Think Tank (Discussion Starters to Use Before Reading the Magazine):
1. Statistics show up on TV news programs and in the newspapers every day. What statistics have you seen or heard lately? List stories that you remember (don't forget sports and politics!) and discuss how statistics were used in each instance. How do statistics help make a story clearer?
2. Are statistics ever used to lie or mislead? What examples can you think of? What statistics do you trust and which don't you trust? Why?
Classroom "Syzygy" (Talk, Connect, Assess):
"Statistics: Digging Into Data", Pg. 6
Talk It Over:
1. What are the goals of statistical analysis? What is a sample, and why are samples used? How can statistics help us understand natural processes or make decisions? Offer examples from the article and from recent news stories.
2. How can we make the best use of the many statistics that come our way? Are there rules for the best use of statistical information? Make a list of tips and warning signs and add to it as you read this issue.
1. Visual Arts: Reread the closing section of the article "Caution!" and create a poster that warns consumers about the wise use of statistics. What should people be wary of when interpreting statistics?
2. History: Find an example of how a statistical study helped solve the mystery of a disease -- its origin, treatment, prevention, or cure. Look for information about such diseases as cholera, bubonic plague, or malaria. In your research, keep an eye out for the use of numbers in other fields of health and medicine.
3. Science: Scientists always attempt to repeat, or replicate, one another's research and findings. Conduct your own survey of weight among students ages 10-19. When you have gathered your data, compile graphs like those used in the article. Are your results similar? Where are the differences? Can those differences be explained, or is more research necessary?
1. In a narrative essay, describe a statistical study, beginning with a question of interest to you. (For example, who is the most popular rock star among the students in your class?) In your essay, use the following terms correctly: subject, characteristics, sample, population, descriptive statistics, and statistical inference.
2. Use the weight statistics given in the article to argue for or against allowing girls and boys to compete in the same wrestling league. Make sure your position is supported by the statistics. Present your case as a persuasive speech.
"Statistics in the Courtroom", Pg. 29
Talk It Over:
1. Statistics can help convict a lawbreaker. Can statistics ever prove someone innocent? Think of possible examples. Is it easier to use statistics (DNA, for example) to prove innocence or guilt? Why?
2. Before a jury is seated in a court case, lawyers representing both sides question potential jurors and accept or reject them based on their answers. If you were a lawyer planning to use statistical evidence in court, what questions would you ask potential jurors? Discuss and list 10 interview questions. Explain your reasons for asking each one.
1. Psychology: If attorneys can use statistics to convince a jury, can they also misuse statistics to mislead a jury? How might the fair use of statistics in a courtroom depend on the ability of the jury to understand statistical analysis?
2. Mathematics: The article mentions the close results of the 2000 presidential election in Florida. Go on the Internet to research the popular vote counts from other states in that race. Use percents to figure the margin of error in the vote counting that could have changed a state from Bush to Gore or from Gore to Bush. In what other states do you think the votes should have been recounted?
3. Creative Writing: Write a poem celebrating (or criticizing) the use of statistics in the courtroom. In your poem, try to use "scatterplot" and "outlier" as rhymes. Trade poems among class members and read aloud.
1. Describe a hypothetical court case alleging job discrimination. Write an informational essay about the types of statistics that might be used in such a case and how they would be presented.
2. Collect reports from newspapers and newsmagazines of a court case in which DNA testing is involved. Study the case and make a closing statement to the jury, using statistics to support your call for a verdict of guilt or innocence.
Far Out! Moving Beyond the Magazine (with Number-Numbness Quotes):
"There are three kinds of lies: lies, damned lies, and statistics." -- attributed by Mark Twain to Benjamin Disraeli, but the best-documented source is Leonard H. Courtney
Break the class into pairs or teams of three and organize a statistics-based science fair. Challenge each team to present a project that involves statistical analysis or the gathering of statistics. (Check the articles beginning on pages 10, 32, and 45 for ideas.) Give each team an opportunity to present their hypothesis, methods, data, statistical analysis techniques, and conclusions.
"Inspiring visions rarely include numbers." -- Tom Peters in Thriving on Chaos
Invite to your class the person in your school or school district who is responsible for compiling scores on standardized tests. Ask your guest to explain how scores are determined, what they mean, and how they are communicated to the public. Gather examples of how those statistics are presented by the media.
"Mathematics has given economics rigor, but alas, also mortis." -- Robert Heilbroner
Create a bulletin board display titled "Statistics in the News." Post clippings or photocopies of advertisements, articles, graphs, or diagrams. Accompany each with a brief, written explanation of the statistics involved. Devote one section of the display to misused or misleading statistics.
"Smoking is one of the leading causes of statistics." -- Fletcher Knebel
Large-Group Collaborative Activity:
Select a controversial assertion for which proponents and opponents can present statistical arguments. Form two teams to debate the topic. Present your debate before a panel of judges who evaluate the persuasiveness not of the debaters, but of the statistics they use. | http://www.cobblestonepub.com/resources/ody0312t.html | 13 |
87 | Sailing is the propulsion of a vehicle and the control of its movement with large (usually fabric) foils called sails. By changing the rigging, rudder, and sometimes the keel or centreboard, a sailor manages the force of the wind on the sails in order to move the vessel relative to its surrounding medium (typically water, but also land and ice) and change its direction and speed. Mastery of the skill requires experience in varying wind and sea conditions, as well as knowledge concerning sailboats themselves and an understanding of one's surroundings.
While there are still some places in the world where sail-powered passenger, fishing and trading vessels are used, these craft have become rarer as internal combustion engines have become economically viable in even the poorest and most remote areas. In most countries sailing is enjoyed as a recreational activity or as a sport. Recreational sailing or yachting can be divided into racing and cruising. Cruising can include extended offshore and ocean-crossing trips, coastal sailing within sight of land, and daysailing.
Throughout history sailing has been instrumental in the development of civilization, affording humanity greater mobility than travel over land, whether for trade, transport or warfare, and the capacity for fishing. The earliest representation of a ship under sail appears on a painted disc found in Kuwait dating between 5000 and 5500 BC. Advances in sailing technology from the Middle Ages onward enabled Arab, Chinese, Indian and European explorers to make longer voyages into regions with extreme weather and climatic conditions. There were improvements in sails, masts and rigging; navigation equipment improved. From the 15th century onwards, European ships went further north, stayed longer on the Grand Banks and in the Gulf of St. Lawrence, and eventually began to explore the Pacific Northwest and the Western Arctic. Sailing has contributed to many great explorations in the world.
The air interacting with the sails of a sailing vessel creates various forces, including reaction forces. If the sails are properly oriented with respect to the wind, then the net force on the sails will move the vessel forward. However, boats propelled by sails cannot sail directly into the wind. They must tack (turn the boat through the eye of the wind) back and forth in order to progress directly upwind (see below "Beating").
Sails as airfoils
- when the boat is going in the same direction as the wind, the wind force simply pushes on the sail. The force on the sail is mostly aerodynamic drag, and sails acting in this way are aerodynamically stalled.
- when the boat is traveling across the wind, the air coming in from the side is redirected toward the rear; according to Newton's Third law, the air is accelerated towards the rear of the boat and the sails experience a force in the opposite direction. This force manifests itself as pressure differences between the two sides of the sail - there is a region of low pressure on the front side of the sail and a region of high pressure on the back. Another way to say this is that sails generate lift using the air that flows around them in the same way as an aircraft wing. The wind flowing over the surface of the sail creates a force approximately perpendicular to the sail; the component of that force parallel to the boat's keel pulls the boat forward, the component perpendicular to the keel makes the boat heel and causes leeway.
Apparent wind
The wind that a boat experiences is the combination of the true wind (i.e. the wind relative to a stationary object) and the wind that occurs due to the forward motion of the boat. This combination is the apparent wind, which is the velocity of the wind relative to the boat.
When sailing upwind the apparent wind is greater than the true wind and the direction of the apparent wind will be forward of the true wind. Some high-performance boats are capable of traveling faster than the true windspeed on some points of sail, see for example the Hydroptère, which set a world speed record in 2009 by sailing 1.71 times the speed of the wind. Iceboats can typically sail at 5 times the speed of the wind.
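The apparent wind is simply the vector difference between the true wind velocity and the boat's velocity, which the sketch below illustrates with made-up numbers:

```python
import numpy as np

def apparent_wind(true_wind, boat_velocity):
    """Return (speed, vector) of the wind felt aboard; inputs are 2-D velocity vectors in knots."""
    apparent = np.asarray(true_wind, dtype=float) - np.asarray(boat_velocity, dtype=float)
    return np.linalg.norm(apparent), apparent

# True wind of 10 kn blowing toward -y; boat making 6 kn toward +x.
speed, vector = apparent_wind([0.0, -10.0], [6.0, 0.0])
print(round(speed, 1), vector)  # ~11.7 kn, and the wind appears to come from forward of abeam
```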
The energy that drives a sailboat is harnessed by manipulating the relative movement of wind and water speed: if there is no difference in movement, such as on a calm day or when the wind and water current are moving in the same direction at the same speed, there is no energy to be extracted and the sailboat will not be able to do anything but drift. Where there is a difference in motion, then there is energy to be extracted at the interface. The sailboat does this by placing the sail(s) in the air and the hull(s) in the water.
A sailing vessel is not maneuverable due to sails alone—the forces caused by the wind on the sails would cause the vessel to rotate and travel sideways instead of moving forward. In the same manner that an aircraft requires stabilizers, such as a tailplane with elevators as well as wings, a boat requires a keel and rudder. The forces on the sails as well as those from below the water line on the keel, centreboard, and other underwater foils including the hull itself (especially for catamarans or in a traditional proa) combine and partially cancel each other to produce the motive force for the vessel. Thus, the physical portion of the boat that is below water can be regarded as functioning as a "second sail." The flow of water over the underwater hull portions creates hydrodynamic forces, which combine with the aerodynamic forces from the sails to allow motion in almost any direction except straight into the wind. When sailing close to the wind the force generated by the sail acts at 90° to the sail. This force can be considered as split into a small force acting in the direction of travel, as well as a large sideways force that heels (tips) the boat. To enable maximum forward speed, this sideways force needs to be cancelled out, for example by the keel and by crew weight acting as human ballast, leaving only a smaller forward resultant force. Depending on the efficiency of the rig and hull, the angle of travel relative to the true wind can be as little as 35° or may need to be 80° or greater. This angle is half of the tacking angle and defines one side of a 'no-go zone' into the wind, in which a vessel cannot sail directly.
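A rough sketch of that decomposition (illustrative numbers only; the angle here is the sail's angle to the boat's centreline, and the force is taken, as above, to act perpendicular to the sail):

```python
import math

def drive_and_heel(sail_force_n, sail_angle_deg):
    """Split a force acting perpendicular to the sail into a forward (drive)
    component along the keel line and a sideways (heeling) component."""
    theta = math.radians(sail_angle_deg)
    drive = sail_force_n * math.sin(theta)   # component in the direction of travel
    heel = sail_force_n * math.cos(theta)    # component that heels the boat and causes leeway
    return drive, heel

print(drive_and_heel(1000, 15))  # sheeted in hard, close to the wind: small drive, large heel
print(drive_and_heel(1000, 60))  # sheets eased on a reach: far more drive, less heel
```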
Tacking is essential when sailing upwind. The sails, when correctly adjusted, will generate aerodynamic lift. When sailing downwind, the sails no longer generate aerodynamic lift and airflow is stalled, with the push of the wind on the sails giving drag only. As the boat is going downwind, the apparent wind is less than the true wind and this, allied to the fact that the sails are not producing aerodynamic lift, serves to limit the downwind speed.
Effects of wind shear
Wind shear affects sailboats in motion by presenting a different wind speed and direction at different heights along the mast. Wind shear occurs because of friction above a water surface slowing the flow of air. Thus, a difference in true wind creates a different apparent wind at different heights. Sailmakers may introduce sail twist in the design of the sail, where the head of the sail is set at a different angle of attack from the foot of the sail in order to change the lift distribution with height. The effect of wind shear can be factored into the selection of twist in the sail design, but this can be difficult to predict since wind shear may vary widely in different weather conditions. Sailors may also adjust the trim of the sail to account for wind gradient, for example, using a boom vang.
Points of sail
The point of sail describes a sailing boat's course in relation to the wind direction.
No sailboat can sail directly into the wind (known as being "in irons"), and for a given boat there is a minimum angle that it can sail relative to the wind; attempting to sail closer than that leads to the sails luffing and the boat will slow down and stop. This "no-go zone" (shown shaded in accompanying figure) is about 45° either side of the true wind for a modern sloop.
There are 5 main points of sail. In order from the edge of the no-go zone (or "irons") to directly downwind they are:
- close haul (the minimum angle to the wind that the boat and its rig can manage - typically about 45° )
- close reach (between close hauled and a beam reach)
- beam reach (approximately 90° to the wind)
- broad reach (between a beam reach and running)
- running (close to directly downwind)
The sail trim on a boat is relative to the point of sail one is on: on a beam reach sails are mostly let out, on a run sails are all the way out, and close hauled sails are pulled in very tightly. Two main skills of sailing are trimming the sails correctly for the direction and strength of the wind, and maintaining a course relative to the wind that suits the sails once trimmed.
Close Hauled or "Beating"
A boat can be 'worked to windward', to arrive at an upwind destination, by sailing close-hauled with the wind coming from one side, then tacking (turning the boat through the eye of the wind) and sailing with the wind coming from the other side. By this method of zig-zagging into the wind, known as beating, it is possible to reach any upwind destination. A yacht beating to a mark directly upwind one mile (1.6 km) away will cover a distance through the water of at least 1.4 miles (2.3 km), if it can tack through an angle of 90 degrees including leeway. An old adage describes beating as sailing for twice the distance at half the speed and three times the discomfort.
An estimate of the correct tacking distance (and thereby the time taken to travel it at various boat speeds) can be obtained by using Pythagoras' theorem, assuming two equal tacks and a tacking angle of 90°. With each tack taken as 1 unit, the straight-line distance to the mark is the hypotenuse, √2, so the boat sails 2 units to make good √2 units: roughly 1.4 times the direct distance.
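The same calculation, generalized to other tacking angles, can be sketched as follows (leeway is ignored and the two tacks are assumed equal; the helper name is purely illustrative):

```python
import math

def distance_sailed(direct_distance_nm, tacking_angle_deg):
    """Distance sailed through the water to reach a mark dead upwind,
    beating on two equal tacks and ignoring leeway."""
    # Each tack is sailed at half the tacking angle off the direct line to the mark.
    half_angle = math.radians(tacking_angle_deg / 2)
    return direct_distance_nm / math.cos(half_angle)

print(distance_sailed(1.0, 90))   # ~1.41 nm through the water for a mark 1 nm upwind
print(distance_sailed(1.0, 110))  # a wider tacking angle costs more: ~1.74 nm
```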
When beating to windward one tack may be more favorable than the other - more in the desired direction. The best strategy is to stay on the favorable tack as much as possible. If the wind shifts in the sailor's favor, called a lift, so much the better, then this tack is even more favorable. But if it shifts against the sailor's, called a header, then the opposite tack may become the more favorable course. So when the destination is directly into the wind the best strategy is given by the racing adage "tack on a header." This is true because a header on one tack is a lift on the other.
How closely a boat can sail into the wind depends on the boat's design, sail shape and trim, the sea state, and the wind speed.
Typical minimum pointing angles to the true wind are as follows; a rough illustration of the effect on upwind progress follows the list. Actual course over the ground will be worse due to leeway.
- about 35° for modern racing yachts which have been optimized for upwind performance (like America's Cup yachts)
- about 40 to 45° for modern cruiser-racer yachts (fast cruising yachts)
- about 50 to 60° for cruisers and workboats with inefficient keels, inefficient hull shapes, or low draught, when compared to craft designed for sailing performance, and for boats carrying two or more masts (since the forward sails adversely affect the windward ability of sails further aft when sailing upwind)
- close to 90° for square riggers and similar vessels due to the sail shape which is very ineffective when sailing upwind
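These angles matter because progress made good directly upwind falls off as the cosine of the pointing angle. The sketch below is only an illustration (it unrealistically assumes the same boat speed at every angle and ignores leeway):

```python
import math

boat_speed_kn = 6.0  # assumed constant here purely for illustration
for angle_deg in (35, 45, 60, 90):
    vmg = boat_speed_kn * math.cos(math.radians(angle_deg))
    print(f"{angle_deg:>2}° off the true wind -> {vmg:.1f} kn made good to windward")
# A square rigger pointing near 90° makes essentially no progress to windward.
```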
Sailing close-hauled under a large amount of sail, and heeling a great deal, can induce weather helm, or a tendency for the boat to turn into the wind. This requires pulling the tiller to windward (i.e. 'to weather'), or turning the wheel leeward, in order to counteract the effect and maintain the required course. The lee side of the hull is more under water than the weather side and the resulting shape of the submerged parts of the hull usually creates a force that pushes the bow to weather. Driving both the asymmetric heeling hull form and the angled rudder through the water produces drag that slows the boat down. If weather helm builds further, it can limit the ability of the helmsman to steer the boat, which can be turned towards but not effectively away from the wind. At more extreme angles of heel, the boat will spontaneously 'round up' into the wind during gusts, i.e. it will turn into the wind regardless of any corrective action taken on the helm.
Any action that reduces the angle of heel of a boat that is reaching or beating to windward will help reduce excessive weather helm. Racing sailors use their body weight to bring the boat to a more upright position, but are not allowed to use "movable ballast" during a race. Reducing or reefing the total sail area will have the same effect and many boats will sail faster with less sail in a stiff breeze due to the reduction in underwater drag. Easing the sheets on aft-most sails, such as the mainsail in a sloop or cutter can have an immediate effect, especially to help with manoeuvering. Moving or increasing sail area forward can also help, for example by raising the jib (and maybe lowering the staysail) on a cutter.
When the boat is traveling approximately perpendicular to the wind, this is called reaching. A beam reach is with the wind at right angles to the boat, a close reach is anywhere between beating and a beam reach, and a broad reach is between a beam reach and running.
For most modern sailboats, that is boats with fore-and-aft sails, reaching is the fastest way to travel. The direction of the wind is ideal when reaching because it can maximize the lift generated on the sails in the forward direction of the boat, giving the best boat speed. Also when reaching, the boat can be steered exactly in the direction that is most desirable, and the sails can be trimmed for that direction.
Reaching may, however, put the boat on a course parallel with the crests of the waves. When the waves are steep, it may be necessary to sail closer to the wind to avoid waves directly on the beam.
Sailing the boat within roughly 30 degrees either side of dead downwind is called a run. This can be the most comfortable point of sail, but requires constant attention. Loss of attention by the helmsman can lead to an accidental jibe, causing injury to the boat or crew. All on deck must be aware of, and if possible avoid, the potential arc of the boom, mainsheet and other gear in case an accidental jibe occurs during a run. A preventer can be rigged to reduce danger and damage from accidental jibes.
This is generally the most unstable point of sail, yet the easiest for a novice to grasp conceptually, which makes it a common pitfall for beginners. In stronger winds, rolling increases as there is less rolling resistance provided by the sails, as they are eased out. Also, having the sails and boom(s) perpendicular to the boat throws weight and some wind force to that side, making the boat harder to balance. In smaller boats, death rolls can build up and lead to capsize.
Also on a run an inexperienced or inattentive sailor can easily misjudge the real wind strength since the boat speed subtracts directly from the true wind speed and makes the apparent wind less. In addition sea conditions can also falsely seem milder than they are as the waves ahead are being viewed from behind making white caps less apparent. When changing course from this point of sail to a reach or a beat, a sailboat that seemed under control can instantly become over-canvassed and in danger. Any boat over-canvassed on a run can round up, heel excessively and stop suddenly in the water. This is called broaching and it can lead to capsize, possible crew injury and loss of crew into the water.
Options for maneuvering are also reduced. On other points of sail, it is easy to stop or slow the boat by heading into the wind; there may be no such easy way out when running, especially in close quarters or when a spinnaker, whisker pole or preventer are set.
Basic sailing techniques
An important aspect of sailing is keeping the boat in "trim".
- Course made good - The turning or steering of the boat using the wheel or tiller to the desired course or buoy. See different points of sail. This may be a definite bearing (e.g. steer 270 degrees), or along a transit, or at a desired angle to the apparent wind direction.
- Trim - This is the fore and aft balance of the boat. The aim is to adjust the moveable ballast (the crew) forwards or backwards to achieve an 'even keel'. On an upwind course in a small boat, the crew typically sit forward to reduce drag. When 'running', it is more efficient for the crew to sit to the rear of the boat. The position of the crew matters less as the size (and weight) of the boat increases.
- Balance - This is the port and starboard balance. The aim, once again, is to adjust weight 'windward' or 'leeward' to prevent excessive heeling. The boat moves at a faster velocity if it is flat to the water.
- Sail trim - Trimming sails is a large topic. Simply put, however, a sail should be pulled in until it fills with wind, but no further than the point where the front edge of the sail (the luff) is exactly in line with the wind. Let it out until it starts to flap, and then pull it in until it stops.
- Centreboard (Daggerboard) - If a moveable centreboard is fitted, then it should be lowered when sailing "close to the wind" but can be raised up on downwind courses to reduce drag. The centreboard prevents lateral motion and allows the boat to sail upwind. A boat with no centreboard will instead have a permanent keel, some other form of underwater foil, or even the hull itself which serves the same purpose. On a close haul the daggerboard should be fully down, and while running, over half way up.
Together, these points are known as 'The Five Essentials' and constitute the central aspects of sailing.
Tacking and gybing
There are two ways to change from port tack to starboard tack (or vice versa): either by turning the bow through the eye of the wind, "tacking" or the stern, "gybing". In general sailing, tacking is the safer method and preferred especially when sailing upwind; in windsurfing, gybing is preferred as this involves much less maneuvering for the sailor.
For general sailing, during such course changes, there is work that needs to be done. Just before tacking the command "Ready about" is given, at which point the crew must man the sheet lines which need to be changed over to the other tack and the helmsman gets ready. To execute the tack the command "Lee-ho" or "Hard-a-lee" is given. The latter is a direct order to the helmsman to push the tiller hard to the leeward side of the boat making the bow of the boat come up and quickly turn through the eye of the wind to prevent the boat being caught in irons. As the boat turns through the eye of the wind, some sails such as those with a boom and a single sheet may self-tack and need only small adjustments of sheeting points, but for jibs and other sails with separate sheets on either side, the original sheet must be loosened and the opposite sheet lines hauled in and set quickly and properly for the new point of sail.
Gybing is often necessary to change course when sailing off the wind or downwind. It is a more dangerous maneuver because booms must be controlled as the sails catch the new wind direction from astern. An uncontrolled jibe can happen suddenly by itself when sailing downwind if the helmsman is not paying attention to the wind direction and can be very dangerous as the main boom will sweep across the cockpit very quickly and with great force. Before gybing the command "Ready to gybe" is given. The crew gets ready at their positions. If any sails are constrained with preventers or whisker poles these are taken down. The command "Gybe-ho" is given to execute the turn. The boomed sails must be hauled in and made fast before the stern reaches the eye of the wind, so that they are amidship and controlled as the stern passes through the wind, and then let out quickly under control and adjusted to the new point of sail.
Reducing sail
An important safety aspect of sailing is to adjust the amount of sail to suit the wind conditions. As the wind speed increases the crew should progressively reduce the amount of sail. On a small boat with only jib and mainsail this is done by furling the jib and by partially lowering the mainsail, a process called 'reefing the main'.
Reefing means reducing the area of a sail without actually changing it for a smaller sail. Ideally reefing does not only result in a reduced sail area but also in a lower centre of effort from the sails, reducing the heeling moment and keeping the boat more upright.
There are three common methods of reefing the mainsail:
- Slab reefing, which involves lowering the sail by about one-quarter to one-third of its full length and tightening the lower part of the sail using an outhaul or a pre-loaded reef line through a cringle at the new clew, and hook through a cringle at the new tack.
- In-mast (or on-mast) roller-reefing. This method rolls the sail up around a vertical foil either inside a slot in the mast, or affixed to the outside of the mast. It requires a mainsail with either no battens, or newly-developed vertical battens.
- In-boom roller-reefing, with a horizontal foil inside the boom. This method allows for standard- or full-length horizontal battens.
Mainsail furling systems have become increasingly popular on cruising yachts, as they can be operated shorthanded and from the cockpit, in most cases. However, the sail can become jammed in the mast or boom slot if not operated correctly. Mainsail furling is almost never used while racing because it results in a less efficient sail profile. The classical slab-reefing method is the most widely used. Mainsail furling has an additional disadvantage in that its complicated gear may somewhat increase weight aloft. However, as the size of the boat increases, the benefits of mainsail roller furling increase dramatically.
An old saying goes, "The first time you think of reducing sail you should," and correspondingly, "When you think you are ready to take out a reef, have a cup of tea first."
Sail trimming
The most basic control of the sail consists of setting its angle relative to the wind. The control line that accomplishes this is called a "sheet." If the sheet is too loose the sail will flap in the wind, an occurrence that is called "luffing." Optimum sail angle can be approximated by pulling the sheet in just so far as to make the luffing stop, or by using tell-tales - small ribbons or yarn attached to each side of the sail that both stream horizontally to indicate a properly trimmed sail. Finer controls adjust the overall shape of the sail.
Two or more sails are frequently combined to maximize the smooth flow of air. The sails are adjusted to create a smooth laminar flow over the sail surfaces. This is called the "slot effect". The combined sails fit into an imaginary aerofoil outline, so that the most forward sails are more in line with the wind, whereas the more aft sails are more in line with the course followed. The combined efficiency of this sail plan is greater than the sum of each sail used in isolation.
More detailed aspects include specific control of the sail's shape, e.g.:
- reefing, or reducing the sail area in stronger wind
- altering sail shape to make it flatter in high winds
- raking the mast when going upwind (to tilt the sail towards the rear, this being more stable)
- providing sail twist to account for wind speed differential and to spill excess wind in gusty conditions
- gybing or lowering a sail
Hull trim
Hull trim is the adjustment of a boat's loading so as to change its fore-and-aft attitude in the water. In small boats, it is done by positioning the crew. In larger boats the weight of a person has less effect on the hull trim, but it can be adjusted by shifting gear, fuel, water, or supplies. Different hull trim efforts are required for different kinds of boats and different conditions. Here are just a few examples: In a lightweight racing dinghy like a Thistle, the hull should be kept level, on its designed water line for best performance in all conditions. In many small boats, weight too far aft can cause drag by submerging the transom, especially in light to moderate winds. Weight too far forward can cause the bow to dig into the waves. In heavy winds, a boat with its bow too low may capsize by pitching forward over its bow (pitch-pole) or dive under the waves (submarine). On a run in heavy winds, the forces on the sails tend to drive a boat's bow down, so the crew weight is moved far aft.
When a ship or boat leans over to one side, whether from the action of waves, from the centrifugal force of a turn, or under wind pressure on sails and exposed topsides, it is said to 'heel'. A sailing boat that is over-canvassed, and therefore heeling excessively, may sail less efficiently; how much this matters depends on factors such as whether the heeling force is temporary (e.g. a wind gust), the use of the boat (e.g. racing), crew ability, point of sail, and hull size and design.
When a vessel is subject to a heeling force (such as wind pressure), the buoyancy and beam of the hull counteract the heeling force. A weighted keel provides additional means to right the boat. In some high-performance racing yachts, water ballast or the angle of a canting keel can be changed to provide additional righting force to counteract heeling. The crew may move their personal weight to the high (upwind) side of the boat; this is called hiking, which shifts the centre of gravity and produces a righting lever to reduce the degree of heeling. An incidental benefit is faster vessel speed caused by more efficient action of the hull and sails. Other options to reduce heeling include reducing the exposed sail area, improving the efficiency of the sail setting, and a variant of hiking called "trapezing", which can only be done if the vessel is designed for it, as in dinghy sailing. A sailor can also turn upwind in gusts (known as rounding up, and often involuntary); this can lead to difficulties in controlling the vessel if it is over-canvassed. Wind can be spilled from the sails by 'sheeting out', or loosening them. The number of sails and their size and shape can be altered. Raising the dinghy centreboard can reduce heeling by allowing more leeway.
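As a rough illustration of why moving crew weight to windward is effective, the righting moment is simply the crew's weight multiplied by its lever arm from the boat's centreline (the masses and distances below are made-up values):

```python
G = 9.81  # gravitational acceleration, m/s^2

def righting_moment(crew_mass_kg, lever_arm_m):
    """Moment (N*m) produced by crew weight placed to windward of the centreline."""
    return crew_mass_kg * G * lever_arm_m

print(righting_moment(80, 0.8))   # crew hiking on the rail: ~630 N*m
print(righting_moment(80, 1.6))   # the same crew out on a trapeze: ~1260 N*m
```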
The increasingly asymmetric underwater shape of the hull that accompanies an increasing angle of heel may generate an increasing turning force into the wind. The sails' centre of effort also adds to this turning effect, because its lever arm grows with increased heeling; this shows itself as increased effort required at the helm to steer a straight course. Increased heeling reduces the exposed sail area relative to the wind direction, leading to an equilibrium state. As more heeling force causes more heel, weather helm may be experienced. This condition has a braking effect on the vessel. Small amounts (≤5 degrees) of weather helm are generally considered desirable because of the consequent aerofoil lift effect from the rudder, which produces helpful motion to windward. Lee helm, the opposite of weather helm, is generally considered to be dangerous because the vessel turns away from the wind when the helm is released.
Sailing hulls and hull shapes
Sailing boats with one hull are "monohulls", those with two are "catamarans", those with three are "trimarans". A boat is turned by a rudder, which itself is controlled by a tiller or a wheel, while at the same time adjusting the sheeting angle of the sails. Smaller sailing boats often have a stabilising, raisable, underwater fin called a centreboard, daggerboard, or leeboard; larger sailing boats have a fixed (or sometimes canting) keel. As a general rule, the former are called dinghies, the latter keelboats. However, up until the adoption of the Racing Rules of Sailing, any vessel racing under sail was considered a yacht, be it a multi-masted ship-rigged vessel (such as a sailing frigate), a sailboard (more commonly referred to as a windsurfer) or remote-controlled boat, or anything in between. (See Dinghy sailing.)
Multihulls use flotation and/or weight positioned away from the centre line of the sailboat to counter the force of the wind. This is in contrast to heavy ballast that can account for up to 90% (in extreme cases like AC boats) of the weight of a monohull sailboat. In the case of a standard catamaran there are two similarly-sized and -shaped slender hulls connected by beams, which are sometimes overlaid by a deck superstructure. Another catamaran variation is the proa. In the case of trimarans, which have an unballasted centre hull similar to a monohull, two smaller amas are situated parallel to the centre hull to resist the sideways force of the wind. The advantage of multihulled sailboats is that they do not suffer the performance penalty of having to carry heavy ballast, and their relatively lesser draft reduces the amount of drag, caused by friction and inertia, when moving through the water.
One of the most common dinghy hulls in the world is the Laser hull. It was designed by Bruce Kirby in 1971 and unveiled at the New York boat show that year. It was designed with speed and simplicity in mind. The Laser is 13 feet 10.5 inches long, with a 12.5-foot waterline and 76 square feet (7.1 m2) of sail.
Types of sails and layouts
A traditional modern yacht is technically called a "Bermuda sloop" (sometimes a "Bermudan sloop"). A sloop is any boat that has a single mast and usually a single headsail (generally a jib) in addition to the mainsail (Bermuda rig but c.f. Friendship sloop). A cutter (boat) also has a single mast, set further aft than a sloop and more than one headsail. Additionally, Bermuda sloops only have a single sail behind the mast. Other types of sloops are gaff-rigged sloops and lateen sloops. Gaff-rigged sloops have quadrilateral mainsails with a gaff (a small boom) at their upper edge (the "head" of the sail). Gaff-rigged vessels may also have another sail, called a topsail, above the gaff. Lateen sloops have triangular sails with the upper edge attached to a gaff, and the lower edge attached to the boom, and the boom and gaff are attached to each other via some type of hinge. It is also possible for a sloop to be square rigged (having large square sails like a Napoleonic Wars-era ship of the line). Note that a "sloop of war", in the naval sense, may well have more than one mast, and is not properly a sloop by the modern meaning.
If a boat has two masts, it may be a schooner, a ketch, or a yawl, if it is rigged fore-and-aft on all masts. A schooner may have any number of masts provided the second from the front is the tallest (called the "main mast"). In both a ketch and a yawl, the foremost mast is tallest, and thus the main mast, while the rear mast is shorter, and called the mizzen mast. The difference between a ketch and a yawl is that in a ketch, the mizzen mast is forward of the rudderpost (the axis of rotation for the rudder), while a yawl has its mizzen mast behind the rudderpost. In modern parlance, a brigantine is a vessel whose forward mast is rigged with square sails, while her after mast is rigged fore-and-aft. A brig is a vessel with two masts both rigged square.
A spinnaker is a large, full sail that is only used when sailing off wind either reaching or downwind, to catch the maximum amount of wind.
Sailing by high altitude wind power
SkySails uses large kites to help propel freighter ships. Speedsailor Dave Culp introduced his OutLeader kite sail for speed sailing. Malcolm Phillips patented an advanced sailing technique using high-altitude kites and a kytoon.
Rigid foils
With modern technology, "wings", that is rigid sails, may be used in place of fabric sails. An example of this would be the International C-Class Catamaran Championship and the yacht USA 17 that won the 2010 America's Cup. Such rigid sails are typically made of thin plastic fabric held stretched over a frame.
Alternative wind-powered vessels
Some non-traditional rigs capture energy from the wind in a different fashion and are capable of feats that traditional rigs are not, such as sailing directly into the wind. One such example is the wind turbine boat, also called the windmill boat, which uses a large windmill to extract energy from the wind, and a propeller to convert this energy to forward motion of the hull. A similar design, called the autogyro boat, uses a wind turbine without the propeller, and functions in a manner similar to a normal sail. A more recent (2010) development is a cart that uses wheels linked to a propeller to "sail" dead downwind at speeds exceeding wind speed.
Kitesurfing and windsurfing
Sailing terminology
Sailors use traditional nautical terms for the parts of or directions on a vessel: starboard (right), port or larboard (left), forward or fore (front), aft or abaft (rearward), bow (forward part of the hull), stern (aft part of the hull), beam (the widest part). Vertical spars are masts, horizontal spars are booms (if they can hit the sailor), yards, gaffs (if they are too high to reach) or poles (if they cannot hit the sailor).
Rope and lines
In most cases, rope is the term used only for raw material. Once a section of rope is designated for a particular purpose on a vessel, it generally is called a line, as in outhaul line or dock line. A very thick line is considered a cable. Lines that are attached to sails to control their shapes are called sheets, as in mainsheet. If a rope is made of wire, it maintains its rope name as in 'wire rope' halyard.
Lines (generally steel cables) that support masts are stationary and are collectively known as a vessel's standing rigging, and individually as shrouds or stays. The stay running forward from a mast to the bow is called the forestay or headstay. Stays running aft are backstays or after stays.
Moveable lines that control sails or other equipment are known collectively as a vessel's running rigging. Lines that raise sails are called halyards while those that strike them are called downhauls. Lines that adjust (trim) the sails are called sheets. These are often referred to using the name of the sail they control (such as main sheet, or jib sheet). Sail trim may also be controlled with smaller lines attached to the forward section of a boom such as a cunningham; a line used to hold the boom down is called a vang, or a kicker in the United Kingdom. A topping lift is used to hold a boom up in the absence of sail tension. Guys are used to control the ends of other spars such as spinnaker poles.
Lines used to tie a boat up when alongside are called docklines, docking cables or mooring warps. In dinghies the single line from the bow is referred to as the painter. A rode is what attaches an anchored boat to its anchor. It may be made of chain, rope, or a combination of the two.
Some lines are referred to as ropes:
- a bell rope (to ring the bell),
- a bolt rope (attached to the edge of a sail for extra strength),
- a foot rope (for sailors on square riggers to stand on while reefing or furling the sails), and
- a tiller rope (to temporarily hold the tiller and keep the boat on course).
Other terms
Walls are called bulkheads or ceilings, while the surfaces referred to as ceilings on land are called 'overheads'. Floors are called 'soles' or decks. "Broken up" was the fate of a ship that hit a "rocky point" or was simply no longer wanted. The toilet is traditionally called the 'head', the kitchen is the galley. When lines are tied off, this may be referred to as 'made fast' or 'belayed.' Sails in different sail plans have unchanging names, however. For the naming of sails, see sail-plan.
Knots and line handling
The tying and untying of knots and hitches as well as the general handling of ropes and lines are fundamental to the art of sailing. The RYA basic 'Start Yachting' syllabus lists the following knots and hitches:
- figure-eight knot — stopper knot
- round turn and two half hitches — secure the end of a rope to a fixed object
- bowline — used to form a fixed loop at the end of a rope
The RYA Competent Crew syllabus adds the following to the list above, as well as knowledge of the correct use of each:
- clove hitch — securing lines running along a series of posts
- rolling hitch — rigging a stopper to relax the tension on a sheet
- reef knot — joining two ends of a single line to bind around an object
- single and double sheet bend — joining two ropes of different diameters
In addition it requires competent crewmembers to understand 'taking a turn' around a cleat and to be able to make cleated lines secure. Lines and halyards need to be coiled neatly for stowage and reuse. Dock lines need to be thrown and handled safely and correctly when coming alongside, up to a buoy, and when anchoring, as well as when casting off and getting under way.
Rules and regulations
Every vessel in coastal and offshore waters is subject to the International Regulations for Preventing Collisions at Sea (the COLREGS). On inland waterways and lakes other similar regulations, such as CEVNI in Europe, may apply. In some sailing events, such as the Olympic Games, which are held on closed courses where no other boating is allowed, specific racing rules such as the Racing Rules of Sailing (RRS) may apply. Often, in club racing, specific club racing rules, perhaps based on RRS, may be superimposed onto the more general regulations such as COLREGS or CEVNI.
In general, regardless of the activity, every sailor must
- Maintain a proper lookout at all times
- Adjust speed to suit the conditions
- Know whether to 'stand on' or 'give way' in any close-quarters situation.
The stand-on vessel must hold a steady course and speed but be prepared to take late avoiding action to prevent an actual collision if the other vessel does not do so in time. The give-way vessel must take early, positive and obvious avoiding action, without crossing ahead of the other vessel.(Rules 16-17)
- If an approaching vessel remains on a steady bearing, and the range is decreasing, then a collision is likely. (Rule 7) This can be checked with a hand-bearing compass.
- The sailing vessel on port tack gives way to the sailing vessel on starboard tack (Rule 12)
- If both sailing vessels are on the same tack, the windward boat gives way to the leeward one (Rule 12)
- If a vessel on port tack is unable to determine the tack of the other boat, she should be prepared to give way (Rule 12)
- An overtaking vessel must keep clear of the vessel being overtaken (Rule 13)
- Sailing vessels must give way to vessels engaged in fishing, those not under command, those restricted in their ability to manoeuvre and should avoid impeding the safe passage of a vessel constrained by her draft. (Rule 18)
The COLREGS go on to describe the lights to be shown by vessels under way at night or in restricted visibility. Specifically, for sailing boats, red and green sidelights and a white sternlight are required, although for vessels under 7 metres (23.0 ft) in length, these may be substituted by a torch or white all-round lantern. (Rules 22 & 25)
Sailors are required to be aware not only of the requirements for their own boat, but of all the other lights, shapes and flags that may be shown by other vessels, such as those fishing, towing, dredging, diving etc., as well as sound signals that may be made in restricted visibility and at close quarters, so that they can make decisions under the COLREGS in good time, should the need arise. (Rules 32 - 37)
In addition to the COLREGS, CEVNI and/or any specific racing rules that apply to a sailing boat, there are also
- The IALA (International Association of Lighthouse Authorities) standards for lateral marks, lights, signals and buoyage, and rules designed to support safe navigation.
- The SOLAS (International Convention for the Safety of Life at Sea) regulations, specifically Chapter V, which became mandatory for all leisure craft users of the sea from 1 July 2002. These regulations place the obligations for safety on the owners and operators of any boat including sailboats. They specify the safety equipment needed, the emergency procedures to be used appropriate to the boat's size and its sailing range, and requirements for passage planning with regard to weather and safety.
Licensing regulations vary widely across the world. While boating on international waters does not require any license, a license may be required to operate a vessel on coastal waters or inland waters. Some jurisdictions require a license when a certain size is exceeded (e.g., a length of 20 meters), others only require licenses to pilot passenger ships, ferries or tugboats. For example, the European Union issues the International Certificate of Competence, which is required to operate pleasure craft in most inland waterways within the union. The United States in contrast has no licensing, but instead has voluntary certification organizations such as the American Sailing Association. These US certificates are often required to charter a boat, but are not required by any federal or state law.
Sailboat racing
Sailboat racing generally fits into one of two categories:
- Where all the boats are substantially similar, and the first boat to finish wins. (e.g. 470, 49er, Contender, Farr 40, Laser, Lido 14, RS Feva, Soling, Star, Thistle, etc.)
- Where boats of different types sail against each other and are scored based on their handicaps which are calculated either before the start or after the finish. ( e.g. Fastnet Race, Commodore's Cup, Sydney to Hobart Yacht Race, Bermuda Race, etc.) The two most common handicap systems are the IRC and the Portsmouth Yardstick, while the Performance Handicap Racing Fleet (PHRF) is very common in the U.S.A.
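As an illustration of handicap scoring, a commonly used form is the Portsmouth Yardstick, in which each class carries a handicap number and corrected times are compared; the handicap numbers below are made up purely for illustration:

```python
def corrected_time(elapsed_s, portsmouth_number):
    """Portsmouth Yardstick-style correction: the lowest corrected time wins."""
    return elapsed_s * 1000.0 / portsmouth_number

# Illustrative handicap numbers only, not official ratings.
results = {
    "Boat A (PN 1100)": corrected_time(3600, 1100),  # slower elapsed time, higher handicap
    "Boat B (PN 950)": corrected_time(3300, 950),    # faster elapsed time, lower handicap
}
for boat, t in sorted(results.items(), key=lambda kv: kv[1]):
    print(f"{boat}: corrected {t:.0f} s")
```

Here the boat with the slower elapsed time wins on corrected time, which is the point of handicap racing.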
Class racing can be further subdivided. Each class has its own set of class rules, and some classes are more restrictive than others. The most restrictive are strict one-design classes, in which the boats are required to be essentially identical.
At the other extreme are the development classes based on a box rule. The box rule might specify only a few parameters such as maximum length, minimum weight, and maximum sail area, thus allowing creative engineering to develop the fastest boat within the constraints. Examples include the Moth (dinghy), the A Class Catamaran, and the boats used in the America's Cup, Volvo Ocean Race, and Barcelona World Race.
Many classes lie somewhere in between strict one-design and box rule. These classes allow some variation, but the boats are still substantially similar. For instance, both wood and fiberglass hulls are allowed in the Albacore, Wayfarer, and Fireball classes, but the hull shape, weight, and sail area are tightly constrained.
Sailboat racing ranges from single person dinghy racing to large boats with 10 or 20 crew and from small boats costing a few thousand dollars to multi-million dollar America's Cup or Sydney to Hobart Yacht Race campaigns. The costs of participating in the high end large boat competitions make this type of sailing one of the most expensive sports in the world. However, there are inexpensive ways to get involved in sailboat racing, such as at community sailing clubs, classes offered by local recreation organizations and in some inexpensive dinghy and small catamaran classes. Additionally high schools and colleges may offer sailboat racing programs through the Interscholastic Sailing Association (in the USA) and the Intercollegiate Sailing Association (in the USA and some parts of Canada). Under these conditions, sailboat racing can be comparable to or less expensive than sports such as golf and skiing. Sailboat racing is one of the few sports in which people of all ages and genders can regularly compete with and against each other.
Most sailboat and yacht racing is done in coastal or inland waters. However, in terms of endurance and risk to life, ocean races such as the Volvo Ocean Race, the solo VELUX 5 Oceans Race, and the non-stop solo Vendée Globe, rate as some of the most extreme and dangerous sporting events. Not only do participants compete for days with little rest, but an unexpected storm, a single equipment failure, or collision with an ice floe could result in the sailboat being disabled or sunk hundreds or thousands of miles from search and rescue.
Recreational sailing
Sailing for pleasure can involve short trips across a bay, day sailing, coastal cruising, and more extended offshore or 'blue-water' cruising. These trips can be singlehanded or the vessel may be crewed by families or groups of friends. Sailing vessels may proceed on their own, or be part of a flotilla with other like-minded voyagers. Sailing boats may be operated by their owners, who often also gain pleasure from maintaining and modifying their craft to suit their needs and taste, or may be rented for the specific trip or cruise. A professional skipper and even crew may be hired along with the boat in some cases. People take cruises in which they crew and 'learn the ropes' aboard craft such as tall ships, classic sailing vessels and restored working boats.
Cruising trips of several days or longer can involve a deep immersion in logistics, navigation, meteorology, local geography and history, fishing lore, sailing knowledge, general psychological coping, and serendipity. Once the boat is acquired it is not all that expensive an endeavor, often much less expensive than a normal vacation on land. It naturally develops self-reliance, responsibility, economy, and many other useful skills. Besides improving sailing skills, all the other normal needs of everyday living must also be addressed. There are work roles that can be done by everyone in the family to help contribute to an enjoyable outdoor adventure for all.
A style of casual coastal cruising called gunkholing is a popular summertime family recreational activity. It consists of taking a series of day sails to out of the way places and anchoring overnight while enjoying such activities as exploring isolated islands, swimming, fishing, etc. Many nearby local waters on rivers, bays, sounds, and coastlines can become great natural cruising grounds for this type of recreational sailing. Casual sailing trips with friends and family can become lifetime bonding experiences.
Long-distance voyaging, such as that across oceans and between far-flung ports, can be considered the near-absolute province of the cruising sailboat. Most modern yachts of 25–55 feet long, propelled solely by mechanical powerplants, cannot carry the fuel sufficient for a point-to-point voyage of even 250–500 miles without needing to resupply; but a well-prepared sail-powered yacht of similar length is theoretically capable of sailing anywhere its crew is willing to guide it. Even considering that the cost benefits are offset by a much reduced cruising speed, many people traveling distances in small boats come to appreciate the more leisurely pace and increased time spent on the water. Since the solo circumnavigation of Joshua Slocum in the 1890s, long-distance cruising under sail has inspired thousands of otherwise normal people to explore distant seas and horizons. The important voyages of Robin Lee Graham, Eric Hiscock, Don Street and others have shown that, while not strictly racing, ocean voyaging carries with it an inherent sense of competition, especially that between man and the elements. Such a challenging enterprise requires keen knowledge of sailing in general as well as maintenance, navigation (especially celestial navigation), and often even international diplomacy (for which an entire set of protocols should be learned and practiced). But one of the great benefits to sailboat ownership is that one may at least imagine the type of adventure that the average affordable powerboat could never accomplish.
See also
- Sailing (sport)
- Sailing at the Summer Olympics
- American Sail Training Association
- Boat building
- Canadian Yachting Association
- Catboat and Sloop
- Day sailer
- Dinghy racing
- Ice boat
- Land sailing
- Glossary of nautical terms
- Planing (sailing)
- Puddle Duck Racer
- Racing Rules of Sailing
- Royal Yachting Association
- Single-handed sailing
- Solar sail
- Trailer sailer
- U.S. intercollegiate sailing champions
- US Sailing
- Yacht charter
- Carter, Robert, "Boat remains and maritime trade in the Persian Gulf during the sixth and fifth millennia BC", Antiquity, Volume 80, No. 307, March 2006
- "Transportation and Maps" in Virtual Vault, an online exhibition of Canadian historical art at Library and Archives Canada
- "2.972 How A Sail Boat Sails Into The Wind". Web.mit.edu. Retrieved 2010-06-30.
- "The physics of sailing". Animations.physics.unsw.edu.au. Retrieved 2010-06-30.
- "how a sail works @". Sailtheory.com. Retrieved 2010-06-30.
- "WSSR Newsletter No 177. Hydroptere World Records. 23/09/09". Sailspeedrecords.com. 2009-09-04. Retrieved 2010-06-30.
- "l'Hydroptère". Hydroptere.com. Retrieved 2010-06-30.
- See "How fast do these things really go?" in the "FAQ published by the Four Lakes Ice Yacht Club".
- How do sailboats sail against the wind? Faster than the wind? http://PhysicsForArchitects.com/Sailing_against_the_wind.php
- "OZ PD Racer - Measuring Leeway and Tacking Angle - Michael Storer Boat Design". Storerboatplans.com. Retrieved 2010-06-30.
- Spinnakers are large sails that serve to increase the sail area for more performance downwind.
- Garrett, Ross (1996). The Symmetry of Sailing. Dobbs Ferry: Sheridan House. pp. 97–99. ISBN 1-57409-000-3.
- Each leg at 45 degrees to the true wind is 0.71 miles, but in reality is longer as total tacking angles greater than 90° are the norm and leeway can be significant
- http://www.sailing.org/documents/racing-rules.php, "51 MOVABLE BALLAST: All movable ballast, including sails that are not set, shall be properly stowed. Water, dead weight or ballast shall not be moved for the purpose of changing trim or stability. Floorboards, bulkheads, doors, stairs and water tanks shall be left in place and all cabin fixtures kept on board. However, bilge water may be bailed out."
- "SkySails - Home". Skysails.com. Retrieved 2010-06-30.
- Timothy Lesle (December 10, 2006). "Sailing an Oil Tanker". The New York Times. Retrieved 2010-06-30.
- US Patent 6925949, Elevated sailing apparatus, by Malcolm Phillips, filed Dec 31, 2002.
- "c class catamarans". Sailmagazine.com. Retrieved 2010-06-30.
- "Windmill Sailboat: Sailing Against the Wind". TreeHugger. Retrieved 2010-06-30.
- Cort, Adam (April 5, 2010). "Running Faster than the Wind". sailmagazine.com. Retrieved April 6, 2010.
- "Ride Like the Wind (only faster)". Retrieved April 6, 2010.
- Tom Lochhass. "Basic Sailing Knots". New York Times Company. Retrieved 9 July 2012.
- Jinks, Simon (2007). RYA Sail Cruising and Yachtmaster Scheme: Syllabus and logbook. Eastleigh, Hampshire: Royal Yachting Association. p. 10. ISBN 978-1-905104-98.
- Competent Crew: Practical Course Notes. Eastleigh, Hampshire: Royal Yachting Association. 1990. pp. 32–43. ISBN 0 901501 35 2.
- Pearson, Malcolm (2007). Reeds Skipper's Handbook. Adlard Coles Nautical. p. 95. ISBN 978-0-7136-8338-7.
- Sails set for a breeze coming from the left hand side of the boat
- Sails set for a breeze coming from the right side of the boat
- Pearson, Malcolm (2007). Reeds Skipper's Handbook. Adlard Coles Nautical. p. 115. ISBN 978-0-7136-8338-7.
- "Transportation and Maps" in Virtual Vault, an online exhibition of Canadian historical art at Library and Archives Canada
- Rousmaniere, John, The Annapolis Book of Seamanship, Simon & Schuster, 1999
- Chapman Book of Piloting (various contributors), Hearst Corporation, 1999
- Herreshoff, Halsey (consulting editor), The Sailor’s Handbook, Little Brown and Company, 1983
- Seidman, David, The Complete Sailor, International Marine, 1995
- Jobson, Gary, Sailing Fundamentals, Simon & Schuster, 1987
- American Sailing Association
- US Sailing
- Sailing at the Open Directory Project
- The physics of sailing (School of Physics, University of New South Wales, Sydney, Australia)
- Cruising on small craft travel guide from Wikivoyage | http://en.wikipedia.org/wiki/Sailing | 13 |
54 | Euclid's Parallel Postulate
in Ancient Greece and in Medieval Islam
Michelle Eder History of Mathematics Rutgers, Spring 2000
Throughout the course of history there have been many remarkable advances, both intellectual and physical, which have changed our conceptual framework. One area in which this is apparent is Mathematics. In some cases mathematicians have spent years of their lives trying to solve a single problem. Among them are Euclid, Proclus, John Wallis, N.I. Lobachevsky and Abu' Ali Ibn al-Haytham, who will be considered here in connection with the history of Euclid's parallel postulate.
Little or nothing is reliably known about Euclid's life. It is believed that he lived in Alexandria around 300 B.C. (Varadarajan, page 3). Some say that he was the most successful textbook writer the world has ever known, whose manuscripts dominate the teaching of the subject (Smith, page 103). In the writing of his Elements, Euclid "successfully incorporated all the essential parts of the accumulated mathematical knowledge of his time" (Sarton, page 104). And although he was not the first of Greek mathematicians to consolidate the materials of geometry into a text, he did so so "perfectly" that it came to replace the works of his predecessors (Morrow, page xxii). Every step to the proofs of his theorems was justified by referring back to a previous definition, axiom, theorem or proof of a theorem. However, though Euclid's Elements became the "tool-box" for Greek mathematics, his Parallel Postulate, postulate V, raises a great deal of controversy within the mathematical field. Euclid's formulation of the parallel postulate was as follows:
(Heath, page 202)
This states: That, if a straight line falling on two straight lines make the interior angles on the same side less than two right angles, the two straight lines, if produced indefinitely, meet on that side on which are the angles less than the two right angles. (Heath, page 202). This postulate, one of the most controversial topics in the history of mathematics, is one that geometers have tried to eliminate for more than two thousand years.
Among the first to explore other options to the parallel postulate were the Greeks. The Greek geometers of the 7th to 3rd centuries B.C. helped to enrich the science with new facts and took important steps toward the formulation of a "rigorous logical sequence" (Pogorelov, page 186). They saw the parallel postulate as a theorem involving many difficulties whose proof required a number of definitions and theorems. In comparison to Euclid's other postulates the parallel postulate was complicated and unclear. In addition, some found it difficult to accept on an intuitive basis. Even Euclid himself must have been displeased with it, for he made an effort to prove some of his other propositions without the use of the parallel postulate and only began using it when it became absolutely necessary (Varadarajan, page 5). From his view, "there was no way out but to accept it as a postulate and go ahead." (Sarton, page 39).
In the course of many failed attempts to modify this postulate, mathematicians have tried desperately to find an easier way to deal with the parallel postulate. A person of average intelligence might say that the proposition is evident and needs no proof (Sarton, page 39). However, from a more sophisticated mathematical viewpoint one would realize the need of a proof and attempt to give it (Sarton, page 39). In the attempt to clarify the status of this postulate some mathematicians tried to eliminate it altogether by replacing it with a simpler, more convincing axiom, while others tried to deduce it from other axioms. In these attempts, all these people proved that the fifth postulate is not necessary if one accepts another postulate "rendering the same service"; however, the use of them would seem "artificial" (Sarton, page 40). It is because of this that Euclid, seeing the necessity of the postulate, selected what he apparently found to be the simplest form of it as his fifth postulate (Sarton, page 40).
Among those who attempted a proof of the parallel postulate was Proclus, who lived from 410 to 485 A.D. (Heath, page 29), receiving his training in Alexandria and afterwards in Athens, where he became a "prolific writer" (Smith, page 139). His works, a valuable source of information on the history of Greek geometry, included a commentary on Book I of Euclid's Elements. This commentary may not have been written with the intention of correcting or improving upon Euclid, but there is one instance in which he attempts to alter a "difficulty" he finds in Euclid's Elements (Heath, page 31). This difficulty is what we commonly refer to as the "parallel postulate".
The statement Proclus proves instead of the parallel postulate is: given two straight lines g' and g'' cut by a third, with interior angles a and b on the same side satisfying a + b < 2d (two right angles), prove that g' and g'' meet at a certain point C.
In his proof of this, Proclus draws a straight line g''' through a given point A parallel to g'. Then, taking a point B on g'', he drops a perpendicular from it to g'''. From this he reasons that, since the distance of B from g''' increases without limit as the distance between A and B grows, while the distance between g' and g''' is constant, there must be a point C on g'' belonging also to g'. And it is this point where g' and g'' meet, thus completing his proof. However, as with most of the other alternatives to the parallel postulate, this one had faults. It is observed by Pogorelov that the property of parallel straight lines this proof relies on (that they remain a bounded distance apart) is not explicitly contained in the other postulates or axioms and therefore cannot be deduced from them (Pogorelov, page 188).
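In Euclidean coordinates the statement Proclus wants is easy to verify numerically. The sketch below is illustrative only, and of course itself rests on Euclidean geometry, which is exactly what is at issue; it simply finds the meeting point C of g' and g'' when the interior angles sum to less than two right angles:

```python
import math

def meeting_point(a_deg, b_deg, width=1.0):
    """g' leaves P = (0, 0) and g'' leaves Q = (width, 0); the transversal is the
    x-axis, and a and b are the interior angles on the upper side at P and Q.
    Returns the intersection point C when a + b < 180 degrees."""
    assert a_deg + b_deg < 180, "the lines would be parallel or diverge"
    a = math.radians(a_deg)
    b = math.radians(180 - b_deg)  # direction angle of g'' measured from the +x axis
    # Solve P + t*(cos a, sin a) = Q + s*(cos b, sin b) for the parameter t.
    t = width * math.sin(b) / math.sin(b - a)
    return (t * math.cos(a), t * math.sin(a))

print(meeting_point(80, 80))  # angles sum to 160°, so the lines meet above the transversal
```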
Another person who attempted a proof of the parallel postulate was John Wallis. Wallis studied at Emmanuel College in Cambridge where he earned both a B.A. and an M.A. in Theology, in 1637 and 1640 respectively (Smith, page 407). Although his degrees were in theology, his "taste" was in the line of physics and mathematics (Smith, page 407). In 1649 he was elected to the Savilian professorship of geometry at Oxford (Smith, page 407). In his interest in Mathematics, Wallis was one of the first to recognize the "significance of the generalization of exponents to include negative and fractional, as well as positive and integral, numbers." (Smith, page 408).
In addition to Wallis' recognition of the significance of exponents, he also attempted a proof of the parallel postulate. However, instead of proving the theorem directly with neutral geometry, he proposed a new axiom. This postulate expressed the idea that one could either magnify or shrink a triangle as much as one likes without distortion. Using this, Wallis proves the parallel postulate as follows.
He begins with two straight lines making, with a third infinite straight line, two interior angles less than two right angles. He then "slides" one angle down the line AF until it reaches a designated position ab, cutting the first line at p. Then, using his first postulate, he claims that the two triangles aCb and ACP are similar, thus showing that AB and CD meet at a point P, and proving the theorem. However, this too had a fault: the original postulate that he based the proof on is logically equivalent to Euclid's fifth postulate (Heath, page 210). Therefore, he had assumed what he was trying to prove, which makes his proof invalid.
In addition to Proclus' and Wallis' proofs, in 1826 another mathematician's replacement of the parallel postulate led to the discovery of Non-Euclidean geometry. This mathematician was N.I. Lobachevsky, a Russian mathematician who lived from 1792 to 1856. In place of the parallel postulate, Lobachevsky assumed that "At least two straight lines not intersecting a given one pass through an outside point." In developing the consequences of this assumption he hoped to find a contradiction in the "Euclidean corollary system". However, in the development of his theory, Lobachevsky instead saw that the system was "non-contradictory". From this he drew the conclusion that there existed a geometry, different from Euclidean, with the fifth postulate not holding. This geometry became known as "Non-Euclidean" geometry (Pogorelov, page 190).
Another group to comment on Euclid's parallel postulate was the mathematicians of medieval Islam. From the ninth to the fifteenth centuries, extensive mathematical activity revived only in the large cosmopolitan cities of Islam. Arabic thinkers cultivated mathematics in at least two ways. The first was by "the preservation and transmission of older knowledge" (Calinger, page 166). And the second was by original contributions to arithmetic, algebra and geometry. In Islam, society became more firmly established and they began to focus their energies more toward educational developments in Mathematics. To them mathematics was closely linked to astronomy, astrology, cosmology, geography, natural philosophy, and optics (Calinger, page 169). From this, Islamic society shifted its interest to Greek thought. The first of the Greek texts to be translated, Euclid's Elements, brought the issues involved in the parallel postulate to the attention of Islamic mathematicians, and they, too, as the Greeks before them, began exploring the possibility of proving this postulate.
One mathematician from this time who contributed to clarification of
parallel postulate was Abu' Ali Ibn al-Haytham,
an Arabic physicist, mathematician, and astronomer.
He begins his proof by first addressing Euclid's definition
of parallel lines:
To continue his proof, al-Haytham needs to show that line CD is equal to EF, and that both are greater than AB. Using what we now refer to as the Side-Angle-Side Theorem, he says that since CA is equal to AE, angle CAB is equal to angle EAB (right angles), and the side AB is common, therefore the triangles CAB and EAB must also be equal. Thus line CB is equal to EB and the two remaining angles must also be equal. al-Haytham continues that angle CBA and angle EBA are equal, and since angles ABD and ABF are equal, therefore the angles CBD and EBF must also be equal. Next, by what we now refer to as the Side-Angle-Side Theorem, he claims that since the angles CDB and EFB are equal and sides DB and BF are equal, therefore the triangles CDB and EFB must be equal. Therefore, CD and EF are also equal. Then, since CD is greater than AB (by assumption) EF must also be greater than AB. Next in al-Haytham's proof, he says to imagine EF moving along FB so that the angle EFB remains a right angle throughout the motion, with EF remaining perpendicular to FB. Then when point F coincides with point B, line EF will be "superposed" onto AB. But he claims that since the magnitude of EF is greater than that of AB, point E will lie outside AB (on the same side with A). Thus at this point EF is equal to HB. Next al-Haytham slides line BH along BD. If in this process point B coincides with point D then BH will be "superposed" on DC (because angles HBF and CDB are equal). Then since BH = EF = CD, al-Haytham claims that H coincides with C. Thus al-Haytham has showed that if EF is put in motion along FD, then the points E and F will coincide with C and D, respectively. Next he notes that if any straight line moves in this way then it's ends will describe a straight line. Thus the point E describes the straight line EHC. al-Haytham concludes that since H does not lie on AB it can not coincide with point a and therefore there must exist a surface bound by two straight lines which he finds to be &q uot;absurd ", therefore proving that CD is neither greater nor less than A. Thus al-Haytham has showed that CD and all other perpendicular lines dropped from AC to BD are equal to AB. (Rosenfield, pages 59-62).
In conclusion, throughout the past 2300 years of mathematical history many mathematicians from all around the world have unsuccessfully been trying to prove Euclid's parallel postulate. Although these attempted proofs did not lead to the desired result, they did play a part in the development of geometry, enriching it with new theorems that were not based on the fifth postulate, as well as leading to the construction of a new geometry, Non-Euclidean geometry, not based on the parallel postulate. | http://www.math.rutgers.edu/~cherlin/History/Papers2000/eder.html | 13 |
131 | 2008/9 Schools Wikipedia Selection. Related subjects: Physics; Space transport
Spacecraft propulsion is any method used to change the velocity of spacecraft and artificial satellites. There are many different methods. Each method has drawbacks and advantages, and spacecraft propulsion is an active area of research. However, most spacecraft today are propelled by exhausting a gas from the back/rear of the vehicle at very high speed through a supersonic de Laval nozzle. This sort of engine is called a rocket engine.
All current spacecraft use chemical rockets ( bipropellant or solid-fuel) for launch, though some (such as the Pegasus rocket and SpaceShipOne) have used air-breathing engines on their first stage. Most satellites have simple reliable chemical thrusters (often monopropellant rockets) or resistojet rockets for orbital station-keeping and some use momentum wheels for attitude control. Soviet bloc satellites have used electric propulsion for decades, and newer Western geo-orbiting spacecraft are starting to use them for north-south stationkeeping. Interplanetary vehicles mostly use chemical rockets as well, although a few have experimentally used ion thrusters (a form of electric propulsion) to great success.
The necessity for propulsion system
Artificial satellites must be launched into orbit, and once there they must be placed in their nominal orbit. Once in the desired orbit, they often need some form of attitude control so that they are correctly pointed with respect to the Earth, the Sun, and possibly some astronomical object of interest. They are also subject to drag from the thin atmosphere, so that to stay in orbit for a long period of time some form of propulsion is occasionally necessary to make small corrections ( orbital stationkeeping). Many satellites need to be moved from one orbit to another from time to time, and this also requires propulsion. When a satellite has exhausted its ability to adjust its orbit, its useful life is over.
Spacecraft designed to travel further also need propulsion methods. They need to be launched out of the Earth's atmosphere just as satellites do. Once there, they need to leave orbit and move around.
For interplanetary travel, a spacecraft must use its engines to leave Earth orbit. Once it has done so, it must somehow make its way to its destination. Current interplanetary spacecraft do this with a series of short-term trajectory adjustments. In between these adjustments, the spacecraft simply falls freely along its orbit. The simplest fuel-efficient means to move from one circular orbit to another is with a Hohmann transfer orbit: the spacecraft begins in a roughly circular orbit around the Sun. A short period of thrust in the direction of motion accelerates or decelerates the spacecraft into an elliptical orbit around the Sun which is tangential to its previous orbit and also to the orbit of its destination. The spacecraft falls freely along this elliptical orbit until it reaches its destination, where another short period of thrust accelerates or decelerates it to match the orbit of its destination. Special methods such as aerobraking are sometimes used for this final orbital adjustment.
Some spacecraft propulsion methods such as solar sails provide very low but inexhaustible thrust; an interplanetary vehicle using one of these methods would follow a rather different trajectory, either constantly thrusting against its direction of motion in order to decrease its distance from the Sun or constantly thrusting along its direction of motion to increase its distance from the Sun.
Spacecraft for interstellar travel also need propulsion methods. No such spacecraft has yet been built, but many designs have been discussed. Since interstellar distances are very great, a tremendous velocity is needed to get a spacecraft to its destination in a reasonable amount of time. Acquiring such a velocity on launch and getting rid of it on arrival will be a formidable challenge for spacecraft designers.
Effectiveness of propulsion systems
When in space, the purpose of a propulsion system is to change the velocity, or v, of a spacecraft. Since this is more difficult for more massive spacecraft, designers generally discuss momentum, mv. The amount of change in momentum is called impulse. So the goal of a propulsion method in space is to create an impulse.
When launching a spacecraft from the Earth, a propulsion method must overcome a higher gravitational pull to provide a net positive acceleration. In orbit, any additional impulse, even very tiny, will result in a change in the orbit path.
The rate of change of velocity is called acceleration, and the rate of change of momentum is called force. To reach a given velocity, one can apply a small acceleration over a long period of time, or one can apply a large acceleration over a short time. Similarly, one can achieve a given impulse with a large force over a short time or a small force over a long time. This means that for maneuvering in space, a propulsion method that produces tiny accelerations but runs for a long time can produce the same impulse as a propulsion method that produces large accelerations for a short time. When launching from a planet, tiny accelerations cannot overcome the planet's gravitational pull and so cannot be used.
The Earth's surface is situated fairly deep in a gravity well and it takes a velocity of 11.2 kilometers/second ( escape velocity) or more to escape from it. As human beings evolved in a gravitational field of 1g (9.8 m/s²), an ideal propulsion system would be one that provides a continuous acceleration of 1g (though human bodies can tolerate much larger accelerations over short periods). The occupants of a rocket or spaceship having such a propulsion system would be free from all the ill effects of free fall, such as nausea, muscular weakness, reduced sense of taste, or leaching of calcium from their bones.
The law of conservation of momentum means that in order for a propulsion method to change the momentum of a space craft it must change the momentum of something else as well. A few designs take advantage of things like magnetic fields or light pressure in order to change the spacecraft's momentum, but in free space the rocket must bring along some mass to accelerate away in order to push itself forward. Such mass is called reaction mass.
In order for a rocket to work, it needs two things: reaction mass and energy. The impulse provided by launching a particle of reaction mass having mass m at velocity v is mv. But this particle has kinetic energy mv²/2, which must come from somewhere. In a conventional solid, liquid, or hybrid rocket, the fuel is burned, providing the energy, and the reaction products are allowed to flow out the back, providing the reaction mass. In an ion thruster, electricity is used to accelerate ions out the back. Here some other source must provide the electrical energy (perhaps a solar panel or a nuclear reactor), while the ions provide the reaction mass.
When discussing the efficiency of a propulsion system, designers often focus on effectively using the reaction mass. Reaction mass must be carried along with the rocket and is irretrievably consumed when used. One way of measuring the amount of impulse that can be obtained from a fixed amount of reaction mass is the specific impulse, the impulse per unit weight-on-Earth (typically designated by Isp). The unit for this value is seconds. Since the weight on Earth of the reaction mass is often unimportant when discussing vehicles in space, specific impulse can also be discussed in terms of impulse per unit mass. This alternate form of specific impulse uses the same units as velocity (e.g. m/s), and in fact it is equal to the effective exhaust velocity of the engine (typically designated ve). Confusingly, both values are sometimes called specific impulse. The two values differ by a factor of gn, the standard acceleration due to gravity 9.80665 m/s² (Ispgn = ve).
A rocket with a high exhaust velocity can achieve the same impulse with less reaction mass. However, the energy required for that impulse is proportional to the square of the exhaust velocity, so that more mass-efficient engines require much more energy, and are typically less energy efficient. This is a problem if the engine is to provide a large amount of thrust. To generate a large amount of impulse per second, it must use a large amount of energy per second. So highly (mass) efficient engines require enormous amounts of energy per second to produce high thrusts. As a result, most high-efficiency engine designs also provide very low thrust.
Delta-v and propellant use
Burning the entire usable propellant of a spacecraft through the engines in a straight line in free space would produce a net velocity change to the vehicle; this number is termed ' delta-v' (Δv).
If the exhaust velocity is constant then the total Δv of a vehicle can be calculated using the rocket equation, where M is the mass of propellant, P is the mass of the payload (including the rocket structure), and ve is the velocity of the rocket exhaust. This is known as the Tsiolkovsky rocket equation:
For historical reasons, as discussed above, ve is sometimes written as
- ve = Ispgo
where Isp is the specific impulse of the rocket, measured in seconds, and go is the gravitational acceleration at sea level.
For a high delta-v mission, the majority of the spacecraft's mass needs to be reaction mass. Since a rocket must carry all of its reaction mass, most of the initially-expended reaction mass goes towards accelerating reaction mass rather than payload. If the rocket has a payload of mass P, the spacecraft needs to change its velocity by Δv, and the rocket engine has exhaust velocity ve, then the mass M of reaction mass which is needed can be calculated using the rocket equation and the formula for Isp:
For Δv much smaller than ve, this equation is roughly linear, and little reaction mass is needed. If Δv is comparable to ve, then there needs to be about twice as much fuel as combined payload and structure (which includes engines, fuel tanks, and so on). Beyond this, the growth is exponential; speeds much higher than the exhaust velocity require very high ratios of fuel mass to payload and structural mass.
For a mission, for example, when launching from or landing on a planet, the effects of gravitational attraction and any atmospheric drag must be overcome by using fuel. It is typical to combine the effects of these and other effects into an effective mission delta-v. For example a launch mission to low Earth orbit requires about 9.3-10 km/s delta-v. These mission delta-vs are typically numerically integrated on a computer.
Power use and propulsive efficiency
Although solar power and nuclear power are virtually unlimited sources of energy, the maximum power they can supply is substantially proportional to the mass of the powerplant. For fixed power, with a large ve which is desirable to save propellant mass, it turns out that the maximum acceleration is inversely proportional to ve. Hence the time to reach a required delta-v is proportional to ve. Thus the latter should not be too large. It might be thought that adding power generation is helpful, however this takes mass away from payload, and ultimately reaches a limit as the payload fraction tends to zero.
For all reaction engines (such as rockets and ion drives) some energy must go into accelerating the reaction mass. Every engine will waste some energy, but even assuming 100% efficiency, to accelerate a particular mass of exhaust the engine will need energy amounting to
which is simply the energy needed to accelerate the exhaust. This energy is not necessarily lost- some of it usually ends up as kinetic energy of the vehicle, and the rest is wasted in residual motion of the exhaust.
Comparing the rocket equation (which shows how much energy ends up in the final vehicle) and the above equation (which shows the total energy required) shows that even with 100% engine efficiency, certainly not all energy supplied ends up in the vehicle - some of it, indeed usually most of it, ends up as kinetic energy of the exhaust.
The exact amount depends on the design of the vehicle, and the mission. However there are some useful fixed points:
- if the Isp is fixed, for a mission delta-v, there is a particular Isp that minimises the overall energy used by the rocket. This comes to an exhaust velocity of about ⅔ of the mission delta-v (see the energy computed from the rocket equation). Drives with a specific impulse that is both high and fixed such as Ion thrusters have exhaust velocities that can be enormously higher than this ideal for many missions.
- if the exhaust velocity can be made to vary so that at each instant it is equal and opposite to the vehicle velocity then the absolute minimum energy usage is achieved. When this is achieved, the exhaust stops in space ^ and has no kinetic energy; and the propulsive efficiency is 100%- all the energy ends up in the vehicle (in principle such a drive would be 100% efficient, in practice there would be thermal losses from within the drive system and residual heat in the exhaust). However in most cases this uses an impractical quantity of propellant, but is a useful theoretical consideration. Another complication is that unless the vehicle is moving initially, it cannot accelerate, as the exhaust velocity is zero at zero speed.
Some drives (such as VASIMR or Electrodeless plasma thruster ) actually can significantly vary their exhaust velocity. This can help reduce propellant usage or improve acceleration at different stages of the flight. However the best energetic performance and acceleration is still obtained when the exhaust velocity is close to the vehicle speed. Proposed ion and plasma drives usually have exhaust velocities enormously higher than that ideal (in the case of VASIMR the lowest quoted speed is around 15000 m/s compared to a mission delta-v from high Earth orbit to Mars of about 4000m/s).
Suppose we want to send a 10,000 kg space probe to Mars. The required Δv from LEO is approximately 3000 m/s, using a Hohmann transfer orbit. (A manned craft would need to take a faster route and use more fuel). For the sake of argument, let us say that the following thrusters may be used:
|Engine|| Effective Exhaust Velocity
|Energy per kg
|minimum power/thrust||Power generator mass/thrust*|
| Solid rocket
||1||100||190,000||95||500 kJ||0.5 kW/N||N/A|
| Bipropellant rocket
||5||500||8,200||103||12.6 MJ||2.5 kW/N||N/A|
|Ion thruster||50||5,000||620||775||1.25 GJ||25 kW/N||25 kg/N|
|Advance electrically powered drive||1,000||100,000||30||15,000||500 GJ||500 kW/N||500 kg/N|
- - assumes a specific power of 1kW
Observe that the more fuel-efficient engines can use far less fuel; its mass is almost negligible (relative to the mass of the payload and the engine itself) for some of the engines. However, note also that these require a large total amount of energy. For Earth launch, engines require a thrust to weight ratio of more than unity. To do this they would have to be supplied with Gigawatts of power — equivalent to a major metropolitan generating station. From the table it can be seen that this is clearly impractical with current power sources.
Instead, a much smaller, less powerful generator may be included which will take much longer to generate the total energy needed. This lower power is only sufficient to accelerate a tiny amount of fuel per second, and would be insufficient for launching from the Earth but in orbit, where there is no friction, over long periods the velocity will be finally achieved. For example. it took the Smart 1 more than a year to reach the Moon, while with a chemical rocket it takes a few days. Because the ion drive needs much less fuel, the total launched mass is usually lower, which typically results in a lower overall cost.
Mission planning frequently involves adjusting and choosing the propulsion system according to the mission delta-v needs, so as to minimise the total cost of the project, including trading off greater or lesser use of fuel and launch costs of the complete vehicle.
Space propulsion methods
Propulsion methods can be classified based on their means of accelerating the reaction mass. There are also some special methods for launches, planetary arrivals, and landings.
Most rocket engines are internal combustion heat engines (although non combusting forms exist). Rocket engines generally produce a high temperature reaction mass, as a hot gas. This is achieved by combusting a solid, liquid or gaseous fuel with an oxidiser within a combustion chamber. The extremely hot gas is then allowed to escape through a high-expansion ratio nozzle. This bell-shaped nozzle is what gives a rocket engine its characteristic shape. The effect of the nozzle is to dramatically accelerate the mass, converting most of the thermal energy into kinetic energy. Exhaust speeds as high as 10 times the speed of sound at sea level are common.
Ion propulsion rockets can heat a plasma or charged gas inside a magnetic bottle and release it via a magnetic nozzle, so that no solid matter need come in contact with the plasma. Of course, the machinery to do this is complex, but research into nuclear fusion has developed methods, some of which have been proposed to be used in propulsion systems, and some have been tested in a lab.
See rocket engine for a listing of various kinds of rocket engines using different heating methods, including chemical, electrical, solar, and nuclear.
Electromagnetic acceleration of reaction mass
Rather than relying on high temperature and fluid dynamics to accelerate the reaction mass to high speeds, there are a variety of methods that use electrostatic or electromagnetic forces to accelerate the reaction mass directly. Usually the reaction mass is a stream of ions. Such an engine very typically uses electric power, first to ionise atoms, and then uses a voltage gradient to accelerate the ions to high exhaust velocities.
For these drives, at the highest exhaust speeds, energetic efficiency and thrust are all inversely proportional to exhaust velocity. Their very high exhaust velocity means they require huge amounts of energy and thus with practical power sources provide low thrust, but use hardly any fuel.
For some missions, particularly reasonably close to the Sun, solar energy may be sufficient, and has very often been used, but for others further out or at higher power, nuclear energy is necessary; engines drawing their power from a nuclear source are called nuclear electric rockets.
With any current source of electrical power, chemical, nuclear or solar, the maximum amount of power that can be generated limits the amount of thrust that can be produced to a small value. Power generation adds significant mass to the spacecraft, and ultimately the weight of the power source limits the performance of the vehicle.
Current nuclear power generators are approximately half the weight of solar panels per watt of energy supplied, at terrestrial distances from the Sun. Chemical power generators are not used due to the far lower total available energy. Beamed power to the spacecraft shows some potential. However, the dissipation of waste heat from any power plant may make any propulsion system requiring a separate power source infeasible for interstellar travel.
Some electromagnetic methods:
- Ion thrusters (accelerate ions first and later neutralize the ion beam with an electron stream emitted from a cathode called a neutralizer)
- Electrostatic ion thruster
- Field Emission Electric Propulsion
- Hall effect thruster
- Colloid thruster
- Plasma thrusters (where both ions and electrons are accelerated simultaneously, no neutralizer is required)
- Magnetoplasmadynamic thruster
- Helicon Double Layer Thruster
- Electrodeless plasma thruster
- Pulsed plasma thruster
- Pulsed inductive thruster
- Variable specific impulse magnetoplasma rocket (VASIMR)
- Mass drivers (for propulsion)
Systems without reaction mass carried within the spacecraft
The law of conservation of momentum states that any engine which uses no reaction mass cannot move the centre of mass of a spaceship (changing orientation, on the other hand, is possible). But space is not empty, especially space inside the Solar System; there are gravitation fields, magnetic fields, solar wind and solar radiation. Various propulsion methods try to take advantage of these. However, since these phenomena are diffuse in nature, corresponding propulsion structures need to be proportionately large.
There are several different space drives that need little or no reaction mass to function. A tether propulsion system employs a long cable with a high tensile strength to change a spacecraft's orbit, such as by interaction with a planet's magnetic field or through momentum exchange with another object. Solar sails rely on radiation pressure from electromagnetic energy, but they require a large collection surface to function effectively. The magnetic sail deflects charged particles from the solar wind with a magnetic field, thereby imparting momentum to the spacecraft. A variant is the mini-magnetospheric plasma propulsion system, which uses a small cloud of plasma held in a magnetic field to deflect the Sun's charged particles.
For changing the orientation of a satellite or other space vehicle, conservation of angular momentum does not pose a similar constraint. Thus many satellites use momentum wheels to control their orientations. These cannot be the only system for controlling satellite orientation, as the angular momentum built up due to torques from external forces such as solar, magnetic, or tidal forces eventually needs to be "bled off" using a secondary system.
Gravitational slingshots can also be used to carry a probe onward to other destinations.
Planetary and atmospheric spacecraft propulsion
High thrust is of vital importance for Earth launch, thrust has to be greater than weight (see also gravity drag). Many of the propulsion methods above give a thrust/weight ratio of much less than 1, and so cannot be used for launch.
All current spacecraft use chemical rocket engines ( bipropellant or solid-fuel) for launch. Other power sources such as nuclear have been proposed, and tested, but safety, environmental and political considerations have so far curtailed their use.
One advantage that spacecraft have in launch is the availability of infrastructure on the ground to assist them. Proposed non-rocket spacelaunch ground-assisted launch mechanisms include:
- Space elevator (a geostationary tether to orbit)
- Launch loop (a very fast rotating loop about 80km tall)
- Space fountain (a very tall building held up by a stream of masses fired from base)
- Orbital ring (a ring around the Earth with spokes hanging down off bearings)
- Hypersonic skyhook (a fast spinning orbital tether)
- Electromagnetic catapult ( railgun, coilgun) (an electric gun)
- Space gun ( Project HARP, ram accelerator) (a chemically powered gun)
- Laser propulsion ( Lightcraft) (rockets powered from ground-based lasers)
Airbreathing engines for orbital launch
Studies generally show that conventional air-breathing engines, such as ramjets or turbojets are basically too heavy (have too low a thrust/weight ratio) to give any significant performance improvement when installed on a launch vehicle itself. However, launch vehicles can be air launched from separate lift vehicles (e.g. B-29, Pegasus Rocket and White Knight) which do use such propulsion systems.
On the other hand, very lightweight or very high speed engines have been proposed that take advantage of the air during ascent:
- SABRE - a lightweight hydrogen fuelled turbojet with precooler
- ATREX - a lightweight hydrogen fuelled turbojet with precooler
- Liquid air cycle engine - a hydrogen fuelled jet engine that liquifies the air before burning it in a rocket engine
- Scramjet - jet engines that use supersonic combustion
Normal rocket launch vehicles fly almost vertically before rolling over at an altitude of some tens of kilometers before burning sideways for orbit; this initial vertical climb wastes propellant but is optimal as it greatly reduces airdrag. Airbreathing engines burn propellant much more efficiently and this would permit a far flatter launch trajectory, the vehicles would typically fly approximately tangentially to the earth surface until leaving the atmosphere then perform a rocket burn to bridge the final delta-v to orbital velocity.
Planetary arrival and landing
When a vehicle is to enter orbit around its destination planet, or when it is to land, it must adjust its velocity. This can be done using all the methods listed above (provided they can generate a high enough thrust), but there are a few methods that can take advantage of planetary atmospheres and/or surfaces.
- Aerobraking allows a spacecraft to reduce the high point of an elliptical orbit by repeated brushes with the atmosphere at the low point of the orbit. This can save a considerable amount of fuel since it takes much less delta-V to enter an elliptical orbit compared to a low circular orbit. Since the braking is done over the course of many orbits, heating is comparatively minor, and a heat shield is not required. This has been done on several Mars missions such as Mars Global Surveyor, Mars Odyssey and Mars Reconnaissance Orbiter, and at least one Venus mission, Magellan.
- Aerocapture is a much more aggressive manoeuver, converting an incoming hyperbolic orbit to an elliptical orbit in one pass. This requires a heat shield and much trickier navigation, since it must be completed in one pass through the atmosphere, and unlike aerobraking no preview of the atmosphere is possible. If the intent is to remain in orbit, then at least one more propulsive maneuver is required after aerocapture—otherwise the low point of the resulting orbit will remain in the atmosphere, resulting in eventual re-entry. Aerocapture has not yet been tried on a planetary mission, but the re-entry skip by Zond 6 and Zond 7 upon lunar return were aerocapture maneuvers, since they turned a hyperbolic orbit into an elliptical orbit. On these missions, since there was no attempt to raise the perigee after the aerocapture, the resulting orbit still intersected the atmosphere, and re-entry occurred at the next perigee.
- Parachutes can land a probe on a planet with an atmosphere, usually after the atmosphere has scrubbed off most of the velocity, using a heat shield.
- Airbags can soften the final landing.
- Lithobraking, or stopping by simply smashing into the target, is usually done by accident. However, it may be done deliberately with the probe expected to survive (see, for example, Deep Space 2). Very sturdy probes and low approach velocities are required.
Proposed spacecraft methods that may violate the laws of physics
In addition, a variety of hypothetical propulsion techniques have been considered that would require entirely new principles of physics to realize and that may not actually be possible. To date, such methods are highly speculative and include:
- Diametric drive
- Pitch drive
- Bias drive
- Disjunction drive
- Alcubierre drive (a form of Warp drive)
- Differential sail
- Wormholes - theoretically possible, but impossible in practice with current technology
- Antigravity - requires the concept of antigravity; theoretically impossible
- Reactionless drives - breaks the law of conservation of momentum; theoretically impossible
- EmDrive - tries to circumvent the law of conservation of momentum; may be theoretically impossible
- A "hyperspace" drive based upon Heim theory
Table of methods and their specific impulse
Below is a summary of some of the more popular, proven technologies, followed by increasingly speculative methods.
Four numbers are shown. The first is the effective exhaust velocity: the equivalent speed that the propellant leaves the vehicle. This is not necessarily the most important characteristic of the propulsion method, thrust and power consumption and other factors can be, however:
- if the delta-v is much more than the exhaust velocity, then exorbitant amounts of fuel are necessary (see the section on calculations, above)
- if it is much more than the delta-v, then, proportionally more energy is needed; if the power is limited, as with solar energy, this means that the journey takes a proportionally longer time
The second and third are the typical amounts of thrust and the typical burn times of the method. Outside a gravitational potential small amounts of thrust applied over a long period will give the same effect as large amounts of thrust over a short period. (This result does not apply when the object is significantly influenced by gravity.)
The fourth is the maximum delta-v this technique can give (without staging). For rocket-like propulsion systems this is a function of mass fraction and exhaust velocity. Mass fraction for rocket-like systems is usually limited by propulsion system weight and tankage weight. For a system to achieve this limit, typically the payload may need to be a negligible percentage of the vehicle, and so the practical limit on some systems can be much lower.
|Method|| Effective Exhaust Velocity
|Firing Duration||Maximum Delta-v (km/s)|
|Propulsion methods in current use|
|Solid rocket||1 - 4||103 - 107||minutes||~ 7|
|Hybrid rocket||1.5 - 4.2||<0.1 - 107||minutes||> 3|
|Monopropellant rocket||1 - 3||0.1 - 100||milliseconds - minutes||~ 3|
|Bipropellant rocket||1 - 4.7||0.1 - 107||minutes||~ 9|
|Tripropellant rocket||2.5 - 5.3||minutes||~ 9|
|Resistojet rocket||2 - 6||10-2 - 10||minutes|
|Arcjet rocket||4 - 16||10-2 - 10||minutes|
|Hall effect thruster (HET)||8 - 50||10-3 - 10||months/years||> 100|
|Electrostatic ion thruster||15 - 80||10-3 - 10||months/years||> 100|
|Field Emission Electric Propulsion (FEEP)||100 - 130||10-6 - 10-3||months/years|
|Pulsed plasma thruster (PPT)||~ 20||~ 0.1||~ 2,000 - ~ 10,000 hours|
|Pulsed inductive thruster (PIT)||50||20||months|
|Nuclear electric rocket||As electric propulsion method used|
|Currently feasible propulsion methods|
|Solar sails||N/A||9 per km²
(at 1 AU)
|Tether propulsion||N/A||1 - 1012||minutes||~ 7|
|Mass drivers (for propulsion)||30 - ?||104 - 108||months|
|Launch loop||N/A||~104||minutes||>> 11|
|Orion Project (Near term nuclear pulse propulsion)||20 - 100||109 - 1012||several days||~30-60|
|Magnetic field oscillating amplified thruster||10 - 130||0,1 - 1||days - months||> 100|
|Variable specific impulse magnetoplasma rocket (VASIMR)||10 - 300||40 - 1,200||days - months||> 100|
|Magnetoplasmadynamic thruster (MPD)||20 - 100||100||weeks|
|Nuclear thermal rocket||9||105||minutes||> ~ 20|
|Solar thermal rocket||7 - 12||1 - 100||weeks||> ~ 20|
|Radioisotope rocket||7 - 8||months|
|Air-augmented rocket||5 - 6||0.1 - 107||seconds-minutes||> 7?|
|Liquid air cycle engine||4.5||1000 - 107||seconds-minutes||?|
|SABRE||30/4.5||0.1 - 107||minutes||9.4|
|Dual mode propulsion rocket|
|Technologies requiring further research|
|Mini-magnetospheric plasma propulsion||200||~1 N/kW||months|
|Nuclear pulse propulsion ( Project Daedalus' drive)||20 - 1,000||109 - 1012||years||~15,000|
|Gas core reactor rocket||10 - 20||10³ - 106|
|Nuclear salt-water rocket||100||10³ - 107||half hour|
|Beam-powered propulsion||As propulsion method powered by beam|
|Nuclear photonic rocket||300,000||10-5 - 1||years-decades|
|Fusion rocket||100 - 1,000|
|Space Elevator||N/A||N/A||Indefinite||> 12|
|Significantly beyond current engineering|
|Antimatter catalyzed nuclear pulse propulsion||200 - 4,000||days-weeks|
|Antimatter rocket||10,000 - 100,000|
|Bussard ramjet||2.2 - 20,000||indefinite||~30,000|
|Gravitoelectromagnetic toroidal launchers||<300,000|
Testing spacecraft propulsion
Spacecraft propulsion systems are often first statically tested on the Earth's surface, within the atmosphere but many systems require a vacuum chamber to test fully. Rockets are usually tested at a rocket engine test facility well away from habitation and other buildings for safety reasons. Ion drives are far less dangerous and require much less stringent safety, usually only a large-ish vacuum chamber is needed.
Famous static test locations can be found at Rocket Ground Test Facilities
Some systems cannot be adequately tested on the ground and test launches may be employed at a Rocket Launch Site. | http://www.pustakalaya.org/wiki/wp/s/Spacecraft_propulsion.htm | 13 |
55 | High School Statistics Curriculum
Below are the skills needed, with links to resources to help with that skill. We also enourage plenty of exercises and book work. Curriculum Home
Important: this is a guide only.
Check with your local education authority to find out their requirements.
High School Statistics | Data
☐ Categorize data as qualitative or quantitative
☐ Evaluate published reports and graphs that are based on data by considering: experimental design, appropriateness of the data analysis, and the soundness of the conclusions
☐ Identify and describe sources of bias and its effect, drawing conclusions from data
☐ Determine whether the data to be analyzed is univariate or bivariate
☐ Determine when collected data or display of data may be biased
☐ Understand the differences among various kinds of studies (e.g., survey, observation, controlled experiment)
☐ Determine factors which may affect the outcome of a survey
☐ Categorize quantitative data as discrete or continuous.
High School Statistics | Combinations
☐ Determine the number of possible events, using counting techniques or the Fundamental Principle of Counting
☐ Determine the number of possible arrangements (permutations) of a list of items
☐ Calculate the number of possible permutations (nPr) of n items taken r at a time
☐ Calculate the number of possible combinations (nCr) of n items taken r at a time
☐ Use permutations, combinations, and the Fundamental Principle of Counting to determine the number of elements in a sample space and a specific subset (event)
☐ Differentiate between situations requiring permutations and those requiring combinations
High School Statistics | Probability
☐ Know the definition of conditional probability and use it to solve for probabilities in finite sample spaces
☐ Determine the number of elements in a sample space and the number of favorable events
☐ Calculate the probability of an event and its complement
☐ Determine empirical probabilities based on specific sample data
☐ Determine, based on calculated probability of a set of events, if:
* some or all are equally likely to occur
* one is more likely to occur than another
* whether or not an event is certain to happen or not to happen
☐ Calculate the probability of:
* a series of independent events
* two mutually exclusive events
* two events that are not mutually exclusive
☐ Calculate theoretical probabilities, including geometric applications
☐ Calculate empirical probabilities
☐ Know and apply the binomial probability formula to events involving the terms exactly, at least, and at most
High School Statistics | Statistics
☐ Find the percentile rank of an item in a data set and identify the point values for first, second, and third quartiles
☐ Identify the relationship between the independent and dependent variables from a scatter plot (positive, negative, or none)
☐ Understand the difference between correlation and causation
☐ Identify variables that might have a correlation but not a causal relationship
☐ Compare and contrast the appropriateness of different measures of central tendency for a given data set
☐ Construct a histogram, cumulative frequency histogram, and a box-and-whisker plot, given a set of data
☐ Understand how the five statistical summary (minimum, maximum, and the three quartiles) is used to construct a box-and-whisker plot
☐ Create a scatter plot of bivariate data
☐ Analyze and interpret a frequency distribution table or histogram, a cumulative frequency distribution table or histogram, or a box-and-whisker plot
☐ Use the normal distribution as an approximation for binomial probabilities
☐ Calculate measures of central tendency with group frequency distributions
☐ Calculate measures of dispersion (range, quartiles, interquartile range, standard deviation, variance) for both samples and populations
☐ Know and apply the characteristics of the normal distribution
☐ Interpret within the linear regression model the value of the correlation coefficient as a measure of the strength of the relationship
☐ Use the Standardized Normal distribution table.
☐ Calculate the mean from a frequency table.
☐ In relation to the Normal Distribution, understand what is meant by the 1 sigma, 2 sigma and 3 sigma limits and how to calculate them.
☐ Understand what is meant by the Standard Normal Distribution; and know how to standardize a Normal Distribution with known mean and standard deviation.
☐ Understand what is meant by an Outlier and how it can affect the values of the mean, median and mode.
☐ Understand that data can be positively or negatively skewed, or have no skew (as in the case of the Normal Distribution).
☐ Know how to construct a grouped frequency distribution, and make decisions on the optimum size of each group. | http://www.mathsisfun.com/links/curriculum-high-school-statistics.html | 13 |
422 | A truss is a structure that acts like a beam but with major components, or members, subjected primarily to axial stresses. The members are arranged in triangular patterns. Ideally, the end of each member at a joint is free to rotate independently of the other members at the joint. If this does not occur, secondary stresses are induced in the members. Also if loads occur other than at panel points, or joints, bending stresses are produced in the members. Though trusses were used by the ancient Romans, the modern truss concept seems to have been originated by Andrea Palladio, a sixteenth century Italian architect. From his time to the present, truss bridges have taken many forms.
SECTION 13 TRUSS BRIDGES* John M. Kulicki, P.E. President and Chief Engineer Joseph E. Prickett, P.E. Senior Associate David H. LeRoy, P.E. Vice President Modjeski and Masters, Inc., Harrisburg, Pennsylvania A truss is a structure that acts like a beam but with major components, or members, subjected primarily to axial stresses. The members are arranged in triangular patterns. Ideally, the end of each member at a joint is free to rotate independently of the other members at the joint. If this does not occur, secondary stresses are induced in the members. Also if loads occur other than at panel points, or joints, bending stresses are produced in the members. Though trusses were used by the ancient Romans, the modern truss concept seems to have been originated by Andrea Palladio, a sixteenth century Italian architect. From his time to the present, truss bridges have taken many forms. Early trusses might be considered variations of an arch. They applied horizontal thrusts at the abutments, as well as vertical reactions, In 1820, Ithiel Town patented a truss that can be considered the forerunner of the modern truss. Under vertical loading, the Town truss exerted only vertical forces at the abutments. But unlike modern trusses, the diagonals, or web systems, were of wood lattice construction and chords were composed of two or more timber planks. In 1830, Colonel Long of the U.S. Corps of Engineers patented a wood truss with a simpler web system. In each panel, the diagonals formed an X. The next major step came in 1840, when William Howe patented a truss in which he used wrought-iron tie rods for vertical web members, with X wood diagonals. This was followed by the patenting in 1844 of the Pratt truss with wrought-iron X diagonals and timber verticals. The Howe and Pratt trusses were the immediate forerunners of numerous iron bridges. In a book published in 1847, Squire Whipple pointed out the logic of using cast iron in compression and wrought iron in tension. He constructed bowstring trusses with cast-iron verticals and wrought-iron X diagonals. *Revised and updated from Sec. 12, ‘‘Truss Bridges,’’ by Jack P. Shedd, in the first edition. 13.1 13.2 SECTION THIRTEEN These trusses were statically indeterminate. Stress analysis was difficult. Latter, simpler web systems were adopted, thus eliminating the need for tedious and exacting design pro- cedures. To eliminate secondary stresses due to rigid joints, early American engineers constructed pin-connected trusses. European engineers primarily used rigid joints. Properly proportioned, the rigid trusses gave satisfactory service and eliminated the possibility of frozen pins, which induce stresses not usually considered in design. Experience indicated that rigid and pin- connected trusses were nearly equal in cost, except for long spans. Hence, modern design favors rigid joints. Many early truss designs were entirely functional, with little consideration given to ap- pearance. Truss members and other components seemed to lie in all possible directions and to have a variety of sizes, thus giving the impression of complete disorder. Yet, appearance of a bridge often can be improved with very little increase in construction cost. By the 1970s, many speculated that the cable-stayed bridge would entirely supplant the truss, except on railroads. But improved design techniques, including load-factor design, and streamlined detailing have kept the truss viable. For example, some designs utilize Warren trusses without verticals. 
In some cases, sway frames are eliminated and truss-type portals are replaced with beam portals, resulting in an open appearance. Because of the large number of older trusses still in the transportation system, some historical information in this section applies to those older bridges in an evaluation or re- habilitation context. (H. J. Hopkins, ‘‘A Span of Bridges,’’ Praeger Publishers, New York; S. P. Timoshenko, ‘‘History of Strength of Materials,’’ McGraw-Hill Book Company, New York). 13.1 SPECIFICATIONS The design of truss bridges usually follows the specifications of the American Association of State Highway and Transportation Officials (AASHTO) or the Manual of the American Railway Engineering and Maintenance of Way Association (AREMA) (Sec. 10). A transition in AASHTO specifications is currently being made from the ‘‘Standard Specifications for Highway Bridges,’’ Sixteenth Edition, to the ‘‘LRFD Specifications for Highway Bridges,’’ Second Edition. The ‘‘Standard Specification’’ covers service load design of truss bridges, and in addition, the ‘‘Guide Specification for the Strength Design of Truss Bridges,’’ covers extension of the load factor design process permitted for girder bridges in the ‘‘Standard Specifications’’ to truss bridges. Where the ‘‘Guide Specification’’ is silent, applicable pro- visions of the ‘‘Standard Specification’’ apply. To clearly identify which of the three AASHTO specifications are being referred to in this section, the following system will be adopted. If the provision under discussion applies to all the specifications, reference will simply be made to the ‘‘AASHTO Specifications’’. Otherwise, the following notation will be observed: ‘‘AASHTO SLD Specifications’’ refers to the service load provisions of ‘‘Standard Spec- ifications for Highway Bridges’’ ‘‘AASHTO LFD Specifications’’ refers to ‘‘Guide Specification for the Strength Design of Truss Bridges’’ ‘‘AASHTO LRFD Specifications’’ refers to ‘‘LRFD Specifications for Highway Bridges.’’ 13.2 TRUSS COMPONENTS Principal parts of a highway truss bridge are indicated in Fig. 13.1; those of a railroad truss are shown in Fig. 13.2. TRUSS BRIDGES 13.3 FIGURE 13.1 Cross section shows principal parts of a deck-truss highway bridge. Joints are intersections of truss members. Joints along upper and lower chords often are referred to as panel points. To minimize bending stresses in truss members, live loads gen- erally are transmitted through floor framing to the panel points of either chord in older, shorter-span trusses. Bending stresses in members due to their own weight was often ignored in the past. In modern trusses, bending due to the weight of the members should be consid- ered. Chords are top and bottom members that act like the flanges of a beam. They resist the tensile and compressive forces induced by bending. In a constant-depth truss, chords are essentially parallel. They may, however, range in profile from nearly horizontal in a mod- erately variable-depth truss to nearly parabolic in a bowstring truss. Variable depth often improves economy by reducing stresses where chords are more highly loaded, around mid- span in simple-span trusses and in the vicinity of the supports in continuous trusses. Web members consist of diagonals and also often of verticals. Where the chords are essentially parallel, diagonals provide the required shear capacity. Verticals carry shear, pro- vide additional panel points for introduction of loads, and reduce the span of the chords under dead-load bending. 
When subjected to compression, verticals often are called posts, and when subjected to tension, hangers. Usually, deck loads are transmitted to the trusses through end connections of floorbeams to the verticals. Counters, which are found on many older truss bridges still in service, are a pair of diagonals placed in a truss panel, in the form of an X, where a single diagonal would be 13.4 SECTION THIRTEEN FIGURE 13.2 Cross section shows principal parts of a through-truss railway bridge. TRUSS BRIDGES 13.5 subjected to stress reversals. Counters were common in the past in short-span trusses. Such short-span trusses are no longer economical and have been virtually totally supplanted by beam and girder spans. X pairs are still used in lateral trusses, sway frames and portals, but are seldom designed to act as true counters, on the assumption that only one counter acts at a time and carries the maximum panel shear in tension. This implies that the companion counter takes little load because it buckles. In modern design, counters are seldom used in the primary trusses. Even in lateral trusses, sway frames, and portals, X-shaped trusses are usually comprised of rigid members, that is, members that will not buckle. If adjustable counters are used, only one may be placed in each truss panel, and it should have open turnbuckles. AASHTO LRFD specifies that counters should be avoided. The commentary to that provision contains reference to the historical initial force requirement of 10 kips. Design of such members by AASHTO SLD or LFD Specifications should include an allowance of 10 kips for initial stress. Sleeve nuts and loop bars should not be used. End posts are compression members at supports of simple-span tusses. Wherever prac- tical, trusses should have inclined end posts. Laterally unsupported hip joints should not be used. Working lines are straight lines between intersections of truss members. To avoid bending stresses due to eccentricity, the gravity axes of truss members should lie on working lines. Some eccentricity may be permitted, however, to counteract dead-load bending stresses. Furthermore, at joints, gravity axes should intersect at a point. If an eccentric connection is unavoidable, the additional bending caused by the eccentricity should be included in the design of the members utilizing appropriate interaction equations. AASHTO Specifications require that members be symmetrical about the central plane of a truss. They should be proportioned so that the gravity axis of each section lies as nearly as practicable in its center. Connections may be made with welds or high-strength bolts. AREMA practice, however, excludes field welding, except for minor connections that do not support live load. The deck is the structural element providing direct support for vehicular loads. Where the deck is located near the bottom chords (through spans), it should be supported by only two trusses. Floorbeams should be set normal or transverse to the direction of traffic. They and their connections should be designed to transmit the deck loads to the trusses. Stringers are longitudinal beams, set parallel to the direction of traffic. They are used to transmit the deck loads to the floorbeams. If stringers are not used, the deck must be designed to transmit vehicular loads to the floorbeams. Lateral bracing should extend between top chords and between bottom chords of the two trusses. 
This bracing normally consists of trusses placed in the planes of the chords to provide stability and lateral resistance to wind. Trusses should be spaced sufficiently far apart to preclude overturning by design lateral forces. Sway bracing may be inserted between truss verticals to provide lateral resistance in vertical planes. Where the deck is located near the bottom chords, such bracing, placed between truss tops, must be kept shallow enough to provide adequate clearance for passage of traffic below it. Where the deck is located near the top chords, sway bracing should extend in full-depth of the trusses. Portal bracing is sway bracing placed in the plane of end posts. In addition to serving the normal function of sway bracing, portal bracing also transmits loads in the top lateral bracing to the end posts (Art. 13.6). Skewed bridges are structures supported on piers that are not perpendicular to the planes of the trusses. The skew angle is the angle between the transverse centerline of bearings and a line perpendicular to the longitudinal centerline of the bridge. 13.3 TYPES OF TRUSSES Figure 13.3 shows some of the common trusses used for bridges. Pratt trusses have diag- onals sloping downward toward the center and parallel chords (Fig. 13.3a). Warren trusses, 13.6 SECTION THIRTEEN with parallel chords and alternating diago- nals, are generally, but not always, con- structed with verticals (Fig. 13.3c) to reduce panel size. When rigid joints are used, such trusses are favored because they provide an efficient web system. Most modern bridges are of some type of Warren configuration. Parker trusses (Fig. 13.3d ) resemble Pratt trusses but have variable depth. As in other types of trusses, the chords provide a couple that resists bending moment. With long spans, economy is improved by creating the required couple with less force by spac- ing the chords farther apart. The Parker truss, when simply supported, is designed to have its greatest depth at midspan, where moment is a maximum. For greatest chord economy, the top-chord profile should approximate a parabola. Such a curve, however, provides too great a change in slope of diagonals, with some loss of economy in weights of diago- nals. In practice, therefore, the top-chord profile should be set for the greatest change in truss depth commensurate with reasonable diagonal slopes; for example, between 40 FIGURE 13.3 Types of simple-span truss bridges. and 60 with the horizontal. K trusses (Fig. 13.3e) permit deep trusses with short panels to have diagonals with acceptable slopes. Two diagonals generally are placed in each panel to intersect at midheight of a vertical. Thus, for each diagonal, the slope is half as large as it would be if a single diagonal were used in the panel. The short panels keep down the cost of the floor system. This cost would rise rapidly if panel width were to increase considerably with increase in span. Thus, K trusses may be economical for long spans, for which deep trusses and narrow panels are desirable. These trusses may have constant or variable depth. Bridges also are classified as highway or railroad, depending on the type of loading the bridge is to carry. Because highway loading is much lighter than railroad, highway trusses generally are built of much lighter sections. Usually, highways are wider than railways, thus requiring wider spacing of trusses. Trusses are also classified as to location of deck: deck, through, or half-through trusses. 
Deck trusses locate the deck near the top chord so that vehicles are carried above the chord. Through trusses place the deck near the bottom chord so that vehicles pass between the trusses. Half-through trusses carry the deck so high above the bottom chord that lateral and sway bracing cannot be placed between the top chords. The choice of deck or through construction normally is dictated by the economics of approach construction. The absence of top bracing in half-through trusses calls for special provisions to resist lateral forces. AASHTO Specifications require that truss verticals, floorbeams, and their end connections be proportioned to resist a lateral force of at least 0.30 kip per lin ft, applied at the top chord panel points of each truss. The top chord of a half-through truss should be designed as a column with elastic lateral supports at panel points. The critical buckling force of the column, so determined, should be at least 50% larger than the maximum force induced in any panel of the top chord by dead and live loads plus impact. Thus, the verticals have to be designed as cantilevers, with a concentrated load at top-chord level and rigid connection to a floorbeam. This system offers elastic restraint to buckling of the top chord. The analysis of elastically restrained compression members is covered in T. V. Galambos, ‘‘Guide to Stability Design Criteria for Metal Structures,’’ Structural Stability Research Council. TRUSS BRIDGES 13.7 13.4 BRIDGE LAYOUT Trusses, offering relatively large depth, open-web construction, and members subjected pri- marily to axial stress, provide large carrying capacity for comparatively small amounts of steel. For maximum economy in truss design, the area of metal furnished for members should be varied as often as required by the loads. To accomplish this, designers usually have to specify built-up sections that require considerable fabrication, which tend to offset some of the savings in steel. Truss Spans. Truss bridges are generally comparatively easy to erect, because light equip- ment often can be used. Assembly of mechanically fastened joints in the field is relatively labor-intensive, which may also offset some of the savings in steel. Consequently, trusses seldom can be economical for highway bridges with spans less than about 450 ft. Railroad bridges, however, involve different factors, because of the heavier loading. Trusses generally are economical for railroad bridges with spans greater than 150 ft. The current practical limit for simple-span trusses is about 800 ft for highway bridges and about 750 ft for railroad bridges. Some extension of these limits should be possible with improvements in materials and analysis, but as span requirements increase, cantilever or continuous trusses are more efficient. The North American span record for cantilever con- struction is 1,600 ft for highway bridges and 1,800 ft for railroad bridges. For a bridge with several truss spans, the most economical pier spacing can be determined after preliminary designs have been completed for both substructure and superstructure. One guideline provides that the cost of one pier should equal the cost of one superstructure span, excluding the floor system. In trial calculations, the number of piers initially assumed may be increased or decreased by one, decreasing or increasing the truss spans. Cost of truss spans rises rapidly with increase in span. A few trial calculations should yield a satisfactory picture of the economics of the bridge layout. 
Such an analysis, however, is more suitable for approach spans than for main spans. In most cases, the navigation or hydraulic require- ment is apt to unbalance costs in the direction of increased superstructure cost. Furthermore, girder construction is currently used for span lengths that would have required approach trusses in the past. Panel Dimensions. To start economic studies, it is necessary to arrive at economic pro- portions of trusses so that fair comparisons can be made among alternatives. Panel lengths will be influenced by type of truss being designed. They should permit slope of the diagonals between 40 and 60 with the horizontal for economic design. If panels become too long, the cost of the floor system substantially increases and heavier dead loads are transmitted to the trusses. A subdivided truss becomes more economical under these conditions. For simple-span trusses, experience has shown that a depth-span ratio of 1:5 to 1:8 yields economical designs. Some design specifications limit this ratio, with 1:10 a common histor- ical limit. For continuous trusses with reasonable balance of spans, a depth-span ratio of 1:12 should be satisfactory. Because of the lighter live loads for highways, somewhat shal- lower depths of trusses may be used for highway bridges than for railway bridges. Designers, however, do not have complete freedom in selection of truss depth. Certain physical limitations may dictate the depth to be used. For through-truss highway bridges, for example, it is impractical to provide a depth of less than 24 ft, because of the necessity of including suitable sway frames. Similarly, for through railway trusses, a depth of at least 30 ft is required. The trend toward double-stack cars encourages even greater minimum depths. Once a starting depth and panel spacing have been determined, permutation of primary geometric variables can be studied efficiently by computer-aided design methods. In fact, preliminary studies have been carried out in which every primary truss member is designed 13.8 SECTION THIRTEEN for each choice of depth and panel spacing, resulting in a very accurate choice of those parameters. Bridge Cross Sections. Selection of a proper bridge cross section is an important deter- mination by designers. In spite of the large number of varying cross sections observed in truss bridges, actual selection of a cross section for a given site is not a large task. For instance, if a through highway truss were to be designed, the roadway width would determine the transverse spacing of trusses. The span and consequent economical depth of trusses would determine the floorbeam spacing, because the floorbeams are located at the panel points. Selection of the number of stringers and decisions as to whether to make the stringers simple spans between floorbeams or continuous over the floorbeams, and whether the stringers and floorbeams should be composite with the deck, complete the determination of the cross section. Good design of framing of floor system members requires attention to details. In the past, many points of stress relief were provided in floor systems. Due to corrosion and wear resulting from use of these points of movement, however, experience with them has not always been good. Additionally, the relative movement that tends to occur between the deck and the trusses may lead to out-of-plane bending of floor system members and possible fatigue damage. 
Hence, modern detailing practice strives to eliminate small unconnected gaps between stiffeners and plates, rapid change in stiffness due to excessive flange coping, and other distortion fatigue sites. Ideally, the whole structure is made to act as a unit, thus eliminating distortion fatigue. Deck trusses for highway bridges present a few more variables in selection of cross section. Decisions have to be made regarding the transverse spacing of trusses and whether the top chords of the trusses should provide direct support for the deck. Transverse spacing of the trusses has to be large enough to provide lateral stability for the structure. Narrower truss spacings, however, permit smaller piers, which will help the overall economy of the bridge. Cross sections of railway bridges are similarly determined by physical requirements of the bridge site. Deck trusses are less common for railway bridges because of the extra length of approach grades often needed to reach the elevation of the deck. Also, use of through trusses offers an advantage if open-deck construction is to be used. With through-trusses, only the lower chords are vulnerable to corrosion caused by salt and debris passing through the deck. After preliminary selection of truss type, depth, panel lengths, member sizes, lateral sys- tems, and other bracing, designers should review the appearance of the entire bridge. Es- thetics can often be improved with little economic penalty. 13.5 DECK DESIGN For most truss members, the percentage of total stress attributable to dead load increases as span increases. Because trusses are normally used for long spans, and a sizable portion of the dead load (particularly on highway bridges) comes from the weight of the deck, a light- weight deck is advantageous. It should be no thicker than actually required to support the design loading. In the preliminary study of a truss, consideration should be given to the cost, durability, maintainability, inspectability, and replaceability of various deck systems, including trans- verse, longitudinal, and four-way reinforced concrete decks, orthotropic-plate decks, and concrete-filled or overlaid steel grids. Open-grid deck floors will seldom be acceptable for new fixed truss bridges but may be advantageous in rehabilitation of bridges and for movable bridges. TRUSS BRIDGES 13.9 The design procedure for railroad bridge decks is almost entirely dictated by the proposed cross section. Designers usually have little leeway with the deck, because they are required to use standard railroad deck details wherever possible. Deck design for a highway bridge is somewhat more flexible. Most highway bridges have a reinforced-concrete slab deck, with or without an asphalt wearing surface. Reinforced concrete decks may be transverse, longitudinal or four-way slabs. • Transverse slabs are supported on stringers spaced close enough so that all the bending in the slabs is in a transverse direction. • Longitudinal slabs are carried by floorbeams spaced close enough so that all the bending in the slabs is in a longitudinal direction. Longitudinal concrete slabs are practical for short-span trusses where floorbeam spacing does not exceed about 20 ft. For larger spacing, the slab thickness becomes so large that the resultant dead load leads to an uneconomic truss design. Hence, longitudinal slabs are seldom used for modern trusses. • Four-way slabs are supported directly on longitudinal stringers and transverse floorbeams. Reinforcement is placed in both directions. 
The most economical design has a spacing of stringers about equal to the spacing of floorbeams. This restricts use of this type of floor system to trusses with floorbeam spacing of about 20 ft. As for floor systems with a longitudinal slab, four-way slabs are generally uneconomical for modern bridges. 13.6 LATERAL BRACING, PORTALS, AND SWAY FRAMES Lateral bracing should be designed to resist the following: (1) Lateral forces due to wind pressure on the exposed surface of the truss and on the vertical projection of the live load. (2) Seismic forces, (3) Lateral forces due to centrifugal forces when the track or roadway is curved. (4) For railroad bridges, lateral forces due to the nosing action of locomotives caused by unbalanced conditions in the mechanism and also forces due to the lurching movement of cars against the rails because of the play between wheels and rails. Adequate bracing is one of the most important requirements for a good design. Since the loadings given in design specifications only approximate actual loadings, it follows that refined assumptions are not warranted for calculation of panel loads on lateral trusses. The lateral forces may be applied to the windward truss only and divided between the top and bottom chords according to the area tributary to each. A lateral bracing truss is placed between the top chords or the bottom chords, or both, of a pair of trusses to carry these forces to the ends of the trusses. Besides its use to resist lateral forces, other purposes of lateral bracing are to provide stability, stiffen structures and prevent unwarranted lateral vibration. In deck-truss bridges, however, the floor system is much stiffer than the lateral bracing. Here, the major purpose of lateral bracing is to true-up the bridges and to resist wind load during erection. The portal usually is a sway frame extending between a pair of trusses whose purpose also is to transfer the reactions from a lateral-bracing truss to the end posts of the trusses, and, thus, to the foundation. This action depends on the ability of the frame to resist trans- verse forces. The portal is normally a statically indeterminate frame. Because the design loadings are approximate, an exact analysis is seldom warranted. It is normally satisfactory to make simplifying assumptions. For example, a plane of contraflexure may be assumed halfway between the bottom of the portal knee brace and the bottom of the post. The shear on the plane may be assumed divided equally between the two end posts. Sway frames are placed between trusses, usually in vertical planes, to stiffen the structure (Fig. 13.1 and 13.2). They should extend the full depth of deck trusses and should be made as deep as possible in through trusses. The AASHTO SLD Specifications require sway frames 13.10 SECTION THIRTEEN in every panel. But many bridges are serving successfully with sway frames in every other panel, even lift bridges whose alignment is critical. Some designs even eliminate sway frames entirely. The AASHTO LRFD Specifications makes the use and number of sway frames a matter of design concept as expressed in the analysis of the structural system. Diagonals of sway frames should be proportioned for slenderness ratio as compression members. With an X system of bracing, any shear load may be divided equally between the diagonals. An approximate check of possible loads in the sway frame should be made to ensure that stresses are within allowable limits. 
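The simplifying portal assumptions stated above (a point of contraflexure midway between the bottom of the knee brace and the bottom of the end post, with the shear divided equally between the two end posts) translate into a very short check. The following is a minimal sketch only; the function name and the numbers in the example are illustrative and not taken from any particular bridge.

def portal_post_forces(portal_reaction_kips, base_to_knee_ft):
    """Approximate end-post forces from a top-lateral reaction delivered to the portal.
    Assumes a point of contraflexure halfway between the bottom of the knee brace and
    the bottom of the post, with the portal shear split equally between the two posts."""
    shear_per_post = portal_reaction_kips / 2.0      # kips in each end post
    lever_arm = base_to_knee_ft / 2.0                # ft, contraflexure plane to knee brace
    moment_at_knee = shear_per_post * lever_arm      # kip-ft at the knee-brace level
    moment_at_base = shear_per_post * lever_arm      # kip-ft at the post base (opposite sense)
    return shear_per_post, moment_at_knee, moment_at_base

# Illustrative numbers: 120-kip top-lateral reaction, 20 ft from the post base
# to the bottom of the knee brace.
print(portal_post_forces(120.0, 20.0))    # (60.0 kips, 600.0 kip-ft, 600.0 kip-ft)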
13.7 RESISTANCE TO LONGITUDINAL FORCES Acceleration and braking of vehicular loads, and longitudinal wind, apply longitudinal loads to bridges. In highway bridges, the magnitudes of these forces are generally small enough that the design of main truss members is not affected. In railroad bridges, however, chords that support the floor system might have to be increased in section to resist tractive forces. In all truss bridges, longitudinal forces are of importance in design of truss bearings and piers. In railway bridges, longitudinal forces resulting from accelerating and braking may induce severe bending stresses in the flanges of floorbeams, at right angles to the plane of the web, unless such forces are diverted to the main trusses by traction frames. In single-track bridges, a transverse strut may be provided between the points where the main truss laterals cross the stringers and are connected to them (Fig. 13.4a). In double-track bridges, it may be necessary to add a traction truss (Fig. 13.4b). When the floorbeams in a double-track bridge are so deep that the bottoms of the stringers are a considerable distance above the bottoms of the floorbeams, it may be necessary to raise the plane of the main truss laterals from the bottom of the floorbeams to the bottom of the stringers. If this cannot be done, a complete and separate traction frame may be provided either in the plane of the tops of the stringers or in the plane of their bottom flanges. The forces for which the traction frames are designed are applied along the stringers. The magnitudes of these forces are determined by the number of panels of tractive or braking force that are resisted by the frames. When one frame is designed to provide for several panels, the forces may become large, resulting in uneconomical members and connections. 13.8 TRUSS DESIGN PROCEDURE The following sequence may serve as a guide to the design of truss bridges: • Select span and general proportions of the bridge, including a tentative cross section. • Design the roadway or deck, including stringers and floorbeams. • Design upper and lower lateral systems. • Design portals and sway frames. • Design posts and hangers that carry little stress or loads that can be computed without a complete stress analysis of the entire truss. • Compute preliminary moments, shears, and stresses in the truss members. • Design the upper-chord members, starting with the most heavily stressed member. • Design the lower-chord members. • Design the web members. TRUSS BRIDGES 13.11 FIGURE 13.4 Lateral bracing and traction trusses for resisting longitudinal forces on a truss bridge. • Recalculate the dead load of the truss and compute final moments and stresses in truss members. • Design joints, connections, and details. • Compute dead-load and live-load deflections. • Check secondary stresses in members carrying direct loads and loads due to wind. • Review design for structural integrity, esthetics, erection, and future maintenance and in- spection requirements. 13.8.1 Analysis for Vertical Loads Determination of member forces using conventional analysis based on frictionless joints is often adequate when the following conditions are met: 1. The plane of each truss of a bridge, the planes through the top chords, and the planes through the bottom chords are fully triangulated. 2. The working lines of intersecting truss members meet at a point. 13.12 SECTION THIRTEEN 3. 
Cross frames and other bracing prevent significant distortions of the box shape formed by the planes of the truss described above. 4. Lateral and other bracing members are not cambered; i.e., their lengths are based on the final dead-load position of the truss. 5. Primary members are cambered by making them either short or long by amounts equal to, and opposite in sign to, the axial compression or extension, respectively, resulting from dead-load stress. Camber for trusses can be considered as a correction for dead-load deflection. (If the original design provided excess vertical clearance and the engineers did not object to the sag, then trusses could be constructed without camber. Most people, however, object to sag in bridges.) The cambering of the members results in the truss being out of vertical alignment until all the dead loads are applied to the structure (geo- metric condition). When the preceding conditions are met and are rigorously modeled, three-dimensional computer analysis yields about the same dead-load axial forces in the members as the con- ventional pin-connected analogy and small secondary moments resulting from the self-weight bending of the member. Application of loads other than those constituting the geometric condition, such as live load and wind, will result in sag due to stressing of both primary and secondary members in the truss. Rigorous three-dimensional analysis has shown that virtually all the bracing members participate in live-load stresses. As a result, total stresses in the primary members are reduced below those calculated by the conventional two-dimensional pin-connected truss analogy. Since trusses are usually used on relatively long-span structures, the dead-load stress con- stitutes a very large part of the total stress in many of the truss members. Hence, the savings from use of three-dimensional analysis of the live-load effects will usually be relatively small. This holds particularly for through trusses where the eccentricity of the live load, and, there- fore, forces distributed in the truss by torsion are smaller than for deck trusses. The largest secondary stresses are those due to moments produced in the members by the resistance of the joints to rotation. Thus, the secondary stresses in a pin-connected truss are theoretically less significant than those in a truss with mechanically fastened or welded joints. In practice, however, pinned joints always offer frictional resistance to rotation, even when new. If pin-connected joints freeze because of dirt, or rust, secondary stresses might become higher than those in a truss with rigid connections. Three-dimensional analysis will however, quantify secondary stresses, if joints and framing of members are accurately modeled. If the secondary stress exceeds 4 ksi for tension members or 3 ksi for compression members, both the AASHTO SLD and LFD Specifications require that excess be treated as a primary stress. The AASHTO LRFD Specifications take a different approach including: • A requirement to detail the truss so as to make secondary force effects as small as practical. • A requirement to include the bending caused by member self-weight, as well as moments resulting from eccentricities of joint or working lines. • Relief from including both secondary force effects from joint rotation and floorbeam de- flection if the component being designed is more than ten times as long as it is wide in the plane of bending. 
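The camber rule stated in the conditions above, under which each primary member is fabricated short or long by an amount equal and opposite to its dead-load axial deformation, reduces to the familiar axial relation delta = PL/AE for a prismatic member. The sketch below assumes simple axial behavior and uses hypothetical member data; in practice, camber values come from the dead-load analysis of the complete truss.

E_KSI = 29000.0   # assumed modulus of elasticity of steel, ksi

def fabricated_length(nominal_length_in, dead_load_axial_kips, area_in2):
    """No-load fabrication length of a member cambered for dead load.
    Tension positive: a member stretched by dead load is fabricated short,
    a member shortened by dead load is fabricated long."""
    delta = dead_load_axial_kips * nominal_length_in / (area_in2 * E_KSI)  # axial change, in
    return nominal_length_in - delta

# Hypothetical 40-ft diagonal carrying 900 kips of dead-load tension on 60 in2:
print(fabricated_length(40 * 12, 900.0, 60.0))   # about 479.75 in, i.e. fabricated roughly 1/4 in short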
When the working lines through the centroids of intersecting members do not intersect at the joint, or where sway frames and portals are eliminated for economic or esthetic pur- poses, the state of bending in the truss members, as well as the rigidity of the entire system, should be evaluated by a more rigorous analysis than the conventional. The attachment of floorbeams to truss verticals produces out-of-plane stresses, which should be investigated in highway bridges and must be accounted for in railroad bridges, due to the relatively heavier live load in that type of bridge. An analysis of a frame composed of a floorbeam and all the truss members present in the cross section containing the floor beam is usually adequate to quantify this effect. TRUSS BRIDGES 13.13 Deflection of trusses occurs whenever there are changes in length of the truss members. These changes may be due to strains resulting from loads on the truss, temperature variations, or fabrication effects or errors. Methods of computing deflections are similar in all three cases. Prior to the introduction of computers, calculation of deflections in trusses was a laborious procedure and was usually determined by energy or virtual work methods or by graphical or semigraphical methods, such as the Williot-Mohr diagram. With the widespread availability of matrix structural analysis packages, the calculation of deflections and analysis of indeterminant trusses are speedily executed. (See also Arts. 3.30, 3.31, and 3.34 to 3.39). 13.8.2 Analysis for Wind Loads The areas of trusses exposed to wind normal to their longitudinal axis are computed by multiplying widths of members as seen in elevation by the lengths center to center of inter- sections. The overlapping areas at intersections are assumed to provide enough surplus to allow for the added areas of gussets. The AREMA Manual specifies that for railway bridges this truss area be multiplied by the number of trusses, on the assumption that the wind strikes each truss fully (except where the leeward trusses are shielded by the floor system). The AASHTO Specifications require that the area of the trusses and floor as seen in elevation be multiplied by a wind pressure that accounts for 11⁄2 times this area being loaded by wind. The area of the floor should be taken as that seen in elevation, including stringers, deck, railing, and railing pickets. AREMA specifies that when there is no live load on the structure, the wind pressure should be taken as at least 50 psf, which is equivalent to a wind velocity of about 125 mph. When live load is on the structure, reduced wind pressures are specified for the trusses plus full wind load on the live load: 30 psf on the bridge, which is equivalent to a 97-mph wind, and 300 lb per lin ft on the live load on one track applied 8 ft above the top of rail. AASHTO SLD Specifications require a wind pressure on the structure of 75 psf. Total force, lb per lin ft, in the plane of the windward chords should be taken as at least 300 and in the plane of the leeward chords, at least 150. When live load is on the structure, these wind pressures can be reduced 70% and combined with a wind force of 100 lb per lin ft on the live load applied 6 ft above the roadway. The AASHTO LFD Specifications do not expressly address wind loads, so the SLD Specifications pertain by default. Article 3.8 of the AASHTO LRFD Specifications establish wind loads consistent with the format and presentation currently used in meteorology. 
Wind pressures are related to a base wind velocity, VB, of 100 mph, as common in past specifications. If no better information is available, the wind velocity at 30 ft above the ground, V30, may be taken as equal to the base wind velocity, VB. The height of 30 ft was selected to exclude ground effects in open terrain. Alternatively, the base wind speed may be taken from basic wind speed charts available in the literature, or site-specific wind surveys may be used to establish V30.

At heights above 30 ft, the design wind velocity, VDZ, mph, on a structure at a height Z, ft, may be calculated from characteristic meteorological quantities related to the terrain over which the winds approach, as follows. Select the friction velocity, V0, and friction length, Z0, from Table 13.1. Then calculate the velocity from

VDZ = 2.5 V0 (V30 / VB) ln (Z / Z0)    (13.1)

If V30 is taken equal to the base wind velocity, VB, then V30 / VB is taken as unity. The correction for structure elevation included in Eq. (13.1), which is based on current meteorological data, replaces the 1/7-power rule used in the past.

TABLE 13.1 Basic Wind Parameters

Terrain         V0, mph   Z0, ft
Open country    8.20      0.23
Suburban        10.9      3.28
City            12.0      8.20

For design, Table 13.2 gives the base pressure, PB, ksf, acting on various structural components for a base wind velocity of 100 mph. The design wind pressure, PD, ksf, for the design wind velocity, VDZ, mph, is calculated from

PD = PB (VDZ / VB)^2    (13.2)

TABLE 13.2 Base Pressures, PB, for Base Wind Velocity, VB, of 100 mph

Structural component            Windward load, ksf   Leeward load, ksf
Trusses, columns, and arches    0.050                0.025
Beams                           0.050                NA
Large flat surfaces             0.040                NA

Additionally, minimum design wind pressures, comparable to those in the AASHTO SLD Specifications, are given in the LRFD Specifications. AASHTO Specifications also require that wind pressure be applied to vehicular live load.

Wind Analysis. Wind analysis is typically carried out with the aid of computers with a space truss, and some frame members, as a model. It is helpful, and instructive, to employ a simplified, noncomputer method of analysis to compare with the computer solution to expose major modeling errors that are possible with space models. Such a simplified method is presented in the following.

Idealized Wind-Stress Analysis of a Through Truss with Inclined End Posts. The wind loads computed as indicated above are applied as concentrated loads at the panel points. A through truss with parallel chords may be considered as having reactions to the top lateral bracing system only at the main portals. The effect of intermediate sway frames, therefore, is ignored. The analysis is applied to the bracing and to the truss members. The lateral bracing members in each panel are designed for the maximum shear in the panel resulting from treating the wind load as a moving load; that is, as many panels are loaded as necessary to produce maximum shear in that panel. In design of the top-chord bracing members, the wind load without live load usually governs. The span for top-chord bracing is from hip joint to hip joint. For the bottom-chord members, the reduced wind pressure usually governs because of the considerable additional force that usually results from wind on the live load. For large trusses, wind stress in the trusses should be computed both for the maximum wind pressure without live load and for the reduced wind pressure with live load and full wind on the live load.
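Equations (13.1) and (13.2), with the friction parameters of Table 13.1, translate directly into a short routine. This is a sketch of the computation only, not a substitute for the specification; the example height, terrain, and base pressure are illustrative.

import math

# Friction velocity V0 (mph) and friction length Z0 (ft) from Table 13.1
TERRAIN = {
    "open country": (8.20, 0.23),
    "suburban":     (10.9, 3.28),
    "city":         (12.0, 8.20),
}

def design_wind(z_ft, terrain, pb_ksf, v30_mph=100.0, vb_mph=100.0):
    """Design wind velocity and pressure per Eqs. (13.1) and (13.2):
    VDZ = 2.5*V0*(V30/VB)*ln(Z/Z0) and PD = PB*(VDZ/VB)**2.
    Intended for heights Z above 30 ft."""
    v0, z0 = TERRAIN[terrain]
    vdz = 2.5 * v0 * (v30_mph / vb_mph) * math.log(z_ft / z0)   # mph
    pd = pb_ksf * (vdz / vb_mph) ** 2                           # ksf
    return vdz, pd

# Example: a bracing panel point 100 ft above the ground in open country,
# using the 0.050-ksf windward base pressure for trusses from Table 13.2.
vdz, pd = design_wind(100.0, "open country", 0.050)
print(round(vdz, 1), round(pd, 4))    # about 124.5 mph and 0.0776 ksf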
Because wind on the live load introduces an effect of "transfer," as described later, the following discussion is for the more general case of a truss with the reduced wind pressure on the structure and with wind on the live load applied 8 ft above the top of rail, or 6 ft above the deck. The effect of wind on the trusses may be considered to consist of three additive parts:

• Chord stresses in the fully loaded top and bottom lateral trusses.
• Horizontal component, which is a uniform force of tension in one truss bottom chord and compression in the other bottom chord, resulting from transfer of the top lateral end reactions down the end portals. This may be taken as the top lateral end reaction times the horizontal distance from the hip joint to the point of contraflexure, divided by the spacing between main trusses. It is often conservatively assumed that this point of contraflexure is at the end of the span; thus, the top lateral end reaction is multiplied by the panel length and divided by the spacing between main trusses. Note that this convenient assumption does not apply to the design of the portals themselves.
• Transfer stresses created by the moment of wind on the live load and wind on the floor. This moment is taken about the plane of the bottom lateral system. The wind force on the live load and the wind force on the floor in a panel length are multiplied by the height of application above the bracing plane and divided by the distance center to center of trusses to arrive at a total vertical panel load. This load is applied downward at each panel point of the leeward truss and upward at each panel point of the windward truss. The resulting stresses in the main vertical trusses are then computed.

The total wind stress in any main truss member is arrived at by adding all three effects: chord stresses in the lateral systems, horizontal component, and transfer stresses. Although this discussion applies to a parallel-chord truss, the same method may be applied with only slight error to a truss with a curved top chord by considering the top chord to lie in a horizontal plane between hip joints, as shown in Fig. 13.5 (FIGURE 13.5 Top chord in a horizontal plane approximates a curved top chord). The nature of this error will be described in the following.

Wind Stress Analysis of Curved-Chord Cantilever Truss. The additional effects that should be considered in curved-chord trusses are those of the vertical components of the inclined bracing members. These effects may be illustrated by the behavior of a typical cantilever bridge, several panels of which are shown in Fig. 13.6. As transverse forces are applied to the curved top lateral system, the transverse shear creates stresses in the top lateral bracing members. The longitudinal and vertical components of these bracing stresses create wind stresses in the top chords and other members of the main trusses. The effects of these numerous components of the lateral members may be determined by the following simple method:

• Apply the lateral panel loads to the horizontal projection of the top-chord lateral system and compute all horizontal components of the chord stresses. The stresses in the inclined chords may readily be computed from these horizontal components.
FIGURE 13.6 Wind on a cantilever truss with curved top chord is resisted by the top lateral system. 13.16 SECTION THIRTEEN • Determine at every point of slope change in the top chord all the vertical forces acting on the point from both bracing diagonals and bracing chords. Compute the truss stresses in the vertical main trusses from those forces. • The final truss stresses are the sum of the two contributions above and also of any transfer stress, and of any horizontal component delivered by the portals to the bottom chords. 13.8.3 Computer Determination of Wind Stresses For computer analysis, the structural model is a three-dimensional framework composed of all the load-carrying members. Floorbeams are included if they are part of the bracing system or are essential for the stability of the structural model. All wind-load concentrations are applied to the framework at braced points. Because the wind loads on the floor system and on the live load do not lie in a plane of bracing, these loads must be ‘‘transferred’’ to a plane of bracing. The accompanying vertical required for equilibrium also should be applied to the framework. Inasmuch as significant wind moments are produced in open-framed portal members of the truss, flexural rigidity of the main-truss members in the portal is essential for stability. Unless the other framework members are released for moment, the computer analysis will report small moments in most members of the truss. With cantilever trusses, it is a common practice to analyze the suspended span by itself and then apply the reactions to a second analysis of the anchor and cantilever arms. Some consideration of the rotational stiffness of piers about their vertical axis is warranted for those piers that support bearings that are fixed against longitudinal translation. Such piers will be subjected to a moment resulting from the longitudinal forces induced by lateral loads. If the stiffness (or flexibility) of the piers is not taken into account, the sense and magnitude of chord forces may be incorrectly determined. 13.8.4 Wind-Induced Vibration of Truss Members When a steady wind passes by an obstruction, the pressure gradient along the obstruction causes eddies or vortices to form in the wind stream. These occur at stagnation points located on opposite sides of the obstruction. As a vortex grows, it eventually reaches a size that cannot be tolerated by the wind stream and is torn loose and carried along in the wind stream. The vortex at the opposite stagnation point then grows until it is shed. The result is a pattern of essentially equally spaced (for small distances downwind of the obstruction) and alternating vortices called the ‘‘Vortex Street’’ or ‘‘von Karman Trail.’’ This vortex street is indicative of a pulsating periodic pressure change applied to the obstruction. The frequency of the vortex shedding and, hence, the frequency of the pulsating pressure, is given by VS ƒ (13.3) D where V is the wind speed, fps, D is a characteristic dimension, ft, and S is the Strouhal number, the ratio of velocity of vibration of the obstruction to the wind velocity (Table 13.3). When the obstruction is a member of a truss, self-exciting oscillations of the member in the direction perpendicular to the wind stream may result when the frequency of vortex shedding coincides with a natural frequency of the member. Thus, determination of the torsional frequency and bending frequency in the plane perpendicular to the wind and sub- stitution of those frequencies into Eq. 
(13.3) leads to an estimate of wind speeds at which resonance may occur. Such vibration has led to fatigue cracking of some truss and arch members, particularly cable hangers and I-shaped members. The preceding proposed use of Eq. (13.3) is oriented toward guiding designers in providing sufficient stiffness to reasonably TRUSS BRIDGES 13.17 TABLE 13.3 Strouhal Number for Various Sections* Wind Strouhal Strouhal direction Profile number S Profile number S 0.120 0.200 0.137 0.144 0.145 b/d 2.5 0.060 2.0 0.080 1.5 0.103 1.0 0.133 0.147 0.7 0.136 0.5 0.138 * As given in ‘‘Wind Forces on Structures,’’ Transactions, vol. 126, part II, p. 1180, American Society of Civil Engineers. preclude vibrations. It does not directly compute the amplitude of vibration and, hence, it does not directly lead to determination of vibratory stresses. Solutions for amplitude are available in the literature. See, for example, M. Paz, ‘‘Structural Dynamics Theory and Computation,’’ Van Nostrand Reinhold, New York; R. J. Melosh and H. A. Smith, ‘‘New Formulation for Vibration Analysis,’’ ASCE Journal of Engineering Mechanics, vol. 115, no. 3, March 1989. C. C. Ulstrup, in ‘‘Natural Frequencies of Axially Loaded Bridge Members,’’ ASCE Jour- nal of the Structural Division, 1978, proposed the following approximate formula for esti- mating bending and torsional frequencies for members whose shear center and centroid coincide: 2 2 1/2 a knL KL ƒn 1 p (13.4) 2 I 13.18 SECTION THIRTEEN where ƒn natural frequency of member for each mode corresponding to n 1, 2, 3, . . . knL eigenvalue for each mode (see Table 13.4) K effective length factor (see Table 13.4) L length of the member, in I moment of inertia, in4, of the member cross section a coefficient dependent on the physical properties of the member EIg / A for bending ECwg / Ip for torsion p coefficient dependent on the physical properties of the member P / EI for bending GJA PIp for torsion AECw E Young’s modulus of elasticity, psi G shear modulus of elasticity, psi weight density of member, lb / in3 g gravitational acceleration, in / s2 P axial force (tension is positive), lb A area of member cross section, in2 Cw warping constant J torsion constant Ip polar moment of inertia, in4 In design of a truss member, the frequency of vortex shedding for the section is set equal to the bending and torsional frequency and the resulting equation is solved for the wind speed V. This is the wind speed at which resonance occurs. The design should be such that V exceeds by a reasonable margin the velocity at which the wind is expected to occur uniformly. 13.9 TRUSS MEMBER DETAILS The following shapes for truss members are typically considered: H sections, made with two side segments (composed of angles or plates) with solid web, perforated web, or web of stay plates and lacing. Modern bridges almost exclusively use H sections made of three plates welded together. TABLE 13.4 Eigenvalue kn L and Effective Length Factor K kn L K Support condition n 1 n 2 n 3 n 1 n 2 n 3 2 3 1.000 0.500 0.333 3.927 7.069 10.210 0.700 0.412 0.292 4.730 7.853 10.996 0.500 0.350 0.259 1.875 4.694 7.855 2.000 0.667 0.400 TRUSS BRIDGES 13.19 Channel sections, made with two angle segments, with solid web, perforated web, or web of stay plates and lacing. These are seldom used on modern bridges. Single box sections, made with side channels, beams, angles and plates, or side segments of plates only. The side elements may be connected top and bottom with solid plates, perforated plates, or stay plates and lacing. 
Alternatively, they may be connected at the top with solid cover plates and at the bottom with perforated plates, or stay plates and lacing. Modern bridges use primarily four-plate welded box members. The cover plates are usually solid, except for access holes for bolting joints. Double box sections, made with side channels, beams, angles and plates, or side segments of plates only. The side elements may be connected together with top and bottom per- forated cover plates, or stay plates and lacing. To obtain economy in member design, it is important to vary the area of steel in accord- ance with variations in total loads on the members. The variation in cross section plus the use of appropriate-strength grades of steel permit designers to use essentially the weight of steel actually required for the load on each panel, thus assuring an economical design. With respect to shop fabrication of welded members, the H shape usually is the most economical section. It requires four fillet welds and no expensive edge preparation. Require- ments for elimination of vortex shedding, however, may offset some of the inherent economy of this shape. Box shapes generally offer greater resistance to vibration due to wind, to buckling in compression, and to torsion, but require greater care in selection of welding details. For example, various types of welded cover-plate details for boxes considered in design of the second Greater New Orleans Bridge and reviewed with several fabricators resulted in the observations in Table 13.5. Additional welds placed inside a box member for development of the cover plate within the connection to the gusset plate are classified as AASHTO category E at the termination of the inside welds and should be not be used. For development of the cover plate within the gusset-plate connection, groove welds, large fillet welds, large gusset plates, or a com- bination of the last two should be used. Tension Members. Where practical, these should be arranged so that there will be no bending in the members from eccentricity of the connections. If this is possible, then the total stress can be considered uniform across the entire net area of the member. At a joint, the greatest practical proportion of the member surface area should be connected to the gusset or other splice material. Designers have a choice of a large variety of sections suitable for tension members, although box and H-shaped members are typically used. The choice will be influenced by the proposed type of fabrication and range of areas required for tension members. The design should be adjusted to take full advantage of the selected type. For example, welded plates are economical for tubular or box-shaped members. Structural tubing is available with almost 22 in2 of cross-sectional area and might be advantageous in welded trusses of moderate spans. For longer spans, box-shape members can be shop-fabricated with almost unlimited areas. Tension members for bolted trusses involve additional considerations. For example, only 50% of the unconnected leg of an angle or tee is commonly considered effective, because of the eccentricity of the connection to the gusset plate at each end. To minimize the loss of section for fastener holes and to connect into as large a proportion of the member surface area as practical, it is desirable to use a staggered fastener pattern. In Fig. 13.7, which shows a plate with staggered holes, the net width along Chain 1-1 equals plate width W, minus three hole diameters. 
The net width along Chain 2-2 equals W, minus five hole diameters, plus the quantity S 2 / 4g for each off four gages, where S is the pitch and g the gage. 13.20 SECTION THIRTEEN TABLE 13.5 Various Welded Cover-Plate Designs for Second Greater New Orleans Bridge Conventional detail. Has been used extensively in the past. It may be susceptible to lamellar tearing under lateral or torsional loads. Overlap increases for thicker web plate. Cover plate tends to curve up after welding. Very difficult to hold out-to-out dimension of webs due to thickness tolerance of the web plates. Groove weld is expensive, but easier to develop cover plate within the connection to gusset plate. The detail requires a wide cover plate and tight tolerance of the cover-plate width. With a large overlap, the cover may curve up after welding. Groove weld is expensive, but easier to develop cover plate within the connection to the gusset plate. Same as above, except the fabrication tolerance, which will be better with this detail. FIGURE 13.7 Chains of bolt holes used for determining the net section of a tension member. TRUSS BRIDGES 13.21 Compression Members. These should be arranged to avoid bending in the member from eccentricity of connections. Though the members may contain fastener holes, the gross area may be used in design of such columns, on the assumption that the body of the fastener fills the hole. Welded box and H-shaped members are typically used for compression members in trusses. Compression members should be so designed that the main elements of the section are connected directly to gusset plates, pins, or other members. It is desirable that member components be connected by solid webs. Care should be taken to ensure that the criteria for slenderness ratios, plate buckling, and fastener spacing are satisfied. Posts and Hangers. These are the vertical members in truss bridges. A post in a Warren deck truss delivers the load from the floorbeam to the lower chord. A hanger in a Warren through-truss delivers the floorbeam load to the upper chord. Posts are designed as compression members. The posts in a single-truss span are generally made identical. At joints, overall dimensions of posts have to be compatible with those of the top and bottom chords to make a proper connection at the joint. Hangers are designed as tension members. Although wire ropes or steel rods could be used, they would be objectionable for esthetic reasons. Furthermore, to provide a slenderness ratio small enough to maintain wind vibration within acceptable limits will generally require rope or rod area larger than that needed for strength. Truss-Member Connections. Main truss members should be connected with gusset plates and other splice material, although pinned joints may be used where the size of a bolted joint would be prohibitive. To avoid eccentricity, fasteners connecting each member should be symmetrical about the axis of the member. It is desirable that fasteners develop the full capacity of each element of the member. Thickness of a gusset plate should be adequate for resisting shear, direct stress, and flexure at critical sections where these stresses are maxi- mum. Re-entrant cuts should be avoided; however, curves made for appearance are permis- sible. 13.10 MEMBER AND JOINT DESIGN EXAMPLES—LFD AND SLD Design of a truss member by the AASHTO LFD and SLD Specifications is illustrated in the following examples, The design includes a connection in a Warren truss in which splicing of a truss chord occurs within a joint. 
Some designers prefer to have the chord run contin- uously through the joint and be spliced adjacent to the joint. Satisfactory designs can be produced using either approach. Chords of trusses that do not have a diagonal framing into each joint, such as a Warren truss, are usually continuous through joints with a post or hanger. Thus, many of the chord members are usually two panels long. Because of limitations on plate size and length for shipping, handling, or fabrication, it is sometimes necessary, however, to splice the plates within the length of a member. Where this is necessary, common practice is to offset the splices in the plates so that only one plate is spliced at any cross section. 13.10.1 Load-Factor Design of Truss Chord A chord of a truss is to be designed to withstand a factored compression load of 7,878 kips and a factored tensile load of 1,748 kips. Corresponding service loads are 4,422 kips com- pression and 391 kips tension. The structural steel is to have a specified minimum yield stress of 36 ksi. The member is 46 ft long and the slenderness factor K is to be taken as 13.22 SECTION THIRTEEN unity. A preliminary design yields the cross section shown in Fig. 13.8. The section has the following properties: Ag gross area 281 in2 Igx gross moment of inertia with respect to x axis 97,770 in4 Igy gross moment of inertia with respect to y axis 69,520 in4 w weight per linear foot 0.98 kips Ten 11⁄4-in-dia. bolt holes are provided in each web at the section for the connections at joints. The welds joining the cover plates and webs are minimum size, 3⁄8 in, and are clas- sified as AASHTO fatigue category B. FIGURE 13.8 Cross section of a truss chord with a box section. TRUSS BRIDGES 13.23 Although the AASHTO LFD Specification specifies a load factor for dead load of 1.30, the following computation uses 1.50 to allow for about 15% additional weight due to paint, diaphragms, weld metal and fasteners. Compression in Chord from Factored Loads. The uniform stress on the section is ƒc 7878 / 281 28.04 ksi The radius of gyration with respect to the weak axis is ry Igy / Ag 69,520 / 281 15.73 in and the slenderness ratio with respect to that axis is 2 KL 1 46 12 2 E 35 126 ry 15.73 Fy where E modulus of elasticity of the steel 29,000 ksi. The critical buckling stress in compression is 2 Fy KL Fcr Fy 1 4 2E ry (13.5) 36 36 1 (35)2 34.6 ksi 4 2E The maximum strength of a concentrically loaded column is Pu Agƒcr and ƒcr 0.85Fcr 0.85 34.6 29.42 ksi For computation of the bending strength, the sum of the depth-thickness ratios for the web and cover plates is s 54 36 2.0625 2 2 129.9 t 2.0625 0.875 The area enclosed by the centerlines of the plates is A 54.875(36 2.0625) 1,862 in2 Then, the design bending stress is given by 0.0641Fy SgL (s / t) Fa Fy 1 EA Iy 0.0641 36 3,507 46 12 129.9 36 1 (13.6) 29,000 1862 69,520 35.9 ksi For the dead load of 0.98 kips / ft, the dead-load factor of 1.50, the 46-ft span, and a factor of 1 / 10 for continuity in bending, the dead-load bending moment is 13.24 SECTION THIRTEEN MDL 0.98(46)2 12 1.50 / 10 3733 kip-in The section modulus is Sg Igx / c 97,770 / (54 / 2 0.875) 3507 in3 Hence, the maximum compressive bending stress is ƒb MDL / Sg 3733 / 3507 1.06 ksi The plastic section modulus is Zg 2(33.125 0.875(54 / 2 0.875 / 2) 2 2 2.0625 54 / 2 54 / 4 4598 in4 The ratio of the plastic section modulus to the elastic section modulus is Zg / Sg 4,598 / 3,507 1.31. 
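Before proceeding to the interaction checks, the critical-stress computation in the compression check above can be reproduced in a few lines. Equation (13.5), as reconstructed from the numbers in the example, reads Fcr = Fy [1 - Fy (KL/r)^2 / (4 pi^2 E)], applicable while KL/r does not exceed sqrt(2 pi^2 E / Fy), with the design stress taken as fcr = 0.85 Fcr. The sketch below simply repeats the arithmetic for the chord of Fig. 13.8; it is not a general design routine.

import math

E = 29000.0    # ksi
FY = 36.0      # ksi

def lfd_column_stresses(kl_in, r_in, fy=FY, e=E):
    """Critical buckling stress per Eq. (13.5) and the 0.85 design stress.
    Valid while KL/r <= sqrt(2*pi**2*E/Fy); beyond that limit the Euler
    expression governs (not shown here)."""
    ratio = kl_in / r_in
    limit = math.sqrt(2.0 * math.pi ** 2 * e / fy)
    if ratio > limit:
        raise ValueError("slenderness beyond the inelastic-buckling range")
    fcr = fy * (1.0 - fy * ratio ** 2 / (4.0 * math.pi ** 2 * e))
    return ratio, fcr, 0.85 * fcr

# Chord of the example: KL = 1.0 * 46 ft, ry = 15.73 in
print(lfd_column_stresses(46 * 12, 15.73))   # about (35, 34.6 ksi, 29.4 ksi), matching the text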
For combined axial load and bending, the axial force P and moment M must satisfy the following equations: P MC 1.0 (13.7a) 0.85AgFcr Mu(1 P / AgFe) P M 1.0 (13.8a) 0.85AgFy Mp where Mu maximum strength, kip-in, in bending alone Sgƒa Mp full plastic moment, kip-in, of the section ZFy Z plastic modulus 1.31Sg C equivalent moment factor, taken as 0.85 in this case 2 Fe Euler buckling stress, ksi, with 0.85 factor 0.85E / (KL / rx)2 The effective length factor K is taken equal to unity and the radius of gyration rx with respect to the x axis, the axis of bending, is rx Ig / Ag 97,770 / 281 18.65 in The slenderness ratio KL / rx then is 46 12 / 18.65 29.60. 2 Fe 0.85 29,000 / 29.602 278 ksi For convenience of calculation, Eq. (13.7a) can be rewritten, for P AgFc, 0.85Fcr ƒcr, M Sgƒb, and Mu SgFa, as ƒc ƒb C 1.0 (13.7b) ƒcr Fa 1 P / AgFe Substitution of previously calculated stress values in Eq. (13.7b) yields 28.04 1.06 0.85 0.953 0.028 29.42 35.9 1 7878 / (281 278) 0.981 1.0 Similarly, Eq. (13.8a) can be rewritten as TRUSS BRIDGES 13.25 ƒc ƒb 1.0 (13.8b) 0.85Fy FyZ / Sg Substitution of previously calculated stress values in Eq. (13.8b) yields 28.04 1.06 0.916 0.022 0.938 1.0 0.85 36 36 1.31 The sum of the ratios, 0.981, governs (stability) and is satisfactory. The section is satisfactory for compression. Local Buckling. The AASHTO specifications limit the depth-thickness ratio of the webs to a maximum of d/t 180 / ƒc 180 / 28.04 34.0 The actual d / t is 54 / 2.0625 26.2 34.0—OK Maximum permissible width-thickness ratio for the cover plates is b/t 213.4 / ƒc 213.4 / 28.04 40.3 The actual b / t is 33.125 / 0.875 37.9 40.3—OK Tension in Chord from Factored Loads. The following treatment is based on a composite of AASHTO SLD Specifications for the capacity of tension members, and other aspects from the AASHTO LFD Specifications. This is done because the AASHTO LFD Specifications have not been updated. Clearly, this is not in complete compliance with the AASHTO LFD Specifications. Based on the above, the tensile capacity will be the lesser of the yield strength times the design gross area, or 90% of the tensile strength times the net area. Both areas are defined below. For determinations of the design strength of the section, the effect of the bolt holes must be taken into account by deducting the area of the holes from the gross section area to obtain the net section area. Furthermore, the full gross area should not be used if the holes occupy more than 15% of the gross area. When they do, the excess above 15% of the holes not greater than 1-1⁄4 in in diameter, and all of area of larger holes, should be deducted from the gross area to obtain the design gross area. The holes occupy 10 1.25 12.50 in of web-plate length, and 15% of the 54-in plate is 8.10 in. The excess is 4.40 in. Hence, the net area is An 281 12.50 2.0625 255 in2 and the design gross area, ADG 2 281 2 4.40 2.0625 263 in . The tensile capacity is the lesser of 0.90 255 58 13,311 kips or 263 36 9,468 kips. Thus, the design gross section capacity controls and the tensile capacity is 9,468 kips. For computation of design gross moment of inertia, assume that the excess is due to 4 bolts, located 7 and 14 in on both sides of the neutral axis in bending about the x axis. Equivalent diameter of each hole is 4.40 / 4 1.10 in. 
The deduction from the gross moment of inertia Ig 97,770 in4 then is Id 2 2 1.10 2.0625(72 142) 2220 in4 Hence, the design gross moment of inertia IDG is 97,770 2,220 95,550 in4, and the design gross elastic section modulus is 95,550 SDG 3428 in3 54 / 2 0.875 The stress on the design gross section for the axial tension load of 1,748 kips alone is 13.26 SECTION THIRTEEN ƒt 1748 / 263 6.65 ksi The bending stress due to MDL 3733 kip-in, computed previously, is ƒb 3733 / 3428 1.09 ksi For combined axial tension and bending, the sum of the ratios of required strength to design strength is P M 6.65 1.09 0.208 1—OK Pu Mp 36 36 1.31 The section is satisfactory for tension. Fatigue at Welds. Fatigue is to be investigated for the truss as a nonredundant path structure subjected to 500,000 cycles of loading. The category B welds between web plates and cover plates have an allowable stress range of 23 ksi. Maximum service loads on the chord are 391 kips tension and 4,422 kips compression. The stress range then is 391 ( 4,422) ƒsr 17.1 ksi 23 ksi 281 The section is satisfactory for fatigue. 13.10.2 Service-Load Design of Truss Chord The truss chord designed in Art. 13.10.1 by load-factor design and with the cross section shown in Fig. 13.8 is designed for service loads in the following, for illustrative purposes. Properties of the section are given in Art 13.10.1. Compression in Chord for Service Loads. The uniform stress in the section for the 4,422- kip load on the gross area Ag 281 in2 is ƒc 4422 / 281 15.74 ksi The AASHTO standard specifications give the following formula for the allowable axial stress for Fy 36 ksi: Fa 16.98 0.00053(KL / ry)2 (13.9) For the slenderness ratio KL / ry 35, determined in Art. 13.10.1, the allowable stress then is Fa 16.98 0.00053(35)2 16.33 ksi 15.74 ksi—OK The allowable bending stress is ƒb 20 ksi. Due to the 0.98 kips / ft weight of the 46-ft- long chord, the dead-load bending moment with a continuity factor of 1⁄10 is MDL 0.98(46)2 12 / 10 2488 kip-in For the section modulus Sgx 97,770 / 27.875 3507 in3, the dead-load bending stress is ƒb 2488 / 3507 0.709 ksi For combined bending and compression, AASHTO specifications require that the follow- ing interaction formula be satisfied: TRUSS BRIDGES 13.27 ƒc ƒb Cm (13.10) Fa Fb 1 ƒc / Fe The coefficient Cm is taken as 0.85 for the condition of transverse loading on a compression member with joint translation prevented. For bending about the x axis, with a slenderness ratio of KL / rx 29.60, as determined in Art. 13.10.1, the Euler buckling stress with a 2.12 safety factor is 2 2 E 29,000 Fe 154 ksi 2.12(KL / rx)2 2.12(29.60)2 Substitution of the preceding stresses in Eq. (13.10) yields 15.74 0.709 0.85 0.964 0.034 0.998 1—OK 16.33 20 1 15.74 / 154 The section is satisfactory for compression. Tension in Chord from Service Loads. The section shown in Fig. 13.8 has to withstand a tension load of 391 kips on the net area of 263 in2 computed in Art. 13.10.1. It was deter- mined in Art. 13.10.1 that the capacity was controlled by the design gross section, and while SLD allowable stresses are 0.50 Fu on the net section and 0.55 Fy on the design gross section, the same conclusion is reached here. The allowable tensile stress Ft is 20 ksi. The uniform tension stress on the design gross section is ƒt 391 / 263 1.49 ksi As computed in Art. 13.10.1, the moment of inertia of the design gross section is 95,550 in4 and the corresponding section modulus in Sn 3,428 in3. 
Also, as computed previously for compression in the chord, the dead-load bending moment MDL 2,488 kip-in. Hence, the maximum bending stress is ƒb 2488 / 3428 0.726 ksi The allowable bending stress Fb is 20 ksi. For combined axial tension and bending, the sum of the ratios of actual stress to allowable stress is ƒt ƒb 1.49 0.726 0.075 0.036 0.111 1—OK Ft Fb 20 20 The section is satisfactory for tension. Fatigue Design. See Art. 13.10.1. 13.11 MEMBER DESIGN EXAMPLE—LRFD The design of a truss hanger by the AASHTO LRFD Specifications is presented subse- quently. This is preceded by the following introduction to the LRFD member design pro- visions. 13.28 SECTION THIRTEEN 13.11.1 LRFD Member Design Provisions Tension Members. The net area, An, of a member is the sum of the products of thickness and the smallest net width of each element. The width of each standard bolt hole is taken as the nominal diameter of the bolt plus 0.125 in. The width deducted for oversize and slotted holes, where permitted in AASHTO LRFD Art. 18.104.22.168.1, is taken as 0.125 in greater than the hole size specified in AASHTO LRFD Art. 22.214.171.124.2. The net width is determined for each chain of holes extending across the member along any transverse, diagonal, or zigzag line, as discussed in Art. 13.9. In designing a tension member, it is conservative and convenient to use the least net width for any chain together with the full tensile force in the member. It is sometimes possible to achieve an acceptable, but slightly less conservative design, by checking each possible chain with a tensile force obtained by subtracting the force removed by each bolt ahead of that chain (bolt closer to midlength of the member), from the full tensile force in the member. This approach assumes that the full force is transferred equally by all bolts at one end. Members and splices subjected to axial tension must be investigated for two conditions: yielding on the gross section (Eq. 13.11), and fracture on the net section (Eq. 13.12). De- termination of the net section requires consideration of the following: • The gross area from which deductions will be made, or reduction factors applied, as appropriate • Deductions for all holes in the design cross-section • Correction of the bolt hole deductions for the stagger rule • Application of a reduction factor U, to account for shear lag • Application of an 85% maximum area efficiency factor for splice plates and other splicing elements The factored tensile resistance, Pr, is the lesser of the values given by Eqs. 13.11 and 13.12. Pr y Pny y Fy Ag (13.11) Pr u Pnu Fu AnU y (13.12) where Pny nominal tensile resistance for yielding in gross section (kip) Fy yield strength (ksi) Ag gross cross-sectional area of the member (in2) Pnu nominal tensile resistance for fracture in net section (kip) Fu tensile strength (ksi) An net area of the member as described above (in2) U reduction factor to account for shear lag; 1.0 for components in which force effects are transmitted to all elements; as described below for other cases y resistance factor for yielding of tension members, 0.95 u resistance factor for fracture of tension members, 0.80 The reduction factor, U, does not apply when checking yielding on the gross section because yielding tends to equalize the non-uniform tensile stresses over the cross section caused by shear lag. 
Unless a more refined analysis or physical tests are utilized to determine shear lag effects, the reduction factors specified in the AASHTO LRFD Specifications may be used to account for shear lag in connections as explained in the following. The reduction factor, U, for sections subjected to a tension load transmitted directly to each of the cross-sectional elements by bolts or welds may be taken as: TRUSS BRIDGES 13.29 U 1.0 (13.13) For bolted connections, the following three values of U may be used depending on the details of the connection: For rolled I-shapes with flange widths not less than two-thirds the depth, and structural tees cut from these shapes, provided the connection is to the flanges and has no fewer than three fasteners per line in the direction of stress, U 0.90 (13.14a) For all other members having no fewer than three fasteners per line in the direction of stress, U 0.85 (13.14b) For all members having only two fasteners per line in the direction of stress, U 0.75 (13.14c) Due to strain hardening, a ductile steel loaded in axial tension can resist a force greater than the product of its gross area and its yield strength prior to fracture. However, excessive elongation due to uncontrolled yielding of gross area not only marks the limit of usefulness, it can precipitate failure of the structural system of which it is a part. Depending on the ratio of net area to gross area and the mechanical properties of the steel, the component can fracture by failure of the net area at a load smaller than that required to yield the gross area. General yielding of the gross area and fracture of the net area both constitute measures of component strength. The relative values of the resistance factors for yielding and fracture reflect the different reliability indices deemed proper for the two modes. The part of the component occupied by the net area at fastener holes generally has a negligible length relative to the total length of the member. As a result, the strain hardening is quickly reached and, therefore, yielding of the net area at fastener holes does not constitute a strength limit of practical significance, except, perhaps, for some built-up members of unusual proportions. For welded connections, An is the gross section less any access holes in the connection region. Compression Members. Bridge members in axial compression are generally proportioned with width / thickness ratios such that the yield point can be reached before the onset of local buckling. For such members, the nominal compressive resistance, Pn, is taken as: If 2.25, then Pn 0.66 Fy As (13.15) 0.88Fy As If 2.25, then Pn (13.16) for which: 2 Kl Fy (13.17) rs E where As gross cross-sectional area (in2) Fy yield strength (ksi) E modulus of elasticity (ksi) K effective length factor l unbraced length (in) rs radius of gyration about the plane of buckling (in) 13.30 SECTION THIRTEEN To avoid premature local buckling, the width-to-thickness ratios of plate elements for compression members must satisfy the following relationship: b E k (13.18) t Fy where k plate buckling coefficient, b plate width (in), and t thickness (in). See Table 13.6 for values for k and descriptions of b. TABLE 13.6 Values of k for Calculating Limiting Width-Thickness Ratios Element Coefficient, k Width, b a. Plates supported along one edge Flanges and projecting legs or plates 0.56 Half-flange width of I-sections. Full-flange width of channels. Distance between free edge and first line of bolts or weld in plates. 
Full-width of an outstanding leg for pairs of angles in continuous contact. Stems of rolled tees 0.75 Full-depth of tee. Other projecting elements 0.45 Full-width of outstanding leg for single angle strut or double angle strut with separator. Full projecting width for others b. Plates supported along two edges Box flanges and cover plates 1.40 Clear distance between webs minus inside corner radius on each side for box flanges. Distance between lines of welds or bolts for flange cover plates. Webs and other plate elements 1.49 Clear distance between flanges minus fillet radii for webs of rolled beams. Clear distance between edge supports for all others. Perforated cover plates 1.86 Clear distance between edge supports. Source: Adapted from AASHTO LRFD Bridge Design Specification, American Association of State Highway and Transporation Officials, 444 North Capital St., N.W., Ste. 249, Washington, DC 20001. TRUSS BRIDGES 13.31 Members Under Tension and Flexure. A component subjected to tension and flexure must satisfy the following interaction equations: Pu If 0.2, then Pr (13.19) Pu Mux Muy 1.0 2.0Pr Mrx Mry Pu If 0.2, then Pr (13.20) Pu 8.0 Mux Muy 1.0 Pr 9.0 Mrx Mry where Pr factored tensile resistance (kip) Mrx, Mry factored flexural resistances about the x and y axes, respectively (k-in) Mux, Muy moments about x and y axes, respectively, resulting from factored loads (k-in) Pu axial force effect resulting from factored loads (kip) Interaction equations in tension and compression members are a design simplification. Such equations involving exponents of 1.0 on the moment ratios are usually conservative. More exact, nonlinear interaction curves are also available and are discussed in the literature. If these interaction equations are used, additional investigation of service limit state stresses is necessary to avoid premature yielding. A flange or other component subjected to a net compressive stress due to tension and flexure should also be investigated for local buckling. Members Under Compression and Flexure. For a component subjected to compression and flexure, the axial compressive load, Pu, and the moments, Mux and Muy, are determined for concurrent factored loadings by elastic analytical procedures. The following relationships must be satisfied: Pu If 0.2, then Pr (13.21) Pu Mux Muy 1.0 2.0Pr Mrx Mry Pu If 0.2, then Pr (13.22) Pu 8.0 Mux Muy 1.0 Pr 9.0 Mrx Mry where Pr factored compressive resistance, Pn (kip) Mrx factored flexural resistance about the x axis (k-in) Mry factored flexural resistance about the y axis (k-in) Mux factored flexural moment about the x axis calculated as specified below (k-in) Muy factored flexural moment about the y axis calculated as specified below (k-in) resistance factor for compression members The moments about the axes of symmetry, Mux and Muy, may be determined by either (1) a second order elastic analysis that accounts for the magnification of moment caused by the factored axial load, or (2) the approximate single step adjustment specified in AASHTO LRFD Art. 126.96.36.199.2b. 
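Returning to the compression provisions above: Eqs. 13.15–13.17 arrive garbled in this copy, and the sketch below assumes the reading λ = (Kl/rsπ)²(Fy/E), with Pn = 0.66^λ FyAs for λ ≤ 2.25 and Pn = 0.88FyAs/λ otherwise — a reconstruction from the variable list, so treat it as an assumption. The interaction helper follows Eqs. 13.19–13.22 as given, and the member values in the demo calls are hypothetical.

```python
import math

def nominal_compressive_resistance(Fy, E, As, K, l, rs):
    """Pn per Eqs. 13.15-13.17 as reconstructed; lambda = (K*l/(rs*pi))^2 * Fy/E."""
    lam = (K * l / (rs * math.pi)) ** 2 * (Fy / E)   # Eq. 13.17
    if lam <= 2.25:
        return 0.66 ** lam * Fy * As                 # Eq. 13.15 (inelastic range)
    return 0.88 * Fy * As / lam                      # Eq. 13.16 (elastic range)

def plate_slenderness_ok(b, t, k, E, Fy):
    """Local buckling limit of Eq. 13.18: b/t <= k * sqrt(E/Fy)."""
    return b / t <= k * math.sqrt(E / Fy)

def interaction_ratio(Pu, Pr, Mux, Mrx, Muy, Mry):
    """Axial-flexure interaction of Eqs. 13.19-13.22 (same form for tension and compression)."""
    if Pu / Pr < 0.2:
        return Pu / (2.0 * Pr) + Mux / Mrx + Muy / Mry
    return Pu / Pr + (8.0 / 9.0) * (Mux / Mrx + Muy / Mry)

# Hypothetical member in ksi / in units; k = 1.49 applies to a web plate per Table 13.6.
Pn = nominal_compressive_resistance(Fy=50.0, E=29000.0, As=40.0, K=0.875, l=300.0, rs=4.0)
print(round(Pn), plate_slenderness_ok(b=30.0, t=0.875, k=1.49, E=29000.0, Fy=50.0))
print(round(interaction_ratio(Pu=0.6 * Pn, Pr=0.9 * Pn, Mux=0.0, Mrx=1.0, Muy=120.0, Mry=400.0), 3))
```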
13.32 SECTION THIRTEEN TABLE 13.7 Unfactored Design Loads Axial Bending Bending tension moment, moment, load, P, Mx, My, Load component kN kN-m kN-m Dead load of structural components, DC 1344 0 9.01 Dead load of wearing surfaces and 149 0 1.07 utilities, DW Truck live load per lane, LLTR 32.9 0 35.8 Lane live load per lane, LLLA 82.4 0 90.0 Fatigue live load, LLFA 44.0, 1.10 0 15.0, 4.40 13.11.2 LRFD Design of Truss Hanger The following example, prepared in the SI system of units, illustrates the design of a tensile member that also supports a primary live load bending moment. The existence of the bending moment is not common in truss members, but can result from unusual framing. In this example, the bending moment serves to illustrate the application of various provisions of the LRFD Specifications. A fabricated H-shaped hanger member is subjected to the unfactored design loads listed in Table 13.7. The applicable AASHTO load factors for the Strength-I Limit State and the Fatigue Limit State are listed in Table 13.8. The impact factor, I, is 1.15 for the fatigue limit state and 1.33 for all other limit states. For the overall bridge cross section, the governing live load condition places three lanes of live load on the structure with a distribution factor, DF, of 2.04 and a multiple presence factor, MPF, of 0.85. For the fatigue limit state, the placement of the single fatigue truck produces a distribution factor of 0.743. The multiple presence factor is not applied to the fatigue limit state. The factored force effect, Q, in the member is calculated for the axial force and the moment in Table 13.7 from the following equation to obtain the factored member load and moment: TABLE 13.8 AASHTO Load Factors Type of factor Strength-I limit state* Fatigue limit state Ductility, D 1.00 1.0 Redundancy, R 1.05 1.0 Importance, I 1.05 1.0 D R I** 1.10 1.0 Dead load, DC 1.25 / 0.90 — Dead load, DW 1.50 / 0.65 — Live load impact, LL I 1.75 0.75 * Basic load combination relating to normal vehicular use of bridge without wind. ** 0.95 for loads for which a maximum load factor is appropriate; 1 / 1.10 for loads for which a minimum load factor is appropriate. TRUSS BRIDGES 13.33 TABLE 13.9 Factored Design Loads (Nominal Force Effects) Axial Bending Bending tension moment, moment, load, Pu, Mux, Muy, Limit state kN kN-m kN-m Strength-I 2515 0 450 Fatigue 28.2, 0.70 0 9.61, 2.82 Q [ DC DC DWDW (DF)(MPF)(LLTR * I LL I LLLA)] (13.23) where DF is the distribution factor, MPF is the multiple presence factor, I is the impact factor, and the other terms are defined in Tables 13.7 and 13.8. For example, for the axial load, Q is calculated as follows: Q 1.10[1.25 * 1344 1.50 * 149 1.75(2.04)(0.85)(32.9 * 1.33 82.4)] 2515 kN Table 13.9 summarizes the nominal force effects for the member. The preliminary section selected is shown in Fig. 13.9. The member length is 20 m, the yield stress 345 MPa, the tensile strength 450 MPa, and the diameter of A325 bolts is 24 mm. Section properties are listed in Table 13.10. Tensile Resistance. The tensile resistance is calculated as the lesser of Eqs. 13.11 and 13.12. From Eq. 13.11, gross section yielding, Pr 0.95 345 26,456 / 1000 8671 kN. From Eq. 13.12, net section fracture, assuming the force effects are transmitted to all components so that U 1.00, Pr 0.80 450 20,072 / 1000 7226 kN. Thus, net section fracture controls and Pr 7226 kN. Flexural Resistance. Because net section fracture controls, use net section properties for calculating flexural resistance. 
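(A quick aside before continuing with the flexural resistance: the factored axial load above is easy to verify. The sketch below — mine, not the handbook's — reproduces the 2515-kN value of Table 13.9 from the Table 13.7 loads and the Table 13.8 factors; the parameter names are my own.)

```python
def factored_force_effect(eta, dc, dw, ll_truck, ll_lane, gamma_dc=1.25, gamma_dw=1.50,
                          gamma_ll=1.75, DF=2.04, MPF=0.85, impact=1.33):
    """Strength-I factored force effect per Eq. 13.23."""
    live = gamma_ll * DF * MPF * (ll_truck * impact + ll_lane)
    return eta * (gamma_dc * dc + gamma_dw * dw + live)

# Axial loads from Table 13.7 (kN): DC = 1344, DW = 149, truck = 32.9, lane = 82.4.
Pu = factored_force_effect(eta=1.10, dc=1344.0, dw=149.0, ll_truck=32.9, ll_lane=82.4)
print(f"Pu = {Pu:.0f} kN")   # ~2515 kN, matching Table 13.9
```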
Also, because Mx 0, only investigate weak axis bending. The nominal moment strength, Mn, is defined by AASHTO in this case as the plastic moment. Thus, for an H-section about the weak axis, in terms of the yield stress, Fy, and section modulus, S, FIGURE 13.9 Cross section of H-shaped hanger. 13.34 SECTION THIRTEEN TABLE 13.10 Section Properties for Example Problem Area Ag 26,456 mm2 An 20,0772 mm2 Moment of Inertia Ixg 1.92 109 mm4 Ixn 1.44 109 mm4 Iyg 6.05 108 mm4 Iyn 4.56 108 mm4 Section Modulus Sxg 6.30 106 mm3 Sxn 4.71 106 mm3 Syg 1.98 106 mm3 Syn 1.49 106 mm3 Mny 1.5Fy S (13.24) Substituting y-axis values, Mny 1.5 345 1.49 106 / 10002 771 kN-m. The factored flexural resistance, Mr is defined as Mr ƒ Mn (13.24a) where ƒ is the resistance factor for flexure (1.00). Therefore, in this case, Mry 1.00 Mny 771 kN-m. Combined Tension and Flexure. This will be checked for the Strength-I limit state using the nominal force effects listed in Table 13.9. First calculate Pu / Pr 2515 / 7226 0.348. Because this exceeds 0.2, Eq. 13.20 applies. Substitute appropriate values as follows: 2515 8 450 0 0.87 1.00 OK 7226 9 771 Slenderness Ratio. AASHTO requires that tension members other than rods, eyebars, ca- bles and plates satisfy certain slenderness ratio (l / r) requirements. For main members subject to stress reversal, l / r 140. If the present case the least radius of gyration is r Iyg / Ag 6.05 * 108 / 26,456 151 mm and l / r 20,000 / 151 132. This is within the limit of 140. Fatigue Limit State. The member is fabricated from plates with continuous fillet welds parallel to the applied stress. Slip-critical bolts are used for the end connections. Both of these are category B fatigue details. The average daily truck traffic, ADTT, is 2250 and three lanes are available to trucks. The number of trucks per day in a single-lane, averaged over the design life is calculated from the AASHTO expression, ADTTSL p * ADTT (13.25) where p is the fraction of truck traffic in a single lane as follows: 1.00 for 1 truck lane, 0.85 for two truck lanes, and 0.80 for three or more truck lanes. Therefore, ADTTSL 0.80 * 2250 1800. The nominal fatigue resistance is calculated as a maximum permissible stress range as follows: 1/3 A 1 F ( F)TH (13.26) N 2 where TRUSS BRIDGES 13.35 N (365)(75)(n)(ADTTSL) (13.27) In the above, A is a fatigue constant that varies with the fatigue detail category, n is the number of stress range cycles per truck, and ( F )TH is the constant amplitude fatigue thresh- old. These constants are found in the AASHTO LRFD Specification for the present case as follows: A 39.3 * 1011 MPa3, n 1.0, and ( F )TH 110 MPa. Substitute in Eq. 13.26: 1/3 39.3 * 1011 1 F 43.0 MPa and ( F )TH 55 MPa 365 * 75 * 1.0 * 1800 2 Therefore, F 55 MPa. Next calculate the stress range for the force effects in Table 13.9. For the web-to-flange welds, which lie near the neutral axis, only the axial load is considered, and net section properties are used as the worst case: 28.2 ( 0.70) * 1000 1.44 MPa 55 MPa OK 20,072 For the extreme fiber at the slip-critical connections, both axial load and flexure is considered, and gross section properties are used: 28.2 ( 0.70) 9.61 ( 2.82) * 103 * 106 7.37 MPa 55 MPa OK 26,456 1.98 * 106 Thus, fatigue does not control and the member selection is satisfactory. A separate check shows that the bolts are also adequate. 13.12 TRUSS JOINT DESIGN PROCEDURE At every joint in a truss, working lines of the intersecting members preferably should meet at a point to avoid eccentric loading (Art. 
13.2). While the members may be welded directly to each other, most frequently they are connected to each other by bolting to gusset plates. Angle members may be bolted to a single gusset plate, whereas box and H shapes may be bolted to a pair of gusset plates. A gusset plate usually is a one-piece element. When necessary, it may be spliced with groove welds. When the free edges of the plate will be subjected to compression, they usually are stiffened with plates or angles. Consideration should be given in design to the possibility of the stresses in gusset plates during erection being opposite in sense to the stresses that will be imposed by service loads. Gusset plates are sometimes designed by the method of sections based on conventional strength of materials theory. The method of sections involves investigation of stresses on various planes through a plate and truss members. Analysis of gusset plates by finite-element methods, however, may be advisable where unusual geometry exists. Transfer of member forces into and out of a gusset plate invokes the potential for block shear around the connector groups and is assumed to have about a 30 angle of distribution with respect to the gage line, as illustrated in Fig. 13.10 (line 1-5 and 4-6). The following summarizes a procedure for load-factor design of a truss joint. Splices are assumed to occur within the truss joints. (See examples in Arts. 13.13 and 13.14.) The concept employed in the procedure can also be applied to working-stress design. 1. Lay out the centerlines of truss members to an appropriate scale and the members to a scale of 1⁄2 in 1 ft, with gage lines. 2. Detail the fixed parts, such as floorbeam, strut, and lateral connections. 3. Determine the grade and size of bolts to be used. 13.36 SECTION THIRTEEN FIGURE 13.10 Typical design sections for a gusset plate. 4. Detail the end connections of truss diagonals. The connections should be designed for the average of the design strength of the diagonals and the factored load they carry but not less than 75% of the design strength. The design strength should be taken as the smallest of the following: (a) member strength, (b) column capacity, and (c) strength based on the width-thickness ratio b / t. A diagonal should have at least the major portion of its ends normal to the working line (square) so that milling across the ends will permit placing of templates for bolt-hole alignment accurately. The corners of the diagonal should be as close as possible to the cover plates of the chord and verticals. Bolts for connection to a gusset plate should be centered about the neutral axis of the member. 5. Design fillet welds connecting a flange plate of a welded box member to the web plates, or the web plate of an H member to the flange plates, to transfer the connection load from the flange plate into the web plates over the length of the gusset connection. Weld lengths should be designed to satisfy fatigue requirements. The weld size should be shown on the plans if the size required for loads or fatigue is larger than the minimum size allowed. 6. Avoid the need for fills between gusset plates and welded-box truss members by keeping the out-to-out dimension of web plates and the in-to-in dimension of gusset plates con- stant. 7. Determine gusset-plate outlines. This step is influenced principally by the diagonal con- nections. 8. Select a gusset-plate thickness t to satisfy the following criteria, as illustrated in Fig. 13.10: a. 
The loads for which a diagonal is connected may be resolved into components parallel to and normal to line A-A in Fig. 13.10 (horizontal and vertical). A shearing stress is induced along the gross section of line A-A through the last line of bolts. Equal to the sum of the horizontal components of the diagonals (if they act in the same TRUSS BRIDGES 13.37 direction), this stress should not exceed Fy / 1.35 3 , where Fy is the yield stress of the steel, ksi. b. A compression stress is induced in the edge of the gusset plate along Section A-A (Fig. 13.10) by the vertical components of the diagonals (applied at C and D) and the connection load of the vertical or floorbeam, when compressive. The compression stress should not exceed the permissible column stress for the unsupported length of the gusset plate (L or b in Fig. 13.10). A stiffening angle should be provided if the slenderness ratio L / r L 12 / t of the compression edge exceeds 120, or if the permissible column stress is exceeded. The L / r of the section formed by the angle plus a 12-in width of the gusset plate should be used to recheck that L / r 120 and the permissible column stress is not exceeded. In addition to checking the L / r of the gusset in compression, the width-thickness ratio b / t of every free edge should be checked to ensure that it does not exceed 348 / Fy. c. At a diagonal (Fig. 13.10), V1 V2 Pd (13.28) where Pd load from the diagonal, kips V1 shear strength, kips, along lines 1-2 and 3-4 AgFy / 3 Ag gross area, in2, along those lines V2 strength, kips, along line 2-3 based on AnFy for tension diagonals or AgFa for compression diagonals An net area, in2, of the section Fa allowable compressive stress, ksi The distance L in Fig. 13.10 is used to compute Fa for sections 2-3 and 5-6. d. Assume that the connection stress transmitted to the gusset plate by a diagonal spreads over the plate within lines that diverge outward at 30 to the axis of the member from the first bolt in each exterior row of bolts, as indicated by path 1-5-6- 4 (on the right in Fig. 13.10). Then, the stress on the section normal to the axis of the diagonal at the last row of bolts (along line 5-6) and included between these diverging lines should not exceed Fy on the net-section for tension diagonals and Fa for compression diagonals. 9. Design the chord splice (at the joint) for the full capacity of the chords. Arrange the gusset plates and additional splice material to balance, as much as practical, the segment being spliced. 10. When the chord splice is to be made with a web splice plate on the inside of a box member (Fig. 13.11), provide extra bolts between the chords and the gusset on each side of the inner splice plate when the joint lies along the centerline of the floorbeam. This should be done because in the diaphragm bolts at floorbeam connections deliver some floorbeam reaction across the chords. When a splice plate is installed on the outer side of the gusset, back of the floorbeam connection angles (Fig. 13.11), the entire group of floorbeam bolts will be stressed, both vertically and horizontally, and should not be counted as splice bolts. 11. Determine the size of standard perforations and the distances from the ends of the member. 13.13 EXAMPLE—LOAD FACTOR DESIGN OF TRUSS JOINT The joint shown in Fig. 13.11 is to be designed to satisfy the criteria listed in Table 13.11. Fasteners to be used are 11⁄8-in-dia. 
A325 high-strength bolts in a slip-critical connection 13.38 SECTION THIRTEEN FIGURE 13.11 Truss joint for example of load-factor design. TABLE 13.11 Allowable Stresses for Truss Joint, ksi* Yield stress of steel, ksi Design section 36 50 Shear on line A-A 15.4 21.4 Shear on lines 1-2 and 3-4 20.8 28.9 Tension on lines 2-3 and 5-5 36.0 50.0 * Figs. 13.10 and 13.11. TRUSS BRIDGES 13.39 with Class A surfaces, with an allowable shear stress Fv 15.5 ksi assume 16 ksi for this example. The bolts connecting a diagonal or vertical to a gusset plate then have a shear capacity, kips, for service loads Pv NAvFv 16NAv (13.29) where N number of bolts and Av cross-sectional area of a bolt, in2. For load-factor design, Pv is multiplied by a load factor. For example, for Group I loading, 1.5[D (4 / 3)(L I)] 1.5(1 R / 3)Pv (13.30) where R ratio of live load L to the total service load. Hence, for this loading, and load factor is 1.5(1 R / 3). Diagonal U15-L14. The diagonal is subjected to factored loads of 2,219 kips compression and 462 kips tension. It has a design strength of 2,379 kips. The AASHTO SLD Specifi- cations require that the connection to the gusset plate transmit 75% of the design strength or the average of the factored load and the design strength, whichever is larger. Thus, the design load for the connection is P (2219 2379) / 2 2299 kips 0.75 2379 The ratio of the service live load to the total service load for the diagonal is R 0.55. Hence, for Group I loading on the bolts, the load factor is 1.5(1 R / 3) 1.775. For service loads, the 11⁄8-in-dia. bolts have a capacity of 15.90 kips per shear plane. Therefore, since the member is connected to two gusset plates, the number of bolts required for diagonal U15-L14 is 2299 N 41 per side 2 1.775 15.90 Diagonal L14-U13. The diagonal is subjected to factored loads of a maximum of 3272 kips tension and a minimum of 650 kips tension. It has a design strength of 3425 kips. The design load for the connection is P (3272 3425) / 2 3349 kips 0.75 3425 The ratio of the service live load to the total service load is R 0.374, and the load factor for the bolts is 1.5(1 0.374 / 3) 1.687. Then, the number of 11⁄8-in bolts required is 3349 N 63 per side 2 1.687 15.90 Vertical U14-L14. The vertical carries a factored compression load of 362 kips. It has a design strength of 1439 kips, limited by b / t at a perforation. The design load for the con- nection is P 0.75 1439 1079 kips (362 1439) / 2 Since the vertical does not carry any live load, the load factor for the bolts is 1.5. Hence, the number of 11⁄8-in bolts required for the vertical is 1079 N 23 per side 2 1.5 15.90 13.40 SECTION THIRTEEN Splice of Chord Cover Plates. Each cover plate of the box chord is to be spliced with a plate on the inner and outer face (Fig. 13.12). A36 steel will be used for the splice material, as for the chord. Fasteners are 7⁄8-in-dia. A325 bolts, with a capacity for service loads of 9.62 kips per shear plane. The bolt load factor is 1.791. The cover plate on chord L14-L15 (Fig. 13.11) is 13⁄16 343⁄4 in but has 12-in-wide access perforations. Usable area of the plate is 18.48 in2. The cover plate for chord L13- L14 is 13⁄16 34 in, also with 12-in-wide access perforations. Usable area of this plate is 17.88 in2. Design of the chord splice is based on the 17.88-in2 area. 
The difference of 0.60 in2 between this area and that of the larger cover plate will be made up on the L14-L15 side of the web-plate splice as ‘‘cover excess.’’ Where the design section of the joint elements is controlled by allowances for bolts, only the excess exceeding 15% of the gross section area is deducted from the gross area to obtain the design area. (This is the designer’s interpretation of the applicable requirements for splices in the AASHTO SLD Specifications. The interpretation is based on the observation that, for the typical dimensions of members, holes, bolt patterns and grades of steel used on the bridge in question, the capacity of tension members was often controlled by the design gross area as illustrated in Arts. 13.10.1 and 13.10.2. The current edition of the specifications should be consulted on this and other interpretations, inasmuch as the specifications are under constant reevaluation.) The number of bolts needed for a cover-plate splice is 17.88 36 N 19 per side 2 1.791 9.62 Try two splice plates, each 3⁄8 31 in, with a gross area of 23.26 in2. Assume eight 1-in- dia. bolt holes in the cross section. The area to be deducted for the holes then is 2 0.375(8 1 0.15 31) 2.51 in2 Consequently, the area of the design net section is An 23.26 2.51 20.75 in2 17.88 in2—OK Tension Splice of Chord Web Plate. A splice is to be provided between the 11⁄4 54-in web of chord L14-L15 and the 15⁄8 54-in web of the L13-L14 chord. Because of the difference in web thickness, a 3⁄8-in fill will be place on the inner face of the 11⁄4-in web (Fig. 13.13). The gusset plate can serve as part of the needed splice material. The remainder is supplied by a plate on the inner face of the web and a plate on the outer face of the gusset. Fasteners are 11⁄8-in-dia. A325 bolts, with a capacity for service loads of 15.90 kips. Load factor is 1.791. The web of the L13-L14 chord has a gross area of 87.75 in2. After deduction of the 15% excess of seven 11⁄4-in-dia. bolt holes, the design area of this web is 86.69 in2. FIGURE 13.12 Cross section of chord cover-plate splice for example of load-factor design. TRUSS BRIDGES 13.41 FIGURE 13.13 Cross section of chord web-plate slice for example of load-factor design. The web on the L14-L15 chord has a gross area of 67.50 in2. After deduction of the 15% excess of seven bolt holes from the chord splice and addition of the ‘‘cover excess’’ of 0.60 in2, the design area of this web is 67.29 in2. The gusset plate is 13⁄16 in thick and 118 in high. Assume that only the portion that overlaps the chord web; that is, 54 in, is effective in the splice. To account for the eccentric application of the chord load to the gusset, an effectiveness factor may be applied to the overlap, with the assumption that only the overlapping portion of the gusset plate is stressed by the chord load. The effectiveness factor Eƒ is defined as the ratio of the axial stress in the overlap due to the chord load to the sum of the axial stress on the full cross section of the gusset and the moment due to the eccentricity of the chord relative to the gusset centroid. P / Ao Eƒ (13.31) P / Ag Pey / I where P chord load Ao overlap area 54t Ag full area of gusset plate 118t e eccentricity of P 118 / 2 54 / 2 32 in y 118 / 2 59 in I 1183t / 12 136,900t in4 Substitution in Eq. (13.31) yields P / 54t Eƒ 0.832 P / 118t 32 59P / 136,900t The gross area of the gusset overlap is 13⁄16 54 43.88 in2. After deduction of the 15% excess of thirteen 11⁄4-in-dia. bolt holes, the design area is 37.25 in2. 
Then, the effective area of the gusset as a splice plate is 0.832 37.25 30.99 in2. In addition to the 67.29 in2 of web area, the gusset has to supply an area for transmission of the 250-kip horizontal component from diagonal U15-L14 (Fig. 13.11). With Fy 36 ksi, this area equals 250 / (36 2) 3.47 in2. Hence, the equivalent web area from the L14- L15 side of the joint is 67.29 3.47 70.76 in2. The number of bolts required to transfer the load to the inside and outside of the web should be determined based on the effective areas of gusset that add up to 70.76 in2 but that provide a net moment in the joint close to zero. The sum of the moments of the web components about the centerline of the combination of outside splice plate and gusset plate is 3.47 0.19 67.29 1.22 0.66 82.09 82.75 in3. Dividing this by 2.59 in, the distance to the center of the inside splice plate, yields an effective area for the inside splice plate of 31.95 in2. Hence, the effective area of the 13.42 SECTION THIRTEEN combination of the gusset and outside splice plates in 70.76 31.95 38.81 in2. This is then distributed to the plates in proportion to thickness: gusset, 24.96 in2, and splice plate, 13.85 in2. The number of 11⁄8-in A325 bolts required to develop a plate with area A is given by N AFy / (1.791 15.90) 36A / 28.48 1.264A Table 13.12 list the number of bolts for the various plates. Check of Gusset Plates. At Section A-A (Fig. 13.11), each plate is 128 in wide and 118 in high, 13⁄16 in thick. The design shear stress is 15.4 ksi (Table 13.11). The sum of the horizontal components of the loads on the truss diagonals is 1244 1705 2949 kips. This produces a shear stress on section A-A of 2,949 ƒv 13 14.18 ksi 15.4 ksi—OK 2 128 ⁄16 The vertical component of diagonal U15-L14 produces a moment about the centroid of the gusset of 1,934 21 40,600 kip-in and the vertical component of U13-L14 produces a moment 2,883 20.5 59,100 kip-in. The sum of these moments is M 99,700 kip- in. The stress at the edge of one gusset plate due to this moment is 6M 6 99,700 ƒb 22.47 ksi td 2 2(13⁄16)1282 The vertical, carrying a 362-kip load, imposes a stress P 362 ƒc 13 1.74 ksi A 2 128 ⁄16 The total stress then is ƒ 22.47 1.74 24.21 ksi. The width b of the gusset at the edge is 48 in. Hence, the width-thickness ratio is b / t 48 / (13⁄16) 59. From step b in Art. 13.12, the maximum permissible b / t is 348 / Fy 348 / 36 58 59. The edge has to be stiffened. Use a stiffener angle 3 3 1⁄2 in. For computation of the design compressive stress, assume the angle acts with a 12-in width of gusset plates. The slenderness ratio of the edge is 48 / 0.73 65.75. The maximum permissible slenderness ratio is 2 2 2 E / Fy 2 29,000 / 36 126 65.75 Hence, the design compressive stress is TABLE 13.12 Number of Bolts for Plate Development Plate Area, in2 Bolts Inside splice plate 31.95 41 Outside splice plate 13.85 18 Gusset plate on L14-L15 side (13.85 24.96 3.47) 35.34 45 Gusset plate on L13-L14 side (13.85 24.96) 38.81 50 TRUSS BRIDGES 13.43 2 Fy L ƒa 0.85Fy 1 2 (13.32) 4 E r 2 36 48 0.85 36 1 2 4 29,000 0.73 26.44 ksi 24.21 ksi—OK Next, the gusset plate is checked for shear and compression at the connection with di- agonal U15-L14. The diagonal carries a factored compression load of 2,299 kips. Shear paths 1-2 and 3-4 (Fig. 13.10) have a gross length of 93 in. From Table 13.11, the design shear stress is 20.8 ksi. Hence, design shear on these paths is 13 Vd 2 20.8 93 ⁄16 3143 kips 2299 kips—OK Path 2-3 need not be investigated for compression. 
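The arithmetic in this joint example is easy to reproduce. The sketch below (mine, not the handbook's) recomputes the bolt counts, the Eq. 13.31 effectiveness factor, and the Section A-A gusset stresses from the numbers quoted above; the helper names are my own.

```python
import math

def connection_design_load(factored_load, design_strength):
    """Larger of 75% of design strength and the average of factored load and design strength."""
    return max(0.75 * design_strength, (factored_load + design_strength) / 2.0)

def bolts_per_side(P, load_factor, bolt_capacity=15.90, shear_planes=2):
    """Bolts required per side for a member sandwiched between two gusset plates."""
    return math.ceil(P / (shear_planes * load_factor * bolt_capacity))

# Bolt counts for 1-1/8-in A325 bolts at 15.90 kips per shear plane.
print(bolts_per_side(connection_design_load(2219, 2379), 1.5 * (1 + 0.55 / 3)))    # U15-L14: 41
print(bolts_per_side(connection_design_load(3272, 3425), 1.5 * (1 + 0.374 / 3)))   # L14-U13: 63
print(bolts_per_side(connection_design_load(362, 1439), 1.5))                      # U14-L14: 23

def gusset_effectiveness(overlap, depth):
    """Effectiveness factor of Eq. 13.31; the chord load P and thickness t cancel out."""
    e = (depth - overlap) / 2.0          # eccentricity of the chord load
    y = depth / 2.0                      # extreme fiber of the gusset
    I_over_t = depth ** 3 / 12.0
    return (1.0 / overlap) / (1.0 / depth + e * y / I_over_t)

print(round(gusset_effectiveness(54.0, 118.0), 3))     # 0.832 (0.849 for the 123-in gusset of Art. 13.14)

# Gusset check at Section A-A: shear, edge bending plus direct stress, and b/t.
t, d = 13.0 / 16.0, 128.0
fv = (1244 + 1705) / (2 * d * t)                       # 14.18 ksi < 15.4 ksi
fb = 6 * (1934 * 21 + 2883 * 20.5) / (2 * t * d ** 2)  # 22.47 ksi from the diagonal verticals
fc = 362 / (2 * d * t)                                 # 1.74 ksi from the vertical member
print(round(fv, 2), round(fb + fc, 2))                 # 14.18, 24.21
print(round(48 / t, 1), round(348 / math.sqrt(36)))    # b/t = 59.1 > 58, so the edge is stiffened
```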
For compression on path 5-6, a 30 distribution from the first bolt in the exterior row is assumed (Art. 13.12, step 8d ). The length of path 5-6 between the 30 lines in 82 in. The design stress, computed from Eq. (13.32) with a slenderness ratio of 52.9, is 27.9 ksi. The design strength of the gusset plate then is 13 P 2 27.9 82 ⁄16 3718 kips 2299 kips—OK Also, the gusset plate is checked for shear and tension at the connection with diagonal L14-U13. The diagonal carries a tension load of 3,272 kips. Shear paths 1-2 and 3-4 (Fig. 13.10) have a gross length of 98 in. From Table 13.11, the allowable shear stress is 20.8 ksi. Hence, the allowable shear on these paths is 13 Vd 2 20.8 98 ⁄16 3312 kips 3,272 kips—OK For path 2-3, capacity in tension with Fy 36 ksi is 13 P23 2 36 27 ⁄16 1580 kips For tension on path 5-6 (Fig. 13.10), a 30 distribution from the first bolt in the exterior row is assumed (Art. 13.12, step 8d ). The length of path 5-6 between the 30 lines is a net of 83 in. The allowable tension then is 13 P56 2 36 83 ⁄16 4856 kips 3272 kips—OK Welds to Develop Cover Plates. The fillet weld sizes selected are listed in Table 13.13 with their capacities, for an allowable stress of 26.10 ksi. A 5⁄16-in weld is selected for the diag- onals. It has a capacity of 5.76 kips / in. The allowable compressive stress for diagonal U15-L14 is 22.03 ksi. Then, length of fillet weld required is 22.03(7⁄8)231⁄8 38.7 in 2 5.76 For Fy 36 ksi, the length of fillet weld required for diagonal L14-U13 is 36(1⁄2)231⁄8 36.1 in 2 5.76 13.44 SECTION THIRTEEN TABLE 13.13 Weld Capacities—Load-Factor Design Weld size, in Capacity of weld, kips per in 5 ⁄16 5.76 3 8 ⁄ 6.92 7 ⁄16 8.07 1 2 ⁄ 9.23 13.14 EXAMPLE—SERVICE-LOAD DESIGN OF TRUSS JOINT The joint shown in Fig. 13.14 is to be designed for connections with 11⁄8-in-dia. A325 bolts with an allowable stress Fv 16 ksi. Shear capacity of the bolts is 15.90 kips. Diagonal U15-L14. The diagonal is subjected to loads of 1250 kips compression and 90 kips tension. The connection is designed for 1288 kips, 3% over design load. The number of bolts required for the connection to the 11⁄16-in-thick gusset plate is FIGURE 13.14 Truss joint for example of service-load design. TRUSS BRIDGES 13.45 N 1288 / (2 15.90) 41 per side Diagonal L14-U13. The diagonal is subjected to a maximum tension of 1939 kips and a minimum tension of 628 kips. The connection is designed for 1997 kips, 3% over design load. The number of 11⁄8-in-dia. A325 bolts required is N 1997 / (2 15.90) 63 per side Vertical U14-L14. The vertical carries a compression load of 241 kips. The member is 74.53 ft long and has a cross-sectional area of 70.69 in2. It has a radius of gyration r 10.52 in and slenderness ratio of KL / r 74.53 12 / 10.52 85.0 with K taken as unity. The allowable compression stress then is Fa 16.98 0.00053(KL / r)2 (13.33) 16.98 0.00053 85.02 13.15 ksi The allowable unit stress for width-thickness ratio b / t, however, is 11.10 13.15 and gov- erns. Hence, the allowable load is P 70.69 11.10 785 kips The number of bolts required is determined for 75% of the allowable load: N 0.75 785 / (2 15.90) 19 bolts per side Splice of Chord Cover Plates. Each cover plate of the box chord is to be spliced with a plate on the inner and outer face (Fig. 13.15). A36 steel will be used for the splice material, as for the chord. Fasteners are 7⁄8-in-dia. A325 bolts, with a capacity of 9.62 kips per shear plane. The cover for L14-L15 (Fig. 13.14) is 13⁄16 by 343⁄4 in but has 12-in-wide access perfo- rations. 
Usable area of the plate is 18.48 in2. The cover plate for L13-L14 is 13⁄16 34 in, also with 12-in-wide access perforations. Usable area of this plate is 17.88 in2. Design of the chord splice is based on the 17.88-in2 area. The difference of 0.60 in2 between this area and that of the larger cover plate will be made up on the L14-L15 side of the web plate splice as ‘‘cover excess.’’ Where the net section of the joint elements is controlled by the allowance for bolts, only the excess exceeding 15% of the gross area is deducted from the gross area to obtain the design gross area, as in load-factor design (Art. 13.13). For an allowable stress of 20 ksi in the cover plate, the number of bolts needed for the cover-plate splice is FIGURE 13.15 Cross section of chord cover-plate splice for example of service-load design. 13.46 SECTION THIRTEEN 17.88 20 N 19 per side 2 9.62 Try two splice plates, each 3⁄8 31 in, with a gross area of 23.26 in2. Assume eight 1-in- dia. bolt holes in the cross section. The area to be deducted for the holes then is 2 0.375(8 1 0.15 31) 2.51 in2 Consequently, the area of the design gross section is An 23.26 2.51 20.75 in2 17.88 in2—OK Splice of Chord Web Plate. A splice is to be provided between the 11⁄4 54-in web of chord L14-L15 and the 15⁄8 54-in web of the L13-L14 chord. Because of the difference in web thickness, a 3⁄8-in fill will be placed on the inner face of the 11⁄4-in web (Fig. 13.16). The gusset plate can serve as part of the needed splice material. The remainder is supplied by a plate on the inner face of the web and a plate on the outer face of the gusset. Fasteners are 11⁄8-in-dia. A325 bolts, with a capacity of 15.90 kips. The web of the L13-L14 chord has a design gross area of 87.75 in2. After deduction of the 15% excess of seven 11⁄4-in-dia. bolt holes, the net area of this web is 86.69 in2. The web of the L14-L15 chord has a design gross area of 67.50 in2. After deduction of the 15% excess of seven bolt holes from the chord splice and addition of the ‘‘cover excess’’ of 0.60 in2, the net area of this web is 67.29 in2. The gusset plate is 11⁄16 in thick and 123 in high. Assume that only the portion that overlaps the chord web, that is, 54 in, is effective in the splice. To account for the eccentric application of the chord load to the gusset, an effectiveness factor Eƒ [Eq. (13.31)] may be applied to the overlap (Art. 13.13). The moment of inertia of the gusset is 1233t / 12 155,100t in4. P / 54t Eƒ 0.849 P / 123t P(123 / 2 54 / 2)(123 / 2) / 155,100t The gross area of the gusset overlap is 11⁄16 54 37.13 in2. After the deduction of the excess of thirteen 1 ⁄4-in-dia. bolt holes, the net area is 31.52 in2. Then, the effective area 1 of the gusset as a splice plate is 0.849 31.52 26.76 in2. In addition to the 67.29 in2 of web area, the gusset has to supply an area for transmission of the 49-kip horizontal component from diagonal U15-L14. With an allowable stress of 20 ksi, the area is 49 / (20 2) 1.23 in2. hence, the equivalent web area from the L14-L15 side of the joint is 67.29 1.23 68.52 in2. The number of bolts required to transfer the load to the inside and outside of the web should be based on the effective areas of gusset that add up to 68.52 in2 but that provide a net moment in the joint close to zero. The sum of the moments of the web components about the centerline of the combination of outside splice plate and gusset plate is 1.23 0.19 67.29 1.16 78.29 kip-in. 
FIGURE 13.16 Cross section of chord web-plate splice for example of service load design. TRUSS BRIDGES 13.47 Dividing this by 2.53, the distance to the center of the inside splice plate, yields an effective area for the inside splice plate of 30.94 in2. Hence, the effective area of the combination of the gusset and outside splice plates is 68.52 30.94 37.58 in2. This is then distributed to the plates as follows: gusset, 22.88 in2, and outside splice plate, 14.70 in2. The number of 11⁄8-in-dia. A325 bolts required to develop a plate with area A and allow- able stress of 20 ksi is N 20A / 15.90 1.258A Table 13.14 lists the number of bolts for the various plates. Check of Gusset Plates. At section A-A (Fig. 13.11), each plate is 134 in wide and 123 in high, 11⁄16 in thick. The allowable shear stress is 10 ksi. The sum of the horizontal compo- nents of the loads on the truss diagonals is 697 1017 1714 kips. This produces a shear stress on Section A-A of 1714 ƒv 11 9.30 ksi 10 ksi—OK 2 134 ⁄16 The vertical component of diagonal U15-L14 produces a moment about the centroid of the gusset of 1083 21 22,740 kip-in and the vertical component of U13-L14 produces a moment 1719 20.5 35,240 kip-in. The sum of these moments is 57,980 kip-in. The stress at the edge of one gusset plate due to this moment is 6M 6 57,980 ƒb 14.09 ksi td 2 2(11⁄16)1342 The vertical carrying a 241-kip load, imposes a stress P 241 ƒc 11 1.31 ksi A 2 134 ⁄16 The total stress then is 14.09 1.31 15.40 ksi The width b of the gusset at the edge is 52 in. Hence, the width-thickness ratio is b / t 52 / (11⁄16) 75.6. From step 8b in Art. 13.12, the maximum permissible b / t is 348 Fy 348 / 36 58 75.6. The edge has to stiffened. Use a stiffener angle 4 3 1⁄2 in. For computation of the allowable compressive stress, assume the angle acts with a 12-in width of gusset plate. The slenderness ratio of the edge is 52 / 1.00 52.0. The maximum permissible slenderness ratio is 2 2 2 E / Fy 2 29,000 / 36 126 552 Hence, the allowable stress from Eq. (13.33) is TABLE 13.14 Number of Bolts for Plate Development Plate Area, in2 Bolts Inside splice plate 30.94 39 Outside splice plate 14.70 19 Gusset plate on L14-L15 side (14.70 22.88 1.16) 36.42 46 Gusset plate on L13-L14 side (14.70 22.88) 37.58 48 13.48 SECTION THIRTEEN Fa 16.98 0.00053 522 15.55 ksi 15.40 ksi—OK Next, the gusset plate is checked for shear and compression at the connection with di- agonal U15-L14. The diagonal carries a load of 1,288 kips. Shear paths 1-2 and 3-4 (Fig. 13.10) have a gross length of 105 in. The allowable shear stress is 12 ksi. Hence, the allowable shear on these paths is 11 Vd 2 12 105 ⁄16 1733 kips 1288 kips—OK Path 2-3 need not be investigated for compression. For compression on path 5-6, a 30 distribution from the first bolt in the exterior row is assumed (Art. 13.12, step 8d ). The length of path 5-6 between the 30 lines is 88 in. The allowable stress, computed from Eq. (13.33) with a slenderness ratio KL / r 0.5 25 / 0.198 63, is 14.88 ksi. This permits the gusset to withstand a load 11 P 2 14.88 88 ⁄16 1800 kips 1288 kips Also, the gusset plate is checked for shear and tension at the connection with diagonal L14-U13. The diagonal carries a tension load of 1,997 kips. Shear paths 1-2 and 3-4 (Fig. 13.10) have a gross length of 102 in. The allowable shear stress is 12 ksi. 
Hence, the allowable shear on these paths is 11 Vd 2 12 102 ⁄16 1683 kips For path 2-3, capacity in tension with an allowable stress of 20 ksi is 11 P23 2 20 21.6 ⁄16 594 kips (1997 1683)—OK For tension on path 5-6 (Fig. 13.10), a 30 distribution from the first bolt in the exterior row is assumed (Art. 13.12, step 8d ). The length of path 5-6 between the 30 lines is a net of 88 in. The allowable tension then is 11 P 2 20 88 ⁄16 2420 kips 1997 kips—OK Welds to Develop Cover Plates. The fillet weld sizes selected are listed in Table 13.15 with their capacities, for an allowable stress of 15.66 ksi. A 5⁄16-in weld is selected for the diag- onals. It has a capacity of 3.46 kips / in. The allowable compressive stress for diagonal U15-L14 is 11.93 ksi. Then, length of fillet weld required is TABLE 13.15 Weld Capacities—Service-Load Design Weld size, in Capacity of weld, kips per in 5 ⁄16 3.46 3 8 ⁄ 4.15 7 ⁄16 4.84 1 2 ⁄ 5.54 TRUSS BRIDGES 13.49 11.93(7⁄8)231⁄8 34.9 in 2 3.46 The allowable tensile stress for diagonal L14-U13 is 20.99 ksi. In this case, the required weld length is 20.99(1⁄2)231⁄8 35.1 in 2 3.46 13.15 SKEWED BRIDGES To reduce scour and to avoid impeding stream flow, it is generally desirable to orient piers with centerlines parallel to direction of flow; therefore skewed spans may be required. Truss construction does not lend itself to bridges where piers are not at right angles to the super- structure (skew crossings). Hence, these should be avoided and this can generally be done by using longer spans with normal piers. In economic comparisons, it is reasonable to assume some increased cost of steel fabrication if skewed trusses are to be used. If a skewed crossing is a necessity, it is sometimes possible to establish a panel length equal to the skew distance W tan , where W is the distance between trusses and the skew angle. This aligns panels and maintains perpendicular connections of floorbeams to the trusses (Fig. 13.17). If such a layout is possible, there is little difference in cost and skewed spans and normal spans. Design principles are similar. If the skewed distance is less than the panel length, it might be possible to take up the difference in the angle of inclination of the end post, as shown in Fig. 13.17. This keeps the cost down, but results in trusses that are not symmetrical within themselves and, depending on the proportions, could be very unpleasing esthetically. If the skewed distance is greater than the panel length, it may be necessary to vary panel lengths along the bridge. One solution to such a skew is shown in Fig. 13.18, where a truss, similar to the truss in Fig. 13.17, is not symmetrical within itself FIGURE 13.17 Skewed bridge with skew distance less than panel length. 13.50 SECTION THIRTEEN FIGURE 13.18 Skewed bridge with skew distance exceeding panel length. and, again, might not be esthetically pleasing. The most desirable solution for skewed bridges is the alternative shown in Fig. 13.17. Skewed bridges require considerably more analysis than normal ones, because the load distribution is nonuniform. Placement of loads for maximum effect, distribution through the floorbeams, and determination of panel point concentrations are all affected by the skew. Unequal deflections of the trusses require additional checking of sway frames and floor system connections to the trusses. 
13.16 TRUSS BRIDGES ON CURVES When it is necessary to locate a truss bridge on a curve, designers should give special consideration to truss spacing, location of bridge centerline, and stresses. For highway bridges, location of bridge centerline and stresses due to centrifugal force are of special concern. For through trusses, the permissible degree of curvature is limited because the roadway has to be built on a curve, while trusses are planar, constructed on chords. Thus, only a small degree of throw, or offset from a tangent, can be tolerated. Regardless of the type of bridge, horizontal centrifugal forces have to be transmitted through the floor system to the lateral system and then to supports. For railroad truss bridges, truss spacing usually provides less clearance than the spacing for highway bridges. Thus, designers must take into account tilting of cars due to super- elevation and the swing of cars overhanging the track. The centerline of a through-truss bridge on a curve often is located so that the overhang at midspan equals the overhang at each span end. For bridges with more than one truss span, layout studies should be made to determine the best position for the trusses. Train weight on a bridge on a curve is not centered on the centerline of track. Loads are greater on the outer truss than on the inner truss because the resultant of weight and cen- trifugal force is closer to the outer truss. Theoretically, the load on each panel point would be different and difficult to determine exactly. Because the difference in loading on inner and outer trusses is small compared with the total load, it is generally adequate to make a simple calculation for a percentage increase to be applied throughout a bridge. Stress calculations for centrifugal forces are similar to those for any horizontal load. Floorbeams, as well as the lateral system, should be analyzed for these forces. TRUSS BRIDGES 13.51 13.17 TRUSS SUPPORTS AND OTHER DETAILS End bearings transmit the reactions from trusses to substructure elements, such as abutments or piers. Unless trusses are supported on tall slender piers that can deflect horizontally with- out exerting large forces on the trusses, it is customary to provide expansion bearings at one end of the span and fixed bearings at the other end. Anchoring a truss to the support, a fixed bearing transmits the longitudinal loads from wind and live-load traction, as well as vertical loads and transverse wind. This bearing also must incorporate a hinge, curved bearing plate, pin arrangement, or elastomeric pads to permit end rotation of the truss in its plane. An expansion bearing transmits only vertical and transverse loads to the support. It per- mits changes in length of trusses, as well as end rotation. Many types of bearings are available. To ensure proper functioning of trusses in accord- ance with design principles, designers should make a thorough study of the bearings, in- cluding allowances for reactions, end rotations and horizontal movements. For short trusses, a rocker may be used for the expansion end of a truss. For long trusses, it generally is necessary to utilize some sort of roller support. See also Arts. 10.22 and 11.9. Inspection Walkways. An essential part of a truss design is provision of an inspection walkway. Such walkways permit thorough structural inspection and also are of use during erection and painting of bridges. The additional steel required to support a walkway is almost insignificant. 
13.18 CONTINUOUS TRUSSES Many river crossings do not require more than one truss span to meet navigational require- ments. Nevertheless, continuous trusses have made possible economical bridge designs in many localities. Studies of alternative layouts are essential to ensure selection of the lowest- cost arrangement. The principles outlined in preceding articles of this section are just as applicable to continuous trusses as to simple spans. Analysis of the stresses in the members of continuous trusses, however, is more complex, unless computer-aided design is used. In this latter case, there is no practical difference in the calculation of member loads once the forces have been determined. However, if the truss is truly continuous, and, therefore, the truss in each span is statically indeterminant, the member forces are dependent on the stiff- ness of the truss members. This may make several iterations of member-force calculations necessary. But where sufficient points of articulation are provided to make each individual truss statically determinant, such as the case where a suspended span is inserted in a canti- lever truss, the member forces are not a function of member stiffness. As a result, live-load forces need be computed only once, and dead-load member forces need to be updated only for the change in member weight as the design cycle proceeds. When the stresses have been computed, design proceeds much as for simple spans. The preceding discussion implies that some simplification is possible by using cantilever design rather than continuous design. In fact, all other things being equal, the total weight of members will not be much different in the two designs if points of articulation are properly selected. More roadway joints will be required in the cantilever, but they, and the bearings, will be subject to less movement. However, use of continuity should be considered because elimination of the joints and devices necessary to provide for articulation will generally reduce maintenance, stiffen the bridge, increase redundancy and, therefore, improve the gen- eral robustness of the bridge. | http://www.docstoc.com/docs/42440485/Truss-bridge | 13 |
73 | In most mathematics, the concept of a limit is extremely important. Taking the limit of a sequence will tell us what happens at an infinite point of the sequence; it would be impossible to do this without limits, since we cannot do anything infinite times. Taking the limit of a function at a certain point is also significant - it gives us a lot of precise information about a function around a given point. In short, the limit is tremendously useful and is used in almost every branch of mathematics. For clarity's sake, we will begin with the limit of a sequence.
Suppose we have a sequence of numbers 2, 5, 8, 11, 14, 17, ... We can see that there is a pattern that describes the sequence of numbers. Knowing that pattern, we can give an explicit expression for the sequence: If we index 2 as the zeroth term of the sequence, then the nth term is 2 + 3n. Mathematically, we write an = 2 + 3n.
Here an represents the nth term of the sequence. The limit of the sequence is the number the sequence approaches if we look at an infinite number of terms. We can see that the more terms we take, the greater the numbers get, so if we take an infinite number of terms, the sequence goes to infinity. It is written mathematically like this: lim (n→∞) an = lim (n→∞) (2 + 3n) = ∞.
This is an easy example, and it can be figured out logically. However, we need a methodical way of solving limits. In general, we first try plugging in ∞ for n, and evaluating the limit according to a few rules.
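Before turning to those rules, a quick numerical sampling (a minimal Python sketch, not part of the original tutorial) makes the example concrete:

```python
# Sampling a_n = 2 + 3n at larger and larger n shows it growing without bound,
# which is exactly what the limit statement lim (n -> infinity) a_n = infinity expresses.
def a(n):
    return 2 + 3 * n

for n in (10, 1_000, 1_000_000):
    print(n, a(n))
```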
Rules For Computing Limits
To begin with, we must start with the fundamental rules of limits.
1. The Constant Rule
When we take the limit of a constant, non-changing sequence, the limit will simply be that constant. For example, suppose an = 4 no matter what n we choose (note that every term is just a single number; this is what we mean by a constant sequence). This sequence would just be 4, 4, 4, 4, 4, ... Then, if we took the limit of this sequence, lim (n→∞) an, it would always be 4.
2. The Multiplication Rule
If two sequences have limits that exist, then the limit of the product is the product of the limits. Suppose we know that lim (n→∞) an = A and, for a different sequence bn, lim (n→∞) bn = B; then we immediately know, by the multiplication rule, that lim (n→∞) anbn = A·B. In this case, the limit of our product, anbn, is equal to the product of the limit of an, A, and the limit of bn, B, otherwise known as the product of the limits. This rule will have more application when we get to limits of functions.
3. The Sum Rule
If two sequences have limits that exist, then the limit of the sum of sequences is the sum of the limits of the sequences. Suppose, again, that we know lim (n→∞) an = A and lim (n→∞) bn = B. Then, by this rule, we know that lim (n→∞) (an + bn) = A + B. This rule will also have more application when we get to limits of functions, but it is still useful with sequences.
Remember that in order for rules 2 and 3 to be relevant, our limits must exist. If you end up with a limit of ∞ or a non-existent limit, you can not add or multiply it to anything.
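A small numerical illustration of rules 2 and 3 (the sequences here are my own examples, not the tutorial's):

```python
# Two sequences with known limits: a_n -> 3 and b_n -> 5.
a = lambda n: 3 + 1 / n
b = lambda n: 5 - 2 / n

for n in (10, 1_000, 1_000_000):
    print(n, a(n) * b(n), a(n) + b(n))   # products approach 3*5 = 15, sums approach 3 + 5 = 8
```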
Beyond these rules, there are a few very important truths to know when calculating limits.
1/∞ = 0. This is based on the premise that a number with a very large denominator, like 1/999999, is very close to zero.
Also, if our sequence is described by a fraction with powers of n on the top and the bottom,
if the degree of n is higher on top, then the limit is infinity:
if the degree of n is higher on the bottom, then the limit is zero:
if the degree of n is the same in the numerator and denominator, then the limit is the ratio of the leading coeffecients:
In general, since a higher power of n grows incomparably faster than a lower power, only the highest powers of n in the numerator and denominator matter. The short numerical check below illustrates the three cases.
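The sequences in this sketch are my own illustrative examples (the tutorial's original displayed examples did not survive extraction), one for each of the three degree cases:

```python
# Numerator degree higher, denominator degree higher, and equal degrees.
cases = {
    "(n^2 + 1)/(n + 1)":     lambda n: (n**2 + 1) / (n + 1),          # -> infinity
    "(n + 1)/(n^2 + 1)":     lambda n: (n + 1) / (n**2 + 1),          # -> 0
    "(3n^2 + 5)/(2n^2 + 1)": lambda n: (3*n**2 + 5) / (2*n**2 + 1),   # -> 3/2
}
for label, f in cases.items():
    print(label, [round(f(n), 4) for n in (10, 1_000, 100_000)])
```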
Limits of Functions
The notion of a limit becomes very useful when we look at functions. With functions, we take the limit of a function as its independent variable, x, approaches a certain x-value. The limit tells us the value the function approaches near that point. For example, if we have f(x) = x/(x + 4), then we can take the limit of this function at x = 1: lim (x→1) f(x) = 1/5.
We may also learn a lot about the function using limits. Through our knowledge of domains, we know that the function is not defined at x = -4. We can check what happens to the function at a discontinuity by taking a limit at this point: as x approaches -4, |f(x)| → ∞. This tells us that the function has an asymptotic discontinuity (a vertical asymptote) at x = -4. Graphically, it also tells us that the function approaches either positive or negative infinity on each side of that point.
It may also help to see what happens to the function as x approaches infinity - this will tell us the behavior for large x-values. Again using the function defined above, if we take the limit as x approaches infinity we find that lim (x→∞) f(x) = 1, by using the same rules as for limits of sequences. This tells us that along the positive x-axis, the graph of the function will eventually be very close to the line y = 1.
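The behavior just described is easy to see numerically; the short Python sketch below (not part of the original tutorial) samples f(x) = x/(x + 4) at x = 1, near the asymptote, and for large x:

```python
def f(x):
    return x / (x + 4)

print(f(1))                         # 0.2 = 1/5
for x in (-3.9, -3.99, -4.01, -4.1):
    print(x, f(x))                  # blows up near x = -4, with opposite signs on the two sides
for x in (10, 1_000, 1_000_000):
    print(x, f(x))                  # approaches 1 as x -> infinity
```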
Left-hand and Right-hand Limits
If we look at the graph of our function f(x) = x/(x + 4), you may notice that at the discontinuity x = -4, on one side the function approaches positive infinity, and on the other side it approaches negative infinity. At certain points, a function can behave differently on its left and right sides. Because of this, we can take one-sided limits. In our example, we may try taking the left-hand limit at our discontinuity, denoted by lim (x→-4⁻) f(x). The -4⁻ represents a number infinitely close to -4 but the tiniest bit smaller than it (or graphically, to the left of it). You can think of it as -4.00001 (or -4.0000000000001, etc.). The point is that it is very close to -4 but is clearly on the left-hand side of -4. We take the limit just as we would normally, but now the (x + 4) term in the denominator is definitively (-4⁻ + 4). Remembering that this is similar to (-4.00001 + 4), we can see that the denominator is still very close to zero but is decidedly negative, while the numerator is close to -4 and also negative. The quotient of two negative numbers is positive, so the function approaches positive infinity on the left side. A right-hand limit works similarly, except we take lim (x→-4⁺) f(x); there the denominator is small and positive, so the function approaches negative infinity. | http://www.math.brown.edu/UTRA/limits.html | 13 |
191 | Planck's law describes the electromagnetic radiation emitted by a black body in thermal equilibrium at a definite temperature. The law is named after Max Planck, who originally proposed it in 1900. It is a pioneer result of modern physics and quantum theory.
Planck's law may be written, per unit frequency or per unit wavelength, as:
Bν(ν, T) = (2hν³/c²) · 1/(exp(hν/kBT) − 1)    or    Bλ(λ, T) = (2hc²/λ⁵) · 1/(exp(hc/λkBT) − 1)
where B is the spectral radiance of the surface of the black body, T is its absolute temperature, ν is the frequency of the emitted radiation, λ is its wavelength, kB is the Boltzmann constant, h is the Planck constant, and c is the speed of light. These are not the only ways to express the law; expressing it in terms of wavenumber rather than frequency or wavelength is also common, as are expressions in terms of the number of photons emitted at a certain wavelength, rather than the energy emitted. In the limit of low frequencies (i.e. long wavelengths), Planck's law becomes the Rayleigh–Jeans law, while in the limit of high frequencies (i.e. short wavelengths) it tends to the Wien approximation.
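For readers who want to evaluate the law directly, here is a minimal Python sketch (not part of the original article) of the frequency and wavelength forms; the CODATA constants and the 5772 K solar effective temperature are standard values used purely for illustration.

```python
import math

h  = 6.62607015e-34   # Planck constant, J*s
c  = 2.99792458e8     # speed of light, m/s
kB = 1.380649e-23     # Boltzmann constant, J/K

def B_nu(nu, T):
    """Spectral radiance per unit frequency, W * m^-2 * sr^-1 * Hz^-1."""
    return (2 * h * nu**3 / c**2) / math.expm1(h * nu / (kB * T))

def B_lambda(lam, T):
    """Spectral radiance per unit wavelength, W * m^-2 * sr^-1 * m^-1."""
    return (2 * h * c**2 / lam**5) / math.expm1(h * c / (lam * kB * T))

T = 5772.0                        # approximate effective temperature of the Sun's surface, K
print(B_nu(c / 550e-9, T))        # radiance at 550 nm, expressed in the frequency form

# Coarse scan for the wavelength peak; Wien's displacement law gives ~2.898e-3 / T ~ 502 nm.
lams = [i * 1e-9 for i in range(100, 3001)]
peak = max(lams, key=lambda lam: B_lambda(lam, T))
print(f"peak near {peak * 1e9:.0f} nm")
```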
Max Planck developed the law in 1900, originally with only empirically determined constants, and later showed that, expressed as an energy distribution, it is the unique stable distribution for radiation in thermodynamic equilibrium. As an energy distribution, it is one of a family of thermal equilibrium distributions which include the Bose–Einstein distribution, the Fermi–Dirac distribution and the Maxwell–Boltzmann distribution.
Every physical body spontaneously and continuously emits electromagnetic radiation. For a body at a definite temperature, the higher that temperature, the more total radiation the body emits and the shorter the wavelength at which its emission peaks. For example, at room temperature (~300 K), bodies emit thermal radiation that is mostly infrared and invisible to the eye. At higher temperatures the amount of infrared radiation increases and can be felt as heat, and the object glows visibly red. At extremely high temperatures, bodies are dazzlingly bright yellow or blue-white and emit significant amounts of short-wavelength radiation, including ultraviolet and even x-rays. The surface of the sun (~6000 K) emits large amounts of both infrared and ultraviolet radiation, but its emission is peaked in the visible spectrum.
Planck's law describes the electromagnetic radiation in a cavity with rigid opaque walls that are not perfectly reflective at any wavelength, in a state of thermodynamic equilibrium. Subject to these provisos, the chemical composition of the walls does not matter. Planck radiation is the maximum amount of radiation any body at thermal equilibrium can emit from its surface, no matter what its chemical composition or surface structure. The thermal radiation from the surface of a physical body can be characterized by its emissivity, the radiance of the body divided by the Planck radiance, which is always less than or equal to one. The emissivity of a piece of material depends in general on its chemical composition and physical structure, its temperature, the wavelength of the radiation, the angle of emission, and the polarization, and, most especially and importantly, on its interface with the medium into which it is emitting.
The surface of a black body can be well approximated by a small hole in the wall of a large enclosure with rigid opaque walls that are not perfectly reflective at any wavelength, maintained at a uniform temperature. At equilibrium, the radiation inside this enclosure follows Planck's law, and is isotropic, homogeneous, unpolarized, and incoherent. This radiation is well sampled by the radiation emitted at right angles to the wall through the small hole that models the surface of a black body. The far field of such radiation has the same normalized spectrum as the radiation emitted at the hole, but this is only because it satisfies a certain condition on its spatial coherence, one that is not necessarily satisfied for every kind of source.
Just as the Maxwell–Boltzmann distribution for thermodynamic equilibrium at a given temperature is the unique maximum entropy energy distribution for a gas of many conserved massive particles, so also is Planck's distribution for a gas of photons, which are not conserved and have zero rest mass. If the photon gas is not initially Planckian, the second law of thermodynamics guarantees that interactions (between the photons themselves or photons and other particles) will cause the photon energy distribution to change and approach the Planck distribution. In such an approach to thermodynamic equilibrium, or if the temperature is changed or work is done on the photon gas, photons are created or annihilated in the right numbers and with the right energies to fill the cavity with a Planck distribution at the eventual equilibrium temperature. For a photon gas, already having a Planck distribution, or any other given initial isotropic and homogeneous energy distribution, in a perfectly reflecting container devoid of contained material such as a speck of carbon, if adiabatic compression work is done on, or adiabatic expansion work is done by, the gas, so slowly that it is practically reversible, then the motion of the walls during the compression or expansion, combined with the reflection of the light from them, has the effect that the new distribution is also Planckian, or follows the given initial distribution.
The spectral radiance, pressure and energy density of a photon gas at equilibrium are entirely determined by the temperature. This is because black-body radiation in a cavity that contains no material is a system consisting only of energy, with no chemical constituent. In contrast, material gases consist of chemical substances, so that, besides temperature, also the total numbers of material particles and their properties, such as mass, independently contribute to the determination of the pressure and energy density.
The quantity Bν(T) is the spectral radiance as a function of temperature and frequency. It has units of W·m−2·sr−1·Hz−1 in the SI system. An infinitesimal amount of power Bν(T) dA dΩ dν is radiated from infinitesimal surface area dA into infinitesimal solid angle dΩ in an infinitesimal frequency band of width dν centered on frequency ν. The total power radiated into any solid angle is the integral of Bν(T) over those three quantities, and is given by the Stefan–Boltzmann law. The spectral radiance of Planckian radiation from a black body has the same value for every direction and angle of polarization, and so the black body is said to be a Lambertian radiator.
Different forms
Planck's law can be encountered in several forms depending on the conventions and preferences of different scientific fields. The various forms of the law for spectral radiance are summarized in the table below. Forms on the left are most often encountered in experimental fields, while those on the right are most often encountered in theoretical fields.
|with h||with ħ|
These distributions represent the spectral radiance of blackbodies—the power emitted from the emitting surface, per unit projected area of emitting surface, per unit solid angle, per spectral unit (frequency, wavelength, wavenumber or their angular equivalents). Since the radiance is isotropic (i.e. independent of direction), the power emitted at an angle to the normal is proportional to the projected area, and therefore to the cosine of that angle as per Lambert's cosine law, and is unpolarized.
Correspondence between spectral variable forms
Different spectral variables require different corresponding forms of expression of the law. In general, one may not convert between the various forms of Planck's law simply by substituting one variable for another, because this would not take into account that the different forms have different units.
Corresponding forms of expression are related because they express one and the same physical fact: For a particular physical spectral increment, a particular physical energy increment is radiated.
This is so whether it is expressed in terms of an increment of frequency, dν, or, correspondingly, of wavelength, dλ. Introduction of a minus sign can indicate that an increment of frequency corresponds with a decrement of wavelength. For the above corresponding forms of expression of the spectral radiance, one may use an obvious expansion of notation, temporarily for the present calculation only. Then, for a particular spectral increment, the particular physical energy increment may be written
- which leads to
It follows that the location of the peak of the distribution for Planck's law depends on the choice of spectral variable.
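A short numerical sketch of this point (an illustration, not part of the original text; it uses the standard frequency and wavelength forms of the law): converting the frequency form to the wavelength form requires multiplying by the Jacobian |dν/dλ| = c/λ², not merely substituting ν = c/λ.

```python
# Sketch: a bare substitution nu -> c/lambda does not convert B_nu into B_lambda;
# the Jacobian |d nu / d lambda| = c / lambda^2 is also needed.
import numpy as np
from scipy.constants import h, c, k

def B_nu(nu, T):
    return (2 * h * nu**3 / c**2) / np.expm1(h * nu / (k * T))

def B_lam(lam, T):
    return (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * k * T))

T, lam = 5000.0, 600e-9   # example temperature (K) and wavelength (m)
print(B_lam(lam, T))                      # per-wavelength form
print(B_nu(c / lam, T) * c / lam**2)      # substitution times Jacobian: same value
print(B_nu(c / lam, T))                   # substitution alone: different value and units
```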
Spectral energy density form
These distributions have units of energy per volume per spectral unit.
Consider a cube of side L with conducting walls filled with electromagnetic radiation in thermal equilibrium at temperature T. If there is a small hole in one of the walls, the radiation emitted from the hole will be characteristic of a perfect black body. We will first calculate the spectral energy density within the cavity and then determine the spectral radiance of the emitted radiation.
At the walls of the cube, the parallel component of the electric field and the orthogonal component of the magnetic field must vanish. Analogous to the wave function of a particle in a box, one finds that the fields are superpositions of periodic functions. The three wavelengths λ1, λ2, and λ3, in the three directions orthogonal to the walls can be: λi = 2L/ni, for i = 1, 2, 3,
where the ni are integers. For each set of integers ni there are two linearly independent solutions (modes). According to quantum theory, the energy levels of a mode are given by:
The quantum number r can be interpreted as the number of photons in the mode. The two modes for each set of ni correspond to the two polarization states of the photon which has a spin of 1. Note that for r = 0 the energy of the mode is not zero. This vacuum energy of the electromagnetic field is responsible for the Casimir effect. In the following we will calculate the internal energy of the box at absolute temperature T.
According to statistical mechanics, the probability distribution over the energy levels of a particular mode is given by:
The denominator, Z(β), is the partition function of a single mode and makes Pr properly normalized:
Here we have implicitly defined
which is the energy of a single photon. As explained here, the average energy in a mode can be expressed in terms of the partition function:
This formula, apart from the first vacuum energy term, is a special case of the general formula for particles obeying Bose–Einstein statistics. Since there is no restriction on the total number of photons, the chemical potential is zero.
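The following sketch (with illustrative temperature and mode frequency, not values from the text) computes the average energy of a single mode both by direct Boltzmann-weighted summation over the photon number r and from the closed Bose–Einstein form, ignoring the vacuum term as the text does.

```python
# Sketch: average energy of one cavity mode, by direct summation over the photon
# number r and by the closed form eps / (exp(eps/kT) - 1); the vacuum term is dropped.
import numpy as np
from scipy.constants import h, k

T = 300.0        # temperature, K (illustrative)
nu = 1.0e13      # mode frequency, Hz (illustrative)
eps = h * nu     # energy of one photon in this mode
beta = 1.0 / (k * T)

r = np.arange(0, 200)                         # photon numbers (truncated sum)
weights = np.exp(-beta * eps * r)
Z = weights.sum()                             # single-mode partition function
E_avg_sum = (r * eps * weights).sum() / Z     # direct thermal average
E_avg_closed = eps / np.expm1(beta * eps)     # Bose-Einstein closed form
print(E_avg_sum, E_avg_closed)                # agree to rounding error
```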
If we measure the energy relative to the ground state, the total energy in the box follows by summing over all allowed single photon states. This can be done exactly in the thermodynamic limit as L approaches infinity. In this limit, ε becomes continuous and we can then integrate over this parameter. To calculate the energy in the box in this way, we need to evaluate how many photon states there are in a given energy range. If we write the total number of single photon states with energies between ε and ε + dε as g(ε)dε, where g(ε) is the density of states (which we'll evaluate in a moment), then we can write:
To calculate the density of states we rewrite equation (1) as follows:
where n is the norm of the vector n = (n1, n2, n3):
For every vector n with integer components larger than or equal to zero, there are two photon states. This means that the number of photon states in a certain region of n-space is twice the volume of that region. An energy range of dε corresponds to a shell of thickness dn = (2L/hc)dε in n-space. Because the components of n have to be positive, this shell spans an octant of a sphere. The number of photon states g(ε)dε, in an energy range dε, is thus given by:
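As a quick sanity check of this octant-shell counting, the following sketch (an illustration with an arbitrary shell radius and lattice cut-off, not part of the derivation) counts the integer triples directly and compares the result with the estimate 2 · (1/8) · 4πn²dn:

```python
# Sketch: count lattice points (n1, n2, n3 >= 0) in a thin shell of n-space and
# compare with the estimate 2 * (1/8) * 4*pi*n^2*dn used in the text.
import numpy as np

n_shell, dn = 50.0, 1.0
n1, n2, n3 = np.mgrid[0:80, 0:80, 0:80]
norm = np.sqrt(n1**2 + n2**2 + n3**2)
count = 2 * np.sum((norm >= n_shell) & (norm < n_shell + dn))  # factor 2: polarizations
estimate = 2 * (1.0 / 8.0) * 4 * np.pi * n_shell**2 * dn
print(count, estimate)   # roughly 8.0e3 vs 7.9e3: agreement within a few percent
```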
Inserting this in Eq. (2) gives:
From this equation one can derive the spectral energy density as a function of frequency uν(T) and as a function of wavelength uλ(T):
This is also a spectral energy density function with units of energy per unit wavelength per unit volume. Integrals of this type for Bose and Fermi gases can be expressed in terms of polylogarithms. In this case, however, it is possible to calculate the integral in closed form using only elementary functions. Substituting
in Eq. (3) makes the integration variable dimensionless, giving:
where J is a Bose–Einstein integral given by:
The total electromagnetic energy inside the box is thus given by:
where V = L3 is the volume of the box.
This is not the Stefan–Boltzmann law (which provides the total energy radiated by a black body per unit surface area per unit time), but it can be written more compactly using the Stefan–Boltzmann constant σ, giving
The constant 4σ/c is sometimes called the radiation constant.
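A numerical cross-check of these constants (a sketch, using SciPy's tabulated Stefan–Boltzmann constant): the Bose–Einstein integral J equals π⁴/15, and the prefactor 8πk⁴J/(h³c³) reproduces the radiation constant 4σ/c.

```python
# Sketch: check that J = integral of x^3/(e^x - 1) dx equals pi^4/15, and that the
# resulting energy-density constant matches the radiation constant 4*sigma/c.
import numpy as np
from scipy.constants import h, c, k, Stefan_Boltzmann as sigma
from scipy.integrate import quad

# Integrand written as x^3 e^{-x} / (1 - e^{-x}) to avoid overflow at large x.
J, _ = quad(lambda x: x**3 * np.exp(-x) / -np.expm1(-x), 0, np.inf)
print(J, np.pi**4 / 15)                      # both ~6.4939

a_derived = 8 * np.pi * k**4 * J / (h**3 * c**3)
print(a_derived, 4 * sigma / c)              # both ~7.566e-16 J m^-3 K^-4
```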
Since the radiation is the same in all directions, and propagates at the speed of light (c), the spectral radiance of radiation exiting the small hole is
It can be converted to an expression for Bλ(T) in wavelength units by substituting ν by c/λ and evaluating
Note that dimensional analysis shows that the unit of steradians, shown in the denominator of the units of the spectral radiance on the left-hand side of the equation above, is generated in and carried through the derivation but does not appear in the dimensions of any element on the right-hand side of the equation.
This derivation is based on Brehm & Mullin 1989.
Planck's law describes the unique and characteristic spectral distribution for electromagnetic radiation in thermodynamic equilibrium, when there is no net flow of matter or energy. Its physics is most easily understood by considering the radiation in a cavity with rigid opaque walls. Motion of the walls can affect the radiation. If the walls are not opaque, then the thermodynamic equilibrium is not isolated. It is of interest to explain how the thermodynamic equilibrium is attained. There are two main cases: (a) when the approach to thermodynamic equilibrium is in the presence of matter, when the walls of the cavity are imperfectly reflective for every wavelength or when the walls are perfectly reflective while the cavity contains a small black body (this was the main case considered by Planck); or (b) when the approach to equilibrium is in the absence of matter, when the walls are perfectly reflective for all wavelengths and the cavity contains no matter. For matter not enclosed in such a cavity, thermal radiation can be approximately explained by appropriate use of Planck's law.
Classical physics provides an account of some aspects of the Planck distribution, such as the Stefan–Boltzmann law, and the Wien displacement law. Other aspects of the Planck distribution cannot be accounted for in classical physics, and require quantum theory for their understanding. For the case of the presence of matter, quantum mechanics provides a good account, as found below in the section headed Einstein coefficients. This was the case considered by Einstein, and is nowadays used for quantum optics. For the case of the absence of matter, quantum field theory is called upon, because quantum mechanics alone does not provide a sufficient account.
Quantum theoretical explanation of Planck's law views the radiation as a gas of massless, uncharged, bosonic particles, namely photons, in thermodynamic equilibrium. Photons are viewed as the carriers of the electromagnetic interaction between electrically charged elementary particles. Photon numbers are not conserved. Photons are created or annihilated in the right numbers and with the right energies to fill the cavity with the Planck distribution. For a photon gas in thermodynamic equilibrium, the internal energy density is entirely determined by the temperature; moreover, the pressure is entirely determined by the internal energy density. This is unlike the case of thermodynamic equilibrium for material gases, for which the internal energy is determined not only by the temperature, but also, independently, by the respective numbers of the different molecules, and independently again, by the specific characteristics of the different molecules. For different material gases at given temperature, the pressure and internal energy density can vary independently, because different molecules can carry independently different excitation energies.
Planck's law arises as a limit of the Bose–Einstein distribution, the energy distribution describing non-interactive bosons in thermodynamic equilibrium. In the case of massless bosons such as photons and gluons, the chemical potential is zero and the Bose-Einstein distribution reduces to the Planck distribution. There is another fundamental equilibrium energy distribution: the Fermi–Dirac distribution, which describes fermions, such as electrons, in thermal equilibrium. The two distributions differ because multiple bosons can occupy the same quantum state, while multiple fermions cannot. At low densities, the number of available quantum states per particle is large, and this difference becomes irrelevant. In the low density limit, the Bose-Einstein and the Fermi-Dirac distribution each reduce to the Maxwell–Boltzmann distribution.
Kirchhoff's law of thermal radiation
Kirchhoff's law of thermal radiation is a succinct and brief account of a complicated physical situation. The following is an introductory sketch of that situation, and is very far from being a rigorous physical argument. The purpose here is only to summarize the main physical factors in the situation, and the main conclusions.
Spectral dependence of thermal radiation
There is a difference between conductive heat transfer and radiative heat transfer. Radiative heat transfer can be filtered to pass only a definite band of radiative frequencies. It is generally known that the hotter a body becomes, the more heat it radiates, and this at every frequency. In a cavity in an opaque body with rigid walls in thermodynamic equilibrium, there is only one temperature, and it must be shared in common by the radiation of every frequency.
One may imagine two opaque rigid bodies, each in its own isolated thermodynamic equilibrium, each containing an internal cavity containing thermal radiation in equilibrium with it. One may imagine an optical device that allows heat transfer between the two cavities, filtered to pass only a definite band of radiative frequencies. If the values of the spectral radiances of the radiations in the cavities differ in that frequency band, heat may be expected to pass from the hotter to the colder. One might propose to use such a filtered transfer of heat in such a band to drive a heat engine. If the two bodies are at the same temperature, the second law of thermodynamics does not allow the heat engine to work. It may be inferred that for a temperature common to the two bodies, the values of the spectral radiances in the pass-band must also be common. This must hold for every frequency band. This became clear to Balfour Stewart and later to Kirchhoff. Balfour Stewart found experimentally that of all surfaces, one of lamp black emitted the greatest amount of thermal radiation for every quality of radiation, judged by various filters. Thinking theoretically, Kirchhoff went a little further, and pointed out that this implied that the spectral radiance, as a function of radiative frequency, of any such cavity in thermodynamic equilibrium must be a unique universal function of temperature. He postulated an ideal black body that interfaced with its surrounds in just such a way as to absorb all the radiation that falls on it. By the Helmholtz reciprocity principle, radiation from the interior of such a body would pass unimpeded, directly to its surrounds without reflection at the interface. In thermodynamic equilibrium, the thermal radiation emitted from such a body would have that unique universal spectral radiance as a function of temperature.
Relation between absorptivity and emissivity
One may imagine a small homogeneous spherical material body labeled X at a temperature TX, lying in a radiation field within a large cavity with walls of material labeled Y at a temperature TY. The body X emits its own thermal radiation. At a particular frequency ν, the radiation emitted from a particular cross-section through the centre of X in one sense in a direction normal to that cross-section may be denoted Iν, X (TX), characteristically for the material of X. At that frequency ν, the radiative power from the walls into that cross-section in the opposite sense in that direction may be denoted Iν, Y (TY), for the wall temperature TY. For the material of X, defining the absorptivity αν, X,Y (TX, TY) as the fraction of that incident radiation absorbed by X, that incident energy is absorbed at a rate αν, X,Y (TX, TY) Iν, Y (TY).
The rate q(ν, TX, TY) of accumulation of energy in one sense into the cross-section of the body can then be expressed
Kirchhoff's seminal insight, mentioned just above, was that, at thermodynamic equilibrium at temperature T, there exists a unique universal radiative distribution, nowadays denoted Bν(T), that is independent of the chemical characteristics of the materials X and Y, that leads to a very valuable understanding of the radiative exchange equilibrium of any body at all, as follows.
When there is thermodynamic equilibrium at temperature T, the cavity radiation from the walls has that unique universal value, so that Iν, Y (TY) = Bν(T). Further, one may define the emissivity εν, X (TX) of the material of the body X just so that at thermodynamic equilibrium at temperature TX = T, one has Iν, X (TX) = Iν, X (T) = εν, X (T) Bν(T) .
When thermal equilibrium prevails at temperature T = TX = TY, the rate of accumulation of energy vanishes so that q(ν,TX,TY) = 0. It follows that:
Kirchhoff pointed out that it follows that
Introducing the special notation αν, X (T ) for the absorptivity of material X at thermodynamic equilibrium at temperature T (justified by a discovery of Einstein, as indicated below), one further has the equality
The equality of absorptivity and emissivity here demonstrated is specific for thermodynamic equilibrium at temperature T and is in general not to be expected to hold when conditions of thermodynamic equilibrium do not hold. The emissivity and absorptivity are each separately properties of the molecules of the material but they depend differently upon the distributions of states of molecular excitation on the occasion, because of a phenomenon known as "stimulated emission", that was discovered by Einstein. On occasions when the material is in thermodynamic equilibrium or in a state known as local thermodynamic equilibrium, the emissivity and absorptivity become equal. Very strong incident radiation or other factors can disrupt thermodynamic equilibrium or local thermodynamic equilibrium. Local thermodynamic equilibrium in a gas means that molecular collisions far outweigh light emission and absorption in determining the distributions of states of molecular excitation.
Kirchhoff pointed out that he did not know the precise character of Bν(T), but he thought it important that it should be found out. Four decades after Kirchhoff's insight of the general principles of its existence and character, Planck's contribution was to determine the precise mathematical expression of that equilibrium distribution Bν(T).
Black body
In physics, one considers an ideal black body, here labeled B, defined as one that completely absorbs all of the electromagnetic radiation falling upon it at every frequency ν (hence the term "black"). According to Kirchhoff's law of thermal radiation, this entails that, for every frequency ν, at thermodynamic equilibrium at temperature T, one has αν, B (T) = εν, B (T) = 1, so that the thermal radiation from a black body is always equal to the full amount specified by Planck's law. No physical body can emit thermal radiation that exceeds that of a black body, since if it were in equilibrium with a radiation field, it would be emitting more energy than was incident upon it.
Though perfectly black materials do not exist, in practice a black surface can be accurately approximated. As to its material interior, a body of condensed matter, liquid, solid, or plasma, with a definite interface with its surroundings, is completely black to radiation if it is completely opaque. That means that it absorbs all of the radiation that penetrates the interface of the body with its surroundings, and enters the body. This is not too difficult to achieve in practice. On the other hand, a perfectly black interface is not found in nature. A perfectly black interface reflects no radiation, but transmits all that falls on it, from either side. The best practical way to make an effectively black interface is to simulate an 'interface' by a small hole in the wall of a large cavity in a completely opaque rigid body of material that does not reflect perfectly at any frequency, with its walls at a controlled temperature. Beyond these requirements, the component material of the walls is unrestricted. Radiation entering the hole has almost no possibility of escaping the cavity without being absorbed by multiple impacts with its walls.
Lambert's cosine law
As explained by Planck, a radiating body has an interior consisting of matter, and an interface with its contiguous neighbouring material medium, which is usually the medium from within which the radiation from the surface of the body is observed. The interface is not composed of physical matter but is a theoretical conception, a mathematical two-dimensional surface, a joint property of the two contiguous media, strictly speaking belonging to neither separately. Such an interface can neither absorb nor emit, because it is not composed of physical matter; but it is the site of reflection and transmission of radiation, because it is a surface of discontinuity of optical properties. The reflection and transmission of radiation at the interface obey the Stokes–Helmholtz reciprocity principle.
At any point in the interior of a black body located inside a cavity in thermodynamic equilibrium at temperature T the radiation is homogeneous, isotropic and unpolarized. A black body absorbs all and reflects none of the electromagnetic radiation incident upon it. According to the Helmholtz reciprocity principle, radiation from the interior of a black body is not reflected at its surface, but is fully transmitted to its exterior. Because of the isotropy of the radiation in the body's interior, the spectral radiance of radiation transmitted from its interior to its exterior through its surface is independent of direction.
This is expressed by saying that radiation from the surface of a black body in thermodynamic equilibrium obeys Lambert's cosine law. This means that the spectral flux dΦ(dA, θ, dΩ, dν) from a given infinitesimal element of area dA of the actual emitting surface of the black body, detected from a given direction that makes an angle θ with the normal to the actual emitting surface at dA, into an element of solid angle of detection dΩ centred on the direction indicated by θ, in an element of frequency bandwidth dν, can be represented as
where L0(dA, dν) denotes the flux, per unit area per unit frequency per unit solid angle, that area dA would show if it were measured in its normal direction θ = 0.
The factor cos θ is present because the area to which the spectral radiance refers directly is the projection, of the actual emitting surface area, onto a plane perpendicular to the direction indicated by θ . This is the reason for the name cosine law.
Taking into account the independence of direction of the spectral radiance of radiation from the surface of a black body in thermodynamic equilibrium, one has L0(dA, dν) = Bν (T) and so
Thus Lambert's cosine law expresses the independence of direction of the spectral radiance Bν (T) of the surface of a black body in thermodynamic equilibrium.
Stefan–Boltzmann law
The total power emitted per unit area at the surface of a black body (P) may be found by integrating the black body spectral flux found from Lambert's law over all frequencies, and over the solid angles corresponding to a hemisphere (h) above the surface.
The infinitesimal solid angle can be expressed in spherical polar coordinates: dΩ = sin θ dθ dφ.
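A sketch of this integration (illustrative, using the frequency form of the law and the dimensionless substitution x = hν/kBT so that the numerical integral is well behaved): the angular integral over the hemisphere contributes a factor π, and the spectral integral then reproduces σT⁴.

```python
# Sketch: P = (hemispherical angular factor) x (integral of B_nu over frequency),
# which should reproduce sigma * T^4.
import numpy as np
from scipy.constants import h, c, k, Stefan_Boltzmann as sigma
from scipy.integrate import quad, dblquad

T = 5778.0

# Angular factor: integral of cos(theta) sin(theta) d(theta) d(phi) over the hemisphere = pi.
angular, _ = dblquad(lambda theta, phi: np.cos(theta) * np.sin(theta),
                     0, 2 * np.pi, 0, np.pi / 2)

# Spectral factor via x = h*nu/(k*T):
#   integral of B_nu d(nu) = (2 h / c^2) (k T / h)^4 * integral of x^3/(e^x - 1) dx.
x_int, _ = quad(lambda x: x**3 * np.exp(-x) / -np.expm1(-x), 0, np.inf)
spectral = (2 * h / c**2) * (k * T / h)**4 * x_int

print(angular * spectral, sigma * T**4)   # both ~6.32e7 W m^-2
```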
Radiative transfer
The equation of radiative transfer describes the way in which radiation is affected as it travels through a material medium. For the special case in which the material medium is in thermodynamic equilibrium in the neighborhood of a point in the medium, Planck's law is of special importance.
For simplicity, we can consider the linear steady state, without scattering. The equation of radiative transfer states that for a beam of light going through a small distance ds, energy is conserved: The change in the (spectral) radiance of that beam (Iν) is equal to the amount removed by the material medium plus the amount gained from the material medium. If the radiation field is in equilibrium with the material medium, these two contributions will be equal. The material medium will have a certain emission coefficient and absorption coefficient.
The absorption coefficient is the fractional change in the intensity of the light beam as it travels the distance ds, and has units of 1/length. It is composed of two parts, the decrease due to absorption and the increase due to stimulated emission. Stimulated emission is emission by the material body which is caused by and is proportional to the incoming radiation. It is included in the absorption term because, like absorption, it is proportional to the intensity of the incoming radiation. Since the amount of absorption will generally vary linearly as the density of the material, we may define a "mass absorption coefficient" which is a property of the material itself. The change in intensity of a light beam due to absorption as it traverses a small distance ds will then be
The "mass emission coefficient" is equal to the radiance per unit volume of a small volume element divided by its mass (since, as for the mass absorption coefficient, the emission is proportional to the emitting mass) and has units of power/solid angle/frequency/density. Like the mass absorption coefficient, it too is a property of the material itself. The change in a light beam as it traverses a small distance ds will then be
The equation of radiative transfer will then be the sum of these two contributions:
If the radiation field is in equilibrium with the material medium, then the radiation will be homogeneous (independent of position) so that dIν/ds = 0 and:
which is another statement of Kirchhoff's law, relating two material properties of the medium, and which yields the radiative transfer equation at a point around which the medium is in thermodynamic equilibrium:
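In this form the transfer equation along the beam can be written dIν/ds = κν ρ (Bν(T) − Iν) (the symbols used here for the coefficients are an assumption of this sketch, chosen to match the description above). The following minimal sketch, with hypothetical coefficient values, integrates it numerically and shows the beam relaxing to the Planck radiance of the medium over a few optical depths.

```python
# Sketch (hypothetical values): Euler-integrate dI/ds = kappa*rho*(B - I) along a
# path through a uniform medium; the beam radiance relaxes to the Planck radiance B.
import numpy as np

B = 1.0e-8                 # Planck radiance of the medium at its temperature (arbitrary units)
kappa_rho = 0.5            # absorption coefficient times density, 1/m (hypothetical)
ds, n_steps = 0.01, 2000   # 20 m path, i.e. 10 optical depths

I = 0.0                    # radiance of the incoming beam
for _ in range(n_steps):
    I += kappa_rho * (B - I) * ds   # emission adds, absorption (incl. stimulated) removes
print(I, B)                # I has essentially reached B
```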
Einstein coefficients
The principle of detailed balance states that, at thermodynamic equilibrium, each elementary process is equilibrated by its reverse process.
In 1916, Albert Einstein applied this principle on an atomic level to the case of an atom radiating and absorbing radiation due to transitions between two particular energy levels, giving a deeper insight into the equation of radiative transfer and Kirchhoff's law for this type of radiation. If level 1 is the lower energy level with energy E1, and level 2 is the upper energy level with energy E2, then the frequency ν of the radiation radiated or absorbed will be determined by Bohr's frequency condition: hν = E2 − E1.
If n1 and n2 are the number densities of the atom in states 1 and 2 respectively, then the rate of change of these densities in time will be due to three processes:
- Spontaneous emission
- Stimulated emission
- Photo-absorption
where Iν is the spectral radiance of the radiation field. The three parameters A21, B21 and B12, known as the Einstein coefficients, are associated with the photon frequency ν produced by the transition between two energy levels (states). As a result, each line in a spectrum has its own set of associated coefficients. When the atoms and the radiation field are in equilibrium, the radiance will be given by Planck's law and, by the principle of detailed balance, the sum of these rates must be zero:
Since the atoms are also in equilibrium, the populations of the two levels are related by the Boltzmann distribution:
where g1 and g2 are the multiplicities of the respective energy levels. Combining the above two equations with the requirement that they be valid at any temperature yields two relationships between the Einstein coefficients:
so that knowledge of one coefficient will yield the other two. For the case of isotropic absorption and emission, the emission coefficient and absorption coefficient defined in the radiative transfer section above can be expressed in terms of the Einstein coefficients. The relationships between the Einstein coefficients will yield the expression of Kirchhoff's law expressed in the Radiative transfer section above, namely that
These coefficients apply to both atoms and molecules.
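The following sketch illustrates detailed balance numerically. The transition frequency, temperature and multiplicities are hypothetical, and the relations g1B12 = g2B21 and A21/B21 = 2hν³/c² (the radiance convention is assumed here) are imposed; solving the zero-net-rate condition for the radiance then reproduces Planck's law at the gas temperature.

```python
# Sketch: two-level detailed balance with the Einstein relations imposed
# (g1*B12 = g2*B21 and A21/B21 = 2*h*nu^3/c^2 in the radiance convention assumed here).
import numpy as np
from scipy.constants import h, c, k

T = 500.0                        # gas temperature, K (hypothetical)
nu = 2.0e13                      # transition frequency, Hz (hypothetical)
g1, g2 = 1, 3                    # multiplicities of the two levels (hypothetical)

B21 = 1.0                        # arbitrary scale; only ratios matter
B12 = (g2 / g1) * B21
A21 = (2 * h * nu**3 / c**2) * B21

n2_over_n1 = (g2 / g1) * np.exp(-h * nu / (k * T))    # Boltzmann populations
# Zero net rate:  A21*n2 + B21*n2*I - B12*n1*I = 0  =>  I = A21*n2 / (B12*n1 - B21*n2)
I_balance = A21 * n2_over_n1 / (B12 - B21 * n2_over_n1)
I_planck = (2 * h * nu**3 / c**2) / np.expm1(h * nu / (k * T))
print(I_balance, I_planck)       # identical
```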
The distributions expressed per unit frequency (or per unit wavenumber) peak at
where W is the Lambert W function.
The distributions expressed per unit wavelength, however, peak at a different energy
The reason for this is that, as mentioned above, one cannot go from (for example) Bν to Bλ simply by substituting ν by c/λ. In addition, one must also multiply the result of the substitution by |dν/dλ| = c/λ². This factor shifts the peak of the distribution to higher energies.
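Both peak conditions have closed forms in terms of the Lambert W function mentioned above. The sketch below (using the solar temperature only as an example) evaluates them and shows that the frequency-form and wavelength-form peaks do not correspond to the same photon energy.

```python
# Sketch: peak photon energies of the per-frequency and per-wavelength forms,
# written with the Lambert W function, and the corresponding peaks for T = 5778 K.
import numpy as np
from scipy.special import lambertw
from scipy.constants import h, c, k

x_nu = 3 + lambertw(-3 * np.exp(-3)).real    # maximum of x^3/(e^x - 1): x ~ 2.821
x_lam = 5 + lambertw(-5 * np.exp(-5)).real   # maximum of x^5/(e^x - 1): x ~ 4.965

T = 5778.0
nu_peak = x_nu * k * T / h                   # ~3.4e14 Hz
lam_peak = h * c / (x_lam * k * T)           # ~502 nm (Wien displacement law)
print(x_nu, x_lam)
print(nu_peak, lam_peak, c / nu_peak)        # note that c/nu_peak is not lam_peak
```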
In the limit of low frequencies (i.e. long wavelengths), Planck's law tends to the Rayleigh–Jeans law, whose radiance increases as the square of the frequency, illustrating the ultraviolet catastrophe. In the limit of high frequencies (i.e. small wavelengths) Planck's law tends to the Wien approximation.
Both approximations were known to Planck before he developed his law. He was led by these two approximations to develop a law which incorporated both limits, which ultimately became Planck's law.
Wien's displacement law in its stronger form states that the shape of Planck's law is independent of temperature. It is therefore possible to list the percentile points of the total radiation as well as the peaks for wavelength and frequency, in a form which gives the wavelength λ when divided by temperature T. The second row of the following table lists the corresponding values of λT, that is, those values of x for which the wavelength λ is x/T micrometers at the radiance percentile point given by the corresponding entry in the first row.
That is, 0.01% of the radiation is at a wavelength below 910/T µm, 20% below 2676/T µm, etc. The wavelength and frequency peaks are in bold and occur at 25.0% and 64.6% respectively. The 41.8% point is the wavelength-frequency-neutral peak. These are the points at which the respective Planck-law functions λ−5, ν3 and ν2λ−2, divided by exp(hν/kBT) − 1, attain their maxima. Also note the much smaller gap in ratio of wavelengths between 0.1% and 0.01% (1110 is 22% more than 910) than between 99.9% and 99.99% (113374 is 120% more than 51613), reflecting the exponential decay of energy at short wavelengths (left end) and polynomial decay at long.
Which peak to use depends on the application. The conventional choice is the wavelength peak at 25.0% given by Wien's displacement law in its weak form. For some purposes the median or 50% point dividing the total radiation into two halves may be more suitable. The latter is closer to the frequency peak than to the wavelength peak because the radiance drops exponentially at short wavelengths and only polynomially at long. The neutral peak occurs at a shorter wavelength than the median for the same reason.
For the Sun, T is 5778 K, allowing the percentile points of the Sun's radiation, in nanometers, to be tabulated as follows when modeled as a black body radiator, to which the Sun is a fair approximation. For comparison a planet modeled as a black body radiating at a nominal 288 K (15 °C) as a representative value of the Earth's highly variable temperature has wavelengths more than twenty times that of the Sun, tabulated in the third row in micrometers (thousands of nanometers).
|Percentile||0.01%||0.1%||1%||10%||20%||25.0%||30%||40%||41.8%||50%||60%||64.6%||70%||80%||90%||99%||99.9%||99.99%|
|Sun λ (nm)||157||192||251||380||463||502||540||620||635||711||821||882||967||1188||1623||3961||8933||19620|
|288 K planet λ (µm)||3.16||3.85||5.03||7.62||9.29||10.1||10.8||12.4||12.7||14.3||16.5||17.7||19.4||23.8||32.6||79.5||179||394|
That is, only 1% of the Sun's radiation is at wavelengths shorter than 251 nm, and only 1% at longer than 3961 nm. Expressed in micrometers this puts 98% of the Sun's radiation in the range from 0.251 to 3.961 µm. The corresponding 98% of energy radiated from a 288 K planet is from 5.03 to 79.5 µm, well above the range of solar radiation (or below if expressed in terms of frequencies instead of wavelengths).
A consequence of this more-than-order-of-magnitude difference in wavelength between solar and planetary radiation is that filters designed to pass one and block the other are easy to construct. For example, windows fabricated of ordinary glass or transparent plastic pass at least 80% of the incoming 5778 K solar radiation, which is below 1.2 µm in wavelength, while blocking over 99% of the outgoing 288 K thermal radiation from 5 µm upwards, wavelengths at which most kinds of glass and plastic of construction-grade thickness are effectively opaque.
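These percentile and filtering statements can be checked with a short numerical sketch. Because of the strong form of Wien's displacement law, the fraction of the total radiation below a wavelength λ0 depends only on x0 = hc/(λ0 kB T); the values below use the solar and planetary temperatures quoted in the text.

```python
# Sketch: cumulative fraction of black-body radiation emitted below a wavelength,
# computed from the dimensionless variable x0 = h*c / (lambda0 * k * T).
import numpy as np
from scipy.constants import h, c, k
from scipy.integrate import quad

def fraction_below(lam0, T):
    x0 = h * c / (lam0 * k * T)
    # Integrand written as x^3 e^{-x} / (1 - e^{-x}) to avoid overflow at large x.
    tail, _ = quad(lambda x: x**3 * np.exp(-x) / -np.expm1(-x), x0, np.inf)
    return tail / (np.pi**4 / 15)            # normalize by the full integral

T_sun, T_planet = 5778.0, 288.0
print(fraction_below(502e-9, T_sun))     # ~0.25: the wavelength peak is the 25% point
print(fraction_below(711e-9, T_sun))     # ~0.50: the median of the solar spectrum
print(fraction_below(1.2e-6, T_sun))     # ~0.80: most solar radiation passes the window
print(fraction_below(5.0e-6, T_planet))  # ~0.01: ~99% of 288 K radiation lies above 5 um
```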
The Sun's radiation is that arriving at the top of the atmosphere (TOA). As can be read from the table, radiation below 400 nm, or ultraviolet, is about 12%, while that above 700 nm, or infrared, starts at about the 49% point and so accounts for 51% of the total. Hence only 37% of the TOA insolation is visible to the human eye. The atmosphere shifts these percentages substantially in favor of visible light as it absorbs most of the ultraviolet and significant amounts of infrared.
Balfour Stewart
In 1858, Balfour Stewart described his experiments on the thermal radiative emissive and absorptive powers of polished plates of various substances, compared with the powers of lamp-black surfaces, at the same temperature. Stewart chose lamp-black surfaces as his reference because of various previous experimental findings, especially those of Pierre Prevost and of John Leslie. He wrote "Lamp-black, which absorbs all the rays that fall upon it, and therefore possesses the greatest possible absorbing power, will possess also the greatest possible radiating power."
Stewart measured radiated power with a thermo-pile and sensitive galvanometer read with a microscope. He was concerned with selective thermal radiation, which he investigated with plates of substances that radiated and absorbed selectively for different qualities of radiation rather than maximally for all qualities of radiation. He discussed the experiments in terms of rays which could be reflected and refracted, and which obeyed the Helmholtz reciprocity principle (though he did not use an eponym for it). He did not in this paper mention that the qualities of the rays might be described by their wavelengths, nor did he use spectrally resolving apparatus such as prisms or diffraction gratings. His work was quantitative within these constraints. He made his measurements in a room temperature environment, and quickly so as to catch his bodies in a condition near the thermal equilibrium in which they had been prepared by heating to equilibrium with boiling water. His measurements confirmed that substances that emit and absorb selectively respect the principle of selective equality of emission and absorption at thermal equilibrium.
Stewart offered a theoretical proof that this should be the case separately for every selected quality of thermal radiation, but his mathematics was not rigorously valid. According to historian D. M. Siegel: "He was not a practitioner of the more sophisticated techniques of nineteenth-century mathematical physics; he did not even make use of the functional notation in dealing with spectral distributions." He made no mention of thermodynamics in this paper, though he did refer to conservation of vis viva. He proposed that his measurements implied that radiation was both absorbed and emitted by particles of matter throughout depths of the media in which it propagated. He applied the Helmholtz reciprocity principle to account for the material interface processes as distinct from the processes in the interior material. He concluded that his experiments showed that, in the interior of an enclosure in thermal equilibrium, the radiant heat, reflected and emitted combined, leaving any part of the surface, regardless of its substance, was the same as would have left that same portion of the surface if it had been composed of lamp-black. He did not mention the possibility of ideally perfectly reflective walls; in particular he noted that highly polished real physical metals absorb very slightly.
Gustav Kirchhoff
In 1859, not knowing of Stewart's work, Gustav Robert Kirchhoff reported the coincidence of the wavelengths of spectrally resolved lines of absorption and of emission of visible light. Importantly for thermal physics, he also observed that bright lines or dark lines were apparent depending on the temperature difference between emitter and absorber.
Kirchhoff then went on to consider bodies that emit and absorb heat radiation, in an opaque enclosure or cavity, in equilibrium at temperature T.
Here is used a notation different from Kirchhoff's. Here, the emitting power E(T, i) denotes a dimensioned quantity, the total radiation emitted by a body labeled by index i at temperature T. The total absorption ratio a(T, i) of that body is dimensionless, the ratio of absorbed to incident radiation in the cavity at temperature T . (In contrast with Balfour Stewart's, Kirchhoff's definition of his absorption ratio did not refer in particular to a lamp-black surface as the source of the incident radiation.) Thus the ratio E(T, i) / a(T, i) of emitting power to absorption ratio is a dimensioned quantity, with the dimensions of emitting power, because a(T, i) is dimensionless. Also here the wavelength-specific emitting power of the body at temperature T is denoted by E(λ, T, i) and the wavelength-specific absorption ratio by a(λ, T, i) . Again, the ratio E(λ, T, i) / a(λ, T, i) of emitting power to absorption ratio is a dimensioned quantity, with the dimensions of emitting power.
In a second report made in 1859, Kirchhoff announced a new general principle or law for which he offered a theoretical and mathematical proof, though he did not offer quantitative measurements of radiation powers. His theoretical proof was and still is considered by some writers to be invalid. His principle, however, has endured: it was that for heat rays of the same wavelength, in equilibrium at a given temperature, the wavelength-specific ratio of emitting power to absorption ratio has one and the same common value for all bodies that emit and absorb at that wavelength. In symbols, the law stated that the wavelength-specific ratio E(λ, T, i) / a(λ, T, i) has one and the same value for all bodies, that is for all values of index i . In this report there was no mention of black bodies.
In 1860, still not knowing of Stewart's measurements for selected qualities of radiation, Kirchhoff pointed out that it was long established experimentally that for total heat radiation, of unselected quality, emitted and absorbed by a body in equilibrium, the dimensioned total radiation ratio E(T, i) / a(T, i), has one and the same value common to all bodies, that is, for every value of the material index i. Again without measurements of radiative powers or other new experimental data, Kirchhoff then offered a fresh theoretical proof of his new principle of the universality of the value of the wavelength-specific ratio E(λ, T, i) / a(λ, T, i) at thermal equilibrium. His fresh theoretical proof was and still is considered by some writers to be invalid.
But more importantly, it relied on a new theoretical postulate of "perfectly black bodies", which is the reason why one speaks of Kirchhoff's law. Such black bodies showed complete absorption in their infinitely thin most superficial surface. They correspond to Balfour Stewart's reference bodies, with internal radiation, coated with lamp-black. They were not the more realistic perfectly black bodies later considered by Planck. Planck's black bodies radiated and absorbed only by the material in their interiors; their interfaces with contiguous media were only mathematical surfaces, capable neither of absorption nor emission, but only of reflecting and transmitting with refraction.
Kirchhoff's proof considered an arbitrary non-ideal body labeled i as well as various perfect black bodies labeled BB . It required that the bodies be kept in a cavity in thermal equilibrium at temperature T . His proof intended to show that the ratio E(λ, T, i) / a(λ, T, i) was independent of the nature i of the non-ideal body, however partly transparent or partly reflective it was.
His proof first argued that for wavelength λ and at temperature T, at thermal equilibrium, all perfectly black bodies of the same size and shape have the one and the same common value of emissive power E(λ, T, BB), with the dimensions of power. His proof noted that the dimensionless wavelength-specific absorption ratio a(λ, T, BB) of a perfectly black body is by definition exactly 1. Then for a perfectly black body, the wavelength-specific ratio of emissive power to absorption ratio E(λ, T, BB) / a(λ, T, BB) is again just E(λ, T, BB), with the dimensions of power. Kirchhoff considered, successively, thermal equilibrium with the arbitrary non-ideal body, and with a perfectly black body of the same size and shape, in place in his cavity in equilibrium at temperature T . He argued that the flows of heat radiation must be the same in each case. Thus he argued that at thermal equilibrium the ratio E(λ, T, i) / a(λ, T, i) was equal to E(λ, T, BB), which may now be denoted Bλ (λ, T), a continuous function, dependent only on λ at fixed temperature T, and an increasing function of T at fixed wavelength λ, at low temperatures vanishing for visible but not for longer wavelengths, with positive values for visible wavelengths at higher temperatures, which does not depend on the nature i of the arbitrary non-ideal body. (Geometrical factors, taken into detailed account by Kirchhoff, have been ignored in the foregoing.)
Thus Kirchhoff's law of thermal radiation can be stated: For any material at all, radiating and absorbing in thermodynamic equilibrium at any given temperature T, for every wavelength λ, the ratio of emissive power to absorptive ratio has one universal value, which is characteristic of a perfect black body, and is an emissive power which we here represent by Bλ (λ, T) . (For our notation Bλ (λ, T), Kirchhoff's original notation was simply e.)
Kirchhoff announced that the determination of the function Bλ (λ, T) was a problem of the highest importance, though he recognized that there would be experimental difficulties to be overcome. He supposed that like other functions that do not depend on the properties of individual bodies, it would be a simple function. That function Bλ (λ, T) has occasionally been called 'Kirchhoff's (emission, universal) function', though its precise mathematical form would not be known for another forty years, till it was discovered by Planck in 1900. The theoretical proof for Kirchhoff's universality principle was worked on and debated by various physicists over the same time, and later. Kirchhoff stated later in 1860 that his theoretical proof was better than Balfour Stewart's, and in some respects it was so. Kirchhoff's 1860 paper did not mention the second law of thermodynamics, and of course did not mention the concept of entropy which had not at that time been established. In a more considered account in a book in 1862, Kirchhoff mentioned the connection of his law with "Carnot's principle", which is a form of the second law.
Empirical sources of Planck's law
In 1860, Kirchhoff predicted experimental difficulties for the empirical determination of the function that described the dependence of the black-body spectrum as a function only of temperature and wavelength. And so it turned out. It took some forty years of development of improved methods of measurement of electromagnetic radiation to get a reliable result.
In 1865, John Tyndall described radiation from electrically heated filaments and from carbon arcs as visible and invisible. Tyndall spectrally decomposed the radiation by use of a rock salt prism, which passed heat as well as visible rays, and measured the radiation intensity by means of a thermopile.
In 1880, André-Prosper-Paul Crova published a diagram of the three-dimensional appearance of the graph of the strength of thermal radiation as a function of wavelength and temperature.
In a series of papers from 1881 to 1886, Langley reported measurements of the spectrum of heat radiation, using diffraction gratings and prisms, and the most sensitive detectors that he could make. He reported that there was a peak intensity that increased with temperature, that the shape of the spectrum was not symmetrical about the peak, that there was a strong fall-off of intensity when the wavelength was shorter than an approximate cut-off value for each temperature, that the approximate cut-off wavelength decreased with increasing temperature, and that the wavelength of the peak intensity decreased with temperature, so that the intensity increased strongly with temperature for short wavelengths that were longer than the approximate cut-off for the temperature.
In 1898, Otto Lummer and Ferdinand Kurlbaum published an account of their cavity radiation source. Their design has been used largely unchanged for radiation measurements to the present day. It was a platinum box, divided by diaphragms, with its interior blackened with iron oxide. It was an important ingredient for the progressively improved measurements that led to the discovery of Planck's law. A version described in 1901 had its interior blackened with a mixture of chromium, nickel, and cobalt oxides.
The importance of the Lummer and Kurlbaum cavity radiation source was that it was an experimentally accessible source of black-body radiation, as distinct from radiation from a simply exposed incandescent solid body, which previously was the nearest available experimental approximation to black-body radiation over a suitable range of temperatures. The simply exposed incandescent solid bodies, that had previously been used, emitted radiation with departures from the black-body spectrum that made it impossible to find the true black-body spectrum from experiments.
Planck's views just before the empirical facts led him to find his eventual law
Theoretical and empirical progress enabled Lummer and Pringsheim to write in 1899 that available experimental evidence was approximately consistent with the specific intensity law Cλ−5 exp(−c/λT) where C and c denote empirically measurable constants, and where λ and T denote wavelength and temperature respectively. For theoretical reasons, Planck at that time accepted this formulation, which has an effective cut-off of short wavelengths.
Finding the empirical law
Max Planck originally produced his law on 19 October 1900 as an improvement upon the Wien approximation, published in 1896 by Wilhelm Wien, which fit the experimental data at short wavelengths (high frequencies) but deviated from it at long wavelengths (low frequencies). In June 1900, based on heuristic theoretical considerations, Rayleigh had suggested a formula that he proposed might be checked experimentally. The suggestion was that the Stewart–Kirchhoff universal function might be of the form c1Tν2 exp(−c2ν/T). This was not the celebrated Rayleigh–Jeans formula 8πkBTν2c−3, which did not emerge until 1905, though it did reduce to the latter for long wavelengths, which are the relevant ones here. According to Klein, one may speculate that it is likely that Planck had seen this suggestion though he did not mention it in his papers of 1900 and 1901. Planck would have been aware of various other proposed formulas which had been offered. On 7 October 1900, Rubens told Planck that in the complementary domain (long wavelength, low frequency), and only there, Rayleigh's 1900 formula fitted the observed data well.
For long wavelengths, Rayleigh's 1900 heuristic formula approximately meant that energy was proportional to temperature, U = const. T. It is known that dS/dU = 1/T and this leads to dS/dU = const./U and thence to d2S/dU2 = − const./U2 for long wavelengths. But for short wavelengths, the Wien formula leads to 1/T = − const. ln U + const. and thence to d2S/dU2 = − const./U for short wavelengths. Planck perhaps patched together these two heuristic formulas, for long and for short wavelengths, to produce a formula d2S/dU2 = − const./(U(const. + U)).
This led Planck to the formula Cλ−5/(exp(c/λT) − 1), where Planck used the symbols C and c to denote empirical fitting constants.
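A compact way to see how interpolating the two entropy limits yields a formula of this type is the following sketch (the constants a and b here are illustrative, not Planck's own symbols, and the integration constant is chosen so that 1/T tends to 0 as U grows):

```latex
% Interpolation sketch; a and b are illustrative positive constants.
\frac{d^2 S}{dU^2} = -\frac{a}{U\,(U+b)}
\quad\Longrightarrow\quad
\frac{1}{T} = \frac{dS}{dU} = \frac{a}{b}\,\ln\!\frac{U+b}{U}
\quad\Longrightarrow\quad
U = \frac{b}{e^{\,b/(aT)} - 1}.
```

For large U (long wavelengths) this reduces to U ≈ aT, the Rayleigh behaviour, while for small U (short wavelengths) it reduces to the Wien-type exponential U ≈ b exp(−b/(aT)); identifying b with hν and a with kB gives the Planck form of the mean oscillator energy.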
Planck sent this result to Rubens, who compared it with his and Kurlbaum's observational data and found that it fitted for all wavelengths remarkably well. On 19 October 1900, Rubens and Kurlbaum briefly reported the fit to the data, and Planck added a short presentation to give a theoretical sketch to account for his formula. Within a week, Rubens and Kurlbaum gave a fuller report of their measurements confirming Planck's law. Their technique for spectral resolution of the longer wavelength radiation was called the residual ray method. The rays were repeatedly reflected from polished crystal surfaces, and the rays that made it all the way through the process were 'residual', and were of wavelengths preferentially reflected by crystals of suitably specific materials.
Trying to find a physical explanation of the law
Once Planck had discovered the empirically fitting function, he constructed a physical derivation of this law. His thinking revolved around entropy rather than being directly about temperature. Planck considered a cavity with perfectly reflective walls; the cavity contained finitely many hypothetical, well separated and recognizable but identically constituted resonant oscillatory bodies of definite magnitude, with several such oscillators at each of finitely many characteristic frequencies. The hypothetical oscillators were for Planck purely imaginary theoretical investigative probes, and he said of them that such oscillators do not need to "really exist somewhere in nature, provided their existence and their properties are consistent with the laws of thermodynamics and electrodynamics." Planck did not attribute any definite physical significance to his hypothesis of resonant oscillators, but rather proposed it as a mathematical device that enabled him to derive a single expression for the black body spectrum that matched the empirical data at all wavelengths. He tentatively mentioned the possible connection of such oscillators with atoms. In a sense, the oscillators corresponded to Planck's speck of carbon; the size of the speck could be small regardless of the size of the cavity, provided the speck effectively transduced energy between radiative wavelength modes.
Partly following a heuristic method of calculation pioneered by Boltzmann for gas molecules, Planck considered the possible ways of distributing electromagnetic energy over the different modes of his hypothetical charged material oscillators. This acceptance of the probabilistic approach, following Boltzmann, for Planck was a radical change from his former position, which till then had deliberately opposed such thinking proposed by Boltzmann. Heuristically, Boltzmann had distributed the energy in arbitrary merely mathematical quanta ϵ, which he had proceeded to make tend to zero in magnitude, because the finite magnitude ϵ had served only to allow definite counting for the sake of mathematical calculation of probabilities, and had no physical significance. Referring to a new universal constant of nature, h, Planck supposed that, in the several oscillators of each of the finitely many characteristic frequencies, the total energy was distributed to each in an integer multiple of a definite physical unit of energy, ϵ, not arbitrary as in Boltzmann's method, but now for Planck, in a new departure, characteristic of the respective characteristic frequency. His new universal constant of nature, h, is now known as Planck's constant.
Planck explained further that the respective definite unit, ϵ, of energy should be proportional to the respective characteristic oscillation frequency ν of the hypothetical oscillator, and in 1901 he expressed this with the constant of proportionality h: ϵ = hν.
Planck did not propose that light propagating in free space is quantized. The idea of quantization of the free electromagnetic field was developed later, and eventually incorporated into what we now know as quantum field theory.
In 1906 Planck acknowledged that his imaginary resonators, having linear dynamics, did not provide a physical explanation for energy transduction between frequencies. Present-day physics explains the transduction between frequencies in the presence of atoms by their quantum excitability, following Einstein. Planck believed that in a cavity with perfectly reflecting walls and with no matter present, the electromagnetic field cannot exchange energy between frequency components. This is because of the linearity of Maxwell's equations. Present-day quantum field theory predicts that, in the absence of matter, the electromagnetic field obeys nonlinear equations and in that sense does self-interact. Such interaction in the absence of matter has not yet been directly measured because it would require very high intensities and very sensitive and low-noise detectors, which are still in the process of being constructed. Planck believed that a field with no interactions neither obeys nor violates the classical principle of equipartition of energy, and instead remains exactly as it was when introduced, rather than evolving into a black body field. Thus, the linearity of his mechanical assumptions precluded Planck from having a mechanical explanation of the maximization of the entropy of the thermodynamic equilibrium thermal radiation field. This is why he had to resort to Boltzmann's probabilistic arguments.
Planck's law may be regarded as fulfilling the prediction of Gustav Kirchhoff that his law of thermal radiation was of the highest importance. In his mature presentation of his own law, Planck offered a thorough and detailed theoretical proof for Kirchhoff's law, theoretical proof of which until then had been sometimes debated, partly because it was said to rely on unphysical theoretical objects, such as Kirchhoff's perfectly absorbing infinitely thin black surface.
Subsequent events
It was not till five years after Planck made his heuristic assumption of abstract elements of energy or of action that Albert Einstein conceived of really existing quanta of light in 1905 as a revolutionary explanation of black-body radiation, of photoluminescence, of the photoelectric effect, and of the ionization of gases by ultraviolet light. In 1905, "Einstein believed that Planck's theory could not be made to agree with the idea of light quanta, a mistake he corrected in 1906." Contrary to Planck's beliefs of the time, Einstein proposed a model and formula whereby light was emitted, absorbed, and propagated in free space in energy quanta localized in points of space. As an introduction to his reasoning, Einstein recapitulated Planck's model of hypothetical resonant material electric oscillators as sources and sinks of radiation, but then he offered a new argument, disconnected from that model, but partly based on a thermodynamic argument of Wien, in which Planck's formula ϵ = hν played no role. Einstein gave the energy content of such quanta in a form equivalent to hν. Thus Einstein was contradicting the undulatory theory of light held by Planck. In 1910, criticizing a manuscript sent to him by Planck, knowing that Planck was a steady supporter of Einstein's theory of special relativity, Einstein wrote to Planck: "To me it seems absurd to have energy continuously distributed in space without assuming an aether."
According to Thomas Kuhn, it was not till 1908 that Planck more or less accepted part of Einstein's arguments for physical as distinct from abstract mathematical discreteness in thermal radiation physics. Still in 1908, considering Einstein's proposal of quantal propagation, Planck opined that such a revolutionary step was perhaps unnecessary. Until then, Planck had been consistent in thinking that discreteness of action quanta was to be found neither in his resonant oscillators nor in the propagation of thermal radiation. Kuhn wrote that, in Planck's earlier papers and in his 1906 monograph, there is no "mention of discontinuity, [nor] of talk of a restriction on oscillator energy, [nor of] any formula like U = nhν." Kuhn pointed out that his study of Planck's papers of 1900 and 1901, and of his monograph of 1906, had led him to "heretical" conclusions, contrary to the widespread assumptions of others who saw Planck's writing only from the perspective of later, anachronistic, viewpoints. Kuhn's conclusions, finding a period till 1908, when Planck consistently held his 'first theory', have been accepted by other historians.
In the second edition of his monograph, in 1912, Planck sustained his dissent from Einstein's proposal of light quanta. He proposed in some detail that absorption of light by his virtual material resonators might be continuous, occurring at a constant rate in equilibrium, as distinct from quantal absorption. Only emission was quantal. This has at times been called Planck's "second theory".
It was not till 1919 that Planck in the third edition of his monograph more or less accepted his 'third theory', that both emission and absorption of light were quantal.
The colourful term "ultraviolet catastrophe" was given by Paul Ehrenfest in 1911 to the paradoxical result that the total energy in the cavity tends to infinity when the equipartition theorem of classical statistical mechanics is (mistakenly) applied to black-body radiation. But this had not been part of Planck's thinking, because he had not tried to apply the doctrine of equipartition: when he made his discovery in 1900, he had not noticed any sort of "catastrophe". It was first noted by Lord Rayleigh in 1900, then in 1901 by Sir James Jeans, and later, in 1905, by Einstein when he wanted to support the idea that light propagates as discrete packets, later called 'photons', and again by Rayleigh and by Jeans.
In 1913, Bohr gave another formula with a further different physical meaning to the quantity hν. In contrast to Planck's and Einstein's formulas, Bohr's formula referred explicitly and categorically to energy levels of atoms. Bohr's formula was Wτ2 − Wτ1 = hν, where Wτ2 and Wτ1 denote the energy levels of quantum states of an atom, with quantum numbers τ2 and τ1. The symbol ν denotes the frequency of a quantum of radiation that can be emitted or absorbed as the atom passes between those two quantum states. In contrast to Planck's model, the frequency ν has no immediate relation to frequencies that might describe those quantum states themselves.
Later, in 1924, Satyendra Nath Bose developed the theory of the statistical mechanics of photons, which allowed a theoretical derivation of Planck's law. The actual word 'photon' was invented still later, by G.N. Lewis in 1926, who mistakenly believed that photons were conserved, contrary to Bose–Einstein statistics; nevertheless the word 'photon' was adopted to express the Einstein postulate of the packet nature of light propagation. In an electromagnetic field isolated in a vacuum in a vessel with perfectly reflective walls, such as was considered by Planck, indeed the photons would be conserved according to Einstein's 1905 model, but Lewis was referring to a field of photons considered as a system closed with respect to ponderable matter but open to exchange of electromagnetic energy with a surrounding system of ponderable matter, and he mistakenly imagined that still the photons were conserved, being stored inside atoms.
Ultimately, Planck's law of black-body radiation contributed to Einstein's concept of quanta of light carrying linear momentum, which became the fundamental basis for the development of quantum mechanics.
The above-mentioned linearity of Planck's mechanical assumptions, not allowing for energetic interactions between frequency components, was superseded in 1925 by Heisenberg's original quantum mechanics. In his paper submitted on 29 July 1925, Heisenberg's theory accounted for Bohr's above-mentioned formula of 1913. It admitted non-linear oscillators as models of atomic quantum states, allowing energetic interaction between their own multiple internal discrete Fourier frequency components, on the occasions of emission or absorption of quanta of radiation. The frequency of a quantum of radiation was that of a definite coupling between internal atomic meta-stable oscillatory quantum states. At that time, Heisenberg knew nothing of matrix algebra, but Max Born read the manuscript of Heisenberg's paper and recognized the matrix character of Heisenberg's theory. Then Born and Jordan published an explicitly matrix theory of quantum mechanics, based on, but in form distinctly different from, Heisenberg's original quantum mechanics; it is the Born and Jordan matrix theory that is today called matrix mechanics. Heisenberg's explanation of the Planck oscillators, as non-linear effects apparent as Fourier modes of transient processes of emission or absorption of radiation, showed why Planck's oscillators, viewed as enduring physical objects such as might be envisaged by classical physics, did not give an adequate explanation of the phenomena.
Nowadays, as a statement of the energy of a light quantum, often one finds the formula E = ħω, where ħ = h/2π, and ω = 2πν denotes angular frequency, and less often the equivalent formula E = hν. This statement about a really existing and propagating light quantum, based on Einstein's, has a physical meaning different from that of Planck's above statement ϵ = hν about the abstract energy units to be distributed amongst his hypothetical resonant material oscillators.
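As a quick numerical check of the equivalence noted above, the short Python sketch below evaluates the same photon energy both as hν and as ħω; the frequency value is only an illustrative assumption, not taken from the text.

```python
import math

h = 6.62607015e-34        # Planck constant in J*s (SI definition)
hbar = h / (2 * math.pi)  # reduced Planck constant, hbar = h / (2*pi)

nu = 5.0e14               # assumed example frequency in Hz (visible light)
omega = 2 * math.pi * nu  # angular frequency in rad/s

E_hnu = h * nu                 # epsilon = h * nu
E_hbar_omega = hbar * omega    # E = hbar * omega

print(E_hnu, E_hbar_omega)     # both ~3.3e-19 J, as expected
```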
See also
- Planck 1914, pp. 6, 168
- Chandrasekhar 1960, p. 8
- Rybicki & Lightman 1979, p. 22
- Planck 1914, p. 42
- Planck, M. (1914), p. 168.
- Hapke 1993, pp. 362–373
- Wolf, E. (2007). Introduction to the Theory of Coherence and Polarization of Light, Cambridge University Press, Cambridge UK, ISBN 978-0-521-82211-4, Section 5.6, pp. 102–108.
- Planck 1914
- Loudon, R. (2000), Chapter 1, pp. 3–45
- Planck, M. (1914), Chapter III, pp. 69–86.
- Landsberg 1961, p. 131
- Tisza 1966, p. 78
- Caniou 1999, p. 117
- Kramm & Mölders 2009
- Sharkov 2003, p. 210
- Goody & Yung 1989, p. 16.
- Fischer 2011
- Loudon 2000
- Mandel & Wolf 1995
- Wilson 1957, p. 182
- Adkins (1968/1983), pp. 147–148
- Landsberg 1978, p. 208
- Grimes & Grimes 2012, p. 176
- Siegel & Howell 2002, p. 25
- Planck 1914, pp. 9–11
- Planck 1914, p. 35
- Landsberg 1961, pp. 273–274
- Born & Wolf 1999, pp. 194–199
- Born & Wolf 1999, p. 195
- Rybicki & Lightman 1979, p. 19
- Chandrasekhar 1960, p. 7
- Chandrasekhar 1960, p. 9
- Einstein 1916
- Bohr 1913
- Jammer 1989, pp. 113, 115
- Kittel & Kroemer 1980, p. 98
- Jeans 1905a, p. 98
- Rayleigh 1905
- Rybicki & Lightman 1979, p. 23
- Wien 1896, p. 667
- Planck 1906, p. 158
- Lowen & Blanch 1940
- Stewart 1858
- Siegel 1976
- Kirchhoff 1860a
- Kirchhoff 1860b
- Schirrmacher 2001
- Kirchhoff 1860c
- Planck 1914, p. 11
- Milne 1930, p. 80
- Rybicki & Lightman 1979, pp. 16–17
- Mihalas & Weibel-Mihalas 1984, p. 328
- Goody & Yung 1989, pp. 27–28
- Paschen, F. (1896), personal letter cited by Hermann 1971, p. 6
- Hermann 1971, p. 7
- Kuhn 1978, pp. 8, 29
- Mehra and Rechenberg 1982, pp. 26, 28, 31, 39
- Kirchhoff 1862/1882, p. 573
- Kragh 1999, p. 58
- Kangro 1976
- Tyndall 1865a
- Tyndall 1865b
- Kangro 1976, pp. 8–10
- Crova 1880
- Kangro 1976, pp. 15–26
- Lummer & Kurlbaum 1898
- Kangro 1976, p. 159
- Lummer & Kurlbaum 1901
- Kangro 1976, pp. 75–76
- Paschen 1895, pp. 297–301
- Lummer & Pringsheim 1899, p. 225
- Kangro 1976, p. 174
- Planck 1900d
- Rayleigh 1900, p. 539
- Kangro 1976, pp. 181–183
- Planck 1900a
- Planck 1900b
- Rayleigh 1900
- Klein 1962
- Dougal 1976
- Planck 1943, p. 156
- Hettner 1922
- Rubens & Kurlbaum 1900a
- Rubens & Kurlbaum 1900b
- Kangro 1976, p. 165
- Mehra & Rechenberg 1982, p. 41
- Planck 1914, p. 135
- Kuhn 1978, pp. 117–118
- Hermann 1971, p. 16
- Planck 1900c
- Kangro 1976, p. 214
- Kuhn 1978, p. 106
- Kragh 2000
- Planck 1901
- Planck 1915, p. 89
- Schumm 2004, p. 34
- Ehrenfest & Kamerlingh Onnes 1914, p. 873
- ter Haar 1967, p. 14
- Stehle 1994, p. 128
- Scully & Zubairy 1997, p. 21.
- Planck 1906, p. 220
- Kuhn 1978, p. 162
- Planck 1914, pp. 44–45, 113–114
- Stehle 1994, p. 150
- Jauch & Rohrlich 1980, Chapter 13
- Karplus, R.; Neuman, M. (1951). "The Scattering of Light by Light". Phys. Rev. 83: 776–784.
- Tommasini, D.; Ferrando, F.; Michinel, H.; Seco, M. (2008). "Detecting photon-photon scattering in vacuum at exawatt lasers". Phys. Rev. A 77: 042101. arXiv:quant-ph/0703076. Bibcode:2008PhRvA..77a2101M. doi:10.1103/PhysRevA.77.012101.
- Jeffreys 1973, p. 223
- Planck 1906, p. 178
- Planck 1914, p. 26
- Boltzmann 1878
- Kuhn 1978, pp. 38–39
- Planck 1914, pp. 1–45
- Cotton 1899
- Einstein 1905
- Kragh 1999, p. 67
- Stehle 1994, pp. 132–137
- Einstein 1993, p. 143, letter of 1910.
- Planck 1915, p. 95
- Planck 1906
- Kuhn 1984, p. 236
- Kuhn 1978, pp. 196–202
- Kuhn 1984
- Darrigol 1992, p. 76
- Kragh 1999, pp. 63–66
- Planck 1914, p. 161
- Kuhn 1978, pp. 235–253
- Kuhn 1978, pp. 253–254
- Ehrenfest 1911
- Kuhn 1978, p. 152
- Kuhn 1978, pp. 151–152
- Kangro 1976, p. 190
- Kuhn 1978, pp. 144–145
- See footnote on p. 398 in Jeans 1901
- Jeans 1905b
- Jeans 1905c
- Jeans 1905d
- Sommerfeld 1923, p. 43
- Heisenberg 1925, p. 108
- Brillouin 1970, p. 31
- Lewis 1926
- Heisenberg 1925
- Razavy 2011, pp. 39–41
- Born & Jordan 1925
- Stehle 1994, p. 286
- Razavy 2011, pp. 42–43
- Messiah 1958, p. 14
- Pauli 1973, p. 1
- Feynman, Leighton & Sands 1963, p. 38-1
- Schwinger 2001, p. 203
- Bohren & Clothiaux 2006, p. 2
- Schiff 1949, p. 2
- Mihalas & Weibel-Mihalas 1984, p. 143
- Rybicki & Lightman 1979, p. 20
- Adkins, C.J. (1968/1983). Equilibrium Thermodynamics, (first edition 1968), third edition 1983, Cambridge University Press, ISBN 0-521-25445-0.
- Bohr, N. (1913). "On the constitution of atoms and molecules". Philosophical Magazine 26: 1–25. doi:10.1080/14786441308634993.
- Bohren, C. F.; Clothiaux, E. E. (2006). Fundamentals of Atmospheric Radiation. Wiley-VCH. ISBN 3-527-40503-8.
- Boltzmann, L. (1878). "Über die Beziehung zwischen dem zweiten Hauptsatze der mechanischen Wärmetheorie und der Wahrscheinlichkeitsrechnung, respective den Sätzen über das Wärmegleichgewicht". Sitzungsberichte Mathematisch-Naturwissenschaftlichen Classe der kaiserlichen Akademie der Wissenschaften in Wien 76 (2): 373–435.
- Born, M.; Wolf, E. (1999). Principles of Optics (7th ed.). Cambridge University Press. ISBN 0-521-64222-1.
- Born, M.; Jordan, P. (1925). "Zur Quantenmechanik". Zeitschrift für Physik 34: 858–888. Bibcode:1925ZPhy...34..858B. doi:10.1007/BF01328531. Translated in part as "On quantum mechanics" in van der Waerden, B.L. (1967). Sources of Quantum Mechanics. North-Holland Publishing. pp. 277–306.
- Brehm, J. J.; Mullin, W. J. (1989). Introduction to the Structure of Matter. Wiley. ISBN 0-471-60531-X.
- Brillouin, L. (1970). Relativity Reexamined. Academic Press. ISBN 978-0-12-134945-5.
- Caniou, J. (1999). Passive Infrared Detection: Theory and Applications. Springer. ISBN 978-0-7923-8532-5.
- Chandrasekhar, S. (1960) . Radiative Transfer (Revised reprint ed.). Dover Publications. ISBN 978-0-486-60590-6.
- Cotton, A. (1899). "The present status of Kirchhoff's law". The Astrophysical Journal 9: 237–268. Bibcode:1899ApJ.....9..237C. doi:10.1086/140585.
- Crova, A.-P.-P. (1880). "Étude des radiations émises par les corps incandescents. Mesure optique des hautes températures". Annales de chimie et de physique. Série 5 19: 472–550.
- Dougal, R. C. (September 1976). "The presentation of the Planck radiation formula (tutorial)". Physics Education 11 (6): 438–443. Bibcode:1976PhyEd..11..438D. doi:10.1088/0031-9120/11/6/008.
- Ehrenfest, P. (1911). "Welche Züge der Lichtquantenhypothese spielen in der Theorie der Wärmestrahlung eine wesentliche Rolle?". Annalen der Physik 36: 91–118. Bibcode:1911AnP...341...91E. doi:10.1002/andp.19113411106.
- Ehrenfest, P.; Kamerlingh Onnes, H. (1914). "Simplified deduction of the formula from the theory of combinations which Planck uses as the basis of his radiation theory". Proceedings of the Royal Dutch Academy of Sciences in Amsterdam 17: 870–873.
- Einstein, A. (1905). "Über einen die Erzeugung und Verwandlung des Lichtes betreffenden heuristischen Gesichtspunkt". Annalen der Physik 17 (6): 132–148. Bibcode:1905AnP...322..132E. doi:10.1002/andp.19053220607. Translated in Arons, A. B.; Peppard, M. B. (1965). "Einstein's proposal of the photon concept: A translation of the Annalen der Physik paper of 1905". American Journal of Physics 33 (5): 367. Bibcode:1965AmJPh..33..367A. doi:10.1119/1.1971542.
- Einstein, A. (1916). "Zur Quantentheorie der Strahlung". Mitteilungen der Physikalischen Gesellschaft Zürich 18: 47–62. and a nearly identical version Einstein, A. (1917). "Zur Quantentheorie der Strahlung". Physikalische Zeitschrift 18: 121–128. Bibcode:1917PhyZ...18..121E. Translated in ter Haar, D. (1967). The Old Quantum Theory. Pergamon Press. pp. 167–183. LCCN 66029628. See also .
- Einstein, A. (1993). The Collected Papers of Albert Einstein 3. English translation by Beck, A. Princeton University Press. ISBN 0-691-10250-3.
- Feynman, R. P.; Leighton, R. B.; Sands, M. (1963). The Feynman Lectures on Physics, Volume 1. Addison-Wesley. ISBN 0-201-02010-6.
- Fischer, T. (1 November 2011). "Topics: Derivation of Planck's Law". ThermalHUB. Retrieved 1 November 2011.
- Goody, R. M.; Yung, Y. L. (1989). Atmospheric Radiation: Theoretical Basis (2nd ed.). Oxford University Press. ISBN 978-0-19-510291-8.
- Grimes, D. M.; Grimes, C. A. (2012). Photon Creation–Annihilation. Continuum Electromagnetic Theory. World Scientific. ISBN 978-981-4383-36-3.
- Guggenheim, E.A. (1967). Thermodynamics. An Advanced Treatment for Chemists and Physicists (fifth revised ed.). North-Holland Publishing Company.
- Haken, H. (1981). Light (Reprint ed.). Amsterdam: North-Holland Publishing. ISBN 0-444-86020-7.
- Hapke, B. (1993). Theory of Reflectance and Emittance Spectroscopy. Cambridge University Press, Cambridge UK. ISBN 0-521-30789-9.
- Heisenberg, W. (1925). "Über quantentheoretische Umdeutung kinematischer und mechanischer Beziehungen". Zeitschrift für Physik 33: 879–893. Bibcode:1925ZPhy...33..879H. doi:10.1007/BF01328377. Translated as "Quantum-theoretical Re-interpretation of kinematic and mechanical relations" in van der Waerden, B.L. (1967). Sources of Quantum Mechanics. North-Holland Publishing. pp. 261–276.
- Heisenberg, W. (1930). The Physical Principles of the Quantum Theory. Eckart, C.; Hoyt, F. C. (transl.). University of Chicago Press.
- Hermann, A. (1971). The Genesis of Quantum Theory. Nash, C.W. (transl.). MIT Press. ISBN 0-262-08047-8. a translation of Frühgeschichte der Quantentheorie (1899–1913), Physik Verlag, Mosbach/Baden.
- Hettner, G. (1922). "Die Bedeutung von Rubens Arbeiten für die Plancksche Strahlungsformel". Naturwissenschaften 10: 1033–1038. Bibcode:1922NW.....10.1033H. doi:10.1007/BF01565205.
- Jammer, M. (1989). The Conceptual Development of Quantum Mechanics (second ed.). Tomash Publishers/American Institute of Physics. ISBN 0-88318-617-9.
- Jauch, J. M.; Rohrlich, F. (1980). The Theory of Photons and Electrons. The Relativistic Quantum Field Theory of Charged Particles with Spin One-half (second printing of second ed.). Springer. ISBN 0-387-07295-0.
- Jeans, J. H. (1901). "The Distribution of Molecular Energy". Philosophical Transactions of the Royal Society A 196 (274–286): 397. Bibcode:1901RSPTA.196..397J. doi:10.1098/rsta.1901.0008. JSTOR 90811.
- Jeans, J. H. (1905a). "XI. On the partition of energy between matter and æther". Philosophical Magazine 10 (55): 91. doi:10.1080/14786440509463348.
- Jeans, J. H. (1905b). "On the Application of Statistical Mechanics to the General Dynamics of Matter and Ether". Proceedings of the Royal Society A 76 (510): 296. Bibcode:1905RSPSA..76..296J. doi:10.1098/rspa.1905.0029. JSTOR 92714.
- Jeans, J. H. (1905c). "A Comparison between Two Theories of Radiation". Nature 72 (1865): 293. Bibcode:1905Natur..72..293J. doi:10.1038/072293d0.
- Jeans, J. H. (1905d). "On the Laws of Radiation". Proceedings of the Royal Society A 76 (513): 545. Bibcode:1905RSPSA..76..545J. doi:10.1098/rspa.1905.0060. JSTOR 92704.
- Jeffreys, H. (1973). Scientific Inference (3rd ed.). Cambridge University Press. ISBN 978-0-521-08446-8.
- Kangro, H. (1976). Early History of Planck's Radiation Law. Taylor & Francis. ISBN 0-85066-063-7.
- Kirchhoff, G.; [27 October 1859] (1860a). "Über die Fraunhofer'schen Linien". Monatsberichte der Königlich Preussischen Akademie der Wissenschaften zu Berlin: 662–665.
- Kirchhoff, G.; [11 December 1859] (1860b). "Über den Zusammenhang zwischen Emission und Absorption von Licht und Wärme". Monatsberichte der Königlich Preussischen Akademie der Wissenschaften zu Berlin: 783–787.
- Kirchhoff, G. (1860c). "Über das Verhältniss zwischen dem Emissionsvermögen und dem Absorptionsvermögen der Körper für Wärme and Licht". Annalen der Physik und Chemie 109: 275–301. Translated by Guthrie, F. as Kirchhoff, G. (1860). "On the relation between the radiating and absorbing powers of different bodies for light and heat". Philosophical Magazine. Series 4 20: 1–21.
- Kirchhoff, G. (1882), "Über das Verhältniss zwischen dem Emissionsvermögen und dem Absorptionsvermögen der Körper für Wärme und Licht", Gesammelte Abhandlungen, Johann Ambrosius Barth, pp. 571–598
- Kittel, C.; Kroemer, H. (1980). Thermal Physics (2nd ed.). W. H. Freeman. ISBN 0-7167-1088-9.
- Klein, M.J. (1962). "Max Planck and the beginnings of the quantum theory". Archive for History of Exact Sciences 1 (5): 459–479. doi:10.1007/BF00327765.
- Kragh, H. (1999). Quantum Generations. A History of Physics in the Twentieth Century. Princeton University Press. ISBN 0-691-01206-7.
- Kragh, H. (December 2000). "Max Planck: The reluctant revolutionary". Physics World.
- Kramm, Gerhard; Mölders, N. (2009). "Planck's Blackbody Radiation Law: Presentation in Different Domains and Determination of the Related Dimensional Constant". Journal of the Calcutta Mathematical Society 5 (1–2): 27–61. arXiv:0901.1863. Bibcode:2009arXiv0901.1863K.
- Kuhn, T. S. (1978). Black–Body Theory and the Quantum Discontinuity. Oxford University Press. ISBN 0-19-502383-8.
- Landsberg, P.T. (1961). Thermodynamics with Quantum Statistical Illustrations. Interscience Publishers.
- Landsberg, P.T. (1978). Thermodynamics and Statistical Mechanics. Oxford University Press. ISBN 0-19-851142-6.
- Lewis, G. N. (1926). "The Conservation of Photons". Nature 118 (2981): 874. Bibcode:1926Natur.118..874L. doi:10.1038/118874a0.
- Loudon, R. (2000). The Quantum Theory of Light (3rd ed.). Oxford University Press. ISBN 0-19-850177-3.
- Lowen, A. N.; Blanch, G. (1940). "Tables of Planck's radiation and photon functions". Journal of the Optical Society of America 30 (2): 70. doi:10.1364/JOSA.30.000070.
- Lummer, O.; Kurlbaum, F. (1898). "Der electrisch geglühte "absolut schwarze" Körper und seine Temperaturmessung". Verhandlungen der Deutschen Physikalischen Gesellschaft 17: 106–111.
- Lummer, O.; Pringsheim, E. (1899). "1. Die Vertheilung der Energie in Spectrum des schwarzen Körpers und des blanken Platins; 2. Temperaturbestimmung fester glühender Körper". Verhandlungen der Deutschen Physikalischen Gesellschaft 1: 215–235.
- Lummer, O.; Kurlbaum, F. (1901). "Der elektrisch geglühte "schwarze" Körper". Annalen der Physik 310 (8): 829–836. Bibcode:1901AnP...310..829L. doi:10.1002/andp.19013100809.
- Mandel, L.; Wolf, E. (1995). Optical Coherence and Quantum Optics. Cambridge University Press. ISBN 0-521-41711-2.
- Mehra, J.; Rechenberg, H. (1982). The Historical Development of Quantum Theory 1 (1). Springer-Verlag. ISBN 0-387-90642-8.
- Messiah, A. (1958). Quantum Mechanics. Temmer, G. G. (transl.). Wiley.
- Mihalas, D.; Weibel-Mihalas, B. (1984). Foundations of Radiation Hydrodynamics. Oxford University Press. ISBN 0-19-503437-6.
- Milne, E.A. (1930). "Thermodynamics of the Stars". Handbuch der Astrophysik 3 (1): 63–255.
- Paltridge, G. W.; Platt, C. M. R. (1976). Radiative Processes in Meteorology and Climatology. Elsevier. ISBN 0-444-41444-4.
- Paschen, F. (1895). "Über Gesetzmäßigkeiten in den Spectren fester Körper und über ein neue Bestimmung der Sonnentemperatur". Nachrichten von der Königlichen Gesellschaft der Wissenschaften zu Göttingen (Mathematisch-Physikalische Klasse): 294–304.
- Pauli, W. (1973). Enz, C. P., ed. Wave Mechanics. Margulies, S.; Lewis, H. R. (transl.). MIT Press. ISBN 0-262-16050-1.
- Planck, M. (1900a). "Über eine Verbesserung der Wienschen Spektralgleichung". Verhandlungen der Deutschen Physikalischen Gesellschaft 2: 202–204. Translated in ter Haar, D. (1967). "On an Improvement of Wien's Equation for the Spectrum". The Old Quantum Theory. Pergamon Press. pp. 79–81. LCCN 66029628.
- Planck, M. (1900b). "Zur Theorie des Gesetzes der Energieverteilung im Normalspektrum". Verhandlungen der Deutschen Physikalischen Gesellschaft 2: 237. Translated in ter Haar, D. (1967). "On the Theory of the Energy Distribution Law of the Normal Spectrum". The Old Quantum Theory. Pergamon Press. p. 82. LCCN 66029628.
- Planck, M. (1900c). "Entropie und Temperatur strahlender Wärme". Annalen der Physik 306 (4): 719–737. Bibcode:1900AnP...306..719P. doi:10.1002/andp.19003060410.
- Planck, M. (1900d). "Über irreversible Strahlungsvorgänge". Annalen der Physik 306 (1): 69–122. Bibcode:1900AnP...306...69P. doi:10.1002/andp.19003060105.
- Planck, M. (1901). "Über das Gesetz der Energieverteilung im Normalspektrum". Annalen der Physik 4: 553. Translated in Ando, K. "On the Law of Distribution of Energy in the Normal Spectrum". Retrieved 13 October 2011.
- Planck, M. (1906). Vorlesungen über die Theorie der Wärmestrahlung. Johann Ambrosius Barth. LCCN 07004527.
- Planck, M. (1914). The Theory of Heat Radiation. Masius, M. (transl.) (2nd ed.). P. Blakiston's Son & Co. OL 7154661M.
- Planck, M. (1915). Eight Lectures on Theoretical Physics. Wills, A. P. (transl.). Dover Publications. ISBN 0-486-69730-4.
- Planck, M. (1943). "Zur Geschichte der Auffindung des physikalischen Wirkungsquantums". Naturwissenschaften 31 (14–15): 153–159. Bibcode:1943NW.....31..153P. doi:10.1007/BF01475738.
- Rayleigh, Lord (1900). "LIII. Remarks upon the law of complete radiation". Philosophical Magazine. Series 5 49 (301): 539. doi:10.1080/14786440009463878.
- Rayleigh, Lord (1905). "The Dynamical Theory of Gases and of Radiation". Nature 72 (1855): 54–55. Bibcode:1905Natur..72...54R. doi:10.1038/072054c0.
- Razavy, M. (2011). Heisenberg's Quantum Mechanics. World Scientific. ISBN 978-981-4304-10-8.
- Rubens, H.; Kurlbaum, F. (1900a). "Über die Emission langer Wellen durch den schwarzen Körper". Verhandlungen der Deutschen Physikalischen Gesellschaft 2: 181.
- Rubens, H.; Kurlbaum, F. (1900b). "Über die Emission langwelliger Wärmestrahlen durch den schwarzen Körper bei verschiedenen Temperaturen". Sitzungsberichte der Königlich Preussischen Akademie der Wissenschaften zu Berlin: 929–941. Translated in Rubens, H.; Kurlbaum, F. (1901). "On the heat-radiation of long wave-length emitted by black bodies at different temperatures". The Astrophysical Journal 14: 335–348. Bibcode:1901ApJ....14..335R. doi:10.1086/140874.
- Rybicki, G. B.; Lightman, A. P. (1979). Radiative Processes in Astrophysics. John Wiley & Sons. ISBN 0-471-82759-2.
- Sharkov, E. A. (2003). "Black-body radiation". Passive Microwave Remote Sensing of the Earth. Springer. ISBN 978-3-540-43946-2.
- Schiff, L. I. (1949). Quantum Mechanics. McGraw-Hill.
- Schirrmacher, A. (2001). Experimenting theory: the proofs of Kirchhoff's radiation law before and after Planck. Münchner Zentrum für Wissenschafts und Technikgeschichte.
- Schumm, B. A. (2004). Deep down things: the breathtaking beauty of particle physics. Johns Hopkins University Press. ISBN 978-0-8018-7971-5.
- Schwinger, J. (2001). Englert, B.-G., ed. Quantum Mechanics: Symbolism of Atomic Measurements. Springer. ISBN 3-540-41408-8.
- Scully, M. O.; Zubairy, M.S. (1997). Quantum Optics. Cambridge University Press. ISBN 0-521-43458-0.
- Siegel, D.M. (1976). "Balfour Stewart and Gustav Robert Kirchhoff: two independent approaches to "Kirchhoff's radiation law"". Isis 67: 565–600.
- Siegel, R.; Howell, J. R. (2002). Thermal Radiation Heat Transfer, Volume 1 (4th ed.). Taylor & Francis. ISBN 978-1-56032-839-1.
- Sommerfeld, A. (1923). Atomic Structure and Spectral Lines. Brose, H. L. (transl.) (from 3rd German ed.). Methuen.
- Stehle, P. (1994). Order, Chaos, Order. The Transition from Classical to Quantum Physics. Oxford University Press. ISBN 0-19-507513-7.
- Stewart, B. (1858). "An account of some experiments on radiant heat". Transactions of the Royal Society of Edinburgh 22: 1–20.
- ter Haar, D. (1967). The Old Quantum Theory. Pergamon Press. LCCN 66-029628.
- Thornton, S. T.; Rex, A. F. (2002). Modern Physics. Thomson Learning. ISBN 0-03-006049-4.
- Tisza, L. (1966). Generalized Thermodynamics. MIT Press.
- Tyndall, J. (1865a). "Über leuchtende und dunkle Strahlung". Annalen der Physik und Chemie 200: 36–53.
- Tyndall, J. (1865b). Heat considered as a Mode of Motion. D. Appleton & Company.
- Wien, W. (1896). "Über die Energievertheilung im Emissionsspectrum eines schwarzen Körpers". Annalen der Physik und Chemie 294: 662–669.
- Wilson, A. H. (1957). Thermodynamics and Statistical Mechanics. Cambridge University Press.
- Summary of Radiation
- Radiation of a Blackbody – interactive simulation to play with Planck's law
- Scienceworld entry on Planck's Law
In the hydrologic cycle
Precipitation = Runoff + Infiltration + Evapotranspiration
Surface water runoff is the water that flows at the Earth's surface. Infiltration is the water that soaks into the ground to become part of the groundwater system. Infiltration accounts, by far, for the smallest proportion of precipitation. Evapotranspiration (ET) refers to the water that is directly evaporated or transpired by plants (evaporated from leaves) back into the atmosphere. ET generally accounts for the largest proportion of precipitation.
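A minimal sketch of this water balance is given below; the annual depths are hypothetical figures chosen only for illustration, not measured values.

```python
# Hydrologic budget: Precipitation = Runoff + Infiltration + Evapotranspiration
precipitation = 1000.0       # mm per year (assumed)
evapotranspiration = 620.0   # mm per year (assumed; usually the largest term)
infiltration = 80.0          # mm per year (assumed; usually the smallest term)

runoff = precipitation - infiltration - evapotranspiration
print(f"Surface runoff = {runoff:.0f} mm per year")  # 300 mm per year
```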
Stream Networks - Stream Drainage Basins
The water from precipitation is drained from the land via networks of connected streams. Each stream network lies in its own drainage basin, separated from adjacent stream networks by drainage divides.
The average annual runoff is that portion of the annual precipitation in a drainage basin that flows out of the basin as surface water. It is calculated as:
Runoff = Total Annual Discharge / Basin Size
Runoff increases with increasing precipitation, steepness of the land, imperviousness of the land, and use by man.
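A rough calculation of average annual runoff from a gaged basin might look like the sketch below; the mean discharge and basin size are assumed values chosen only to show the unit handling.

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600

mean_annual_discharge = 50.0   # m^3/s, assumed long-term mean
basin_area_km2 = 4000.0        # km^2, assumed drainage-basin size

total_annual_discharge = mean_annual_discharge * SECONDS_PER_YEAR  # m^3 per year
basin_area_m2 = basin_area_km2 * 1.0e6                             # m^2

runoff_depth_m = total_annual_discharge / basin_area_m2   # Runoff = discharge / basin size
print(f"Average annual runoff ≈ {runoff_depth_m * 1000:.0f} mm")   # ≈ 394 mm
```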
Stream velocity is the speed of the water in the stream. Units are distance per time (e.g., meters per second or feet per second). Stream velocity is greatest in midstream near (just below) the surface and is slowest along the stream bed and banks due to friction.
Stream velocity depends on the slope of the stream bed, the degree of roughness of the stream bed, and the hydraulic radius (see next).
If velocity increases because of a local increase in slope, then the depth must decrease to maintain a constant discharge (the water in the steeper stretch can't get ahead of the water behind it). One consequence of this is that the shear (velocity gradient from the stream bed to the surface) will increase.
Hydraulic radius (HR or just R) is the ratio of the cross-sectional area to the wetted perimeter. For a hypothetical stream with a rectangular cross-sectional shape (a stream with a flat bottom and vertical sides), the cross-sectional area is simply the width multiplied by the depth (W * D). For the same hypothetical stream the wetted perimeter would be the width plus twice the depth (W + 2D). The greater the cross-sectional area in comparison to the wetted perimeter, the more freely the stream flows, because less of the water is in contact with the frictional bed and banks. So as hydraulic radius increases, so does velocity (all other factors being equal). As streams get deeper, for example during a flood, the hydraulic radius increases: there is a greater mass of water per unit of frictional wetted perimeter, and the stream flows more efficiently.
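A short sketch of the rectangular-channel case described above (the widths and depths are hypothetical) shows how the hydraulic radius grows as the stream deepens:

```python
def hydraulic_radius_rectangular(width, depth):
    """R = cross-sectional area / wetted perimeter for a flat-bottomed,
    vertical-sided channel."""
    area = width * depth                  # W * D
    wetted_perimeter = width + 2 * depth  # bed plus the two banks, W + 2D
    return area / wetted_perimeter

print(hydraulic_radius_rectangular(10.0, 1.0))  # low flow:    R ≈ 0.83
print(hydraulic_radius_rectangular(10.0, 3.0))  # flood stage: R ≈ 1.88
```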
Stream discharge is the quantity (volume) of water passing by a given point in a certain amount of time. It is calculated as Q = V * A, where V is the stream velocity and A is the stream's cross-sectional area. Units of discharge are volume per time (e.g., m³/sec or million gallons per day, mgpd).
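Continuing the same hypothetical rectangular channel, discharge follows directly from Q = V * A (all values below are assumptions for illustration):

```python
width = 10.0     # m (assumed)
depth = 2.0      # m (assumed)
velocity = 1.5   # m/s (assumed mean velocity)

area = width * depth           # cross-sectional area, m^2
discharge = velocity * area    # Q = V * A, in m^3/s
print(f"Q = {discharge:.1f} m^3/s")   # 30.0 m^3/s
```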
Stream Gaging: Stream discharge can be calculated by determining the cross sectional area of a stream at a given point and the velocity at that point. The most accurate method is with the construction of a concrete weir at a point across the stream. The weir creates bottom and sides with a known shape, so as the stream level increases and decreases the cross sectional area can easily be determined. The U.S. Geological Survey maintains a network of approximately 10,000 stream gaging stations throughout the United States where water depth and stream velocity are continuously monitored. Discharge is calculated from these.
At low velocity, especially if the stream bed is smooth, streams may exhibit laminar flow in which all of the water molecules flow in parallel paths. At higher velocities turbulence is introduced into the flow (turbulent flow). The water molecules don't follow parallel paths.
Perennial and Ephemeral Streams
Gaining (effluent) streams receive water from the groundwater. In other words, a gaining stream discharges water from the water table. Gaining streams are perennial streams: they typically flow year around.
On the other hand losing (influent) streams lie above the water table (e.g., in an arid climate) and water seeps through the stream bed to recharge the water table below. Losing streams may be intermittent (flow part of the year) or ephemeral (only flow briefly following precipitation events). They only flow when there is sufficient runoff from recent rains or spring snow melt.
Some streams are gaining part of the year and losing part of the year or just in particular years, as the water table drops during an extended dry season.
Streams have two sources of water: storm charge, from overland flow after rain events, and baseflow, supplied by groundwater.
Stream hydrographs show a stream's discharge over time. Peak discharges follow rain events. There is a lag time between the rain event and the peak stream discharge due to the time required for rain water to collect as overland flow and for inputs from tributary streams. The rainwater that infiltrates the ground spreads out the peak flow following a rain event.
Streams carry dissolved ions as dissolved load, fine clay and silt particles as suspended load, and coarse sands and gravels as bed load. Fine particles will only remain suspended if flow is turbulent. In laminar flow, suspended particles will slowly settle to the bed.
Hjulstrom's Diagram plots two curves representing 1) the minimum stream velocity required to erode sediments of varying sizes from the stream bed, and 2) the minimum velocity required to continue to transport sediments of varying sizes. Notice that for coarser sediments (sand and gravel) it takes just a little higher velocity to initially erode particles than it takes to continue to transport them. For small particles (clay and silt) considerably higher velocities are required for erosion than for transportation because these finer particles have cohesion resulting from electrostatic attractions. Think of how sticky wet mud is.
Stream competence refers to the heaviest particles a stream can erode and thus transport. Caliber refers to the diameter of the largest particle that can be eroded. Stream competence depends on stream velocity (as shown on the Hjulstrom diagram above). The faster the current, the heavier the particle that can be eroded and transported. Note however that the finest particles (clays) are cohesive and require particularly high velocities to erode them from the stream bed.
Competence also depends on the magnitude of shear at the stream bed. Since stream velocity is lowest (approaching zero) along the stream bed and increases toward the surface, the greater the rate of change of velocity near the stream bed the greater the shear stress applied to sedimentary particles lying on the stream bed. Given two streams with different depths, flowing at the same velocity, the shallower stream will exert a greater shear on the sediments of the stream bed.
Stream capacity is the maximum amount of solid load (bed and suspended) a stream can carry. It depends on both the discharge and the velocity (since velocity affects the competence and therefore the range of particle sizes that may be transported).
As stream velocity and discharge increase so do competence and capacity. But it is not a linear relationship (e.g., doubling velocity does not simply double the competence). Competence (weight of the heaviest particle) varies as approximately the sixth power of velocity:
Δ competence (weight) ∝ Δ velocity⁶
or since weight is proportional to particle diameter cubed (or volume), the caliber, maximum particle diameter varies as the velocity squared:
Δ caliber (diameter) ∝ Δ velocity²
For example, doubling the velocity results in a 64 times (2⁶) increase in the weight or 4 times (2²) increase in the diameter of the largest particles being transported.
Capacity varies as the discharge squared or cubed.
Δ capacity ∝ Δ discharge² to Δ discharge³
For example, tripling the discharge results in a 9 to 27 times (3² to 3³) increase in the capacity.
Therefore, most of the work of streams is accomplished during floods when stream velocity and discharge (and therefore competence and capacity) are many times their level during low flow regimes. This work is in the form of bed scouring (erosion), sediment transport (bed and suspended loads), and sediment deposition.
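A small sketch of these scaling relations (pure ratios, with no particular stream assumed) makes the flood effect concrete:

```python
velocity_ratio = 2.0    # velocity doubles during a flood
discharge_ratio = 3.0   # discharge triples during a flood

competence_factor = velocity_ratio ** 6   # heaviest movable particle: 64x heavier
caliber_factor = velocity_ratio ** 2      # largest movable diameter: 4x larger
capacity_low = discharge_ratio ** 2       # total transportable load: at least 9x ...
capacity_high = discharge_ratio ** 3      # ... and up to 27x

print(competence_factor, caliber_factor, capacity_low, capacity_high)
# 64.0 4.0 9.0 27.0
```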
Flood Erosion and Deposition: As flood waters rise and the stream gets deeper the hydraulic radius increases thereby making the stream more free flowing. The stream flows faster. The increased velocity and the increased cross-sectional area mean that discharge increases. As discharge and velocity increase so do the stream's competence and capacity. In the rising stages of a flood much sediment is dumped into streams by overland flow and gully wash. This can result in some aggradation or building up of sediments on the stream bed. However, after the flood peaks less sediment is carried and a great deal of bed scouring (erosion) occurs. As the flood subsides and competence and capacity decline sediments are deposited and the stream bed aggrades again. Even though the stream bed may return to somewhat like its pre-flood state, huge quantities of sediments have been transported downstream. Much fine sediment has probably been deposited on the flood plain.
The load supplied to a stream from the surrounding land is dependent on:
Stream channels may be straight, meandering, or braided. Streams are seldom straight for any significant distance. Turbulence in the stream and inhomogeneity in the bank materials lead to small deviations from a perfectly straight course. These deviations are amplified as discussed below.
At a bend in a stream the channel is deeper near the outer bank than it is near the inner bank. The greater depth on the outer side of the bend results in there being a higher velocity at the outer bank because of the greater mass of water compared to the friction from the bottom and sides. Also at the bend, the water's momentum carries the mass of the water against the outer bank. The greater velocity combined with the greater inertial force on the outer bank increase the competence and capacity. Erosion occurs on the outer bank or cut bank. The cut bank is steep and frequently undermined by stream erosion.
Meanwhile, the part of the stream channel near the inner bank is shallower. This shallowing results in less mass of water to over come the friction of the stream bed and thus a reduction in velocity. The velocity reduction results in a decrease in the competence and capacity. Deposition of the coarser fraction of the stream load occurs, leading to the formation of a point bar. Point bar deposition gradually produces flat, low-lying sheets of sand.
Over time, the position of the stream changes as the bend migrates in the direction of the cut bank. As meander bends accentuate and migrate, two bends can erode together, forming a cutoff and eventually leaving an oxbow lake.
The size of meanders has long been of interest to geomorphologists. Meanders come in many sizes. One obvious relationship is that big streams have big meanders and small streams have small meanders. Measures of the size of meanders include the meander wavelength (distance from the center of one meander to the next) and the radius of curvature of a meander (like the radius of a circle). Meander wavelength is proportional to stream width. More importantly, the wavelength is proportional to discharge. The greater the discharge the larger the meander, or, the greater the mass of water flowing down a stream the harder it is for the stream banks to turn back the mass of water. Meander wavelength is also inversely proportional to the clay and silt content. Since clay and silt are cohesive it is easier for banks rich in clay and silt to withstand the inertia-driven erosion on the outer bank. The stream is more readily turned by the cohesive bank. Smaller, tighter meanders are produced where streams flow through clay- and silt-rich valleys.
Another characteristic measure of a stream's meandering nature is its sinuosity.
A stream's sinuosity, S = channel length / valley length
The sinuosity of a stream is a way the stream maintains a constant slope. The more sinuous a stream, the more gentle the slope. If tectonic uplift or subsidence occurs along the course of a stream, the stream can readjust its slope by changing its sinuosity.
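For instance, with hypothetical lengths:

```python
channel_length_km = 18.0   # distance measured along the winding channel (assumed)
valley_length_km = 10.0    # straight down-valley distance (assumed)

sinuosity = channel_length_km / valley_length_km
print(f"S = {sinuosity:.1f}")   # 1.8; a perfectly straight channel would have S = 1
```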
A stream's width to depth ratio is also inversely proportional to clay and silt content. Streams flowing between banks rich in clay and silt are deeper and narrower while streams flowing between banks poor in clay and silt are broad and shallow.
At the extreme, where streams flow through beds of sand and gravel, they erode very wide and shallow channels, and take on a braided pattern. The water in a braided stream does not occupy a single channel but the flow is diverted into many separate ribbons of water with sand bars between. The separate ribbons continually diverge and re-merge downstream. The sand bars and channel positions frequently change during floods.
Braided streams (and broad, shallow single channel streams) have a lower hydraulic radius than single channel meandering streams of similar discharge. They also have a very large supply of coarse-grained bed load. In order to overcome friction and continue to transport their sediment load, the slope must increase (to increase the velocity and therefore competence). Braided streams flow on steeper slopes than single channel streams of similar discharge.
Considering the longitudinal (downstream) profile of a stream:
Local Grading: Where a stream flows from a gentler to a steeper slope, velocity will suddenly increase. At the point where the velocity increases (and the depth decreases and shear increases) competence and capacity increase resulting in increased erosion. Where that stream then flows from a steeper slope onto a gentler slope, the velocity decreases (competence and capacity decrease) and deposition will result. This process will erode away and fill in abrupt changes in slope resulting in a more even slope through the course of the stream. Young streams have many changes of slope resulting in rapids and falls. As streams mature, these gradually wear away.
The Big Picture:
The ideal graded profile of a stream is concave upward: steeper near the head or beginning and flatter near the bottom or mouth of the stream. The reason for this is that in the upper reaches of a stream its discharge is smaller. As streams merge with other streams their discharge increases, their cross-sectional area increases, and their hydraulic radius increases. As one goes downstream and the stream grows in size, the waters flow more freely (less friction). Less slope is required to maintain velocity (and competence). In the upper reaches, a small stream must be steeper to transport its sediments. The extra gravitational energy on the steeper slope is needed to overcome the frictional forces in the shallow stream.
Maintenance of Grade: If the slope is too gentle and velocity is too slow to transport the sediments being supplied by weathering and erosion, the sediments will pile up. This increases the gradient which causes the water to flow faster, which increases erosion and transport, which then reduces the gradient. In the lower reaches of a stream, where the discharge is greater, since friction is less, the stream need not be so steep to transport the load. If it were steeper than needed to transport the sediments, erosion would result. But this would decrease the gradient leading to a decrease in erosion.
It seems counter-intuitive but stream velocity generally doesn't decrease from the steep headlands to the flat plains, from the dashing mountain brook to the broad peaceful river. The broad lowland rivers have much greater discharge and hydraulic radius. They flow much more freely (e.g., the water doesn't have to dash around boulders in the stream). The net result is that velocity actually increases somewhat.
Equilibrium: When a stream has attained a graded profile, it neither erodes downward (degrades) nor builds its bed upward (aggrades). Rather it transports sediments from the source areas (headwaters and hills surrounding valley) to its end.
However, the ideal graded profile and equilibrium are seldom achieved, or not for extended periods of time, because conditions are constantly changing requiring streams to constantly change their grade. Changing climate can change stream discharge or sediment supply necessitating changes in slope. Uplift of headlands or lowering of sea level can cause slopes to increase, streams to flow faster and valleys to be eroded downward. Rising sea level or building of dams elevates streams' base levels and cause sediment to gradually aggrade upstream.
Meandering streams produce sheets of sand that are built laterally across the flood plain as a bend and its point bar migrate across the valley. Periodically, when the stream floods, sheets of mud (silt and clay) are spread across the flood plain. As flood waters spill out of the stream channel onto the adjacent land, the velocity suddenly decreases because of the sudden decrease in depth and increase in friction. The coarser sediments (sand and coarse silt) are therefore deposited adjacent to the stream. Through many floods, natural levees are built along the banks of streams. They are highest near the river and gradually taper away from the river.
In some large river valleys, the flood plain may develop into a convex shape as levee deposits grow on the central portion of the valley. Streams that formerly fed as tributaries into the river may be cut off from the river by the broad levees. These yazoo streams then flow down the valley, parallel to the river, until they eventually meet with it farther downstream. The lower, poorly-drained margins of these convex stream valleys may also contain backswamps.
Stream Valley Evolution
Youthful Stream Valleys have steep-sloping, V-shaped valleys and little or no flat land next to the stream channel in the valley bottom.
Mature Stream Valleys have gentle slopes and a flood plain; the meander belt width equals the flood plain width.
Old Age Stream Valleys have very subdued topography and very broad flood plains; the flood plain width is greater than the meander belt width.
Rejuvenated Streams: At any stage in the development of a stream valley, if the gradient increases, for example due to tectonic uplift or drop in sea level, the stream will flow faster. This will result in a new period of vertical downcutting and development of a steep-sided V-shaped valley profile. The stream cuts into its former flood plain leaving incised meanders. Gradually the stream will erode the sides of its new valley and eventually develop a new flood plain lying at a lower level than the original flood plain. The original flood plain remains as a stream terrace. | http://myweb.cwpost.liu.edu/vdivener/notes/streams_geomorph.htm | 13 |
According to legend, the first mathematical formulation of what we might today call a law of nature dates back to an Ionian named Pythagoras (ca. 580 BC-ca. 490 BC), famous for the theorem named after him: that the square of the hypotenuse (longest side) of a right triangle equals the sum of the squares of the other two sides. Pythagoras is said to have discovered the numerical relationship between the length of the strings used in musical instruments and the harmonic combinations of the sounds. In today's language we would describe that relationship by saying that the frequency—the number of vibrations per second—of a string vibrating under fixed tension is inversely proportional to the length of the string. From the practical point of view, this explains why bass guitars must have longer strings than ordinary guitars. Pythagoras probably did not really discover this—he also did not discover the theorem that bears his name—but there is evidence that some relation between string length and pitch was known in his day. If so, one could call that simple mathematical formula the first instance of what we now know as theoretical physics.
Apart from the Pythagorean law of strings, the only physical laws known correctly to the ancients were three laws detailed by Archimedes (ca. 287 BC-ca. 212 BC), by far the most eminent physicist of antiquity. In today's terminology, the law of the lever explains that small forces can lift large weights because the lever amplifies a force according to the ratio of the distances from the lever's fulcrum. The law of buoyancy states that any object immersed in a fluid will experience an upward force equal to the weight of the displaced fluid. And the law of reflection asserts that the angle between a beam of light and a mirror is equal to the angle between the mirror and the reflected beam. But Archimedes did not call them laws, nor did he explain them with reference to observation and measurement. Instead he treated them as if they were purely mathematical theorems, in an axiomatic system much like the one Euclid created for geometry.
As the Ionian influence spread, there appeared others who saw that the universe possesses an internal order, one that could be understood through observation and reason. Anaximander (ca. 610 BC-ca. 546 BC), a friend and possibly a student of Thales, argued that since human infants are helpless at birth, if the first human had somehow appeared on earth as an infant, it would not have survived. In what may have been humanity's first inkling of evolution, people, Anaximander reasoned, must therefore have evolved from other animals whose young are hardier. In Sicily, Empedocles (ca. 490 BC-ca. 430 BC) observed the use of an instrument called a clepsydra. Sometimes used as a ladle, it consisted of a sphere with an open neck and small holes in its bottom. When immersed in water it would fill, and if the open neck was then covered, the clepsydra could be lifted out without the water in it falling through the holes. Empedocles noticed that if you cover the neck before you immerse it, a clepsydra does not fill. He reasoned that something invisible must be preventing the water from entering the sphere through the holes—he had discovered the material substance we call air.
Around the same time Democritus (ca. 460 BC-ca. 370 BC), from an Ionian colony in northern Greece, pondered what happened when you break or cut an object into pieces. He argued that you ought not to be able to continue the process indefinitely. Instead he postulated that everything, including all living beings, is made of fundamental particles that cannot be cut or broken into parts. He named these ultimate particles atoms, from the Greek adjective meaning "uncuttable." Democritus believed that every material phenomenon is a product of the collision of atoms. In his view, dubbed atomism, all atoms move around in space, and, unless disturbed, move forward indefinitely. Today that idea is called the law of inertia.
The revolutionary idea that we are but ordinary inhabitants of the universe, not special beings distinguished by existing at its center, was first championed by Aristarchus (ca. 310 BC-ca. 230 BC), one of the last of the Ionian scientists. Only one of his calculations survives, a complex geometric analysis of careful observations he made of the size of the earth's shadow on the moon during a lunar eclipse. He concluded from his data that the sun must be much larger than the earth. Perhaps inspired by the idea that tiny objects ought to orbit mammoth ones and not the other way around, he became the first person to argue that the earth is not the center of our planetary system, but rather that it and the other planets orbit the much larger sun. It is a small step from the realization that the earth is just another planet to the idea that our sun is nothing special either. Aristarchus suspected that this was the case and believed that the stars we see in the night sky are actually nothing more than distant suns.
The Ionians were but one of many schools of ancient Greek philosophy, each with different and often contradictory traditions. Unfortunately, the Ionians' view of nature—that it can be explained through general laws and reduced to a simple set of principles—exerted a powerful influence for only a few centuries. One reason is that Ionian theories often seemed to have no place for the notion of free will or purpose, or the concept that gods intervene in the workings of the world. These were startling omissions, as profoundly unsettling to many Greek thinkers as they are to many people today. The philosopher Epicurus (341 BC-270 BC), for example, opposed atomism on the grounds that it is "better to follow the myths about the gods than to become a 'slave' to the destiny of natural philosophers." Aristotle too rejected the concept of atoms because he could not accept that human beings were composed of soulless, inanimate objects. The Ionian idea that the universe is not human-centered was a milestone in our understanding of the cosmos, but it was an idea that would be dropped and not picked up again, or commonly accepted, until Galileo, almost twenty centuries later.
Our modern understanding of the term "law of nature" is an issue philosophers argue at length, and it is a more subtle question than one may at first think. For example, the philosopher John W. Carroll compared the statement "All gold spheres are less than a mile in diameter" to a statement like "All uranium-235 spheres are less than a mile in diameter." Our observations of the world tell us that there are no gold spheres larger than a mile wide, and we can be pretty confident there never will be. Still, we have no reason to believe that there couldn't be one, and so the statement is not considered a law. On the other hand, the statement "All uranium-235 spheres are less than a mile in diameter" could be thought of as a law of nature because, according to what we know about nuclear physics, once a sphere of uranium-235 grew to a diameter greater than about six inches, it would demolish itself in a nuclear explosion. Hence we can be sure that such spheres do not exist. (Nor would it be a good idea to try to make one!) This distinction matters because it illustrates that not all generalizations we observe can be thought of as laws of nature, and that most laws of nature exist as part of a larger, interconnected system of laws.
While conceding that human behavior is indeed determined by the laws of nature, it also seems reasonable to conclude that the outcome is determined in such a complicated way and with so many variables as to make it impossible in practice to predict. For that one would need a knowledge of the initial state of each of the thousand trillion trillion molecules in the human body and to solve something like that number of equations. That would take a few billion years, which would be a bit late to duck when the person opposite aimed a blow.
Because it is so impractical to use the underlying physical laws to predict human behavior, we adopt what is called an effective theory. In physics, an effective theory is a framework created to model certain observed phenomena without describing in detail all of the underlying processes. For example, we cannot solve exactly the equations governing the gravitational interactions of every atom in a person's body with every atom in the earth. But for all practical purposes the gravitational force between a person and the earth can be described in terms of just a few numbers, such as the person's total mass. Similarly, we cannot solve the equations governing the behavior of complex atoms and molecules, but we have developed an effective theory called chemistry that provides an adequate explanation of how atoms and molecules behave in chemical reactions without accounting for every detail of the interactions. In the case of people, since we cannot solve the equations that determine our behavior, we use the effective theory that people have free will. The study of our will, and of the behavior that arises from it, is the science of psychology. Economics is also an effective theory, based on the notion of free will plus the assumption that people evaluate their possible alternative courses of action and choose the best. That effective theory is only moderately successful in predicting behavior because, as we all know, decisions are often not rational or are based on a defective analysis of the consequences of the choice. That is why the world is in such a mess.
We make models in science, but we also make them in everyday life. Model-dependent realism applies not only to scientific models but also to the conscious and subconscious mental models we all create in order to interpret and understand the everyday world. There is no way to remove the observer—us—from our perception of the world, which is created through our sensory processing and through the way we think and reason. Our perception—and hence the observations upon which our theories are based—is not direct, but rather is shaped by a kind of lens, the interpretive structure of our human brains.
Model-dependent realism corresponds to the way we perceive objects. In vision, one's brain receives a series of signals down the optic nerve. Those signals do not constitute the sort of image you would accept on your television. There is a blind spot where the optic nerve attaches to the retina, and the only part of your field of vision with good resolution is a narrow area of about 1 degree of visual angle around the retina's center, an area the width of your thumb when held at arm's length. And so the raw data sent to the brain are like a badly pixelated picture with a hole in it. Fortunately, the human brain processes that data, combining the input from both eyes, filling in gaps on the assumption that the visual properties of neighboring locations are similar and interpolating. Moreover, it reads a two-dimensional array of data from the retina and creates from it the impression of three-dimensional space. The brain, in other words, builds a mental picture or model.
The brain is so good at model building that if people are fitted with glasses that turn the images in their eyes upside down, their brains, after a time, change the model so that they again see things the right way up. If the glasses are then removed, they see the world upside down for a while, then again adapt. This shows that what one means when one says "I see a chair" is merely that one has used the light scattered by the chair to build a mental image or model of the chair. If the model is upside down, with luck one's brain will correct it before one tries to sit on the chair.
Imagine, say, that you wanted to travel from New York to Madrid, two cities that are at almost the same latitude. If the earth were flat, the shortest route would be to head straight east. If you did that, you would arrive in Madrid after traveling 3,707 miles. But due to the earth's curvature, there is a path that on a flat map looks curved and hence longer, but which is actually shorter. You can get there in 3,605 miles if you follow the great-circle route, which is to first head northeast, then gradually turn east, and then southeast. The difference in distance between the two routes is due to the earth's curvature, and a sign of its non-Euclidean geometry. Airlines know this, and arrange for their pilots to follow great-circle routes whenever practical.
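A rough check of those figures is sketched below, using the haversine formula with approximate coordinates for the two cities; the exact mileages depend on the coordinates and earth radius assumed, so the results differ slightly from the numbers quoted above.

```python
import math

EARTH_RADIUS_MILES = 3959.0

lat_ny, lon_ny = 40.7, -74.0   # New York (approximate)
lat_md, lon_md = 40.4, -3.7    # Madrid (approximate)

def great_circle_miles(lat1, lon1, lat2, lon2, r=EARTH_RADIUS_MILES):
    """Haversine distance between two points on a sphere of radius r."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = p2 - p1
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Shortest route over the curved earth
print(round(great_circle_miles(lat_ny, lon_ny, lat_md, lon_md)))   # ~3580 miles

# "Straight east" route: stay on the shared parallel the whole way
mean_lat = math.radians((lat_ny + lat_md) / 2)
along_parallel = EARTH_RADIUS_MILES * math.cos(mean_lat) * math.radians(abs(lon_md - lon_ny))
print(round(along_parallel))                                       # ~3690 miles
```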
1. Gravity. This is the weakest of the four, but it is a long-range force and acts on everything in the universe as an attraction. This means that for large bodies the gravitational forces all add up and can dominate over all other forces.
2. Electromagnetism. This is also long-range and is much stronger than gravity, but it acts only on particles with an electric charge, being repulsive between charges of the same sign and attractive between charges of the opposite sign. This means the electric forces between large bodies cancel each other out, but on the scales of atoms and molecules they dominate. Electromagnetic forces are responsible for all of chemistry and biology.
3. Weak nuclear force. This causes radioactivity and plays a vital role in the formation of the elements in stars and the early universe. We don't, however, come into contact with this force in our everyday lives.
4. Strong nuclear force. This force holds together the protons and neutrons inside the nucleus of an atom. It also holds together the protons and neutrons themselves, which is necessary because they are made of still tinier particles, the quarks we mentioned in Chapter 3. The strong force is the energy source for the sun and nuclear power, but, as with the weak force, we don't have direct contact with it.
Feynman's graphical method provides a way of visualizing each term in the sum over histories. Those pictures, called Feynman diagrams, are one of the most important tools of modern physics. In QED the sum over all possible histories can be represented as a sum over Feynman diagrams like those below, which represent some of the ways it is possible for two electrons to scatter off each other through the electromagnetic force. In these diagrams the solid lines represent the electrons and the wavy lines represent photons. Time is understood as progressing from bottom to top, and places where lines join correspond to photons being emitted or absorbed by an electron. Diagram (A) represents the two electrons approaching each other, exchanging a photon, and then continuing on their way. That is the simplest way in which two electrons can interact electromagnetically, but we must consider all possible histories. Hence we must also include diagrams like (B). That diagram also pictures two lines coming in—the approaching electrons—and two lines going out—the scattered ones—but in this diagram the electrons exchange two photons before flying off. The diagrams pictured are only a few of the possibilities; in fact, there are an infinite number of diagrams, which must be mathematically accounted for.
Feynman diagrams aren't just a neat way of picturing and categorizing how interactions can occur. Feynman diagrams come with rules that allow you to read off, from the lines and vertices in each diagram, a mathematical expression. The probability, say, that the incoming electrons, with some given initial momentum, will end up flying off with some particular final momentum is then obtained by summing the contributions from each Feynman diagram. That can take some work, because, as we've said, there are an infinite number of them. Moreover, although the incoming and outgoing electrons are assigned a definite energy and momentum, the particles in the closed loops in the interior of the diagram can have any energy and momentum. That is important because in forming the Feynman sum one must sum not only over all diagrams but also over all those values of energy and momentum.
Feynman diagrams provided physicists with enormous help in visualizing and calculating the probabilities of the processes described by QED. But they did not cure one important ailment suffered by the theory: When you add the contributions from the infinite number of different histories, you get an infinite result. (If the successive terms in an infinite sum decrease fast enough, it is possible for the sum to be finite, but that, unfortunately, doesn't happen here.) In particular, when the Feynman diagrams are added up, the answer seems to imply that the electron has an infinite mass and charge. This is absurd, because we can measure the mass and charge and they are finite. To deal with these infinities, a procedure called renormalization was developed.
The process of renormalization involves subtracting quantities that are defined to be infinite and negative in such a way that, with careful mathematical accounting, the sum of the negative infinite values and the positive infinite values that arise in the theory almost cancel out, leaving a small remainder, the finite observed values of mass and charge. These manipulations might sound like the sort of things that get you a flunking grade on a school math exam, and renormalization is indeed, as it sounds, mathematically dubious. One consequence is that the values obtained by this method for the mass and charge of the electron can be any finite number. That has the advantage that physicists may choose the negative infinities in a way that gives the right answer, but the disadvantage that the mass and charge of the electron therefore cannot be predicted from the theory. But once we have fixed the mass and charge of the electron in this manner, we can employ QED to make many other very precise predictions, which all agree extremely closely with observation, so renormalization is one of the essential ingredients of QED. An early triumph of QED, for example, was the correct prediction of the so-called Lamb shift, a small change in the energy of one of the states of the hydrogen atom discovered in 1947.
The success of renormalization in QED encouraged attempts to look for quantum field theories describing the other three forces of nature. But the division of natural forces into four classes is probably artificial and a consequence of our lack of understanding. People have therefore sought a theory of everything that will unify the four classes into a single law that is compatible with quantum theory. This would be the holy grail of physics.
Whether M-theory exists as a single formulation or only as a network, we do know some of its properties. First, M-theory has eleven space-time dimensions, not ten. String theorists had long suspected that the prediction of ten dimensions might have to be adjusted, and recent work showed that one dimension had indeed been overlooked. Also, M-theory can contain not just vibrating strings but also point particles, two-dimensional membranes, three-dimensional blobs, and other objects that are more difficult to picture and occupy even more dimensions of space, up to nine. These objects are called p-branes (where p runs from zero to nine).
What about the enormous number of ways to curl up the tiny dimensions? In M-theory those extra space dimensions cannot be curled up in just any way. The mathematics of the theory restricts the manner in which the dimensions of the internal space can be curled. The exact shape of the internal space determines both the values of physical constants, such as the charge of the electron, and the nature of the interactions between elementary particles. In other words, it determines the apparent laws of nature. We say "apparent" because we mean the laws that we observe in our universe—the laws of the four forces, and the parameters such as mass and charge that characterize the elementary particles. But the more fundamental laws are those of M-theory.
The laws of M-theory therefore allow for different universes with different apparent laws, depending on how the internal space is curled. M-theory has solutions that allow for many different internal spaces, perhaps as many as 10^500, which means it allows for 10^500 different universes, each with its own laws. To get an idea how many that is, think about this: If some being could analyze the laws predicted for each of those universes in just one millisecond and had started working on it at the big bang, at present that being would have studied just 10^20 of them. And that's without coffee breaks.
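The comparison between 10^500 and 10^20 can be verified with a line of arithmetic. The sketch below assumes the 13.7-billion-year age quoted later in this text and a 365.25-day year.

```python
from math import log10

seconds_per_year = 365.25 * 24 * 3600        # about 3.16e7 seconds
age_ms = 13.7e9 * seconds_per_year * 1000    # about 4.3e20 milliseconds

# One universe analysed per millisecond since the big bang:
print(f"universes analysed: about 10^{log10(age_ms):.1f}")  # about 10^20.6
# ...out of roughly 10^500 possible internal spaces, so essentially none of them.
```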
That the universe is expanding was news to Einstein. But the possibility that the galaxies are moving away from each other had been proposed a few years before Hubble's papers on theoretical grounds arising from Einstein's own equations. In 1922, Russian physicist and mathematician Alexander Friedmann investigated what would happen in a model universe based upon two assumptions that greatly simplified the mathematics: that the universe looks identical in every direction, and that it looks that way from every observation point. We know that Friedmann's first assumption is not exactly true—the universe fortunately is not uniform everywhere! If we gaze upward in one direction, we might see the sun; in another, the moon or a colony of migrating vampire bats. But the universe does appear to be roughly the same in every direction when viewed on a scale that is far larger—larger even than the distance between galaxies. It is something like looking down at a forest. If you are close enough, you can make out individual leaves, or at least trees, and the spaces between them. But if you are so high up that if you hold out your thumb it covers a square mile of trees, the forest will appear to be a uniform shade of green. We would say that, on that scale, the forest is uniform.
Based on his assumptions Friedmann was able to discover a solution to Einstein's equations in which the universe expanded in the manner that Hubble would soon discover to be true. In particular, Friedmann's model universe begins with zero size and expands until gravitational attraction slows it down, and eventually causes it to collapse in upon itself. (There are, it turns out, two other types of solutions to Einstein's equations that also satisfy the assumptions of Friedmann's model, one corresponding to a universe in which the expansion continues forever, though it does slow a bit, and another to a universe in which the rate of expansion slows toward zero, but never quite reaches it.) Friedmann died a few years after producing this work, and his ideas remained largely unknown until after Hubble's discovery. But in 1927 a professor of physics and Roman Catholic priest named Georges Lemaitre proposed a similar idea: If you trace the history of the universe backward into the past, it gets tinier and tinier until you come upon a creation event—what we now call the big bang.
Not everyone liked the big bang picture. In fact, the term "big bang" was coined in 1949 by Cambridge astrophysicist Fred Hoyle, who believed in a universe that expanded forever, and meant the term as a derisive description. The first direct observations supporting the idea didn't come until 1965, with the discovery that there is a faint background of microwaves throughout space. This cosmic microwave background radiation, or CMBR, is the same as that in your microwave oven, but much less powerful. You can observe the CMBR yourself by tuning your television to an unused channel—a few percent of the snow you see on the screen will be caused by it. The radiation was discovered by accident by two Bell Labs scientists trying to eliminate such static from their microwave antenna. At first they thought the static might be coming from the droppings of pigeons roosting in their apparatus, but it turned out their problem had a more interesting origin—the CMBR is radiation left over from the very hot and dense early universe that would have existed shortly after the big bang. As the universe expanded, it cooled until the radiation became just the faint remnant we now observe. At present these microwaves could heat your food to only about -270 degrees Centigrade —3 degrees above absolute zero, and not very useful for popping corn.
Astronomers have also found other fingerprints supporting the big bang picture of a hot, tiny early universe. For example, during the first minute or so, the universe would have been hotter than the center of a typical star. During that period the entire universe would have acted as a nuclear fusion reactor. The reactions would have ceased when the universe expanded and cooled sufficiently, but the theory predicts that this should have left a universe composed mainly of hydrogen, but also about 23 percent helium, with traces of lithium (all heavier elements were made later, inside stars). The calculation is in good accordance with the amounts of helium, hydrogen, and lithium we observe.
Our very existence imposes rules determining from where and at what time it is possible for us to observe the universe. That is, the fact of our being restricts the characteristics of the kind of environment in which we find ourselves. That principle is called the weak anthropic principle. (We'll see shortly why the adjective "weak" is attached.) A better term than "anthropic principle" would have been "selection principle," because the principle refers to how our own knowledge of our existence imposes rules that select, out of all the possible environments, only those environments with the characteristics that allow life.
Though it may sound like philosophy, the weak anthropic principle can be used to make scientific predictions. For example, how old is the universe? As we'll soon see, for us to exist the universe must contain elements such as carbon, which are produced by cooking lighter elements inside stars. The carbon must then be scattered through space in a supernova explosion, and eventually condense as part of a planet in a new-generation solar system. In 1961 physicist Robert Dicke argued that the process takes about 10 billion years, so our being here means that the universe must be at least that old. On the other hand, the universe cannot be much older than 10 billion years, since in the far future all the fuel for stars will have been used up, and we require hot stars for our sustenance. Hence the universe must be about 10 billion years old. That is not an extremely precise prediction, but it is true—according to current data the big bang occurred about 13.7 billion years ago.
As was the case with the age of the universe, anthropic predictions usually produce a range of values for a given physical parameter rather than pinpointing it precisely. That's because our existence, while it might not require a particular value of some physical parameter, often is dependent on such parameters not varying too far from where we actually find them. We furthermore expect that the actual conditions in our world are typical within the anthropically allowed range. For example, if only modest orbital eccentricities, say between zero and 0.5, will allow life, then an eccentricity of 0.1 should not surprise us because among all the planets in the universe, a fair percentage probably have orbits with eccentricities that small. But if it turned out that the earth moved in a near-perfect circle, with eccentricity, say, of 0.00000000001, that would make the earth a very special planet indeed, and motivate us to try to explain why we find ourselves living in such an anomalous home. That idea is sometimes called the principle of mediocrity.
The lucky coincidences pertaining to the shape of planetary orbits, the mass of the sun, and so on are called environmental because they arise from the serendipity of our surroundings and not from a fluke in the fundamental laws of nature. The age of the universe is also an environmental factor, since there is an earlier and a later time in the history of the universe, but we must live in this era because it is the only era conducive to life. Environmental coincidences are easy to understand because ours is only one cosmic habitat among many that exist in the universe, and we obviously must exist in a habitat that supports life.
The weak anthropic principle is not very controversial. But there is a stronger form that we will argue for here, although it is regarded with disdain among some physicists. The strong anthropic principle suggests that the fact that we exist imposes constraints not just on our environment but on the possible form and content of the laws of nature themselves. The idea arose because it is not only the peculiar characteristics of our solar system that seem oddly conducive to the development of human life but also the characteristics of our entire universe, and that is much more difficult to explain.
The tale of how the primordial universe of hydrogen, helium, and a bit of lithium evolved to a universe harboring at least one world with intelligent life like us is a tale of many chapters. As we mentioned earlier, the forces of nature had to be such that heavier elements—especially carbon—could be produced from the primordial elements, and remain stable for at least billions of years. Those heavy elements were formed in the furnaces we call stars, so the forces first had to allow stars and galaxies to form. Those grew from the seeds of tiny inhomogeneities in the early universe, which was almost completely uniform but thankfully contained density variations of about 1 part in 100,000. However, the existence of stars, and the existence inside those stars of the elements we are made of, is not enough. The dynamics of the stars had to be such that some would eventually explode, and, moreover, explode precisely in a way that could disperse the heavier elements through space. In addition, the laws of nature had to dictate that those remnants could recondense into a new generation of stars, these surrounded by planets incorporating the newly formed heavy elements. Just as certain events on early earth had to occur in order to allow us to develop, so too was each link of this chain necessary for our existence. But in the case of the events resulting in the evolution of the universe, such developments were governed by the balance of the fundamental forces of nature, and it is those whose interplay had to be just right in order for us to exist.
Though one might imagine "living" organisms such as intelligent computers produced from other elements, such as silicon, it is doubtful that life could have spontaneously evolved in the absence of carbon. The reasons for that are technical but have to do with the unique manner in which carbon bonds with other elements. Carbon dioxide, for example, is gaseous at room temperature, and biologically very useful. Since silicon is the element directly below carbon on the periodic table, it has similar chemical properties. However, silicon dioxide, quartz, is far more useful in a rock collection than in an organism's lungs. Still, perhaps lifeforms could evolve that feast on silicon and rhythmically twirl their tails in pools of liquid ammonia. Even that type of exotic life could not evolve from just the primordial elements, for those elements can form only two stable compounds, lithium hydride, which is a colorless crystalline solid, and hydrogen gas, neither of them a compound likely to reproduce or even to fall in love. Also, the fact remains that we are a carbon life-form, and that raises the issue of how carbon, whose nucleus contains six protons, and the other heavy elements in our bodies were created.
The first step occurs when older stars start to accumulate helium, which is produced when two hydrogen nuclei collide and fuse with each other. This fusion is how stars create the energy that warms us. Two helium atoms can in turn collide to form beryllium, an atom whose nucleus contains four protons. Once beryllium is formed, it could in principle fuse with a third helium nucleus to form carbon. But that doesn't happen, because the isotope of beryllium that is formed decays almost immediately back into helium nuclei.
The situation changes when a star starts to run out of hydrogen. When that happens the star's core collapses until its central temperature rises to about 100 million degrees Kelvin. Under those conditions, nuclei encounter each other so often that some beryllium nuclei collide with a helium nucleus before they have had a chance to decay. Beryllium can then fuse with helium to form an isotope of carbon that is stable. But that carbon is still a long way from forming ordered aggregates of chemical compounds of the type that can enjoy a glass of Bordeaux, juggle flaming bowling pins, or ask questions about the universe. For beings such as humans to exist, the carbon must be moved from inside the star to friendlier neighborhoods. That, as we've said, is accomplished when the star, at the end of its life cycle, explodes as a supernova, expelling carbon and other heavy elements that later condense into a planet.
This process of carbon creation is called the triple alpha process because "alpha particle" is another name for the nucleus of the isotope of helium involved, and because the process requires that three of them (eventually) fuse together. The usual physics predicts that the rate of carbon production via the triple alpha process ought to be quite small. Noting this, in 1952 Hoyle predicted that the sum of the energies of a beryllium nucleus and a helium nucleus must be almost exactly the energy of a certain quantum state of the isotope of carbon formed, a situation called a resonance, which greatly increases the rate of a nuclear reaction. At the time, no such energy level was known, but based on Hoyle's suggestion, William Fowler at Caltech sought and found it, providing important support for Hoyle's views on how complex nuclei were created.
Hoyle wrote, "I do not believe that any scientist who examined the evidence would fail to draw the inference that the laws of nuclear physics have been deliberately designed with regard to the consequences they produce inside the stars." At the time no one knew enough nuclear physics to understand the magnitude of the serendipity that resulted in these exact physical laws. But in investigating the validity of the strong anthropic principle, in recent years physicists began asking themselves what the universe would have been like if the laws of nature were different. Today we can create computer models that tell us how the rate of the triple alpha reaction depends upon the strength of the fundamental forces of nature. Such calculations show that a change of as little as 0.5 percent in the strength of the strong nuclear force, or 4 percent in the electric force, would destroy either nearly all carbon or all oxygen in every star, and hence the possibility of life as we know it. Change those rules of our universe just a bit, and the conditions for our existence disappear!
If one assumes that a few hundred million years in stable orbit are necessary for planetary life to evolve, the number of space dimensions is also fixed by our existence. That is because, according to the laws of gravity, it is only in three dimensions that stable elliptical orbits are possible. Circular orbits are possible in other dimensions, but those, as Newton feared, are unstable. In any but three dimensions even a small disturbance, such as that produced by the pull of the other planets, would send a planet off its circular orbit and cause it to spiral either into or away from the sun, so we would either burn up or freeze. Also, in more than three dimensions the gravitational force between two bodies would decrease more rapidly than it does in three dimensions. In three dimensions the gravitational force drops to 1/4 of its value if one doubles the distance. In four dimensions it would drop to 1/8, in five dimensions it would drop to 1/16, and so on. As a result, in more than three dimensions the sun would not be able to exist in a stable state with its internal pressure balancing the pull of gravity. It would either fall apart or collapse to form a black hole, either of which could ruin your day. On the atomic scale, the electrical forces would behave in the same way as gravitational forces. That means the electrons in atoms would either escape or spiral into the nucleus. In neither case would atoms as we know them be possible.
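The factors 1/4, 1/8, and 1/16 quoted above follow from the way an inverse-square-style law generalizes to d space dimensions, with the force falling off as 1/r^(d-1). The snippet below is not from the original text; it simply reproduces that arithmetic.

```python
# Factor by which a 1/r**(d-1) force drops when the separation is doubled.
def force_ratio_on_doubling(dimensions: int) -> float:
    return 1 / 2 ** (dimensions - 1)

for d in (3, 4, 5):
    print(d, force_ratio_on_doubling(d))   # 0.25, 0.125, 0.0625
```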
If the total energy of the universe must always remain zero, and it costs energy to create a body, how can a whole universe be created from nothing? That is why there must be a law like gravity. Because gravity is attractive, gravitational energy is negative: One has to do work to separate a gravitationally bound system, such as the earth and moon. This negative energy can balance the positive energy needed to create matter, but it's not quite that simple. The negative gravitational energy of the earth, for example, is less than a billionth of the positive energy of the matter particles the earth is made of. A body such as a star will have more negative gravitational energy, and the smaller it is (the closer the different parts of it are to each other), the greater this negative gravitational energy will be. But before it can become greater than the positive energy of the matter, the star will collapse to a black hole, and black holes have positive energy. That's why empty space is stable. Bodies such as stars or black holes cannot just appear out of nothing. But a whole universe can.
Because gravity shapes space and time, it allows space-time to be locally stable but globally unstable. On the scale of the entire universe, the positive energy of the matter can be balanced by the negative gravitational energy, and so there is no restriction on the creation of whole universes. Because there is a law like gravity, the universe can and will create itself from nothing in the manner described in Chapter 6. Spontaneous creation is the reason there is something rather than nothing, why the universe exists, why we exist. It is not necessary to invoke God to light the blue touch paper and set the universe going.
Why are the fundamental laws as we have described them? The ultimate theory must be consistent and must predict finite results for quantities that we can measure. We've seen that there must be a law like gravity, and we saw in Chapter 5 that for a theory of gravity to predict finite quantities, the theory must have what is called supersymmetry between the forces of nature and the matter on which they act. M-theory is the most general supersymmetric theory of gravity. For these reasons M-theory is the only candidate for a complete theory of the universe. If it is finite—and this has yet to be proved—it will be a model of a universe that creates itself. We must be part of this universe, because there is no other consistent model.
M-theory is the unified theory Einstein was hoping to find. The fact that we human beings—who are ourselves mere collections of fundamental particles of nature—have been able to come this close to an understanding of the laws governing us and our universe is a great triumph. But perhaps the true miracle is that abstract considerations of logic lead to a unique theory that predicts and describes a vast universe full of the amazing variety that we see. If the theory is confirmed by observation, it will be the successful conclusion of a search going back more than 3,000 years. We will have found the grand design.
M-theory is not a theory in the usual sense. It is a whole family of different theories, each of which is a good description of observations only in some range of physical situations. It is a bit like a map. As is well known, one cannot show the whole of the earth's surface on a single map. The usual Mercator projection used for maps of the world makes areas appear larger and larger in the far north and south and doesn't cover the North and South Poles. To faithfully map the entire earth, one has to use a collection of maps, each of which covers a limited region. The maps overlap each other, and where they do, they show the same landscape. M-theory is similar. The different theories in the M-theory family may look very different, but they can all be regarded as aspects of the same underlying theory. They are versions of the theory that are applicable only in limited ranges—for example, when certain quantities such as energy are small. Like the overlapping maps in a Mercator projection, where the ranges of different versions overlap, they predict the same phenomena. But just as there is no flat map that is a good representation of the earth's entire surface, there is no single theory that is a good representation of observations in all situations.
By examining the model universes we generate when the theories of physics are altered in certain ways, one can study the effect of changes to physical law in a methodical manner. It turns out that it is not only the strengths of the strong nuclear force and the electromagnetic force that are made to order for our existence. Most of the fundamental constants in our theories appear fine tuned in the sense that if they were altered by only modest amounts, the universe would be qualitatively different, and in many cases unsuitable for the development of life. For example, if the other nuclear force, the weak force, were much weaker, in the early universe all the hydrogen in the cosmos would have turned to helium, and hence there would be no normal stars; if it were much stronger, exploding supernovas would not eject their outer envelopes, and hence would fail to seed interstellar space with the heavy elements planets require to foster life. If protons were 0.2 percent heavier, they would decay into neutrons, destabilizing atoms. If the sum of the masses of the types of quark that make up a proton were changed by as little as 10 percent, there would be far fewer of the stable atomic nuclei of which we are made; in fact, the summed quark masses seem roughly optimized for the existence of the largest number of stable nuclei. | http://memexplex.com/Reference/id=475 | 13
111 | The roots of a tree are usually under the ground. One case for which this is not true is the mangrove tree. A single tree has many roots. The roots carry food and water from the ground through the trunk and branches to the leaves of the tree. They can also breathe in air. Sometimes, roots are specialized into aerial roots, which can also provide support, as is the case with the Banyan tree.
The trunk is the main body of the tree. The trunk is covered with bark which protects it from damage. Branches grow from the trunk. They spread out so that the leaves can get more sunlight.
The leaves of a tree are green most of the time, but they can come in many colours, shapes and sizes. The leaves take in sunlight and use water and food from the roots to make the tree grow, and to reproduce.
Using sunlight, trees and shrubs take in water and carbon dioxide, form sugars, and give out oxygen. This is the opposite of what animals do in respiration. Plants also do some respiration using oxygen the way animals do. They need oxygen as well as carbon dioxide to live.
Parts of trees
The parts of a tree are the roots, trunk(s), branches, twigs and leaves. Tree stems are mainly made of support and transport tissues (xylem and phloem). Wood consists of xylem cells, and bark is made of phloem and other tissues external to the vascular cambium.
Growth of the trunk
As a tree grows, it may produce growth rings as new wood is laid down around the old wood. It may live to be a thousand years old. In areas with seasonal climate, wood produced at different times of the year may alternate light and dark rings. In temperate climates, and tropical climates with a single wet-dry season alternation, the growth rings are annual, each pair of light and dark rings being one year of growth. In areas with two wet and dry seasons each year, there may be two pairs of light and dark rings each year; and in some (mainly semi-desert regions with irregular rainfall), there may be a new growth ring with each rainfall.
In tropical rainforest regions, with constant year-round climate, growth is continuous. Growth rings are not visible and there is no change in the wood texture. In species with annual rings, these rings can be counted to find the age of the tree. This way, wood taken from trees in the past can be dated, because the patterns of ring thickness are very distinctive. This is dendrochronology. Very few tropical trees can be accurately dated in this manner.
The roots of a tree are generally down in earth, providing anchorage for the parts above ground, and taking in water and nutrients from the soil. Most trees need help from a fungus for better uptake of nutrients: this is mycorrhiza. Most of a tree's biomass comes from carbon dioxide absorbed from the atmosphere (see photosynthesis). Above ground, the trunk gives height to the leaf-bearing branches, competing with other plant species for sunlight. In many trees, the arrangement of the branches improves the exposure of the leaves to sunlight.
Not all trees have all the organs or parts as mentioned above. For example, most palm trees are not branched, the saguaro cactus of North America has no functional leaves, tree ferns do not produce bark, etc. Based on their general shape and size, all of these are nonetheless generally regarded as trees. Trees can vary very much. A plant form that is similar to a tree, but generally having smaller, multiple trunks and/or branches that arise near the ground, is called a shrub (or a bush). Even though that is true, no precise differentiation between shrubs and trees is possible. Given their small size, bonsai plants would not technically be 'trees', but one should not confuse reference to the form of a species with the size or shape of individual specimens. A spruce seedling does not fit the definition of a tree, but all spruces are trees.
The tree form has evolved separately in classes of plants that are not related, in response to similar problems (for the tree). With about 100,000 types of trees, the number of tree types in the whole world might be one fourth of all living plant types. Most tree species grow in tropical parts of the world, and many of these areas have not yet been surveyed by botanists (they study plants), so species differences and ranges are not well understood.
The earliest trees were tree ferns, horsetails and lycophytes, which grew in forests in the Carboniferous period; tree ferns still survive, but the only surviving horsetails and lycophytes are not of tree form. Later, in the Triassic Period, conifers, ginkgos, cycads and other gymnosperms appeared, and subsequently flowering plants in the Cretaceous period. Most species of trees today are flowering plants (Angiosperms) and conifers.
A small group of trees growing together is called a grove or copse, and a landscape covered by a dense growth of trees is called a forest. Several biotopes are defined largely by the trees that inhabit them; examples are rainforest and taiga (see ecozones). A landscape of trees scattered or spaced across grassland (usually grazed or burned over periodically) is called a savanna. A forest of great age is called old growth forest or ancient woodland (in the UK). A very young tree is called a sapling.
Stoutest trees
The stoutest living single-trunk species in diameter is the African Baobab: 15.9 m (52 ft), Glencoe Baobab (measured near the ground), Limpopo Province, South Africa. This tree split up in November 2009 and now the stoutest baobab could be Sunland Baobab (South Africa) with diameter 10.64 m and circumference of 33.4 m.
Some trees develop multiple trunks (whether from an individual tree or multiple trees) which grow together. The Sacred Fig is a notable example of this, forming additional 'trunks' by growing adventitious roots down from the branches, which then thicken up when the root reaches the ground to form new trunks; a single Sacred Fig tree can have hundreds of such trunks.
Oldest trees
The oldest trees are determined by growth rings, which can be seen if the tree is cut down or in cores taken from the edge to the center of the tree. Correct determination is only possible for trees which make growth rings, generally those which occur in seasonal climates; trees in uniform non-seasonal tropical climates are always growing and do not have distinct growth rings. It is also only possible for trees which are solid to the center of the tree; many very old trees become hollow as the dead heartwood decays away. For some of these species, age estimates have been made on the basis of extrapolating current growth rates, but the results are usually little better than guesses or wild speculation. White proposes a method of estimating the age of large and veteran trees in the United Kingdom through the correlation between a tree's stem diameter, growth character and age.
The verified oldest measured ages are:
- Great Basin Bristlecone Pine (Methuselah) Pinus longaeva: 4,844 years
- Alerce: 3,622 years
- Giant Sequoia: 3,266 years
- Sugi: 3,000 years
- Huon-pine: 2,500 years
Other species suspected of reaching exceptional age include European Yew Taxus baccata (probably over 2,000 years) and Western Redcedar Thuja plicata. The oldest known European Yew is the Llangernyw Yew in the Churchyard of Llangernyw village in North Wales which is estimated to be between 4,000 and 5,000 years old.
The oldest reported age for an angiosperm tree is 2293 years for the Sri Maha Bodhi Sacred Fig (Ficus religiosa) planted in 288 BC at Anuradhapura, Sri Lanka; this is said to be the oldest human-planted tree with a known planting date.
Tree value estimation
Studies have shown that trees contribute as much as 27% of the appraised land value in certain markets.
These most likely use diameter measured at breast height (dbh), 4.5 feet (140 cm) above ground—not the larger base diameter. A general model for any year and diameter is:
assuming 2.2% inflation per year.
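The model formula itself did not survive in this copy of the text, so the sketch below only illustrates the general shape of such an appraisal model: value scaling with the square of the trunk diameter at breast height and inflated at 2.2% per year from a 1985 base year (the year of the magazine source cited below). The dollar coefficient is a hypothetical placeholder, not the published value.

```python
BASE_VALUE_PER_SQ_INCH = 30.0   # hypothetical dollars per square inch of dbh
BASE_YEAR = 1985
INFLATION = 0.022               # 2.2% per year, from the text

def appraised_tree_value(dbh_inches: float, year: int) -> float:
    """Value grows with dbh squared and is inflated forward to `year`."""
    return BASE_VALUE_PER_SQ_INCH * dbh_inches ** 2 * (1 + INFLATION) ** (year - BASE_YEAR)

print(round(appraised_tree_value(10.0, 2009), 2))  # value of a 10 inch dbh tree in 2009
```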
Tree climbing
Tree climbing is an activity where one moves around in the crown of trees.
Use of a rope, helmet, and harness are the minimum requirements to ensure the safety of the climber. Other equipment can also be used depending on the experience and skill of the tree climber. Some tree climbers take special hammocks called "Treeboats" and Portaledges with them into the tree canopies where they can enjoy a picnic or nap, or spend the night.
Tree climbing is an "on rope" activity that puts together many different tricks and gear originally derived from rock climbing and caving. These techniques are used to climb trees for many purposes, including tree care (arborists), animal rescue, recreation, sport, research, and activism.
The three major (big) sources of tree damage are biotic (from living sources), abiotic (from non-living sources), and deforestation (cutting trees down). Biotic sources would include insects which might bore into the tree, deer which might rub bark off the trunk, or fungi, which might attach themselves to the tree.
Abiotic sources include lightning, vehicle impacts, and construction activities. Construction activities can involve a number of damage sources, including grade changes that prevent aeration to roots, spills involving toxic chemicals such as cement or petroleum products, or severing of branches or roots. People can damage trees also.
Both damage sources can result in trees becoming dangerous, and the term "hazard trees" is commonly used by arborists, and industry groups such as power line operators. Hazard trees are trees which due to disease or other factors are more susceptible to falling during windstorms, or having parts of the tree fall.
Working out how much danger a tree presents is based on a method called the Quantified Tree Risk Assessment.
Trees are similar to people. Both can take a lot of some types of damage and survive, but even small amounts of certain types of trauma can result in death. Arborists are very aware that established trees will not tolerate any appreciable disturbance of the root system. Even though that is true, most people and construction professionals do not realize how easily a tree can be killed.
One reason for confusion about tree damage from construction involves the dormancy of trees during winter. Another factor is that trees may not show symptoms of damage until 24 months or longer after damage has occurred. For that reason, persons who do not know about caring for trees may not link the actual cause with the later damaged effect.
Various organizations have long recognized the importance of construction activities that impact tree health. The impacts are important because they can result in monetary losses due to tree damage and resultant remediation or replacement costs, as well as violation of government ordinances or community or subdivision restrictions.
As a result, protocols (standard ways) for tree management prior to, during and after construction activities are well established, tested and refined (changed). These basic steps are involved:
- Review of the construction plans
- Development of the related tree inventory
- Application of standard construction tree management protocols
- Assessment of potential for expected tree damages
- Development of a tree protection plan (providing for pre-, concurrent, and post construction damage prevention and remediation steps)
- Development of a remediation plan
- Implementation of tree protection zones (TPZs)
- Assessment of construction tree damage, post-construction
- Implementation of the remediation plan
International standards are uniform in analyzing damage potential and sizing TPZs (tree protection zones) to minimize damage. For mature to fully mature trees, the accepted TPZ comprises a 1.5-foot clearance for every 1 inch diameter of trunk. That means for a 10 inch tree, the TPZ would extend 15 feet in all directions from the base of the trunk at ground level.
For young or small trees with minimal crowns (and trunks less than 4 inches in diameter) a TPZ equal to 1 foot for every inch of trunk diameter may be good enough. That means for a 3 inch tree, the TPZ would extend 3 feet in all directions from the base of the trunk at ground level. Detailed information on TPZs and related topics is available at minimal cost from organizations like the International Society for Arboriculture.
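The two rules of thumb above translate directly into a small helper; the 4 inch cutoff between the two rates is taken from the text, and anything else here is just illustrative.

```python
# Tree protection zone (TPZ) radius from trunk diameter: 1.5 ft per inch of
# diameter for mature trees, 1 ft per inch for small trees under 4 inches.
def tpz_radius_feet(trunk_diameter_inches: float) -> float:
    feet_per_inch = 1.0 if trunk_diameter_inches < 4.0 else 1.5
    return feet_per_inch * trunk_diameter_inches

print(tpz_radius_feet(10.0))  # 15.0 ft, matching the worked example for a 10 inch tree
print(tpz_radius_feet(3.0))   # 3.0 ft, matching the worked example for a 3 inch tree
```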
Trees in culture
The tree has always been a cultural symbol. Common icons are the World tree, for instance Yggdrasil, and the tree of life. The tree is often used to represent nature or the environment itself. A common mistake (wrong thing) is that trees get most of their mass from the ground. In fact, 99% of a tree's mass comes from the air.
Wishing trees
A Wish Tree (or wishing tree) is a single tree, usually distinguished by species, position or appearance, which is used as an object of wishes and offerings. Such trees are identified as possessing a special religious or spiritual value. By tradition, believers make votive offerings in order to gain from that nature spirit, saint or goddess fulfillment of a wish.
Tree worship
Tree worship refers to the tendency of many societies in all of history to worship or otherwise mythologize trees. Trees have played a very important role in many of the world's mythologies and religions, and have been given deep and sacred meanings throughout the ages. Human beings, seeing the growth and death of trees, the elasticity of their branches, the sensitiveness and the annual (every year) decay and revival of their foliage, see them as powerful symbols of growth, decay and resurrection. The most ancient cross-cultural symbolic representation of the universe's construction is the 'world tree'.
World tree
The tree, with its branches reaching up into the sky, and roots deep into the earth, can be seen to dwell in three worlds - a link between heaven, the earth, and the underworld, uniting above and below. It is also both a feminine symbol, bearing sustenance; and a masculine, phallic symbol - another union.
For this reason, many mythologies around the world have the concept of the World tree, a great tree that acts as an Axis mundi, holding up the cosmos, and providing a link between the heavens, earth and underworld. In European mythology the best known example is the tree Yggdrasil from Norse mythology.
The world tree is also an important part of Mesoamerican mythologies, where it represents the four cardinal directions (north, south, east, and west). The concept of the world tree is also closely linked to the motif of the Tree of life.
In literature
In literature, a mythology was notably developed by J.R.R. Tolkien, his Two Trees of Valinor playing a central role in his 1964 Tree and Leaf. William Butler Yeats describes a "holy tree" in his poem The Two Trees (1893).
List of trees
There are many types of trees. Here is a list of some of them:
- Coconut Tree
- Cottonwood Tree
- Gum tree
- Horse chestnut
- Redwood Tree
- Rubber Tree
Related pages
- Wattieza is the earliest tree in the fossil record.
- "Mangrove Trees". Naturia.per.sg. http://www.naturia.per.sg/buloh/plants/mangrove_trees.htm.
- Mirov, N.T. 1967. The genus Pinus. Ronald Press.
- "TreeBOL project". http://www.talkbx.com/2008/05/02/scientists-to-capture-tree-dna-worldwide/#more-835. Retrieved 2008-07-11.
- Friis, Ib, and Henrik Balslev. 2005. Plant diversity and complexity patterns: local, regional, and global dimensions : proceedings of an international symposium held at the Royal Danish Academy of Sciences and Letters in Copenhagen, Denmark, 25–28 May 2003. Biologiske skrifter, 55. Copenhagen: Royal Danish Academy of Sciences and Letters. pp 57-59.
- "Gymnosperm Database: Sequoia sempervirens". http://www.conifers.org/cu/se/index.htm. Retrieved 2007-06-10. "Hyperion, Redwood National Park, CA, 115.55 m"
- "List of Champion Trees published for comment, 2005, South African Department of Water Affairs and Forestry". http://www2.dwaf.gov.za/dwaf/download.asp?f=4148___list+of+proposed+Champion+trees.pdf&docId=4148. Retrieved 2010-01-18.
- White J. 1990. Estimating the age of large and veteran trees in Britain. Forestry Commission Edinburgh.
- Gymnosperm Database: How old is that tree?. Retrieved on 2008-04-17.
- Suzuki E. 1997. The dynamics of old Cryptomeria japonica forest on Yakushima Island. Tropics 6(4): 421–428. online
- Harte J. 1996. How old is that old yew? At the Edge 4: 1-9. Available online
- Kinmonth F. 2006. Ageing the yew - no core, no curve? International Dendrology Society Yearbook 2005: 41-46 ISSN 0307-332X
- "Protecting Existing Trees on Building Sites" p.4 published by the City of Raleigh, North Carolina, March 1989, Reprinted February 2000
- "How Valuable Are Your Trees" by Gary Moll, April, 1985, American Forests Magazine
- based on 1985 to 2009, using NASA inflation calculator
- "Benefits of Tree Climbing". http://www.treeclimbing.com/index.php?option=com_content&view=category&layout=blog&id=17&Itemid=140.
- Wiseman, P. Eric 2008. Integrated pest management tactics. Continuing Education Unit, International Arboricultural Society Vol 17.
- Ellison M.J. 2005 Quantified Tree Risk Assessment Used in the Management of Amenity Trees. Journal Arboric. International Society of Arboriculture, Savoy, Illinois. 31:2 57-65
- Schoeneweiss, D.F. "Prevention and treatment of construction damage", Journal of Arborculture 8:169
- Mountfort, Paul Rhys (2003). Nordic runes: understanding, casting, and interpreting the ancient Viking oracle. Inner Traditions / Bear & Company. p. 41. ISBN 9780892810932. http://books.google.co.in/books?id=_3B7EmvAqngC.
- Jonathan Drori on what we think we know | Video on TED.com
Other websites
| http://simple.wikipedia.org/wiki/Trees | 13
444 | In physics, a force is the influence that one physical object has on another that causes (unless offset by an equal and opposite force) a change in motion. Forces are commonly thought of as things that "push" or "pull" an object. A force has both magnitude and direction, making it a vector quantity.
Modern understanding of how force works is rooted in the mathematical "laws of motion" developed by Isaac Newton approximately 300 years ago.
Newton's formulation states that the net force on a body is equal to its change in momentum with time. For a body with fixed mass, this reduces to the net force equaling mass times acceleration, the change in velocity over time. Because velocity is a vector, it can change in two ways: a change in magnitude and/or a change in direction.
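As a minimal numerical illustration of that statement (not part of the original article), the helpers below apply F = m * a both ways for a fixed-mass body, with made-up values.

```python
# Newton's second law for fixed mass: F = m * a, with a = dv / dt.
def acceleration(net_force_newtons: float, mass_kg: float) -> float:
    return net_force_newtons / mass_kg

def net_force(mass_kg: float, dv_m_per_s: float, dt_s: float) -> float:
    """Force needed to change velocity by dv over time dt (fixed mass)."""
    return mass_kg * (dv_m_per_s / dt_s)

print(acceleration(10.0, 2.0))   # 5.0 m/s^2 for a 10 N force on a 2 kg body
print(net_force(2.0, 5.0, 1.0))  # 10.0 N to give the same body 5 m/s each second
```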
Forces which do not act uniformly on all parts of a body will also cause mechanical stresses, which cause deformation of matter. While mechanical stress can remain embedded in a solid object, gradually deforming it, mechanical stress in a fluid determines changes in its pressure and volume.
There are four "fundamental" forces in nature: gravity, electromagnetic, strong, and weak forces. In day-to-day experience, these fundamental forces manifest themselves in ways that are labeled as normal, friction, tension, torque, elastic, centripetal, or by other names. These are often referred to as "non-fundamental" forces.
In physics, the consideration of forces acting on objects is referred to as "mechanics." The branch of mechanics that deals with stationary objects is known as "statics," while the branch of mechanics that deals with moving objects is known as "dynamics." In contrast, the consideration of the motion of objects without reference to the cause of that motion is known as "kinematics."
Classical mechanics, which handles objects much larger than atoms and moving much slower than the speed of light, is rooted in Newton's laws of motion. At speeds close to that of light, Einstein's special and general theories of relativity become significant and eventually dominant over classical formulations of mechanics. Quantum mechanics, developed in the 1920s, applies at atomic and subatomic scales.
Related concepts include thrust - any force which increases the velocity of the object, drag - any force which decreases the velocity of any object, and torque - the tendency of a force to cause changes in rotational speed about an axis.
Since antiquity, the concept of force has been recognized as integral to the functioning of simple machines (levers, wheels and axles, pulleys, inclined planes, wedges, screws). The mechanical advantage given by a simple machine allowed for less force to be used in exchange for that force acting over a greater distance. Analysis of the characteristics of forces ultimately culminated in the work of Archimedes (287 BC – 212 BC), who was especially famous for formulating a treatment of buoyant forces inherent in fluids.
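A sketch of the trade just described, assuming an ideal frictionless lever; the numbers are made up for illustration, not drawn from the article.

```python
# Ideal lever: a smaller effort force, applied over a proportionally longer
# distance, balances the same load (force * arm length is equal on both sides).
def effort_force(load_n: float, load_arm_m: float, effort_arm_m: float) -> float:
    return load_n * load_arm_m / effort_arm_m

load = 600.0                          # newtons
print(effort_force(load, 0.5, 2.0))   # 150.0 N: a 4x mechanical advantage,
                                      # paid for by moving 4x the distance
```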
Aristotle (384 BC – 322 BC) provided a philosophical discussion of the concept of a force as an integral part of Aristotelian cosmology. In Aristotle's view, the natural world held four elements (earth, water, air, fire) that existed in "natural states". Aristotle believed that it was the natural state of objects with mass on Earth, such as the elements water and earth, to be motionless on the ground and that they tended towards that state if left alone. He distinguished between the innate tendency of objects to find their "natural place" (e.g., for heavy bodies to fall), which led to "natural motion", and unnatural or forced motion, which required continued application of a force.
This theory, based on the everyday experience of how objects move, such as the constant application of a force needed to keep a cart moving, had conceptual trouble accounting for the behavior of projectiles, such as the flight of arrows. The place where forces were applied to projectiles was only at the start of the flight, and while the projectile sailed through the air, no discernible force acts on it. Aristotle was aware of this problem and proposed that the air displaced through the projectile's path provided the needed force to continue the projectile moving. This explanation demands that air is needed for projectiles and that, for example, in a vacuum, no projectile would move after the initial push. Additional problems with the explanation include the fact that air resists the motion of the projectiles.
Aristotelian physics began facing criticism in Medieval science, first by John Philoponus (490–570 AD) and then by other physicists from the 11th century, when Avicenna's The Book of Healing introduced an alternative theory of mayl, where motion is a result of an inclination (mayl) transferred to the projectile by the thrower, projectile motion in a vacuum does not cease, and inclination is a permanent force whose effect is dissipated by external forces such as air resistance. Avicenna (c. 980 - 1037) also developed the concept of momentum, referring to mayl as being proportional to weight times velocity.
The shortcomings of Aristotelian physics would not be fully corrected until the seventeenth century work of Galileo Galilei (1564 – 1642), who was influenced by the late Medieval idea that objects in forced motion carried an innate force of impetus. Galileo constructed an experiment in which stones and cannonballs were both rolled down an incline to disprove the Aristotelian theory of motion early in the seventeenth century. He showed that the bodies were accelerated by gravity to an extent which was independent of their mass and argued that objects retain their velocity unless acted on by a force, for example friction. Galileo realized that simple velocity addition demands that the concept of an "absolute rest frame" did not exist. Galileo concluded that motion in a constant velocity was completely equivalent to rest. This was contrary to Aristotle's notion of a "natural state" of rest that objects with mass naturally approached.
Simple experiments showed that Galileo's understanding of the equivalence of constant velocity and rest to be correct. For example, if a mariner dropped a cannonball from the crow's nest of a ship moving at a constant velocity, Aristotelian physics would have the cannonball fall straight down while the ship moved beneath it. Thus, in an Aristotelian universe, the falling cannonball would land behind the foot of the mast of a moving ship. However, when this experiment is actually conducted, the cannonball always falls at the foot of the mast, as if the cannonball knows to travel with the ship despite being separated from it. Since there is no forward horizontal force being applied on the cannonball as it falls, the only conclusion left is that the cannonball continues to move with the same velocity as the boat as it falls. Thus, no force is required to keep the cannonball moving at the constant forward velocity.
Classical "Newtonian" mechanics
Sir Isaac Newton (1642–1727) sought to describe the motion of all objects using the concepts of inertia and force, and in doing so, he found that they obey certain conservation laws. In 1687, Newton published his treatise "Philosophiae Naturalis Principia Mathematica." In this work, Newton set out three laws of motion that, to this day, are the way forces are described in physics.
Newton's first law
Newton's first law of motion states that objects continue to move in a state of constant velocity unless acted upon by an external net force or "resultant force". This law is an extension of Galileo's insight that constant velocity was associated with a lack of net force.
Equilibrium occurs when the resultant force acting on a point particle is zero (that is, the vector sum of all forces is zero). When dealing with an extended body, it is also necessary that the net torque on it is zero. There are two kinds of equilibrium: static equilibrium, when the body is at rest, and dynamic equilibrium, when the body is in motion.
- The simplest case of static equilibrium occurs when two forces are equal in magnitude but opposite in direction. For example, an object on a level surface is pulled (attracted) downward toward the center of the Earth by the force of gravity. At the same time, surface forces resist the downward force with equal upward force (called the normal force). The situation is one of zero net force and no acceleration. Pushing against an object on a frictional surface can result in a situation where the object does not move because the applied force is opposed by static friction, generated between the object and the table surface. For a situation with no movement, the static friction force exactly balances the applied force resulting in no acceleration. The static friction increases or decreases in response to the applied force up to an upper limit determined by the characteristics of the contact between the surface and the object. A static equilibrium between two forces is the most usual way of measuring forces, using simple devices such as weighing scales and spring balances. For example, an object suspended on a vertical spring scale experiences the force of gravity acting on the object balanced by a force applied by the "spring reaction force" which equals the object's weight; a numerical sketch of this balance follows below. Using such tools, some quantitative force laws were discovered: that the force of gravity is proportional to volume for objects of constant density (widely exploited for millennia to define standard weights); Archimedes' principle for buoyancy; Archimedes' analysis of the lever; Boyle's law for gas pressure; and Hooke's law for springs. These were all formulated and experimentally verified before Newton expounded his laws of motion.
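Here is the spring-scale balance referred to above as a minimal numerical sketch. The mass and spring constant are arbitrary illustrative values, and g is taken as 9.81 m/s^2.

```python
# At equilibrium the Hooke's-law spring force k*x balances the weight m*g,
# so the scale settles at an extension x = m*g / k.
g = 9.81  # m/s^2

def equilibrium_extension(mass_kg: float, spring_constant_n_per_m: float) -> float:
    return mass_kg * g / spring_constant_n_per_m

def net_vertical_force(mass_kg: float, k: float, x_m: float) -> float:
    """Upward spring force minus weight; zero at equilibrium."""
    return k * x_m - mass_kg * g

x = equilibrium_extension(2.0, 200.0)
print(x)                                  # 0.0981 m of stretch
print(net_vertical_force(2.0, 200.0, x))  # ~0.0 N, confirming static equilibrium
```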
Force as a Vector
Forces act in a particular direction and have sizes dependent upon how strong the push or pull is. Because of these characteristics, forces are classified as "vector quantities". This means that forces follow a different set of mathematical rules than physical quantities that do not have direction (denoted scalar quantities). For example, when determining what happens when two forces act on the same object, it is necessary to know both the magnitude and the direction of both forces to calculate the result. If both of these pieces of information are not known for each force, the situation is ambiguous. For example, if one knows that two people are pulling on the same rope with known magnitudes of force but does not know which direction either person is pulling, it is impossible to determine what the acceleration of the rope will be. The two people could be pulling against each other as in tug of war or the two people could be pulling in the same direction. In this simple one-dimensional example, without knowing the direction of the forces it is impossible to decide whether the net force is the result of adding the two force magnitudes or subtracting one from the other. Associating forces with vectors avoids such problems.
Historically, forces were first quantitatively investigated in conditions of static equilibrium where several forces canceled each other out. Such experiments demonstrate the crucial properties that forces are additive vector quantities: they have magnitude and direction. When two forces act on an object, the resulting force, the resultant, can be determined by following the parallelogram rule of vector addition: the addition of two vectors represented by sides of a parallelogram gives an equivalent resultant vector which is equal in magnitude and direction to the diagonal of the parallelogram. The magnitude of the resultant varies from the difference of the magnitudes of the two forces to their sum, depending on the angle between their lines of action. Free-body diagrams can be used as a convenient way to keep track of forces acting on a system. Ideally, these diagrams are drawn with the angles and relative magnitudes of the force vectors preserved so that graphical vector addition can be done to determine the resultant.
As well as being added, forces can also be resolved into independent components at right angles to each other. A horizontal force pointing northeast can therefore be split into two forces, one pointing north, and one pointing east. Or, as in the Figure above of the block on an inclined plane, a force can be resolved into a component parallel to the plane and a component perpendicular to it. Summing these component forces using vector addition yields the original force. Resolving force vectors into components of a set of basis vectors is often a more mathematically clean way to describe forces than using magnitudes and directions. This is because, for orthogonal components, the components of the vector sum are uniquely determined by the scalar addition of the components of the individual vectors. Orthogonal components are independent of each other because forces acting at ninety degrees to each other have no effect on the magnitude or direction of the other. Choosing a set of orthogonal basis vectors is often done by considering what set of basis vectors will make the mathematics most convenient. Choosing a basis vector that is in the same direction as one of the forces is desirable, since that force would then have only one non-zero component. Orthogonal force vectors can be three-dimensional with the third component being at right-angles to the other two.
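As a concrete illustration of resolving forces into orthogonal components and adding them, here is a minimal Python sketch; the two forces, their magnitudes, and their directions are invented purely for illustration.

```python
import math

def to_components(magnitude, angle_deg):
    """Resolve a force given as a magnitude and a direction (angle from the +x axis) into (Fx, Fy)."""
    a = math.radians(angle_deg)
    return magnitude * math.cos(a), magnitude * math.sin(a)

# Two hypothetical forces acting on the same object (magnitudes in newtons, angles in degrees)
f1 = to_components(10.0, 30.0)   # 10 N at 30 degrees above the x axis
f2 = to_components(6.0, 120.0)   # 6 N at 120 degrees

# Adding the vectors is just scalar addition of the orthogonal components
rx, ry = f1[0] + f2[0], f1[1] + f2[1]

magnitude = math.hypot(rx, ry)
direction = math.degrees(math.atan2(ry, rx))
print(f"Resultant: {magnitude:.2f} N at {direction:.1f} degrees from the x axis")
```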
- A simple case of dynamical equilibrium occurs in constant velocity motion across a surface with kinetic friction. In such a situation, a force is applied in the direction of motion while the kinetic friction force exactly opposes the applied force. This results in a net zero force, but since the object started with a non-zero velocity, it continues to move with a non-zero velocity. Aristotle misinterpreted this motion as being caused by the applied force. However, when kinetic friction is taken into consideration it is clear that there is no net force causing constant velocity motion.
Newton proposed that every object with mass has an innate inertia that functions as the fundamental equilibrium "natural state" in place of the Aristotelian idea of the "natural state of rest". That is, the first law contradicts the intuitive Aristotelian belief that a net force is required to keep an object moving with constant velocity.
By making rest physically indistinguishable from non-zero constant velocity, Newton's first law directly connects inertia with the concept of relative velocities. Specifically, in systems where objects are moving with different velocities, it is impossible to determine which object is "in motion" and which object is "at rest". In other words, to phrase matters more technically, the laws of physics are the same in every inertial frame of reference, that is, in all frames related by a Galilean transformation.
For example, while traveling in a moving vehicle at a constant velocity, the laws of physics do not change from being at rest. A person can throw a ball straight up in the air and catch it as it falls down without worrying about applying a force in the direction the vehicle is moving. This is true even though another person who is observing the moving vehicle pass by also observes the ball follow a curving parabolic path in the same direction as the motion of the vehicle. It is the inertia of the ball associated with its constant velocity in the direction of the vehicle's motion that ensures the ball continues to move forward even as it is thrown up and falls back down. From the perspective of the person in the car, the vehicle and everything inside of it is at rest: it is the outside world that is moving with a constant speed in the opposite direction. Since there is no experiment that can distinguish whether it is the vehicle that is at rest or the outside world that is at rest, the two situations are considered to be physically indistinguishable. Inertia therefore applies equally well to constant velocity motion as it does to rest.
The concept of inertia can be further generalized to explain the tendency of objects to continue in many different forms of constant motion, even those that are not strictly constant velocity. The rotational inertia of planet Earth is what fixes the constancy of the length of a day and the length of a year.
Albert Einstein extended the principle of inertia further when he explained that reference frames subject to constant acceleration, such as those free-falling toward a gravitating object, were physically equivalent to inertial reference frames. This is why, for example, astronauts experience weightlessness when in free-fall orbit around the Earth, and why Newton's Laws of Motion are more easily discernible in such environments. If an astronaut places an object with mass in mid-air next to herself, it will remain stationary with respect to the astronaut due to its inertia. This is the same thing that would occur if the astronaut and the object were in intergalactic space with no net force of gravity acting on their shared reference frame. This principle of equivalence was one of the foundational underpinnings for the development of the general theory of relativity.
Newton's second law
A modern statement of Newton's second law is a vector differential equation:

F = dp/dt
where p is the momentum of the system, and F is the net (vector sum) force. In equilibrium, there is zero net force by definition, but (balanced) forces may be present nevertheless. In contrast, the second law states an unbalanced force acting on an object will result in the object's momentum changing over time.
By the definition of momentum,

p = mv
where m is the mass and v is the velocity.
The product rule shows that:

F = dp/dt = d(mv)/dt = m(dv/dt) + v(dm/dt)
For closed systems (systems of constant total mass), the time derivative of mass is zero and the equation becomes

F = m(dv/dt)
By substituting the definition of acceleration, a = dv/dt, the algebraic version of this common simplification of Newton's second law is derived:

F = ma
Newton's second law (although Newton never explicitly stated the formula in the reduced form above) asserts the direct proportionality of acceleration to force and the inverse proportionality of acceleration to mass. It is sometimes called the "second most famous formula in physics" (after E = mc²).
Accelerations can be defined through kinematic measurements. However, while kinematics are well-described through reference frame analysis in advanced physics, there are still deep questions that remain as to what is the proper definition of mass. General relativity offers an equivalence between space-time and mass, but lacking a coherent theory of quantum gravity, it is unclear as to how or whether this connection is relevant on microscales. With some justification, Newton's second law can be taken as a quantitative definition of mass by writing the law as an equality; the relative units of force and mass then are fixed.
The use of Newton's second law as a definition of force has been disparaged in some of the more rigorous textbooks, because it is essentially a mathematical truism. The equality between the abstract idea of a "force" and the abstract idea of a "changing momentum vector" ultimately has no observational significance because one cannot be defined without simultaneously defining the other. What a "force" or "changing momentum" is must either be referred to an intuitive understanding of our direct perception, or be defined implicitly through a set of self-consistent mathematical formulas. Notable physicists, philosophers and mathematicians who have sought a more explicit definition of the concept of "force" include Ernst Mach, Clifford Truesdell and Walter Noll.
Newton's second law can be used to measure the strength of forces. For instance, knowledge of the masses of planets along with the accelerations of their orbits allows scientists to calculate the gravitational forces on planets.
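As a rough illustration of this kind of measurement, the sketch below estimates the gravitational force on the Moon from its orbital kinematics alone, using F = m·a; the mass, orbital radius, and period are approximate textbook values assumed here for illustration.

```python
import math

# Estimating the gravitational force on the Moon from kinematics alone, via F = m*a.
# The mass, orbital radius, and period below are rough values, assumed for illustration.
moon_mass = 7.35e22                 # kg (approximate)
orbital_radius = 3.84e8             # m (approximate Earth-Moon distance)
orbital_period = 27.3 * 24 * 3600   # s (approximate sidereal month)

v = 2 * math.pi * orbital_radius / orbital_period   # orbital speed from circumference and period
a = v**2 / orbital_radius                           # centripetal acceleration
F = moon_mass * a                                   # Newton's second law
print(f"a ≈ {a:.2e} m/s^2, F ≈ {F:.2e} N")          # on the order of 2e20 N
```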
Newton's third law
Newton's third law is a result of applying symmetry to situations where forces can be attributed to the presence of different objects. For any two objects (call them 1 and 2), Newton's third law states that any force that is applied to object 1 due to the action of object 2 is automatically accompanied by a force applied to object 2 due to the action of object 1
F1,2 = −F2,1

where F1,2 denotes the force on object 1 from object 2, and F2,1 the force on object 2 from object 1.
This law implies that forces always occur in action-and-reaction pairs. If object 1 and object 2 are considered to be in the same system, then the net force on the system due to the interactions between objects 1 and 2 is zero since
F1,2 + F2,1 = 0, or equivalently F1,2 = −F2,1
This means that in a closed system where there is no net external force, the linear momentum of the system does not change with time. This is known as the conservation of linear momentum. Using similar arguments, it is possible to generalize this to a system of an arbitrary number of particles. This shows that exchanging momentum between constituent objects will not affect the net momentum of a system. In general, as long as all forces are due to the interaction of objects with mass, it is possible to define a system such that net momentum is never lost nor gained.
All the forces in the universe are based on four fundamental forces. The strong and weak forces act only at very short distances, and are responsible for the interactions between subatomic particles including nucleons and compound nuclei. The electromagnetic force acts between electric charges, and the gravitational force acts between masses. All other forces are based on the existence of the four fundamental interactions. For example, friction is a manifestation of the electromagnetic force acting between the atoms of two surfaces, together with the Pauli exclusion principle, which does not allow atoms to pass through each other. The forces in springs, modeled by Hooke's law, are also the result of electromagnetic forces and the exclusion principle acting together to return the object to its equilibrium position. Centrifugal forces are acceleration forces which arise simply from the acceleration of rotating frames of reference.
The development of fundamental theories for forces proceeded along the lines of unification of disparate ideas. For example, Isaac Newton unified the force responsible for objects falling at the surface of the Earth with the force responsible for the orbits of celestial mechanics in his universal theory of gravitation. Michael Faraday and James Clerk Maxwell demonstrated that electric and magnetic forces were unified through one consistent theory of electromagnetism. In the twentieth century, the development of quantum mechanics led to a modern understanding that the first three fundamental forces (all except gravity) are manifestations of matter (fermions) interacting by exchanging virtual particles called gauge bosons. This standard model of particle physics posits a similarity between the forces and led scientists to predict the unification of the weak and electromagnetic forces in electroweak theory subsequently confirmed by observation. The complete formulation of the standard model predicts an as yet unobserved Higgs mechanism, but observations such as neutrino oscillations indicate that the standard model is incomplete. A grand unified theory allowing for the combination of the electroweak interaction with the strong force is held out as a possibility with candidate theories such as supersymmetry proposed to accommodate some of the outstanding unsolved problems in physics. Physicists are still attempting to develop self-consistent unification models that would combine all four fundamental interactions into a theory of everything. Einstein tried and failed at this endeavor, but currently the most popular approach to answering this question is string theory.
What we now call gravity was not identified as a universal force until the work of Isaac Newton. Before Newton, the tendency for objects to fall towards the Earth was not understood to be related to the motions of celestial objects. Galileo was instrumental in describing the characteristics of falling objects by determining that the acceleration of every object in free-fall was constant and independent of the mass of the object. Today, this acceleration due to gravity towards the surface of the Earth is usually designated as g and has a magnitude of about 9.81 meters per second squared (this measurement is taken from sea level and may vary depending on location), and points toward the center of the Earth. This observation means that the force of gravity on an object at the Earth's surface is directly proportional to the object's mass. Thus an object that has a mass of m will experience a force:
F = mg
In free-fall, this force is unopposed and therefore the net force on the object is its weight. For objects not in free-fall, the force of gravity is opposed by the reactions of their supports. For example, a person standing on the ground experiences zero net force, since his weight is balanced by a normal force exerted by the ground.
Newton's contribution to gravitational theory was to unify the motions of heavenly bodies, which Aristotle had assumed were in a natural state of constant motion, with falling motion observed on the Earth. He proposed a law of gravity that could account for the celestial motions that had been described earlier using Kepler's Laws of Planetary Motion.
Newton came to realize that the effects of gravity might be observed in different ways at larger distances. In particular, Newton determined that the acceleration of the Moon around the Earth could be ascribed to the same force of gravity if the acceleration due to gravity decreased as an inverse square law. Further, Newton realized that the acceleration due to gravity is proportional to the mass of the attracting body. Combining these ideas gives a formula that relates the mass Mo and the radius Ro of the Earth to the gravitational acceleration:
g = −(G Mo / Ro²) r
where the vector direction is given by r, the unit vector directed outward from the center of the Earth.
In this equation, a dimensional constant G is used to describe the relative strength of gravity. This constant has come to be known as Newton's Universal Gravitation Constant, though its value was unknown in Newton's lifetime. Not until 1798 was Henry Cavendish able to make the first measurement of G using a torsion balance; this was widely reported in the press as a measurement of the mass of the Earth, since knowing G allows one to solve for the Earth's mass given the above equation. Newton, however, realized that since all celestial bodies followed the same laws of motion, his law of gravity had to be universal. Succinctly stated, Newton's Law of Gravitation states that the force on a spherical object of mass m1 due to the gravitational pull of mass m2 is
F = (G m1 m2 / r²) r̂
where r is the distance between the two objects' centers of mass and r̂ is the unit vector pointed in the direction away from the center of the first object toward the center of the second object.
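The law is easy to evaluate numerically. The following sketch uses approximate, assumed values for G and the Earth-Moon pair; the result should land close to the second-law estimate sketched earlier.

```python
# Evaluating Newton's law of gravitation, F = G*m1*m2/r^2, with approximate assumed values.
G = 6.674e-11        # m^3 kg^-1 s^-2, gravitational constant
m_earth = 5.97e24    # kg (approximate)
m_moon = 7.35e22     # kg (approximate)
r = 3.84e8           # m, approximate center-to-center distance

F = G * m_earth * m_moon / r**2
print(f"Gravitational force ≈ {F:.2e} N")   # on the order of 2e20 N
```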
This formula was powerful enough to stand as the basis for all subsequent descriptions of motion within the solar system until the twentieth century. During that time, sophisticated methods of perturbation analysis were invented to calculate the deviations of orbits due to the influence of multiple bodies on a planet, moon, comet, or asteroid. The formalism was exact enough to allow mathematicians to predict the existence of the planet Neptune before it was observed.
It was only the orbit of the planet Mercury that Newton's Law of Gravitation seemed not to fully explain. Some astrophysicists predicted the existence of another planet (Vulcan) that would explain the discrepancies; however, despite some early indications, no such planet could be found. When Albert Einstein finally formulated his theory of general relativity (GR) he turned his attention to the problem of Mercury's orbit and found that his theory added a correction which could account for the discrepancy. This was the first time that Newton's Theory of Gravity had been shown to be less correct than an alternative.
Since then, and so far, general relativity has been acknowledged as the theory which best explains gravity. In GR, gravitation is not viewed as a force, but rather, objects moving freely in gravitational fields travel under their own inertia in straight lines through curved space-time – defined as the shortest space-time path between two space-time events. From the perspective of the object, all motion occurs as if there were no gravitation whatsoever. It is only when observing the motion in a global sense that the curvature of space-time can be observed and the force is inferred from the object's curved path. Thus, the straight line path in space-time is seen as a curved line in space, and it is called the ballistic trajectory of the object. For example, a basketball thrown from the ground moves in a parabola, as it is in a uniform gravitational field. Its space-time trajectory (when the extra ct dimension is added) is almost a straight line, slightly curved (with a radius of curvature of the order of a few light-years). The time derivative of the changing momentum of the object is what we label as "gravitational force".
The electrostatic force was first described in 1784 by Coulomb as a force which existed intrinsically between two charges. The properties of the electrostatic force were that it varied as an inverse square law directed in the radial direction, was both attractive and repulsive (there was intrinsic polarity), was independent of the mass of the charged objects, and followed the law of superposition. Coulomb's Law unifies all these observations into one succinct statement.
Subsequent mathematicians and physicists found the construct of the electric field to be useful for determining the electrostatic force on an electric charge at any point in space. The electric field was based on using a hypothetical "test charge" anywhere in space and then using Coulomb's Law to determine the electrostatic force. Thus the electric field anywhere in space is defined as
E = F / q
where q is the magnitude of the hypothetical test charge.
Meanwhile, the Lorentz force of magnetism was discovered to exist between two electric currents. It has the same mathematical character as Coulomb's Law with the proviso that like currents attract and unlike currents repel. Similar to the electric field, the magnetic field can be used to determine the magnetic force on an electric current at any point in space. In this case, the magnitude of the magnetic field was determined to be
B = F / (I l)
where I is the magnitude of the hypothetical test current and l is the length of hypothetical wire through which the test current flows. The magnetic field exerts a force on all magnets including, for example, those used in compasses. Because the Earth's magnetic field is aligned closely with the orientation of the Earth's axis, compass needles become oriented by the magnetic force pulling on them.
Through combining the definition of electric current as the time rate of change of electric charge, a rule of vector multiplication called Lorentz's Law describes the force on a charge moving in a magnetic field. The connection between electricity and magnetism allows for the description of a unified electromagnetic force that acts on a charge. This force can be written as a sum of the electrostatic force (due to the electric field) and the magnetic force (due to the magnetic field). Fully stated, this is the law:
F = q(E + v × B)
where F is the electromagnetic force, q is the magnitude of the charge of the particle, E is the electric field, v is the velocity of the particle, and B is the magnetic field.
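The law above is straightforward to evaluate directly. The sketch below uses plain tuples as 3-vectors; the field strengths and the particle velocity are invented, and only the proton charge is a physical constant.

```python
# A minimal sketch of the Lorentz force law, F = q*(E + v x B), using plain tuples as 3-vectors.
def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def lorentz_force(q, E, v, B):
    vxB = cross(v, B)
    return tuple(q * (E[i] + vxB[i]) for i in range(3))

q = 1.602e-19            # C, charge of a proton
E = (0.0, 1.0e3, 0.0)    # V/m, assumed uniform electric field
v = (2.0e5, 0.0, 0.0)    # m/s, assumed particle velocity
B = (0.0, 0.0, 0.01)     # T, assumed uniform magnetic field

print(lorentz_force(q, E, v, B))   # (0.0, -1.602e-16, 0.0): electric and magnetic parts partially cancel
```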
The origin of electric and magnetic fields would not be fully explained until 1864 when James Clerk Maxwell unified a number of earlier theories into a succinct set of four equations. These "Maxwell Equations" fully described the sources of the fields as being stationary and moving charges, and the interactions of the fields themselves. This led Maxwell to discover that electric and magnetic fields could be "self-generating" through a wave that traveled at a speed which he calculated to be the speed of light. This insight united the nascent fields of electromagnetic theory with optics and led directly to a complete description of the electromagnetic spectrum.
However, attempting to reconcile electromagnetic theory with two observations, the photoelectric effect, and the nonexistence of the ultraviolet catastrophe, proved troublesome. Through the work of leading theoretical physicists, a new theory of electromagnetism was developed using quantum mechanics. This final modification to electromagnetic theory ultimately led to quantum electrodynamics (or QED), which fully describes all electromagnetic phenomena as being mediated by wave particles known as photons. In QED, photons are the fundamental exchange particle which described all interactions relating to electromagnetism including the electromagnetic force.
It is a common misconception to ascribe the stiffness and rigidity of solid matter to the repulsion of like charges under the influence of the electromagnetic force. However, these characteristics actually result from the Pauli Exclusion Principle. Since electrons are fermions, they cannot occupy the same quantum mechanical state as other electrons. When the electrons in a material are densely packed together, there are not enough lower energy quantum mechanical states for them all, so some of them must be in higher energy states. This means that it takes energy to pack them together. While this effect is manifested macroscopically as a structural "force", it is technically only the result of the existence of a finite set of electron states.
There are two "nuclear forces" which today are usually described as interactions that take place in quantum theories of particle physics. The strong nuclear force is the force responsible for the structural integrity of atomic nuclei while the weak nuclear force is responsible for the decay of certain nucleons into leptons and other types of hadrons.
The strong force is today understood to represent the interactions between quarks and gluons as detailed by the theory of quantum chromodynamics (QCD). The strong force is the fundamental force mediated by gluons, acting upon quarks, antiquarks, and the gluons themselves. The strong interaction is the most powerful of the four fundamental forces.
The strong force only acts directly upon elementary particles. However, a residual of the force is observed between hadrons (the best known example being the force that acts between nucleons in atomic nuclei) as the nuclear force. Here the strong force acts indirectly, transmitted as gluons which form part of the virtual pi and rho mesons which classically transmit the nuclear force. The failure of many searches for free quarks has shown that the elementary particles affected are not directly observable. This phenomenon is called colour confinement.
The weak force is due to the exchange of the heavy W and Z bosons. Its most familiar effect is beta decay (of neutrons in atomic nuclei) and the associated radioactivity. The word "weak" derives from the fact that the field strength is some 10¹³ times less than that of the strong force. Still, it is stronger than gravity over short distances. A consistent electroweak theory has also been developed which shows that electromagnetic forces and the weak force are indistinguishable at temperatures in excess of approximately 10¹⁵ kelvin. Such temperatures have been probed in modern particle accelerators and show the conditions of the universe in the early moments of the Big Bang.
Some forces are consequences of the fundamental ones. In such situations, idealized models can be utilized to gain physical insight.
The normal force is the repulsive force of interaction between atoms at close contact. When their electron clouds overlap, Pauli repulsion (due to fermionic nature of electrons) follows resulting in the force which acts normal to the surface interface between two objects. The normal force, for example, is responsible for the structural integrity of tables and floors as well as being the force that responds whenever an external force pushes on a solid object. An example of the normal force in action is the impact force on an object crashing into an immobile surface.
Friction is a surface force that opposes relative motion. The frictional force is directly related to the normal force which acts to keep two solid objects separated at the point of contact. There are two broad classifications of frictional forces: static friction and kinetic friction.
The static friction force (Fsf) will exactly oppose forces applied to an object parallel to the contact surface, up to the limit specified by the coefficient of static friction (μsf) multiplied by the normal force (FN). In other words, the magnitude of the static friction force satisfies the inequality:
0 ≤ Fsf ≤ μsf FN
The kinetic friction force (Fkf) is independent of both the forces applied and the movement of the object. Thus, the magnitude of the force equals:
Fkf = μkfFN
where μkf is the coefficient of kinetic friction. For most surface interfaces, the coefficient of kinetic friction is less than the coefficient of static friction.
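A hedged sketch of how this friction model behaves in practice; the coefficients, normal force, and applied forces below are all invented for illustration.

```python
# A sketch of the static/kinetic friction model described above.
def friction_force(applied, normal, mu_static, mu_kinetic, moving):
    """Magnitude of the friction force opposing the applied force."""
    if moving:
        return mu_kinetic * normal     # kinetic friction: independent of the applied force
    limit = mu_static * normal         # static friction matches the applied force...
    return min(applied, limit)         # ...but only up to the limit mu_s * N

N = 50.0   # newtons of normal force
print(friction_force(10.0, N, 0.4, 0.3, moving=False))  # 10.0 N: static friction balances the push
print(friction_force(30.0, N, 0.4, 0.3, moving=False))  # 20.0 N: the static limit (0.4 * 50 N) is reached
print(friction_force(30.0, N, 0.4, 0.3, moving=True))   # 15.0 N: kinetic friction once the object slides
```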
Tension forces can be modeled using ideal strings which are massless, frictionless, unbreakable, and unstretchable. They can be combined with ideal pulleys which allow ideal strings to switch physical direction. Ideal strings transmit tension forces instantaneously in action-reaction pairs so that if two objects are connected by an ideal string, any force directed along the string by the first object is accompanied by a force directed along the string in the opposite direction by the second object. By connecting the same string multiple times to the same object through the use of a set-up that uses movable pulleys, the tension force on a load can be multiplied. For every string that acts on a load, another factor of the tension force in the string acts on the load. However, even though such machines allow for an increase in force, there is a corresponding increase in the length of string that must be displaced in order to move the load. These tandem effects result ultimately in the conservation of mechanical energy since the work done on the load is the same no matter how complicated the machine.
An elastic force acts to return a spring to its natural length. An ideal spring is taken to be massless, frictionless, unbreakable, and infinitely stretchable. Such springs exert forces that push when contracted, or pull when extended, in proportion to the displacement of the spring from its equilibrium position. This linear relationship was described by Robert Hooke in 1676, for whom Hooke's law is named. If Δx is the displacement, the force exerted by an ideal spring equals:
F = −kΔx
where k is the spring constant (or force constant), which is particular to the spring. The minus sign accounts for the tendency of the elastic force to act in opposition to the applied load.
Newton's laws and Newtonian mechanics in general were first developed to describe how forces affect idealized point particles rather than three-dimensional objects. However, in real life, matter has extended structure and forces that act on one part of an object might affect other parts of an object. For situations where the lattice holding together the atoms in an object is able to flow, contract, expand, or otherwise change shape, the theories of continuum mechanics describe the way forces affect the material. For example, in extended fluids, differences in pressure result in forces being directed along the pressure gradients as follows:
F = −V ∇P
where V is the volume of the object in the fluid and P is the scalar function that describes the pressure at all locations in space. Pressure gradients and differentials result in the buoyant force for fluids suspended in gravitational fields, winds in atmospheric science, and the lift associated with aerodynamics and flight.
A specific instance of such a force that is associated with dynamic pressure is fluid resistance: a body force that resists the motion of an object through a fluid due to viscosity. For so-called "Stokes' drag" the force is approximately proportional to the velocity, but opposite in direction:
Fd = −b v
b is a constant that depends on the properties of the fluid and the dimensions of the object (usually the cross-sectional area), and
v is the velocity of the object.
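As a worked illustration of Stokes' drag, the sketch below integrates F = −bv with a simple Euler step and compares the result with the exact exponential decay v(t) = v0·exp(−bt/m); the mass, drag constant, and initial velocity are assumed values.

```python
import math

# Integrating Stokes' drag, F = -b*v, numerically and comparing with the analytic solution.
m, b = 0.01, 0.05        # kg, kg/s (assumed)
v0 = 2.0                 # m/s, initial speed (assumed)
dt, t_end = 0.001, 1.0   # s

v, t = v0, 0.0
while t < t_end:
    a = -b * v / m       # Newton's second law with drag as the only force
    v += a * dt
    t += dt

print(f"numerical v(1 s) ≈ {v:.4f} m/s")
print(f"analytic  v(1 s) = {v0 * math.exp(-b * t_end / m):.4f} m/s")
```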
More formally, forces in continuum mechanics are fully described by a stress tensor with terms that are roughly defined as
σ = F / A
where A is the relevant cross-sectional area for the volume for which the stress-tensor is being calculated. This formalism includes pressure terms associated with forces that act normal to the cross-sectional area (the matrix diagonals of the tensor) as well as shear terms associated with forces that act parallel to the cross-sectional area (the off-diagonal elements). The stress tensor accounts for forces that cause all deformations including also tensile stresses and compressions.
There are forces which are frame dependent, meaning that they appear due to the adoption of non-Newtonian (that is, non-inertial) reference frames. Such forces include the centrifugal force and the Coriolis force. These forces are considered fictitious because they do not exist in frames of reference that are not accelerating.
In general relativity, gravity becomes a fictitious force that arises in situations where spacetime deviates from a flat geometry. As an extension, Kaluza-Klein theory and string theory ascribe electromagnetism and the other fundamental forces respectively to the curvature of differently scaled dimensions, which would ultimately imply that all forces are fictitious.
Rotations and torque
Forces that cause extended objects to rotate are associated with torques. Mathematically, the torque on a particle is defined as the cross-product:
τ = r × F
r is the particle's position vector relative to a pivot
F is the force acting on the particle.
Torque is the rotational equivalent of force in the same way that angle is the rotational equivalent of position, angular velocity of velocity, and angular momentum of momentum. All the formal treatments of Newton's Laws that applied to forces equivalently apply to torques. Thus, as a consequence of Newton's First Law of Motion, there exists rotational inertia that ensures that all bodies maintain their angular momentum unless acted upon by an unbalanced torque. Likewise, Newton's Second Law of Motion can be used to derive an alternative definition of torque:
τ = I α
I is the moment of inertia of the particle
α is the angular acceleration of the particle.
This provides a definition for the moment of inertia which is the rotational equivalent for mass. In more advanced treatments of mechanics, the moment of inertia acts as a tensor that, when properly analyzed, fully determines the characteristics of rotations including precession and nutation.
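A short numerical illustration of these definitions, computing τ = r × F with a hand-written cross product and then α = τ/I for rotation about a fixed axis; the lever arm, force, and moment of inertia are invented values.

```python
# Computing the torque tau = r x F about a pivot, then alpha = tau / I about the z axis.
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

r = (0.30, 0.0, 0.0)   # m, position of the point of application relative to the pivot (assumed)
F = (0.0, 12.0, 0.0)   # N, applied force (assumed)

tau = cross(r, F)      # N*m; here (0, 0, 3.6), i.e. a torque about the z axis
I = 0.045              # kg*m^2, assumed moment of inertia about that axis
alpha = tau[2] / I     # rad/s^2

print(tau, alpha)      # (0.0, 0.0, 3.6) 80.0
```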
Equivalently, the differential form of Newton's Second Law provides an alternative definition of torque:
τ = dL/dt
where L is the angular momentum of the particle.
Newton's Third Law of Motion requires that all objects exerting torques themselves experience equal and opposite torques, and therefore also directly implies the conservation of angular momentum for closed systems that experience rotations and revolutions through the action of internal torques.
For an object accelerating in circular motion, the unbalanced force acting on the object equals:
F = −(m v² / r) r̂
where m is the mass of the object, v is the velocity of the object, r is the distance to the center of the circular path, and r̂ is the unit vector pointing in the radial direction outwards from the center. This means that the unbalanced centripetal force felt by any object is always directed toward the center of the curving path. Such forces act perpendicular to the velocity vector associated with the motion of an object, and therefore do not change the speed of the object (magnitude of the velocity), but only the direction of the velocity vector. The unbalanced force that accelerates an object can be resolved into a component that is perpendicular to the path, and one that is tangential to the path. This yields both the tangential force which accelerates the object by either slowing it down or speeding it up and the radial (centripetal) force which changes its direction.
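A minimal numeric example of the centripetal force formula; the mass, speed, and radius are invented for illustration.

```python
# Centripetal force for uniform circular motion, F = m*v^2/r, directed toward the center.
m = 0.25   # kg (assumed)
v = 3.0    # m/s (assumed)
r = 0.80   # m (assumed)

F = m * v**2 / r
print(f"Centripetal force ≈ {F:.2f} N, directed toward the center of the path")   # ≈ 2.81 N
```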
Forces can be used to define a number of physical concepts by integrating with respect to kinematic variables. For example, integrating with respect to time gives the definition of impulse:
J = ∫ F dt
which, by Newton's Second Law, must be equivalent to the change in momentum (yielding the Impulse momentum theorem).
Similarly, integrating with respect to position gives a definition for the work done by a force:
W = ∫ F · dx
which is equivalent to changes in kinetic energy (yielding the work energy theorem).
Power P is the rate of change dW/dt of the work W, as the trajectory is extended by a position change dx in a time interval dt:
P = dW/dt = F · v
with v = dx/dt the velocity.
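The impulse and work integrals above are easy to check numerically. The sketch below applies an assumed time-varying force F(t) = 4t newtons to a 2 kg mass starting at rest, accumulates ∫F dt and ∫F·v dt in small time steps, and compares them with the change in momentum and kinetic energy.

```python
# Numerically accumulating impulse and work, then checking the impulse-momentum
# and work-energy theorems. The force profile and mass are assumed.
m, dt, t_end = 2.0, 1e-4, 3.0
v, t = 0.0, 0.0
impulse, work = 0.0, 0.0

while t < t_end:
    F = 4.0 * t
    impulse += F * dt        # J = ∫ F dt
    work += F * v * dt       # W = ∫ F dx = ∫ F v dt (the integrand F*v is the instantaneous power)
    v += (F / m) * dt        # Newton's second law
    t += dt

print(f"impulse ≈ {impulse:.2f} N·s,  Δp  = {m * v:.2f} kg·m/s")
print(f"work    ≈ {work:.2f} J,    ΔKE = {0.5 * m * v**2:.2f} J")
```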
Instead of a force, often the mathematically related concept of a potential energy field can be used for convenience. For instance, the gravitational force acting upon an object can be seen as the action of the gravitational field that is present at the object's location. Restating mathematically the definition of energy (via the definition of work), a potential scalar field U(r) is defined as that field whose gradient is equal and opposite to the force produced at every point:
F = −∇U(r)
Forces can be classified as conservative or nonconservative. Conservative forces are equivalent to the gradient of a potential while non-conservative forces are not.
A conservative force that acts on a closed system has an associated mechanical work that allows energy to convert only between kinetic or potential forms. This means that for a closed system, the net mechanical energy is conserved whenever a conservative force acts on the system. The force, therefore, is related directly to the difference in potential energy between two different locations in space, and can be considered to be an artifact of the potential field in the same way that the direction and amount of a flow of water can be considered to be an artifact of the contour map of the elevation of an area.
Conservative forces include gravity, the electromagnetic force, and the spring force. Each of these forces has models which are dependent on a position often given as a radial vector r emanating from spherically symmetric potentials. Examples of this follow:
For gravitational forces:

Fg = −(G m1 m2 / r²) r̂
where G is the gravitational constant, and mn is the mass of object n.
For electrostatic forces:
Fe = (q1 q2 / (4π ε0 r²)) r̂
where ε0 is electric permittivity of free space, and qn is the electric charge of object n.
For spring forces:
Fs = −k Δx
where k is the spring constant.
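The relation F = −∇U can be checked numerically for any of these forces. The sketch below does so for the gravitational case, comparing a central finite-difference estimate of −dU/dr with the analytic force formula; the two masses and the radius are assumed values, and the spring and electrostatic cases work the same way.

```python
# A numerical check that a conservative force is the negative gradient of its potential,
# done for the gravitational potential U(r) = -G*m1*m2/r.
G = 6.674e-11
m1, m2 = 5.97e24, 1000.0   # kg (assumed: roughly the Earth and a one-tonne satellite)

def U(r):
    return -G * m1 * m2 / r

def force_from_potential(r, h=1.0):
    return -(U(r + h) - U(r - h)) / (2 * h)   # central finite difference for -dU/dr

r = 7.0e6                                     # m, assumed orbital radius
print(f"-dU/dr       ≈ {force_from_potential(r):.2f} N")
print(f"-G*m1*m2/r^2 = {-G * m1 * m2 / r**2:.2f} N")   # the analytic radial force component
```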
For certain physical scenarios, it is impossible to model forces as being due to gradients of potentials. This is often due to macrophysical considerations which yield forces as arising from a macroscopic statistical average of microstates. For example, friction is caused by the gradients of numerous electrostatic potentials between the atoms, but manifests as a force model which is independent of any macroscale position vector. Nonconservative forces other than friction include other contact forces, tension, compression, and drag. However, for any sufficiently detailed description, all these forces are the results of conservative ones, since each of these macroscopic forces is the net result of the gradients of microscopic potentials.
The connection between macroscopic non-conservative forces and microscopic conservative forces is described by detailed treatment with statistical mechanics. In macroscopic closed systems, nonconservative forces act to change the internal energies of the system, and are often associated with the transfer of heat. According to the Second Law of Thermodynamics, nonconservative forces necessarily result in energy transformations within closed systems from ordered to more random conditions as entropy increases.
Units of measurement
The International System of Units (SI) unit of force is the newton (symbol N), which is the force required to accelerate a one kilogram mass at a rate of one meter per second squared, or kg·m·s⁻². The corresponding CGS unit is the dyne, the force required to accelerate a one gram mass by one centimeter per second squared, or g·cm·s⁻². A newton is thus equal to 100,000 dyne.
The gravitational foot-pound-second English unit of force is the pound-force (lbf), defined as the force exerted by gravity on a pound-mass in the standard gravitational field of 9.80665 m·s⁻². The pound-force provides an alternative unit of mass: one slug is the mass that will accelerate by one foot per second squared when acted on by one pound-force.
An alternative unit of force in a different foot-pound-second system, the absolute fps system, is the poundal, defined as the force required to accelerate a one pound mass at a rate of one foot per second squared. The units of slug and poundal are designed to avoid a constant of proportionality in Newton's second law.
The pound-force has a metric counterpart, less commonly used than the newton: the kilogram-force (kgf) (sometimes kilopond), is the force exerted by standard gravity on one kilogram of mass. The kilogram-force leads to an alternate, but rarely used unit of mass: the metric slug (sometimes mug or hyl) is that mass which accelerates at 1 m·s⁻² when subjected to a force of 1 kgf. The kilogram-force is not a part of the modern SI system, and is generally deprecated; however, it still sees use for some purposes such as expressing jet thrust, bicycle spoke tension, torque wrench settings and engine output torque. Other arcane units of force include the sthène, which is equivalent to 1000 N, and the kip, which is equivalent to 1000 lbf.
Units of Force
1 newton (N) ≡ 1 kg·m/s² = 10⁵ dyn ≈ 0.10197 kp ≈ 0.22481 lbf ≈ 7.2330 pdl
1 dyne (dyn) = 10⁻⁵ N ≡ 1 g·cm/s² ≈ 1.0197×10⁻⁶ kp ≈ 2.2481×10⁻⁶ lbf ≈ 7.2330×10⁻⁵ pdl
1 kilopond (kp) = 9.80665 N = 980665 dyn ≡ gn·(1 kg) ≈ 2.2046 lbf ≈ 70.932 pdl
1 pound-force (lbf) ≈ 4.448222 N ≈ 444822 dyn ≈ 0.45359 kp ≡ gn·(1 lb) ≈ 32.174 pdl
1 poundal (pdl) ≈ 0.138255 N ≈ 13825 dyn ≈ 0.014098 kp ≈ 0.031081 lbf ≡ 1 lb·ft/s²
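A small sketch of converting between the units in the table, treating the newton as the base unit; the conversion factors are the ones listed above.

```python
# Converting a force between units via a table of factors to the newton.
TO_NEWTON = {
    "N":   1.0,
    "dyn": 1.0e-5,
    "kp":  9.80665,     # kilopond / kilogram-force
    "lbf": 4.448222,
    "pdl": 0.138255,
}

def convert(value, from_unit, to_unit):
    return value * TO_NEWTON[from_unit] / TO_NEWTON[to_unit]

print(convert(1.0, "N", "dyn"))    # 100000.0
print(convert(1.0, "lbf", "pdl"))  # ≈ 32.174
print(convert(1.0, "kp", "lbf"))   # ≈ 2.2046
```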
- Physics Texts
- Lectures on Physics, R. P. Feynman, R. B. Leighton and M. Sands, Vol 1. Addison-Wesley (1963) http://www.amazon.com/dp/0-201-02116-1/?tag=encycofearth-20 ISBN: 0-201-02116-1
- An Introduction to Mechanics by Daniel Kleppner and Robert Kolenkow, McGraw-Hill (1973), p. 133–134 http://www.amazon.com/dp/0070350485/?tag=encycofearth-20 ISBN: 0070350485
- University Physics by Francis W. Sears, Mark W. Zemansky, Hugh D. Young, Addison-Wesley, Reading, MA (1982) http://www.amazon.com/dp/0-201-07199-1/?tag=encycofearth-20 ISBN: 0-201-07199-1
- Classical Mechanics, H.C. Corben and Philip Stehle, p 28, Dover publications (1994) http://www.amazon.com/dp/0-486-68063-0/?tag=encycofearth-20 ISBN: 0-486-68063-0
- “On the Concept of Force”, Walter Noll (2004)
- Physics, Sixth Edition by John Cutnell and Kenneth W. Johnson, John Wiley & Sons Inc. (2004) http://www.amazon.com/dp/041-44895-8/?tag=encycofearth-20 ISBN: 041-44895-8
- Physics v. 1 by David Halliday, Robert Resnick and Kenneth S. Krane, John Wiley & Sons (2001) http://www.amazon.com/dp/0-471-32057-9/?tag=encycofearth-20 ISBN: 0-471-32057-9
- Encyclopedia of Physics, by Sybil Parker, McGraw-Hill (1993) p 443 http://www.amazon.com/dp/0-07-051400-3/?tag=encycofearth-20 ISBN: 0-07-051400-3
- Physics for Scientists and Engineers by Raymond A. Serway, Saunders College Publishing (2003) http://www.amazon.com/dp/0-534-40842-7/?tag=encycofearth-20 ISBN: 0-534-40842-7
- Physics for Scientists and Engineers: Mechanics, Oscillations and Waves, Thermodynamics 5th edition by Paul Tipler, W. H. Freeman (2004) http://www.amazon.com/dp/0-7167-0809-4/?tag=encycofearth-20 ISBN: 0-7167-0809-4
- Concepts of Physics Vol 1 (2004 Reprint) by H.C. Verma, Bharti Bhavan (2004) http://www.amazon.com/dp/81-7709-187-5/?tag=encycofearth-20 ISBN: 81-7709-187-5
- Space and Time: Inertial Frames, Robert DiSalle, Stanford Encyclopedia of Philosophy
- General Physics; mechanics and molecular physics by Lev Landau, A. I. Akhiezer, and A. M. Lifshitz, Translated by: J. B. Sykes, A. D. Petford, and C. L. Petford, Pergamon Press (1967) http://www.amazon.com/dp/0080033040/?tag=encycofearth-20 ISBN: 0080033040
- The Principia Mathematical Principles of Natural Philosophy by Isaac Newton, University of California Press (1999)(This is a recent translation into English by I. Bernard Cohen and Anne Whitman, with help from Julia Budenz.)http://www.amazon.com/dp/0-520-08817-4/?tag=encycofearth-20 ISBN: 0-520-08817-4
- Lesson 4: Newton's Third Law of Motion, Tom Henderson, The Physics Classroom
- Dynamics of translational motion, Dr. Nikitin (2007)
- Introduction to Free Body Diagrams, Physics Tutorial Menu, University of Guelph
- The Physics Classroom, Tom Henderson, The Physics Classroom and Mathsoft Engineering & Education, Inc. (2004)
- Static Equilibrium, Physics Static Equilibrium (forces and torques), University of the Virgin Islands
- Seminar: Visualizing Special Relativity, THE RELATIVISTIC RAYTRACER
- Four-Vectors (4-Vectors) of Special Relativity: A Study of Elegant Physics, John B. Wilson, The Science Realm: John's Virtual Sci-Tech Universe
- Pauli Exclusion Principle, R. Nave, HyperPhysics***** Quantum Physics
- Fermions & Bosons, The Particle Adventure
- A New Absolute Determination of the Acceleration due to Gravity at the National Physical Laboratory, A. H. Cook. Nature V. 208, p. 279 (1965)
- Perturbation Analysis, Regular and Singular, Thayer Watkins, Department of Economics, San José State University
- Recherches théoriques et expérimentales sur la force de torsion et sur l'élasticité des fils de metal, Charles Coulomb, Histoire de l’Académie Royale des Sciences p.229–269 (1784)
- Electricity and Magnetism, 3rd Ed. by William Duffin, McGraw-Hill (1980) http://www.amazon.com/dp/0-07-084111-X/?tag=encycofearth-20 ISBN: 0-07-084111-X
- Quantum-Chromodynamics: A Definition - Science Articles, Tab Stevens (2003)
- Tension Force, Non-Calculus Based Physics I
- Strings, pulleys, and inclines, Richard Fitzpatrick (2006)
- Elasticity, Periodic Motion, HyperPhysics, Georgia State University (2008)
- The Coriolis Force, Vincent Mallette, Publications in Science and Mathematics, Computing and the Humanities, Inwit Publishing, Inc. (1982-2008)
- Newton's Second Law for Rotation, HyperPhysics***** Mechanics ***** Rotation
- Engineering Mechanics, 12th edition by Russell C. Hibbeler, Pearson Prentice Hall (2010) http://www.amazon.com/dp/0-13-607791-9/?tag=encycofearth-20 ISBN: 0-13-607791-9
- The Foundation of the General Theory of Relativity, Albert Einstein , Annalen der Physik V.49 p. 769–822 (1916)
- Newton's third law of motion, Richard Fitzpatrick (2007)
- Centripetal Force, R. Nave, HyperPhysics***** Mechanics ***** Rotation
- Conservative force, Sunil Kumar Singh, Connexions (2007)
- Conservation of Energy, Doug Davis, General physics
- Metric Units in Engineering by Cornelius Wandmacher and Arnold Johnson, ASCE Publications (1995) http://www.amazon.com/dp/0784400709/?tag=encycofearth-20 ISBN: 0784400709
- Video lecture on Newton's three laws by Walter Lewin from MIT OpenCourseWare
- A Java simulation on vector addition of forces
- History Texts
- The Works of Archimedes, The unabridged work in PDF form, T.L. Heath,(1897).
- The Order of Nature in Aristotle's Physics: Place and the Elements by Helen Lang (1998)
- Cosmology: Historical, Literary, Philosophical, Religious, and Scientific Perspectives by Norriss S. Hetherington, Garland Reference Library of the Humanities (1993)http://www.amazon.com/dp/0815310854/?tag=encycofearth-20 ISBN: 0815310854
- "Ibn S?n? and Buridan on the Motion of the Projectile", A. Sayili Espinoza, Annals of the New York Academy of Sciences 500 (1), pp. 477–482: (1987)
- "Abu'l-Barak?t al-Baghd?d? , Hibat Allah" by Shlomo Pines, Dictionary of Scientific Biography, Charles Scribner's Sons (1970) p. 26–28 http://www.amazon.com/dp/0684101149/?tag=encycofearth-20 ISBN: 0684101149
- "Avempace, Projectile Motion, and Impetus Theory", Abel B. Franco, Journal of the History of Ideas, 64(4), p. 521-546 .) (2003)
- "Ibn S?n? and Buridan on the Motion of the Projectile", A. Sayili (1987), Annals of the New York Academy of Sciences, 500 (1),
- "Islamic Conception Of Intellectual Life", Seyyed Hossein Nasr, in Philip P. Wiener (ed.), Dictionary of the History of Ideas, Vol. 2, p. 65, Charles Scribner's Sons, New York, 1973-1974.
- "Ibn al-Haytham or Alhazen", Nader El-Bizri, in Josef W. Meri (2006), "Medieval Islamic Civilization: An Encyclopaedia", Vol. II, p. 343-345, Routledge, New York, London.
- The History of Science from Augustine to Galileo by A. C. Crombie, Dover Publications (1996) ISBN 0486288501.
- "La dynamique d’Ibn Bajja", Shlomo Pines, in Mélanges Alexandre Koyré, I, 442-468 [462, 468], Paris (1964)
- Abu Arrayhan Muhammad ibn Ahmad al-Biruni, J J O'Connor and E F Robertson, The MacTutor History of Mathematics archive, (1999)
- "Galileo and Avempace: The Dynamics of the Leaning Tower Experiment (II)", Ernest A. Moody, Journal of the History of Ideas, 12(3), p. 375-422 (June 1951).
- Galileo At Work by Stillman Drake, Chicago: University of Chicago Press (1978).http://www.amazon.com/dp/0-226-16226-5/?tag=encycofearth-20 ISBN: 0-226-16226-5
- "An analysis of the historical development of ideas about motion and its implications for teaching", Fernando Espinoza, Physics Education, 40(2), p. 141. (2005)
- Sir Isaac Newton: The Universal Law of Gravitation, Astronomy 161 The Solar System
- Neptune's Discovery. The British Case for Co-Prediction., Nick Kollerstrom, University College London (2001)
Note: This article uses material from the Wikipedia article Force that was accessed on April 7, 2010. The Author(s) and Topic Editor(s) associated with this article have significantly modified the content derived from that original content or with content drawn from other sources. All such content has been reviewed and approved by those Author(s) and Topic Editor(s), and is subject to the same peer review process as other content in the EoE. The current version of the article differs significantly from the version that existed on the date of access. See the EoE’s Policy on the Use of Content from Wikipedia for more information.
Spring 2007 Name______________________
Physical Science Lab
Uniform Circular Motion
Objective: To study the forces necessary to maintain uniform circular motion and to confirm the equation for centripetal motion.
Complete questions 1-5 before coming to class. You will need to read the entire experiment to do this (as well as read pages 48-50 in your text).
1) Write the equation that is used to calculate the centripetal force.
2) Explain what each of the following (used in the formula in question 1) represents. Place the correct units for measuring each of these in parentheses at the end of your answer.
a) r = b) m = c) F = d) v =
3) Calculate the centripetal force of an object of mass 0.7 kg rotating with a speed of 4 m/s at a radius of 0.5 m.
Write the equation for calculating the circumference of a circle. (If you don’t remember, read the lab.)
When a ball on the end of a string is swung in a vertical circle, if the ball travels too slowly, it will not complete the circle. To make the circle the gravitational force must equal the centripetal force. Write the equation for the force of gravity (weight) of an object of mass m. Use the gravitational constant g in your formula.
(4) Now set this equation equal to the centripetal force as you wrote it in question 1, and solve the equation for the velocity of the object. Clearly show each step of the solution.
W = F v =
How fast must an object be swung in a vertical circle to keep the string tight at the top if the radius of the circle is .5 m? Clearly show your work.
(5) Copy this answer to part 14 of this lab. Mark an “x” when copied.____
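A quick numerical check of the kind of calculation asked for in questions 3 and 4, using g = 9.8 m/s² as in the handout.

```python
import math

def centripetal_force(m, v, r):
    return m * v**2 / r

def min_speed_vertical_circle(r, g=9.8):
    # Setting the weight equal to the centripetal force, m*g = m*v^2/r, gives v = sqrt(g*r)
    return math.sqrt(g * r)

print(centripetal_force(m=0.7, v=4.0, r=0.5))   # question 3: 22.4 N
print(min_speed_vertical_circle(r=0.5))         # question 4: ≈ 2.2 m/s
```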
Note: There is only one apparatus for the first part of this experiment, so everyone in the lab will work together, with different people doing various steps of the experiment.
Method: The apparatus consists of a weight on a spring, which is rapidly rotated. As the weight and the spring rotate, the spring is stretched. The speed of rotation necessary to maintain the weight in position is measured. Weights are then hung on the spring to determine the amount of force necessary to stretch the spring the same amount as when it was rotating. This force is then compared to the centripetal force as determined by the equation in question 2.
6) To use the equation for centripetal force (question 2) we need to know three things: a) the mass of the rotating weight, b) the speed (velocity) with which it is moving, and c) the radius of the circle in which it moves. The mass has already been measured and is marked on the apparatus; the mass is
m = _______________ g = _________________ kg.
7) To measure the velocity, we will first determine how many revolutions the apparatus makes in a known amount of time, determine how far it has gone during that time, then use the v=d/t equation to determine the velocity. The first step is to make sure the rotational velocity is as constant as possible and that the apparatus is rotating just fast enough to produce the centripetal force necessary to stretch the spring a known amount. To do this we will count the number of revolutions in 30 sec. Everyone in the class will do this. Then we will take an average. Results that are quite different from the rest of the data will not be included in the average.
Revolutions in 30 seconds
1. ________________ 8. ________________ 15._____________
2. ________________ 9._________________ 16._____________
3. ________________ 10.________________ 17._____________
4. ________________ 11.________________ 18._____________
5. ________________ 12.________________ 19._____________
6. ________________ 13.________________ 20._____________
7. ________________ 14.________________ 21._____________
Ave = ____________________
8) To determine how far the mass traveled in 30 sec., we will first calculate the distance the center of the mass travels in one revolution, then multiply by the number of revolutions. You may remember that the circumference (the distance around) of a circle is 2πR, where π = 3.14 and R is the radius of the circle. To measure R, we hang the apparatus by one end and stretch the spring the amount necessary to make the pointer move to the same position as when rotating. Vernier calipers are used to measure the distance.
R = ___________________ cm = ______________________ m
Distance traveled in one rotation = 2πR =
Distance traveled in 30 sec. =
Velocity = distance/time =
9) Calculation of Centripetal force using the equation
F = (mv²)/R (label all units correctly!)
Mass = _____________, R = ________________ V = _____________
F = (__________) × (___________)² / (__________) = _________
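The data reduction of steps 7–9 can be collected into one short calculation, sketched below; the average revolution count, radius, and mass are made-up sample readings, not measured values.

```python
import math

avg_revs_per_30s = 45.0   # assumed class average from step 7
R = 0.18                  # m, assumed radius from step 8
mass = 0.450              # kg, assumed mass marked on the apparatus

circumference = 2 * math.pi * R                 # distance traveled in one revolution
distance = avg_revs_per_30s * circumference     # distance traveled in 30 s
v = distance / 30.0                             # step 8: velocity
F = mass * v**2 / R                             # step 9: centripetal force

print(f"v ≈ {v:.3f} m/s, F ≈ {F:.2f} N")
```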
10) To check our results, we now determine the amount of mass necessary to stretch the spring and calculate the weight of this mass.
Total mass on spring = _______________g = _______________kg
F = ma, W = mg = (_______________) × (9.8 m/s²) = _____________
11) The calculation of the percent error will give a quantitative measure of the agreement of the two answers. This is calculated by:
percent error = [(answer 10) − (answer 9)] / (answer 10) × 100
In our case this is (Show your work):
% diff = ____________________________________ = [ ]
12) List what you believe could be some of the causes of error in this experiment and tell how you think the results could be improved.
13) Gravity versus centripetal force.
You will swing a tennis ball on the end of a string just fast enough to keep it going in a vertical circle. Your partner will count the number of revolutions in 15 sec. You will then use the equations you developed on the front page to do the calculations.
14) Copy from first page of your lab: equation for v =
Use this equation to calculate the velocity necessary to keep a ball on a circle of radius 0.5 m.
(a) v =
Take one of the tennis balls connected to a string and measure a distance of 50 cm up the string from the center of the ball.
Twirl the ball in a vertical circle with a radius of 50 cm as slowly as it will go while staying on the circle (the string should not go slack).
Number of revolutions in 15 seconds = ___________
Calculate the distance around a 50 cm radius circle = 2πr =___________
(Be sure your answer is in meters)
15) Velocity = d/t = total distance traveled in 15 sec / 15 sec =
The velocities calculated in items 14) and 15) are actually quite different. The velocity in item 14) is the speed at the top of the circle, and that of item 15) is the average velocity. Gravity makes the ball speed up as it falls and slow down at the top of the circle. To compensate for the difference we will calculate the velocity of an object that falls one meter (the distance the ball falls each revolution).
Use the following equation with h = 1 meter.
16) Velocity of an object after falling a height h is given by: v = √(2gh) =
17) The average velocity will be half this value or ________________ m/s
To compare your result to the calculated value in item 14(a), you should subtract the average velocity calculated in item 17 from the velocity in item 15.
Experimental result = velocity from item 15 - velocity from item 17
18) Experimental result =
Calculate the percent error:

percent error = [Item 18 − Item 14(a)] / Item 14(a) × 100 = __________________ %

(that is, the experimental result minus the calculated result, divided by the calculated result)
Tell what you believe to be the most important thing you learned in this experiment.
If any part of the experiment is unclear, what is it?
A sundial is a device that measures time by the position of the Sun. In common designs such as the horizontal sundial, the sun casts a shadow from its style (a thin rod or a sharp, straight edge) onto a flat surface marked with lines indicating the hours of the day. As the sun moves across the sky, the shadow-edge progressively aligns with different hour-lines on the plate. Such designs rely on the style being aligned with the axis of the Earth's rotation. Hence, if such a sundial is to tell the correct time, the style must point towards true North (not the north or south magnetic pole) and the style's angle with horizontal must equal the sundial's geographical latitude. However, many sundials do not fit this description, and operate on different principles.
The principles of sundials can be understood most easily from an ancient model of the Sun's motion. Science has established that the Earth rotates on its axis, and revolves in an elliptic orbit about the Sun; however, meticulous astronomical observations and physics experiments were required to establish this. For navigational and sundial purposes, it is an excellent approximation to assume that the Sun revolves around a stationary Earth on the celestial sphere, which rotates every 23 hours and 56 minutes about its celestial axis, the line connecting the celestial poles. Since the celestial axis is aligned with the axis about which the Earth rotates, its angle with the local horizontal equals the local geographical latitude. Unlike the fixed stars, the Sun changes its position on the celestial sphere, being at positive declination in summer, at negative declination in winter, and having exactly zero declination (i.e., being on the celestial equator) at the equinoxes. The path of the Sun on the celestial sphere is known as the ecliptic, which passes through the twelve constellations of the zodiac in the course of a year.
This model of the Sun's motion helps to understand the principles of sundials. If the shadow-casting gnomon is aligned with the celestial poles, its shadow will revolve at a constant rate, and this rotation will not change with the seasons. This is perhaps the most commonly seen design and, in such cases, the same set of hour lines may be used throughout the year. The hour-lines will be spaced uniformly if the surface receiving the shadow is either perpendicular (as in the equatorial sundial) or circularly symmetric about the gnomon (as in the armillary sphere). In other cases, the hour-lines are not spaced evenly, even though the shadow is rotating uniformly. If the gnomon is not aligned with the celestial poles, even its shadow will not rotate uniformly, and the hour lines must be corrected accordingly. The rays of light that graze the tip of a gnomon, or which pass through a small hole, or which reflect from a small mirror, trace out a cone that is aligned with the celestial poles. The corresponding light-spot or shadow-tip, if it falls onto a flat surface, will trace out a conic section, such as a hyperbola, ellipse or (at the North or South Poles) a circle. This conic section is the intersection of the cone of light rays with the flat surface. This cone and its conic section change with the seasons, as the Sun's declination changes; hence, sundials that follow the motion of such light-spots or shadow-tips often have different hour-lines for different times of the year, as seen in shepherd's dials, sundial rings, and vertical gnomons such as obelisks. Alternatively, sundials may change the angle and/or position of the gnomon relative to the hour lines, as in the analemmatic dial or the Lambert dial.
In general, sundials indicate the time by casting a shadow or throwing light onto a surface known as a dial face or dial plate. Although usually a flat plane, the dial face may also be the inner or outer surface of a sphere, cylinder, cone, helix, and various other shapes.
The time is indicated where the shadow or light falls on the dial face, which is usually inscribed with hour lines. Although usually straight, these hour lines may also be curved, depending on the design of the sundial (see below). In some designs, it is possible to determine the date of the year, or it may be required to know the date to find the correct time. In such cases, there may be multiple sets of hour lines for different months, or there may be mechanisms for setting/calculating the month. In addition to the hour lines, the dial face may offer other data—such as the horizon, the equator and the tropics—which are referred to collectively as the dial furniture.
The entire object that casts a shadow or light onto the dial face is known as the sundial's gnomon. However, it is usually only an edge of the gnomon (or another linear feature) that casts the shadow used to determine the time; this linear feature is known as the sundial's style. The style is usually aligned with the axis of the celestial sphere, and therefore aligned with the local geographical meridian. In some sundial designs, only a point-like feature, such as the tip of the style, is used to determine the time and date; this point-like feature is known as the sundial's nodus. Some sundials use both a style and a nodus to determine the time and date.
The gnomon is usually fixed relative to the dial face, but not always; in some designs such as the analemmatic sundial, the style is moved according to the month. If the style is fixed, the line on the dial plate perpendicularly beneath the style is called the substyle, meaning "below the style". The angle the style makes perpendicularly with the dial plate is called the substyle height, an unusual use of the word height to mean an angle. On many wall dials, the substyle is not the same as the noon line (see below). The angle on the dial plate between the noon line and the substyle is called the substyle distance, an unusual use of the word distance to mean an angle.
By tradition, many sundials have a motto. The motto is usually in the form of an epigram: sometimes somber reflections on the passing of time and the brevity of life, but equally often humorous witticisms of the dial maker.
A dial is said to be equiangular if its hour-lines are straight and spaced equally. Most equiangular sundials have a fixed gnomon style aligned with the Earth's rotational axis, as well as a shadow-receiving surface that is symmetrical about that axis; examples include the equatorial dial, the equatorial bow, the armillary sphere, the cylindrical dial and the conical dial. However, other designs are equiangular, such as the Lambert dial, a version of the analemmatic dial with a moveable style.
Most of the sundials described below use shadow to indicate time, whether it be the shadow-edge of the style, or the shadow-point of the nodus. However, light may be used in equivalent ways. Nodus-based sundials may use a small hole or mirror to isolate a single ray of light; the former are sometimes called aperture dials. The oldest example is perhaps the antiborean sundial (antiboreum), a spherical nodus-based sundial that faces true North; a ray of sunlight enters from the South through a small hole located at the sphere's pole and falls on the hour and date lines inscribed within the sphere, which resemble lines of longitude and latitude, respectively, on a globe.
Light may also be used to replace the shadow-edge of a gnomon. Whereas the style usually casts a sheet of shadow, an equivalent sheet of light can be created by allowing the sun's rays through a thin slit, reflecting them from a long, slim mirror (usually half-cylindrical), or focusing them through a cylindrical lens. For illustration, the Benoy Dial uses a cylindrical lens to create a sheet of light, which falls as a line on the dial surface. Benoy dials can be seen at several sites in Great Britain.
On any given day, the Sun appears to rotate uniformly about this axis, at about 15° per hour, making a full circuit (360°) in 24 hours. A linear gnomon aligned with this axis will cast a sheet of shadow (a half-plane) that, falling opposite to the Sun, likewise rotates about the celestial axis at 15° per hour. The shadow is seen where it falls on a receiving surface that is usually flat, but which may be spherical, cylindrical, conical or of other shapes. If the shadow falls on a surface that is symmetrical about the celestial axis (as in an armillary sphere, or an equatorial dial), the surface-shadow likewise moves uniformly; the hour-lines on the sundial are equally spaced. However, if the receiving surface is not symmetrical (as in most horizontal sundials), the surface shadow generally moves non-uniformly and the hour-lines are not equally spaced; one exception is the Lambert dial described below.
Some types of sundials are designed with a fixed gnomon that is not aligned with the celestial poles, such as a vertical obelisk. Such sundials are covered below under the section, "Nodus-based sundials".
The distinguishing characteristic of the equatorial dial (also called the equinoctial dial) is the planar surface that receives the shadow, which is exactly perpendicular to the gnomon's style. This plane is called equatorial, because it is parallel to the equator of the Earth and of the celestial sphere. If the gnomon is fixed and aligned with the Earth's rotational axis, the sun's apparent rotation about the Earth casts a uniformly rotating sheet of shadow from the gnomon; this produces a uniformly rotating line of shadow on the equatorial plane. Since the sun rotates 360° in 24 hours, the hour-lines on an equatorial dial are all spaced 15° apart (360/24). The uniformity of their spacing makes this type of sundial easy to construct. Both sides of the equatorial dial must be marked, since the shadow will be cast from below in winter and from above in summer. Near the equinoxes in spring and autumn, the sun moves on a circle that is nearly the same as the equatorial plane; hence, no clear shadow is produced on the equatorial dial at those times of year, a drawback of the design.
A nodus is sometimes added to equatorial sundials, which allows the sundial to tell the time of year. On any given day, the shadow of the nodus moves on a circle on the equatorial plane, and the radius of the circle measures the declination of the sun. The ends of the gnomon bar may be used as the nodus, or some feature along its length. An ancient variant of the equatorial sundial has only a nodus (no style) and the concentric circular hour-lines are arranged to resemble a spider-web.
On a horizontal dial the hour-lines are laid out according to tan θ = sin λ tan(15° × t), where λ is the sundial's geographical latitude, θ is the angle between a given hour-line and the noon hour-line (which always points towards true North) on the plane, and t is the number of hours before or after noon. For example, the angle θ of the 3pm hour-line would equal the arctangent of sin(λ), since tan(45°) = 1. When λ equals 90° (at the North Pole), the horizontal sundial becomes an equatorial sundial; the style points straight up (vertically), and the horizontal plane is aligned with the equatorial plane; the hour-line formula becomes θ = 15° × t, as for an equatorial dial. However, a horizontal sundial is impractical on the Earth's equator, where λ equals 0°: the style would lie flat in the plane and cast no shadow.
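As a quick numerical illustration, the horizontal-dial rule can be tabulated in a few lines of Python; the latitude of 40° is an arbitrary example, not a value from the article:

    import math

    # Hour-line angles of a horizontal dial, tan(theta) = sin(lat) * tan(15 deg * t).
    # The latitude is an arbitrary example, not a value from the article.
    latitude = math.radians(40.0)
    for t in range(1, 7):   # hours after noon
        theta = math.atan(math.sin(latitude) * math.tan(math.radians(15 * t)))
        print(f"{t} h after noon: {math.degrees(theta):5.1f} deg from the noon line")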
The chief advantages of the horizontal sundial are that it is easy to read, and the sun lights the face throughout the year. All the hour-lines intersect at the point where the gnomon's style crosses the horizontal plane. Since the style is aligned with the Earth's rotational axis, the style points true North and its angle with the horizontal equals the sundial's geographical latitude λ. A sundial designed for one latitude can be used in another latitude, provided that the sundial is tilted upwards or downwards by an angle equal to the difference in latitude. For example, a sundial designed for a latitude of 40° can be used at a latitude of 45°, if the sundial plane is tilted upwards by 5°, thus aligning the style with the Earth's rotational axis.
In the common vertical dial, the shadow-receiving plane is aligned vertically; as usual, the gnomon's style is aligned with the Earth's axis of rotation. As in the horizontal dial, the line of shadow does not move uniformly on the face; the sundial is not equiangular. If the face of the vertical dial points directly south, the angle of the hour-lines is instead described by the formula tan θ = cos λ tan(15° × t),
where λ is the sundial's geographical latitude, θ is the angle between a given hour-line and the noon hour-line (which always points due north) on the plane, and t is the number of hours before or after noon. For example, the angle θ of the 3pm hour-line would equal the arctangent of cos(λ), since tan(45°) = 1. Interestingly, the shadow moves counter-clockwise on a South-facing vertical dial, whereas it runs clockwise on horizontal and equatorial dials.
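A minimal Python sketch comparing the 3pm hour-line on the two dial types at the same example latitude of 40° (an assumed value):

    import math

    # The 3 pm hour-line (t = 3, so tan(15 deg * t) = 1) on a horizontal and on a
    # direct-south vertical dial, at an example latitude of 40 deg (assumed value).
    latitude = math.radians(40.0)
    horizontal = math.degrees(math.atan(math.sin(latitude)))   # arctan(sin lat), ~32.7 deg
    vertical = math.degrees(math.atan(math.cos(latitude)))     # arctan(cos lat), ~37.5 deg
    print(f"horizontal dial: {horizontal:.1f} deg from the noon line")
    print(f"vertical dial:   {vertical:.1f} deg from the noon line")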
Dials that face due South, North, East or West are called vertical direct dials. If the face of a vertical dial does not face due South, the hours of sunlight that the dial receives may be limited. For example, a vertical dial that faces due East will tell time only in the morning hours; in the afternoon, the sun does not shine on its face. Vertical dials that face due East or West are polar dials, which will be described below. Vertical dials that face North are rarely used, since they tell time only before 6am or after 6pm, by local solar time. For non-direct vertical dials — those that face in non-cardinal directions — the mathematics of arranging the hour-lines becomes more complicated, and is often done by observation; such dials are said to be declining dials.
Vertical dials are commonly mounted on the walls of buildings, such as town-halls, cupolas and church-towers, where they are easy to see from far away. In some cases, vertical dials are placed on all four sides of a rectangular tower, providing the time throughout the day. The face may be painted on the wall, or displayed in inlaid stone; the gnomon is often a single metal bar, or a tripod of metal bars for rigidity. If the wall of the building does not face in a cardinal direction such as due South, the hour lines must be corrected. Since the gnomon's style is aligned with the Earth's rotation axis, it points true North and its angle with the horizontal equals the sundial's geographical latitude; consequently, its angle with the vertical face of the dial equals the colatitude, or 90°-latitude.
For a polar dial, the hour-lines are spaced according to X = H tan(15° × t), where H is the height of the style above the plane, and t is the time (in hours) before or after the center-time for the polar dial. The center time is the time when the style's shadow falls directly down on the plane; for an East-facing dial, the center time will be 6am, for a West-facing dial, this will be 6pm, and for the inclined dial described above, it will be noon. When t approaches ±6 hours away from the center time, the spacing X diverges to infinity; this occurs when the sun's rays become parallel to the plane.
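A small Python sketch of this spacing rule; the 10 cm style height is an arbitrary example:

    import math

    # Spacing of polar-dial hour lines, X = H * tan(15 deg * t); the 10 cm style
    # height is an arbitrary example. The spacing grows without bound as t nears 6 h.
    H = 0.10   # m
    for t in (1, 2, 3, 4, 5):
        X = H * math.tan(math.radians(15 * t))
        print(f"{t} h from the center time: X = {100 * X:.1f} cm")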
A declining dial is any non-horizontal, planar dial that does not face in a cardinal direction, such as (true) North, South, East or West. As usual, the gnomon's style is aligned with the Earth's rotational axis, but the hour-lines are not symmetrical about the noon hour-line. For a vertical dial, the angle θ between the noon hour-line and another hour-line is given by the formula
where λ is the sundial's geographical latitude, t is the time before or after noon, and η is the angle of declination from true South. When such a dial faces South (η=0°), this formula reduces to the formula given above, tan θ = cos λ tan(15° × t).
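The article's own declining-dial formula did not survive conversion, so the Python sketch below uses one commonly quoted form as an assumption, chosen only because it reduces to tan θ = cos λ tan(15° × t) when η = 0, as stated above:

    import math

    # Hour-line angle of a vertical declining dial. The article's own formula was
    # lost in conversion; the expression below is one commonly quoted form, used
    # here only because it reduces to tan(theta) = cos(lat) * tan(15 deg * t) when
    # the declination eta = 0, as stated in the text. Afternoon hours assumed.
    def declining_hour_angle(lat_deg, eta_deg, t_hours):
        lat = math.radians(lat_deg)
        eta = math.radians(eta_deg)
        h = math.radians(15 * t_hours)
        return math.degrees(math.atan(
            math.cos(lat) / (math.cos(eta) / math.tan(h) + math.sin(eta) * math.sin(lat))
        ))

    print(declining_hour_angle(40, 0, 3))    # ~37.5 deg, matches the direct-south dial
    print(declining_hour_angle(40, 20, 3))   # example dial declining 20 deg from south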
When a sundial is not aligned with a cardinal direction, the substyle of its gnomon is not aligned with the noon hour-line. The angle β between the substyle and the noon hour-line is given by the formula
If a vertical sundial faces true South or North (η=0° or 180°, respectively), the correction β=0° and the substyle is aligned with the noon hour-line.
For a reclining dial, the hour-lines follow tan θ = sin(λ + χ) tan(15° × t), where χ is the desired angle of reclining, λ is the sundial's geographical latitude, θ is the angle between a given hour-line and the noon hour-line (which always points due north) on the plane, and t is the number of hours before or after noon. For example, the angle θ of the 3pm hour-line would equal the arctangent of sin(λ+χ), since tan(45°) = 1. When χ equals 90° (in other words, a South-facing vertical dial), we obtain the vertical formula above, since sin(λ+90°) = cos(λ).
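A short Python sketch of this rule; the latitude of 40° is an assumed example, and the χ = 0° and χ = 90° calls simply check that the formula collapses to the horizontal and vertical cases:

    import math

    # Reclining-dial hour lines, tan(theta) = sin(lat + chi) * tan(15 deg * t), as
    # implied by the 3 pm example above. The latitude of 40 deg is an assumed value.
    def reclining_hour_angle(lat_deg, chi_deg, t_hours):
        s = math.sin(math.radians(lat_deg + chi_deg))
        return math.degrees(math.atan(s * math.tan(math.radians(15 * t_hours))))

    print(reclining_hour_angle(40, 90, 3))   # ~37.5 deg, same as the vertical dial
    print(reclining_hour_angle(40, 0, 3))    # ~32.7 deg, same as the horizontal dial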
Some authors use a more specific nomenclature to describe the orientation of the shadow-receiving plane. If the plane's face points downwards towards the ground, it is said to be proclining or inclining, whereas a dial is said to be reclining when the dial face is pointing away from the ground.
where λ is the sundial's geographical latitude, t is the time before or after noon, and χ and η are the angles of inclination and declination, respectively.
As in the simpler declining dial, the gnomon-substyle is not aligned with the noon hour-line. The general formula for the angle β between the substyle and the noon-line is given by
The surface receiving the shadow need not be a plane, but can have any shape, provided that the sundial maker is willing to mark the hour-lines. If the style is aligned with the Earth's rotational axis, a spherical shape is convenient since the hour-lines are equally spaced, as they are on the equatorial dial above; the sundial is equiangular. This is the principle behind the armillary sphere and the equatorial bow sundial. However, some equiangular sundials — such as the Lambert dial described below — are based on other principles.
In the equatorial bow sundial, the gnomon is a bar, slot or stretched wire parallel to the celestial axis. The face is a semicircle (corresponding to the equator of the sphere), with markings on the inner surface. This pattern, built a couple of meters wide out of temperature-invariant steel invar, was used to keep the trains running on time in France before World War I.
Among the most precise sundials ever made are two equatorial bows constructed of marble found in Yantra mandir. This collection of sundials and other astronomical instruments was built by Maharaja Jai Singh II at his then-new capital of Jaipur, India between 1727 and 1733. The larger equatorial bow is called the Samrat Yantra (The Supreme Instrument); it stands 27 meters tall, and its shadow moves visibly at 1 mm per second, or roughly a hand's breadth (6 cm) every minute.
As an elegant alternative, the gnomon may be located on the circumference of a cylinder or sphere, rather than at its center of symmetry. In that case, the hour lines are again spaced equally, but at double the usual angle, due to the geometrical inscribed angle theorem. This is the basis of some modern sundials, but it was also used in ancient times; in one type, the edges of a half-cylindrical gnomon served as the styles.
Just as the armillary sphere is largely open for easy viewing of the dial, such non-planar surfaces need not be complete. For example, a cylindrical dial could be rendered as a helical ribbon-like surface, with a thin gnomon located either along its center or at its periphery.
Although the Sun appears to rotate nearly uniformly about the Earth, it is not perfectly uniform, due to the ellipticity of the Earth's orbit (the fact that the Earth's orbit about the Sun is not perfectly circular) and the tilt (obliquity) of the Earth's rotational axis relative to the plane of its orbit. Therefore, sundial time varies from standard clock time. On four days of the year, the correction is effectively zero, but on others, it can be as much as a quarter-hour early or late. The amount of correction is described by the equation of time. This correction is universal; it does not depend on the local latitude of the sundial.
In some sundials, the equation of time correction is provided as a plaque affixed to the sundial. In more sophisticated sundials, however, the equation can be incorporated automatically. For example, some equatorial bow sundials are supplied with a small wheel that sets the time of year; this wheel in turn rotates the equatorial bow, offsetting its time measurement. In other cases, the hour lines may be curved, or the equatorial bow may be shaped like a vase, which exploits the changing altitude of the sun over the year to effect the proper offset in time. A heliochronometer is a precision sundial that corrects apparent solar time to mean solar time or another standard time. Heliochronometers usually indicate the minutes to within 1 minute of Universal Time.
An analemma may be added to many types of sundials to correct apparent solar time to mean solar time or another standard time. These usually have hour lines shaped like "figure eights" (analemmas) according to the equation of time. This compensates for the slight eccentricity in the Earth's orbit that causes up to a 15 minute variation from mean solar time. This is a type of dial furniture seen on more complicated horizontal and vertical dials.
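To get a feel for the size of the correction, the Python sketch below uses a common approximation of the equation of time; the coefficients are a standard textbook fit, not something taken from this article:

    import math

    # Rough size of the equation-of-time correction, using a common approximation
    # (not taken from this article): B = 360*(N - 81)/364 degrees for day-of-year N,
    # correction in minutes ~ 9.87*sin(2B) - 7.53*cos(B) - 1.5*sin(B).
    def equation_of_time_minutes(day_of_year):
        B = math.radians(360.0 * (day_of_year - 81) / 364.0)
        return 9.87 * math.sin(2 * B) - 7.53 * math.cos(B) - 1.5 * math.sin(B)

    for n in (45, 105, 196, 307):   # mid-Feb, mid-Apr, mid-Jul, early Nov
        print(n, round(equation_of_time_minutes(n), 1), "minutes")   # roughly -15, 0, -6, +16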
In its simplest form, the style is a thin slit that allows the sun's rays to fall on the hour-lines of an equatorial ring. As usual, the style is aligned with the Earth's axis; to do this, the user may orient the dial towards true North and suspend the ring dial vertically from the appropriate point on the meridian ring. Such dials may be made self-aligning with the addition of a more complicated central bar, instead of a simple slit-style. This bar pivoted about its end points and held a perforated slider that was positioned to the month and day according to a scale scribed on the bar. The time was determined by rotating the bar towards the sun so that the light shining through the hole fell on the equatorial ring. This forced the user to rotate the instrument, which had the effect of aligning the instrument's vertical ring with the meridian.
When not in use, the equatorial and meridian rings can be folded together into a small disk.
In 1610, Edward Wright created the sea ring, which mounted a universal ring dial over a magnetic compass. This permitted mariners to determine the time and magnetic variation in a single step.
An analemmatic sundial uses a vertical gnomon and its hour lines are the vertical projection of the hour lines of a circular equatorial sundial onto a flat plane. Therefore, the analemmatic sundial is an ellipse, where the short axis is aligned North-South and the long axis is aligned East-West. The noon hour line points true North, whereas the hour lines for 6am and 6pm point due West and East, respectively; the ratio of the short to long axes equals the sine sin(Φ) of the local geographical latitude, denoted Φ. All the hour lines converge to a single center; the angle θ of a given hour line with the noon hour is given by the formula
where W is the width of the ellipse and δ is the Sun's declination at that time of year. The declination measures how far the sun is above the celestial equator; at the equinoxes, δ=0 whereas it equals roughly ±23.5° at the summer and winter solstices.
Accurate dials of this type fit nicely in a public square, using a ball at the tip of a flagpole as the nodus, with the face painted on or inlaid in the pavement. A less accurate version of the sundial is to lay out the hour marks on concrete, and then let the user stand in a square marked with the month. In middle latitudes, the ellipse with the hour-marks should be about six meters wide, so the shadow of the head of the beholder will fall near it most of the time. The month squares are arranged to correct the sundial for the time of year. The user's head then forms the gnomon of the dial. If the sundial is molded into the concrete, it resists vandalism and is engaging and reasonably accurate.
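The article's own analemmatic formulas were lost in conversion; the Python sketch below lays out the dial using the usual textbook construction, which is consistent with the axis ratio sin(Φ) quoted above but should be read as an assumption rather than the article's exact expressions:

    import math

    # Usual textbook layout of an analemmatic dial (treat as an assumption): hour
    # points sit on an ellipse whose minor/major axis ratio is sin(latitude), and
    # the movable gnomon is offset along the north-south axis by
    # (W/2) * cos(latitude) * tan(solar declination).
    def analemmatic_layout(lat_deg, width, t_hours, decl_deg):
        lat = math.radians(lat_deg)
        h = math.radians(15 * t_hours)
        x = (width / 2) * math.sin(h)                  # east-west position of the hour point
        y = (width / 2) * math.sin(lat) * math.cos(h)  # north-south position of the hour point
        z = (width / 2) * math.cos(lat) * math.tan(math.radians(decl_deg))  # gnomon offset
        return x, y, z

    print(analemmatic_layout(50.0, 6.0, 3, 23.5))      # 3 pm point and midsummer offset, in m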
where R is the radius of the Lambert dial and δ again indicates the Sun's declination for that time of year.
It was four o'clock according to my guess,
Since eleven feet, a little more or less,
my shadow at the time did fall,
Considering that I myself am six feet tall.
An equivalent type of sundial using a vertical rod of fixed length is known as a backstaff dial.
A shepherd's dial — also known as a shepherds' column dial, pillar dial, cylinder dial or chilindre — is a portable cylindrical sundial with a gnomon that juts out perpendicularly. When held vertically and pointed at the Sun, it measures the altitude of the Sun, from which the hour can be calculated if the day is known. The hour curves are inscribed on the cylinder for reading the time. Shepherd's dials are sometimes hollow, so that the gnomon can be stored within when not in use.
"Goth now your wey," quod he, "al stille and softe,
And lat us dyne as sone as that ye may;
for by my chilindre it is pryme of day."
O God! methinks it were a happy life
To be no better than a homely swain;
To sit upon a hill, as I do now,
To carve out dials, quaintly, point by point,
Thereby to see the minutes, how they run--
How many makes the hour full complete,
How many hours brings about the day,
How many days will finish up the year,
How many years a mortal man may live.
The cylindrical shepherd's dial can be unrolled into a flat plate. In one simple version, the front and back of the plate each have three columns, corresponding to pairs of months with roughly the same solar declination (June-July, May-August, April-September, March-October, February-November, and January-December). The top of each column has a hole for inserting the shadow-casting gnomon, a peg. Only two times are marked on the column below, one for noon and the other for mid-morning/mid-afternoon.
Timesticks, clock spears, or shepherds' time sticks are based on the same principles as dials. The time stick is carved with eight vertical time scales, each for a different period of the year and each calculated according to the relative amount of daylight during the different months of the year. Any reading depends not only on the time of day but also on the latitude and time of year. A peg gnomon is inserted at the top in the appropriate hole or face for the season of the year, and turned to the Sun so that the shadow falls directly down the scale. Its end displays the time.
Another type of sundial follows the motion of a single point of light or shadow, which may be called the nodus. For example, the sundial may follow the sharp tip of a gnomon's shadow, e.g., the shadow-tip of a vertical obelisk (e.g., the Solarium Augusti) or the tip of the horizontal marker in a shepherd's dial. Alternatively, sunlight may be allowed to pass through a small hole or reflected from a small (e.g., coin-sized) circular mirror, forming a small spot of light whose position may be followed. In such cases, the rays of light trace out a cone over the course of a day; when the rays fall on a surface, the path followed is the intersection of the cone with that surface. Most commonly, the receiving surface is a geometrical plane, so that the path of the shadow-tip or light-spot traces out a conic section such as a hyperbola or an ellipse. The collection of hyperbolae was called a pelekinon (axe) by the Greeks, because it resembles a double-bladed ax, narrow in the center (near the noon-line) and flaring out at the ends (early morning and late evening hours).
The diptych consisted of two small flat faces, joined by a hinge. Diptychs usually folded into little flat boxes suitable for a pocket. The gnomon was a string between the two faces. When the string was tight, the two faces formed both a vertical and horizontal sundial. These were made of white ivory, inlaid with black lacquer markings. The gnomons were black braided silk, linen or hemp string. With a knot or bead on the string as a nodus, and the correct markings, a diptych (really any sundial large enough) can keep a calendar well-enough to plant crops. A common error describes the diptych dial as self-aligning. This is not correct for diptych dials consisting of a horizontal and vertical dial using a string gnomon between faces, no matter the orientation of the dial faces. Since the string gnomon is continuous, the shadows must meet at the hinge; hence, any orientation of the dial will show the same time on both dials.
Such multiface dials have the advantage of receiving light (and, thus, telling time) at every hour of the day. They can also be designed to give the time in different time-zones simultaneously. However, they are generally not self-aligning, since their various dials generally use the same principle to tell time, that of a gnomon-style aligned with the Earth's axis of rotation. Self-aligning dials require that at least two independent principles are used to tell time, e.g., a horizontal dial (in which the style is aligned with the Earth's axis) and an analemmatic dial (in which the style is not). In many cases, the multiface dials are erected never to be moved and, thus, need be aligned only once.
The intersection of the two threads' shadows gives the solar time.
The simplest sundials do not give the hours, but rather note the exact moment of 12:00 noon. In centuries past, such dials were used to correct mechanical clocks, which were sometimes so inaccurate as to lose or gain significant time in a single day.
In U.S. colonial-era houses, a noon-mark can often be found carved into a floor or windowsill. Such marks indicate local noon, and they provide a simple and accurate time reference for households that do not possess accurate clocks. In modern times, in some Asian countries, post offices have set their clocks from a precision noon-mark. These in turn provided the times for the rest of the society. The typical noon-mark sundial was a lens set above an analemmatic plate. The plate has an engraved figure-eight shape, which corresponds to plotting the equation of time (described above) versus the solar declination. When the edge of the sun's image touches the part of the shape for the current month, this indicates that it is 12:00 noon.
Martin Bernhardt created a special gnomon for an equatorial sundial that allows the time to be read from a single scale without knowing the date, while taking the equation of time into account. Such a sundial is accurate to within a minute.
The ancient Greeks developed many of the principles and forms of the sundial. Sundials are believed to have been introduced into Greece by Anaximander of Miletus, c. 560 BC. According to Herodotus, Greek sundials were initially derived from their Babylonian counterparts. The Greeks were well-positioned to develop the science of sundials, having founded the science of geometry, and in particular having discovered the conic sections that are traced by a sundial nodus. The mathematician and astronomer Theodosius of Bithynia (ca. 160 BC-ca. 100 BC) is said to have invented a universal sundial that could be used anywhere on Earth.
The Romans adopted the Greek sundials, so much so that Plautus complained in one of his plays about his day being "chopped into pieces" by the ubiquitous sundials. Writing in ca. 25 BC, the Roman author Vitruvius listed all the known types of dials in Book IX of his De Architectura, together with their Greek inventors. All of these are believed to be nodus-type sundials, differing mainly in the surface that receives the shadow of the nodus.
The Romans built a very large sundial in 10 BC, the Solarium Augusti, which is a classic nodus-based obelisk casting a shadow on a planar pelekinon.
The Greek dials were inherited and developed further by the Islamic Caliphate cultures and the post-Renaissance Europeans. Since the Greek dials were nodus-based with straight hour-lines, they indicated unequal hours — also called temporary hours — that varied with the seasons, since every day was divided into twelve equal segments; thus, hours were shorter in winter and longer in summer. The idea of using hours of equal time length throughout the year was the innovation of Abu'l-Hasan Ibn al-Shatir in 1371, based on earlier developments in trigonometry by Muhammad ibn Jābir al-Harrānī al-Battānī (Albategni). Ibn al-Shatir was aware that "using a gnomon that is parallel to the Earth's axis will produce sundials whose hour lines indicate equal hours on any day of the year." His sundial is the oldest polar-axis sundial still in existence. The concept later appeared in Western sundials from at least 1446.
The onset of the Renaissance saw an explosion of new designs. Italian astronomer Giovanni Padovani published a treatise on the sundial in 1570, in which he included instructions for the manufacture and laying out of mural (vertical) and horizontal sundials. Giuseppe Biancani's Constructio instrumenti ad horologia solaria (ca. 1620) discusses how to make a perfect sundial, with accompanying illustrations.
The oldest sundial in England is incorporated into the Bewcastle Cross ca. 800 AD. The dial is divided into four tides, covering the parts of the working day in areas influenced by the Vikings, a maritime culture which noted the passage of time in the progression of the two high and two low tides each day.
The custom of measuring time by one's shadow has persisted since ancient times. In Aristophanes' play Assembly of Women, Praxagora asks her husband to return when his shadow reaches . The Venerable Bede also gave instructions to his followers on how to interpret their shadow lengths to know what time it is.
Sundials are associated with the passage of time, and it has become common to inscribe a motto into a sundial, often one that prompts the viewer to reflect on the transience of the world and the inevitability of death, e.g., "Do not kill time, for it will surely kill thee." A more cheerful popular motto is "I count only the sunny hours." Another is "I am a sundial, and I make a botch of what is done far better by a watch." Various collections of sundial mottoes have been published over the past few centuries. | http://www.reference.com/browse/sundial | 13 |
52 | Prepared by Tracey Hall & Nicole Strangman
One way to help make a curriculum more supportive of students and teachers is to incorporate graphic organizers. Graphic organizers come in many varieties and have been widely researched for their effectiveness in improving learning outcomes for various students. The following five sections present a definition of graphic organizers, a sampling of different types and their applications, a discussion of the research evidence for their effectiveness, useful Web resources, and a list of referenced research articles. We have focused this overview on applications of graphic organizers to reading instruction, with the intention of later expanding the discussion into other subject areas.
A graphic organizer is a visual and graphic display that depicts the relationships between facts, terms, and or ideas within a learning task. Graphic organizers are also sometimes referred to as knowledge maps, concept maps, story maps, cognitive organizers, advance organizers, or concept diagrams.
Types of Graphic Organizers
Graphic organizers come in many different forms, each one best suited to organizing a particular type of information. The following examples are merely a sampling of the different types and uses of graphic organizers.
This graphic organizer is made up of a series of shapes in several rows. The top row is made up of a diamond in the center with two circles, one on each side, and two vertical rows of rectangles, one on each outer side. The diamond is labeled "Main Idea." The two circles are each labeled "Subord. Idea" and the top rectangle of each outer row is labeled "Support Detail." Underneath the center diamond is a circle and beneath the circle is a horizontal row of three rectangles. The circle is labeled "Subord. Idea" and the center rectangle is labeled "Support Detail." Lines connect the shapes of the graphic organizer.
This graphic organizer is entitled "Network Tree" and is made up of a series of ovals of two different sizes. At the top are three large ovals, one above a row of two. They are connected by two black lines. At the bottom are two rows of three smaller ovals. One row of three is connected by black lines to a larger oval above them on the right, and one set of three is connected by black lines to a larger oval above them on the left.
This graphic organizer is entitled "Spider Map" and is made up of a large, central oval with four sets of black lines extending from it. The central oval is labeled "Topic, Concept, Theme." Four slanted lines extend from the oval, and each one has two horizontal lines attached. Along the side of the slanted line at the top right of the graphic organizer is the label "Main Idea." On one of the horizontal lines at the top left is the label "Detail."
This graphic organizer is entitled "Problem and Solution Map" and is made up of a series of boxes. On the left, a vertical row of two boxes have arrows pointing to a larger box in the center of the graphic organizer. Each of the two boxes are labeled "Influence." The center box is labeled "Cause." An arrow points from the center box to a diamond on the right. The diamond is labeled "Effect." An arrow points from the bottom of the center box to a rectangle beneath it. The rectangle is labeled "Solution."
This graphic organizer is entitled "Problem-Solution Outline" and is made up of a vertical row of three rectangles. To the left of the top rectangle is the label "Problem." The word "Who" appears at the top right corner, and the words "What" and "Why" appear within the rectangle. An arrow points from the top rectangle to the one in the middle. The middle rectangle is larger than the other two. An arrow cuts through the center of the rectangle, pointing down. To the left of the rectangle is the label "Solution." Within the rectangle, on the left, the words "Attempted Solutions" appear, with the numbers one and two beneath. Within the rectangle, on the right, the word "Results" appears, with the numbers one and two beneath. An arrow points from the center rectangle to the one beneath, which is labeled "End Result."
This graphic organizer is entitled "Sequential Episodic Map" and is made up of an oval at the top and three vertical rows of boxes beneath. The oval is labeled "Main Idea." The top box on the left is labeled "Cause..." The top box in the middle is labeled "Effect...Cause..." The top box on the right is labeled "Effect..." Lines connect the boxes beneath, and arrows point from the rows to the top boxes. Each box in the second row is labeled "Influence."
This graphic organizer is entitled "Fishbone Map" and is made up of a series of horizontal and slanted lines. In the center of the graphic organizer, is a thick black line with two sets of two slanted lines extending from it in the shape of two arrows pointing to the left. The arrow's lines on the left are labeled "Cause 1" on the top and "Cause 2" on the bottom. The arrow's lines on the right are labeled "Cause 3" on the top and "Cause 4" on the bottom. A horizontal line extends from the top of each arrow. The horizontal line on the left is labeled "Detail." Two horizontal lines extend from the bottom of each arrow. The bottom line on the left is labeled "Detail."
This graphic organizer is entitled "Comparative and Contrastive Map" and is made up of two ovals at the top, a series of rectangles connected by straight and zig-zag lines beneath, and a vertical row of three circles on the left. The oval on the top left is labeled "Concept 1." The oval on the top right is labeled "Concept 2." Below this, three rows of three rectangles are attached to each other and to the top ovals by straight lines. Zig-zag lines connect the rectangles horizontally. The rectangle on the top left and the rectangle on the top right are each labeled "Diff. Feature." The top rectangle in the center is labeled "Sim. Feature." Brackets to the left of each rectangle in the row on the left indicate each of three circles on the far left of the graphic organizer. Each of the three circles are labeled "Dimension 1."
This graphic organizer is entitled "Compare-Contrast Matrix" and is made up of a table with three columns and three rows. In the column on the left, the first table cell of each row is labeled "Attribute 1," "Attribute 2," and "Attribute 3," from top to bottom. The rest of the table cells are shaded grey.
This graphic organizer is entitled "Continuum Scale" and is made up of a straight horizontal line with a short vertical line at each end. The left end of the line is labeled "Low," and the right end of the line is labeled "High."
This graphic organizer is entitled "Series of Events Chain" and is made up of a vertical row of three rectangles with arrows pointing down between them. Above the top rectangle is the label "Initiating Event." Within the rectangles are the labels "Event 1," "Event 2," and "Event 3," from top to bottom. Above the third rectangle is the label "Final Event."
This graphic organizer is entitled "Cycle" and is made up of four squares with four curved arrows between each square, pointing clockwise from the top square, creating a circular appearance. The top square is labeled "1." The square on the right is labeled "2." The bottom square is labeled "3." The square on the left is labeled "4."
This graphic organizer is entitled "Human Interaction Outline" and is made up of five rectangles, one large rectangle in the center, and four smaller rectangles, two above and two below the center rectangle. The rectangle on the top left is labeled "Person/Group 1." An arrow points from its bottom right corner to the center rectangle. The rectangle on the top right is labeled "Person/Group 2." An arrow points from its bottom left corner to the center rectangle. The center rectangle is labeled "Action" at the top left, with an arrow pointing across the rectangle to the label "Reaction" on the top right. An arrow points from this label diagonally across the rectangle to the bottom left and the label "Action" (repeated). An arrow points across the bottom of the rectangle to the bottom right and the label "Reaction" (repeated). Two arrows point from the bottom center of the center rectangle to the two smaller rectangles on the bottom left and right. The two bottom rectangles are blank.
A Descriptive or Thematic Map works well for mapping generic information, but particularly well for mapping hierarchical relationships.
Organizing a hierarchical set of information, reflecting superordinate or subordinate elements, is made easier by constructing a Network Tree.
When the information relating to a main idea or theme does not fit into a hierarchy, a Spider Map can help with organization.
When information contains cause and effect problems and solutions, a Problem and Solution Map can be useful for organizing.
A Problem-Solution Outline helps students to compare different solutions to a problem.
A Sequential Episodic Map is useful for mapping cause and effect.
When cause-effect relationships are complex and non-redundant a Fishbone Map may be particularly useful.
A Comparative and Contrastive Map can help students to compare and contrast two concepts according to their features.
Another way to compare concepts' attributes is to construct a Compare-Contrast Matrix.
A Continuum Scale is effective for organizing information along a dimension such as less to more, low to high, and few to many.
A Series of Events Chain can help students organize information according to various steps or stages.
A Cycle Map is useful for organizing information that is circular or cyclical, with no absolute beginning or ending.
A Human Interaction Outline is effective for organizing events in terms of a chain of action and reaction (especially useful in social sciences and humanities).
Applications Across Curriculum Areas
Graphic organizers have been applied across a range of curriculum subject areas. Although reading is by far the most well studied application, science, social studies, language arts, and math are additional content areas that are represented in the research base on graphic organizers. Operations such as mapping cause and effect, note taking, comparing and contrasting concepts, organizing problems and solutions, and relating information to main ideas or themes can be beneficial to many subject areas. The observed benefits in these subject areas go beyond those known to occur in reading comprehension (Bulgren, Schumaker, & Deshler, 1988; Darch, Carnine, & Kameenui, 1986; Herl, O'Neil, Chung, & Schacter, 1999; Willerman & Mac Harg, 1991).
Evidence for Effectiveness
There is solid evidence for the effectiveness of graphic organizers in facilitating learning. Ten of the 12 studies reviewed here that investigated the effects of graphic organizer use on learning reported some positive learning outcome. We focus this overview on two main areas: comprehension and vocabulary knowledge.
By far the most frequently investigated learning measure in the studies we reviewed is comprehension. Of 15 studies, 7 (Boyle & Weishaar, 1997; Bulgren et al., 1988; Darch et al., 1986; Gardill & Jitendra, 1999; Idol & Croll, 1987; Sinatra, Stahl-Gemake, & Berg, 1984; Willerman & Mac Harg, 1991) reported that graphic organizer use elevated comprehension. Comprehension measures included the Stanford Diagnostic Reading Test (Boyle & Weishaar, 1997), comprehension questions (Alvermann & Boothby, 1986; Boyle & Weishaar, 1997; Darch et al., 1986; Gardill & Jitendra, 1999; Idol & Croll, 1987; Sinatra et al., 1984), a concept acquisition test (Bulgren et al., 1988), teacher-made tests (Bulgren et al., 1988; Willerman & Mac Harg, 1991), written summaries (Gallego et al., 1989), and story grammar tests (Gardill & Jitendra, 1999). The reliability of these improvements in comprehension is further supported by Moore and Readence's (1984) meta-analysis. When looking across 23 different studies, they found a small but consistent effect on comprehension.
Although 3 studies reported no effect of graphic organizer use on comprehension, these findings appear to be attributable to deficiencies in experimental design. Carnes, Lindbeck, & Griffin (1987) reported no effect of advance organizer use relative to non-advance organizer use on the comprehension of microcomputer physics tutorials. However, students in this study were not trained to use the advance organizers. This same factor may account for the lack of effect in the Clements-Davis & Ley (1991) study, where high school students received no instruction on how to use the thematic pre-organizers that they were given to assist story reading. Alvermann and Boothby (1986) also failed to demonstrate an improvement in comprehension. In this case, the lack of improvement is quite likely due to a ceiling effect, as comprehension scores were quite high even before the intervention. Thus, weighing the collective evidence, there still appears to be strong support for the ability of graphic organizers to improve reading comprehension.
Moore and Readence's (1984) meta-analysis suggests that gains in vocabulary knowledge following graphic organizer use may be even greater than gains in comprehension. The average effect size for the 23 studies reviewed was more than twice as large as that reported for comprehension. Thus, graphic organizers appear to be a very effective tool for improving vocabulary knowledge.
Factors Influencing Effectiveness
Research studies have established that successful learning outcomes in the areas described above are contingent on certain factors. Important variables include grade level, point of implementation, instructional context, and ease of implementation. We elaborate the influence of these variables here.
Successful learning outcomes have been demonstrated for students with (Anderson-Inman, Knox-Quinn, & Horney, 1996; Boyle & Weishaar, 1997; Bulgren et al., 1988; Gallego et al., 1989; Gardill & Jitendra, 1999; Idol & Croll, 1987; Newby, Caldwell, & Recht, 1989; Sinatra et al., 1984) and without (Alvermann & Boothby, 1986; Bulgren et al., 1988; Darch et al., 1986; Willerman & Mac Harg, 1991) learning disabilities across a range of grade levels, including elementary, junior high, and high school. However, on average the largest effects of graphic organizers on learning from text have been reported for university populations (Moore & Readence, 1984). There are consistent although more modest effects for elementary populations (Moore & Readence, 1984).
Point of Implementation
Graphic organizers may be introduced as advance organizers, before the learning task, or as post organizers, after encountering the learning material. A review of the research from 1980-1991 (Hudson, Lignugaris-Kraft, & Miller, 1993) concludes that visual displays can be successfully implemented at several phases of the instructional cycle. Indeed, positive outcomes have been reported when graphic organizers are used as both advance (Boyle & Weishaar, 1997; Gallego et al., 1989) and post organizers (Alvermann & Boothby, 1986; Boyle & Weishaar, 1997; Gardill & Jitendra, 1999; Idol & Croll, 1987; Newby et al., 1989; Sinatra et al., 1984; Willerman & Mac Harg, 1991).
However, the precise point of implementation does appear to influence the degree of graphic organizers' effectiveness. In their comprehensive review, Moore and Readence (1984) report that the point of implementation is a crucial factor in determining the magnitude of improvement in learning outcome. When graphic organizers were used as a pre-reading activity, average effect sizes were small. In contrast, graphic organizers used as a follow up to reading yielded somewhat large improvements in learning outcomes. Thus, efforts to improve learning outcomes may be more successful when graphic organizers are introduced after the learning material.
In reviewing 11 years of research, Hudson et al. (1993) note that positive outcomes for curricular enhancements require the use of effective teaching practices. Merkley & Jefferies (2001) note that "It is important, however, that GO planning extend beyond construction of the visual to the deliberate consideration of the teacher's strategies…to accompany the visual." Thus, instructional context is another determinant of the effectiveness of graphic organizers for improving learning.
Without teacher instruction on how to use them, graphic organizers may not be effective learning tools (Carnes et al. 1987; Clements-Davis & Ley, 1991). Graphic organizers can successfully improve learning when there is a substantive instructional context such as explicit instruction incorporating teacher modeling (Boyle & Weishaar, 1997; Gardill & Jitendra, 1999; Idol & Croll, 1987; Willerman & Mac Harg, 1991) and independent practice with feedback (Boyle & Weishaar, 1997; Gardill & Jitendra, 1999; Idol & Croll, 1987), strategy instruction (Anderson-Inman et al., 1996; Boyle & Weishaar, 1997; Darch et al., 1986; Scanlon, Deshler, & Schumaker, 1996), story mapping (Gardill & Jitendra, 1999; Idol & Croll, 1987), semantic mapping (Gallego et al., 1989), and concept teaching routines (Bulgren et al., 1988). Most successful interventions minimally include a teacher introduction describing the purpose of the graphic organizer and setting the reading purpose.
In the absence of systematic study of the role of instructional context, it is difficult to identify with confidence specific aspects that are tied to success. However, in our review an interactive/collaborative approach involving teacher modeling, student-teacher discussion, and practice with feedback appeared to be consistently correlated with learning improvement (Alvermann & Boothby, 1986; Bulgren et al, 1988; Gardill & Jitendra, 1999; Idol & Croll, 1987; Scanlon et al., 1996). Thus, contexts that provide opportunity for student input and interaction with the teacher and/or one another (Darch et al., 1986; Gallego et al., 1989) may be especially effective.
Also useful are Merkley and Jefferies' (2001) specific suggestions for teaching with graphic organizers. Their guidelines include: verbalizing relationships between the concepts represented within the organizer, providing opportunities for student input, connecting new information to past learning, making reference to upcoming text, and reinforcing decoding and structural analysis.
A relatively new area of research is the investigation of computer-based methods for presenting graphic organizer instruction. Herl et al. (1999) tested the effectiveness of two computer-based knowledge mapping systems in a population of middle and high school students. Students either worked individually using an artificial Web space to augment and revise knowledge maps or networked with one another across computers to collaboratively construct maps. Knowledge mapping scores (determined by comparison to expert maps) were significantly improved for students working individually to elaborate maps, but not for students involved in collaborative construction. These findings indicate that a computer-based system can be successfully used to instruct students on how to develop concept maps. They also suggest that web searching methods may improve students' abilities to develop sophisticated maps. Student collaborative approaches, however, may be less effective.
Carnes et al. (1987) constructed computerized advanced organizers to help introduce high school physics students to microcomputer physics tutorials but were unable to establish a significant improvement in learning rate, retention, or performance on a teacher made achievement test. However, the lack of effect is likely attributable to the absence of teacher introduction or training with the organizers.
Anderson-Inman et al. (1996) found substantial variability in the adoption of computer-based graphic organizer study strategies. Some students became quite skilled and independent with these strategies, while others developed only basic skills and remained reluctant in their use. Their finding that differences in adoption level were correlated with reading test and intelligence scores suggests that it may be possible to predict levels of user proficiency.
Successful learning outcomes can be obtained in a variety of classroom settings, including special education classrooms (Anderson-Inman et al., 1996; Boyle & Weishaar, 1997; Gardill & Jitendra, 1999), mainstream classrooms (Alvermann & Boothby, 1986; Bulgren et al., 1988; Darch et al., 1986; Willerman & Mac Harg, 1991), and one-on-one instruction (Idol & Croll, 1987; Newby et al., 1989; Sinatra et al., 1984). However, the relative ease of implementation is an important determinant of this success (Novak, 1990). Some instructional contexts that have been successful in research studies are unfortunately difficult for teachers and/or students to implement. For example, Scanlon et al. (1996) developed (collaboratively with teachers) a 5-step strategy and substrategy for helping students in academically diverse classes to process information and put it into a graphic organizer for studying and/or writing. Teachers in the study implemented the prescribed teaching behaviors to much less of a degree than they had promised and expressed dissatisfaction with the lack of fit with their regular teaching routine. Students trained with the strategy performed better than controls on a strategy performance test, but to only a modest degree. They were at best ambivalent about the utility of the strategy for improving learning. Moore and Readence (1984) make similar observations in their meta-analysis, noting frequent reports that students were unable to appreciate the value of graphic organizers to learning and felt that these tools were out of place in the current instructional context. To draw more solid conclusions about the best ways to implement graphic organizers, more systematic investigations of the role of instructional context are needed.
The Graphic Organizer
This site is a rich resource for learning about graphic organizers, offering links, lists of references and books about graphic organizers, information about using graphic organizers for writing, guidelines for designing graphic organizers and assisting students in designing them, and samples of student work with graphic organizers.
This report is based in part on an earlier version conducted by Roxanne Ruzic and Kathy O'Connell, National Center on Accessing the General Curriculum.
Ruzic, R., & O'Connell, K. (2001). An overview: Enhancements literature review.
Alvermann, D. E., & Boothby, P. R. (1986). Children's transfer of graphic organizer instruction. Reading Psychology, 7(2), 87-100.
Anderson-Inman, L., Knox-Quinn, C., & Horney, M. A. (1996). Computer-based study strategies for students with learning disabilities: Individual differences associated with adoption level. Journal of Learning Disabilities, 29(5), 461-484.
Boyle, J. R., & Weishaar, M. (1997). The effects of expert-generated versus student-generated cognitive organizers on the reading comprehension of students with learning disabilities. Learning Disabilities Research & Practice, 12(4), 228-235.
Bulgren, J., Schumaker, J. B., & Deshler, D. D. (1988). Effectiveness of a concept teaching routine in enhancing the performance of LD students in secondary-level mainstream classes. Learning Disability Quarterly, 11(1), 3-17.
Carnes, E. R., Lindbeck, J. S., & Griffin, C. F. (1987). Effects of group size and advance organizers on learning parameters when using microcomputer tutorials in kinematics. Journal of Research in Science Teaching, 24(9), 781-789.
Clements-Davis, G. L., & Ley, T. C. (1991). Thematic preorganizers and the reading comprehension of tenth-grade world literature students. Reading Research & Instruction, 31(1), 43-53.
Darch, C. B., Carnine, D. W., & Kammeenui, E. J. (1986). The role of graphic organizers and social structure in content area instruction. Journal of Reading Behavior, 18(4), 275-295.
Gallego, M. A., Duran, G. Z., & Scanlon, D. J. (1989). Interactive teaching and learning: Facilitating learning disabled students' transition from novice to expert. Literacy Theory and Research, 311-319.
Gardill, M. C., & Jitendra, A. K. (1999). Advanced story map instruction: Effects on the reading comprehension of students with learning disabilities. The Journal of Special Education, 33(1), 2-17.
Herl, H. E., O'Neil, H. F. Jr., Chung, G. K. W. K. & Schacter, J. (1999). Reliability and validity of a computer-based knowledge mapping system to measure content understanding. Computers in Human Behavior, 15(3-4), 315-333.
Hudson, P., Lignugaris-Kraft, B., & Miller, T. Using content enhancements to improve the performance of adolescents with learning disabilities in content classes. Learning Disabilities Research & Practice, 8(2), 106-126.
Idol, L., & Croll, V. J. (1987). Story-mapping training as a means of improving reading comprehension. Learning Disability Quarterly, 10(3), 214-229.
Merkley, D. M., & Jefferies, D. (2001). Guidelines for implementing a graphic organizer. The Reading Teacher, 54(4), 350-357.
Moore, D. W., & Readence, J. E. (1984). A quantitative and qualitative review of graphic organizer research. Journal of Educational Research, 78(1), 11-17.
Newby, R. F., Caldwell, J., & Recht, D. R. (1989). Improving the reading comprehension of children with dysphonetic and dyseidetic dyslexia using story grammar. Journal of Learning Disabilities, 22(6), 373-380.
Novak, J. D. (1990). Concept maps and Vee diagrams: two metacognitive tools to facilitate meaningful learning. Instructional Science, 19(1), 29-52.
Scanlon, D., Deshler, D. D., & Schumaker, J. B. (1996). Can a strategy be taught and learned in secondary inclusive classrooms? Learning Disabilities Research & Practice, 11(1), 41-57.
Sinatra, R. C., Stahl-Gemake, J., & Berg, D. N. (1984). Improving reading comprehension of disabled readers through semantic mapping. Reading Teacher, 38(1), 22-29.
Tindal, G., Nolet, V., & Blake, G. (1992). Focus on teaching and learning in content classes. Resource Consultant Training Program, University of Oregon, Eugene; Training Module No. 3, 34-38.
Willerman, M., & Mac Harg, R. A. (1991). The concept map as an advance organizer. Journal of Research in Science Teaching, 28(8), 705-712.
This content was developed pursuant to cooperative agreement #H324H990004 under CFDA 84.324H between CAST and the Office of Special Education Programs, U.S. Department of Education. However, the opinions expressed herein do not necessarily reflect the position or policy of the U.S. Department of Education or the Office of Special Education Programs and no endorsement by that office should be inferred.
Cite this paper as follows:
Hall, T., & Strangman, N. (2002). Graphic organizers. Wakefield, MA: National Center on Accessing the General Curriculum. Retrieved [insert date] from http://aim.cast.org/learn/historyarchive/backgroundpapers/graphic_organizers
FIRST BLACK TROOPS
Some early black troops from Illinois
On May 25, 1863, a group of fifteen to twenty African American men gathered at a railroad platform in Springfield to begin a journey east to Massachusetts. There they would be mustered into the Fifty-Fifth Massachusetts Infantry Regiment, one of the Union's earliest black military units. The men were some of the first of many black troops from Illinois that would fight in our Civil War.
It is likely that a number of factors led blacks from Illinois to enlist in units raised and organized by Massachusetts, which became the 54th and 55th regiments of infantry. Perhaps the most important was opportunity. The Bay State eagerly sought out African American recruits for units that would be made up of black privates and non-commissioned staff, and led by white officers.
Such was not the case in Illinois. As early as the summer of 1862 some black men residing in Illinois, supported by local white leaders, offered military service to state officials. The larger public, however, was not ready to place weapons in the hands of black men so that they might fight for the nation.
Reports of a rally held in Chicago in late April 1863 noted that the African American leader John Jones expressed sadness that the "State of his present residence would not accept his services in this glorious cause, but he was determined to participate, and... should embrace the opportunity presented by the noble and patriotic Governor of Massachusetts." Jones was "willing to adopt Massachusetts as his home, provided she gave him the right which every freeman covets, namely, that of fighting for her honor, and the honor of his country."
The meeting unanimously adopted a series of resolutions, one noting that African Americans had been "ever loyal to a government that has oppressed us for 200 years and more, and do pledge our lives and sacred honors to... aid and assist in defending the common liberties of the entire nation."
The meeting also issued a call for black men in Illinois to enroll in the Massachusetts units, declaring that they would play a crucial role in bringing freedom to their brothers and sisters throughout the South:
"In this war the liberty of the slaves is at once established, so soon as the Proclamation is carried to them.
Who is to bear to our brethren of the South this good news... thousands of whom have not yet heard it. It is our duty-the duty of the black man to bear this proclamation, which can only be effectively accomplished by entering the army as soldiers... and thus carry it by fire and sword, not only to the heart of the South, but into the hearts of rebels."
Where were they from?
Massachusetts records and the published history of the 55th show recruits in varying numbers from towns across Illinois, including: Alton, Bloomington, Cairo, Champaign, Chicago, Jacksonville, Newman, Peoria, Quincy, Springfield, Tuscola, and Waterloo.
It appears that many of those from Chicago, Springfield, and Jacksonville, among others, were new to Illinois, having recently escaped slavery. Among them was Andrew Jackson Smith, an escaped slave who had come north to De Witt County after working as servant to Colonel John Warner of the 41st Illinois Infantry.
A surprisingly large group of the state's first black enlistees were the men from rural Clinton County, located about forty-five miles east of St. Louis. The father-son team of Daniel and James Mayhew gave their home as Breese Station, as did Tecumseh Pendergrass. Nearby Jamestown was home to John Coleman, Joseph Haren, John Morgan, and John Oglesby.
The standouts in the Clinton County cluster were members of what appears to have been an extended family. Franklin Curtis and the brothers Napoleon and Richard Curtis hailed from Jamestown, while John, Joseph, and Pleasant Curtis all lived near Carlyle. Census and land sale records show the family group to have arrived in the area by the late 1830s. Like many other early settlers, the heads of the family purchased public domain land directly from the federal government. The U.S. population census for 1840, taken shortly after the family's arrival, shows 111 African Americans living in Clinton County as free persons. Another 10 were held as slaves, some of them on farms located within a mile or so of the Curtis family farms.
The 55th in service
The 55th was mustered into service at Readville, Massachusetts, on June 22, 1863, and a month later headed south. The men then began a long stretch of fatigue duty on South Carolina coastal islands. It was the kind of work seen by most Americans, north and south, as being most appropriate to African Americans. In February 1864 the regiment moved to serve near Jacksonville, Florida, for two months, after which they returned to the islands of the South Carolina coast. On July 2, the 55th took part in a small action on James Island.
Shortly after a November transfer to Hilton Head, South Carolina, the regiment took part in what would be its most notable fight, on November 30, 1864, near the town of Honey Hill. The action, in which the 55th fought with the now-famous 54th Massachusetts, was small but very deadly. The ground to be crossed in attacking Confederate lines was broken and covered with thick underbrush, making rapid movement impossible. Rebel rifle and artillery fire took a large toll. The two sides continued their small arms fire through the day in spite of the inability to see their opponents due to the thick vegetation. That evening the Federal forces withdrew under cover of darkness. The men of the 55th had acquitted themselves well. Andrew J. Smith, who lived at Clinton, Illinois, at the time of his enlistment, saved the regiment's flag when the color bearer was "blown to pieces by the explosion of a shell." Smith's exploit was rewarded, in 2000, with the Congressional Medal of Honor.
After Honey Hill the 55th returned to its pattern of frequent movements along the South Carolina coast, punctuated by occasional short expeditions up the area's rivers. In February 1865 the black troops of the 55th had the privilege of marching through the just-captured Charleston, South Carolina, perceived by many as the birthplace of the secession movement that had led to the war.
During its term of service the regiment lost 64 enlisted men killed or mortally wounded, among them Joseph Haren of Jamestown, Elijah Thomas and Alvers Northrup of Springfield, Andre Haggins of Quincy, John Abbott of Bloomington, and Emery Burton of Jacksonville. Another 128 were taken by disease. One of them was John Curtis, of the Clinton County family that contributed six members to the 55th.
After the war
Authors of the 1868 regimental history made at least some attempt to locate veterans of the 55th. Many of the men who had enlisted from Illinois had returned to the state, including the surviving members of the Curtis family, and Daniel and James Mayhew, all of Clinton County. Others returned to their prewar homes of Alton, Champaign County, Chicago, Douglas County, Jacksonville, and Springfield.
A few Illinoisans of the 55th, however, adopted Massachusetts as their postwar home, at least for a time. William Miledam and Jacob Payne located in Boston, while Lewis Clark, who enlisted from Jacksonville, lived elsewhere in the Bay State.
Interested in learning more?
Burt G. Wilder, The Fifty-Fifth Regiment of Massachusetts Volunteer Infantry, colored, June 1863-September 1865, which includes a regimental roster, can be found online at
Who in the world was Lewis B. Parsons?
In December 1861 Lewis Baldwin Parsons was placed in charge of all military rail and river transportation within the Department of the Mississippi, which extended from Pittsburg to west of the Mississippi and south to New Orleans. In a war that made huge use of steam to move troops and supplies, Parsons was a crucial player.
Lewis Parsons was born in New York State in 1818, the year of Illinois statehood. After graduating from Yale in 1840 he began teaching in Alabama against the wishes of his father, who feared the influences of living in a society based on slavery. He soon began the study of law at Harvard, and after graduation began a practice in Alton, Illinois. Parsons soon began to invest in land, by 1855 owning an estimated 2,000 acres. About the same time he began work as an officer of the Ohio and Mississippi Railroad. He left the company in 1860, as the nation's sectional crisis reached a new level of intensity.
War's early days
Parsons lived in St. Louis when secessionists fired on Fort Sumter in April 1861. Southern sympathizers and Union men struggled for control of the city. The former railroad attorney served as a volunteer aide to unionist politician Francis P. Blair, Jr., taking part in the capture in May of secessionists gathered at Camp Jackson on the city's outskirts.
Months later Parsons served on the staff of Gen. George B. McClellan, an old acquaintance. McClellan had been named president of the eastern division of the Ohio and Mississippi Railroad at the time Parsons left the company in 1860. After small but notable early victories in western Virginia, McClellan was called by President Lincoln to reorganize the Union forces that had been trounced at the First Battle of Bull Run. On November 1, 1861, Parsons's friend was named general-in-chief of Union forces.
Bringing order from chaos
The day before McClellan's promotion Parsons had been transferred to the army quartermaster's department at St. Louis, where he served on a commission to make sense of financial claims against the government that had been incurred during the command of Gen. John C. Frémont.
Parsons's most important wartime service, however, began with an order issued December 9, 1861:
"You will take charge of all the transportation pertaining to the Department of the Mississippi by River and Railroad and discharge all employees not required to facilitate this particular service."
The former railroad man quickly set out to put the transportation of masses of men, equipment, and animals on a businesslike basis. Up to the time of his appointment quartermaster officers had operated largely on their own. The result was a chaotic system without an overarching plan of operation. Parsons established a strict standard of accountability for officers' actions, declaring that if the U.S. treasury was not to be bankrupted "some general system would be required for the entire West." That shaking up caused anger among some shippers, who quickly protested Parsons’s orders.
Other problems came from lack of communication within the army. The important Illinois Central Railroad, which had been financed largely by a federal land grant, complained that it was not being paid for its services. Parsons wanted to pay but could not get information from superiors about rates. Final orders regarding the payment did not come for almost nine months. Parsons also felt in a bind because he was not informed of plans for future campaigns and the transportation needs that they might involve. Able only to guess at the army's future plans, Parsons could do no more than write to his subordinates in December 1862 that "Large Movements are in prospect, and you will require very soon, a large supply of coal...."
Parsons also shook up the system by which basic transportation costs were established. In the early days much of the transportation was arranged by charter and rates thus varied from project to project. The business-like Parsons established regular rates by term contracts. This was especially important in arranging steamboat transport, which carried the majority of food, equipment, and clothing to the army. While railroads were owned by corporations, steamboats were owned by individuals or small groups of investors. Arranging to use many boats meant dealing with many owners. The contract negotiated in December 1862 for all river transport between St. Louis and New Orleans provided for "precedence and prompt dispatch to transportation of the United States" and set specific rates for passengers and goods. (For the specific rates see his Reports to the War Department link at the end of this feature.)
The pre-arrangements provided by contracts made by Parsons allowed for quick movement of troops and military equipment. In early 1862 his work played a crucial role in supporting water-based campaigns including Fort Henry, Fort Donelson, and Island Number 10. When in late December 1862 Gen. William Sherman planned an attack on Haines's Bluff, Mississippi, Parsons provided transports for 13,000 men on twelve hours' notice. When the assault failed the transports evacuated Sherman's army and their equipment by the next morning, saving the force from real trouble.
More rank and responsibility
Parsons's work in quickly moving troops and war materiel was hailed throughout the army. On August 26, 1864, he was promoted to Chief of the Quartermaster Department's Division of Rail and River Transportation. He was now in charge of all military rail and river operations.
Parsons soon received the biggest test of his career. In January 1865 he organized and supervised the movement of the 20,000 men and 1,000 animals of the 23rd Army Corps from Eastport, Mississippi to Chesapeake Bay on the Atlantic coast, where they would join the war's final campaign. After receiving his orders Parsons began to gather hundreds of railroad cars and the locomotives needed to move them. He then began the process of dealing with managers of the different railroads whose lines would be used. He also arranged for enough steamboats to bring men to the major rail center of Parkersburg, West Virginia, from which they would move east by rail.
It was not an easy proposition. Freezing on the Ohio River forced many boats to land at Cincinnati, where the men transferred to trains that had been rerouted from their original starting point. The trains then contended with miles of snow-drifted track. In spite of these problems things could not have gone much better. In a bit over two weeks Parsons and those working under him moved 20,000 men, with artillery and animals, a distance of about 1,400 miles. Secretary of War Edwin Stanton called the successful effort "without parallel in the movement of Armies." Months later President Lincoln signaled his appreciation by calling for Parsons to be promoted to the rank of brigadier general.
It is impossible to know the exact numbers of soldiers, animals, and tons of equipment moved under the superintendence of Lewis Parsons. One small sign is a record of the amount of military transportation (by rail and river) that moved through the single city of St. Louis during the year ending June 30, 1863:
Food, ammunition, clothing, medical equipment, etc.: 491,014,463 lbs.
Wagons and ambulances: 4,348
Board-feet of lumber: 2,314,619
Interested in learning more?
A short sketch of Parsons's life and work is Harry E. Pratt, "Lewis B. Parsons: Mover of Armies and Railroad Builder," Journal of the Illinois State Historical Society 44:4 (Winter 1951): 349-54, which can be found online at:
After the war Parsons wrote about his work: Reports to the War Department by Brev. Maj. Gen. Lewis B. Parsons, Chief of Rail and River Transportation (1867), http://hdl.handle.net/2027/hvd.32044086280112, and Rail and River Army Transportation in the Civil War (1899), http://hdl.handle.net/2027/pst.000004306702
The major collection of Parsons's papers can be studied at the Abraham Lincoln Presidential Library in Springfield, Illinois.
Illinois schools during the war
While the nation was preoccupied with events on the battlefront, important things were happening on the home front. In Illinois hundreds of thousands of youngsters continued to attend classes in the state’s relatively new system of public schools.
The system in 1861
Laws enacted in Illinois in the 1850s created a mandatory system of public schools financed through taxation. The system was overseen by a state superintendent of public instruction. Each county organized itself into districts, each of which operated a school. Persons seeking employment as teachers had to be certified as competent by county school officials. Textbooks were adopted at the local level, though the state superintendent offered advice as to the quality of textbooks offered in the marketplace. Illinois State Normal University (today Illinois State University) operated as the state's teacher-training institution.
When Fort Sumter was fired on in April 1861 the Illinois public school system consisted of about 9,300 schools serving almost 473,000 students. This was estimated to be about 80% of the white population between the ages of 5 and 21 years. In 1860 males dominated the field of teaching with 8,200 positions, while just over 6,400 women worked in classrooms. Males also dominated the pay scale, receiving on average over 50% more than their female counterparts. Officials in many counties reported some opposition to public education and consequently slow advances in raising the quality of their teaching staff and in improving school buildings and instructional equipment.
The school laws used the word "white" to designate the clients of the system. Some small provision for African American children was made by declaring that in "townships in which there shall be persons of color, the board of trustees shall allow such persons a portion of the school fund equal to the amount of taxes collected for school purposes from such persons of color..." While this did in the abstract create a right on the part of black children to public education, in the concrete it all but ensured a level of public funding that would make impossible a genuine education.
Ideas about education and the war
In his 1863 report on the state's schools Superintendent of Public Instruction Newton Bateman, after outlining the workings of the system, raised "deeper questions... questions that look beyond the domain of means, far out into the realm of results; that... propound to us the earnest inquiry, what then?" The result was a lengthy section about the role of educators in the national life, asking further "What is the great end of popular education? Are our public schools answering that end? How can they be made to do so more perfectly?"
Bateman answered his own questions. "The chief end is to make GOOD CITIZENS. Not to make precocious scholars—not to make smart boys and girls-not to gratify the vanity of parents and friends... but simply, in the widest and truest sense, to make good citizens." What exactly did that entail? Bateman provided a long explanation highlighting some important characteristics. Most important was "cordial submission to lawful authority"-good citizens willingly submitted themselves to the law of God and of the just governments that God ordained. Another important attribute was "moral rectitude," which could be taught by exposing children to the sad examples of failed individuals and nations. Teachers, though, should also raise positive models and "fail not to point the young to those substantial and enduring honors which cluster... upon the brow of virtue." A final goal was to engender love of country. "The true American is ever a worshipper. The starry symbol of his country's sovereignty is to him radiant with a diviner glory than that which meets his moral vision." The southern rebellion, Bateman argued, was a consequence of ignorance and a resulting lack of true good citizenship.
The war brings a change
Probably the greatest effect the war had on schools was to change the composition of the teaching force. Several county superintendents reported difficulty in finding competent replacements for experienced men who enlisted in the military. Coles County's superintendent lamented that "I have been compelled to grant certificates... to persons who failed to come up to the standard I have adopted, in order that the schools might all be supplied."
Local schools responded by hiring women to fill many vacant teaching positions. By 1865 the number of male teachers had dropped to 6,172. Women picked up the slack with 10,843 presiding over classrooms.
Some districts employed women reluctantly, but later reported happy results. Jersey County's superintendent wrote that many "female teachers have had but little experience in the art of teaching, but, by the assistance of the directors and patrons, have succeeded in giving pretty general satisfaction." W. L. Campbell of Mercer County proved much more enthusiastic. He noted that 80% of new teachers in his county were women and complained of the injustice of their being paid roughly 60% of the salary awarded their male counterparts. "I cannot see why this difference should exist. The competent, faithful and true female teacher... is entitled to just as much compensation as the male. She performs the same amount of mental labor, undergoes the same wear and tear of mind and body, and accomplishes the same amount of good. The honest truth is, we do not pay our teachers enough, and particularly our females."
The return of peace in 1865 changed many things in Illinois. One was the trend of women dominating public school teaching. Between 1865 and 1866 the number of male teachers rose by about 650. In that same period the number of women teaching declined by about 400. Though holding a smaller percentage of all teaching positions, women continued to preside in a solid majority of Illinois classrooms. They would continue to do so—a major change from prewar days.
One thing that did not change was the status of African Americans in the public school system. The 1850s statute that allowed but did not mandate public education opportunities for black children remained in effect. State superintendent Bateman put it bluntly in early 1867: "For the education of these six thousand colored children [of school age], the general school law of the State makes, virtually, no provision. By the discriminating terms employed throughout the statute, it is plainly the intention to exclude them from a joint participation in the benefits of the free school system." He joined the Illinois State Teachers' Association in calling urgently for the General Assembly to remove the word "white" from the state’s school laws. That would take some time.
Interested in learning more?
The state superintendent of public instruction issued biennial reports that included much statistical information and reports by county school superintendents. Reports covering the war years can be found online at http://hdl.handle.net/2027/nyp.33433075985816 and http://hdl.handle.net/2027/nyp.33433075985808
Fireworks in Springfield
The competing public meetings to discuss emancipation held by the political parties at the state capitol building in Springfield in early January 1863 (see the January 2013 monthly feature) foreshadowed a complete breakdown of relations between the Republican governor and the Democratic majority in the legislature. The 1863 session of the Illinois General Assembly was filled with fireworks and ended with an action never seen before—or since. Here are a few high points.
Battling over emancipation
Not surprisingly one of the first issues to arise after the session's January 5 opening was emancipation, President Lincoln's proclamation having gone into effect just days before. In his opening message to the new legislature, Republican Governor Richard Yates proudly declared himself a radical in the struggle against slavery, arguing that freedom was the will of God:
"[T]he only road out of this war is blows aimed at the heart of the rebellion, is the entire demolition of the evil which is the cause of our present fearful complications....The rebellion, which was designed to perpetuate slavery....is now, under a righteous providence, being made the instrument to destroy it....I demand the removal of slavery. In the name of my country, whose peace it has disturbed....in the names of the heroes it has slain....in the name of justice, whose high tribunals it has corrupted....in the name of God himself, I demand the utter and entire demolition of this Heaven-cursed wrong of human bondage."
Democratic legislators had very different ideas. In fact they worked to walk back the steps made toward freedom. The first days of the legislative session saw the submission of a bill by which Illinois would ratify a proposed amendment to the U. S. Constitution approved in December 1860 by Congress and President James Buchanan. If ratified by the needed number of states, it would give permanent protection to the institution of slavery in the United States. Strong majorities in both houses of the Illinois legislature passed the law by which the Prairie State would ratify the proposed amendment.
A call for an armistice and a peace convention
The Democratic majority also quickly began a movement to adopt an official statement denouncing the Lincoln administration's conduct of the war and calling for a convention of the states that would work to bring peace between North and South.
The Democratic majority of the House Committee on Federal Relations condemned the president's suspension of the writ of habeas corpus, the closing of opposition newspapers by military authority, and other infringements on civil liberties. Such moves, they said, threatened to create "a consolidated military despotism." Members then declared that, given the failure and the corruption of the Lincoln administration’s war effort, "it is to the people we must look for a restoration of the Union, and the blessing of peace." A convention should be held at a place to be determined upon by the states "to so adjust our National difficulties that the States may hereafter live together in harmony." Six commissioners—all but one opponents of the war—were named to work with similar groups from other states to arrange a military ceasefire and convention.
Members of the Republican minority issued a report of their own, a full-throated call for supporting President Lincoln's conduct of the war. The Emancipation Proclamation was called "a necessity demanded of the President....a necessary and constitutional war measure." Violations of the civil liberties of citizens were deplored, but the report declared that civil rights had little to do with fighting traitors-"no man has a right to be a traitor-no man has a right to aid and abet the enemies of his country." Most important, it was imperative to "crush out the existing rebellion," said the Republicans. "Our own happiness, prosperity and power as a people, and the fate of republican institutions throughout the world" turns on the success of the Union, they declared.
An observer later wrote that "no one not present at the time can imagine the bitterness, even ferocity of temper, with which these resolutions were discussed." After days during which little else was addressed, the majority resolutions, including the call for a negotiated peace, were adopted by a party-line vote. The majority then called for a recess until June, to allow the proposed convention to meet and attempt a peace.
Senator Funk explodes
As the legislature prepared to recess, the actions by the antiwar majority brought Senator Isaac Funk to white heat. The McLean County legislator announced that he would happily serve as judge, jury, and executioner against those fellow legislators that he saw as enemies of the United States:
"Mr. Speaker, you must excuse me; I could sit no longer in my seat and listen to these traitors. My heart....would not let me. My heart, that cries out for the lives of our brave volunteers in the field, that these traitors at home are destroying by thousands, would not let me. My heart, that bleeds for the widows and the orphans at home, would not let me. Yes, these traitors and villains in the senate are killing my neighbors' boys, now fighting in the field.
Mr. Speaker, these traitors on this floor should be provided with hempen collars. They deserve them. They deserve hanging, I say.... I go for hanging them, and I dare to tell them so, right here, to their traitorous faces. Traitors should be hung. It would be the salvation of the country to hang them."
Funk's speech was a sensation, astonishing some and bringing cheers from others in the galleries. It was reported or quoted at length in newspapers throughout the northern states and was even issued in pamphlet form in both English and German.
Thinking during the recess
The General Assembly returned to Springfield on June 2, following the recess that was meant to await the results of the national peace convention to which it had appointed delegates. There was no report because there had been no convention.
Legislators who had followed the public press knew that there was much dissatisfaction with their earlier actions. Published reports of meetings of Illinois regiments at the battle fronts indicated anger among the troops over what were seen as measures that hurt the military. Many soldiers bitterly denounced the proposals for a cease-fire and peace convention, threatening to punish what they saw as treason at home after defeating traitors in the South. In spite of weariness after two years of war, a number of public meetings across Illinois expressed similar feelings.
Governor Yates's coup
Governor Yates and others quickly decided "that the State and country would receive a greater benefit from the adjournment of this legislature than from the passage of any measure" it might pass. When the two houses did not agree on a date for adjournment Yates used a provision of the state constitution to "prorogue" the legislature—to declare the session at an end. Republican members immediately deserted the legislative chambers, leaving Democrats to fume at Yates's stunning move. Opposition leaders soon brought before the Illinois Supreme Court a suit challenging the governor's action. Months later, at a time when the case's outcome could have little practical effect, the court sustained the governor. Richard Yates would fight the rest of war without further legislative restraints.
Interested in learning more?
A fuller look at the struggle between Governor Yates and the Democratic opposition is Jack Nortrup, "Yates, the Prorogued Legislature, and the Constitutional Convention," Journal of the Illinois State Historical Society 62:1 (Spring 1969): 5-34. It can be found online at http://dig.lib.niu.edu/ISHS/ishs-1969spring/ishs-1969spring-005.pdf
Isaac Funk's speech in pamphlet format can be read at http://catalog.hathitrust.org/api/volumes/oclc/2900928.html
On January 1, 1863, President Lincoln signed the Emancipation Proclamation, declaring free African Americans held as slaves in areas under Confederate control. This followed his September 22, 1862, preliminary proclamation, which warned those in rebellious states that they could save their "peculiar institution" by laying down their arms and renewing their allegiance to the United States within one hundred days. Reaction to the proclamation in Illinois, like that in the Union as a whole, varied from condemnation to approbation.
Mass meetings to express opinions
Reactions of the two political parties came quickly. Political leaders from throughout Illinois gathered in Springfield for the January 5, 1863, opening of the new legislature. They soon followed the old practice of holding public meetings during the legislative gathering to develop and stake out their stands on emancipation.
Democrats condemn the president and his policy
Democrats, who held majorities in both houses of the General Assembly, called a public meeting for that very evening to condemn the Lincoln administration and its policies, especially the emancipation of African American slaves. Representatives Hall was reported to be filled, while "Hundreds came and went away, unable to obtain entrance." A committee on resolutions met to develop an official statement of belief, while speeches were made condemning national war policy. The resolutions committee's statement was enthusiastically adopted, laying out the Illinois Democratic position on emancipation. It condemned Lincoln's action, charging him with changing the purpose of the war, creating radical and highly dangerous social change, and encouraging the kind of violent uprising of enslaved people envisioned by men such as Denmark Vesey, Nat Turner, and John Brown:
"Resolved, That the emancipation proclamation of the president of the United States is as unwarrantable in military as in civil law; a gigantic usurpation, at once converting the war, professedly commenced by the administration for the vindication of the authority of the constitution, into a crusade for the sudden, unconditional and violent liberation of three millions of negro slaves; a result of which would not only be a total subversion of the Federal union, but a revolution in the social organization of the southern states, the immediate and remote, the present and far-reaching consequences of which to both races cannot be contemplated without the most dismal forebodings of horror and dismay. The proclamation invites servile insurrection as an element in this emancipation crusade-a means of warfare, the inhumanity and diabolism of which are without example in civilized warfare, and which we denounce, and which all the civilized world will denounce, as an ineffaceable disgrace to the American name."
Leading Democrats spoke again at a meeting on January 8. This time they generally condemned Lincoln's management of the war and the arrest of administration opponents. Several advocated peace even if it included disunion. A single speaker spoke against the Lincoln administration but emphasized support for the struggle against rebels who would destroy the Union established by the Founders with the adoption of the U.S. Constitution.
Republicans stand with their leader
Republicans held their first public meeting in Representatives Hall on January 9. Again, the chamber was reported to be packed, with hundreds turned away for lack of room. "[P]atriotic Union ladies of Springfield and other parts of the State-those who are daily laboring with their hands for their brothers, relatives and friends in the field" composed a large part of the audience. A special level of patriotism and support from Illinois soldiers was suggested by decorating the front of the room with the bullet-riddled flags of several regiments.
The main feature of the evening was a speech by Major General Richard J. Oglesby, who had been nearly killed three months before at the Battle of Corinth, Mississippi. Oglesby's speech emphasized the importance of supporting the war to preserve the Union, referencing the Emancipation Proclamation as a necessary action on the part of the Lincoln administration to preserve the Union.
"This proclamation is a great thing, perhaps the greatest thing that has occurred in this century. It is too big for us to realize. When we fully comprehend what it is we shall like it better than we do now."
On January 15 Republicans held another meeting, this one to adopt formal resolutions supporting the actions of President Lincoln in fighting the rebellion. One of them touched upon the proclamation:
"Resolved, That the Constitution of the United States confers upon the Government of the same all the powers necessary to the effectual suppression of the rebellion... and to this end it may deprive them [those in rebellion] of life, liberty or property if required, in its judgment, and that an imperious necessity demanded of the President of the United States the issuing of his proclamation of freedom to the slaves in the rebellious States... and we pledge ourselves to sustain him in the same."
Emancipation as a military necessity united Republicans, many of whom did not oppose slavery primarily on humanitarian grounds.
Illinois troops respond
In his January 9 speech General (and later governor) Richard Oglesby, lately arrived from the war front, commented, "You want to know about the proclamation, and what the army thinks about it. I do not know the sentiment of the army. No man knows the sentiment of so large a body of men." The sentiments of Illinoisans in the service began to trickle in, mostly through letters to friends and family, many of which found their way into local newspapers. Some condemned Lincoln's proclamation and perceived evils that they saw arising from it, including racial mixing. Many more supported emancipation, not as a move toward human rights but as a blow against traitors who wished to kill them. Depriving rebels of slave property removed a source of labor that could be used by Confederates against Union troops. Practicality aside, emancipation also punished people who had turned against the nation.
How did Illinois African Americans note the day?
It is not possible to know exactly how Illinois African Americans reacted to the January 1 proclamation. Meetings likely were held in those towns that were home to a black population of any size. Certainly more concerned with emancipation's meaning for its white readers, the press (an all-white institution in the Illinois of 1863) seems to have given such meetings very little notice. Frederick Douglass' Monthly reported that "At Chicago, as our Western correspondent 'PILGRIM' reports, the colored people celebrated the gladsome New Year's Day with appropriate public festivities-feeling sure of the coming of the Proclamation, before it was issued."
Interested in learning more?
It is possible to get an idea of reaction to emancipation in Illinois towns by reading their locally published newspapers. A catalog of Illinois newspapers on microfilm available for loan through the Abraham Lincoln Presidential Library can be found online at http://www.illinoishistory.gov/lib/newspaper.htm
Studies of the reactions of Illinoisans to emancipation include Bruce Tap, "Race, Rhetoric, and Emancipation: The Election of 1862 in Illinois," Civil War History 39 (1993): 101-25. David W. Adams, "Illinois Soldiers and the Emancipation Proclamation," Journal of the Illinois State Historical Society 67 (1974): 406-21 can be found online at http://dig.lib.niu.edu/ISHS/ishs-1974sept/ishs-1974sept-406.pdf
"You are sure to be happy again":
President Lincoln consoles Fanny McCullough
Executive Mansion, Washington, December 23, 1862.
It is with deep grief that I learn of the death of your kind and brave Father; and, especially, that it is affecting your young heart beyond what is common in such cases. In this sad world of ours, sorrow comes to all; and, to the young, it comes with bitterest agony, because it takes them unawares. The older have learned to ever expect it. I am anxious to afford some alleviation of your present distress. Perfect relief is not possible, except with time. You can not now realize that you will ever feel better. Is not this so? And yet it is a mistake. You are sure to be happy again. To know this, which is certainly true, will make you some less miserable now. I have had experience enough to know what I say; and you need only to believe it, to feel better at once. The memory of your dear Father, instead of an agony, will yet be a sad sweet feeling in your heart, of a purer, and holier sort than you have known before. Please present my kind regards to your afflicted mother.
Miss. Fanny McCullough.
Your sincere friend
A. Lincoln
Abraham Lincoln's letter to twenty-one-year-old Fanny McCullough stands as one of the great writings of consolation. Though less famous than the 1864 letter to Lydia Bixby, the mother of several sons thought to have been lost in the service, the McCullough missive holds special interest. Firstly, the original letter exists today, which is not the case with the Bixby letter. Secondly, Lincoln's words were written as encouragement to a young person whose life had been devastated by the war.
The death of William McCullough
Bloomington resident William McCullough joined the 4th Illinois Cavalry in the summer of 1861 and was commissioned lieutenant colonel. The disabled fifty-year-old (he had lost his right arm to a threshing machine) left behind his wife, two daughters, and two sons. Judging by surviving correspondence Mrs. McCullough and all of the children suffered from some form of bad health or behavioral issue. Friends at home worried for the McCulloughs.
William had developed a wide circle of friends in the legal profession while serving McLean County as sheriff and later as circuit clerk. Among them were Abraham Lincoln, who had practiced regularly in McLean County, and local Republican party leaders including David Davis and Leonard Swett. At the time of William’s death in 1862 those friends were president of the United States, an associate justice of the U.S. Supreme Court, and the recent Republican nominee for U. S. Congress.
McCullough died on December 5, 1862, in a skirmish near Coffeeville, Mississippi. His unit had been surrounded in the late afternoon dusk. Refusing to surrender, he was killed by musketry. The body was recovered under a flag of truce to be returned to Bloomington for burial.
The sad news
News of McCullough's death reached Leonard Swett in Bloomington on December 8. During the Civil War information about soldier death moved through informal channels rather than by government telegrams or visits of army officials such as that portrayed in "Saving Private Ryan." Swett described how he broke the news to the colonel's family:
"'Col McCullough killed in battle—buried by the enemy, flag of truce gone for the body'
The first shock of this terrible news over, the question was how I should bear the news to this already afflicted family I did not know in just what condition Mrs M was; so taking the advice only of my wife, I concluded first to see Mrs Orme [McCullough's elder daughter].
The announcement of course affected her considerably, but, it was solid grief for her 'poor Father'.... She soon composed herself, for I told [her] if she were the only one I could leave her to grieve, but her Mother & Fanny were so weak, I had to come to [her] for help to strengthen them. It was but very few moments until she rallied from it & at her request I went with her to Mrs McCullough's
We found Mrs M. sitting up apparently quite comfortable. Mrs. Orme had advised telling the both of them together so Fanny was sent for and I began what to me I would rather have avoided than a battle. I told that I was the messenger of evil that would shock & wound, that they were both unable to bear it, that before I told them they must bear themselves up & summon all their fortitude, that I was afraid of its consequences Mrs. M. sat quietly & told me to go She could bear it.... She seemed like one taught in the school of affliction. Fanny manifested impatience to hear it quick but her face looked as though she would dodge the blow There [was] a shrinking and fear in it.
....Is it Father said Fanny? It is, it is I replied I was sorry to say that was true Mrs McC. as she sat in her chair quietly dropped her head in her hand & wept Fanny dropt her head on her Mother's shoulder A moment of silence when Fanny began to show signs of nervous excitement She rung her hands crying Father's dead! Father's dead! poor Father! Is it so? Why don't you tell me, why don't you tell me. The anxiety of all for her, knowing her nervous condition led us all to forget everything else The doctor was sent for She became gradually more quiet & soon sat in her chair composed A few people came & she shortly went to her room—locked herself in A lady friend stood at the door & finally got in"
Friends fear for Fanny McCullough
Fanny's behavior continued to worry friends even as her mother showed what seemed to be resignation toward her husband's death. Swett's wife Laura wrote to David Davis, now serving on the U. S. Supreme Court, that Fanny appeared "afflicted—crushed, and I fear, broken-hearted.... She has neither ate or slept since the tidings of her father’s death, but shuts herself in her room, in solitude, where she passes her time in pacing the floor in violent grief, or sitting in lethargic silence."
Davis was himself heartbroken at the death of his old friend, and especially concerned about Fanny. To Laura Swett he commented on Fanny's suffering in words that said much about his own:
"Would that I was in Bloomington. I could do much to sooth, my poor friend, Fanny McCullough
I love her as I would a child, & believe, that if I was at home, that I could do a great deal to lift her out of her great grief- She has had trials & griefs such as few girls of her age ever had.
She is a guileless, truthful, warm hearted, noble girl. The good hearted people of Bloomington shd not let her sink under this affliction
Her father was my devoted friend, for many long years- In his friendships he was as true as the needle to the pole- Where he loved he gave his whole heart.- He had his faults- who of us does not. Let them be buried with him in the silence of the grave-
One by one, my old friends drop off- A feeling of intense sadness has been on me all weeks- Poor Fanny, loved her father, with all his faults, as devotedly as ever child loved a parent-
She should not be suffered to grieve over much- I know that Mr Swett & yourself will do all that is in your power, to comfort her-
I know exactly how she feels, and how dark the world is to her"
".... a very good effect in soothing her troubled mind"
It may be that David Davis informed President Lincoln of William McCullough's death. He wrote on December 16 that "Mr Lincoln had a warm attachment to McCullough & feels his loss keenly." Probably prompted by Davis himself, the president promised that "He will write to Fanny."
Business intruded, however. The president was dealing with the fallout of the Federal defeat on December 13 in the disastrous battle at Fredericksburg, Virginia. Conflict between the secretary of state and the secretary of the treasury threatened to tear apart Lincoln's cabinet. Davis told his wife that he continued to write to Fanny "frequently" and that "I will see Mr Lincoln again, & prompt him to write her- He promised the other day that he would- The cares of this Government are very heavy on him now, & unless prompted, the matter may pass out of his mind."
Lincoln composed his letter on December 23. Five days later David Davis happily informed Laura Swett in Bloomington that "Mr Lincoln has written to Fanny." On the evening of January 1, 1863, the president's letter was delivered to the young woman. William Orme wrote to Justice Davis in Washington that "Fanny is still in much distress of mind- But your letter to her was so full of good kind love for her that it did much to relieve her. Last night she received a letter from Mr Lincoln which was beautifully written and had a very good effect in soothing her troubled mind.”
Interested in learning more?
An image of the letter of President Lincoln to Fanny McCullough can be found at http://myloc.gov/Exhibitions/lincoln/presidency/CommanderInChief/BattlingIncompetence/ExhibitObjects/ExperienceEnoughtoKnow.aspx. Much correspondence among William McCullough's friends discussing his death and Fanny's reaction to it is in the David Davis Family Papers at the Abraham Lincoln Presidential Library, and in the William W. Orme Papers located in the Illinois History Collection at the library of the University of Illinois-Urbana.
The 92nd Illinois riles Kentuckians
In November 1862 the 92nd Illinois Infantry, which had become known as a "slave-stealing" regiment for its refusal to return fleeing slaves to their owners, marched through Kentucky towns with weapons loaded to overawe threatening pro-slavery mobs. The controversy mirrored the evolving struggle in the North over how the war should affect African American slavery.
Time and change
President Abraham Lincoln declared with the opening of war that his government's purpose was to preserve unbroken the nation established by the Founders, not to end or modify the institution of slavery in the rebellious states. Under this policy many African Americans who escaped slavery and sought freedom within U.S. Army lines were returned to their masters.
As the war stretched from weeks to months, many in the loyal states began to rethink the place of slavery in the struggle. This was the result in part of dealing with circumstances and issues unseen in the war's first days, but also of changing attitudes toward stubborn rebels who refused to lay down their arms.
The military complicated matters when Major General Benjamin Butler used the Confederate concept of the enslaved being a form of property to declare escaped African Americans who entered his Federal lines to be contraband of war, subject like other forms of property to seizure by the government. Many other officers began to use this legalism to employ escapees as army laborers.
Congress also played a role, passing laws regarding confiscation of rebel property. In March 1862 a new article of war—a portion of the code that governed the armed forces—was adopted. It declared that all military officers "are prohibited from employing any of the forces under their respective commands for the purpose of returning fugitives from service or labor, who may have escaped. ..." Officers found guilty by a court-martial of violating the article were to be dismissed from the service.
President Lincoln took part in the process by signing the Congressional legislation, though sometimes with misgivings. In the fall of 1861 he drew the line at supporting a general emancipation, forcing the revocation of Major General John C. Frémont's proclamation of emancipation in Missouri. The move, Lincoln wrote, was among other things "not within the range of military law, or government necessity." Twelve months later the president thought differently. On September 22, 1862, he issued a preliminary proclamation of emancipation that warned those living in areas under Confederate control that if they did not lay down their arms by January 1, 1863, their slaves would be declared free.
The 92nd Illinois enters the fray
Into this state of affairs marched the 92nd Illinois Infantry. Raised in the northern Illinois counties of Stephenson, Ogle, and Carroll, the regiment was commanded by Republican attorney Smith D. Atkins of Freeport. The men learned of the preliminary Emancipation Proclamation while being organized in Rockford in September 1862. According to the regimental historian, "little knots were gathered through the camp discussing it. The general verdict was approved. Indeed, many hoped that the war would not end before the hundred days [allowed by the Proclamation to the rebels to lay down their arms and retain their slaves] expired, and the freedom of the black man had become secure."
The newly organized regiment marched into Kentucky in October 1862. The slaveholding state had remained loyal, but how the Lincoln government dealt with slavery always threatened to upset the relationship. When African Americans flocked to the 92nd's camp seeking refuge, Atkins sent them away. But when ordered by his superior officer to return to the owner a slave who had actually come within the regiment's lines, Atkins pondered the situation. He studied the War Department's General Orders 139. The fugitive was not surrendered to his owner, but put out of the lines and pointed toward the Ohio River and the free states beyond. A meeting of the officers of the 92nd determined that future orders to surrender fleeing slaves "should not be obeyed."
On November 2 Colonel Atkins issued a proclamation assuming command of the Mount Sterling area. In it he warned that "no part of my command will in any way be used for the purpose of returning fugitive slaves. It is not necessary for Illinois soldiers to become slave-hounds to demonstrate their loyalty..." He justified his stand to a friend in Illinois, writing that "under the President's proclamation of Sept 22d 62. I cannot conscientiously force my boys to become the slavehounds of Kentuckians & I am determined I will not." Hoping to make the matter moot, he issued orders to keep all civilian personnel, white and black, outside the regiment's lines.
Over the next weeks scores of African Americans sought refuge within the lines of the 92nd. Colonel Atkins responded by turning the refugees out of his lines but refusing to return them to their masters or to local law enforcement officials. About fifteen blacks remained with the regiment, working for Union officers as servants.
Marching through Winchester and Lexington
On November 16 the regiment marched toward Winchester. Atkins was warned that a mob in the town planned to remove the black servants and restore them to their owners. On reaching the brow of a hill overlooking the town it was seen that hundreds, including the 14th Kentucky Infantry, were milling in the streets awaiting the Illinois regiment. Atkins ordered the regiment to load their muskets and fix their bayonets. He then declared "we are threatened with difficulty in passing through this town. I hope there will not be any. Listen to my orders. You will march in silence.... If a gun is fired at you; if a brickbat, or club, or stone be thrown at you,—do not await orders, but resent it at once with steel and bayonet.... You must not fire first; but if fired upon, kill every human being in the town and burn every building." A member of the regiment remembered later that "a shout from the regiment that shook the houses, told that the men understood the orders...." As the regiment entered the town Atkins was met by the local sheriff and served with "a hat full of documents" demanding the surrender of stolen property in the form of escaped slaves. Atkins received the papers, and the regiment proceeded through town without further incident.
Days later the 92nd marched to Lexington, the hometown of Henry Clay and Mary Lincoln. A section of the regiment's last company was surrounded and cut off from the rest of the unit by sheriffs, special deputies, and members of the 14th Kentucky Infantry who attempted to pull a black servant from the ranks. The sergeant in command ordered his men to block the attempt with their bayonets. A Kentucky officer asked the sergeant if he "intended to defend the ------ n…..r." The sergeant affirmed he did, to which the Kentucky officer replied, "I have come for him, and will have him or die." The full regiment soon marched back into town and Atkins gave an ultimatum, allowing the mob three minutes to disperse. If they failed to do so, he warned, "these streets will run with blood." The mob broke up without incident. The following day the local sheriff arrived to serve more papers in additional cases accusing Atkins of theft of property.
The end of the story
In December 1862 Colonel Atkins reported to the Secretary of War on his recent legal problems. He informed the War Department that
"[t]he Grand Jury of Montgomery County have found several indictments against me for stealing negroes—Grand Larceny. These negroes came into my camp, said they belonged to the enemies of our country, claimed protection, were employed ... as servants; Col. J. C. Cochrain ... ordered me to deliver them up- I declined to do so- and find myself proceeded against as a 'criminal' in these Kentucky Courts that a few weeks ago could not hold a sitting were it not for the protection of the bayonets the union soldiers wield. I have acted in good faith throughout, trying only to do my duty.... and I will, of course, expect the government I serve to protect and shield me from these oppressive suits."
At the same time Atkins also wrote to abolitionist leader and congressman Owen Lovejoy of Princeton to ask for legal advice. Atkins feared that because he could not "stop from the pursuit of rebels" to attend civil courts, he would lose the court cases by default and see his "property in Illinois go to sale to pay for the freedom of these men that the law has made free, and the Presidents order and article of war forbid my sending back to bondage." If that happened, he warned, "we ourselves.... become the victims, and that for only obeying the Articles of war, the Orders of the President, and our consciences as men."
Atkins did not become a victim. The whole Kentucky incident, which had threatened eruptions of musketry, ended with a whimper of court summonses that were coolly ignored.
Interested in learning more?
The regimental history Ninety-Second Illinois Volunteers (1875) is unusual in its declaration that "African slavery was the real cause of the war." Pages 33 to 56 discuss the 1862 Kentucky affair, printing many of the official documents. The book is online at http://www.archive.org/details/ninetysecondilli00illi . Colonel Atkins's copies of documents are in the regiment's papers in "Administrative Files on Civil War Companies and Regiments," RS301.018, in the Illinois State Archives. Other documents regarding the incident are found in Ira Berlin et al., eds., Freedom: A Documentary History of Emancipation, 1861-1867, series 1, volume 1, pp. 528-38. An important study of the hardening Federal attitude toward rebels is Mark Grimsley, The Hard Hand of War (1997). The twisting road toward emancipation and Lincoln's role in it are laid out in many works, including David Herbert Donald, Lincoln (1995) and Allen C. Guelzo, Lincoln's Emancipation Proclamation: The End of Slavery in America (2004).
The popular literature of the Civil War is filled with inspiring accounts of courage and sacrifice. Countless such acts, even if exaggerated, took place. Every great event, however, contains many stories that create bumps in the overall smooth narrative. One is the story of desertion during the American Civil War. Military officials estimated that about 200,000 men deserted the U. S. Army during the conflict. Illinois contributed to this number, though not in proportion to the number of Illinoisans who joined the military or were subject to a call for service.
What and why?
What exactly qualified as desertion? The most basic definition was any form of being absent from one's unit without official leave. Army officials admitted that many men were unjustly listed as deserters due to poor record keeping and genuine inadvertence, especially in the confusion of battle and its aftermath. The definition also included men who had been called up in a draft but never appeared for examination and potential induction, or disappeared after having been examined and certified for service.
What led a man to take the risks involved in intentionally leaving his regiment? After all, the Articles of War governing the U.S. Army specified death as a legitimate punishment for the crime.
Army officials believed that ignorance of military law and the fact that most new volunteer soldiers "had always freely acted according to their own ideas and wishes, restrained by no other legal requirements than those of the civil law governing a free people" caused many excusable problems, especially in the early months of the war.
Many men were moved to desertion by letters from family members telling of suffering at home in the absence of the chief breadwinner. Many (but by no means all) counties provided for financial assistance to soldiers' families, but the support levels were generally low.
Others were thought to have left because they believed they could get away with it. In many areas opponents of the Lincoln government's war policies helped to shield deserters from detection, sometimes allowing them to live openly at their homes.
Deserters from Illinois
Desertions from Illinois regiments reached their wartime high in October 1862. From the war's earliest days the number of Illinois troops deserting in any month was generally a trickle of under 100. In July 1862 the rate climbed rapidly, to more than 300, continuing to grow until it reached 823 in October. It then began to fall, though slowly, until July 1863 when it fell to a fairly consistent rate near 100 per month. A single unit that certainly skewed the statistics was the 128th Illinois Infantry. In April 1863, after less than five months in service, the regiment’s roster had been reduced from 860 to 161, mostly by desertions. The War Department broke up the unit, bouncing the sadly incompetent officers and sending the remaining enlisted men to the 9th Illinois Infantry, where most served well.
In April 1863 the army created a new system of assistant provost marshals general operating in each state. One of this officer's tasks was the apprehension of deserters. James Oakes, the assistant provost marshal general assigned to Illinois, reported in August 1865 that he had arrested and sent back to their units 5,805 Illinois deserters.
As was the case in other states, in some parts of Illinois military deserters could count on aid from neighbors. On January 28, 1863, Illinois Governor Yates telegraphed the War Department that civilians in several counties covered for deserters and that a mob in Paris (Edgar County) had forcibly rescued a deserter from military arrest. Fulton County was especially troublesome. During the summer of 1863 draft enrollment there was disrupted and several army deserters camped in the area, counting on local sympathy to protect them. In August 1863 Colonel Oakes informed newspapers of new army orders that deserters who surrendered themselves would be treated leniently, suffering only the loss of pay for the time they were absent.
The punishment of desertion by execution was seldom carried out. While more than 200,000 Union soldiers were reported as deserters, only about 200 paid with their lives. Those from Illinois and their execution dates were:
Valentine Benjamin, 44th Illinois Infantry, Nov. 13, 1863
Erastus Daily, 88th Illinois Infantry, Nov. 13, 1863
David Geer, 28th Illinois Infantry, March 4, 1865
Henry McLean, 2nd Illinois Light Artillery, Aug. 25, 1865
William Wilson, 12th Illinois Cavalry, July 28, 1865
Charles Conzet, 123rd Illinois Infantry—deserter
An Illinois case that sheds light on an important motivation for desertion was that of Charles Conzet, who lived near Greenup (Cumberland County). On January 9, 1863, Conzet deserted his post as a second lieutenant in Company B, 123rd Illinois Infantry, as the unit passed through Nashville, Tennessee. Weeks later he was taken into custody at his Illinois home and soon returned to Tennessee for trial.
At Conzet's request Major James A. Connolly of the 123rd represented him at the court martial. After his appearance Connolly wrote of the affair:
"One of our young lieutenants deserted when our regiment came through Nashville, and he was arrested at his home in Illinois and brought back here in irons. He was tried on a charge of 'desertion in the face of the enemy,' before a general court martial at Murfreesboro, and he sent to me to defend him. I went, but I knew he was guilty and I wanted to see him punished, yet at the same time I was very sorry for him. He had been married very shortly before entering the service and he left his wife but very little money, expecting to receive pay form the government two months. In this he was disappointed like all the rest of us. His wife kept writing to him that she was out of money and could scarcely procure the necessaries of life, and finally she wrote him that she had become a mother. The poor fellow could stand it no longer, he didn't know how to make out a leave of absence, and he determined to go home and make some provision for his wife and infant child, risking all consequences. This is his story and there is nothing in evidence to contradict it. I don't yet know what the finding and sentence of the court is but I presume they found him guilty, and probably sentenced him to be shot, but I am sure President Lincoln will never let him be shot. It’s a hard case, but he had no business to have a wife—and baby to think about."
The court found Conzet guilty of desertion and sentenced him "to be stripped of his badges of office, and shot until he is dead, with musketry." His division commander and the army commander, William S. Rosecrans, approved the sentence. The regiment's field officers and company captains petitioned Secretary of War Edwin M. Stanton for a commutation. They wrote that Conzet was "induced to abandon his post by letters from his wife begging him to come home and relieve her from her destitute condition, representing to him that the community in which she lived was opposed to the war, and would do nothing to relieve her necessities because her husband was in the Army." They also noted that the regiment had not been paid for over five months and requested that the punishment be reduced to "reduction to the ranks with forfeiture of all pay and allowance." Conzet himself wrote of his hope that he would "be allowed to return to his company so that he may yet prove himself to be a man." Abraham Lincoln saved Conzet's life but did not allow for the redemption of his honor. On September 24, 1864, the president ordered, "Let the prisoner be ordered from confinement and dishonorably dismissed [from] the service of the United States."
Interested in learning more?
A marvelous, detailed study of Illinois military desertion during the Civil War is Bob Sterling, “Discouragement, Weariness, and War Politics: Desertions from Illinois Regiments during the Civil War,” Illinois Historical Journal 82 (Winter 1989), pp. 239-62, online at http://dig.lib.niu.edu/ISHS/ishs-1989winter/ishs-1989winter239.pdf .
Volunteer soldiers had barely gathered in training camps and organized into units in the summer of 1862 (see August 2012 feature) when Illinois Governor Richard Yates began to receive panicked messages calling for the new regiments to be sent forward to protect Kentucky against rebel invasion. Confederate armies were on the move, hoping to take back territory lost earlier in the year.
The Confederate government and its armed forces were determined to regain ground lost during the spring and summer of 1862. The capture in February by federal armies of Fort Henry and Fort Donelson in Tennessee, Grant's hard-won victory in April at Shiloh, and the May capture of Corinth in northern Mississippi had driven most rebel forces from Kentucky and Tennessee. Union forces also threatened to capture Chattanooga, which could then be used as a base for operations against the Deep South.
Confederates hoped to retake much of Tennessee and Kentucky by strikes to the north, aiming for the Ohio River and important cities—Louisville, Kentucky and Cincinnati, Ohio—that lay on its banks. Major General Edmund Kirby Smith, commanding Confederate forces at Knoxville, Tennessee, would first attempt to clear the Cumberland Gap of federal troops. He would then link with the army of General Braxton Bragg to drive the federals from middle Tennessee. Smith moved in mid-August, soon forcing Union troops in the region to retreat to the Ohio River. On September 1 he captured Lexington, in Kentucky's Bluegrass Region.
Bragg moved his army north, marching from Chattanooga on August 28. He headed for the Bluegrass, intending to link with Smith but also hoping to gain recruits and supplies for his army. Several small clashes with federal troops took place as Bragg's federal opponent, Major General Don C. Buell, retreated northward toward Louisville, carefully keeping his army between the Confederates and the Ohio River.
Illinois responds to the growing fear
As the Confederate plan unfolded, federal commanders showed great concern. On August 24, Major General Horatio G. Wright, commanding the Department of the Ohio—which included the states of Ohio, Indiana, Illinois, Wisconsin, Michigan, and central and eastern Kentucky—wrote Illinois Governor Richard Yates to "Send your troops here [Louisville, Kentucky] as rapidly as possible." Wright needed to counter the move of Smith's Confederates, said to be "in large force" near Richmond, Kentucky. Yates responded the same day, "I am doing everything in my power to send troops." About 50,000 Illinoisans had been recruited, but a lack of mustering officers prevented units from being organized and sent to the front. Yates reported to Wright that he hoped to be able to send from sixteen to twenty-five infantry regiments during the next two weeks.
Even as he was being asked to send soldiers to Kentucky, Yates made efforts to secure Illinois itself from the possibility of invasion by Confederates or rebellion by disloyal citizens. On August 25 it was announced that "In order to protect the State from raids without and rebel sympathizers within," camps would be established at Quincy, Jonesboro, Chester, and Shawneetown. "It will be seen by these dispositions of troops the Governor is protecting the flanks of the State from guerilla raids, and also looking to this possible contingency of secession sympathizers at home. The Governor originally urged this disposition of troops, but other officers did not think them necessary."
On August 30 General Wright again telegraphed Governor Yates from Cincinnati asking, "When and how fast can regiments and batteries be forwarded from your State? Let them all go to Louisville as fast as transportation to that point is available. Buell seems to be in a tight place, and a force for his relief must be collected... The troops must do the best they can without tents till supply can be obtained." That was followed three days later by yet another telegram, this one declaring, "It is of last importance that fifteen regiments be sent from Illinois at once. Seize all transportation and send them forward as fast as possible." The governor responded by sending "telegraphic imperative orders to all parts of the State where troops are located" for newly organized regiments to move for Kentucky.
Governor Yates offers advice
On September 15, as Confederate armies continued their advance in the East and the West, Governor Yates issued a proclamation to the people of Illinois, criticizing the Lincoln administration for its conduct of the war and offering suggestions for change. Firstly, the government was too soft in dealing with traitors. "By reason of the conciliatory policy of the General Government, disagreements among our Generals, or some other cause, the rebels have succeeded not only in regaining the ground they had lost in front of Washington, but have also undertaken the daring project of invading the loyal states..." He called on the federal government to create a reserve force of one million armed and equipped men, to be ready to move at any time to any place of danger. All the stops should be pulled out. "We should ever be ready to march upon the enemy with an overwhelming force. We should make the very earth tremble beneath the feet of our well trained and invincible battalions."
In closing, Yates wrote that there might be problems aplenty, but "I shall never, so help me God, harbor any other idea than that of an unshaken faith in the re-establishment of an unbroken and perpetual union of all the States. I do not know the councils of Almighty God; but at least I may be permitted to believe, that it is not in the plans of Providence to permit the destruction of the noble fabric of government—the temple of civil and religious freedom, the beacon of light to oppressed nations, the hope of humanity now and forever."
A turning point?
Days after Yates issued his message to the people of Illinois, one portion of the Confederate advance began to recede, when the Army of the Potomac won a victory against the Army of Northern Virginia at the battle of Antietam near Sharpsburg, Maryland. Losses on both sides were high, but the rebels retreated across the Potomac River, ending Robert E. Lee's invasion of the loyal states.
The situation in Kentucky, however, remained grim for the friends of the Union. Bragg and Smith remained on the move, sparking panic in Ohio River cities such as Louisville and Cincinnati. General Wright continued his frantic calls on northern governors to hurry troops to the field "as fast as they can be mustered and armed. The rebels are passing rapidly northward and must be met with larger forces than we yet have. Every day is of importance." On September 23 he again begged Yates to hasten new Illinois regiments to Kentucky, as Louisville "is seriously threatened by Bragg." Ten days later he telegraphed, "It is of the utmost importance that the new regiments be got ready... Delay is ruinous. One regiment now is worth more than many would be a few weeks hence." Yates and other state officials worked frantically to move the men south.
Days later, on October 8, the armies of Bragg and Buell stumbled into battle at Perryville, Kentucky. Several of the new Illinois infantry regiments took part in the action, often fighting beside more experienced troops. The 123rd Illinois Infantry was raw to the point of never having taken part in a battalion drill. Most of the new units acquitted themselves well in their first combat. At Perryville Buell won a victory, of sorts. Confederate forces retreated to the southeast, and the invasion of Kentucky came to a close.
Interested in learning more?
A recent account of the Kentucky campaign of late 1862 is Earl J. Hess, Banners to the Breeze: The Kentucky Campaign, Corinth, and Stones River. Much of the correspondence between General Wright and state officials in Springfield is found in Official Records of the Union and Confederate Armies..., series 1, volume 16, part 2.
"We are coming Father Abraham"
On July 1, 1862, President Lincoln issued a call for 300,000 new volunteers to join the armies of the Union for a term of three years. That act would inspire one of the great patriotic songs of the war, "We are Coming Father Abraham," and bring about the largest single influx of Illinoisans into the military during the Civil War.
Lincoln's call came in reply to a carefully written letter by Secretary of State William H. Seward that was signed by the governors of eighteen northern and border states, who believed
that in view of the present state of the important military movements now in progress and the reduced condition of our effective forces in the field... the time has arrived for prompt and vigorous measures to be adopted ... [W]e respectfully request ...that you at once call upon the several States for such number of men as may be required to fill up all military organizations now in the field, and add to the armies ...such additional number of men as may in your judgment be necessary to garrison and hold all of the numerous cities and military positions that have been captured by our armies, and to speedily crush the rebellion that still exists in several of the Southern States, thus practically restoring to the civilized world our great and good Government. All believe that the decisive moment is near at hand...
Governor Richard Yates in turn soon issued a proclamation calling for Illinoisans to mobilize, noting the recent Confederate successes in defending their capital at Richmond, Virginia.
Your all is at stake. The crisis is such that every man must feel that the success of the cause depend upon himself, and not upon his neighbor. Whatever his position, his wealth, his rank or condition, he must be ready to devote all to the service of the country. Let all, old and young, contribute, work, speak and in every possible mode further the work of the speedy enrollment of our forces. Let not only every man, but every woman be a soldier, if not to fight, yet to cheer and encourage and to provide comfort and relief for the sick and the wounded.
Yates called for a rapid response by the public, repeating President Lincoln's comment to him that "Time is everything. Please act in view of this."
Meetings to encourage enlistments were held in towns and villages throughout the state. At a July 22 event at the capitol at Springfield, cannon were fired from the statehouse yard, and the building's rotunda was so crowded that the gathering moved outdoors. Thousands of men enlisted, in spite of the hardship created by removing workers from an already reduced farm labor force at an important time in the growing season.
The results of such meetings were very encouraging, and more men than were needed were expected to volunteer. State officials telegraphed to the president that "Many counties tender a regiment. Can we say that all will be accepted?" Lincoln forwarded the message to the War Department, saying, "I think we better take while we can get." He desperately wanted troops, and in early August ordered another call for even more men.
The War Department's quotas could be a problem, however. Adjutant General Fuller wrote the War Department that he believed 50,000 volunteers could be put into camps by September 1 "if we can accept them," but "if they are disappointed and refused permission now to enlist... the reaction in a few days will be terrible."
The outpouring of volunteers made "Adjutant Fuller... the busiest man in the State of Illinois" and overwhelmed the ability of federal officials to provide uniforms, weapons, and even camp equipage for the new enlistees. As a result Fuller on August 14 announced that companies and squads raised in each county should meet at or near their county seat until called to one of the official camps of rendezvous.
In spite of Lincoln's plea that "Time is everything," problems of supply hampered efforts to prepare new regiments for service. On August 19 Governor Yates reported to the War Department that within days an estimated 50,000 Illinoisans would be enrolled but, because of a lack of mustering officers, only four regiments had actually been taken into the service. He declared that with proper assistance from federal officials fifty regiments could be organized by September 1. "Our State is much neglected in the failure of the Government to supply our troops with arms, tents, and camp utensils. Thousands are sleeping on the naked earth without any covering."
In early September officials noted that of twenty-three regiments taken into service only thirteen had been issued weapons. A month later, three months after Lincoln's initial call, fifteen regiments were still in the process of organization. It was late November before the final units raised under the calls of July and August were officially mustered into the service. Illinois had by that date sent into the field more than 53,800 new volunteers.
Interested in learning more?
"We are Coming Father Abraham," published in Chicago.
A rendition can be heard at http://www.youtube.com/watch?v=vaNKL_dfyi4 .
Full lyrics can be found at http://www.civilwarpoetry.org/union/songs/coming.html .
Cheatham Hill, Georgia: The Illinois Civil War Soldiers and their Monument
It is an easy 200-yard walk from the parking lot at the Cheatham Hill section of the Kennesaw Mountain National Battlefield to the Illinois Monument. Ironically, the stroll meanders through a placid oak-hickory forest that is so like the woodlands of Illinois that one could imagine the setting is in the Land of Lincoln instead of fifteen miles northwest of downtown Atlanta, Georgia. Similarly, one could easily misconstrue the low ridge beside the path as a soft, natural rise in the landscape. But a sign instructing visitors to stay on the trail sets the record straight about the ridge: "CONFEDERATE EARTHWORKS," the plain wooden message declares in all capital letters, "PLEASE KEEP OFF." At a preserved Civil War battlefield, pains are taken to preserve everything.
Visitors walking this path today traverse the site where fighting in late June of 1864 claimed the lives of nearly four hundred men from a brigade largely composed of Illinois soldiers. When the survivors of the fierce battle were much older, they sought to commemorate their fallen comrades. Thanks to their efforts a sixty-acre parcel was purchased; it was the first step of a historic preservation movement that resulted in the protection of nearly three thousand acres of American history. Today, Kennesaw Mountain is one of the nation’s most visited Civil War battlefields, hosting over one and a half million guests annually.
Along the path, which runs mostly parallel to those Confederate earthworks, there are several interpretive panels describing the awful carnage of that early summer nearly 150 years ago. It is easy to be taken aback by one particular sign's reminder that nature bears the scars of battle long after the humans who fought have gone. This graphic says that a dead oak removed from that spot in 1980 was found to have a number of Civil War bullets embedded in its trunk; several still-living trees are deformed at their canopies due to scars from the firefight.
Presently, the visitor arrives at the focal point of the trail, the Illinois Monument itself. An article in the Atlanta Constitution in 1922 opines that "(it) is one of the finest memorials in the country," high praise indeed coming from a southern paper whose readership included a few people old enough to remember the conflict and perhaps still disturbed by the actions of the Yankee soldiers. The spirit of the Monument represents Illinois, but its composition is strong, beautiful marble from Georgia. Rising twenty-six feet high and covering eighteen square feet at its base, the Monument is no larger than some similar memorials in cities, but standing here as the only manmade structure in a woodland clearing, it commands a presence it would not have in an urban town square.
The layout of the trail mandates an approach to the Monument from its rear face, which bears a long inscription. One might stop here to read this text, but the brightness of the open field ahead tends to exert a subtle forward pull, leading the visitor to first encircle the Monument and look at it from the front, where it rises above two flights of a dozen steps each. Here, with a grassy field sloping gently downhill at his back, the viewer's eyes are drawn to the main feature of the work, a grouping of three prominent bronze figures. Standing seven feet tall, the central figure is an austere soldier in the parade rest position, his rifle at his right hand. On either side he is flanked by a six-foot-tall female figure. The woman to the soldier's left represents Illinois; she holds the state's coat of arms snugly with both hands. To the soldier's right, the woman has a more universal representation--she symbolizes peace. Above the figures, framed by a wreath, the word "Illinois" appears in bold relief on the marble. Soaring over it all are two eagles: one contained in the medallion bearing another Illinois coat of arms, the other, crowning the Monument, a magnificent bird with wings outstretched.
What exactly happened here at Cheatham Hill? The story is told by that inscription on the back of the Monument and by several signboards posted nearby.
Named after Major General B.F. Cheatham of the Confederacy's Army of Tennessee, Cheatham Hill was the site of one of the battles waged during the Atlanta Campaign. In early May 1864, nearly one hundred thousand Union troops under the command of General William T. Sherman set off from Chattanooga, Tennessee, intent on taking the city of Atlanta, about 120 miles to the south. Along the way, Union troops encountered determined resistance, but they overcame it. By June 27th the Northern forces had reached the ground upon which the Illinois Monument now stands.
The fighting that spring of 1864 was widespread—indeed, the Kennesaw Mountain Battlefield Park includes several sites of conflict—but the inscription on the Monument concentrates on the clash at this particular spot. In part, it reads:
"On this field the men of Col. Dan McCook's 3rd brigade, 2nd Div. 14th Army Corps assaulted the Confederate works on the 27th day of June, 1864, losing four hundred and eighty killed and wounded, including two commanders… (The) brigade reached Confederate works and at less than one hundred feet from them maintained a line for six days and nights without relief, at the end of which time the Confederates evacuated."
McCook's brigade included five volunteer infantry regiments: the 85th, 86th, and 125th Illinois; the 22nd Indiana; and the 52nd Ohio, plus Battery I of the 2nd Illinois Light Artillery. So composed, the majority of the troops in the brigade were from Illinois; Colonel McCook, however, was from Steubenville, Ohio.
A daunting and hazardous task faced McCook's brigade: attacking Cheatham's Tennessee troops at a protruding point of their earthworks that both sides ominously nicknamed "The Dead Angle." While the Monument's inscription stresses that the Third Brigade held the line for six days, it does not mention that the real horror of war was manifested in a twenty-minute charge on the 27th; nearly all of the soldiers in the brigade who lost their lives were killed in that short span. Particularly hard hit was the 125th Illinois, with fifty-four killed, sixty-three wounded, and seven missing in action.
Following the initial carnage, the Northern and Southern troops hunkered down and faced each other separated by just one hundred feet. In pursuit of a plan to approach the enemy underground to plant explosives, the Union soldiers began digging a tunnel towards Cheatham's lines. Laudable as the effort of the McCook brigade to hold the line may have been, it was actually Union troops moving in from elsewhere that led the Confederate forces to finally evacuate the earthworks, which, in turn, caused the Third Brigade to abandon their unfinished tunnel.
Colonel McCook—affectionately known as "Colonel Dan" to those he commanded—was a well-educated man; prior to the war he was a law partner with General Sherman. And he knew his poetry. To inspire his troops, he recited from Horatius, a poem about a hero of classical Rome written by the nineteenth-century Englishman Thomas Macaulay. Stanza 27 of this long poem contains the lines McCook hoped his men would take to heart:
"Then out spake brave Horatius,
The Captain of the gate:
'To every man upon this earth
Death cometh soon or late.
And how can man die better
Than facing fearful odds,
For the ashes of his fathers,
And the temples of his Gods,'"
McCook himself was among the casualties of the battle, suffering a mortal wound, but living long enough to be carried back home to Steubenville to die.
The Union pressed ahead and took Atlanta that summer. In the spring of the following year, 1865, the Civil War ended.
Three and a half decades passed. Then in 1899, survivors of the Third Brigade's Illinois regiments bought the sixty-acre Cheatham Hill parcel. As these gentlemen aged, perhaps it tugged at them that they had a solemn responsibility to preserve the site where many of their comrades fell in combat. In any case, the following year, the Grand Army of the Republic, a fraternal organization of Union veterans, held its national encampment in Chicago. Illinois veterans in attendance formed the Colonel Daniel McCook Brigade Association for the express purpose of raising funds to build a monument. When they were unsuccessful in securing sufficient donations, they turned to the General Assembly of the State of Illinois for help, and in 1911 the legislators approved the expenditure of $20,000 for the project. The resolution authorizing these funds noted that the Third Brigade was "largely composed of Illinois troops, conspicuous for their courage and gallantry" and declared that "It is a patriotic duty for people of this State to keep in perpetual remembrance the heroism of our fallen soldiers."
Propelled by that government assistance, a monument was commissioned. Illinois' own state architect, James B. Dibelka, was the designer; sculptor J. Mario Korbel created the striking bronze figures. Interestingly, both Dibelka and Korbel were Czech immigrants, a reminder of the great impact of immigration to America following the Civil War.
On June 27th, 1914—the fiftieth anniversary of the start of the Battle of Cheatham Hill—the Illinois Monument was unveiled. One week earlier, an article appeared in the Chicago Daily Tribune describing the upcoming event; it remarked that a special train had been arranged to carry Governor Edward Dunne and other Illinois dignitaries to Georgia for the dedication. Among these VIPs were the men who formed the monument commission, those fortunate few who survived the battle and were still alive that half century later: W.A. Payton of Danville, Captain L.J. Dawdy of Peoria, J.B. Shawgo of Quincy, and H.F. Reason of Mason City. The names of all these veterans are inscribed on the Monument except, inexplicably, for that of Shawgo. No further information about the service of Shawgo or of Reason appears in the Tribune piece, but it does report that Payton served as a drummer boy and that "Capt. Dawdy was wounded while so close to the Confederate breastworks that he fell across the trenches, where he was seized and made a prisoner." The Tribune also mentions the touching way the ceremony stressed the importance of the Civil War to those for whom the conflict was part of a distant, unfamiliar past—Payton's eleven-year-old granddaughter Sarah Fadely would do the unveiling.
In 1935, more than twenty years after dedication of the Illinois Monument, Kennesaw Mountain National Battlefield Park was established by Congress; the designated land included the original Cheatham Hill purchase. Subsequent acquisitions led to the 2,923-acre historical park that exists today, but the work of preservation began when the Illinois veterans sought to honor their lost comrades.
Kennesaw Mountain is certainly not the only Civil War battlefield with a memorial for Illinois troops. There are monuments commemorating the service and valor of Illinois soldiers at Gettysburg, Shiloh, Chattanooga, Vicksburg, and elsewhere. They vary notably in design. For example, the Illinois State Monument at Vicksburg, Mississippi, is a grand-looking temple based on the Pantheon in Rome. One could plausibly argue it appears more impressive than the subdued structure at Cheatham Hill.
But in a 1941 article considering these various Illinois memorials, former Chicago newspaperman and co-founder of the Civil War Roundtable, the late Donald Russell, noted an important distinction. At the other battlefields the monument for Illinois is one among several, as similar works dedicated to soldiers of other states stand nearby. At Cheatham Hill, however, a lovely monument to soldiers from the Land of Lincoln stands by itself, having no competition and therefore commanding full attention. This is what makes the viewing experience powerful and moving: the sight of the Illinois Monument rising alone, proudly but respectfully prominent in a landscape that is wonderfully peaceful today but was the scene of deadly combat in late June of 1864.
Interested in learning more?
Anonymous. 1914. Illinois Tribute to Dead in South. The Chicago Daily Tribune, Jun 21, p. F 19.
Bunting, Frank C. 1922. Memorial to Illinois Civil War Soldiers, Erected by Western State, Stands Near Marietta. The Atlanta Constitution, June 11, p. 7.
Harper's Ferry Interpretive Design Center and the National Park Service. 2010. Kennesaw Mountain National Battlefield Park Long-Range Interpretive Plan.
Russell, Don. 1941. Illinois monuments on Civil War battlefields. Papers in Illinois History and Transactions for the Year 1941, pp. 1-37.
About the Author
Brett Bannor is a freelance writer. Bannor was born and raised in Chicago, but as an adult lives in Georgia. His grandmother was born and raised in Georgia but as an adult lived in Chicago. Given this coming full circle in two generations, Bannor naturally had an interest in the connections between the Prairie State and the Peachtree State.
Brett Bannor, Atlanta, Georgia
Looking for Excitement: The Story of Albert D. J. Cashier
Women have served in every war this nation has fought since the Revolutionary War, when Deborah Sampson Gannett took up arms. Ironically, it is only in the 20th and 21st centuries that women have been excluded from the infantry. The number of women who disguised themselves as men to fight on the frontlines in the American Civil War was estimated at about 400. Mary Livermore, a key agent and organizer of the U. S. Sanitary Commission, believed the number to be much higher "than was dreamed of." There are references in the writings of soldiers to uniformed women found dead in the "heaps of bodies" on the battlefields, newspaper accounts of women attempting to enlist, and many documented accounts in regimental histories and official records of women soldiers discovered after injury or death. Illinois, having raised more troops than most other states, no doubt had its share of women who served their country during the conflict.
One well-documented story is that of Albert D. J. Cashier, an Irish immigrant born in Clogherhead, Ireland, on December 25, 1843. Jennie Hodgers, as he* was known at birth, left that name far behind when he came to America. Unlike many other women who disguised themselves to go to war, Cashier was living as a man prior to his enlistment. He worked as a farm hand, laborer, and shepherd. Though he often refused to speak of his past or family, later in life Cashier offered a variety of reasons, many of them conflicting, for his choice to live as a man. In one account, Cashier said he was given the name Albert by his stepfather and worked with him in a shoe factory in New York. A second story held that he joined the army to follow his lover. And, in yet another account, he told his former sergeant, Charles Ives, that he had a twin brother and that his mother dressed them both in boys' clothing. Whatever the truth may have been, from a practical standpoint, Cashier could earn a better wage and have more opportunities for work as a man. He could also own property, have a bank account, and vote. None of those rights were afforded to Jennie Hodgers, a woman.
With this background, the transition to the life of a soldier was an easy one for Cashier. He was accustomed to hard work and, as an illiterate immigrant, he was used to less-than-ideal standards of living. The medical exam given to prospective recruits at the time was cursory at best. The muster record indicates that Cashier was nineteen years old, with amber hair and blue eyes, and stood 5'3" tall. Cashier mustered into Company G of the 95th Illinois Infantry on August 6, 1862, at Belvidere, Illinois, and the regiment was sent to Camp Fuller at Rockford for training.
Cashier saw action in more than forty skirmishes and battles, including the Siege of Vicksburg and the Red River Campaign. The 95th traveled more than 9,500 miles according to its regimental history and was considered a strong fighting unit. Many of his fellow soldiers noted that while Cashier was of small size he was "able to do as much work as anyone in the company." He was noted for his bravery in battle and regarded as an exceptional soldier. During reconnaissance at Vicksburg, Cashier was captured by a Confederate outpost. He escaped by knocking the pistol from his guard's hand and outrunning his captors to return to his unit. Cashier and his fellow soldiers endured great hardships at Brice's Crossroads near Tupelo, Mississippi. His unit had marched quickly to arrive at the scene, enduring the intense heat of June, which had made some soldiers faint from exhaustion. Despite the death of their colonel, Thomas Humphrey, and the removal of Captain William Stewart from the battlefield, the unit fought on. Eventually they retreated, in chaos and exhaustion. After regrouping, the 95th endured several more difficult encounters with the Confederates before being mustered out in August 1865. Cashier chose to retain his masculine identity after the war.
Cashier proved an able soldier though not an outgoing one. In Robert D. Hannah's deposition, taken prior to Cashier's death in 1915, he noted that Cashier was "very quiet in her manner and she was not easy to get acquainted with." Despite this, his comrades came to his aid after the War on more than one occasion and for a brief period Cashier owned a plant nursery with a fellow soldier. He settled in Saunemin, Illinois, and worked various odd jobs. He later worked as a farmhand for the Chesebro family. The family built a small home for Cashier, frequently hosted him in their home, and provided a burial plot in the family's cemetery. In 1890, Cashier applied for a soldier's pension, which he received. Experiencing some difficulty of circumstance and health issues, he requested an increase in benefits in 1899. His application was supported by more than a dozen of his former comrades who signed a letter confirming his status.
Cashier continued to work and from time to time did odd jobs for Illinois State Senator Ira Lish. During one such job, the senator inadvertently ran over Cashier's leg with his car. When the doctor called in by Lish set the broken leg, he realized Cashier's nearly life-long secret. Senator Lish and the doctor agreed to keep the matter quiet, telling only the Chesebro sisters, who were enlisted to help care for Cashier. The leg did not heal properly, and by the age of 66 Cashier was an invalid. He was placed at the Quincy Soldiers and Sailors home with the assistance of Senator Lish, who made sure his identity as a woman remained a secret. The staff of the home did not reveal that the new resident was anything other than an aged male veteran.
As with many elderly people, Cashier's mental health began to decline. The superintendent of the Soldiers and Sailors home assigned Dr. Leroy Scott, a psychiatrist, to Cashier. Scott spent many years interviewing Cashier about his life and trying to establish his history. By 1913 the physicians at the Soldiers home determined that Cashier was beyond their ability to care for and began proceedings to have Cashier declared insane. In 1914, it was revealed to the public that Cashier was, in fact, a woman. The sensational story made its way to newspapers across the country. When the news reached the Pension Bureau, an investigation was immediately launched. Several of Cashier's former comrades gave depositions, and many more were interviewed. The Pension Bureau determined that Jennie Hodgers and Albert D. J. Cashier were indeed the same person and that no fraud had been perpetrated. Cashier continued to receive his pension benefits.
In March of 1914 Cashier was sent to Watertown State Hospital after the courts concluded that he was insane. Cashier's comrades visited him often. Some expressed concern and anger over his treatment at the State Hospital and by those reportedly caring for him. One letter remarked that a priest was more interested in Cashier's money than his person. Upon arrival at the hospital, Cashier was placed in the women's wing and forced to wear a dress despite his objections. This insensitivity proved to be his undoing. Being unaccustomed to long skirts and dresses, Cashier fell and broke his hip. He died shortly thereafter on October 10, 1915, and was buried in uniform with full military honors. His tombstone, at Sunnyslope Cemetery in Saunemin, Illinois, reads simply Albert D.J. Cashier, Co G, Ill Inf, 1843-1915. Local residents later added a stone honoring both Cashier and Jennie Hodgers.
Albert D. J. Cashier remains the only known female soldier to serve a complete tour of duty and to receive a military pension. He endured the horrors and hardships of war like any other soldier. As to why he chose to fight, Cashier remarked "Lots of boys enlisted under the wrong name. So did I. The country needed men, and I wanted excitement."
* Throughout the article I have used the masculine pronoun when referring to Cashier to honor his choice to live life as a man, regardless of his biological sex.
Interested in learning more?
Blanton, DeAnne, and Lauren M. Cook. They Fought Like Demons: Women Soldiers in the American Civil War. Baton Rouge: Louisiana State University Press, 2002.
Clausius, Gerhard P. "The Little Soldier of the 95th: Albert D.J. Cashier." Journal of the Illinois State Historical Society 51, no. 4 (1958).
Davis, Rodney O. "Private Albert Cashier as Regarded by His/Her Comrades." Journal of the Illinois State Historical Society 82, no. 2 (1989).
Dunn, Margaret "Peggy". With Hearts of Fire: Women in the Civil War. Springfield: University of Illinois at Springfield, 2005.
Lannon, Mary Catherine. "Albert D. J. Cashier and the Ninety-fifth Illinois Infantry (1844-1915)." Master's thesis, Illinois State University, 1969.
Tsui, Bonnie. She Went to the Field: Women Soldiers of the Civil War. Guilford, Connecticut: The Globe Pequot Press, 2003.
"Every thing reminds me of Arthur": Mourning a lost friend
Lieutenant Arthur L. Bailhache of Springfield, adjutant of the 38th Illinois Infantry, died of disease at Pilot Knob, Missouri, on January 9, 1862. His body was sent to Springfield for burial.
Bailhache's death left a void in many lives, including that of Anna Ridgely. The daughter of Springfield banker Nicholas and Jane Huntington Ridgely, Anna kept a journal into which she poured her anxieties, as well as accounts of her life's enjoyable moments. Anna's journal provides a poignant look at how one young woman in Illinois dealt with a painful wartime loss.
In late 1861 Anna was a nineteen-year-old struggling to build her Christian faith. Spiritual crises were a recurrent theme of her writing. She was often conscience-stricken and certain that she did not measure up to the standards of a true Christian. As her family members were not believers, she sought counsel from a Presbyterian minister but found him to be of little help—"He generally talks of his affairs and never mentions my own."
The opening of the war seems to have had little practical effect on Anna's life. Her position as daughter of a leading Springfield family kept her in the social whirl, "yet still feel that it has not been very profitable-it seems so selfish to be doing for our own pleasure all the time when there is so much to be done for others." It appears that church work for the city's needy served as her outlet. The war did come prominently to her mind in October 1861, with the military defeat and death of one-time Springfield resident Edward D. Baker at Ball’s Bluff, Virginia. The defeat raised difficult questions, which Anna quickly put to rest with the knowledge that "God who directs all things knows what is best for us and we must pray for resignation and trust in him."
As 1861 came to an end Anna dwelled especially on her spiritual state, worrying about what she saw as her ingratitude to God. "I have so many mercies, such continued health a perfect body and sound mind." She feared that she must rebuild the relationship "or God will send upon me a real trouble a real sorrow."
A "real sorrow" came days later when Anna learned of the death of her friend Arthur Bailhache. "I went slowly up into my own room, locked the door and sat down perfectly stunned. I could not believe it... I sat thus all evening. tears would not flow. I could not weep no only groan and moan and think of Arthur. I fell asleep and thought of him all the time and awaked often in the night saying it cannot be it cannot be."
Anna visited Arthur's remains at the family home in Springfield. The tears finally came to her "as we gazed upon that sweet face." On returning home she "staid up in my room all the evening thinking, thinking." "[I]t was some comfort to look upon the cold and silent face. I could imagine he slept and bent over his lifeless form and implore him to speak to me. but now he is put way out of my sight and I must live on without him... every place remind me of my lost friend."
Arthur's funeral at the Episcopal church was an impressive affair, his casket covered with an American flag atop which lay his sword and military cap. At the cemetery a military honor guard fired three volleys over the new grave. Anna "felt as if I should die. it was so awful to leave Arthur there all alone in the cold ground, and we wept and moaned so bitterly..." Still, within days Anna worried "that I will soon forget Arthur."
It was a foolish fear. In fact, she found it difficult to see many faces and places because they reminded her of her lost friend. In February Anna wrote of visiting Arthur's father. Seeing him "reminded me of a gentle face I had last seen cold and silent -- oh will this aching sadness ever leave me. every thing reminds me of Arthur. where ere I go I think of him and sometimes I am so rebellious yes rebellious toward so kind a Father-but I do miss him so much...I long to be with him once more."
Anna's inability to forget Arthur continued. Several weeks later she wrote, "some how to day I feel very unhappy. I do not know the reason unless it is... my unattractiveness-and then I think of Arthur. How he loved me with all and was interested in all I did and then comes the painful thought that he is gone never, never to return and I reflect how little I valued his friendship how little I appreciated him whom I now would give worlds to see..." She quickly turned away from such thoughts, her faith telling her to trust providence even when she could not understand it-"all this is wrong and shows a stubborn unregenerate heart. what shall I do, where shall I turn for a loving friend. oh for a kind christian friend to help me."
It appears that Arthur's death may have awakened Anna to the dangers faced by soldiers. In February she for the first time attended a meeting of the Springfield Ladies' Soldiers' Aid Society, formed the previous August. Her journals indicate that Anna began to make regular visits to the society's rooms to sew, knit, or make bandages to be sent to Illinois soldiers in the South. At the same time she began to reengage with the social "duties" she so feared would cause Arthur to leave her consciousness.
On September 16, 1862, Anna Ridgely wrote the last lengthy mention of her great wartime loss. "I have thought a good deal of Arthur lately—perhaps I have grieved away the holy Spirit in mourning for him, but I miss him so much. so many things recall his dear memory I am just beginning to realize his loss-to feel that he is really gone-but has not the Lord removed him, and 'shall not the judge of all the earth do right.' God forgive me and oh be merciful."
Interested in learning more?
After the war Anna Ridgely married James L. Hudson. The wartime journals of Anna Ridgely Hudson can be read in the Manuscripts Department of the Abraham Lincoln Presidential Library. A book that uses hundreds of diaries, collections of letters, and journals like Anna's to learn about Americans' attitudes regarding issues including faith and death is Lewis O. Saum, The Popular Mood of Pre-Civil War America. Two recent books dealing with American attitudes toward death during the Civil War era are Drew Gilpin Faust, This Republic of Suffering: Death and the Civil War, and Mark S. Schantz, Awaiting the Heavenly Country: The Civil War and America's Culture of Death.
Governor Richard Yates visits the battlefront
On April 10, 1862, Governor Richard Yates and a handful of other state officials left Springfield for Cairo. There they would meet dozens of volunteer surgeons and nurses and proceed to Pittsburg Landing, Tennessee, to provide care for and evacuate hundreds of Illinoisans seriously wounded during the battle of Shiloh. The governor had arranged such an expedition following the fight at Fort Donelson, Tennessee, several weeks earlier, but the Pittsburg Landing project would dwarf his earlier effort.
Illinois at the battle of Shiloh
The number of Illinoisans killed and wounded at Shiloh (more than 700 killed, 3,000 wounded, and 800 captured or missing) rocked the folks at home. Illinois had been represented at Shiloh by the army commander, four of the army's six division commanders, more than two dozen regiments of infantry and cavalry, and several batteries of artillery.
Governor Yates plunges ahead
On receiving word of the number of Illinoisans wounded in the fight, Governor Yates immediately planned an expedition to the battlefield to aid Illinois' wounded. He issued a call for volunteers to serve as doctors and nurses and chartered the steamboat Black Hawk, operating out of St. Louis, with all to meet at Cairo before moving to Tennessee.
The steamer arrived late at Cairo, delayed perhaps by the necessity of purchasing and loading goods to be used in the relief effort. The State of Illinois purchased from St. Louis merchants large volumes of special foods, including rice, macaroni, vermicelli, teas, tomatoes, and other items not a part of regular army issue.
An observer reported an outpouring of volunteers for the expedition, which led Yates to charter another vessel. "[C]onsiderable difficulty was had to confine the list of surgeons, nurses and assistants to anything like moderate proportions—three times as many insisting upon a passage as could be accommodated. The best was done that could be under the circumstances..." On April 13 religious services were held aboard the boat "to offer up thanks for our late victories, and prayers on behalf of our many wounded and their afflicted families." At Fort Henry the boat landed and a "brief half hour" was given to touring the fort.
On April 14 the party reached Pittsburg Landing, mooring near boats sent by other state governors. "Having been so fortunate as to obtain an order to receive Illinois wounded alone," the boat began loading men at 2 p.m. and reached its full capacity by 6 p.m. On leaving from nearby Savannah (Tennessee) for Cairo, several surgeons and nurses volunteered to remain and help with wounded there and at the battlefield itself.
At Cairo many of the men disembarked so that they could travel home by rail. As the boat continued to St. Louis "the wounded began to improve very much—good nursing and good living had worked wonders with them." The expedition, one participant wrote, "was one of entire success. His Excellency, the Governor, was untiring in his efforts to accomplish everything possible... The wounded were full of expressions of gratitude to him for having done more for them than the Governor of any other State had been able to do for their wounded. ..."
The second voyage of the Black Hawk
On April 24 the Black Hawk began a second trip from Quincy to the Pittsburg Landing area, chartered by Adjutant General Allen Fuller on instructions of Governor Yates. This time the men making up the volunteer corps resisted the urge to visit the battlefield in search of souvenirs and remained at the boat. "When the sick and wounded were taken on board, day and night did these true men stand by the suffering soldiers... until all were either started toward their homes or safely deposited in the hospital at Quincy." There were some problems along the way: at Savannah, Tennessee, Fuller found between five and six hundred ill or wounded Illinoisans believed to have been already evacuated, and there was a delay in removing men from Pittsburg Landing for fear of imminent battle at Corinth, Mississippi.
Some lessons learned
Adjutant General Fuller returned from the relief expedition with suggestions for changes in future operations. He first noted that the practice of accepting donations to be shipped only to specific units was absolutely "impracticable" and suggested that soldier relief organizations in Chicago and St. Louis "are eminently deserving [of] the confidence of the public, and I recommend that supplies be sent to them for distribution." He had also seen the confusion that resulted when boxes arrived without invoices revealing their contents, and the waste that resulted from unneeded or useless goods filling precious cargo space. In the future, "Boxes containing such supplies should be plainly marked and invoices of their contents should accompany them," and would-be donors should contact the central relief commissions to see what is needed at a given time. "A little attention to this, will prevent an unnecessary accumulation of some articles and a deficiency of others."
A new hero
Governor Yates continued the relief operation through late May, again personally visiting the battlefield and nearby hospitals. Illinoisans celebrated the good work of their governor. "Every care and attention that can be shown them is rendered. Good surgeons, faithful nurses, and an abundance of medical stores of every kind, ice water, (a luxury that our poor boys prize above gold,)... all that the sick could desire, is here to cheer them on their homeward way. The forethought and energy of Governor Yates in providing for the emergencies of the forthcoming battle of Corinth is beyond all praise. Before this letter is before your readers, thousands of hearts will be made glad by the return of the loved ones whom Illinois, acting through her Governor, has snatched from the very jaws of death."
Even non-Illinoisans gave thanks for Yates's efforts. A newspaper correspondent wrote that on a trip to Mississippi in May, Yates encountered the 7th Iowa Infantry. Its commander shouted to his men, "Soldiers, I have the honor to introduce to you Gov. Yates of Illinois. We are citizens of Iowa, and love her, yet to Gov. Yates... we owe a debt of gratitude for furnishing us clothing and other supplies at a time of sore need, when our government, through inability or carelessness, failed to supply us. I propose three cheers for Gov. Yates."
Interested in learning more?
Governor Yates's interactions with Illinois soldiers are discussed in Jack Nortrup, "Richard Yates: A Personal Glimpse of the Illinois Soldiers' Friend," Journal of the Illinois State Historical Society 56 (1963): 121-38, which can be found online at http://dig.lib.niu.edu/ISHS/index.html. Many documents relating to the voyage of the Black Hawk, including invoices for goods provided, are located in Richard Yates (1815-1873) Correspondence, RS101.013, Illinois State Archives.
Illinois boys in blue
For 150 years the military service of boys has provided iconic images of Civil War memory. Stories in wartime newspapers and magazines celebrated young—and often unnamed—patriots who served their country. During the postwar period people could hear concerts of war songs by an adult "Drummer Boy of the Rappahannock." Three of the four groups of statuary placed on the tomb of Abraham Lincoln (designed in 1868) include figures of very young men. The Civil War centennial period gave birth to the 1959 novel Johnny Shiloh and its 1963 adaptation for television by Walt Disney. Boys, these images would tell us, were everywhere on the battlefield.
Boys and young men certainly played a role in fighting on both sides during the war. Was it as large a role as the iconic images would lead us to think? Statistics based on military records kept during the war are notoriously slippery. Scholars over the years have estimated that boys under 18 made up anywhere from 1.0 to 1.5% of the U.S. army that fought the war. Those numbers are based on enlistment records and take at face value the statements of enlistee ages. However, many minors lied about their age in order to enter the service, though just how many cannot be known.
During the war years most states conferred legal adulthood on males when they reached their twenty-first birthday. A federal statute enacted in 1850 provided that young men between the ages of 18 and 21 could enlist in the army, but only with the written consent of "his parent, guardian, or master," and that "recruiting officers must be very particular in ascertaining the true age of the recruit." The act also provided for the discharge of minors who had somehow managed to enlist without parental consent.
In the early days of the war a number of males under 21 years enlisted in the army. Many of them obeyed the law, sometimes serving in the same units as their fathers or adult brothers. Others entered the service fraudulently, lying about their age to evade the requirement of a parent's consent.
Problems developed as the excitement of the war's early days passed and soldiers learned the drudgery and danger of army life. By late 1861 discharges from the service were being sought by many under-age soldiers and their parents. In Chicago federal judge Thomas Drummond ruled that under the law "if a minor has enlisted in the regular army, or the volunteer service, who is under the age of eighteen years, and who has been enlisted without the consent of his parents or guardian, such enlistment is illegal."
Congress in February 1862 passed a new statute that rescinded both the rule allowing young men under 18 to enter the service and the one requiring recruiting officers to "be very particular in ascertaining the true age of the recruit." The new law declared that "the oath of enlistment taken by the recruit shall be conclusive as to his age," and youngsters who lied about their age were stuck with the consequences.
Judge Drummond, despite finding the new statute "a harsh, unjust and oppressive law, ignoring the authority of the father over the son, and discreditable to the Legislature," was now forced to rule differently in cases of minors seeking to leave the service. The Chicago Tribune approved of the change, commenting, "It may be thought harsh... as under the rendering of the late law a youth of ten years, if once enlisted, could not be discharged... by the usual application... It will have a good general effect upon all future enlistments. Recruiting officers will not be troubled with boys, and if boys enlist they will not be able to play soldier a few months and then beg off by pleading the baby act."
Illinois sent an unknown number of minor boys to the field during the Civil War. Some entered the service with the consent and even encouragement of their families, others apparently by lying about their age. Here are short sketches of just a few.
Lyston D. Howe
Lyston D. Howe of Waukegan joined the 15th Illinois Infantry with his father, William, in June 1861, both serving as musicians. Lyston was 10 years and 9 months old at the time of enlistment. The boy, who stood 4' 2" tall, was discharged "for youthfulness" in October 1861 after five months of service. Four months later Lyston joined the 55th Illinois Infantry, his father's new outfit, in which he served a full three-year enlistment.
Orion P. Howe
Orion, Lyston's older brother, entered the service as a member of the 55th Illinois Infantry—his father's and brother Lyston's unit—in September 1862, at the age of 13. During an assault on Vicksburg, Mississippi, in May 1863 Howe was one of several soldiers sent for supplies of badly needed ammunition. The others were killed and Howe was badly wounded in his successful attempt to reach Gen. William T. Sherman. The exploit won him a postwar appointment to the naval academy at Annapolis (he was too short for West Point), and, in 1896, the Medal of Honor.
George R. Yost
Fourteen-year-old George R. Yost joined the U. S. Navy in January 1862 and served as "first class boy" aboard the river gunboat USS Cairo. George was on duty manning a gun on the morning of December 12, 1862, when a mine ignited by an electrical charge tore into the Cairo, sinking the vessel in about fifteen minutes. Yost survived the attack and saved the journal he kept during his service, which provides wonderful insight into life on the war vessels that steamed the great rivers.
Ransom P. Stowe
Ransom P. Stowe joined the 33rd Illinois Infantry in May 1861 at the age of 14. He served with the regiment through the war, only to be badly hurt by an accident in March 1865. The injury led to his discharge from the service in June 1865 and the award of an invalid pension in the 1870s. Ransom committed suicide in 1908, which friends and family attributed to the years of suffering due to his wartime injury.
Rebels come North: POWs near Springfield
In February 1862 the U.S. army commanded by Ulysses S. Grant captured Fort Donelson, Tennessee. The loss to the Confederates was great—control of a portion of western Tennessee, dozens of pieces of artillery, and over 16,000 soldiers. Federal officers moved quickly to transfer the captured men to the North, away from friendly territory and potential aid.
For the first time a large number of non-resident secessionist sympathizers would be residing in Illinois, at Camp Butler near Springfield, at Camp Douglas outside of Chicago, and at the old state penitentiary at Alton. Governor Richard Yates, Secretary of State Ozias Hatch, and Auditor Jesse Dubois argued with military officials against sending any number of captured Confederates to Springfield, because "there are so many secessionists at that place." Gen. Henry Halleck replied a few days later that "I shall probably be obliged" to send about 3,000 men to Springfield, and ordered that a force of guards be made ready to receive them.
That same day a train full of prisoners from the 51st Tennessee passed through Springfield on the way to Chicago's Camp Douglas. Crowds gathered to see them during a short stop. One observer noted that they "presented rather a motley appearance, being clothed in almost every style and color, and rather the worse of the wear at that. Quite a crowd assembled at the depot to see them, and we were glad to notice that very little ill feeling was manifested toward them by the crowd. Several jokes passed between them...." The curiosity and friendly banter was noticed in other towns, too.
A few days later prisoners began to arrive at Camp Butler, which still lacked a perimeter wall. A local newspaper "welcomed" the men, its editor hoping that they would be treated with respect: "we trust that nothing like taunt or insult will be exhibited towards them.... [L]et us treat them not as rebels, but prisoners of war. It is no part of magnanimity to crow over and, least of all, deride a conquered foe." Like many in the North, this writer saw the average Confederate soldier as a man who had been used by scheming politicians and ambitious landed elites for their own ends. "Hundreds of those soldiers come among us with less reluctance than they entered the ranks of the rebellious army. They were impressed into the service, against their wishes.... They are poor men, and they know very well that they have nothing to gain by the rebellion, even if it were successful. They know, as well as we that this rebellion means nothing more than.... the perpetuation of slave labor, to the detriment and disgrace of free labor." The sense expressed in those comments seems to have represented, at least for a time, the views of a large part of the unionist population. Even members of the Springfield Soldiers' Aid Society, which created hospital clothing and gathered other supplies for the men defending the Union, expressed feelings of good will.
Sympathy of another kind soon appeared as well. Prisoners disappeared from the camp, and it was suspected that "disloyal" locals helped them on their way home through southern Illinois. Among others, six men living south of Springfield were arrested and sent to the prison at Alton on a charge of aiding a prisoner to escape. Soon a petition called for their release, and Governor Yates was said to have vouched for some of the accused. Springfield's Republican Illinois State Journal suggested that two weeks in the Alton prison provided lesson enough, and that the men should be released.
The feelings of many, however, soon began to change. The victory at Fort Donelson did not end the war in the West; far from it. Heavy losses on the field of battle and in hospitals continued. Numbers of prisoners refused to see the error of their ways and sought to escape back to the Confederacy. Others expressed their scorn for the Union and those who supported the war to preserve it. The Illinois State Journal, which had earlier called for Southern brothers to be treated with respect, reflected the changed attitude. In reporting about Camp Butler the newspaper began to talk of unrepentant "braggarts," and called for an end to "kindly treatment.... 'Tis time we were tired of throwing pearls before swine." Though some humanitarians and others who sympathized with the secessionists continued efforts to comfort Camp Butler's prisoners, real war, with its characteristic hardening of hearts, had come.
Interested in learning more?
The issue of treatment of prisoners during the Civil War remains a contentious one. It has generated a huge literature. Prisoner of war activities at Camp Butler are discussed in Camilla A. Corlas Quinn, "Forgotten Soldiers: The Confederate Prisoners at Camp Butler, 1862-1863," Illinois Historical Journal 81 (1988): 35-44, online at http://dig.lib.niu.edu/ISHS/ishs-1988spring/ishs-1988spring35.pdf.
The shift from initial friendly interest to harsh feeling tempered by the efforts of humanitarians and those sympathizing with the South also played out in Chicago. For coverage of Camp Douglas see Theodore J. Karamanski, Rally 'Round the Flag: Chicago and the Civil War (1993), chapter 5.
For the military prison at Rock Island, opened in December 1863 following the victories at Chattanooga, see Neil Dahlstrom, "Rock Island Prison, 1863-1865: Andersonville of the North Dispelled," Journal of Illinois History 4 (2001): 291-306, and Benton McAdams, Rebels at Rock Island: The Story of a Civil War Prison (2000).
General Benjamin M. Prentiss of Quincy, Illinois
Dr. David Costigan
When the Civil War erupted in April 1861, Benjamin M. Prentiss of Quincy (Adams County), a colonel in the Illinois militia, was given command of seven companies with which to defend Cairo, located at the critically important junction of the Mississippi and Ohio rivers. In late April his men seized munitions aboard river steamers bound for the South, an indication of his aggressiveness; the seizure came four days before the War Department authorized such confiscations. The Virginia-born Prentiss had gained some military experience as a militia lieutenant in Illinois' Mormon "troubles" of 1844-1845 and as a captain at the battle of Buena Vista during the Mexican-American War.
In August 1861 Prentiss received promotion to brigadier general, one of the early generalships awarded to Illinois. Most of those named were lawyer-politicians, with the exception of former U.S. Army captain Ulysses S. Grant. When Grant received an assignment to Cairo, he gave orders to Prentiss, who balked, claiming that he was the senior officer. Grant announced that by law, because of his former rank in the U.S. service, he was the superior officer. Prentiss demurred before leaving for St. Louis to seek another command. He subsequently was assigned to oversee northern Missouri above the Hannibal and St. Joseph Railroad. In his Memoirs, Grant lamented Prentiss's decision to leave his command. He wrote: "General Prentiss made a great mistake.... When I came to know him better, I regretted it much.... He was a brave and very earnest soldier. No man in the service was more sincere in the cause for which we were battling; none more ready to make sacrifices or risk life on it."
Dramatic events of April 1862 in western Tennessee vindicated Grant’s faith in Prentiss. The high point of Prentiss's service came as he commanded the 6th Division of Grant's Army of the Tennessee at the battle of Shiloh. Union troops encamped near Pittsburg Landing, Tennessee, were surprised by a rebel assault under the command of Gen. Albert Sidney Johnston. Prentiss's troops managed to hold off the rebels for about six hours. Prentiss's position was overrun and he was compelled to surrender. Nevertheless, some historians contend that Prentiss and his troops bought valuable time with their brave defense, which helped the Union forces on the second day turn the tables on the rebels at Shiloh, producing an important victory.
Prentiss remained in Confederate prisons until October 1862, when he was exchanged. He was rewarded for his service with a promotion to major general, reassigned to Grant's command and detailed to oversee the defense of the eastern district of Arkansas.
In early July 1863, news arrived in Quincy that Prentiss's troops had been attacked at Helena, Arkansas, by troops commanded by Confederate Gen. Sterling Price. The Quincy Daily Herald published Prentiss's account of the battle. Prentiss had anticipated an attack and established formidable defenses, placing four batteries of artillery on heights overlooking invasion routes. Trees were felled to block roads. The outcome was an impressive victory, even though Prentiss's forces were outnumbered by approximately 6,500 to 4,000. The victory, however, was overshadowed by the huge Union successes won at the same time at Gettysburg, Pennsylvania, and Vicksburg, Mississippi. Ironically, the fight at Helena also proved to be Prentiss's final combat action.
On July 17, less than two weeks after his notable victory, Prentiss returned to Quincy and was feted in a reception hosted by the activist women's organization, the Needle Pickets. Prentiss sought a new command but none was forthcoming. In October 1863 he resigned from the army on grounds of health and family responsibilities. In fact, he was perturbed at being passed over for command. Gen. Stephen Hurlbut, a self-promoting officer and a fellow division commander at Shiloh, notified General-in-Chief Henry Wager Halleck that he disapproved of any position for Prentiss. Thus a "hero" of the battles of Shiloh and Helena spent the last eighteen months of the war at home.
The Prentiss story reveals more about Civil War leadership than immediately meets the eye. Early in the war the huge expansion of the military required a hurried search for leaders. Prentiss had served in the Mexican-American War and had run for political office, and thus appeared to fill the bill. He acquitted himself well at Cairo, Shiloh, and Helena, but authorities now decided that other officers better suited their plans for conducting the remainder of the war. The sorting process had consigned Prentiss to the sidelines.
Prentiss practiced law on his return to civilian life in Quincy. When Ulysses S. Grant became president in March 1869, he appointed his old comrade a federal pension agent. He served in this capacity for eight years. In 1881, Prentiss moved to Bethany, Missouri, where he served as general agent for the federal land office. In 1888, President Benjamin Harrison named him postmaster of Bethany and he was reappointed by President William McKinley. The government he had served had taken care of him with three separate patronage appointments. Prentiss died in 1901 at the age of 81.
Was Benjamin Mayberry Prentiss a politician who became a general? The answer must be a qualified one.
In the military situation brought on by the Civil War, there was need to almost instantly expand the army from about 16,000 troops to more than 75,000. Where were the officers to come from? Because of his previous military experience, Prentiss seemed a logical choice for command and acquitted himself quite well. Later in the war he was deemed of lesser competence and was deprived of additional commands. Whether this was a legitimate judgment is debatable. As a Republican politician he had advantages in receiving a significant command. His partisan posture likewise aided him in the postwar picture. Did the system work in Prentiss's case? It can be concluded that a jerry-built structure worked tolerably well, and Prentiss brought credit to himself and to his community.
David Costigan is professor emeritus of history at Quincy University, where he held the Aaron M. Pembleton Chair of History. He is a member of the advisory board of the Lincoln-Douglas Debate Interpretive Center.
Interested in learning more?
A sketch of Prentiss's life can be found in Ezra J. Warner, Generals in Blue: Lives of the Union Commanders. Studies of the battle of Shiloh and the role played in it by Prentiss and his division are: O. Edward Cunningham, Shiloh and the Western Campaign of 1862, Gary Joiner and Timothy B. Smith, editors; Wiley Sword, Shiloh: Bloody April; and James L. McDonough, Shiloh: In Hell before Night. Official eyewitness reports describing Prentiss's action at Shiloh can be found in the Official Records of the War of the Rebellion Series I, volume 10, part 1, online at http://ebooks.library.cornell.edu/cgi/t/text/text-idx?c=moawar;cc=moawar;view=toc;subview=short;idno=waro0010.
An end and a beginning
December 1861 closed a tumultuous year that saw secession of southern states following the election of a Republican president of the United States and bloodshed after the organization of the Confederate States of America. Perhaps more so than in most years, the close of 1861 was a time to reflect on the meaning of the past and to anticipate the future.
A major change in the Illinois command structure took place on November 11 with the appointment of Allen C. Fuller of Belvidere (Boone County) as adjutant general, the state government's chief military administrator. At the time of Fuller's appointment, thousands of volunteers had gathered in camps at Springfield and Chicago, awaiting organization into regiments. Many units were too small to be accepted for service, and they waited while men who hoped to become officers worked to enlist more recruits. Fuller soon became determined to create regiments out of these fragments regardless of the concerns of would-be officers. He made a "flying visit" to Camp Douglas in Chicago "to consolidate the skeletons and complete at once their regimental organization." That trip and time similarly spent at Camp Butler near Springfield quickly resulted in the organization of units ready for service, one newspaper happily announcing "the satisfaction of stating that . . . both encampments [are] entirely cleaned out, without a remnant of a company or a squad being left."
On December 10 Fuller reported to Governor Richard Yates the number and location of Illinois troops. He noted that 60,540 men served in Illinois regiments. More than 17,400 of them were still encamped in Illinois, soon to be sent to the front. The great majority of regiments already in the field were stationed in Missouri. Fuller noted with real concern that an estimated 10,000 to 15,000 Illinois men had enlisted in regiments raised in Missouri, Kansas, or Iowa at a time when the federal government refused to accept eager offers of more troops from Illinois.
On December 12 Governor Yates issued a statement calling attention to Adjutant General Fuller’s report on "the grand army of Illinois." He also noted proudly that through his efforts during the last two weeks over 6,000 "new and superior arms" had been distributed to Illinois units "most exposed to the enemy." The state was also taking delivery of a number of James rifled cannon—"obtained from the Secretary of War during my last visit to Washington"—to be issued to new companies of artillery.
Illinois' "grand army" grew in size and confidence, and it was relatively unbloodied by the first months of fighting. At the end of December most counties counted a few men lost in the national service. The pace of the fighting had been slow, and few battles had been fought, most notably for Illinoisans at the Missouri towns of Lexington, Fredericktown, and Belmont. By the end of December 1861, Springfield—the state’s fourth-largest city with a population just less than 10,000—had lost two residents in battle. Most deaths occurred off the battlefield, in camps, with illness and accident claiming victims.
Observance of Christmas 1861 in Illinois seems to have been affected little by the outbreak of war. Newspapers were filled with the usual advertisements announcing the imminent arrival of Santa Claus and accounts of church gatherings at which attention centered on "Christmas trees" decorated with small gifts.
Still, there was a difference. An antislavery newspaper editor from Peoria likely reflected the belief of many as he wrote of a terrible judgment being responsible for the current state of affairs: "This will be a strange Christmas to a large number of American citizens . . . [as] war, horrid and unnatural, reigns supreme over the land, making hundreds of hearths desolate to-day, and thousands of happy homes sorrowful. Like all other public calamities, this is only the result of a violation of the Divine Law . . . that we must do unto others as we would have others do unto us."
Illinois men in military camps felt the absence from loved ones but seem to have borne it well. Many units created celebrations around boxes of goodies shipped by friends from home. A member of the 36th Illinois Infantry wrote almost lightheartedly that "even in the rebel states of Dixie, Christmas is regarded by all as a day of feasting and pleasure. . . . But with us, living this military life of ours, all days are the same, and thus we have passed, for us, the strangest Christmas." Apparently the 36th did not receive gifts of food from home, "yet on this day almost every mess has managed to procure some extra that they might at least keep up a semblance of those not yet forgotten festivities around the family table."
New Year's Day seems to have been celebrated much in the prewar spirit as well, with the male heads of families making calls upon their friends while wives and daughters served refreshments to callers at their homes. In another old tradition many boys who delivered newspapers distributed an elaborately printed annual "carrier’s address," hopeful of a tip for the past year's service.
The end of 1861 and advent of 1862 brought high hopes to many. A perhaps typical feeling was that of George Willis of the 15th Illinois Infantry. Stationed near Otterville, Missouri, he wrote in the last days of the eventful year 1861 that "The coming year will like the present, be fraught with bright hopes and fair promises, some of which hopes, will be disappointed, and some of the promises will be forgotten, still we feel kindly towards the stranger [the year 1862], and rejoice that our acquaintance of nearly twelve months is about to bid us farewell forever."
For the Boys: Early soldier aid efforts
The opening of war with the firing on Fort Sumter in April 1861 found the northern states woefully unprepared for a military conflict. As the federal and state governments struggled to arm and equip the men who rallied to the flag, local governments and civilian groups worked quickly to aid their husbands, sons, and brothers who had enlisted in the service. These were the first steps in creating soldier-aid services that would provide crucial support through four years of war.
Creating a military look
In the first weeks of the war most efforts to support local troops took the form of creating uniforms. Many county board meetings in April and May appropriated funds to reimburse such projects. Carroll County authorized $5,000 for military clothing, while Stark County offered up to $3,000 at the rate of $6 per uniform. While some towns ordered uniforms from tailors or wholesale houses, located mostly in Chicago, many created the clothing locally. Several Illinois newspapers described how community merchants supplied the cloth (often at cost) and tailors did the cutting, after which the pieces of fabric were parceled out to women who did the actual assembling and sewing, sometimes in improvised sewing shops but more often in their own homes. In some cases the uniforms produced were simply trousers and a decorative shirt. More often they included uniform jackets patterned on those of the U.S. army or a militia organization. This resulted in many early Illinois outfits wearing uniforms of the grey color that would later be associated with their Confederate enemies.
An important event in the life of many military companies was the presentation, usually just before its departure from town, of a United States flag. It appears that until at least early 1862 Illinois units carried only flags made by women of the town or purchased from a supplier located in one of the state’s larger cities, such as Chicago or Peoria. No matter what their origin, soldiers saw their flags as a special link to their home communities and promised to return them, stained perhaps with battle smoke and blood but untouched by the hands of traitors.
For families left behind
When county officials voted funds to supply local volunteers with proper military uniforms most also made another appropriation to be used in providing financial support to the families of soldiers. The amounts of such appropriations ranged from $5,000 to $10,000, while the city of Pekin (Tazewell) authorized $1,000. The Will County board set benefit rates at $1.25 per month for a woman heading a family and 50 cents per month for a child under the age of 12. In the first glow of patriotism such payments were largely welcomed, and in most counties they continued to be made through the war. Later, at least some dissatisfaction was felt over special public benefits being provided to needy wives and children of soldiers. Early on Jasper County appropriated a special fund to assist soldiers' families but soon reverted to the standard prewar system of support for paupers.
An ongoing, organized effort begins
By the fall of 1861 the reality had begun to sink in that the war would not end with a few decisive battles. One response was the formation of permanent organizations to provide local servicemen with needed items that the army did not or could not provide. On April 20 those meeting at the Lee County courthouse in Dixon formed the Lee County Volunteer Aid Association, but this seems to have been unusual during the war’s first weeks. The fall months, however, saw such groups organizing in towns across the state, including Galena (Jo Daviess), Galesburg (Knox), Middleport (Iroquois), Salem (Marion), Sterling (Whiteside), Toulon (Stark), and Wyoming (Stark).
The experience many Illinoisans had with religious and other organizations likely made soldier-aid societies seem the obvious answer to the complaints of hometown soldiers regarding food and shortages of medical supplies. The new groups just formalized efforts that had been improvised a few months before. For months they would periodically ship boxes of goods direct to their friends in camp, sometimes accompanied by a local civic leader who returned with a first-hand account of how the boys were doing.
Women performed much of the labor. They knitted socks and mittens, sewed clothing for hospital use, and prepared foods that supplemented the army basics of salted beef or pork, beans, hard bread, and coffee. In these early days the mood seems to have been one of excitement, even lightheartedness. The women of Middleport, however, apparently realized that their men were involved in the business of killing. In November they issued a public appeal that called especially for mittens "knit with one finger, so as to give free use of the index finger of the hand."
Interested in learning more?
Newspapers are the best source for learning about community efforts to support the troops. Many Illinois libraries hold copies of local newspapers of the period. Microfilm copies of many Illinois newspapers may be borrowed via interlibrary loan from the Abraham Lincoln Presidential Library. The catalog of holdings can be found at: http://www.illinoishistory.gov/lib/newspaper.htm .
Women, children, and explosives: Making ammunition at the state arsenal
In the rush to war following the firing on Fort Sumter, Illinois Governor Richard Yates launched a crash program to arm the state’s newly enlisted troops, especially those sent to protect the strategic city of Cairo at the junction of the Ohio and Mississippi Rivers. While agents visited the East in hope of purchasing supplies of muskets and cannon, officials in Springfield created a factory to produce the ammunition that would be consumed by those weapons. For more than seven months in 1861 the Illinois state arsenal employed not just men but also dozens of women and children in its explosives operation.
On April 22, 1861, an anonymous writer to the Springfield Illinois State Journal commented, "We do not know how long the present unhappy contest may continue. The public safety requires that we should rely upon ourselves" to provide the resources to protect the state from attack. That very day the Illinois state arsenal began operating an ammunition factory. At first the work probably was done in the arsenal building itself. Soon it would expand to rented and newly constructed buildings on and near the arsenal property. Within a month a local newspaper reported that the business was "going on briskly," with about forty employees making about 12,000 to 14,000 musket cartridges per day. In early June workers began to produce artillery ammunition as well.
The presence of the factory brought real benefits to Springfield. The Illinois State Journal crowed in mid-June 1861 that "Altogether, this business is the means of disbursing a good quantity of gold and silver in our city." Virtually everything needed by the operation could be purchased from local vendors. At one point ten men at the foundry of John C. Lamb cast iron shot for artillery pieces, twenty more cast musket balls for the contractor Newman & Fisk, and another twenty turned lumber by the carload into shipping cases. Dozens of others earned money assembling the different elements into finished cartridges.
During a period in July 1861, when officials worried about a potential attack on Cairo, the shop operated even on Sunday, with eighty employees rolling musket cartridges or assembling artillery rounds. Two women sewed woolen powder bags for the artillery, while two men used lathes to turn the wooden sabots that held together the large iron shot and the explosive powder charges. At one point a day's production reached 25,000 musket cartridges and 425 rounds of artillery ammunition.
From a small start of ten employees the ammunition factory workforce grew quickly. The great majority of the employees were women and children, some reported to be as young as eight years old. Women sewed the woolen powder bags used to create artillery rounds, and both women and children created the cartridges to be used in infantry weapons. Officials declared a priority of giving jobs to the wives of men in the army, "many of whom are strictly dependent upon their labor for their support." The Illinois quartermaster general remarked that in the manufacture of ammunition "employment was given to a large number of children, both boys and girls, from eight to sixteen years of age, who, besides helping in the maintenance of their mothers and their younger brothers and sisters . . . acquired habits of industry and became accustomed to a discipline that will have its salutary effect upon the formation of their characters."
Pay rates varied by the type of work. Males engaged in skilled work such as turning sabots on a lathe received $1.25 per day, while the great majority of the women and children received 50 cents or 33 1/3 cents per day. Judging by army ordnance manuals, ten hours made up a full day of work. Though officials claimed priority for women and children struggling for an income, a few of the less needy found jobs, including Beverly Herndon—the twelve-year-old son of Lincoln law partner William H. Herndon—and four children of factory superintendent Enoch Paine.
For all of the enthusiasm about the benefits of employing soldiers' wives and helping children to learn good habits, producing ammunition was inherently dangerous work. In 1864 explosions at large factories in Washington, D.C., and Springfield, Massachusetts, each killed dozens of workers. Army ordnance manuals discussed the need to be conscious of safety; to prevent sparks, for example, they directed those working in the presence of black powder to wear socks or moccasins rather than shoes and not to drag or shuffle their feet while walking.
The Springfield arsenal operation produced musket rounds of different calibers to suit the weapons being used by infantry troops. For a time the arsenal supplied a major portion of the musket ammunition used by General John C. Frémont in Missouri. Large amounts were also sent to General George B. McClellan, then fighting in western Virginia. Each individual round consisted of a musket ball and a charge of explosive black powder wrapped together in a paper tube. The finished cartridges were then packaged in lots of ten, which were distributed to soldiers. One hundred of those packages were then boxed for shipment to the front.
The artillery ammunition made at the arsenal factory consisted of a projectile, attached to a cloth bag full of explosive powder by means of a wooden sabot. The bags, sewn by women at the factory, were of wool merino or serge “closely woven” so that powder did not sift out. The sabots that brought the projectile and powder bag together were made of poplar or some other “close-grained” wood lathe-turned at the factory.
Surviving records show that from April 22 to August 31 the cartridge factory at the Illinois state arsenal produced about 1.4 million rounds of ammunition for muskets and rifles, just over 8,000 for artillery pieces, and 6,000 for pistols. During the four-month campaign in 1864 to capture Atlanta, Georgia, which included several major battles, General William T. Sherman’s army reported firing just short of 22 million rounds of musket ammunition and over 149,000 rounds of artillery ammunition.
The factory closes
Factory operations closed at the end of November 1861, as the war department took over from the states the purchasing of ammunition, arms, and other military equipment. Governor Yates and other state officials protested the Federal takeover and consequent loss of control over contracting. Closing the ammunition factory was especially sad; it had provided "the employment of hundreds of little hands, thereby affording a means of support to many a desolated soldier's household," and its end was a matter of "great regret."
Interested in learning more?
Surviving records of employees and production at the state arsenal factory are located at the Illinois State Archives in Record Series 301.082. Detailed instructions on how to produce musket and artillery ammunition are found in The Ordnance Manual for the Use of the Officers of the United States Army (1861) on pages 255–81. It can be found online at http://www.archive.org/details/cu31924031187887.
Lexington, Missouri—A battle is lost but a flag is saved
The capture of Illinois and Missouri regiments at the siege of Lexington, Missouri, in September 1861 added to the list of defeats suffered by Federal troops during the summer and early fall of 1861. A bit of comfort, however, was provided by a young private from Illinois who outwitted the rebels and redeemed a captured United States flag.
Missouri in 1861
Missouri became the frontline of action for many Illinois troops beginning in April 1861. After Cairo and southern Illinois were made secure, the Ohio River front became temporarily quiet as both Federal and Confederate governments respected the shaky neutrality declared by Kentucky. Missouri, however, was soon flooded with Federal troops, many of them from Illinois. Some took part in the removal to Illinois of weapons from the U. S. arsenal at St. Louis (see the monthly feature for April 2011). Others moved to protect railroads and loyal citizens from Confederate sympathizers as the state wrestled over the question of secession.
In September 1861 Federal forces and loyal Missouri units attempted to hold the line of the Missouri River and protect the capital at Jefferson City. Missouri State Guard units numbering about 8,000 men and commanded by Sterling Price moved northward following their August defeat of U.S. forces at the battle of Wilson's Creek, hoping to encourage support for secession. They soon headed for Lexington, a Missouri River town held by Federal troops under the command of Col. James A. Mulligan of Chicago. Lexington was targeted because of its location commanding the river, the secessionist leanings of the nearby population, and the more than $900,000 in cash that had been taken from a local bank by the Federals.
The siege and battle
Mulligan’s force was made up of his own 23rd Illinois Infantry, more popularly known as the Irish Brigade, the 1st Illinois Cavalry, the 13th and 27th Missouri Infantry, and a number of small unionist home guard units. It totaled about 3,500 men. They occupied a former college campus overlooking the river and quickly began to build defenses. What would prove a fatal mistake was made when a spring of water was left outside the defensive line, leaving only a few wells to supply water for the men and horses. The lead units of Price’s force arrived at Lexington on September 12, the army's size almost doubled by new recruits and continuing to grow. Price laid siege to the fortified college campus, the two sides struggling to control buildings that could conceal riflemen. Mulligan held on, convinced that reinforcements were on their way. They were not. The wells, supplying water to 3,500 men and several hundred animals, soon began to run dry. On September 20 the Missourians continued to press the Federal forces, this time using soaked hemp bales as a moveable, bullet-resistant barricade. Confusion soon broke out within Mulligan’s lines when a subordinate displayed a white flag. Mulligan asked for terms of surrender, to which Price responded that the surrender must be unconditional. The terms were accepted.
For Mulligan and his men the defeat was a bitter one. A witness wrote that "the scenes at the capitulation were extraordinary. Col. Mulligan shed tears. The men threw themselves upon the ground...demanding to be led out and 'finish the thing.'" It was reported that some cavalrymen "shot their horses dead on the spot, unwilling that their companions in the campaign should now fall into the hands of the enemy." Officers were to be held as prisoners of war. The enlisted men were to be released on turning over their weapons and all equipment except the clothing on their backs. As the secessionist band played "Dixie" many of the defeated Federals "wept to leave behind their colors [flags], each Company in the Brigade having its own standard presented to it by their friends."
Redeeming the flag
But not all of the United States flags at Lexington were surrendered. Nineteen-year-old private Henry C. Carico of Co. A, 1st Illinois Cavalry, saved the company's flag from capture, wrapping it around his body and then covering it with his uniform before being marched away. Carico's exploit electrified Illinoisans bitter over the Lexington defeat. The banner was proudly displayed as Company A passed through Illinois towns on the return to Bloomington. On October 12 "the old battle-worn flag" fluttered from the rear of the train that brought Company A to Bloomington in triumph.
People rushed to "look at the torn banner... The contrast in the appearance of the flag when it was borne from here, and its looks on its return, indicates fully the scenes it has passed through. Then, its silken folds glistened as it waved in the breeze, bright... and unstained. Now, it still floats as proudly as ever, with its honor untarnished, but torn and defaced by cannon balls and bullets, dimmed by battle smoke, and stained here and there with spots of blood."
The crowd at a public meeting held at the McLean County courthouse raised three loud cheers for the flag, three for Capt. John McNulta and the men of Co. A, "and three for private Carico, who rescued the flag and brought it safely home."
Interested in learning more?
An accessible history of the siege is the Lexington, Missouri, Historical Society's The battle of Lexington, fought in and around the city of Lexington, Missouri, on September 18th, 19th and 20th, 1861, by forces under command of Colonel James A. Mulligan, and General Sterling Price. The official records of both parties to the conflict; to which is added memoirs of participants (digital format: http://digital.library.umsystem.edu/cgi/t/text/text-idx?;c=umlib;idno=umlc000087).
The history of James Mulligan and the Irishmen of the 23rd Illinois Infantry is outlined in Harold F. Smith, "Mulligan and the Irish Brigade," Journal of the Illinois State Historical Society 56:2 (Summer 1963), pp. 164-76. The State of Missouri operates the Battle of Lexington State Historic Site. For more information visit http://mostateparks.com/park/battle-lexington-state-historic-site.
August 1, Emancipation Day
On August 1, 1861, many African Americans in Illinois joined others throughout the Union in celebrating Emancipation Day, marking the anniversary of the August 1, 1834, abolition of slavery in most of the British Empire. For many African Americans, August 1 seemed more appropriate for celebration than the Independence Day anniversary just weeks before. They certainly sensed more than their white countrymen the contradiction between the ideals of the Declaration of Independence and the reality of millions of their fellows being regarded as a form of property. Even for those blacks residing in free states, daily life involved limitations that brought into question the "self-evident" truths of the Declaration.
Emancipation Day celebrations were first held in eastern states during the 1840s. In larger towns and cities they included large, organized parades, much like those of Independence Day. The events, planned by local black community leaders including ministers, usually began with prayer and included long, formal speeches that often included a history lesson highlighting the role Africans had played in building the nation. The planners aimed for an atmosphere of religious and educational uplift and restrained celebration, in part, to prove to white neighbors that African Americans were “respectable”—an important attribute among those aspiring to the growing middling class—and as capable of carrying civic responsibility as any other American. Many participants seem to have looked forward more to the picnic lunch and a chance to spend a peaceful day meeting and relaxing with friends.
Emancipation Day in Illinois
It appears that Emancipation Day observances came to Illinois in the middle 1850s, a time of growing tension over the role of slavery in national life and growing hostility to the institution. In 1857 more than 200 Chicago blacks and whites met at the African Methodist Church and marched to the railroad depot, led by the city band. At a grove south of the city they listened to several speeches and enjoyed a picnic lunch. After a late afternoon return to the city another gathering was held downtown. After short speeches and "an elegant supper," participants danced until after midnight. The event seems to have been fairly typical, one to which white neighbors were welcomed and which at least some supported and attended.
In Galesburg, a center of antislavery activity, African Americans from several counties gathered on property owned by white antislavery activist George W. Gale. The event opened with "a fervid prayer," followed by singing of a women's choir. Joseph H. Barquet then made a speech lasting over an hour, "full of burning eloquence, deep thorough and historical research... recurring the bloody scenes that were being enacted in the State." Joseph D. Allen of Knoxville followed with a sermon. After an elaborate picnic lunch the meeting resumed, formally adopting statements that condemned the U. S. Supreme Court's Dred Scott decision, which ruled that blacks were not, and never could be, citizens of the United States. Another resolution officially invited Frederick Douglass to lecture in central Illinois. The last action thanked Gale for use of his grounds, "and the citizens of Galesburg, for their liberality and protection," the latter an important point since African American civic events were sometimes disrupted by disapproving whites. At Galesburg, "the meeting then adjourned; every body in the best of humor."
Emancipation Day 1861
In 1861 Emancipation Day came less than four months after the opening of hostilities at Charleston, South Carolina. Any excitement felt over the possibilities for black freedom that might come from the war apparently melted in the oppressive heat that covered Illinois. In Bloomington, where the temperature topped 100°, the celebration took place in a local grove. The local newspaper reported that "their Fourth of July" comes at a time too hot for whites, "yet they are going in with heart and strength to have a good time." Any activities that had been planned for Springfield fell by the wayside because "The day was entirely too hot to feel good." The people in Quincy stuck it out, and the observances "took place in one of the public squares, and were appropriate and harmonious."
Emancipation Day 1862
By August 1862 the war had been underway for over a year, and circumstances had pushed Abraham Lincoln and the Congress to actions that slowly changed the situation of African Americans. Black men were being quietly recruited into the army, and newly signed laws gave freedom to the slaves of those actively involved in the rebellion and ended slavery in the District of Columbia and the territories of the United States.
Emancipation Day ceremonies in Bloomington emphasized the promise of the hour. Local blacks were reported to have celebrated "in the usual style, though we think with something more than the usual spirit and interest." E. Hutchens, "a man of many years, whose life has been spent amidst slavery," made a speech in which he "dwelt at length upon the duty of his brethren in the free states, to educate their children, elevate their moral and religious character, and fit them for the higher position which he dared to hope they would soon be allowed to enjoy." J. W. Hill of Peoria spoke on temperance and education, "and the importance of the African demonstrating that he is capable of self government." The white reporter in attendance commented that "if any one is impressed with the idea that the negro cannot enjoy freedom, a few moments spent on the ground yesterday would have dispelled it... and it seemed pleasant to reflect that the fond dreams of freedom so long indulged by this oppressed people, promise a speedy realization."
Interested in learning more?
Detailed studies of Emancipation Day observances and how they changed over time can be found in Mitch Kachun, Festivals of Freedom: Memory and Meaning in African American Emancipation Celebrations, 1808-1915, and J. R. Kerr-Ritchie, Rites of August First: Emancipation Day in the Black Atlantic World.
Going to Camp
In the months from April 1861 to the summer of 1865 more than forty military camps dotted Illinois. Most were temporary and served as places where individual companies from a small region gathered and organized into regiments, received arms and uniforms, and experienced the first doses of military discipline. A few permanent facilities, notably Camp Butler (near Springfield) and Camp Douglas (Chicago), served as training, staging, and processing stations throughout the war. Other camps in Illinois served fully trained and equipped units as jumping off points to the nearby scenes of conflict in western Kentucky and eastern Missouri.
The response to the Illinois legislature's April 1861 call for ten infantry regiments set a pattern for dealing with the organization, equipping, and training of regiments later in the war. A temporary camp was established in each of the state's nine congressional districts. There, the units raised in that district would be organized and begin their military lives and the transition from home, family, and friends to the battlefront. Officials helped to ease the movement of units by locating most campsites within an easy march of a river landing or a town on the growing network of railroads, which in 1860 consisted of about 2,800 miles of track.
Many of these camps occupied the fairgrounds owned by county agricultural societies. In fact, officials cancelled the 1862 state fair, to have been held in Peoria, because of the occupation of the site by troops undergoing training. Fairgrounds proved to be almost perfect for use as military camps. They contained shelter in the form of animal sheds and other buildings, plentiful water supplies, large open areas that could be used for marching and drilling, and, usually, a well-established boundary that could be policed. Will County historian George Woodruff later recalled Camp Goodell, established in 1861 on "the old fair grounds on the well-known Stevens' place, having on it fine, shady oak openings, an abundant spring of water, and buildings already erected... To these, company barracks were quickly added." He also noted sadly that military use of the grounds meant that "men were now reversing the prophetic scripture, and turning their scythes into swords and their pruning-hooks into bayonets."
In other cases officials raised camps from scratch, constructing buildings and establishing parade grounds on what had previously been open land. Camp Butler (near Springfield), Camp Douglas (Chicago), and Camp Fuller (Rockford) quickly rose on old farms or local picnic grounds. One recruit at Camp Fuller described the newly constructed barracks as having "bunks from floor to ceiling and two men would occupy a bunk... When any of the boys were out on guard two hours in the night, he would declare when he returned that the barracks stank enough to knock him down."
The establishment of a camp often helped the economy of the nearby town, especially in those cases where all of the buildings had to be constructed by local labor using locally produced lumber. Even when camps occupied already existing buildings on a fairground, local vendors received orders for necessaries such as bread, pork or beef, firewood, straw for bedding, and feed for horses. Soldiers visiting town on passes visited photograph studios, saloons, and restaurants.
For most men, life in camp after enlistment provided their first encounter with military discipline, and the transition from civilian to soldier was not an easy one. Camp Scott (Freeport) provided several examples of recent inductees having difficulties with military discipline. In May 1861 a circus visited Freeport. Many soldiers decided to attend, quietly slipping out of camp without a pass. A man gave his friends the password that would allow them back into the camp before roll call. When officers got wise to the scheme and suddenly changed the password, "you had better think there was some swearing about the time they wanted to come in... There were some 40 or 50 put in the guard house..." Later in the month the camp commandant placed almost two hundred men under arms as a guard to crush the threatened mutiny of one of the companies.
Visits to camps by friends from home softened the sting of new and unfamiliar discipline. Visitors often brought delicacies and picnic foods to the hometown boys, one of whom described army food as "none of the fancy kitchen fixings." An especially important moment of connection with friends from home came with the presentation to the unit of a United States flag. Until early 1862 the state government did not issue flags, and friends were more than happy to fill the need. Presentation ceremonies were very public affairs, often including hundreds of outside spectators. The program usually included a short speech, often by a young woman, on behalf of the donors, who were often women. The company or regimental commander responded with a speech of thanks, followed by the cheering of the men and the singing of patriotic songs.
The final parting from the home folk came when the new regiment left for one of the state's large permanent camps, or for one of the battlefronts in the South. John King of the newly created 92nd Illinois Infantry recalled the regiment’s 1862 departure by train from Rockford: "We all realized that it would be a last good-bye for many of us; we could not tell who would fall or who would return... Tears were gradually dried as we sped along towards Chicago... What was going to be our future?"
Interested in learning more?
A regiment-by-regiment list of towns in which units mustered into service is found in Jasper Reece, Report of the Adjutant General of the State of Illinois ... Containing Reports for the Years 1861-66 (revised 1900), vol. 1, pages 151-56, which can be found at http://hdl.handle.net/2027/uiuo.ark:/13960/t66401872 .
Many Illinois newspapers printed letters from local men in the service, who described their new lives as soldiers in the Illinois camps. The Abraham Lincoln Presidential Library holds microfilm of many of these newspapers. For their newspaper microfilm catalog, visit http://www.illinoishistory.gov/lib/newspaper.htm.
Many regimental histories and soldier memoirs include accounts of soldier life in Illinois camps of instruction. Historian Daniel Sauerwein argues for the importance of the Illinois camp experience in his paper "The Impact of Camps of Instruction on Illinois Soldiers and Communities" at http://civilwarhistory.wordpress.com/2007/10/08/the-impact-of-camps-of-instruction-on-illinois-soldiers-and-communities/ .
Benjamin H. Grierson --- Not Only a Great Civil War Soldier
Keith A. Sculle, Ph. D.
Illinois Historic Preservation Agency (retired)
Think of Illinois and the Civil War, and what quickly comes to mind for most of us? Abraham Lincoln, the man of the people who rose from modest means to become perhaps the greatest president, saved the nation from dismemberment in the bloody Civil War. He further added to the United States' unique idealism in the family of nations by promoting an end to slavery. Yet nearby, during Lincoln's years in central Illinois, a lesser-known man who also deserves remembering was coming into his own---Benjamin Grierson (1826-1911).
Born in Pittsburgh to Scots-Irish immigrants who later moved west to Youngstown, Ohio, Grierson typified many who fought for the Union---finding his way in the world according to his talents. In 1851, Grierson relocated to Jacksonville, Illinois. The city had been founded only a few years earlier (1825) as a speculative venture on the Illinois frontier, but it rapidly grew into a sophisticated city for its time, well connected to the world beyond. The seat of Morgan County, Jacksonville quickly attracted many professionals, one of Illinois' first public care facilities, a medical school, two colleges, and a railroad. A man of many talents, including music and the written word, Grierson recorded in his autobiography that "During my residence in Ohio, I had composed and arranged a considerable amount of music for bands and orchestras, and after my arrival in Illinois much additional music was written and arranged for the excellent band and orchestra in Jacksonville, of which I was the leader." After marrying an early sweetheart from Ohio, with whom he had become reacquainted during her visits to relatives in nearby Springfield, he decided that his income as a music teacher was insufficient to start a family and moved to nearby Meredosia. There, in 1855, he and a partner opened a store that was lucrative until the financial crash of 1857 but remained open until 1860. Grierson, who gave up his homestead to pay his debts, was, by his own accounting, "virtually left without a dollar" and moved back to Jacksonville.
By no means a quitter, Grierson reflected that the five years he had spent in Meredosia exemplified his faith "that the experience thus gained in sustaining what I deemed a just and righteous cause was absolutely necessary to enable me to put forth greater efforts in the memorable struggle which was soon to follow." The 'memorable struggle' referred to in Grierson's cryptic note began with his decision to join and support a new political party. In an overwhelmingly Democratic county, and at seeming physical peril to himself, he joined the new Republican Party. He campaigned for and helped organize the fledgling party and Abraham Lincoln, its first successful presidential candidate. He also remembered that during this time he "composed a great many songs which were widely sung and published throughout the country, and, often met and was intimately acquainted with Mr. Lincoln."
1863 was the year that vaulted Grierson's reputation to new heights. The raids of April 17th to May 2nd--some 15 days--mark a turning point in the Civil War and represent the legacy of Benjamin Grierson. Who could have guessed that his background had prepared him for it?
At his mother's request, he had refused an appointment to the United States Military Academy at West Point. Years later, in April 1861, when southern secessionists bombarded Fort Sumter, he resolved to go to war. He served initially as an aide-de-camp, rose to major in the Sixth Illinois Cavalry, to colonel in April 1862, and in November of that year to brigadier cavalry commander in the Army of the Tennessee. Grierson's assignments were limited to comparatively small operations, but in them he earned his men's respect. General Sherman recommended him to General Grant to command 1700 men in a diversion slicing approximately 600 miles southwestward from the southern tip of Tennessee through Mississippi to Union-held Baton Rouge, Louisiana. This feint would distract the Confederate army, permit Grant's conquest of Vicksburg, split the Confederate forces to enable Union control along the entire length of the Mississippi River, and end the Confederacy's flow of material eastward. This bold strategy succeeded at a particularly low ebb in Union fortunes; with both Union and Confederate forces bogged down in the East, hope glimmered in the West. As skill and luck would have it, Grierson accomplished his mission. He and his exhausted men were surprised to be greeted as heroes, and opinion grew that the Confederacy could be defeated. Even a beaten Confederate commander praised him: "Grierson was here; no, he was there, sixty miles away. He marched north, no, south, or again west... The trouble was, my men ambushed you where you did not go; they waited for you till morning while you passed by night."
In time, Grierson's miraculous reputation dimmed, and after the Civil War he went on to a long military career in the West, where he organized a unit of black cavalry known as the Buffalo Soldiers. A century later, Grierson's Raid rekindled imaginations when Dee Brown, head librarian of the University of Illinois and later famous for Bury My Heart at Wounded Knee, wrote an account of the raid, and Harold Sinclair, a lesser-known novelist and creative non-fiction writer from Bloomington, wrote a fictional account, The Horse Soldiers. Hollywood adapted it into a star-studded film.
Grierson---an historian too---penned a true and idealistic testimony deserving memory: "It is said that whatever withdraws us from the power of our senses, whatever makes the past, the distant, or the future, predominate over the present, advances us in the dignity of thinking beings, and as we can only at most, form a vague conception of the future, the time may not be unprofitably employed in glancing through the past, even if we gain nothing thereby beyond a better acquaintance with the history of our ancestors." He meant it about his family; but it can just as easily stand for our collective history.
Ulysses S. Grant goes to war
On April 25, 1861, Ulysses S. Grant departed from Galena for Springfield, accompanying Jo Daviess County volunteers responding to President Lincoln's call for soldiers to put down the insurrection. While in the state capital Grant came to the attention of Governor Richard Yates. Exactly how that happened is uncertain, since many claimed to have played a role in the process. In any event, the visit presented Grant with opportunity that put him on a road to acclaim.
Grant, a West Point graduate and former captain of infantry, had fallen on hard times, and was working in his father's Galena leather-goods store when secessionists fired on Fort Sumter. Though Grant was a relative newcomer, local leaders including U.S. Congressman Elihu B. Washburne invited "Captain Grant" to preside over a meeting to enlist volunteers, and later to assist in their organization and to ride with them to the capital.
The day after arriving in Springfield, Grant wrote to his wife, Julia, that Washburne persuaded him to stay for a time, though he had intended to return immediately to Galena. Governor Yates hoped that Grant would organize and train additional volunteer companies to be accepted under the terms of legislation moving through the General Assembly. On April 29, Grant carried out his first military task for the governor— an inventory of weapons stored in the state arsenal. This and other duties did not seem challenging - "I am on duty with the Governer, at his request, occupation principally smoking and occationally giving advice as to how an order should be communicated &c." Still, Grant's knowledge of military organization allowed him to serve Yates as a troubleshooter. He wrote to Julia, "I don’t see really that I am doing any good. But when I speak of going it is objected to...." On May 4 he was placed in command of Springfield's Camp Yates, and a few days later departed for southern Illinois, where he assisted in organizing regiments and mustering them into service.
Though he recognized the importance of his civilian service, Grant hoped for command of a regiment with the rank of colonel. Given his West Point training and previous army service, accepting a lesser rank was out of the question. Achieving the rank, however, required his being elected by a regiment's volunteer officers or favored by "log rolling" political leaders, "and I do not care to be under such persons," wrote Grant. Still, he recognized that "the time I spend here [in Springfield]... has enabled me to become acquainted with the principle men in the state... I do not know that I shall receive any benefit... but it does no harm." He offered his services to army officials in Washington by letter and in Ohio in person. No offer resulted.
While in Indiana, returning from the fruitless Ohio visit, Grant learned of his appointment as colonel in the Illinois volunteer force. Governor Yates had appointed him to command an infantry regiment that had been mustered at Mattoon but had since fallen into chaos due to incompetent leadership. Officials moved the regiment to Springfield and on June 18 Grant took command. His process of quiet but firm discipline and military instruction soon paid off. When officials asked the men, who had volunteered for a term of one month, to extend their enlistment for three years, the response was overwhelming. The regiment came into the service as the Twenty-first Illinois Infantry. Within weeks they headed for the front, to guard rail facilities in northern Missouri. While most regiments moved by rail, Grant decided that his would pick up some experience by marching the ninety-odd miles to the Mississippi River town of Quincy.
In mid-July Col. Grant received orders to deal with secessionist Missouri State Guard troops under Thomas Harris. After days of marching, Grant and his men finally closed in on their objective. He recalled later that
"my heart kept getting higher and higher until I thought it was in my throat. I would have given anything then to be back in Illinois, but I had not the moral courage to halt and consider what to do; I kept right on. When we reached a point from which the valley below was in full view I halted. The place where Harris had been encamped a few days before was still there... but the troops were gone. My heart resumed its place. It occurred to me at once that Harris had been as afraid of me as I had been of him. This was a view of the question I had never taken before; but it was one I never forgot afterwards... I never forgot that [the enemy] had as much reason to fear my forces as I had his. The lesson was valuable."
On July 31, 1861, Abraham Lincoln nominated Grant for promotion to brigadier general. Grant's name was one of six sent to the president by the Illinois congressional delegation, placed at the top of the list by Galena's own Elihu B. Washburne. On August 5 the U.S. Senate confirmed the appointment.
Interested in learning more?
Grant described his early wartime service in chapters 17 and 18 of his Memoirs.
Conflicting postwar memories as to how Grant came to the notice of Illinois officials and who assisted him are reviewed in Lloyd Lewis, Captain Sam Grant, chapter 23.
Grant's service through the appointment as brigadier general is found in Brooks D. Simpson, Ulysses S. Grant: Triumph over Adversity, 1822-1865, chapter 6.
Grant's correspondence during this period—and the whole of his life—can be found at http://library.msstate.edu/usgrant/facstaff.asp.
Illinois responds to Fort Sumter
The firing on Fort Sumter in Charleston Harbor, South Carolina, and President Lincoln's April 15 call for troops found Illinois largely unprepared. Although Americans had discussed the possibility if not likelihood of war since the election of a Republican president in November 1860, the state had made no real preparations for the coming of conflict. Illinois leaders had to make up for lost time, and they moved quickly on multiple fronts.
Legislation and bureaucracy
After calling for men to volunteer for national service, Governor Richard Yates called for the Illinois General Assembly to meet in special session to begin on April 23. The agenda included, among other things, providing for the organization of volunteers into regiments, providing funds for the military defense of the state, and providing for the arrest and punishment of those who would destroy the railroad's capacity to carry troops or military supplies or who would use telegraph lines "for illegal and revolutionary purposes."
With the opening of war the state's adjutant general, who administered the skeletal militia, became a lead player in organizing the state’s war effort. Men volunteered by the thousands, and the adjutant had to decide who to accept and who to turn away. Regiments then had to be organized, fed, clothed, and armed. A Springfield newspaper noted that on April 15 the "Adjutant General's office, in the State House, began ... to assume quite a military appearance, an orderly in full uniform being stationed outside the door, and military men constantly coming and going." Two days later "Adjt. Gen. Mather, with several aids, was busy all day registering the companies as they were reported, receiving and answering dispatches, and in the transaction of other business pertaining to his department." The chaotic situation calmed somewhat late in the month when a former U.S. army captain, Ulysses S. Grant, "took supervision of the muster rolls and assignments of companies, and before a week had expired had brought order out of chaos in the papers."
Keeping weapons from rebels
On April 17, 1861, a group of Illinois state officials including Governor Yates and U. S. Senator Lyman Trumbull wrote President Lincoln to warn of the danger that secessionist leaders in Missouri might capture the 30,000 muskets stored at the U. S. arsenal in St. Louis. They proposed that the arsenal commander be ordered to provide Illinois with at least 10,000 of the weapons, some of which could then be provided to Union men in St. Louis. The more guns removed the better. "We are anxiously waiting for instructions... Our people burn with patriotism..."
On April 20 Secretary of War Simon Cameron instructed Governor Yates to send Illinois troops "to support the garrison of the St. Louis Arsenal, and to receive their arms and accoutrements there," with an additional 10,000 muskets for later distribution. Captain James Stokes moved on April 24, steaming from Alton, Illinois, on the steamboat City of Alton. Stokes hatched a plan with arsenal commander Captain Nathaniel Lyon to haul the guns to the steamer under cover of night and then move quickly to the Illinois side of the river. A decoy shipment of broken weapons diverted the attention of secessionist scouts, and wagons carrying about 20,000 muskets reached the City of Alton without being detected.
The removal of arms from the St. Louis arsenal and their provision to troops enlisted to defend the Union provided a boost to Illinoisans in the weeks after the surrender of Fort Sumter. The operation provided weapons for several Illinois regiments, while others were soon ordered shipped to Ohio and Wisconsin, much to the disappointment of Governor Yates. Newspapers described the exploit to their readers, and the Illinois General Assembly voted its thanks to Captain Stokes "for the tact and energy displayed by him" in capturing the "arms and munitions requisite for equipping the volunteer forces of this state."
Holding Cairo for the Union
The same April 17 letter to the president from Illinois officials declared that if "Federal troops can be spared... they ought to be sent instantly to Cairo, that point being considered the most important and commanding point of the West." On April 19 the Secretary of War ordered state officials to occupy the city. Cairo lay at the far southern tip of Illinois, overlooking the junction of the Ohio and Mississippi Rivers; control of the city meant control of large stretches of the two important waterways. Holding Cairo would also make it easier to protect the important Illinois Central Railroad line, and to prevent the movement of weapons, ammunition, and other military goods to the rebellious states.
Governor Richard Yates immediately ordered Richard K. Swift of Chicago to move without delay all available men to carry out the War Department's instructions. About 600 troops departed Chicago on April 21. Others joined the force as it moved south on the Illinois Central Railroad, finally totaling about 900. The men, "indifferently armed with rifles, shot-guns, muskets and carbines, hastily gathered from stores and shops in Chicago," reached Cairo on the morning of April 23. Troops continued to arrive, establishing what would be a major base throughout the whole war.
The following day, orders came from Governor Yates to stop the steamboats C. E. Hillman and John D. Perry, said to be carrying "large quantities" of weapons and munitions south from St. Louis. The ships were stopped, boarded, and the military goods confiscated. Though the seizure was not ordered or authorized by the War Department, federal officials soon gave their approval for this act and issued orders that made Cairo a crucial point in attempting to choke off the flow of goods to the South.
Interested in learning more?
A detailed study of the history of the St. Louis arsenal and the capture of its weapons in 1861 can be found at http://www.civilwarstlouis.com/arsenal/index.htm.
For more on the holding of Cairo see William A. Pitkin, "When Cairo was saved for the Union". Journal of the Illinois State Historical Society 51 (1958): 284-305, in digital format at:
and the report by Illinois adjutant general Allen C. Fuller on pages 7-9 of
Missions for the president
Abraham Lincoln declared publicly a confidence that cool heads would prevail among Southern leaders, that a kind of silent majority would express its affection for the nation of the Founders, and that bloodshed would be avoided. At the same time he had to consider the fate of Fort Sumter, a U. S. Army post located in the harbor of Charleston, South Carolina. The continuing occupation of the fort, the U.S. flag flying above, was like a bone in the throat to secessionists and threatened to result in violence. Within weeks of his inauguration as president Lincoln sent two old Illinois friends to Charleston. One would sample opinion in the city at the heart of the secession movement. Another would attempt to meet with Confederate leaders in Charleston, as Lincoln "believed it possible to effect some accommodation by dealing directly with the most chivalrous among their leaders; at all events he thought it his duty to try... [The] embassy to Charleston was one of his experiments in that direction."
Lincoln chose for these missions Stephen A. Hurlbut of Belvidere (Boone County) and Ward H. Lamon of Danville (Vermilion County). A Republican politician and acquaintance since the mid-1840s, Hurlbut had been born into a family of New Englanders living in Charleston, and spent most of his first thirty years living in the city. His professional mentor, attorney James Pettigru, in 1861 was the most vocal (some would say only) unionist in South Carolina. The Virginia-born Lamon had worked with Lincoln during the 1850s on Illinois' Eighth Judicial Circuit. The burly attorney accompanied his friend on the inaugural journey to Washington, D.C., acting informally as bodyguard.
On arriving in Charleston on March 24, Hurlbut and Lamon parted to perform their separate missions. Hurlbut began visiting old acquaintances, playing the role of "a private person upon my last visit to my relatives." A few days of conversation with Charleston lawyers, merchants, and tradesmen brought a disappointing conclusion—no unionist sentiment existed. Hurlbut wrote to Lincoln that "I have no hesitation in reporting... that Separate Nationality is a fixed fact... that there is an unanimity of sentiment which is to my mind astonishing—that there is no attachment to the Union." In scanning the harbor, "I regret to say that no single vessel in port displayed American colours... the Flag of the Southern Confederacy and of the State of South Carolina were visible everywhere..." Even men who supported President Andrew Jackson against South Carolina's secession movement in 1832 "are now... ready to take arms if necessary for the Southern Confederacy."
Although everyone Hurlbut spoke with supported secession, they split over whether to actually begin a war. He wrote that power in South Carolina and the Deep South states "is now in the hands of Conservatives—of men who desire no war, seek no armed collision, but hope & expect peaceable separation." Others though, "desire to precipitate collision, inaugurate war & unite the Southern Confederacy by that means. These men dread the effect of time & trial upon their institutions... These are the men who demand an immediate attack upon the forts." Summing up, the outlook for sending supplies to and holding Fort Sumter was bleak. "I have no doubt that a ship known to contain only provisions for Sumpter would be stopped & refused admittance Even the moderate men who desire not to open fire, believe in the safer policy of time and Starvation." Moving on to offer Lincoln his opinion and advice, the Belvidere attorney declared, "If Sumpter is abandoned it is to a certain extent a concession of jurisdiction which cannot fail to have its effects at home and abroad," and thus, "At all hazards and under all circumstances ... any Fortress accessible by the Sea, over which we still have dominion, should be held & if war comes, let it come."
As Hurlbut worked to determine public opinion, Ward H. Lamon held conversations that misled both U. S. and Confederate officials about Lincoln’s thoughts regarding Fort Sumter. He informed South Carolina governor F. W. Pickens and U. S. Army major Robert Anderson that the fort would be abandoned by the United States. Confederate general Pierre G. T. Beauregard reported to his secretary of war on March 26 that "Mr. Lamon left here last night, saying that Major Anderson and command would soon be withdrawn from Fort Sumter in a satisfactory manner," but that the general continued to mount additional cannon, just in case. Anderson reported later in the week that "remarks made to me by Colonel Lamon ... have induced me ... to believe that orders would soon be issued for my abandoning this work." When informed that an attempt would be made to resupply Sumter by ship, the confused major responded to his superior in Washington, "Colonel Lamon's remark convinced me that the idea ... would not be carried out." Still, Anderson wrote, whatever the Lincoln administration decided, "We shall strive to do our duty."
An overview of the Hurlbut and Lamon missions and their background can be found in Michael Burlingame, Abraham Lincoln: A Life (2009), chapter 22. Hurlbut's March 27, 1861, report to Lincoln is in the Abraham Lincoln Papers held by the Library of Congress. Search via:
Ward H. Lamon recalls the mission in Recollections of Abraham Lincoln, 1847–1865, pp. 68–79, although most of the account dwells on his boldly standing up to threats of physical violence. Hurlbut's controversial wartime career is described in Jeffery N. Lash, A Politician Turned General: The Civil War Career of Stephen Augustus Hurlbut.
A president is elected. States secede. What next?
The election of a Republican as president of the United States quickly led southern states to consider breaking up the nation. Illinoisans held many different views of the crisis and what should be done, and they would continue to do so through the months and years that followed.
As southern states declared independence from the United States, many in Illinois thought that they should be allowed to leave. Some antislavery men did so with an attitude of "good riddance" to what they saw as a way of life that made a mockery of American ideals of freedom. Others saw secession as the only way that slaveholding states could react to a radical party taking power in Washington. A Democratic editor in Belleville wrote shortly after the election that the choice of a Republican president proved "that the North is hopelessly abolitionized," and that the question "to submit...or secede, is forced upon the South... Thus far, they have justice and right on their side." The Rockford Register disapproved of secession but worried over the idea of forcing states to remain in the Union: "If a separation must come, let it be a peaceful one."
Others thought secession to be disruptive of business, misguided, or downright illegal. Some farmers and others who shipped produce and other goods via New Orleans worried about a foreign power controlling the Mississippi River. However, most Illinois Democrats, though disappointed over the Republican election victory, thought that seceding before Abraham Lincoln was even inaugurated would be foolish. Democratic leader and Illinois Senator Stephen A. Douglas argued for the principle of majority rule—that the constitutional election of a president provided no excuse for disunion. Beyond that, he believed that no state could on its own decide to leave the Union. The Constitution, he reasoned, is a contract between all of the states, and a state can break from that contract only with the agreement of all the other parties.
Prairie State Republicans tended to be of one mind concerning secession. They had won an election and would not compromise their policies in order to quiet southern radicals. This view was expressed by W. H. Hanna of Bloomington, who wrote to Senator Lyman Trumbull: "I am in favor of 20 years of war rather than the loss of one inch of territory or the surrender of any principal that concedes the right of secession, which is the disruption of the government."
As the crisis deepened with the secession of more states, many hoped for some compromise. Senator Douglas, a strong nationalist who believed secession to be illegal, seems to have hoped for a cooling-off period. He feared that if war began it could end only with the complete, crushing defeat of one side or the other. Even if the Union was maintained by such a war, he thought that bitterness between the sections would be felt for years. Illinois Republicans continued to refuse serious consideration of compromise. Abraham Lincoln's inaugural speech, denying the right of states to leave the Union and promising to "hold, occupy, and possess" federal property throughout the United States, suited them perfectly.
The firing on U. S. troops and their flag at Fort Sumter on April 12, 1861, forced Americans to take sides after months of discussion about the wisdom or the legality of secession. The majority of Illinois residents saw the attack as an outrage that could not be justified. A few Democrats continued to strongly support a right of secession, and others remained unsure as to just what stand they should take, looking to Senator Douglas for an answer. He gave it to them in speeches in Springfield and Chicago, calling on all, regardless of party, to stand for the Union and the preservation of majority rule through the ballot box. Douglas died weeks later, with his last words telling his sons to "obey the laws and support the Constitution of the United States."
Fort Sumter brought Illinoisans together in support of Abraham Lincoln's effort to preserve the Union. As time passed, as costs in blood and treasure grew, and as African Americans became more visible players in the conflict, the united front of Spring 1861 broke. The state's residents again divided, this time over exactly how the nation should be preserved.
Read letters on the secession of the southern states, written by an American Revolutionary War veteran from Quincy, Illinois, and others, in the February 7, 1861, edition of the Quincy Daily Herald.
A dated but still worthwhile look at the many shades of opinion in Illinois during the secession and Fort Sumter crises is found in Arthur C. Cole, The Era of the Civil War, 1848-1870, The Sesquicentennial History of Illinois, volume 3 (Urbana and Chicago: University of Illinois Press, 1987), chap. 11.
Pearson's chi-square test
Pearson's chi-square test evaluates a null hypothesis that the relative frequencies of occurrence of observed events follow a specified frequency distribution. The events are assumed to be independent and have the same distribution, and the outcomes of each event must be mutually exclusive. A simple example is the hypothesis that an ordinary six-sided die is "fair", i.e., all six outcomes occur equally often. Pearson's chi-square is the original and most widely-used chi-square test.
Chi-square is calculated by finding the difference between each observed and theoretical frequency for each possible outcome, squaring them, dividing each by the theoretical frequency, and taking the sum of the results. The number of degrees of freedom is equal to the number of possible outcomes, minus 1:
$$\chi^2 = \sum_{i=1}^{n} \frac{(O_i - E_i)^2}{E_i}$$
where
- $O_i$ = an observed frequency;
- $E_i$ = an expected (theoretical) frequency, asserted by the null hypothesis;
- $n$ = the number of possible outcomes of each event.
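As a concrete illustration of the formula above, here is a minimal Python sketch; the die-roll counts and the helper name `pearson_chi_square` are invented for this example.

```python
# Minimal sketch of the Pearson statistic: sum of (O_i - E_i)^2 / E_i.
def pearson_chi_square(observed, expected):
    if len(observed) != len(expected):
        raise ValueError("observed and expected must have the same length")
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical example: a six-sided die rolled 60 times, null hypothesis "the die is fair".
observed = [5, 8, 9, 8, 10, 20]   # made-up counts for each face
expected = [60 / 6] * 6           # 10 expected per face under the null
print(pearson_chi_square(observed, expected))  # 13.4, with 6 - 1 = 5 degrees of freedom
```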
Pearson's chi-square is used to assess two types of comparison: tests of goodness of fit and tests of independence. A test of goodness of fit establishes whether or not an observed frequency distribution differs from a theoretical distribution. A test of independence assesses whether paired observations on two variables, expressed in a contingency table, are independent of each other – for example, whether people from different regions differ in the frequency with which they report that they support a political candidate.
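For the test of independence, a contingency-table example can be sketched as follows; it assumes SciPy is installed, and the two-region support counts are made up for illustration.

```python
# Sketch of a test of independence on a 2x2 contingency table.
from scipy.stats import chi2_contingency

# Rows: two regions; columns: supports / does not support a candidate (invented counts).
table = [[30, 70],
         [45, 55]]

# correction=True (the default) applies Yates' continuity correction for 2x2 tables.
stat, p_value, dof, expected = chi2_contingency(table)
print(stat, p_value, dof)
print(expected)  # expected counts under the null hypothesis of independence
```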
A chi-square probability of 0.05 or less is commonly interpreted by applied workers as justification for rejecting the null hypothesis that the row variable is unrelated (that is, only randomly related) to the column variable. Rejecting the null hypothesis lends support to the alternative hypothesis that the two variables are associated.
For example, to test the hypothesis that a random sample of 100 people has been drawn from a population in which men and women are equal in frequency, the observed number of men and women would be compared to the theoretical frequencies of 50 men and 50 women. If there were 45 men in the sample and 55 women, then
$$\chi^2 = \frac{(45 - 50)^2}{50} + \frac{(55 - 50)^2}{50} = 0.5 + 0.5 = 1.$$
If the null hypothesis is true (i.e., men and women are chosen with equal probability in the sample), the test statistic will be drawn from a chi-square distribution with one degree of freedom. Though one might expect two degrees of freedom (one each for the men and women), we must take into account that the total number of men and women is constrained (100), and thus there is only one degree of freedom (2 − 1). Alternatively, if the male count is known the female count is determined, and vice-versa.
Consultation of the chi-square distribution for 1 degree of freedom shows that the probability of observing this difference (or a more extreme difference than this) if men and women are equally numerous in the population is approximately 0.3. This probability is higher than conventional criteria for statistical significance, so normally we would not reject the null hypothesis that the number of men in the population is the same as the number of women.
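A quick numerical check of this example, assuming SciPy is available (the numbers are those of the example above):

```python
from scipy.stats import chisquare, chi2

stat, p_value = chisquare([45, 55], f_exp=[50, 50])
print(stat)                # 1.0
print(p_value)             # ~0.317, the "approximately 0.3" quoted above
print(chi2.sf(1.0, df=1))  # the same right-tail probability computed directly
```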
The approximation to the chi-square distribution breaks down if expected frequencies are too low. It will normally be acceptable so long as no more than 10% of the events have expected frequencies below 5. Where there is only 1 degree of freedom, the approximation is not reliable if expected frequencies are below 10. In this case, a better approximation can be had by reducing the absolute value of each difference between observed and expected frequencies by 0.5 before squaring; this is called Yates' correction for continuity.
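In symbols, a standard statement of the corrected statistic (written out here for concreteness, not quoted from the original) is
$$\chi^2_{\text{Yates}} = \sum_{i=1}^{n} \frac{\left(|O_i - E_i| - 0.5\right)^2}{E_i}.$$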
In cases where the expected value, E, is found to be small (indicating either a small underlying population probability, or a small number of observations), the normal approximation of the multinomial distribution can fail, and in such cases it is found to be more appropriate to use the G-test, a likelihood ratio-based test statistic. Where the total sample size is small, it is necessary to use an appropriate exact test, typically either the binomial test or (for contingency tables) Fisher's exact test; but note that this test assumes fixed and known marginal totals.
This chi-square approximation arises because, under the null hypothesis, the true distribution of the observed counts is multinomial. For large sample sizes, the central limit theorem says this distribution tends toward a certain multivariate normal distribution.
Two cells
In the special case where there are only two cells in the table, the expected values follow a binomial distribution,
$$O_1 \sim \operatorname{Bin}(n, p),$$
where
- p = probability, under the null hypothesis,
- n = number of observations in the sample.
In the above example the hypothesised probability of a male observation is 0.5, with 100 samples. Thus we expect to observe 50 males.
If n is sufficiently large, the above binomial distribution may be approximated by a Gaussian (normal) distribution, and thus the Pearson test statistic approximates a chi-squared distribution,
$$\operatorname{Bin}(n, p) \approx \mathrm{N}(np,\, np(1-p)).$$
Let $O_1$ be the number of observations from the sample that are in the first cell. The Pearson test statistic can be expressed as
$$\chi^2 = \frac{(O_1 - np)^2}{np} + \frac{(n - O_1 - n(1-p))^2}{n(1-p)},$$
which can in turn be expressed as
$$\chi^2 = \frac{(O_1 - np)^2}{np(1-p)}.$$
By the normal approximation to a binomial this is the square of one standard normal variate, and hence is distributed as chi-square with 1 degree of freedom. Note that the denominator is one standard deviation of the Gaussian approximation, so it can be written
$$\chi^2 = \left(\frac{O_1 - np}{\sqrt{np(1-p)}}\right)^2.$$
Consistent with the meaning of the chi-square distribution, we are measuring how probable the observed number of standard deviations away from the mean is under the Gaussian approximation (which is a good approximation for large n).
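A small numeric check of the algebra above, reusing the 45/55 example (nothing here comes from the original text):

```python
# Verify numerically that the two-term statistic, the single-term form,
# and the squared standardized count coincide.
n, p = 100, 0.5
o1 = 45
o2 = n - o1

two_term = (o1 - n * p) ** 2 / (n * p) + (o2 - n * (1 - p)) ** 2 / (n * (1 - p))
one_term = (o1 - n * p) ** 2 / (n * p * (1 - p))
z_squared = ((o1 - n * p) / (n * p * (1 - p)) ** 0.5) ** 2

print(two_term, one_term, z_squared)  # all three equal 1.0 here
```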
The chi-square distribution is then integrated to the right of the statistic value to obtain the probability of observing a result at least this extreme, given the model.
Many cells
Arguments similar to those above lead to the desired result, though the details are omitted here. Each cell (except the final one, whose value is completely determined by the others) is treated as an independent binomial variable; their contributions are summed, and each contributes one degree of freedom.
Advanced uses
A more complicated, but more widely used form of Pearson's chi-square test arises in the case where the null hypothesis of interest includes unknown parameters. For instance we may wish to test whether some data follows a normal distribution but without specifying a mean or variance. In this situation the unknown parameters need to be estimated by the data, typically by maximum likelihood estimation, and these estimates are then used to calculate the expected values in the Pearson statistic. It is commonly stated that the degrees of freedom for the chi-square distribution of the statistic are then k − 1 − r, where r is the number of unknown parameters. This result is valid when the original data was multinomial and hence the estimated parameters are efficient for minimizing the chi-square statistic. More generally however, when maximum likelihood estimation does not coincide with minimum chi-square estimation, the distribution will lie somewhere between a chi-square distribution with k − 1 − r and k − 1 degrees of freedom (See for instance Chernoff and Lehmann 1954).
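A rough sketch of the $k - 1 - r$ rule in practice, assuming NumPy and SciPy are available; the normal model, the binning scheme, and all of the numbers are invented for illustration.

```python
# Fit a normal distribution's two parameters from the data, bin the sample,
# and compare observed bin counts with expected counts under the fitted model.
import numpy as np
from scipy.stats import norm, chi2

rng = np.random.default_rng(0)
sample = rng.normal(loc=2.0, scale=1.5, size=500)

# Maximum-likelihood estimates of the unknown parameters (r = 2 of them).
mu_hat, sigma_hat = sample.mean(), sample.std()

# Bin the data into k = 8 bins with roughly equal counts.
edges = np.quantile(sample, np.linspace(0, 1, 9))
observed, _ = np.histogram(sample, bins=edges)

# Expected bin probabilities under the fitted normal, with the outermost
# bins extended to cover the whole real line.
cdf = norm.cdf(edges, loc=mu_hat, scale=sigma_hat)
cdf[0], cdf[-1] = 0.0, 1.0
expected = np.diff(cdf) * sample.size

stat = ((observed - expected) ** 2 / expected).sum()
dof = len(observed) - 1 - 2   # k - 1 - r
print(stat, chi2.sf(stat, df=dof))
```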
See also
- Statistics for Applications. MIT OpenCourseWare. Lecture 23. Retrieved 21 March 2007.
- Chernoff H, Lehmann E.L. The use of maximum likelihood estimates in tests for goodness-of-fit. The Annals of Mathematical Statistics 1954; 25:579-586.
- Sampling Distribution of the Sample Chi-Square Statistic — a Java applet showing the sampling distribution of the Pearson test statistic.
- Online Chi-Square Test for uniform distribution
- Statistic distribution tables including chi
- A tutorial on the chi-square test devised for Oxford University psychology students
History of trigonometry
Early study of triangles can be traced to the 2nd millennium BC, in Egyptian mathematics (Rhind Mathematical Papyrus) and Babylonian mathematics. Systematic study of trigonometric functions began in Hellenistic mathematics, reaching India as part of Hellenistic astronomy. In Indian astronomy, the study of trigonometric functions flowered in the Gupta period, especially due to Aryabhata (6th century). During the Middle Ages, the study of trigonometry continued in Islamic mathematics, whence it was adopted as a separate subject in the Latin West beginning in the Renaissance with Regiomontanus. The development of modern trigonometry shifted during the western Age of Enlightenment, beginning with 17th-century mathematics (Isaac Newton and James Stirling) and reaching its modern form with Leonhard Euler (1748).
The term "trigonometry" derives from the Greek "τριγωνομετρία" ("trigonometria"), meaning "triangle measuring", from "τρίγωνο" (triangle) + "μετρεῖν" (to measure).
Our modern word "sine" is derived from the Latin word sinus, which means "bay", "bosom" or "fold", translating Arabic jayb. The Arabic term is in origin a corruption of Sanskrit jīvā "chord". Sanskrit jīvā in learned usage was a synonym of jyā "chord", originally the term for "bow-string". Sanskrit jīvā was loaned into Arabic as jiba. This term was then transformed into the genuine Arabic word jayb, meaning "bosom, fold, bay", either by the Arabs or by a mistake of the European translators such as Robert of Chester (perhaps because the words were written without vowels), who translated jayb into Latin as sinus. Particularly Fibonacci's sinus rectus arcus proved influential in establishing the term sinus.
The words "minute" and "second" are derived from the Latin phrases partes minutae primae and partes minutae secundae. These roughly translate to "first small parts" and "second small parts".
Early trigonometry
The ancient Egyptians and Babylonians had known of theorems on the ratios of the sides of similar triangles for many centuries. However, as pre-Hellenic societies lacked the concept of an angle measure, they were limited to studying the sides of triangles instead.
The Babylonian astronomers kept detailed records on the rising and setting of stars, the motion of the planets, and the solar and lunar eclipses, all of which required familiarity with angular distances measured on the celestial sphere. Based on one interpretation of the Plimpton 322 cuneiform tablet (c. 1900 BC), some have even asserted that the ancient Babylonians had a table of secants. There is, however, much debate as to whether it is a table of Pythagorean triples, a solution of quadratic equations, or a trigonometric table.
The Egyptians, on the other hand, used a primitive form of trigonometry for building pyramids in the 2nd millennium BC. The Rhind Mathematical Papyrus, written by the Egyptian scribe Ahmes (c. 1680–1620 BC), contains the following problem related to trigonometry:
"If a pyramid is 250 cubits high and the side of its base 360 cubits long, what is its seked?"
Ahmes' solution to the problem is the ratio of half the side of the base of the pyramid to its height, or the run-to-rise ratio of its face. In other words, the quantity he found for the seked is the cotangent of the angle to the base of the pyramid and its face.
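A short worked reading of the problem, assuming the usual conversion of seven palms to the cubit (the arithmetic below is a modern reconstruction, not a quotation from the papyrus):
$$\text{seked} = \frac{\tfrac{1}{2}\times 360}{250} = \frac{180}{250} = 0.72 \ \text{cubits of run per cubit of rise} = 0.72 \times 7 = 5\tfrac{1}{25} \ \text{palms}.$$
In modern terms this ratio is the cotangent of the angle between the face and the base, so the face rises at about $\arctan(250/180) \approx 54^{\circ}$.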
Greek mathematics
Ancient Greek and Hellenistic mathematicians made use of the chord. Given a circle and an arc on the circle, the chord is the line that subtends the arc. A chord's perpendicular bisector passes through the center of the circle and bisects the angle. One half of the bisected chord is the sine of the bisected angle, that is,
$$\operatorname{crd}\theta = 2r\sin\frac{\theta}{2},$$
and consequently the sine function is also known as the "half-chord". Due to this relationship, a number of trigonometric identities and theorems that are known today were also known to Hellenistic mathematicians, but in their equivalent chord form.
Although there is no trigonometry in the works of Euclid and Archimedes, in the strict sense of the word, there are theorems presented in a geometric way (rather than a trigonometric way) that are equivalent to specific trigonometric laws or formulas. For instance, propositions twelve and thirteen of book two of the Elements are the laws of cosines for obtuse and acute angles, respectively. Theorems on the lengths of chords are applications of the law of sines. And Archimedes' theorem on broken chords is equivalent to formulas for sines of sums and differences of angles. To compensate for the lack of a table of chords, mathematicians of Aristarchus' time would sometimes use the statement that, in modern notation, sin α/sin β < α/β < tan α/tan β whenever 0° < β < α < 90°, now known as Aristarchus' inequality.
The first trigonometric table was apparently compiled by Hipparchus of Nicaea (180 – 125 BCE), who is now consequently known as "the father of trigonometry." Hipparchus was the first to tabulate the corresponding values of arc and chord for a series of angles.
Although it is not known when the systematic use of the 360° circle came into mathematics, it is known that the systematic introduction of the 360° circle came a little after Aristarchus of Samos composed On the Sizes and Distances of the Sun and Moon (ca. 260 BC), since he measured an angle in terms of a fraction of a quadrant. It seems that the systematic use of the 360° circle is largely due to Hipparchus and his table of chords. Hipparchus may have taken the idea of this division from Hypsicles who had earlier divided the day into 360 parts, a division of the day that may have been suggested by Babylonian astronomy. In ancient astronomy, the zodiac had been divided into twelve "signs" or thirty-six "decans". A seasonal cycle of roughly 360 days could have corresponded to the signs and decans of the zodiac by dividing each sign into thirty parts and each decan into ten parts. It is due to the Babylonian sexagesimal numeral system that each degree is divided into sixty minutes and each minute is divided into sixty seconds.
Menelaus of Alexandria (ca. 100 AD) wrote in three books his Sphaerica. In Book I, he established a basis for spherical triangles analogous to the Euclidean basis for plane triangles. He establishes a theorem that is without Euclidean analogue, that two spherical triangles are congruent if corresponding angles are equal, but he did not distinguish between congruent and symmetric spherical triangles. Another theorem that he establishes is that the sum of the angles of a spherical triangle is greater than 180°. Book II of Sphaerica applies spherical geometry to astronomy. And Book III contains the "theorem of Menelaus". He further gave his famous "rule of six quantities".
Later, Claudius Ptolemy (ca. 90 – ca. 168 AD) expanded upon Hipparchus' Chords in a Circle in his Almagest, or the Mathematical Syntaxis. The Almagest is primarily a work on astronomy, and astronomy relies on trigonometry. Ptolemy's table of chords gives the lengths of chords of a circle of diameter 120 as a function of the number of degrees n in the corresponding arc of the circle, for n ranging from 1/2 to 180 by increments of 1/2. The thirteen books of the Almagest are the most influential and significant trigonometric work of all antiquity. A theorem that was central to Ptolemy's calculation of chords was what is still known today as Ptolemy's theorem, that the sum of the products of the opposite sides of a cyclic quadrilateral is equal to the product of the diagonals. A special case of Ptolemy's theorem appeared as proposition 93 in Euclid's Data. Ptolemy's theorem leads to the equivalent of the four sum-and-difference formulas for sine and cosine that are today known as Ptolemy's formulas, although Ptolemy himself used chords instead of sine and cosine. Ptolemy further derived the equivalent of the half-angle formula
$$\sin^2\frac{x}{2} = \frac{1 - \cos x}{2}.$$
Ptolemy used these results to create his trigonometric tables, but whether these tables were derived from Hipparchus' work cannot be determined.
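To make this concrete, the following sketch (mine, not Ptolemy's procedure) tabulates a few chord values from the relation $\operatorname{crd}\theta = 2r\sin\tfrac{\theta}{2}$ with Ptolemy's diameter of 120:

```python
import math

# Chord of an angle theta (in degrees) in a circle of diameter 120.
def chord(theta_degrees, diameter=120):
    return diameter * math.sin(math.radians(theta_degrees) / 2)

for theta in (0.5, 1.0, 60.0, 90.0, 120.0):
    print(theta, round(chord(theta), 4))
# chord(60) = 60 exactly: the chord of 60 degrees equals the radius.
```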
Neither the tables of Hipparchus nor those of Ptolemy have survived to the present day, although descriptions by other ancient authors leave little doubt that they once existed.
Indian mathematics
The next significant developments of trigonometry were in India. Influential works from the 4th–5th century, known as the Siddhantas (of which there were five, the most complete survivor of which is the Surya Siddhanta) first defined the sine as the modern relationship between half an angle and half a chord, while also defining the cosine, versine, and inverse sine. Soon afterwards, another Indian mathematician and astronomer, Aryabhata (476–550 AD), collected and expanded upon the developments of the Siddhantas in an important work called the Aryabhatiya. The Siddhantas and the Aryabhatiya contain the earliest surviving tables of sine values and versine (1 − cosine) values, in 3.75° intervals from 0° to 90°, to an accuracy of 4 decimal places. They used the words jya for sine, kojya for cosine, utkrama-jya for versine, and otkram jya for inverse sine. The words jya and kojya eventually became sine and cosine respectively after a mistranslation described above.
In the 7th century, Bhaskara I produced a formula for calculating the sine of an acute angle without the use of a table. He also gave the following approximation formula for sin(x), which had a relative error of less than 1.9%:
$$\sin x \approx \frac{16x(\pi - x)}{5\pi^2 - 4x(\pi - x)}, \qquad 0 \le x \le \pi.$$
Later in the 7th century, Brahmagupta redeveloped the formula
$$1 - \sin^2 x = \cos^2 x = \sin^2\!\left(\frac{\pi}{2} - x\right).$$
Bhaskara II was the first to discover trigonometric results like
$$\sin(a + b) = \sin a \cos b + \cos a \sin b$$
$$\sin(a - b) = \sin a \cos b - \cos a \sin b.$$
Madhava (c. 1400) made early strides in the analysis of trigonometric functions and their infinite series expansions. He developed the concepts of the power series and Taylor series, and produced the power series expansions of sine, cosine, tangent, and arctangent. Using the Taylor series approximations of sine and cosine, he produced a sine table to 12 decimal places of accuracy and a cosine table to 9 decimal places of accuracy. He also gave power series for π and for the angle θ, radius, diameter, and circumference of a circle in terms of trigonometric functions. His works were expanded by his followers at the Kerala School up to the 16th century.
| No. | Series | Name | Western discoverers of the series and approximate dates of discovery |
|-----|--------|------|-----------------------------------------------------------------------|
| 1 | sin x = x − x³/3! + x⁵/5! − x⁷/7! + ... | Madhava's sine series | Isaac Newton (1670) and Wilhelm Leibniz (1676) |
| 2 | cos x = 1 − x²/2! + x⁴/4! − x⁶/6! + ... | Madhava's cosine series | Isaac Newton (1670) and Wilhelm Leibniz (1676) |
| 3 | tan⁻¹x = x − x³/3 + x⁵/5 − x⁷/7 + ... | Madhava's arctangent series | James Gregory (1671) and Wilhelm Leibniz (1676) |
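The series in row 1 of the table can be evaluated numerically; the following sketch (an illustration, not historical code) sums successive terms and compares the partial sum with the library sine.

```python
import math

def sine_series(x, terms=10):
    """Partial sum of the sine series x - x^3/3! + x^5/5! - ..."""
    total = 0.0
    term = x  # first term
    for k in range(terms):
        total += term
        # next term: multiply by -x^2 / ((2k+2)(2k+3))
        term *= -x * x / ((2 * k + 2) * (2 * k + 3))
    return total

x = math.pi / 3
print(sine_series(x, terms=5), math.sin(x))  # the two agree to many decimal places
```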
The Indian text the Yuktibhāṣā contains proof for the expansion of the sine and cosine functions and the derivation and proof of the power series for inverse tangent, discovered by Madhava. The Yuktibhāṣā also contains rules for finding the sines and the cosines of the sum and difference of two angles.
Islamic mathematics
The Indian works were later translated and expanded in the medieval Islamic world by Muslim mathematicians of mostly Persian and Arab descent, who enunciated a large number of theorems which freed the subject of trigonometry from dependence upon the complete quadrilateral, as was the case in Hellenistic mathematics due to the application of Menelaus' theorem. According to E. S. Kennedy, it was after this development in Islamic mathematics that "the first real trigonometry emerged, in the sense that only then did the object of study become the spherical or plane triangle, its sides and angles."
In addition to Indian works, Hellenistic methods dealing with spherical triangles were also known, particularly the method of Menelaus of Alexandria, who developed "Menelaus' theorem" to deal with spherical problems. However, E. S. Kennedy points out that while it was possible in pre-Islamic mathematics to compute the magnitudes of a spherical figure, in principle, by use of the table of chords and Menelaus' theorem, the application of the theorem to spherical problems was very difficult in practice. In order to observe holy days on the Islamic calendar in which timings were determined by phases of the moon, astronomers initially used Menelaus' method to calculate the place of the moon and stars, though this method proved to be clumsy and difficult. It involved setting up two intersecting right triangles; by applying Menelaus' theorem it was possible to solve one of the six sides, but only if the other five sides were known. To tell the time from the sun's altitude, for instance, repeated applications of Menelaus' theorem were required. For medieval Islamic astronomers, there was an obvious challenge to find a simpler trigonometric method.
In the early 9th century AD, Muhammad ibn Mūsā al-Khwārizmī produced accurate sine and cosine tables, and the first table of tangents. He was also a pioneer in spherical trigonometry. In 830 AD, Habash al-Hasib al-Marwazi produced the first table of cotangents. Muhammad ibn Jābir al-Harrānī al-Battānī (Albatenius) (853-929 AD) discovered the reciprocal functions of secant and cosecant, and produced the first table of cosecants for each degree from 1° to 90°.
By the 10th century AD, in the work of Abū al-Wafā' al-Būzjānī, Muslim mathematicians were using all six trigonometric functions. Abu al-Wafa had sine tables in 0.25° increments, to 8 decimal places of accuracy, and accurate tables of tangent values. He also developed the following trigonometric formula:
- $\sin 2x = 2 \sin x \cos x$ (a special case of Ptolemy's angle-addition formula; see above)
In his original text, Abū al-Wafā' states: "If we want that, we multiply the given sine by the cosine minutes, and the result is half the sine of the double". Abū al-Wafā also established the angle addition and difference identities presented with complete proofs:
$$\sin(a + b) = \sin a \cos b + \cos a \sin b$$
$$\sin(a - b) = \sin a \cos b - \cos a \sin b$$
For the second one, the text states: "We multiply the sine of each of the two arcs by the cosine of the other minutes. If we want the sine of the sum, we add the products, if we want the sine of the difference, we take their difference".
Al-Jayyani (989–1079) of al-Andalus wrote The book of unknown arcs of a sphere, which is considered "the first treatise on spherical trigonometry" in its modern form. It "contains formulae for right-handed triangles, the general law of sines, and the solution of a spherical triangle by means of the polar triangle." This treatise later had a "strong influence on European mathematics", and his "definition of ratios as numbers" and "method of solving a spherical triangle when all sides are unknown" are likely to have influenced Regiomontanus.
The method of triangulation was first developed by Muslim mathematicians, who applied it to practical uses such as surveying and Islamic geography, as described by Abu Rayhan Biruni in the early 11th century. Biruni himself introduced triangulation techniques to measure the size of the Earth and the distances between various places. In the late 11th century, Omar Khayyám (1048–1131) solved cubic equations using approximate numerical solutions found by interpolation in trigonometric tables. In the 13th century, Nasīr al-Dīn al-Tūsī was the first to treat trigonometry as a mathematical discipline independent from astronomy, and he developed spherical trigonometry into its present form. He listed the six distinct cases of a right-angled triangle in spherical trigonometry, and in his On the Sector Figure, he stated the law of sines for plane and spherical triangles, discovered the law of tangents for spherical triangles, and provided proofs for both these laws.
In the 15th century, Jamshīd al-Kāshī provided the first explicit statement of the law of cosines in a form suitable for triangulation. In France, the law of cosines is still referred to as the theorem of Al-Kashi. He also gave trigonometric tables of values of the sine function to four sexagesimal digits (equivalent to 8 decimal places) for each 1° of argument with differences to be added for each 1/60 of 1°. Ulugh Beg also gives accurate tables of sines and tangents correct to 8 decimal places around the same time.
Chinese mathematics
In China, Aryabhata's table of sines were translated into the Chinese mathematical book of the Kaiyuan Zhanjing, compiled in 718 AD during the Tang Dynasty. Although the Chinese excelled in other fields of mathematics such as solid geometry, binomial theorem, and complex algebraic formulas, early forms of trigonometry were not as widely appreciated as in the earlier Greek, Hellenistic, Indian and Islamic worlds. Instead, the early Chinese used an empirical substitute known as chong cha, while practical use of plane trigonometry in using the sine, the tangent, and the secant were known. However, this embryonic state of trigonometry in China slowly began to change and advance during the Song Dynasty (960–1279), where Chinese mathematicians began to express greater emphasis for the need of spherical trigonometry in calendrical science and astronomical calculations. The polymath Chinese scientist, mathematician and official Shen Kuo (1031–1095) used trigonometric functions to solve mathematical problems of chords and arcs. Victor J. Katz writes that in Shen's formula "technique of intersecting circles", he created an approximation of the arc s of a circle given the diameter d, sagitta v, and length c of the chord subtending the arc, the length of which he approximated as
$$s \approx c + \frac{2v^2}{d}.$$
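A small numeric check (a modern illustration, not Shen Kuo's own computation) of how the approximation compares with the exact arc length:

```python
import math

def exact_arc(d, v):
    """Exact arc length of a circular segment with diameter d and sagitta v."""
    r = d / 2
    half_angle = math.acos((r - v) / r)   # half of the central angle
    return 2 * r * half_angle

def shen_approx(d, v):
    c = 2 * math.sqrt(v * (d - v))        # chord length recovered from the sagitta
    return c + 2 * v * v / d

d, v = 10.0, 1.0
print(shen_approx(d, v), exact_arc(d, v))  # roughly 6.2 vs. about 6.44
```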
Sal Restivo writes that Shen's work in the lengths of arcs of circles provided the basis for spherical trigonometry developed in the 13th century by the mathematician and astronomer Guo Shoujing (1231–1316). As the historians L. Gauchet and Joseph Needham state, Guo Shoujing used spherical trigonometry in his calculations to improve the calendar system and Chinese astronomy. Along with a later 17th century Chinese illustration of Guo's mathematical proofs, Needham states that:
Guo used a quadrangular spherical pyramid, the basal quadrilateral of which consisted of one equatorial and one ecliptic arc, together with two meridian arcs, one of which passed through the summer solstice point...By such methods he was able to obtain the du lü (degrees of equator corresponding to degrees of ecliptic), the ji cha (values of chords for given ecliptic arcs), and the cha lü (difference between chords of arcs differing by 1 degree).
Despite the achievements of Shen and Guo's work in trigonometry, another substantial work in Chinese trigonometry would not be published again until 1607, with the dual publication of Euclid's Elements by Chinese official and astronomer Xu Guangqi (1562–1633) and the Italian Jesuit Matteo Ricci (1552–1610).
European mathematics
A simplified trigonometric table, the "toleta de marteloio", was used by sailors in the Mediterranean Sea during the 14th–15th centuries to calculate navigation courses. It is described by Ramon Llull of Majorca in 1295, and laid out in the 1436 atlas of Venetian captain Andrea Bianco.
Regiomontanus was perhaps the first mathematician in Europe to treat trigonometry as a distinct mathematical discipline, in his De triangulis omnimodis written in 1464, as well as in his later Tabulae directionum, which included the tangent function, then unnamed.
The Opus palatinum de triangulis of Georg Joachim Rheticus, a student of Copernicus, was probably the first in Europe to define trigonometric functions directly in terms of right triangles instead of circles, with tables for all six trigonometric functions; this work was finished by Rheticus' student Valentin Otho in 1596.
In the 18th century, Leonhard Euler's Introductio in analysin infinitorum (1748) was mostly responsible for establishing the analytic treatment of trigonometric functions in Europe, defining them as infinite series and presenting "Euler's formula" eix = cos x + i sin x. Euler used the near-modern abbreviations sin., cos., tang., cot., sec., and cosec. Prior to this, Roger Cotes had computed the derivative of sine in his Harmonia Mensurarum (1722).
Also in the 18th century, Brook Taylor defined the general Taylor series and gave the series expansions and approximations for all six trigonometric functions. The works of James Gregory in the 17th century and Colin Maclaurin in the 18th century were also very influential in the development of trigonometric series.
See also
- Greek mathematics
- History of mathematics
- Trigonometric functions
- Aryabhata's sine table
Citations and footnotes
- Maor, Eli (1998). Trigonometric Delights. Princeton University Press. p. 20. ISBN 0-691-09541-8
- Boyer (1991), page 252: "It was Robert of Chester's translation from the Arabic that resulted in our word "sine." The Hindus had given the name jiva to the half-chord in trigonometry, and the Arabs had taken this over as jiba. In the Arabic language there is also the word jaib meaning 'bay' or 'inlet.' When Robert of Chester came to translate the technical word jiba, he seems to have confused this with the word jaib (perhaps because vowels were omitted); hence, he used the word sinus, the Latin word for 'bay' or 'inlet.'"
- O'Connor (1996).
- Boyer (1991). "Greek Trigonometry and Mensuration". pp. 166–167. "It should be recalled that form the days of Hipparchus until modern times there were no such things as trigonometric ratios. The Greeks, and after them the Hindus and the Arabs, used trigonometric lines. These at first took the form, as we have seen, of chords in a circle, and it became incumbent upon Ptolemy to associate numerical values (or approximations) with the chords. [...] It is not unlikely that the 260-degree measure was carried over from astronomy, where the zodiac had been divided into twelve "signs" or 36 "decans." A cycle of the seasons of roughly 360 days could readily be made to correspond to the system of zodiacal signs and decans by subdividing each sign into thirty parts and each decan into ten parts. Our common system of angle measure may stem from this correspondence. Moreover since the Babylonian position system for fractions was so obviously superior to the Egyptians unit fractions and the Greek common fractions, it was natural for Ptolemy to subdivide his degrees into sixty partes minutae primae, each of these latter into sixty partes minutae secundae, and so on. It is from the Latin phrases that translators used in this connection that our words "minute" and "second" have been derived. It undoubtedly was the sexagesimal system that led Ptolemy to subdivide the diameter of his trigonometric circle into 120 parts; each of these he further subdivided into sixty minutes and each minute of length sixty seconds."
- Boyer (1991). "Greek Trigonometry and Mensuration". pp. 158–159. "Trigonometry, like other branches of mathematics, was not the work of any one man, or nation. Theorems on ratios of the sides of similar triangles had been known to, and used by, the ancient Egyptians and Babylonians. In view of the pre-Hellenic lack of the concept of angle measure, such a study might better be called "trilaterometry," or the measure of three sided polygons (trilaterals), than "trigonometry," the measure of parts of a triangle. With the Greeks we first find a systematic study of relationships between angles (or arcs) in a circle and the lengths of chords subtending these. Properties of chords, as measures of central and inscribed angles in circles, were familiar to the Greeks of Hippocrates' day, and it is likely that Eudoxus had used ratios and angle measures in determining the size of the earth and the relative distances of the sun and the moon. In the works of Euclid there is no trigonometry in the strict sense of the word, but there are theorems equivalent to specific trigonometric laws or formulas. Propositions II.12 and 13 of the Elements, for example, are the laws of cosines for obtuse and acute angles respectively, stated in geometric rather than trigonometric language and proved by a method similar to that used by Euclid in connection with the Pythagorean theorem. Theorems on the lengths of chords are essentially applications of the modern law of sines. We have seen that Archimedes' theorem on the broken chord can readily be translated into trigonometric language analogous to formulas for sines of sums and differences of angles."
- Joseph (2000b, pp.383–84).
- Boyer (1991). "Greek Trigonometry and Mensuration". p. 163. "In Book I of this treatise Menelaus establishes a basis for spherical triangles analogous to that of Euclid I for plane triangles. Included is a theorem without Euclidean analogue – that two spherical triangles are congruent if corresponding angles are equal (Menelaus did not distinguish between congruent and symmetric spherical triangles); and the theorem A + B + C > 180° is established. The second book of the Sphaerica describes the application of spherical geometry to astronomical phenomena and is of little mathematical interest. Book III, the last, contains the well known "theorem of Menelaus" as part of what is essentially spherical trigonometry in the typical Greek form – a geometry or trigonometry of chords in a circle. In the circle in Fig. 10.4 we should write that chord AB is twice the sine of half the central angle AOB (multiplied by the radius of the circle). Menelaus and his Greek successors instead referred to AB simply as the chord corresponding to the arc AB. If BOB' is a diameter of the circle, then chord A' is twice the cosine of half the angle AOB (multiplied by the radius of the circle)." More than one of
- Boyer (1991). "Greek Trigonometry and Mensuration". p. 159. "Instead we have an Aristarchan treatise, perhaps composed earlier (ca. 260 BC), On the Sizes and Distances of the Sun and Moon, which assumes a geocentric universe. In this work Aristarchus made the observation that when the moon is just half-full, the angle between the lines of sight to the sun and the moon is less than a right angle by one thirtieth of a quadrant. (The systematic introduction of the 360° circle came a little later. In trigonometric language of today this would mean that the ratio of the distance of the moon to that of the sun (the ration ME to SE in Fig. 10.1) is sin 3°. Trigonometric tables not having been developed yet, Aristarchus fell back upon a well-known geometric theorem of the time which now would be expressed in the inequalities sin α/ sin β < α/β < tan α/ tan β, for 0° < β < α < 90°.)" More than one of
- Boyer (1991). "Greek Trigonometry and Mensuration". p. 162. "For some two and a half centuries, from Hippocrates to Eratosthenes, Greek mathematicians had studied relationships between lines and circles and had applied these in a variety of astronomical problems, but no systematic trigonometry had resulted. Then, presumably during the second half of the 2nd century BC, the first trigonometric table apparently was compiled by the astronomer Hipparchus of Nicaea (ca. 180–ca. 125 BC), who thus earned the right to be known as "the father of trigonometry." Aristarchus had known that in a given circle the ratio of arc to chord decreases as the arc decreases from 180° to 0°, tending toward a limit of 1. However, it appears that not until Hipparchus undertook the task had anyone tabulated corresponding values of arc and chord for a whole series of angles." More than one of
- Boyer (1991). "Greek Trigonometry and Mensuration". p. 162. "It is not known just when the systematic use of the 360° circle came into mathematics, but it seems to be due largely to Hipparchus in connection with his table of chords. It is possible that he took over from Hypsicles, who earlier had divided the day into parts, a subdivision that may have been suggested by Babylonian astronomy." More than one of
- Needham, Volume 3, 108.
- Toomer, G. J. (1998), Ptolemy's Almagest, Princeton University Press, ISBN 0-691-00260-6
- Boyer (1991). "Greek Trigonometry and Mensuration". pp. 164–166. "The theorem of Menelaus played a fundamental role in spherical trigonometry and astronomy, but by far the most influential and significant trigonometric work of all antiquity was composed by Ptolemy of Alexandria about half a century after Menelaus. [...] Of the life of the author we are as little informed as we are of that of the author of the Elements. We do not know when or where Euclid and Ptolemy were born. We know that Ptolemy made observations at Alexandria from AD. 127 to 151 and, therefore, assume that he was born at the end of the 1st century. Suidas, a writer who lived in the 10th century, reported that Ptolemy was alive under Marcus Aurelius (emperor from AD 161 to 180).
Ptolemy's Almagest is presumed to be heavily indebted for its methods to the Chords in a Circle of Hipparchus, but the extent of the indebtedness cannot be reliably assessed. It is clear that in astronomy Ptolemy made use of the catalog of star positions bequeathed by Hipparchus, but whether or not Ptolemy's trigonometric tables were derived in large part from his distinguished predecessor cannot be determined. [...] Central to the calculation of Ptolemy's chords was a geometric proposition still known as "Ptolemy's theorem": [...] that is, the sum of the products of the opposite sides of a cyclic quadrilateral is equal to the product of the diagonals. [...] A special case of Ptolemy's theorem had appeared in Euclid's Data (Proposition 93): [...] Ptolemy's theorem, therefore, leads to the result sin(α − β) = sin α cos β − cos α sin β. Similar reasoning leads to the formula [...] These four sum-and-difference formulas consequently are often known today as Ptolemy's formulas.
It was the formula for sine of the difference – or, more accurately, chord of the difference – that Ptolemy found especially useful in building up his tables. Another formula that served him effectively was the equivalent of our half-angle formula."
- Boyer, pp. 158–168.
- Boyer (1991), p. 208.
- Boyer (1991), p. 209.
- Boyer (1991), p. 210
- Boyer (1991), p. 215
- Joseph (2000a, pp.285–86).
- O'Connor and Robertson (2000).
- Pearce (2002).
- Charles Henry Edwards (1994). The historical development of the calculus. Springer Study Edition Series (3 ed.). Springer. p. 205. ISBN 978-0-387-94313-8.
- Kennedy, E. S. (1969). "The History of Trigonometry". 31st Yearbook (National Council of Teachers of Mathematics, Washington DC) (cf. Haq, Syed Nomanul. The Indian and Persian background. pp. 60–3, in Seyyed Hossein Nasr, Oliver Leaman (1996). History of Islamic Philosophy. Routledge. pp. 52–70. ISBN 0-415-13159-6)
- O'Connor, John J.; Robertson, Edmund F., "Menelaus of Alexandria", MacTutor History of Mathematics archive, University of St Andrews. "Book 3 deals with spherical trigonometry and includes Menelaus's theorem."
- Kennedy, E. S. (1969). "The History of Trigonometry". 31st Yearbook (National Council of Teachers of Mathematics, Washington DC): 337 (cf. Haq, Syed Nomanul. The Indian and Persian background. p. 68, in Seyyed Hossein Nasr, Oliver Leaman (1996). History of Islamic Philosophy. Routledge. pp. 52–70. ISBN 0-415-13159-6)
- Gingerich, Owen (April 1986). "Islamic astronomy". Scientific American 254 (10): 74. Retrieved 2008-05-18
- Jacques Sesiano, "Islamic mathematics", p. 157, in Selin, Helaine; D'Ambrosio, Ubiratan, eds. (2000). Mathematics Across Cultures: The History of Non-western Mathematics. Springer. ISBN 1-4020-0260-2
- "trigonometry". Encyclopædia Britannica. Retrieved 2008-07-21.
- Boyer (1991) p. 238.
- Moussa, Ali (2011). "Mathematical Methods in Abū al-Wafāʾ's Almagest and the Qibla Determinations". Arabic Sciences and Philosophy (Cambridge University Press) 21 (1): 1–56. doi:10.1017/S095742391000007X.
- William Charles Brice, 'An Historical atlas of Islam', p.413
- O'Connor, John J.; Robertson, Edmund F., "Abu Abd Allah Muhammad ibn Muadh Al-Jayyani", MacTutor History of Mathematics archive, University of St Andrews.
- Donald Routledge Hill (1996), "Engineering", in Roshdi Rashed, Encyclopedia of the History of Arabic Science, Vol. 3, p. 751–795 .
- O'Connor, John J.; Robertson, Edmund F., "Abu Arrayhan Muhammad ibn Ahmad al-Biruni", MacTutor History of Mathematics archive, University of St Andrews.
- Berggren, J. Lennart (2007). "Mathematics in Medieval Islam". The Mathematics of Egypt, Mesopotamia, China, India, and Islam: A Sourcebook. Princeton University Press. p. 518. ISBN 978-0-691-11485-9.
- Needham, Volume 3, 109.
- Needham, Volume 3, 108–109.
- Katz, 308.
- Restivo, 32.
- Gauchet, 151.
- Needham, Volume 3, 109–110.
- Needham, Volume 3, 110.
- Simonson, Shai. "The Mathematics of Levi ben Gershon, the Ralbag" (PDF). Retrieved 2009-06-22.
- Boyer, p. 274
- "Why the sine has a simple derivative", in Historical Notes for Calculus Teachers by V. Frederick Rickey
- Boyer, Carl B. (1991). A History of Mathematics (Second ed.). John Wiley & Sons, Inc. ISBN 0-471-54397-7.
- Gauchet, L. (1917). Note Sur La Trigonométrie Sphérique de Kouo Cheou-King.
- Joseph, George G. (2000). The Crest of the Peacock. Princeton, NJ: Princeton University Press. ISBN 0-691-00659-8.
- Joseph, George G. (2000). The Crest of the Peacock: Non-European Roots of Mathematics (2nd ed.). London: Penguin Books. ISBN 0-691-00659-8.
- Katz, Victor J. (2007). The Mathematics of Egypt, Mesopotamia, China, India, and Islam: A Sourcebook. Princeton: Princeton University Press. ISBN 0-691-11485-4.
- Needham, Joseph (1986). Science and Civilization in China: Volume 3, Mathematics and the Sciences of the Heavens and the Earth. Taipei: Caves Books, Ltd.
- O'Connor, J.J., and E.F. Robertson, "Trigonometric functions", MacTutor History of Mathematics Archive. (1996).
- O'Connor, J.J., and E.F. Robertson, "Madhava of Sangamagramma", MacTutor History of Mathematics Archive. (2000).
- Pearce, Ian G., "Madhava of Sangamagramma", MacTutor History of Mathematics Archive. (2002).
- Restivo, Sal. (1992). Mathematics in Society and History: Sociological Inquiries. Dordrecht: Kluwer Academic Publishers. ISBN 1-4020-0039-1.
Temporal range: Middle Triassic–Present, 231.4–0 Ma (range includes birds)
A collection of fossil dinosaur skeletons. Clockwise from top left: Microraptor gui (a winged theropod), Apatosaurus louisae (a giant sauropod), Stegosaurus stenops (a plated stegosaur), Triceratops horridus (a horned ceratopsian), Edmontosaurus regalis (a duck-billed ornithopod), Gastonia burgei (an armored ankylosaur).
Dinosaurs are a diverse group of animals of the clade Dinosauria. They first appeared during the Triassic period, approximately 230 million years ago, and were the dominant terrestrial vertebrates for 135 million years, from the beginning of the Jurassic (about 201 million years ago) until the end of the Cretaceous (66 million years ago), when the Cretaceous–Paleogene extinction event led to the extinction of most dinosaur groups at the close of the Mesozoic Era. The fossil record indicates that birds evolved from theropod dinosaurs during the Jurassic Period and, consequently, they are considered a subgroup of dinosaurs by many paleontologists. Some birds survived the extinction event that occurred 66 million years ago, and their descendants continue the dinosaur lineage to the present day.
Dinosaurs are a varied group of animals from taxonomic, morphological and ecological standpoints. Birds, at over 9,000 living species, are the most diverse group of vertebrates besides perciform fish. Using fossil evidence, paleontologists have identified over 500 distinct genera and more than 1,000 different species of non-avian dinosaurs. Dinosaurs are represented on every continent by both extant species and fossil remains. Some are herbivorous, others carnivorous. While dinosaurs were ancestrally bipedal, many extinct groups included quadrupedal species, and some were able to shift between these stances. Elaborate display structures such as horns or crests are common to all dinosaur groups, and some extinct groups developed skeletal modifications such as bony armor and spines. Evidence suggests that egg laying and nest building are additional traits shared by all dinosaurs. While modern birds are generally small due to the constraints of flight, many prehistoric dinosaurs were large-bodied—the largest sauropod dinosaurs may have achieved lengths of 58 meters (190 feet) and heights of 9.25 meters (30 feet 4 inches). Still, the idea that non-avian dinosaurs were uniformly gigantic is a misconception based on preservation bias, as large, sturdy bones are more likely to last until they are fossilized. Many dinosaurs were quite small: Xixianykus, for example, was only about 50 cm (20 in) long.
Although the word dinosaur means "terrible lizard", the name is somewhat misleading, as dinosaurs are not lizards. Instead, they represent a separate group of reptiles which, like many extinct forms, did not exhibit characteristics traditionally seen as reptilian, such as a sprawling limb posture or ectothermy. Additionally, many prehistoric animals, including mosasaurs, ichthyosaurs, pterosaurs, plesiosaurs, and Dimetrodon, are popularly conceived of as dinosaurs, but are not classified as dinosaurs. Through the first half of the 20th century, before birds were recognized to be dinosaurs, most of the scientific community believed dinosaurs to have been sluggish and cold-blooded. Most research conducted since the 1970s, however, has indicated that all dinosaurs were active animals with elevated metabolisms and numerous adaptations for social interaction.
Since the first dinosaur fossils were recognized in the early 19th century, mounted fossil dinosaur skeletons have been major attractions at museums around the world, and dinosaurs have become an enduring part of world culture. The large sizes of some groups, as well as their seemingly monstrous and fantastic nature, have ensured dinosaurs' regular appearance in best-selling books and films, such as Jurassic Park. Persistent public enthusiasm for the animals has resulted in significant funding for dinosaur science, and new discoveries are regularly covered by the media.
The taxon Dinosauria was formally named in 1842 by paleontologist Sir Richard Owen, who used it to refer to the "distinct tribe or sub-order of Saurian Reptiles" that were then being recognized in England and around the world. The term is derived from the Greek words δεινός (deinos, meaning "terrible," "potent," or "fearfully great") and σαῦρος (sauros, meaning "lizard" or "reptile"). Though the taxonomic name has often been interpreted as a reference to dinosaurs' teeth, claws, and other fearsome characteristics, Owen intended it merely to evoke their size and majesty.
Under phylogenetic taxonomy, dinosaurs are usually defined as the group consisting of "Triceratops, Neornithes [modern birds], their most recent common ancestor (MRCA), and all descendants". It has also been suggested that Dinosauria be defined with respect to the MRCA of Megalosaurus and Iguanodon, because these were two of the three genera cited by Richard Owen when he recognized the Dinosauria. Both definitions result in the same set of animals being defined as dinosaurs: "Dinosauria = Ornithischia + Saurischia", encompassing theropods (mostly bipedal carnivores and birds), ankylosaurians (armored herbivorous quadrupeds), stegosaurians (plated herbivorous quadrupeds), ceratopsians (herbivorous quadrupeds with horns and frills), ornithopods (bipedal or quadrupedal herbivores including "duck-bills"), and, perhaps, sauropodomorphs (mostly large herbivorous quadrupeds with long necks and tails).
Many paleontologists note that the point at which sauropodomorphs and theropods diverged may omit sauropodomorphs from the definition for both saurischians and dinosaurs. To avoid instability, Dinosauria can be more conservatively defined with respect to four anchoring nodes: Triceratops horridus, Saltasaurus loricatus, and Passer domesticus, their MRCA, and all descendants. This "safer" definition can be expressed as "Dinosauria = Ornithischia + Sauropodomorpha + Theropoda".
There is near universal consensus among paleontologists that birds are the descendants of theropod dinosaurs. In traditional taxonomy, birds were considered a separate "class" that had evolved from dinosaurs. However, a majority of modern paleontologists reject the traditional style of classification in favor of phylogenetic nomenclature, which requires that all descendants of a single common ancestor must be included in a group for that group to be natural. Birds are thus considered by most modern scientists to be dinosaurs and dinosaurs are, therefore, not extinct. Birds are classified by most paleontologists as belonging to the subgroup Maniraptora, which are coelurosaurs, which are theropods, which are saurischians, which are dinosaurs.
Using one of the above definitions, dinosaurs can be generally described as archosaurs with limbs held erect beneath the body. Many prehistoric animal groups are popularly conceived of as dinosaurs, such as ichthyosaurs, mosasaurs, plesiosaurs, pterosaurs, and Dimetrodon, but are not classified scientifically as dinosaurs, and none had the erect limb posture characteristic of true dinosaurs. Dinosaurs were the dominant terrestrial vertebrates of the Mesozoic, especially the Jurassic and Cretaceous periods. Other groups of animals were restricted in size and niches; mammals, for example, rarely exceeded the size of a cat, and were generally rodent-sized carnivores of small prey.
Dinosaurs have always been an extremely varied group of animals; according to a 2006 study, over 500 non-avialan dinosaur genera have been identified with certainty so far, and the total number of genera preserved in the fossil record has been estimated at around 1850, nearly 75% of which remain to be discovered. An earlier study predicted that about 3400 dinosaur genera existed, including many which would not have been preserved in the fossil record. By September 17, 2008, 1047 different species of dinosaurs had been named. Some are herbivorous, others carnivorous, including seed-eaters, fish-eaters, insectivores, and omnivores. While dinosaurs were ancestrally bipedal (as are all modern birds), some prehistoric species were quadrupeds, and others, such as Ammosaurus and Iguanodon, could walk just as easily on two or four legs. Cranial modifications like horns and crests are common dinosaurian traits, and some extinct species had bony armor. Although known for large size, many Mesozoic dinosaurs were human-sized or smaller, and modern birds are generally small in size. Dinosaurs today inhabit every continent, and fossils show that they had achieved global distribution by at least the early Jurassic period. Modern birds inhabit most available habitats, from terrestrial to marine, and there is evidence that some non-avialan dinosaurs (such as Microraptor) could fly or at least glide, and others, such as spinosaurids, had semi-aquatic habits.
Distinguishing anatomical features
While recent discoveries have made it more difficult to present a universally agreed-upon list of dinosaurs' distinguishing features, nearly all dinosaurs discovered so far share certain modifications to the ancestral archosaurian skeleton. Although some later groups of dinosaurs featured further modified versions of these traits, they are considered typical across Dinosauria; the earliest dinosaurs had them and passed them on to all their descendants. Such common features across a taxonomic group are called synapomorphies.
A detailed assessment of archosaur interrelations by S. Nesbitt confirmed or found the following 12 unambiguous synapomorphies, some previously known:
- in the skull, a supratemporal fossa (excavation) is present in front of the supratemporal fenestra
- epipophyses present in anterior neck vertebrae (except atlas and axis)
- apex of deltopectoral crest (a projection on which the deltopectoral muscles attach) located at or more than 30% down the length of the humerus (upper arm bone)
- radius shorter than 80% of humerus length
- fourth trochanter (projection where the caudofemoralis muscle attaches) on the femur (thigh bone) is a sharp flange
- fourth trochanter asymmetrical, with distal margin forming a steeper angle to the shaft
- on the astragalus and calcaneum the proximal articular facet for fibula occupies less than 30% of the transverse width of the element
- exocciptials (bones at the back of the skull) do not meet along the midline on the floor of the endocranial cavity
- proximal articular surfaces of the ischium with the ilium and the pubis separated by a large concave surface
- cnemial crest on the tibia (shinbone) arcs anterolaterally
- distinct proximodistally oriented ridge present on the posterior face of the distal end of the tibia
- concave articular surface for the fibula of the calcaneum (the upper surface of the calcaneum, where it touches the fibula, has a hollow profile)
Nesbitt found a number of further potential synapomorphies, and discounted a number of synapomorphies previously suggested. Some of these are also present in silesaurids, which Nesbitt recovered as a sister group to Dinosauria, including a large anterior trochanter, metatarsals II and IV of subequal length, reduced contact between ischium and pubis, the presence of a cnemial crest on the tibia and of an ascending process on the astragalus, and many others.
A variety of other skeletal features are shared by dinosaurs. However, because they are either common to other groups of archosaurs or were not present in all early dinosaurs, these features are not considered to be synapomorphies. For example, as diapsids, dinosaurs ancestrally had two pairs of temporal fenestrae (openings in the skull behind the eyes), and as members of the diapsid group Archosauria, had additional openings in the snout and lower jaw. Additionally, several characteristics once thought to be synapomorphies are now known to have appeared before dinosaurs, or were absent in the earliest dinosaurs and independently evolved by different dinosaur groups. These include an elongated scapula, or shoulder blade; a sacrum composed of three or more fused vertebrae (three are found in some other archosaurs, but only two are found in Herrerasaurus); and a perforate acetabulum, or hip socket, with a hole at the center of its inside surface (closed in Saturnalia, for example). Another difficulty of determining distinctly dinosaurian features is that early dinosaurs and other archosaurs from the Late Triassic are often poorly known and were similar in many ways; these animals have sometimes been misidentified in the literature.
Dinosaurs stand erect in a manner similar to most modern mammals, but distinct from most other reptiles, whose limbs sprawl out to either side. This posture is due to the development of a laterally facing recess in the pelvis (usually an open socket) and a corresponding inwardly facing distinct head on the femur. Their erect posture enabled early dinosaurs to breathe easily while moving, which likely permitted stamina and activity levels that surpassed those of "sprawling" reptiles. Erect limbs probably also helped support the evolution of large size by reducing bending stresses on limbs. Some non-dinosaurian archosaurs, including rauisuchians, also had erect limbs but achieved this by a "pillar erect" configuration of the hip joint, where instead of having a projection from the femur insert on a socket on the hip, the upper pelvic bone was rotated to form an overhanging shelf.
Origins and early evolution
Dinosaurs diverged from their archosaur ancestors approximately 230 million years ago during the Middle to Late Triassic period, roughly 20 million years after the Permian–Triassic extinction event wiped out an estimated 95% of all life on Earth. Radiometric dating of the rock formation that contained fossils from the early dinosaur genus Eoraptor establishes its presence in the fossil record at this time. Paleontologists think that Eoraptor resembles the common ancestor of all dinosaurs; if this is true, its traits suggest that the first dinosaurs were small, bipedal predators. The discovery of primitive, dinosaur-like ornithodirans such as Marasuchus and Lagerpeton in Argentinian Middle Triassic strata supports this view; analysis of recovered fossils suggests that these animals were indeed small, bipedal predators. Dinosaurs may have appeared as early as 243 million years ago, as evidenced by remains of the genus Nyasasaurus from that period, though known fossils of these animals are too fragmentary to tell if they are dinosaurs or very close dinosaurian relatives.
When dinosaurs appeared, terrestrial habitats were occupied by various types of archosaurs and therapsids, such as aetosaurs, cynodonts, dicynodonts, ornithosuchids, rauisuchians, and rhynchosaurs. Most of these other animals became extinct in the Triassic, in one of two events. First, at about the boundary between the Carnian and Norian faunal stages (about 215 million years ago), dicynodonts and a variety of basal archosauromorphs, including the prolacertiforms and rhynchosaurs, became extinct. This was followed by the Triassic–Jurassic extinction event (about 200 million years ago), that saw the end of most of the other groups of early archosaurs, like aetosaurs, ornithosuchids, phytosaurs, and rauisuchians. These losses left behind a land fauna of crocodylomorphs, dinosaurs, mammals, pterosaurians, and turtles. The first few lines of early dinosaurs diversified through the Carnian and Norian stages of the Triassic, most likely by occupying the niches of the groups that became extinct.
Evolution and paleobiogeography
Dinosaur evolution after the Triassic follows changes in vegetation and the location of continents. In the Late Triassic and Early Jurassic, the continents were connected as the single landmass Pangaea, and there was a worldwide dinosaur fauna mostly composed of coelophysoid carnivores and early sauropodomorph herbivores. Gymnosperm plants (particularly conifers), a potential food source, radiated in the Late Triassic. Early sauropodomorphs did not have sophisticated mechanisms for processing food in the mouth, and so must have employed other means of breaking down food farther along the digestive tract. The general homogeneity of dinosaurian faunas continued into the Middle and Late Jurassic, where most localities had predators consisting of ceratosaurians, spinosauroids, and carnosaurians, and herbivores consisting of stegosaurian ornithischians and large sauropods. Examples of this include the Morrison Formation of North America and Tendaguru Beds of Tanzania. Dinosaurs in China show some differences, with specialized sinraptorid theropods and unusual, long-necked sauropods like Mamenchisaurus. Ankylosaurians and ornithopods were also becoming more common, but prosauropods had become extinct. Conifers and pteridophytes were the most common plants. Sauropods, like the earlier prosauropods, were not oral processors, but ornithischians were evolving various means of dealing with food in the mouth, including potential cheek-like organs to keep food in the mouth, and jaw motions to grind food. Another notable evolutionary event of the Jurassic was the appearance of true birds, descended from maniraptoran coelurosaurians.
By the Early Cretaceous and the ongoing breakup of Pangaea, dinosaurs were becoming strongly differentiated by landmass. The earliest part of this time saw the spread of ankylosaurians, iguanodontians, and brachiosaurids through Europe, North America, and northern Africa. These were later supplemented or replaced in Africa by large spinosaurid and carcharodontosaurid theropods, and rebbachisaurid and titanosaurian sauropods, also found in South America. In Asia, maniraptoran coelurosaurians like dromaeosaurids, troodontids, and oviraptorosaurians became the common theropods, and ankylosaurids and early ceratopsians like Psittacosaurus became important herbivores. Meanwhile, Australia was home to a fauna of basal ankylosaurians, hypsilophodonts, and iguanodontians. The stegosaurians appear to have gone extinct at some point in the late Early Cretaceous or early Late Cretaceous. A major change in the Early Cretaceous, which would be amplified in the Late Cretaceous, was the evolution of flowering plants. At the same time, several groups of dinosaurian herbivores evolved more sophisticated ways to orally process food. Ceratopsians developed a method of slicing with teeth stacked on each other in batteries, and iguanodontians refined a method of grinding with tooth batteries, taken to its extreme in hadrosaurids. Some sauropods also evolved tooth batteries, best exemplified by the rebbachisaurid Nigersaurus.
There were three general dinosaur faunas in the Late Cretaceous. In the northern continents of North America and Asia, the major theropods were tyrannosaurids and various types of smaller maniraptoran theropods, with a predominantly ornithischian herbivore assemblage of hadrosaurids, ceratopsians, ankylosaurids, and pachycephalosaurians. In the southern continents that had made up the now-splitting Gondwana, abelisaurids were the common theropods, and titanosaurian sauropods the common herbivores. Finally, in Europe, dromaeosaurids, rhabdodontid iguanodontians, nodosaurid ankylosaurians, and titanosaurian sauropods were prevalent. Flowering plants were greatly radiating, with the first grasses appearing by the end of the Cretaceous. Grinding hadrosaurids and shearing ceratopsians became extremely diverse across North America and Asia. Theropods were also radiating as herbivores or omnivores, with therizinosaurians and ornithomimosaurians becoming common.
The Cretaceous–Paleogene extinction event, which occurred approximately 66 million years ago at the end of the Cretaceous period, caused the extinction of all dinosaur groups except for the neornithine birds. Some other diapsid groups, such as crocodilians, sebecosuchians, turtles, lizards, snakes, sphenodontians, and choristoderans, also survived the event.
The surviving lineages of neornithine birds, including the ancestors of modern ratites, ducks and chickens, and a variety of waterbirds, diversified rapidly at the beginning of the Paleogene period, entering ecological niches left vacant by the extinction of Mesozoic dinosaur groups such as the arboreal enantiornithines, aquatic hesperornithines, and even the larger terrestrial theropods (in the form of Gastornis, mihirungs, and "terror birds"). However, mammals were also rapidly diversifying during this time, and out-competed the neornithines for dominance of most terrestrial niches.
Dinosaurs are archosaurs, like modern crocodilians. Within the archosaur group, dinosaurs are differentiated most noticeably by their gait. Dinosaur legs extend directly beneath the body, whereas the legs of lizards and crocodilians sprawl out to either side.
Collectively, dinosaurs as a clade are divided into two primary branches, Saurischia and Ornithischia. Saurischia includes those taxa sharing a more recent common ancestor with birds than with Ornithischia, while Ornithischia includes all taxa sharing a more recent common ancestor with Triceratops than with Saurischia. Anatomically, these two groups can be distinguished most noticeably by their pelvic structure. Early saurischians—"lizard-hipped", from the Greek sauros (σαῦρος) meaning "lizard" and ischion (ἰσχίον) meaning "hip joint"—retained the hip structure of their ancestors, with a pubis bone directed cranially, or forward. This basic form was modified by rotating the pubis backward to varying degrees in several groups (Herrerasaurus, therizinosauroids, dromaeosaurids, and birds). Saurischia includes the theropods (exclusively bipedal and with a wide variety of diets) and sauropodomorphs (long-necked herbivores which include advanced, quadrupedal groups).
By contrast, ornithischians—"bird-hipped", from the Greek ornitheios (ὀρνίθειος) meaning "of a bird" and ischion (ἰσχίον) meaning "hip joint"—had a pelvis that superficially resembled a bird's pelvis: the pubis bone was oriented caudally (rear-pointing). Unlike birds, the ornithischian pubis also usually had an additional forward-pointing process. Ornithischia includes a variety of species which were primarily herbivores. (NB: the terms "lizard hip" and "bird hip" are misnomers – birds evolved from dinosaurs with "lizard hips".)
The following is a simplified classification of dinosaur groups based on their evolutionary relationships, and organized based on the list of Mesozoic dinosaur species provided by Holtz (2008). A more detailed version can be found at Dinosaur classification. The cross (†) is used to signify groups with no living members.
- Saurischia ("lizard-hipped"; includes Theropoda and Sauropodomorpha)
- †Herrerasauria (early bipedal carnivores)
- †Coelophysoidea (small, early theropods; includes Coelophysis and close relatives)
- †Dilophosauridae (early crested and carnivorous theropods)
- †Ceratosauria (generally elaborately horned, the dominant southern carnivores of the Cretaceous)
- Tetanurae ("stiff tails"; includes most theropods)
- †Megalosauroidea (early group of large carnivores including the semi-aquatic spinosaurids)
- †Carnosauria (Allosaurus and close relatives, like Carcharodontosaurus)
- Coelurosauria (feathered theropods, with a range of body sizes and niches)
- †Compsognathidae (common early coelurosaurs with reduced forelimbs)
- †Tyrannosauridae (Tyrannosaurus and close relatives; had reduced forelimbs)
- †Ornithomimosauria ("ostrich-mimics"; mostly toothless; carnivores to possible herbivores)
- †Alvarezsauroidea (small insectivores with reduced forelimbs each bearing one enlarged claw)
- Maniraptora ("hand snatchers"; had long, slender arms and fingers)
- †Therizinosauria (bipedal herbivores with large hand claws and small heads)
- †Oviraptorosauria (mostly toothless; their diet and lifestyle are uncertain)
- †Archaeopterygidae (small, winged theropods or primitive birds)
- †Deinonychosauria (small- to medium-sized; bird-like, with a distinctive toe claw)
- Avialae (modern birds and extinct relatives)
- †Scansoriopterygidae (small primitive avialans with long third fingers)
- †Omnivoropterygidae (large, early short-tailed avialans)
- †Confuciusornithidae (small toothless avialans)
- †Enantiornithes (primitive tree-dwelling, flying avialans)
- Euornithes (advanced flying birds)
- †Sauropodomorpha (herbivores with small heads, long necks, long tails)
- †Guaibasauridae (small, primitive, omnivorous sauropodomorphs)
- †Plateosauridae (primitive, strictly bipedal "prosauropods")
- †Riojasauridae (small, primitive sauropodomorphs)
- †Massospondylidae (small, primitive sauropodomorphs)
- †Sauropoda (very large and heavy, usually over 15 meters (49 feet) long; quadrupedal)
- †Cetiosauridae ("whale reptiles")
- †Turiasauria (European group of Jurassic and Cretaceous sauropods)
- †Neosauropoda ("new sauropods")
- †Diplodocoidea (skulls and tails elongated; teeth typically narrow and pencil-like)
- †Macronaria (boxy skulls; spoon- or pencil-shaped teeth)
- †Ornithischia ("bird-hipped"; diverse bipedal and quadrupedal herbivores)
- †Heterodontosauridae (small basal ornithopod herbivores/omnivores with prominent canine-like teeth)
- †Thyreophora (armored dinosaurs; mostly quadrupeds)
- †Neornithischia ("new ornithischians")
- †Ornithopoda (various sizes; bipeds and quadrupeds; evolved a method of chewing using skull flexibility and numerous teeth)
- †Marginocephalia (characterized by a cranial growth)
Knowledge about dinosaurs is derived from a variety of fossil and non-fossil records, including fossilized bones, feces, trackways, gastroliths, feathers, impressions of skin, internal organs and soft tissues. Many fields of study contribute to our understanding of dinosaurs, including physics (especially biomechanics), chemistry, biology, and the earth sciences (of which paleontology is a sub-discipline). Two topics of particular interest and study have been dinosaur size and behavior.
Current evidence suggests that dinosaur average size varied through the Triassic, early Jurassic, late Jurassic and Cretaceous periods. Predatory theropod dinosaurs, which occupied most terrestrial carnivore niches during the Mesozoic, most often fall into the 100 to 1000 kilogram (220 to 2200 lb) category when sorted by estimated weight into categories based on order of magnitude, whereas recent predatory carnivoran mammals peak in the 10 to 100 kilogram (22 to 220 lb) category. The mode of Mesozoic dinosaur body masses is between one and ten metric tonnes. This contrasts sharply with the size of Cenozoic mammals, estimated by the National Museum of Natural History as about 2 to 5 kilograms (5 to 10 lb).
The sauropods were the largest and heaviest dinosaurs. For much of the dinosaur era, the smallest sauropods were larger than anything else in their habitat, and the largest were an order of magnitude more massive than anything else that has since walked the Earth. Giant prehistoric mammals such as the Paraceratherium (the largest land mammal ever) were dwarfed by the giant sauropods, and only modern whales approach or surpass them in size. There are several proposed advantages for the large size of sauropods, including protection from predation, reduction of energy use, and longevity, but it may be that the most important advantage was dietary. Large animals are more efficient at digestion than small animals, because food spends more time in their digestive systems. This also permits them to subsist on food with lower nutritive value than smaller animals. Sauropod remains are mostly found in rock formations interpreted as dry or seasonally dry, and the ability to eat large quantities of low-nutrient browse would have been advantageous in such environments.
Largest and smallest
Scientists will probably never be certain of the largest and smallest dinosaurs to have ever existed. This is because only a tiny percentage of animals ever fossilize, and most of these remain buried in the earth. Few of the specimens that are recovered are complete skeletons, and impressions of skin and other soft tissues are rare. Rebuilding a complete skeleton by comparing the size and morphology of bones to those of similar, better-known species is an inexact art, and reconstructing the muscles and other organs of the living animal is, at best, a process of educated guesswork.
The tallest and heaviest dinosaur known from good skeletons is Giraffatitan brancai (previously classified as a species of Brachiosaurus). Its remains were discovered in Tanzania between 1907 and 1912. Bones from several similar-sized individuals were incorporated into the skeleton now mounted and on display at the Museum für Naturkunde Berlin; this mount is 12 meters (39 ft) tall and 22.5 meters (74 ft) long, and would have belonged to an animal that weighed between 30,000 and 60,000 kilograms (70,000 and 130,000 lb). The longest complete dinosaur is the 27-meter (89 ft) long Diplodocus, which was discovered in Wyoming in the United States and displayed in Pittsburgh's Carnegie Natural History Museum in 1907.
There were larger dinosaurs, but knowledge of them is based entirely on a small number of fragmentary fossils. Most of the largest herbivorous specimens on record were discovered in the 1970s or later, and include the massive Argentinosaurus, which may have weighed 80,000 to 100,000 kilograms (90 to 110 short tons); some of the longest were the 33.5-meter (110 ft) long Diplodocus hallorum (formerly Seismosaurus) and the 33-meter (108 ft) long Supersaurus; and the tallest, the 18-meter (59 ft) tall Sauroposeidon, could have reached a sixth-floor window. The heaviest and longest of them all may have been Amphicoelias fragillimus, known only from a now lost partial vertebral neural arch described in 1878. Extrapolating from the illustration of this bone, the animal may have been 58 meters (190 ft) long and weighed over 120,000 kg (260,000 lb). The largest known carnivorous dinosaur was Spinosaurus, reaching a length of 16 to 18 meters (52 to 60 ft) and weighing in at 8,150 kg (18,000 lb). Other large meat-eaters included Giganotosaurus, Carcharodontosaurus and Tyrannosaurus.
Not including birds (Avialae), the smallest known dinosaurs were about the size of pigeons. Not surprisingly, the smallest non-avialan dinosaurs were those theropods most closely related to birds. Anchiornis huxleyi, for example, had a total skeletal length of under 35 centimeters (1.1 ft). A. huxleyi is currently the smallest non-avialan dinosaur described from an adult specimen, with an estimated weight of 110 grams. The smallest herbivorous non-avialan dinosaurs included Microceratus and Wannanosaurus, at about 60 cm (2 ft) long each.
Many modern birds are highly social, often found living in flocks. There is general agreement that some behaviors which are common in birds, as well as in crocodiles (birds' closest living relatives), were also common among extinct dinosaur groups. Interpretations of behavior in fossil species are generally based on the pose of skeletons and their habitat, computer simulations of their biomechanics, and comparisons with modern animals in similar ecological niches.
The first potential evidence for herding or flocking as a widespread behavior common to many dinosaur groups in addition to birds was the 1878 discovery of 31 Iguanodon bernissartensis, ornithischians which were then thought to have perished together in Bernissart, Belgium, after they fell into a deep, flooded sinkhole and drowned. Other mass-death sites have subsequently been discovered. Those, along with multiple trackways, suggest that gregarious behavior was common in many early dinosaur species. Trackways of hundreds or even thousands of herbivores indicate that duck-bills (hadrosaurids) may have moved in great herds, like the American Bison or the African Springbok. Sauropod tracks document that these animals traveled in groups composed of several different species, at least in Oxfordshire, England, although there is no evidence for specific herd structures. Congregating into herds may have evolved for defense, for migratory purposes, or to provide protection for young. There is evidence that many types of slow-growing dinosaurs, including various theropods, sauropods, ankylosaurians, ornithopods, and ceratopsians, formed aggregations of immature individuals. One example is a site in Inner Mongolia that has yielded the remains of over 20 Sinornithomimus, from one to seven years old. This assemblage is interpreted as a social group that was trapped in mud. The interpretation of dinosaurs as gregarious has also extended to depicting carnivorous theropods as pack hunters working together to bring down large prey. However, this lifestyle is uncommon among modern birds, crocodiles, and other reptiles, and the taphonomic evidence suggesting mammal-like pack hunting in such theropods as Deinonychus and Allosaurus can also be interpreted as the results of fatal disputes between feeding animals, as is seen in many modern diapsid predators.
The crests and frills of some dinosaurs, like the marginocephalians, theropods and lambeosaurines, may have been too fragile to be used for active defense, and so they were likely used for sexual or aggressive displays, though little is known about dinosaur mating and territorialism. Head wounds from bites suggest that theropods, at least, engaged in active aggressive confrontations.
From a behavioral standpoint, one of the most valuable dinosaur fossils was discovered in the Gobi Desert in 1971. It included a Velociraptor attacking a Protoceratops, providing evidence that dinosaurs did indeed attack each other. Additional evidence for attacking live prey is the partially healed tail of an Edmontosaurus, a hadrosaurid dinosaur; the tail is damaged in such a way that shows the animal was bitten by a tyrannosaur but survived. Cannibalism amongst some species of dinosaurs was confirmed by tooth marks found in Madagascar in 2003, involving the theropod Majungasaurus.
Comparisons between the scleral rings of dinosaurs and modern birds and reptiles have been used to infer daily activity patterns of dinosaurs. Although it has been suggested that most dinosaurs were active during the day, these comparisons have shown that small predatory dinosaurs such as dromaeosaurids, Juravenator, and Megapnosaurus were likely nocturnal. Large and medium-sized herbivorous and omnivorous dinosaurs such as ceratopsians, sauropodomorphs, hadrosaurids, and ornithomimosaurs may have been cathemeral, active during short intervals throughout the day, although the small ornithischian Agilisaurus was inferred to be diurnal.
Based on current fossil evidence from dinosaurs such as Oryctodromeus, some ornithischian species seem to have led a partially fossorial (burrowing) lifestyle. Many modern birds are arboreal (tree climbing), and this was also true of many Mesozoic birds, especially the enantiornithines. While some early bird-like species may have already been arboreal as well (including dromaeosaurids such as Microraptor) most non-avialan dinosaurs seem to have relied on land-based locomotion. A good understanding of how dinosaurs moved on the ground is key to models of dinosaur behavior; the science of biomechanics, in particular, has provided significant insight in this area. For example, studies of the forces exerted by muscles and gravity on dinosaurs' skeletal structure have investigated how fast dinosaurs could run, whether diplodocids could create sonic booms via whip-like tail snapping, and whether sauropods could float.
Modern birds are well known for communicating using primarily visual and auditory signals, and the wide diversity of visual display structures among fossil dinosaur groups suggests that visual communication has always been important to dinosaur biology. However, the evolution of dinosaur vocalization is less certain. In 2008, paleontologist Phil Senter examined the evidence for vocalization in Mesozoic animal life, including dinosaurs. Senter found that, contrary to popular depictions of roaring dinosaurs in motion pictures, it is likely that most Mesozoic dinosaurs were not capable of creating any vocalizations (though the hollow crests of the lambeosaurines could have functioned as resonance chambers used for a wide range of vocalizations). To draw this conclusion, Senter studied the distribution of vocal organs in modern reptiles and birds. He found that vocal cords in the larynx probably evolved multiple times among reptiles, including crocodilians, which are able to produce guttural roars. Birds, on the other hand, lack a larynx. Instead, bird calls are produced by the syrinx, a vocal organ found only in birds, and which is not related to the larynx, meaning it evolved independently from the vocal organs in reptiles. The syrinx depends on the air sac system in birds to function; specifically, it requires the presence of a clavicular air sac near the wishbone or collar bone. This air sac leaves distinctive marks or openings on the bones, including a distinct opening in the upper arm bone (humerus). While extensive air sac systems are a unique characteristic of saurischian dinosaurs, the clavicular air sac necessary to vocalize does not appear in the fossil record until the enantiornithines (one exception, Aerosteon, probably evolved its clavicular air sac independently of birds for reasons other than vocalization).
The most primitive dinosaurs with evidence of a vocalizing syrinx are the enantiornithine birds. Any bird-line archosaurs more primitive than this probably did not make vocal calls. Rather, several lines of evidence suggest that early dinosaurs used primarily visual communication, in the form of distinctive-looking (and possibly brightly colored) horns, frills, crests, sails and feathers. This is similar to some modern reptile groups such as lizards, in which many forms are largely silent (though like dinosaurs they possess well-developed senses of hearing) but use complex coloration and display behaviors to communicate.
In addition, dinosaurs may have used other methods of producing sound for communication. Other animals, including other reptiles, use a wide variety of non-vocal sound communication, including hissing, jaw grinding or clapping, use of the environment (such as splashing), and wing beating (possible in winged maniraptoran dinosaurs).
All dinosaurs lay amniotic eggs with hard shells made mostly of calcium carbonate. Eggs are usually laid in a nest. Most species create somewhat elaborate nests, which can be cups, domes, plates, beds, scrapes, mounds, or burrows. Some species of modern bird have no nests; the cliff-nesting Common Guillemot lays its eggs on bare rock, and male Emperor Penguins keep eggs between their body and feet. Primitive birds and many non-avialan dinosaurs often lay eggs in communal nests, with males primarily incubating the eggs. While modern birds have only one functional oviduct and lay one egg at a time, more primitive birds and dinosaurs had two oviducts, like crocodiles. Some non-avialan dinosaurs, such as Troodon, exhibited iterative laying, where the adult might lay a pair of eggs every one or two days, and then ensured simultaneous hatching by delaying brooding until all eggs were laid.
When laying eggs, females grow a special type of bone between the hard outer bone and the marrow of their limbs. This medullary bone, which is rich in calcium, is used to make eggshells. A discovery of features in a Tyrannosaurus rex skeleton provided evidence of medullary bone in extinct dinosaurs and, for the first time, allowed paleontologists to establish the sex of a fossil dinosaur specimen. Further research has found medullary bone in the carnosaur Allosaurus and the ornithopod Tenontosaurus. Because the line of dinosaurs that includes Allosaurus and Tyrannosaurus diverged from the line that led to Tenontosaurus very early in the evolution of dinosaurs, this suggests that the production of medullary tissue is a general characteristic of all dinosaurs.
Another widespread trait among modern birds is parental care for young after hatching. Jack Horner's 1978 discovery of a Maiasaura ("good mother lizard") nesting ground in Montana demonstrated that parental care continued long after hatching among ornithopods, suggesting this behavior might also have been common to all dinosaurs. There is evidence that other non-theropod dinosaurs, like Patagonian titanosaurian sauropods (1997 discovery), also nested in large groups. A specimen of the Mongolian oviraptorid Citipati osmolskae was discovered in a chicken-like brooding position in 1993, which indicates that it had begun using an insulating layer of feathers to keep the eggs warm. Parental care being a trait common to all dinosaurs is supported by other finds. For example, the fossilized remains of a grouping of Psittacosaurus have been found, consisting of one adult and 34 juveniles; in this case, the large number of juveniles may be due to communal nesting. Additionally, a dinosaur embryo (pertaining to the prosauropod Massospondylus) was found without teeth, indicating that some parental care was required to feed the young dinosaurs. Trackways have also confirmed parental behavior among ornithopods from the Isle of Skye in northwestern Scotland. Nests and eggs have been found for most major groups of dinosaurs, and it appears likely that all dinosaurs cared for their young to some extent either before or shortly after hatching.
Like other reptiles, dinosaurs are primarily uricotelic: their kidneys extract nitrogenous wastes from the bloodstream and excrete them as uric acid, rather than urea or ammonia, via the ureters into the intestine. In most living species, uric acid is excreted along with feces as a semisolid waste. However, at least some modern birds (such as hummingbirds) can be facultatively ammonotelic, excreting most of their nitrogenous wastes as ammonia. They also excrete creatine, rather than creatinine as mammals do. This material, as well as the output of the intestines, emerges from the cloaca. In addition, many species regurgitate pellets, and fossil pellets that may have come from dinosaurs are known from as long ago as the Cretaceous period.
Because both modern crocodilians and birds have four-chambered hearts (albeit modified in crocodilians), it is likely that this is a trait shared by all archosaurs, including all dinosaurs. While all modern birds have high metabolisms and are "warm blooded" (endothermic), a vigorous debate has been ongoing since the 1960s regarding how far back in the dinosaur lineage this trait extends. Originally, scientists broadly disagreed as to whether non-avian dinosaurs or even early birds were capable of regulating their body temperatures at all. More recently, endothermy for all dinosaurs has become the consensus view, and debate has focused on the mechanisms of temperature regulation.
After non-avian dinosaurs were discovered, paleontologists first posited that they were ectothermic. This supposed "cold-bloodedness" was used to imply that the ancient dinosaurs were relatively slow, sluggish organisms, even though many modern reptiles are fast and light-footed despite relying on external sources of heat to regulate their body temperature. The idea of dinosaurs as ectothermic and sluggish remained a prevalent view until Robert T. "Bob" Bakker, an early proponent of dinosaur endothermy, published an influential paper on the topic in 1968.
Modern evidence indicates that even non-avian dinosaurs and birds thrived in cooler temperate climates, and that at least some early species must have regulated their body temperature by internal biological means (aided by the animals' bulk in large species and feathers or other body coverings in smaller species). Evidence of endothermy in Mesozoic dinosaurs includes the discovery of polar dinosaurs in Australia and Antarctica (where they would have experienced a cold, dark six-month winter), and analysis of blood-vessel structures within fossil bones that are typical of endotherms. Scientific debate continues regarding the specific ways in which dinosaur temperature regulation evolved.
In the saurischian dinosaurs, higher metabolisms were supported by the evolution of the avian respiratory system, characterized by an extensive system of air sacs that extended the lungs and invaded many of the bones in the skeleton, making them hollow.
Early avian-style respiratory systems with air sacs may have been capable of sustaining higher activity levels than mammals of similar size and build could sustain. In addition to providing a very efficient supply of oxygen, the rapid airflow would have been an effective cooling mechanism, which is essential for animals that are active but too large to get rid of all the excess heat through their skin.
Origin of birds
The possibility that dinosaurs were the ancestors of birds was first suggested in 1868 by Thomas Henry Huxley. After the work of Gerhard Heilmann in the early 20th century, the theory of birds as dinosaur descendants was abandoned in favor of the idea that birds descended from generalized thecodonts, with the key piece of evidence being the supposed lack of clavicles in dinosaurs. However, as later discoveries showed, clavicles (or a single fused wishbone, derived from separate clavicles) were not actually absent; they had been found as early as 1924 in Oviraptor, but misidentified as an interclavicle. In the 1970s, John Ostrom revived the dinosaur–bird theory, which gained momentum in the following decades with the advent of cladistic analysis and a great increase in discoveries of small theropods and early birds. Of particular note have been the fossils of the Yixian Formation, where a variety of theropods and early birds have been found, often with feathers of some type. Birds share over a hundred distinct anatomical features with theropod dinosaurs, which are now generally accepted to have been their closest ancient relatives. They are most closely allied with maniraptoran coelurosaurs. A minority of scientists, most notably Alan Feduccia and Larry Martin, have proposed other evolutionary paths, including revised versions of Heilmann's basal archosaur proposal, or that maniraptoran theropods are the ancestors of birds but are not themselves dinosaurs, only convergent with them.
Archaeopteryx was the first fossil found which revealed a potential connection between dinosaurs and birds. It is considered a transitional fossil, in that it displays features of both groups. Brought to light just two years after Darwin's seminal The Origin of Species, its discovery spurred the nascent debate between proponents of evolutionary biology and creationism. This early bird is so dinosaur-like that, without a clear impression of feathers in the surrounding rock, at least one specimen was mistaken for Compsognathus.
Since the 1990s, a number of additional feathered dinosaurs have been found, providing even stronger evidence of the close relationship between dinosaurs and modern birds. Most of these specimens were unearthed in the lagerstätte of the Yixian Formation, Liaoning, northeastern China, which was part of an island continent during the Cretaceous. Though feathers have been found in only a few locations, it is possible that non-avian dinosaurs elsewhere in the world were also feathered. The lack of widespread fossil evidence for feathered non-avian dinosaurs may be because delicate features like skin and feathers are not often preserved by fossilization and thus are absent from the fossil record. To this point, protofeathers (thin, filament-like structures) are known from dinosaurs at the base of Coelurosauria, such as compsognathids like Sinosauropteryx and tyrannosauroids (Dilong), but barbed feathers are known only among the coelurosaur subgroup Maniraptora, which includes oviraptorosaurs, troodontids, dromaeosaurids, and birds. The description of feathered dinosaurs has not been without controversy; perhaps the most vocal critics have been Alan Feduccia and Theagarten Lingham-Soliar, who have proposed that protofeathers are the result of the decomposition of collagenous fiber that underlaid the dinosaurs' integument, and that maniraptoran dinosaurs with barbed feathers were not actually dinosaurs, but convergent with dinosaurs. However, their views have for the most part not been accepted by other researchers, to the point that the question of the scientific nature of Feduccia's proposals has been raised.
Because feathers are often associated with birds, feathered dinosaurs are often touted as the missing link between birds and dinosaurs. However, the multiple skeletal features also shared by the two groups represent another important line of evidence for paleontologists. Areas of the skeleton with important similarities include the neck, pubis, wrist (semi-lunate carpal), arm and pectoral girdle, furcula (wishbone), and breast bone. Comparison of bird and dinosaur skeletons through cladistic analysis strengthens the case for the link.
Large meat-eating dinosaurs had a complex system of air sacs similar to those found in modern birds, according to an investigation which was led by Patrick O'Connor of Ohio University. The lungs of theropod dinosaurs (carnivores that walked on two legs and had bird-like feet) likely pumped air into hollow sacs in their skeletons, as is the case in birds. "What was once formally considered unique to birds was present in some form in the ancestors of birds", O'Connor said. In a 2008 paper published in the online journal PLoS ONE, scientists described Aerosteon riocoloradensis, the skeleton of which supplies the strongest evidence to date of a dinosaur with a bird-like breathing system. CT-scanning of Aerosteon's fossil bones revealed evidence for the existence of air sacs within the animal's body cavity.
Fossils of the troodonts Mei and Sinornithoides demonstrate that some dinosaurs slept with their heads tucked under their arms. This behavior, which may have helped to keep the head warm, is also characteristic of modern birds. Several deinonychosaur and oviraptorosaur specimens have also been found preserved on top of their nests, likely brooding in a bird-like manner. The ratio between egg volume and adult body mass among these dinosaurs suggests that the eggs were primarily brooded by the male, and that the young were highly precocial, similar to many modern ground-dwelling birds.
Some dinosaurs are known to have used gizzard stones like modern birds. These stones are swallowed by animals to aid digestion and break down food and hard fibers once they enter the stomach. When found in association with fossils, gizzard stones are called gastroliths.
Extinction of major groups
The discovery that birds are a type of dinosaur showed that dinosaurs in general are not, in fact, extinct as is commonly stated. However, all non-avian dinosaurs as well as many groups of birds did suddenly become extinct approximately 66 million years ago. Many other groups of animals also became extinct at this time, including ammonites (nautilus-like mollusks), mosasaurs, plesiosaurs, pterosaurs, and many groups of mammals. This mass extinction is known as the Cretaceous–Paleogene extinction event. The nature of the event that caused this mass extinction has been extensively studied since the 1970s; at present, several related theories are supported by paleontologists. Though the consensus is that an impact event was the primary cause of dinosaur extinction, some scientists cite other possible causes, or support the idea that a confluence of several factors was responsible for the sudden disappearance of dinosaurs from the fossil record.
At the peak of the Mesozoic, there were no polar ice caps, and sea levels are estimated to have been from 100 to 250 meters (300 to 800 ft) higher than they are today. The planet's temperature was also much more uniform, with only 25 °C (45 °F) separating average polar temperatures from those at the equator. On average, atmospheric temperatures were also much higher; the poles, for example, were 50 °C (90 °F) warmer than today.
The atmosphere's composition during the Mesozoic is a matter for debate. While some academics argue that oxygen levels were much higher than today, others argue that biological adaptations seen in birds and dinosaurs indicate that respiratory systems evolved beyond what would be necessary if oxygen levels were high. By the late Cretaceous, the environment was changing dramatically. Volcanic activity was decreasing, which led to a cooling trend as levels of atmospheric carbon dioxide dropped. Oxygen levels in the atmosphere also started to fluctuate and would ultimately fall considerably. Some scientists hypothesize that climate change, combined with lower oxygen levels, might have led directly to the demise of many species. If the dinosaurs had respiratory systems similar to those commonly found in modern birds, it may have been particularly difficult for them to cope with reduced respiratory efficiency, given the enormous oxygen demands of their very large bodies.
The asteroid collision theory, which was brought to wide attention in 1980 by Walter Alvarez and colleagues, links the extinction event at the end of the Cretaceous period to a bolide impact approximately 66 million years ago. Alvarez et al. proposed that a sudden increase in iridium levels, recorded around the world in rock strata of that age, was direct evidence of the impact. The bulk of the evidence now suggests that a bolide 5 to 15 kilometers (3 to 9 mi) wide hit in the vicinity of the Yucatán Peninsula (in southeastern Mexico), creating the approximately 180 km (110 mi) wide Chicxulub Crater and triggering the mass extinction. Scientists are not certain whether dinosaurs were thriving or declining before the impact event. Some scientists propose that the meteorite caused a long and unnatural drop in Earth's atmospheric temperature, while others claim that it would have instead created an unusual heat wave. The consensus among scientists who support this theory is that the impact caused extinctions both directly (by heat from the meteorite impact) and indirectly (via worldwide cooling brought about when matter ejected from the impact crater reflected thermal radiation from the sun). Although the speed of extinction cannot be deduced from the fossil record alone, various models suggest that the extinction was extremely rapid, unfolding over hours rather than years.
In September 2007, U.S. researchers led by William Bottke of the Southwest Research Institute in Boulder, Colorado, together with Czech scientists, used computer simulations to identify the probable source of the Chicxulub impact. They calculated a 90% probability that an asteroid named Baptistina, approximately 160 km (99 mi) in diameter and orbiting in the asteroid belt between Mars and Jupiter, was struck about 160 million years ago by a smaller unnamed asteroid about 55 km (35 mi) in diameter. The impact shattered Baptistina, creating a cluster of fragments that still exists today as the Baptistina family. Calculations indicate that some of the fragments were sent hurtling into Earth-crossing orbits, one of which was the 10 km (6.2 mi) wide meteorite that struck Mexico's Yucatán Peninsula at the end of the Cretaceous, creating the Chicxulub Crater. In 2011, new data from the Wide-field Infrared Survey Explorer revised the date of the collision that created the Baptistina family to about 80 million years ago. This makes it highly improbable that an asteroid from this family created the Chicxulub Crater, because the processes of orbital resonance and collision that deliver such a fragment to Earth typically take many tens of millions of years.
A similar but more controversial explanation proposes that "passages of the [hypothetical] solar companion star Nemesis through the Oort comet cloud would trigger comet showers." One or more of these comets then collided with the Earth at approximately the same time, causing the worldwide extinction. As with the impact of a single asteroid, the end result of this comet bombardment would have been a sudden drop in global temperatures, followed by a protracted cool period.
Before 2000, arguments that the Deccan Traps flood basalts caused the extinction were usually linked to the view that the extinction was gradual, as the flood basalt events were thought to have started around 68 million years ago and lasted for over 2 million years. However, there is evidence that two thirds of the Deccan Traps were created in only 1 million years about 65.5 million years ago, and so these eruptions would have caused a fairly rapid extinction, possibly over a period of thousands of years, but still longer than would be expected from a single impact event.
The Deccan Traps could have caused extinction through several mechanisms, including the release into the air of dust and sulphuric aerosols, which might have blocked sunlight and thereby reduced photosynthesis in plants. In addition, Deccan Trap volcanism might have resulted in carbon dioxide emissions, which would have increased the greenhouse effect when the dust and aerosols cleared from the atmosphere. Before the mass extinction of the dinosaurs, the release of volcanic gases during the formation of the Deccan Traps "contributed to an apparently massive global warming. Some data point to an average rise in temperature of 8 °C (14 °F) in the last half million years before the impact [at Chicxulub]."
In the years when the Deccan Traps theory was linked to a slower extinction, Luis Alvarez (who died in 1988) replied that paleontologists were being misled by sparse data. While his assertion was not initially well-received, later intensive field studies of fossil beds lent weight to his claim. Eventually, most paleontologists began to accept the idea that the mass extinctions at the end of the Cretaceous were largely or at least partly due to a massive Earth impact. However, even Walter Alvarez has acknowledged that there were other major changes on Earth even before the impact, such as a drop in sea level and massive volcanic eruptions that produced the Indian Deccan Traps, and these may have contributed to the extinctions.
Possible Paleocene survivors
Non-avian dinosaur remains are occasionally found above the Cretaceous–Paleogene boundary. In 2001, paleontologists Zielinski and Budahn reported the discovery of a single hadrosaur leg-bone fossil in the San Juan Basin, New Mexico, and described it as evidence of Paleocene dinosaurs. The formation in which the bone was discovered has been dated to the early Paleocene epoch, approximately 64.5 million years ago. If the bone was not re-deposited into that stratum by weathering action, it would provide evidence that some dinosaur populations may have survived at least half a million years into the Cenozoic Era. Other evidence includes the finding of dinosaur remains in the Hell Creek Formation up to 1.3 meters (51 in) above (40,000 years later than) the Cretaceous–Paleogene boundary. Similar reports have come from other parts of the world, including China. Many scientists dismissed the supposed Paleocene dinosaurs as re-worked, that is, washed out of their original locations and then re-buried in much later sediments. However, direct dating of the bones themselves has supported the later date, with U–Pb dating yielding an age of 64.8 ± 0.9 million years ago. If correct, the presence of a handful of dinosaurs in the early Paleocene would not change the underlying facts of the extinction.
History of study
Dinosaur fossils have been known for millennia, although their true nature was not recognized. The Chinese, whose modern word for dinosaur is konglong (恐龍, or "terrible dragon"), considered them to be dragon bones and documented them as such. For example, Hua Yang Guo Zhi, a book written by Zhang Qu during the Western Jin Dynasty, reported the discovery of dragon bones at Wucheng in Sichuan Province. Villagers in central China have long unearthed fossilized "dragon bones" for use in traditional medicines, a practice that continues today. In Europe, dinosaur fossils were generally believed to be the remains of giants and other creatures killed by the Great Flood.
Scholarly descriptions of what would now be recognized as dinosaur bones first appeared in the late 17th century in England. Part of a bone, now known to have been the femur of a Megalosaurus, was recovered from a limestone quarry at Cornwell near Chipping Norton, Oxfordshire, England, in 1676. The fragment was sent to Robert Plot, Professor of Chemistry at the University of Oxford and first curator of the Ashmolean Museum, who published a description in his Natural History of Oxfordshire in 1677. He correctly identified the bone as the lower extremity of the femur of a large animal, and recognized that it was too large to belong to any known species. He therefore concluded it to be the thigh bone of a giant human similar to those mentioned in the Bible. In 1699, Edward Lhuyd, a friend of Sir Isaac Newton, was responsible for the first published scientific treatment of what would now be recognized as a dinosaur when he described and named a sauropod tooth, "Rutellum implicatum", that had been found in Caswell, near Witney, Oxfordshire.
Between 1815 and 1824, the Rev William Buckland, a professor of geology at Oxford University, collected more fossilized bones of Megalosaurus and became the first person to describe a dinosaur in a scientific journal. The second dinosaur genus to be identified, Iguanodon, was discovered in 1822 by Mary Ann Mantell – the wife of English geologist Gideon Mantell. Gideon Mantell recognized similarities between his fossils and the bones of modern iguanas. He published his findings in 1825.
The study of these "great fossil lizards" soon became of great interest to European and American scientists, and in 1842 the English paleontologist Richard Owen coined the term "dinosaur". He recognized that the remains that had been found so far, Iguanodon, Megalosaurus and Hylaeosaurus, shared a number of distinctive features, and so decided to present them as a distinct taxonomic group. With the backing of Prince Albert of Saxe-Coburg-Gotha, the husband of Queen Victoria, Owen established the Natural History Museum in South Kensington, London, to display the national collection of dinosaur fossils and other biological and geological exhibits.
In 1858, William Parker Foulke discovered the first known American dinosaur, in marl pits in the small town of Haddonfield, New Jersey (although fossils had been found before, their nature had not been correctly discerned). The creature was named Hadrosaurus foulkii. It was an extremely important find: Hadrosaurus was one of the first nearly complete dinosaur skeletons found (the first was in 1834, in Maidstone, Kent, England), and it was clearly a bipedal creature. This was a revolutionary discovery as, until that point, most scientists had believed dinosaurs walked on four feet, like other lizards. Foulke's discoveries sparked a wave of dinosaur mania in the United States.
Dinosaur mania was exemplified by the fierce rivalry between Edward Drinker Cope and Othniel Charles Marsh, both of whom raced to be the first to find new dinosaurs in what came to be known as the Bone Wars. The feud probably originated when Marsh publicly pointed out that Cope's reconstruction of an Elasmosaurus skeleton was flawed: Cope had inadvertently placed the plesiosaur's head at what should have been the animal's tail end. The fight between the two scientists lasted for over 30 years, ending in 1897 when Cope died after spending his entire fortune on the dinosaur hunt. Marsh 'won' the contest primarily because he was better funded through a relationship with the US Geological Survey. Unfortunately, many valuable dinosaur specimens were damaged or destroyed due to the pair's rough methods: for example, their diggers often used dynamite to unearth bones (a method modern paleontologists would find appalling). Despite their unrefined methods, the contributions of Cope and Marsh to paleontology were vast: Marsh unearthed 86 new species of dinosaur and Cope discovered 56, a total of 142 new species. Cope's collection is now at the American Museum of Natural History in New York, while Marsh's is on display at the Peabody Museum of Natural History at Yale University.
After 1897, the search for dinosaur fossils extended to every continent, including Antarctica. The first Antarctic dinosaur to be discovered, the ankylosaurid Antarctopelta oliveroi, was found on James Ross Island in 1986, although it was 1994 before an Antarctic species, the theropod Cryolophosaurus ellioti, was formally named and described in a scientific journal.
Current dinosaur "hot spots" include southern South America (especially Argentina) and China. China in particular has produced many exceptional feathered dinosaur specimens due to the unique geology of its dinosaur beds, as well as an ancient arid climate particularly conducive to fossilization.
The "dinosaur renaissance"
The field of dinosaur research has enjoyed a surge in activity that began in the 1970s and is ongoing. This was triggered, in part, by John Ostrom's discovery of Deinonychus, an active predator that may have been warm-blooded, in marked contrast to the then-prevailing image of dinosaurs as sluggish and cold-blooded. Vertebrate paleontology has become a global science. Major new dinosaur discoveries have been made by paleontologists working in previously unexploited regions, including India, South America, Madagascar, Antarctica, and most significantly China (the amazingly well-preserved feathered dinosaurs in China have further consolidated the link between dinosaurs and their conjectured living descendants, modern birds). The widespread application of cladistics, which rigorously analyzes the relationships between biological organisms, has also proved tremendously useful in classifying dinosaurs. Cladistic analysis, among other modern techniques, helps to compensate for an often incomplete and fragmentary fossil record.
Soft tissue and DNA
One of the best examples of soft-tissue impressions in a fossil dinosaur was discovered in Petraroia, Italy. The discovery was reported in 1998, and described the specimen of a small, very young coelurosaur, Scipionyx samniticus. The fossil includes portions of the intestines, colon, liver, muscles, and windpipe of this immature dinosaur.
In the March 2005 issue of Science, the paleontologist Mary Higby Schweitzer and her team announced the discovery of flexible material resembling actual soft tissue inside a 68-million-year-old Tyrannosaurus rex leg bone from the Hell Creek Formation in Montana. After recovery, the tissue was rehydrated by the science team.
When the bone was treated over several weeks to remove mineral content from the fossilized bone-marrow cavity (a process called demineralization), Schweitzer found evidence of intact structures such as blood vessels, bone matrix, and connective tissue (bone fibers). Scrutiny under the microscope further revealed that the putative dinosaur soft tissue had retained fine structures (microstructures) even at the cellular level. The exact nature and composition of this material, and the implications of Schweitzer's discovery, are not yet clear; study and interpretation of the material are ongoing.
The successful extraction of ancient DNA from dinosaur fossils has been reported on two separate occasions, but, upon further inspection and peer review, neither of these reports could be confirmed. However, a functional visual peptide of a theoretical dinosaur has been inferred using analytical phylogenetic reconstruction methods on gene sequences of related modern species such as reptiles and birds. In addition, several proteins, including hemoglobin, have putatively been detected in dinosaur fossils.
Cultural depictions
By human standards, dinosaurs were creatures of fantastic appearance and often enormous size. As such, they have captured the popular imagination and become an enduring part of human culture. Entry of the word "dinosaur" into the common vernacular reflects the animals' cultural importance: in English, "dinosaur" is commonly used to describe anything that is impractically large, obsolete, or bound for extinction.
Public enthusiasm for dinosaurs first developed in Victorian England, where in 1854, three decades after the first scientific descriptions of dinosaur remains, the famous dinosaur sculptures were unveiled in London's Crystal Palace Park. The Crystal Palace dinosaurs proved so popular that a strong market in smaller replicas soon developed. In subsequent decades, dinosaur exhibits opened at parks and museums around the world, ensuring that successive generations would be introduced to the animals in an immersive and exciting way. Dinosaurs' enduring popularity, in its turn, has resulted in significant public funding for dinosaur science, and has frequently spurred new discoveries. In the United States, for example, the competition between museums for public attention led directly to the Bone Wars of the 1880s and 1890s, during which a pair of feuding paleontologists made enormous scientific contributions.
The popular preoccupation with dinosaurs has ensured their appearance in literature, film, and other media. Beginning in 1852 with a passing mention in Charles Dickens' Bleak House, dinosaurs have been featured in large numbers of fictional works. Jules Verne's 1864 novel Journey to the Center of the Earth, Sir Arthur Conan Doyle's 1912 book The Lost World, the iconic 1933 film King Kong, the 1954 film Godzilla and its many sequels, and the best-selling 1990 novel Jurassic Park by Michael Crichton and its 1993 film adaptation are just a few notable examples of dinosaur appearances in fiction. Authors of general-interest non-fiction works about dinosaurs, including some prominent paleontologists, have often sought to use the animals as a way to educate readers about science in general. Dinosaurs are ubiquitous in advertising; numerous companies have referenced dinosaurs in printed or televised advertisements, either to sell their own products or to characterize their rivals as slow-moving, dim-witted, or obsolete.
See also
- Evolutionary history of life
- List of dinosaurs
- List of dinosaur-bearing rock formations
- Physiology of dinosaurs
- Prehistoric reptile
Notes and references
- Feduccia, A. (2002). "Birds are dinosaurs: simple answer to a complex problem". The Auk 119 (4): 1187–1201. doi:10.1642/0004-8038(2002)119[1187:BADSAT]2.0.CO;2.
- Rey LV, Holtz, Jr TR (2007). Dinosaurs: the most complete, up-to-date encyclopedia for dinosaur lovers of all ages. New York: Random House. ISBN 0-375-82419-7.
- Alfaro, M.E., F. Santini, C. Brock, H. Alamillo, A. Dornburg, D.L. Rabosky, G. Carnevale, and L.J. Harmon (2009). "Nine exceptional radiations plus high turnover explain species diversity in jawed vertebrates". Proceedings of the National Academy of Sciences USA 106 (32): 13410–13414. Bibcode:2009PNAS..10613410A. doi:10.1073/pnas.0811087106. PMC 2715324. PMID 19633192.
- Wang, S.C., and Dodson, P. (2006). "Estimating the Diversity of Dinosaurs". Proceedings of the National Academy of Sciences USA 103 (37): 13601–13605. Bibcode:2006PNAS..10313601W. doi:10.1073/pnas.0606028103. PMC 1564218. PMID 16954187.
- Amos J (2008-09-17). "Will the real dinosaurs stand up?". BBC News. Retrieved 2011-03-23.
- MacLeod, N, Rawson, PF, Forey, PL, Banner, FT, Boudagher-Fadel, MK, Bown, PR, Burnett, JA, Chambers, P, Culver, S, Evans, SE, Jeffery, C, Kaminski, MA, Lord, AR, Milner, AC, Milner, AR, Morris, N, Owen, E, Rosen, BR, Smith, AB, Taylor, PD, Urquhart, E & Young, JR (1997). "The Cretaceous–Tertiary biotic transition". Journal of the Geological Society 154 (2): 265–292. doi:10.1144/gsjgs.154.2.0265.
- Carpenter, Kenneth (2006). "Biggest of the big: a critical re-evaluation of the mega-sauropod Amphicoelias fragillimus". In Foster, John R.; and Lucas, Spencer G. (eds.). Paleontology and Geology of the Upper Jurassic Morrison Formation. New Mexico Museum of Natural History and Science Bulletin 36. Albuquerque: New Mexico Museum of Natural History and Science. pp. 131–138.
- Owen, R. (1842). "Report on British Fossil Reptiles. Part II". Report of the Eleventh Meeting of the British Association for the Advancement of Science; Held at Plymouth in July 1841. London: John Murray. pp. 60–204.
- "Liddell–Scott–Jones Lexicon of Classical Greek". Retrieved 2008-08-05.
- Farlow, J.O., and Brett-Surman, M.K. (1997). "Preface". In Farlow, J.O., and Brett-Surman, M.K. (eds.). The Complete Dinosaur. Indiana University Press. pp. ix–xi. ISBN 0-253-33349-0.
- Benton, Michael J. (2004). "Origin and relationships of Dinosauria". In Weishampel, David B.; Dodson, Peter; and Osmólska, Halszka (eds.). The Dinosauria (2nd ed.). Berkeley: University of California Press. pp. 7–19. ISBN 0-520-24209-2.
- Olshevsky, G. (2000). "An annotated checklist of dinosaur species by continent". Mesozoic Meanderings 3: 1–157.
- Sereno, P. (2005). "The logical basis of phylogenetic taxonomy". Systematic Biology 54 (4): 595–619. doi:10.1080/106351591007453. PMID 16109704.
- Padian K (2004). "Basal avialae". In Weishampel DB, Dodson P, Osmólska H. The Dinosauria (2d edition). University of California Press. pp. 210–231. ISBN 0-520-24209-2.
- Glut, Donald F. (1997). Dinosaurs: The Encyclopedia. Jefferson, North Carolina: McFarland & Co. p. 40. ISBN 0-89950-917-7.
- Lambert, David; and the Diagram Group (1990). The Dinosaur Data Book. New York: Avon Books. p. 288. ISBN 0-380-75896-2.
- Morales, Michael (1997). "Nondinosaurian vertebrates of the Mesozoic". In Farlow JO, Brett-Surman MK. The Complete Dinosaur. Bloomington: Indiana University Press. pp. 607–624. ISBN 0-253-33349-0.
- Russell, Dale A. (1995). "China and the lost worlds of the dinosaurian era". Historical Biology 10: 3–12. doi:10.1080/10292389509380510.
- Amiot, R.; Buffetaut, E.; Lécuyer, C.; Wang, X.; Boudad, L.; Ding, Z.; Fourel, F.; Hutt, S.; Martineau, F.; Medeiros, A.; Mo, J.; Simon, L.; Suteethorn, V.; Sweetman, S.; Tong, H.; Zhang, F.; and Zhou, Z. (2010). "Oxygen isotope evidence for semi-aquatic habits among spinosaurid theropods". Geology 38 (2): 139–142. doi:10.1130/G30402.1.
- Nesbitt S.J. (2011). "The early evolution of archosaurs : relationships and the origin of major clades". Bulletin of the American Museum of Natural History 352: 1–292. doi:10.1206/352.1.
- Holtz, Jr., T.R. (2000). "Classification and evolution of the dinosaur groups". In Paul, G.S. The Scientific American Book of Dinosaurs. St. Martin's Press. pp. 140–168. ISBN 0-312-26226-4.
- Smith, Dave. "Dinosauria Morphology". UC Berkeley. Retrieved 21 January 2013.
- Langer, M.C.; Abdala, F.; Richter, M.; and Benton, M.J. (1999). "A sauropodomorph dinosaur from the Upper Triassic (Carnian) of southern Brazil". Comptes Rendus de l'Academie des Sciences, Paris: Sciences de la terre et des planètes 329 (7): 511–517. Bibcode:1999CRASE.329..511L. doi:10.1016/S1251-8050(00)80025-7.
- Nesbitt, Sterling J.; Irmis, Randall B.; Parker, William G. (2007). "A critical re-evaluation of the Late Triassic dinosaur taxa of North America". Journal of Systematic Palaeontology 5 (2): 209–243. doi:10.1017/S1477201907002040.
- This was recognized not later than 1909: "Dr. Holland and the Sprawling Sauropods". The arguments and many of the images are also presented in Desmond, A. (1976). Hot Blooded Dinosaurs. DoubleDay. ISBN 0-385-27063-1.
- Benton, M.J. (2004). Vertebrate Paleontology. Blackwell Publishers. xii–452. ISBN 0-632-05614-2.
- Cowen, Richard (2004). "Dinosaurs". History of Life (4th ed.). Blackwell Publishing. pp. 151–175. ISBN 1-4051-1756-7. OCLC 53970577.
- Kubo, T.; Benton, Michael J. (2007). "Evolution of hindlimb posture in archosaurs: limb stresses in extinct vertebrates". Palaeontology 50 (6): 1519–1529. doi:10.1111/j.1475-4983.2007.00723.x.
- Kump LR, Pavlov A & Arthur MA (2005). "Massive release of hydrogen sulfide to the surface ocean and atmosphere during intervals of oceanic anoxia". Geology 33 (5): 397–400. Bibcode:2005Geo....33..397K. doi:10.1130/G21295.1.
- Tanner LH, Lucas SG & Chapman MG (2004). "Assessing the record and causes of Late Triassic extinctions" (PDF). Earth-Science Reviews 65 (1–2): 103–139. Bibcode:2004ESRv...65..103T. doi:10.1016/S0012-8252(03)00082-5. Archived from the original on October 25, 2007. Retrieved 2007-10-22.
- Sereno PC (1999). "The evolution of dinosaurs". Science 284 (5423): 2137–2147. doi:10.1126/science.284.5423.2137. PMID 10381873.
- Sereno, P.C.; Forster, Catherine A.; Rogers, Raymond R.; Monetta, Alfredo M. (1993). "Primitive dinosaur skeleton from Argentina and the early evolution of Dinosauria". Nature 361 (6407): 64–66. Bibcode:1993Natur.361...64S. doi:10.1038/361064a0.
- Nesbitt, S. J., Barrett, P. M., Werning, S., Sidor, C. A., and A. J. Charig. (2012). "The oldest dinosaur? A Middle Triassic dinosauriform from Tanzania." Biology Letters.
- Holtz, Thomas R., Jr.; Chapman, Ralph E.; and Lamanna, Matthew C. (2004). "Mesozoic biogeography of Dinosauria". In Weishampel, David B.; Dodson, Peter; and Osmólska, Halszka (eds.). The Dinosauria (2nd ed.). Berkeley: University of California Press. pp. 627–642. ISBN 0-520-24209-2.
- Fastovsky, David E.; and Smith, Joshua B. (2004). "Dinosaur paleoecology". In Weishampel, David B.; Dodson, Peter; and Osmólska, Halszka. The Dinosauria (2nd ed.). Berkeley: University of California Press. pp. 614–626. ISBN 0-520-24209-2.
- Sereno, P.C.; Wilson, JA; Witmer, LM; Whitlock, JA; Maga, A; Ide, O; Rowe, TA (2007). "Structural extremes in a Cretaceous dinosaur". PLoS ONE 2 (11): e1230. Bibcode:2007PLoSO...2.1230S. doi:10.1371/journal.pone.0001230. PMC 2077925. PMID 18030355.
- Prasad, V.; Strömberg, CA; Alimohammadian, H; Sahni, A (2005). "Dinosaur coprolites and the early evolution of grasses and grazers". Science 310 (5751): 1170–1180. Bibcode:2005Sci...310.1177P. doi:10.1126/science.1118806. PMID 16293759.
- Archibald, J. David; and Fastovsky, David E. (2004). "Dinosaur Extinction". In Weishampel, David B.; Dodson, Peter; and Osmólska, Halszka (eds.). The Dinosauria (2nd ed.). Berkeley: University of California Press. pp. 672–684. ISBN 0-520-24209-2.
- Lindow, B.E.K. (2011). "Bird Evolution Across the K–Pg Boundary and the Basal Neornithine Diversification." In Dyke, G. and Kaiser, G. (eds.)Living Dinosaurs: The Evolutionary History of Modern Birds, John Wiley & Sons, Ltd, Chichester, UK. doi:10.1002/9781119990475.ch14
- Paul, G.S. (1988). Predatory Dinosaurs of the World. New York: Simon and Schuster. pp. 248–250. ISBN 0-671-61946-2.
- Clark J.M., Maryanska T., Barsbold R (2004). "Therizinosauroidea". In Weishampel DB, Dodson P, Osmólska H. The Dinosauria (2d edition). University of California Press. pp. 151–164. ISBN 0-520-24209-2.
- Norell MA, Makovicky PJ (2004). "Dromaeosauridae". In Weishampel DB, Dodson P, Osmólska H. The Dinosauria (2d edition). University of California Press. pp. 196–210. ISBN 0-520-24209-2.
- Dal Sasso, C. and Signore, M. (1998). "Exceptional soft-tissue preservation in a theropod dinosaur from Italy". Nature 392 (6674): 383–387. Bibcode:1998Natur.392..383D. doi:10.1038/32884.
- Schweitzer, M.H., Wittmeyer, J.L. and Horner, J.R. (2005). "Soft-Tissue Vessels and Cellular Preservation in Tyrannosaurus rex". Science 307 (5717): 1952–1955. Bibcode:2005Sci...307.1952S. doi:10.1126/science.1108397. PMID 15790853.
- Farlow JA (1993). "On the rareness of big, fierce animals: speculations about the body sizes, population densities, and geographic ranges of predatory mammals and large, carnivorous dinosaurs". In Dodson, Peter; and Gingerich, Philip. Functional Morphology and Evolution. American Journal of Science, Special Volume 293-A. pp. 167–199.
- Peczkis, J. (1994). "Implications of body-mass estimates for dinosaurs". Journal of Vertebrate Paleontology 14 (4): 520–33. doi:10.1080/02724634.1995.10011575.
- "Anatomy and evolution". National Museum of Natural History. Retrieved 2007-11-21.
- Sander, P. Martin et al.; Christian, Andreas; Clauss, Marcus; Fechner, Regina; Gee, Carole T.; Griebeler, Eva-Maria; Gunga, Hanns-Christian; Hummel, Jürgen et al. (2011). "Biology of the sauropod dinosaurs: the evolution of gigantism". Biological Reviews 86 (1): 117–155. doi:10.1111/j.1469-185X.2010.00137.x. PMC 3045712. PMID 21251189.
- Colbert, Edwin Harris (1971). Men and dinosaurs: the search in field and laboratory. Harmondsworth [Eng.]: Penguin. ISBN 0-14-021288-4.
- Lovelace, David M. (2007). "Morphology of a specimen of Supersaurus (Dinosauria, Sauropoda) from the Morrison Formation of Wyoming, and a re-evaluation of diplodocid phylogeny". Arquivos do Museu Nacional 65 (4): 527–544.
- dal Sasso C, Maganuco S, Buffetaut E, Mendez MA (2006). "New information on the skull of the enigmatic theropod Spinosaurus, with remarks on its sizes and affinities" (PDF). Journal of Vertebrate Paleontology 25 (4): 888–896. doi:10.1671/0272-4634(2005)025[0888:NIOTSO]2.0.CO;2. Retrieved 2011-05-05.
- Therrien, F.; and Henderson, D.M. (2007). "My theropod is bigger than yours ... or not: estimating body size from skull length in theropods". Journal of Vertebrate Paleontology 27 (1): 108–115. doi:10.1671/0272-4634(2007)27[108:MTIBTY]2.0.CO;2.
- Zhang F, Zhou Z, Xu X, Wang X, Sullivan C (2008). "A bizarre Jurassic maniraptoran from China with elongate ribbon-like feathers". Nature 455 (7216): 1105–1108. Bibcode:2008Natur.455.1105Z. doi:10.1038/nature07447. PMID 18948955.
- Xu X, Zhao Q, Norell M, Sullivan C, Hone D, Erickson G, Wang XL, Han FL, Guo Y (2008). "A new feathered maniraptoran dinosaur fossil that fills a morphological gap in avian origin". Chinese Science Bulletin 54 (3): 430–435. doi:10.1007/s11434-009-0009-6.
- Butler, R.J.; Zhao, Q. (2009). "The small-bodied ornithischian dinosaurs Micropachycephalosaurus hongtuyanensis and Wannanosaurus yansiensis from the Late Cretaceous of China". Cretaceous Research 30 (1): 63–77. doi:10.1016/j.cretres.2008.03.002.
- Yans J, Dejax J, Pons D, Dupuis C & Taquet P (2005). "Implications paléontologiques et géodynamiques de la datation palynologique des sédiments à faciès wealdien de Bernissart (bassin de Mons, Belgique)". Comptes Rendus Palevol (in French) 4 (1–2): 135–150. doi:10.1016/j.crpv.2004.12.003.
- Day, J.J.; Upchurch, P; Norman, DB; Gale, AS; Powell, HP (2002). "Sauropod trackways, evolution, and behavior". Science 296 (5573): 1659. doi:10.1126/science.1070167. PMID 12040187.
- Wright, Joanna L. (2005). "Steps in understanding sauropod biology". In Curry Rogers, Kristina A.; and Wilson, Jeffrey A. The Sauropods: Evolution and Paleobiology. Berkeley: University of California Press. pp. 252–284. ISBN 0-520-24623-3.
- Varricchio, D.J.; Sereno, Paul C.; Xijin, Zhao; Lin, Tan; Wilson, Jeffery A.; Lyon, Gabrielle H. (2008). "Mud-trapped herd captures evidence of distinctive dinosaur sociality" (PDF). Acta Palaeontologica Polonica 53 (4): 567–578. doi:10.4202/app.2008.0402. Retrieved 2011-05-06.
- Lessem, Don; and Glut, Donald F. (1993). "Allosaurus". The Dinosaur Society's Dinosaur Encyclopedia. Random House. pp. 19–20. ISBN 0-679-41770-2.
- Maxwell, W. D.; Ostrom, John (1995). "Taphonomy and paleobiological implications of Tenontosaurus–Deinonychus associations". Journal of Vertebrate Paleontology 15 (4): 707–712. doi:10.1080/02724634.1995.10011256.
- Roach, Brian T.; Brinkman, Daniel L. (2007). "A reevaluation of cooperative pack hunting and gregariousness in Deinonychus antirrhopus and other nonavian theropod dinosaurs". Bulletin of the Peabody Museum of Natural History 48 (1): 103–138. doi:10.3374/0079-032X(2007)48[103:AROCPH]2.0.CO;2.
- Tanke, Darren H. (1998). "Head-biting behavior in theropod dinosaurs: paleopathological evidence" (PDF). Gaia (15): 167–184.
- "The Fighting Dinosaurs". American Museum of Natural History. Retrieved 2007-12-05.
- Carpenter, K. (1998). "Evidence of predatory behavior by theropod dinosaurs". Gaia 15: 135–144.
- Rogers, Raymond R.; Krause, DW; Curry Rogers, K (2007). "Cannibalism in the Madagascan dinosaur Majungatholus atopus". Nature 422 (6931): 515–518. doi:10.1038/nature01532. PMID 12673249.
- Schmitz, L.; Motani, R. (2011). "Nocturnality in Dinosaurs Inferred from Scleral Ring and Orbit Morphology". Science 332 (6030): 705–708. Bibcode:2011Sci...332..705S. doi:10.1126/science.1200043. PMID 21493820.
- Varricchio DJ, Martin, AJ and Katsura, Y (2007). "First trace and body fossil evidence of a burrowing, denning dinosaur". Proceedings of the Royal Society B: Biological Sciences 274 (1616): 1361–1368. doi:10.1098/rspb.2006.0443. PMC 2176205. PMID 17374596.
- Chiappe, L.M. and Witmer, L.M. (2002). Mesozoic Birds: Above the Heads of Dinosaurs. Berkeley: University of California Press. ISBN 0-520-20094-2
- Chatterjee, S.; Templin, R. J. (2007). "Biplane wing planform and flight performance of the feathered dinosaur Microraptor gui" (PDF). Proceedings of the National Academy of Sciences 104 (5): 1576–1580. Bibcode:2007PNAS..104.1576C. doi:10.1073/pnas.0609975104. PMC 1780066. PMID 17242354.
- Alexander RM (2006). "Dinosaur biomechanics". Proceedings of the Royal Society of Biological Sciences 273 (1596): 1849–1855. doi:10.1098/rspb.2006.3532. PMC 1634776. PMID 16822743.
- Goriely A & McMillen T (2002). "Shape of a cracking whip". Physical Review Letters 88 (24): 244301. Bibcode:2002PhRvL..88x4301G. doi:10.1103/PhysRevLett.88.244301. PMID 12059302.
- Henderson, D.M. (2003). "Effects of stomach stones on the buoyancy and equilibrium of a floating crocodilian: A computational analysis". Canadian Journal of Zoology 81 (8): 1346–1357. doi:10.1139/z03-122.
- Senter, P. (2008). "Voices of the past: a review of Paleozoic and Mesozoic animal sounds". Historical Biology 20 (4): 255–287. doi:10.1080/08912960903033327.
- Hopson, James A. (1975). "The evolution of cranial display structures in hadrosaurian dinosaurs". Paleobiology 1 (1): 21–43.
- Diegert, Carl F. (1998). "A digital acoustic model of the lambeosaurine hadrosaur Parasaurolophus tubicen". Journal of Vertebrate Paleontology 18 (3, Suppl.): 38A.
- Currie, Philip J and Kevin Padian, ed. (1997). Encyclopedia of Dinosaurs. Academic Press. p. 206.
- Hansell M (2000). Bird Nests and Construction Behaviour. Cambridge University Press. ISBN 0-521-46038-7
- Varricchio, David J.; Horner, John J.; Jackson, Frankie D. (2002). "Embryos and eggs for the Cretaceous theropod dinosaur Troodon formosus". Journal of Vertebrate Paleontology 22 (3): 564–576. doi:10.1671/0272-4634(2002)022[0564:EAEFTC]2.0.CO;2.
- Lee, Andrew H.; Werning, S (2008). "Sexual maturity in growing dinosaurs does not fit reptilian growth models". Proceedings of the National Academy of Sciences 105 (2): 582–587. Bibcode:2008PNAS..105..582L. doi:10.1073/pnas.0708903105. PMC 2206579. PMID 18195356.
- Horner, J.R.; Makela, Robert (1979). "Nest of juveniles provides evidence of family structure among dinosaurs". Nature 282 (5736): 296–298. Bibcode:1979Natur.282..296H. doi:10.1038/282296a0.
- Chiappe, Luis M.; Jackson, Frankie; Coria, Rodolfo A.; and Dingus, Lowell (2005). "Nesting titanosaurs from Auca Mahuevo and adjacent sites". In Curry Rogers, Kristina A.; and Wilson, Jeffrey A. The Sauropods: Evolution and Paleobiology. Berkeley: University of California Press. pp. 285–302. ISBN 0-520-24623-3.
- "Discovering Dinosaur Behavior: 1960–present view". Encyclopædia Britannica. Retrieved 2011-05-05.
- Meng Qingjin; Liu Jinyuan; Varricchio, David J.; Huang, Timothy; and Gao Chunling (2004). "Parental care in an ornithischian dinosaur". Nature 431 (7005): 145–146. Bibcode:2004Natur.431..145M. doi:10.1038/431145a. PMID 15356619.
- Reisz RR, Scott, D Sues, H-D, Evans, DC & Raath, MA (2005). "Embryos of an Early Jurassic prosauropod dinosaur and their evolutionary significance". Science 309 (5735): 761–764. Bibcode:2005Sci...309..761R. doi:10.1126/science.1114942. PMID 16051793.
- Clark NDL, Booth P, Booth CL, Ross DA (2004). "Dinosaur footprints from the Duntulm Formation (Bathonian, Jurassic) of the Isle of Skye" (PDF). Scottish Journal of Geology 40 (1): 13–21. doi:10.1144/sjg40010013. Retrieved 2011-05-05.
- Ehrlich, Paul R.; David S. Dobkin, and Darryl Wheye (1988). "Drinking". Birds of Stanford. Stanford University. Retrieved 2007-12-13.
- Tsahar, Ella; Martínez Del Rio, C; Izhaki, I; Arad, Z (2005). "Can birds be ammonotelic? Nitrogen balance and excretion in two frugivores" (Free full text). Journal of Experimental Biology 208 (6): 1025–34. doi:10.1242/jeb.01495. PMID 15767304.
- Skadhauge, E; Erlwanger, KH; Ruziwa, SD; Dantzer, V; Elbrønd, VS; Chamunorwa, JP (2003). "Does the ostrich (Struthio camelus) coprodeum have the electrophysiological properties and microstructure of other birds?". Comparative biochemistry and physiology. Part A, Molecular & integrative physiology 134 (4): 749–755. doi:10.1016/S1095-6433(03)00006-0. PMID 12814783.
- Preest, Marion R.; Beuchat, Carol A. (April 1997). "Ammonia excretion by hummingbirds". Nature 386 (6625): 561–62. Bibcode:1997Natur.386..561P. doi:10.1038/386561a0.
- Mora, J.; Martuscelli, J; Ortiz Pineda, J; Soberon, G (July 1965). "The Regulation of Urea-Biosynthesis Enzymes in Vertebrates" (PDF). Biochemical Journal 96 (1): 28–35. PMC 1206904. PMID 14343146.
- Packard, Gary C. (1966). "The Influence of Ambient Temperature and Aridity on Modes of Reproduction and Excretion of Amniote Vertebrates". The American Naturalist 100 (916): 667–82. doi:10.1086/282459. JSTOR 2459303.
- Balgooyen, Thomas G. (1 October 1971). "Pellet Regurgitation by Captive Sparrow Hawks (Falco sparverius)" (PDF). Condor 73 (3): 382–85. doi:10.2307/1365774. JSTOR 1365774.
- Chinsamy A, Hillenius WJ (2004). "Physiology of nonavian dinosaurs". In Weishampel DB, Dodson P, Osmólska H. The Dinosauria (2nd ed.). University of California Press. pp. 643–659. ISBN 0-520-24209-2.
- "Hot-Blooded or Cold-Blooded??". University of California Museum of Paleontology. Retrieved February 12, 2012.
- Parsons, Keith M. (2001). Drawing out Leviathan: Dinosaurs and the science wars. Bloomington: Indiana University Press. pp. 22–48. ISBN 0-253-33937-5.
- Sereno PC, Martinez RN, Wilson JA, Varricchio DJ, Alcober OA, et al. (September 2008). "Evidence for Avian Intrathoracic Air Sacs in a New Predatory Dinosaur from Argentina". PLoS ONE 3 (9): e3303. Bibcode:2008PLoSO...3.3303S. doi:10.1371/journal.pone.0003303. PMC 2553519. PMID 18825273. Retrieved 2008-09-29.
- Reid, R.E.H. (1997). "Dinosaurian Physiology: the Case for "Intermediate" Dinosaurs". In Farlow, J.O., and Brett-Surman, M.K. The Complete Dinosaur. Bloomington: Indiana University Press. pp. 449–473. ISBN 0-253-33349-0.
- Huxley, Thomas H. (1868). "On the animals which are most nearly intermediate between birds and reptiles". Annals of the Magazine of Natural History 4 (2): 66–75.
- Heilmann, Gerhard (1926). The Origin of Birds. London: Witherby. pp. 208pp. ISBN 0-486-22784-7.
- Osborn, Henry Fairfield (1924). "Three new Theropoda, Protoceratops zone, central Mongolia" (PDF). American Museum Novitates 144: 1–12.
- Ostrom, John H. (1973). "The ancestry of birds". Nature 242 (5393): 136. Bibcode:1973Natur.242..136O. doi:10.1038/242136a0.
- Gauthier, Jacques. (1986). "Saurischian monophyly and the origin of birds". In Padian, Kevin. (ed.). The Origin of Birds and the Evolution of Flight. Memoirs of the California Academy of Sciences 8. pp. 1–55.
- Mayr, G., Pohl, B. and Peters, D.S. (2005). "A Well-Preserved Archaeopteryx Specimen with Theropod Features". Science 310 (5753): 1483–1486. Bibcode:2005Sci...310.1483M. doi:10.1126/science.1120331. PMID 16322455.
- Martin, Larry D. (2006). "A basal archosaurian origin for birds". Acta Zoologica Sinica 50 (6): 977–990.
- Wellnhofer, P (1988). "Ein neuer Exemplar von Archaeopteryx". Archaeopteryx 6: 1–30.
- Xu X.; Norell, M.A.; Kuang X.; Wang X.; Zhao Q.; and Jia C. (2004). "Basal tyrannosauroids from China and evidence for protofeathers in tyrannosauroids". Nature 431 (7009): 680–684. Bibcode:2004Natur.431..680X. doi:10.1038/nature02855. PMID 15470426.
- Göhlich, U.B.; Chiappe, LM (2006). "A new carnivorous dinosaur from the Late Jurassic Solnhofen archipelago". Nature 440 (7082): 329–332. Bibcode:2006Natur.440..329G. doi:10.1038/nature04579. PMID 16541071.
- Lingham-Soliar, T. (2003). "The dinosaurian origin of feathers: perspectives from dolphin (Cetacea) collagen fibers". Naturwissenschaften 90 (12): 563–567. Bibcode:2003NW.....90..563L. doi:10.1007/s00114-003-0483-7. PMID 14676953.
- Feduccia, A.; Lingham-Soliar, T; Hinchliffe, JR (2005). "Do feathered dinosaurs exist? Testing the hypothesis on neontological and paleontological evidence". Journal of Morphology 266 (2): 125–166. doi:10.1002/jmor.10382. PMID 16217748.
- Lingham-Soliar, T.; Feduccia, A; Wang, X (2007). "A new Chinese specimen indicates that 'protofeathers' in the Early Cretaceous theropod dinosaur Sinosauropteryx are degraded collagen fibres". Proceedings of the Biological Sciences 274 (1620): 1823–9. doi:10.1098/rspb.2007.0352. PMC 2270928. PMID 17521978.
- Prum, Richard O. (April 2003). "Are Current Critiques Of The Theropod Origin Of Birds Science? Rebuttal To Feduccia 2002". The Auk 120 (2): 550–61. doi:10.1642/0004-8038(2003)120[0550:ACCOTT]2.0.CO;2. JSTOR 4090212.
- O'Connor, P.M. & Claessens, L.P.A.M. (2005). "Basic avian pulmonary design and flow-through ventilation in non-avian theropod dinosaurs". Nature 436 (7048): 253–256. Bibcode:2005Natur.436..253O. doi:10.1038/nature03716. PMID 16015329.
- Sereno, P.C.; Martinez, RN; Wilson, JA; Varricchio, DJ; Alcober, OA; Larsson, HC (September 2008). "Evidence for Avian Intrathoracic Air Sacs in a New Predatory Dinosaur from Argentina". PLoS ONE 3 (9): e3303. Bibcode:2008PLoSO...3.3303S. doi:10.1371/journal.pone.0003303. PMC 2553519. PMID 18825273. Retrieved 2008-10-27.
- "Meat-Eating Dinosaur from Argentina Had Bird-Like Breathing System". Retrieved 2011-05-05.
- Xu, X. and Norell, M.A. (2004). "A new troodontid dinosaur from China with avian-like sleeping posture". Nature 431 (7010): 838–841. Bibcode:2004Natur.431..838X. doi:10.1038/nature02898. PMID 15483610.
- Norell M.A., Clark J.M., Chiappe L.M., Dashzeveg D. (1995). "A nesting dinosaur". Nature 378 (6559): 774–776. Bibcode:1995Natur.378..774N. doi:10.1038/378774a0.
- Varricchio, D. J.; Moore, J. R.; Erickson, G. M.; Norell, M. A.; Jackson, F. D.; Borkowski, J. J. (2008). "Avian Paternal Care Had Dinosaur Origin". Science 322 (5909): 1826–8. Bibcode:2008Sci...322.1826V. doi:10.1126/science.1163245. PMID 19095938.
- Wings O (2007). "A review of gastrolith function with implications for fossil vertebrates and a revised classification" (PDF). Palaeontologica Polonica 52 (1): 1–16. Retrieved 2011-05-05.
- Dingus, L. and Rowe, T. (1998). The Mistaken Extinction – Dinosaur Evolution and the Origin of Birds. New York: W. H. Freeman.
- Miller KG, Kominz MA, Browning JV, Wright JD, Mountain GS, Katz ME, Sugarman PJ, Cramer BS, Christie-Blick N, Pekar SF (2005). "The Phanerozoic record of global sea-level change". Science 310 (5752): 1293–8. Bibcode:2005Sci...310.1293M. doi:10.1126/science.1116412. PMID 16311326.
- McArthura JM, Janssenb NMM, Rebouletc S, Lengd MJ, Thirlwalle MF & van de Shootbruggef B (2007). "Palaeotemperatures, polar ice-volume, and isotope stratigraphy (Mg/Ca, δ18O, δ13C, 87Sr/86Sr): The Early Cretaceous (Berriasian, Valanginian, Hauterivian)". Palaeogeography, Palaeoclimatology, Palaeoecology 248 (3–4): 391–430. doi:10.1016/j.palaeo.2006.12.015.
- Paul, Gregory S. (2002). Dinosaurs of the air: the evolution and loss of flight in dinosaurs and birds. Johns Hopkins University Press. p. 397. ISBN 0-8018-6763-0.
- Alvarez, LW, Alvarez, W, Asaro, F, and Michel, HV (1980). "Extraterrestrial cause for the Cretaceous–Tertiary extinction". Science 208 (4448): 1095–1108. Bibcode:1980Sci...208.1095A. doi:10.1126/science.208.4448.1095. PMID 17783054.
- Hildebrand, Alan R.; Penfield, Glen T.; Kring, David A.; Pilkington, Mark; Zanoguera, Antonio Camargo; Jacobsen, Stein B.; Boynton, William V. (September 1991). "Chicxulub Crater; a possible Cretaceous/Tertiary boundary impact crater on the Yucatan Peninsula, Mexico". Geology 19 (9): 867–871. Bibcode:1991Geo....19..867H. doi:10.1130/0091-7613(1991)019<0867:CCAPCT>2.3.CO;2.
- Pope KO, Ocampo AC, Kinsland GL, Smith R (1996). "Surface expression of the Chicxulub crater". Geology 24 (6): 527–30. Bibcode:1996Geo....24..527P. doi:10.1130/0091-7613(1996)024<0527:SEOTCC>2.3.CO;2. PMID 11539331.
- Robertson, D.S.; et al. (30 September 2003). "Survival in the first hours of the Cenozoic". Geological Society of America Bulletin 116 (5/6): 760–768. doi:10.1130/B25402.1. Retrieved 15 June 2011.
- Claeys, P; Goderis, S (2007-09-05). "Solar System: Lethal billiards". Nature 449 (7158): 30–31. Bibcode:2007Natur.449...30C. doi:10.1038/449030a. PMID 17805281.
- Plotner, Tammy (2011). "Did Asteroid Baptistina Kill the Dinosaurs? Think other WISE ...". Universe Today. Retrieved 2011-09-20.
- Koeberl, Christian; and MacLeod, Kenneth G. (eds.) (2002). Catastrophic Events and Mass Extinctions. Geological Society of America. ISBN 0-8137-2356-6. OCLC 213836505.
- Hofman, C, Féraud, G & Courtillot, V (2000). "40Ar/39Ar dating of mineral separates and whole rocks from the Western Ghats lava pile: further constraints on duration and age of the Deccan traps". Earth and Planetary Science Letters 180: 13–27. Bibcode:2000E&PSL.180...13H. doi:10.1016/S0012-821X(00)00159-X.
- Duncan, RA & Pyle, DG (1988). "Rapid eruption of the Deccan flood basalts at the Cretaceous/Tertiary boundary". Nature 333 (6176): 841–843. Bibcode:1988Natur.333..841D. doi:10.1038/333841a0.
- Alvarez, W (1997). T. rex and the Crater of Doom. Princeton University Press. pp. 130–146. ISBN 978-0-691-01630-6.
- Fassett, J.E.; Lucas, S.G.; Zielinski, R.A.; and Budahn, J.R. (2001). "Compelling new evidence for Paleocene dinosaurs in the Ojo Alamo Sandstone, San Juan Basin, New Mexico and Colorado, USA" (PDF). Catastrophic events and mass extinctions, Lunar and Planetary Contribution 1053: 45–46. Bibcode:2001caev.conf.3139F. Retrieved 2007-05-18.
- Sloan, R. E., Rigby, K,. Van Valen, L. M., Gabriel, Diane (1986). "Gradual dinosaur extinction and simultaneous ungulate radiation in the Hell Creek Formation". Science 232 (4750): 629–633. Bibcode:1986Sci...232..629S. doi:10.1126/science.232.4750.629. PMID 17781415.
- Fastovsky, David E.; Sheehan, Peter M. (2005). "Reply to comment on "The Extinction of the dinosaurs in North America"" (PDF). GSA Today 15 (7): 11. doi:10.1130/1052-5173(2005)015[11b:RTEOTD]2.0.CO;2.
- Sullivan, RM (2003). "No Paleocene dinosaurs in the San Juan Basin, New Mexico". Geological Society of America Abstracts with Programs 35 (5): 15. Retrieved 2007-07-02.
- Fassett J.E., Heaman L.M., Simonetti A. (2011). "Direct U–Pb dating of Cretaceous and Paleocene dinosaur bones, San Juan Basin, New Mexico". Geology 39 (2): 159–162. doi:10.1130/G31466.1.
- Dong Zhiming (1992). Dinosaurian Faunas of China. China Ocean Press, Beijing. ISBN 3-540-52084-8. OCLC 26522845.
- "Dinosaur bones 'used as medicine'". BBC News. 2007-07-06. Retrieved 2007-07-06.
- Sarjeant WAS (1997). "The earliert discoveries". In Farlow JO, Brett-Surman MK. The Complete Dinosaur. Bloomington: Indiana University Press. pp. 3–11. ISBN 0-253-33349-0.
- Lhuyd, E. (1699). Lithophylacii Britannici Ichnographia, sive lapidium aliorumque fossilium Britannicorum singulari figura insignium. Gleditsch and Weidmann:London.
- Delair, J.B.; Sarjeant, W.A.S. (2002). "The earliest discoveries of dinosaurs: the records re-examined". Proceedings of the Geologists' Association 113 (3): 185–197. doi:10.1016/S0016-7878(02)80022-0.
- Gunther RT (1968). Life and letters of Edward Lhwyd,: Second keeper of the Museum Ashmoleanum (Early science in Oxford Volume XIV). Dawsons of Pall Mall.
- Buckland W (1824). "Notice on the Megalosaurus or great Fossil Lizard of Stonesfield". Transactions of the Geological Society of London 1 (2): 390–396. doi:10.1144/transgslb.1.2.390.
- Mantell, Gideon A. (1825). "Notice on the Iguanodon, a newly discovered fossil reptile, from the sandstone of Tilgate forest, in Sussex". Philosophical Transactions of the Royal Society 115: 179–186. doi:10.1098/rstl.1825.0010. JSTOR 107739.
- Sues, Hans-Dieter (1997). "European Dinosaur Hunters". In Farlow JO, Brett-Surman MK. The Complete Dinosaur. Bloomington: Indiana University Press. p. 14. ISBN 0-253-33349-0.
- Holmes T (1996). Fossil Feud: The Bone Wars of Cope and Marsh, Pioneers in Dinosaur Science. Silver Burdett Press. ISBN 978-0-382-39147-7. OCLC 34472600.
- Wang, H., Yan, Z. and Jin, D. (1 May 1997). "Reanalysis of published DNA sequence amplified from Cretaceous dinosaur egg fossil". Molecular Biology and Evolution 14 (5): 589–591. doi:10.1093/oxfordjournals.molbev.a025796. PMID 9159936. Retrieved 2007-12-05.
- Chang BS, Jönsson K, Kazmi MA, Donoghue MJ, Sakmar TP (1 September 2002). "Recreating a Functional Ancestral Archosaur Visual Pigment". Molecular Biology and Evolution 19 (9): 1483–1489. doi:10.1093/oxfordjournals.molbev.a004211. PMID 12200476. Retrieved 2007-12-05.
- Schweitzer MH, Marshall M, Carron K, Bohle DS, Busse SC, Arnold EV, Barnard D, Horner JR, Starkey JR (1997). "Heme compounds in dinosaur trabecular bone". Proc Natl Acad Sci U S A. 94 (12): 6291–6. Bibcode:1997PNAS...94.6291S. doi:10.1073/pnas.94.12.6291. PMC 21042. PMID 9177210.
- Embery G, Milner AC, Waddington RJ, Hall RC, Langley MS, Milan AM (2003). "Identification of proteinaceous material in the bone of the dinosaur Iguanodon". Connect Tissue Res 44 (Suppl 1): 41–6. doi:10.1080/713713598. PMID 12952172.
- Peterson, JE; Lenczewski, ME; Reed, PS (October 2010). "Influence of Microbial Biofilms on the Preservation of Primary Soft Tissue in Fossil and Extant Archosaurs". In Stepanova, Anna. PLoS ONE 5 (10): 13A. doi:10.1371/journal.pone.0013334.
- "Dinosaur – Definition and More". Merriam-Webster Dictionary. Retrieved 2011-05-06.
- Torrens, H.S. (1993). "The dinosaurs and dinomania over 150 years". Modern Geology 18 (2): 257–286.
- Breithaupt, Brent H. (1997). "First golden period in the USA". In Currie, Philip J. and Padian, Kevin (eds.). The Encyclopedia of Dinosaurs. San Diego: Academic Press. pp. 347–350. ISBN 978-0-12-226810-6.
- "London. Michaelmas term lately over, and the Lord Chancellor sitting in Lincoln's Inn Hall. Implacable November weather. As much mud in the streets, as if the waters had but newly retired from the face of the earth, and it would not be wonderful to meet a Megalosaurus, forty feet long or so, waddling like an elephantine lizard up Holborne Hill." Dickens CJH (1852). Bleak House. London: Bradbury & Evans. p. 1.
- Glut, D.F., and Brett-Surman, M.K. (1997). In Farlow, James O. and Brett-Surman, Michael K. (eds.). The Complete Dinosaur. Indiana University Press. pp. 675–697. ISBN 978-0-253-21313-6.
- Bakker, Robert T. (1986). The Dinosaur Heresies: New Theories Unlocking the Mystery of the Dinosaurs and Their Extinction. New York: Morrow. ISBN 0-688-04287-2.
- Holtz, Thomas R. Jr. (2007). Dinosaurs: The Most Complete, Up-to-Date Encyclopedia for Dinosaur Lovers of All Ages. New York: Random House. ISBN 978-0-375-82419-7.
- Paul, Gregory S. (2000). The Scientific American Book of Dinosaurs. New York: St. Martin's Press. ISBN 0-312-26226-4.
- Paul, Gregory S. (2002). Dinosaurs of the Air: The Evolution and Loss of Flight in Dinosaurs and Birds. Baltimore: The Johns Hopkins University Press. ISBN 0-8018-6763-0.
- Stewart, Tabori & Chang (1997). "The Humongous Book of Dinosaurs". New York: Stewart, Tabori & Chang. Retrieved May 17, 2012. ISBN 1-55670-596-4 (Article: The Humongous Book of Dinosaurs)
- Zhou, Z. (2004). "The origin and early evolution of birds: discoveries, disputes, and perspectives from fossil evidence". Naturwissenschaften 91 (10): 455–471. Bibcode:2004NW.....91..455Z. doi:10.1007/s00114-004-0570-4. PMID 15365634.
|Look up dinosaur in Wiktionary, the free dictionary.|
- The Science and Art of Gregory S. Paul Influential paleontologist's anatomy art and paintings
- Skeletal Drawing Professional restorations of numerous dinosaurs, and discussions of dinosaur anatomy.
- BBC Nature:Watch dinosaurs brought to life and get experts' interpretations with videos from BBC programmes including Walking with Dinosaurs
- Dinosaurs & other extinct creatures: From the Natural History Museum, a well illustrated dinosaur directory.
- Dinosaurnews (www.dinosaurnews.org) The dinosaur-related headlines from around the world. Recent news on dinosaurs, including finds and discoveries, and many links.
- Dinosauria From UC Berkeley Museum of Paleontology Detailed information – scroll down for menu.
- LiveScience.com All about dinosaurs, with current featured articles.
- Zoom Dinosaurs (www.enchantedlearning.com) From Enchanted Learning. Kids' site, info pages and stats, theories, history.
- Dinosaur genus list contains data tables on nearly every published Mesozoic dinosaur genus as of January 2011.
- LiveScience.com Giant Dinosaurs Get Downsized by LiveScience, June 21, 2009
- Palaeontologia Electronica From Coquina Press. Online technical journal.
- Dinobase A searchable dinosaur database, from the University of Bristol, with dinosaur lists, classification, pictures, and more.
- DinoData (www.dinodata.org) Technical site, essays, classification, anatomy.
| http://en.wikipedia.org/wiki/Dinosaur | 13 |
267 | XII. Statics and Kinematics
Units and Dimensions
In the preceding chapters, different units have been introduced without strict definitions, but now it is necessary to define both units and dimensions. The word “dimension” in the English language is used with two different meanings. In everyday language, the term “dimensions of an object” refers to the size of the object, but in physics “dimensions” mean the fundamental categories by means of which physical bodies, properties, or processes are described. In mechanics and hydrodynamics, these fundamental dimensions are mass, length, and time, denoted by M, L, and T. When using the word “dimension” in this sense, no indication of numerical magnitude is implied, but the concept is emphasized that any physical characteristic or property can be described in terms of certain categories, the dimensions. This will be clarified by examples on p. 402.
Fundamental Units. In physics the generally accepted units of mass, length, and time are gram, centimeter, and second; that is, quantities are expressed in the centimeter-gram-second (c.g.s.) system. In oceanography, it is not always practicable to retain these units, because, in order to avoid using large numerical values, it is convenient to measure depth, for instance, in meters and not in centimeters. Similarly, it is often practical to use one metric ton as a unit of mass instead of one gram. The second is retained as the unit of time. A system of units based on meter, ton, and second (the m.t.s. system) was introduced by V. Bjerknes and different collaborators (1910). Compared to the c.g.s. system the new units are 1 m = 10² cm, 1 metric ton = 10⁶ g, 1 sec = 1 sec. For thermal processes, the fundamental unit, 1°C, should be added.
Unfortunately, it is not practical to use even the m.t.s. system consistently. In several cases it is of advantage to adhere to the c.g.s. system in order to make results readily comparable with laboratory results that are expressed in such units, or because the numerical values are more conveniently handled in the c.g.s. system. When measuring horizontal
Derived Units. Units in mechanics other than mass, M, length, L, and time, T, can be expressed by the three dimensions, M, L, and T, and by the unit values adopted for these dimensions. Thus, velocity has the dimension length divided by time, which is written as LT⁻¹ and is expressed in centimeters per second or in meters per second. Velocity, of course, can be expressed in many other units, such as nautical miles per hour (knots), or miles per day, but the dimensions remain unaltered. Acceleration is the time change of a velocity and has the dimensions LT⁻². Force is mass times acceleration and has dimensions MLT⁻².
Table 60 shows the dimensions of a number of the terms that will be used. Several of the terms in the table have the same dimensions, but the concepts on which the terms are based differ. Work, for instance, is defined as force times distance, whereas kinetic energy is defined as mass times the square of a velocity, but work and kinetic energy both have the dimensions ML²T⁻². Similarly, one and the same term can be defined differently, depending upon the concepts that are introduced. Pressure, for instance, can be defined as work per unit volume, ML²T⁻²·L⁻³ = ML⁻¹T⁻², but is more often defined as force per unit area, MLT⁻²·L⁻² = ML⁻¹T⁻².
| Term | Dimension | Unit in c.g.s. system | Unit in m.t.s. system |
|---|---|---|---|
| Mass | M | g | metric ton = 10⁶ g |
| Length | L | cm | meter = 10² cm |
| Velocity | LT⁻¹ | cm/sec | m/sec = 100 cm/sec |
| Acceleration | LT⁻² | cm/sec² | m/sec² = 100 cm/sec² |
| Momentum | MLT⁻¹ | g cm/sec | ton m/sec = 10⁸ g cm/sec |
| Force | MLT⁻² | g cm/sec² = 1 dyne | ton m/sec² = 10⁸ dynes |
| Impulse | MLT⁻¹ | g cm/sec | ton m/sec = 10⁸ g cm/sec |
| Work | ML²T⁻² | g cm²/sec² = 1 erg | ton m²/sec² = 1 kilojoule |
| Kinetic energy | ML²T⁻² | g cm²/sec² = 1 erg | ton m²/sec² = 1 kilojoule |
| Activity (power) | ML²T⁻³ | g cm²/sec³ = erg/sec | ton m²/sec³ = 1 kilowatt |
| Density | ML⁻³ | g/cm³ | ton/m³ = g/cm³ |
| Specific volume | M⁻¹L³ | cm³/g | m³/ton = cm³/g |
| Pressure | ML⁻¹T⁻² | g/cm/sec² = dyne/cm² | ton/m/sec² = 1 centibar |
| Gravity potential | L²T⁻² | cm²/sec² | m²/sec² = 1 dynamic decimeter |
| Dynamic viscosity | ML⁻¹T⁻¹ | g/cm/sec | ton/m/sec = 10⁴ g/cm/sec |
| Kinematic viscosity | L²T⁻¹ | cm²/sec | m²/sec = 10⁴ cm²/sec |
| Diffusion | L²T⁻¹ | cm²/sec | m²/sec = 10⁴ cm²/sec |
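As a quick cross-check on the conversion factors in the table, note that the m.t.s.-to-c.g.s. factor of any derived quantity follows from its dimensional exponents alone, since 1 ton = 10⁶ g, 1 m = 10² cm, and the second is common to both systems. The short sketch below is an illustration added here, not part of the original text.

```python
# Dimensional bookkeeping for m.t.s. -> c.g.s. conversion factors.
# A quantity of dimensions M^a L^b T^c converts with the factor (10^6)^a * (10^2)^b.

def mts_to_cgs_factor(a, b, c):
    """Conversion factor for a quantity with dimensional exponents (a, b, c) in M, L, T."""
    return (10 ** 6) ** a * (10 ** 2) ** b * 1 ** c

print(mts_to_cgs_factor(1, 1, -2))   # force, MLT^-2: 1 ton m/sec^2 = 10^8 dynes
print(mts_to_cgs_factor(1, -1, -2))  # pressure, ML^-1 T^-2: 1 ton/m/sec^2 = 10^4 dyne/cm^2 = 1 centibar
print(mts_to_cgs_factor(0, 2, -1))   # kinematic viscosity, L^2 T^-1: 1 m^2/sec = 10^4 cm^2/sec
```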
In any equation of physics, all terms must have the same dimensions, or, applied to mechanics, in all terms the exponents of the fundamental dimensions M, L, and T must be the same.
Some of the constants that appear in the equations of physics have dimensions, and their numerical values will therefore depend upon the particular units that have been assigned to the fundamental dimensions, whereas other constants have no dimensions and are therefore independent of the system of units. Density has dimensions ML⁻³, but the density of pure water at 4° has the numerical value 1 (one) only if the units of mass and length are selected in a special manner (grams and centimeters or metric tons and meters). On the other hand, the specific gravity, which is the density of a body relative to the density of pure water at 4°, has no dimensions (ML⁻³/ML⁻³) and is therefore expressed by the same number, regardless of the system of units that is employed.
The Fields of Gravity, Pressure, and Mass
Level Surfaces. Coordinate surfaces of equal geometric depth below the ideal sea surface are useful when considering geometrical features, but in problems of statics or dynamics that involve consideration of the acting forces, they are not always satisfactory. Because the gravitational force represents one of the most important of the acting forces, it is convenient to use as coordinate surfaces the level surfaces, defined as surfaces that are everywhere normal to the force of gravity. It will presently be shown that these surfaces do not coincide with surfaces of equal geometric depth.
It follows from the definition of level surfaces that, if no forces other than gravitational are acting, a mass can be moved along a level surface without expenditure of work and that the amount of work expended or gained by moving a unit mass from one surface to another is independent of the path taken.
The amount of work, W, required for moving a unit mass a distance, h, along the plumb line is W = gh, where g is the acceleration of gravity.
In the following, the sea surface will be considered a level surface. The work required or gained in moving a unit mass from sea level to a point above or below sea level is called the gravity potential, and in the m.t.s. system the unit of gravity potential is thus one dynamic decimeter.
The practical unit of the gravity potential is the dynamic meter, for which the symbol D is used. When dealing with the sea the vertical axis is taken as positive downward. The geopotential of a level surface at the geometrical depth, z, is therefore, in dynamic meters, D = 1/10 ∫ g dz, the integral being taken from the sea surface down to the depth z, with g in m/sec² and z in meters.
The acceleration of gravity varies with latitude and depth, and the geometrical distance between standard level surfaces therefore varies with the coordinates. At the North Pole the geometrical depth of the 1000-dynamic-meter surface is 1017.0 m, but at the Equator the depth is 1022.3 m, because g is greater at the Poles than at the Equator. Thus, level surfaces and surfaces of equal geometric depth do not coincide. Level surfaces slope relative to the surfaces of equal geometric depth, and therefore a component of the acceleration of gravity acts along surfaces of equal geometrical depth.
The topography of the sea bottom is represented by means of isobaths—that is, lines of equal geometrical depth—but it could be presented equally well by means of lines of equal geopotential. The contour lines would then represent the lines of intersection between the level surfaces and the irregular surface of the bottom. These contours would no longer be at equal geometric distances, and hence would differ from the usual topographic chart, but their characteristics would be that the amount of work needed for moving a given mass from one contour to another would be constant. They would also represent the new coast lines if the sea level were lowered without alterations of the topographic features of the bottom, provided the new sea level would assume perfect hydrostatic equilibrium and adjust itself normal to the gravitational force.
Any scalar field can similarly be represented by means of a series of topographic charts of equiscalar surfaces in which the contour lines
The Field of Gravity. The fact that gravity is the resultant of two forces, the attraction of the earth and the centrifugal force due to the earth's rotation, need not be considered, and it is sufficient to define gravity as the force that is derived empirically by pendulum observations. Furthermore, it is not necessary to take into account the minor irregular variations of gravity that detailed surveys reveal, but it is enough to make use of the “normal” value, in meters per second per second, which at sea level can be represented as a function of the latitude, ϕ, by Helmert's formula:
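In the form most often quoted (the numerical constants given here are the commonly cited 1901 values and may differ slightly from those used in the original tables), g = 9.80616 − 0.025928 cos 2ϕ + 0.000069 cos² 2ϕ m/sec², which gives about 9.780 m/sec² at the Equator and about 9.832 m/sec² at the Poles.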
The field of gravity can be completely described by means of a set of equipotential surfaces corresponding to standard intervals of the gravity potential. These are at equal distances if the geopotential is
The Field of Pressure. The distribution of pressure in the sea can be determined by means of the equation of static equilibrium:
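In differential form, with depth z taken positive downward and all quantities in consistent units, the relation reads dp = gρ dz, or equivalently α dp = g dz with α = 1/ρ the specific volume; with p in decibars, α in m³/ton, and the geopotential D in dynamic meters this becomes dD = α dp. (This is a restatement added for reference; the numerical factors follow from 1 decibar = 10⁵ dynes/cm² and 1 dynamic meter = 10 m²/sec².)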
The hydrostatic equation will be discussed further in connection with the equations of motion (p. 440). At this time it is enough to emphasize that, as far as conditions in the ocean are concerned, the equation, for all practical purposes, is exact.
Introducing the geopotential expressed in dynamic meters as the vertical coordinate, one has 10dD = gdz. When the pressure is measured in decibars (defined by 1 bar = 10⁶ dynes per square centimeter), the factor k becomes equal to 1/10, and equation (XII, 3) is reduced to
Because ρs,ϕ,p and αs,ϕ,p differ little from unity, a difference in pressure is expressed in decibars by nearly the same number that expresses the difference in geopotential in dynamic meters, or the difference in geometric depth in meters. Approximately, Δp (decibars) ≈ ΔD (dynamic meters) ≈ Δz (meters).
The pressure field can be completely described by means of a system of isobaric surfaces. Using the geopotential as the vertical coordinate, one can present the pressure distribution by a series of charts showing isobars at standard level surfaces or by a series of charts showing the geopotential topography of standard isobaric surfaces. In meteorology, the former manner of representation is generally used on weather maps, in which the pressure distribution at sea level is represented by isobars. In oceanography, on the other hand, it has been found practical to represent the geopotential topography of isobaric surfaces.
The pressure gradient is defined by
The pressure gradient has two principal components: the vertical, directed normal to the level surfaces, and the horizontal, directed parallel to the level surfaces. When static equilibrium exists, the vertical component, expressed as force per unit mass, is balanced by the acceleration of gravity. This is the statement which is expressed mathematically by means of the equation of hydrostatic equilibrium. In a resting system the horizontal component of the pressure gradient is not balanced by any other force, and therefore the existence of a horizontal pressure gradient indicates that the system is not at rest or cannot remain at rest. The horizontal pressure gradients, therefore, although extremely small, are all-important to the state of motion, whereas the vertical are insignificant in this respect.
It is evident that no motion due to pressure distribution exists or can develop if the isobaric surfaces coincide with level surfaces. In such a state of perfect hydrostatic equilibrium the horizontal pressure gradient vanishes. Such a state would be present if the atmospheric pressure, acting on the sea surface, were constant, if the sea surface coincided with the ideal sea level and if the density of the water depended on pressure only. None of these conditions is fulfilled. The isobaric surfaces are generally inclined relative to the level surfaces, and horizontal pressure gradients are present, forming a field of internal force.
This field of force can also be defined by considering the slopes of isobaric surfaces instead of the horizontal pressure gradients. By definition the pressure gradient along an isobaric surface is zero, but, if this surface does not coincide with a level surface, a component of the acceleration of gravity acts along the isobaric surface and will tend to set the water in motion, or must be balanced by other forces if a steady state of motion is reached. The internal field of force can therefore be represented also by means of the component of the acceleration of gravity along isobaric surfaces (p. 440).
Regardless of the definition of the field of force that is associated with the pressure distribution, for a complete description of this field one must know the absolute isobars at level surfaces or the absolute geopotential contour lines of isobaric surfaces. These demands cannot possibly be met. One reason is that measurements of geopotential distances of isobaric surfaces must be made from the actual sea surface, the topography of which is unknown. It will be shown that all one can do is to determine the pressure field that would be present if the pressure
In order to illustrate this point a fresh-water lake will be considered which is so small that horizontal differences in atmospheric pressure can be disregarded and the acceleration of gravity can be considered constant. Let it first be assumed that the water is homogeneous, meaning that the density is independent of the coordinates. In this case, the distance between any two isobaric surfaces is expressed by the equation Δz = (p2 − p1)/(gρ).
This equation simply states that the geometrical distance between isobaric surfaces is constant, and it defines completely the internal field of pressure. The total field of pressure depends, however, upon the configuration of the free surface of the lake. If no wind blows and if no stress is thus exerted on the free surface of the lake, perfect hydrostatic equilibrium exists, the free surface is a level surface, and, similarly, all other isobaric surfaces coincide with level surfaces. On the other hand, if a wind blows across the lake, the equilibrium will be disturbed, the water level will be lowered at one end of the lake, and water will be piled up against the other end. The free surface will still be an isobaric surface, but it will now be inclined relative to a level surface. The relative field of pressure, however, will remain unaltered as represented by equation (XII, 4), meaning that all other isobaric surfaces will have the same geometric shape as that of the free surface.
One might continue and introduce a number of layers of different density, and one would find that the same reasoning would be applicable. The method is therefore also applicable when one deals with a liquid within which the density changes continually with depth. By means of observations of the density at different depths, one can derive the relative field of pressure and can represent this by means of the topography of the isobaric surfaces relative to some arbitrarily or purposely selected isobaric surface. The relative field of force can be derived from the slopes of the isobaric surface relative to the selected reference surface, but, in order to find the absolute field of pressure and the corresponding absolute field of force, it is necessary to determine the absolute shape of one isobaric surface.
These considerations have been set forth in great detail because it is essential to be fully aware of the difference between the absolute field of pressure and the relative field of pressure, and to know what types of data are needed in order to determine each of these fields.
The Field of Mass. The field of mass in the ocean is generally described by means of the specific volume as expressed by (p. 57) αs,ϕ,p = α35,0,p + δ.
The former field is of a simple character. The surfaces of α35,0,p coincide with the isobaric surfaces, the deviations of which from level surfaces are so small that for practical purposes the surfaces of α35,0,p can be considered as coinciding with level surfaces or with surfaces of equal geometric depth. The field of α35,0,p can therefore be fully described by means of tables giving α35,0,p as a function of pressure and giving the average relationships between pressure, geopotential and geometric depths. Since this field can be considered a constant one, the field of mass is completely described by means of the anomaly of the specific volume, δ, the determination of which was discussed on p. 58.
The field of mass can be represented by means of the topography of anomaly surfaces or by means of horizontal charts or vertical sections in which curves of δ = constant are entered. The latter method is the most common. It should always be borne in mind, however, that the specific volume in situ is equal to the sum of the standard specific volume, α35,0,p at the pressure in situ and the anomaly, δ.
The Relative Field of Pressure. It is impossible to determine the relative field of pressure in the sea by direct observations, using some type of pressure gauge, because an error of only 0.1 m in the depth of a pressure gauge below the sea surface would introduce errors greater than the horizontal differences that should be established. If the field of mass is known, however, the internal field of pressure can be determined from the equation of static equilibrium in one of the forms
Integration of the latter form gives
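Written out between two isobaric surfaces p1 and p2, and using α = α35,0,p + δ, the integration gives D2 − D1 = ∫α35,0,p dp + ∫δ dp (both integrals taken from p1 to p2); that is, a standard geopotential distance plus a geopotential anomaly.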
Equation (XII, 6) can be interpreted as expressing that the relative field of pressure is composed of two fields: the standard field and the field of anomalies. The standard field can be determined once and for all, because the standard geopotential distance between isobaric surfaces represents the distance if the salinity of the sea water is constant at 35 ‰ and the temperature is constant at 0°C. The standard geopotential distance decreases with increasing pressure, because the specific volume decreases (density increases) with pressure, as is evident from table 7H in Bjerknes (1910), according to which the standard geopotential distance between the isobaric surfaces 0 and 100 decibars is 97.242 dynamic meters, whereas the corresponding distance between the 5000- and 5100-decibar surfaces is 95.153 dynamic meters.
The standard geopotential distance between any two standard isobaric surfaces is, on the other hand, independent of latitude, but the geometric distance between isobaric surfaces varies with latitude because g varies.
Because in the standard field all isobaric surfaces are parallel relative to each other, this standard field lacks a relative field of horizontal force. The relative field of force, which is associated with the distribution of mass, is completely described by the field of the geopotential anomalies. It follows that a chart showing the topography of one isobaric surface relative to another by means of the geopotential anomalies is equivalent to a chart showing the actual geopotential topography of one isobaric surface relative to another. The practical determination of the relative field of pressure is therefore reduced to computation and representation of the geopotential anomalies, but the absolute pressure field can be found only if one can determine independently the absolute topography of one isobaric surface.
In order to evaluate equation (XII, 7), it is necessary to know the anomaly, δ, as a function of absolute pressure. The anomaly is computed from observations of temperature and salinity, but oceanographic observations give information about the temperature and the salinity at known geometrical depths below the actual sea surface, and not at known pressures. This difficulty can fortunately be overcome by means of an artificial substitution, because at any given depth the numerical value of the absolute pressure expressed in decibars is nearly the same as the numerical value of the depth expressed in meters, as is evident from the following corresponding values:
|Standard sea pressure (decibars)||1000||2000||3000||4000||5000||6000|
|Approximate geometric depth (m)||990||1975||2956||3933||4906||5875|
Thus, the numerical values of geometric depth deviate only 1 or 2 percent from the numerical values of the standard pressure at that depth. This agreement is not accidental, but has been brought about by the selection of the practical unit of pressure, the decibar.
It follows that the temperature at a pressure of 1000 decibars is nearly equal to the temperature at a geometric depth of 990 m, or the temperature at the pressure of 6000 decibars is nearly equal to the temperature at a depth of 5875 m. The vertical temperature gradients in the ocean are small, especially at great depths, and therefore no serious error is introduced if, instead of using the temperature at 990 m when computing δ, one makes use of the temperature at 1000 m, and so on. The difference between anomalies for neighboring stations will be even less affected by this procedure, because within a limited area the vertical temperature gradients will be similar. The introduced error will be nearly the same at both stations, and the difference will be an error of absolutely negligible amount. In practice one can therefore consider the numbers that represent the geometric depth in meters as representing absolute pressure in decibars. If the depth in meters at which either directly observed or interpolated values of temperature and salinity are available is interpreted as representing pressure in decibars, one can compute, by means of the tables in the appendix, the anomaly of specific volume at the given pressure. By multiplying the average anomaly of specific volume between two pressures by the difference in pressure in decibars (which is considered equal to the difference in depth in meters), one obtains the geopotential anomaly of the isobaric sheet in question expressed in dynamic meters. By adding these geopotential anomalies, one can find the corresponding anomaly between any two given pressures. An example of a complete computation is given in table 61.
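The stepwise summation just described is easy to mechanize. The sketch below is an illustration with made-up numbers, assuming that depths in meters are reinterpreted as pressures in decibars and that δ is given in m³/ton (numerically cm³/g), so that the products of mean anomaly and pressure interval accumulate directly in dynamic meters.

```python
# Stepwise accumulation of the geopotential anomaly from specific-volume anomalies.

def geopotential_anomaly(pressures_db, deltas):
    """Cumulative geopotential anomaly (dynamic meters) at each pressure level."""
    dyn = [0.0]
    for k in range(1, len(pressures_db)):
        mean_delta = 0.5 * (deltas[k - 1] + deltas[k])   # average anomaly of the isobaric sheet
        dp = pressures_db[k] - pressures_db[k - 1]        # sheet thickness in decibars
        dyn.append(dyn[-1] + mean_delta * dp)
    return dyn

p = [0, 100, 200, 400]                               # hypothetical levels (decibars)
delta = [x * 1e-5 for x in [450, 380, 300, 220]]     # hypothetical 10^5*delta values
print(geopotential_anomaly(p, delta))                # [0.0, 0.415, 0.755, 1.275]
```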
Certain simple relationships between the field of pressure and the field of mass can be derived by means of the equations for equiscalar surfaces (p. 155) and the hydrostatic equation. In a vertical profile the isobars and the isopycnals are defined by
Table 61. Column headings: meters or decibars; temperature (°C); salinity (‰); σt; 10⁵Δs,ϕ; 10⁵δs,p; 10⁵δϕ,p; 10⁵δ; ΔD; ΔD (dynamic meters).
Profiles of isobaric surfaces based on the data from a series of stations in a section must evidently be in agreement with the inclination of the δ curves, as shown in a section and based on the same data, but this obvious rule often receives little or no attention.
Relative Geopotential Topography of Isobaric Surfaces. If simultaneous observations of the vertical distribution of temperature and salinity were available from a number of oceanographic stations within a given area, the relative pressure distribution at the time of the observations could be represented by a series of charts showing the geopotential topography of standard isobaric surfaces relative to one arbitrarily or purposely selected reference surface. From the preceding it is evident that these topographies are completely represented by means of the geopotential anomalies.
In practice, simultaneous observations are not available, but in many instances it is permissible to assume that the time changes of the pressure distribution are so small that observations taken within a given period may be considered simultaneous. The smaller the area, the shorter must be the time interval within which the observations are made. Figs. 110, p. 454 and 204, p. 726, represent examples of geopotential topographies. The conclusions as to currents which can be based on such charts will be considered later.
Charts of geopotential topographies can be prepared in two different ways. By the common method, the anomalies of a given surface relative to the selected reference surface are plotted on a chart and isolines are drawn, following the general rules for presenting scalar quantities. In this manner, relative topographies of a series of isobaric surfaces can be prepared, but the method has the disadvantage that each topography is prepared separately.
By the other method a series of charts of relative topographies is prepared stepwise, taking advantage of the fact that the anomaly of geopotential thickness of an isobaric sheet is proportional to the average anomaly of specific volume within that sheet.
This method is widely used in meteorology, but is not commonly employed in oceanography because, for the most part, the different systems of curves are so nearly parallel to each other that graphical addition is cumbersome. The method is occasionally useful, however, and has the advantage of showing clearly the relationship between the distribution of mass and the distribution of pressure. It especially brings out the geometrical feature that the isohypses of the isobaric surfaces retain their form when passing from one isobaric surface to another only if the anomaly curves are of the same form as the isohypses. This characteristic of the field is of great importance to the dynamics of the system.
Character of the Total Field of Pressure. From the above discussion it is evident that, in the absence of a relative field of pressure, isosteric and isobaric surfaces must coincide. Therefore, if for some reason one isobaric surface, say the free surface, deviates from a level surface, then all isobaric and isosteric surfaces must deviate in a similar manner. Assume that one isobaric surface in the disturbed condition lies at a distance Δh cm below the position in undisturbed conditions. Then all other isobaric surfaces along the same vertical are also displaced the distance Δh from their undisturbed position. The distance Δh is positive downward because the positive z axis points downward. Call the pressure at a given depth at undisturbed conditions p0. Then the pressure at disturbed conditions is pt = p0 − Δp, where Δp = gρΔh and where the displacement Δh can be considered as being due to a deficit or an excess of mass in the water column under consideration.
The above considerations are equally valid if a relative field of pressure exists. The absolute distribution of pressure can always be completely determined from the equation
Significance of σt Surfaces
The density of sea water at atmospheric pressure, expressed as σt = (ρs,ϕ,0 − 1) × 10³, is often computed and represented in horizontal charts or vertical sections. It is therefore necessary to study the significance of σt surfaces, and in order to do so the following problem will be considered: Can water masses be exchanged between different places in the ocean space without altering the distribution of mass?
The same problem will first be considered for the atmosphere, assuming that this is a perfect, dry gas. In such an atmosphere the potential temperature means the temperature which the air would have if it were brought by an adiabatic process to a standard pressure. The potential temperature, θ, is
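In the standard modern notation for a dry ideal gas (the symbols here are not necessarily those of the original text), θ = T(p0/p)^(R/cp), where T is the absolute temperature, p0 a standard reference pressure, and R/cp ≈ 0.286.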
Consider two air masses, one of temperature ϕ1 at pressure p1, and one of temperature ϕ2 at pressure p2. If both have the same potential temperature, it follows that
With regard to the ocean, the question to be considered is whether surfaces of similar characteristics can be found there. Let one water mass at the geopotential depth D1 be characterized by salinity S1 and temperature ϕ1, and another water mass at geopotential depth D2 be characterized by salinity S2 and temperature ϕ2. The densities in situ of these small water masses can then be expressed as σs1,ϑ1,D1 and σs2,ϑ2,D2.
Now consider that the mass at the geopotential depth D1 is moved adiabatically to the geopotential depth D2. During this process the temperature of the water mass will change adiabatically from ϕ1 to θ1 and the density in situ will be σs1,θ1,D2. Moving the other water mass adiabatically from D2 to D1 will change its temperature from ϕ2 to θ2. If the two water masses are interchanged, the conditions
The adiabatic change in temperature between the geopotential depths of 200 and 700 dyn meters is 0.09°, and thus θ1 = 13.82, θ2 = 8.01. By means of the Hydrographic Tables of Bjerknes and collaborators, one finds
It should also be observed that the mixing of two water masses that are at the same depth and are of the same density in situ, but of different temperatures and salinities, produces water of a higher density. If, at D = 700 dyn meters, equal parts of water S1 = 36.01 ‰, ϕ1 = 13.82°,
This discussion leads to the conclusion that in the ocean no surfaces exist along which interchange or mixing of water masses can take place without altering the distribution of mass and thus altering the potential energy and the entropy of the system (except in the trivial case that isohaline and isothermal surfaces coincide with level surfaces). There must exist, however, a set of surfaces of such character that the change of potential energy and entropy is at a minimum if interchange and mixing takes place along these surfaces. It is impossible to determine the shape of these surfaces, but the σt surfaces approximately satisfy the conditions. In the preceding example, which represents very extreme conditions, the two water masses were lying nearly on the same σt surface (σt1 = 27.05, σt2 = 26.97).
Thus, in the ocean, the σt surfaces can be considered as being nearly equivalent to the isentropic surfaces in a dry atmosphere, and the σt surfaces may therefore be called quasi-isentropic surfaces. The name implies only that interchange or mixing of water masses along σt surfaces brings about small changes of the potential energy and of the entropy of the body of water.
The change in a vertical direction of σt is nearly proportional to the vertical stability of the system. Assume that a water mass is displaced vertically upward from the geopotential depth D2 to the geopotential depth D1. The difference between the density of this mass and the surrounding water (see p. 57) will then be
Column headings of the stability example: depth (m); temperature (°C); salinity (‰); σt; 10⁸E; 10⁵(dσt/dz).
Hesselberg and Sverdrup (1914–15) have published tables by means of which the terms of equation (XII, 13) are found, and give an example based on observations in the Atlantic Ocean in May, 1910, in lat. 28°37′N, long. 19°08′W (Helland-Hansen, 1930). This example is reproduced in the table above.
Hesselberg and Sverdrup have also computed the order of magnitude of the different terms in equation (XII, 13) and have shown that dσt/dz is an accurate expression of the stability down to a depth of 100 m, but that between 100 and 2000 m the terms containing ε may have to be considered, and that below 2000 m all terms are important. The following practical rules can be given:
Above 100 m the stability is accurately expressed by means of 10⁻³ dσt/dz.
Below 100 m the magnitude of the other terms of the exact equation (XII, 13) should be examined if the numerical value of 10⁻³ dσt/dz is less than 40 × 10⁻⁸.
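As a unit check on these rules (an illustration added here): since σt = (ρ − 1) × 10³ and ρ is close to 1, the stability is approximately E ≈ (1/ρ)(dρ/dz) ≈ 10⁻³ dσt/dz per meter, so a tabulated value of 10⁵(dσt/dz) = 40 corresponds to 10⁸E = 40, which is why the threshold in the second rule is written as 40 × 10⁻⁸.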
The stability can also be expressed in a manner that is useful when considering the stability of the deep water:
A vector field can be completely represented by means of three sets of charts, one of which shows the scalar field of the magnitude of the vector and two of which show the direction of the vector in horizontal and vertical planes. It can also be fully described by means of three sets of scalar fields representing the components of the vector along the principal coordinate axes (V. Bjerknes and different collaborators, 1911). In oceanography, one is concerned mainly with vectors that are horizontal, such as velocity of ocean currents—that is, two-dimensional vectors.
Fig. 95. Representation of a two-dimensional vector field by vectors of indicated direction and magnitude and by vector lines and equiscalar curves.
Vector lines cannot intersect except at singular points or lines, where the magnitude of the vector is zero. Vector lines cannot begin or end within the vector field except at singular points, and vector lines are continuous.
The simplest and most important singularities in a two-dimensional vector field are shown in fig. 96: These are (1) points of divergence (fig. 96A and C) or convergence (fig. 96B and D), at which an infinite number of vector lines meet; (2) neutral points, at which two or more vector lines intersect (the example in fig. 96E shows a neutral point of the first order in which two vector lines intersect—that is, a hyperbolic point); and (3) lines of divergence (fig. 96F) or convergence (fig. 96G), from which an infinite number of vector lines diverge asymptotically or to which an infinite number of vector lines converge asymptotically.
It is not necessary to enter upon all the characteristics of vector fields or upon all the vector operations that can be performed, but two important vector operations must be mentioned.
Fig. 96. Singularities in a two-dimensional vector field. A and C, points of divergence; B and D, points of convergence; E, neutral point of first order (hyperbolic point); F, line of convergence; and G, line of divergence.
Assume that a vector A has the components Ax, Ay, and Az. The scalar quantity ∂Ax/∂x + ∂Ay/∂y + ∂Az/∂z is called the divergence of A.
The vector which has the components ∂Az/∂y − ∂Ay/∂z, ∂Ax/∂z − ∂Az/∂x, and ∂Ay/∂x − ∂Ax/∂y is called the curl, or vorticity, of A.
Two representations of a vector that varies in space and time will also be mentioned. A vector that has been observed at a given locality during a certain time interval can be represented by means of a central vector diagram (fig. 97). In this diagram, all vectors are plotted from the same point, and the time of observation is indicated at each vector. Occasionally the end points of the vector are joined by a curve on which the time of observation is indicated and the vectors themselves are omitted. This form of representation is commonly used when dealing with periodic currents such as tidal currents. A central vector diagram is also used extensively in pilot charts to indicate the frequency of winds from given directions. In this case the direction of the wind is shown by an arrow, and the frequency of wind from that direction is shown by the length of the arrow.
Fig. 97. Time variation of a vector represented by a central vector diagram (left) and a progressive vector diagram (right).
If it can be assumed that the observations were made in a uniform vector field, a progressive vector diagram is useful. This diagram is constructed by plotting the second vector from the end point of the first, and so on (fig. 97). When dealing with velocity, one can compute the displacement due to the average velocity over a short interval of time. When these displacements are plotted in a progressive vector diagram, the resulting curve will show the trajectory of a particle if the velocity field is of such uniformity that the observed velocity can be considered representative of the velocities in the neighborhood of the place of observation. The vector that can be drawn from the beginning of the first vector to the end of the last shows the total displacement in the entire time interval, and this displacement, divided by the time interval, is the average velocity for the period.
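The construction of a progressive vector diagram, and of the average velocity over the whole interval, amounts to a running sum of displacements. The sketch below uses hypothetical hourly current observations.

```python
# Progressive vector diagram: plot each displacement from the end of the previous one.

def progressive_vector_diagram(velocities_cm_s, dt_s):
    """Cumulative displacements (cm) traced out by the successive velocity vectors."""
    x = y = 0.0
    points = [(x, y)]
    for u, v in velocities_cm_s:
        x += u * dt_s            # displacement due to the average velocity over dt_s
        y += v * dt_s
        points.append((x, y))
    return points

obs = [(10.0, 2.0), (8.0, 5.0), (3.0, 9.0), (-2.0, 11.0)]    # hypothetical hourly currents, cm/sec
track = progressive_vector_diagram(obs, dt_s=3600.0)

total_time = 3600.0 * len(obs)
# Vector from start to end, divided by the elapsed time: the average velocity for the period.
print(track[-1], (track[-1][0] / total_time, track[-1][1] / total_time))
```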
The Field of Motion and the Equation of Continuity
The Field of Motion. Among vector fields the field of motion is of special importance. Several of the characteristics of the field of motion can be dealt with without considering the forces which have brought about or which maintain the motion, and these characteristics form the subject of kinematics.
The velocity of a particle relative to a given coordinate system is defined as ν = dr/dt, where dr is an element of length in the direction in which the particle moves. In a rectangular coordinate system the velocity has the components νx = dx/dt, νy = dy/dt, and νz = dz/dt.
The velocity field can be completely described by the Lagrange or by the Euler method. In the Lagrange method the coordinates of all moving particles are represented as functions of time and of a threefold multitude of parameters that together characterize all the moving particles. From this representation the velocity of each particle, and, thus, the velocity field, can be derived at any time.
The more convenient method by Euler will be employed in the following. This method assumes that the velocity of all particles of the fluid has been defined. On this assumption the velocity field is completely described if the components of the velocity can be represented as functions of the coordinates and of time: νx = fx(x,y,z,t), νy = fy(x,y,z,t), νz = fz(x,y,z,t).
The characteristic difference between the two methods is that Lagrange's method focuses attention on the paths taken by all individual particles, whereas Euler's method focuses attention on the velocity at each point in the coordinate space. In Euler's method it is necessary, however, to consider the motion of the individual particles in order to find the acceleration. After a time dt, a particle that, at the time t, was at the point (x,y,z) and had the velocity components fx(x,y,z,t), and so on, will be at the point (x + dx, y + dy, z + dz), and will have the velocity components fx(x + dx, y+ dy, z + dz, t + dt), and so on. Expanding in Taylor's series, one obtains
Thus, one has to deal with two time derivatives: the individual time derivative, which represents the acceleration of the individual particles, and the local time derivative, which represents the time change of the velocity at a point in space and is called the local acceleration. The last terms in equation (XII, 17) are often combined and called the field acceleration.
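Spelled out for the x component (the y and z components are analogous), the relation between the two derivatives just distinguished is dνx/dt = ∂νx/∂t + νx(∂νx/∂x) + νy(∂νx/∂y) + νz(∂νx/∂z); the first term on the right is the local acceleration and the remaining three together make up the field acceleration.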
The above development is applicable not only when considering the velocity field, but also when considering any field of a property that varies in space and time (p. 157). The velocity field is stationary when the local time changes are zero: ∂νx/∂t = ∂νy/∂t = ∂νz/∂t = 0.
The Equation of Continuity. Consider a cube of volume dxdydz. The mass of water that in unit time flows in parallel to the x axis is equal to ρνxdydz, and the mass that flows out is equal to [ρνx + (∂(ρνx)/∂x)dx]dydz, so that the net gain of mass due to flow in the x direction is −(∂(ρνx)/∂x)dxdydz, and correspondingly for the y and z directions.
Adding the three contributions and dividing by the volume of the cube, one obtains the equation of continuity, ∂ρ/∂t + ∂(ρνx)/∂x + ∂(ρνy)/∂y + ∂(ρνz)/∂z = 0, which, by means of equation (XII, 17), can also be written dρ/dt + ρ(∂νx/∂x + ∂νy/∂y + ∂νz/∂z) = 0.
The equation of continuity is not valid in the above form at a boundary surface because no out- or inflow can take place there. In a direction normal to a boundary a particle in that surface must move at the same velocity as the surface itself. If the surface is rigid, no component normal to the surface exists and the velocity must be directed parallel to the surface. The condition that the velocity component normal to a boundary must equal that of the boundary itself is known as the kinematic boundary condition.
Application of the Equation of Continuity. At the sea surface the kinematic boundary condition must be fulfilled. Designating the vertical displacement of the sea surface relative to a certain level of equilibrium by η, and taking this distance positive downward, because the positive z axis is directed downward, one obtains
With stationary distribution of mass (∂ρ/∂t = 0) the equation of continuity is reduced to ∂(ρνx)/∂x + ∂(ρνy)/∂y + ∂(ρνz)/∂z = 0.
The total transport of mass through a vertical surface of unit width reaching from the surface to the bottom has the components Mx = ∫ρνx dz and My = ∫ρνy dz, the integrals being taken from the surface to the bottom. Multiplying equation (XII, 23) by dz and integrating from the surface to the bottom, one obtains ∂Mx/∂x + ∂My/∂y = 0, provided that the vertical velocity vanishes at the surface and at the bottom.
When dealing with conditions near the surface, one can consider the density as constant and can introduce average values of the velocity components, ν̄x and ν̄y, within a top layer of thickness H. With these simplifications, one obtains, putting νz,0 = 0, νz,H = −H(∂ν̄x/∂x + ∂ν̄y/∂y). Equation (XII, 25) states that at a small distance below the surface ascending motion is encountered if the surface currents are diverging, and descending if the surface currents are converging. This is an obvious conclusion, because, with diverging surface currents, water is carried away from the area of divergence and must be replaced by water that rises from some depth below the surface, and vice versa. Thus, conclusions as to vertical motion can be based on charts showing the surface currents.
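Estimating the vertical motion from a chart of surface currents therefore reduces to evaluating a horizontal divergence. The sketch below does this by centered differences on a hypothetical grid; the velocities, spacing, and layer thickness are made up, and z (hence the vertical velocity) is taken positive downward as in the text.

```python
# Vertical velocity at the base of a surface layer from the divergence of the mean current.

def w_at_depth_H(u, v, dx, dy, H, i, j):
    """Centered-difference horizontal divergence at grid point (i, j), multiplied by -H."""
    dudx = (u[i][j + 1] - u[i][j - 1]) / (2.0 * dx)
    dvdy = (v[i + 1][j] - v[i - 1][j]) / (2.0 * dy)
    return -H * (dudx + dvdy)    # negative (upward) where the surface flow diverges

u = [[-0.1, 0.0, 0.1] for _ in range(3)]    # m/sec, flow spreading in x
v = [[-0.1] * 3, [0.0] * 3, [0.1] * 3]      # m/sec, flow spreading in y
print(w_at_depth_H(u, v, dx=1.0e4, dy=1.0e4, H=50.0, i=1, j=1))   # about -0.001 m/sec, i.e. ascending
```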
For this purpose, it is of advantage to write the divergence of a two-dimensional vector field in a different form:
The equation of continuity is applicable not only to the field of mass but also to the field of a dissolved substance that is not influenced by biological activity. Let the mass of the substance per unit mass of water be s. Multiplying the equation of continuity by s and integrating from the surface to bottom, one obtains, if the vertical velocity at the surface is zero,
These equations have already been used in simplified form in order to compute the relation between inflow and outflow of basins (p. 147). Other simplifications have been introduced by Knudsen, Witting, and Gehrke (Krümmel, 1911, p. 509–512).
Fig. 98. Trajectories (full drawn lines) and stream lines (dashed lines) in a progressive surface wave.
Stream Lines and Trajectories. The vector lines showing the direction of currents at a given time are called the stream lines, or the lines of flow. The paths followed by the moving water particles, on the other hand, are called the trajectories of the particles. Stream lines and trajectories are identical only when the motion is stationary, in which case the stream lines of the velocity field remain unaltered in time, and a particle remains on the same stream line.
The general difference between stream lines and trajectories can be illustrated by considering the type of motion in a traveling surface wave. The solid lines with arrows in fig. 98 show the stream lines in a cross section of a surface wave that is supposed to move from left to right, passing the point A. When the crest of the wave passes A, the motion of the water particles at A is in the direction of progress, but with decreasing
It is supposed that the speed at which the wave travels is much greater than the velocity of the single water particles that take part in the wave motion. On this assumption a water particle that originally was located below A will never be much removed from this vertical and will return after one wave period to its original position. The trajectories of such particles in this case are circles, the diameters of which decrease with increasing distance from the surface, as shown in the figure. It is evident that the trajectories bear no similarity to the stream lines.
Representations of the Field of Motion in the Sea
Trajectories of the surface water masses of the ocean can be determined by following the drift of floating bodies that are carried by the currents. It is necessary, however, to exercise considerable care when interpreting the available information about drift of bodies, because often the wind has carried the body through the water. Furthermore, in most cases, only the end points of the trajectory are known—that is, the localities where the drift commenced and ended. Results of drift-bottle experiments present an example of incomplete information as to trajectories. As a rule, drift bottles are recovered on beaches, and a reconstruction of the paths taken by the bottles from the places at which they were released may be very hypothetical. The reconstruction may be aided by additional information in the form of knowledge of distribution of surface temperatures and salinities that are related to the currents, or by information obtained from drift bottles that have been picked up at sea. Systematic drift-bottle experiments have been conducted, especially in coastal areas that are of importance to fisheries.
Stream lines of the actual surface or subsurface currents must be based upon a very large number of direct current measurements. Where the velocity is not stationary, simultaneous observations are required. Direct measurements of subsurface currents must be made from anchored vessels, but this procedure is so difficult that no simultaneous measurements that can be used for preparing charts of observed subsurface currents for any area are available.
Numerous observations of surface currents, on the other hand, have been derived from ships' logs. Assume that the position of the vessel at
Fig. 99. Determination of surface currents by difference between positions by fixes and dead reckoning.
The data on surface currents obtained from ships' logs cannot be used for construction of a synoptic chart of the currents, because the number of simultaneous observations is far too small. Data for months, quarter years, or seasons have been compiled, however, from many years' observations, although even these are unsatisfactory for presentation of the average conditions because such data are not evenly distributed over large areas but are concentrated along trade routes. In some charts the average direction in different localities is indicated by arrows, and where strong currents prevail the average speed in nautical miles per day is shown by a number. In other charts the surface flow is represented by direction roses in which the number at the center of the rose represents the percentage of no current, the lengths of the different arrows represent the percentage of currents in the direction of the arrows, and the figures at the ends of the arrows represent the average velocity in miles per day of currents in the indicated direction. These charts contain either averages for the year or for groups of months.
On the basis of such charts, average surface currents during seasons or months have in some areas been represented by means of stream lines and equiscalar curves of velocity. The principal advantage of this representation is that it permits a rapid survey of the major features and that it brings out the singularities of the stream lines, although in many instances the interpretation of the data is uncertain and the details of the chart will depend greatly upon personal judgment.
In drawing these stream lines it is necessary to follow the rules concerning vector lines (p. 419). The stream lines cannot intersect, but an infinite number of stream lines can meet in a point of convergence or divergence or can approach asymptotically a line of convergence or diverge asymptotically from a line of divergence.
Fig. 100. Stream lines of the surface currents off southeastern Africa in July (after Willimzik).
As an example, stream lines of the surface flow in July off southeast Africa and to the south and southeast of Madagascar are shown in fig. 100. The figure is based on a chart by Willimzik (1929), but a number of the stream lines in the original chart have been omitted for the sake of simplification. In the chart a number of the characteristic singularities of a vector field are shown. Three hyperbolic points marked A appear, four points of convergence marked B are seen, and a number of lines of convergence marked C and lines of divergence marked D are present. The stream lines do not everywhere run parallel to the coast, and the representation involves the assumption of vertical motion at the coast, where the horizontal velocity, however, must vanish.
The most conspicuous feature is the continuous line of convergence that to the southwest of Madagascar curves south and then runs west, following lat. 35°S. At this line of convergence, the Subtropical Convergence, which can be traced across the entire Indian Ocean and has its counterpart in other oceans, descending motion must take place. Similarly, descending motion must be present at the other lines of convergence, at the points of convergence, and at the east coast of Madagascar, whereas ascending motion must be present along the lines of divergence and along the west coast of Madagascar, where the surface waters flow away from the coast. Velocity curves have been omitted, for which reason the conclusions as to vertical motion remain incomplete (see p. 425). Near the coasts, eddies or countercurrents are indicated, and these phenomena often represent characteristic features of the flow and remain unaltered during long periods.
As has already been stated, representations of surface flow by means of stream lines have been prepared in a few cases only. As a rule, the surface currents are shown by means of arrows. In some instances the representation is based on ships' observation of currents, but in other cases the surface flow has been derived from observed distribution of temperature and salinity, perhaps taking results of drift-bottle experiments into account. The velocity of the currents may not be indicated or may be shown by added numerals, or by the thickness of the arrows. No uniform system has been adopted (see Defant, 1929), because the available data are of such different kinds that in each individual case a form of representation must be selected which presents the available information in the most satisfactory manner. Other examples of surface flow will be given in the section dealing with the currents in specific areas.
Willimzik, M. (1929). “Die Strömungen im subtropischen Konvergenzgebiet des Indischen Ozeans.” Berlin, Universität, Institut f. Meereskunde, Veröff., N. F., A. Geogr.-naturwiss. Reihe, Heft 14, 27 pp. | http://publishing.cdlib.org/ucpressebooks/view?docId=kt167nb66r&doc.view=content&chunk.id=ch12&toc.depth=100&anchor.id=ch12&brand=eschol | 13 |
84 |
In statistical significance testing the p-value is the probability of obtaining a test statistic at least as extreme as the one that was actually observed, assuming that the null hypothesis is true. One often "rejects the null hypothesis" when the p-value is less than the predetermined significance level which is often 0.05 or 0.01, indicating that the observed result would be highly unlikely under the null hypothesis. Many common statistical tests, such as chi-squared tests or Student's t-test, produce test statistics which can be interpreted using p-values.
The p-value is a key concept in the approach of Ronald Fisher, where he uses it to measure the weight of the data against a specified hypothesis, and as a guideline to ignore data that does not reach a specified significance level. Fisher's approach does not involve any alternative hypothesis; that concept belongs instead to the Neyman–Pearson approach. The p-value should not be confused with the Type I error rate (false positive rate) α in the Neyman–Pearson approach – though α is also called a "significance level" and is often 0.05, these terms have different meanings, these are incompatible approaches, and the numbers p and α cannot meaningfully be compared. There is a great deal of confusion and misunderstanding on this point, and many misinterpretations, discussed below. Fundamentally, the p-value does not in itself allow reasoning about the probabilities of hypotheses (this requires a prior, as in Bayesian statistics), nor choosing between different hypotheses (this is instead done in Neyman–Pearson statistical hypothesis testing) – it is simply the probability of obtaining data at least as extreme as those observed, assuming the null hypothesis is true.
Despite the above caveats, statistical hypothesis tests making use of p-values are commonly used in many fields of science and social sciences, such as economics, psychology, biology, criminal justice and criminology, and sociology, though this is criticized (see below).
Computing a p-value requires a null hypothesis, a test statistic (together with deciding if one is doing one-tailed test or a two-tailed test), and data. A few simple examples follow, each illustrating a potential pitfall.
- One roll of a pair of dice
Rolling a pair of dice once, assuming a null hypothesis of fair dice, the test statistic of "total value of numbers rolled" (one-tailed), and with data of both dice showing 6 (so a test statistic of 12, the total) yields a p-value of 1/36, or about 0.028 (most extreme value out of 6×6 = 36 possible outcomes). At the 0.05 significance level, one rejects the hypothesis that the dice are fair (not loaded towards 6).
This illustrates the danger with blindly applying p-value without considering experiment design – a single roll of a pair of dice is a very weak basis (insufficient data) to draw any meaningful conclusion.
- Five heads in a row
Flipping a coin five times, assuming a null hypothesis of a fair coin, a test statistic of "total number of heads" (one-tailed or two-tailed), and with data of all heads (HHHHH) yields a test statistic of 5. In a one-tailed test, this is the unique most extreme value (out of 32 possible outcomes), and yields a p-value of 1/2^5 = 1/32 ≈ 0.03, which is significant at the 0.05 level. In a two-tailed test, all tails (TTTTT) is as extreme, and thus the data of HHHHH yields a p-value of 2/2^5 = 1/16 ≈ 0.06, which is not significant at the 0.05 level. These correspond respectively to testing if the coin is biased towards heads, or if the coin is biased either way.
This demonstrates that specifying a direction (on a symmetric test statistic) halves the p-value (increases the significance) and can mean the difference between data being considered significant or not.
- Sample size dependence
Flipping a coin n times, assuming a null hypothesis of a fair coin, a test statistic of "total number of heads" (two-tailed), and with data of all heads yields a test statistic of n and a p-value of 2/2^n = 2^−(n−1). If one has a two-headed coin and flips the coin 5 times (obtaining heads each time, as it is two-headed), the p-value is 0.0625 > 0.05, but if one flips the coin 10 times (obtaining heads each time), the p-value is ≈ 0.002 < 0.05.
In both cases the data suggest that the null hypothesis is false, but changing the sample size changes the p-value and significance level. In the first case the sample size is not large enough to allow the null hypothesis to be rejected at the 0.05 level (in fact, in this example the p-value cannot be below 0.05 given a sample size of 5). In cases when a large sample size produces a significant result, a smaller sample size may produce a result that is not significant, simply because the sample size is too small to detect the effect.
This demonstrates that in interpreting p-values, one must also know the sample size, which complicates the analysis.
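To make the arithmetic concrete, here is a minimal Python sketch (the function name is illustrative, not from the text) that evaluates 2/2^n for a few sample sizes:

```python
# Two-tailed p-value for all-heads data in n flips of a fair coin:
# the only outcomes this extreme are all heads and all tails, so p = 2 / 2**n.
def all_heads_p_value(n):
    return 2 / 2**n

for n in (5, 10, 20):
    p = all_heads_p_value(n)
    verdict = "significant" if p < 0.05 else "not significant"
    print(f"n = {n:2d}: p = {p:.6f} ({verdict} at the 0.05 level)")
```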
- Alternating coin flips
Flipping a coin ten times, assuming a null hypothesis of a fair coin, a test statistic of "total number of heads" (two-tailed), and with data of alternating heads/tails (HTHTHTHTHT) yields a test statistic of 5 and a p-value of 1 (completely unexceptional), as this is exactly the expected number of heads.
However, using the subtler test statistic of "number of alternations" (times when H is followed by T or T is followed by H), again two-tailed, yields a test statistic of 9, which is extreme, and has a p-value of 4/1024 ≈ 0.004, which is extremely significant. The expected number of alternations is 4.5 (there are 9 gaps, and each has a 0.5 chance of being an alternation), the values as extreme as this are 0 and 9, and there are only 4 sequences (out of 1024 possible outcomes) this extreme: all heads, all tails, alternating starting from heads (this case), or alternating starting from tails.
This data indicates that, in terms of this test statistic, the data set is extremely unlikely to have occurred by chance, though it does not suggest that the coin is biased towards heads or tails. There is no "alternative hypothesis", only rejection of the null hypothesis, and such data could have many causes – the data may instead be forged, or the coin flipped by a magician who intentionally alternated outcomes.
This example demonstrates that the p-value depends completely on the test statistic used, and illustrates that p-values are about rejecting a null hypothesis, not about considering other hypotheses.
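The dependence on the choice of test statistic can be checked by brute force. The sketch below (an illustrative enumeration assuming only the Python standard library) counts how many of the 1,024 equally likely sequences are at least as far from the expected 4.5 alternations as the observed 9:

```python
from itertools import product

def alternations(seq):
    # Count positions where the outcome changes (H followed by T, or T followed by H).
    return sum(a != b for a, b in zip(seq, seq[1:]))

observed = alternations("HTHTHTHTHT")   # 9 alternations
expected = 4.5                          # 9 gaps, each with a 0.5 chance of alternating
extreme = sum(
    abs(alternations(seq) - expected) >= abs(observed - expected)
    for seq in product("HT", repeat=10)
)
print(extreme, extreme / 2**10)         # 4 sequences, p = 4/1024 ≈ 0.004
```

Only all heads, all tails, and the two alternating sequences qualify, reproducing the count given above.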
- Impossible outcome and very unlikely outcome
Flipping a coin two times, assuming a null hypothesis of a two-headed coin, a test statistic of "total number of heads" (one-tailed), and with data of one head and one tail (HT) yields a test statistic of 1, and a p-value of 0. In this case the data is inconsistent with the hypothesis – for a two-headed coin, a tail can never come up. In this case the outcome is not simply unlikely in the null hypothesis, but in fact impossible, and the null hypothesis can be definitely rejected as false. In practice such experiments almost never occur, as all data that could be observed would be possible in the null hypothesis, albeit unlikely.
If the null hypothesis were instead that the coin came up heads 99% of the time (otherwise the same setup), the p-value would instead be 0.0199 ≈ 0.02.[a] In this case the null hypothesis could not definitely be ruled out – this outcome is unlikely in the null hypothesis, but not impossible – but the null hypothesis would be rejected at the 0.05 level, and in fact at the 0.02 level, since the outcome is less than 2% likely in the null hypothesis.
Coin flipping
As an example of a statistical test, an experiment is performed to determine whether a coin flip is fair (equal chance of landing heads or tails) or unfairly biased (one outcome being more likely than the other).
Suppose that the experimental results show the coin turning up heads 14 times out of 20 total flips. The null hypothesis is that the coin is fair, so the p-value of this result is the chance of a fair coin landing on heads at least 14 times out of 20 flips. This probability can be computed from binomial coefficients as Prob(at least 14 heads out of 20) = [C(20,14) + C(20,15) + ... + C(20,20)] / 2^20 = 60,460 / 1,048,576 ≈ 0.058.
This probability is the p-value, considering only extreme results which favor heads. This is called a one-tailed test. However, the deviation can be in either direction, favoring either heads or tails. We may instead calculate the two-tailed p-value, which considers deviations favoring either heads or tails. As the binomial distribution is symmetrical for a fair coin, the two-sided p-value is simply twice the above calculated single-sided p-value; i.e., the two-sided p-value is 0.115.
In the above example we thus have:
- Null hypothesis (H0): The coin is fair; Prob(heads) = 0.5
- Observation O: 14 heads out of 20 flips; and
- p-value of observation O given H0 = Prob(≥ 14 heads or ≥ 14 tails) = 2*(1-Prob(< 14)) = 0.115.
The calculated p-value exceeds 0.05, so the observation is consistent with the null hypothesis, as it falls within the range of what would happen 95% of the time were the coin in fact fair. Hence, we fail to reject the null hypothesis at the 5% level. Although the coin did not fall evenly, the deviation from expected outcome is small enough to be consistent with chance.
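The quoted numbers can be reproduced with a few lines of Python using only the standard library (the helper name is illustrative):

```python
from math import comb

def prob_at_least(k, n, p=0.5):
    # P(X >= k) for X ~ Binomial(n, p)
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

one_sided = prob_at_least(14, 20)   # ≈ 0.058
two_sided = 2 * one_sided           # ≈ 0.115, as quoted above
print(round(one_sided, 3), round(two_sided, 3))
```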
However, had one more head been obtained, the resulting p-value (two-tailed) would have been 0.0414 (4.14%). This time the null hypothesis – that the observed result of 15 heads out of 20 flips can be ascribed to chance alone – is rejected when using a 5% cut-off.
In brief, the (left-tailed) p-value is the quantile of the value of the test statistic, with respect to the sampling distribution under the null hypothesis. The right-tailed p-value is one minus the quantile, while the two-tailed p-value is twice whichever of these is smaller. This is elaborated below.
Computing a p-value requires a null hypothesis, a test statistic (together with deciding if one is doing one-tailed test or a two-tailed test), and data. The key preparatory computation is computing the cumulative distribution function (CDF) of the sampling distribution of the test statistic under the null hypothesis; this may depend on parameters in the null distribution and the number of samples in the data. The test statistic is then computed for the actual data, and then its quantile computed by inputting it into the CDF. This is then normalized as follows:
- one-tailed (left tail): quantile, value of cumulative distribution function (since values close to 0 are extreme);
- one-tailed (right tail): one minus quantile, value of complementary cumulative distribution function (since values close to 1 are extreme: 0.95 becomes 0.05);
- two-tailed: twice p-value of one-tailed, for whichever side value is on (since values close to 0 or 1 are both extreme: 0.05 and 0.95 both have a p-value of 0.10, as one adds the tails on both sides).
Even though computing the test statistic on given data may be easy, computing the sampling distribution under the null hypothesis, and then computing its CDF is often a difficult computation. Today this computation is done using statistical software, often via numeric methods (rather than exact formulas), while in the early and mid 20th century, this was instead done via tables of values, and one interpolated or extrapolated p-values from these discrete values. Rather than using a table of p-values, Fisher instead inverted the CDF, publishing a list of values of the test statistic for given fixed p-values; this corresponds to computing the quantile function (inverse CDF).
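As an illustration of this normalization, and assuming the SciPy library is available (the choice of library is ours, not the text's), the coin-flipping example can be worked directly from the null sampling distribution:

```python
from scipy import stats

null = stats.binom(n=20, p=0.5)   # sampling distribution of "heads in 20 fair flips"
t = 14                            # observed test statistic

left_tail  = null.cdf(t)          # quantile: P(X <= 14)
right_tail = null.sf(t - 1)       # complementary CDF: P(X >= 14) ≈ 0.058
two_tailed = 2 * min(left_tail, right_tail)   # ≈ 0.115
print(left_tail, right_tail, two_tailed)
```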
Hypothesis tests, such as Student's t-test, typically produce test statistics whose sampling distributions under the null hypothesis are known. For instance, in the above coin-flipping example, the test statistic is the number of heads produced; this number follows a known binomial distribution if the coin is fair, and so the probability of any particular combination of heads and tails can be computed. To compute a p-value from the test statistic, one must simply sum (or integrate over) the probabilities of more extreme events occurring. For commonly used statistical tests, test statistics and their corresponding p-values are often tabulated in textbooks and reference works.
Traditionally, following Fisher, one rejects the null hypothesis if the p-value is less than or equal to a specified significance level, often 0.05, or more stringent values, such as 0.02 or 0.01. These numbers should not be confused with the Type I error rate α in Neyman–Pearson-style statistical hypothesis testing; see misunderstandings, below. A significance level of 0.05 would deem extraordinary any result that is within the most extreme 5% of all possible results under the null hypothesis. In this case a p-value less than 0.05 would result in the rejection of the null hypothesis at the 5% (significance) level.
In the 1770s Laplace considered the statistics of almost half a million births. The statistics showed an excess of boys compared to girls. He concluded by calculation of a p-value that the excess was a real, but unexplained, effect.
The p-value was first formally introduced by Karl Pearson in his Pearson's chi-squared test, using the chi-squared distribution and notated as capital P. The p-values for the chi-squared distribution (for various values of χ² and degrees of freedom), now notated as P, were calculated in (Elderton 1902) and collected in (Pearson 1914, pp. xxxi–xxxiii, 26–28, Table XII). The use of the p-value in statistics was popularized by Ronald Fisher, and it plays a central role in Fisher's approach to statistics.
In the influential book Statistical Methods for Research Workers (1925), Fisher proposes the level p = 0.05, or a 1 in 20 chance of being exceeded by chance, as a limit for statistical significance, and applies this to a normal distribution (as a two-tailed test), thus yielding the rule of two standard deviations (on a normal distribution) for statistical significance – see 68–95–99.7 rule.[b]
He then computes a table of values, similar to Elderton, but, importantly, reverses the roles of χ² and p. That is, rather than computing p for different values of χ² (and degrees of freedom n), he computes values of χ² that yield specified p-values, specifically 0.99, 0.98, 0.95, 0.90, 0.80, 0.70, 0.50, 0.30, 0.20, 0.10, 0.05, 0.02, and 0.01. This allowed computed values of χ² to be compared against cutoffs, and encouraged the use of p-values (especially 0.05, 0.02, and 0.01) as cutoffs, instead of computing and reporting p-values themselves. Tables of the same type were then compiled in (Fisher & Yates 1938), which cemented the approach.
As an illustration of the application of p-values to the design and interpretation of experiments, in his following book The Design of Experiments (1935), Fisher presented the lady tasting tea experiment, which is the archetypal example of the p-value.
To evaluate a lady's claim that she (Muriel Bristol) could distinguish by taste how tea is prepared (first adding the milk to the cup, then the tea, or first tea, then milk), she was sequentially presented with 8 cups: 4 prepared one way, 4 prepared the other, and asked to determine the preparation of each cup (knowing that there were 4 of each). In this case the null hypothesis was that she had no special ability, the test was Fisher's exact test, and the p-value was 1/70 ≈ 0.014, so Fisher was willing to reject the null hypothesis (consider the outcome highly unlikely to be due to chance) if all were classified correctly. (In the actual experiment, Bristol correctly classified all 8 cups.)
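Under this null hypothesis every way of choosing which 4 of the 8 cups were prepared milk-first is equally likely, so the probability of a perfect classification reduces to a single combinatorial count. A minimal sketch using the standard library:

```python
from math import comb

ways = comb(8, 4)                    # 70 equally likely ways to pick 4 cups out of 8
p_perfect = 1 / ways                 # ≈ 0.014, below Fisher's 0.05 threshold
p_perfect_six_cups = 1 / comb(6, 3)  # = 0.05 exactly, for the 6-cup design discussed below
print(ways, p_perfect, p_perfect_six_cups)
```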
Fisher reiterated the p = 0.05 threshold and explained its rationale, stating:
It is usual and convenient for experimenters to take 5 per cent. as a standard level of significance, in the sense that they are prepared to ignore all results which fail to reach this standard, and, by this means, to eliminate from further discussion the greater part of the fluctuations which chance causes have introduced into their experimental results.
He also applies this threshold to the design of experiments, noting that had only 6 cups been presented (3 of each), a perfect classification would have only yielded a p-value of 1/20 = 0.05, which would not have met this level of significance. Fisher also underlined the frequentist interpretation of p, as the long-run proportion of values at least as extreme as the data, assuming the null hypothesis is true.
In later editions, Fisher explicitly contrasted the use of the p-value for statistical inference in science with the Neyman–Pearson method, which he terms "Acceptance Procedures". Fisher emphasizes that while fixed levels such as 5%, 2%, and 1% are convenient, the exact p-value can be used, and the strength of evidence can and will be revised with further experimentation. In contrast, decision procedures require a clear-cut decision, yielding an irreversible action, and the procedure is based on costs of error, which he argues are inapplicable to scientific research.
Despite the ubiquity of p-value tests, this particular test for statistical significance has been criticized for its inherent shortcomings and the potential for misinterpretation.
The data obtained by comparing the p-value to a significance level will yield one of two results: either the null hypothesis is rejected, or the null hypothesis cannot be rejected at that significance level (which however does not imply that the null hypothesis is true). In Fisher's formulation, there is a disjunction: a low p-value means either that the null hypothesis is true and a highly improbable event has occurred, or that the null hypothesis is false.
However, people interpret the p-value in many incorrect ways, and try to draw other conclusions from p-values, which do not follow.
The p-value does not in itself allow reasoning about the probabilities of hypotheses; this requires multiple hypotheses or a range of hypotheses, with a prior distribution of likelihoods between them, as in Bayesian statistics, in which case one uses a likelihood function for all possible values of the prior, instead of the p-value for a single null hypothesis.
The p-value refers only to a single hypothesis, called the null hypothesis, and does not make reference to or allow conclusions about any other hypotheses, such as the alternative hypothesis in Neyman–Pearson statistical hypothesis testing. In that approach one instead has a decision function between two alternatives, often based on a test statistic, and one computes the rate of Type I and type II errors as α and β. However, the p-value of a test statistic cannot be directly compared to these error rates α and β – instead it is fed into a decision function.
- The p-value is not the probability that the null hypothesis is true, nor is it the probability that the alternative hypothesis is false – it is not connected to either of these.
In fact, frequentist statistics does not, and cannot, attach probabilities to hypotheses. Comparison of Bayesian and classical approaches shows that a p-value can be very close to zero while the posterior probability of the null is very close to unity (if there is no alternative hypothesis with a large enough a priori probability and which would explain the results more easily). This is Lindley's paradox. But there are also a priori probability distributions where the posterior probability and the p-value have similar or equal values.
- The p-value is not the probability that a finding is "merely a fluke."
As the calculation of a p-value is based on the assumption that a finding is the product of chance alone, it patently cannot also be used to gauge the probability of that assumption being true. This is different from the real meaning which is that the p-value is the chance of obtaining such results if the null hypothesis is true.
- The p-value is not the probability of falsely rejecting the null hypothesis. This error is a version of the so-called prosecutor's fallacy.
- The p-value is not the probability that a replicating experiment would not yield the same conclusion. Quantifying the replicability of an experiment was attempted through the concept of p-rep (which is heavily criticized)
- The significance level, such as 0.05, is not determined by the p-value.
Rather, the significance level is decided before the data are viewed, and is compared against the p-value, which is calculated after the test has been performed. (However, reporting a p-value is more useful than simply saying that the results were or were not significant at a given level, and allows readers to decide for themselves whether to consider the results significant.)
- The p-value does not indicate the size or importance of the observed effect (compare with effect size). The two do vary together, however: the larger the effect, the smaller the sample size that will be required to get a significant p-value.
Critics of p-values point out that the criterion used to decide "statistical significance" is based on an arbitrary choice of level (often set at 0.05). If significance testing is applied to hypotheses that are known to be false in advance, a non-significant result will simply reflect an insufficient sample size; a p-value depends only on the information obtained from a given experiment.
The p-value is incompatible with the likelihood principle, and p-value depends on the experiment design, or equivalently on the test statistic in question. That is, the definition of "more extreme" data depends on the sampling methodology adopted by the investigator; for example, the situation in which the investigator flips the coin 100 times yielding 50 heads has a set of extreme data that is different from the situation in which the investigator continues to flip the coin until 50 heads are achieved yielding 100 flips. This is to be expected, as the experiments are different experiments, and the sample spaces and the probability distributions for the outcomes are different even though the observed data (50 heads out of 100 flips) are the same for the two experiments.
Some regard the p-value as the main result of statistical significance testing, rather than the acceptance or rejection of the null hypothesis at a pre-prescribed significance level. Fisher proposed p as an informal measure of evidence against the null hypothesis. He called on researchers to combine p in the mind with other types of evidence for and against that hypothesis, such as the a priori plausibility of the hypothesis and the relative strengths of results from previous studies.
Many misunderstandings concerning p arise because statistics classes and instructional materials ignore or at least do not emphasize the role of prior evidence in interpreting p. A renewed emphasis on prior evidence could encourage researchers to place p in the proper context, evaluating a hypothesis by weighing p together with all the other evidence about the hypothesis.
Related quantities
A closely related concept is the E-value, which is the average number of times in multiple testing that one expects to obtain a test statistic at least as extreme as the one that was actually observed, assuming that the null hypothesis is true. The E-value is the product of the number of tests and the p-value.
The 'inflated' (or adjusted) p-value arises when a group of p-values is modified according to some multiple comparisons procedure, so that each adjusted p-value can be compared to the same threshold level of significance (α) while keeping the Type I error controlled. Which error is controlled depends on the specific procedure: it may be the familywise error rate, the false discovery rate, or some other error rate.
See also
- False discovery rate
- Fisher's method
- Generalized p-value
- Statistical hypothesis testing
- The probability of TT is 0.01 × 0.01 = 0.0001; the probabilities of HT and TH are 0.99 × 0.01 = 0.0099 and 0.01 × 0.99 = 0.0099, which are equal; adding these yields a p-value of 0.0199.
- To be precise the p = 0.05 corresponds to about 1.96 standard deviations for a normal distribution (two-tailed test), and 2 standard deviations corresponds to about a 1 in 22 chance of being exceeded by chance, or p ≈ 0.045; Fisher notes these approximations.
- Goodman, SN (1999). "Toward Evidence-Based Medical Statistics. 1: The P Value Fallacy". Annals of Internal Medicine 130: 995–1004.
- Stigler 2008.
- Dallal 2012, Note 31: Why P=0.05?.
- Hubbard & Lindsay 2008.
- Wetzels, R.; Matzke, D.; Lee, M. D.; Rouder, J. N.; Iverson, G. J.; Wagenmakers, E. -J. (2011). "Statistical Evidence in Experimental Psychology: An Empirical Comparison Using 855 t Tests". Perspectives on Psychological Science 6 (3): 291. doi:10.1177/1745691611406923.
- Babbie, E. (2007). The practice of social research 11th ed. Thomson Wadsworth: Belmont, CA.
- Stigler 1986, p. 134.
- Pearson 1900.
- Inman 2004.
- Hubbard & Bayarri 2003, p. 1.
- Fisher 1925, p. 47, Chapter III. Distributions.
- Fisher 1925, pp. 78–79, 98, Chapter IV. Tests of Goodness of Fit, Independence and Homogeneity; with Table of χ2, Table III. Table of χ2.
- Fisher 1971, II. The Principles of Experimentation, Illustrated by a Psycho-physical Experiment.
- Fisher 1971, Section 7. The Test of Significance.
- Fisher 1971, Section 12.1 Scientific Inference and Acceptance Procedures.
- Sterne, J. A. C.; Smith, G. Davey (2001). "Sifting the evidence–what's wrong with significance tests?". BMJ (Clinical research ed.) 322 (7280): 226–231. doi:10.1136/bmj.322.7280.226. PMC 1119478. PMID 11159626.
- Schervish, M. J. (1996). "P Values: What They Are and What They Are Not". The American Statistician 50 (3). doi:10.2307/2684655. JSTOR 2684655.
- Casella, George; Berger, Roger L. (1987). "Reconciling Bayesian and Frequentist Evidence in the One-Sided Testing Problem". Journal of the American Statistical Association 82 (397): 106–111.
- Sellke, Thomas; Bayarri, M. J.; Berger, James O. (2001). "Calibration of p Values for Testing Precise Null Hypotheses". The American Statistician 55 (1): 62–71. doi:10.1198/000313001300339950. JSTOR 2685531.
- Casson, R. J. (2011). "The pesty P value". Clinical & Experimental Ophthalmology 39 (9): 849–850. doi:10.1111/j.1442-9071.2011.02707.x.
- Johnson, D. H. (1999). "The Insignificance of Statistical Significance Testing". Journal of Wildlife Management 63 (3): 763–772. doi:10.2307/3802789.
- National Institutes of Health definition of E-value
- Hochberg, Y.; Benjamini, Y. (1990). "More powerful procedures for multiple significance testing". Statistics in Medicine 9 (7): 811–818. doi:10.1002/sim.4780090710. PMID 2218183. (page 815, second paragraph)
- Pearson, Karl (1900). "On the criterion that a given system of deviations from the probable in the case of a correlated system of variables is such that it can be reasonably supposed to have arisen from random sampling". Philosophical Magazine Series 5 50 (302): 157–175. doi:10.1080/14786440009463897.
- Elderton, William Palin (1902). "Tables for Testing the Goodness of Fit of Theory to Observation". Biometrika 1 (2): 155–163. doi:10.1093/biomet/1.2.155.
- Fisher, Ronald (1925). Statistical Methods for Research Workers. Edinburgh: Oliver & Boyd. ISBN 0-05-002170-2.
- Fisher, Ronald A. (1971). The Design of Experiments (9th ed.). Macmillan. ISBN 0-02-844690-9.
- Fisher, R. A.; Yates, F. (1938). Statistical tables for biological, agricultural and medical research. London.
- Stigler, Stephen M. (1986). The history of statistics : the measurement of uncertainty before 1900. Cambridge, Mass: Belknap Press of Harvard University Press. ISBN 0-674-40340-1.
- Hubbard, Raymond; Bayarri, M. J. (November 2003), P Values are not Error Probabilities, a working paper that explains the difference between Fisher's evidential p-value and the Neyman–Pearson Type I error rate α.
- Hubbard, Raymond; Armstrong, J. Scott (2006). "Why We Don't Really Know What Statistical Significance Means: Implications for Educators". Journal of Marketing Education 28 (2): 114. doi:10.1177/0273475306288399.
- Hubbard, Raymond; Lindsay, R. Murray (2008). "Why P Values Are Not a Useful Measure of Evidence in Statistical Significance Testing". Theory & Psychology 18 (1): 69–88. doi:10.1177/0959354307086923.
- Stigler, Stephen (December 2008). "Fisher and the 5% level". Chance 21 (4): 12. doi:10.1007/s00144-008-0033-3.
- Dallal, Gerard E. (2012). The Little Handbook of Statistical Practice.
Further reading
- Free online p-values calculators for various specific tests (chi-square, Fisher's F-test, etc.).
- Understanding p-values, including a Java applet that illustrates how the numerical values of p-values can give quite misleading impressions about the truth or falsity of the hypothesis under test.
What is Acceleration?
Acceleration has the dimensions of distance divided by time squared, and is the rate of change in velocity (or speed) over time. Because it is a vector (it has both magnitude and direction) it can be the rate at which an object speeds up or slows down, which is the commonly understood meaning; but it can also be a change in direction. The SI units for acceleration are metres per second per second (m/s²).
Acceleration may be measured over a time period, an average acceleration, or it may be measured as an instantaneous acceleration. Average acceleration is measured over an interval; for example the time it takes for a car to accelerate from a standstill to 60 miles per hour. The acceleration is determined by dividing the change in velocity by the time. The instantaneous acceleration is measured over an infinitesimal period. In terms of calculus, the instantaneous acceleration is the first derivative of velocity, or the second derivative of distance.
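For example, an average acceleration can be computed directly from the definition; in the sketch below the 8-second time to reach 60 mph is an assumed, illustrative figure:

```python
MPH_TO_MS = 0.44704                     # metres per second in one mile per hour

def average_acceleration(delta_v, delta_t):
    """Average acceleration = change in velocity divided by elapsed time."""
    return delta_v / delta_t

delta_v = 60 * MPH_TO_MS                # 0 to 60 mph is a change of about 26.8 m/s
a = average_acceleration(delta_v, 8.0)  # assume the car takes 8 s to reach 60 mph
print(f"{a:.2f} m/s^2  (about {a / 9.8:.2f} g)")   # ≈ 3.35 m/s^2, roughly a third of g
```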
Acceleration occurs as a result of a force acting on a mass, and is equal to the force divided by the mass. For a rocket this force is applied by the reaction of the gases leaving the nozzle of the rocket motor. For an object falling to earth, this force is gravity, which is a force of attraction between two masses. At the surface of the earth this force is a constant per unit mass, and this constant is called the gravitational acceleration, or g, and is approximately 9.8 m/s². This constant is defined by the mass and dimensions of the earth and the universal Gravitational Constant.
An object dropped from a height will fall to earth and its acceleration will be 9.8 m/s², at least initially. Galileo was the first to discover this fact, which runs counter to the intuitive expectation that a heavier object would fall faster. As it speeds up, air resistance will begin to play a part and this will ultimately limit the velocity to the "terminal velocity", which is the speed at which the force due to gravity is balanced by the decelerating force due to air resistance. When there is no net force there is no further acceleration.
The term "g-force" is often used to describe the apparent force experienced by a pilot or driver as a vehicle or aircraft changes speed or direction. The "g-force" is not actually a force but an acceleration. So for example experiencing 4g is to undergo an acceleration equivalent to 4 times the acceleration due to gravity, where 4g is 4x9.8 m/s2.
A device for measuring acceleration is called an accelerometer. As well as single-axis accelerometers, multiple-axis models are available to measure both the magnitude and direction of the acceleration as a vector quantity. They are used to sense location, orientation, vibration and shock. Nowadays accelerometers can be found in portable consumer electronic devices such as pedometers, mobile phones, cameras (for image stabilization) and video game controllers.
Accelerometers are normally small electro-mechanical devices which may be a simple cantilever beam with a mass in a sealed container. As the device is accelerated the mass causes the beam to deflect from its rest position, and this can be measured using piezoelectric sensors, or sensing movement by measuring electrical capacitance. A multiple-axis accelerometer would typically have three of these sensors; one for each perpendicular axis. Accelerometers can be sensitive to gravity and may need to be calibrated before use. More exotic accelerometers use laser Doppler techniques or light gates to measure acceleration from an external observing position.
Accelerometers are used in crash-test dummies to measure the intensity of the acceleration. Extreme acceleration causes injury or death because of the resulting forces (the acceleration in a car crash can exceed 100g without a seat belt, which is not survivable). They are also used to measure vibration and in automobile braking systems. Other applications include seismology, inertial navigation and scientific measurement.
What is this triangle?
If an enormously heavy object has to be moved from one spot to another, it may not be practical to move it on wheels. Instead the object is placed on a flat platform that in turn rests on cylindrical rollers (Figure 1). As the platform is pushed forward, the rollers left behind are picked up and put down in front. An object moved this way over a flat horizontal surface does not bob up and down as it rolls along. The reason is that cylindrical rollers have a circular cross section, and a circle is a closed curve "with constant width." What does that mean? If a closed convex curve is placed between two parallel lines and the lines are moved together until they touch the curve, the distance between the parallel lines is the curve's "width" in one direction. Because a circle has the same width in all directions, it can be rotated between two parallel lines without altering the distance between the lines.
Is a circle the only curve with constant width? Actually there are infinitely many such curves. The simplest noncircular such curve is named the Reuleaux triangle. Mathematicians knew it earlier (some references go back to Leonhard Euler in the 18th century), but some curved triangles can be seen in Leonardo da Vinci's 15th-century Codex Madrid. In addition the 13th-century Notre Dame cathedral in Bruges, Belgium, has several windows in the clear shape of a Reuleaux triangle (Figure 2). But Franz Reuleaux was the first to demonstrate its constant-width properties and the first to use the triangle in mechanisms. See models B01, B02, B03, B04, L01, L02, L03, L04, L05, and L06. A modern application of the Reuleaux triangle can be seen in the Wankel engine (Figure 3).
How to construct a Reuleaux triangle
To construct a Reuleaux triangle begin with an equilateral triangle of side s, and then replace each side by a circular arc with the other two original sides as radii (Figure 4).
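The constant-width property of this construction can be checked numerically. The sketch below (an illustration assuming NumPy; it is not part of the original tutorial) samples points on the three arcs and measures the extent of the boundary in many directions:

```python
import numpy as np

s = 1.0                                              # side of the equilateral triangle
V = np.array([[0.0, 0.0], [s, 0.0], [s / 2, s * np.sqrt(3) / 2]])

pts = []
for i in range(3):                                   # one circular arc per vertex
    c = V[i]
    a1, a2 = (np.arctan2(*(V[j] - c)[::-1]) for j in ((i + 1) % 3, (i + 2) % 3))
    d = (a2 - a1 + np.pi) % (2 * np.pi) - np.pi      # signed 60-degree sweep, short way round
    t = np.linspace(0.0, 1.0, 400)
    ang = a1 + t * d
    pts.append(c + s * np.column_stack((np.cos(ang), np.sin(ang))))
pts = np.vstack(pts)                                 # sampled boundary of the Reuleaux triangle

widths = []
for theta in np.linspace(0.0, np.pi, 180, endpoint=False):
    u = np.array([np.cos(theta), np.sin(theta)])
    proj = pts @ u                                   # project boundary onto direction u
    widths.append(proj.max() - proj.min())           # width = gap between supporting lines
print(min(widths), max(widths))                      # both ≈ s: the width is constant
```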
The corners of a Reuleaux triangle are the sharpest possible on a curve with constant width. Extending each side of an equilateral triangle a uniform distance at each end can round these corners. The resulting curve has a width, in all directions, that is the sum of the same two radii (Figure 5).
Other symmetrical curves with constant width result if you start with a regular pentagon (or any regular polygon with an odd number of sides) and follow similar procedures. This construction is used in the design of some British coins (Figure 6).
This result can be generalized to the case when the polygon is not regular, but each of its vertices is the endpoint of two diagonals of the same length h (and the lengths of other diagonals are less than h). Of course, the polygon has to have an odd number of vertices.
Here is another really surprising method of constructing curves with constant width:
Draw as many straight lines as you like, but all mutually intersecting. Each arc of the curve will be drawn with the compass point at the intersection of the two lines that bound the arc. Start with any arc, then proceed around the curve, connecting each arc to the preceding one. If you do it carefully, the curve will close and will have a constant width. (You can try to prove it! It is not difficult at all.) The curves drawn in this way may have arcs of as many different circles as you wish. Here is one example (Figure 7), but you will really enjoy making your own! After you have made one, you can make more copies of it and check that your wheels really roll!
Another interesting result about curves of constant width is that the inscribed and circumscribed circles of an arbitrary figure of constant width h are concentric and the sum of their radii is equal to h.
Reuleaux Triangle Links
Problems about Reuleaux Triangles and other Curves with Constant Width
The three dimensional analog of a curve with constant width is the solid with constant width. A sphere is not the only such solid that will rotate within a cube, at all times touching all six sides of the cube; all solids of constant width share this property. Rotating a Reuleaux triangle around one of its axes of symmetry generates the simplest example of a nonspherical solid of this type. There are an infinite number of other such examples. The solids with constant width that have the smallest volumes are derived from the regular tetrahedron in somewhat the same way that the Reuleaux triangle is derived from the equilateral triangle: Spherical caps are first placed on each face of the tetrahedron, and then three of the edges must be slightly altered. These altered edges may either form a triangle or radiate from one corner.

Since all curves with the same constant width have the same perimeter, it might be supposed that all solids with the same constant width have the same surface area. It was proved by Hermann Minkowski that all the shadows of solids with constant width are curves of the same constant width (when the projecting rays are parallel and the shadow falls on a plane perpendicular to the rays). All such shadows have equal perimeters.

Michael Goldberg (1957, 1960, 1962) has introduced the term "rotor" for any convex figure that can be rotated inside a polygon or polyhedron while at all times touching every side or face. The Reuleaux triangle is the rotor of least area in a square. The least area rotor for the equilateral triangle is a biangle (a lens shaped figure) formed from two 60-degree arcs of the circle with radius equal to the triangle’s altitude. As the biangle rotates its corners trace the entire boundary of the triangle, with no rounding of corners. See B01.
Closely related to the theory of rotors is a famous problem named the Kakeya Needle Problem, which was first posed in 1917 by the Japanese mathematician Soichi Kakeya: What is the plane figure of least area, in which a line segment of length 1 can be rotated 360 degrees? The rotation obviously can be made inside a circle of unit diameter, but that is far from the smallest area. Ten years after the Kakeya problem was posed, the Russian mathematician Abram Besicovitch showed that there is no minimum area as an answer to Kakeya’s needle problem.
Reuleaux Tetrahedron Links
Kakeya Needle Problem Links
Where I can read more about Reuleaux triangle?
Where can I learn more about Kakeya's needle problem?
A Venn diagram or set diagram is a diagram that shows all possible logical relations between a finite collection of sets (aggregation of things). Venn diagrams were conceived around 1880 by John Venn. They are used to teach elementary set theory, as well as illustrate simple set relationships in probability, logic, statistics, linguistics and computer science (see logical connectives).
[Figures: intersection of two sets (A ∩ B); union of two sets (A ∪ B); relative complement of A (left) in B (right); symmetric difference of two sets; absolute complement of A in U.]
A Venn diagram is constructed with a collection of simple closed curves drawn in a plane. According to Lewis (1918), the "principle of these diagrams is that classes [or sets] be represented by regions in such relation to one another that all the possible logical relations of these classes can be indicated in the same diagram. That is, the diagram initially leaves room for any possible relation of the classes, and the actual or given relation, can then be specified by indicating that some particular region is null or is not-null".
Venn diagrams normally comprise overlapping circles. The interior of the circle symbolically represents the elements of the set, while the exterior represents elements that are not members of the set. For instance, in a two-set Venn diagram, one circle may represent the group of all wooden objects, while another circle may represent the set of all tables. The overlapping area or intersection would then represent the set of all wooden tables. Shapes other than circles can be employed as shown below by Venn's own higher set diagrams. Venn diagrams do not generally contain information on the relative or absolute sizes (cardinality) of sets; i.e. they are schematic diagrams.
Venn diagrams are similar to Euler diagrams. However, a Venn diagram for n component sets must contain all 2n hypothetically possible zones that correspond to some combination of inclusion or exclusion in each of the component sets. Euler diagrams contain only the actual possible zones for a particular given context. In Venn diagrams, a shaded zone may represent an empty zone, whereas in an Euler diagram the corresponding zone is missing from the diagram. For example, if one set represents dairy products and another cheeses, the Venn diagram contains a zone for cheeses that are not dairy products. Assuming that in the context cheese means some type of dairy product, the Euler diagram has the cheese zone entirely contained within the dairy-product zone—there is no zone for (non-existent) non-dairy cheese. This means that as the number of contours increases, Euler diagrams are typically less visually complex than the equivalent Venn diagram, particularly if the number of non-empty intersections is small.
Venn diagrams were introduced in 1880 by John Venn (1834–1923) in a paper entitled "On the Diagrammatic and Mechanical Representation of Propositions and Reasonings" in the "Philosophical Magazine and Journal of Science", about the different ways to represent propositions by diagrams. The use of these types of diagrams in formal logic, according to Ruskey and M. Weston, is "not an easy history to trace, but it is certain that the diagrams that are popularly associated with Venn, in fact, originated much earlier. They are rightly associated with Venn, however, because he comprehensively surveyed and formalized their usage, and was the first to generalize them".
Venn himself did not use the term "Venn diagram" but kept speaking of "Eulerian Circles". In the opening sentence of his 1880 article Venn declared: "Schemes of diagrammatic representation have been so familiarly introduced into logical treatises during the last century or so, that many readers, even those who have made no professional study of logic, may be supposed to be acquainted with the general nature and object of such devices. Of these schemes one only, viz. that commonly called 'Eulerian circles,' has met with any general acceptance..." The first to use the term "Venn diagram" was Clarence Irving Lewis in 1918, in his book "A Survey of Symbolic Logic".
Venn diagrams are very similar to Euler diagrams, which were invented by Leonhard Euler (1708–1783) in the 18th century. M. E. Baron has noted that Leibniz (1646–1716) in the 17th century produced similar diagrams before Euler, but much of it was unpublished. She also observes even earlier Euler-like diagrams by Ramon Lull in the 13th Century.
In the 20th century Venn diagrams were further developed. D.W. Henderson showed in 1963 that the existence of an n-Venn diagram with n-fold rotational symmetry implied that n was prime. He also showed that such symmetric Venn diagrams exist when n is 5 or 7. In 2002 Peter Hamburger found symmetric Venn diagrams for n = 11 and in 2003, Griggs, Killian, and Savage showed that symmetric Venn diagrams exist for all other primes. Thus rotationally symmetric Venn diagrams exist if and only if n is a prime number.
Venn diagrams and Euler diagrams were incorporated as part of instruction in set theory as part of the new math movement in the 1960s. Since then, they have also been adopted by other curriculum fields such as reading.
The following example involves two sets, A and B, represented here as coloured circles. The orange circle, set A, represents all living creatures that are two-legged. The blue circle, set B, represents the living creatures that can fly. Each separate type of creature can be imagined as a point somewhere in the diagram. Living creatures that both can fly and have two legs—for example, parrots—are then in both sets, so they correspond to points in the area where the blue and orange circles overlap. That area contains all such and only such living creatures.
Humans and penguins are bipedal, and so are then in the orange circle, but since they cannot fly they appear in the left part of the orange circle, where it does not overlap with the blue circle. Mosquitoes have six legs, and fly, so the point for mosquitoes is in the part of the blue circle that does not overlap with the orange one. Creatures that are not two-legged and cannot fly (for example, whales and spiders) would all be represented by points outside both circles.
The combined area of sets A and B is called the union of A and B, denoted by A ∪ B. The union in this case contains all living creatures that are either two-legged or that can fly (or both). The area in both A and B, where the two sets overlap, is called the intersection of A and B, denoted by A ∩ B. For example, the intersection of the two sets is not empty, because there are points that represent creatures that are in both the orange and blue circles.
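These regions map directly onto set operations in most programming languages; for instance, a small Python sketch with the creatures mentioned above (the membership lists are illustrative):

```python
two_legged = {"human", "penguin", "parrot"}   # set A
can_fly    = {"parrot", "mosquito"}           # set B

print(two_legged & can_fly)   # intersection A ∩ B  -> {'parrot'}
print(two_legged | can_fly)   # union A ∪ B
print(two_legged - can_fly)   # in A but not in B   -> {'human', 'penguin'}
```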
Extensions to higher numbers of sets
Venn diagrams typically represent two or three sets, but there are forms that allow for higher numbers. Shown below, four intersecting spheres form the highest order Venn diagram that is completely symmetric and can be visually represented. The 16 intersections correspond to the vertices of a tesseract (or the cells of a 16-cell respectively).
For higher numbers of sets, some loss of symmetry in the diagrams is unavoidable. Venn was keen to find "symmetrical figures…elegant in themselves," that represented higher numbers of sets, and he devised a four-set diagram using ellipses (see below). He also gave a construction for Venn diagrams for any number of sets, where each successive curve that delimits a set interleaves with previous curves, starting with the three-circle diagram.
Counter-example: This Euler diagram is not a Venn diagram for four sets as it has only 13 regions (excluding the outside); there is no region where only the yellow and blue, or only the pink and green circles meet.
Five-set Venn diagram using congruent ellipses in a radially symmetrical arrangement devised by Branko Grünbaum. Labels have been simplified for greater readability; for example, A denotes A ∩ Bc ∩ Cc ∩ Dc ∩ Ec, while BCE denotes Ac ∩ B ∩ C ∩ Dc ∩ E.
Edwards' Venn diagrams
A. W. F. Edwards constructed a series of Venn diagrams for higher numbers of sets by segmenting the surface of a sphere. For example, three sets can be easily represented by taking three hemispheres of the sphere at right angles (x = 0, y = 0 and z = 0). A fourth set can be added to the representation by taking a curve similar to the seam on a tennis ball, which winds up and down around the equator, and so on. The resulting sets can then be projected back to a plane to give cogwheel diagrams with increasing numbers of teeth, as shown on the right. These diagrams were devised while designing a stained-glass window in memory of Venn.
Other diagrams
Edwards' Venn diagrams are topologically equivalent to diagrams devised by Branko Grünbaum, which were based around intersecting polygons with increasing numbers of sides. They are also 2-dimensional representations of hypercubes.
Charles Lutwidge Dodgson devised a five-set diagram.
Related concepts
Venn diagrams correspond to truth tables for the propositions x ∈ A, x ∈ B, etc., in the sense that each region of the Venn diagram corresponds to one row of the truth table. Another way of representing sets is with R-Diagrams.
- Lewis, Clarence Irving (1918). A Survey of Symbolic Logic. Berkeley: University of California Press. p. 157.
- "Euler Diagrams 2004: Brighton, UK: September 22–23". Reasoning with Diagrams project, University of Kent. 2004. Retrieved 13 August 2008.
- Sandifer, Ed (2003). "How Euler Did It" (pdf). The Mathematical Association of America: MAA Online. Retrieved 26 October 2009.
- Ruskey, F.; Weston, M. (June 2005). "Venn Diagram Survey". The electronic journal of combinatorics.
- Venn, J. (July 1880). "On the Diagrammatic and Mechanical Representation of Propositions and Reasonings". Philosophical Magazine and Journal of Science, Series 5, 10 (59).
- In Euler's Letters to a German Princess. In Venn's article, however, he suggests that the diagrammatic idea predates Euler, and is attributable to C. Weise or J. C. Lange.
- Baron, M.E. (May 1969). "A Note on The Historical Development of Logic Diagrams". The Mathematical Gazette 53 (384): 113–125. JSTOR 3614533.
- Henderson, D.W. (April 1963). "Venn diagrams for more than four classes". American Mathematical Monthly 70 (4): 424–6. JSTOR 2311865.
- Ruskey, Frank; Savage, Carla D.; Wagon, Stan (December 2006). "The Search for Simple Symmetric Venn Diagrams" (PDF). Notices of the AMS 53 (11): 1304–11.
- Strategies for Reading Comprehension Venn Diagrams
- John Venn (1881). Symbolic logic. Macmillan. p. 108. Retrieved 9 April 2013.
- Grimaldi, Ralph P. (2004). Discrete and combinatorial mathematics. Boston: Addison-Wesley. p. 143. ISBN 0-201-72634-3.
- Johnson, D. L. (2001). "3.3 Laws". Elements of logic via numbers and sets. Springer Undergraduate Mathematics Series. Berlin: Springer-Verlag. p. 62. ISBN 3-540-76123-3.
Further reading
- A Survey of Venn Diagrams by F. Ruskey and M. Weston, is an extensive site with much recent research and many beautiful figures.
- Stewart, Ian (2004). "Ch. 4 Cogwheels of the Mind". Another Fine Math You've Got Me Into. Dover Publications. pp. 51–64. ISBN 0-486-43181-9.
- Edwards, A.W.F. (2004). Cogwheels of the mind: the story of Venn diagrams. JHU Press. ISBN 978-0-8018-7434-5.
- Venn, John (1880). "On the Diagrammatic and Mechanical Representation of Propositions and Reasonings". Dublin Philosophical Magazine and Journal of Science 9 (59): 1–18.
- Mamakani, Khalegh; Ruskey, Frank (27 July 2012), A New Rose : The First Simple Symmetric 11-Venn Diagram, arXiv:1207.6452
- Hazewinkel, Michiel, ed. (2001), "Venn diagram", Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4
- Weisstein, Eric W., "Venn Diagram", MathWorld.
- Lewis Carroll's Logic Game — Venn vs. Euler at cut-the-knot
- A Survey of Venn Diagrams
- Area proportional 3-way venn diagram applet
- Generating Venn Diagrams to explore Google Suggest results
- seven sets interactive Venn diagram displaying color combinations
- six sets Venn diagrams made from triangles
- Postscript for 9-set Venn and more
- VBVenn--A Visual Basic program for calculating and graphing quantitative two-circle Venn diagrams
The content material described in this course contains the foundational understandings necessary to prepare students to pursue the study of algebra and geometry at a high school level, whether that content is encountered in traditionally organized or integrated courses.
A. Real Numbers, Exponents and Roots and the Pythagorean Theorem
Students extend the properties of computation with rational numbers to real number computation, categorize real numbers as either rational or irrational, and locate real numbers on the number line. Powers and roots are studied along with the Pythagorean theorem and its converse, a critical concept in its own right as well as a context in which numbers expressed using powers and roots arise. Students apply this knowledge to solve problems.
Successful students will:
A1 Use the definition of a root of a number to explain the relationship of powers and roots.
If aⁿ = b, for an integer n ≥ 1, then a is said to be an nth root of b. When n is even and b > 0, we identify the unique a > 0 as the principal nth root of b, written ⁿ√b.
- Use and interpret the symbols √ and ∛; informally explain why (√a)² = a, when a ≥ 0.
By convention, for a ≥ 0, √a is used to represent the non-negative square root of a.
- Estimate square and cube roots and use calculators to find good approximations.
- Make or refine an estimate for a square root using the fact that if 0 ≤ a < n < b, then √a < √n < √b; make or refine an estimate for a cube root using the fact that if a < n < b, then ∛a < ∛n < ∛b (see the sketch below).
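A minimal sketch of such a refinement (the function name and step count are illustrative):

```python
def refine_sqrt(n, low, high, steps=20):
    """Tighten a bracket around sqrt(n), using: if low**2 < n < high**2, then low < sqrt(n) < high."""
    for _ in range(steps):
        mid = (low + high) / 2
        if mid * mid < n:
            low = mid        # sqrt(n) lies between mid and high
        else:
            high = mid       # sqrt(n) lies between low and mid
    return low, high

print(refine_sqrt(2, 1, 2))  # brackets 1.41421... ever more tightly
```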
A2 Categorize real numbers as either rational or irrational and know that, by definition, these are the only two possibilities; extend the properties of computation with rational numbers to real number computation.
- Approximately locate any real number on the number line.
- Apply the definition of irrational number to identify examples and recognize approximations.
Square roots, cube roots, and nth roots of whole numbers that are not respectively squares, cubes, and nth powers of whole numbers provide the most common examples of irrational numbers. Pi (π) is another commonly cited irrational number.
- Know that the decimal expansion of a rational number eventually repeats, perhaps ending in repeating zeros; use this to identify the decimal expansion of an irrational number as one that never ends and never repeats.
- Recognize and use 22/7 and 3.14 as approximations for the irrational number represented by pi (π).
- Determine whether the square, cube, and nth roots of integers are integral or irrational when such roots are real numbers.
A3 Interpret and prove the Pythagorean theorem and its converse; apply the Pythagorean theorem and its converse to solve problems.
- Determine distances between points in the Cartesian coordinate plane and relate the Pythagorean theorem to this process.
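A short sketch of the distance computation described in the last bullet (the coordinates are illustrative):

```python
from math import sqrt

def distance(p, q):
    """Distance between two points in the coordinate plane, via the Pythagorean theorem."""
    dx, dy = q[0] - p[0], q[1] - p[1]
    return sqrt(dx**2 + dy**2)

print(distance((1, 2), (4, 6)))   # 5.0: legs of length 3 and 4, hypotenuse 5
```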
B. Variables and Expressions
In middle school, students work more with symbolic algebra than in the previous grades. Students develop an understanding of the different uses for variables, analyze mathematical situations and structures using algebraic expressions, determine if expressions are equivalent, and identify single-variable expressions as linear or non-linear.
Successful students will:
B1 Interpret and compare the different uses of variables and describe patterns, properties of numbers, formulas, and equations using variables.
While a variable has several distinct uses in mathematics, it is fundamentally just a number we either do not know yet or do not want to specify.
- Compare the different uses of variables.
Examples: When a + b = b + a is used to state the commutative property for addition, the variables a and b represent all real numbers; the variable a in the equation 3a - 7 = 8 is a temporary placeholder for the one number, 5, that will make the equation true; the symbols C and r refer to specific attributes of a circle in the formula C = 2πr; the variable m in the slope-intercept form of the line, y = mx + b, serves as a parameter describing the slope of the line.
- Express patterns, properties, formulas, and equations using and defining variables appropriately for each case.
B2 Analyze and identify characteristics of algebraic expressions; evaluate, interpret, and construct simple algebraic expressions; identify and transform expressions into equivalent expressions; determine whether two algebraic expressions are equivalent.
Two algebraic expressions are equivalent if they yield the same result for every value of the variables in them. Great care must be taken to demonstrate that, in general, a finite number of instances is not sufficient to demonstrate equivalence.
- Analyze expressions to identify when an expression is the sum of two or more simpler expressions (called terms) or the product of two or more simpler expressions (called factors). Analyze the structure of an algebraic expression and identify the resulting characteristics.
- Identify single-variable expressions as linear or non-linear.
- Evaluate a variety of algebraic expressions at specified values of their variables.
Algebraic expressions to be evaluated include polynomial and rational expressions as well as those involving radicals and absolute value.
- Write linear and quadratic expressions representing quantities arising from geometric and real-world contexts.
- Use commutative, associative, and distributive properties of number operations to transform simple expressions into equivalent forms in order to collect like terms or to reveal or emphasize a particular characteristic.
- Rewrite linear expressions in the form ax + b for constants a and b.
- Choose different but equivalent expressions for the same quantity that are useful in different contexts.
Example: p + 0.07p shows the breakdown of the cost of an item into the price p and the tax of 7%, whereas (1.07)p is a useful equivalent form for calculating the total cost.
- Demonstrate equivalence through algebraic transformations or show that expressions are not equivalent by evaluating them at the same value(s) to get different results.
- Know that if each expression is set equal to y and the graph of all ordered pairs that satisfy one of these new equations is identical to the graph of all ordered pairs that satisfy the other, then the expressions are equivalent.
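A minimal sketch of the evaluate-at-values strategy from the list above (the function names are illustrative; the tax expressions echo the p + 0.07p example). Agreement at a handful of inputs does not prove equivalence, but a single disagreement does disprove it:

```python
def f(p): return p + 0.07 * p   # price plus 7% tax
def g(p): return 1.07 * p       # a genuinely equivalent form
def h(p): return p + 0.07       # a non-equivalent look-alike

for p in (0, 1, 10, 99.5):
    assert abs(f(p) - g(p)) < 1e-9                # agreement at these inputs (not a proof)
    if abs(f(p) - h(p)) > 1e-9:
        print(f"counterexample for h: p = {p}")   # one disagreement settles non-equivalence
        break
```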
C. Functions
Middle school students increase their experience with functional relationships and begin to express and understand them in more formal ways. They distinguish between relations and functions and convert flexibly among the various representations of tables, symbolic rules, verbal descriptions, and graphs. A major focus at this level is on linear functions, recognizing linear situations in context, describing aspects of linear functions such as slope as a constant rate of change, identifying x- and y-intercepts, and relating slope and intercepts to the original context of the problem.
Successful students will:
C1 Determine whether a relationship is or is not a function; represent and interpret functions using graphs, tables, words, and symbols.
In general, a function is a rule that assigns a single element of one set (the output set) to each element of another set (the input set). The set of all possible inputs is called the domain of the function, while the set of all outputs is called the range.
- Identify the independent (input) and dependent (output) quantities/variables of a function.
- Make tables of inputs x and outputs f(x) for a variety of rules that take numbers as inputs and produce numbers as outputs.
- Define functions algebraically, e.g., g(x) = 3 + 2(x - x²).
- Create the graph of a function f by plotting and connecting a sufficient number of ordered pairs (x, f(x)) in the coordinate plane.
- Analyze and describe the behavior of a variety of simple functions using tables, graphs, and algebraic expressions.
- Construct and interpret functions that describe simple problem situations using expressions, graphs, tables, and verbal descriptions and move flexibly among these multiple representations.
C2 Analyze and identify linear functions of one variable; know the definitions of x- and y-intercepts and slope, know how to find them and use them to solve problems.
A function exhibiting a rate of change (slope) that is constant is called a linear function. A constant rate of change means that for any pair of inputs x1 and x2, the ratio of the corresponding change in value f(x2) - f(x1) to the change in input x2 - x1 is constant (i.e., it does not depend on the inputs).
- Explain why any function defined by a linear algebraic expression has a constant rate of change (see the short derivation after this list).
- Explain why the graph of a linear function defined for all real numbers is a straight line, identify its constant rate of change, and create the graph.
- Determine whether the rate of change of a specific function is constant; use this to distinguish between linear and nonlinear functions.
- Know that a line with slope equal to zero is horizontal and represents a function, while a vertical line has undefined slope and does not represent a function.
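A minimal supporting derivation for the first expectation in the list above, assuming the linear expression is written in the slope-intercept form f(x) = mx + b used later in this section:

```latex
\[
\frac{f(x_2) - f(x_1)}{x_2 - x_1}
  = \frac{(m x_2 + b) - (m x_1 + b)}{x_2 - x_1}
  = \frac{m\,(x_2 - x_1)}{x_2 - x_1}
  = m .
\]
```

The ratio equals the constant m for every pair of distinct inputs, which is exactly what it means for the rate of change to be constant.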
C3 Express a linear function in several different forms for different purposes.
- Recognize that in the form f(x) = mx + b, m is the slope, or constant rate of change of the graph of f, that b is the y-intercept and that in many applications of linear functions, b defines the initial state of a situation; express a function in this form when this information is given or needed.
- Recognize that in the form f(x) = m(x - x0) + y0, the graph of f(x) passes through the point (x0, y0); express a function in this form when this information is given or needed.
C4 Recognize contexts in which linear models are appropriate; determine and interpret linear models that describe linear phenomena; express a linear situation in terms of a linear function f(x) = mx + b and interpret the slope (m) and the y-intercept (b) in terms of the original linear context.
Common examples of linear phenomena include distance traveled over time for objects traveling at constant speed; shipping costs under constant incremental cost per pound; conversion of measurement units (e.g., pounds to kilograms or degrees Celsius to degrees Fahrenheit); cost of gas in relation to gallons used; the height and weight of a stack of identical chairs.
C5 Recognize, graph, and use direct proportional relationships.
A linear function in which f(0) = 0 represents a direct proportional relationship. The linear function f(x) = kx, where k is constant, describes a direct proportional relationship.
- Show that the graph of a direct proportional relationship is a line that passes through the origin (0, 0) whose slope is the constant of proportionality.
- Compare and contrast the graphs of x = k, y = k, and y = kx, where k is a constant.
D. Equations and Identities
In this middle school course, students begin the formal study of equations. They solve linear equations and solve and graph linear inequalities in one variable. They graph equations in two variables, relating features of the graphs to the related single-variable equations. Solving systems of two linear equations in two variables graphically and understanding what it means to be a solution of such a system is also included in this unit. Interwoven with the development of these skills, students use linear equations, inequalities, and systems of linear equations to solve problems in context and interpret the solutions and graphical representations in terms of the original problem.
Successful students will:
D1 Distinguish among an equation, an expression, and a function; interpret identities as a special type of equation and identify their key characteristics.
An identity is an equation for which all values of the variables are solutions. Although an identity is a special type of equation, there is a difference in practice between the methods for solving equations that have a small number of solutions and methods for proving identities. For example, (x + 2)² = x² + 4x + 4 is an identity which can be proved by using the distributive property, whereas (x + 2)² = x² + 3x + 4 is an equation that can be solved by collecting all terms on one side.
- Know that solving an equation means finding all its solutions and predict the number of solutions that should be expected for various simple equations and identities.
- Explain why solutions to the equation f(x) = g(x) are the x-values (abscissas) of the set of points in the intersection of the graphs of the functions f(x) and g(x).
- Recognize that f(x) = 0 is a special case of the equation f(x) = g(x) and solve the equation f(x) = 0 by finding all values of x for which f(x) = 0.
The solutions to the equation f(x) = 0 are called roots of the equation or zeros of the function. They are the values of x where the graph of the function f crosses the x-axis. In the special case where f(x) equals 0 for all values of x, f(x) = 0 represents a constant function where all elements of the domain are zeros of the function.
- Use identities to transform expressions.
D2 Solve linear equations and solve and graph the solution of linear inequalities in one variable.
Common problems are those that involve break-even time, time/rate/distance, percentage increase or decrease, ratio and proportion.
- Solve equations using the facts that equals added to equals are equal and that equals multiplied by equals are equal; more formally, if A = B and C = D, then A + C = B + D and AC = BD; use the fact that a linear expression ax + b is formed using the operations of multiplication by a constant followed by addition to solve an equation ax + b = 0 by reversing these steps.
Be alert to anomalies caused by dividing by 0 (which is undefined), or by multiplying both sides by 0 (which will produce equality even when things were originally unequal).
- Graph a linear inequality in one variable and explain why the graph is always a half-line (open or closed); know that the solution set of a linear inequality in one variable is infinite, and contrast this with the solution set of a linear equation in one variable.
- Explain why, when both sides of an inequality are multiplied or divided by a negative number, the direction of the inequality is reversed, but that when all other basic operations involving non-zero numbers are applied to both sides, the direction of the inequality is preserved.
D3 Recognize, represent, and solve problems that can be modeled using linear equations in two variables and interpret the solution(s) in terms of the context of the problem.
- Rewrite a linear equation in two variables in any of three forms: ax + by = c, ax + by + c = 0, or y = mx + b; select a form depending upon how the equation is to be used.
- Know that the graph of a linear equation in two variables consists of all points (x, y) in the coordinate plane that satisfy the equation and explain why, when x can be any real number, such graphs are straight lines.
- Identify the relationship between linear functions in one variable, x maps to f(x), and linear equations in two variables, y = f(x) or f(x) - y = 0; explain why the solution to an equation in standard (or polynomial) form (ax + b = 0) is the x-value at which the graph of f(x) = ax + b crosses the x-axis.
- Identify the solution of an equation that is in the form f(x) = g(x) and relate the solution to the x-value (abscissa) of the point at which the graphs of the functions f(x) and g(x) intersect.
- Know that pairs of non-vertical lines have the same slope if and only if they are parallel (or the same line) and slopes that are negative reciprocals if and only if they are perpendicular; apply these relationships to analyze and represent equations.
- Represent linear relationships using tables, graphs, verbal statements, and symbolic forms; translate among these forms to extract information about the relationship.
D4 Determine the solution to application problems modeled by two linear equations and interpret the solution set in terms of the situation.
- Determine either through graphical methods or comparing slopes whether a system of two linear equations has one solution, no solutions, or infinitely many solutions, and know that these are the only possibilities.
- Represent the graphs of two linear equations as two intersecting lines when there is one solution, parallel lines when there is no solution, and the same line when there are infinitely many solutions.
- Use the graph of two linear equations in two variables to suggest solution(s).
Since the solution is a set of ordered pairs that satisfy the equations, it follows that these ordered pairs must lie on the graph of each of the equations in the system; the point(s) of intersection of the graphs is (are) the solution(s) to the system of equations.
- Recognize and solve problems that can be modeled using two linear equations in two variables.
Examples: Break-even problems, such as those comparing costs of two services
E. Geometric Representation and Transformations
Coordinate geometry affords middle school students the opportunity to make valuable connections between algebra concepts and geometry representations such as slope and distance. Students extend their elementary school experiences with transformations as specific motions in two dimensions to transformations of figures in the coordinate plane. They describe the characteristics of transformations that preserve distance, relating them to congruence.
Successful students will:
E1 Represent and explain the effect of translations, rotations, and reflections of objects in the coordinate plane.
- Identify certain transformations (translations, rotations and reflections) of objects in the plane as rigid motions and describe their characteristics; know that they preserve distance in the plane.
- Demonstrate the meaning and results of the translation, rotation, and reflection of an object through drawings and experiments.
- Identify corresponding sides and angles between objects and their images after a rigid transformation.
- Show how any rigid motion of a figure in the plane can be accomplished through a sequence of translations, rotations, and reflections.
E2 Represent and interpret points, lines, and two-dimensional geometric objects in a coordinate plane; calculate the slope of a line in a coordinate plane.
- Determine the area of polygons in the coordinate plane.
- Know how the word slope is used in common non-mathematical contexts, give physical examples of slope, and calculate slope for given examples.
- Find the slopes of physical objects (roads, roofs, ramps, stairs) and express the answers as a decimal, ratio, or percent.
- Interpret and describe the slope of parallel and perpendicular lines in a coordinate plane.
- Show that the calculated slope of a line in a coordinate plane is the same no matter which two distinct points on the line one uses to calculate the slope.
- Use coordinate geometry to determine the perpendicular bisector of a line segment.
F. Circles
The main focus of this unit is on the study of circles, the relationships among their parts, the development of the formulas for the area and circumference and methods for approximating π.
Successful students will:
F1 Identify and explain the relationships among the radius, diameter, circumference and area of a circle; know and apply formulas for the circumference and area of a circle, semicircle, and quarter-circle.
- Identify the relationship between the circumference of a circle and its radius or diameter as a direct proportion and between the area of a circle and the square of its radius or the square of its diameter as a direct proportion.
- Demonstrate why the formula for the area of a circle (radius times one-half of its circumference) is plausible and makes geometric sense (see the short derivation after this list).
- Show that for any circle, the ratio of the circumference to the diameter is the same as the ratio of the area to the square of the radius and that these ratios are the same for different circles; identify the constant ratio A/r² = ½Cr/r² = C/2r = C/d as the number π and know that although rational numbers such as 3.14 and 22/7 are often used to approximate π, they are not the actual values of the irrational number π.
- Identify and describe methods for approximating π.
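A short supporting derivation for the two expectations referenced above, assuming only the circumference relation C = 2πr and the plausibility formula "area equals radius times one-half of the circumference":

```latex
\[
A = \tfrac{1}{2}\,C\,r = \tfrac{1}{2}\,(2\pi r)\,r = \pi r^{2},
\qquad
\frac{A}{r^{2}} = \frac{\tfrac{1}{2}\,C\,r}{r^{2}} = \frac{C}{2r} = \frac{C}{d} = \pi .
\]
```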
G. Ratios, Rates, Scaling, and Similarity
In conjunction with the study of rational numbers, middle school students examine ratios, rates, and proportionality both procedurally and conceptually. Proportionality concepts connect many areas of the curriculum (number, similarity, scaling, slope, and probability) and serve as a foundation for future mathematics study. Examining proportionality first with numbers and then geometrically with similarity concepts and scaling begins to establish important understandings for more formal study of these concepts in high school algebra and geometry courses.
Successful students will:
G1 Use ratios, rates, and derived quantities to solve problems.
- Interpret and apply measures of change such as percent change and rates of growth.
- Calculate with quantities that are derived as ratios and products.
Examples: Interpret and apply ratio quantities including velocity and population density using units such as feet per second and people per square mile; interpret and apply product quantities including area, volume, energy, and work using units such as square meters, kilowatt hours, and person days.
- Solve data problems using ratios, rates, and product quantities.
- Create and interpret scale drawings as a tool for solving problems.
A scale drawing is a representation of a figure that multiplies all the distances between corresponding points by a fixed positive number called the scale factor
G2 Analyze and represent the effects of multiplying the linear dimensions of an object in the plane or in space by a constant scale factor, r.
- Use ratios and proportional reasoning to apply a scale factor to a geometric object, a drawing, or a model, and analyze the effect.
- Describe the effect of a scale factor r on length, area, and volume.
G3 Interpret the definition and characteristics of similarity for figures in the plane and apply to problem solving situations.
Informally, two geometric figures in the plane are similar if they have the same shape. More formally, having the same shape means that one figure can be transformed onto the other by applying a scale factor.
- Apply similarity in practical situations; calculate the measures of corresponding parts of similar figures.
- Use the concepts of similarity to create and interpret scale drawings.
H. Probability
Students have an opportunity in this unit to apply both their rational number and proportional reasoning skills to probability situations. Students use theoretical probability and proportions to predict outcomes of simple events. Frequency distributions are examined and created to analyze the likelihood of events. The Law of Large Numbers is used to link experimental and theoretical probabilities.
Successful students will:
H1 Describe the relationship between probability and relative frequency; use a probability distribution to assess the likelihood of the occurrence of an event.
- Recognize and use relative frequency as an estimate for probability.
If an action is repeated n times and a certain event occurs b times, the ratio b/n is called the relative frequency of the event occurring.
- Use theoretical probability, where possible, to determine the most likely result if an experiment is repeated a large number of times.
- Identify, create, and describe the key characteristics of frequency distributions of discrete and continuous data.
A frequency distribution shows the number of observations falling into each of several ranges of values; if the percentage of observations is shown, the distribution is called a relative frequency distribution. Both frequency and relative frequency distributions are portrayed through tables, histograms, or broken-line graphs.
- Analyze and interpret actual data to estimate probabilities and predict outcomes.
Example: In a sample of 100 randomly selected students, 37 of them could identify the difference in two brands of soft drink. Based on these data, what is the best estimate of how many of the 2,352 students in the school could distinguish between the soft drinks?
- Compare theoretical probabilities with the results of simple experiments (e.g., tossing number cubes, flipping coins, spinning spinners).
- Explain how the Law of Large Numbers accounts for the relationship between experimental and theoretical probabilities.
The Law of Large Numbers indicates that if an event of probability p is observed repeatedly during independent repetitions, the ratio of the observed frequency of that event to the total number of repetitions approaches p as the number of repetitions becomes arbitrarily large.
- Use simulations to estimate probabilities (a minimal simulation sketch appears after this list).
- Compute and graph cumulative frequencies.
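A minimal simulation sketch in Python (the benchmark prescribes no particular tool) illustrating the Law of Large Numbers for a fair number cube: the relative frequency of sixes drifts toward the theoretical probability 1/6 as the number of rolls grows.

```python
import random

def relative_frequency_of_six(n_rolls: int, seed: int = 0) -> float:
    """Roll a fair number cube n_rolls times and return the relative frequency of sixes."""
    rng = random.Random(seed)
    sixes = sum(1 for _ in range(n_rolls) if rng.randint(1, 6) == 6)
    return sixes / n_rolls

for n in (10, 100, 1_000, 100_000):
    # As n grows, the printed relative frequency approaches 1/6 ≈ 0.167.
    print(n, relative_frequency_of_six(n))
```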
I. Question Formulation and Data Collection
Students learn to design a study to answer a question; collect, organize, and summarize data; communicate the results; and make decisions about the findings. Technology is utilized both to analyze and display data. Students expand their repertoire of graphs and statistical measures and begin the use of random sampling in sample surveys. They assess the role of random assignment in experiments. They look critically at data studies and reports for possible sources of bias or misrepresentation. Students are able to use their knowledge of slope to analyze lines of best fit in scatter plots and make predictions from the data, further connecting their algebra and data knowledge.
Successful students will:
I1 Formulate questions about a phenomenon of interest that can be answered with data; design a plan to collect appropriate data; collect and record data; display data using tables, charts, or graphs; evaluate the accuracy of the data.
- Recognize the need for data; understand that data are numbers in context (with units) and identify units. Define measurements that are relevant to the questions posed; organize written or computerized data records, making use of computerized spreadsheets.
- Understand the differing roles of a census, a sample survey, an experiment, and an observational study.
- Select a design appropriate to the questions posed.
- Use random sampling in sample surveys and random assignment in experiments, introducing random sample as a "fair" way to select an unbiased sample.
I2 Represent both univariate and bivariate quantitative (measurement) data accurately and effectively.
- Represent univariate data; make use of line plots (dot plots), stem-and-leaf plots, and histograms.
- Represent bivariate data; make use of scatter plots.
- Describe the shape, center, and spread of data distributions.
Example: A scatter plot used to represent bivariate data may have a linear shape; a trend line may pass through the mean of the x and y variables; its spread is shown by the vertical distances between the actual data points and the line.
- Identify and explain misleading uses of data by considering the completeness and source of the data, the design of the study, and the way the data are analyzed and displayed.
Examples: Determine whether the height or area of a bar graph is being used to represent the data; evaluate whether the scales of a graph are consistent and appropriate or whether they are being adjusted to alter the visual information conveyed.
I3 Summarize, compare, and interpret data sets by using a variety of statistics.
- Use percentages and proportions (relative frequencies) to summarize univariate categorical data.
- Use conditional (row or column) percentages and proportions to summarize bivariate categorical data.
- Use measures of center (mean and median) and measures of spread (percentiles, quartiles, and interquartile range) to summarize univariate quantitative data.
- Use trend lines (linear approximations or best-fit line) to summarize bivariate quantitative data.
- Graphically represent measures of center and spread (variability) for quantitative data.
- Interpret the slope of a linear trend line in terms of the data being studied.
- Use box plots to compare key features of quantitative data distributions.
I4 Read, interpret, interpolate, and judiciously extrapolate from graphs and tables and communicate the results.
- State conclusions in terms of the question(s) being investigated.
- Use appropriate statistical language when reporting on plausible answers that go beyond the data actually observed.
- Use oral, written, graphic, pictorial and multi-media methods to create and present manuals and reports.
I5 Determine whether a scatter plot suggests a linear trend.
- Visually determine a line of good fit to estimate the relationship in bivariate data that suggests a linear trend.
- Identify criteria that might be used to assess how good the fit is (one such criterion is sketched after this list).
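One commonly used criterion, sketched below in Python purely as an illustration (the standard does not require any particular method), is the total of the squared vertical distances between the data points and a candidate line. The data set and the candidate lines here are hypothetical.

```python
def sum_of_squared_residuals(points, m, b):
    """One possible criterion: total of the squared vertical distances
    between each data point (x, y) and the candidate line y = m*x + b."""
    return sum((y - (m * x + b)) ** 2 for x, y in points)

# Hypothetical bivariate data with a roughly linear trend:
data = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 7.8), (5, 10.1)]

print(sum_of_squared_residuals(data, m=2.0, b=0.0))   # smaller total -> better fit
print(sum_of_squared_residuals(data, m=1.0, b=2.0))   # a worse-fitting line scores higher
```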
The following unit is an extension or enrichment unit which, while interesting and appropriate, may not be feasible time-wise in a traditional 180-day school year.
J. Number Bases [OPTIONAL ENRICHMENT UNIT]
This should be used as an optional unit of study if time permits. Using their understanding of the base-10 number system, students represent numbers in other bases. Computers and computer graphics have made knowledge of how to work with different base systems, particularly binary, much more important.
Successful students will:
J1 Identify key characteristics of the base-10 number system and adapt them to the binary number base system.
- Represent and interpret numbers in the binary number system.
- Apply the concept of base-10 place value to understand representation of numbers in other bases.
Example: In the base-8 number system, the 5 in the number 57,273 represents 5 × 8⁴.
- Convert binary to decimal and vice versa (see the sketch after this list).
- Encode data and record measurements of information capacity using the binary number base system.
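A short Python sketch (Python is our choice; the unit prescribes no tool) of the conversions referenced above, with the hand-worked methods checked against the built-in conversions:

```python
def binary_to_decimal(bits: str) -> int:
    """Interpret a string of 0s and 1s as a base-2 numeral (place values are powers of 2)."""
    value = 0
    for bit in bits:
        value = value * 2 + int(bit)
    return value

def decimal_to_binary(n: int) -> str:
    """Repeatedly divide by 2 and read the remainders in reverse order."""
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        digits.append(str(n % 2))
        n //= 2
    return "".join(reversed(digits))

print(binary_to_decimal("1011"))   # 11
print(decimal_to_binary(11))       # '1011'
print(int("1011", 2), bin(11))     # built-in checks: 11, '0b1011'
```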
Appendix A: Prior Knowledge
The following expectations, which are included in the model two-year middle school course sequence (Middle School Course 1 and Middle School Course 2), are essential prerequisites for success in the one-year middle school program (Middle School Advanced Course). Students must develop proficiency in these expectations prior to embarking upon this one-year advanced course. This means that schools opting for this one-year Middle School Advanced Course must adjust the mathematics curriculum in earlier grades to include these expectations.
PK.A. Number Representation and Computation
Successful students will:
PK.A1 Extend and apply understanding about rational numbers; translate among different representations of rational numbers.
Rational numbers are those that can be expressed in the form p/q, where p and q are integers and q ≠ 0.
- Use inequalities to compare rational numbers and locate them on the number line; apply basic rules of inequalities to transform numeric expressions involving rational numbers.
PK.A2 Apply the properties of computation (e.g., commutative property, associative property, distributive property) to positive and negative rational number computation; know and apply effective methods of calculation with rational numbers.
- Demonstrate understanding of the algorithms for addition, subtraction, multiplication, and division (non-zero divisor) of numbers expressed as fractions, terminating decimals, or repeating decimals by applying the algorithms and explaining why they work.
- Add, subtract, multiply, and divide (non-zero divisor) rational numbers and explain why these operations always produce another rational number.
- Interpret parentheses and employ conventional order of operations in a numerical expression, recognizing that conventions are universally agreed upon rules for operating on expressions.
- Check answers by estimation or by independent calculations, with or without calculators and computers.
- Solve practical problems involving rational numbers.
Examples: Calculate markups, discounts, taxes, tips, average speed.
PK.A3 Recognize, describe, extend, and create well-defined numerical patterns.
A pattern is a sequence of numbers or objects constructed using a simple rule. Of special interest are arithmetic sequences, those generated by repeated addition of a fixed number, and geometric sequences, those generated by repeated multiplication by a fixed number.
PK.A4 Know and apply the Fundamental Theorem of Arithmetic.
Every positive integer is either prime itself or can be written as a unique product of primes (ignoring order).
- Identify prime numbers; describe the difference between prime and composite numbers; determine and apply divisibility rules (2, 3, 5, 9, 10), explain why they work, and use them to help factor composite numbers.
- Determine the greatest common divisor and least common multiple of two whole numbers from their prime factorizations; explain the meaning of the greatest common divisor (greatest common factor) and the least common multiple and use them in operations with fractions (a short computational sketch appears after this list).
- Use greatest common divisors to reduce fractions and ratios n:m to an equivalent form in which the gcd (n, m) = 1.
Fractions in which gcd (n, m) = 1 are said to be in lowest terms.
- Write equivalent fractions by multiplying both numerator and denominator by the same non-zero whole number or dividing by common factors in the numerator and denominator.
- Add and subtract fractions by using the least common multiple (or any common multiple) of denominators.
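A brief Python sketch (illustrative only; the Euclidean algorithm is one standard way to compute greatest common divisors) showing gcd, lcm via the identity gcd(a, b) × lcm(a, b) = a × b, and reduction of a fraction to lowest terms, as described above:

```python
def gcd(a: int, b: int) -> int:
    """Euclidean algorithm: repeatedly replace (a, b) with (b, a mod b)."""
    while b:
        a, b = b, a % b
    return a

def lcm(a: int, b: int) -> int:
    """Least common multiple via the identity gcd(a, b) * lcm(a, b) = a * b."""
    return a * b // gcd(a, b)

def lowest_terms(n: int, m: int) -> tuple:
    """Reduce the fraction n/m by dividing out the greatest common divisor."""
    d = gcd(n, m)
    return n // d, m // d

print(gcd(12, 18), lcm(12, 18))   # 6 36
print(lowest_terms(12, 18))       # (2, 3)
```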
PK.A5 Identify situations where estimates are appropriate and use estimates to predict results and verify the reasonableness of calculated answers.
- Use rounding, regrouping, percentages, proportionality, and ratios as tools for mental estimation.
- Develop, apply, and explain different estimation strategies for a variety of common arithmetic problems.
Examples: Estimating tips, adding columns of figures, estimating interest payments, estimating magnitude.
- Explain the phenomenon of rounding error, identify examples, and, where possible, compensate for inaccuracies it introduces.
Examples: Analyzing apportionment in the U.S. House of Representatives; creating data tables that sum properly; analyzing what happens to the sum if you always round down when summing 100 terms.
PK.A6 Use the rules of exponents to simplify and evaluate expressions.
- Evaluate expressions involving whole number exponents and interpret such exponents in terms of repeated multiplication.
PK.A7 Know and apply the definition of absolute value.
The absolute value is defined by |a| = a if a ≥ 0 and |a| = -a if a < 0.
- Interpret absolute value as distance from zero.
- Interpret absolute value of a difference as "distance between" on the number line.
PK.A8 Analyze and apply simple algorithms.
- Identify and give examples of simple algorithms.
An algorithm is a procedure (a finite set of well-defined instructions) for accomplishing some task that, given an initial state, will terminate in a well-defined end-state. Recipes and assembly instructions are everyday examples of algorithms.
- Analyze and compare simple computational algorithms.
Examples: Write the prime factorization for a large composite number; determine the least common multiple for two positive integers; identify and compare mental strategies for computing the total cost of several objects.
- Analyze and apply the iterative steps in standard base-10 algorithms for addition and multiplication of numbers.
PK.B. Measurement Systems
Successful students will:
PK.B1 Make, record, and interpret measurements.
- Recognize that measurements of physical quantities must include the unit of measurement, that most measurements permit a variety of appropriate units, and that the numerical value of a measurement depends on the choice of unit; apply these ideas when making measurements.
- Recognize that real-world measurements are approximations; identify appropriate instruments and units for a given measurement situation, taking into account the precision of the measurement desired.
- Plan and carry out both direct and indirect measurements.
Indirect measurements are those that are calculated based on actual recorded measurements.
- Apply units of measure in expressions, equations, and problem situations; when necessary, convert measurements from one unit to another within the same system.
- Use measures of weight, money, time, information, and temperature; identify the name and definition of common units for each kind of measurement.
- Record measurements to reasonable degrees of precision, using fractions and decimals as appropriate.
A measurement context often defines a reasonable level of precision to which the result should be reported.
Example: The U.S. Census Bureau reported a national population of 299,894,924 on its Population Clock in mid-October of 2006. Saying that the U.S. population is 3 hundred million (3 × 10⁸) is accurate to the nearest million and exhibits one-digit precision. Although by the end of that month the population had surpassed 3 hundred million, 3 × 10⁸ remained accurate to one-digit precision.
PK.B2 Identify and distinguish among measures of length, area, surface area, and volume; calculate perimeter, area, surface area, and volume.
- Calculate the perimeter and area of triangles, quadrilaterals, and shapes that can be decomposed into triangles and quadrilaterals that do not overlap; know and apply formulas for the area and perimeter of triangles and rectangles to derive similar formulas for parallelograms, rhombi, trapezoids, and kites.
- Given the slant height, determine the surface area of right prisms and pyramids whose base(s) and sides are composed of rectangles and triangles; know and apply formulas for the surface area of right circular cylinders, right circular cones, and spheres; explain why the lateral surface area of a right circular cylinder equals the area of a rectangle whose length is the circumference of the base of the cylinder and whose width is the height of the cylinder.
- Given the slant height, determine the volume of right prisms, right pyramids, right circular cylinders, right circular cones, and spheres.
- Estimate lengths, areas, surface areas, and volumes of irregular figures and objects.
PK.C. Angles and Triangles
Successful students will:
PK.C1 Know the definitions and properties of angles and triangles in the plane and use them to solve problems.
- Know and apply the definitions and properties of complementary, supplementary, interior, and exterior angles.
- Know and distinguish among the definitions and properties of vertical, adjacent, corresponding, and alternate interior angles; identify pairs of congruent angles and explain why they are congruent.
PK.C2 Know and verify basic theorems about angles and triangles.
- Know the triangle inequality and verify it through measurement.
In words, the triangle inequality states that any side of a triangle is shorter than the sum of the other two sides; it can also be stated clearly in symbols: If a, b, and c are the lengths of three sides of a triangle, then a < b + c, b < a + c, and c < a + b.
- Verify that the sum of the measures of the interior angles of a triangle is 180°.
- Verify that each exterior angle of a triangle is equal to the sum of the opposite interior angles.
- Show that the sum of the interior angles of an n-sided convex polygon is (n - 2) x 180°.
- Explain why the sum of exterior angles of a convex polygon is 360°.
PK.D. 3-Dimensional Geometry
Successful students will:
PK.D1 Visualize solids and surfaces in three-dimensional space.
- Relate a net, top-view, or side-view to a three-dimensional object that it might represent; visualize and be able to reproduce solids and surfaces in three-dimensional space when given two-dimensional representations (e.g., nets, multiple views).
- Interpret the relative position and size of objects shown in a perspective drawing.
- Visualize and describe three-dimensional shapes in different orientations; draw two-dimensional representations of three-dimensional objects by hand and using software; sketch two-dimensional representations of basic three-dimensional objects such as cubes, spheres, pyramids, and cones.
- Create a net, top-view, or side-view of a three-dimensional object by hand or using software; visualize, describe, or sketch the cross-section of a solid cut by a plane that is parallel or perpendicular to a side or axis of symmetry of the solid.
PK.E. Data Analysis
Successful students will:
PK.E1 Represent both univariate and bivariate categorical data accurately and effectively.
- For univariate data, make use of frequency and relative frequency tables and bar graphs; for bivariate data, make use of two-way frequency and relative frequency tables and bar graphs.
PK.F. Probability
Successful students will:
PK.F1 Represent probabilities using ratios and percents; use sample spaces to determine the (theoretical) probabilities of events; compare probabilities of two or more events and recognize when certain events are equally likely.
- Calculate theoretical probabilities in simple models (e.g., number cubes, coins, spinners).
- Know and use the relationship between probability and odds.
The odds of an event occurring is the ratio of the number of favorable outcomes to the number of unfavorable outcomes, whereas the probability is the ratio of favorable outcomes to the total number of possible outcomes.
Figure 1. The newest lava dome within the horseshoe-shaped crater at Mount St Helens during its building process in August 1984 (photo by S.A. Austin).
Dacite magma at Mount St Helens in Washington State expressed itself directly during six explosive magmatic eruptions in 1980 (18 May, 25 May, 12 June, 22 July, 7 August and 17 October 1980). This magma produced the distinctive plinian, explosive eruptions for which the volcano is famous. After three of these explosive eruptions (12 June, 7 August and 17 October), near-surface magma had low enough steam pressures so that viscous lava flows formed three consecutive, dome-shaped structures within the crater. The first two dacite lava domes built within the crater (late June and early August 1980) were destroyed by subsequent explosive eruptions (22 July and 17 October). The third dacite lava dome began to appear on 18 October 1980 above the lip of a 25-metre-diameter feeding conduit.
After 18 October 1980, this third and newest composite dome of dacite began to appear. By October 1986 this newest lava dome had grown within the horseshoe-shaped crater to be an immense structure up to 350 m high and up to 1,060 m in diameter (see Figures 1 and 2). The lava dome formed by a complex series of lava extrusions, supplemented occasionally by internal inflation of the dome by shallow intrusions of dacite magma into its molten core. Extrusions of lava produced short (200-400 m) and thick (20-40 m) flows piled on top of one another.2 Most dacite flows extended as lobes away from the top-centre of the dome, generally crumbling to very blocky talus on the flanks of the dome before reaching the crater floor (see Figure 3).
Figure 2. Mount St Helens’ new lava dome is composed of 74 million cubic meters of dacite flows and intrusions built up within the crater between 18 October 1980 and 26 October 1986. The view is toward the north looking over the lava dome into the 1980 blast zone (photo by Lyn Topinka of the US Geological Survey, after Pringle, Ref. 1).
Between 18 October 1980 and 26 October 1986, seventeen episodes of dome growth added 74 million cubic meters of dacite to this third and newest dome.3 During these eruptions magma viscosity was high and steam pressure was low so that the magma did not express itself explosively as it had during the six earlier events of 1980. The structure produced within the crater during the six-year period was an elliptical dome of dacite lava flows and intrusions 860 m (diameter east-west), by 1,060 m (diameter north-south), by 350 m (height above northern base). During the six-year period of building of the dacite dome, there was a steady decrease with time in the volume of magma extruded. On 26 October 1986, magma movement into the dome ceased and solidification of magma began within the neck of the volcano beneath the lava dome. Eruptions after 26 October 1986 were phreatic steam explosions, not direct expressions of magma. The stability of this third dome, along with decrease in the frequency of earthquakes and phreatic steam eruptions in the ten years after October 1986, indicate that the volcano, again, may be approaching a period of dormancy.
The SiO2 content of 69 samples of the 1980 to 1986 lava dome at Mount St Helens is 63.0 ± 0.4 percent.4 Called a ‘porphyritic dacite’,5 the rock averages about 55 percent fine-grained, grey groundmass and 45 percent phenocrysts and lithic inclusions (see Figure 4). The groundmass of the rock is composed of microphenocrysts of plagioclase, orthopyroxene, and Fe-Ti oxides within a glass matrix.6 Later flows on the lava dome showed a tendency toward higher crystallinity of the groundmass7 and about 1 percent greater SiO2.8 Phenocrysts of plagioclase (30–35 percent), orthopyroxene (5 percent), hornblende (1–2 percent), Fe-Ti oxides (1 to 2 percent), and clinopyroxene (less than 0.5 percent) together comprise almost half of the lava dome.9 Lithic inclusions of gabbro, quartz diorite, hornfelsic basalt, dacite, andesite and vein quartz together compose 3.5 percent of the dome dacite.10 Of the lithic inclusions 85 percent are medium grained gabbros with an average diameter of 6 cm.11 The high mafic mineral content of gabbroic inclusions makes a small but significant decrease in the overall SiO2 content of the dacite lava dome.12
Figure 3. Blocky surface texture of the east side of the dacite lava dome above prominent talus slope (helicopter photo by S.A. Austin, October 1989).
Geologists are in general agreement concerning the crustal source of the dacitic magma beneath Mount St Helens. Experimental data from the assemblage of minerals in the dacite indicate that just prior to the 18 May 1980 eruption the upper part of the magma chamber was at a temperature of 930°C and at a depth of about 7.2 km.13 That magma is believed to have contained about 4.6 weight% total volatiles, mostly H2O.14 The last dome-building intrusion event of 1986 delineated two aseismic zones (from 7–12 km and from 3–4.5 km depth) indicating that the deep magma chamber has a shallow magma-storage region.15 Fe-Ti oxide pairs indicated magmatic temperatures decreasing to about 870°C in 1986 when flows into the lava dome stopped.16
In June 1992, a seven-kilogram sample of dacite was collected from just above the talus apron on the farthest-north slope of the lava dome. Because the sample comes from the sloping surface of the dome, it most likely represents the upper surface of a flow lobe. The flow interpretation of the sample is corroborated by the ‘breadcrust appearance’ of dacite at the sample location, the blocky fracture pattern which suggests the toe of a lava flow, and the presence of dacite scoria just above the sample. The position on the dome suggests that the sample represents the surface of one of the last lava flows, probably from the year 1986.
Table 1. Major-element and trace-element abundances in the 1986 dacite lava flow at Mount St Helens determined by X-ray fluorescence. The analysis was performed on dacite groundmass and phenocrysts without lithic inclusions.
The composition of the sample matches closely the published mineralogic, petrographic and chemical descriptions of ‘porphyritic dacite’.17 Phenocrysts of the sample are of the kind and abundance representative of the entire lava dome. The sample even has several gabbroic inclusions of the composition and size representative of the whole lava dome.18 The chemical analysis of the sample’s groundmass with phenocrysts (without gabbroic inclusions) gave 67.5 percent SiO2 by the X-ray fluorescence method (see Table 1). If the gabbroic inclusions were included in the whole rock analysis, the dacite would be about 64 percent SiO2, the average composition of the 1986 flows on the lava dome. Normative minerals were calculated in Table 2, with the assemblage representative of dacite. Thus, this seven-kilogram sample of dacite is representative of the whole lava dome.
One kilogram of dacite groundmass with phenocrysts (without gabbroic inclusions) was removed from the sample for potassium-argon analysis. The technique began by crushing and milling the dacite in an iron mortar. Particles were sieved through the 80 mesh (0.18 mm) screen and collected on top of the 200 mesh (0.075 mm) screen. The 80–200 mesh (0.18–0.075 mm) particles were specified by the argon lab to be the optimum for the argon analysis.
A second, one-kilogram sample of dacite groundmass was subsequently processed to concentrate more of the pyroxene. This separate preparation utilized crushed particles sieved through a 170 mesh (0.090 mm) screen and collected on a 270 mesh (0.053 mm) screen. These finer particles (0.053–0.090 mm) were found to allow more complete concentration of the mineral phases, even though these particles were finer than the optimum requested by the lab.
Because of the possibility of particles finer than 200 mesh absorbing or releasing a larger portion of argon, particles passing through the 200-mesh screen were rejected. The only exception was the single preparation made from particles passing through 170 mesh and collected on the 270-mesh screen.
Throughout the crushing, milling, sieving and separation processes, great care was taken to avoid contamination. The specific steps used to stop or discover contamination of the samples included:
Sawing of rock from the interior of the collected block of dacite (used to remove particles adhering to the sample),
Washing all surfaces and screens that were to contact directly the sample,
Final wet sieving of particles on the 200-mesh screen (or 270-mesh screen) to insure removal of finer particles (including possible contaminant lab dust introduced during milling),
Filtration of heavy liquids to remove contaminants,
Microscopic scanning of particle concentrates for foreign particles,
Preparation of the second concentrate from the raw dacite sample involving completely separate milling and screening (in order to discover if contamination had occurred in one of the concentrates), and
Sealing of samples in vials between preparation steps.
Five concentrates included one whole-rock powder and four mineral preparations. The concentrate names and descriptions are:
DOME-1 ‘Whole-rock preparation’ composed of representative particles from both the dacite groundmass and phenocrysts, without lithic inclusions; particles 80–200 mesh.
DOME-1L ‘Feldspar-glass concentrate’ from the groundmass and phenocrysts; particles 80–200 mesh; mostly plagioclase, but also contains fragments from the glassy matrix.
DOME-1M ‘Heavy-magnetic concentrate’ from the groundmass and phenocrysts; mostly hornblende with Fe-Ti oxides; particles 80–200 mesh.
DOME-1H ‘Heavy-nonmagnetic concentrate’ from the groundmass and phenocrysts; mostly orthopyroxene; particles 80–200 mesh.
DOME-1P ‘Pyroxene concentrate’ from the groundmass and phenocrysts; particles 170–270 mesh; prepared from separate dacite sample in fashion similar to DOME-1H, but with more complete concentration of orthopyroxene.
Table 2. Idealized normative mineral assemblage for the Mount St Helens dacite calculated from the major-element abundances of Table 1.
The last four mineral concentrates were prepared from the whole rock by heavy liquid and magnetic separation. First, the representative particles from the groundmass and phenocrysts were dispersed in tribromomethane (CHBr3), a heavy liquid with a density of 2.85 g/cc at room temperature. These particles and heavy liquid were centrifuged in 250 ml bottles at 6,000 rpm. After ten minutes of centrifugation at 20°C, the float particles were collected, filtered, washed, dried and labeled. This float concentrate, ‘DOME-1L’, was more than 90 percent of the original and became the ‘feldspar-glass concentrate’. The heavy-mineral residue that sank in the heavy liquid was collected, filtered, washed and dried. It was discovered that the heavy concentrate could be separated into ‘strongly magnetic’ and ‘weakly magnetic’ fractions, with about one-third of the heavy residue being strongly magnetic. The heavy concentrate was divided by a very strong hand magnet on a large piece of filter paper at a 45° slope angle. The ‘heavy magnetic’ fraction, later labeled ‘DOME-1M’, was composed of heavy particles which climbed up the paper at 45° slope above the influence of the magnet which was moved under the paper. The residue that did not move up the filter paper was the ‘heavy-nonmagnetic’ fraction. It was labeled ‘DOME-1H’. A fourth mineral concentrate was prepared from a completely separate portion of the dacite sample and processed similar to DOME-1H except from finer particles (170–270 mesh). This finer, heavy-nonmagnetic fraction separated from the dacite was labeled ‘DOME-1P’.
Microscopic examination of the four mineral concentrates indicated the effectiveness of the separation technique. The ‘feldspar-glass concentrate’ (DOME-1L) was dominated by plagioclase and glass, with only occasional mafic microphenocrysts visible in the plagioclase and glass. Although not a complete separation of non-mafic minerals, this concentrate included plagioclase phenocrysts (andesine composition with a density of about 2.7 g/cc) and the major quantity of glass (density assumed to be about 2.4 g/cc). No attempt was made to separate plagioclase from glass, but further use of heavy liquids should be considered.
The ‘heavy-magnetic concentrate’ (DOME-1M) was dominated by amphibole minerals, with hornblende assumed to be the most abundant magnetic mineral within the dacite. However, there was also a significant amount of Fe-Ti oxide minerals, probably magnetite and ilmenite. The ‘heavy-magnetic concentrate’ also had glassy particles (more abundant than in the ‘heavy-nonmagnetic concentrate’). Mafic microphenocrysts within these glassy particles were probably dominated by the strongly magnetic Fe-Ti oxide minerals. The microscopic examination of the ‘heavy-magnetic concentrate’ also revealed a trace quantity of iron fragments, obviously the magnetic contaminant unavoidably introduced from the milling of the dacite in the iron mortar. No attempt was made to separate the hornblende from the Fe-Ti oxides, but further finer milling and use of heavy liquids should be considered.
Figure 4. Photomicrograph of Mount St Helens dacite flow of 1986. The most abundant phenocrysts are plagioclase which are embedded in a much finer-grained groundmass containing glass and microphenocrysts. Photographed in polarised light with 2 mm width of view (dacite sample ‘DOME-1’, photo by A.A. Snelling).
The ‘heavy-nonmagnetic concentrate’ (DOME-1H) was dominated by orthopyroxene with much less clinopyroxene, but had a significant quantity of glassy particles attached to mafic microphenocrysts and fragments of mafic phenocrysts along incompletely fractured grain boundaries. These mafic microphenocrysts and fragments of mafic phenocrysts evidently increased the density of the attached glass particles above the critical density of 2.85 g/cc, which allowed them to sink in the heavy liquid. This sample also had recognizable hornblende, evidently not completely isolated by magnetic separation.
The ‘pyroxene concentrate’ (DOME-1P) was dominated by orthopyroxene and much less clinopyroxene. Because it was composed of finer particles (170–270 mesh), it contained far fewer mafic particles with attached glass fragments than DOME-1H. This preparation is the purest mineral concentrate. Microscopic examination of the orthopyroxene showed it to be a high-magnesium variety, explaining why it was nonmagnetic or only weakly magnetic.
The first three mineral concentrates (DOME-1L, DOME-1M, and DOME-1H) are representative of three different assemblages within the dacite. Because only the finer than 200 mesh fraction was discarded during preparation, these three concentrates should approximately sum, according to their abundance, to make the whole rock. They may not exactly sum because of differences in grindability of the minerals and their groundmass.
Potassium and argon were measured in the five concentrates by Geochron Laboratories of Cambridge, Massachusetts, under the direction of Richard Reesman, the K-Ar laboratory manager. These preparations were submitted to Geochron Laboratories with the statement that they came from dacite, and that the lab should expect ‘low argon’. No information was given to the lab concerning where the dacite came from or that the rock has a historically known age (ten years old at the time of the argon analysis).
The analytic data are reported in Table 3. The concentration of K (%) was measured by the flame photometry method, the reported value being the average of two readings from each concentrate. The 40K concentration (ppm) was calculated from the terrestrial isotopic abundance using the concentration of K. The concentration in ppm of 40Ar*, the supposed ‘radiogenic argon-40’, was derived from isotope dilution measurements on a mass spectrometer by correcting for the presence of atmospheric argon whose isotopic composition is known. The reported concentration of 40Ar* is the average of two values. The ratio 40Ar/Total Ar is also derived from measurements on the mass spectrometer and is the average of two values.
The ‘age’ of each concentrate is calculated by making use of what Faure19 calls the ‘general model-age equation’:

t = (1/λ) ln(1 + (Dt - Do)/Pt)     (1)

where t is the ‘age’, λ is the decay constant of the parent isotope, Dt is the number of daughter atoms in the rock presently, Do is the number of daughter atoms initially in the rock, and Pt is the number of parent atoms presently in the rock. Equation (1) can be used to date the rocks if measurements of Dt and Pt are made from the rock, and if an assumption concerning the original quantity of daughter (Do) is made. For the specific application to K-Ar dating,20 equation (1) becomes equation (2):

t = (1/λ) ln(1 + (40Ar*/40K)/0.105)     (2)

where t is the ‘age’ (reported in Table 3 in millions of years), λ = 5.543 × 10⁻¹⁰ yr⁻¹ is the current estimate for the decay constant for 40K, 0.105 is the estimated fraction of 40K decays producing 40Ar, and 40Ar*/40K is the calculation by standard procedure of the mole ratio of radiogenic 40Ar to 40K in the concentrate. It should be noted that equation (1) becomes equivalent to equation (2) when

40Ar* = Dt - Do

Thus, 40Ar* includes within it an assumption concerning the initial quantity of 40Ar in the rock. As a matter of practice, no radiogenic argon is supposed to have existed when the rock formed. That is, Do = 0 is supposed for equation (2) to give accurate ages. Thus, equation (2) yields a ‘model age’ assuming zero radiogenic argon in the rock when it formed. After the initial daughter assumption is made, 40Ar* is determined. Then, the mole ratio 40Ar*/40K is calculated in Table 3 from each concentrate’s 40Ar* (ppm) and 40K (ppm). Once the mole ratio is calculated (see Table 3), it is inserted into equation (2) to calculate the ‘model ages’ listed in Table 3.
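A minimal computational sketch of equation (2), written here in Python purely for illustration. The decay constant and branching fraction are the values quoted in the text; the concentrations in the example call are hypothetical (the actual Table 3 values are not reproduced here) and are chosen only to land near the low end of the reported range of ‘ages’.

```python
import math

LAMBDA_40K = 5.543e-10   # total decay constant of 40K, per year (value quoted in the text)
BRANCH_AR = 0.105        # fraction of 40K decays producing 40Ar (value quoted in the text)
M_AR40 = 39.9624         # molar mass of 40Ar, g/mol
M_K40 = 39.9640          # molar mass of 40K, g/mol

def k_ar_model_age_ma(ar40_star_ppm: float, k40_ppm: float) -> float:
    """Model 'age' in Ma from equation (2), assuming no initial radiogenic 40Ar (Do = 0)."""
    mole_ratio = (ar40_star_ppm / M_AR40) / (k40_ppm / M_K40)   # 40Ar*/40K (mol/mol)
    t_years = math.log(1.0 + mole_ratio / BRANCH_AR) / LAMBDA_40K
    return t_years / 1.0e6

# Hypothetical concentrations, NOT the Table 3 values:
print(round(k_ar_model_age_ma(ar40_star_ppm=4.5e-5, k40_ppm=2.3), 2))  # ~0.34 Ma
```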
Table 3. Potassium-argon data from the new dacite lava dome at Mount St Helens Volcano.
The argon analyses of the dacite lava dome show, surprisingly, a non-zero concentration of ‘radiogenic argon’ (40Ar*) in all preparations from the dacite. K-Ar ‘ages’ using equation (2) range from 0.34 ± 0.06 Ma (million years) to 2.8 ± 0.6 Ma (see Table 3). Because the sampled dacite at the time of the analyses was only ten years old, there was no time for measurable quantities of 40Ar* to accumulate within the rock due to the slow, radioactive decay of 40K. The conclusion seems inescapable that measurable 40Ar* in the dacite is not from radiogenic accumulation, but must have been resident already within the different mineral assemblages when the rock cooled from the lava in the year 1986. The lab has not measured ‘radiogenic argon’ but some other type of argon.
Other historic lava flows have been recognized to have non-zero values for 40Ar*. Of 26 historic, subaerial lava flows studied by Dalrymple,21 five gave ‘excess argon’ and, therefore, yielded excessively old K-Ar ‘ages’:
Hualalai basalt (Hawaii, AD 1800–1801): 1.6 ± 0.16 Ma; 1.41 ± 0.08 Ma
Mt Etna basalt (Sicily, 122 BC): 0.25 ± 0.08 Ma
Mt Etna basalt (Sicily, AD 1792): 0.35 ± 0.14 Ma
Mt Lassen plagioclase (California, AD 1915): 0.11 ± 0.3 Ma
Sunset Crater basalt (Arizona, AD 1064–1065): 0.27 ± 0.09 Ma; 0.25 ± 0.15 Ma
Dalrymple22 recognized that these anomalous ‘ages’ could be caused by ‘excess radiogenic 40Ar’ from natural contamination, or caused by isotopic fractionation of argon. Krummenacher23 offered similar explanations for unexpected argon isotope ratios from several modern lava flows. Olivine, pyroxene and plagioclase from basalts of the Zuni-Bandera volcanic field (Quaternary of New Mexico) showed very significant quantities of excess argon inherited from the magmatic sources.24 The same conclusion applies to olivine and clinopyroxene phenocrysts from Quaternary volcanoes of New Zealand.25 Significant excess argon was also found in submarine basalts from two currently active Hawaiian volcanoes, Loihi Seamount and Kilauea.26
What caused the non-zero 40Ar* in the Mount St Helens dacite? Could contaminant 40Ar in the laboratory have been added to the Mount St Helens dacite giving the impression of great age? The possibility of contamination caused extreme care to be taken in cleaning the processing equipment, and the concentrates were sealed tightly in vials between preparation and analysis. Could the processing equipment itself be adding argon? For example, might the iron fragments produced during milling the sample in the mortar add argon? The heavy-liquid separation process strongly rejects heavy iron from the light feldspar-rich assemblage (preparation DOME-1L), but this concentrate also contains significant 40Ar. Other processes seem to exclude or isolate laboratory contamination. The wet sieving on the 200-mesh screen, for example, should remove any fine lab dust which could have fallen onto the concentrates. Because of these extraordinary considerations, laboratory contamination of the five concentrates is a very remote possibility.
Could the magmatic process beneath the lava dome be adding a contaminant to the molten dacite as it ascends from great depth? This is a possibility needing consideration. Might an argon-rich mineral (‘xenocryst’) be added to the magma and impart an excessive age to the ‘whole rock’ dacite? The data of Table 3 seem to argue that very different mineral phases of the dacite each contain significant 40Ar. Although the mineral concentrates are not pure, and all contain some glass, an argument can be made that both mafic and non-mafic minerals of the dacite contain significant 40Ar. The lithic inclusions in the lava dome might be thought to be the contaminant, in which case they might add ‘old’ mafic and non-mafic minerals to the young magma. It could be argued that gabbroic clumps in the magma disaggregated as the fluidity of the magma decreased with time, thereby adding an assortment of ‘old’ mineral grains. However, Heliker27 argues that the gabbroic inclusions are not xenoliths from the aged country rock adjacent to the pluton, but cumulates formed by crystal segregation within a compositionally layered pluton. These inclusions are, therefore, regarded as a unique association within the recent magmatic system.
Could the magmatic conditions at depth allow argon to be occluded within the minerals at the time of their formation? This last, and most interesting, explanation of the anomalous 40Ar suggests the different quantities of argon in different mineral assemblages are caused by variation in the partial pressure of the gas as crystallization progressed, or by different quantities of gas retained as pressure was released. Crystallization experiments by Karpinskaya28 show that muscovite retains up to 0.5 percent by weight argon at 640°C and vapour pressure of 4,000 atmospheres. Phenocryst studies by Poths, Healey and Laughlin29 showed that olivine and clinopyroxene separated from young basalts from New Mexico and Nevada have ‘ubiquitous excess argon’. A magmatic source was postulated for the argon in phenocrysts of olivine and clinopyroxene in Quaternary volcanics of New Zealand.30 Presumably other minerals occlude argon in relation to the partial pressure of the gas in the magma source.
Laboratory experiments have been conducted on the solubility of argon in synthetic basaltic melts and their associated minerals.31, 32 Minerals and melts were held near 1300°C at one atmosphere pressure in a gas stream containing argon. After the material was quenched, the researchers measured up to 0.34 ppm 40Ar within synthetic olivine. They noted, ‘The solubility of Ar in the minerals is surprisingly high’.33 Their conclusion is that argon is held primarily in lattice vacancy defects within the minerals.
Argon occlusion within mineral assemblages is supported by the data from the dacite at Mount St Helens. Table 3 indicates that although the mineral concentrates (rich in feldspar, amphibole or pyroxene) have about the same ‘Total Ar’ concentrations, the ‘pyroxene concentrate’ possesses the highest concentration of 40Ar* (over three times that of the ‘feldspar-glass concentrate’) and the highest proportion of 40Ar* (40Ar*/Total Ar is over three times that of the ‘feldspar-glass concentrate’). These data suggest that whereas the orthopyroxene mineral structure has about the same number of, or slightly fewer, gas retention sites as the associated plagioclase, orthopyroxene has a tighter structure and is able to retain more of the magmatic 40Ar. Orthopyroxene retains the most argon, followed by hornblende, and finally, plagioclase. According to this interpretation, the concentration of 40Ar* of a mineral assemblage is a measure of its argon occlusion and retention characteristics. Therefore, the 2.8 Ma ‘age’ of the ‘pyroxene concentrate’ has nothing to do with the time of crystallization.
Where does the argon in the magma come from? Could it be from outgassing of the lower crust and upper mantle? More study is needed.
To test further the hypothesis of argon occlusion in mineral assemblages, higher purity mineral concentrates could be prepared from the dacite at Mount St Helens. Finer-grained concentrates should be processed more completely with heavy liquids and magnetic separation. The preparation of DOME-1P, a finer-grained and purer pyroxene concentrate than DOME-1H, has, as expected, a higher concentration of 40Ar* and lower concentration of 40K. Acid-solution techniques or further use of heavy liquids could also help to remove undesirable glass. The glass itself should be concentrated for analysis of argon.
Do other volcanic rocks with phenocrysts have mineral assemblages with generally occluded argon? Phenocrysts are very common in volcanic rocks, so a general test of the hypothesis could be devised. In addition to testing other historic lava flows, phenocrysts from some ancient flows might be tested for ‘ages’ which greatly exceed the ‘whole rock’ age. Three possible applications are suggested here.
Basalt of Devils Postpile (Devils Postpile National Monument, California)
Plagioclase separated from the Devils Postpile basalt gave a K-Ar ‘age’ of 0.94 ± 0.16 million years.34 The basalt has been reassigned recently an age of less than 100,000 years based on new geologic mapping and detailed stratigraphic study.35 What was the cause of the excessively old age? It could be argon occluded within the plagioclase.
Basalt of Toroweap Dam (western Grand Canyon, Arizona)
The basalt of Toroweap Dam lies at the bottom of Grand Canyon very near the present channel of the Colorado River. The basalt has been dated twice by the K-Ar method at 1.16 ± 0.18 Ma and 1.25 ± 0.2 Ma.36 The original researchers qualified their statements concerning the basalt date by saying, ‘There is the possibility that pre-eruption argon was retained in the basalt’.37 Many other basalts of western Grand Canyon have been shown to contain ‘excess argon’.38 Although the original researchers do not express certainty concerning the K-Ar age of the basalt at Toroweap Dam, other geologists have assigned much greater certainty and use the K-Ar age to argue that Grand Canyon has existed for a very long time (see especially D.A. Young39).
Keramim basalt (northern Golan Heights, Israel)
‘Stone Age’ artifacts occur beneath Keramim basalt dated at 0.25 Ma by the K-Ar method.40 However, human occupation is not thought to have occurred in Israel during the Lower Palaeolithic,40 so this and other K-Ar ‘ages’ should be checked. Because the K-Ar method has been used elsewhere to date Neanderthal Man, we might ask if other Neanderthal ‘ages’ need careful scrutiny.
Argon analyses of the new dacite lava dome at Mount St Helens raise more questions than answers. K-Ar model-age dating rests on the primary assumption that the mineral phases of a rock contain zero 40Ar* when it solidifies. This assumption has been shown to be faulty. Argon occlusion in mineral phases of the dacite at Mount St Helens is a reasonable alternative assumption. This study raises a more fundamental question: do other phenocryst-containing volcanic rocks give reliable K-Ar ages?
Financial support was provided by the Institute for Creation Research and Mr Guy Berthault. Dr Andrew Snelling provided helpful comments and reviews of the manuscript.
| http://www.answersingenesis.org/articles/tj/v10/n3/argon | 13 |
59 | Plain text is represented by character codes. These are bit patterns that might equally well be interpreted as numbers. Indeed, they are interpreted as numbers, in that we may say "the code for a is 97". What this means is that the bit pattern 01100001, which may be interpreted as the number 97, will, when sent to an appropriate display device, cause the letter a to be displayed.
The most widely used character code is the ASCII (American Standard Code for Information Interchange) code. For many years, most computer equipment assumed this code. For example, computer terminals used to be designed to interpret such codes directly. When the computer sent a byte to the terminal, circuits in the terminal would generate and display the appropriate pattern of pixels. Similarly, until recently, much software contained assumptions based on the ASCII code. For example, lower-case letters would be converted to upper-case letters by subtracting 32 from their character codes. If the ASCII code is in use, this works because the upper-case letters precede the lower-case letters and there are six punctuation signs between them.
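A short Python sketch (not part of the original notes) of the subtract-32 trick; it works only for text restricted to ASCII letters:

def to_upper_ascii(s):
    out = []
    for ch in s:
        code = ord(ch)
        if 97 <= code <= 122:      # codes for 'a'..'z'
            code -= 32             # 'a' (97) becomes 'A' (65), and so on
        out.append(chr(code))
    return "".join(out)

print(ord("a"), ord("A"))               # 97 65
print(to_upper_ascii("ascii, 1967!"))   # ASCII, 1967!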
The original ASCII system contains 128 codes, ranging from 0 through 127.
This means that in the usual binary representation for unsigned numbers,
only the low seven bits are used. The high bit (written at the left) is always
0 for ASCII characters. Here is a chart of the ASCII characters; fuller information can be found elsewhere.
The ASCII characters are divided into printing characters and non-printing characters. The printing characters are the characters that appear directly on a computer screen or a printout: the letters of the alphabet, the digits, and the punctuation symbols. The non-printing characters include the whitespace characters: space, tab, linefeed and carriage return. Most of the non-printing characters are control characters. These codes were originally used to control teletype machines. Some of them, such as "start of text" and "acknowledge" have no use as such anymore. Others, such as "backspace" and "bell" are still meaningful. ("bell" is the code that causes computer terminals to beep. It originally caused the bell on the teletype machine to ring.) Nowadays the control characters are often used for other purposes, e.g. to give commands to a text editor.
The chart above gives the numerical values of the character codes as base-10 numbers,
the numbers in common use. However, character codes are not usually given
in this form. The older convention is to give them in octal, that is,
in base-8. The ASCII code for a, decimal 97, would be described as
octal 141. Here the first digit represents 8 to the second power, the second digit
represents 8 to the first power, and the last digit represents 8 to the zeroth power,
that is, one. 141 in base 8 is therefore 1*8*8 + 4 *8 + 1 = 64 + 32 + 1 = 97.
Nowadays, character codes are usually given in hexadecimal, that is,
in base 16. In hexadecimal, decimal 10 is represented by A, 11 by B, 12 by C,
13 by D, 14 by E, and 15 by F. The ASCII code for the letter a is 61 in
hexadecimal, that is, 6*16 + 1.
Further information on base conversions may be found elsewhere.
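These conversions are easy to check in Python (an illustration added here, not part of the original notes):

code = ord("a")                    # 97 in decimal
print(oct(code), hex(code))        # 0o141 0x61
print(int("141", 8))               # 97: octal 141 back to decimal
print(int("61", 16))               # 97: hexadecimal 61 back to decimal
print(1*8*8 + 4*8 + 1, 6*16 + 1)   # 97 97: the place-value arithmetic spelled out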
One of the ways in which text files that use the same character encoding may vary is in how they represent the end of the line. The old convention goes back to the days of teletype machines. To move from the end of one line to the beginning of the next, two operations were necessary. A "carriage return" (abbreviated CR) moved the print head back to the left margin; a "line feed" (abbreviated LF, or NL, for "newline") moved the print head down one line. In the days of teletypes, two character codes therefore had to be sent out at the end of a line: CR LF. DOS and Windows still make use of this convention. Other operating systems make use of just one of the two. On the Macintosh, end-of-line is represented by CR alone. On Unix systems, NL alone is used.
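The three conventions are easy to see by looking at the raw bytes (a Python illustration, not from the original notes):

line = "first line"
print((line + "\r\n").encode("ascii"))   # DOS/Windows: CR LF
print((line + "\r").encode("ascii"))     # older Macintosh: CR alone
print((line + "\n").encode("ascii"))     # Unix: NL (LF) alone

# Converting a DOS-style file to Unix conventions is a byte substitution:
dos_text = b"one\r\ntwo\r\n"
print(dos_text.replace(b"\r\n", b"\n"))  # b'one\ntwo\n'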
Conventions also differ as to when end-of-line is marked at all.
Word processors such as Word Perfect and Microsoft Word use
end-of-line to indicate what they call a "hard return", that is,
a place at which the line MUST end. This is more-or-less equivalent
in these programs to the end of a paragraph. They wrap text
to fit the chosen margins when they display it and print it, so it
is not necessary for the user to enter an end-of-line marker
except when a "hard return" is desired. Text files created in such
word processors will therefore usually have relatively few end-of-line
markers in them.
The ASCII character code is by no means the only one that has been used. Over the years, hundreds of character encodings have been developed to represent languages and writing systems other than English. Even for English, there have been other codes, especially in the early days of computing. For English, the main competitor for ASCII has been the EBCDIC (Extended Binary Coded Decimal Interchange Code) code developed by IBM. EBCDIC is still used on IBM mainframe computers. Here is an EBCDIC chart. Note that the EBCDIC code, unlike ASCII, is an 8-bit code. The high bit is not necessarily 0. Indeed, the letters of the alphabet all have codes above 128 in EBCDIC.
How characters are represented for a wide range of languages and writing systems
is a complicated matter that we will address later in the course. For the time
being, we will generally work with text in ASCII code, or in the Latin-1 encoding,
which is used for many European languages.
Latin-1 is an extension of ASCII
in the sense that it assigns the same codes to the ASCII characters and
adds codes ranging from 128 to 255 for additional characters, such as
é and Ü.
We often speak of files "text files", "graphics files", "sound files" and so forth, as if files were of one type or another. In Unix, files do not really have any intrinsic type. A file is simply a sequence of bytes. What those bytes represent is a matter, on the one hand, of the intention of the person who created the file, and on the other hand, of how they are interpreted. The bit pattern 01100001 may represent the letter a, if interpreted as ASCII text, the punctuation symbol / , if interpreted as EBCDIC text, or the number 97 if interpreted as an unsigned 8-bit number. It might also represent the grey-level of a pixel in a gray-scale image, the color of a pixel in a color image, or the amplitude of a sound wave at one point in time.
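The point can be made concrete with a one-byte example (Python; cp037 is used here as a stand-in for EBCDIC, since it is one of several EBCDIC code pages):

b = bytes([0b01100001])      # the single byte with bit pattern 01100001

print(b.decode("ascii"))     # 'a'  -- interpreted as ASCII text
print(b.decode("cp037"))     # '/'  -- interpreted as EBCDIC text
print(b[0])                  # 97   -- interpreted as an unsigned 8-bit number,
                             #         e.g. a grey level or a sound amplitude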
To illustrate the fact that the same data can represent either an image or sound depending on how it is interpreted, we will generate a sound file and an image file from the same data.
First we use R to generate the data:
#Generate a sequence of 1,000,000 values.
x=seq(1,10^6)
#Generate their sines, with values ranging from 0. to 1.0
y=(sin(2*pi*x/441)+1.0)/2.0
#Scale so values range from 0 to 255
y=y*(2^8-1)
#Convert to integer form.
yi=as.integer(y)
#Write to file as ASCII text
write(yi,file="AudioImage.asc")
We now have a file containing a sine wave in ASCII form, each sample represented by an unsigned 8-bit integer. Next, we convert the ASCII values to binary. I use my own little utility, atoub, which takes as input ASCII integers and converts them to unsigned 8-bit binary integers:
%atoub AudioImage.asc AudioImage.raw
AudioImage.raw now contains one million 8-bit integers representing a sine wave. To create a sound file, we need to add a wav header. We can do this with sox:
%sox -r 44100 -u -b AudioImage.raw -r 44100 -u -b AudioImage.wav
Running InfoWave produces the following summary of the structure of this file. It consists of 44 bytes of header information followed by the actual audio sample data.
0: RIFF identifier.
4: chunk size = 1,000,036 bytes.
8: WAV identifier.
12: format chunk identifier
16: format chunk size = 16 bytes.
20: data format: PCM.
22: one channel (mono).
24: Sampling Rate = 44,100 samples per second.
28: Average Data Rate = 44,100 bytes per second.
32: Bytes_Per_Sample value of 1 indicates 8-bit mono
34: Bits_Per_Sample = 8.
36: chunk id
40: chunk length
44: chunk of type data (standard) length 1,000,000 bytes amounting to 0 minutes and 22.7 seconds
If you play this file, you will hear a 100 Hz pure tone 22.68 seconds long. Here is a screenshot of a segment of this file as displayed by Wavesurfer:
The upper panel shows the F0 contour, which is a horizontal line at 100Hz. The lower panel shows the sound pressure waveform, which is a sine wave.
To create the graphics file, we prepend to the data a header appropriate for a binary format PGM file. The header itself is the following ASCII text:
P5
#xxxxxxxxxxxxxxxxxxxxxxxxx
1000 1000
255
This header provides the information that this is a binary format PGM file, that it contains an image 1000 pixels wide by 1000 pixels high, and that the maximum grey-level value is 255. I've added an unnecessary comment to pad the header to 44 bytes so that it will be the same length as the wav file header.
%cat pgmbhdr AudioImage.raw > AudioImage.pgm
The resulting file can be displayed by a suitable graphics program.
Here is a screenshot of the result of using the
program from the ImageMagick
package to display it:
Both downloadable files are 1,000,044 bytes: the 44-byte header followed by the 1,000,000 bytes of data.
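For readers who do not have the atoub utility, sox, or the prepared pgmbhdr file, a roughly equivalent pipeline can be sketched in Python using only the standard library; the file names and parameters follow the example above, but the WAV header this produces may differ in minor details from the one sox writes:

import math, wave

RATE = 44100
N = 10**6

# One million samples of a 100 Hz sine wave, scaled to unsigned 8-bit values
samples = bytes(int(255 * (math.sin(2 * math.pi * x / 441) + 1.0) / 2.0)
                for x in range(1, N + 1))

# Sound: wrap the bytes in a WAV container (8-bit unsigned mono PCM, 44.1 kHz)
with wave.open("AudioImage.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(1)
    w.setframerate(RATE)
    w.writeframes(samples)

# Image: prepend a 44-byte binary PGM header and treat the same bytes as a
# 1000 x 1000 grey-scale picture
header = b"P5\n#" + b"x" * 25 + b"\n1000 1000\n255\n"
with open("AudioImage.pgm", "wb") as f:
    f.write(header + samples)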
Indeed, the same data can also be interpreted as text. If we surround our sine wave with an HTML header and footer, with the header identifying the content as Latin-1 text, we see this:
And if we identify the character set as Koi8-r, the Cyrillic encoding widely used in Russia, we see this:
Admittedly, this isn't the most interesting text. Data that are coherent in one interpretation won't necessarily be in another.
Here are the HTML files in case you want to see for yourself. Be warned, though,
that loading these files will load down your browser for a while.
In Unix, neither the file nor its name necessarily indicates what kind of file it is. In some operating systems, such as Microsoft Windows, the file name extension indicates what kind of file it is. Thus, files with the extension exe are programs, while files with the extension doc are Microsoft Word documents. In Unix, this is not necessarily the case. Filenames may be constructed so as to encode information about the content of the file, but the operating system does not require this, nor do most programs. For example, it is common to use the suffix .jpg for image files in the JPEG format, but programs for viewing and processing images will in general not care whether the file has such a suffix.
Another way of indicating what kind of information a file contains is by means of a header. This is some information put at the beginning of the file that tells programs that read the file about its contents. A header often begins with a byte that indicates the file type. Such a byte is known as a magic number. The header may contain other information as well, such as the sampling rate of a sound file or the size of an image.
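A toy version of this idea in Python (the table below lists a few well-known signatures and is illustrative, not exhaustive):

MAGIC = {
    b"GIF87a": "GIF image (87a)",
    b"GIF89a": "GIF image (89a)",
    b"\x89PNG\r\n\x1a\n": "PNG image",
    b"%PDF-": "PDF document",
    b"RIFF": "RIFF container (e.g. a WAV sound file)",
}

def guess_type(path):
    with open(path, "rb") as f:
        head = f.read(16)
    for magic, name in MAGIC.items():
        if head.startswith(magic):
            return name
    return "unknown (no recognized magic number)"

print(guess_type("AudioImage.wav"))   # RIFF container (e.g. a WAV sound file)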
On Unix systems each file also has associated with it information called permissions, which determine who is allowed to read it, write it, and execute it. Permissions are not actually part of the file. A Unix system will not execute a program if the permissions of the file containing it do not indicate that it is executable. However, execution permission does not actually indicate the contents of the file. An executable file may be a machine language program, which can be directly executed, or it may be a script that must be interpreted by another program, such as a shell. Marking it as executable only tells the system to consider it for execution.
A file's contents can often be identified by means of the file program. This is a utility that attempts to identify file types on the basis of their magic numbers and contents. It is not foolproof, but it does quite a good job. (The version of file provided by some computer manufacturers is rather impoverished. What is probably the most sophisticated version can be obtained here.)
If you cannot identify a file, or if for some reason you need to see exactly
what is in a file, you may find the
od utility useful. od
stands for "octal dump" and refers to one of the things it can do, namely
display the contents of each byte as an octal number. The command od -bc
is particularly useful. It displays each byte as an octal number and also
displays it as an ASCII character, where possible. Otherwise, it displays
the escape sequence, if there is one (e.g. \t for tab), or as an octal escape
(e.g. \351 for é).
For example, if we give the command:
echo "abc xyz" | od -bc
0000000 141 142 143 040 170 171 172 012
          a   b   c       x   y   z  \n
0000010
One use of od is to read header information. For example, if we
run od on a GIF format image file, the output begins like this:
0000000 107 111 106 070 067 141 322 003 000 004 367 000 000 007 003 005
          G   I   F   8   7   a   Ò 003  \0 004   ÷  \0  \0  \a 003 005
Documents often consist of more than text. They may contain other kinds of data, such as images, and they may contain markup, that is, information about the structure of the text ("structural" or "logical" markup) and/or how it should be displayed ("physical" or "visual" markup).
Structural markup of a typical piece of text might consist of information such as "this is a chapter title" or "this is an element of an ordered list". Physical markup might consist of information such as "this should be printed in 18 point bold Helvetica type" or "this should be centered".
Some text formatting systems make use of overt markup. This is true of the roff family of text formatters troff, groff, nroff and their preprocessors, of TeX, and of lout. To format text in one of these systems, in addition to the actual text, one inserts markup which is interpreted by the text formatting program. Documents written in such formats can therefore be used as linguistic data by stripping out the markup.
The formats used by word processors, such as Microsoft Word, Word Perfect, and Nisus Writer Express, contain markup that is not intended to be human-readable. In addition to text and markup, word processor files may contain the fonts needed to print the document and images.
In addition to markup systems intended specifically for document formatting there are now general markup languages, which can be used for many purposes. The grandfather of modern general markup languages is SGML ("Standard Generalized Markup Language"), which is specified by ISO standard 8879:1986. (The standard is not freely available online but may be purchased from the ISO.) HTML ("Hypertext Markup Language"), the language in which web pages are written, is a specialized derivative of SGML. XML is a derivative of SGML that is increasingly used both for structuring documents and for databases, if not as the internal format, at least as a transfer format.
When printed, text is usually eventually translated into a low-level printer language. Such languages contain detailed information about the position at which to print each character as well as control codes for the printer. They may also contain instructions for graphics. In some cases, the printer is used as a pure graphics device, and text is printed by translating character codes and font information into instructions that tell the printer how and where to draw each character.
Documents are often now printed by translating them into a page description language which is then either translated into a low-level printer language or sent to a printer that is able to interpret it directly. A page description language is intermediate in abstraction between a document with markup and a low-level printer language. Perhaps the best known page description language is Postscript. Postscript is actually a complete programming language (similar to Forth), one capable of elaborate mathematical calculations, which contains primitives for printing characters and drawing graphics. One of the motivations for using Postscript when it was first developed in 1984 was to download the computational load of rendering complex documents from the computer to the printer. A document in Postscript could consist of a fairly abstract program which when executed on the processor of the printer would cause the printer to render the document. In the first few years of its life, the Apple laser printers that were the first printers to use Postscript usually had greater computing power than the computers to which they were connected.
A document in Postscript may consist of low-level graphics data, in which case the text cannot be extracted from it except, possibly, by optical character recognition. A document in Postscript may also consist of calls to functions themselves defined in the document applied to individual characters or other small bits of text. These functions position the text and determine its size and typeface. The characters or strings that are their arguments constitute the text itself. In this case, it is possible to extract the text by removing the function calls.
A file format that is frequently encountered is PDF, the Portable Document Format developed by Adobe in 1993. PDF is a derivative of Postscript, which adds to Postscript's imaging model a document structure and interactive navigation features. PDF is a portable page description format, which allows complex documents, including non-Roman fonts and graphics, to be read and printed on a wide variety of operating systems. PDF files may contain hyperlinks, and they may be password protected. The full PDF standard can be obtained from Adobe's web site here.
Adobe's own program for reading PDF files is Acrobat Reader. Acrobat Reader can be downloaded free for most operating systems here, but there are a number of others. A list of PDF readers for various operating systems may be found here.
However, for most linguistic purposes it is desirable to extract plain text from the PDF file. This can be done, if it is not locked, and if the material is in the form of text. Since PDF files can also contain images, one way to avoid problems with the distribution of exotic fonts is to convert a document to a set of images and embed the images in a PDF file. Such PDF files do not contain extractable, manipulable text. If the text is extractable, it can be extracted by using pdftotext.
Rich Text Format is a format for text and graphics interchange developed by Microsoft. RTF files are usually generated by programs rather than directly by people. RTF is actually a physical markup system, simpler than PDF. Both the markup tags and the text characters are written in a human-readable format, using ASCII characters, or occasionally another common single-byte character encoding. Other characters are represented by codes. Similarly, images are encoded as text.
The RTF specification is available both as a set of web pages and as a PDF file.
Thus far we have described files in their native format, such as plain text files. Files may also appear in a variety of modified formats.
Files are sometimes encoded to be sent as email. Once upon a time, the system for transmitting electronic mail was designed only to transmit plain text, that is, 7-bit ASCII, not "binary" files. In many ways, it mimicked teletype transmission. The designers expected only certain byte values to be included in message text. Other values were interpreted as control codes for the mail system. As a result, messages that contained certain values would not be transmitted properly. In order to allow files containing such bytes, such as images and computer programs, to be transmitted by email, they are encoded so that only the non-disruptive values are used. They must then be decoded at the other end. On Unix systems, the usual program for encoding binary files for email is uuencode. uudecode is the corresponding program for decoding them. Another widely used encoding, originating on the MacIntosh, is binhex.
Nowadays email encoding is generally handled automatically, without user intervention. Programs like pine on Unix systems, Outlook Express on MS Windows systems, and Eudora on Macintosh systems, automatically perform the necessary encoding and decoding using the MIME (Multipurpose Internet Mail Extensions) format, defined in RFC 2045. Furthermore, the software underlying the mail system is increasingly 8-bit clean. It is often possible to send 8-bit data without any problem. The result is that you are not very likely to have to deal with a file in encoded form.
Should you encounter the need to decode a file with email encoding, there is a single free program, uudeview, available for Unix and MS Windows systems, that handles all three major encoding systems: binhex, uudecode, and Base64.
Files are sometimes encrypted for security. An encrypted text file will generally appear to contain random binary data. It will not be identifiable as a text file. The traditional Unix encryption program is crypt. This is increasingly being replaced by gnupg.
Files are sometimes compressed in order to reduce disk usage or speed up transmission over the network. There are various ways of compressing files, which depend on the type of data. On Unix systems, the most widely used compression program is probably gzip, which is called gunzip when used to decompress. On MS Windows systems the most widely used compression program is Winzip. Another compression program that runs on just about every kind of system is bzip2.
For some purposes it is desirable to combine several files into a single file. It is generally more convenient to deal with a single large file than many small files, e.g. for transmission over a network or inclusion in email. Furthermore, one sometimes wants to preserve the directory structure, not just the individual files.
There are a number of programs that package groups of files. The most important of these on Unix systems is tar ("tape archiver"), which was originally intended to package files for archiving on tape. tar creates a single file that contains not only the original files but the information necessary to reconstruct the original tree structure, permissions, modification times, and so forth. Files created by tar often have the suffix .tar.
Packaging is often combined with compression. On Unix systems,
tar files that have also been compressed by gzip are usually given
either the suffix .tar.gz or most commonly .tgz.
The more recent versions of tar also perform gzip
compression and decompression if so requested (by the z flag).
On MS Windows systems Winzip
performs both packaging and compression.
tar and Winzip can each unpack and decompress files created
by the other. | http://www.billposer.org/Linguistics/Computation/LectureNotes/Text.html | 13 |
50 | STEPHEN H. SCHNEIDER
The greenhouse effect, despite all the controversy that surrounds the term, is actually one of the most well-established theories in atmospheric science. For example, with its dense CO2 atmosphere, Venus has temperatures near 700 K at its surface. Mars, with its very thin CO2 atmosphere, has temperatures of only 220 K. The primary explanation of the current Venus "runaway greenhouse" and the frigid Martian surface has long been quite clear and straightforward: the greenhouse effect (3) . The greenhouse effect works because some gases and particles in an atmosphere preferentially allow sunlight to filter through to the surface of the planet relative to the amount of radiant infrared energy that the atmosphere allows to escape back up to space. The greater the concentration of "greenhouse" material in the atmosphere (Fig. 1) (4), the less infrared energy that can escape. Therefore, increasing the amount of greenhouse gases increases the planet's surface temperature by increasing the amount of heat that is trapped in the lowest part of the atmosphere. What is controversial about the greenhouse effect is exactly how much Earth's surface temperature will rise given a certain increase in a trace greenhouse gas such as CO2.
Two reconstructions of Earth's surface temperature for the past century (Fig. 2) have been made at the Goddard Institute for Space Studies (GISS) and Climatic Research Unit (CRU). Although some identical instrumental records were used in each study, the methods of analysis were different. Moreover, the CRU results include an ocean data set (6). These records have been criticized because a number of the thermometers were in city centers and might have measured a spurious warming from the urban heat island (7). In other cases thermometers were moved from cities to airports or up and down mountains, and some other measurements are also unreliable. A critical evaluation of the urban heat island effect suggests that in the United States the data may account for nearly 0.4°C of warming in the GISS record and about 0.15°C warming in the CRU record (8). Because the U.S. data from where the urban heat island effect might be significant are only a small part of the total, these corrections should not automatically be made to the entire global record. However, even after such corrections for the United States are applied to all of the data, the global data still suggest that 0.5°C warming occurred during the past 100 years. Moreover, the 1980s appear to be the warmest decade on record; 1981, 1987, and 1988 were the warmest years on these records (5,6).
Scientific Issues Surrounding the Greenhouse Effect
It is helpful to break down the set of issues known as the greenhouse effect into a series of stages, each feeding into another, and then to consider how policy questions might be addressed in the context of these more technical stages.
Projecting emissions. Behavioral assumptions must be made in order to project future use of fossil fuels (or deforestation, because this too can impact the amount of CO2 in the atmosphere--it accounts for about 20% of the recent total CO2 injection of about 5.5 x 10^9 metric tons). The essence of this aspect then is social science. Projections must be made of human population, the per capita consumption of fossil fuel, deforestation rates, reforestation activities, and perhaps even countermeasures to deal with the extra CO2 in the air. These projections depend on issues such as the likelihood that alternative energy systems or conservation measures will be available, their price, and their social acceptability. Furthermore, trade in fuel carbon (for example, a large-scale transfer from coal-rich to coal-poor nations) will depend not only on the energy requirements and the available alternatives but also on the economic health of the potential importing nations (9). This trade in turn will depend upon whether those nations have adequate capital resources to spend on energy rather than other precious strategic commodities--such as food or fertilizer as well as some other strategic materials such as weaponry. Total CO2 emissions from energy systems, for example, can be expressed by a formula termed "the population multiplier" by Ehrlich and Holdren (10)
Total CO2 emission = (CO2 emission / technology) x (technology / capita) x (total population size). The first term represents engineering effects, the second standard of living, and the third demography in this version, which is expanded from the original.
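A trivial numerical illustration of the identity (the three factors below are invented for illustration; they are chosen only so that the product echoes the roughly 5.5 x 10^9 metric ton figure quoted above):

co2_per_technology = 0.5        # emission per unit of technology (engineering term)
technology_per_capita = 2.0     # technology per person (standard-of-living term)
population = 5.5e9              # total population size (demographic term)

total_co2 = co2_per_technology * technology_per_capita * population
print(total_co2)                # 5.5e9: total CO2 emission in the same units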
[Fig. 1] [Fig. 2]
In order to quantify future changes we can make scenarios (such as seen on Fig. 3) that show alternative CO2 futures based on assumed rates of growth in the use of fossil fuels (11). Most typical projections are in the 0.5 to 2% annual growth range for fossil fuel use and imply that CO2 concentrations will double (to 600 ppm) in the 21st century (12, 12a). There is virtually no dispute among scientists that the CO2 concentration in the atmosphere has already increased by about 25% since about 1850. The record at Mauna Loa observatory shows that concentrations have increased from about 310 to more than 350 ppm since 1958. Superimposed on this trend is a large annual cycle in which CO2 reaches a maximum in the spring of each year in the Northern Hemisphere and a minimum in the fall. The fall minimum is generally thought to result from growth of the seasonal biosphere in the Northern Hemisphere summer whereby photosynthesis increases faster than respiration and atmospheric CO2 levels are reduced. After September, the reverse occurs and respiration proceeds at a faster rate than photosynthesis and CO2 levels increase (13). Analyses of trapped air in several ice cores (14) suggest that during the past several thousand years of the present interglacial, CO2 levels have been reasonably close to the preindustrial value of 280 ppm. However, since about 1850, CO2 has risen about 25%. At the maximum of the last Ice Age 18,000 years ago, CO2 levels were roughly 25% lower than preindustrial values. The data also reveal a close correspondence between the inferred temperature at Antarctica and the measured CO2 concentration from gas bubbles trapped in ancient ice (15). However, whether the CO2 level was a response to or caused the temperature changes is debated: CO2 may have simply served as an amplifier or positive feedback mechanism for climate change--that is, less CO2, colder temperatures. This uncertainty arises because the specific biogeophysical mechanisms that cause CO2 to change in step with the climate are not yet elucidated (16). Methane concentrations in bubbles in ice cores also show a similar close relation with climate during the past 150,000 years (17).
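The kind of scenario shown in Fig. 3 can be imitated with a few lines of Python. All the parameters below are assumptions for illustration (a constant airborne fraction of 50%, initial fossil emissions of 5.5 GtC per year, and roughly 2.12 GtC per ppm of atmospheric CO2), not values taken from the article; the point is only that growth rates of 0.5 to 2% per year bracket a doubling toward 600 ppm within a century.

PPM_PER_GTC = 1 / 2.12          # assumed: about 2.12 GtC raises atmospheric CO2 by 1 ppm

def project_co2(c0=350.0, emissions0=5.5, growth=0.01,
                airborne_fraction=0.5, years=100):
    conc, emis = c0, emissions0
    for _ in range(years):
        conc += airborne_fraction * emis * PPM_PER_GTC
        emis *= 1 + growth
    return conc

for g in (0.005, 0.01, 0.02):
    print(g, round(project_co2(growth=g)), "ppm after 100 years")
    # roughly 518, 571, and 755 ppm respectively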
Other greenhouse gases like chlorofluorocarbons (CFCs), CH4, nitrogen oxides, tropospheric ozone, and others could, together, be as important as CO2 in augmenting the greenhouse effect, but some of these depend on human behavior and have complicated biogeochemical interactions. These complications account for the large error bars in Fig. 4 (18). Space does not permit a proper treatment of individual aspects of each non-CO2 trace greenhouse gas; therefore I reluctantly will consider all greenhouse gases taken together as "equivalent CO2." However, this assumption implies that projections for "CO2" alone (Fig. 3) will be an underestimate of the total greenhouse gas buildup by roughly a factor of 2. Furthermore, this assumption forces us to ignore possible relations between CH4 and water vapor in the stratosphere, for example, which might affect polar stratospheric clouds, which are believed to enhance photochemical destruction of ozone by chlorine atoms.
[Fig. 3] [Fig. 4]
Projecting greenhouse gas concentrations. Once a plausible set of scenarios for how much CO2 will be injected into the atmosphere is obtained, the interacting biogeochemical processes that control the global distribution and stocks of the carbon need to be determined. Such processes involve the uptake of CO2 by green plants (because CO2 is the basis of photosynthesis, more CO2 in the air means faster rates of photosynthesis), changes in the amount of forested area and vegetation type, and how CO2 fertilization or climate change affects natural ecosystems on land and in the oceans (19). The transition from ice age to interglacial climates provides a concrete example of how large natural climatic change affected natural ecosystems in North America. This transition represented some 5°C global warming, with as much as 10° to 20°C warming locally near ice sheets. The boreal species now in Canada were hugging the rim of the great Laurentide glacier in the U.S. Northeast some 10,000 years ago, while now abundant hardwood species were restricted to small refuges largely in the South. The natural rate of forest movement that can be inferred is, to order of magnitude, some 1 km per year, in response to temperature changes averaging about 1° to 2°C per thousand years (20). If climate were to change much more rapidly than this, then the forests would likely not be in equilibrium with the climate; that is, they could not keep up with the fast change and would go through a period of transient adjustment in which many hard-to-predict changes in species distribution, productivity, and CO2 absorptive capacity would likely occur (21).
Furthermore, because the slow removal of CO2 from the atmosphere is largely accomplished through biological and chemical processes in the oceans and decades to centuries are needed for equilibration after a large perturbation, the rates at which climate change modifies mixing processes in the ocean (and thus the CO2 residence time) also needs to be taken into account. There is considerable uncertainty about how much newly injected CO2 will remain in the air during the next century, but typical estimates put this so-called "airborne fraction" at about 50%. Reducing CO2 emissions could initially provide a bonus by allowing the reduction of the airborne fraction, whereas increasing CO2 emissions could increase the airborne fraction and exacerbate the greenhouse effect (22). However, this bonus might last only a decade or so, which is the time it takes for the upper mixed layer of the oceans to mix with deep ocean water. Biological feedbacks can also influence the amount of CO2 in the air. For example, enhanced photosynthesis could reduce the buildup rate of CO2 relative to that projected with carbon cycle models that do not include such an effect (23). On the other hand, although there is about as much carbon stored in the forests as there is in the atmosphere, there is about twice as much carbon stored in the soils in the form of dead organic matter. This carbon is slowly decomposed by soil microbes back to CO2 and other gases. Because the rate of this decomposition depends on temperature, global warming from increased greenhouse gases could cause enhanced rates of microbial decomposition of necromass (dead organic matter) (24), thereby causing a positive feedback that would enhance CO2 buildup. Furthermore, considerable methane is trapped below frozen sediments as clathrates in tundra and off continental shelves. These clathrates could release vast quantities of methane into the atmosphere if substantial Arctic warming were to take place (17, 25). Already the ice core data have shown that not only has CO2 tracked temperature closely for the past 150,000 years, but so has methane, and methane is a significant trace greenhouse gas which is some 20 to 30 times more effective per molecule at absorbing infrared radiation than CO2. Despite these uncertainties, many workers have projected that CO2 concentrations will reach 600 ppm sometime between 2030 and 2080 and that some of the other trace greenhouse gases will continue to rise at even faster rates.
Estimating global climatic response. Once we have projected how much CO2 (and other trace greenhouse gases) may be in the air during the next century or so, we have to estimate its climatic effect. Complications arise because of interactive processes; that is, feedback mechanisms. For example, if added CO2 were to cause a temperature increase on Earth, the warming would likely decrease the regions of Earth covered by snow and ice and decrease the global albedo. The initial warming would thus create a darker planet that would absorb more energy, thereby creating a larger final warming (26, 27). This scenario is only one of a number of possible feedback mechanisms. Clouds can change in amount, height, or brightness, for example, substantially altering the climatic response to CO2 (28). And because feedback processes interact in the climatic system, estimating global temperature increases accurately is difficult; projections of the global equilibrium temperature response to an increase of CO2 from 300 to 600 ppm have ranged from about 1.5° to 5.5°C. (In the next section the much larger uncertainties surrounding regional responses will be discussed.) Despite these uncertainties, there is virtually no debate that continued increases of CO2 will cause global warming (29-30).
We cannot directly verify our quantitative predictions of greenhouse warming on the basis of purely historical events (31); therefore, we must base our estimates on natural analogs of large climatic changes and numerical climatic models because the complexity of the real world cannot be reproduced in laboratory models. In the mathematical models, the known basic physical laws are applied to the atmosphere, oceans, and ice sheets, and the equations that represent these laws are solved with the best computers available (32). Then, we simply change in the computer program the effective amount of greenhouse gases, repeat our calculation, and compare it to the "control" calculation for the present Earth. Many such global climatic models (GCMs) have been built during the past few decades, and the results are in rough agreement that if CO2 were to double from 300 to 600 ppm, then Earth's surface temperature would eventually warm up somewhere between 1° and 5°C; the most recent GCM estimates are from 3.5° to 5.0°C (27,33). For comparison, the global average surface temperature (land and ocean) during the Ice Age extreme 18,000 years ago was only about 5°C colder than it is today. Thus, a global temperature change of 1° to 2°C can have considerable effects. A sustained global increase of more than 2°C above present would be unprecedented in the era of human civilization.
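The logic of these sensitivity estimates can be sketched numerically. The snippet below assumes, as is commonly done, that the equilibrium response scales with the logarithm of the CO2 concentration; it ignores the ocean-induced lags discussed later and is an illustration, not a result from any of the models cited.

import math

def equilibrium_warming(conc_ratio, sensitivity_per_doubling):
    # warming in degrees C for a given ratio of CO2 concentrations
    return sensitivity_per_doubling * math.log(conc_ratio, 2)

for s in (1.5, 3.0, 5.5):
    print(s, "C per doubling:",
          round(equilibrium_warming(600 / 300, s), 1), "C for a doubling,",
          round(equilibrium_warming(1.25, s), 1), "C for a 25% increase")

On these assumptions, the roughly 25% increase realized so far corresponds to an eventual warming of about 0.5° to 1.8°C, which is the order of magnitude at issue in the validation discussion below.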
The largest uncertainty in estimating the sensitivity of Earth's surface temperature to a given increase in radiative forcing arises from the problem of parameterization. Because the equations that are believed to represent the flows of mass, momentum, and energy in the atmosphere, oceans, ice fields, and biosphere cannot be solved analytically with any known techniques, approximation techniques are used in which the equations are discretized with a finite grid that divides the region of interest into cells that are several hundred kilometers or more on a side. Clearly, critically important variables, such as clouds, which control the radiation budget of Earth, do not occur on scales as large as the grid of a general circulation model. Therefore, we seek to find a parametric representation or parameterization that relates implicitly the effects of important processes that operate at subgrid-scale but still have effects at the resolution of a typical general circulation model. For example, a parameter or proportionality coefficient might be used that describes the average cloudiness in a grid cell in terms of the mean relative humidity in that cell and some other measures of atmospheric stability. Then, the important task becomes validating these semiempirical parameterizations because at some scale, all models, no matter how high resolution, must treat subgrid-scale processes through parameterization.
Projecting regional climatic response. In order to make useful estimates of the effects of climatic changes, we need to determine the regional distribution of climatic change. Will it be drier in Iowa in 2010, too hot in India, wetter in Africa, or more humid in New York; will California be prone to more forest fires or will Venice flood? Unfortunately, reliable prediction of the time sequence of local and regional responses of variables such as temperature and rainfall requires climatic models of greater complexity and expense than are currently available. Even though the models have been used to estimate the responses of these variables, the regional predictions from state-of-the-art models are not yet reliable.
Although there is considerable experience in examining regional changes [for example, Fig. 5 (34)], considerable uncertainty remains over the probability that these predicted regional features will occur. The principal reasons for the uncertainty are twofold: the crude treatment in climatic models of biological and hydrological processes (35) and the usual neglect of the effects of the deep oceans (36). The deep oceans would respond slowly--on time scales of many decades to centuries--to climatic warming at the surface, and also act differentially (that is, non-uniformly in space and through time). Therefore, the oceans, like the forests, would be out of equilibrium with the atmosphere if greenhouse gases increase as rapidly as typically is projected and if climatic warming were to occur as fast as 2° to 6°C during the next century. This typical projection, recall, is 10 to 60 times as fast as the natural average rate of temperature change that occurred from the end of the last Ice Age to the present warm period (that is, 2° to 6°C warming in a century from human activities compared to an average natural warming of 1° to 2°C per millennium from the waning of the Ice Age to the establishment of the present interglacial epoch) (37). If the oceans are out of equilibrium with the atmosphere, then specific regional forecasts like that of Fig. 5 will not have much credibility until fully coupled atmosphere-ocean models are tested and applied (38). The development of such models is a formidable scientific and computational task and is still not very advanced.
Validation of dramatic model forecasts. Of course, it is appropriate to ask how climatic models' predictions of unprecedented climatic change beyond the next several decades might be verified. Can society make trillion dollar decisions about global economic developments based on the projections of these admittedly dirty crystal balls? How can models be verified?
The first verification method is checking the ability of a model to simulate today's climate. Reproduction of the seasonal cycle is one critical test because these natural temperature changes are several times larger, on a hemispheric average, than the change from an ice age to an interglacial period or a projected greenhouse warming. Also, "fast physics" such as cloud parameterizations can be tested by seasonal simulations or weather forecasts. Global climate models generally map the seasonal cycle well (Fig. 6) (39), which suggests that fast physics is not badly simulated on a global basis. However, successful reproduction of these seasonal patterns is not enough for strong validation to be claimed. Precipitation, relative humidity, and the other variables need to be checked. Reproduction of the change in daily variance of these variables with the seasons is another tough test (40). The seasonal tests, however, do not indicate how well a model simulates such medium or slow processes as changes in deep ocean circulation or ice cover, which may have an important effect on the decade to century time scales during which the CO2 concentration is expected to double.
A second verification technique is isolating individual physical components of the model, such as its parameterizations, and testing them against high resolution sub models and actual data at high resolution. For example, one can check whether a parameterized evaporation matches the observed evaporation of a particular cell. But this technique cannot guarantee that the complex interactions of many individual model components are treated properly. The model may predict average cloudiness well but represent cloud feedback poorly. In this case, simulation of overall climatic response to increased CO2 is likely to be inaccurate. A model should reproduce to better than, say, 25% accuracy the flow of thermal energy between the atmosphere, surface, and space (Fig. 1). Together, these energy flows comprise the well-established energy balance of Earth and constitute a formidable and necessary test for all models.
A third method for determining overall simulation skill is the model's ability to reproduce past climates or climates of other planets. Paleoclimatic simulations of the Mesozoic Era, glacial-interglacial cycles, or other extreme past climates help in understanding the coevolution of Earth's climate with living things. They are valuable for the estimation of both the climatic and biological future (41).
Overall validation of climatic models thus depends on constant appraisal and reappraisal of performance in the above categories. Also important are a model's response to such century-long forcings as the 25% increase in CO2 concentration and different increases in other trace greenhouse gases since the Industrial Revolution.
Most recent climatic models predict that a warming of at least 1°C should have occurred during the past century. The precise "forecast" of the past 100 years also depends upon how the model accounts for such factors as changes in the solar constant or volcanic dust as well as trace greenhouse gases in addition to CO2 (42). Indeed, the typical prediction of a 1°C warming is broadly consistent but somewhat larger than that observed (see Fig. 2). Possible explanations for the discrepancy include (43): (i) the state-of-the-art models are too sensitive to increases in trace greenhouse gases by a rough factor of 2; (ii) modelers have not properly accounted for such competitive external forcings as volcanic dust or changes in solar energy output; (iii) modelers have not accounted for other external forcings such as regional tropospheric aerosols from agricultural, biological, and industrial activity; (iv) modelers have not properly accounted for internal processes that could lead to stochastic (44) or chaotic (45) behavior; (v) modelers have not properly accounted for the large heat capacity of the oceans taking up some of the heating of the greenhouse effect and delaying, but not ultimately reducing, warming of the lower atmosphere; (vi) both present models and observed climatic trends could be correct, but models are typically run for equivalent doubling of the CO2 concentration whereas the world has only experienced a quarter of this increase and nonlinear processes have been properly modeled and produced a sensitivity appropriate for doubling but not for a 25% increase; and (vii) the incomplete and inhomogeneous network of thermometers has underestimated actual global warming this century.
Despite this array of excuses why observed global temperature trends in the past century and those anticipated by most GCMs disagree somewhat, the twofold discrepancy between predicted and measured temperature changes is not large, but still of concern. This rough validation is reinforced by the good simulation by most climatic models of the seasonal cycle, diverse ancient paleoclimates, hot conditions on Venus, cold conditions on Mars (both well simulated), and the present distribution of climates on Earth. When taken together, these verifications provide strong circumstantial evidence that the current modeling of the sensitivity of global surface temperature to given increases in greenhouse gases over the next 50 years or so is probably valid within a rough factor of 2. Most climatologists do not yet proclaim that the observed temperature changes this century were caused beyond doubt by the greenhouse effect. The relation between the observed century-long trend and the predicted warming could still be chance occurrences, or other factors, such as solar constant variations or volcanic dust, may not have been accounted for correctly during the past century--except during the past decade when accurate measurements began to be made.
Another decade or two of observations of trends in Earth's climate, of course, should produce signal-to-noise ratios sufficiently high that we will be able to determine conclusively the validity of present estimates of climatic sensitivity to increasing trace greenhouse gases. That, however, is not a cost-free enterprise because a greater amount of change could occur then than if actions were undertaken now to slow down the buildup rate of greenhouse gases.
Scenarios of the environmental impact of CO2. Given a set of scenarios for regional climatic change we must next estimate the impacts on the environment and society (46, 47). Most researchers have focused on the direct effects of CO2 increases or used model-predicted maps of temperature and rainfall patterns to estimate impacts on crop yields or water supplies (29a, 30, 48, 49). Also of concern is the potential that temperature increases will alter the range or numbers of pests that affect plants, or diseases that threaten animals or human health (50, 50a). Also of interest are the effects on unmanaged ecosystems, principally forests. For example, ecologists are concerned that the destruction rate of tropical forests attributed to human expansion is eroding the genetic diversity of the planet (51). That is, because the tropical forests are in a sense major banks for the bulk of living genetic materials on Earth, the world is losing some of its irreplaceable biological resources through rapid development. Substantial changes in tropical rainfall have been predicted on the basis of climatic models; reserves (or refugia) that are currently set aside as minimal solutions for the preservation of some genetic resources into the future may not even be as effective as currently planned (52).
Climate changes resulting from greenhouse gas increases could also significantly affect water supply and demand. For example, a local increase in temperature of several degrees Celsius could decrease runoff in the Colorado River Basin by tens of percent (25, 48). A study (53) of the vulnerability to climate change of various water resource regions in the United States showed that some regions are quite vulnerable to climatic changes (Table 1).
Water quality will be diminished if the same volume of wastes is discharged through decreased stream flow. In addition, irrigation demand (and thus pressure on ground-water supplies) may increase substantially if temperatures increase without concomitant offsetting increases in precipitation. A number of climate models suggest that temperatures could increase and precipitation decrease simultaneously in several areas, including the central plains of the United States. Peterson and Keller (54) estimated the effects of a 3°C warming and a 10% precipitation change on U.S. crop production based on crop water needs. The greatest impact would be in the western states and the Great Plains, less in the Northwest. The warm, dry combination would increase depletion of streams and reduce viable acreage by nearly a third in the arid regions. New supplies of water would be needed, threatening ground water and the viability of agriculture in these regions. On the other hand, farmers in the East, and particularly in the Southeast, might profit if the depletion of eastern rivers were relatively less severe than that in the West or the Plains. However, increases in the efficiency of irrigation management and technological improvements remain achievable, and would help substantially to mitigate potential negative effects. Drying in the West could also markedly increase the incidence of wildfires, which in turn could act as agents of ecological change as climate changes.
Most workers project that an increase in global temperature of several degrees Celsius will cause sea level to rise by 0.5 to 1.5 m generally in the next 50 to 100 years (55); such a rise would endanger coastal settlements, estuarine ecosystems, and the quality of coastal fresh water resources (56, 57).
Economic, social, and political impacts. The estimation of the distribution of economic "winners and losers," given a scenario of climatic change, involves more than simply looking at the total dollars lost and gained--were it possible somehow to make such a calculation credibly! It also requires looking at these important equity questions: "who wins and who loses?" and "how might the losers be compensated and the winners charged?" For example, if the Cornbelt in the United States were to "move" north and east by several hundred kilometers from a warming, then a billion dollars a year lost in Iowa farms could well eventually become Minnesota's billion dollar gain. Although some macro-economists viewing this hypothetical problem from the perspective of the United States as a whole might see no net losses here, considerable social consternation could be generated by such a shift in climatic resources, particularly since the cause was economic activities (that is, CO2 production) that directed differential costs and benefits to various groups. Moreover, even the perception that the economic activities of one nation could create climatic changes that would be detrimental to another has the potential for disrupting international relations--as is already occurring in the case of acid rain. In essence, what greenhouse gas induced environmental changes create is an issue of "redistributive justice."
If a soil moisture decrease such as that projected for the United States in Fig. 5 were to occur, it would have disturbing implications for agriculture in the U.S. and Canadian plains. Clearly, present farming practices and cropping patterns would have to change. The more rapidly the climate changed and the less accurately the changes were predicted (which go together), the more likely that the net changes would be detrimental. It has been suggested that a future with soil moisture change like that shown in Fig. 5 could translate to a loss of comparative advantage of U.S. agricultural products on the world market (58). Such a scenario could have substantial economic and security implications. Taken together, projected climate changes into the next century could have major impacts on water resources, sea level, agriculture, forests, biological diversity, air quality, human health, urban infrastructure, and electricity demand (29a, 30, 47, 50, 57, 59).
Policy responses. The last stage in diagnosing the greenhouse effect concerns the question of appropriate policy responses. Three classes of actions could be considered. First, engineering countermeasures: purposeful interventions in the environment to minimize the potential effects [for example, deliberately spreading dust in the stratosphere to reflect some extra sunlight to cool the climate as a countermeasure to the inadvertent CO2 warming (60)]. These countermeasures suffer from the immediate and obvious flaw that if there is admitted uncertainty associated with predicting the unintentional consequences of human activities, then likewise substantial uncertainty surrounds any deliberate climatic modification. Thus, it is quite possible that the unintentional change might be overestimated by computer models and the intentional change underestimated, in which case human intervention would be a "cure worse than the disease" (61). Furthermore, the prospect for international tensions resulting from any deliberate environmental modifications is staggering, and our legal instruments to deal with these tensions are immature (62). Thus, acceptance of any substantial climate countermeasure strategies for the foreseeable future is hard to imagine, particularly because there are other more viable alternatives.
The second class of policy action, one that tends to be favored by many economists, is adaptation (63). Adaptive strategists propose to let society adjust to environmental changes. In extreme form, some believe in adaptation without attempting to mitigate or to prevent the changes in advance. Such a strategy is based partly on the argument that society will be able to replace much of its infrastructure before major climatic changes materialize, and that because of the large uncertainties, we are better off waiting to see what will happen before making potentially unnecessary investments. However, it appears quite likely that we are already committed to some climatic change based on emissions to date, and therefore some anticipatory steps to make adaptation easier certainly seem prudent (64). We could adapt to climate change, for example, by planting alternative crop strains that would be more widely adapted to a whole range of plausible climatic futures. Of course, if we do not know what is coming or we have not developed or tested the seeds yet, we may well suffer substantial losses during the transition to the new climate. But such adaptations are often recommended because of the uncertain nature of the specific redistributive character of future climatic change and because of high discount rates (65).
In the case of water supply management, the American Association for the Advancement of Science panel on Climate Change made a strong, potentially controversial, but, I believe, rather obvious adaptive suggestion: governments at all levels should reevaluate the legal, technical, and economic components of water supply management to account for the likelihood of climate change, stressing efficient techniques for water use, and new management practices to increase the flexibility of water systems and recognizing the need to reconsider existing compacts, ownership, and other legal baggage associated with the present water system. In light of rapid climate change, we need to reexamine the balance between private rights and the public good, because water is intimately connected with both. Regional transfers from water-abundant regions to water-deficient regions are often prohibited by legal or economic impediments that need to be examined as part of a hedging strategy for adapting more effectively to the prospect of climatic change even though regional details cannot now be reliably forecast (66).
Finally, the most active policy category is prevention, which could take the form of sulfur scrubbers in the case of acid rain, abandonment of the use of chlorofluorocarbons and other potential ozone-reducing gases (particularly those that also enhance global warming), reduction in the amount of fossil fuel used around the world or fossil fuel switching from more CO2- and SO2-producing coal to cleaner, less polluting methane fuels. Prevention policies, often advocated by environmentalists, are controversial because they involve, in some cases, substantial immediate investments as insurance against the possibility of large future environmental change, change whose details cannot be predicted precisely. The sorts of preventive policies that could be considered are increasing the efficiency of energy production and end use, the development of alternative energy systems that are not fossil fuel-based, or, in a far-reaching proposal: a "law of the air" proposed by Kellogg and Mead (67). They suggest that various nations would be assigned polluting rights to keep CO2 emissions below some agreed global standard. A "Law of the Atmosphere" was recently endorsed in the report of a major international meeting (68).
A Scientific Consensus?
In summary, a substantial warming of the climate through the augmentation to the greenhouse effect is very likely if current technological, economic, and demographic trends continue. Rapid climatic changes will cause both ecological and physical systems to go out of equilibrium--a transient condition that makes detailed predictions tenuous. The faster the changes take place, the less societies or natural ecosystems will be able to adapt to them without potentially serious disruptions. Both the rate and magnitude of typical projections up to 2050 suggest that climatic changes beyond that experienced by civilization could occur. The faster the climate is forced to change, the more likely there will be unexpected surprises lurking (69). The consensus about the likelihood of future global change weakens over detailed assessments of the precise timing and geographic distribution of potential effects and crumbles over the value question of whether present information is sufficient to generate a societal response stronger than more scientific research on the problems--appropriate (but self-serving) advice which we scientists, myself included, somehow always manage to recommend (70).
High Leverage Actions to Cope with Global Warming
Clearly, society does not have the resources to hedge against all possible negative future outcomes. Is there, then, some simple principle that can help us choose which actions to spend our resources on? One guideline is called the "tie-in strategy" (71, 72). Quite simply, society should pursue those actions that provide widely agreed societal benefits even if the predicted change does not materialize. For instance, one of the principal ways to slow down the rate at which the greenhouse effect will be enhanced is to invest in more efficient use and production of energy. More efficiency, therefore, would reduce the growing disequilibrium among physical, biological, and social systems and could buy time both to study the detailed implications of the greenhouse effect further and ensure an easier adaptation. However, if the greenhouse effects now projected prove to be substantial overestimates, what would be wasted by an energy efficiency strategy? Efficiency usually makes good economic sense (although the rate of investment in efficiency does depend, of course, on other competing uses of those financial resources and on the discount rate used). However, reductions in emissions of fossil fuels, especially coal, will certainly reduce acid rain, limit negative health effects in crowded areas from air pollution, and lower dependence on foreign sources of fuel, especially oil. In addition, more energy efficient factories mean reduced energy costs for manufacturing and thus greater long-term product competitiveness against foreign producers (11, 12a).
Development of alternative, environmentally safer energy technologies is another example of a tie-in strategy, as is the development and testing of alternative crop strains, trading agreements with nations for food or other climatically dependent strategic commodities, and so forth. However, there would be in some circles ideological opposition to such strategies on the grounds that these activities should be pursued by individual investment decisions through a market economy, not by collective action using tax revenues or other incentives. In rebuttal, a market which does not include the costs of environmental disruptions can hardly be considered a truly free market. Furthermore, strategic investments are made routinely on non economic (that is, cost-benefit analyses are secondary) criteria even by the most politically conservative people: to purchase military security. A strategic consciousness, not an economic calculus, dictates investments in defense. Similarly, people purchase insurance as a hedge against plausible, but uncertain, future problems. The judgment here is whether strategic consciousness, widely accepted across the political spectrum, needs to be extended to other potential threats to security, including a substantially altered environment occurring on a global scale at unprecedented rates. Then, the next problem is to determine how many resources to allocate.
If we choose to wait for more scientific certainty over details before preventive actions are initiated, then this is done at the risk of our having to adapt to a larger, faster occurring dose of greenhouse gases than if actions were initiated today. In my value system, high leverage, tie-in actions are long overdue. Of course, whether to act is not a scientific judgment, but a value-laden political choice that cannot be resolved by scientific methods.
Incentives for investments to improve energy efficiency, to develop less polluting alternatives, control methane emissions, or phase out CFCs may require policies that charge user fees on activities in proportion to the amount of pollution each generates. This strategy might differentially impact less developed nations, or segments of the population such as coal miners or the poor. Indeed, an equity problem is raised through such strategies. However, is it more appropriate to subsidize poverty, for example, through artificially lower prices of energy which distort the market and discourage efficient energy end use or alternative production, or is it better to fight poverty by direct economic aid? Perhaps targeting some fraction of an energy tax to help those immediately disadvantaged would improve the political tractability of any attempt to internalize the external costs of pollution not currently charged to energy production or end use. In any case, consideration of these political issues will be essential if global scale agreements are to be negotiated, and without global scale agreements, no nation acting alone can reduce global warming by more than 10% or so (73).
The bottom line of the implications of atmospheric change is that we are perturbing the environment at a faster rate than we can understand or predict the consequences. In 1957, Revelle and Suess (74) pointed out that we were undergoing a great "geophysical experiment." In the 30 years since that prophetic remark, CO2 levels have risen more than 10% in the atmosphere, and there have been even larger increases in the concentrations of methane and CFCs. The 1980s appear to have seen the warmest temperatures in the instrumental record, and 1988 saw a combination of dramatic circumstances that gained much media attention: extended heat waves across most of the United States, intense drought, forest fires in the West, an extremely intense hurricane, and flooding in Bangladesh. Indeed, many people interpreted (prematurely, I believe) these events in 1988 as proof that human augmentation to the greenhouse effect had finally arrived (75). Should the rapid warming in the instrumental record of the past 10 years continue into the 1990s, then a vast majority of atmospheric scientists will undoubtedly agree that the greenhouse signal has been felt. Unfortunately, if society chooses to wait another decade or more for certain proof, then this behavior raises the risk that we will have to adapt to a larger amount of climate change than if actions to slow down the buildup of greenhouse gases were pursued more vigorously today. At a minimum, we can enhance our interdisciplinary research efforts to reduce uncertainties in physical, biological, and social scientific areas (76). But I believe enough is known already to go beyond research and begin to implement policies to enhance adaptation and to slow down the rapid buildup of greenhouse gases, a buildup that poses a considerable probability of unprecedented global-scale climatic change within our lifetimes.
REFERENCES AND NOTES
1. Department of Energy Undersecretary D. Fitzpatrick noted the many scientific issues still unresolved and said, "These scientific uncertainties must be reduced before we commit the nation's economic future to drastic and potentially misplaced policy responses." As a witness at that hearing, I disagreed sharply, arguing that we should not "use platitudes about scientific uncertainty to evade the need to act now." See Congressional Record for 11 August 1988 (in press) for the full transcript; see also (1a).
1a. J. Hansen, testimony, 23 June 1988, to the Senate Energy Committee. Hansen remarked that the greenhouse effect was "99%" likely to be associated with the recent temperature trends of the instrumental record.
2. Toward an Understanding of Global Change: Initial Priorities for U.S. Contributions to the International Geosphere-Biosphere Program (National Academy Press, Washington, DC, 1988).
3. J. F. Kasting, O. B. Toon, J. B. Pollack, Sci. Am. 257, 90 (February 1988).
4. S. H. Schneider, ibid. 256, 72 (May 1987).
5. J. Hansen and S. Lebedeff, Geophys. Res. Lett. 15, 323 (1988).
6. P. D. Jones and T. M. L. Wigley, personal communication (1988).
7. F. B. Wood, Climatic Change 12, 297 (1988); T. M. L. Wigley and P. D. Jones, ibid., p. 313; T. R. Karl, ibid., p. 179.
8. In September 1988, a "Climate Trends Workshop" was held at the U.S. National Academy of Sciences. One problem pointed out was at St. Helena, an island station in the Atlantic, which in the 1970s had a thermometer moved about 150 m down a mountain. T. Karl also pointed out that changes in the times of observations as well as urbanization effects have contaminated a number of U.S. records. He carried out a detailed analysis comparing rural and urban U.S. stations. Karl's comparison of his detailed U.S. record with that of the Climatic Research Unit showed that a spurious upward trend of 0.15°C had occurred and that J. Hansen and S. Lebedeff (5) had overestimated warming in the United States by 0.38°C; K. E. Trenberth, unpublished manuscript.
9. J. H. Ausubel, Climatic Change and the Carbon Wealth of Nations (International Institute for Applied Systems Analysis, Working Paper WP-80-75, Laxenburg, 1980); _____, A. Grubler, N. Nakicenovic, Climatic Change 12, 245 (1988); Ausubel et al. argued that there may be long period variations in global economic behavior that could influence fossil fuel usage during the next several decades.
10. P. R. Ehrlich and J. P. Holdren, Science 171, 1212 (1971).
11. A. B. Lovins, L. H. Lovins, F. Krause, W. Bach, Least-Cost Energy: Solving the CO2 Problem (Brick House, Andover, 1981).
12. W. Nordhaus and G. Yohe, in Changing Climate, Report of the Carbon Dioxide Assessment Committee (National Academy Press, Washington, DC, 1983), pp. 87-153; J. A. Edmonds and J. Reilly, Energy J. 4, 21 (1984). These authors have suggested a wide range of plausible CO2 buildups into the 21st century. However, other authors have argued that societies could cost effectively limit CO2 emissions as part of a conscious strategy to stabilize climate by major policy initiatives to increase energy end use and production efficiency; for example, application of the Edmonds and Reilly economic model to the energy future of China was attempted by W. U. Chandler [Climatic Change 13, 241 (1988)], which led to a debate over the appropriateness and applicability of that model to both supply and demand projections; B. Keepin, ibid., p. 233; J. A. Edmonds, ibid., p. 237.
12a.J. Goldemberg et al., Energy for Development (World Resources Institute, Washington, DC, 1987); I. N. Mintzer, A Matter of Degrees: The Potential for Controlling the Greenhouse Effect (World Resources Institute, Washington, DC, 1987); W. U. Chandler, H. S. Geller, N. R. Ledbetter, Energy Efficiency: A New Agenda (The American Council for an Energy-Efficient Economy, Washington, DC, 1988).
13. B. Bolin in The Greenhouse Effect, Climatic Change and Ecosystems, B. Bolin, B. R Doos, J Jaeger, R. A. Warrick, Eds. (Wiley, New York, 1986), pp. 93-155.
14. Analyses of CO2, CH4, and other atmospheric constituents have been made in Greenland [A. Neftel, H. Oeschger, J. Schwander, B. Stauffer, R. Zumbrunn, Nature 295, 220 (1982); J. Beer et al., Ann. Glaciol. 5, 16 (1984)] and in Antarctica [J. M. Barnola et al. (15); J. Jouzel, C. Lorius, J. Petit, C. Genthon, N. Barkov, V. Kotlyakov, V. Petrov, Nature 329, 403 (1987)].
15. J. M. Barnola, D. Raynaud, Y. S. Korotkevich, C. Lorius, Nature 329,408 (1987).
16. A number of authors addressed the problem of the cause of the increase in CO2 at the end of the last glacial period some 10,000 to 15,000 years ago; E. T. Sundquist and W. S. Broecker, Eds., The Carbon Cycle and Atmospheric CO2: Natural Variations Archean to Present (American Geophysical Union, Washington, DC, 1985); F. Knox Ennever and M. B. McElroy, in ibid., p. 154; T. Wenk and U. Siegenthaler, in ibid., p. 185. D. Erickson (personal communication) has suggested that CO2 may have been preferentially taken up by the oceans during the glacial age because of altered wind patterns [which he inferred from J. E. Kutzbach and P. J. Guetter, J. Atmos. Sci. 43, 1726 (1986)], which would have encouraged the uptake of CO2 in regions of undersaturation in the oceans. Other suggestions for the cause of the correlation between CO2 and temperature on geologic time scales involve alterations to terrestrial biota; L. Klinger [thesis, University of Colorado, Boulder (1988)] suggests that bogs with vast deposits of dead organic matter expanded during glacial times, storing much carbon as dead organic matter on land; later, during climatic warming and the retreat of ice, this dead organic matter was then able to reoxidize and cause CO2 buildup.
17. G. J. MacDonald, in Preparing for Climate Change, Proceedings of the First North American Conference on Preparing for Climatic Change: A Cooperative Approach, Washington, DC, 27 to 29 October (Government Institutes, Rockville, MD, 1988), pp. 108-117.
18. V. Ramanathan et al., J. Geophys. Res. 90, 5547 (1985).
19. P. Martin, N. J. Rosenberg, M. S. McKenney, Climatic Change, in press; F. I. Woodward, Nature 327, 617 (1987).
20. J. C. Bernabo and T. Webb III, Quat. Res. 8, 64 (1977); COHMAP Members, Science 241, 1043 (1988).
21. J. Pastor and W. M. Post, Nature 334, 55 (1988); W. R. Emanuel, H. H. Shugart, M. P. Stevenson, Climatic Change 7, 30 (1985); D. B. Botkin, R. A. Nisbet, T. E. Reynales, in preparation.
22. J. Firor, Climatic Change 12, 103 (1988); see also, L. D. D. Harvey, ibid., in press.
23. J. Goudriaan and P. Ketner, ibid. 6, 167 (1984). See also G. H. Kohlmaier, G. Kratz, H. Brohl, E. O. Sire, in Energy and Ecological Modelling, W. J. Mitsch, R. W. Bosserman, J. M. Klopatek, Eds. (Elsevier, Amsterdam, 1981), pp. 57-68.
24. G. Woodwell, congressional testimony before the Senate Committee on Energy and Natural Resources, 23 June 1988 (Congr. Rec., in press). A number of other biological factors could affect the CO2 concentration through feedback processes. Some of these are suggested to be a substantial positive feedback, perhaps doubling the sensitivity of the climate to initial greenhouse injections according to D. A. Lashof (Climatic Change, in press).
25. R. Revelle in Changing Climate, Report of the Carbon Dioxide Assessment Committee (National Academy Press, Washington, DC, 1983), pp. 252-261.
26. Ice albedo temperature feedback was first introduced by M. I. Budyko [Tellus 21, 611 (1969)] and W. D. Sellers [J. Appl. Meteorol. 8, 392 (1969)]. See also S. H. Schneider and R. E. Dickinson [Rev. Geophys. Space Phys. 12, 447 (1974)] and G. R. North [J. Atmos. Sci. 32, 2033 (1975)], who treat the feedbacks in the context of simple energy balance climate models. Modern general circulation models also obtain ice albedo temperature feedback.
27. M. Schlesinger and J. F. B. Mitchell [Rev. Geophys. 25, 760 (1987)] review the responses of different climate models to CO2 increases.
28. S. Manabe and R. T. Wetherald, J. Atmos. Sci. 24,241 (1967); ibid. 32,3 (1975); S. H. Schneider, J. Atmos. Sci. 29, 1413 (1972); _____W. M. Washington, R M. Chervin, J. Atmos. Sci. 35, 2207 (1978), J. E. Hansen and T. Takahashi, Eds., Climate Processes and Climate Sensitivity, Geophysical Monograph 29 (American Geophysical Union, Washington, DC, 1984). See also R. D. Cess, D. Hartman, V. Ramanathan, A. Berroir, G. E. Hunt, Rev. Geophys. 24, 439 (1986); V. Ramanathan et al., Science 243, 57 (1989).
29. A number of assessments in this decade have all reached the conclusion that increases in the CO2 concentration will almost certainly cause global warming. These include National Academy of Sciences, Changing Climate, Report of the Carbon Dioxide Assessment Committee (National Academy Press, Washington, DC, 1983); (29a).
29a. W. C. Clark, Ed., Carbon Dioxide Review 1982 (Oxford Univ. Press, New York, 1982); G. I. Pearman, Ed., Greenhouse: Planning for Climate Change (Brill, Leiden, The Netherlands, 1987); National Research Council, Current Issues in Atmospheric Change (National Academy Press, Washington, DC, 1987).
30. B. Bolin, B. R. Doos, J. Jaeger, R A. Warrick, Eds., The Greenhouse Effect, Climatic Change and Ecosystems (Wiley, New York, 1986).
31. However, M. I. Budyko, A. B. Ronov, and A. L. Yanshin [History of the Earth's Atmosphere (Springer-Verlag, Berlin, 1987), p. 92] suggest that there is a direct association between past atmospheric temperature and CO2 content. They suggest that previous CO2 concentrations of 600 ppm had warmed the globe by 3°C relative to today. However, the uncertainties in these values are at least a few degrees Celsius in Earth's temperature or a factor of 2 in CO2 content; see S. H. Schneider and R. Londer, The Coevolution of Climate and Life (Sierra Club, San Francisco, 1984), pp. 240-246; R. A. Berner, A. C. Lasaga, R. M. Garrels, Am. J. Sci. 283, 641 (1983); see E. Barron and W. M. Washington [in E. T. Sundquist and W. S. Broecker, Eds., The Carbon Cycle and Atmospheric CO2: Natural Variations Archean to Present (American Geophysical Union, Washington, DC, 1985), pp. 546-553] for discussions of paleo-CO2 concentrations and climate change.
32. W. M. Washington and C. L. Parkinson, An Introduction to Three-Dimensional Climate Modeling (University Science, Mill Valley, CA, 1986).
33. R. E. Dickinson, in (30), pp. 207-270; J. Jaeger, Developing Policies for Responding to Climatic Change, A Summary of the Discussions and Recommendations of the Workshops Held in Villach, 28 September to 2 October 1987 (WCIP-I, WMO/TD-No. 225, April 1988). Although equilibrium warmings much greater than 5°C or less than 1.5°C (or perhaps even less than 0°C) cannot be ruled out entirely were CO2 to double from human activities, these possibilities are very unlikely (especially CO2-induced global cooling during the next century). See S. H. Schneider [Global Warming (Sierra Club Books, San Francisco, in press)] for a discussion of why the cooling scenario is improbable.
34. S. Manabe and R Wetherald, Science 232, 626 (1986).
35. R. E. Dickinson, Ed., The Geophysiology of Amazonia: Vegetation and Climate Interactions (Wiley, New York, 1987).
36. S. H. Schneider and S. L. Thompson,J. Geophys. Res. 86, 3135 (1981).
37. On century time scales, changes of a few degrees Celsius per century appear to have occurred. One such example, the so-called Younger Dryas glacial readvance, had a major ecological impact in Europe (4). Changes of up to 1°C per century may also have occurred this millennium, but the rate of change did not approach the several degree Celsius change estimated for the 21st century.
38. K. Bryan et al., Science 215, 56 (1982); S. L. Thompson and S. H. Schneider, ibid. 217, 1031 (1982), K. Bryan, S. Manabe, M. J. Spelman,J. Phys. Oceanog. 18, 851 (1988); W. M. Washington and G. A. Meehl, Climate Dynam., in press.
39. S. Manabe and R J. Stouffer,J. Geophys. Res. 85, 5529 (1980).
40. C. A. Wilson and J. F. B. Mitchell, Climatic Change 10, 11 (1987); L. O. Mearns et al., in preparation; Environmental Protection Agency, The Potential Effects of Global Climate Change on the United States: Report to Congress (National Studies, Washington, DC, 1988), vol. 2, chap. 17; D. Rind et al., Climatic Change, in press.
41. J. E. Kutzbach and F. A. Street-Perrott, Nature 317, 130 (1985); E. Barron and W. Washington, J. Geophys. Res. 89, 1267 (1984); D. Rind and D. Peteet, Quat. Res. 24, 1 (1985); see (4) for a review.
42. S. H. Schneider and C. Mass, Science 190, 741 (1975); R. A. Bryson and G. J. Dittberner, J. Atmos. Sci. 33, 2094 (1976); J. Hansen et al., Science 213, 957 (1981); R. L. Gilliland, Climatic Change 4, 111 (1982).
43. This list is expanded from that given in R L. Gilliland and S. H. Schneider, Nature 310, 38 (1984).
44. K Hasselmann, Tellus 28, 473 (1976); H. Dalfes, S. H. Schneider, S. L. Thompson, J. Atmos. Sci. 40, 1648 (1983).
45. E. N. Lorenz, Meteorol. Monogr. 8, 1 (1968).
46. Studies of the adaptation of various sectors of society to past climatic variability can serve as a guide that helps to calibrate how societies might be impacted by specific greenhouse gas-induced climatic changes in the future, such studies include R. Kates, J. Ausubel, M. Berberian, Eds., Climate Impact Assessment, SCOPE 27 (Wiley, New York, 1985); T. K. Rabb, in R S. Chen, E. M. Boulding, S. H. Schneider, Eds., Social Science Research and Climate Change: An Interdisciplinary Appraisal (Reidel, Dordrecht, Netherlands, 1983), pp. 61--70.
47. M. H. Glantz, Ed., Societal Responses to Regional Climatic Change (Westview, Boulder, 1988).
48. H. E. Schwarz and L. A. Dillard, in (48a).
48a. P. E. Waggoner, Ed., Climate Change and U.S. Water Resources (Wiley, New York, in press).
49. Climate, Climatic Change, and Water Supply, Studies in Geophysics (National Academy of Sciences, Washington, DC, 1977); M. P. Farrell, Ed., Master Index for the Carbon Dioxide Research State-of-the-Art Report Series (U.S. Department of Energy, Washington, DC, 1987); J. I. Hanchey, K. E. Schilling, E. Z. Stakhiv, in Preparing for Climate Change, Proceedings of the First North American Conference on Preparing for Climate Change: A Cooperative Approach, Washington, DC, 27 to 29 October 1988 (Government Institutes, Rockville, MD, 1988), pp. 394-405; M. L. Parry, Ed., Climatic Change 7, 1 (1985).
50. W. H. Weihe, in Proceedings of the World Climate Conference (World Meteorological Organization, Geneva, 1979).
50a. A. Dobson, in Proceedings of the Conference on the Consequences of the Greenhouse Effect for Biological Diversity, R. Peters, Ed. (Yale Univ. Press, New Haven, in press).
51. N. Myers, The Sinking Ark (Pergamon, New York, 1979); see also J. Gradwohl and R Greenberg, Saving the Tropical Forests (Island Press, Washington, DC, 1988).
52. R. H. MacArthur and E. O. Wilson, The Theory of Island Biogeography (Princeton Univ. Press, Princeton, NJ, 1967); R. L. Peters and J. D. Darling, Bioscience 35, 707 (1985); T. E. Lovejoy, in The Global 2000 Report to the President: Entering the 21st Century, Council on Environmental Quality and the Department of State (U.S. Government Printing Office, Washington, DC, 1980), pp. 328-331.
53. P. H. Gleick, in (48a).
54. D. F. Peterson and A. A. Keller, in (48a).
55. G. de Q. Robin, in (30); M. F. Meier et al., Glaciers, Ice Sheets, and Sea Level (National Academy of Sciences, Washington, DC, 1985). Sea level rises greater than 1.5 m and less than 0.5 m, perhaps even sea level falls, could also occur in the next 50 to 100 years, although most analysts give these extremes low probabilities. Should a much more rapid disintegration of the West Antarctic Ice Sheet than now envisioned occur, sea levels would rise substantially because this glacier has aboveground ice sufficient to raise sea level by about 5 m. On the other hand, because a warming of Antarctica would almost certainly increase snowfall without raising temperatures sufficiently to create summer melt, a doubling of the snowfall over Antarctica could lower sea level by perhaps as much as 1 mm per year. Of course, such a change would require that the calving rate in Antarctica does not increase. The same would have to apply for the melting and calving in Greenland, an ice sheet which, unlike Antarctica, has substantial melting at lower altitudes and low-latitude flanks. However, the principal factor responsible for the "most probable" estimate of 0.5- to 1.5-m sea level rise is the assumption that some mountain glaciers will disappear while, at the same time, several degrees Celsius warming of the oceans will, through the direct process of thermal expansion (which is several times greater in warm water than cold water), lead to an inexorable increase in ocean volume and rise of the sea level. A rise of sea level seems highly probable, whereas disintegration of the West Antarctic Ice Sheet or snow accumulation in East Antarctica are much more speculative, and such changes will, in any case, occur more slowly in response to climate change. Sea level rise only exacerbates the likelihood of catastrophic storm surges, especially if warming increases hurricane intensity; K. A. Emanuel, Nature 326, 483 (1987).
56. G. P. Hekstra, in Proceedings of Controlling and Adapting to Greenhouse Warming (Resources for the Future, Washington, DC, in press); M. C. Barth and J. G. Titus, Eds., Greenhouse Effects and Sea Level Rise: A Challenge for this Generation (Van Nostrand Reinhold, New York, 1984). The Environmental Protection Agency (EPA), in a comprehensive study of potential scenarios of climate change for the United States (57), concluded that building bulkheads and levees, pumping sand, and raising barrier islands to protect areas against a 1-m rise in sea level by 2100 would cost $73 billion to $100 billion (cumulative capital costs in 1985 dollars). In contrast, elevating beaches, houses, land, and roadways by the year 2100 would cost $50 billion to $75 billion (cumulative capital costs in 1985 dollars) [(57), chap. 9].
57. J. B. Smith and D. Tirpak, Eds., The Potential Effects of a Global Climate Change on the United States Draft Report to Congress (Environmental Protection Agency, Washington, DC, October 1988) vol. 2, chap. 9.
58. M. Parry, W. Easterling, P. Crosson, N. Rosenberg, Resources for the Future Conference Proceedings, in press. The scenario with the Geophysical Fluid Dynamics Laboratory computer model for soil drying is more severe in central North America than that for other models, such as the Goddard Institute for Space Studies. The agricultural consequences of a number of model scenarios, including hypothetical increases in yield resulting from direct CO2 fertilization and decreases due to heat stress or drought stress, are assessed in (57). Although no general rules for any crop or region could be determined from the EPA analysis (chap. 10), crop yield changes ranging from advantages of a few tens of percent to 50% reductions were obtained. At a minimum, one robust conclusion could be drawn: climate changes of the magnitudes projected in most GCM results for the middle to the late part of the next century certainly will cause major redistribution of cropping zones and farming practices.
59. R. S. Chen, E. M. Boulding, S. H. Schneider, Eds., Social Science Research and Climate Change: An Interdisciplinary Appraisal (Reidel, Dordrecht, Netherlands, 1983). The Environmental Protection Agency report (57) is the most comprehensive analysis of potential impacts and adaptive strategies and costs, although it was restricted to the United States, and at that, only half a dozen or so regions.
60. M. I. Budyko, Climatic Changes (Hydrometeorological Publishers, Leningrad, 1974) (in Russian); C. Marchetti, Climatic Change 1, 59 (1977).
61. S. H. Schneider and L. E. Mesirow, The Genesis Strategy: Climate and Global Survival (Plenum, New York, 1976), chap. 7, p. 215.
62. W. W. Kellogg and S. H. Schneider, Science 186, 1163 (1974).
63. K. Meyer-Abich, Climatic Change 2,373 (1980); L. B. Lave, International Institute for Applied Systems Analysis IIASA Rep. CP-81-14 (1981), p. Vi; T. Schelling, in Changing Climate Report of the Carbon Dioxide Assessment Committee (National Academy Press, Washington, DC 1983), p. 449.
64. S. H. Schneider and S. L. Thompson [in The Global Possible: Resources Development and the New Century, R Repetto, Ed. (Yale Univ. Press, New Haven, 1985), p. 397] call this "anticipatory adaptation".
65. Much of the decision as to whether it is cost effective to wait or act against potential threats depends upon the discount rate used to value potential future losses. For example, S. H. Schneider and R. S. Chen [Annu. Rev. Energy 5, 107 (1980)] described how damage from a sea level rise of 8 m could cost about 1 trillion 1980 dollars some 150 years in the future. At a discount rate of 7% per year, which implies a doubling of an economic investment every 10 years, this hypothesized trillion dollar loss 150 years hence would only be "worth" some $33 million today, less than the value of a single power plant. Although an 8-m rise now seems a low probability, the discounting example remains instructive.
66. In the example of water supplies and climatic change, most of the local policy decisions facing urban water engineers or rural irrigation planners would be easiest to face if we had more credible specific regional forecasts of temperature and precipitation changes (48). However, regional details are the most difficult variables to predict credibly. Thus, strategies to build flexibility in adapting to changing climate statistics seem appropriate for local or regional water supply planning. Because of uncertainty over details, an individual planner in a region may face difficulty in choosing exactly how to respond to the advent or prospect of rapid climate change. But this should not necessarily deter strategic hedging at national or international levels. In other words, most local or regional water supply planners would not welcome the prospect of rapidly changing climate. Therefore, most planners would hold that if the rate of climate change could be slowed down and time bought to study the outcomes and to adapt more cheaply, then this would be an appropriate recommended national-level strategic response.
67. W. W. Kellogg and M. Mead, The Atmosphere: Endangered and Endangering (U.S. Department of Health, Education and Welfare, Washington, DC, 1975).
68. Conference statement, The Changing Atmosphere: Implications for Global Security, Toronto, Ontario, Canada, 27 to 30 June 1988 (Environment Canada, Toronto, 1988). The report noted that the "first steps in developing international law and practices to address pollution of the air have already been taken." Several examples are cited in it, in particular the Vienna Convention for the Protection of the Ozone Layer and its Montreal Protocol, signed in 1987. The report states that "These are important first steps and should be actively implemented and respected by all nations. However, there is no overall convention constituting a comprehensive international framework that could address the interrelated problems of the global atmosphere, or that is directed toward the issues of climate change". It set forth a far-reaching action plan that would have major implications for government, industry, and populations. This report follows on the heels of the United Nations Commission on Environment and Development, known as the Brundtland Commission Report, which argued that environment, development, and security should not be treated as separate issues, but rather as connected problems.
69. W. S. Broecker, testimony for U.S. Senate Subcommittee on Environmental Protection, 28 January 1987.
70. It is important to ask how long it might take the scientific community to be able to provide more credible time-evolving regional climatic anomaly forecasts from increasing greenhouse gases; S. H. Schneider, P. H. Gleick, L. Mearns, in (49); Schneider et al. suggested that it will be at least 10 years and probably several decades before the current level of scientific effort can provide a widespread consensus on these details. The reason such time is needed at current levels of effort is that providing credible regional details will require the coupling of high resolution atmosphere, ocean, and sea ice models with ecological models that provide accurate fluxes of energy and water between atmosphere and land as well as nutrient cycling and chemical transformations that account for trace greenhouse gas buildup over time. A dedicated effort to accelerate the rate of progress could conceivably speed up the establishment of a consensus on regional issues, but at best 10 years or so will be necessary even with a dramatic effort. However, such efforts would clearly put future decision making on a firmer factual basis and help to make adaptation strategies more effective sooner.
71. The "tie-in" strategy was first formulated by E. Boulding, et al. [in Carbon Dioxide Effects, Research and Assessment Program: Workshop on Environmental and Societal Consequences of a Possible CO2-induced Climatic Change, Report 009, CONF-7904143, U.S. Department of Energy (Government Printing Office, Washington, DC, October 1980), pp. 79-103], it was later adopted by W. W. Kellogg and R. Schware, Climate Change and Society, Consequences of Increasing Atmospheric Carbon Dioxide (Westview Press, Boulder, CO, 1981).
72. S. H. Schneider and S. L. Thompson, in The Global Possible: Resources, Development and the New Century, R. Repetto, Ed., (Yale Univ. Press, New Haven, 1985), pp. 397-430.
73. J. A. Edmonds, W. B. Ashton, H. C. Cheng, and M. Stemberg (in preparation) have calculated that the U.S. contributes some 5% of CO2 emissions, but this fraction could drop significantly if it holds emissions growth while other nations with large populations try to catch up with U.S. per capita energy use standards.
74. R. Revelle and H. Suess, Tellus 9, 18 (1957).
75. K. E. Trenberth, G. W. Branstator, P. A. Arkin, Science 242, 1640 (1988); S. H. Schneider, Climatic Change 13, 113 (1988). Clearly, one hot year can no more prove that the greenhouse effect has been detected in the record than a few cold ones could disprove it. The 1990s, should they see a continuation of the sharp warming trend of the 1980s, will undoubtedly lead many more scientists to predict confidently that the increase in trace greenhouse gases has caused direct and clearly detectable climatic change. Already a few scientists are satisfied that the effects are 99% detectable in the record; J. Hansen (1a); see also J. N. Wilford, New York Times, 23 August 1988, p. C4.
76. Toward an Understanding of Global Change, Initial Priorities for U.S. Contributions to the International Geosphere-Biosphere Program (National Academy Press, Washington, DC, 1988); S. H. Schneider, Issues Sci. Technol. IV (no. 3),93 (1988).
77. I thank J. Ausubel, G. J. MacDonald and two anonymous reviewers for useful comments on the first draft. I also thank S. Mikkelson for efficient word processing and correcting several drafts of the manuscript very quickly. The National Center for Atmospheric Research is sponsored by the National Science Foundation. Any opinions, findings, conclusions, or recommendations expressed in this article are those of the author and do not necessarily reflect the views of the National Science Foundation. | http://www.ciesin.org/docs/003-074/003-074.html | 13 |
50 | Correlation addresses the relationship between two different factors (variables). The statistic is called a correlation coefficient. A correlation coefficient can be calculated when there are two (or more) sets of scores for the same individuals or matched groups.
A correlation coefficient describes direction (positive or negative) and degree (strength) of relationship between two variables. The higher the correlation coefficient, the stronger the relationship. The coefficient also is used to obtain a p value indicating whether the degree of relationship is greater than expected by chance. For correlation, the null hypothesis is that the correlation coefficient = 0.
Examples: Is there a relationship between family income and scores on the SAT? Does amount of time spent studying predict exam grade? How does alcohol intake affect reaction time?
Raw data sheet
The notation X is used for scores on the independent (predictor) variable. Y is used for the scores on the outcome (dependent) variable.
X = score on the 1st variable (predictor)
Y = score on the 2nd variable (outcome)
Subject   Variable 1 (X)   Variable 2 (Y)
   1           X1               Y1
   2           X2               Y2
   3           X3               Y3
   4           X4               Y4
   5           X5               Y5
   6           X6               Y6
   7           X7               Y7
   8           X8               Y8
Contrast/comparison versus Correlation
The modules on descriptive and inferential statistics describe contrasting groups -- Do samples differ on some outcome? ANOVA analyzes central tendency and variability along an outcome variable. Chi-square compares observed with expected outcomes. ANOVA and Chi-square compare different subjects (or the same subjects over time) on the same outcome variable. Correlation looks at the relative position of the same subjects on different variables.
Correlation can be positive or negative, depending upon the direction of the relationship. If both factors increase and decrease together, the relationship is positive. If one factor increases as the other decreases, then the relationship is negative. It is still a predictable relationship, but inverse, changing in opposite rather than same direction. Plotting a relationship on a graph (called a scatterplot) provides a picture of the relationship between two factors (variables).
A correlation coefficient can vary from -1.00 to +1.00. The closer the coefficient is to zero (from either + or -), the less strong the relationship. The sign indicates the direction of the relationship: plus (+) = positive, minus (-) = negative. Take a look at the correlation coefficients (on the graph itself) for the 3 examples from the scatterplot tutorial.
Correlations as low as .14 are statistically significant in large samples (e.g., 200 cases or more).
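As a rough check on that figure, the customary significance test for a Pearson r (not a formula given in this tutorial) uses t = r * sqrt(n - 2) / sqrt(1 - r^2) with n - 2 degrees of freedom. The short Python sketch below, which assumes the SciPy library is available, reproduces the .14-with-200-cases claim.

# Rough check: is r = .14 significant at the .05 level with n = 200?
from math import sqrt
from scipy import stats

r, n = 0.14, 200
t = r * sqrt(n - 2) / sqrt(1 - r**2)      # t statistic for a correlation coefficient
p = 2 * stats.t.sf(abs(t), df=n - 2)      # two-tailed p value
print(f"t = {t:.2f}, p = {p:.3f}")        # about t = 1.99, p = .048 -- just under .05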
The important point to remember in correlation is that we cannot make any assumption about cause. The fact that 2 variables co-vary in either a positive or negative direction does not mean that one is causing the other. Remember the 3 criteria for cause-and-effect and the third variable problem.
There are two different formulas to use in calculating correlation. For normal distributions, use the Pearson Product-moment Coefficient (r). When the data are ranks (1st, 2nd, 3rd, etc.), use the Spearman Rank-order Coefficient (rs). More details are provided in the next two sections.
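As a concrete illustration of the two coefficients, here is a minimal Python sketch; the scores are made up for this example, and it assumes SciPy is installed.

# Hypothetical paired scores: X = hours studied (predictor), Y = exam grade (outcome).
from scipy.stats import pearsonr, spearmanr

x = [2, 4, 5, 7, 8, 10, 11, 13]
y = [55, 60, 58, 70, 72, 80, 79, 88]

r, p = pearsonr(x, y)         # Pearson product-moment r: interval data, roughly normal
rs, ps = spearmanr(x, y)      # Spearman rank-order rs: ranked or non-normal data
print(f"Pearson r = {r:.2f} (p = {p:.4f}); Spearman rs = {rs:.2f} (p = {ps:.4f})")

Both functions return the coefficient together with its p value, so the null hypothesis that the coefficient equals 0 can be evaluated directly.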
Bivariate and multiple regression
The correlation procedure discussed thus far is called bivariate correlation. That is because there are two factors (variables) involved -- bi = 2. The term regression refers to a diagonal line drawn on the data scatterplot. You saw that in the tutorial. The formula for correlation calculates how closely the data points are to the regression line.
Multiple regression is correlation for more than 2 factors. The concept is fairly simple, but the calculation is not, and requires use of a computer program (or many hours of hand calculation).
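To give a sense of what such a program computes, the sketch below fits a regression with two predictors by ordinary least squares; the data and variable names are invented for illustration, and NumPy is assumed.

# Hypothetical data: predict exam grade (y) from hours studied (x1) and hours slept (x2).
import numpy as np

x1 = np.array([2, 4, 5, 7, 8, 10, 11, 13], dtype=float)
x2 = np.array([6, 7, 5, 8, 6, 7, 8, 7], dtype=float)
y = np.array([55, 60, 58, 70, 72, 80, 79, 88], dtype=float)

X = np.column_stack([np.ones_like(x1), x1, x2])   # column of 1s gives the intercept term
coef, *_ = np.linalg.lstsq(X, y, rcond=None)      # least-squares solution for the coefficients

y_hat = X @ coef                                  # predicted grades
multiple_r = np.corrcoef(y, y_hat)[0, 1]          # multiple R: correlation of y with its prediction
print(f"grade = {coef[0]:.1f} + {coef[1]:.2f}*study + {coef[2]:.2f}*sleep, R = {multiple_r:.2f}")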
Next section: Correlation for normally-distributed variables, Pearson r | http://psychology.ucdavis.edu/sommerb/sommerdemo/correlation/intro.htm | 13 |
61 | Induction or inductive reasoning, sometimes called inductive logic, is reasoning which takes us "beyond the confines of our current evidence or knowledge to conclusions about the unknown." The premises of an inductive argument support the conclusion but do not entail it; i.e. they do not ensure its truth. Induction is used to ascribe properties or relations to types based on an observation instance (i.e., on a number of observations or experiences); or to formulate laws based on limited observations of recurring phenomenal patterns. Induction is employed, for example, in using specific propositions such as:
This ice is cold. (or: All ice I have ever touched was cold.)
This billiard ball moves when struck with a cue. (or: Of one hundred billiard balls struck with a cue, all of them moved.)
...to infer general propositions such as:
All ice is cold.
All billiard balls move when struck with a cue.
Another example would be:
3+5=8 and eight is an even number. Therefore, an odd number added to another odd number will result in an even number.
Inductive reasoning has been attacked several times. Historically, David Hume denied its logical admissibility. Sextus Empiricus questioned how the truth of the Universals can be established by examining some of the particulars. Examining all the particulars is difficult as they are infinite in number. During the twentieth century, thinkers such as Karl Popper and David Miller have disputed the existence, necessity and validity of any inductive reasoning, including probabilistic (Bayesian) reasoning. Some say scientists still rely on induction, but Popper and Miller dispute this: scientists cannot rely on induction simply because it does not exist.
Note that mathematical induction is not a form of inductive reasoning. While mathematical induction may be inspired by the non-base cases, the formulation of a base case firmly establishes it as a form of deductive reasoning.
All observed crows are black.
All crows are black.
This exemplifies the nature of induction: inducing the universal from the particular. However, the conclusion is not certain. Unless we can systematically falsify the possibility of crows of another colour, the statement (conclusion) may actually be false.
For example, one could examine the bird's genome and learn whether it's capable of producing a differently coloured bird. In doing so, we could discover that albinism is possible, resulting in light-coloured crows. Even if you change the definition of "crow" to require blackness, the original question of the colour possibilities for a bird of that species would stand, only semantically hidden.
A strong induction is thus an argument in which the truth of the premises would make the truth of the conclusion probable, but not necessary.
I always hang pictures on nails.
All pictures hang from nails.
Assuming the first statement to be true, this example is built on the certainty that "I always hang pictures on nails" leading to the generalisation that "All pictures hang from nails". However, the link between the premise and the inductive conclusion is weak. No reason exists to believe that just because one person hangs pictures on nails that there are no other ways for pictures to be hung, or that other people cannot do other things with pictures. Indeed, not all pictures are hung from nails; moreover, not all pictures are hung. The conclusion cannot be strongly inductively made from the premise. Using other knowledge we can easily see that this example of induction would lead us to a clearly false conclusion. Conclusions drawn in this manner are usually overgeneralisations.
Many speeding tickets are given to teenagers.
All teenagers drive fast.
In this example, the premise is built upon a certainty; however, it is not one that leads to the conclusion. Not every teenager observed has been given a speeding ticket. In other words, unlike "The sun rises every morning", there are already plenty of examples of teenagers not being given speeding tickets. Therefore the conclusion drawn can easily be true or false, and the inductive logic does not give us a strong conclusion. In both of these examples of weak induction, the logical means of connecting the premise and conclusion (with the word "therefore") are faulty, and do not give us a strong inductively reasoned statement.
See main article: Problem of induction.
Formal logic, as most people learn it, is deductive rather than inductive. Some philosophers claim to have created systems of inductive logic, but it is controversial whether a logic of induction is even possible. In contrast to deductive reasoning, conclusions arrived at by inductive reasoning do not have the same degree of certainty as the initial premises. For example, a conclusion that all swans are white is false, but may have been thought true in Europe until the settlement of Australia or New Zealand, when Black Swans were discovered. Inductive arguments are never binding but they may be cogent. Inductive reasoning is deductively invalid. (An argument in formal logic is valid if and only if it is not possible for the premises of the argument to be true whilst the conclusion is false.) In induction there are always many conclusions that can reasonably be related to certain premises. Inductions are open; deductions are closed. It is however possible to derive a true statement using inductive reasoning if you know the conclusion. The only way to have an efficient argument by induction is for the known conclusion to be able to be true only if an unstated external conclusion is true, from which the initial conclusion was built and has certain criteria to be met in order to be true (separate from the stated conclusion). By substitution of one conclusion for the other, you can inductively find out what evidence you need in order for your induction to be true. For example, you have a window that opens only one way, but not the other. Assuming that you know that the only way for that to happen is that the hinges are faulty, inductively you can postulate that the only way for that window to be fixed would be to apply oil (whatever will fix the unstated conclusion). From there on you can successfully build your case. However, if your unstated conclusion is false, which can only be proven by deductive reasoning, then your whole argument by induction collapses. Thus ultimately, inductive reasoning is not reliable.
The classic philosophical treatment of the problem of induction, meaning the search for a justification for inductive reasoning, was by the Scottish philosopher David Hume. Hume highlighted the fact that our everyday reasoning depends on patterns of repeated experience rather than deductively valid arguments. For example, we believe that bread will nourish us because it has done so in the past, but this is not a guarantee that it will always do so. As Hume said, someone who insisted on sound deductive justifications for everything would starve to death.
Induction is sometimes framed as reasoning about the future from the past, but in its broadest sense it involves reaching conclusions about unobserved things on the basis of what has been observed. Inferences about the past from present evidence for instance, as in archaeology, count as induction. Induction could also be across space rather than time, for instance as in physical cosmology where conclusions about the whole universe are drawn from the limited perspective we are able to observe (see cosmic variance); or in economics, where national economic policy is derived from local economic performance.
Twentieth-century philosophy has approached induction very differently. Rather than a choice about what predictions to make about the future, induction can be seen as a choice of what concepts to fit to observations or of how to graph or represent a set of observed data. Nelson Goodman posed a "new riddle of induction" by inventing the property "grue" to which induction as a prediction about the future does not apply.
The proportion Q of the sample has attribute A.
The proportion Q of the population has attribute A.
How much support the premises provide for the conclusion depends on (a) the number of individuals in the sample group compared to the number in the population and (b) the randomness of the sample. The hasty generalisation and biased sample are fallacies related to generalisation.
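A small simulation can make the role of sample size concrete; the population proportion and sample sizes below are arbitrary illustration values, not figures from the text.

# How closely does a random sample's proportion track the population proportion?
import random

random.seed(0)
population_q = 0.30                        # true proportion of the population with attribute A
for n in (10, 100, 10_000):
    sample = [random.random() < population_q for _ in range(n)]
    print(f"n = {n:>6}: sample proportion = {sum(sample) / n:.3f}")
# Larger random samples tend to land closer to 0.30, which is why the support for the
# generalisation grows with sample size and with the randomness of the selection.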
A statistical syllogism proceeds from a generalization to a conclusion about an individual.
A proportion Q of population P has attribute A.
An individual I is a member of P.
There is a probability which corresponds to Q that I has A.
Simple induction proceeds from a premise about a sample group to a conclusion about another individual.
Proportion Q of the known instances of population P has attribute A.
Individual I is another member of P.
There is a probability corresponding to Q that I has A.
This is a combination of a generalization and a statistical syllogism, where the conclusion of the generalization is also the first premise of the statistical syllogism.
I has attributes A, B, and C
J has attributes A and B
So, J has attribute C
An analogy relies on the inference that the attributes known to be shared (the similarities) imply that C is also a shared property. The support which the premises provide for the conclusion is dependent upon the relevance and number of the similarities between I and J. The fallacy related to this process is false analogy. As with other forms of inductive argument, even the best reasoning in an argument from analogy can only make the conclusion probable given the truth of the premises, not certain.
Analogical reasoning is very frequent in common sense, science, philosophy and the humanities, but sometimes it is accepted only as an auxiliary method. A refined approach is case-based reasoning. For more information on inferences by analogy, see Juthe, 2005.
A causal inference draws a conclusion about a causal connection based on the conditions of the occurrence of an effect. Premises about the correlation of two things can indicate a causal relationship between them, but additional factors must be confirmed to establish the exact form of the causal relationship.
A prediction draws a conclusion about a future individual from a past sample.
Proportion Q of observed members of group G have had attribute A.
There is a probability corresponding to Q that other members of group G will have attribute A when next observed.
Of the candidate systems for an inductive logic, the most influential is Bayesianism. This uses probability theory as the framework for induction. Given new evidence, Bayes' theorem is used to evaluate how much the strength of a belief in a hypothesis should change.
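A minimal numerical sketch of this kind of Bayesian updating is given below in C. The prior, the likelihoods, and the hypothesis are made-up illustrative values, not anything taken from the text; the point is only to show Bayes' theorem turning a prior degree of belief into a posterior one.

    #include <stdio.h>

    /* Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E),
     * where P(E) = P(E|H) * P(H) + P(E|not H) * (1 - P(H)). */
    double bayes_update(double prior, double p_e_given_h, double p_e_given_not_h)
    {
        double evidence = p_e_given_h * prior + p_e_given_not_h * (1.0 - prior);
        return p_e_given_h * prior / evidence;
    }

    int main(void)
    {
        /* Hypothetical numbers: a belief of 0.3 in some hypothesis, and evidence
         * that is four times more likely if the hypothesis is true than if not. */
        double posterior = bayes_update(0.30, 0.80, 0.20);
        printf("posterior = %.3f\n", posterior);   /* about 0.632 */
        return 0;
    }

Repeated application of the same update, with each posterior becoming the next prior, is what drives the convergence of opinion that subjective Bayesians appeal to.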
There is debate around what informs the original degree of belief. Objective Bayesians seek an objective value for the degree of probability of a hypothesis being correct and so do not avoid the philosophical criticisms of objectivism. Subjective Bayesians hold that prior probabilities represent subjective degrees of belief, but that the repeated application of Bayes' theorem leads to a high degree of agreement on the posterior probability. They therefore fail to provide an objective standard for choosing between conflicting hypotheses. The theorem can be used to produce a rational justification for a belief in some hypothesis, but at the expense of rejecting objectivism. Such a scheme cannot be used, for instance, to decide objectively between conflicting scientific paradigms.
Edwin Jaynes, an outspoken physicist and Bayesian, argued that "subjective" elements are present in all inference, for instance in choosing axioms for deductive inference, in choosing initial degrees of belief or prior probabilities, or in choosing likelihoods. He thus sought principles for assigning probabilities from qualitative knowledge. Maximum entropy, a generalization of the principle of indifference, and transformation groups are the two tools he produced. Both attempt to alleviate the subjectivity of probability assignment in specific situations by converting knowledge of features such as a situation's symmetry into unambiguous choices for probability distributions.
Cox's theorem, which derives probability from a set of logical constraints on a system of inductive reasoning, prompts Bayesians to call their system an inductive logic.
Based on an analysis of measurement theory (specifically the axiomatic work of Krantz-Luce-Suppes-Tversky), Henry E. Kyburg, Jr. produced a novel account of how error and predictiveness could be mediated by epistemological probability. It explains how one can adopt a rule, such as PV=nRT, even though the new universal generalization produces higher error rates on the measurement of P, V, and T. It remains the most detailed procedural account of induction, in the sense of scientific theory-formation.
Polygons are closed figures formed by three or more line segments. A line segment is the part of a line that has a fixed length and fixed end points. In polygon geometry we will learn about the different types of polygons and the names given to them. If a polygon has all sides equal and all angles equal, we say that the polygon is a regular polygon. We start with polygons of three line segments: a closed figure formed by joining three line segments end to end is called a triangle. A triangle has 3 sides, 3 vertices and 3 angles. If a triangle has all sides equal, we call it an equilateral triangle.
Now we look at 4-sided polygons, which are called quadrilaterals. A quadrilateral can be named as a square, rectangle, rhombus, trapezium, kite or parallelogram based on its properties. If all the sides and angles of a quadrilateral are equal, then it is a regular quadrilateral, that is, a square; a quadrilateral with equal sides but not necessarily equal angles is a rhombus. Which of the two we have depends on the angle measures of the figure, which we will study in other sessions. A quadrilateral is a closed figure formed by joining 4 sides of equal or unequal lengths. All quadrilaterals have 4 sides and 4 vertices, and so 4 angles. If the quadrilateral is a square or rectangle, then all its angles are equal (90 degrees each).
A 5-sided polygon is called a pentagon, a 6-sided polygon a hexagon, a 7-sided polygon a heptagon, an 8-sided polygon an octagon, a 9-sided polygon a nonagon, and a 10-sided polygon a decagon.
Enclosed flat figures made up of straight lines are called polygons. Here flat means that the figure is two dimensional. The straight lines in the definition are also called segments. Polygons are called enclosed because all the line segments meet end point to end point, leaving no opening; figures which are not enclosed are not considered polygons.
A polygon with 12 sides is called a dodecagon. It can be regular or irregular. If we want to calculate its interior angle sum, we divide the dodecagon into triangles; it consists of 10 triangles. We can calculate the sum of all the angles as 180 × 10 = 1800 degrees, because we have 10 triangles and each triangle's interior angles sum to 180 degrees. In general, an n-sided polygon divides into n - 2 triangles, so its interior angle sum is (n - 2) × 180 degrees.
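As a quick check of this counting rule, here is a small helper in C (an illustrative sketch, not part of the original lesson):

    #include <stdio.h>

    /* Interior angle sum of an n-sided polygon, in degrees: (n - 2) * 180. */
    int interior_angle_sum(int n)
    {
        return (n - 2) * 180;
    }

    int main(void)
    {
        printf("triangle:  %d\n", interior_angle_sum(3));   /* 180  */
        printf("dodecagon: %d\n", interior_angle_sum(12));  /* 1800 */
        return 0;
    }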
Quadrilaterals are a special type of polygon: closed figures formed by joining 4 line segments, which may or may not be equal in length depending upon the type of quadrilateral.
A polygon is a closed figure bounded by three or more lines; any figure with a closed shape of this kind is known as a polygon. The meaning of 'poly' is "many" and the meaning of 'gon' is "angle". A polygon is a two-dimensional shape, and every end point where two sides meet is known as a vertex of the polygon. For a convex polygon, the exterior angles, one at each vertex, sum to 360 degrees.
In geometry, two-dimensional figures that are closed by straight lines are called polygons. There are many types of polygons, such as simple polygons, complex polygons, concave polygons, convex polygons, regular polygons, irregular polygons, equilateral polygons, equiangular polygons, triangles, quadrilaterals, pentagons, and so on.
Polygons are figures formed by joining 3 or more line segments so that a closed figure results. Polygons with three line segments are triangles, polygons with 4 line segments are quadrilaterals, polygons with 5 line segments are called pentagons, polygons formed by joining 6 line segments are called hexagons, and in this way more polygons are named as the number of sides grows.
Before going through the midpoint theorem, let us first learn what a midpoint is. Geometrically, the midpoint is the middle point of a line segment, equidistant from both endpoints.
The formula used to determine the midpoint of two points a and b on a line is (a + b)/2, and the midpoint of a line segment in the plane with endpoints (x₁, y₁) and (x₂, y₂) is ((x₁ + x₂)/2, (y₁ + y₂)/2).
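The same formulas are easy to state in code. The following C sketch (illustrative only) computes the midpoint of a segment from its two endpoints:

    #include <stdio.h>

    typedef struct { double x, y; } Point;

    /* Midpoint of the segment with endpoints p and q. */
    Point midpoint(Point p, Point q)
    {
        Point m = { (p.x + q.x) / 2.0, (p.y + q.y) / 2.0 };
        return m;
    }

    int main(void)
    {
        Point p = { 1.0, 2.0 }, q = { 5.0, 8.0 };
        Point m = midpoint(p, q);
        printf("midpoint = (%.1f, %.1f)\n", m.x, m.y);  /* (3.0, 5.0) */
        return 0;
    }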
When we draw all the possible diagonals from a single vertex of a polygon with four or more sides, the polygon is divided into triangles. We then find the interior angle sum of the polygon by multiplying the number of triangles present in the polygon by 180 degrees.
Now we see the angle sums of polygons with the help of diagrams.
The word polygon originates from two Greek words, 'poly' and 'gon'. 'Poly' means many and 'gon' means angle, so the word polygon means many angles. A closed plane figure bounded or surrounded by three or more sides and having several angles is defined as a polygon. A polygon is formed from different parts and sides, and its name depends on its number of sides; for example, a polygon with 6 sides is called a hexagon.
In calculus, an antiderivative, primitive integral or indefinite integral of a function f is a differentiable function F whose derivative is equal to f, i.e., F ′ = f. The process of solving for antiderivatives is called antidifferentiation (or indefinite integration) and its opposite operation is called differentiation, which is the process of finding a derivative. Antiderivatives are related to definite integrals through the fundamental theorem of calculus: the definite integral of a function over an interval is equal to the difference between the values of an antiderivative evaluated at the endpoints of the interval.
The discrete equivalent of the notion of antiderivative is antidifference.
The function F(x) = x³/3 is an antiderivative of f(x) = x². As the derivative of a constant is zero, x² will have an infinite number of antiderivatives, such as (x³/3) + 0, (x³/3) + 7, (x³/3) - 42, (x³/3) + 293, etc. Thus, all the antiderivatives of x² can be obtained by changing the value of C in F(x) = (x³/3) + C, where C is an arbitrary constant known as the constant of integration. Essentially, the graphs of antiderivatives of a given function are vertical translations of each other, each graph's location depending upon the value of C.
In physics, the integration of acceleration yields velocity plus a constant. The constant is the initial velocity term that would be lost upon taking the derivative of velocity because the derivative of a constant term is zero. This same pattern applies to further integrations and derivatives of motion (position, velocity, acceleration, and so on).
Uses and properties
Antiderivatives are important because they can be used to compute definite integrals, using the fundamental theorem of calculus: if F is an antiderivative of the integrable function f and f is continuous over the interval [a, b], then:
∫ₐᵇ f(x) dx = F(b) - F(a).
Because of this, each of the infinitely many antiderivatives of a given function f is sometimes called the "general integral" or "indefinite integral" of f and is written using the integral symbol with no bounds:
∫ f(x) dx.
If F is an antiderivative of f, and the function f is defined on some interval, then every other antiderivative G of f differs from F by a constant: there exists a number C such that G(x) = F(x) + C for all x. C is called the arbitrary constant of integration. If the domain of F is a disjoint union of two or more intervals, then a different constant of integration may be chosen for each of the intervals. For instance
F(x) = ln|x| + C₁ for x < 0 and F(x) = ln|x| + C₂ for x > 0 is the most general antiderivative of f(x) = 1/x on its natural domain (-∞, 0) ∪ (0, ∞).
Every continuous function f has an antiderivative, and one antiderivative F is given by the definite integral of f with variable upper boundary:
F(x) = ∫ₐˣ f(t) dt.
Varying the lower boundary produces other antiderivatives (but not necessarily all possible antiderivatives). This is another formulation of the fundamental theorem of calculus.
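The relationship can be illustrated numerically. The sketch below (in C; the function x² and the interval [0, 2] are arbitrary choices, not taken from the text) compares a midpoint-rule approximation of the definite integral with F(b) - F(a) for the antiderivative F(x) = x³/3:

    #include <stdio.h>

    static double f(double x) { return x * x; }            /* f(x) = x^2   */
    static double F(double x) { return x * x * x / 3.0; }  /* F(x) = x^3/3 */

    int main(void)
    {
        double a = 0.0, b = 2.0;
        int n = 100000;
        double h = (b - a) / n, sum = 0.0;

        /* midpoint-rule approximation of the definite integral of f over [a, b] */
        for (int i = 0; i < n; i++)
            sum += f(a + (i + 0.5) * h) * h;

        printf("numerical integral = %.6f\n", sum);          /* ~2.666667 */
        printf("F(b) - F(a)        = %.6f\n", F(b) - F(a));  /* 8/3       */
        return 0;
    }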
There are many functions whose antiderivatives, even though they exist, cannot be expressed in terms of elementary functions (like polynomials, exponential functions, logarithms, trigonometric functions, inverse trigonometric functions and their combinations). Examples of these are exp(-x²), sin(x²), sin(x)/x, and x^x.
See also differential Galois theory for a more detailed discussion.
Techniques of integration
Finding antiderivatives of elementary functions is often considerably harder than finding their derivatives. For some elementary functions, it is impossible to find an antiderivative in terms of other elementary functions. See the article on elementary functions for further information.
We have various methods at our disposal:
- the linearity of integration allows us to break complicated integrals into simpler ones
- integration by substitution, often combined with trigonometric identities or the natural logarithm
- integration by parts to integrate products of functions
- the inverse chain rule method, a special case of integration by substitution
- the method of partial fractions in integration allows us to integrate all rational functions (fractions of two polynomials)
- the Risch algorithm
- integrals can also be looked up in a table of integrals
- when integrating multiple times, we can use certain additional techniques, see for instance double integrals and polar coordinates, the Jacobian and Stokes' theorem
- computer algebra systems can be used to automate some or all of the work involved in the symbolic techniques above, which is particularly useful when the algebraic manipulations involved are very complex or lengthy
- if a function has no elementary antiderivative (for instance, exp(-x²)), its definite integral can be approximated using numerical integration (a small sketch follows this list)
- to calculate the (n times) repeated antiderivative of a function f, Cauchy's formula is useful (cf. Cauchy formula for repeated integration): F(x) = 1/(n-1)! ∫ₐˣ (x - t)ⁿ⁻¹ f(t) dt
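As an illustration of the numerical-integration point above, here is a sketch (in C) of composite Simpson's rule applied to exp(-x²), whose antiderivative is not elementary; the interval [0, 1] and the number of subintervals are arbitrary choices:

    #include <stdio.h>
    #include <math.h>

    static double f(double x) { return exp(-x * x); }

    /* Composite Simpson's rule on [a, b] with n subintervals (n must be even). */
    static double simpson(double a, double b, int n)
    {
        double h = (b - a) / n, s = f(a) + f(b);
        for (int i = 1; i < n; i++)
            s += f(a + i * h) * ((i % 2) ? 4.0 : 2.0);
        return s * h / 3.0;
    }

    int main(void)
    {
        /* The integral of exp(-x^2) over [0, 1] is about 0.746824. */
        printf("integral over [0, 1] ~ %.6f\n", simpson(0.0, 1.0, 1000));
        return 0;
    }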
Antiderivatives of non-continuous functions
Non-continuous functions can have antiderivatives. While there are still open questions in this area, it is known that:
- Some highly pathological functions with large sets of discontinuities may nevertheless have antiderivatives.
- In some cases, the antiderivatives of such pathological functions may be found by Riemann integration, while in other cases these functions are not Riemann integrable.
Assuming that the domains of the functions are open intervals:
- A necessary, but not sufficient, condition for a function f to have an antiderivative is that f have the intermediate value property. That is, if [a, b] is a subinterval of the domain of f and d is any real number between f(a) and f(b), then f(c) = d for some c between a and b. To see this, let F be an antiderivative of f and consider the continuous function g(x) = F(x) - dx
on the closed interval [a, b]. Then g must have either a maximum or minimum c in the open interval (a, b), and so g′(c) = 0, which means f(c) - d = 0, i.e. f(c) = d.
- The set of discontinuities of f must be a meagre set. This set must also be an F-sigma set (since the set of discontinuities of any function must be of this type). Moreover for any meagre F-sigma set, one can construct some function f having an antiderivative, which has the given set as its set of discontinuities.
- If f has an antiderivative, is bounded on closed finite subintervals of the domain and has a set of discontinuities of Lebesgue measure 0, then an antiderivative may be found by integration.
- If f has an antiderivative F on a closed interval [a, b], then for any choice of partition a = x₀ < x₁ < … < xₙ = b, if one chooses sample points as specified by the mean value theorem, then the corresponding Riemann sum telescopes to the value F(b) − F(a).
- However if f is unbounded, or if f is bounded but the set of discontinuities of f has positive Lebesgue measure, a different choice of sample points may give a significantly different value for the Riemann sum, no matter how fine the partition. See Example 4 below.
Some examples
- The function
- The function
- If f(x) is the function in Example 1 and F is its antiderivative, and is a dense countable subset of the open interval , then the function
has an antiderivative
The set of discontinuities of g is precisely the set . Since g is bounded on closed finite intervals and the set of discontinuities has measure 0, the antiderivative G may be found by integration.
- Let be a dense countable subset of the open interval . Consider the everywhere continuous strictly increasing function
It can be shown that
for all values x where the series converges, and that the graph of F(x) has vertical tangent lines at all other values of x. In particular the graph has vertical tangent lines at all points in the set .
Moreover for all x where the derivative is defined. It follows that the inverse function is differentiable everywhere and that
for all x in the set which is dense in the interval . Thus g has an antiderivative G. On the other hand, it can not be true that
since for any partition of , one can choose sample points for the Riemann sum from the set , giving a value of 0 for the sum. It follows that g has a set of discontinuities of positive Lebesgue measure. Figure 1 on the right shows an approximation to the graph of g(x) where and the series is truncated to 8 terms. Figure 2 shows the graph of an approximation to the antiderivative G(x), also truncated to 8 terms. On the other hand if the Riemann integral is replaced by the Lebesgue integral, then Fatou's lemma or the dominated convergence theorem shows that g does satisfy the fundamental theorem of calculus in that context.
- In Examples 3 and 4, the sets of discontinuities of the functions g are dense only in a finite open interval . However these examples can be easily modified so as to have sets of discontinuities which are dense on the entire real line . Let
- Using a similar method as in Example 5, one can modify g in Example 4 so as to vanish at all rational numbers. If one uses a naive version of the Riemann integral defined as the limit of left-hand or right-hand Riemann sums over regular partitions, one will obtain that the integral of such a function g over an interval is 0 whenever a and b are both rational, instead of . Thus the fundamental theorem of calculus will fail spectacularly.
Note: Antiderivatives are also called general integrals, and sometimes integrals. The latter term is generic, and refers not only to indefinite integrals (antiderivatives), but also to definite integrals. When the word integral is used without additional specification, the reader is supposed to deduce from the context whether it is referred to a definite or indefinite integral. Some authors define the indefinite integral of a function as the set of its infinitely many possible antiderivatives. Others define it as an arbitrarily selected element of that set. Wikipedia adopts the latter approach.
Logic for Computer Scientists/Induction
Induction plays a crucial role in at least two respects throughout this book. Firstly, it is one of the main proof principles in mathematics and of course in logic. In particular, it can be used to investigate properties of infinite sets. Very often it is used as natural induction, namely over the natural numbers. We will introduce it as a more general principle over well-founded partial orders, which is called structural induction. The second aspect is that it can be used as well to define infinite structures, such as the set of well-formed formulae in a particular logic or the set of binary trees.
We start with a very general structure over arbitrary sets, namely partial orders. A relation ⊑ over a set M is a partial order iff ⊑ is reflexive, transitive and anti-symmetric (i.e. x ⊑ y and y ⊑ x imply x = y). Partially ordered sets (p.o. sets) are usually written as (M, ⊑).
The necessary structure for our induction principle is a partial order such that there exist minimal elements. Given a p.o. set (M, ⊑), we define:
- x ⊏ y iff x ⊑ y and x ≠ y.
- (M, ⊑) is called well-founded iff there is no infinite sequence x₁, x₂, x₃, … with xᵢ₊₁ ⊏ xᵢ for every i.
- N ⊆ M is called a chain iff for all x, y ∈ N: x ⊑ y or y ⊑ x.
- ⊑ is a total ordering iff M is a chain.
(M, ⊑) is well-founded iff every non-empty subset of M has a minimal element. The proof can be done by contradiction and will be given as an exercise.
We finally have the machinery to introduce the principle of complete induction:
Definition 2 (Complete (structural) induction)
Given a well-founded p.o. set (M, ⊑) and a predicate P over M, i.e. P ⊆ M, the principle of induction is given by the following (second-order) formula:
∀x ∈ M [ ( ∀y ∈ M (y ⊏ x → P(y)) ) → P(x) ] → ∀x ∈ M P(x)
The induction principle holds for every well-founded set.
Proof: The proof is given by contradiction: Assume the principle is wrong; i.e. the implication is wrong, which means that we have to assume the premise as true:
∀x ∈ M [ ( ∀y ∈ M (y ⊏ x → P(y)) ) → P(x) ]    (premise)
and the conclusion as wrong: ¬ ∀x ∈ M P(x).
Hence we can assume that the set M′ = { x ∈ M | ¬P(x) } is not empty.
Since M′ is a subset of a well-founded set, it has a minimal element, say m. From the assumption (premise), instantiated with x = m, we conclude
( ∀y ∈ M (y ⊏ m → P(y)) ) → P(m)    (inst)
Now we can distinguish two cases:
- m is minimal in M: Hence there is no y ∈ M such that y ⊏ m. Hence the premise of the implication in (inst) is vacuously true, which implies that the conclusion P(m) is true. This is a contradiction to the assumption that m ∈ M′, i.e. that ¬P(m) holds!
- m is not minimal in M: Then there is some y with y ⊏ m, and for every such y it must be that P(y) is true, because otherwise y ∈ M′ and m would not be minimal in M′. Hence, again the premise of the implication in (inst) is true, which implies that the conclusion P(m) is true. This is a contradiction to the assumption that m ∈ M′!
In this subsection we will carry out a proof with induction in detail. For this we need the following extension of p.o. sets:
Definition 3 (Lexicographic Ordering)
A p.o. set (M, ⊑) induces an ordering ⊑_lex over M × M: (x₁, y₁) ⊑_lex (x₂, y₂) iff
- x₁ ⊏ x₂, or
- x₁ = x₂ and y₁ ⊑ y₂.
If (M, ⊑) is a well-founded set, then (M × M, ⊑_lex) is well-founded as well.
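To make the definition concrete, here is a small C comparison function for the lexicographic order on pairs of natural numbers (an illustrative instance only; the text works with an arbitrary underlying order ⊑):

    #include <stdbool.h>

    typedef struct { unsigned x, y; } Pair;

    /* Returns true iff a is less than or equal to b in the lexicographic
     * order on N x N induced by the usual order on N. */
    bool lex_leq(Pair a, Pair b)
    {
        return (a.x < b.x) || (a.x == b.x && a.y <= b.y);
    }

Because the usual order on the natural numbers is well-founded, the lemma above guarantees that this pair order admits no infinite strictly descending sequence either.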
The Ackermann function defined by the following recursion is a total function over ℕ × ℕ:
ACK(x, y) = if x = 0 then y + 1
            else if y = 0 then ACK(x - 1, 1)
            else ACK(x - 1, ACK(x, y - 1))
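The same recursion written as a C function, for illustration (note that although the function is total, already modest arguments such as ACK(4, 1) exhaust a realistic call stack, which is consistent with its enormous rate of growth):

    /* Ackermann function: total on N x N, but impractical to evaluate
     * for x >= 4 because of its explosive growth and recursion depth. */
    unsigned long ack(unsigned long x, unsigned long y)
    {
        if (x == 0)
            return y + 1;
        if (y == 0)
            return ack(x - 1, 1);
        return ack(x - 1, ack(x, y - 1));
    }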
Proof: For the induction start we take the minimal element of the well-founded set,
(ℕ × ℕ, ≤_lex), where ≤_lex is the lexicographic ordering induced by ≤ on ℕ. Hence, assume (x, y) = (0, 0). By definition of
ACK, we conclude ACK(0, 0) = 0 + 1 = 1 and hence ACK(0, 0) is defined. Assume for an arbitrary (x, y) that ACK(x′, y′) is defined for all (x′, y′) <_lex (x, y).
We distinguish the following cases:
- x = 0: i.e. ACK(x, y) = y + 1 and hence defined.
- x ≠ 0 and y = 0: We know that ACK(x, 0) = ACK(x - 1, 1) and (x - 1, 1) <_lex (x, 0). From the induction hypothesis we know that ACK(x - 1, 1) is defined, and hence ACK(x, 0) as well.
- x ≠ 0 and y ≠ 0: According to the definition of ACK we have ACK(x, y) = ACK(x - 1, ACK(x, y - 1)), so two calls have to be considered:
- the inner call ACK(x, y - 1): since (x, y - 1) <_lex (x, y), from the induction hypothesis we conclude immediately that ACK(x, y - 1) is defined; call its value z.
- the outer call ACK(x - 1, z): since (x - 1, z) <_lex (x, y) independent of the values of z and y, we again can conclude from the hypothesis that ACK(x - 1, z) is defined and hence ACK(x, y) as well.
Altogether we proved that ACK(x, y) is defined for all (x, y) ∈ ℕ × ℕ.
Problem 1 (Induction)
Prove the following lemma: If (M, ⊑) is well-founded, then (M × M, ⊑_lex) is well-founded as well.
Note: The lexicographical order ⊑_lex is defined as follows: (x₁, y₁) ⊑_lex (x₂, y₂) iff x₁ ⊏ x₂, or x₁ = x₂ and y₁ ⊑ y₂.
Problem 2 (Induction)
How many points of intersection can n straight lines have at most? Find a recursive and an explicit formula and show by induction that the two formulas agree.
Problem 3 (Induction)
Prove that a number n is even if and only if n² is even.
Problem 4 (Induction)
Show by an indirect proof that there is no greatest prime number!
Problem 5 (Induction)
Which prerequisites do you need for the following order to be
- partially ordered
- totally ordered
Problem 6 (Induction)
An example of a well-founded set is the power set of a finite set, ordered by the subset relation ⊆. Under this relation some subsets are comparable while others (such as two different singletons) are not. Give a definition of a relation on the power set in such a way that it is totally ordered and well-founded.
Problem 7 (Induction)
Examine which of the following partial orders are total and which are well-founded!
- with is the power set for natural numbers.
- with marks the relation "`is factor of"'.
- with iff or ( and ).
- with is the lexicographical .
- for , i.e.
Problem 8 (Induction)
Give an order relation for the natural numbers that is
- both well-founded and total,
- total but not well-founded,
- well-founded but not total and
- neither well-founded nor total.
Problem 9 (Induction)
Prove that a partial order (M, ⊑) is well-founded iff every non-empty subset of M contains (at least) one minimal element.
Problem 10 (Induction)
A rooted tree consists (a) of a single node or (b) of a node - that is the root of the tree - and at least one, but at most finitely many, (sub)trees, each of which is connected to the root by an edge. Show formally by means of induction that in every tree the number of nodes is exactly one greater than the number of edges, i.e. #nodes = #edges + 1.
Problem 11 (Induction)
Prove: If ε is a property of the natural numbers and it is valid that
- ε(0) and
- for all n ∈ ℕ: [ε(n) → ε(n+1)],
then for all n ∈ ℕ: ε(n) is valid.
Note: The proof can be done by showing that the principle of complete induction in ℕ (which is to be proved) can be reduced to the principle of transfinite induction for well-founded orders.
Impact Cratering Lab
Part I: Impact Cratering Mechanics & Crater Morphology
Part I of this lab introduces the mechanics of crater formation and the morphology of different types of craters. The lab exercises in Part I utilize movies of impact experiments to demonstrate the formation and structure of impact craters. The diversity of crater morphologies is illustrated by images of craters.
Every body in the solar system has been subjected to the impact of objects such as comets, asteroids, or accretionary debris. The Moon, Mars, and Mercury all have heavily cratered surfaces that are the result of tens of thousands of impacts. Most satellites in the outer solar system also display thousands of impact craters. These heavily cratered surfaces record the period of an intense, solar-system-wide bombardment that ended about 3.8 billion years ago. All traces of this period have been erased from the surfaces of the Earth and Venus because these two planets have undergone relatively recent activity, including tectonic, volcanic, and erosional activity. Since this period of intense bombardment, the impact rate has been about 100 times less. However, during the past 500 million years there have been several very large impacts on Earth that have affected the entire planet.
Today, impacts from comets and asteroids still occur on all bodies in the solar system (Figure 1), including Earth. Meteor Crater (1.2 kilometers diameter, 183 meters deep, with a rim height of 30–60 meters) in northern Arizona was created by the impact of a 25-meter-diameter iron asteroid approximately 49,000 years ago. In 1908 a small asteroid exploded in the atmosphere over an unpopulated region of Tunguska in Siberia, creating an atmospheric shock wave so strong that it devastated a 10-km2 area. In July 1994, about 20 fragments of Comet Shoemaker-Levy 9 impacted the atmosphere of Jupiter and created atmospheric disturbances larger than Earth.
Figure 1. New martian crater. A new crater (white arrow) formed in the western Arcadia Planitia region of Mars between June 4, 2008, and August 10, 2008. Image Credit: NASA/ JPL/MSSS.
IMPACT CRATERING MECHANICS
When a high-speed object strikes a surface, it produces an enormous amount of energy. This is called kinetic energy because it is caused by motion. The amount of energy produced in this way depends on the mass of the impacting object and the velocity with which it strikes the surface:
KE = ½mv²
where m is the mass of the object and v is its impact velocity. Asteroids hitting Earth have impact velocities from about 11 to 25 kilometers per second (about 25,000 to 56,000 miles per hour). Very large amounts of energy are released by impacts because the amount of energy released is proportional to the square of the velocity. For instance, an iron meteorite 1 kilometer in diameter hitting the surface at a velocity of 15 kilometers per second will release more than 4 × 10²⁷ ergs of energy, the equivalent of about 100,000 one-megaton hydrogen bombs. The crater formed by such an event would measure about 10 kilometers in diameter. In an impact event the motion of the projectile (meteorite or comet) rapidly transfers kinetic energy to the planetary crust. Most of this energy takes the form of shock or pressure waves that travel at supersonic speeds through both the surface and projectile. These shock waves spread outward beneath the point of impact in a hemispherically expanding shell (Figure 2).
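The energy figure quoted above can be reproduced with a short calculation. The sketch below is in C; the iron density of roughly 7,800 kg/m³ is an assumed value, not one given in the text:

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        const double pi = 3.141592653589793;
        double radius   = 500.0;    /* 1 km diameter, in meters        */
        double density  = 7800.0;   /* assumed density of iron, kg/m^3 */
        double velocity = 15000.0;  /* 15 km/s, in m/s                 */

        double volume    = 4.0 / 3.0 * pi * pow(radius, 3.0);
        double mass      = density * volume;              /* ~4.1e12 kg */
        double ke_joules = 0.5 * mass * velocity * velocity;

        /* about 4.6e20 J, i.e. roughly 4.6e27 erg, matching the ~4 x 10^27 erg figure */
        printf("kinetic energy = %.2e J = %.2e erg\n", ke_joules, ke_joules * 1e7);
        return 0;
    }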
Figure 2. Diagram of the pressure field and flow of material in the excavation of an impact crater. The arrows show the upward and outward flow of material left behind the rapidly expanding shock wave. Ejecta fragments are initially ejected at a 45° angle. The green contours represent the peak pressures as the shock wave moves through the target material. The material lining the growing transient cavity is impact melt and the open area behind the impactor is vaporized target material. Ejecta near the impact site travels at very high speeds, whereas ejecta that emerges at greater distances travels at slower velocities. Image Credit: Illustration from an educational poster, Geological Effects of Impact Cratering, David A. Kring, NASA Univ. of Arizona Space Imagery Center, 2006. Modified from a figure in Traces of Catastrophe, Bevan M. French, 1998.
The strength of the shock waves is so great that the rocks are subjected to enormous pressures. The interaction of the shock waves with the unconfined surface, called a free surface, is responsible for excavating the crater. After passage of the shock wave, the compressed rock snaps back along the free surface. This produces a tensional wave — called a rarefaction wave — that decompresses and fractures the rock, setting it into motion along fracture planes. The net effect is to momentarily convert the rock into a fluid-like material that moves laterally upward and out of a steadily growing excavation cavity. In the meantime, the projectile has been largely destroyed by shock waves generated within it.
IMPACT CRATERING MORPHOLOGY
In general, impact craters have a raised rim with the floor at a much lower level than the surrounding terrain. Fresh craters are surrounded by low, hilly terrain and swarms of small craters caused by material ejected from the crater. The rim consists of an overturned flap of target material. The upper layers of the rim consist of material that originally occurred at a greater depth, and are therefore older. The deeper layers in the rim occurred at a shallower depth and are therefore younger. This is called inverted stratigraphy (Figure 3) because the age of the rim material gets younger with depth in the rim.
Figure 3. Illustration of inverted stratigraphy. Image Credit: Modified illustration from Guidebook to the Geology of Barringer Meteorite Crater, Arizona (a.k.a. Meteor Crater), ©2007, David A. Kring, Lunar and Planetary Institute. LPI Contribution No.1355.
Figure 4. Formation of a simple crater. Image Credit: Illustration from an educational poster, Geological Effects of Impact Cratering, David A. Kring, NASA Univ. of Arizona Space Imagery Center, 2006. Modified from a figure in Traces of Catastrophe, Bevan M. French, 1998 - modified from a figure in Impact Cratering on the Earth, Richard A. F. Grieve, Scientific American, v. 262, pp. 66-73, 1990.
The size of an impact crater depends not only on the amount of energy released by the impact, but also on the gravity field of the planet or satellite, and certain properties of the projectile and surface rocks. For a given size impact, a larger crater will form on a planet with a weaker gravity field because it is easier to excavate the material. In all cases, a crater is many times larger than the projectile that formed it. Although the diameter of a crater depends on the complex interaction of many factors, a rough approximation is that the excavation crater will be about 10 times larger than the projectile that formed it. The depth of a crater is considerably less than the diameter. For example, simple craters on the Moon have a depth/diameter ratio from 0.14 to 0.2, i.e., the diameter is about 5 to 7 times greater than the depth. For complex craters on the Moon (larger than 20 kilometers in diameter), the depth/diameter ratio ranges from 0.1 to 0.05, i.e., the diameter is from 10 to 20 times larger than the depth. This is because slumping of the inner walls and formation of the central peak causes a shallower depth.
There are two basic types of impact crater, simple (Figure 4) and complex (Figure 5). A simple impact crater is bowl-shaped with no interior structure. The size of simple craters corresponds closely to the maximum diameter of the excavation cavity. However, at larger diameters, the crater begins to change form. This takes place at a critical diameter called the transition diameter (Figure 6), which generally depends on the gravity field of the planet or satellite. The transition diameter usually is smaller on planets with a larger gravity field.
Complex craters have interior terraces and flat floors surrounding central peaks. In large craters the excavation cavity is enlarged by inward slumping of the crater walls. This produces terraces on the interior walls. The energy of impact is so great that a certain fraction of the impacted material is melted to produce impact melt. Some of the impacted material is vaporized. The flat floors are a combination of impact melt and broken-up floor material called impact breccia. The mountains near the center of the crater are called central peaks. The sudden excavation of a crater causes the rocks beneath the impact point to undergo a shift from very high to very low pressures in an extremely short period of time. This causes the center of the floor to spring back or rebound into a central peak. At large diameters the central peak develops into a peak ring. Very large complex craters have multiple rings that are caused by large sections of the crust collapsing into the enormous excavation cavity.
Figure 5. Formation of a complex crater. Image Credit: Illustration from an educational poster, Geological Effects of Impact Cratering, David A. Kring, NASA Univ. of Arizona Space Imagery Center, 2006. Modified from a figure in Traces of Catastrophe, Bevan M. French, 1998 - modified from a figure in Impact Cratering on the Earth, Richard A. F. Grieve, Scientific American, v. 262, pp. 66-73, 1990.
Figure 6. Transition diameters for the terrestrial planets and the (Earth’s) Moon. Image Credit: Illustration from an educational poster, Geological Effects of Impact Cratering, David A. Kring, NASA Univ. of Arizona Space Imagery Center, 2006.
Impact Cratering Lab Exercises
Part I: Impact Cratering Mechanics and Crater Morphology
Impact Cratering Mechanics
The Vertical Impact Stack file is a movie made of a stack of 24 individual images, or frames. It is a movie of an experimental vertical impact at the NASA Ames Research Center. This type of experiment uses small projectiles fired into targets, mostly composed of quartz sand, to simulate the physical properties of solid rock at very high velocities. The experiments take place in a vacuum chamber to simulate atmosphereless conditions. A projectile is fired from an ultra-high-speed gas gun at a velocity of 6 kilometers per second. Although the impact velocity is high by most standards, it is relatively slow compared to celestial body impacts, which mostly take place at velocities between approximately 10 and 72 kilometers per second. Nevertheless, experimental impacts such as these give us good insights into the formation of large impact craters.
Open the Vertical Impact Stack file. The scale of these images is 1 pixel = 0.156 centimeters. Set the scale of these images; Analyze → Set Scale. Enter the measured distance as 1 pixel, and the known distance as 0.156 centimeters. Enter "centimeter" in the "Unit of Length" field.
From the IMAGE menu, select STACKS → START Animation. [A shortcut to starting, and stopping, any movie is to press the \ (backslash) key. There are also controls for starting/stopping the animation, and stepping through the frames, in the right side of the ImageJ applet window. Click the double arrowheads (>>) and select Stack Tools.] Watch the movie. Notice how the crater's depth and diameter grow, and how the ejecta curtain moves. Slow down the movie to get a better view of the crater formation and ejecta curtain movement. Use the number keys (1–9) to run the movie at different speeds. (The higher the number, the faster the movie plays.) Next, cycle through the movie one frame at a time using the left or right arrow keys. Notice that each frame is numbered consecutively, with 1 being the first frame.
1. Compare the stages of formation of this experimental crater with those shown for a simple crater in the previous background section.
a. On which frame of the experimental impact does the crater reach maximum depth? Measure the depth and record your answer.
b. On which frame does it reach maximum diameter? Measure the diameter and record your answer.
2. Calculate the depth/diameter ratio of this crater.
3. On the second or third frame, measure the angle of the ejecta curtain with respect to the surface using the Angle Tool, and record your result.
Close this stack of images.
Open the Oblique Impact Stack file. This is another movie of an experimental hypervelocity impact in a vacuum chamber at the NASA Ames Research Center. This time the impact occurs at an angle of 30° from the vertical in order to show the effect of an oblique impact on the distribution of ejecta. Animate the stack, then step through the frames one at a time. Impacts that occur at angles to the surface form ejecta deposits that are asymmetrical, with most of the ejecta deposited on the downrange side of the crater.
4. On the second frame, measure the angle that the ejecta curtain makes with the surface on both the left and right side of the impact, and record your results.
a. In what direction in the images was the projectile traveling?
b. How did you reach this conclusion?
5. Is the shape of this crater different from a crater formed by a vertical impact? Explain your answer.
Close this stack of images.
Impact Crater Morphology
Open the LO Crater file. This is an image of a small crater on the Moon taken in 1965 by Lunar Orbiter III. You will measure the diameter and depth of this crater and calculate the depth/diameter ratio for comparison with the depth/diameter ratios of the experimental crater and simple craters on the Moon. There are two basic types of impact crater, simple and complex. A simple crater is bowl-shaped with little interior structure. A complex crater has terraced inner walls, a central peak or peaks, and a flat floor. Simple craters on the Moon have a depth/diameter ratio from 0.14 to 0.2, i.e., the diameter is about 5 to 7 times greater than the depth. For complex craters on the Moon (larger than 20 kilometers in diameter), the depth/diameter ratio ranges from 0.1 to 0.05, i.e., the diameter is from 10 to 20 times larger than the depth. This is because slumping of the inner walls and formation of the central peak causes a shallower depth.
In this image 1 pixel = 2.15 meters. Set the scale.
6. Measure the diameter of the crater and record your result.
7. Measure the length of the shadow cast by the crater rim and determine the depth of the crater. The depth of the crater can be determined from the length of the shadow cast by the crater rim on the floor of the crater. The depth of the crater is the shadow length times the tangent of the Sun angle, or D = L tan β, where D is the depth, L is the shadow length, and β is the Sun angle. When this image was taken the Sun was 17.88° above the horizon. (If you have not yet been introduced to trigonometric functions, e.g., tangent, see your instructor.) (A small calculation sketch follows question 8.)
8. Determine the depth/diameter ratio of this crater.
a. Does it agree with the ratio for the experimental crater you measured earlier, and the ratio for simple lunar craters?
b. What does this tell you about this crater?
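For questions 7 and 8, the arithmetic can also be checked with a short program. The C sketch below is illustrative only: the Sun angle is the 17.88° given in the text, while the shadow length and diameter are placeholders for the values you actually measure.

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        const double pi = 3.141592653589793;
        double sun_angle_deg = 17.88;   /* Sun elevation when the image was taken     */
        double shadow_len_m  = 250.0;   /* placeholder: your measured shadow length   */
        double diameter_m    = 1500.0;  /* placeholder: your measured crater diameter */

        double depth = shadow_len_m * tan(sun_angle_deg * pi / 180.0);  /* D = L tan(beta) */
        printf("depth          ~ %.1f m\n", depth);
        printf("depth/diameter ~ %.3f\n", depth / diameter_m);
        return 0;
    }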
Open the Ammonius and Lambert files. Tile the images to make it easier to view both at the same time. Select Windows → Tile. These lunar craters were photographed by the Apollo astronauts in orbit around the Moon. The interior structure of each crater is very different. One is a complex crater and the other is a simple crater. On the Moon, the change from simple to complex craters is at diameters between 15 and 20 kilometers.
9. Describe the major morphological differences between the craters and state which one is simple and which is complex.
10. From the characteristics alone, which crater is the largest? Explain your answer.
Cascade the images. Select WINDOW → CASCADE. The scale for the Ammonius image is 1 pixel = 0.038 kilometers, and the scale of the Lambert image is 1 pixel = 0.0835 kilometers.
11. Measure the diameters of Ammonius and Lambert. Do your measurements agree with your answer to question 10 above?
Open the Meteor Crater file. This is a picture of Meteor Crater in northern Arizona. Its diameter is 1.2 kilometers, and its depth is 183 meters. The crater was formed approximately 49,000 years ago. It is the largest, most well-preserved crater on Earth. Relatively few (about 120) impact craters have been identified on Earth because they are rapidly erased by erosion and plate tectonics. Seventy percent of Earth’s surface is water, so many craters formed on the ocean floors and were quickly eroded. However, impact craters are well preserved on other planets and the Moon. This is the reason you have been studying impact craters on the Moon.
12. What type of impact crater is Meteor Crater? How did you reach your conclusion?
13. How does this crater differ from the same type of crater on the Moon? What are some possible reasons for these differences?
This section provides definitions for terms and concepts used for internationalized text input and a brief overview of the intended use of the mechanisms provided by Xlib.
A large number of languages in the world use alphabets consisting of a small set of symbols (letters) to form words. To enter text into a computer in an alphabetic language, a user usually has a keyboard on which there are key symbols corresponding to the alphabet. Sometimes, a few characters of an alphabetic language are missing on the keyboard. Many computer users who speak a Latin-alphabet-based language only have an English-based keyboard. They need to press a combination of keystrokes to enter a character that does not exist directly on the keyboard. A number of algorithms have been developed for entering such characters, known as European input methods, the compose input method, or the dead-keys input method.
Japanese is an example of a language with a phonetic symbol set, where each symbol represents a specific sound. There are two phonetic symbol sets in Japanese: Katakana and Hiragana. In general, Katakana is used for words that are of foreign origin, and Hiragana for writing native Japanese words. Collectively, the two systems are called Kana. Hiragana consists of 83 characters; Katakana, 86 characters.
Korean also has a phonetic symbol set, called Hangul. Each of the 24 basic phonetic symbols (14 consonants and 10 vowels) represent a specific sound. A syllable is composed of two or three parts: the initial consonants, the vowels, and the optional last consonants. With Hangul, syllables can be treated as the basic units on which text processing is done. For example, a delete operation may work on a phonetic symbol or a syllable. Korean code sets include several thousands of these syllables. A user types the phonetic symbols that make up the syllables of the words to be entered. The display may change as each phonetic symbol is entered. For example, when the second phonetic symbol of a syllable is entered, the first phonetic symbol may change its shape and size. Likewise, when the third phonetic symbol is entered, the first two phonetic symbols may change their shape and size.
Not all languages rely solely on alphabetic or phonetic systems. Some languages, including Japanese and Korean, employ an ideographic writing system. In an ideographic system, rather than taking a small set of symbols and combining them in different ways to create words, each word consists of one unique symbol (or, occasionally, several symbols). The number of symbols may be very large: approximately 50,000 have been identified in Hanzi, the Chinese ideographic system.
There are two major aspects of ideographic systems for their computer usage. First, the standard computer character sets in Japan, China, and Korea include roughly 8,000 characters, while sets in Taiwan have between 15,000 and 30,000 characters, which make it necessary to use more than one byte to represent a character. Second, it is obviously impractical to have a keyboard that includes all of a given language's ideographic symbols. Therefore a mechanism is required for entering characters so that a keyboard with a reasonable number of keys can be used. Those input methods are usually based on phonetics, but there are also methods based on the graphical properties of characters.
In Japan, both Kana and Kanji are used. In Korea, Hangul and sometimes Hanja are used. Now, consider entering ideographs in Japan, Korea, China, and Taiwan.
In Japan, either Kana or English characters are entered and a region is selected (sometimes automatically) for conversion to Kanji. Several Kanji characters can have the same phonetic representation. If that is the case, with the string entered, a menu of characters is presented and the user must choose the appropriate option. If no choice is necessary or a preference has been established, the input method does the substitution directly. When Latin characters are converted to Kana or Kanji, it is called a Romaji conversion.
In Korea, it is usually acceptable to keep Korean text in Hangul form, but some people may choose to write Hanja-originated words in Hanja rather than in Hangul. To change Hangul to Hanja, a region is selected for conversion and the user follows the same basic method as described for Japanese.
Probably because there are well-accepted phonetic writing systems for Japanese and Korean, computer input methods in these countries for entering ideographs are fairly standard. Keyboard keys have both English characters and phonetic symbols engraved on them, and the user can switch between the two sets.
The situation is different for Chinese. While there is a phonetic system called Pinyin promoted by authorities, there is no consensus for entering Chinese text. Some vendors use a phonetic decomposition (Pinyin or another), others use ideographic decomposition of Chinese words, with various implementations and keyboard layouts. There are about 16 known methods, none of which is a clear standard.
Also, there are actually two ideographic sets used: Traditional Chinese (the original written Chinese) and Simplified Chinese. Several years ago, the People's Republic of China launched a campaign to simplify some ideographic characters and eliminate redundancies altogether. Under the plan, characters would be streamlined every five years. Characters have been revised several times now, resulting in the smaller, simpler set that makes up Simplified Chinese.
As shown in the previous section, there are many different input methods used today, each varying with language, culture, and history. A common feature of many input methods is that the user can type multiple keystrokes to compose a single character (or set of characters). The process of composing characters from keystrokes is called preediting. It may require complex algorithms and large dictionaries involving substantial computer resources.
Input methods may require one or more areas in which to show the feedback of the actual keystrokes, to show ambiguities to the user, to list dictionaries, and so on. The following are the input method areas of concern.
Status Area: Intended to be a logical extension of the light-emitting diodes (LEDs) that exist on the physical keyboard. It is a window that is intended to present the internal state of the input method that is critical to the user. The status area may consist of text data and bitmaps or some combination.
Preedit Area: Intended to display the intermediate text for those languages that are composing prior to the client handling the data.
Auxiliary Area: Used for pop-up menus and customizing dialog boxes that may be required for an input method. There may be multiple auxiliary areas for any input method. Auxiliary areas are managed by the input method independent of the client. Auxiliary areas are assumed to be a separate dialog that is maintained by the input method.
The preedit data itself can be presented in one of several interaction styles:
OnTheSpot: Data is displayed directly in the application window. Application data is moved to allow preedit data to be displayed at the point of insertion.
OverTheSpot: Data is displayed in a preedit window that is placed over the point of insertion.
OffTheSpot: The preedit window is displayed inside the application window but not at the point of insertion. Often, this type of window is placed at the bottom of the application window.
Root: The preedit window is a child of the RootWindow.
It would require a lot of computing resources if portable applications had to include input methods for all the languages in the world. To avoid this, a goal of the Xlib design is to allow an application to communicate with an input method placed in a separate process. Such a process is called an input server. The server to which the application should connect is dependent on the environment when the application is started up: what the user language is and the actual encoding to be used for it. The input method connection is said to be locale-dependent. It is also user-dependent; for a given language, the user can choose, to some extent, the user-interface style of input method (if there are several choices).
Using an input server implies communications overhead, but applications can be migrated without relinking. Input methods can be implemented either as a token communicating to an input server or as a local library.
The abstraction used by a client to communicate with an input method is an opaque data structure represented by the XIM data type. This data structure is returned by the XOpenIM() function, which opens an input method on a given display. Subsequent operations on this data structure encapsulate all communication between client and input method. There is no need for an X client to use any networking library or natural language package to use an input method.
A single input server can be used for one or more languages, supporting one or more encoding schemes. But the strings returned from an input method are always encoded in the (single) locale associated with the XIM object.
Xlib provides the ability to manage a multithreaded state for text input. A client may be using multiple windows, each window with multiple text entry areas, with the user possibly switching among them at any time. The abstraction for representing the state of a particular input thread is called an input context. The Xlib representation of an input context is an XIC. See Figure 5-1 for an illustration.
An input context is the abstraction retaining the state, properties, and semantics of communication between a client and an input method. An input context is a combination of an input method, a locale specifying the encoding of the character strings to be returned, a client window, internal state information, and various layout or appearance characteristics. The input context concept somewhat matches for input the graphics context abstraction defined for graphics output.
One input context belongs to exactly one input method. Different input contexts can be associated with the same input method, possibly with the same client window. An XIC is created with the XCreateIC() function, providing an XIM argument, affiliating the input context to the input method for its lifetime. When an input method is closed with the XCloseIM() function, no affiliated input contexts should be used again (and should preferably be deleted before closing the input method).
Considering the example of a client window with multiple text entry areas, the application programmer can choose to implement the following:
As many input contexts are created as text-entry areas. The client can get the input accumulated on each context each time it looks up that context.
A single context is created for a top-level window in the application. If such a window contains several text-entry areas, each time the user moves to another text-entry area, the client has to indicate changes in the context.
Application designers can choose a range of single or multiple input contexts, according to the needs of their applications.
To obtain characters from an input method, a client must call the XmbLookupString() function or XwcLookupString() function with an input context created from that input method. Both a locale and display are bound to an input method when they are opened, and an input context inherits this locale and display. Any strings returned by the XmbLookupString() or XwcLookupString() function are encoded in that locale.
For each text-entry area in which the XmbLookupString() or XwcLookupString() function is used, there is an associated input context.
When the application focus moves to a text-entry area, the application must set the input context focus to the input context associated with that area. The input context focus is set by calling the XSetICFocus() function with the appropriate input context.
Also, when the application focus moves out of a text-entry area, the application should unset the focus for the associated input context by calling the XUnsetICFocus() function. As an optimization, if the XSetICFocus() function is called successively on two different input contexts, setting the focus on the second automatically unsets the focus on the first.
To set and unset the input context focus correctly, it is necessary to track application-level focus changes. Such focus changes do not necessarily correspond to X server focus changes.
If a single input context is used to do input for multiple text-entry areas, it is also necessary to set the focus window of the input context whenever the focus window changes.
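The flow just described (open an input method, create an input context for a window, filter events, look up composed text, and manage the input context focus) might look roughly like the following C sketch. It is a minimal outline only: error handling, the choice of interaction style, and the rest of the event-loop setup are omitted, and an already-created Display *dpy and Window win are assumed.

    #include <X11/Xlib.h>
    #include <locale.h>

    void input_loop(Display *dpy, Window win)
    {
        char buf[64];
        KeySym keysym;
        Status status;
        XEvent ev;

        setlocale(LC_ALL, "");     /* use the locale of the user's environment */
        XSetLocaleModifiers("");   /* honor XMODIFIERS, e.g. to pick a server  */

        XIM im = XOpenIM(dpy, NULL, NULL, NULL);
        XIC ic = XCreateIC(im,
                           XNInputStyle, XIMPreeditNothing | XIMStatusNothing,
                           XNClientWindow, win,
                           XNFocusWindow, win,
                           NULL);
        XSetICFocus(ic);           /* this text-entry area now has the focus   */

        for (;;) {
            XNextEvent(dpy, &ev);
            if (XFilterEvent(&ev, None))   /* let the input method see the event */
                continue;                  /* the event was consumed by the IM   */
            if (ev.type == KeyPress) {
                int len = XmbLookupString(ic, &ev.xkey, buf, sizeof buf - 1,
                                          &keysym, &status);
                if (status == XLookupChars || status == XLookupBoth) {
                    buf[len] = '\0';
                    /* buf now holds text encoded in the locale bound to the XIM */
                }
            }
        }
    }

A client with several text-entry areas would either create one XIC per area or move a single XIC between areas with XSetICFocus() and XUnsetICFocus(), as described above.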
In most input method architectures (OnTheSpot being the notable exception), the input method performs the display of its own data. To provide better visual locality, it is often desirable to have the input method areas embedded within a client. To do this, the client may need to allocate space for an input method. Xlib provides support that allows the client to provide the size and position of input method areas. The input method areas that are supported for geometry management are the status area and the preedit area.
The fundamental concept on which geometry management for input method windows is based is the proper division of responsibilities between the client (or toolkit) and the input method. The division of responsibilities is the following:
The client is responsible for the geometry of the input method window.
The input method is responsible for the contents of the input method window. It is also responsible for creating the input method window per the geometry constraints given to it by the client.
An input method can suggest a size to the client, but it cannot suggest a placement. The input method can only suggest a size: it does not determine the size, and it must accept the size it is given.
Before a client provides geometry management for an input method, it must determine if geometry management is needed. The input method indicates the need for geometry management by setting XIMPreeditArea or XIMStatusArea in its XIMStyles value returned by the XGetIMValues() function. When a client decides to provide geometry management for an input method, it indicates that decision by setting the XNInputStyle value in the XIC.
After a client has established with the input method that it will do geometry management, the client must negotiate the geometry with the input method. The geometry is negotiated by the following steps:
The client suggests an area to the input method by setting the XNAreaNeeded value for that area. If the client has no constraints for the input method, it either does not suggest an area or sets the width and height to 0 (zero). Otherwise, it sets one of the values.
The client gets the XIC XNAreaNeeded value. The input method returns its suggested size in this value. The input method should pay attention to any constraints suggested by the client.
The client sets the XIC XNArea value to inform the input method of the geometry of the input method's window. The client should try to honor the geometry requested by the input method. The input method must accept this geometry.
Clients performing geometry management must be aware that setting other IC values may affect the geometry desired by an input method. For example, the XNFontSet and XNLineSpacing values may change the geometry desired by the input method. It is the responsibility of the client to renegotiate the geometry of the input method window when it is needed.
In addition, a geometry management callback is provided by which an input method can initiate a geometry change.
A filtering mechanism is provided to allow input methods to capture X events transparently to clients. It is expected that toolkits (or clients) using the XmbLookupString() or XwcLookupString() function call this filter at some point in the event processing mechanism to make sure that events needed by an input method can be filtered by that input method. If there is no filter, a client can receive and discard events that are necessary for the proper functioning of an input method. The following provides a few examples of such events:
Expose events that are on a preedit window in local mode.
Events can be used by an input method to communicate with an input server. Such input server protocol-related events have to be intercepted if the user does not want to disturb client code.
Key events can be sent to a filter before they are bound to translations such as Xt provides.
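A minimal C sketch of the client-side pattern is shown below; the display, window, input context, the extra event-mask bits, and the dispatch() routine are assumptions of this example rather than part of the Xlib specification.

#include <X11/Xlib.h>

void event_loop(Display *dpy, Window win, XIC ic, void (*dispatch)(XEvent *))
{
    unsigned long filter_events = 0;
    XEvent ev;

    /* Merge the events required by the input method into the window's mask. */
    XGetICValues(ic, XNFilterEvents, &filter_events, NULL);
    XSelectInput(dpy, win, ExposureMask | KeyPressMask | filter_events);

    for (;;) {
        XNextEvent(dpy, &ev);
        if (XFilterEvent(&ev, None))
            continue;           /* the input method consumed this event */
        dispatch(&ev);          /* normal client processing */
    }
}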
Clients are expected to get the XIC XNFilterEvents value and augment the event mask for the client window with that event mask. This mask can be 0. | http://docs.oracle.com/cd/E19683-01/816-0280/xtxlib-7/index.html | 13
61 | Your guide to a winning display. (TABLES, CHARTS, AND GRAPHS) How do you keep track of the data from your science experiment? And how do you turn the collected information into something visually interesting, such as charts and graphs? First, read "Hit the Waves" on p. 18. Then, follow this step-by-step guide to practice making tables, graphs, and charts.
1. Data Table
Use a data table to record your experiment findings. An organized data table should list your independent variables clearly. It should also have blank spaces for you to fill in the data from your experiment. Suppose you want to find out how Atlantic hurricane strength has varied from year to year. Hurricanes are categorized on a strength level scale of 1 through 5. The stronger the storm is, the higher is its category number. The category numbers are your independent variables. And the number of hurricanes is your dependent variable.
To make a data table:
1. Draw a blank data table.
2. Give your table a title that identifies your variables ("The Yearly Change in Hurricane Strength").
3. Label the column on the left as the independent variable (Category). Underneath, list the category numbers (1, 2, 3, 4, and 5).
4. Label the columns to the right as the dependent variable (Number of Hurricanes). Draw boxes under these columns in which you can record the results in each year for each category.
5. Include columns at the far right to record the average number of hurricanes for each category. To calculate the average, simply find the total number of hurricanes in each category. Then divide the total by the number of years.
Your Turn: Complete the data table by calculating the average number of hurricanes for categories 2, 3, 4, and 5. Round your answers to the nearest tenth.
2. Bar Graph
Use a bar graph to compare your variables.
A bar graph is a great way to show how the independent variables stack up against each other. The graph below compares the number of hurricanes for each category in 2004.
To make a bar graph:
1. On graph paper, draw a set of axes (x and y).
2. Give your bar graph a title ("Hurricane Strengths in 2004").
3. Label the horizontal (x) axis with your independent variable (Category), including a label of each category number (1, 2, 3, 4, and 5).
4. Label the vertical (y) axis with your dependent variable (Number of Hurricanes) and a scale from 0 to at least the highest number in your dependent variable results.
5. For each independent variable, draw a solid bar to the height of the corresponding value of the dependent variable. Example: The number of category 1 hurricanes is 2. Draw a bar above the "1" label on the x-axis to the "2" mark on the y-axis.
Your Turn: Use the information in the data table on the previous page to complete the bar graph.
3. Line Graph
Use a line graph to pinpoint changes in your data. Choose a line graph when you want to see how continuous changes to the independent variable affect the dependent variable. For example, instead of comparing hurricane categories, you choose to focus on how the number of hurricanes in category 4 changed from year to year. The independent variable is now the year, and the dependent variable is the number of hurricanes.
To make a line graph:
1. On graph paper, draw a set of axes (x and y).
2. Give your line graph a title (The Number of Category 4 Hurricanes From 2002 to 2004).
3. Label the x-axis with your independent variable (Year) with the values of the independent variable (2002, 2003, and 2004).
4. Label the y-axis with your dependent variable (Number of Hurricanes). Use a scale from 0 to at least the highest number in your dependent variable results.
5. Plot a point on the graph for each piece of data. Example: The number of category 4 hurricanes in 2002 was 1. To locate this point on your graph, draw an imaginary line up from the "2002" mark on the x-axis and an imaginary line across from the "1" mark on the y-axis. Plot the point where the imaginary lines intersect.
6. Once you've plotted the points for all your data, connect the points.
4. Pie Chart
Use a pie chart to illustrate numbers expressed in percentages of a whole.
A pie chart is a circle divided into wedge-shaped sections. The circle represents 100 percent. Wedges inside that circle represent data that are percentages of the whole.
Suppose you decide to graph the percentage of hurricanes in each category that occurred during 2004. The total number of hurricanes in 2004 represents 100 percent. And each hurricane category represents a different wedge of the pie chart.
To make a pie chart:
1. Use a compass to draw a circle.
2. Give your pie chart a title ("Profile of Hurricane Strengths in 2004").
3. Mark the center of the circle with a point; this is where each pie "slice," or wedge, will start.
4. Measure a wedge for each independent variable (Category 1, 2, 3, 4, or 5). First, convert your data into percentages. Do so by dividing the number of hurricanes in each category in 2004 by the total number of hurricanes in that same year.
Example: Two of nine hurricanes in 2004 were Category 1 hurricanes. 2 / 9 = .22, or 22%, when rounded to the nearest hundredth. Then convert your data from percentages to angle degrees. Example: 22 percent of the total number of hurricanes in 2004 were a category 1, so the pie wedge for category 1 would be 22 percent of the 360° circle, or 79° (360 x .22 = 79.2, rounded to 79). Position a protractor at the center point of the circle. Mark 0° and 79° angles with points on the edge of the circle. Draw a line from these points to the center of the circle.
5. Label the wedge (include its percentage).
6. Measure your next wedge from the edge of the first. When finished, the entire circle should be filled and the angles of the wedges should add up to 360°.
Your Turn: Calculate the percentages of hurricanes in 2004 for the other categories in the data table. Use this information to complete the pie chart.
1. DATA TABLE
Category 2: 1; Category 3: 1.3; Category 4: 1.7; Category 5: 0.7
2. BAR GRAPH
3. LINE GRAPH
4. PIE CHART
Profile of Hurricane Strengths in 2004
Category 1: 22%
Category 2: 11%
Category 3: 22%
Category 4: 33%
Category 5: 11%
Note: Table made from pie chart. | http://www.thefreelibrary.com/Your+guide+to+a+winning+display.+(TABLES%2C+CHARTS%2C+AND+GRAPHS.-a0151662182 | 13
97 | This HTML version of the book is provided as a convenience, but some math equations are not translated correctly. The PDF version is more reliable.
Chapter 6 Zero-finding
6.1 Why functions?
The previous chapter explained some of the benefits of functions, including
Another reason you should consider using functions is that many of the tools provided by MATLAB require you to write functions. For example, in this chapter we will use fzero to find solutions of nonlinear equations. Later we will use ode45 to approximate solutions to differential equations.
6.2 Maps
In mathematics, a map is a correspondence between one set called the domain and another set called the range. For each element of the domain, the map specifies the corresponding element of the range.
You can think of a sequence as a map from positive integers to elements. You can think of a vector as a map from indices to elements. In these cases the maps are discrete because the elements of the range are countable.
You can also think of a function as a map from inputs to outputs, but in this case the range is continuous because the inputs can take any value, not just integers. (Strictly speaking, the set of floating-point numbers is discrete, but since floating-point numbers are meant to represent real numbers, we think of them as continuous.)
6.3 A note on notation
In this chapter I need to start talking about mathematical functions, and I am going to use a notation you might not have seen before.
If you have studied functions in a math class, you have probably seen something like
f(x) = x2 − 2x − 3
which is supposed to mean that f is a function that maps from x to x2 − 2x − 3. The problem is that f(x) is also used to mean the value of f that corresponds to a particular value of x. So I don’t like this notation. I prefer
f: x → x2 − 2x − 3
which means “f is the function that maps from x to x2 − 2x − 3.” In MATLAB, this would be expressed like this:
function res = error_func(x)
    res = x^2 - 2*x -3;
end
I’ll explain soon why this function is called error_func. Now, back to our regularly-scheduled programming.
6.4 Nonlinear equations
What does it mean to “solve” an equation? That may seem like an obvious question, but I want to take a minute to think about it, starting with a simple example: let’s say that we want to know the value of a variable, x, but all we know about it is the relationship x2 = a.
If you have taken algebra, you probably know how to “solve” this equation: you take the square root of both sides and get x = √a. Then, with the satisfaction of a job well done, you move on to the next problem.
But what have you really done? The relationship you derived is equivalent to the relationship you started with—they contain the same information about x—so why is the second one preferable to the first?
There are two reasons. One is that the relationship is now “explicit in x;” because x is all alone on the left side, we can treat the right side as a recipe for computing x, assuming that we know the value of a.
The other reason is that the recipe is written in terms of operations we know how to perform. Assuming that we know how to compute square roots, we can compute the value of x for any value of a.
When people talk about solving an equation, what they usually mean is something like “finding an equivalent relationship that is explicit in one of the variables.” In the context of this book, that’s what I will call an analytic solution, to distinguish it from a numerical solution, which is what we are going to do next.
To demonstrate a numerical solution, consider the equation x2 − 2x = 3. You could solve this analytically, either by factoring it or by using the quadratic formula, and you would discover that there are two solutions, x=3 and x=−1. Alternatively, you could solve it numerically by rewriting it as x = √(2x+3).
This equation is not explicit, since x appears on both sides, so it is not clear that this move did any good at all. But suppose that we had some reason to expect there to be a solution near 4. We could start with x=4 as an “initial guess,” and then use the equation x = √(2x+3) iteratively to compute successive approximations of the solution.
Here’s what would happen:
>> x = 4;
>> x = sqrt(2*x+3)
x = 3.3166
>> x = sqrt(2*x+3)
x = 3.1037
>> x = sqrt(2*x+3)
x = 3.0344
>> x = sqrt(2*x+3)
x = 3.0114
>> x = sqrt(2*x+3)
x = 3.0038
After each iteration, x is closer to the correct answer, and after 5 iterations, the relative error is about 0.1%, which is good enough for most purposes.
Techniques that generate numerical solutions are called numerical methods. The nice thing about the method I just demonstrated is that it is simple, but it doesn’t always work as well as it did in this example, and it is not used very often in practice. We’ll see one of the more practical alternatives in a minute.
6.5 Zero-finding
A nonlinear equation like x2 − 2x = 3 is a statement of equality that is true for some values of x and false for others. A value that makes it true is a solution; any other value is a non-solution. But for any given non-solution, there is no sense of whether it is close or far from a solution, or where we might look to find one.
To address this limitation, it is useful to rewrite non-linear equations as zero-finding problems: define an “error function” f whose value is zero exactly when the original equation is satisfied. For x2 − 2x = 3, a natural choice is f: x → x2 − 2x − 3, so any value of x that makes f(x) = 0 is a solution of the original equation.
Zero-finding lends itself to numerical solution because we can use the values of f, evaluated at various values of x, to make reasonable inferences about where to look for zeros.
For example, if we can find two values x1 and x2 such that f(x1) > 0 and f(x2) < 0, then we can be certain that there is at least one zero between x1 and x2 (provided that we know that f is continuous). In this case we would say that x1 and x2 bracket a zero.
Here’s what this scenario might look like on a graph:
If this was all you knew about f, where would you go looking for a zero? If you said “halfway between x1 and x2,” then congratulations! You just invented a numerical method called bisection!
If you said, “I would connect the dots with a straight line and compute the zero of the line,” then congratulations! You just invented the secant method!
And if you said, “I would evaluate f at a third point, find the parabola that passes through all three points, and compute the zeros of the parabola,” then... well, you probably didn’t say that.
Finally, if you said, “I would use a built-in MATLAB function that combines the best features of several efficient and robust numerical methods,” then you are ready to go on to the next section.
6.6 fzero
fzero is a built-in MATLAB function that combines the best features of several efficient and robust numerical methods.
In order to use fzero, you have to define a MATLAB function that computes the error function you derived from the original nonlinear equation, and you have to provide an initial guess at the location of a zero.
We’ve already seen an example of an error function:
function res = error_func(x)
    res = x^2 - 2*x -3;
end
You can call error_func from the Command Window, and confirm that there are zeros at 3 and -1.
>> error_func(3)
ans = 0
>> error_func(-1)
ans = 0
But let’s pretend that we don’t know exactly where the roots are; we only know that one of them is near 4. Then we could call fzero like this:
>> fzero(@error_func, 4)
ans = 3.0000
Success! We found one of the zeros.
The first argument is a function handle that names the M-file that evaluates the error function. The @ symbol allows us to name the function without calling it. The interesting thing here is that you are not actually calling error_func directly; you are just telling fzero where it is. In turn, fzero calls your error function—more than once, in fact.
The second argument is the initial guess. If we provide a different initial guess, we get a different root (at least sometimes).
>> fzero(@error_func, -2)
ans = -1
Alternatively, if you know two values that bracket the root, you can provide both:
>> fzero(@error_func, [2,4])
ans = 3
The second argument here is actually a vector that contains two elements. The bracket operator is a convenient way (one of several) to create a new vector.
You might be curious to know how many times fzero calls your function, and where. If you modify error_func so that it displays the value of x every time it is called and then run fzero again, you get:
>> fzero(@error_func, [2,4])
x = 2
x = 4
x = 2.75000000000000
x = 3.03708133971292
x = 2.99755211623500
x = 2.99997750209270
x = 3.00000000025200
x = 3.00000000000000
x = 3
x = 3
ans = 3
Not surprisingly, it starts by computing f(2) and f(4). After each iteration, the interval that brackets the root gets smaller; fzero stops when the interval is so small that the estimated zero is correct to 16 digits. If you don’t need that much precision, you can tell fzero to give you a quicker, dirtier answer (see the documentation for details).
6.7 What could go wrong?
The most common problem people have with fzero is leaving out the @. In that case, you get something like:
>> fzero(error_func, [2,4])
??? Input argument "x" is undefined.
Error in ==> error_func at 2
x
Which is a very confusing error message. The problem is that MATLAB treats the first argument as a function call, so it calls error_func with no arguments. Since error_func requires one argument, the message indicates that the input argument is “undefined,” although it might be clearer to say that you haven’t provided a value for it.
Another common problem is writing an error function that never assigns a value to the output variable. In general, functions should always assign a value to the output variable, but MATLAB doesn’t enforce this rule, so it is easy to forget. For example, if you write:
function res = error_func(x)
    y = x^2 - 2*x -3
end
and then call it from the Command Window:
>> error_func(4)
y = 5
It looks like it worked, but don’t be fooled. This function assigns a value to y, and it displays the result, but when the function ends, y disappears along with the function’s workspace. If you try to use it with fzero, you get
>> fzero(@error_func, [2,4])
y = -3
??? Error using ==> fzero
FZERO cannot continue because user supplied function_handle ==> error_func
failed with the error below.
Output argument "res" (and maybe others) not assigned during call to
"/home/downey/error_func.m (error_func)".
If you read it carefully, this is a pretty good error message (with the quibble that “output argument” is not a good synonym for “output variable”).
You would have seen the same error message when you called error_func from the interpreter, if only you had assigned the result to a variable:
>> x = error_func(4)
y = 5
??? Output argument "res" (and maybe others) not assigned during call to
"/home/downey/error_func.m (error_func)".
Error in ==> error_func at 2
y = x^2 - 2*x -3
You can avoid all of this if you remember these two rules:
When you write your own functions and use them yourself, it is easy for mistakes to go undetected. But when you use your functions with MATLAB functions like fzero, you have to get it right!
Yet another thing that can go wrong: if you provide an interval for the initial guess and it doesn’t actually contain a root, you get
>> fzero(@error_func, [0,1]) ??? Error using ==> fzero The function values at the interval endpoints must differ in sign.
There is one other thing that can go wrong when you use fzero, but this one is less likely to be your fault. It is possible that fzero won’t be able to find a root.
fzero is generally pretty robust, so you may never have a problem, but you should remember that there is no guarantee that fzero will work, especially if you provide a single value as an initial guess. Even if you provide an interval that brackets a root, things can still go wrong if the error function is discontinuous.
6.8 Finding an initial guess
The better your initial guess (or interval) is, the more likely it is that fzero will work, and the fewer iterations it will need.
When you are solving problems in the real world, you will usually have some intuition about the answer. This intuition is often enough to provide a good initial guess for zero-finding.
Another approach is to plot the function and see if you can approximate the zeros visually. If you have a function, like error_func, that takes a scalar input variable and returns a scalar output variable, you can plot it with ezplot:
>> ezplot(@error_func, [-2,5])
The first argument is a function handle; the second is the interval you want to plot the function in.
By default ezplot calls your function 100 times (each time with a different value of x, of course). So you probably want to make your function silent before you plot it.
6.9 More name collisions
Functions and variables occupy the same “name-space,” which means that whenever a name appears in an expression, MATLAB starts by looking for a variable with that name, and if there isn’t one, it looks for a function.
As a result, if you have a variable with the same name as a function, the variable shadows the function. For example, if you assign a value to sin, and then try to use the sin function, you might get an error:
>> sin = 3;
>> x = 5;
>> sin(x)
??? Index exceeds matrix dimensions.
In this example, the problem is clear. Since the value of sin is a scalar, and a scalar is really a 1x1 matrix, MATLAB tries to access the 5th element of the matrix and finds that there isn’t one. Of course, if there were more distance between the assignment and the “function call,” this message would be pretty confusing.
But the only thing worse than getting an error message is not getting an error message. If the value of sin was a vector, or if the value of x was smaller, you would really be in trouble.
>> sin = 3;
>> sin(1)
ans = 3
Just to review, the sine of 1 is not 3!
The converse error can also happen if you try to access an undefined variable that also happens to be the name of a function. For example, if you have a function named f, and then try to increment a variable named f (and if you forget to initialize f), you get
>> f = f+1
??? Error: "f" previously appeared to be used as a function
or command, conflicting with its use here as the name of a variable.
A possible cause of this error is that you forgot to initialize the
variable, or you have initialized it implicitly using load or eval.
At least, that’s what you get if you are lucky. If this happens inside a function, MATLAB tries to call f as a function, and you get this
??? Input argument "x" is undefined.
Error in ==> f at 3
y = x^2 - a
There is no universal way to avoid these kind of collisions, but you can improve your chances by choosing variable names that don’t shadow existing functions, and by choosing function names that you are unlikely to use as variables. That’s why in Section 6.3 I called the error function error_func rather than f. I often give functions names that end in func, so that helps, too.
6.10 Debugging in four acts
When you are debugging a program, and especially if you are working on a hard bug, there are four things to try: reading, running, ruminating, and retreating.
Beginning programmers sometimes get stuck on one of these activities and forget the others. Each activity comes with its own failure mode.
For example, reading your code might help if the problem is a typographical error, but not if the problem is a conceptual misunderstanding. If you don’t understand what your program does, you can read it 100 times and never see the error, because the error is in your head.
Running experiments can help, especially if you run small, simple tests. But if you run experiments without thinking or reading your code, you might fall into a pattern I call “random walk programming,” which is the process of making random changes until the program does the right thing. Needless to say, random walk programming can take a long time.
The way out is to take more time to think. Debugging is like an experimental science. You should have at least one hypothesis about what the problem is. If there are two or more possibilities, try to think of a test that would eliminate one of them.
Taking a break sometimes helps with the thinking. So does talking. If you explain the problem to someone else (or even yourself), you will sometimes find the answer before you finish asking the question.
But even the best debugging techniques will fail if there are too many errors, or if the code you are trying to fix is too big and complicated. Sometimes the best option is to retreat, simplifying the program until you get to something that works, and then rebuild.
Beginning programmers are often reluctant to retreat, because they can’t stand to delete a line of code (even if it’s wrong). If it makes you feel better, copy your program into another file before you start stripping it down. Then you can paste the pieces back in a little bit at a time.
To summarize, here’s the Ninth Theorem of debugging:
Finding a hard bug requires reading, running, ruminating, and sometimes retreating. If you get stuck on one of these activities, try the others.
The density of a duck, ρ, is 0.3 g / cm3 (0.3 times the density of water).
The volume of a sphere with radius r is 4/3 π r3.
If a sphere with radius r is submerged in water to a depth d, the volume of the sphere below the water line is π d2 (3r − d) / 3.
An object floats at the level where the weight of the displaced water equals the total weight of the object.
Assuming that a duck is a sphere with radius 10 cm, at what depth does a duck float?
Here are some suggestions about how to proceed:
| http://www.greenteapress.com/matlab/html/book008.html | 13
60 | Several scaling relationships are important in cell structure. One is dimensional scaling—how the volume, surface area, and length of a cell and its substructures relate to one another as the cell grows. These relationships depend both on the cell’s shape and the manner in which the cell grows. Consider the case where a cell doubles in volume before division and then divides to form two daughters of similar shape. In the ideal case of isotropic, three-dimensional growth of a spherical cell, a doubling of volume requires a roughly 60% increase in surface area and a 25% increase in diameter. There is a disparity as volume has increased two-fold but surface area has not. To achieve similarity of shape, dividing spherical cells must therefore provide more membrane surface area19,20
and/or remove volume.21
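The 60% and 25% figures follow directly from the geometry of a sphere: doubling the volume multiplies the diameter by 2^(1/3) and the surface area by 2^(2/3). The short C check below is only that arithmetic and assumes nothing beyond the ideal spherical shape discussed above.

#include <stdio.h>
#include <math.h>

int main(void)
{
    double diameter_factor = pow(2.0, 1.0 / 3.0);   /* ~1.26, i.e. ~26% larger */
    double area_factor     = pow(2.0, 2.0 / 3.0);   /* ~1.59, i.e. ~59% larger */

    printf("diameter increase:     %.0f%%\n", (diameter_factor - 1.0) * 100.0);
    printf("surface area increase: %.0f%%\n", (area_factor - 1.0) * 100.0);
    return 0;
}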
Different scaling relationships between cell dimensions can be achieved during the cell division cycle for cells with non-spherical cell shapes or polarized growth. For the roughly cylindrical fission yeast Schizosaccharomyces pombe
, growth of the cell occurs by elongation along the cylindrical axis while preserving the cross-sectional area.22
In effect, the geometry has been reduced from three dimensions to one, and so volume increases are linearly proportional to both surface area and axial length. Thus, cell division requires no dramatic shape changes. The situation is more complicated in budding yeast, Saccharomyces cerevisiae
, where the mother cell volume remains largely constant while the bud size shows a combination of polarized (or apical) and isotropic growth.23
Here, the shape of the mother-bud pair just before division has roughly double the original volume and surface area, similar to the case in fission yeast. However, the increases in volume and surface area during bud growth are less straightforward to correlate with one another.
Figure 2 The following cartoons illustrate three cell growth patterns. In each case, the cell (left purple) grows as illustrated by the grey arrows until total cell volume (V) increases to twice its original value (middle purple) with some increase in surface area …
Even the most basic question of how cell growth scales with time or progression along the cell cycle has also been a matter of intense study and debate. The two most prevalent models for cell growth are linear and exponential, which are familiar as the solutions to zeroth- and first-order rate equations. Both linear24
and exponential growth25,26
have been reported in various cell types, though these possibilities are often difficult to distinguish,27,28
and more complex growth patterns have also been reported.26,29,30
In the former, the rate at which a cell increases in size (typically measured by volume) is independent of its current size, and is hypothesized to be a result of a growth-limiting factor such as nutrient import which holds constant regardless of cell size.24
Exponential growth occurs when there is a linear relationship between growth rate and size, suggesting that the larger cell has proportionately more capacity for metabolism and growth.28
At a simple level, cells are membrane-bound structures that include smaller sub-structures or organelles. Each of these organelles performs a specific function and shows characteristic morphologies, though these vary from organism to organism. As the cell grows, generally organelles do as well to accommodate the typically greater need for their functions.31
How organelle size scales with cell size is a question garnering more attention as advances in microscopy and other imaging techniques allow for better quantification of cell and organelle size. A simple model to consider is that functional need for organelles increases with cell size, and that correspondingly organelle size increases with a straightforward linear scaling relationship. However, even in this overly basic framework, several questions arise. What is the relevant measure of size, both for the cell and for the organelle? Does the relevant cell size measure differ among the organelles? What is the relationship between organelle size and function, and how can this be measured?
Morphology can impact function in several ways. In terms of capacity, intuitively, as an organelle gets bigger, it will be able to perform more of the functions for which it is responsible, including metabolism, signaling, storage and homeostasis. Most organelles have membrane and lumenal environments to perform these functions, and these parameters are characterized by surface area and volume, respectively. The balance between two- and three-dimensional size determines the possible shapes of the organelle.
There are several transport factors that can depend on morphology. The first is transport between the organelle and other points in the cell. A centralized organelle like the nucleus will obviously sample a much more restricted region of the cell than a distributed network, and this localization can affect how long it takes to deliver cargo to and from that organelle. Further, cellular transport occurs by many mechanisms, and we will discuss two of these: diffusion and active transport using motor proteins traveling along the cytoskeleton. These modes of transport show different dependencies of average transport time vs. distance. Many signaling processes involve influx and subsequent diffusion of Ca2+ ions or other species that are generally found at varying concentrations.32 The effective distance, d, over which the signaling species propagates scales as the square root of time (d ∝ √t), and therefore diffusion is more efficient at shorter length-scales. Motor protein-mediated transport requires input of ATP and allows the cargo to be targeted to particular destinations. Distance traveled scales linearly with time (d ∝ t), which is more efficient than diffusion at longer length-scales. Then, there is transport of material from inside to outside the organelle (or vice versa), which is achieved by a number of passive or active mechanisms. The rate of these processes is likely limited in part by the amount of membrane or surface area in the organelle, and the surface area-to-volume ratio may be regulated to achieve the appropriate amount of transport back-and-forth between the organelle interior and cytoplasm. Such regulation would result in changes in membrane or lumen amounts and can occur somewhat independently for the two parameters, and this would be reflected in the overall shape of the organelle.
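To make the crossover between the two transport modes concrete, the sketch below evaluates the scaling relations just described (time roughly quadratic in distance for diffusion, linear for motor transport). The diffusion coefficient and motor speed are illustrative assumptions chosen only to show the trend; they are not values taken from this article.

#include <stdio.h>

int main(void)
{
    const double D = 10.0;   /* assumed diffusion coefficient, um^2/s */
    const double v = 1.0;    /* assumed motor speed, um/s             */
    const double L[] = { 1.0, 10.0, 100.0 };   /* distances, um       */

    for (int i = 0; i < 3; i++) {
        double t_diffusion = L[i] * L[i] / D;   /* ~L^2/D, up to a geometric factor */
        double t_motor     = L[i] / v;          /* L/v                              */
        printf("L = %6.1f um:  diffusion ~ %8.1f s,  motor ~ %6.1f s\n",
               L[i], t_diffusion, t_motor);
    }
    return 0;
}

At micrometer distances diffusion is faster; at the scale of large cells the motor-driven mode wins, which is the point made above.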
Just as cells show different growth patterns, there are several ways by which organelle size can increase: (I) Isotropic, three-dimensional expansion is generally found for larger, round organelles such as the nucleus. (II) Distributed networks—such as found for mitochondria and tubular endoplasmic reticulum (ER) in certain cells—are propagated by increasing the linear length of the network, keeping the cross-sectional dimensions roughly constant. (III) Organelles existing in multiple copies can simply be increased in number, as is the case for peroxisomes. Combinations of these scaling behaviors are also possible, as for example in the fungal vacuole, which can be found in a single, round morphology (I) as well as a fragmented collection of smaller vesicles (III).
Furthermore, each of these cases has different scaling properties as organelle and cell size increase. Linear volume scaling between an organelle showing isotropic growth and the cell would give a constant proportion between organelle and cell dimensions (purple sphere). With only one such organelle, it is trivial to optimize the transport distances to other places in the cell by placing it in the center. However, having two or more organelles necessarily breaks this symmetry, leaving at least one further away from some areas of the cell. This effect is well illustrated in electron tomography micrographs of the smallest known eukaryote, Ostreococcus tauri. O. tauri
cells contain only single copies of several organelles, which are tightly packed into a relatively small cell volume, with many organelles located at the cell periphery.33
Because of this organism’s size, all organelles are still within a short distance of all other points in the cell volume. However, these distances will grow longer with larger cells, and this may be a reason why in other organisms, only a small number of organelles tend to have this kind of morphology. Thus, organelle shape can affect cellular organization.
Figure 3 Cartoon illustration of the scaling of organelles with cell size. The cell on the right is twice the diameter of the left. The eight-fold increase in cell volume is correlated to a proportionate increase in: (I) volume for a centralized organelle (purple), …
Organelles with network morphology (black lines) or multiple copies (red spheres) can be more readily distributed around the cell as needed. An interesting scenario arises for linear networks that are localized to the periphery of an isotropically growing cell, as in the case of mitochondria in S. cerevisiae. Assuming linear scaling between cell volume and organelle length, as the cell grows its surface area increases as Vcell^(2/3), meaning there will be proportionately less area per unit network length, and the area density of the organelle will increase. Network properties such as branching and spacing can adjust to accommodate this density scaling to a point. There is an upper limit on how much packing can be achieved, indicating a transition point at which the network is forced to enter into the cell volume to access more available space. With these general considerations in mind, we next consider the scaling and size regulation of particular organelles. | http://pubmedcentralcanada.ca/pmcc/articles/PMC2901812/?lang=en-ca | 13
57 | Heat spreading is essentially area enlarging: the larger the area, the more heat can be removed at the same temperature difference (subject to certain limits). Unfortunately, except for the simplest of cases, the equations describing heat spreading physics do not have an explicit mathematical solution. Hence, we have to rely on clever approximations or suitable computer codes. This article discusses the basics of heat spreading starting with the problematic definitions of thermal resistance and maximum allowable flux.
The Thermal Resistance Conundrum
Thermal resistance data published by vendors are widely used to compare the performance of various products, be it semiconductor packages, LEDs or heat sinks. The common notation is Rth = ΔT / q (in K/W), with ΔT the temperature difference between two faces and q the heat that flows between them.
However, we rarely have constant temperatures at the two faces where the heat flux originates and where the same flux leaves. Here is a formal definition of thermal resistance:
The temperature difference between two isothermal surfaces divided by the heat that flows between them is the thermal resistance of a) the materials enclosed between the two isothermal surfaces and b) the heat flux tube originating and ending on the boundaries of the two isothermal surfaces.
Figure 1. Two isothermal surfaces connected by a heat flux tube.
The emphasis is on 'isothermal surfaces 1 and 2' and 'flux leaving 1 equals flux entering 2', see Figure 1. Now consider Figure 2 showing two simple cases of heat spreading where all surfaces are adiabatic except the bottom surface.
Figure 2. Heat spreading from single source (left) and two sources (right).
For the single source case a constant heat transfer coefficient h at the bottom causes a non-uniform temperature profile on this face, except in the case of h or the thermal conductivity k being infinite or the trivial case of the source area equalling the substrate area. The average temperature could be used but the question in practice is: how do we get this value? Usually one thermocouple is used at the center, but one should realize that the thermal resistance from source to thermocouple defined in this way is by definition dependent on the boundary conditions because these determine the temperature profile and hence the temperature at the location of the thermocouple.
The only definition that is in accordance with the definition is the thermal resistance from source to ambient: Rthj-a, but this value is often useless because the boundary conditions are rarely the same. However, even this definition is lost when more than one source is present, such as in Figure 2b, because the main condition for a correct definition, the fact that the same flux has to enter and leave the resistance, cannot be fulfilled. The essential point to understand is that the concept of thermal resistance becomes meaningless when dealing with multiple sources. It is theoretically possible to solve this problem because the substrate can be split in such a way that every source has its own volume allocated to it (based on the minima of the isotherms) but then the simplicity of the concept is lost.
A possible way out is to make use of the principle of superposition. By switching off all sources except one we are able to find again the thermal resistance from that source to ambient. However, its use for design purposes is limited because the problem cannot be reduced to a parallel combination of all individual source-ambient resistances found in this way, put otherwise: no network can be constructed using these resistors. It might serve however as a way to thermally characterize multisource applications by providing a test protocol.
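As a hedged illustration of this superposition idea, the sketch below assumes a small matrix of influence coefficients Rij (temperature rise of source i per watt dissipated by source j, each column obtained by switching on one source at a time) and computes the resulting source temperatures as a matrix-vector product; the numerical values are invented purely for illustration.

#include <stdio.h>

#define N 2

int main(void)
{
    double R[N][N] = { { 8.0, 2.5 },   /* K/W: self- and cross-coupling terms */
                       { 2.5, 6.0 } };
    double q[N] = { 3.0, 1.5 };        /* W dissipated by each source         */

    for (int i = 0; i < N; i++) {
        double dT = 0.0;
        for (int j = 0; j < N; j++)
            dT += R[i][j] * q[j];      /* superpose the contributions         */
        printf("source %d: %.1f K above ambient\n", i + 1, dT);
    }
    return 0;
}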
The Flux Conundrum
Let’s calculate how much heat we may dissipate by natural convection at a ΔT of 50°C. The flux is given by h·ΔT, and with h = 12 W/m2K (including radiation) we get 600 W/m2. The conclusion is then: natural convection can handle 600 W/m2. Is this correct? Yes and no. Can we put 60 mW in one square centimeter? Yes, no problem. Can we put 600 W in one square meter with a source of a few cm2 and stay within the quoted temperature limit? No, we can’t, because the simple equation assumes not only a uniform h but also a uniform surface temperature. The problem is caused by the fact that we are dealing with increasing temperature gradients over the surface when the area increases.
Why do we always talk W/cm2? Because the thermal management field is dominated by the cooling of processors that have dissipating areas of O(cm2). Now take a junction of such a processor. The area is about 0.1*0.1 μm and with 1 μW dissipation this comes down to an amazing 10 kW/cm2. Nobody has a problem with this. Apparently it is the dissipating area that sets the limit to the maximum allowable flux. A useful rule of thumb has been formulated by Wilcoxon and Cornelius, using the following equation:
However, its use is limited to areas larger than about 0.1*0.1 mm2. It should also be realized that this limit depends not only on the source size but also on the spreader thickness, area, thermal conductivity and boundary conditions. In summary, what is important to realize is that it does not make sense to use the same heat flux limits for processors and LEDs alike. The author suggests quoting the “raw” data related to the size of the source: e.g., for a junction, W/μm2; for a die, W/cm2; for a TV backplane, W/m2.
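The figures quoted in this section are easy to check; the sketch below simply reproduces the h·ΔT estimate and the junction flux conversion from the text, with no additional data assumed.

#include <stdio.h>

int main(void)
{
    /* Natural convection example: flux = h * dT */
    double h  = 12.0;                  /* W/(m^2 K), convection plus radiation */
    double dT = 50.0;                  /* K */
    printf("natural convection flux: %.0f W/m^2\n", h * dT);          /* 600 */

    /* Junction example: 1 uW dissipated over 0.1 um x 0.1 um */
    double q    = 1e-6;                /* W   */
    double area = 0.1e-6 * 0.1e-6;     /* m^2 */
    double flux = q / area;            /* W/m^2 */
    printf("junction flux: %.0f W/m^2 = %.0f kW/cm^2\n",
           flux, flux / 1e4 / 1e3);    /* W/m^2 -> W/cm^2 -> kW/cm^2 */
    return 0;
}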
Basic Principles of Heat Spreading
Single Source, Single Layer
Contrary to what is believed by many designers: heat spreading is not a trivial issue. Consider the simple configuration shown in Figure 2. A square source with zero thickness of size As centrally located on a square plate of size A, thickness d and thermal conductivity k dissipates q W. The top and sides of the plate are adiabatic (insulated), the bottom ‘sees’ a uniform heat transfer coefficient h (W/m2K). The remarkable thing is that even for this simple configuration no explicit solution is known for the description of heat spreading.
Observing the exact implicit solution of the governing differential equations reveals the source of the complexity of heat spreading: it is not possible to separate the convection and conduction parts. In other words, changing the heat transfer coefficient changes also the value of the spreading resistance. Consequently, it is not possible to write the problem in terms of one conduction resistance describing the heat spreading inside the solid and one convection resistance describing the boundary condition because the two are dependent.
There is one exception: the analysis becomes much more straightforward when the temperature gradients over the area that is in contact with the environment can be neglected. In other words, a uniform temperature may be assumed. Such is often the case with relatively small heat sinks and spreaders.
A final complexity stems from the fact that decreasing the thickness of the plate does not automatically result in a decrease in temperature, caused by the fact that a smaller thickness also implies a decrease in spreading capability. Hence, for a certain combination of thickness and thermal conductivity, given the boundary conditions and the dimensions, a minimum in the total thermal resistance may be found.
Single Source, Multiple Layers
In practice we often have to deal with multiple layers.
Figure 3. Cross section through one planar source on a submount and a heat spreader.
Figure 3 shows a cross section of a single source on a submount, which is itself connected to a second spreader. Closer observation reveals two heat spreading effects: one from submount to spreader, and one from the source to the submount. To first order, this problem can be handled as a single layer problem, provided the boundary condition of the first layer is replaced by the spreading plus the boundary condition of the second layer.
Multiple sources add another layer of complexity because the coupling between the sources is not only dependent on the dimensions and physical properties but also on the boundary conditions and, worst of all, on the dissipation of the sources themselves. Using superposition techniques is the recommended approach.
Approximate Heat Spreading Solutions for the Single Source-Single Layer Problem
In the following, three approximate solutions will be discussed.
The 1D-Series-Resistance Approach
Equation 3 describes the temperature rise of the source due to dissipation q via a total thermal resistance Rtotal from junction to ambient. While it is physically correct, splitting up this resistance according to equations 4 and 5 is not.
with g=f (As, A) some geometrical factor representing the spreading, e.g., the 45° rule.
A long time ago, the author derived a simple rule-of-thumb for the case of a single source on a plate with uniform heat transfer coefficients at both sides, mainly for flat heat sinks. The assumptions are that a surface source can be replaced by a volume source extending over the thickness and that a square plate can be replaced by a flat cylinder. Surprisingly, it turned out that the equation is valid over quite a large range of parameters, including low-conductivity substrates and even non-square areas up to a length/width ratio of 4. By approximating the Bessel-function solution for the cylinder by algebraic equations, the following equation results for one-sided heat transfer:
where γ is Euler’s constant, 0.577. With some care, the first term of the right-hand side could be interpreted as a convective term, the second as a conductive term, and the third as a correction term. Note: due to the negative sign this equation does not represent a simple series resistance network.
To guarantee an accuracy of better than 90% the following inequalities should be obeyed:
m is the so-called fin factor, and a and b equivalent radii for respectively source and base area, defined as:
For non-centrally placed sources it can easily be shown that simple correction factors can be applied. Calling the first term in Eq. (6) C1 and the second plus third C2, it follows:
For a source in the corner, the extra ΔT is given by 3·C2·q (exact). For a source at the center of a side, the extra ΔT is C2·q (approximation). More sophisticated correction factors can be found in an article by Lee.
Figure 4. Position of source: center, side, corner.
Song et al. and Lee et al. found a set of approximate explicit formulae that are easy to implement in a spreadsheet. They showed that the errors stay within 5% or less for the majority of cases of practical interest. The maximum spreading resistance is defined as:
where the reference temperature is the average temperature of the base area. Approximate equations are presented to calculate not only Rmax but also Ravg, based on the average temperature of the source. The interested reader may consult the references cited. Finally, the temperature rise from junction to ambient is calculated using:
At first sight this expression looks like an ordinary series resistance network. However, it should be understood that Rmax is a function of the boundary conditions, contrary to the networks discussed previously.
(Author’s Note: For the reader who wants to consult the original papers a word of caution should be issued because the three relevant papers differ in the definition of the parameters. In , the 1D conduction resistance and the spreading resistance are separate entities and Rmax is only associated with the spreading resistance. In , Rmax is defined as the sum of the two. The result is different graphs. In , equation (1) is only correct if R0 equals Rconv. However, formally R0 represents the sum of Rconv and R1D (R0 =1/hA+t/kA). Fortunately, in most cases the errors are negligible, but for those cases with h/k > 100 and d > 10 mm the errors are of the order of 10% and beyond. However, especially the condition d > 10 mm is not likely to occur in practice.)
In summary, heat spreading is a complex phenomenon that can be addressed by analytical formulae only for geometrically simple cases for which no explicit solution exists. For situations where double-sided heat transfer plays a role, or multiple sources, or multiple layers, the problem becomes intractable from an approximate analytical point of view and we have to rely on computer codes. Implicit solutions are known for multi-layer cases with multiple sources and uniform boundary conditions, even when time is a parameter.
User-friendly software exists that is based on these solutions, with the additional advantage that no mesh generation is required. The big advantage of such software is that even people with little background in heat transfer can get insight into the physics underlying heat spreading by simply changing a few parameters. For the more practical cases for which layers consist of more than one material or for which the boundary conditions cannot be considered uniform, more advanced conduction-only codes should be used. Another recommended source of information can be found on the website of the University of Waterloo. One of their papers shows a couple of graphics showing clearly the errors a designer may encounter by using equations such as Eq. (5).
What Errors Can Be Expected?
An important question is: what are the errors if we use the approximate formulae? A detailed analysis will be presented in the future; here are the most important conclusions.
Conclusions for Single layers
- The 1D approximations should not be used at all.
- The L-equation can be used, provided (h/k)·d < 0.025; e.g., h = 2500 W/m2K, k = 100 W/mK, d = 1 mm: (h/k)·d = 0.025.
- The SLA equation can be used over a very large range.
- It is strongly recommended to use a user-friendly code.
Conclusions for Two Layers
- The 1D approximations should not be used at all.
- While the L-equation is acceptable for a certain range, its use is not recommended because too many conditions have to be met to guarantee accurate values.
- The SLA-equations can still be used over a very large range but errors increase when h/k > 10, ksubm > 100 W/mK and d < 1mm
- It is strongly recommended to use a user-friendly code.
Conclusions for More Than One Source
- Be careful using simple resistance networks.
- Use superposition.
- Use a matrix-approach for thermal characterization.
Conclusions for All Other Cases
- Use a conduction-only code.
The author hopes he succeeded in conveying the message that heat spreading is not a trivial problem that can be solved by simply adding convective and conductive thermal resistances. A physically correct spreading resistance may be formulated, but it always contains the convective boundary condition. Unfortunately, the notion of a thermal spreading resistance fails when we are dealing with multiple sources and we have to rely on more sophisticated matrix methods.
- Rosten, H. and Lasance, C., “DELPHI: the Development of Libraries of Physical Models of Electronic Components for an Integrated Design Environment,” in Model Generation in Electronic Design, Eds.: Berge, J-M., Levia, O. and Rouillard, J., Kluwer Academic Press, 1995, pp. 63-90.
- Wilcoxon, R. and Cornelius, D., “Thermal Management of an LED Light Engine for Airborne Applications,” Proc. SemiTherm 22, Dallas, 2006, pp.178-185.
- Lasance, C., “Computer Analysis of Heat Transfer Problems to Check the Validity of Engineering Formulae,” Proc. 8th IHTC, San Francisco, 1986, pp. 325-330.
- Lasance, C., “Pragmatic Methods to Determine the Parameters Required for the Thermal Analysis of Electronic Systems,” in ‘Cooling of Electronics’, Eds.: Kakac, S., et al., Kluwer Academic Publishers, 1994, pp. 859-898.
- Lee, S., “Calculating Spreading Resistance in Heat Sinks,” ElectronicsCooling, Vol. 4, No. 1, January, 1998.
- Song, S., Lee, S. and Au, V., “Closed-Form Equations for Thermal Constriction/Spreading Resistances with Variable Resistance Boundary Condition,” IEPS Conference, 1994, pp. 111-121.
- Lee, S., Song, S., Au, V., Moran, K., "Constriction/Spreading Resistance Model for Electronics Packaging," ASME/JSME Thermal Engineering Conf., Vol. 4, 1995, pp. 199-206.
- Culham, J. and Yovanovich, M., “Factors Affecting the Calculation of Effective Conductivity in Printed Circuit Boards, Proc. ITHERM ’98, Seattle, 1998, pp.46-467. | http://www.electronics-cooling.com/2008/05/heat-spreading-not-a-trivial-problem/ | 13 |
58 | The Endangered Species Act (ESA) protects species identified as endangered or threatened with extinction and attempts to protect the habitat on which they depend. It is administered primarily by the Fish and Wildlife Service and also by the National Marine Fisheries Service for certain marine and anadromous species. Dwindling species are listed as either endangered or threatened according to assessments of the risk of their extinction. Once a species is listed, legal tools are available to aid its recovery and to protect its habitat. The ESA can become the visible focal point for underlying situations involving the allocation of scarce or diminishing lands or resources, especially in instances where societal values may be changing, such as for the forests of the Pacific Northwest and the waters in the Klamath River Basin. This report discusses the major provisions of the ESA, both domestic and international, and also discusses some of the background issues, such as extinction in general, and the effectiveness of the statute.
An amplified discussion is provided on four aspects of the ESA and its implementation that have raised concerns and promoted debate — listing species, designating critical habitat, consulting on projects, and exempting projects. This report provides much of the issue context for understanding individual legislative initiatives discussed in CRS Report RL33468, The Endangered Species Act (ESA) in the 109th Congress: Conflicting Values and Difficult Choices. This report will be updated as circumstances warrant.
The Endangered Species Act: A Primer
The Endangered Species Act (ESA) receives significant congressional attention. The associated power and reach of its comprehensive protection for species identified as endangered or threatened with extinction has ignited concern that there be greater bounds on this power. The following discussion provides an overview and background on the various features of the ESA that contribute to its stature and yet spark an ongoing debate over its implementation.
What is the ESA?
The Endangered Species Act (ESA) is a comprehensive attempt to protect identified species and to consider habitat protection as an integral part of that effort. It is administered primarily by the Fish and Wildlife Service (FWS), but also by the National Marine Fisheries Service (NMFS) for certain marine species. Under the ESA, species of plants and animals (both vertebrate and invertebrate) are listed as either “endangered” or “threatened” according to assessments of the risk of their extinction. Once a species is listed, powerful legal tools are available to aid the recovery of the species and to protect its habitat. As of June 28, 2009, a total of 1,893 species of animals and plants had been listed as either endangered or threatened; 1,320 of these occur in the United States and its territories and the remainder only in other countries. Of the U.S. species, 1,134 were covered by recovery plans. The authorization for funding under ESA expired on October 1, 1992, although Congress has appropriated funds in each succeeding fiscal year.
Why is the ESA controversial?
While the ESA plays an important role in protecting species, it can also become a surrogate in quarrels whose primary focus is the allocation of scarce or diminishing lands or resources. Indeed, a stated purpose of the ESA is to “provide a means whereby the ecosystems upon which endangered species and threatened species depend may be conserved.” There can be economic interests on the various sides of some vanishing species issues. Because other laws often lack the strict substantive provisions that Congress included in the ESA (see Major Provisions sections, below), the ESA often becomes a surrogate battleground in such disputes. Like the miners’ canaries, declining species are often symptoms of resource scarcities and altered ecosystems. Examples of such resource controversies include the Tellico Dam (hydropower development and construction jobs versus farmland protection and tribal graves, as well as the endangered snail darter); Northwest timber harvest (protection of logging jobs and communities versus commercial and sport fishing, recreation, and ecosystem protection, as well as salmon and spotted owls); and oil development on the energy-rich plain around the northern mountain states (coal bed methane development, grazing rights, ground water protection, traditional ranching, and a proposal for sage grouse listing in a complex and varying stew of interests). (Ultimately, a petition to list this species was judged not to be warranted.) And the worldwide debate over global warming has found its avatar in the polar bear.
In recent years, tensions over the ESA have increased as species have been added to the protected list, and as the greater demands of a growing economy and human population have affected species’ habitats. Both Congress and the Administration have sought to lessen these tensions by, among other things, tailoring application of the ESA for particular circumstances. The ESA’s critics contend that neither the ESA nor administrative efforts go far enough in accommodating needs other than species conservation, while the ESA’s defenders counter that it merely balances an inherent bias toward development in other governmental laws and policies.
Debate, pro and con, on the ESA splits largely along demographic lines. While most demographic groups support species conservation, that support is stronger among urban and suburban populations and less so in rural areas, and is stronger among those in the East and along the coasts and less so in central and mountain states. Sport hunters and anglers seem divided on the issue. Native Americans, as a group often dependent on natural resources (e.g. fish), are frequently involved in ESA issues, most commonly siding with survival of listed species. Groups opposing strong protections for listed species usually make claims that jobs will be lost if conservation measures are stringent, but those seeking strong protections often claim that jobs will be lost if they are not. It is also noteworthy that, while the debate often centers on jobs and biology, people on both sides claim ethical support for their positions, and some religious groups now participate in the debate. In addition, some industries (e.g., logging and land development) generally see the ESA as a serious problem, while others (e.g., some commercial fishing and many recreation interests) see it as generally supporting their interests.
Has ESA Been Effective?
The answer to this question depends very much on the choice of measurement. A major goal of the ESA is the recovery of species to the point at which the protection of the ESA is no longer necessary. If this is the standard, the ESA could be considered a failure, since, to date, only 21 species have been delisted due to recovery. Nine species have become extinct since their listing, and others have been delisted due to improved data. In the former case, some of the nine species now believed extinct were originally listed to protect any last remaining few that might have been alive at the time of listing. It can be quite difficult to prove whether extraordinarily rare species are simply that, or in fact are already extinct. For example, a rare shorebird thought by many to be extinct was rediscovered in a remote area of Canada a few years ago; it might just as easily have quietly gone extinct without being rediscovered. Rare species are, by definition, hard to find.
Even so, since some scientific studies demonstrated that most species are listed only once they are very depleted (e.g., median population of 407 animals for endangered vertebrates according to one study), another measure of effectiveness might be the number of species that have stabilized or increased their populations, even if the species is not actually delisted. If this is the standard, the ESA could be considered a success, since a large number (41% of listed species according to one study) have improved or stabilized their population levels. Other species (e.g., red wolves and California condors) might not exist at all without ESA protection, and this too might be considered a measure of success, even though the species are still rare. One could also ask what species might have become extinct if there were no ESA. The authors are unaware of comprehensive studies regarding the likely status of rare species were there no ESA, but for species such as spotted owls, salmon, Florida panthers, and plants of very narrow ranges, it seems likely that their numbers would be (at best) far fewer if ESA did not exist.
Leading Causes of Extinction
Until recent decades, the focus of the extinction debate was on losses due to over-exploitation, generally through hunting, trapping, or fishing. The poster species of the debate were passenger pigeons, tigers, wolves, and other well-known animals. But during the 20th century, a shift of focus and probably of fact occurred. The vast majority of species, including those for which direct taking was probably an early factor in their decline, are generally also at risk due to habitat loss. Habitats reduced now to a small fraction of their former extent include tall-grass prairie, fresh and salt water wetlands, old growth forests of most types, free-flowing rivers, coral reefs, undisturbed sandy beaches, and others.
Another high-ranking factor in the demise of many species is the introduction of non-native species. The non-native (invasive) species can be disease vectors or parasites (e.g., avian malaria in Hawaii, or Asian long-horned beetles in North America), predators (brown tree snakes in Guam and Hawaii), or competitors (e.g., barred owls in the Pacific Northwest). The gradual homogenization of the world’s flora and fauna has led to the demise of many species.
Is Extinction Normal?
If extinction is normal, some argue that there is no need for the government to intervene to halt this natural process. But is it normal? Geological evidence shows that the vast majority of species that have ever lived on Earth are now extinct — an observation uncontested by paleontologists. However, many scientists are concerned that the current rate of extinction exceeds background extinction rates over time. (Over the billions of years of life on Earth, extinction rates have varied, with five periods of exceptionally high rates. The most famous periods are the mass extinctions at the end of the Age of Dinosaurs (Cretaceous Period), about 65 million years ago, and the even more massive die-offs at the end of the Permian Period, about 250 million years ago, when about 52% of the groups of marine species became extinct. Between each of these five events, extinctions continued at more moderate, background levels.) But calculating current rates of extinction, much less making comparisons with the geologic past, is extremely difficult. Current estimates of total species range from 3.5 million to 100 million, with 10-30 million being commonly accepted numbers. If scientists are unsure of how many species exist, it is naturally difficult to estimate how fast they are going extinct, and whether current extinction rates exceed background extinction rates. Consequently, scientists use very conservative assumptions to make these estimates. The resulting extinction rates (17,000 species per year being a typical estimate) still seem astonishingly large, in part because the public are generally unaware of the huge number of species in groups to which many people pay little or no attention (e.g., beetles, marine invertebrates, fish), and the large number of species estimated on Earth. How do these compare to background rates?
Widely diverse methods all suggest that current rates of extinction exceed background rates. Normal rates are thought to be from 1 to 10 extinctions per 10 million species per year. (That is, if there are 20 million species now, background levels would be about 2 to 20 species extinctions per year.) Common estimates of current extinction rates range from 100 to 10,000 times such background rates — roughly comparable to the five great episodes of extinction in the geologic past. Critics most frequently question these calculations by stressing uncertainties, rather than citing specific factual errors. This criticism is not surprising, since each step in these calculations contains uncertainties (e.g., estimating the number of existing species). Most biologists counter by noting that similar numbers are generated in studies of widely different groups by a variety of scientists using different methods. Robust results (i.e., similar results from the testing of a hypothesis in a variety of ways) are usually considered scientifically sound.
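As a rough illustration of the arithmetic in the preceding paragraph, the short Python sketch below scales the background rate to an assumed species count and then applies the 100- to 10,000-fold multiplier. The 20 million species figure and both ranges are the illustrative numbers quoted in this section, not new data.

```python
# Back-of-the-envelope sketch of the extinction-rate comparison described above.
# All inputs are the illustrative ranges quoted in the text, not new data.
total_species = 20_000_000          # assumed number of living species (the text's example)
background_per_10m = (1, 10)        # background extinctions per 10 million species per year
current_multiplier = (100, 10_000)  # estimated current rate relative to background

# Background rate scaled to the assumed species count (about 2-20 per year).
bg_low = total_species / 10_000_000 * background_per_10m[0]
bg_high = total_species / 10_000_000 * background_per_10m[1]

# Current rate implied by the 100x-10,000x multiplier.
cur_low = bg_low * current_multiplier[0]
cur_high = bg_high * current_multiplier[1]

print(f"Background rate: {bg_low:.0f}-{bg_high:.0f} species per year")
print(f"Implied current rate: {cur_low:,.0f}-{cur_high:,.0f} species per year")
```

The resulting span (roughly 200 to 200,000 species per year) brackets the 17,000-per-year figure cited above as a typical estimate.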
Once extinct, a species can never be revived. But faced with high rates of extinction, some might take comfort in a return to an equal number of species, even if those species are different. Evolution continues, even in the face of high extinction rates, so perhaps new species will evolve that are better adapted to new conditions. If so, how long would such a “recovery” take? Examining the geologic record after major extinction episodes, some scientists estimate that recovery to approximately equal numbers of (different) species took up to 25 million years for the most severe extinction events. Thus, if the current extinction rate and recovery rate are comparable to past rates, the return to species numbers of the pre-historic era would take several million years.
Major Provisions of Current Law: Domestic
The modern ESA was passed in 1973, but was preceded by simpler acts in 1966 and 1969. It has been amended on numerous occasions since then: 1976, 1977, 1978, 1979, 1980, 1982, and 1988. The following are brief summaries of the major domestic provisions of the ESA in the order they appear in the U.S. Code. Several major issues are discussed in more detail later in this report.
Endangered and Threatened Species
An endangered species is defined as “any species which is in danger of extinction throughout all or a significant portion of its range....” A threatened species is defined as “any species which is likely to become an endangered species within the foreseeable future throughout all or a significant portion of its range.” The ESA does not rely on a numerical standard: such a standard would not reflect the wide variety of many species’ biology. (For example, a population of 10,000 butterflies, all confined to one mountaintop, would clearly be at greater risk than 10,000 butterflies scattered over thousands of square miles.) The protection of the ESA extends to all species and subspecies of animals (not just birds and mammals), although for vertebrates, further protection can be given for distinct population segments within a species, and not just the species as a whole. More limited protection is available for plant species under the ESA. There is currently no protection afforded under the ESA for organisms (e.g., Eubacteria, Archaea, viruses) considered neither animal nor plant.
The term “take” under the ESA means “to harass, harm, pursue, hunt, shoot, wound, kill, trap, capture, or collect, or to attempt to engage in any such conduct.” (Harassment and harm are further defined in regulation at 50 C.F.R. §17.3.) Taking is prohibited under 16 U.S.C. §1538. There has been controversy over the extent to which the prohibition on taking may include habitat modification. A 1995 Supreme Court decision ([[Sweet Home Decision|Sweet Home]]) held that the inclusion of significant habitat modification was a reasonable interpretation of the term “harm” in the law.
Fish and Wildlife Service and National Marine Fisheries Service
The Secretary of the Interior manages and administers most listed species through FWS. Marine species, including some marine mammals, and anadromous fish are the responsibility of the Secretary of Commerce, acting through NMFS. The law assigns the major role to the Secretary of the Interior (all references to “Secretary” below are to the Secretary of the Interior unless otherwise stated) and provides in detail for the relationship of the two Secretaries and their respective powers.
Species may be listed on the initiative of the appropriate Secretary or by petition from an individual, group, or state agency. The Secretary must decide whether to list the species based only on the best available scientific and commercial information, after an extensive series of procedural steps to ensure public participation and the collection of relevant information. At this point, the Secretary may not consider the economic effects that listing may have on the area where the species occurs. This is the only place in the ESA where economic considerations are expressly forbidden; such considerations may enter in a later stage. Economic factors cannot be taken into account at this stage, because Congress directed that listing be fundamentally a scientific question: is the continued existence of the species threatened or endangered? Through the 1982 amendments particularly, Congress clearly intended to separate this scientific question from subsequent decisions on appropriate protection. This is evident upon comparing 16 U.S.C. §1533(b) with §1533(f) in this regard.
In the interval between a proposal and a listing decision, the Secretary must monitor the status of these “candidate” species and, if any emergency poses a significant risk to the well-being of the species, promptly list them. Some steps in the normal listing process may be skipped for emergency listings. Federal agencies must confer with the appropriate Secretary on actions likely to jeopardize the continued existence of candidate species, but agencies need not limit commitments of resources. As of July 27, 2009, there were 248 candidate species.
Delisting and Downlisting
The processes for delisting or downlisting a species from the Lists of Endangered and Threatened Wildlife and Plants are the same as the processes for listing. Delisting is removing a species from the lists. Downlisting is reclassifying a species from endangered to threatened, and uplisting is the reverse. The Secretary of the Interior may initiate a change in the status of listed species. Alternatively, after receiving a substantive petition for any change in listing status, the Secretary is to review the species’ status. The determination to delist, downlist, or uplist a species must be made “solely on the basis of the best scientific and commercial data available” and “without reference to possible economic or other impacts.” FWS regulations also state that, at least once every five years, the Director is to review each listed species to determine whether it should be removed from the list, changed from endangered to threatened, or changed from threatened to endangered.
When a species is listed, the Secretary must also designate critical habitat (either where the species is found or, if it is not found there, where there are features essential to its conservation). If the publication of this information is not “prudent” because it would harm the species (e.g., by encouraging vandals or collectors), the Secretary may choose not to designate critical habitat. The Secretary may also postpone designation for as long as one year if the information is not determinable. As of August 10, 2009, critical habitat had been designated for 544 listed species. Any area, whether or not federally owned, may be designated as critical habitat, but private land is only affected by critical habitat designation if some federal action (e.g., license, loan, permit) is also involved. Federal agencies must avoid “destruction or adverse modification” of critical habitat, either through their direct action or activities that they approve or fund.
The appropriate Secretary must develop recovery plans for the conservation and survival of listed species. Recovery plans to date tend to cover birds and mammals, but a 1988 ESA amendment prohibits the Secretary from favoring particular taxonomic groups. The ESA and its regulations provide little detail on the requirements for recovery plans, nor are these plans binding on federal agencies or others, and the essentially hortatory nature of these plans has been widely criticized. As of August 10, 2009, recovery plans had been completed for 1,134 U.S. species.
Land may be acquired to conserve (recover) endangered and threatened species, and money from the Land and Water Conservation Fund may be appropriated for this acquisition. In fiscal year (FY) 2005, a total of 1,655 acres were acquired by FWS for the National Wildlife Refuge System under ESA authority.
Cooperation with States
The appropriate Secretary must cooperate with the states in conserving protected species and must enter into cooperative agreements to assist states in their endangered species programs, if the programs meet certain specified standards. If there is a cooperative agreement, the states may receive federal funds to implement the program, but the states must normally provide a minimum 25% matching amount. The 1988 ESA amendments created a fund to provide for the state grants. While the authorized size of the fund is determined according to a formula, money from the fund still requires annual appropriation. For FY2007, a total of almost $81.0 million was provided to states and territories for cooperative activities, including land acquisition and planning assistance.
Federal agencies must ensure that their actions are “not likely to jeopardize the continued existence” of any endangered or threatened species, nor to adversely modify critical habitat. If federal actions or actions of non-federal parties that require a federal approval, permit, or funding might affect a listed species, the federal action agencies must complete a biological assessment. To be sure of the effects of their actions, the action agency must consult with the appropriate Secretary. This is referred to as a § 7 consultation. “Action” includes any activity authorized, funded, or carried out by a federal agency, including permits and licenses. However, a 2007 Supreme Court decision held that the consultation process is required only for those federal actions that involve agency discretion. Where a federal action is dictated by statute, such as where an agency must act if certain listed conditions are met, a § 7 consultation is not required.
If the appropriate Secretary finds that an action would not jeopardize a species or adversely modify critical habitat, the Secretary issues a Biological Opinion (“BiOp”) to that effect, and the agency is provided with a written statement under 16 U.S.C. § 1536(b)(4), specifying the terms and conditions under which the federal action may proceed in order to avoid jeopardy or adverse modification of critical habitat. The Secretary must suggest any reasonable and prudent alternatives that would be required to avoid harm to the species. The great majority of consultations result in “no jeopardy” opinions, and nearly all of the rest find that the project has reasonable and prudent alternatives which will permit it to go forward. Actions that would result in jeopardy and have no reasonable and prudent alternatives are exceptionally rare. If no reasonable and prudent alternatives to the proposed action can be devised to avoid the jeopardy or adverse modification, the agency has three choices: (1) choose not to proceed with the action; (2) proceed with the action at the risk of penalties; or (3) apply for a formal exemption for the action. Pending completion of the consultation process, agencies may not make irretrievable commitments of resources that would foreclose any of these alternatives.
A federal agency, an applicant or permittee, or the governor of a state in which the action in question would occur may apply for an exemption that allows the action to go forward without penalties. Exemptions are only available for actions (e.g., water withdrawal), not for species (e.g., Delta smelt). A high-level Endangered Species Committee of six specified federal officials and a representative of each affected state (commonly called the “God Squad”) decides whether to allow the action to proceed despite future harm to a species; at least five votes are required to pass an exemption. The law includes extensive rules and deadlines to be followed in applying for such an exemption and some stringent rules for the Committee in deciding whether to grant an exemption. The Committee must grant an exemption if the Secretary of Defense determines that an exemption is necessary for national security. In addition and under specified circumstances, the President may determine whether to exempt a project for the repair or replacement of facilities in declared disaster areas. See a separate discussion of the complex exemption process and its history, below, in the Appendix.
Permits for Non-Federal Actions
For actions that might take a listed species, but without any federal nexus such as a loan or permit, the Secretary may issue permits to allow “incidental take” of species for otherwise lawful actions. The applicant for an incidental take permit must submit a habitat conservation plan (HCP) that shows the likely impact, the steps to minimize and mitigate the impact, the funding for the mitigation, the alternatives that were considered and rejected, and any other measures that the Secretary may require. Secretary Babbitt greatly expanded use of this section during the Clinton Administration, and an agency handbook provides for streamlined procedures for activities with minimal impacts.
Other provisions specify certain exemptions for raptors; regulate subsistence activities by Alaskan Natives; prohibit interstate transport and sale of listed species and parts; control trade in parts or products of an endangered species that were owned before the law went into effect; and specify rules for establishing experimental populations. (Provisions of the ESA referring to international activities are discussed below.)
Prohibitions and Penalties
The ESA prohibits certain actions, specifies criminal and civil penalties, and provides for citizens’ suits to enforce certain aspects of the ESA. The citizen suit provisions have been a driving force in the ESA’s history, and often have been used to force reluctant agencies to provide for species conservation that might otherwise have been neglected.
Major Provisions of Current Law: International
The ESA implements the Convention on International Trade in Endangered Species of Wild Fauna and Flora (“CITES”, signed by the United States on March 3, 1973); and the Convention on Nature Protection and Wildlife Preservation in the Western Hemisphere (the “Western Hemisphere Convention” signed by the United States on October 12, 1940). CITES parallels the ESA by dividing its listed species into groups, according to the estimated risk of extinction, but uses three major categories, rather than two. In contrast to the ESA, CITES focuses exclusively on trade, and does not consider or attempt to control habitat loss. The following are the major international provisions of the ESA.
The Secretary may use foreign currencies (available under 7 U.S.C. §1691, the Food for Peace program) to provide financial assistance to other countries for conserving endangered species. (As a practical matter, little money is currently available under this provision.) The ESA also authorizes appropriations for this purpose.
The ESA designates the Interior Secretary as the Endangered Species Scientific Authority (ESSA) under CITES. As the ESSA, the Secretary must determine that the United States’ international trade of living or dead organisms, or their products, will not harm the species in question. The Secretary has authority to enforce these determinations. The Secretary is required to base export determinations upon “the best available biological information,” although population estimates are not required. Certain other responsibilities are also spelled out in CITES.
The Interior Secretary is also named as the Management Authority for the United States under CITES. The Management Authority must assure that specimens are exported legally, that imported specimens left the country of origin legally, and that live specimens are shipped under suitable conditions. Certain other responsibilities are also spelled out in CITES.
The ESA makes violations of CITES violations of U.S. law if committed within the jurisdiction of the United States.
The ESA requires importers and exporters of controlled products to use certain ports and provides for exemptions for scientific purposes and for programs intended to assist the recovery of listed species. There are also certain exemptions for Alaska Natives and for products owned before December 28, 1973, including scrimshaw (carved ivory).
The 1988 ESA amendments (Title II of P.L. 100-478; 16 U.S.C. §§4201 et seq.) created a major program for the conservation of African elephants. In 1994, Congress enacted a separate program for rhinoceros and tigers (P.L. 103-391; 16 U.S.C. §§ 5301 et seq.). In 1997, a program for Asian elephants was established (P.L. 105-96; 16 U.S.C. §§ 4261 et seq.). In 2000, a program for great apes was added (P.L. 106-411; 16 U.S.C. §§ 6301 et seq.). In 2004, a program for marine turtles was added (P.L. 108-266; 16 U.S.C. §§ 6601 et seq.). While none of these programs is formally part of the ESA authorization per se, they provide funds for species which are protected under the ESA.
Analysis of Domestic Law Provisions
Because the listing of species, the designation of critical habitat, and the consultation and exemption processes are such important and controversial aspects of the ESA, each of these components is discussed in greater detail in this portion of the report.
Bases for Listings. As discussed above, the listing of a species under the ESA results in greater protection for the species, limitations on activities that might affect that species, and penalties for “taking” individuals of a listed species.
A species may be designated as either endangered or threatened, depending on the severity of its decline and threats to its continued survival. Under § 3 of the ESA, an endangered species is a species that is “in danger of extinction throughout all or a significant portion of its range.” A threatened species is defined as a species “likely to become endangered within the foreseeable future throughout all or a significant portion of its range.” Because the ESA defines species as a species, a subspecies, or — for vertebrates only — a “distinct population segment,” there is some flexibility as to how to provide different levels of protection to less than a whole species.
In the last several years, the Department of the Interior (DOI) has interpreted the definition of endangered species to find that only a species that is in danger of extinction throughout all of its range is truly endangered. Under this interpretation, a species that was at risk of extinction in a significant portion of its range would not be considered endangered. Just about every court that considered the issue found DOI’s interpretation violated the ESA, including one federal court of appeals. And in 2007, DOI changed its interpretation. Under the new interpretation issued by the Solicitor of DOI, FWS must also consider whether a species is at risk of extinction throughout a significant portion of its range, allowing the agency discretion to define significant. The interpretation also states that the range of a species is the area in which a species currently exists, not the historical range where the species once existed.
The determination of whether a species should be listed as endangered or threatened must be based on several scientific factors related to a species and threats to its continuance. The ESA expressly states that listing determinations are to be made “solely on the basis of the best scientific and commercial data available.” The word “solely” was added in the 1982 amendments to the ESA to clarify that the determination of endangered or threatened status was intended to be made without reference to its potential economic impacts. Observers have compared the decision of whether to list a species to diagnosing whether a patient has cancer: the diagnosis should be a strictly scientific decision, but other factors can be considered later in deciding how to treat the cancer. In discussing the addition of the word “solely,” a committee report stated:
... The principal purpose of the amendments to Section 4 is to ensure that decisions pertaining to the listing and delisting of species are based solely upon biological criteria and to prevent non-biological considerations from affecting such decisions. To accomplish this and other purposes, Section 4(a) is amended in several instances.
Section 4(b) of the Act is amended in several instances by Section 1(a)(2) of H.R. 6133. First, the legislation requires that the Secretary base his determinations regarding the listing or delisting of species “solely” on the basis of the best scientific and commercial data available to him. The addition of the word “solely” is intended to remove from the process of the listing or delisting of species any factor not related to the biological status of the species. The Committee strongly believes that economic considerations have no relevance to determinations regarding the status of species and intends that the economic analysis requirements of Executive Order 12291, and such statutes as the Regulatory Flexibility Act and the Paperwork Reduction Act not apply. The committee notes, and specifically rejects, the characterization of this language by the Department of the Interior as maintaining the status quo and continuing to allow the Secretary to apply Executive Order 12291 and other statutes in evaluating alternatives to listing. The only alternatives involved in the listing of species are whether the species should be listed as endangered or threatened or not listed at all. Applying economic criteria to the analysis of these alternatives and to any phase of the species listing process is applying economics to the determinations made under Section 4 of the Act and is specifically rejected by the inclusion of the word “solely” in this legislation.
Section 4(b) of the Act, as amended, provides that listings shall be based solely on the basis of the best “scientific and commercial data” available. The Committee did not change this information standard because of its interpretation of the word “commercial” to allow the use of trade data. Retention of the word “commercial” is not intended, in any way, to authorize the use of economic considerations in the process of listing a species.
The conference report confirms that it was the intent of both chambers that economic factors not play a role in the designation and listing of species for protection:
Section 2 of the Conference substitute amends section 4 of the Act in several ways. The principal purpose of these amendments is to ensure that decisions in every phase of the process pertaining to the listing or delisting of species are based solely upon biological criteria and to prevent non-biological considerations from affecting such decisions.
The Committee of Conference (hereinafter the Committee) adopted the House language which requires the Secretary to base determinations regarding the listing or delisting of species “solely” on the basis of the best scientific and commercial data available to him. As noted in the House Report, economic considerations have no relevance to determinations regarding the status of species and the economic analysis requirements of Executive Order 12291, and such statutes as the Regulatory Flexibility Act and the Paperwork Reduction Act, will not apply to any phase of the listing process. The standards in the Act relating to the designation of critical habitat remain unchanged. The requirement that the Secretary consider for listing those species that states or foreign nations have designated or identified as in need of protection also remains unchanged.
The Committee adopted, with modifications, the Senate amendments which combined and rewrote section 4(b) and (f) of the Act to streamline the listing process by reducing the time periods for rulemaking, consolidating public meeting and hearing requirements and establishing virtually identical procedures for the listing and delisting of species and for the designation of critical habitat.
In summary, the ESA makes clear that whether a species is endangered or threatened is a scientific question in which economic factors must not play a part. Once this determination is made, economics then may be considered in analyzing and taking other actions such as designating critical habitat or developing recovery plans. Nothing in the ESA prevents choosing conservation methods that will lower costs to society, industry, or landowners, as long as the chosen methods still achieve conservation goals.
Pre-Listing Activities. The question may arise as to what the responsibilities of the federal government are toward a species that is proposed for listing but has not yet been listed. This question could be important because there may be a significant time between the proposal for listing and the actual listing, during which time a federal agency could be faced with decisions on contracts and management actions of various types. Under current law, an agency must “confer” with the appropriate Secretary on any agency action that is likely to jeopardize the continued existence of any species proposed to be listed or to destroy or adversely modify critical habitat proposed to be designated for such species. The implementing regulations state that the conference is designed to assist the federal agency and an applicant in identifying and resolving potential conflicts at an early stage in the planning process.
The conference process that applies to species proposed for listing is distinct from the consultation process that applies to listed species. The conference is intended to be less formal, and to permit FWS or NMFS to advise an agency on ways to minimize or avoid adverse effects. A federal agency has to follow more formal procedures and provide more complete documentation once a species is listed. The agency may choose to follow the more complete and formal process even at the proposed listing stage to avoid duplication of effort later.
The ESA states that the conference stage does not require a limitation on the irreversible or irretrievable commitment of resources by agency action which would foreclose reasonable and prudent alternative measures. Once a species is listed, an agency will have definite responsibilities, and an agency might consider it prudent at the proposed listing stage both to avoid harm to a precarious species and to avoid possible liability for compensation arising from agency actions creating private rights which later cannot be exercised. An agency might, for example, choose to avoid holding timber sales in an area containing a proposed species. The relevant Secretary must monitor candidate species and prevent a significant risk to the well-being of any such species.
Special Protection for Threatened Species. Under § 4(d) of the ESA, the Secretary may promulgate special regulations to address the conservation of species listed as threatened. Protections and recovery measures for a particular threatened species can be carefully tailored to particular situations, as was done, for example, with respect to the threatened northern spotted owl. A federal regulation also clarifies that a threatened species for which a special rule has not been promulgated enjoys the same protections as endangered species.
Designation of Critical Habitat
Critical habitat designation has been controversial, given FWS’s stated position (see below), the importance that the environmental community attaches to critical habitat (especially in some specific cases), and the distress its designation causes among many landowners.
Concurrently with determining a species to be endangered or threatened, the Secretary “to the maximum extent prudent and determinable” is to designate the critical habitat of the species. The reference to the designation of critical habitat being “prudent” reflects the need to consider whether designating habitat would harm the species, for example, by identifying areas that could be damaged by specimen collecting. If the facts relevant to the designation of critical habitat are not yet “determinable,” the Secretary may postpone habitat designation for an additional year. Eventually, habitat is to be designated to the maximum extent it is prudent to do so.
If the Secretary designates critical habitat, the Secretary must do so
on the basis of the best scientific data available and after taking into consideration the economic impact, and any other relevant impact, of specifying any particular area as critical habitat. The Secretary may exclude any area from critical habitat if he determines that the benefits of such exclusion outweigh the benefits of specifying such area as part of the critical habitat, unless he determines, based on the best scientific and commercial data available, that the failure to designate such area as critical habitat will result in the extinction of the species concerned.
Therefore, although economic factors are not to be considered in the listing of a species as endangered or threatened, economic factors are considered in the designation of critical habitat, and some habitat areas may be excluded from designation based on such concerns, unless the failure to designate habitat would result in the extinction of the species.
Although avoiding adverse modification of critical habitat is an express obligation only for federal agencies and actions, it is frequently misunderstood by the public as the major restriction on a private landowner’s authority to manage land. The bulk of any restrictions on use of private land come primarily from the ESA’s prohibition on taking of listed species. Only occasionally — when some federal nexus is present — are they due to any additional strictures resulting from designated critical habitat.
Both the Clinton and George W. Bush Administrations have supported restrictions on their own ability to designate critical habitat under the ESA (e.g., proposed restrictions under the appropriations process). In an announcement on October 22, 1999, FWS placed designation of critical habitat at the lowest priority in its listing budget, and stated that it could not comply with all of the demands of the ESA under current budget constraints. Conservation groups saw a contradiction between that claim, and the agency’s repeated failure to request increased funds for listing, together with requests that Congress place a special cap on funding for designation of critical habitat.
FWS has designated critical habitat for 534 of the 1,320 listed domestic species (as of August 10, 2009). The agency has been sued frequently for its failure to designate critical habitat and consistently loses such suits. In the agency’s view, critical habitat offers little protection for a species beyond that already available under the listing process, and thus the expense of designation, combined with its perception of a small margin of additional conservation benefit, make critical habitat requirements a poor use of scarce budgetary resources, especially if the public views critical habitat as the major regulatory impact of the ESA, rather than as a supplement to the ESA’s prohibition on “taking” a listed species.
According to FWS, critical habitat designation shows its greatest conservation benefit when it includes areas not currently occupied by the species; these areas may be important as connecting corridors between populations or as areas in which new populations may be re-introduced. FWS proposed to “develop policy or guidance and/or revise regulations, if necessary, to clarify the role of habitat in endangered species conservation.” The notice reflected the agency’s longstanding disaffection for this provision of the law and its view that its conservation benefit is low compared to its cost. However, while workshops were held on the topic, ultimately, no action was taken on the proposal.
These agency assertions and conclusions rest on an agency regulation in 2000 that fails to consider the role of critical habitat in the recovery of species, rather than its mere survival. In 2001, a federal court of appeals rejected that regulatory interpretation. In 2004, a second federal court of appeals found the regulation contradicted the statute. If the agency interpretation is changed to more closely reflect the statute, the role of critical habitat arguably would be more meaningful in practice.
Under §7 of the ESA, federal agencies are required to consult with the Secretary about proposed actions that might affect a listed species; to use their authorities in furtherance of the ESA; and to insure that any action authorized, funded, or carried out by the agency is not likely to jeopardize the continued existence of any endangered or threatened species, or to destroy or adversely modify critical habitat unless the agency has been granted an exemption under the ESA. Consultation is usually begun at the request of the action agency, but may be initiated at the request of a FWS Regional Director or NMFS’s Assistant Administrator for Fisheries.
Science plays an important role in the consultation process because the Secretary is to use the “best scientific and commercial data available” to ascertain if a listed species might be present in the area of a proposed agency action. If so, the action agency is to prepare a “biological assessment” to explore whether a proposed action might jeopardize a listed species or its critical habitat. This assessment also is to be based on “the best scientific and commercial data available.” Consultation must also be initiated in connection with private lands if an applicant for (or recipient of) federal funding, permit, or license has reason to believe that a listed species may be present in the area affected by a project and implementation of the action will likely affect a listed species.
The relevant Secretary generally is to complete consultation within 90 days for a wholly federal action, unless the Secretary and the federal agency mutually agree to a longer period (up to 150 days) and reasons are given for the delay. A consultation involving a non-federal party is to be completed within the time agreed to by the Secretary, the federal agency involved, and the applicant concerned.
Thereafter, FWS or NMFS will prepare a written statement, known as the biological opinion, analyzing whether the proposed agency action is likely to jeopardize the continued existence of a listed species or destroy or adversely modify critical habitat. The ESA does not expressly state that the biological opinion is to be based on the “best scientific and commercial data available,” but this arguably is implied, and is expressly required under the implementing regulations, which require that the consulting agency provide “the best scientific and commercial data available or which can be obtained during the consultation.” Such information is to be the basis of the biological opinion, and the biological opinion is to include a summary of the information on which the opinion is based.
The biological opinion may conclude that the agency action is not likely to jeopardize the species, or that it can be modified to avoid jeopardy. If so, FWS or NMFS may issue a permit that excuses the taking of listed species incidental to the otherwise lawful activities that are to take place. If the biological opinion concludes that the proposed action is likely to jeopardize, FWS or NMFS must suggest reasonable and prudent alternatives to avoid jeopardy and mitigate the impacts of the action. If this is not possible, then the agency proposing the action must forego the action, risk incurring penalties under the ESA, or obtain a formal exemption from the penalties of the ESA as set out below.
Exemptions: A History
The Endangered Species Committee. If the jeopardy that is expected to result from a proposed agency action cannot be avoided and the agency proposing the action nonetheless wishes to go ahead with the action, the agency (or the affected governor(s) or license applicant(s)) may apply for an exemption to allow the action to go forward. The exemption process is an important way in which economic factors may be taken into account under the ESA. Because the exemption process involves convening a cabinet-level committee, there have only been six instances to date in which the exemption process was initiated. Of these six, one was granted, one was partially granted, one was denied, and three were dropped (see Appendix).
As originally enacted, the ESA contained an absolute prohibition against activities detrimental to listed species. When the prospective impoundment of water behind the nearly completed Tellico dam threatened to eradicate the only known population of the snail darter (a fish related to perch), the Supreme Court concluded that the then-current “plain language” of the ESA mandated that the gates of the dam not be closed:
Concededly, this view of the Act will produce results requiring the sacrifice of the anticipated benefits of the project and of many millions of dollars in public funds. But examination of the language, history, and structure of the legislation under review here indicates beyond doubt that Congress intended endangered species to be afforded the highest of priorities. (Tennessee Valley Authority v. Hill, 437 U.S. 153, 174 (1978))
After this Supreme Court decision, the ESA was amended by P.L. 95-632 to include a process by which economic impacts could be reviewed and projects exempted from the restrictions that otherwise would apply. As originally enacted, the exemption process involved recommendations by the Secretary of the Interior, processing by a review board, and then an application to the Endangered Species Committee (ESC). In 1982, P.L. 97-304 changed this process to eliminate the review board. Currently, the reviewing Committee is composed of the Secretary of Agriculture, the Secretary of the Army, the Chair of the Council of Economic Advisors, the Administrator of the Environmental Protection Agency, the Secretary of the Interior (who chairs the ESC), the Administrator of the National Oceanic and Atmospheric Administration, and one individual from each affected state. By regulation, Committee members from affected states collectively have one vote.
Eligible Applicants. A federal agency, the governor of a state in which an agency action will occur, or a permit or license applicant may apply to the Secretary for an exemption for an agency action. How an agency action is structured — whether, for example, it is a separate action or a region-wide program — could be relevant to the various findings required under the exemption procedures. The term “permit or license applicant” is defined in the ESA as a person whose application to a federal agency for a permit or license has been denied primarily because of ESA prohibitions applicable to the agency action. The regulations do not elaborate on who is included within this term.
An exemption application from a federal agency must describe the consultation process carried out between the head of the federal agency and the Secretary, and include a statement explaining why the action cannot be altered or modified to conform with the requirements of the statute. All applications must be submitted to the Secretary not later than 90 days after completion of the consultation process, or within 90 days of final agency action if the application involves a federal permit or license. An application must set out the reasons the applicant considers an exemption warranted. The Secretary then publishes a notice of receipt of the application in the Federal Register and notifies the governor of each affected state (as determined by the Secretary) so that state members can be appointed to the ESC. The Secretary (acting alone) may deny the application, if the preliminary steps have not been completed.
To be eligible for an exemption, the federal agency concerned and the exemption applicant must have carried out the consultation processes required under §7 of the ESA in good faith. The agency also must have made a reasonable and responsible effort to develop and fairly consider modifications or reasonable and prudent alternatives to the proposed action that would not jeopardize the continued existence of any endangered or threatened species or destroy or adversely modify critical habitat of a species. In addition, the agency must have conducted required biological assessments; and, to the extent determinable within the time provided, refrained from making any irreversible or irretrievable commitment of resources that would foreclose the formulation or implementation of reasonable and prudent alternatives that would avoid jeopardizing the species and/or adversely modifying its habitat. These qualifying requirements were put in place to insure that the exemption process is meaningful and that consideration of the issues would not be preempted by actions already taken. Additional requirements for an application are contained in the relevant regulations.
It is important to note that the exemption process begins only after a species is listed, consultation has occurred, a finding has been made that the agency action is likely to jeopardize a species, and it is determined that there are no reasonable and prudent alternatives to the agency action.
Secretarial Review. The Secretary is to determine whether an application is qualified within 20 days or a time mutually agreeable to the applicant and the Secretary. Within 140 days of the time the Secretary determines that the applicant is qualified, the Secretary, in consultation with the other members of the ESC, must hold a formal hearing on the application and prepare a report. The purpose of the formal hearing is to collect evidence both favoring and opposing the exemption. The Secretary’s report reviews whether the applicant has made any irreversible or irretrievable commitment of resources; discusses the availability of reasonable and prudent alternatives and the benefits of each; provides a summary of the evidence concerning whether the action is in the public interest and is nationally or regionally significant, and, if so, states why; and outlines appropriate and reasonable mitigation and enhancement measures which should be considered by the ESC.
Committee Determination. Within 30 days after receiving the report of the Secretary, the ESC is to grant or deny an exemption. The ESC shall grant an exemption for the project or activity if, based on the evidence, the ESC determines that
(i) there are no reasonable and prudent alternatives to the agency action;
(ii) the benefits of such action clearly outweigh the benefits of alternative courses of action consistent with conserving the species or its critical habitat, and such action is in the public interest;
(iii) the action is of regional or national significance; and
(iv) neither the federal agency concerned nor the exemption applicant made any irreversible or irretrievable commitment of resources prohibited by subsection (d) of this section [commitments as described above that jeopardize species or critical habitat].
Mitigation. If the ESC grants an exemption, it also must establish reasonable mitigation and enhancement measures that are “necessary and appropriate to minimize the adverse effects” of an approved action on the species or critical habitat. The exemption applicant (whether federal agency, governor, or permit or license applicant) is responsible for carrying out and paying for mitigation.
The costs of mitigation and enhancement measures specified in an approved exemption must be included in the overall costs of continuing the proposed action, and the applicant must report annually to the Council on Environmental Quality on compliance with mitigation and enhancement measures.
Special Circumstances. The ESA specifies certain particular instances when special provisions will apply.
Review by the Secretary of State. The ESC cannot grant an exemption for an agency action if the Secretary of State, after a hearing and a review of the proposed agency action, certifies in writing that carrying out the action for which an exemption was sought would violate a treaty or other international obligation of the United States. This provision could come into play if a particular species listed under the ESA were also protected under treaties, such as the Migratory Bird Treaties to which the United States is a party. The Secretary of State is to make this determination within 60 days “of any application made under this section,” a time limit which may be unrealistic given the longer length of time the Secretary of the Interior has to prepare the report that will fully describe the agency action to be reviewed by the Secretary of State.
National Security. The Committee is required to grant an exemption if the Secretary of Defense finds that an exemption is necessary for reasons of national security. We know of no instance on the public record in which this provision has been used.
Domestic Disasters. The President may grant exemptions in certain cases involving facilities in declared disaster areas. This provision appears to be written in contemplation of domestic disasters, such as hurricanes. The ESA does not have a general provision that allows the granting of an exemption in other emergency conditions.
Duration and Effect of Exemption. An exemption is permanent unless the Secretary finds that the exemption would result in the extinction of a species that was not the subject of consultation or was not identified in a biological assessment and the ESC determines that the exemption should not be permanent.
The ESA expressly states that the penalties that normally apply to the taking of an endangered or threatened species do not apply to takings resulting from actions that are exempted.
Appendix: Exemption Applications
In three instances, an Endangered Species Committee (ESC) reached a decision on an application for an exemption:
Grayrocks Dam and Reservoir
The Platte River is a major stopover site on the migration path of whooping cranes, listed under the ESA as an endangered species. FWS determined that the construction of the Grayrocks Dam and Reservoir in Wyoming, along with existing projects in the Platte River Basin, would jeopardize the downstream habitat of whooping cranes. The Endangered Species Committee voted (7-0) to grant an exemption for Grayrocks Dam and Reservoir on January 23, 1979, conditioned on specified mitigation measures that included maintenance and enhancement of critical whooping crane habitat on the Platte River. A previous enactment by Congress would have exempted the project, if the ESC had not reached a decision within a certain time.
Tellico Dam
The Tellico Dam on the Little Tennessee River was to serve multiple purposes, but was vigorously opposed by several sectors. After the snail darter (a fish) was listed as endangered, litigation was filed to stop the construction of the dam, resulting in the significant case TVA v. Hill, which contains important language on the ESA and on possible ratification of projects through appropriations measures. The Tellico situation was also subject to P.L. 95-632, which provided for an expedited ESC process and an automatic exemption if the ESC did not reach a decision within a specified time. The ESC denied an exemption for Tellico (on a 7-0 vote), but Congress enacted an exemption in P.L. 96-69, and the dam was completed. Subsequently, additional snail darters were found in a few other locations, and the snail darter was reclassified as threatened.
Bureau of Land Management Timber Sales
The Bureau of Land Management, an agency in the Department of the Interior, sought an exemption for 44 Oregon timber sales in the habitat of the northern spotted owl. In 1992, the ESC voted (5-2) to grant an exemption for 13 of the sales. Controversy over the sales and the processes within the Department continued, and the application was subsequently withdrawn.
In three other instances, there were applications for exemptions, but no ESC decisions:
Pittston Company Refinery
The Pittston Company sought to build a refinery in Eastport, Maine. Following jeopardy opinions based on probable effects on threatened bald eagles and endangered [[Right Whale|right]] and humpback whales, the company applied for an exemption, but further action on this application appears to have been discontinued in 1982.
Consolidated Grain and Barge Company Docking Area
This company sought to build a docking area for barges at Mound City, Illinois, on the Ohio River, an area that was habitat for an endangered mussel. Following a jeopardy opinion, and a denial of permits by the Army Corps of Engineers, the company applied for an exemption, but withdrew the application in 1986.
Suwanee River Authority
The consulting engineer of the Suwanee River Authority applied for an exemption for a project to dredge Alligator Pass in Suwanee Sound, Florida, part of the habitat for the endangered manatee. The project had been denied a permit by the Army Corps of Engineers. The engineer apparently lacked the authority to apply on behalf of the Authority, which in 1986 refused to ratify his actions and withdrew the application. Although the engineer attempted to continue the application, the withdrawal was effective.
- U.S. Fish and Wildlife Service (FWS)
- Petition to List the Greater Sage-Grouse
- FWS Threatened and Endangered Species System (TESS)
- ESA species summary
- FWS Delistings Report
- FWS Candidates for Listing
- FWS Critical Habitat Listings
- Recovery Plans
- CRS Report 98-32 ENR, Endangered Species Act List Revisions: A Summary of Delisting and Downlisting.
- CRS Report RL30123, Harmful Non-Native Species: Issues for Congress.
- CRS Report 95-778 A, Habitat Modification and the Endangered Species Act: The Sweet Home Decision.
- CRS Report RL30792, The Endangered Species Act: Consideration of Economic Factors.
- CRS Report 90-242 ENR, Endangered Species Act: The Listing and Exemption Processes.
- CRS Report RL32751, The Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES): Background and Issues.
- CRS Report RS20263, The Role of Designation of Critical Habitat under the Endangered Species Act.
- CRS Report RL33468, The Endangered Species Act (ESA) in the 109th Congress: Conflicting Values and Difficult Choices.
- Wiygul, Robert and Heather Weiner. “Critical Habitat Destruction,” Environmental Forum, vol. 16, no. 6 (May/June 1999): 12-21.
- CRS Report RS21500, The Endangered Species Act, “Sound Science,” and the Courts.
- CRS Report RL32992, The Endangered Species Act and “Sound Science.”
See also: environmental laws of the United States; Fish and Wildlife Service
Disclaimer: This article is taken wholly from, or contains information that was originally published by, the Congressional Research Service. Topic editors and authors for the Encyclopedia of Earth may have edited its content or added new information. The use of information from the Congressional Research Service should not be construed as support for or endorsement by that organization for any new information added by EoE personnel, or for any editing of the original content. | http://www.eoearth.org/article/Endangered_Species_Act,_United_States | 13 |
63 | Before we begin to explore the effects of resistors, inductors, and capacitors connected together in the same AC circuits, let's briefly review some basic terms and facts.
Resistance is essentially friction against the motion of electrons. It is present in all conductors to some extent (except superconductors!), most notably in resistors. When alternating current goes through a resistance, a voltage drop is produced that is in-phase with the current. Resistance is mathematically symbolized by the letter “R” and is measured in the unit of ohms (Ω).
Reactance is essentially inertia against the motion of electrons. It is present anywhere electric or magnetic fields are developed in proportion to applied voltage or current, respectively; but most notably in capacitors and inductors. When alternating current goes through a pure reactance, a voltage drop is produced that is 90° out of phase with the current. Reactance is mathematically symbolized by the letter “X” and is measured in the unit of ohms (Ω).
Impedance is a comprehensive expression of any and all forms of opposition to electron flow, including both resistance and reactance. It is present in all circuits, and in all components. When alternating current goes through an impedance, a voltage drop is produced that is somewhere between 0° and 90° out of phase with the current. Impedance is mathematically symbolized by the letter “Z” and is measured in the unit of ohms (Ω), in complex form.
Perfect resistors (Figure below) possess resistance, but not reactance. Perfect inductors and perfect capacitors (Figure below) possess reactance but no resistance. All components possess impedance, and because of this universal quality, it makes sense to translate all component values (resistance, inductance, capacitance) into common terms of impedance as the first step in analyzing an AC circuit.
Perfect resistor, inductor, and capacitor.
The impedance phase angle for any component is the phase shift between voltage across that component and current through that component. For a perfect resistor, the voltage drop and current are always in phase with each other, and so the impedance angle of a resistor is said to be 0°. For a perfect inductor, voltage drop always leads current by 90°, and so an inductor's impedance phase angle is said to be +90°. For a perfect capacitor, voltage drop always lags current by 90°, and so a capacitor's impedance phase angle is said to be -90°.
Impedances in AC behave analogously to resistances in DC circuits: they add in series, and they diminish in parallel. A revised version of Ohm's Law, based on impedance rather than resistance, looks like this:
Kirchhoff's Laws and all network analysis methods and theorems are true for AC circuits as well, so long as quantities are represented in complex rather than scalar form. While this qualified equivalence may be arithmetically challenging, it is conceptually simple and elegant. The only real difference between DC and AC circuit calculations is in regard to power. Because reactance doesn't dissipate power as resistance does, the concept of power in AC circuits is radically different from that of DC circuits. More on this subject in a later chapter!
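To make the complex-number version of Ohm's Law concrete, here is a minimal Python sketch (not part of the original text; the values are made up purely for illustration) showing that E = IZ works the same way it does in DC, just with complex quantities:

import cmath, math

Z = complex(40, 30)                           # an impedance of 40 + j30 ohms (50 ohms at about 36.87 degrees)
I = cmath.rect(2, 0)                          # a current of 2 amps at 0 degrees
E = I * Z                                     # Ohm's Law: E = IZ
print(abs(E), math.degrees(cmath.phase(E)))   # about 100 volts at about 36.87 degrees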
Let's take the following example circuit and analyze it: (Figure below)
Example series R, L, and C circuit.
The first step is to determine the reactances (in ohms) for the inductor and the capacitor.
The next step is to express all resistances and reactances in a mathematically common form: impedance. (Figure below) Remember that an inductive reactance translates into a positive imaginary impedance (or an impedance at +90°), while a capacitive reactance translates into a negative imaginary impedance (impedance at -90°). Resistance, of course, is still regarded as a purely “real” impedance (polar angle of 0°):
Example series R, L, and C circuit with component values replaced by impedances.
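As a rough illustration of the same conversion (this sketch is not from the book; it simply assumes the component values shown in the schematic: 250 Ω, 650 mH, and 1.5 µF at 60 Hz), the translation from reactance to complex impedance can be written directly in Python:

import math

f = 60.0                                   # source frequency in hertz
X_L = 2 * math.pi * f * 650e-3             # inductive reactance, ohms
X_C = 1 / (2 * math.pi * f * 1.5e-6)       # capacitive reactance, ohms

Z_R = complex(250, 0)    # resistance: purely real (0 degrees)
Z_L = complex(0, X_L)    # inductive reactance: positive imaginary (+90 degrees)
Z_C = complex(0, -X_C)   # capacitive reactance: negative imaginary (-90 degrees)
print(Z_R, Z_L, Z_C)     # roughly 250, +j245.04, and -j1768.4 ohms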
Now, with all quantities of opposition to electric current expressed in a common, complex number format (as impedances, and not as resistances or reactances), they can be handled in the same way as plain resistances in a DC circuit. This is an ideal time to draw up an analysis table for this circuit and insert all the “given” figures (total voltage, and the impedances of the resistor, inductor, and capacitor).
Unless otherwise specified, the source voltage will be our reference for phase shift, and so will be written at an angle of 0°. Remember that there is no such thing as an “absolute” angle of phase shift for a voltage or current, since it's always a quantity relative to another waveform. Phase angles for impedance, however (like those of the resistor, inductor, and capacitor), are known absolutely, because the phase relationships between voltage and current at each component are absolutely defined.
Notice that I'm assuming a perfectly reactive inductor and capacitor, with impedance phase angles of exactly +90° and -90°, respectively. Although real components won't be perfect in this regard, they should be fairly close. For simplicity, I'll assume perfectly reactive inductors and capacitors from now on in my example calculations except where noted otherwise.
Since the above example circuit is a series circuit, we know that the total circuit impedance is equal to the sum of the individuals, so:
Inserting this figure for total impedance into our table:
We can now apply Ohm's Law (I=E/R) vertically in the “Total” column to find total current for this series circuit:
Being a series circuit, current must be equal through all components. Thus, we can take the figure obtained for total current and distribute it to each of the other columns:
Now we're prepared to apply Ohm's Law (E=IZ) to each of the individual component columns in the table, to determine voltage drops:
Notice something strange here: although our supply voltage is only 120 volts, the voltage across the capacitor is 137.46 volts! How can this be? The answer lies in the interaction between the inductive and capacitive reactances. Expressed as impedances, we can see that the inductor opposes current in a manner precisely opposite that of the capacitor. Expressed in rectangular form, the inductor's impedance has a positive imaginary term and the capacitor has a negative imaginary term. When these two contrary impedances are added (in series), they tend to cancel each other out! Although they're still added together to produce a sum, that sum is actually less than either of the individual (capacitive or inductive) impedances alone. It is analogous to adding together a positive and a negative (scalar) number: the sum is a quantity less than either one's individual absolute value.
If the total impedance in a series circuit with both inductive and capacitive elements is less than the impedance of either element separately, then the total current in that circuit must be greater than what it would be with only the inductive or only the capacitive elements there. With this abnormally high current through each of the components, voltages greater than the source voltage may be obtained across some of the individual components! Further consequences of inductors' and capacitors' opposite reactances in the same circuit will be explored in the next chapter.
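The whole series analysis can be reproduced in a few lines of Python. This is only a sketch (not part of the original text), assuming the same 120 V, 60 Hz source and the component values from the schematic:

import cmath, math

E = 120 + 0j                                  # source voltage, taken as the 0-degree reference
f, R, L, C = 60.0, 250.0, 650e-3, 1.5e-6      # values from the example circuit

Z_R = complex(R, 0)
Z_L = complex(0, 2 * math.pi * f * L)
Z_C = complex(0, -1 / (2 * math.pi * f * C))

Z_total = Z_R + Z_L + Z_C                     # series impedances add
I = E / Z_total                               # Ohm's Law: I = E/Z

for name, Z in (("R", Z_R), ("L", Z_L), ("C", Z_C)):
    mag, ang = cmath.polar(I * Z)             # E = IZ for each component
    print(f"E_{name} = {mag:7.2f} V at {math.degrees(ang):7.2f} degrees")
# The capacitor drop comes out near 137.5 V, larger than the 120 V source,
# exactly as discussed above.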
Once you've mastered the technique of reducing all component values to impedances (Z), analyzing any AC circuit is only about as difficult as analyzing any DC circuit, except that the quantities dealt with are vector instead of scalar. With the exception of equations dealing with power (P), equations in AC circuits are the same as those in DC circuits, using impedances (Z) instead of resistances (R). Ohm's Law (E=IZ) still holds true, and so do Kirchhoff's Voltage and Current Laws.
To demonstrate Kirchhoff's Voltage Law in an AC circuit, we can look at the answers we derived for component voltage drops in the last circuit. KVL tells us that the algebraic sum of the voltage drops across the resistor, inductor, and capacitor should equal the applied voltage from the source. Even though this may not look like it is true at first sight, a bit of complex number addition proves otherwise:
Aside from a bit of rounding error, the sum of these voltage drops does equal 120 volts. Performed on a calculator (preserving all digits), the answer you will receive should be exactly 120 + j0 volts.
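If you prefer to see that arithmetic spelled out, here is a small Python check (a sketch only, using the magnitudes and phase angles quoted in the SPICE results below) that the three drops really do sum to the source voltage:

import cmath, math

def volts(magnitude, degrees):
    return cmath.rect(magnitude, math.radians(degrees))

E_R = volts(19.43, 80.68)       # drop across the resistor
E_L = volts(19.05, 170.7)       # drop across the inductor
E_C = volts(137.5, -9.320)      # drop across the capacitor
print(E_R + E_L + E_C)          # approximately (120 + 0j) volts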
We can also use SPICE to verify our figures for this circuit: (Figure below)
Example series R, L, and C SPICE circuit.
ac r-l-c circuit
v1 1 0 ac 120 sin
r1 1 2 250
l1 2 3 650m
c1 3 0 1.5u
.ac lin 1 60 60
.print ac v(1,2) v(2,3) v(3,0) i(v1)
.print ac vp(1,2) vp(2,3) vp(3,0) ip(v1)
.end
freq          v(1,2)      v(2,3)      v(3)        i(v1)
6.000E+01     1.943E+01   1.905E+01   1.375E+02   7.773E-02

freq          vp(1,2)     vp(2,3)     vp(3)       ip(v1)
6.000E+01     8.068E+01   1.707E+02  -9.320E+00  -9.932E+01
The SPICE simulation shows our hand-calculated results to be accurate.
As you can see, there is little difference between AC circuit analysis and DC circuit analysis, except that all quantities of voltage, current, and resistance (actually, impedance) must be handled in complex rather than scalar form so as to account for phase angle. This is good, since it means all you've learned about DC electric circuits applies to what you're learning here. The only exception to this consistency is the calculation of power, which is so unique that it deserves a chapter devoted to that subject alone.
We can take the same components from the series circuit and rearrange them into a parallel configuration for an easy example circuit: (Figure below)
Example R, L, and C parallel circuit.
The fact that these components are connected in parallel instead of series now has absolutely no effect on their individual impedances. So long as the power supply is the same frequency as before, the inductive and capacitive reactances will not have changed at all: (Figure below)
Example R, L, and C parallel circuit with impedances replacing component values.
With all component values expressed as impedances (Z), we can set up an analysis table and proceed as in the last example problem, except this time following the rules of parallel circuits instead of series:
Knowing that voltage is shared equally by all components in a parallel circuit, we can transfer the figure for total voltage to all component columns in the table:
Now, we can apply Ohm's Law (I=E/Z) vertically in each column to determine current through each component:
There are two strategies for calculating total current and total impedance. First, we could calculate total impedance from all the individual impedances in parallel (ZTotal = 1/(1/ZR + 1/ZL + 1/ZC)), and then calculate total current by dividing source voltage by total impedance (I=E/Z). However, working through the parallel impedance equation with complex numbers is no easy task, with all the reciprocations (1/Z). This is especially true if you're unfortunate enough not to have a calculator that handles complex numbers and are forced to do it all by hand (reciprocate the individual impedances in polar form, then convert them all to rectangular form for addition, then convert back to polar form for the final inversion, then invert). The second way to calculate total current and total impedance is to add up all the branch currents to arrive at total current (total current in a parallel circuit -- AC or DC -- is equal to the sum of the branch currents), then use Ohm's Law to determine total impedance from total voltage and total current (Z=E/I).
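Both strategies are easy to try in Python. The sketch below (not part of the original text, and assuming the same 120 V, 60 Hz source with R = 250 Ω, L = 650 mH, and C = 1.5 µF) shows that they agree:

import math

E = 120 + 0j
f = 60.0
Z_R = complex(250, 0)
Z_L = complex(0, 2 * math.pi * f * 650e-3)
Z_C = complex(0, -1 / (2 * math.pi * f * 1.5e-6))

# Strategy 1: parallel-impedance (reciprocal) formula, then I = E/Z.
Z_total = 1 / (1/Z_R + 1/Z_L + 1/Z_C)
I_total_1 = E / Z_total

# Strategy 2: branch currents first, then add them, then Z = E/I.
I_R, I_L, I_C = E / Z_R, E / Z_L, E / Z_C
I_total_2 = I_R + I_L + I_C

print(abs(I_total_1), abs(I_total_2))       # both about 0.639 amps
print(abs(Z_total), abs(E / I_total_2))     # both about 187.8 ohms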
Either method, performed properly, will provide the correct answers. Let's try analyzing this circuit with SPICE and see what happens: (Figure below)
Example parallel R, L, and C SPICE circuit. Battery symbols are “dummy” voltage sources for SPICE to use as current measurement points. All are set to 0 volts.
ac r-l-c circuit
v1 1 0 ac 120 sin
vi 1 2 ac 0
vir 2 3 ac 0
vil 2 4 ac 0
rbogus 4 5 1e-12
vic 2 6 ac 0
r1 3 0 250
l1 5 0 650m
c1 6 0 1.5u
.ac lin 1 60 60
.print ac i(vi) i(vir) i(vil) i(vic)
.print ac ip(vi) ip(vir) ip(vil) ip(vic)
.end
freq          i(vi)       i(vir)      i(vil)      i(vic)
6.000E+01     6.390E-01   4.800E-01   4.897E-01   6.786E-02

freq          ip(vi)      ip(vir)     ip(vil)     ip(vic)
6.000E+01    -4.131E+01   0.000E+00  -9.000E+01   9.000E+01
It took a little bit of trickery to get SPICE working as we would like on this circuit (installing “dummy” voltage sources in each branch to obtain current figures and installing the “dummy” resistor in the inductor branch to prevent a direct inductor-to-voltage source loop, which SPICE cannot tolerate), but we did get the proper readings. Even more than that, by installing the dummy voltage sources (current meters) in the proper directions, we were able to avoid that idiosyncrasy of SPICE of printing current figures 180° out of phase. This way, our current phase readings came out to exactly match our hand calculations.
Now that we've seen how series and parallel AC circuit analysis is not fundamentally different than DC circuit analysis, it should come as no surprise that series-parallel analysis would be the same as well, just using complex numbers instead of scalar to represent voltage, current, and impedance.
Take this series-parallel circuit for example: (Figure below)
Example series-parallel R, L, and C circuit.
The first order of business, as usual, is to determine values of impedance (Z) for all components based on the frequency of the AC power source. To do this, we need to first determine values of reactance (X) for all inductors and capacitors, then convert reactance (X) and resistance (R) figures into proper impedance (Z) form:
Now we can set up the initial values in our table:
Being a series-parallel combination circuit, we must reduce it to a total impedance in more than one step. The first step is to combine L and C2 as a series combination of impedances, by adding their impedances together. Then, that impedance will be combined in parallel with the impedance of the resistor, to arrive at another combination of impedances. Finally, that quantity will be added to the impedance of C1 to arrive at the total impedance.
In order that our table may follow all these steps, it will be necessary to add additional columns to it so that each step may be represented. Adding more columns horizontally to the table shown above would be impractical for formatting reasons, so I will place a new row of columns underneath, each column designated by its respective component combination:
Calculating these new (combination) impedances will require complex addition for series combinations, and the “reciprocal” formula for complex impedances in parallel. This time, there is no avoidance of the reciprocal formula: the required figures can be arrived at no other way!
Seeing as how our second table contains a column for “Total,” we can safely discard that column from the first table. This gives us one table with four columns and another table with three columns.
Now that we know the total impedance (818.34 Ω ∠ -58.371°) and the total voltage (120 volts ∠ 0°), we can apply Ohm's Law (I=E/Z) vertically in the “Total” column to arrive at a figure for total current:
At this point we ask ourselves the question: are there any components or component combinations which share either the total voltage or the total current? In this case, both C1 and the parallel combination R//(L--C2) share the same (total) current, since the total impedance is composed of the two sets of impedances in series. Thus, we can transfer the figure for total current into both columns:
Now, we can calculate voltage drops across C1 and the series-parallel combination of R//(L--C2) using Ohm's Law (E=IZ) vertically in those table columns:
A quick double-check of our work at this point would be to see whether or not the voltage drops across C1 and the series-parallel combination of R//(L--C2) indeed add up to the total. According to Kirchhoff's Voltage Law, they should!
That last step was merely a precaution. In a problem with as many steps as this one has, there is much opportunity for error. Occasional cross-checks like that one can save a person a lot of work and unnecessary frustration by identifying problems prior to the final step of the problem.
After having solved for voltage drops across C1 and the combination R//(L--C2), we again ask ourselves the question: what other components share the same voltage or current? In this case, the resistor (R) and the combination of the inductor and the second capacitor (L--C2) share the same voltage, because those sets of impedances are in parallel with each other. Therefore, we can transfer the voltage figure just solved for into the columns for R and L--C2:
Now we're all set for calculating current through the resistor and through the series combination L--C2. All we need to do is apply Ohm's Law (I=E/Z) vertically in both of those columns:
Another quick double-check of our work at this point would be to see if the current figures for L--C2 and R add up to the total current. According to Kirchhoff's Current Law, they should:
Since the L and C2 are connected in series, and since we know the current through their series combination impedance, we can distribute that current figure to the L and C2 columns following the rule of series circuits whereby series components share the same current:
With one last step (actually, two calculations), we can complete our analysis table for this circuit. With impedance and current figures in place for L and C2, all we have to do is apply Ohm's Law (E=IZ) vertically in those two columns to calculate voltage drops.
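As a cross-check on the table method, here is a short Python sketch of the same reduction (not from the book; it assumes the component values shown in the schematic: C1 = 4.7 µF, L = 650 mH, C2 = 1.5 µF, R = 470 Ω, with a 120 V, 60 Hz source):

import cmath, math

E = 120 + 0j
f = 60.0
Z_C1 = complex(0, -1 / (2 * math.pi * f * 4.7e-6))
Z_L  = complex(0, 2 * math.pi * f * 650e-3)
Z_C2 = complex(0, -1 / (2 * math.pi * f * 1.5e-6))
Z_R  = complex(470, 0)

Z_LC2   = Z_L + Z_C2               # step 1: L and C2 in series
Z_par   = 1 / (1/Z_R + 1/Z_LC2)    # step 2: that combination in parallel with R
Z_total = Z_C1 + Z_par             # step 3: C1 in series with the rest

mag, ang = cmath.polar(Z_total)
print(mag, math.degrees(ang))      # about 818.3 ohms at about -58.4 degrees
print(abs(E / Z_total))            # total current, about 0.1466 amps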
Now, let's turn to SPICE for a computer verification of our work:
Example series-parallel R, L, C SPICE circuit.
ac series-parallel r-l-c circuit
v1 1 0 ac 120 sin
vit 1 2 ac 0
vilc 3 4 ac 0
vir 3 6 ac 0
c1 2 3 4.7u
l 4 5 650m
c2 5 0 1.5u
r 6 0 470
.ac lin 1 60 60
.print ac v(2,3) vp(2,3) i(vit) ip(vit)
.print ac v(4,5) vp(4,5) i(vilc) ip(vilc)
.print ac v(5,0) vp(5,0) i(vilc) ip(vilc)
.print ac v(6,0) vp(6,0) i(vir) ip(vir)
.end
freq          v(2,3)      vp(2,3)     i(vit)      ip(vit)       C1
6.000E+01     8.276E+01  -3.163E+01   1.466E-01   5.837E+01

freq          v(4,5)      vp(4,5)     i(vilc)     ip(vilc)      L
6.000E+01     1.059E+01  -1.388E+02   4.323E-02   1.312E+02

freq          v(5)        vp(5)       i(vilc)     ip(vilc)      C2
6.000E+01     7.645E+01   4.122E+01   4.323E-02   1.312E+02

freq          v(6)        vp(6)       i(vir)      ip(vir)       R
6.000E+01     6.586E+01   4.122E+01   1.401E-01   4.122E+01
Each line of the SPICE output listing gives the voltage, voltage phase angle, current, and current phase angle for C1, L, C2, and R, in that order. As you can see, these figures do concur with our hand-calculated figures in the circuit analysis table.
As daunting a task as series-parallel AC circuit analysis may appear, it must be emphasized that there is nothing really new going on here besides the use of complex numbers. Ohm's Law (in its new form of E=IZ) still holds true, as do the voltage and current Laws of Kirchhoff. While there is more potential for human error in carrying out the necessary complex number calculations, the basic principles and techniques of series-parallel circuit reduction are exactly the same.
In the study of DC circuits, the student of electricity comes across a term meaning the opposite of resistance: conductance. It is a useful term when exploring the mathematical formula for parallel resistances: Rparallel = 1 / (1/R1 + 1/R2 + . . . 1/Rn). Unlike resistance, which diminishes as more parallel components are included in the circuit, conductance simply adds. Mathematically, conductance is the reciprocal of resistance, and each 1/R term in the “parallel resistance formula” is actually a conductance.
Whereas the term “resistance” denotes the amount of opposition to flowing electrons in a circuit, “conductance” represents the ease with which electrons may flow. Resistance is the measure of how much a circuit resists current, while conductance is the measure of how much a circuit conducts current. Conductance used to be measured in the unit of mhos, or “ohms” spelled backward. Now, the proper unit of measurement is Siemens. When symbolized in a mathematical formula, the proper letter to use for conductance is “G”.
Reactive components such as inductors and capacitors oppose the flow of electrons with respect to time, rather than with a constant, unchanging friction as resistors do. We call this time-based opposition, reactance, and like resistance we also measure it in the unit of ohms.
As conductance is the complement of resistance, there is also a complementary expression of reactance, called susceptance. Mathematically, it is equal to 1/X, the reciprocal of reactance. Like conductance, it used to be measured in the unit of mhos, but now is measured in Siemens. Its mathematical symbol is “B”, unfortunately the same symbol used to represent magnetic flux density.
The terms “reactance” and “susceptance” have a certain linguistic logic to them, just like resistance and conductance. While reactance is the measure of how much a circuit reacts against change in current over time, susceptance is the measure of how much a circuit is susceptible to conducting a changing current.
If one were tasked with determining the total effect of several parallel-connected, pure reactances, one could convert each reactance (X) to a susceptance (B), then add susceptances rather than diminish reactances: Xparallel = 1/(1/X1 + 1/X2 + . . . 1/Xn). Like conductances (G), susceptances (B) add in parallel and diminish in series. Also like conductance, susceptance is a scalar quantity.
When resistive and reactive components are interconnected, their combined effects can no longer be analyzed with scalar quantities of resistance (R) and reactance (X). Likewise, figures of conductance (G) and susceptance (B) are most useful in circuits where the two types of opposition are not mixed, i.e. either a purely resistive (conductive) circuit, or a purely reactive (susceptive) circuit. In order to express and quantify the effects of mixed resistive and reactive components, we had to have a new term: impedance, measured in ohms and symbolized by the letter “Z”.
To be consistent, we need a complementary measure representing the reciprocal of impedance. The name for this measure is admittance. Admittance is measured in (guess what?) the unit of Siemens, and its symbol is “Y”. Like impedance, admittance is a complex quantity rather than scalar. Again, we see a certain logic to the naming of this new term: while impedance is a measure of how much alternating current is impeded in a circuit, admittance is a measure of how much current is admitted.
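For what it's worth, these reciprocal quantities are one-liners in any language with complex arithmetic. The Python sketch below (illustrative values only, not from the book) shows susceptances adding in parallel and an admittance computed from an impedance:

X1, X2 = 100.0, 40.0            # two parallel reactances in ohms (made-up values)
B1, B2 = 1 / X1, 1 / X2         # susceptances in siemens; these simply add
X_parallel = 1 / (B1 + B2)      # back to ohms: about 28.6 ohms

Z = complex(40, 30)             # a mixed resistive/reactive impedance in ohms
Y = 1 / Z                       # admittance in siemens, also a complex quantity
print(X_parallel, Y)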
Given a scientific calculator capable of handling complex number arithmetic in both polar and rectangular forms, you may never have to work with figures of susceptance (B) or admittance (Y). Be aware, though, of their existence and their meanings.
With the notable exception of calculations for power (P), all AC circuit calculations are based on the same general principles as calculations for DC circuits. The only significant difference is the fact that AC calculations use complex quantities while DC calculations use scalar quantities. Ohm's Law, Kirchhoff's Laws, and even the network theorems learned in DC still hold true for AC when voltage, current, and impedance are all expressed with complex numbers. The same troubleshooting strategies applied toward DC circuits also hold for AC, although AC can certainly be more difficult to work with due to phase angles which aren't registered by a handheld multimeter.
Power is another subject altogether, and will be covered in its own chapter in this book. Because power in a reactive circuit is both absorbed and released -- not just dissipated as it is with resistors -- its mathematical handling requires a more direct application of trigonometry to solve.
When faced with analyzing an AC circuit, the first step in analysis is to convert all resistor, inductor, and capacitor component values into impedances (Z), based on the frequency of the power source. After that, proceed with the same steps and strategies learned for analyzing DC circuits, using the “new” form of Ohm's Law: E=IZ ; I=E/Z ; and Z=E/I
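A tiny Python helper (illustrative only, not from the book) captures those three rearrangements in one place:

def ohms_law(E=None, I=None, Z=None):
    # Supply any two of E, I, Z (as complex numbers); the third is returned.
    if E is None:
        return I * Z          # E = IZ
    if I is None:
        return E / Z          # I = E/Z
    return E / I              # Z = E/I

print(ohms_law(I=2+0j, Z=complex(40, 30)))   # E = (80+60j) volts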
Remember that only the calculated figures expressed in polar form apply directly to empirical measurements of voltage and current. Rectangular notation is merely a useful tool for us to add and subtract complex quantities together. Polar notation, where the magnitude (length of vector) directly relates to the magnitude of the voltage or current measured, and the angle directly relates to the phase shift in degrees, is the most practical way to express complex quantities for circuit analysis.
Contributors to this chapter are listed in chronological order of their contributions, from most recent to first. See Appendix 2 (Contributor List) for dates and contact information.
Jason Starck (June 2000): HTML document formatting, which led to a much better-looking second edition.
Lessons In Electric Circuits copyright (C) 2000-2013 Tony R. Kuphaldt, under the terms and conditions of the Design Science License. | http://www.ibiblio.org/kuphaldt/electricCircuits/AC/AC_5.html | 13 |
79 | Title: Calculating the Area of a Circle
Sunshine State Standard Benchmark MA.6.G.4.1: Understand the concept
of π, know common estimates of π (3.14; 22/7) and use these
values to estimate and calculate the circumference and the area of circles
Write the Objective
Given six diagrams of circles in which either the diameter or radius
is specified, students will determine the area of the circles using the
formula Area = πR². Students must show their work and solve
at least five of the problems correctly.
Introduce the Lesson
- Gain Student Attention: Show two Frisbees™ of different sizes.
Ask students if they have ever played Frisbee™ and thought about
how the size of the Frisbee™ might affect how far it will go when
thrown? Tell them that a Frisbee™ or disk is really a circle,
and the surface is called the area. In this lesson, they will learn
how to determine the area of a circle.
- Explain the Objective: Today they are going to learn how to determine
the area of a circle. They will learn how to find the area when they
know the radius of the circle and when they know the diameter. Use the
Frisbee™ to point out the area, radius, and diameter.
- Relate to Prior Knowledge: Use prompting questions and statements
to remind students of the following:
- vocabulary for the parts of a circle: circumference, diameter,
radius (Draw and label the parts on the board.)
- the definition and value of π (π = 3.14 or 22/7)
- the formula for the circumference of a circle (Circumference=πD)
Present the Content
- Knowledge and Skills in Lesson: Students already know vocabulary for
the parts of a circle and the formula for the circumference of a circle.
The lesson content will focus on calculating the area of a circle.
- Teacher and Student Learning Activities:
- Write the formula for the area of a circle on the board and explain
it (Area = πR²). Model and describe several examples,
step by step, using the formula when the radius is known. Use the
Frisbee™ as the first circle.
- Draw circle diagrams and write the problem-solving steps on
a transparency as you explain the examples. Then, have the students
work two problems together with you. Ask prompting questions to guide
learning at each step in the process.
- Repeat the process described above to teach how to determine the
area of a circle when the diameter is known, adding in the extra step
required (dividing the diameter by 2) to find the radius.
- Activity Organization and Support:
- Media Selection: Gather two different-sized Frisbees™.
Secure an overhead projector and transparencies. Prepare two different
worksheets with four circles printed on them for guided and independent
practice. Prepare an assessment including six diagrams of circles,
three with the diameter specified and three with the radius specified.
- Student grouping: The introduction and the content are presented
to the whole class. Guided practice is a small group activity.
Provide Practice and Feedback
- Guided practice: Have students work in groups of four to determine
the area of four circles printed on paper, where either the radius or
the diameter is given. Ask students to follow the problem-solving
steps demonstrated in the lesson, showing their work on the back of
the sheet of paper. After the first problem is solved, have one person
in each group present the steps to the solution and discuss it with the
group. After the second problem is solved, have a different group member
present the solution. Continue until all four problems are solved, and
each group member has had a turn presenting the solution. Rotate among
groups to coach students where needed and provide feedback on their
performance. Next, go over all of the problems together with the class,
showing each step in the problem-solving process on overhead transparencies.
Provide feedback on why responses are right or wrong. If needed, provide
additional examples and additional opportunities for practice and feedback.
- Independent practice: Assign a homework exercise for independent practice.
Give each student four circles printed on paper with different areas
than those used in the guided practice activity. Have students determine
the area of the four circles, where either the radius or the diameter
is given. Ask students to show the steps in their work on the back of
the page. Check the homework with the class the next day in the same
manner described for guided practice.
- Judicious review: Preview the remaining lessons in the unit and determine
appropriate places to include a short review of calculating the area
of a circle.
Summarize the Lesson
Remind students they have learned how to find the area of a circle.
Ask them to state the formula used and tell the extra step that must be
taken first if only the diameter is known. Point out that this skill could
be applied to finding the area of any circle, for example, the area of
the top of a round table, etc. Write an additional circle problem on the
board, have students solve it, and discuss responses.
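If a quick way to generate and check extra practice problems is wanted, a short Python sketch such as the one below could be used (this is an optional aid, not part of the original lesson plan; it uses the math module's value of π rather than the 3.14 estimate):

import math

def circle_area(radius=None, diameter=None):
    # If only the diameter is known, divide it by 2 first to get the radius.
    if radius is None:
        radius = diameter / 2
    return math.pi * radius ** 2          # Area = pi * R^2

print(round(circle_area(radius=5), 2))    # 78.54 square units
print(round(circle_area(diameter=12), 2)) # radius 6, so about 113.10 square units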
Assess student learning
Determine the procedures: give students a worksheet including six diagrams of circles, three with
the diameter specified and three with the radius specified. The directions
tell students to determine the area of each circle using the formula Area
= πR² and show the steps in their work.
Describe how to judge performance: students must
solve five out of six problems correctly to demonstrate mastery. The solutions
must include the steps and the correct answer.
For a student who has difficulty maintaining attention and working with
other students in small groups, the accommodations listed below could be provided.
- Provide Practice and Feedback: Within the small group, pair the student
with a trained peer who can help keep his or her work on track.
- Monitor the group’s interactions and provide positive reinforcement
to the student for appropriate behaviors.
For a student with poor functional vision the accommodations listed below
could be provided.
- Introduce the Lesson and Present the Content: Make sure the student
can see the visual aids for the lesson by making markings on the worksheets
dark and legible. If needed, provide a large print handout with formulas
and other key points for the student to read at his or her desk.
For a student who has poor fine motor control and writes very large,
the accommodations listed below could be provided.
- Assess Student Learning: Provide extra sheets of paper for the student
to show the steps in the problem solutions so the solution does not
have to fit into small spaces. Let the student use a word processor
to complete the assessment.
Access Points (Different Objectives) for Individual Students in Lesson Design
Students working on access points have different learning goals and
objectives for the lesson. The SSS Access Points specify learning goals
at the Independent, Supported, and Participatory levels.
For students working on the access points, the following modifications
could be made:
Independent Level Access Point: MA.6.G.4.In.a Compare the distance around
the outside of circles (circumference) and areas using physical or visual models.
- Write the Objective: Given six diagrams of circles of different sizes,
in which either the circumference or area is specified, the student
will correctly identify the circles with the largest and smallest circumferences
and the circles with the largest and smallest areas. (Note: The student
is not expected to calculate the area of a circle; he or she is learning
a prerequisite skill.)
Supported Level Access Point: MA.6.G.4.Su.a Identify the distance around the outside of circles (circumference) and compare areas of circles using physical models.
- Write the Objective: Given six circles of different sizes, the student will correctly identify the distance around each one and correctly compare the circles with the largest and smallest areas.
Participatory Level Access Point: MA.6.G.4.Pa.a Recognize the outside (circumference)
and inside (area) of a circle.
- Write the Objective: Given six circles of different sizes, the student
will correctly recognize the inside (area) of at least five circles.
(Note: The student confirms the "inside" of each circle
as indicated by the teacher–"Is this the inside of the
circle?" The student is not expected to know the term in parentheses–area.)
| http://www.cpt.fsu.edu/eseold/in/desin/exMSMLP.html | 13
116 | See also the Dr. Math FAQ:
why 360 degrees?
segments of circles
Browse High School Conic Sections, Circles
Stars indicate particularly interesting answers or
good places to begin browsing.
Selected answers to common questions:
Find the center of a circle.
Is a circle a polygon?
Volume of a tank.
Why is a circle 360 degrees?
- Differentiating and Integrating the Formula for Area of Circle [05/11/1998]
The formula for a circle's circumference is the derivative of the formula
for its area. What is the significance of this?
- Distance from Point to Ellipse [05/19/1997]
How do you find the minimum distance from a point to an ellipse when the
point can be either inside or outside the ellipse?
- Distance of Chord from Circumference [12/31/1997]
Is it possible to calculate the vertical distance, at a right angle, from
a chord to the circumference of a circle?
- Distance to Mars [07/25/1997]
What is the distance from Earth to Mars?
- Dividing a Circle using Six Lines [08/29/2001]
What is the largest number of regions into which you can divide a circle
using six lines?
- Do Circles Have Corners? [08/06/1999]
Can you have angles or corners without edges?
- Donkey Grazing Half a Field [08/08/1997]
A donkey is attached by a rope to a point on the perimeter of a circular
field. How long should the rope be so that the donkey can graze exactly
half the field?
- Drawing a Circle Tangent to an Angle [05/13/2000]
Given an angle and any point inside it not on its bisector, how can you
draw a circle that goes through the point and is tangent to both sides of
the angle with just a compass and protractor?
- Drawing An Ellipse [11/24/1997]
How do you draw an ellipse with only a straight edge and a compass?
- Drawing or Constructing an Ellipse or Oval [02/22/2006]
I know you can draw an ellipse using a string and two tacks. How do I
determine the length of the string and the location of the tacks to
draw an ellipse of a particular size?
- Earth's Curvature [07/07/2003]
How far would you have to 'walk' in no gravity to get 1 foot off the
- Earth's Rotational Speed [07/25/1999]
If you were standing at the equator at sea level, how fast would you be
travelling in relation to the center of the earth?
- Ellipse Area and Circumference [04/19/2001]
How can I draw an ellipse and find the area and circumference?
- Ellipse Bounding A Rectangle [7/15/1996]
How do I calculate the ellipse bounding any given rectangle?
- Ellipse Equation [03/11/1999]
How do I get the equation of an ellipse, given four points and the
inclination of the major axis?
- Ellipse Geometry [08/09/1998]
I wish to draw a line departing at a given angle from the long axis of an
ellipse and bisecting the perimeter of the ellipse at right angles to the
tangent at that point...
- An Ellipse Or A Circle? - Parametric Equations [12/05/1998]
Is this parametric equation elliptical or a circle?... And how do I
compute the slopes at points 0, pi/4, pi/2, 3pi/2,and 2pi?
- Ellipses: Pythagorean Relationship [2/12/1996]
In an ellipse with major axis of 2a, minor axis of 2b, and foci c (on the
major axis), the relationship c squared = a squared - b squared holds
true... how do the three numbers fit into a Pythagorean relationship?
- Elliptical Orbits in the Solar System [05/22/2005]
I want to have my students draw a scale model of the solar system that
shows the orbits of the planets. Assuming I have the apogee and
perigee of each planet's orbit about the sun, they need to construct 9
ellipses with some degree of accuracy. What's the best way to go about
- Endpoint of an Arc [06/25/2001]
Given the center of the circle, the angle of the arc, the radius of the
circle, and the starting point of the arc, determine the end point of the
arc using cartesian coordinates.
- Equation for an Arch [09/09/1997]
I am trying to draw an arch that will go in the ceiling of a building.
The arch will be at a maximum height of 28 inches...
- Equation of a Circle [05/13/2003]
Find the equation of a circle with the center at point (3, -4) and
- Equation of a Parabola [12/20/2001]
Given several points that appear to be a parabola, how do you approximate
the equation that would give a similar graph?
- Equilateral Shapes Inscribed in a Circle [04/07/2003]
Is there a general formula for the length of a side of an equilateral
shape that is inscribed in a circle?
- Escaping the Tiger [07/10/2003]
A man stands in the center of a circle. On the circumference is a
tiger that can only move around the circle. The tiger can run four
times as fast as the man. How can the man escape the circle without
being eaten by the tiger?
- Euclidean Formula for Orthogonal Circles [04/11/2001]
When considering the case when circle C has center at the origin and
radius 1, we need to show that the equation of the circle orthogonal to
circle C and with center (h,k) is given by: x^2-2hx+y^2-2ky+1=0.
- An Euler Circle Proof [03/26/1999]
I'm having trouble with the incenter and the inradius.
- Find Circle Center and Radius [09/21/2001]
Given three sets of (x,y) coordinates that lie on the circumference of a
circle, how do you find the center and radius of the circle?
- Finding a Parabola [2/5/1996]
Find the equation of the parabola that is one unit away from X^2 at all
- Finding a Point on a Circle [5/28/1996]
How do I find the y1 value?
- Finding Miles Per Hour [03/06/2002]
If a wheel is making 64.2 revolutions per minute, how many miles per hour
is it going?
- Finding Quadratic Roots Geometrically or Graphically [12/07/2004]
How do you find the roots of a quadratic function geometrically? For
example, what is the algorithm to find roots for f(x) = x^2 + 1 by
looking at the graph?
- Finding Radius Given Arc Length and Chord to Arc Height [08/28/2005]
I'm purchasing a curved piece of glass for some furniture. The curve
(arc) is 60 inches long. The height (the midpoint of the chord to the
center of the arc) is 11 inches. I need to know the radius of this
curve so the glass company can make my glass. Any thoughts?
- Finding Radius Given the Length of a Chord [04/09/2006]
Two points on a circle are 78 units apart and the distance from the
circle to the center of the chord connecting them is 12 units. How do
I calculate the radius?
- Finding the Area of an Arc [1/23/1996]
When you draw a circle and make a chord from one point to another, how
would you find the area of that arc (formula)?
- Finding the Axes of an Ellipse from a Known Cone [01/26/2001]
I'm trying to solve a specific situation regarding lighting when viewed
as an oblique circular cone...
- Finding the Center of a Circle [06/06/1999]
How can I find the center and radius of a circle that is in the form:
Ax^2 + Cy^2 + Dx + Ey + F = 0?
- Finding the Center of a Circle [5/29/1996]
How can you find the center of a circle using a ruler and compass?
- Finding the Center of a Circle from 2 Points and Radius [01/24/1997]
Given two points on a circle and the circle's radius, find the center
coordinates of the circle.
- Finding the Center of a Circle from 2 Points [06/01/1999]
How do you find the centre of a circle if you are given 2 points on the
circle and the radius? | http://mathforum.org/library/drmath/sets/high_circles.html?start_at=121&num_to_see=40&s_keyid=38680760&f_keyid=38680761 | 13 |
114 |
Isotopes are variants of a particular chemical element: while all isotopes of a given element share the same number of protons and electrons, each isotope differs from the others in its number of neutrons. The term isotope is formed from the Greek roots isos (ἴσος "equal") and topos (τόπος "place"). Hence: "the same place," meaning that different isotopes of a single element occupy the same position on the periodic table. The number of protons within the atom's nucleus uniquely identifies an element, but a given element may in principle have any number of neutrons. The number of nucleons (protons and neutrons) in the nucleus is the mass number, and each isotope of a given element has a different mass number.
For example, carbon-12, carbon-13 and carbon-14 are three isotopes of the element carbon with mass numbers 12, 13 and 14 respectively. The atomic number of carbon is 6 which means that every carbon atom has 6 protons, so that the neutron numbers of these isotopes are 6, 7 and 8 respectively.
Isotope vs. nuclide
A nuclide is an atom with a specific number of protons and neutrons in the nucleus, for example carbon-13 with 6 protons and 7 neutrons. The nuclide concept (referring to individual nuclear species) emphasizes nuclear properties over chemical properties, while the isotope concept (grouping all atoms of each element) emphasizes chemical over nuclear. The neutron number has drastic effects on nuclear properties, but its effect on chemical properties is negligible in most elements, and still quite small in the case of the very lightest elements, although it does matter in some circumstances (for hydrogen, the lightest of all elements, the isotope effect is large enough to strongly affect biology). Since isotope is the older term, it is better known than nuclide, and is still sometimes used in contexts where nuclide might be more appropriate, such as nuclear technology and nuclear medicine.
An isotope and/or nuclide is specified by the name of the particular element (this indicates the atomic number implicitly) followed by a hyphen and the mass number (e.g. helium-3, helium-4, carbon-12, carbon-14, uranium-235 and uranium-239). When a chemical symbol is used, e.g., "C" for carbon, standard notation (now known as "AZE notation" because A is the mass number, Z the atomic number, and E for element) is to indicate the number of nucleons with a superscript at the upper left of the chemical symbol and to indicate the atomic number with a subscript at the lower left (e.g. ³₂He, ⁴₂He, ¹²₆C, ¹⁴₆C, ²³⁵₉₂U, and ²³⁹₉₂U, respectively). Since the atomic number is implied by the element symbol, it is common to state only the mass number in the superscript and leave out the atomic number subscript (e.g. 3He, 4He, 12C, 14C, 235U, and 239U, respectively). The letter m is sometimes appended after the mass number to indicate a nuclear isomer, a metastable or energetically-excited nuclear state (rather than the lowest-energy ground state), for example 180mTa (tantalum-180m).
Radioactive, primordial, and stable isotopes
Some isotopes are radioactive, and are therefore described as radioisotopes or radionuclides, while others have never been observed to undergo radioactive decay and are described as stable isotopes. For example, 14C is a radioactive form of carbon while 12C and 13C are stable isotopes. There are about 339 naturally occurring nuclides on Earth, of which 288 are primordial nuclides, meaning that they have existed since the solar system's formation.
Primordial nuclides include 35 nuclides with very long half-lives (over 80 million years) and 254 which are formally considered as "stable isotopes", since they have not been observed to decay. In most cases, for obvious reasons, if an element has stable isotopes, those isotopes predominate in the elemental abundance found on Earth and in the solar system. However, in the cases of three elements (tellurium, indium, and rhenium) the most abundant isotope found in nature is actually one (or two) extremely long lived radioisotope(s) of the element, despite these elements having one or more stable isotopes.
Many apparently "stable" isotopes are predicted by theory to be radioactive, with extremely long half-lives (this does not count the possibility of proton decay, which would make all nuclides ultimately unstable). Of the 254 nuclides never observed to decay, only 90 of these (all from the first 40 elements) are stable in theory to all known forms of decay. Element 41 (niobium) is theoretically unstable via spontaneous fission, but this has never been detected. Many other stable nuclides are in theory energetically susceptible to other known forms of decay, such as alpha decay or double beta decay, but no decay products have yet been observed. The predicted half-lives for these nuclides often greatly exceed the estimated age of the universe, and in fact there are also 27 known radionuclides (see primordial nuclide) with half-lives longer than the age of the universe.
Adding in the radioactive nuclides that have been created artificially, there are more than 3100 currently known nuclides. These include 905 nuclides which are either stable, or have half-lives longer than 60 minutes. See list of nuclides for details.
Radioactive isotopes
The existence of isotopes was first suggested in 1912 by the radiochemist Frederick Soddy, based on studies of radioactive decay chains which indicated about 40 different species described as radioelements (i.e. radioactive elements) between uranium and lead, although the periodic table only allowed for 11 elements from uranium to lead.
Several attempts to separate these new radioelements chemically had failed. For example, Soddy had shown in 1910 that mesothorium (later shown to be 228Ra), radium (226Ra, the longest-lived isotope), and thorium X (224Ra) are impossible to separate. Attempts to place the radioelements in the periodic table led Soddy and Kazimierz Fajans independently to propose their radioactive displacement law in 1913, to the effect that alpha decay produced an element two places to the left in the periodic table, while beta decay emission produced an element one place to the right. Soddy recognized that emission of an alpha particle followed by two beta particles led to the formation of an element chemically identical to the initial element but with a mass four units lighter and with different radioactive properties.
Soddy proposed that several types of atoms (differing in radioactive properties) could occupy the same place in the table. For example, the alpha-decay of uranium-235 forms thorium-231, while the beta decay of actinium-230 forms thorium-230 The term “isotope”, Greek for “at the same place”, was suggested to Soddy by Margaret Todd, a Scottish physician and family friend, during a conversation in which he explained his ideas to her.
In 1914 T.W. Richards found variations between the atomic weight of lead from different mineral sources, attributable to variations in isotopic composition due to different radioactive origins.
Stable isotopes
The first evidence for isotopes of a stable (non-radioactive) element was found by J. J. Thomson in 1913 as part of his exploration into the composition of canal rays (positive ions). Thomson channeled streams of neon ions through a magnetic and an electric field and measured their deflection by placing a photographic plate in their path. Each stream created a glowing patch on the plate at the point it struck. Thomson observed two separate patches of light on the photographic plate (see image), which suggested two different parabolas of deflection. Thomson eventually concluded that some of the atoms in the neon gas were of higher mass than the rest.
F.W. Aston subsequently discovered different stable isotopes for numerous elements using a mass spectrograph. In 1919 Aston studied neon with sufficient resolution to show that the two isotopic masses are very close to the integers 20 and 22, and that neither is equal to the known molar mass (20.2) of neon gas. This is an example of Aston’s whole number rule for isotopic masses, which states that large deviations of elemental molar masses from integers are primarily due to the fact that the element is a mixture of isotopes. Aston similarly showed that the molar mass of chlorine (35.45) is a weighted average of the almost integral masses for the two isotopes Cl-35 and Cl-37.
Variation in properties between isotopes
Chemical and molecular properties
A neutral atom has the same number of electrons as protons. Thus, different isotopes of a given element all have the same number of protons and share a similar electronic structure. Because the chemical behavior of an atom is largely determined by its electronic structure, different isotopes exhibit nearly identical chemical behavior. The main exception to this is the kinetic isotope effect: due to their larger masses, heavier isotopes tend to react somewhat more slowly than lighter isotopes of the same element. This is most pronounced for protium (1H) and deuterium (2H), because deuterium has twice the mass of protium. The mass effect between deuterium and the relatively light protium also affects the behavior of their respective chemical bonds, by means of changing the center of gravity (reduced mass) of the atomic systems. However, for heavier elements, which have more neutrons than lighter elements, the ratio of the nuclear mass to the collective electronic mass is far greater, and the relative mass difference between isotopes is much less. For these two reasons, the mass-difference effects on chemistry are usually negligible.
In similar manner, two molecules that differ only in the isotopic nature of their atoms (isotopologues) will have identical electronic structure and therefore almost indistinguishable physical and chemical properties (again with deuterium providing the primary exception to this rule). The vibrational modes of a molecule are determined by its shape and by the masses of its constituent atoms. As a consequence, isotopologues will have different sets of vibrational modes. Since vibrational modes allow a molecule to absorb photons of corresponding energies, isotopologues have different optical properties in the infrared range.
Nuclear properties and stability
Atomic nuclei consist of protons and neutrons bound together by the residual strong force. Because protons are positively charged, they repel each other. Neutrons, which are electrically neutral, stabilize the nucleus in two ways. Their copresence pushes protons slightly apart, reducing the electrostatic repulsion between the protons, and they exert the attractive nuclear force on each other and on protons. For this reason, one or more neutrons are necessary for two or more protons to be bound into a nucleus. As the number of protons increases, so does the ratio of neutrons to protons necessary to ensure a stable nucleus (see graph at right). For example, although the neutron:proton ratio of 3He is 1:2, the neutron:proton ratio of 238U is greater than 3:2. A number of lighter elements have stable nuclides with the ratio 1:1 (Z = N). The nuclide 40Ca (calcium-40) is observationally the heaviest stable nuclide with the same number of neutrons and protons (theoretically, the heaviest stable one is sulfur-32). All stable nuclides heavier than calcium-40 contain more neutrons than protons.
Numbers of isotopes per element
Of the 80 elements with a stable isotope, the largest number of stable isotopes observed for any element is ten (for the element tin). No element has nine stable isotopes. Xenon is the only element with eight stable isotopes. Four elements have seven stable isotopes, eight have six stable isotopes, ten have five stable isotopes, nine have four stable isotopes, five have three stable isotopes, 16 have two stable isotopes (counting 180mTa as stable), and 26 elements have only a single stable isotope (of these, 19 are so-called mononuclidic elements, having a single primordial stable isotope that dominates and fixes the atomic weight of the natural element to high precision; 3 radioactive mononuclidic elements occur as well). In total, there are 254 nuclides that have not been observed to decay. For the 80 elements that have one or more stable isotopes, the average number of stable isotopes is 254/80 = 3.2 isotopes per element.
Even and odd nucleon numbers
The proton:neutron ratio is not the only factor affecting nuclear stability. It depends also on evenness or oddness of its atomic number Z, neutron number N and, consequently, of their sum, the mass number A. Oddness of both Z and N tends to lower the nuclear binding energy, making odd nuclei, generally, less stable. This remarkable difference of nuclear binding energy between neighbouring nuclei, especially of odd-A isobars, has important consequences: unstable isotopes with a nonoptimal number of neutrons or protons decay by beta decay (including positron decay), electron capture or other exotic means, such as spontaneous fission and cluster decay.
The majority of stable nuclides are even-proton-even-neutron, where all numbers Z, N, and A are even. The odd-A stable nuclides are divided (roughly evenly) into odd-proton-even-neutron, and even-proton-odd-neutron nuclides. Odd-proton-odd-neutron nuclei are the least common.
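A small illustrative sketch (not part of the original article) makes the bookkeeping explicit: given the atomic number Z and mass number A, the neutron number is N = A - Z, and the parity class follows from whether Z and N are even or odd.

def parity_class(Z, A):
    N = A - Z                                  # neutron number
    even_or_odd = lambda n: "even" if n % 2 == 0 else "odd"
    return f"{even_or_odd(Z)}-proton, {even_or_odd(N)}-neutron"

print(parity_class(6, 12))   # carbon-12:   even-proton, even-neutron
print(parity_class(7, 14))   # nitrogen-14: odd-proton, odd-neutron
print(parity_class(2, 3))    # helium-3:    even-proton, odd-neutron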
Even atomic number
Even-proton, even-neutron (EE) nuclides, which comprise 148 of the 254 stable nuclides (about 58%), necessarily have spin 0 because of pairing. There are also 22 primordial long-lived even-even nuclides. As a result, each of the 41 even-numbered elements from 2 to 82 has at least one stable isotope, and most of these elements have several primordial isotopes. Half of these even-numbered elements have six or more stable isotopes. The extreme stability of helium-4, due to a double pairing of 2 protons and 2 neutrons, prevents any nuclides containing five or eight nucleons from existing for long enough to serve as platforms for the buildup of heavier elements via nuclear fusion in stars (see triple alpha process).
These 53 stable nuclides have an even number of protons and an odd number of neutrons. They are a minority in comparison to the even-even isotopes, which are about 3 times as numerous. Among the 41 even-Z elements that have a stable nuclide, only three elements (argon, cerium, and lead) have no even-odd stable nuclides. One element (tin) has three. There are 24 elements that have one even-odd nuclide and 13 that have two even-odd nuclides. Of 35 primordial radionuclides there exist four even-odd nuclides (see table at right), including the fissile 235U. Because of their odd neutron numbers, the even-odd nuclides tend to have large neutron capture cross sections, due to the energy that results from neutron-pairing effects. These stable even-proton odd-neutron nuclides tend to be uncommon in nature, generally because, in order to form and enter into primordial abundance, they must have escaped capturing neutrons to form yet other stable even-even isotopes, during both the s-process and r-process of neutron capture during nucleosynthesis in stars. For this reason, only 195Pt and 9Be are the most naturally abundant isotopes of their element.
Odd atomic number
48 stable odd-proton-even-neutron nuclides, stabilized by their even numbers of paired neutrons, form most of the stable isotopes of the odd-numbered elements; the very few odd-odd nuclides comprise the others. There are 41 odd-numbered elements with Z = 1 through 81, with 39 of these having any stable isotopes (the elements technetium (Z = 43) and promethium (Z = 61) have no stable isotopes). Of these 39 odd-Z elements, 30 elements (including hydrogen-1, where 0 neutrons is even) have one stable odd-even isotope, and nine elements: chlorine (Z = 17), potassium (Z = 19), copper (Z = 29), gallium (Z = 31), bromine (Z = 35), silver (Z = 47), antimony (Z = 51), iridium (Z = 77), and thallium (Z = 81), have two odd-even stable isotopes each. This makes a total of 30 + 2(9) = 48 stable odd-even isotopes.
There are also five primordial long-lived radioactive odd-even isotopes, among them 151Eu and 209Bi; these two were only recently found to decay, with half-lives greater than 10^18 years.
Only five stable nuclides contain both an odd number of protons and an odd number of neutrons. The first four "odd-odd" nuclides occur in low-mass nuclides, for which changing a proton to a neutron or vice versa would lead to a very lopsided proton-neutron ratio (2H, 6Li, 10B, and 14N; spins 1, 1, 3, 1). The only other entirely "stable" odd-odd nuclide is 180mTa (spin 9), the only primordial nuclear isomer, which has not yet been observed to decay despite experimental attempts. Hence, all observationally stable odd-odd nuclides have nonzero integer spin. This is because the single unpaired neutron and unpaired proton have a larger nuclear force attraction to each other if their spins are aligned (producing a total spin of at least 1 unit), instead of anti-aligned. See deuterium for the simplest case of this nuclear behavior.
Many odd-odd radionuclides (like tantalum-180) with comparatively short half lives are known. Usually, they beta-decay to their nearby even-even isobars which have paired protons and paired neutrons. Of the nine primordial odd-odd nuclides (five stable and four radioactive with long half lives), only 14N is the most common isotope of a common element. This is the case because it is a part of the CNO cycle. The nuclides 6Li and 10B are minority isotopes of elements that are themselves rare compared to other light elements, while the other six isotopes make up only a tiny percentage of the natural abundance of their elements. For example, 180mTa is thought to be the rarest of the 254 stable isotopes.
Odd neutron number
Actinides with odd neutron number are generally fissile (with thermal neutrons), while those with even neutron number are generally not, though they are fissionable with fast neutrons. Only 195Pt, 9Be, and 14N have odd neutron number and are the most naturally abundant isotope of their element.
Occurrence in nature
Elements are composed of one or more naturally occurring isotopes. The unstable (radioactive) isotopes are either primordial or postprimordial. Primordial isotopes were a product of stellar nucleosynthesis or another type of nucleosynthesis such as cosmic ray spallation, and have persisted down to the present because their rate of decay is so slow (e.g., uranium-238 and potassium-40). Postprimordial isotopes were created by cosmic ray bombardment as cosmogenic nuclides (e.g., tritium, carbon-14), or by the decay of a radioactive primordial isotope to a radioactive radiogenic nuclide daughter (e.g., uranium to radium). A few isotopes also continue to be naturally synthesized as nucleogenic nuclides, by some other natural nuclear reaction, such as when neutrons from natural nuclear fission are absorbed by another atom.
As discussed above, only 80 elements have any stable isotopes, and 26 of these have only one stable isotope. Thus, about two thirds of stable elements occur naturally on Earth in multiple stable isotopes, with the largest number of stable isotopes for an element being ten, for tin (Sn, Z = 50). There are about 94 elements found naturally on Earth (up to plutonium inclusive), though some are detected only in very tiny amounts, such as plutonium-244. Scientists estimate that the elements that occur naturally on Earth (some only as radioisotopes) occur as 339 isotopes (nuclides) in total. Only 254 of these naturally occurring isotopes are stable, in the sense of never having been observed to decay as of the present time. An additional 35 primordial nuclides (to a total of 289 primordial nuclides) are radioactive with known half-lives, but have half-lives longer than 80 million years, allowing them to exist from the beginning of the solar system. See list of nuclides for details.
All the known stable isotopes occur naturally on Earth; the other naturally occurring isotopes are radioactive but occur on Earth due to their relatively long half-lives, or else due to other means of ongoing natural production. These include the afore-mentioned cosmogenic nuclides, the nucleogenic nuclides, and any radiogenic radioisotopes formed by ongoing decay of a primordial radioactive isotope, such as radon and radium from uranium.
An additional ~3000 radioactive isotopes not found in nature have been created in nuclear reactors and in particle accelerators. Many short-lived isotopes not found naturally on Earth have also been observed by spectroscopic analysis, being naturally created in stars or supernovae. An example is aluminium-26, which is not naturally found on Earth, but which is found in abundance on an astronomical scale.
The tabulated atomic masses of elements are averages that account for the presence of multiple isotopes with different masses. Before the discovery of isotopes, empirically determined noninteger values of atomic mass confounded scientists. For example, a sample of chlorine contains 75.8% chlorine-35 and 24.2% chlorine-37, giving an average atomic mass of 35.5 atomic mass units.
According to generally accepted cosmology theory, only isotopes of hydrogen and helium, traces of some isotopes of lithium and beryllium, and perhaps some boron, were created at the Big Bang, while all other isotopes were synthesized later, in stars and supernovae, and in interactions between energetic particles such as cosmic rays, and previously produced isotopes. (See nucleosynthesis for details of the various processes thought to be responsible for isotope production.) The respective abundances of isotopes on Earth result from the quantities formed by these processes, their spread through the galaxy, and the rates of decay for isotopes that are unstable. After the initial coalescence of the solar system, isotopes were redistributed according to mass, and the isotopic composition of elements varies slightly from planet to planet. This sometimes makes it possible to trace the origin of meteorites.
Atomic mass of isotopes
The atomic mass (mr) of an isotope is determined mainly by its mass number (i.e. number of nucleons in its nucleus). Small corrections are due to the binding energy of the nucleus (see mass defect), the slight difference in mass between proton and neutron, and the mass of the electrons associated with the atom, the latter because the electron:nucleon ratio differs among isotopes.
The mass number is a dimensionless quantity. The atomic mass, on the other hand, is measured using the atomic mass unit based on the mass of the carbon-12 atom. It is denoted with symbols "u" (for unified atomic mass unit) or "Da" (for dalton).
The atomic masses of naturally occurring isotopes of an element determine the atomic mass of the element. When the element contains N isotopes, the expression below gives the average atomic mass:
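average atomic mass = x1·m1 + x2·m2 + ... + xN·mN (with the relative abundances x1, ..., xN expressed as fractions that sum to 1)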
where m1, m2, ..., mN are the atomic masses of each individual isotope, and x1, ..., xN are the relative abundances of these isotopes.
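A minimal Python sketch of this weighted average, using the chlorine figures quoted earlier as example data (the isotope masses below are approximate values, not taken from this article):
    # Abundance-weighted average atomic mass.
    # Example data: chlorine-35 (75.8%) and chlorine-37 (24.2%); masses in u.
    masses = [34.97, 36.97]
    abundances = [0.758, 0.242]
    average = sum(m * x for m, x in zip(masses, abundances))
    print(round(average, 1))   # about 35.5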
Applications of isotopes
Several applications exist that capitalize on properties of the various isotopes of a given element. Isotope separation is a significant technological challenge, particularly with heavy elements such as uranium or plutonium. Lighter elements such as lithium, carbon, nitrogen, and oxygen are commonly separated by gas diffusion of their compounds such as CO and NO. The separation of hydrogen and deuterium is unusual since it is based on chemical rather than physical properties, for example in the Girdler sulfide process. Uranium isotopes have been separated in bulk by gas diffusion, gas centrifugation, laser ionization separation, and (in the Manhattan Project) by a type of production mass spectrometry.
Use of chemical and biological properties
- Isotope analysis is the determination of isotopic signature, the relative abundances of isotopes of a given element in a particular sample. For biogenic substances in particular, significant variations of isotopes of C, N and O can occur. Analysis of such variations has a wide range of applications, such as the detection of adulteration of food products or the geographic origins of products using isoscapes. The identification of certain meteorites as having originated on Mars is based in part upon the isotopic signature of trace gases contained in them.
- Isotopic substitution can be used to determine the mechanism of a chemical reaction via the kinetic isotope effect.
- Another common application is isotopic labeling, the use of unusual isotopes as tracers or markers in chemical reactions. Normally, atoms of a given element are indistinguishable from each other. However, by using isotopes of different masses, even different nonradioactive stable isotopes can be distinguished by mass spectrometry or infrared spectroscopy. For example, in 'stable isotope labeling with amino acids in cell culture (SILAC)' stable isotopes are used to quantify proteins. If radioactive isotopes are used, they can be detected by the radiation they emit (this is called radioisotopic labeling).
Use of nuclear properties
- A technique similar to radioisotopic labeling is radiometric dating: using the known half-life of an unstable element, one can calculate the amount of time that has elapsed since a known level of isotope existed (a short numeric sketch follows this list). The most widely known example is radiocarbon dating, used to determine the age of carbonaceous materials.
- Several forms of spectroscopy rely on the unique nuclear properties of specific isotopes, both radioactive and stable. For example, nuclear magnetic resonance (NMR) spectroscopy can be used only for isotopes with a nonzero nuclear spin. The most common isotopes used with NMR spectroscopy are 1H, 2D, 15N, 13C, and 31P.
- Mössbauer spectroscopy also relies on the nuclear transitions of specific isotopes, such as 57Fe.
- Radionuclides also have important uses. Nuclear power and nuclear weapons development require relatively large quantities of specific isotopes. Nuclear medicine and radiation oncology utilize radioisotopes respectively for medical diagnosis and treatment.
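As a small illustration of the radiometric-dating arithmetic mentioned above, here is a minimal Python sketch; the 5,730-year half-life of carbon-14 is the only assumed figure, and the function name is our own:
    import math

    def elapsed_time(surviving_fraction, half_life_years):
        # N/N0 = (1/2)**(t / half_life)  =>  t = half_life * log2(N0/N)
        return half_life_years * math.log2(1.0 / surviving_fraction)

    # Example: a sample retaining 25% of its original carbon-14
    # is roughly two half-lives old.
    print(elapsed_time(0.25, 5730.0))   # 11460.0 years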
See also
- Abundance of the chemical elements
- Table of nuclides
- Table of nuclides (complete)
- List of isotopes
- List of isotopes by half-life
- List of elements by stability of isotopes
- Radionuclide (or radioisotope)
- Nuclear medicine (includes medical isotopes)
- List of particles
- Isotopes are nuclides having the same number of protons; compare:
- Isotones are nuclides having the same number of neutrons.
- Isobars are nuclides having the same mass number, i.e. sum of protons plus neutrons.
- Nuclear isomers are different excited states of the same type of nucleus. A transition from one isomer to another is accompanied by emission or absorption of a gamma ray, or the process of internal conversion. Isomers are by definition both isotopic and isobaric. (Not to be confused with chemical isomers.)
- Isodiaphers are nuclides having the same neutron excess, i.e. number of neutrons minus number of protons.
- Bainbridge mass spectrometer
- IUPAC (Connelly, N. G.; Damhus, T.; Hartshorn, R. M.; and Hutton, A. T.), Nomenclature of Inorganic Chemistry – IUPAC Recommendations 2005, The Royal Society of Chemistry, 2005 ; IUPAC (McCleverty, J. A.; and Connelly, N. G.), Nomenclature of Inorganic Chemistry II. Recommendations 2000, The Royal Society of Chemistry, 2001 ; IUPAC (Leigh, G. J.), Nomenclature of Inorganic Chemistry (recommendations 1990), Blackwell Science, 1990 ; IUPAC, Nomenclature of Inorganic Chemistry, Second Edition, 1970 ; probably in the 1958 first edition as well
- This notation seems to have been introduced in the second half of the 1930s. Before that, various notations were used, such as Ne(22) for neon-22 (1934), Ne22 for neon-22 (1935), or even Pb210 for lead-210 (1933).
- "Radioactives Missing From The Earth".
- "NuDat 2 Description".
- Choppin, G.; Liljenzin, J. O. and Rydberg, J. (1995) “Radiochemistry and Nuclear Chemistry” (2nd ed.) Butterworth-Heinemann, pp. 3–5
- Others had also suggested the possibility of isotopes; e.g.,
- Strömholm, Daniel and Svedberg, Theodor (1909) "Untersuchungen über die Chemie der radioactiven Grundstoffe II." (Investigations into the chemistry of the radioactive elements, part 2), Zeitschrift für anorganischen Chemie, 63: 197–206; see especially page 206.
- Alexander Thomas Cameron, Radiochemistry (London, England: J.M. Dent & Sons, 1910), p. 141. (Cameron also anticipated the displacement law.)
- Scerri, Eric R. (2007) The Periodic Table Oxford University Press, pp. 176–179 ISBN 0195305736
- Nagel, Miriam C. (1982). "Frederick Soddy: From Alchemy to Isotopes". Journal of Chemical Education 59 (9): 739–740. Bibcode:1982JChEd..59..739N. doi:10.1021/ed059p739.
- Kasimir Fajans (1913) "Über eine Beziehung zwischen der Art einer radioaktiven Umwandlung und dem elektrochemischen Verhalten der betreffenden Radioelemente" (On a relation between the type of radioactive transformation and the electrochemical behavior of the relevant radioactive elements), Physikalische Zeitschrift, 14: 131–136.
- Soddy announced his "displacement law" in: Soddy, Frederick (1913). "The Radio-Elements and the Periodic Law". Nature 91 (2264): 57. doi:10.1038/091057a0..
- Soddy elaborated his displacement law in: Soddy, Frederick (1913) "Radioactivity," Chemical Society Annual Report, 10: 262–288.
- Alexander Smith Russell (1888–1972) also published a displacement law: Russell, Alexander S. (1913) "The periodic system and the radio-elements," Chemical News and Journal of Industrial Science, 107: 49–52.
- Soddy first used the word "isotope" in: Soddy, Frederick (1913). "Intra-atomic charge". Nature 92 (2301): 399–400. doi:10.1038/092399c0.
- Fleck, Alexander (1957). "Frederick Soddy". Biographical Memoirs of Fellows of the Royal Society 3: 203–216. doi:10.1098/rsbm.1957.0014. "p. 208: Up to 1913 we used the phrase 'radio elements chemically non-separable' and at that time the word isotope was suggested in a drawing-room discussion with Dr. Margaret Todd in the home of Soddy's father-in-law, Sir George Beilby."
- Budzikiewicz H and Grigsby RD (2006). "Mass spectrometry and isotopes: a century of research and discussion". Mass spectrometry reviews 25 (1): 146–57. doi:10.1002/mas.20061. PMID 16134128.
- Scerri, Eric R. (2007) The Periodic Table, Oxford University Press, ISBN 0195305736, Ch. 6, note 44 (p. 312) citing Alexander Fleck, described as a former student of Soddy's.
- In his 1893 book, William T. Preyer also used the word "isotope" to denote similarities among elements. From p. 9 of William T. Preyer, Das genetische System der chemischen Elemente [The genetic system of the chemical elements] (Berlin, Germany: R. Friedländer & Sohn, 1893): "Die ersteren habe ich der Kürze wegen isotope Elemente genannt, weil sie in jedem der sieben Stämmme der gleichen Ort, nämlich dieselbe Stuffe, einnehmen." (For the sake of brevity, I have named the former "isotopic" elements, because they occupy the same place in each of the seven families [i.e., columns of the periodic table], namely the same step [i.e., row of the periodic table]. [In other words, each element in a given row of the periodic table has chemical properties that are similar ("isotopic") to those of the other elements in the same column of the periodic table.])
- The origins of the conceptions of isotopes Frederick Soddy, Nobel prize lecture
- Thomson, J.J. (1912). "XIX.Further experiments on positive rays". Philosophical Magazine Series 6 24 (140): 209. doi:10.1080/14786440808637325.
- Thomson, J.J. (1910). "LXXXIII.Rays of positive electricity". Philosophical Magazine Series 6 20 (118): 752. doi:10.1080/14786441008636962.
- Mass spectra and isotopes Francis W. Aston, Nobel prize lecture 1922
- Sonzogni, Alejandro (2008). "Interactive Chart of Nuclides". National Nuclear Data Center: Brookhaven National Laboratory. Retrieved 2013-05-03.
- Hult, Mikael; Elisabeth Wieslander, J.S.; Marissens, Gerd; Gasparro, Joël; Wätjen, Uwe; Misiaszek, Marcin (2009). "Search for the radioactivity of 180mTa using an underground HPGe sandwich spectrometer". Applied Radiation and Isotopes 67 (5): 918–21. doi:10.1016/j.apradiso.2009.01.057. PMID 19246206.
- "Radioactives Missing From The Earth". Don-lindsay-archive.org. Retrieved 2012-06-16.
- E. Jamin et al.; Guérin, Régis; Rétif, Mélinda; Lees, Michèle; Martin, Gérard J. (2003). "Improved Detection of Added Water in Orange Juice by Simultaneous Determination of the Oxygen-18/Oxygen-16 Isotope Ratios of Water and Ethanol Derived from Sugars". J. Agric. Food Chem. 51 (18): 5202. doi:10.1021/jf030167m.
- A. H. Treiman, J. D. Gleason and D. D. Bogard (2000). "The SNC meteorites are from Mars". Planet. Space Sci. 48 (12–14): 1213. Bibcode:2000P&SS...48.1213T. doi:10.1016/S0032-0633(00)00105-7.
- The Nuclear Science web portal Nucleonica
- The Karlsruhe Nuclide Chart
- National Nuclear Data Center Portal to large repository of free data and analysis programs from NNDC
- National Isotope Development Center Coordination and management of the production, availability, and distribution of isotopes, and reference information for the isotope community
- Isotope Development & Production for Research and Applications (IDPRA) U.S. Department of Energy program for isotope production and production research and development
- International Atomic Energy Agency Homepage of International Atomic Energy Agency (IAEA), an Agency of the United Nations (UN)
- Atomic Weights and Isotopic Compositions for All Elements Static table, from NIST (National Institute of Standards and Technology)
- Atomgewichte, Zerfallsenergien und Halbwertszeiten aller Isotope
- Exploring the Table of the Isotopes at the LBNL
- Current isotope research and information isotope.info
- Emergency Preparedness and Response: Radioactive Isotopes by the CDC (Centers for Disease Control and Prevention)
- Chart of Nuclides Interactive Chart of Nuclides (National Nuclear Data Center)
- Interactive Chart of the nuclides, isotopes and Periodic Table
- The LIVEChart of Nuclides – IAEA with isotope data, in Java or HTML
- Annotated bibliography for isotopes from the Alsos Digital Library for Nuclear Issues
| http://en.wikipedia.org/wiki/Isotope | 13
60 | ALGEBRA IS A METHOD OF WRITTEN CALCULATIONS that help us reason about numbers. At the very outset, the student should realize that algebra is a skill. And like any skill -- driving a car, baking cookies, playing the guitar -- it requires practice. A lot of practice. Written practice. That said, let us begin.
The first thing to note is that in algebra we use letters as well as numbers. But the letters represent numbers. We imitate the rules of arithmetic with letters, because we mean that the rule will be true for any numbers.
Here, for example, is the rule for adding fractions:
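a/c + b/c = (a + b)/c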
The letters a and b mean The numbers that are in the numerators. The letter c means The number that is in the denominator. The rule means:
"Whatever those numbers are, add the numerators
Algebra is telling us how to do any problem that looks like that. That is one reason why we use letters.
(The symbols for numbers, after all, are nothing but written marks. And so are letters. As the student will see, algebra depends only on the patterns that the symbols make.)
The numbers are the numerical symbols, while the letters are called literal symbols.
Question 1. What are the four operations of arithmetic, and
what are their operation signs?
3) Multiplication: a· b. Read a· b as "a times b."
The multiplication sign in algebra is a centered dot. We do not use the multiplication cross ×, because we do not want to confuse it with the letter x.
And so if a represents 2, and b represents 5, then
a· b = 2· 5 = 10.
"2 times 5 equals 10."
Do not confuse the centered dot -- 2·5, which in the United States means multiplication -- with the decimal point: 2.5.
However, we often omit the multiplication dot and simply write ab. Read "a, b." In other words, when there is no operation sign between two letters, or between a letter and a number, it always means multiplication. 2x means 2 times x.
In algebra, we use the horizontal division bar. If a represents 10, for example, and b represents 2, then a/b = 10/2 = 5.
"10 divided by 2 is 5."
Note: In algebra we call a + b a "sum" even though we do not name an answer. As the student will see, we name something in algebra simply by how it looks. In fact, you will see that you do algebra with your eyes, and then what you write on the paper, follows.
This sign = of course is the equal sign, and we read this --
a = b
-- as "a equals (or is equal to) b."
That means that the number on the left that a represents, is equal to the number on the right that b represents. If we write
a + b = c,
and if a represents 5, and b represents 6, then c must represent 11.
Question 2. What is the function of parentheses () in algebra?
3 + (4 + 5) 3(4 + 5)
Parentheses signify that we should treat what they enclose as one number.
3 + (4 + 5) = 3 + 9 = 12. 3(4 + 5) = 3· 9 = 27.
Note: When there is no operation sign between 3 and (4 + 5), it means multiplication.
Problem 1. In algebra, how do we write
a) 5 times 6? 5· 6
b) x times y? xy
d) x plus 5 plus x minus 2? (x + 5) + (x − 2)
e) x plus 5 times x minus 2? (x + 5)(x − 2)
Problem 2. Distinguish the following:
a) 8 − (3 + 2) b) 8 − 3 + 2
a) 8 − (3 + 2) = 8 − 5 = 3.
b) 8 − 3 + 2 = 5 + 2 = 7.
In a), we treat 3 + 2 as one number. In b), we do not. We are to first subtract 3 and then add 2. (But see the order of operations below.)
There is a common misconception that parentheses always signify multiplication. In Lesson 3, in fact, we will see that we use parentheses to separate the operation sign from the algebraic sign. 8 + (−2).
Question 3. Terms versus factors.
When numbers are added or subtracted, they are called terms.
When numbers are multiplied, they are called factors.
Here is a sum of four terms: a − b + c − d.
In algebra we speak of a "sum" of several terms, even though there are subtractions. In other words, anything that looks like what you see above, we call a sum.
Here is a product of four factors: abcd.
The word factor always signifies multiplication.
And again, we speak of the "product" abcd, even though we do not name an answer.
Problem 3. In the following expression, how many terms are there? And each term has how many factors?
2a + 4ab + 5a(b + c)
There are three terms. 2a is the first term. It has two factors: 2 and a.
Powers and exponents
When all the factors are equal -- 2· 2· 2· 2 -- we call the product a power of that factor. Thus, a· a is called the second power of a, or "a squared." a· a· a is the third power of a, or "a cubed." aaaa is a to the fourth power, and so on. We say that a itself is the first power of a.
Now, rather than write aaaa, we write a just once and place a small 4:
a4 ("a to the 4th")
That small 4 is called an exponent. It indicates the number of times to repeat a as a factor.
83 ("8 to the third power" or simply "8 to the third") means 8· 8· 8.
Problem 4. Name the first five powers of 2. 2, 4, 8, 16, 32.
Problem 5. Read, then calculate each of the following.
a) 52 "5 to the second power" or "5 squared" = 25.
b) 23 "2 to the third power" or "2 cubed" = 8.
c) 104 "10 to the fourth" = 10,000.
d) 121 "12 to the first" = 12.
The student must take care not to confuse the following: 3a means 3 times a. While a3 means a times a times a.
Question 4. When there are several operations,
8 + 4(2 + 3)2 − 7,
what is the order of operations?
Before answering, let us note that since skill in science is the reason students are required to learn algebra; and since orders of operations appear only in certain forms, then in these pages we present only those forms that the student is ever likely to encounter in the actual practice of algebra. The division sign ÷ is never used in scientific formulas, only the division bar. And the multiplication cross × is used only in scientific notation -- therefore the student will never see the following:
3 + 6 × (5 + 3) ÷ 3 − 8.
Such a problem would be purely academic, which is to say, it is an exercise for its own sake, and is of no practical value. It leads nowhere.
The order of operations is as follows:
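1) Evaluate the parentheses, if there are any.
2) Evaluate the exponents, that is, the powers.
3) Multiply or divide.
4) Add or subtract.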
In Examples 1 and 2 below, we will see in what sense we may add or subtract. And in Example 3 we will encounter multiply or divide.
Note: To "evaluate" means to name and write a number.
Example 1. 8 + 4(2 + 3)2 − 7
First, we will evaluate the parentheses, that is, we will replace 2 + 3 with 5:
= 8 + 4· 52 − 7
Since there is now just one number, 5, it is not necessary to write parentheses.
Notice that we transformed one element, the parentheses, and rewrote all the rest.
Next, evaluate the exponents:
= 8 + 4· 25 − 7
= 8 + 100 − 7
Finally, add or subtract, it will not matter. If we add first:
= 108 − 7 = 101.
While if we subtract first:
8 + 100 − 7 = 8 + 93 = 101.
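A quick check of this arithmetic in Python, which follows the same order of operations (the exponent is written as **):
    print(8 + 4 * (2 + 3) ** 2 - 7)   # 101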
Example 2. 100 − 60 + 3.
100 − 60 + 3 does not mean 100 − 63.
Only if there were parentheses --
100 − (60 + 3)
-- could we treat 60 + 3 as one number. In the absence of parentheses, the problem means to subtract 60 from 100, then add 3:
100 − 60 + 3 = 40 + 3 = 43.
In fact, it will not matter whether we add first or subtract first,
100 − 60 + 3 = 103 − 60 = 43.
When we come to signed numbers, we will see that
100 − 60 + 3 = 100 + (−60) + 3.
The order in which we "add" those will not matter.
There are no parentheses to evaluate and no exponents. Next in the order is multiply or divide. We may do either -- we will get the same answer. But it is usually more skillful to divide first, because we will then have smaller numbers to multiply. Therefore, we will first divide 35 by 5:
Example 4. ½(3 + 4)12 = ½· 7· 12.
The order of factors does not matter: abc = bac = cab, and so on. Therefore we may first do ½· 12. That is, we may first divide 12 by 2:
½· 7· 12 = 7· 6 = 42.
In any problem with the division bar, before we can divide we must evaluate the top and bottom according to the order of operations. In other words, we must interpret the top and bottom as being in parentheses.
Now we proceed as usual and evaluate the parentheses first. The answer is 4.
Problem 6. Evaluate each of the following according to the order of operations.
Question 5. What do we mean by the value of a letter?
The value of a letter is a number. It is the number that will replace the letter when we do the order of operations.
Question 6. What does it mean to evaluate an expression?
It means to replace each letter with its value, and then do the order of operations.
Example 6. Let x = 10, y = 4, z = 2. Evaluate the following.
In each case, copy the pattern. Copy the + signs and copy the parentheses ( ). When you come to x, replace it with 10. When you come to y, replace it with 4. And when you come to z, replace it with 2.
Problem 7. Let x = 10, y = 4, z = 2, and evaluate the following.
g) x2 − y2 + 3z2 = 100 − 16 + 3· 4 = 100 − 16 + 12 = 84 + 12 = 96.
Again, 100 − 16 + 12 does not mean 100 − (16 + 12).
That is 168 divided by 100. See Lesson 4 of Arithmetic, Question 4.
Question 7. Why is a literal symbol also called a variable?
Because its value may vary.
A variable, such as x, is a kind of blank or empty symbol. It is therefore available to take any value we might give it: a positive number or, as we shall see, a negative number; a whole number or a fraction.
Problem 8. Two variables. Let the value of the variable y depend
y = 2x + 4.
Calculate the value of y that corresponds to each value of x:
When x = 0, y = 2· 0 + 4 = 0 + 4 = 4.
When x = 1, y = 2· 1 + 4 = 2 + 4 = 6.
When x = 2, y = 2· 2 + 4 = 4 + 4 = 8.
When x = 3, y = 2· 3 + 4 = 6 + 4 = 10.
When x = 4, y = 2· 4 + 4 = 8 + 4 = 12.
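A short Python loop produces the same table of values:
    for x in range(5):        # x = 0, 1, 2, 3, 4
        y = 2 * x + 4
        print(x, y)           # prints 0 4, 1 6, 2 8, 3 10, 4 12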
Real problems in science or in business occur in ordinary language. To do such problems, we typically have to translate them into algebraic language.
Problem 9. Write an algebraic expression that will symbolize each of the following.
a) Six times a certain number. 6n, or 6x, or 6m. Any letter will do.
b) Six more than a certain number. x + 6
c) Six less than a certain number. x − 6
d) Six minus a certain number. 6 − x
e) A number repeated as a factor three times. x· x· x = x3
f) A number repeated as a term three times. x + x + x
g) The sum of three consecutive whole numbers. The idea: if the first is n, the sum is n + (n + 1) + (n + 2).
h) Eight less than twice a certain number. 2x − 8
i) One more than three times a certain number. 3x + 1
Now an algebraic expression is not a sentence, it does not have a verb, which is typically the equal sign = . An algebraic statement has an equal sign.
Problem 10. Write each statement algebraically.
a) The sum of two numbers is twenty. x + y = 20.
b) The difference of two numbers is twenty. x − y = 20.
c) The product of two numbers is twenty. xy = 20.
d) Twice the product of two numbers is twenty. 2xy = 20.
e) The quotient of two numbers is equal to the sum of those numbers. x/y = x + y.
A formula is an algebraic rule for evaluating some quantity. A formula is a statement.
Example 7. Here is the formula for the area A of a rectangle whose base is b and whose height is h.
A = bh.
"The area of a rectangle is equal to the base times the height."
And here is the formula for its perimeter P -- that is, its boundary:
P = 2b + 2h.
"The perimeter of a rectangle is equal to two times the base
For, in a rectangle the opposite sides are equal.
Problem 11. Evaluate the formulas for A and P when b = 10 in, and h = 6 in.
A = bh = 10· 6 = 60 in2.
P = 2b + 2h = 2· 10 + 2· 6 = 20 + 12 = 32 in.
Problem 12. The area A of trapezoid is given by this formula,
A = ½(a + b)h.
Find A when a = 2 cm, b = 5 cm, and h = 4 cm.
A = ½(2 + 5)4 = ½· 7· 4 = 7· 2 = 14 cm2.
When 1 cm is the unit of length, then 1 cm² ("1 square centimeter") is the unit of area.
Problem 13. The formula for changing a temperature in degrees Fahrenheit (F) to degrees Celsius (C) is:
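C = 5/9 (F − 32)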
Find C if F = 68°.
Replace F with 68: C = 5/9 (68 − 32) = 5/9 · 36 = 20.
"One ninth of 36 is 4. So five ninths is five times 4: 20."
| http://www.themathpage.com/Alg/algebraic-expressions.htm | 13
51 | We know what quadratic equations are. Now we're going to graph them. That's just how we roll. Whenever we find out what the Mega Millions lottery numbers are, we graph those, too. Same goes for basketball scores, election results and opening weekend movie grosses. We might have a problem.
In order to get functions, we'll graph quadratic equations of the form
y = ax2 + bx + c.
This will ensure that we only have one value of y for each value of x.
Graph the equation y = x2.
This is the simplest quadratic equation there is. It's also a relation, so we already know how to graph it. We may have forgotten it, but the knowledge is in there somewhere.
First we make a table of values so we can graph some points, and then we see the shape we're getting and connect the dots. Here is a sampling of points:
which we can graph:
This is forming a sort of "U'' shape, so that's how we connect the dots:
The only root of the polynomial x2 is x = 0. Therefore, 0 is the only value of x we can plug into the equation y = x2 if we want to get 0 for y. Un-coincidentally, the only point on the graph of y = x2 with a y-coordinate of 0 is the point (0, 0).
Once again, an appearance by the ever-popular 0. That guy is everywhere lately. He must be promoting a new film.
Let's do some slightly more interesting, albeit more complicated examples. Of course we could just graph these on the calculator, but we'd like to understand why we're getting the pictures we're getting, so unless you have one of those calculators that blabs on and on about how it "got there, " let's go ahead and do the work ourselves.
Sometimes we'll be asked to sketch a graph. Don't stress; no points will be taken off for less-than-perfectly-straight lines or a complete lack of artistic ability. The idea is to draw the rough shape of the graph and label a couple of easy values, but not to worry about pinpoint accuracy. Save the pinpoint accuracy for archery practice.
Without using a calculator, sketch a graph of the function y = x2 + 1.
When we graphed y = x2 + 2, we got the graph of y = x2 moved up by 2. When we graph y = x2 + 1, we get the graph of y = x2 moved up by 1. The only value we need to label on the sketch is y = 1.
The examples we've done so far have been pretty straightforward, but now we'll get into some examples that are straightbackward. The graphs in our prior examples looked like the graph of x2 moved up or down a bit, and possibly flipped upside down. To graph general quadratic equations, however, we need to do things differently.
We need a few new definitions first, so spread open your mind and clear some room...
The graph of any quadratic equation is a parabola. A parabola will look like either a right-side-up "U'':
or an upside-down "U.''
The places the parabola crosses the x-axis are called the x-intercepts, just like with linear equations. The place the parabola crosses the y-axis is the y-intercept. The lowest or highest point of the parabola, depending on which way it opens, is called the vertex of the parabola. Not a vortex, so no need to worry about being swallowed up into a whirling, tornado-like spiral...unless you live in Kansas.
In order to sketch the graph of a general quadratic equation, we need to know three things.
Let's walk through an example to see how to find all these things, and how to put them together into a graph. This will be a nice, relaxing walk. You can even do it sitting down; that's how relaxing it is.
Sketch a graph of y = x2 + 3x + 2.
We need to find the intercepts, the vertex, and whether the parabola opens upwards or downwards. We should also find out how late it's going to be open, in case we need to make a late night run.
1. Where does the parabola cross the axes?
First, where does the parabola cross the x-axis? An x-intercept is a point of the form
Therefore, the x intercepts of our graph will occur at whatever values of x make y zero; in other words, at the roots of the polynomial x2 + 3x + 2. To find the roots, we factor the polynomial to get
(x + 2)(x + 1),
set equal to 0, and solve. The equation
(x + 2)(x + 1) = 0
x = -2, x = -1.
These are the roots of the polynomial, and the x-intercepts of the parabola. It's not a problem that they're both negative. This is an equal opportunity graph.
We now know the points (-2, 0) and (-1, 0) are on the graph:
Where does the parabola cross the y-axis? To find the y-intercept, we plug in 0 for x and see what we get. In this case, we find
(0)2 + 3(0) + 2 = 2,
so the y-intercept is 2. We also have the point (0, 2) on the graph:
We now have two points. Must have been a safety.
2. What is the vertex?
The vertex of a parabola occurs halfway between the roots, at least when the roots exist. We'll worry in a moment about what happens when they don't. Well, you can worry about it now, but please keep it to yourself until we get to it. No good can come of spoiling it for everyone else in the meantime.
For this parabola, then, the vertex occurs when x is halfway between -2 and -1, or at x = -3/2. We find the y-value by plugging x = -3/2 into the quadratic equation to find y = (-3/2)² + 3(-3/2) + 2 = -1/4, so the point (-3/2, -1/4) is also on our parabola.
3. Does the graph open up or down?
Ever tried to push a "pull" door? Because we, um, haven't. It's important to know which way things open. A parabola is no different.
When x is outside the x-intercepts, the further away x gets from zero, the larger y gets, and it's not even taking growth hormones. When x = 5, y = 42; when x = 100, y = 10,302. We can imagine that if we graphed more points, we would see the graph opening upwards.
Putting all the pieces together, we connect our dots in a "U'' shape, like this:
Now that we've gone through a sample problem and you enjoyed it so much that your mouth is in the shape of an upward-opening parabola, let's talk a little more about the sub-problems involved in graphing a quadratic equation of the form
y = ax2 + bx + c.
1. Finding the intercepts.
There are two steps here: finding the x-intercepts and finding the y-intercept. It's like that Easter egg hunt all over again. The x-intercepts are the values of x that make y zero. In other words, the solutions to the quadratic equation
0 = ax2 + bx + c.
This may involve using the quadratic formula. Since not all quadratic equations have solutions, the graph might not have any x-intercepts. It could look like this, for example:
The y-intercept is the value we get when we plug x = 0 in to the equation ax2 + bx + c. When we do that, we find
a(0)2 + b(0) + c = c.
A parabola will always have a y-intercept, since c will always be some number (possibly 0). With all of this up-in-the-air x-intercept business, it's nice to know we can still rely on c and the y-intercept. They'll stay with us through thick and thin.
2. Finding the vertex.
When a parabola has x-intercepts, the vertex occurs halfway between them. The number halfway between two numbers is also known as their average. Just like how you're halfway between the ages of your older brother and younger sister, which makes you totally average. Wait...
The average of 4 and 10 is (4 + 10)/2 = 7, which is the number halfway between 4 and 10. If we use the quadratic formula to find the x-intercepts, we get the values x = (-b + √(b² - 4ac))/(2a) and x = (-b - √(b² - 4ac))/(2a). The number halfway between these two values is their average, which works out to -b/(2a). This means the vertex occurs at x = -b/(2a).
But wait, what happens when the quadratic formula doesn't get us any solutions? Sneakily enough, the vertex would still be at x = -b/(2a). The value -b/(2a) will always exist, because if a = 0 we don't have a quadratic formula in the first place. Oh, algebra. Always trying to slip one past us. Not this time, pal.
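Pulling the intercept and vertex recipes together, here is a minimal Python sketch (the function name is our own, and a is assumed nonzero):
    import math

    def parabola_features(a, b, c):
        # Vertex sits at x = -b/(2a); the y-intercept is simply c.
        vx = -b / (2 * a)
        vy = a * vx**2 + b * vx + c
        disc = b**2 - 4 * a * c
        if disc < 0:
            x_intercepts = []            # the parabola never crosses the x-axis
        else:
            r = math.sqrt(disc)
            x_intercepts = sorted([(-b + r) / (2 * a), (-b - r) / (2 * a)])
        return x_intercepts, c, (vx, vy)

    # The worked example y = x^2 + 3x + 2:
    print(parabola_features(1, 3, 2))    # ([-2.0, -1.0], 2, (-1.5, -0.25))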
3. Deciding whether the graph opens upwards ("U'') or downwards (upside-down "U'').
The coefficient a on the x2 term tells us whether the graph opens upward or downward. If a is positive, the graph opens upward. If a is negative, the graph opens downward. This should be easy to remember, because when you have a positive attitude your mouth forms a "U" shape, but when you're being negative it forms an upside-down "U" shape. Unless you're one of those people who can remain completely straight-faced at all times, in which case you're on your own with this one.
Anyway, the reason for this is that as x gets farther from 0, y will be getting more and more negative. | http://www.shmoop.com/functions/quadratic-functions.html | 13 |
101 |
Kinematics (Greek κινειν, kinein, to move) is a branch of dynamics which describes the motion of objects without consideration of the masses or forces that bring about the motion. In contrast, kinetics is concerned with the forces and interactions that produce or affect the motion.
The simplest application of kinematics is to point particle motion ( translational kinematics or linear kinematics). The description of rotation ( rotational kinematics or angular kinematics) is more complicated. The state of a generic rigid body may be described by combining both translational and rotational kinematics ( rigid-body kinematics). A more complicated case is the kinematics of a system of rigid bodies, possibly linked together by mechanical joints. The kinematic description of fluid flow is even more complicated, and not generally thought of in the context of kinematics.
Translational or curvilinear kinematics is the description of the motion in space of a point along a trajectory. This path can be linear, or curved as seen with projectile motion. There are three basic concepts that are required for understanding translational motion:
- Displacement is the shortest distance between two points: the origin and the displaced point. The origin is (0,0) on a coordinate system that is defined by the observer. Because displacement has both magnitude (length) and direction, it is a vector whose initial point is the origin and terminal point is the displaced point.
- Velocity is the rate of change in displacement with respect to time; that is, the displacement of a point changes with time. Velocity is also a vector. For a constant velocity, every unit of time adds the length of the velocity vector (in the same direction) to the displacement of the moving point. Instantaneous velocity (the velocity at an instant of time) is defined as v = ds/dt, where ds is an infinitesimally small displacement and dt is an infinitesimally small length of time. Average velocity (velocity over a length of time) is defined as Δs/Δt, where Δs is the change in displacement and Δt is the interval of time over which displacement changes.
- Acceleration is the rate of change in velocity with respect to time. Acceleration is also a vector. As with velocity, if acceleration is constant, for every unit of time the length of the acceleration vector (in the same direction) is added to the velocity. If the change in velocity (a vector) is known, the acceleration is parallel to it. Instantaneous acceleration (the acceleration at an instant of time) is defined as a = dv/dt, where dv is an infinitesimally small change in velocity and dt is an infinitesimally small length of time. Average acceleration (acceleration over a length of time) is defined as Δv/Δt, where Δv is the change in velocity and Δt is the interval of time over which velocity changes.
When acceleration is constant it is said to be undergoing uniformly accelerated motion. If this is the case, there are four equations that can be used to describe the motion of an object.
- v = v0 + at. Those who are familiar with calculus may recognize this as an initial value problem. Because acceleration (a) is a constant, integrating it with respect to time (t) gives a change in velocity, at. Adding this to the initial velocity (v0) gives the final velocity (v).
- Using the above formula, we can substitute for v to arrive at this equation, where s is displacement.
- By using the definition of an average, and the knowledge that average velocity times time equals displacement, we can arrive at this equation.
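In standard notation, the four constant-acceleration equations are v = v0 + at, s = s0 + v0t + ½at², s = s0 + ½(v0 + v)t, and v² = v0² + 2a(s − s0). A minimal Python sketch of the first two (function names are our own):
    def velocity(v0, a, t):
        # v = v0 + a*t
        return v0 + a * t

    def displacement(s0, v0, a, t):
        # s = s0 + v0*t + (1/2)*a*t**2
        return s0 + v0 * t + 0.5 * a * t**2

    # Example: starting from rest and accelerating at 2 m/s^2 for 3 s
    print(velocity(0.0, 2.0, 3.0))           # 6.0 (m/s)
    print(displacement(0.0, 0.0, 2.0, 3.0))  # 9.0 (m)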
To describe the motion of object A with respect to object O, when we know how each is moving with respect to object B, we use the following equation involving vectors and vector addition:
The above relative motion equation states that the motion of A relative to O is equal to the motion of B relative to O plus the motion of A relative to B.
For example, let Ann move with velocity VA and let Bob move with velocity VB, each velocity given with respect to the ground. To find how fast Ann is moving relative to Bob (we call this velocity VA/B), the equation above gives: VA = VB + VA/B.
To find VA/B we simply rearrange this equation to obtain: VA/B = VA − VB.
Rotational kinematics is the description of the rotation of an object and involves the definition and use of the following three quantities:
Angular position: If a vector is defined as the oriented distance from the axis of rotation to a point on an object, the angular position of that point is the oriented angle θ from a reference axis (e.g. the positive x-semiaxis) to that vector. An oriented angle is an angle swept about a known rotation axis and in a known rotation sense. In two-dimensional kinematics (the description of planar motion), the rotation axis is normal to the reference frame and can be represented by a rotation point (or centre), and the rotation sense is represented by the sign of the angle (typically, a positive sign means counterclockwise sense). Angular displacement can be regarded as a relative position. It is represented by the oriented angle swept by the above-mentioned point (or vector), from an angular position to another.
Angular velocity: The magnitude of the angular velocity ω is the rate at which the angular position θ changes with respect to time t: ω = dθ/dt.
Angular acceleration: The magnitude of the angular acceleration α is the rate at which the angular velocity ω changes with respect to time t: α = dω/dt.
The equations of translational kinematics can easily be extended to planar rotational kinematics with simple variable exchanges:
Here θ0 and θ are, respectively, the initial and final angular positions, ω0 and ω are, respectively, the initial and final angular velocities, and α is the constant angular acceleration. Although position in space and velocity in space are both true vectors (in terms of their properties under rotation), as is angular velocity, angle itself is not a true vector.
In any given situation, the most useful coordinates may be determined by constraints on the motion, or by the geometrical nature of the force causing or affecting the motion. Thus, to describe the motion of a bead constrained to move along a circular hoop, the most useful coordinate may be its angle on the hoop. Similarly, to describe the motion of a particle acted upon by a central force, the most useful coordinates may be polar coordinates.
Fixed rectangular coordinates
In this coordinate system, vectors are expressed as an addition of vectors in the x, y, and z direction from a non-rotating origin. Usually i is a unit vector in the x direction, j is a unit vector in the y direction, and k is a unit vector in the z direction.
The position vector, s (or r), the velocity vector, v, and the acceleration vector, a, are expressed using rectangular coordinates in the following way: s = x i + y j + z k, v = ds/dt, and a = dv/dt.
Three dimensional rotating coordinate frame
(to be written)
A kinematic constraint is any condition relating properties of a dynamic system that must hold true at all times. Below are some common examples:
Rolling without slipping
An object that rolls against a surface without slipping obeys the condition that the velocity of its centre of mass is equal to the cross product of its angular velocity with a vector from the point of contact to the centre of mass: v = ω × r.
For the case of an object that does not tip or turn, this reduces to v = R ω .
This is the case where bodies are connected by some cord that remains in tension and cannot change length. The constraint is that the sum of all components of the cord, however they are defined, is the total length, and the time derivative of this sum is zero. | http://pustakalaya.org/wiki/wp/k/Kinematics.htm | 13 |
98 | SOLVING QUADRATIC EQUATIONS USING THE QUADRATIC FORMULA
10.1A SOLVING QUADRATIC EQUATIONS USING THE QUADRATIC FORMULA
Review of Square Roots:
Recall, the square root of a number is a number that when multiplied by
itself, gives the original number.
For example: √25 = 5 because 52 = 25
25 is a perfect square because it has a rational square root. You
may use your calculator to
evaluate the square root of values that are not perfect squares.
For example, 13 is not a perfect square because it has an irrational
square root, which means
that its square root is a decimal number which never ends or repeats. However,
by using a
calculator, we can find that if we round to
three decimal places.
In addition to solving by factoring, quadratic equations may be solved by using
For any quadratic equation written in standard form: ax2 + bx + c = 0
with a ≠ 0,
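x = (-b ± √(b² - 4ac)) / (2a)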
The Quadratic Formula may be used to solve quadratic
equations which may be factored as
well as those which cannot be factored or are difficult to factor.
Example 1: Solve using the quadratic formula: 4x2 + 5x = 6
First write the equation in standard form: 4x2 + 5x – 6 = 0
Then a = 4, b = 5, and c = –6. Substitute these values into the quadratic
This gives two answers:
Note that the equation could have been solved by
factoring: (4x – 3)(x + 2) = 0
Set each factor equal to zero: 4x – 3 = 0 or x + 2 = 0
Solving each of these equations gives the same solutions
as the quadratic formula
gave: x = 3/4 or x = –2.
Example 2: Solve using the quadratic formula: m2 – m – 2 = 4m – 5
First write the equation in standard form by subtracting 4m from both sides and
5 to both sides of the equation: m2 – 5m + 3 = 0
Then a = 1, b = –5, and c = 3. Substitute these values into the quadratic
This gives two answers:
Note that (5 + √13)/2 and (5 – √13)/2 are
exact answers, while 4.303 and 0.697 are
approximations to three decimal places.
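A minimal Python sketch of the quadratic formula (the function name is our own), checked against Examples 1 and 2:
    import math

    def solve_quadratic(a, b, c):
        # Roots of ax^2 + bx + c = 0, assuming a nonnegative discriminant.
        disc = b**2 - 4 * a * c
        root = math.sqrt(disc)
        return (-b + root) / (2 * a), (-b - root) / (2 * a)

    print(solve_quadratic(4, 5, -6))   # (0.75, -2.0), i.e. x = 3/4 or x = -2
    print(solve_quadratic(1, -5, 3))   # approximately (4.303, 0.697)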
10.1B MORE APPLICATIONS OF QUADRATIC EQUATIONS
The following examples are applications of quadratic equations.
Example 3: Five times a number is 24 less than the square of that number. Find
Let n = the number.
Then 5n = n2 – 24.
Write this equation in standard form by subtracting 5n from both sides:
0 = n2 – 5n – 24, or n2 – 5n – 24 = 0.
You may solve this equation either by factoring or by using the quadratic
Factoring gives (n + 3)(n – 8) = 0.
Set each factor equal to zero: n + 3 = 0, or n – 8 = 0.
Then solve each of these equations to get n = –3 or n = 8.
The number is –3 or 8 (you must give both answers).
Check the answers in the original problem:
5(–3) = (–3)2 – 24 which gives –15 = –15
5(8) = (8)2 – 24 which gives 40 = 40
Example 4: The product of two consecutive odd integers is 195. Find the
Let x = the first odd integer.
Then x + 2 = the next odd integer (because odd numbers are two apart).
The equation is x (x + 2) = 195.
Distribute the x, and then subtract 195 from both sides of the equation to write
equation in standard form: x2 + 2x – 195 = 0.
Again, you may solve this equation either by factoring or by using the quadratic
Factoring gives (x + 15)(x – 13) = 0.
Set each factor equal to zero: x + 15 = 0, or x – 13 = 0.
Then solve each of these equations to get x = –15 or x = 13.
For each of these answers, find the next odd integer by adding two to your
For the first answer, –15, the next odd integer is x + 2 = –15 + 2 = –13.
For the second answer, 13, the next odd integer is x + 2 = 13 + 2 = 15.
The odd integers are –15 and –13, or the odd integers are 13 and 15 (you must
both pairs of answers).
Check the answers by multiplying:
(–15)(–13) = 195 , and
(13)(15) = 195 .
Example 5: The length of one leg of a right triangle is 9 meters. The length of
is three meters longer than the other leg. Find the length of the hypotenuse and
length of the other leg.
Let x = the length of the other leg in meters.
Then x + 3 = the length of the hypotenuse in meters.
Use the Pythagorean Theorem, a2 + b2 = c2, to
solve the problem:
x2 + 92 = (x + 3)2.
When simplified, the equation becomes x2 + 81 = x2 + 6x +
9 (don't forget that the
square of a binomial is a trinomial).
When x2 is subtracted from both sides of the equation, the equation
81 = 6x + 9, which is no longer a quadratic equation
because it has no x2 term.
Solve as a linear equation by subtracting 9 from both sides and then dividing by
72 = 6x, or , which becomes 12 = x.
Therefore, the length of the other leg is x = 12 meters.
The length of the hypotenuse is x + 3 = 12 + 3 = 15 meters.
Check by using the Pythagorean Theorem:
122 + 92 = 152, or
144 + 81 = 225, or
225 = 225 .
In addition to the problems assigned from your Personal
Academic Notebook for lesson 10.1,
work the following problems.
Solve the quadratic equations below either by factoring or by using the
Give exact answers, and where appropriate, give approximations to three decimal
8. Four times a number is 12 less than the square of that
number. Find all such numbers.
9. The product of two consecutive odd integers is 143. Find the two integers.
10. The sum of the squares of two consecutive integers is nine less than ten
larger. Find the two integers.
11. The length of a rectangular garden is 3 feet longer than the width. If the
area of the
garden is 88 square feet, find the dimensions of the garden.
12. A triangle has a base that is 2 cm longer than its height. The area of the
triangle is 12
square cm. Find the lengths of the height and the base of the triangle.
13. For an experiment, a ball is projected with an initial velocity of 48
air resistance, its height H, in feet, after t seconds is given by the formula
H = 48t – 16t2
How long will it take for the ball to hit the ground? (Hint: H = 0 when it hits
14. The length of one leg of a right triangle is 8 inches. The length of the
hypotenuse is four
inches longer than the other leg. Find the length of the hypotenuse and the
the other leg.
15. A water pipe runs diagonally under a rectangular garden that is one meter
longer than it
is wide. If the pipe is 5 meters long, find the dimensions of the garden.
8. The number is –2 or 6. [Equation is 4x = x2
9. The odd integers are 11 and 13, or the odd integers are –13 and –11.
[Equation is x (x + 2) = 143]
10. The consecutive integers are 0 and 1, or the consecutive integers are 4 and
[Equation is x2 + (x + 1)2 = 10(x + 1) – 9]
11. The width is 8 feet, and the length is 11 feet. Note that dimensions of
figures cannot be negative. [Equation is x (x + 3) = 88]
12. The height is 4 cm, and the base is 6 cm. [Equation is
13. The ball will hit the ground in 3 seconds. [Equation is 0 = 48t – 16t2]
14. The other leg is 6 inches, and the hypotenuse is 10 inches.
[Equation is x2 + 82 = (x + 4)2]
15. The width is 3 meters, and the length is 4 meters.
[Equation is x2 + (x + 1)2 = 52] | http://www.solve-variable.com/solving-quadratic-equations-using-the-quadratic-formula.html | 13 |
68 | Each person/thing we collect data on is called an OBSERVATION (in our work these are usually people/subjects. Currently, the term participant rather than subject is used when describing the people from whom we collect data).
OBSERVATIONS (participants) possess a variety of CHARACTERISTICS.
If a CHARACTERISTIC of an OBSERVATION (participant) is the same for every member of the group (doesn't vary) it is called a CONSTANT.
If a CHARACTERISTIC of an OBSERVATION (participant) differs for group members it is called a VARIABLE. In research we don't get excited about CONSTANTS (since everyone is the same on that characteristic); we're more interested in VARIABLES. Variables can be classified as QUANTITATIVE or QUALITATIVE (also known as CATEGORICAL).
QUANTITATIVE variables are ones that exist along a continuum that runs from low to high. Ordinal, interval, and ratio variables are quantitative. QUANTITATIVE variables are sometimes called CONTINUOUS VARIABLES because they have a variety (continuum) of characteristics. Height in inches and scores on a test would be examples of quantitative variables.
QUALITATIVE variables do not express differences in amount, only differences in kind. They are sometimes referred to as CATEGORICAL variables because they classify by categories. Nominal variables such as gender, religion, or eye color are CATEGORICAL variables. Generally speaking, categorical variables place observations into groups rather than measuring how much of something they have:
|Categorical variables are groups...such as gender or type of degree sought. Quantitative variables are numbers that have a range...like weight in pounds or baskets made during a ball game. When we analyze data we do turn the categorical variables into numbers but only for identification purposes...e.g. 1 = male and 2 = female. Just because 2 = female does not mean that females are better than males who are only 1. With quantitative data having a higher number means you have more of something. So higher values have meaning.|
A special case of a CATEGORICAL variable is a DICHOTOMOUS VARIABLE. DICHOTOMOUS variables have only two CHARACTERISTICS (male or female). When naming QUALITATIVE variables, it is important to name the category rather than the levels (i.e., gender is the variable name, not male and female).
Variables have different purposes or roles...
Independent (Experimental, Manipulated, Treatment, Grouping) Variable-That factor which is measured, manipulated, or selected by the experimenter to determine its relationship to an observed phenomenon. "In a research study, independent variables are antecedent conditions that are presumed to affect a dependent variable. They are either manipulated by the researcher or are observed by the researcher so that their values can be related to that of the dependent variable. For example, in a research study on the relationship between mosquitoes and mosquito bites, the number of mosquitoes per acre of ground would be an independent variable" (Jaeger, 1990, p. 373)
While the independent variable is often manipulated by the researcher, it can also be a classification where subjects are assigned to groups. In a study where one variable causes the other, the independent variable is the cause. In a study where groups are being compared, the independent variable is the group classification.
Dependent (Outcome) Variable-That factor which is observed and measured to determine the effect of the independent variable, i.e., that factor that appears, disappears, or varies as the experimenter introduces, removes, or varies the independent variable. "In a research study, the dependent variable defines a principal focus of research interest. It is the consequent variable that is presumably affected by one or more independent variables that are either manipulated by the researcher or observed by the researcher and regarded as antecedent conditions that determine the value of the dependent variable. For example, in a study of the relationship between mosquitoes and mosquito bites, the number of mosquito bites per hour would be the dependent variable" (Jaeger, 1990, p. 370). The dependent variable is the participant's response.
The dependent variable is the outcome. In an experiment, it may be what was caused or what changed as a result of the study. In a comparison of groups, it is what they differ on.
Moderator Variable- That factor which is measured, manipulated, or selected by the experimenter to discover whether it modifies the relationship of the independent variable to an observed phenomenon. It is a special type of independent variable.
The independent variable's relationship with the dependent variable may change under different conditions. That condition is the moderator variable. In a study of two methods of teaching reading, one of the methods of teaching reading may work better with boys than girls. Method of teaching reading is the independent variable and reading achievement is the dependent variable. Gender is the moderator variable because it moderates or changes the relationship between the independent variable (teaching method) and the dependent variable (reading achievement).
Suppose we do a study of reading achievement where we compare whole language with phonics, and we also include students' socioeconomic status (SES) as a variable. The students are randomly assigned to either whole language instruction or phonics instruction. There are students of high and low SES in each group.
Let's assume that we found that whole language instruction worked better than phonics instruction with the high SES students, but phonics instruction worked better than whole language instruction with the low SES students. Later you will learn in statistics that this is an interaction effect. In this study, language instruction was the independent variable (with two levels: phonics and whole language). SES was the moderator variable (with two levels: high and low). Reading achievement was the dependent variable (measured on a continuous scale, so there aren't levels).
With a moderator variable, we find the type of instruction did make a difference, but it worked differently for the two groups on the moderator variable. We select this moderator variable because we think it is a variable that will moderate the effect of the independent on the dependent. We make this decision before we start the study.
If the moderator had not been in the study above, we would have said that there was no difference in reading achievement between the two types of reading instruction. This would have happened because the average of the high and low scores of each SES group within a reading instruction group would cancel each other out and produce what appears to be average reading achievement in each instruction group (i.e., Phonics: Low 6 and High 2; Whole Language: Low 2 and High 6; Phonics has an average of 4 and Whole Language has an average of 4). If we just look at the averages (without regard to the moderator), it appears that the instruction types produced similar results.
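(In numbers, using the scores quoted above: the Phonics mean is (6 + 2)/2 = 4 and the Whole Language mean is (2 + 6)/2 = 4, so the two group means by themselves completely hide the interaction.)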
Extraneous Variable- Those factors which cannot be controlled.
Extraneous variables are independent variables that have not been controlled. They may or may not influence the results. One way to control an extraneous variable which might influence the results is to make it a constant (keep everyone in the study alike on that characteristic). If SES were thought to influence achievement, then restricting the study to one SES level would eliminate SES as an extraneous variable.
Here are some examples similar to your homework:
Null Hypothesis: Students who receive pizza coupons as a reward
do not read more books than students who do not receive pizza coupon rewards.
Independent Variable: Reward Status
Dependent Variable: Number of Books Read
Null Hypothesis: High achieving students do not perform better than low achieving students when writing stories, regardless of whether they use paper and pencil or a word processor.
Independent Variable: Instrument Used for Writing
Moderator Variable: Ability Level of the Students
Dependent Variable: Quality of Stories Written
When we are comparing two groups, the groups are the independent variable. When we are testing whether something influences something else, the influence (cause) is the independent variable. The independent variable is also the one we manipulate. For example, consider the hypothesis "Teachers given higher pay will have more positive attitudes toward children than teachers given lower pay."
One approach is to ask ourselves "Are there two or more groups being compared?" The answer is "Yes." "What are the groups?" Teachers who are given higher pay and teachers who are given lower pay. Therefore, the independent variable is teacher pay (it has two levels-- high pay and low pay). The dependent variable (what the groups differ on) is attitude toward children.
We could also approach this another way. "Is something causing something else?" The answer is "Yes."
"What is causing what?" Teacher pay is causing attitude towards school. Therefore, teacher pay
is the independent variable (cause) and attitude towards school is the dependent variable (outcome).
Research Questions and Hypotheses
The research question drives the study. It should specifically state what is being investigated. Statisticians often convert their research questions to null and alternative hypotheses. The null hypothesis states that no relationship (correlation study) or difference (experimental study) exists. Converting research questions to hypotheses is a simple task. Take the question and make it a positive statement that says a relationship exists (correlation studies) or a difference exists (experimental study) between the groups and we have the alternative hypothesis. Write a statement that a relationship does not exist or a difference does not exist and we have the null hypothesis.
Format for sample research questions and accompanying hypotheses:
Research Question for Relationships: Is there a relationship between
height and weight?
Null Hypothesis: There is no relationship between height and weight.
Alternative Hypothesis: There is a relationship between height and weight.
When a researcher states a nondirectional hypothesis in a study that compares the performance of two groups, she doesn't state which group she believes will perform better. If the word "more" or "less" appears in the hypothesis, there is a good chance that we are reading a directional hypothesis. A directional hypothesis is one where the researcher states which group she believes will perform better. Most researchers use nondirectional hypotheses.
We usually write the alternative hypothesis (what we believe might happen) before we write the null hypothesis (saying it won't happen).
Research Question for Differences: Do boys like reading more than girls?
Null Hypothesis: Boys do not like reading more than girls.
Alternative Hypothesis: Boys do like reading more than girls.
Research Question for Differences: Is there a difference between boys' and girls' attitude towards reading? --or-- Do boys' and girls' attitude towards reading differ?
Null Hypothesis: There is no difference between boys' and girls' attitude towards reading. --or-- Boys' and girls' attitude towards reading do not differ.
Alternative Hypothesis: There is a difference between boys' and girls' attitude towards reading. --or-- Boys' and girls' attitude towards reading differ.
Del Siegle, Ph.D.
Neag School of Education - University of Connecticut | http://www.gifted.uconn.edu/siegle/research/Variables/variablenotes.htm | 13 |
Friction is the physical description of the force exerted between two surfaces in contact with each other. In the example of surf wax, when you press onto the board your body applies a force onto the surface of the board. This force has two parts: there is a force into the surface (called the normal force), and there is another force parallel to the surface of the board. Figure 1 shows this behavior. Friction comes in with the parallel force. A frictional force is created at the surface between your body and the board, which is equal to the force that, say, your foot imparts to the board. This frictional force is equal to the "Coefficient of Friction" times the normal force. Rather confusing at first, but important in that the frictional force is very dependent on the coefficient of friction for any given normal force. Note that when you are standing upright, the normal force is essentially how much you weigh, so your only hope for more friction is: 1) get heavier, or 2) increase the coefficient of friction. When wax makers claim their wax is "sticky," that's what they are saying: that their wax has a higher coefficient of friction. Of course there are many other features to wax as well, such as durability, ease of application, whether you got it for free, etc., but in terms of functionality, stickiness and the coefficient of friction are pretty much the same. Note that wax deforms pretty easily, and in the classic sense friction is hard to quantify, but for fun let's assume that the classic description of friction is somewhat valid.
One can measure the coefficient of friction with a simple test. I set-up a simple test to start quantifying how sticky various waxes are. The test set-up is pretty simple and you can do this yourself at home. This might make a fun science project for someone in school. I found at least one person who has conducted a surf wax friction comparison test as a school project. So far I have found that you need to do LOTS of tests because of the inherent variability of wax friction. Note that friction coefficients are typically like this. Surf wax friction tests seems especially difficult to measure. Before I show you my test apparatus, we need to do some physics to figure out how to solve for the coefficient of friction. A surfing friend of mine once commented something to the effect that geometry was fairly useless. Here is a case where it is not useless. The method to determine the coefficient of friction is basically a geometric proof. The idea is to use some physics to draw a free body diagram, manipulate some force vectors and derive an equation that quantify the effect of friction. Figure 2 shows the free body diagram, various geometry relationships and the force vectors. One can find this derivation in most college freshman physics texts and some high school sources. The punch line is: The tangent of the angle of inclination is the coefficient of friction. The trick to this exercise is knowing that the angles shown in the figure are the same which is a geometric proof for similar triangles. This would be something done in high school geometry, though it is a bit confusing and I commonly forget the proof. Now I have just memorized that it works. Anyhow, some geometry allows us to do a simple test and calculate the coefficient of friction by raising an inclined plane and watching something slip down.
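(To spell out the free-body argument in equation form, under the classic friction model assumed here: at the moment the foot simulator starts to slip, the friction force µN equals the component of the weight along the board, W sin(θ), while the normal force is N = W cos(θ); dividing the two gives µ = tan(θ). So, for example, slipping at an incline of 30 degrees corresponds to µ = tan 30° ≈ 0.58.)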
So how do I use that information. Now we get to the test set-up. See figure 3 for my test set-up. In this figure I have a surfboard which has been waxed. I actually waxed it with different waxes in different locations so I can test various waxes together for comparison. I have attached a shop crane to the tail of the board, so I can slowly raise the board at an angle from horizontal. I place a small board to simulate a hand or foot onto the board and then apply a weight onto the foot simulator. I then begin raising the tail of the board slowly and watch the foot simulator until I see it begin to move. Once it begins to move, I stop raising the board and I measure the angle the board makes with the floor. The tangent of this angle is the coefficient of friction. Normally the coefficient of friction is constant, which means its value is the same no matter how large the normal force is. These means that the frictional force increases the harder I press down on a surface and that the increase in frictional force is proportional to the applied load. Surf wax is not like this. I have found that the coefficient of friction is also dependent on the normal force, which is not typical for most materials. The coefficient of friction of surf wax DECREASES as the applied load is increases. This means that wax is stickier when you touch it (a small applied or normal force), than when you applied a hard force (like standing up). As is well known, wax deforms slightly when pressed on, which shows that the wax is plastically yielding or flowing under high applied loads. The coefficient of friction will change if the wax deforms and this test data shows that. Of course, it will also be dependent on the temperature, so one should measure that as well. The variation of the results varies a fair amount from test to test, so for each test condition, I like to repeat the exact same test conditions at least three tests with the foot simulator being placed at different locations on the board. The important feature to remember is that the stickiness of wax when you touch it is not the same stickiness you will get when you really need it.
I tested several different brands of surf wax at roughly 60-70 degrees F. The waxes were selected as the best fit for that temperature. I was surprised to see that there was quite a bit of difference between brands. I am not showing any brand names, but anecdotally, I have heard surfers claim that what I have found to be one of the less sticky waxes is actually the strongest and stickiest. Of course the actual ocean is different from my test, so preference of use may not be so easy to "test" like this. However, the basic trends I have found seem reasonable.
I will be doing further tests in the future and will update this data. Figure 4 shows the current test data. Note that the normal force is shown as "normal stress". This is the normal force divided by the contact area of the foot simulator and board. A "normal stress" is similar to pressure. I assumed that the wax may be undergoing local compressive failure at the contact location, and general material failure theory suggests that one needs to measure this applied stress. Based on that theory, I am assuming that the data would fit better using a stress rather than a normal force. It seems a little more complicated than necessary, but it will help when I do further work in the future.
What you see is that all waxes seem to obey the same general behavior. As the applied load increases, the coefficient of friction decreases. In fact the coefficient of friction drops by more than half at higher loads. My highest load is roughly equivalent to a 100 pound person standing upright on a board. The variation of the coefficient of friction is large. The Brand 1 coefficient is higher than Brands 2 and 3, which are almost identical. I would expect that as the temperature varies, for any given wax there will be an optimal temperature which will yield the maximum stickiness. This will require a more temperature controlled experiment, such as enclosing the surfboard test apparatus in a temperature controlled chamber. These tests were also done "dry", so other tests with water could be performed as well.
Figure 1 - Nomenclature for Friction Forces
Figure 3 - Example of Surf Wax Friction Test Set-Up
Figure 4 - Comparison of Several Surf Waxes | http://www.waveequation.com/surfwax_friction_data.html | 13 |
69 |
For example, since 3² = 9, the number 3 is a square root of 9.
This example suggests how square roots can arise when solving quadratic equations such as x² = 9 or, more generally, x² = a.
Every non-zero number has two square roots. For a positive real number, the two square roots are the principal square root and the negative square root. For negative real numbers, the concept of imaginary and complex numbers has been developed to provide a mathematical framework to deal with the results.
Square roots of positive integers are often irrational numbers, i.e., numbers not expressible as a ratio of two integers. For example, √2 cannot be written exactly as m/n, where n and m are integers. Nonetheless, it is exactly the length of the diagonal of a square with side length 1.
- The principal square root function is a function which maps the set of non-negative real numbers onto itself.
- The principal square root function always returns a unique value.
- To obtain both roots of a positive number, take the value given by the principal square root function as the first root (root1) and obtain the second root (root2) by subtracting the first root from zero (ie root2 = 0 - root1).
- The following important properties of the square root function are valid for all positive real numbers x and y: √(xy) = √x √y and √(x/y) = √x / √y.
- The square root function maps rational numbers to algebraic numbers; also, √x is rational if and only if x is a rational number which, after cancelling, is a ratio of two perfect squares. In particular, √2 is irrational.
- Contrary to popular belief, √(x²) does not necessarily equal x. The equality holds for non-negative x, but when x < 0, √(x²) is positive by definition, and thus √(x²) = −x. Therefore, √(x²) = |x| for real x (see absolute value).
- Suppose that x and a are reals, and that x² = a, and we want to find x. A common mistake is to "take the square root" and deduce that x = √a. This is incorrect, because the principal square root of x² is not x, but the absolute value |x|, by one of our above rules. Thus, all we can conclude is that |x| = √a, or equivalently x = ±√a.
- In calculus, for instance when proving that the square root function is continuous or differentiable, or when computing certain limits, the following identity often comes in handy: √x − √y = (x − y) / (√x + √y),
- valid for all non-negative numbers x and y which are not both zero.
- The function f(x) = √x has the following graph, made up of half a parabola lying on its side:
- The function is continuous for all non-negative x and differentiable for all positive x (it is not differentiable at x = 0 since the slope of the tangent there is ∞). Its derivative is given by
- f′(x) = 1 / (2√x) for x > 0.
There are numerous methods to compute square roots. See the article on methods of computing square roots.
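One classic approach is Newton's (Heron's) iteration; the linked article covers several methods, so treat the following Haskell sketch only as an illustration of that one idea (the function name and tolerance are my own choices):

```haskell
-- Newton's (Heron's) iteration for the principal square root of a non-negative number.
sqrtNewton :: Double -> Double
sqrtNewton a = go (max a 1)          -- any positive starting guess works
  where
    go x
      | abs (x * x - a) < 1e-12 * max a 1 = x
      | otherwise                         = go ((x + a / x) / 2)

-- ghci> sqrtNewton 2   yields approximately 1.41421356237...
```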
Square roots of complex numbers
To every non-zero complex number z there exist precisely two numbers w such that w2 = z. The usual definition of √z is as follows: if z = r exp(iφ) is represented in polar coordinates with -π < φ ≤ π, then we set √z = √r exp(iφ/2). Thus defined, the square root function is holomorphic everywhere except on the non-positive real numbers (where it isn't even continuous). The above Taylor series for √(1+x) remains valid for complex numbers x with |x| < 1.
When the number z = x + iy is in rectangular form the following formula can be used:
√(x + iy) = √((|z| + x)/2) ± i √((|z| − x)/2), with |z| = √(x² + y²),
where the sign of the imaginary part of the root is the same as the sign of the imaginary part of the original number.
Note that because of the discontinuous nature of the square root function in the complex plane, the law √(zw) = √(z)√(w) is in general not true. Wrongly assuming this law underlies several faulty "proofs", for instance the following one showing that -1 = 1:
The third equality cannot be justified. (See invalid proof.)
However, the law can only be wrong by a factor of −1 (it is right up to a factor of −1): √(zw) = ±√(z)√(w) is true with ± taken as + or as − (but not both at the same time). Note that √(c²) = ±c, therefore √(a²b²) = ±ab, and therefore √(zw) = ±√(z)√(w), using a = √(z) and b = √(w).
Square roots of matrices and operators
- Main article: square root of a matrix
If A is a positive-definite matrix or operator, then there exists precisely one positive definite matrix or operator B with B2 = A; we then define √A = B.
More generally, to every normal matrix or operator A there exist normal operators B such that B2 = A. In general, there are several such operators B for every A and the square root function cannot be defined for normal operators in a satisfactory manner. Positive definite operators are akin to positive real numbers, and normal operators are akin to complex numbers.
Infinitely nested square roots
Under certain conditions infinitely nested radicals such as
x = √(2 + √(2 + √(2 + ···)))
represent rational numbers. This rational number can be found by realizing that x also appears under the radical sign, which gives the equation
x = √(2 + x).
If we solve this equation, we find that x = 2. More generally, we find that
√(n + √(n + √(n + ···))) = (1 + √(1 + 4n)) / 2.
Beware, however, of the discontinuity for n=0. The infinitely nested square root for n=0 does not equal one, as the "general" solution would indicate. Rather, it is (obviously) zero.
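A quick numerical sanity check of the n = 2 case (a sketch in Haskell; the function name is my own):

```haskell
-- Iterate x -> sqrt (n + x) a given number of times, starting from 0.
nestedRadical :: Double -> Int -> Double
nestedRadical n k = iterate (\x -> sqrt (n + x)) 0 !! k

-- ghci> nestedRadical 2 50   approaches 2.0, matching (1 + sqrt (1 + 4*2)) / 2
```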
The same procedure also works to get
This method will give a rational value for all values of n such that 1 + 4n is a perfect square.
Square roots of the first 20 positive integers
√1 = 1
√2 ≈ 1.4142135623 7309504880 1688724209 6980785696 7187537694 8073176679 7379907324 78462
√3 ≈ 1.7320508075 6887729352 7446341505 8723669428 0525381038 0628055806 9794519330 16909
√4 = 2
√5 ≈ 2.2360679774 9978969640 9173668731 2762354406 1835961152 5724270897 2454105209 25638
√6 ≈ 2.4494897427 8317809819 7284074705 8913919659 4748065667 0128432692 5672509603 77457
√7 ≈ 2.6457513110 6459059050 1615753639 2604257102 5918308245 0180368334 4592010688 23230
√8 ≈ 2.8284271247 4619009760 3377448419 3961571393 4375075389 6146353359 4759814649 56924
√9 = 3
√10 ≈ 3.1622776601 6837933199 8893544432 7185337195 5513932521 6826857504 8527925944 38639
√11 ≈ 3.3166247903 5539984911 4932736670 6866839270 8854558935 3597058682 1461164846 42609
√12 ≈ 3.4641016151 3775458705 4892683011 7447338856 1050762076 1256111613 9589038660 33818
√13 ≈ 3.6055512754 6398929311 9221267470 4959462512 9657384524 6212710453 0562271669 48293
√14 ≈ 3.7416573867 7394138558 3748732316 5493017560 1980777872 6946303745 4673200351 56307
√15 ≈ 3.8729833462 0741688517 9265399782 3996108329 2170529159 0826587573 7661134830 91937
√16 = 4
√17 ≈ 4.1231056256 1766054982 1409855974 0770251471 9922537362 0434398633 5730949543 46338
√18 ≈ 4.2426406871 1928514640 5066172629 0942357090 1562613084 4219530039 2139721974 35386
√19 ≈ 4.3588989435 4067355223 6981983859 6156591370 0392523244 4936890344 1381595573 28203
√20 ≈ 4.4721359549 9957939281 8347337462 5524708812 3671922305 1448541794 4908210418 51276
Geometric construction of the square root
You can construct a square root with a compass and straightedge. This has been known at least since the time of the Pythagoreans. In his Elements, Euclid (fl. 300 BC) gave the construction of the geometric mean of two quantities in two different places: Proposition II.14 and Proposition VI.13. Since the geometric mean of a and b is √(ab), you can construct √a simply by taking b = 1.
- Quadratic residue
- Radical (mathematics)
- Quadratic irrational
- Cube root
- Integer square root
- Root of unity
- Methods of computing square roots
- Square root of a matrix
- Japanese soroban techniques - Professor Fukutaro Kato's method
- Japanese soroban techniques - Takashi Kojima's method
- Algorithms, implementations, and more - Paul Hsieh's square roots webpage
- Square root of positive real numbers with implementation in Rexx.
|This page uses Creative Commons Licensed content from Wikipedia (view authors).| | http://psychology.wikia.com/wiki/Square_root | 13 |
50 | Welcome to this little explanation on how to determine the fold of a Haskell
datatype. First we’ll look at how we define functions over lists, something
everyone starting with Haskell should be sufficiently familiar with, after which
we move on to the datatypes. You’ll see different ways how to calculate the sum
of a list, how to fold over a list, what datatypes are and, how to fold over a
datatype, specifically the
BinTree a datatype. Most importantly, I hope you
will grow to understand what a fold is, and why they are so important and useful
when programming Haskell.
During our functional programming course at Utrecht University I noticed students having anxiety about datatypes and even more about folds. Not because they couldn't grasp the workings of a single example, but more because they lacked a view of the complete picture and lacked experience with the Haskell datatype way.
So, what are datatypes? Datatypes are a way of notating the abstract structure of your data. There are several datastructures known to man, such as lists and trees.
Lists in Haskell are used in several ways. Today we will look at how to calculate the sum of a list. Intuitively you calculate the sum of a list by adding its elements together. Starting with the first element and then continuing on to the rest. This is how we literally translate that thought into Haskell code:
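(The original listing is gone; a sketch of the recursive sum it described, with sumList as a guessed name, would be:)

```haskell
sumList :: [Int] -> Int
sumList []     = 0
sumList (x:xs) = x + sumList xs
```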
Would we be calculating the product of the list, we’d do the same except we multiply instead of adding.
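(Similarly, a sketch of the product version; again the name is a guess:)

```haskell
productList :: [Int] -> Int
productList []     = 1
productList (x:xs) = x * productList xs
```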
Looking at these two examples we can see that we have two similarities. We always have recursion on the tail of the list and we do something with the head of the list and the result of the recursion. In the first example we add them together, and in product we multiply them. Another property of most functions over lists is that there is a base case for the empty list. We call this the identity of our function, sometimes also referred to as unit. The identity of addition is 0 and the identity of multiplication is 1.
Now we look at one of the folds over lists defined in the prelude,
foldr. Make sure you know what each parameter stands for.
Note: There are some downsides to this function, mostly that it
will not work for large lists, you’ll see more on this later.
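(The listing presumably showed the Prelude definition of foldr, which looks like this:)

```haskell
foldr :: (a -> b -> b) -> b -> [a] -> b
foldr f z []     = z
foldr f z (x:xs) = f x (foldr f z xs)
```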
Now, take a moment to let this function soak in and try to think of how you could write product in terms of this foldr. Crucial at this point is to notice that we do not see any Int type hardcoded in the type of foldr. It may be helpful to look at this fold, which is the identity function for lists:
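(A sketch of that identity fold:)

```haskell
idList :: [a] -> [a]
idList = foldr (:) []
```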
If we look at our sum function, the operator between each recursive call is
(+) and our base case is 0. So that’s what we are going to use for
our sum in terms of foldr.
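(A sketch of sum written with foldr:)

```haskell
sumList :: [Int] -> Int
sumList = foldr (+) 0
```

Generalising the signature to `sumList :: Num a => [a] -> a`, as the next paragraph does, leaves this one-line body untouched.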
Thus far we only have lists of Int for our examples. However, for the sake of usability we will now move on to lists of numeric elements, because (+) is defined for all numbers. (If you wish to read more on this subject please look up classes and instances.) Notice how the type of sumList changes while its definition remains unaltered.
Now we have seen how we can determine the function and identity for our fold, and how to go from our recursive function to a definition in terms of foldr. And most importantly, foldr takes care of the recursive nature of the list for us, and the only thing it asks us in return for that is an operator and an identity for our operation. So the fold can now be used to define several operations on lists.
I told you about a problem of foldr. Depending on the size of your list, the above sumList may not work. Try the following on your machine:
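(The two listings that followed are gone; they presumably contrasted the foldr version with a left fold on a large list, something along these lines, with the exact numbers being my own:)

```haskell
-- In GHCi:
--   sumList [1..1000000]
-- builds a million nested thunks with foldr and can blow the stack, whereas a
-- strict left fold runs in constant space:
--   import Data.List (foldl')
--   foldl' (+) 0 [1..1000000]   -- => 500000500000
```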
For a complete overview and analysis on
foldl please read A tutorial on the universality and
expressiveness of fold, by Graham Hutton.
First, let’s take a look at how data structures in Haskell can be defined. In short we have the data keyword, followed by the name of the type and zero or more type variables, an = and then a number of constructors separated by a |.
Now that you have familiarized yourself with lists we can proceed to a slightly more complicated datastructure. The Tree. In this example we will use a simple binary tree. A binary tree can be denoted as follows:
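(The declaration itself is missing here; judging from the description that follows, it was presumably along these lines:)

```haskell
data BinTree a = Node a (BinTree a) (BinTree a)
               | Leaf
```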
In this case BinTree is the type constructor, and Node and Leaf are the data constructors.
Remember that the goal of folds is to separate the implementation of the recursion from the actual operation we want to execute on the datatype. So we have one function, the fold, that takes care of the recursion and several other functions that use this fold to specify certain semantics on the datatype. We are going to calculate the sum of all elements in this tree.
The above binary tree has elements in the nodes and nothing in the leaves. You can notice the recursive occurrence of BinTree a in the node. We see two constructors in this datatype, Node and Leaf. The Node constructor expects some value of type a and two subtrees of type BinTree a, and the Leaf constructor has no parameters. In Haskell:
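(Again the listing itself is missing; presumably it repeated the declaration, shown here with comments and a deriving clause of my own so later examples can be printed:)

```haskell
data BinTree a = Node a (BinTree a) (BinTree a)   -- a value plus two subtrees
               | Leaf                             -- no parameters
  deriving (Show)
```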
Notice that our list also has two constructors, namely the (:) constructor that adds an element and the constructor for the empty list, []. Analogously we have the Node and Leaf constructors. If we were allowed to use the (:) and [] constructors in our own Haskell code, the datatype for a list would look like this:
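(Roughly like this; note that this is not legal user code, since these names are built in:)

```haskell
-- pseudocode: the built-in list type, written as if we could define it ourselves
data [a] = a : [a]
         | []
```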
Also, it is customary to keep the arguments for the functions in the same order as the constructors are defined in the datatype, and to name the identifiers containing the functions the same as the constructor function but in lowercase. Applying this we get:
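(A sketch of the fold, written without a type signature on purpose; see the next sentence:)

```haskell
foldBinTree node leaf (Node x l r) = node x (foldBinTree node leaf l) (foldBinTree node leaf r)
foldBinTree node leaf Leaf         = leaf
```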
I left out the result type of this fold, try to find it yourself before continuing.
By following our code we can see that every case, the case for Node and the one for Leaf, results in a BinTree a when node and leaf play the roles of the constructors. Consequently the complete result of the function is a BinTree a.
Now recall that we previously used a fold to calculate the sum of a list. With that fold we were not restricted to a list, which is also clearly visible by looking at the type of foldr: it contains only type variables. As we want to calculate the sum of the elements in this tree, which is something of type a and not of type BinTree a, we have to revise the type of our fold. Notice the recurrence of the datatype in its declaration, BinTree a. We will replace all of these occurrences in the types of our functions with a free type variable, say b:
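(A sketch of the generalised fold; the helper go is my way of matching the eta-reduction remark that follows:)

```haskell
foldBinTree :: (a -> b -> b -> b) -> b -> BinTree a -> b
foldBinTree node leaf = go
  where
    go (Node x l r) = node x (go l) (go r)
    go Leaf         = leaf
```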
Keep in mind that I used ‘eta reduction’ in the above code. This means that the last parameter (in this case the tree) isn’t explicitly specified, as it would appear at the very end of the parameter list and at the very end of the definition. (More on eta reduction.) So, now let's do something with this fold. Suppose we have the following tree:
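(A sketch matching the picture below; the name exampleTree is my own:)

```haskell
exampleTree :: BinTree Int
exampleTree =
  Node 1
    (Node 2 (Node 3 Leaf Leaf) (Node 3 Leaf Leaf))
    (Node 2 (Node 3 Leaf Leaf) (Node 3 Leaf Leaf))
```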
A visual representation:
      1
     / \
    2   2
   / \ / \
  3   3 3   3
Now we can define several traversals over this tree. Let’s calculate the sum of all values in the tree.
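(A sketch, using the two-argument fold from above:)

```haskell
sumBinTree :: Num a => BinTree a -> a
sumBinTree = foldBinTree (\x l r -> x + l + r) 0
```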
The type of our foldBinTree is now more compact. However, there will be datatypes that contain a larger number of constructors, which may also have more or fewer parameters than in our case. Defining the type of the fold on those datatypes as we have done before will inevitably lead to very clumsy type signatures. The idea is that we split up the section that denotes our functions into a separate type, namely the algebra.
So now we define an algebra for our data structure. To do this we again look at each data constructor and determine its type. The fold function takes care of the recursion, so applying this thought consequently gives us the following types and fold.
Keep in mind that we have to change the type of our fold, but not the definition of the fold itself!
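(A sketch; the text says the two functions now travel together as one tuple, so the algebra type and fold might look like this, with the name BinTreeAlgebra being a guess:)

```haskell
type BinTreeAlgebra a b = (a -> b -> b -> b, b)

foldBinTree :: BinTreeAlgebra a b -> BinTree a -> b
foldBinTree (node, leaf) = go
  where
    go (Node x l r) = node x (go l) (go r)
    go Leaf         = leaf
```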
Now we can again define a sum on all Num a trees. Notice that we went from two separate parameters for the functions to one tuple with two components:
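(A sketch:)

```haskell
sumBinTree :: Num a => BinTree a -> a
sumBinTree = foldBinTree (\x l r -> x + l + r, 0)
```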
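(The final listing is also gone; a plausible usage example, with the results worked out for the example tree above:)

```haskell
-- ghci> sumBinTree exampleTree
-- 17
-- Other traversals are just other algebras, for instance counting the nodes:
-- ghci> foldBinTree (\_ l r -> 1 + l + r, 0) exampleTree
-- 7
```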
That’s it for now. If you did not understand everything by the end of this article, don’t panic. It will sink in eventually. Let this rest a day or two, and then read this article again and you’ll understand folds better. :) | http://alessandrovermeulen.me/2009/12/17/haskell-datatypes-and-folds/ | 13 |
51 | Although lasers range from quantum-dot to football-field size and utilize materials from gases to solids, the underlying operating principles are always the same. This article provides the basic information about how and why lasers work.
When the laser was
first demonstrated in 1960, it sparked a wave of public interest. Soon, however,
many scientists and engineers dismissed the laser as “a solution without a
problem.” Time has proved the critics very wrong. From communications to construction,
laser technology has become a part of everyday life.
All light sources convert input energy into light.
In the case of the laser, the input, or pump, energy can take many forms, the two
most common being optical and electrical. For optical pumping, the energy source
may even be another laser.
In a conventional (incoherent) light
source, each atom excited by input energy randomly emits a single photon according
to a given statistical probability. This produces radiation in all directions with
a spread of wavelengths and no interrelationships among individual photons. This
is called spontaneous emission.
Figure 1. Spontaneous emission is a random process, whereas
stimulated emission produces photons with identical properties.
Einstein predicted that excited atoms
also could convert their stored energy into light by a process called stimulated
emission. Here, an excited atom first produces a photon by spontaneous emission.
When this photon reaches another excited atom, the interaction prompts that atom
to emit a second photon (Figure 1). This process has two important characteristics.
First, it is multiplicative — one photon becomes two. If these two photons
interact with two other excited atoms, this will yield a total of four photons,
and so forth. Most important, these two photons have identical properties: wavelength,
direction, phase and polarization. This ability to amplify light is termed optical
gain, and a wide range of solid, liquid and gas phase materials have been discovered
that exhibit gain.
The laser cavity
The laser cavity, or resonator, is at the heart
of the system. The cavity defines a unique axis with very high optical gain, which
becomes the beam direction. This axis usually is defined in two ways. First, the
laser’s shape ensures that the gain medium is longer along one axis, often
as a long thin cylinder, as is typical for gas lasers. A more extreme example of
a uniquely long gain axis is the fiber laser. This is sufficient for some high-gain
devices such as excimers. But for most, it is necessary to further enhance the gain
along this axis using cavity mirrors that produce feedback (Figure 2).
Figure 2. In the prototypical gas laser, the gain medium has a
long, thin cylindrical shape. The cavity is defined by two mirrors. One
is partially reflecting and allows the output beam to escape.
The simplest cavity is defined by two
mirrors — a total reflector and a partial reflector whose reflectance can
vary between 50 and 99 percent. Light bounces back and forth between these mirrors,
gaining intensity with each pass through the gain medium. Because some of this light
escapes the cavity, or oscillator, through the partial reflector (output coupler),
a stable equilibrium condition is reached quickly. The output beam is the light
that escapes through the output coupler.
In the ideal laser, all the photons
in the output beam are identical. This imparts several unique properties: directionality,
monochromaticity, coherence and brightness.
Monochromaticity — A photon’s
energy determines its wavelength through the relationship E = hc/λ, where c is the speed of
light, h is Planck’s constant and λ is wavelength. If our ideal laser emits all
photons with the same energy, and thus the same wavelength, it is said to be monochromatic.
Many applications are dependent on monochromaticity. For example, in telecommunications,
several lasers at different wavelengths transmit multiple streams of data down the same
fiber without crosstalk.
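As a rough worked example (numbers chosen for illustration): for red light at 633 nm, E = hc/λ = (6.63 x 10⁻³⁴ J·s)(3.0 x 10⁸ m/s)/(633 x 10⁻⁹ m) ≈ 3.1 x 10⁻¹⁹ J, or about 2 eV per photon.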
Coherence — Besides being the
same wavelength, the photons that make up a laser beam are all in phase (Figure
3). In the ideal case, the laser acts as one long, continuous, intense lightwave.
This enables a host of applications that rely on optical interference. For example,
the surface of precision lenses and mirrors is measured using laser interferometers.
The coherent light beam acts as an ultrafine ruler, where the wavelength of light
fulfills the dimensional role.
Figure 3. Laser light differs from conventional light in that
all the lightwaves are in phase with each other.
Directionality and brightness
The most obvious visible difference between lasers and conventional light sources
is that laser light travels in the same direction as an intense beam. Brightness
is defined as the amount of light leaving the source per unit of surface area. Because
a laser’s photons have identical vector properties, they act as if they are
coming from the same point in space. The ideal laser thus acts as a true point source
with extremely high brightness.
This combination of directionality
and brightness has two consequences. The beam can be projected over great distances,
and it can be focused to a very small spot. In the ideal case, the divergence of
a collimated beam or the size of the focused spot are limited only by diffraction
— an inescapable property of light. This is referred to as a diffraction-limited
Lasers can be divided into three main categories
— continuous wave (CW), pulsed and ultrafast.
As their name suggests, continuous
wave lasers produce a continuous, uninterrupted output. The exact wavelength(s)
at which this occurs is primarily determined by three factors: the gain bandwidth
of the lasing medium, the spectral characteristics of the cavity optics and the
longitudinal modes of the resonator.
Many laser materials in fact have several
wavelengths (or laser lines) at which emission occurs. Also, several factors (such
as the Doppler effect in the moving atoms of a gas) typically broaden the wavelength
bandwidth of the gain at each of these various lines.
The first step in determining at which
wavelength the laser will operate is to use cavity mirrors that are highly reflective
only at the desired wavelength(s). This suppresses lasing at other lines. However,
even a single laser line actually covers a band of wavelengths.
Figure 4. A resonant cavity supports only modes that meet the
resonance condition, Nλ = 2 × (cavity length). The output of a CW laser is
defined by the overlap of the gain bandwidth and these resonant cavity
The specific wavelengths of output
within this gain bandwidth are determined by the longitudinal modes of the cavity.
Figure 4 shows the basic principles of the resonant two-mirror cavity, the most
basic design. To sustain gain as light travels back and forth between the mirrors,
the waves must remain in phase, which means that the cavity round-trip distance
must be an exact multiple of the wavelength.
Nλ = 2 × (cavity length)
where λ is the laser wavelength and
N is an integer called the mode number — it is usually a very large integer,
since the wavelength of light is so much smaller than a typical cavity length. In
a helium-neon laser, for example, the red output wavelength is 0.633 μm, yet
the typical cavity length is 15 to 50 cm. Wavelengths that satisfy this resonance
equation are called longitudinal cavity modes. The actual output wavelengths are
at the cavity modes that fall within the gain bandwidth, as shown in Figure 4. This
is called multi-longitudinal mode operation.
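As an illustrative calculation (cavity length assumed for the example): adjacent longitudinal modes are separated in frequency by Δν = c/(2 × cavity length), so a 30-cm cavity gives Δν = (3 x 10⁸ m/s)/(0.6 m) = 500 MHz; at 633 nm the mode number is N = 2 × (0.30 m)/(633 x 10⁻⁹ m) ≈ 950,000.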
The cavity also controls the transverse
modes, or intensity cross sections. The ideal beam has a symmetric cross section:
The intensity is greater in the middle and tails off at the edges. This is called the TEM00 output mode. Lasers can produce many other TEM modes, a few of which are shown in Figure 5. Typically, laser output is specified as what percentage of the total beam intensity is in the form of the TEM00 mode.
Figure 5. Lasers can emit any number of transverse modes, of
which the TEM00 usually is most desirable.
A laser that produces multiple longitudinal
modes has limited coherence — different wavelengths cannot stay in phase over
extended distances. Applications such as holography, which demand excellent coherence,
often require a single longitudinal mode laser. For some laser types, single-mode
output is achieved with a very short resonant cavity; this makes the mode spacing
larger than the gain bandwidth and only one mode lases. Generally, though, a filtering
element that preferentially passes only one mode is inserted into the cavity. The
most common type of filter is called an etalon.
Various liquid and solid-state lasers
have broad bandwidths that cover tens of nanometers. Examples include dye and Ti:sapphire
lasers. Rather than being a disadvantage, this has allowed the development of tunable
and ultrafast lasers. Creating a tunable CW laser involves including an extra filtering
element in the cavity — usually a birefringent (or Lyot) filter. The birefringent
filter does two things: It narrows the bandwidth and, by rotating the filter, allows
smooth bandwidth tuning, or in the case of optically pumped semiconductor lasers
(OPSLs), it allows the final output wavelength to be exactly set to match the end
Although this sounds complicated, laser
operation is remarkably simple. High-end CW lasers include an on-board computer
or microprocessor. This automatically controls the additional cavity elements while
simultaneously maintaining optimum alignment of the mirrors.
Some materials — ruby, rare-gas halogen
excimers, such as ArF and XeCl — sustain laser action for only a brief period
and form the basis of pulsed lasers. If the pulse duration is sufficiently long
(microseconds), the laser can be designed much like a CW laser. However, many pulsed
lasers are designed for short pulse duration; e.g., a few nanoseconds (10⁻⁹
s). In each pulse, the light has time for very few round-trips in the cavity. The
resonant cavity designs described so far cannot control such a laser: The pulse
dies before equilibrium conditions are reached.
While two mirrors are still used in
pulsed lasers for defining the direction of highest gain, they do not act as a resonant
cavity. Instead, the usual method of controlling and tuning wavelength is a diffraction
grating (Figure 6). Some pulsed lasers, such as Nd:YAG (neodymium yttrium aluminum
garnet) can be operated with a Q-switch, an intracavity device that acts as a fast
optical gate. Light cannot pass it unless it is activated, usually by a high-voltage
pulse. Initially, the switch is closed and energy is allowed to build up in the
laser material. Then at the optimum time, the switch is opened and the stored energy
is released as a very short pulse. This can shorten the normal pulse duration by
several orders of magnitude. The peak power of a pulsed laser is proportional to
pulse energy/pulse duration. Q-switching, therefore, has an added benefit of increasing
peak power by several orders of magnitude. This effect enables neodymium (Nd)-based
solid state lasers of only modest average power to machine tougher materials such
as glasses and metals.
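As a rough illustration (pulse values assumed): a 100-mJ pulse compressed to 10 ns has a peak power of about (100 x 10⁻³ J)/(10 x 10⁻⁹ s) = 10 MW, whereas the same energy spread over a millisecond would average only 100 W.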
Figure 6. The wavelength of a pulsed laser usually is
controlled with a diffraction grating. Rotating the angle of this device
tunes the wavelength.
The wavelength purity of Q-switched
lasers can be difficult to control because of the combination of the high peak power
and the short pulse duration. However, this can be solved by stacking two or more
lasers in series: a low-power, well-controlled oscillator followed by one or more
amplifiers. For even higher performance, the oscillator itself is sometimes seeded
by another low-power laser, such as a wavelength-stabilized laser diode.
A much less common pulsing mechanism
is called cavity-dumping and is used where the laser material cannot store enough
gain for stable Q-switched operation. A cavity-dumped laser has end mirrors with nominal 100-percent reflectivity in order to maximize the circulating intracavity power. An intracavity switch, either an acousto-optic deflector or a tiltable mirror, is then flipped to allow all the trapped energy to depart the cavity in a single round-trip time interval. Pulse energies achievable with cavity dumping are much lower than with Q-switching, where the energy is stored as gain in the laser medium.
CW lasers can produce many longitudinal modes,
and if the cavity is pulsed or modulated, it is possible to lock the phase of these
modes together. The resultant interference causes the traveling lightwaves inside
the cavity to collapse into a very short pulse. Every time this pulse reaches the
output coupler, the laser emits a part of this pulse. The pulse repetition rate
is determined by the time it takes the pulse to make one trip around the cavity.
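For example (assuming a simple linear cavity): the repetition rate is c/(2 × cavity length), so a 1.5-m cavity gives (3 x 10⁸ m/s)/(3 m) = 100 MHz, consistent with the typical rates quoted below.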
It turns out that the more modes that
interfere, the shorter the pulse duration. In other words, the pulse duration is
inversely proportional to the bandwidth of the laser gain material. This explains
why the materials used for broadly tunable lasers produce the shortest mode-locked
pulses. The most popular ultrafast laser material is titanium-doped sapphire or
Ti:sapphire; turnkey commercial Ti:sapphire lasers now routinely deliver pulses
as short as 10 fs (10 x 10⁻¹⁵
s), with typical repetition rates around 100
MHz and peak powers approaching 1 MW. This can be amplified to the terawatt level
in custom commercial products.
With the CW power condensed into a
mode-locked pulse, the result is high peak power even for modest devices. Furthermore,
a regenerative amplifier can boost the peak power of an ultrafast laser by orders
of magnitude. These lasers can be optimized for high repetition rates (≤300
kHz) or high peak power (≥20 x 10¹²
W) — the highest peak power delivered
by any class of commercial laser.
Specialized ultrafast lasers – CEP stabilization
In a mode-locked laser, interference between the
myriad longitudinal modes means that in the time domain, the output collapses from
a continuous-wave light source to a series of short pulses separated by the time
it takes for light to travel around the cavity. But in the frequency (1/wavelength)
regime, the pulse still consists of these myriad individual modes. An intensity
vs. frequency plot of these modes looks like a comb (Figure 4), and the modes are often referred to as a frequency comb. In recent years there has been fast-growing interest, and even a Nobel Prize, in using this type of comb as a tool for high resolution spectroscopy
with unprecedented absolute precision. When using a laser in this way, the modes
act as ultrahigh precision frequency (wavelength) fiducials. This requires a specialized
laser operating approach called carrier envelope phase (CEP) stabilization.
In simple terms, each mode has a frequency given by
ν = c/λ = Nc/(2 × cavity length)
Where c is the speed of light and N
is the mode number. But in real lasers, this is not exactly true. That’s because
the speed of light in any material other than a vacuum is actually defined by two
separate velocity components called group velocity and phase velocity. The group
velocity is the effective speed with which the light travels and the phase velocity
is the speed of the actual wavefronts – the waves that are electric-field
oscillations. The difference in group and phase velocity is a direct function of
the refractive index of the material(s) the light is passing through.
In a mode-locked laser this can be seen because it has the effect of adding an offset to the above formula
ν = Nc/(2 × cavity length) + offset
This is called the carrier-envelope offset frequency,
or CEO. In a spectroscopy measurement, the full utility of using the comb of modes
as reference frequencies means that this offset must be held constant throughout
the spectral acquisition time. But even extremely minor fluctuations and drifts
in the optical properties of the laser cavity (e.g., noise in the laser pumping
the Ti:sapphire crystal) can cause significant shifts in the CEO value. This can
be successfully addressed by using a feedback loop that measures the CEO value and
then adjusts one of the cavity optics to hold the CEO value constant. The most robust
CEO measurement tool for this purpose is the so-called 1f — 2f interferometer
whose operation has been well described elsewhere.
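In the notation more commonly used for frequency combs, the relation above reads ν_N = N x f_rep + f_CEO, where f_rep = c/(2 × cavity length) is the pulse repetition rate and f_CEO is the carrier-envelope offset frequency that the feedback loop holds fixed.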
If the CEO is held at zero, the modes
are exact multiples of the inverse of the cavity length, enabling absolute measurement
of high resolution (hyperfine) spectrum parameters. But there is another advantage
to holding the CEO value at zero as can be seen in Figure 7. Specifically, in the
time domain, when the CEO is zero, the peak of the overall electric field oscillation
coincides with the amplitude envelope of the ultrafast pulse. Clearly this results
in the highest possible electric field peak value. This is a critical advantage
for applications involving multiple nonlinear processes, i.e., optical processes
whose efficiencies have a high order dependence on the peak electric field of the
laser. At this time, the most spectacular of these applications are in attosecond
physics using ultrashort x-ray pulses. These short pulses offer chemists and physicists
the first tool capable of probing electron motion, in contrast to slower spectroscopy
methods that can only follow the motion of the much slower nuclei.
Figure 7. In typical ultrafast laser operation, the electric field oscillation does not
have a fixed phase relationship with the pulse envelope, as shown here. The goal of CEP
stabilization is to fix the phase relationship between the overall pulse envelope
and the underlying electric field oscillation.
CEP stabilization requires a very low
noise system, including a low-noise CW laser to pump the Ti:sapphire oscillator.
In addition, the oscillator and amplifier must both incorporate feedback mechanisms
that adjust cavity dispersion in real time. Fortunately, turnkey commercial laser
systems are now available with all these attributes, enabling scientists to focus
on their experiments rather than on the nuances of CEP stabilization.
Even with the wealth of commercially available
lasers, it is not always possible to find one that exactly matches an application.
Fortunately, the required wavelength often can be generated using frequency doubling,
or shifting, or with an optical parametric oscillator. All these processes are related
and are called nonlinear phenomena since they depend nonlinearly on the laser’s
In simple terms, when an intense and/or
tightly focused laser beam passes through a suitable condensed phase such as a liquid,
solid crystal or even dense gas jet, its oscillating electric field may interact
with the electrons of the atoms or molecules in several ways. One of these mechanisms
serves to distort the electron cloud thereby polarizing the atoms, i.e., the traveling
sine wave creates a temporary refractive index profile that is also a traveling
sine wave. When a photon interacts with this moving refractive index ripple it can
gain energy from the interaction or lose energy from the interaction. This is the
basis of frequency-doubling, where the laser frequency is doubled and the wavelength
is thus halved, as well as frequency mixing, where two laser wavelengths combine
to form photons with the sum of their original energies. It can also be used to
create long wavelength photons whose energy is the difference between two different
input laser wavelengths. Because these nonlinear interactions depend on the second
or third power of the laser intensity, they work well with pulsed lasers that have
high peak power. They can also be used with CW lasers if the beam is focused and
if the nonlinear crystal is inside the laser cavity (intracavity doubling).
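As a concrete illustration (wavelengths chosen for the example): frequency doubling halves the wavelength, so a 1064-nm fundamental becomes 532-nm green; for sum-frequency mixing the photon energies add, which in wavelength terms reads 1/λ_sum = 1/λ_1 + 1/λ_2.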
Under most conditions, the light at
the new frequency (wavelength) would be destroyed by destructive interference. That’s
because it is usually created in a long (millimeters) crystal using the full length
of that crystal. But the phase velocity of the original and new wavelengths is different.
So wavelength-shifted light created at one spot in the crystal is out of phase with
that created at another position along the crystal, and so on. This difficulty is
overcome by choosing a crystal temperature and orientation that creates a so-called
phase-matching condition where the phase velocity of the fundamental and shifted
light is the same. Details of phase matching are beyond the scope of this review
article. But the most common phase-matching mechanisms depend on birefringence where
the phase velocity of light in a crystal depends on the light’s polarization
Optical parametric oscillators
The optical parametric oscillator (OPO) represents
an area of rapid development in terms of both products and applications —
thanks to its ability to produce tunable output anywhere from the mid-UV to the
mid-IR. For example, it is proving a powerful tool in the near-infrared for use
in deep-tissue microscopy.
An OPO uses a similar nonlinear mechanism
in a laserlike cavity to generate two lower frequencies (i.e., longer wavelengths)
from one input frequency. The shorter of these new wavelengths is called the signal
wavelength, and the longer, the idler. The exact values can be smoothly varied by
tuning the cavity and rotating the OPO crystal under microprocessor control. Because
of the high power necessary for OPO operation, these devices typically have been
limited to pulsed and ultrafast systems.
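Energy conservation ties the three wavelengths together (a standard relation, with example numbers of my own): 1/λ_pump = 1/λ_signal + 1/λ_idler, so an 800-nm pump producing a 1300-nm signal leaves an idler near 2080 nm.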
Recently, the use of fan-poled nonlinear
crystals has enabled compact OPOs that can operate equally well over a wide range
of input wavelengths. By using a tunable pump laser such as a Ti:sapphire laser,
this type of OPO can thus be utilized to provide the user with two independently
tunable wavelengths. This can be very useful in several types of microscopy including
coherent anti-Stokes Raman spectroscopy (CARS) imaging.
A related device is the optical parametric
amplifier, which was developed to provide tunable pulses with much higher pulse
energy, e.g., for gas phase experiments. They are pumped by an ultrafast amplifier
– usually a regenerative amplifier in order to provide the requisite beam
Common laser types
For many years, the most common CW laser was the
helium neon laser, or HeNe. These low-power lasers (a few milliwatts) use an electric
discharge to create a low-pressure plasma in a glass tube; nearly all emit in the
red at 633 nm. In recent years, the majority of HeNe applications have switched
to visible laser diodes. Typical applications include bar-code readers, alignment
tasks in the construction and the lumber industries, and a host of sighting and
pointing applications from medical surgery to high-energy physics.
In fact, the laser diode has become
by far the most common laser type, with truly massive use throughout telecommunications
and data storage (e.g., DVDs, CDs). In a laser diode, current flow creates charge
carriers (electrons and holes) in a p-n junction. These combine and emit light through
stimulated emission. Laser diodes are available as single emitters with powers up
to tens of watts, and as monolithic linear bars, with numerous individual emitters.
These bars can be assembled into 2-D arrays with total output powers in the kilowatts
range. They are used in both CW and pulsed operation for so-called direct diode
applications. But even more importantly, laser diodes now underpin many other types
of lasers, where they are used as optical pumps that perform the initial electrical-to-optical conversion.
For example, higher power visible CW
applications were originally supported by argon-ion and krypton-ion lasers. Based
on a plasma discharge tube operating at high current, these gas-phase lasers are
large and very inefficient, generating a large amount of heat which must be actively
dissipated. The tube also has a finite lifetime and thus represents a costly consumable.
In most former applications the ion laser was displaced by diode-pumped solid-state
(DPSS) lasers. Here, the gain medium is a neodymium-doped crystal (usually Nd:YAG
) pumped by one or more laser diodes. The near-IR fundamental at 1064
nm is then converted to green 532 nm output by the use of an intracavity doubling crystal.
The DPSS laser, in turn, has now been
challenged by several newer technologies. The most successful of these is the OPSL.
Here the gain medium is a large-area semiconductor laser that is pumped by one or
more laser diodes. The OPSL offers numerous advantages, most notably wavelength
and power scalability. Specifically, these lasers can be designed to operate at virtually
any visible wavelength, at last freeing applications from the restrictions of limited
legacy wavelength choices (i.e., 488, 514 and 532 nm). Indeed OPSLs represent a
paradigm shift in lasers because they can be designed for the needs of the application
instead of vice versa.
OPSL is now a dominant technology in
low-power bioinstrumentation applications, most notably at 488 nm. And the power
scalability and inherent low noise of OPSL technology is now seeing multiwatt green
and yellow OPSLs moving strongly into other applications including scientific research,
forensics, ophthalmology, and light shows.
At longer wavelengths, carbon dioxide
lasers, which use plasma discharge technology, emit in the mid-infrared around 10
μm. Most are CW or pseudo-CW, with commercial output powers from a few watts
to several kilowatts. Another important technology is the fiber laser, which can
be operated in CW, Q-switched and mode-locked formats. Here, laser diodes optically
pump a rare-earth doped fiber, which typically emits at about 1 μm.
CO2, fiber and direct diode
lasers are the workhorses of industrial laser applications. Direct diode lasers
predominately service low brightness applications, such as heat treating, cladding
and some welding applications. This is because direct diode lasers offer the lowest
capital cost of any industrial laser type, as well as the lowest operating costs,
due to their high electrical efficiency.
The advent of slab-discharge technology
has allowed the size/power ratio of CO2
lasers to be greatly scaled down, increasing
their utility in subkilowatt applications. Low-cost waveguide designs also support
a healthy market for CO2
lasers with powers in the tens of watts, primarily in marking
and engraving applications.
CO2 lasers and fiber lasers
have come to dominate the cutting of metals in the 2- to 4-mm thickness range. Sealed
CO2 is usually the first choice when both metals and nonmetals must be processed,
while fiber lasers have proved quite successful in certain markets that can benefit
from their combination of high repetition rate, low pulse energy and high brightness.
They also excel at metal cutting and welding in the 4- to 6-mm thickness range,
as well as some marking applications. Flowing gas CO2
lasers still dominate the
market for thick metal (> 6 mm) cutting.
Nd:YAG can deliver the high peak power
for materials processing applications such as metal welding; in these heavy industrial
applications, raw power is more important than beam quality and for many years these
lasers were lamp-pumped. But the ever increasing power and lifetime characteristics
of laser diodes are causing these lasers to switch to diode pumping, i.e., DPSS designs.
Conversely, lower power Q-switched
DPSS lasers are often based on Nd:YVO4
. These are usually optimized for high beam
quality for use in micromachining and microstructuring applications with
high repetition rates (up to 250 kHz) to support high throughput processes. They
are available with powers up to tens of watts with a choice of near-infrared (1064
nm), green (532 nm) or UV (355 nm) output. The UV is popular for producing small
features in “delicate” materials because it can be focused to a very
small spot and minimizes peripheral thermal damage. Deep-UV (266 nm) versions are
starting to be used in some applications. But their relatively high cost and the
need for specialty beam delivery optics causes many potential applications to rely
instead on 355 nm lasers optimized for short pulse duration, which can produce similar
results in many materials.
Excimers represent another important
pulsed laser technology. They can produce several discrete wavelengths throughout
the UV; depending on the gas combination, emission ranges from 157 to 348 nm. The
deep-UV line at 193 nm is the most widely used source for lithography processes
in the semiconductor industry. The 308 nm wavelength is used for annealing silicon
in high performance displays. The same wavelength is also key to generating a unique
long-wear surface on the cylinder liners of high performance diesel engines. And
finally, excimers have a unique ability to produce high pulse energies – up
to 1 joule per pulse. This enables direct writing of low-cost electronic circuits
for applications such as medical disposables.
Ultrafast lasers for scientific applications
are dominated by Ti:sapphire as already described. Ultrafast is also a fast growing
technology for micromachining applications. These lasers are based on fibers, free-space
optics or some combination of the two.
And finally, there are many other types
of niche and exotic lasers that are beyond the coverage of this overview article.
Examples include Raman lasers used in telecommunications, quantum cascade lasers
used in some gas sensing applications, and chemical lasers which tend to be limited
to military programs. | http://www.photonics.com/Article.aspx?AID=25161 | 13 |
55 | Area Between Curves
An integral, by definition, is used to find the area under the curve of a function. It can also be used to find the area between two functions. First, let's discuss the integral a little more. The form Integral(a, b, f(x) dx) can be defined as the area under f(x) between a and b, which comes out to F(b) - F(a), where F is an antiderivative of f. Another way to write it is Integral(0, b, f(x) dx) - Integral(0, a, f(x) dx). Basically, it is the area under f(x) out to b that is not also counted out to a. Understanding this tells us what information to look for when finding the area between multiple curves.
The general formula for the area between two curves is: Area = Integral(a, b, (f(x) - g(x)) dx).
The limits a and b are determined by the points where f(x) and g(x) intersect. This can be determined by setting f(x) = g(x) and solving. Since area is always positive, it is either necessary to take the absolute value of the integral or determine which of the two functions is on top, setting it so f(x) is on top and g(x) is on the bottom.
Let's look at an example, to find the area between f(x) = 3x - 2, and g(x) = x^2. Setting the two equal to each other produces:
x^2 = 3x - 2
x^2 - 3x + 2 = 0
(x - 2)(x - 1) = 0
x = 1, 2
So evaluating |Integral(1, 2, (3x - 2 - x^2) dx)| yields |(3x^2)/2 - 2x - (x^3)/3| evaluated from 1 to 2. The area between these curves is 1/6.
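A result like this is easy to double-check with a computer algebra system. The following few lines of Python using the sympy library are just one way to do it (the variable names are my own choice, not part of the primer):
import sympy as sp
x = sp.symbols('x')
f = 3*x - 2                              # the line, on top over [1, 2]
g = x**2                                 # the parabola, on the bottom over [1, 2]
limits = sp.solve(sp.Eq(f, g), x)        # intersection points: [1, 2]
area = sp.integrate(f - g, (x, limits[0], limits[1]))
print(area)                              # prints 1/6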
There are two types of volume problems to be concerned with in Calculus: the volume generated when the region between two curves is revolved (the washer form), and volumes of revolving solids more generally. The first is pretty straight-forward. It works the same as finding the area between two curves, except that f(x) and g(x) are squared in the integrand and the result is multiplied by pi, leaving: Volume = pi * Integral(a, b, (f^2(x) - g^2(x)) dx).
Volumes of revolving solids are a little more tricky. The key to remember though is that areas are added up to produce a volume. There are two methods for finding volumes of revolving solids: discs and shells. The discs method uses the area of a circle (A = pi * r^2), and the shells method uses the lateral surface area of a cylinder (A = 2pi * r * h).
Discs is a pretty easy method to use because volumes, like areas, are always positive. So simply evaluating pi * |Integral(a, b, (f^2(x) - g^2(x)) dx)| will produce the volume of the solid revolved around the x axis (denoted by the dx). If the region is revolved around the y axis, then f(x) and g(x) will have to be rewritten as functions of y instead, and the partition will be dy instead of dx. Again, set f(y) = g(y) and solve for the values of y where they are equal.
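As a rough illustration of the disc (washer) formula — reusing the same two curves from the area example purely for convenience — the volume of the solid formed by revolving that region around the x axis can be checked with sympy:
import sympy as sp
x = sp.symbols('x')
f = 3*x - 2
g = x**2
# Washer method: pi times the integral of f^2 - g^2 between the intersection points
volume = sp.pi * sp.Abs(sp.integrate(f**2 - g**2, (x, 1, 2)))
print(volume)                            # prints 4*pi/5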
The shells method requires a little more analysis than the discs method. As was discussed above, the shells method uses the lateral surface area of a cylinder, A = 2pi * r * h. The other thing to keep in mind with shells is that if the area is being revolved over the x-axis, the function must be in terms of y. And if the area is being revolved over the y-axis, the function must be in terms of x.
Let's look at an example, finding the volume of the solid formed when the region under y = sqrt(x), bounded by x = 0 and x = 4, is revolved around the x-axis. To start, let's re-write the function in terms of y: x = y^2. Next, let's fill in the parameters for the shell formula. The 2pi is a constant, leaving the radius and the height. The radius of each shell is simply y, and its height is the horizontal distance from the curve x = y^2 to x = 4, which is 4 - y^2. Lastly, the limits of integration are 0 and 2, the y-bounds, since the function is in terms of y. So the integral comes out to Volume = 2pi * Integral(0, 2, y(4 - y^2) dy). This results in 2pi * (2y^2 - y^4/4) evaluated from 0 to 2, yielding Volume = 2pi * 4 = 8pi. (As a check, the disc method gives pi * Integral(0, 4, x dx) = 8pi as well.) | http://www.dreamincode.net/forums/topic/232954-a-calculus-primer-part-vii-applications-of-integrals/page__p__1372518 | 13
258 | These notes are intended as a guide for the computer program "Stress, Mohr Circle" which illustrates the use of stress calculations for two-dimensional systems. The program consists of five displays. In each display you can vary the physical situation and immediately see the effects in the graphical displays. The first display illustrates the relation between forces and stresses acting on a rectangular solid. In the second display, you explore the stress on a plane as you vary the orientation of the plane and the state of stress in the system. We illustrate the calculation of the stress on the plane in terms of a simple matrix multiplication in the third display. You can also observe the simple geometry of the envelope of the stresses that are produced as the angle of the plane is varied. The fourth display shows the behavior of the normal and shear components of the stress on the plane. When the normal component is plotted along x and the shear component along y, the envelope will be a circle: called a Mohr circle. This beautiful result lets you visualize the range of possible stresses on the plane. Finally in the last display, you can explore the conditions which lead to the fracture of a sample: you can vary the stresses until the stresses are just strong enough to cause fracture by watching the changes in size and location of the Mohr circle.
Display 1: "Equilibrium"
Stress and force vectors acting on a rectangular solid
A mechanical system deforms or fractures when forces are applied and it is the force per unit area that is the quantity which determines the deformation or failure. There is unfortunately no single name for this concept with the words stress, stress vector, or traction used by various authors all to name the force per unit area acting on a plane. We will use the phrase stress vector. It turns out that the stress vector depends in direction and magnitude on the orientation of the plane on which the force acts. A small array of numbers called a stress tensor allows us to calculate the stress vector for any orientation of the plane. In two dimensions, the stress tensor is a two by two array and three numbers define the stress tensor and allow us to calculate the stress vector for any plane. In three dimensions we must use a three by three array: and six numbers then are sufficient to specify the stress tensor. We will restrict ourselves to two dimensions in these notes. This keeps the calculations and illustrations simple.
We can illustrate the constant force approximation by considering a small volume and dividing the bottom into three equal parts. Then the three forces Ba, Bb, Bc acting on the parts of the bottom are all equal.
The stress vector acting on the bottom is then just the total force B divided by the total area of the bottom and we denote it by s B. With B=Ba+Bb+Bc the stress vector on the bottom is s B =B/(w*d) and with our choice of d=1 this is sB=B/w. In our program, we specify the plane on which the stress vector acts by the normal to that plane. We choose the normal to point into the body and the stress vector is the force per unit area acting onto the plane from the outside of the body. Thus for the force B acting on the bottom surface from below we choose the normal pointing up into our volume, the y axis in this case, and we call the bottom the y plane.
In general the stress vector will not be perpendicular to the plane on which it acts. For the (bottom) y plane shown, the total stress vector consists of two parts: s yx parallel to the x direction and s yy parallel to the y direction. The first subscript (y) gives us the direction of the normal and the second subscript gives the direction for the part of the stress vector. The total stress vector sB on the bottom is just the vector sum of these two parts.
sB = B/(w*d) = syx + syy
We will often want to specify length and direction of the x and y parts by giving just two numbers or components and we will use s yx and s yy for the x and y components of the stress vector acting on the y plane. In the above illustration both components s yx and s yy are positive numbers .
Similarly for the left hand side of area h*d=h*1=h we use L to for the total force and the normal to the left side is just the x axis. So the parts of the stress vector sL acting on the left side are just s xx and s xy. The first subscript x again gives the direction of the normal to the left side and the second subscript gives the direction for the part of the stress vector. In the illustration below and in the first computer display, we show the body with the force vectors on the left of the screen and the body with the stress vectors on the right of the screen. The force vectors on the four sides of the rectangle are L, B, R, and T (left, bottom, right, and top).
The vanishing of the total force means that the force B on the bottom is balanced by T the force on the top and the force L on the left side is balanced by R the force on the right with T=-B and R=-L.
Since the rectangle must not undergo any rotational acceleration around the center, we must have no torque around the center. The force Bx then has a torque Bx*h/2 about the center (just the magnitude of the force multiplied by its perpendicular lever arm) and this torque is counterclockwise for positive Bx. The By part of the force on the bottom exerts no torque around the center since it points at the center and hence has zero perpendicular lever arm. There is an equal torque and in the same sense from T the force on the top since by force balance Tx=-Bx. The torque from the bottom and the top is then 2*Bx*h/2=Bx*h. In the same way the torque from the left and right side is 2*Ly*w/2 =Ly*w but the twist is clockwise for Ly positive. The total counter- clockwise torque is then Bx*h-Ly*w and this must be zero for equilibrium.
Hence Bx*h=Ly*w or dividing by h*w Bx/w=Ly/h. We recognize that Bx/w is just s yx the x component of the stress vector on the bottom and Ly/h is just s xy the y component of the stress vector on the left side. We are therefore left with the important result that s yx=s xy. (The same argument can be used in three dimensions to show that not only s xy=s yx but also s xz=s zx and s yz=s zy). We can summarize our results for the stress vectors acting on a rectangular system:
You should explore the relationship between the stress vectors and the force vector using the first or "equilibrium" display in the program. Appendix 1.1 tells you how to navigate in the program. Use display 1 to work exercises 1.1, 1.2, 1.3, and 1.4.
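If you would like to check the equilibrium argument numerically as well, the short Python sketch below (my own illustration; the names are not the ones used in the program) builds the four face forces from a chosen stress state and shows that the net force is always zero while the net torque vanishes exactly when s yx = s xy:
w, h = 2.0, 1.0                              # width and height of the rectangle, depth d = 1
sxx, syy, sxy, syx = 3.0, 1.0, 0.5, 0.5      # try syx != sxy to see a nonzero torque
B = (syx*w, syy*w)                           # force on the bottom (area w*d)
T = (-B[0], -B[1])                           # top balances bottom
Lf = (sxx*h, sxy*h)                          # force on the left side (area h*d)
R = (-Lf[0], -Lf[1])                         # right balances left
net_force = (B[0] + T[0] + Lf[0] + R[0], B[1] + T[1] + Lf[1] + R[1])
torque = B[0]*h - Lf[1]*w                    # counterclockwise torque about the center
print(net_force, torque)                     # (0.0, 0.0) and 0.0 exactly when syx == sxy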
Display 2: "Stress Vector Part 1"
Stress vector on a plane whose normal is at angle a with the x axis
In many real situations it is important to be able to calculate the stress vector acting on a plane which may have an arbitrary orientation with respect to our coordinate axes. Such calculations are crucial in predicting the plane along which an object may fracture under stress. If we can calculate the stress vector for an arbitrarily chosen plane we can say that we know the state of stress of the system. In this section and in the accompanying display 2, you can see how to obtain the stress vector on a plane whose normal makes an arbitrary angle a with respect to the x axis from the three components s xx, s xy, and s yy. These three numbers therefore define the state of stress in two dimensions.
2.1 Equilibrium for a triangular solid
We will consider a small triangular solid with one side parallel to x, the second parallel to y, and the third side perpendicular to a normal making an angle a with respect to the x axis as shown. To simplify our calculations we choose the length l for the third side equal to one. Then the width w of the top of the triangle is w=l*sin(a )=sin(a ) and the height h of the right side is h=1*cos(a )=cos(a ). We’ll call the stress vector acting on the top s T, and the stress vector acting on the right sR. The stress vector sP which acts from the outside onto the third side will be found by finding the force FP on the third side of the triangle. The vector sum of all the forces must vanish for equilibrium: we must have
R+T+FP=0 or FP=-R-T. Because we chose l=1, the area of the third side is l*d=1*1=1 and sP=FP=-R-T.
We want to write the x and y components of sP in terms of the stress components s xx, s yy, ands xy so we will need the x and y components of R and T in terms of the stress components.
Rx = -sxx*h = -sxx*cos(a), Ry = -sxy*h = -sxy*cos(a), Tx = -syx*w = -syx*sin(a), and Ty = -syy*w = -syy*sin(a). Using these components in sP = -R - T:
sPx = -Rx - Tx = sxx*cos(a) + syx*sin(a),
sPy = -Ry - Ty = sxy*cos(a) + syy*sin(a),
These two equations therefore let you calculate the x and y components of the stress vector sP for a plane whose normal makes an arbitrary angle a with respect to the x axis. (The three dimensional versions are often called the Cauchy equations.) In the two dimensional case, the two equations let you calculate the stress vectors for any arbitrary plane This shows that the components s xx, s yy, and s xy completely specify the stress for a two dimensional system. The vectors PN, sP, sPx and sPy are shown in the next figure.
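To make the Cauchy formula concrete, here is a small Python function (an illustrative sketch of my own, not code from the program) that returns the components of sP for a given two-dimensional stress state and normal angle a in degrees:
import math
def stress_vector(sxx, syy, sxy, a_deg):
    a = math.radians(a_deg)
    sPx = sxx*math.cos(a) + sxy*math.sin(a)      # using syx = sxy
    sPy = sxy*math.cos(a) + syy*math.sin(a)
    return sPx, sPy
print(stress_vector(2.0, 0.0, 0.0, 30.0))        # about (1.73, 0.0); compare with Ex 2.1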
You should now work exercises 2.1,2.2, and 2.3 in appendix 2, using display 2 to check that the results for the stress vector sP agree with your expectations.
Display 3: Stress Vector, Part 2
Stress vector components from matrix multiplication, stress vector envelope, normal and shear stresses acting on a plane
3.1 The stress vector components in terms of matrix multiplication
The appearance of stress calculations in three dimensions is simplified by rewriting the Cauchy formula in terms of a matrix multiplication. In two dimensions, the advantage of the matrix formulation is not so obvious but it is worthwhile to learn the method. We can take advantage of the equality of s xy and s yx to rewrite the Cauchy formula so that it looks like the multiplication of a two by two stress matrix with a two by one column matrix for the normal vector. In specifying the dimensions of a matrix, the first number gives the number of rows and the second number gives the number of columns in a matrix. The product of this matrix multiplication is the two by one column matrix for the stress vector.
We had for our first Cauchy equation:
sPx = sxx*cos(a) + syx*sin(a)
Now using PNx = cos(a) and PNy = sin(a) for the x and y components of PN, the vector normal to the plane, and also using syx = sxy, we get
sPx = sxx*PNx + sxy*PNy (1)
Similarly, we had for our second Cauchy equation: sPy = sxy*cos(a) + syy*sin(a).
Using the same substitutions for cos(a) and sin(a) we obtain
sPy = syx*PNx + syy*PNy (2)
When you recall the rules for matrix multiplication, you recognize the equations 1 and 2 are just the results of multiplying the two by two matrix defining the state of stress by the components of the normal vector to obtain the components of the stress vector acting on the plane perpendicular to the normal vector:
We can restate the above result in words: the stress vector for any plane is obtained by multiplying the stress tensor onto the vector normal to the plane.
The third display first shows the plane and the stress vector just as in the second display. By clicking on the box marked matrix in the control panel you can see the matrix multiplication displayed as well. You should confirm that you understand the elements in the matrix multiplication. Choose a simple matrix such as s xx=2 s yy=1 and s xy=0 for a start and do the multiplication for simple normal vectors like the normal vector along x and the normal vector along y. Then check out the multiplication for s xx=3, s yy=3, s xy=1 for the same normal vectors and finally for the normal making an angle of thirty degrees with the x axis.
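The same calculation in matrix form takes only a couple of lines of numpy (again just a sketch, not the program's code); here it is for the suggested case sxx = 3, syy = 3, sxy = 1 with the normal at 30 degrees:
import numpy as np
S = np.array([[3.0, 1.0],
              [1.0, 3.0]])                       # the 2x2 stress matrix (sxy = syx)
a = np.radians(30.0)
PN = np.array([np.cos(a), np.sin(a)])            # unit normal to the plane
sP = S @ PN                                      # stress vector on that plane
print(sP, np.linalg.norm(sP))                    # components and magnitude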
You can also have the computer display the magnitude of the stress vector by clicking on the stress magnitude box in the control panel. Use display 3 to work exercise 3.1
3.2 The stress vector envelope
When you keep the stresses on the system fixed and vary the angle a, you find that the stress vector sP traces out an envelope. In display 3 click on the button "stress ellipse" in the control box. As you vary a, the corresponding stress vector is displayed at the lower left, always with its tail at the origin. Observe the curve or envelope that is traced out by the head of the stress vector. Exercises 3.2, 3.3, 3.4, and 3.5 in Appendix 2 provide suggestions for your explorations.
The envelope you have explored is sometimes called the stress ellipse. The longest, or major axis, is denoted by the symbol s1 and the shortest, or minor axis, by the symbol s3 and they are called the principal axes. Their lengths s 1 and s 3 are called the principal values and are shown in the figure.
In Ex 3.4 and 3.5, you should also have confirmed that the circle and the straight line are the limiting cases of the stress ellipse. In three dimensions the envelope becomes an ellipsoid, there are three principal axes all mutually perpendicular, and the corresponding principal values are denoted by s 1, s 2, and s 3 with s 1> s 2> s 3.
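A numerical shortcut for finding the principal axes: because the stress matrix is symmetric, its eigenvalues are the principal values s1 and s3 and its eigenvectors point along the principal axes. A possible numpy sketch (mine, not the program's) for the case sxx = syy = 3, sxy = 1:
import numpy as np
S = np.array([[3.0, 1.0],
              [1.0, 3.0]])
vals, vecs = np.linalg.eigh(S)       # eigenvalues returned in ascending order
s3, s1 = vals
print(s1, s3)                        # 4.0 and 2.0, the maximum and minimum found in Ex 3.1
print(vecs[:, 1])                    # direction of the major (s1) axis, here parallel to [1, 1]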
Display 4: Mohr circle, Part 1
Plot of s Ps vs s Pn in terms of s 1 and s 3
Display 4 lets you examine the normal and shear parts of the stress vector. The control panel is now simplified so that you choose the principal values s 1, s 3, and the angle of the normal a . A word of caution about varying the parameters s 1 and s 3. It is always assumed that s 1 is larger than or equal to s 3. The program prevents you from violating this rule. For instance if you have s 1=2 and you want to raise both s 1 and s 3 above 2, then you must raise s 1 first to its new value and then s 3, otherwise s 3 is limited above by the value of s 1. Similarly you must lower s 3 first when you want to decrease both parameters below the present value of s 3.
4.1 The normal and shear parts of the stress vector.
For many questions in the deformation or fracture of solids the values of the parts of the stress vector perpendicular and parallel to a plane are crucial in deciding whether distortion or motion will occur along that plane. The normal part sPn is just the part of the stress vector parallel to PN the normal to the plane. The normal component s Pn has magnitude equal to the length of the normal part and in geological applications the sign of the normal component is chosen to be positive for compression. The shear part sPs is the part of the stress vector parallel to the plane. The shear component s Ps has magnitude equal to the length of the shear part and the sign of the shear component s Ps is chosen to be positive when the shear part would produce a counterclockwise twist if it were applied a little outside the body. In our sketch below we have chosen an example in which both s Pn and s Ps are positive.
You should explore the behavior of the normal and shear parts of the stress vector with exercises 4.1, 4.2, 4.3, 4.4, and 4.5 in appendix 2. When you plot s Ps along y against s Pn along x you find that you trace out a circle which is usually called the Mohr circle.
In the exercises you should confirm that the center of the circle is on the horizontal axis at (s1+s3)/2 and that the radius is (s1-s3)/2. The angle g is the angle between the horizontal and the radius drawn to the point at x=sPn and y=sPs, as shown below, and in the exercises you should have confirmed that g=2a.
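These geometric facts are easy to verify numerically. The sketch below (my own, written directly in terms of the principal values used in display 4) computes sPn and sPs from the standard principal-axis form of the Mohr relations:
import math
def mohr_components(s1, s3, a_deg):
    g = math.radians(2*a_deg)        # g = 2*a on the Mohr circle
    xc = (s1 + s3)/2                 # center of the circle
    r = (s1 - s3)/2                  # radius of the circle
    return xc + r*math.cos(g), r*math.sin(g)
print(mohr_components(3.0, 1.0, 30.0))   # about (2.5, 0.866)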
It is often important to relate the geometrical position of sPn and sPs on the Mohr circle to the actual orientation of the plane and the stress vector relative to the s1 direction. For instance, in discussing Display 5, we will find that we can determine the conditions for failure of a mechanical system by a geometrical comparison of the Mohr circle with a simple geometrical line showing the failure criterion. We can determine g from the position on the Mohr circle, and hence find a, the direction of the normal to the plane. Therefore we will be able to predict the plane along which failure will occur. As an example of the determination of a, consider a point on a Mohr circle with center at xc=2.0, sPn=2.5, and sPs=0.866. (This point does not correspond to a failure condition.) The angle g is found using tan g = sPs/(sPn-xc) = 0.866/(2.5-2.0) = 1.732, and g = 60 degrees. Using a = g/2 = 30 degrees, we know that the normal PN makes an angle of 30 degrees with respect to the s1 direction and we can draw the plane on which the stress vector acts. Finally we draw the parts sPn with length 2.5 parallel to the normal direction and sPs with length 0.866 down to the right because of the positive sign. The stress vector sP is just the sum of sPn and sPs as shown.
You can also find the values of s1 and s3 from the above data. The position of the center gives one relation for s1 and s3: xc = 2.0 = (s1 + s3)/2, and the distance of the point at sPn and sPs from the center gives the radius r of the Mohr circle and a second relation for s1 and s3. In our example, r = 1 = (s1 - s3)/2. So s1 = xc + r = 3 and s3 = xc - r = 1. You can use these values of s1 and s3 with a = 30 in display 4 to confirm the above figure and see the Mohr circle. Now work exercise 4.6 in appendix 2.
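The inversion carried out in this example can also be scripted; the following fragment (an illustrative sketch) recovers a, s1, and s3 from the circle center and a single (sPn, sPs) point:
import math
xc, sPn, sPs = 2.0, 2.5, 0.866
g = math.degrees(math.atan2(sPs, sPn - xc))      # angle of the radius above the horizontal
a = g/2                                          # angle of the normal to the s1 direction
r = math.hypot(sPn - xc, sPs)                    # radius of the Mohr circle
s1, s3 = xc + r, xc - r
print(round(a), round(s1), round(s3))            # 30, 3, 1, as in the worked example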
Display 5 Mohr Circle, Part 2
Mohr circle and Failure Criteria, Cohesion, Envelope Slope
There is no simple law that lets you determine whether a system will fail if you are given the stresses on the system. However, Byerlee’s Rule (sometimes called Byerlee’s law) provides an approximate value of the stress that will produce failure: the system will not fail as long as the shear stress is less than the critical shear. The critical shear s cr is given by the prescription s cr=C+m*s N where the two parameters C and m are called the cohesion and the envelope slope and s N is just the magnitude of the normal stress. You can explore Byerlee’s rule by plotting the failure line (or for more complicated cases an envelope) along with the Mohr circle on a s Ps vs s Pn plot as shown in the figure.
The failure line is just a straight line on this plot. The cohesion C is the y intercept of the failure line and m is the slope of the failure line. The Mohr circle gives all the allowed values of s Ps and s Pn for the state of stress. As long as the Mohr circle lies below the failure line the system will not fail. When the stresses are changed so that the Mohr circle becomes tangent to the failure line, the system will fail and we can deduce a, the angle of the plane along which failure occurs, from the value g of the point where the Mohr circle just touches the failure line.
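The tangency test itself reduces to comparing the circle radius with the perpendicular distance from the circle center to the failure line. A minimal Python sketch of the rule as stated above (my own formulation, not code from the program):
import math
def fails(s1, s3, C, m):
    xc = (s1 + s3)/2                             # center of the Mohr circle
    r = (s1 - s3)/2                              # radius of the Mohr circle
    d = (m*xc + C)/math.sqrt(m*m + 1)            # distance from the center to the failure line
    return r >= d                                # True means the circle reaches the line
print(fails(2.0, 0.3, 0.3, 0.577))               # vary s1 to bracket the failure condition of Ex 5.1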
Display 5 shows the Mohr circle and the failure line. You can vary the cohesion C and the envelope slope m, as well as s 1, s 3, and the angle a that you used in display 4. In the upper right of display 5, we also show the curves giving the normal and shear stress components for all angles of the normal to the plane. The vertical purple line on the display marks the angle a selected. On the Mohr circle there also is a line from the center to the circle drawn at the angle g =2*a .
You should explore display 5 with exercises 5.1, 5.2, and 5.3 in appendix 2.
Once the program is started you can navigate in the various displays with use of the mouse. We will use "Click on" for moving the mouse pointer to a menu or check box selection and then depressing the left mouse key, and use "Click and Drag" for moving the mouse pointer to the small square in the scroll bars, depressing the left mouse key, holding it down and moving the small square until you have the desired value of the parameter.
You can choose a display by clicking on the display name. A new menu will appear at the top of the window, click on "Display" to go directly to work with the display, or click on "Back" to return to the main menu. When you have chosen the display, you can change the parameters of the display by using the scroll bars in the white dialog box. Click and drag the small squares in the scroll bars to make large changes in the parameters. For fine adjustments, click on the squares with the arrows at the ends of the scroll bars. For some of the displays, you can turn certain features on and off by clicking in the check boxes in the dialog boxes. To leave a display, click on stop in the dialog box and then on "Back" in the menu at the top of the window.
Finally, to stop the program, go back to the main menu and click on "Quit".
Appendix 2: Exercises
Display 1: Equilibrium
Ex 1.1 Using the control box, choose a simple set of forces by using s xx=2, s yy=0, and s xy=0. If you vary the width but not the height, will L and R change? If you vary the height but not the width, will L and R change? (In this as well as further exercises you will check your comprehension best if you make the prediction, calculate the result, and then use the computer display to check your result.) Is there any torque on the system?
Ex 1.2 Choose s xx=0, s yy=3, and s xy=0: predict the values of all the forces (and check your prediction with the display) for w=h=1; w=2,h=1; w=.5,h=2.
Ex 1.3 a Choose s xx=0, s yy=0, and s xy=2: and set w=2 and h=1. (note that on the stress diagram at the right s xy=s yx). Predict all the forces and calculate the torque about the center from each force and hence the total torque. b Now make h=.5 and compare the new torques with those in part a. This gives an experimental "proof" that with s xy=s yx the total torque always vanishes.
Ex 1.4 Now explore the range of possible values of s xx, s yy, s xy , w, and h.
a can you find a combination so that both L and B make an angle of 30 degrees with the x axis?
b can you find a combination so that L makes an angle of 30 degrees with the x axis and B makes an angle of 120 degrees with the x axis?
Display 2: Stress 1
Ex 2.1 Choose a triangular solid with a =30 degree and only s xx=2 different from zero as shown below. Before starting the program, make a sketch of the triangular solid with all the force vectors on the left and all the stress vectors on the right and sketch the stress vector sP. Check your result for sP against the Cauchy formulas. Then see whether your results agree with the program display.
Ex 2.2 Use the computer display and keep the same stress as in 2.1 but now let a go to 90 degrees. What happens to the area and the force on the right side? The behavior of FP and sP should agree with your expectations and you should again check the result with the Cauchy formula.
Ex 2.3 Now set s xx and s xy to zero but choose s yy=2. Predict the behavior for sP as a goes from zero to 90 and check your prediction with the display and by evaluating the Cauchy formula.
Display 3: Stress 2
Ex3.1 With s xx=s yy=3 and s xy=1, vary a and find the values of a for which the stress vector magnitude is a maximum and a minimum. Let b be the angle between the normal and the stress vector as shown in the sketch.
What are the values for b when the stress vector is a maximum and a minimum?
What is the angle in space between the stress vector at maximum and the stress vector at minimum? (These results are quite general and this process is often called "finding the principal axes" for the stress tensor).
Ex 3.2 Still with s xx=s yy=3 and s xy=1 vary a and draw the corresponding stress vector always with its tail at the origin. The head of the stress vector now traces out an envelope curve. Make a sketch of the envelope curve and check that your envelope includes the results for the maximum and minimum found in 3.1.
(You can confirm your sketch and also experiment with different values of s xx, s yy, and s xy by clicking on the "stress ellipse" in the control panel)
Ex 3.3 Now vary the values of the components of the stress tensor s xx, s yy, and s xy so that your envelope has the same size and shape as in 3.2 but now the longest (major) axis is along x and the shortest (minor) axis is along y.
Ex 3.4 Can you vary the stress tensor components s xx, s yy, and s xy so that your envelope becomes a circle of radius 2? A circle of radius 3? Is your result unique? (Try with s xy=0 and with s xy not equal to zero)
Ex 3.5 Can you vary the stress tensor components so the envelope becomes a straight line along the x axis?
Display 4: Mohr circle, part 1
Ex 4.1 Choose a simple state of stress such as s 1=3 and s 3=1. (This corresponds to choosing sxx=3, syy=1, and sxy=0 in part 3). Examine the display first when the stress vector sP is along the major (x) axis and then when sP is along the minor (y) axis of the stress ellipse.
What are the corresponding values of the normal and shear components of the stress vector?
Make a sketch of s Ps vs s Pn, that is plot the shear component s Ps in the up direction against the normal component s Pn in the horizontal direction for a =0, 45, 90, 135, and 180 degrees. (The values of s Ps and s Pn are displayed just to the right of plot of sP. ) Mark your points with the corresponding value of a . You may want to include some points in between to obtain a clear picture of the curve that is traced out. Your curve should be a circle as shown in the figure.
Ex 4.2 Leave s 1=3 and s 3=1 and click on the button marked Mohr circle to check your sketch in Ex 4.1. Now vary the angle a of the normal to the plane from 0 to 180 and examine the curve traced out on a s Ps vs s Pn plot. The resulting circle is called the Mohr circle. Find the rule for the location of the point furthest to the right and furthest to the left for your circle. Test your rule by choosing some new values for s 1 keeping s 3 fixed, and then choose new values for s 3 keeping s 1 fixed.
Ex 4.3 Return to some convenient values like s 1=3 and s 3=1 and now find the relationship between the angle a and the location of the s Pn, s Ps point on the circle. Call g the angle that the line from the center of the circle to the point at s Pn, s Ps makes with the horizontal s Pn axis. Determine g for a =0, 45, 90, 135, and 180 degrees and thereby "prove" that g =2*a .
Ex 4.4 Note that by setting a =0 you always choose the point furthest to the right (maximum normal stress) and for a =90 you always choose the minimum normal stress. With these two choices you can easily check the results of Ex4.1 and confirm the coordinates of the points on the s Ps vs s Pn plot are at s Pn =s 1, s Ps=0 and s Pn =s 3, s Ps=0. From this result find the formula for the location of the center and for the radius of the Mohr circle.
Ex 4.5 With the expressions found in Ex 4.4
a: find the values of s 1 and s 3 that yield a circle that goes through the origin and has radius 1.
b: find the values of s 1 and s 3 for which the Mohr circle has its center at 2 and zero radius. What is the shape of the stress ellipse in this case? This is a two dimensional example of lithostatic stress. For lithostatic stress the magnitude of the stress is independent of the orientation of the plane, the stress is always perpendicular to the plane, and the shear component is always zero.
Ex 4.6 Given g = 90 degrees, sPn = 1.5, and sPs = 1, draw the corresponding point on a Mohr circle diagram, i.e., with sPn along the x axis and sPs in the y axis direction. Now draw the Mohr circle using the fact that g = 90 degrees. From your sketch calculate s1, s3, and a, and then confirm your results using display 4.
Display 5: Mohr circle, part 2
Ex 5.1 Set the cohesion to 0.3 and the envelope slope to 0.577 so that the envelope is at 30 degrees with the horizontal. With s 3=0.3 what value of s 1 will cause the system to fail? What is the angle a that the normal to the plane makes with s 1 just at failure? Is the value of s Ps the maximum allowed by the Mohr circle? (You can use the plot of s Ps vs a in the upper right to check your answer).
Ex 5.2 keep the cohesion and envelope slope fixed at 0.3 and 0.577 and increase s 3. Try to predict the changes of s 1 and a that will just produce failure. Now set s 3 to 0.6 and use the display to obtain the corresponding values for s 1 and a .
Ex 5.3 The simple straight "failure line" we have used in display 5 is a good approximation over a finite range of stresses. For a wide range of the principal stresses, the straight line is replaced by a more complicated failure envelope as shown.
For large values of the stresses the slope of this failure envelope is nearly zero. In this case, what angle a does the normal to the failure plane make with the s 1 direction? Is the value of s Ps now at the maximum allowed by the Mohr circle?
1 Stress on system sxx,sxy on left; syx, syy on bottom
2 Plane PL, normal to plane PN, stress vector sP acting on plane PL
3 Cauchy Equations
sPx = sxx*cos(a) + sxy*sin(a)
sPy = syx*cos(a) + syy*sin(a)
or, with cos(a) = PNx and sin(a) = PNy: sPx = sxx*PNx + sxy*PNy, sPy = syx*PNx + syy*PNy
4 Stress ellipse: envelope of sP, principal axes s1, s3; (s1 > s3)
5 Normal and shear components sPn , sPs
6 Mohr Circle plot of sPs vs sPn: xc=(s1+s3)/2; radius=(s1-s3)/2; tan g=sPs/(sPn-xc)
7 Failure envelope: y intercept=Cohesion, slope=envelope slope,
Mohr circle is tangent to envelope at failure. | http://www.earth.lsa.umich.edu/~vdpluijm/stressmohr/stressmohr.html | 13 |
58 | On May 4,
2000, a prescribed fire
on National Park Service land near Los Alamos,
New Mexico, was blown out
of control by erratic, gusty winds. The fire
exploded into nearby Santa
Fe National Forest, swept across lands at Los Alamos National Laboratory, and destroyed hundreds of homes in the town
of Los Alamos.
Even as firefighters were struggling to contain the fire, another problem loomed in the minds of the people involved. Some of the burned land was in a watershed that fed into canyons where the Los Alamos National Laboratory, birthplace of the atomic bomb, had disposed of potentially radioactive waste before environmental regulations were as strict as they are today. What would happen when the rainy season began in less than two months and thunderstorms drenched the burned slopes?
Answering that kind of question is
the work of a multidisciplinary
group of scientists and natural resource
management experts called a
BAER team, for Burned Area Emergency
Rehabilitation. Members can come from several federal and state agencies
and can consist of hydrologists,
wildlife biologists, archaeologists,
soils scientists, landscape
architects, geologists, ecologists,
engineers, foresters, botanists, and
Geographic Information System (GIS)
specialists. Usually on the ground
before a fire is even fully
contained, a BAER team evaluates the burned
area for threats to life,
property or natural resources due to post-fire flooding and erosion.
Forest fires attract the most attention when they’re actively burning, but a threat remains after the flames have died down. After a fire, damaged vegetation, scorched underbrush, and soils that shed water can lead to severe erosion and flash floods. Scientists now use satellite data to aid rehabilitation teams in locating severely burned areas, helping to prevent post-fire disasters. (Photograph copyright Kari Brown, National Interagency Fire Center Image Portal)
Why is flooding after a fire such a big threat? For one thing, flames consume leaf litter and decomposing matter on the ground that normally soak up water. Additionally, after a fire, the soil itself has the potential to become hydrophobic, or water repellant. Plants and trees have numerous protective chemicals with which they coat their leaves to prevent water loss. Many of these substances are similar to wax. Vaporized by the heat from fires, these substances disperse into the air and then congeal over the soil surface when the fire begins to cool. Like the wax on your car, these substances coat the soil, causing water to bead up and run off quickly. In general, the greater the fire intensity and the longer the fire’s residence time, the more hydrophobic the soil becomes.
|Burned Area Emergency Recovery (BAER) teams move in soon after a fire sweeps through. Composed of wildlife biologists, soil scientists, foresters, and other natural resource specialists, the teams are responsible for locating areas that are susceptible to post-fire erosion, and doing what they can to prevent soil loss and flooding. (Photograph courtesy Rob Sohlberg, University of Maryland)|
In the short term, loss of vegetation and hydrophobic soil contribute to flash flooding and erosion—sometimes to the point of landslides. When fallen trees and other fire debris are added to the scenario, floods can become severe. Excessive rainwater can sweep ash and debris into streams and rivers, contaminating natural and manmade water sources. This past summer, the Hayman Fire in Colorado resulted in erosion problems that fed ash and sediment into Cheesman Reservoir, which supplies 15 percent of Denver’s water. Several other reservoirs were affected by the blaze as well. Humans aren’t the only ones affected. Mudslides resulting from the Missionary Ridge Fire near Durango, Colorado, washed mud and debris into the Animas River, killing hundreds of fish in the award-winning trout stream.
|A hot fire can even change the composition of soil, making it hydrophobic. Hydrophobic soils repel water, which will bead up and quickly run downslope. Instead of being absorbed by vegetation and forest litter, rain in a severely burned area will stay on the surface, potentially causing floods, erosion, and mudflows. Assessing how hydrophobic the soil has become is an important BAER team activity. [Photographs courtesy USDA Forest Service (USDAFS) Agricultural Research Service (left) and Annette Parsons, USDAFS (right)]|
Beyond this immediate cause for concern, water-repellant soil poses a longer-term problem for new plant growth. As seeds and undamaged plant structures try to regenerate in a burned area, they need water. But rainwater has difficulty soaking down into water-repellent soil, sometimes slowing the healing process in severely burned areas.
In assessing such threats, BAER teams are under intense pressure: they must make an assessment report within seven days of a fire’s containment. If threats exist, such as the potential for a landslide on a fire-charred mountain slope to descend upon nearby homes, the team must include mitigation measures that can be carried out before the first damaging storm. In the case of the Cerro Grande Fire, those measures included the construction of a small dam in Pajarito Canyon to prevent any potentially radioactive-contaminated sediment from entering the watershed and being carried away from the site.
|These photographs show the results of post-fire erosion. The boulder-filled channel (left) was created after the Cerro Grande fire. The eroded material was deposited further downstream (right). The water during the flood reached several feet higher, as shown by the trees stripped of bark. (Photographs courtesy John A. Moody, USGS Hydrologic and Erosional Responses of Burned Watersheds)|
Satellites Do It Faster, Cheaper
Given the time constraints, remote sensing data have great potential to assist BAER teams in their assessments. Over the past decade, scientists at the USDA Forest Service’s Remote Sensing Applications Center (RSAC) in northern Utah have been developing techniques for using airborne and satellite-based instruments to map out burned landscapes. Mark Finco is a senior scientist specializing in remote sensing and GIS for the RSAC. He described the group’s BAER activities as a long-running project that began 8 or 9 years ago. The group’s initial efforts were in the development of a digital infrared color camera that could be mounted on an aircraft platform and flown over fires. Although the camera produces good imagery, it’s expensive, and it takes a lot of effort to map a large area. Over the last three years or so, the RSAC has been investigating how satellite-based images could be used in the BAER analysis.
The Multi-sensor Advantage
The burn scar of the Missionary Ridge fire appears dark red in this Moderate Resolution Imaging Spectroradiometer (MODIS) image from August 10, 2002. With a maximum resolution of 250 meters per pixel [the image above is 135 km (84 miles) across] and coverage every 1 to 2 days, MODIS provides frequent observations of large areas. The scientists use higher-resolution satellites like Landsat to map the burned area in detail. The white box in the image above indicates the subsetted Landsat scene shown below. (Image by Robert Simmon and Jesse Allen, NASA GSFC)
Geographer Rob Sohlberg with the University of Maryland describes how multiple satellite sensors are used in satellite mapping of burn severity. “In general, the higher the sensor’s spatial resolution, the less likely it is that it will be observing the area we need at exactly the right time for burn severity mapping of a specific fire.” Higher resolution data are also inevitably more expensive to obtain. “With a sensor like MODIS,” Sohlberg continues, “the coverage is much better—we see almost the entire surface of the Earth every day—but at coarser resolution.” These coarser-resolution data aren’t ideal for mapping out such detail as which hillside, creek, or ravine is especially damaged, but they can be useful for very preliminary mapping of burned areas, and perhaps more importantly, they can help scientists to decide in which areas high-resolution imagery is most needed.
The Enhanced Thematic Mapper plus (ETM+) aboard Landsat 7 has a true-color resolution of 30 meters per pixel—much better than MODIS—but only covers an area every 9 to 16 days. Smoke partially obscures the burn scar in this true-color image, making it difficult to see the patchy nature of the burn. Infrared wavelengths of light measured by Landsat penetrate smoke, and false-color images made using data from the infrared portion of the spectrum reveal the structure of the scar. (Image by Robert Simmon, based on data provided by Andrew Orlemann, USDAFS)
Annette Parsons has worked for more than 20 years as a soil scientist and mapping specialist for the Forest Service, and has been involved with BAER activities for more than a decade. Parsons was quick to see the potential of the RSAC satellite-based products and is now serving as a liaison between the RSAC and the BAER teams that operate on the ground. Says Parsons, “One of our main goals is to get local forest officials to notify the RSAC as soon as they think a fire might end up needing a BAER assessment. The earlier we know, the better our window of opportunity for finding the best satellite imagery for the area and having it in the team’s hands when they arrive at the field location.” Parsons agrees that MODIS’ coarser resolution provides less detail than Landsat or SPOT, but she says, “You can count on it almost every day. And in very large fires, such as this summer’s Biscuit Fire in Oregon, coarser-resolution data may be all we get for some areas.”
Finco identifies another use for the MODIS products. “All these burn products have the potential to be used for carbon budget investigations.” Fires release into the atmosphere carbon that is stored in trees, plants, and even the soil if the fire is intense and long-lasting. Regrowth draws carbon back in and stores it in plant matter. “People tend to think that a burn perimeter on a map means everything within that perimeter was totally burned. But the vast majority of the terrain is a mosaic of areas showing different burn severity. If you understood this mosaic, it would improve carbon emission estimates.” MODIS data are well suited for this application, especially when the fires are in the several-hundred-thousand-acre category.
Detail down to the level of individual trees is revealed by Space Imaging’s IKONOS satellite, which produces color images at 4-meter resolution. This true-color image, acquired June 23, 2002, shows dark gray, burned area in the upper left, and green, unburned area in the lower right. The bright blue-white smoke plume is caused by a smouldering hotspot, and thin smoke covers much of the rest of the scene. (Image copyright Space Imaging, based on data provided by Andrew Orlemann, USDAFS)
Using a combination of sensors allows the RSAC team to make burn severity maps quickly and relatively cheaply—especially compared to the cost of hours of helicopter time, which is the traditional reconnaissance source. It may seem trivial, but satellites have another advantage over sketch mapping from helicopters, which Finco and Parsons are both quick to point out. Leaning out of a moving helicopter as it weaves back and forth over a burned area, looking out to the terrain and down to the map in your lap not only makes it hard to be sure where you are, but also makes it hard to keep down your lunch!
Burn severity maps are produced by assigning the burned landscape into one of four categories based on characteristics like damage to trees and other vegetation, soil hydrophobicity, and ash color and depth. This map of burn severity for the Missionary Ridge Fire is based on preliminary satellite data. Before making a final map, scientists must carefully compare the post-fire effects to the unique pre-fire landscape. (Image courtesy Monte Williams, USDAFS)
Assessing Burn Severity from Space
This aerial view shows the differences between low, moderate, and high burn severity. Green trees are in low burn severity areas, brown trees with dead needles are located in moderately burned areas, and black, needleless trunks have been severely burned. Ground level images of similar areas are on page 3. (Image courtesy Annette Parsons, USDAFS)
The way in which the maps are produced depend largely on what sensor provides the data, but most involve using vegetation characteristics as an indicator of soil conditions. Generally, the maps are based on an indicator called NDVI, for Normalized Difference Vegetation Index, or one called NDBR, for Normalized Difference Burn Ratio. NDVI is based on the principle that vegetation absorbs red light and reflects near-infrared light. Changes in the amount of each kind of energy reflected from the surface can signal how severely the pre-existing vegetation has been transformed by fire. NDBR is based on the observation that vegetation reflects energy in the near-infrared, while soil tends to reflect energy from the mid-infrared part of the spectrum. After a burn, the vegetation within the burned area has been reduced and the soil has been exposed. As a result, the near-infrared values are lower than before the fire, while the mid-infrared values are higher. These NDVI and NDBR relationships are used to classify areas in a satellite image into one of four burn severity categories: high, moderate, low, and unburned.
Scientists estimate burn severity with satellite data by comparing the signal from different wavelengths of light. Near-infrared and red light are used for the Normalized Difference Vegetation Index (NDVI), which indicates the density and health of vegetation. Dark areas in the NDVI image (left) represent unvegetated or burned areas. Normalized Difference Burn Ratio (NDBR) uses the same math as NDVI, but near-infrared and shortwave wavelengths. NDBR highlights areas of exposed soil [bright areas in the NDBR image (right).] Data collected after a fire are compared with pre-fire data to produce burn severity maps. Before these maps are finalized, teams must survey the burned areas from the ground. (Images by Robert Simmon, based on Landsat 7 data provided by Andrew Orlemann, USDAFS)
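In code, these indices amount to simple band arithmetic. The numpy sketch below is only a generic illustration: the inputs are assumed to be pre- and post-fire near-infrared and shortwave-infrared reflectance arrays, and the class thresholds are placeholder values rather than the ones used by the RSAC (NDVI would be computed the same way from the near-infrared and red bands):
import numpy as np
def normalized_difference(b1, b2):
    return (b1 - b2) / (b1 + b2)                 # the (a - b)/(a + b) form shared by NDVI and NDBR
def classify_burn(nir_pre, swir_pre, nir_post, swir_post):
    nbr_pre = normalized_difference(nir_pre, swir_pre)
    nbr_post = normalized_difference(nir_post, swir_post)
    dnbr = nbr_pre - nbr_post                    # larger change suggests a more severe burn
    return np.digitize(dnbr, [0.1, 0.27, 0.66])  # 0 unburned, 1 low, 2 moderate, 3 high (illustrative bins)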
But How Good Are They?
While the BAER teams revise the satellite maps based on field observations, their primary concern is conducting their own emergency assessment. They have neither the time nor the resources to provide any kind of systematic feedback to the satellite mappers to help them improve their maps. In search of something more scientifically defensible, the RSAC has partnered with scientists at the University of Maryland and obtained funding from the National Interagency Fire Center’s Joint Fire Science Program to collect their own field data on burn severity conditions, which they are using to check the accuracy of the satellite classification, a process called validation.
Sohlberg is one of the scientists who spent some time in the field this summer. Since BAER teams are often on the ground before fires are totally contained, Sohlberg had to become certified as an entry-level firefighter in order to do this work. A lot of the training was in the classroom, said Sohlberg, but they also had to learn how to dig fire lines, operate pumps and hoses, and how to maintain their tools. They had to participate in timed drills to see how quickly they could set up an emergency fire shelter using the distinctive shiny Mylar bags. Said Sohlberg, “We were out there to collect data, but we always had to remember to be on guard; and we had to carry full personal protective equipment: fire-resistant pants and shirt, 8-inch logging boots, fire shelter, compass, water, goggles, helmet and leather gloves. Although we were never in contact with active fire, there were still lots of hazards—damaged trees that are prone to falling, called snags; rolling rocks; and smoldering stump holes.”
In the field, the team uses a mapping-grade Global Positioning System receiver that has a built-in data dictionary. Once the GPS device has pinpointed the scientists’ location, they can access the data dictionary, which prompts them with a series of preset questions about the burn severity at the site. Is there any ground cover left? How deep is the ash, and what color is it? How long does a drop of water sit on the surface before soaking in or running off? Are there any green plants left? Are there any needles or leaves left on the trees? What is the overall assessment of the burn severity at this location? Since the satellite maps the whole area, the team tries to collect data to match up with all the different burn categories within a fire perimeter. If the physical geography of the terrain is highly variable, for instance because of topography or ecology, the team tries to collect data for each burn category in each of the different physiographic regions within the fire perimeter. That’s a lot of different combinations.
BAER teams enter burned areas so soon after a fire has rolled through that they need to be trained as wildland firefighters. In the field they carry full gear, including fire resistant clothing, helmet, and a fire shelter. In addition to the threat from hot spots, snags of dead trees and unstable slopes contribute to the danger. (Photograph courtesy Rob Sohlberg, University of Maryland)
Such differences can be tricky for satellite classification. “You have to be initially cautious about the classifications, particularly in areas of diverse terrain or geology. Fire may quickly pass through a rocky area with already sparse vegetation, or grassland, and blacken everything. Or it might burn through a forested ridge, consuming acres of trees and all the vegetation. From a satellite, all these areas might look the same—just black—and might be classified as high severity. But the actual severity, as far as the impact on the watershed is concerned, would be very different.
In the first case, the post-fire runoff and
erosion would probably not
be substantially increased over pre-fire
conditions, and the vegetation
would probably recover in a season. The
effects could hardly be called
severe. In the second case, the impacts
would be much more severe.”
The RSAC has provided satellite-based burn severity maps for BAER teams for two fire seasons: 2001 and 2002. Parsons says that their experience suggests that the remote-sensing-based assessments are most accurate in more forested ecosystems. In that kind of terrain, they can often use the satellite maps from the RSAC without any major modification. In shrubby or rocky ecosystems, they require more fine tuning from ground-truth. Even so, Parsons says, her main criterion for the project’s success is less in how technically accurate the maps are and more in their utility to field crews. “We have produced satellite maps for at least 70 fires in the 2002 fire season, and the word we have gotten back from BAER field teams is overwhelmingly positive. They may not be 100 percent accurate at fine scale, but they have been very useful in pointing teams to areas of greatest concern.” Using those criteria, the project can already be called a success.
Though the flames seem menacing, grassland fires are rarely considered severe. Grassland ecosystems are well adapted to fire, and while above-ground grass stems may be consumed, roots often remain, allowing the ecosystem to regenerate rapidly. In these cases, the potential for flooding and erosion is not very different from pre-fire conditions, and the burn can hardly be considered severe—despite the fact that the satellite sees a blackened surface. Scientists are collecting validation data in the effort to increase the sophistication of the satellite classifications. (Photograph copyright Kari Brown, National Interagency Fire Center Image Portal)
That won’t stop the scientists from continuing to tweak their approach. Over the winter, Forest Service and University of Maryland scientists will look for ways to increase the accuracy of the satellite maps using data collected this past summer at the Missionary Ridge Fire in Colorado, the Rodeo-Chediski Fires in Arizona, the East Fork Fire in northern Utah, and the Biscuit, Winter, Toolbox and Eyerly Fires in Oregon. NASA is supporting that work as part of its Natural Hazards program. The team will also be preparing for another season of data collection in the summer fire season of 2003.
The final burn severity maps developed by BAER use satellite data combined with information gathered by the ground teams. These maps are then used to prioritize efforts to stabilize soil, preventing later erosion and mitigating possible floods. (Image courtesy Monte Williams, USDAFS)
The group also plans to take advantage of its involvement in NASA’s newly created Response Team to coordinate the logistics of acquiring imagery from airborne sensors. The team has their eye on the MAS (MODIS Airborne Simulator) on NASA’s ER-2 aircraft and the AIRDAS (Airborne Infrared Disaster Assessment System), developed by NASA’s Ames Research Center and the Forest Service to fly on a variety of aircraft. They also hope to make use of another Terra instrument called ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer). ASTER offers unique capabilities in sensing thermal infrared radiation, which makes it particularly useful for seeing through smoke. Increasing the number of data sources increases the team’s opportunity to provide useful images and burn severity maps to BAER teams, and also increases the opportunity for cross-comparisons among all the remote-sensing data, which serve as a means of validation.
Satellite data were used to map many fires in the summer of 2002, including the 200,000-hectare (500,000 acre) Biscuit Fire near Grants Pass, Oregon (above). In this image, red shades indicate burned areas and green indicates unburned forest. Low and moderately severe burns are a mixture of red and green. The image combines shortwave-infrared, near-infrared, and green light as red, green, and blue, respectively. Landsat 7 acquired the data on August 30, 2002. (Image by Robert Simmon, NASA GSFC, based on data provided by Andrew Orlemann, USDAFS)
This partnership is not the first for the RSAC, NASA, and University of Maryland. The three originally teamed up as part of the MODIS Land Rapid Response Project, in which daily imagery and fire detections from MODIS were relayed from NASA’s Goddard Space Flight Center, to the University of Maryland, and on to the RSAC for use by the National Interagency Fire Center as it allocated firefighting personnel and resources across the country. That collaboration ultimately led to the installation of a MODIS Direct Broadcast Receiving Station at the RSAC and paved the way for the current work. With solid validation data, the satellite-based burn severity maps produced in coming seasons will only get better, saving BAER crews precious time as they assess the dangers left behind in the wake of wildfires. | http://www.visibleearth.nasa.gov/Features/BAER/printall.php | 13 |
74 | Chapter 3 Estuary setting
There are many types of estuary determined by their geological setting and dominance of particular processes. This chapter:
- Outlines estuary classification and setting according to topographical and geomorphological classification (Figure 3.1);
- Reviews the ways in which estuaries have been classified in the past;
- Describes estuarine processes, which contribute to the changing geomorphology of an estuary, and assist in the definition of estuary classification;
- Discusses estuarine geomorphological characteristics;
- Summarises estuarine characteristic parameters including estuary length, tidal prism, cross sectional areas and sedimentology;
- Discusses the form and function of an estuary according to controls and constraints of estuary characteristics and process.
There are many things that contribute to the form and functioning of an estuary, for example, the size and length of the river catchment, the amount of river flow, the tidal range and geological setting of the estuary (often referred to as the antecedent conditions i.e. what went before). The recent geological record aids the understanding of the behaviour of estuaries. In particular, the period known as the Holocene (approximately the last 10,000 years, going back to the last ice age) is important because this determines the recent history of infilling by sediments. This has occurred as sediments are washed down the rivers, or carried in from the sea by the tide, and dropped in the more tranquil conditions of the estuary. The pre-Holocene geology is usually much harder and defines the basin in which the estuary sits. It may also provide local hard points, such as the “narrows” to be found at the mouth of the River Mersey in England.
Examining the amount of infilling that has taken place over the Holocene allows different types of estuary to be identified. Firstly, there are the deep fjords and fjards found in Scotland, Norway and New Zealand, where any infilling is insignificant and the shape and size of the estuary is entirely dependent on the shape carved out by earlier ice ages. Then there are a group of estuaries known as rias, which are also rock forms, usually carved by rivers or ice melt waters, and now partially infilled. Examples are to be found on the south coasts of Ireland and England and again in New Zealand. There are also three groups that are almost entirely formed within Holocene sediments. All are a result of marine transgression, the first being drowned river valleys, which are referred to as spit enclosed or funnel-shaped estuaries. The second are the embayments, which are river or marine in origin (i.e. not glacial) and where one or more rivers meet at the mouth; and the third are drowned coastal plains where tidal inlets have formed. All three are to be found on the numerous sedimentary coasts around the world.
This progression provides some clues as to how an estuary develops. Clearly there is a progressive infilling taking place that depends on the size of the initial basin and the amount of sediment available (Figure 3.2); either from erosion in the catchment, or supplied from the marine environment. Beyond a certain point, however, a sort of balance is reached and the estuary begins to release sediment, rather than retain it. This is all to do with a bias in the tide. When there is not much sediment present the greater depth at high water means that the tidal wave travels faster at high water than at low water. This helps to bring sediment in from the sea (Dronkers, 1986). However once the mudflats build up so that, even at high tide, water depths are quite shallow, this has the effect of slowing the tidal wave at high water. Clearly, this distortion will have the opposite effect and tend to export sediment from the estuary. As changes take place in the tides, the level of the sea, the flows draining from the rivers and the supply of sediment, so the balance will continuously adjust (Pethick, 1994).
The same progression allows the different types of estuary to be classified and with an appreciation of the dominant processes, a more detailed characterisation of a particular estuary can be undertaken, as explained in the following sections.
There are many ways in which estuaries have been defined, but by their very nature as places of transition between land and sea, no simple definition readily fits all types of estuarine system. Perhaps the most widely used is that proposed by Pritchard: “An estuary is a semi-enclosed coastal body of water which has a free connection with the open sea and within which sea water is measurably diluted with fresh water derived from land drainage” (Pritchard, 1967). Pritchard went on to propose a classification from a geomorphological standpoint with four subdivisions: (1) drowned river valleys, (2) fjord type estuaries, (3) bar-built estuaries and (4) estuaries produced by tectonic processes.
A very similar classification was used for the Estuaries Review undertaken in the UK (Davidson et al., 1991). On the basis of geomorphology and topography, estuaries were divided into nine categories: (i) fjord, (ii) fjard, (iii) ria, (iv) coastal plain, (v) bar built, (vi) complex, (vii) barrier beach, (viii) linear shore and (ix) embayment.
Hume and Herdendorf (1988) undertook a review of these and several other classification schemes before developing a scheme to cover the range of estuary types to be found in New Zealand.
In this scheme estuaries are grouped into five classes according to the primary process that shaped the underlying basin, prior to the influence of Holocene sediment deposits (Table 3.1). Within each of these five classes, there is further subdivision based on the geomorphology and oceanographic characteristics of the estuary; tides and catchment hydrology being the two most important.
|Type||Primary mode of origin||Estuary type|
|1||Fluvial erosion||Funnel shaped|
|3||Barrier enclosed||Double spit|
|8||River mouth||Straight bank|
|13||Tectonism||Fault defined embayment|
|14||Volcanism||Diastrophic embayment|
There is some evidence that these various types can be grouped to reflect the degree to which the antecedent conditions have been altered as a result of Holocene sedimentary processes (Townend et al., 2000). The resultant groupings are as follows (Table 3.2):
|A||Limited or no sedimentary influence||12, 14, 16|
|B||Relatively “young” systems in terms of Holocene evolution||1, 13, 15|
|B/C||Fall between Groups B and C possibly because of headland control||2|
|C||Fully developed Holocene environments||3-11|
A more recent classification of UK estuaries (Defra, 2002) has developed the first three geomorphological types identified by Pritchard (1967) by including behavioural type to suggest the following seven subdivisions (note: this excludes tectonic/volcanic origins which are found elsewhere in the world) (Table 3.3):
|3||Drowned river valley||Ria|
|7||Drowned coastal plain||Tidal inlet|
This classification has been further developed by the EstSim Project (FD 2117, EstSim Consortium, 2007) to identify specific geomorphological elements of UK estuaries in the form of an estuary typology (Table 3.4). This estuary typology has been applied to UK estuary data and developed into a rule base, presented in Table 3.5. The resulting classification for UK estuaries can be found in the Estuaries database and in much more detail at the EstSim website. Each of the estuary types has been mapped in terms of their key morphological components, termed their geomorphic elements (described below), in Systems diagrams for UK estuaries.
|Type||Origin||Behavioural type||Spits1||Barrier beach||Dune||Delta||Linear banks2|
|3||Drowned river valley||Ria||0/1/2|
|7||Drowned coastal plain||Tidal inlet||1/2||X||X||E/F|
|Type||Origin||Behavioural type||Channels3||Rock platform||Sand flats||Mud flats|
|3||Drowned river valley||Ria||X||X||X|
|7||Drowned coastal plain||Tidal inlet||X||X||X|
|Type||Origin||Behavioural type||Salt marsh||Cliff||Flood plain4||Drainage basin|
|3||Drowned river valley||Ria||X||X||X|
|7||Drowned coastal plain||Tidal inlet||X||X|
1 Spits: 0/1/2 refers to number of spits; E/F refers to ebb/flood deltas; N refers to no low water channel; X indicates a significant presence.
2 Linear Banks: considered as alternative form of delta.
3 Channels: refers to presence of ebb/flood channels associated with deltas or an estuary subtidal channel.
4 Flood Plain: refers to presence of accommodation space on estuary hinterland.
|1||Fjord||Glacial origin, exposed rock platform set within steep-sided relief and with no significant mud or sand flats|
|2||Fjard||Glacial origin, low lying relief, with significant area of sand or mud flats|
|3||Ria||Drowned river valley in origin, with exposed rock platform and no linear banks|
|4||Spit-enclosed||Drowned river valley in origin, with one or more spits and not an embayment|
|5||Funnel-shaped||Drowned river valley in origin, with linear banks or no ebb/flood delta and not an embayment|
|6||Embayment||River or marine in origin (i.e. not glacial), with multiple tidal rivers meeting at or near mouth and a bay width/length ratio1 of 1 or greater, and no exposed rock platform|
|7||Tidal inlet||Drowned coastal plain in origin, with barrier beaches or spits|
|1 Where bay extends from sea opening to the confluence of the rivers|
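Because Table 3.5 is essentially a small decision procedure, it can be illustrated in code. The sketch below is a minimal, hypothetical Python rendering of those rules; the attribute names (origin, spits, linear_banks, and so on) and the boolean encoding are assumptions introduced here for illustration, not the EstSim implementation itself.

```python
# Minimal sketch of the Table 3.5 rule base (hypothetical encoding, not the EstSim code).
def classify_estuary(origin, steep_relief=False, rock_platform=False,
                     sand_or_mud_flats=False, linear_banks=False,
                     ebb_flood_delta=False, spits=0, barrier_beach=False,
                     embayment=False, rivers_meet_at_mouth=False,
                     bay_width_length_ratio=0.0):
    """Return an estuary behavioural type following the published rules."""
    if origin == "glacial":
        # Fjord: exposed rock platform, steep-sided relief, no significant flats.
        if rock_platform and steep_relief and not sand_or_mud_flats:
            return "Fjord"
        # Fjard: low-lying glacial system with significant sand or mud flats.
        if sand_or_mud_flats and not steep_relief:
            return "Fjard"
    if origin == "drowned river valley":
        if rock_platform and not linear_banks:
            return "Ria"
        if spits >= 1 and not embayment:
            return "Spit-enclosed"
        if (linear_banks or not ebb_flood_delta) and not embayment:
            return "Funnel-shaped"
    if origin == "drowned coastal plain" and (barrier_beach or spits >= 1):
        return "Tidal inlet"
    if origin in ("river", "marine") and rivers_meet_at_mouth \
            and bay_width_length_ratio >= 1 and not rock_platform:
        return "Embayment"
    return "Unclassified"

# Example: a drowned river valley enclosed by a single spit.
print(classify_estuary("drowned river valley", spits=1))  # -> "Spit-enclosed"
```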
Such classifications provide a broad description of the type of estuary and are particularly relevant when considering the likely functioning of an estuary using regime concepts (see section on Study methods). However, estuaries also contain a number of other distinct features, which distinguish them from marine and terrestrial habitats. For instance, they generally contain wetlands that form at the margins of the land and the sea, and are unique in that they link marine (subtidal and intertidal), freshwater and terrestrial ecosystems. On the seaward side are banks, shoals, sand flats, mud flats and saltmarsh habitats, which link to fringing habitats such as sand‑dunes, shingle ridges and coastal marshes, in turn linking to progressively less saline terrestrial habitats, such as freshwater marshes and coastal grassland, in a landward direction.
Characteristic features of estuaries include:
- extensive intertidal areas including saltmarshes, mudflats and sand flats,
- semi‑diurnal or diurnal tidal regime,
- wave shelter,
- water layering and mixing,
- temperature and salinity gradients,
- sediment suspension and transport,
- high productivity,
- high levels and rapid exchange of nutrients,
- the presence of plants and animals particularly adapted to these conditions, and
- the presence of migrant and seasonally fluctuating populations of animals (particularly birds).
Within EstSim (EstSim Consortium, 2007) the physical features of estuaries have been classified into the following units:
- Barrier beaches;
- Rock platforms;
- Saltmarsh; and
- Drainage basin.
A generic description of each of the above elements can be found in Estuary geomorphic elements - PDF 812KB (ABPmer, 2007) under the following headings:
- Definition of Geomorphic Element (GE):
Providing an overall definition of the GE in question through, for example, a description of the key aspects of the form, formation, processes or location within an estuary system;
- Function:
Defining the role of the GE within the physical system in terms of exchanges of energy and mass;
- Formation and evolution:
Providing details of the processes that lead to the formation of the particular GE and how the GE develops and evolves over time;
- General form:
Describing the characteristic shape (or component shapes) of the GE, where appropriate highlighting the prevailing conditions under which a particular form will be adopted;
- General behaviour:
The general behaviour of the GE is described in terms of how the GE may respond to the varying forcing to which it can be exposed;
- Forcing factors:
This section describes the key processes (for example, wave attack) responsible for shaping the GE, with details provided where appropriate of the role of the forcing processes;
- Evolutionary constraints:
This section details the factors that may alter or constrain the development of the GE leading to a differing evolution due to that constraint;
- Behavioural timescales:
As discussed above, landforms will respond to forcing over a range of time and space scales, and will exhibit characteristic responses of differing scales. For each GE, the behaviour of the element is discussed over different timescales; and
- Interactions with other geomorphic elements:
Each GE will be linked to other GEs present within a particular estuary system. This section identifies the interactions in terms of flows of energy and/or matter between GEs. Interactions are identified and discussed either in terms of general interactions (for both elements within the estuary system and external to the estuary system) or interaction with specific geomorphic elements.
The prevailing processes in the estuary clearly determine many of these characteristic features. It is therefore important to have a sound appreciation of what processes one might expect to find and how to go about determining the relative importance of the different processes. Fortunately this is an area that has been the subject of considerable research and as a result there is a lot of background information available. A good starting point is a number of standard texts on the subject of which there are many. For example, from a physical perspective the books by Ippen (1966), McDowell & O’Connor (1977), and Dyer (1997) provide a useful introduction and from a more geomorphological perspective, books by Pethick (1984), Carter (1988), and Carter and Woodroffe (1994) have chapters on estuaries.
The estuary is an area of transition from the tidal conditions seaward to the freshwater flows from landward. Not only does this involve a change from the reversing tidal flow to the uni‑directional river flows upstream, but there is also a transition from saline to freshwater conditions. As saline and freshwater bodies meet, mixing takes place, to a greater or lesser degree, and can give rise to a marked interface between the two bodies and the occurrence of internal waves on the interface between the two. Such salinity gradients can also set up density flows, which can be directed both along and across the estuary depending on the size of the estuary. These water movements are further complicated by the presence of surface waves. As well as waves formed within the estuary, waves can also be generated externally (i.e. offshore) and propagate into the estuary.
The complexity of water movements is reflected in the sediment transport pathways within the system. Sediments can be supplied from marine or freshwater sources. In some estuaries, sediment is brought down rivers when they are in flood, and in from the sea during periodic storms. There can be a high degree of sediment reworking within an estuary, and erosional and depositional shores can exist in close proximity. Although many intertidal mudflats and sand flats appear relatively stable at least in the medium term, such areas can be quite dynamic, with deposition and erosion taking place at comparable rates and leading to a form of dynamic equilibrium. Sediments can be cycled on a variety of timescales, for example, changes in the configuration of channels and bed forms can occur over periods as short as days, whilst also responding to longer‑term effects such as changing sea levels. The feedback between accretion, water movements and sediment transport is expressed schematically in Figure 3.3.
Estuaries are often characterised by the deposition of fine sediment. This means that the movement of fine material (sand and mud) is a crucial component of estuarine sediment pathways and this will often be superimposed on the movement of coarser sand fractions. Hence, the variety of environments and sediment sources, coupled with the linkage between erosion and accretion in different areas in the same estuary, highlight the need to consider the estuary system as a whole.
There are a number of measures that do not require detailed modelling but collectively convey a great deal about the type of estuary. These can be as simple as an examination of the length and depth compared to the tidal range, the plan form in terms of variation of width along the estuary or the dimensions of any meanders. The following tables set out some of the properties which, collectively, can be used to characterise an estuary (Dun & Townend, 1998). The first table defines a range of measured, or observed, properties (Table 3.6). These are often supplemented with a number of interpreted or derived properties that express particular attributes of the system, particularly in relation to equilibrium or steady state concepts, as summarised in Table 3.7.
|Property||Definition or reference|
|Lengths||Usually overall and to key change points (e.g. from mouth to tidal limit).|
|Plan Areas||Helps to include area of catchment and floodplain as well as estuary area at various elevations. Features such as saltmarsh and intertidal may also be analysed individually.|
|Cross-sectional areas||Typically at the mouth and tidal limit.|
|Volumes||Usually in terms of volume below a given level (e.g. MHW, MLW) and the volume of the tidal exchange (prism). Useful to also examine variation with chainage.|
|Widths and Depths||Indicative values at mouth, tidal limit and as an average over estuary length.|
|Tidal levels and range||Spring and neap values at locations along the estuary.|
|Freshwater flows||Magnitude of annual mean daily flow rate and peak values.|
|Geology||Usually mapped from available borehole records and provides essential information on potential constraints to long-term change.|
|Geomorphology||Mapped from maps, charts, aerial photographs, remote sensing images and field survey data to show all major forms - includes features such as saltmarsh, mudflat, cheniers, spits and nesses, bed forms, artificial channels and reclamations, ridges, cliffs, dunes, etc.|
|Sedimentology||Surficial sediments need to be characterised to assess sediment sources, sinks and transport regimes (McLaren & Bowles, 1985).|
|Property||Definition or reference|
|Form descriptions||Parameters for variation of width, depth and cross-sectional area to power and exponential law descriptions, see (Prandle & Rahman, 1980; Prandle, 1985).|
|Estuary number||Indicates degree of stratification (Ippen, 1966).|
|Tidal wavelength (l)||Simplistically this can be obtained using linear wave theory and either the depth at the mouth or the average depth. Ippen (1966) gives a method of computing the wave number and hence the wavelength for a standing wave, including friction. This gives a very rough indication of possible tidal resonance (l/4), but methods using the shape functions are more reliable (Prandle, 1985).|
|Tidal constituent ratios||The diurnal/semi-diurnal ratio indicates the dominant duration of the tidal cycle (McDowell & O'Connor, 1977). An examination of the M4 to M2 ratios (magnitude and phase) also gives a useful measure of the importance of non-linear effects within the estuary (Friedrichs & Aubrey, 1988).|
|Tidal asymmetry||Examination of the duration and magnitude of flood and ebb velocities, together with timing of slack waters provides a useful indication of potential movement of coarse and fine sediments and the type of tidal basin (Dronkers, 1986; Dronkers, 1998).|
|Hydraulic geometry relationships||Measures of form against discharge or tidal prism properties (usually derived from a model and/or measurements) - tidal prism v cross-sectional area (O'Brien, 1931; Gao & Collins, 1994), plan area v volume (Renger & Partenscky, 1974) and hydraulic geometry or regime relationships (Langbein, 1963; Spearman et al., 1998).|
An example of the sort of information that can be extracted is given in Table 3.8. This presents the data for the Humber Estuary in the UK and both measured and derived information are presented in the same table to provide an overview. Where more detailed information on water levels, currents, salinity gradients and suspended sediment loads are available it is possible to elaborate on many of these descriptions. As a simple example, where water levels over a tidal cycle are available at intervals along the estuary, by overlaying them it may be possible to get an immediate impression of how the tidal wave alters as it propagates upstream. If high and low waters occur at about the same time and there is little distortion taking place this is characteristic of a standing wave. Where high waters are delayed in an upstream direction, the characteristic is closer to a progressive wave. Marked amplification and asymmetry further indicate that there are significant shallow water effects. If information on tidal currents is also available this can be used to examine various measures of tidal asymmetry in more detail (see section on asymmetry in Study methods).
|Property||Values for the Humber|
|Lengths||To Trent Falls, 62km; to tidal limit on R Trent, 147km|
|Plan Areas||Catchment area = 23,690 km^2; Flood plain area = 1,100 km^2
Plan area @ HW = 2.8 x 10^8 m^2; @ LW = 1.8 x 10^8 m^2 (*)
Intertidal area = 1 x 10^8 m^2 (*)
Saltmarsh area = 6.3 x 10^6 m^2 (*) (* between Spurn and Trent Falls)
|Cross-sectional areas||CSA @ mouth = 85538 m^2 to mtl|
|Volumes||Total volume @ HW = 2.5 x 10^9 m^3 and @ LW = 1.1 x 10^9 m^3
Tidal prism, springs = 1.5 x 10^9 m^3 and neaps = 0.8 x 10^9 m^3.
|Widths and Depths||Width @ mouth = 6620 m; hydraulic depth @ mouth = 13.2 m
Width @ tidal limit = 52 m; hydraulic depth @ tidal limit = 2.9 m
Average width = 4265 m; average hydraulic depth = 6.5 m
|Form descriptions||Area = 84·exp(6.7·x/l); r^2 = 0.99
Width = 198·exp(3.7·x/l); r^2 = 0.89
Depth = 0.55·exp(3.0·x/l); r^2 = 0.91 (Length, l = 145 km)
|Tidal levels and range||MHWS = 3.0; MHWN = 1.6; MLWN = -1.2; MLWS = -2.8
(all levels metres Ordnance Datum Newlyn at Bull Sand Fort)
|Freshwater flows||Average flow = 240 m^3/s; High flow = 1600 m^3/s|
|Estuary number||Average fresh water flow, springs = 3.43; neaps = 0.75
High fresh water flow, springs = 0.52; neaps = 0.11
(cf E > 0.09 indicates progressively well mixed conditions, (Ippen, 1966)
|Tidal wavelength||Using linear theory (i) with depth at mouth, λ = 500 km; (ii) with average depth, λ = 350 km|
|Tidal constituent ratios||F = 0.06, i.e. the tide is semi-diurnal (F of order 0.1 indicates semi-diurnal, of order 1 or more increasingly diurnal)
M4/M2 amplitude = 0.003; 2M2-M4 phase = 223 at mouth
M4/M2 amplitude = 0.25; 2M2-M4 phase = 52 at Burton Stather on R. Trent.
(i.e. significant sea surface distortion and ebb dominance at the mouth changing to flood dominance upstream)
|Tidal asymmetry||Dronkers gamma -1 = -0.05; 0.13; 1.51
Net excursion* = -1.35; -10.35; -0.9 km
Net slack duration+ = 0.18; 0.22; 0 hrs
Values are for Spurn, Hull and Trent Falls. Positive values indicate flood dominance.
|Indicates dominance for:
- tidal equilibrium
- coarse sediment
- fine sediment
* >0.9m/s; + <0.2m/s
|Hydraulic geometry relationships||CSA/tidal prism = 5.7 x 10^-5 m^-1 (springs) and 1 x 10^-4 m^-1 (neaps)
LW volume/(HW plan area)^2 = 1.32 x 10^-8 m^-1
LW plan area/(HW plan area)^1.5 = 3.8 x 10^-5 m^-1
Discharge exponents: Mean velocity, m = 0.1 (r^2 = 0.39)
Width, b = 0.48 (r^2 = 0.85)
Mean depth, f = 0.41 (r^2 = 0.91)
Energy slope, z = -0.2 (r^2 = 0.89)
|Geology||See Jones (1988); BGS (1982), and Humber holocene chronology - PDF 871KB (ABPmer, 2004)|
|Geomorphology||See McQuillin et al. (1969)|
|Sedimentology||See McQuillin et al. (1969)|
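Several of the derived properties in Table 3.8 can be reproduced with very simple arithmetic. The short Python sketch below checks the linear-theory tidal wavelength and the cross-sectional-area-to-tidal-prism ratio using the Humber values quoted above; it assumes an M2 tidal period of 12.42 hours and the shallow-water wave speed c = sqrt(g·h), which is the "linear theory" referred to in Table 3.7.

```python
import math

g = 9.81                 # gravitational acceleration, m/s^2
T = 12.42 * 3600.0       # M2 tidal period in seconds (assumed value)

def tidal_wavelength(depth_m):
    """Shallow-water (linear theory) wavelength: lambda = T * sqrt(g * h)."""
    return T * math.sqrt(g * depth_m)

# Humber values from Table 3.8
print(tidal_wavelength(13.2) / 1000)   # depth at mouth -> ~509 km (table: ~500 km)
print(tidal_wavelength(6.5) / 1000)    # average depth  -> ~357 km (table: ~350 km)

# O'Brien-type ratio: cross-sectional area at the mouth over the spring tidal prism
csa_mouth = 85538.0        # m^2 (to mean tide level)
prism_springs = 1.5e9      # m^3
print(csa_mouth / prism_springs)       # -> ~5.7e-5 m^-1, as quoted in the table
```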
Whilst classification, as discussed above, says something of the origin of an estuary, and the characterisation sets out some important properties, neither fully explains the observed form. For this we need to consider what the purpose, or function, of an estuary is, and how this influences the ensuing form. Fairbridge (1980) defines an estuary as “an inlet of the sea reaching into a river valley as far as the upper limit of tidal rise”. In general, there is no forcing that creates an inlet from seaward. Rather, antecedent conditions give rise to low-lying land within a valley that, as sea level rises, can be flooded from seaward. Such valleys can be formed by a number of different processes, as indicated in Table 3.1. Alternatively, river flows cut a channel to the sea which, as the channel grows in size, or as the river deposits sediments to form a tidal delta, progressively allows greater penetration from the sea.
For inlets and estuaries around the world, the tide and river flows can each vary from being dominant to non-existent. Hence there is a spectrum that encompasses the full range of possible interactions. The function of the estuary is to accommodate this interaction and in doing so its form must reflect this role. Of course superimposed on this are other functions, more typical of any water-land interface, where, for instance, the action of waves and current give rise to characteristic forms for the beaches or banks.
The inflow from rivers and the movement of tides give rise to sediment transport in and out of the estuary. Thus the estuary functions as an open system with exchanges of energy, water and sediment with the surrounding systems (catchment and open sea). However in transporting water and sediment there are inevitable losses of energy due to dissipative processes, such as friction and heat loss. As such, this type of open system continuously expends energy and the ceaseless flow of energy gives rise to a dynamic ‘steady state’, taking the place of equilibrium (or steady state) (Thompson, 1961). The various forms of equilibria are illustrated in Figure 3.4. This dynamic steady state is itself transient simply because the inputs from external environments change and the constraints on the system also change as the estuary develops (e.g. the solid geology may vary).
The time taken to respond to a given perturbation will also vary for individual features and this will tend to introduce lags into the system. Consequently, it is more probable that the system will be in transition, moving towards a steady state, rather than in a steady-state condition.
In other words, it is a juxtaposition of:
- the behaviour of the component parts, each seeking their own target state, but subject to changes in linked components; and
- the mix of space and time scales.
Thus, when discussing system states (of whatever form: equilibrium, steady, dynamic, quasi, etc), there is a need to be very clear about which parts of the system are involved and over what timescale the state is determined.
From this, it can be concluded that the function of the estuary is to accommodate an energy exchange by redistributing water and sediment. At any instant, the prevailing conditions may generate a “target” steady state and the estuary will seek to adjust to achieve this. As the estuary is observed, it may, therefore, be fluctuating about its steady state, or transiting towards the steady state and fluctuating as it does so. Superimposed on this behaviour, will be the changes in input conditions and constraints, which together may:
- alter the rate of transition towards a given state,
- cause a switch to different but similar (in form) target state, or
- cause a switch to a different state altogether.
In the context of the entire estuary, the system is searching for an optimum state. To move to a different state would require a major perturbation of the surrounding landform, such as an earthquake, volcanic eruption or glaciation. So it can happen but is very rare. For most purposes and timescales of interest, the estuary form, as a whole, can be considered to be stable. The system will simply continue to adjust its form in response to changes in energy inputs and constraints. These changes are generally internal to the estuary system and as a consequence it is the internal features that exhibit the range of responses outlined as the system searches for an optimum steady state. Thus a progressive increase in river flows might cause channels to enlarge (a transition), whereas a major flood event might cause channel switching (a switch in position), or a switch from meandering to braided channels (a switch in form). The first two are essentially changes in the given state. In contrast, the switch in form is an example of moving to a different state, for the channel feature, but not the estuary as a whole.
This provides a basis for thinking about the overall condition of the system and how specific features within it may behave. However the interaction of processes and form remains something of a conundrum. Although the size and shape of an estuarine channel is a response to tidal processes, it is nevertheless apparent that tidal discharge is itself dependent on the morphology of the estuarine channel since this determines the overall tidal prism. Entering this loop requires some of the constraints in the system to be identified.
One of the principal constraints that defines the size and shape of an estuary must be the antecedent form of the catchment basin, as already discussed. This determines the tidal length of the estuary, a characteristic dimension, which is dependent on the macro-scale slope of the coastal plain, fluvial discharge, and the tidal range in the nearshore zone. The tidal range and morphology within the estuary is then a response to these independent factors.
In some specific cases, further constraints to the closed cause-effect system are present. These constraints may be geological features such as sills, moraines or changes in geological strata, which limit or control how the estuary can adjust. Equally, anthropogenic limits to width or depth, such as urban areas or harbour facilities, can constrain how an estuary responds to changing conditions.
In each case, the identification of such constraints can be seen as a method of reducing the number of degrees of freedom and providing a point of entry into the cause-effect feedback loop. It is for this reason that developments within an estuary need to be considered in terms of both their local and estuary wide impacts (Pontee & Townend, 1999). Consequently, when examining development proposals or new activities, it is essential that the local and estuary wide implications be taken into account.
Given the extensive developments that have taken place in and around estuaries, there are now moves to restore the natural system or to “design” a natural estuary. An estuary forms a mix of habitats depending on the prevailing constraints. If the constraints are changed, then the estuary will adjust to establish a new dynamic equilibrium, consistent with the existing and new constraints. In order to “design” a natural estuary, the first stage would involve setting out societal preferences for particular types of habitat, as a basis for determining what constraints should be adjusted. This then runs the risk of establishing an even more artificial situation simply to meet societal preferences. A preferable approach is, therefore, to take advantage of any opportunity that will increase the room in which the estuary can move, to respond to such things as sea level rise, by removing unnecessary constraints.
| http://www.estuary-guide.net/guide/chapter3_estuary_setting.asp | 13
54 | Point, Line Segment, Square, Cube. But what comes next, what is the equivalent object in higher dimensions? Well it is called a hypercube or n-cube, although the 4-cube has the special name tesseract.
Before I go on to explain about the elements of hypercubes, let me show you some pictures of some hypercubes. I guess this also raises the question of how you can construct these objects. One method is to start with a point. Then stretch it out in one dimension to get a line segment. Then take this line and stretch it out in another dimension perpendicular to the previous one, to get a square. Then take that square and stretch it out in another dimension perpendicular to the previous two to get a cube. This is when your visualisation may hit a wall. It’s very hard to then visualise taking this cube and stretching it in another dimension perpendicular to the previous three. However, mathematically this is easy, and it is one approach to constructing hypercubes.
However, there is a more mathematical and analytical method. You most probably know that these n-cubes have certain elements to them, namely vertices (points), edges (lines), faces (planes), and then in the next dimension up, cells and then in general n-faces. These elements are summed up nicely here. Firstly we take a field, say {0, 1} (the field of two elements). Next we construct the vertices of the n-cube. Basically we are taking all the n-dimensional vectors which have all the combinations of 0’s and 1’s for each entry of the vector. More mathematically,
There is a vertex described by each vector a = (a_1, a_2, …, a_n) where each a_i ∈ {0, 1}.
There is an edge between vertices a and b if and only if a_i ≠ b_i for exactly one i.
There is an m-face between (or through) vertices a and b and … and c if and only if their coordinates differ for exactly m indices i (so the 2^m vertices of the face agree on the other n − m coordinates).
Basically this means we list the vertices just as if we were counting in base 2. And then we can group these vertices into different groups based on the n-face level and (if we think of the vertices as bit strings) how many bits we have to change to make two vertices’ bit strings the same. This approach is very interesting because the concept of grouping these vertices relates strongly to hypergraphs.
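As a quick illustration of this bit-string view (my own sketch, not code from the original post), the following Python snippet lists the vertices of an n-cube and pairs up the edges by checking that exactly one coordinate differs.

```python
from itertools import product

def ncube_vertices(n):
    """All 2^n vertices of the n-cube as tuples of 0s and 1s (counting in base 2)."""
    return list(product((0, 1), repeat=n))

def ncube_edges(n):
    """Edges join vertices whose coordinates differ in exactly one position."""
    verts = ncube_vertices(n)
    return [(a, b) for i, a in enumerate(verts) for b in verts[i + 1:]
            if sum(x != y for x, y in zip(a, b)) == 1]

print(len(ncube_vertices(4)))  # 16 vertices of the tesseract
print(len(ncube_edges(4)))     # 32 edges
```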
Another way to think about it is as follows. Edges, from the set of all possible edges (i.e. segments joining each vertex with every other vertex), are the ones that are parallel to one of the standard basis vectors. This generalises to n-faces: from the set of all candidate n-faces (i.e. all ways of grouping vertices into sets of 2^n), the true n-faces are those for which the object constructed is parallel to the span of some set of n of the standard basis vectors.
When you think about it, a lot of things that you can say about the square or cube generalise. For instance you can think of a square being surrounded by 4 lines, a cube by 6 surfaces, a tesseract by 8 cells, etc.
Now that we have some idea how to describe and build n-cubes, the next question is how do we draw them. There are numerous methods and I can’t explain them all in this post (such as slicing and stereographic projection, as well as other forms of projection (I’ll leave these for another blog article)). But another question is also what aspects do we draw and how do we highlight them. For instance it may seem trivial in two dimensions to ask do I place a dot at each vertex and use just 4 solid lines for the edges. But in higher dimensions we have to think about how do we show the different cells and n-faces.
Firstly, how can we draw or project these n-dimensional objects in a lower dimensional world (ultimately we want to see them in 2D or 3D, as this is the only space we can draw in)? This first method is basically the exact same approach that most people would have first learnt back in primary school, although I do not think it makes the most sense or makes visualisation easiest. Basically this method is just the “take a dot and perform a series of stretches on it” approach that I described earlier, although most people wouldn’t think this is what they were doing. Nor would we usually start with a dot; we would normally start with the square. Although we will, so we start with this.
We would now draw a line along some axis from that dot, and place another dot at the end of this line.
Now from each of the dots we have, we would draw another line along some other axis and again draw a dot at the end of each of those two lines. We would then connect the newly formed dots.
Now, we just keep repeating this process, whereby each time we are drawing another dimension. So we take each of these four dots and draw lines from them in the direction of another axis, placing a dot at the end of each of these lines, and joining each of the dots that came from other dots that were adjacent, with a line.
Now for 4D and beyond we basically keep the process going, choosing the direction of each new axis more or less anywhere, so long as it passes through the origin.
If we do a little bit of work we can see that this map sends

(x, y, z, w) → (x + r1·cos(α)·z − r2·cos(β)·w, y + r1·sin(α)·z + r2·sin(β)·w),

i.e. it is given by the 2×4 matrix whose columns are (1, 0), (0, 1), r1·(cos α, sin α) and r2·(−cos β, sin β) (up to the sign convention for which way the receding axes are drawn), where α is the angle of the projected z axis from the x axis, and β is the angle of the projected w axis from the negative x axis. Also r1 and r2 are the scales of the third and fourth receding axes respectively (it makes it “look” more realistic when we use a number less than 1). This is just an extension of the usual oblique projection from 3D to 2D.
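A small Python sketch of this oblique-style projection follows. The particular angles and scale factors are arbitrary choices for illustration, and the sign convention for the w axis simply follows the description above (an assumption, since the original figure is not reproduced here).

```python
import math
from itertools import product

def oblique_4d_to_2d(point, alpha=math.radians(30), beta=math.radians(30),
                     r1=0.5, r2=0.5):
    """Project (x, y, z, w) to the plane: x and y map to themselves, while z and w
    map to scaled directions at angles alpha (from +x) and beta (from -x)."""
    x, y, z, w = point
    u = x + r1 * math.cos(alpha) * z - r2 * math.cos(beta) * w
    v = y + r1 * math.sin(alpha) * z + r2 * math.sin(beta) * w
    return (u, v)

# Project all 16 vertices of the unit tesseract.
for vertex in product((0, 1), repeat=4):
    print(vertex, "->", tuple(round(c, 3) for c in oblique_4d_to_2d(vertex)))
```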
Now this method seems very primitive, and a much better approach is to use all the dimensions we have. We live in a three dimensional world, so why just constrict our drawings to two dimensions! Basically, an alternate approach to draw an n-cube in three dimensional space would be to draw n lines all passing through a single point. Although it is not necessary to make all these lines as spread out as possible, we will try to. (This actually presents another interesting idea of how do we equally distribute n points on a sphere. For instance we can try to make it so that all the angles between any two of the points and the origin are equal. But I will leave this for another blog article later.) We then treat each of these lines as one dimension, and from there we can easily draw, or at least represent, an n-dimensional point in 3D space. Now obviously we can have two different points in 4D that map to the same 3D point, but that is always going to happen no matter what map we use. The following set of 4 vectors are the projected axes we will use as a basis.
Now I won’t say how I got these (actually I took them from Wikipedia, they are just the vertices of a 3-simplex) but all of the vectors share a common angle between any two and the origin.
Now if we draw in our tesseract, highlighting the cells with different colours (note this becomes problematic with some faces and edges, as they are a common boundary of two different cells, so you cannot really make them one colour or the other) we get something like this,
The projection matrix for this projection is then simply the 3×4 matrix whose columns are the four basis vectors above (the vectors that each of the standard basis vectors map to).
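To make the 3D approach concrete, here is a hedged Python sketch. The four direction vectors below are one common choice of regular-tetrahedron vertices, used purely as a stand-in for the vectors from the original post (which are not reproduced above); any four directions with equal pairwise angles would do.

```python
import numpy as np
from itertools import product

# Assumed basis: vertices of a regular tetrahedron, normalised to unit length.
directions = np.array([[ 1,  1,  1],
                       [ 1, -1, -1],
                       [-1,  1, -1],
                       [-1, -1,  1]], dtype=float)
directions /= np.linalg.norm(directions, axis=1, keepdims=True)

# 3x4 projection matrix whose columns are the four direction vectors.
P = directions.T

vertices_4d = np.array(list(product((0, 1), repeat=4)), dtype=float)
vertices_3d = vertices_4d @ P.T          # project every tesseract vertex to 3D

print(vertices_3d.shape)                 # (16, 3)
print(np.round(vertices_3d[:4], 3))      # first few projected vertices
```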
Now if we compare this to our original drawing (note I’m not talking about the projection used, but rather the presentation of the drawings, i.e. the colour), I think you will see that the second one is clearer and tries to show where the cells and faces are, not just the vertices and edges. Note also the second one is in 3D so you can rotate around it. Looking at the first one though, you will notice it doesn’t show where the faces or cells are. Remember that we have more than just vertices, edges and faces. We have cells, and n-faces. These are essentially just different groupings of the vertices. But how can we show these? Now the most mathematical way would be to just list all the different groupings. This is okay, but I like to see things in a visual sense. So another way would be to just show the different elements separately. Like you draw all the vertices on one overhead, edges on another, and so on. Then when you put all these overheads on top of each other we get the full image, but we can also look at just one at a time to see things more clearly. This would be particularly useful for the higher dimensional objects and higher dimensional elements. We can also use different colours to show the different elements. For example in the square, we can see that the line surrounding it is 4 lines, but in higher dimensions it’s not so easy, so we can colour the different parts of the element differently. (When I say part I mean the 4 edges of a square are 4 different parts, whereas the edges are all one element, but are a different element to the vertices.)
Some Interesting Properties
Once you start defining hypercubes there are many interesting properties that we can investigate. For this section let’s just assume that we have the standard hypercube of side length 1. Now we can trivially see that the area, volume, etc. for the respective hypercube will always be 1. As described above, each time we add another dimension and sweep the object out into that dimension we effectively multiply this hypervolume by 1. So for an n-cube, the hypervolume of it will be 1^n = 1. When I say hypervolume I mean the one that makes sense for that dimension. E.g. in 2D, area, in 3D, volume, and so on.
The next obvious question to ask is what is the perimeter, surface area, cell volume, …, n-face hypervolume of the respective n-cube? It gets a little confusing as you have to think about what exactly you are finding. Is it a length, an area, a volume? Well it will just be an (n − 1)-dimensional volume. E.g. in 2D we are finding a length (the perimeter), in 3D, an area (surface area), and so on, so that each time we increase the dimension of the n-cube we increase the units we are measuring in. Well if we just start listing the sequence (starting with a square), 4, 6… we notice this is just the number of (n − 1)-dimensional elements, namely the number of edges, faces, cells, etc.
This leads me to the obvious question of how I can calculate the number of m-elements of the n-cube.
Well instead of me just going to the formula, which you can find on Wikipedia anyway, I will go through my lines of thinking when I first tried to work this out. The number of vertices is easy: each component of the n-vector can be either a 0 or a 1. So for each component there are 2 possibilities, but we have n of them, so it is just 2×2×2… n times, or 2^n. Now originally when I tried to work out the number of edges, I started listing them and saw that I could construct the recurrence… Although with the help of graph theory it is very simple. In graph theory the handshaking theorem says Σ_v deg(v) = 2|E|, where |E| means the number of edges, and deg(v) means the degree of vertex v, which means the number of edges connected to it. Now if we think of an edge being a group of two vertices where you only make one entry of the vector change to get from one vector to the other, then we can see that there are exactly n ways of doing this. We can either change the 1st entry of the vector, or the 2nd, or the …, or the nth. Thus each vertex of the n-cube graph will have degree n. So as we have 2^n vertices and each vertex has degree n, then the sum of the vertex degrees will be n·2^n. Hence by the handshaking theorem, the number of edges is |E| = n·2^n / 2 = n·2^(n−1). I am not exactly sure how to generalise this further. I will leave it for another article. However, the formula is E(m, n) = 2^(n−m) · C(n, m), the number of m-faces of an n-cube (where C(n, m) is the binomial coefficient).
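The closed-form count can be checked against known values. In the sketch below (my own illustration, not from the original post), an m-face is identified with a choice of m "free" coordinates plus fixed 0/1 values for the remaining n − m coordinates, which gives the 2^(n−m) · C(n, m) formula directly.

```python
from math import comb

def num_m_faces(n, m):
    """Number of m-dimensional faces of the n-cube: choose the m free axes,
    then fix each of the other n - m coordinates to 0 or 1."""
    return 2 ** (n - m) * comb(n, m)

# Square: 4 vertices, 4 edges, 1 face. Cube: 8, 12, 6, 1. Tesseract: 16, 32, 24, 8, 1.
for n in (2, 3, 4):
    print(n, [num_m_faces(n, m) for m in range(n + 1)])
```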
(I shall try to write more at a later date.)
- Hypercube. (2008, October 14). In Wikipedia, The Free Encyclopedia. Retrieved 08:19, October 21, 2008, from http://en.wikipedia.org/w/index.php?title=Hypercube&oldid=245242295
- Tran, T., & Britz, T. (2008). MATH1081 Discrete Mathematics: 5 Graph Theory. | http://andrewharvey4.wordpress.com/tag/graph-therory/ | 13
93 |
In Big Bang cosmology, the observable universe consists of the galaxies and other matter that can, in principle, be observed from Earth in the present day—because light (or other signals) from those objects has had time to reach the Earth since the beginning of the cosmological expansion. Assuming the universe is isotropic, the distance to the edge of the observable universe is roughly the same in every direction. That is, the observable universe is a spherical volume (a ball) centered on the observer, regardless of the shape of the universe as a whole. Every location in the universe has its own observable universe, which may or may not overlap with the one centered on Earth.
The word observable used in this sense does not depend on whether modern technology actually permits detection of radiation from an object in this region (or indeed on whether there is any radiation to detect). It simply indicates that it is possible in principle for light or other signals from the object to reach an observer on Earth. In practice, we can see light only from as far back as the time of photon decoupling in the recombination epoch. That is when particles were first able to emit photons that were not quickly re-absorbed by other particles. Before then, the universe was filled with a plasma that was opaque to photons.
The surface of last scattering is the collection of points in space at the exact distance that photons from the time of photon decoupling just reach us today. These are the photons we detect today as cosmic microwave background radiation (CMBR). However, it may be possible in the future to observe the still older neutrino background, or even more distant events via gravitational waves (which also should move at the speed of light). Sometimes astrophysicists distinguish between the visible universe, which includes only signals emitted since recombination—and the observable universe, which includes signals since the beginning of the cosmological expansion (the Big Bang in traditional cosmology, the end of the inflationary epoch in modern cosmology). According to calculations, the comoving distance (current proper distance) to particles from the CMBR, which represents the radius of the visible universe, is about 14.0 billion parsecs (about 45.7 billion light years), while the comoving distance to the edge of the observable universe is about 14.3 billion parsecs (about 46.6 billion light years), about 2% larger.
The best estimate of the age of the universe as of 2013 is 13.798 ± 0.037 billion years but due to the expansion of space humans are observing objects that were originally much closer but are now considerably farther away (as defined in terms of cosmological proper distance, which is equal to the comoving distance at the present time) than a static 13.8 billion light-years distance. The diameter of the observable universe is estimated at about 28 billion parsecs (93 billion light-years), putting the edge of the observable universe at about 46–47 billion light-years away.
The universe versus the observable universe
Some parts of the universe may simply be too far away for the light emitted from there at any moment since the Big Bang to have had enough time to reach Earth at present, so these portions of the universe would currently lie outside the observable universe. In the future the light from distant galaxies will have had more time to travel, so some regions not currently observable will become observable in the future. However, due to Hubble's law regions sufficiently distant from us are expanding away from us much faster than the speed of light (special relativity prevents nearby objects in the same local region from moving faster than the speed of light with respect to each other, but there is no such constraint for distant objects when the space between them is expanding; see uses of the proper distance for a discussion), and the expansion rate appears to be accelerating due to dark energy. Assuming dark energy remains constant (an unchanging cosmological constant), so that the expansion rate of the universe continues to accelerate, there is a "future visibility limit" beyond which objects will never enter our observable universe at any time in the infinite future, because light emitted by objects outside that limit would never reach us. (A subtlety is that, because the Hubble parameter is decreasing with time, there can be cases where a galaxy that is receding from us just a bit faster than light does emit a signal that reaches us eventually). This future visibility limit is calculated at a comoving distance of 19 billion parsecs (62 billion light years) assuming the universe will keep expanding forever, which implies the number of galaxies that we can ever theoretically observe in the infinite future (leaving aside the issue that some may be impossible to observe in practice due to redshift, as discussed in the following paragraph) is only larger than the number currently observable by a factor of 2.36.
Though in principle more galaxies will become observable in the future, in practice an increasing number of galaxies will become extremely redshifted due to ongoing expansion, so much so that they will seem to disappear from view and become invisible. An additional subtlety is that a galaxy at a given comoving distance is defined to lie within the "observable universe" if we can receive signals emitted by the galaxy at any age in its past history (say, a signal sent from the galaxy only 500 million years after the Big Bang), but because of the universe's expansion, there may be some later age at which a signal sent from the same galaxy can never reach us at any point in the infinite future (so for example we might never see what the galaxy looked like 10 billion years after the Big Bang), even though it remains at the same comoving distance (comoving distance is defined to be constant with time—unlike proper distance, which is used to define recession velocity due to the expansion of space), which is less than the comoving radius of the observable universe. This fact can be used to define a type of cosmic event horizon whose distance from us changes over time. For example, the current distance to this horizon is about 16 billion light years, meaning that a signal from an event happening at present can eventually reach us in the future if the event is less than 16 billion light years away, but the signal will never reach us if the event is more than 16 billion light years away.
Both popular and professional research articles in cosmology often use the term "universe" to mean "observable universe". This can be justified on the grounds that we can never know anything by direct experimentation about any part of the universe that is causally disconnected from us, although many credible theories require a total universe much larger than the observable universe. No evidence exists to suggest that the boundary of the observable universe constitutes a boundary on the universe as a whole, nor do any of the mainstream cosmological models propose that the universe has any physical boundary in the first place, though some models propose it could be finite but unbounded, like a higher-dimensional analogue of the 2D surface of a sphere that is finite in area but has no edge. It is plausible that the galaxies within our observable universe represent only a minuscule fraction of the galaxies in the universe. According to the theory of cosmic inflation and its founder, Alan Guth, if it is assumed that inflation began about 10^−37 seconds after the Big Bang, then with the plausible assumption that the size of the universe at this time was approximately equal to the speed of light times its age, that would suggest that at present the entire universe's size is at least 10^23 times larger than the size of the observable universe.
If the universe is finite but unbounded, it is also possible that the universe is smaller than the observable universe. In this case, what we take to be very distant galaxies may actually be duplicate images of nearby galaxies, formed by light that has circumnavigated the universe. It is difficult to test this hypothesis experimentally because different images of a galaxy would show different eras in its history, and consequently might appear quite different. Bielewicz et al. claim to establish a lower bound of 27.9 gigaparsecs (91 billion light-years) on the diameter of the last scattering surface (since this is only a lower bound, the paper leaves open the possibility that the whole universe is much larger, even infinite). This value is based on matching-circle analysis of the WMAP 7-year data. This approach has been disputed.
The comoving distance from Earth to the edge of the observable universe is about 14 gigaparsecs (46 billion light years or 4.3×10^26 meters) in any direction. The observable universe is thus a sphere with a diameter of about 29 gigaparsecs (93 Gly or 8.8×10^26 m). Assuming that space is roughly flat, this size corresponds to a comoving volume of about 1.3×10^4 Gpc^3 (4.1×10^5 Gly^3 or 3.5×10^80 m^3).
The figures quoted above are distances now (in cosmological time), not distances at the time the light was emitted. For example, the cosmic microwave background radiation that we see right now was emitted at the time of photon decoupling, estimated to have occurred about 380,000 years after the Big Bang, which occurred around 13.8 billion years ago. This radiation was emitted by matter that has, in the intervening time, mostly condensed into galaxies, and those galaxies are now calculated to be about 46 billion light-years from us. To estimate the distance to that matter at the time the light was emitted, we may first note that according to the Friedmann–Lemaître–Robertson–Walker metric, which is used to model the expanding universe, if at the present time we receive light with a redshift of z, then the scale factor at the time the light was originally emitted is given by the following equation.

a = 1 / (1 + z)
WMAP nine-year results give the redshift of photon decoupling as z=1091.64 ± 0.47 which implies that the scale factor at the time of photon decoupling would be 1⁄1092.64. So if the matter that originally emitted the oldest CMBR photons has a present distance of 46 billion light years, then at the time of decoupling when the photons were originally emitted, the distance would have been only about 42 million light-years away.
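The arithmetic in this paragraph is easy to reproduce. The short Python sketch below is an illustration added here (not part of the original article); it applies the relation a = 1/(1 + z) to convert the present comoving distance of the CMB-emitting matter into its proper distance at the time of emission.

```python
z_decoupling = 1091.64          # redshift of photon decoupling (WMAP nine-year value)
comoving_distance_gly = 46.0    # present distance to the CMB-emitting matter, in Gly

scale_factor = 1.0 / (1.0 + z_decoupling)          # ~1/1092.64
distance_at_emission_gly = comoving_distance_gly * scale_factor

print(round(scale_factor, 6))                      # ~0.000915
print(round(distance_at_emission_gly * 1000, 1))   # ~42.1 million light-years
```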
Many secondary sources have reported a wide variety of incorrect figures for the size of the visible universe. Some of these figures are listed below, with brief descriptions of possible reasons for misconceptions about them.
- 13.8 billion light-years
- The age of the universe is estimated to be 13.8 billion years. While it is commonly understood that nothing can accelerate to velocities equal to or greater than that of light, it is a common misconception that the radius of the observable universe must therefore amount to only 13.8 billion light-years. This reasoning would only make sense if the flat, static Minkowski spacetime conception under special relativity were correct. In the real universe, spacetime is curved in a way that corresponds to the expansion of space, as evidenced by Hubble's law. Distances obtained as the speed of light multiplied by a cosmological time interval have no direct physical significance.
- 15.8 billion light-years
- This is obtained in the same way as the 13.8 billion light year figure, but starting from an incorrect age of the universe that the popular press reported in mid-2006. For an analysis of this claim and the paper that prompted it, see the following reference at the end of this article.
- 27.6 billion light-years
- This is a diameter obtained from the (incorrect) radius of 13.8 billion light-years.
- 78 billion light-years
- In 2003, Cornish et al. found this lower bound for the diameter of the whole universe (not just the observable part), if we postulate that the universe is finite in size due to its having a nontrivial topology, with this lower bound based on the estimated current distance between points that we can see on opposite sides of the cosmic microwave background radiation (CMBR). If the whole universe is smaller than this sphere, then light has had time to circumnavigate it since the big bang, producing multiple images of distant points in the CMBR, which would show up as patterns of repeating circles. Cornish et al. looked for such an effect at scales of up to 24 gigaparsecs (78 Gly or 7.4×10²⁶ m) and failed to find it, and suggested that if they could extend their search to all possible orientations, they would then "be able to exclude the possibility that we live in a universe smaller than 24 Gpc in diameter". The authors also estimated that with "lower noise and higher resolution CMB maps (from WMAP's extended mission and from Planck), we will be able to search for smaller circles and extend the limit to ~28 Gpc." This estimate of the maximum lower bound that can be established by future observations corresponds to a radius of 14 gigaparsecs, or around 46 billion light years, about the same as the figure for the radius of the visible universe (whose radius is defined by the CMBR sphere) given in the opening section. A 2012 preprint by most of the same authors as the Cornish et al. paper has extended the current lower bound to a diameter of 98.5% the diameter of the CMBR sphere, or about 26 Gpc.
- 156 billion light-years
- This figure was obtained by doubling 78 billion light-years on the assumption that it is a radius. Since 78 billion light-years is already a diameter (the original paper by Cornish et al. says, "By extending the search to all possible orientations, we will be able to exclude the possibility that we live in a universe smaller than 24 Gpc in diameter," and 24 Gpc is 78 billion light years), the doubled figure is incorrect. This figure was very widely reported. A press release from Montana State University – Bozeman, where Cornish works as an astrophysicist, noted the error when discussing a story that had appeared in Discover magazine, saying "Discover mistakenly reported that the universe was 156 billion light-years wide, thinking that 78 billion was the radius of the universe instead of its diameter."
- 180 billion light-years
- This estimate accompanied the age estimate of 15.8 billion years in some sources; it was obtained by adding 15% to the figure of 156 billion light years.
Large-scale structure
Sky surveys and mappings of the various wavelength bands of electromagnetic radiation (in particular 21-cm emission) have yielded much information on the content and character of the universe's structure. The organization of structure appears to follow a hierarchical model with organization up to the scale of superclusters and filaments. On larger scales, there seems to be no continued structure, a phenomenon that has been referred to as the End of Greatness.
Walls, filaments, and voids
The organization of structure arguably begins at the stellar level, though most cosmologists rarely address astrophysics on that scale. Stars are organized into galaxies, which in turn form clusters of galaxies and superclusters that are separated by immense voids, creating a vast foam-like structure sometimes called the "cosmic web". Prior to 1989, it was commonly assumed that virialized galaxy clusters were the largest structures in existence, and that they were distributed more or less uniformly throughout the universe in every direction. However, based on redshift survey data, in 1989 Margaret Geller and John Huchra discovered the "Great Wall", a sheet of galaxies more than 500 million light-years long and 200 million wide, but only 15 million light-years thick. The existence of this structure escaped notice for so long because it requires locating the position of galaxies in three dimensions, which involves combining location information about the galaxies with distance information from redshifts. In April 2003, another large-scale structure was discovered, the Sloan Great Wall. In August 2007, a possible supervoid was detected in the constellation Eridanus. It coincides with the 'WMAP Cold Spot', a cold region in the microwave sky that is highly improbable under the currently favored cosmological model. This supervoid could cause the cold spot, but to do so it would have to be improbably big, possibly a billion light-years across.
Another large-scale structure is the Newfound Blob, a collection of galaxies and enormous gas bubbles that measures about 200 million light years across.
In recent studies the universe appears as a collection of giant bubble-like voids separated by sheets and filaments of galaxies, with the superclusters appearing as occasional relatively dense nodes. This network is clearly visible in the 2dF Galaxy Redshift Survey. In the figure, a three dimensional reconstruction of the inner parts of the survey is shown, revealing an impressive view of the cosmic structures in the nearby universe. Several superclusters stand out, such as the Sloan Great Wall, the largest wall known to date.
End of Greatness
The End of Greatness is an observational scale discovered at roughly 100 Mpc (roughly 300 million light-years) where the lumpiness seen in the large-scale structure of the universe is homogenized and isotropized in accordance with the Cosmological Principle. At this scale, no pseudo-random fractalness is apparent. The superclusters and filaments seen in smaller surveys are randomized to the extent that the smooth distribution of the universe is visually apparent. It was not until the redshift surveys of the 1990s were completed that this scale could accurately be observed.
Another indicator of large-scale structure is the 'Lyman alpha forest'. This is a collection of absorption lines that appear in the spectral lines of light from quasars, which are interpreted as indicating the existence of huge thin sheets of intergalactic (mostly hydrogen) gas. These sheets appear to be associated with the formation of new galaxies.
Caution is required in describing structures on a cosmic scale because things are often different from how they appear. Bending of light by gravitation (gravitational lensing) can make images appear to originate in a different direction from their real source. This is caused when foreground objects (such as galaxies) curve surrounding spacetime (as predicted by general relativity), and deflect passing light rays. Rather usefully, strong gravitational lensing can sometimes magnify distant galaxies, making them easier to detect. Weak lensing (gravitational shear) by the intervening universe in general also subtly changes the observed large-scale structure. In 2004, measurements of this subtle shear showed considerable promise as a test of cosmological models.
The large-scale structure of the universe also looks different if one only uses redshift to measure distances to galaxies. For example, galaxies behind a galaxy cluster are attracted to it, and so fall towards it, and so are slightly blueshifted (compared to how they would be if there were no cluster). On the near side, things are slightly redshifted. Thus, the environment of the cluster looks a bit squashed if redshifts are used to measure distance. An opposite effect works on the galaxies already within the cluster: the galaxies have some random motion around the cluster centre, and when these random motions are converted to redshifts, the cluster appears elongated. This creates a "finger of God": the illusion of a long chain of galaxies pointing at the Earth.
Cosmography of our cosmic neighborhood
At the centre of the Hydra-Centaurus Supercluster, a gravitational anomaly called the Great Attractor affects the motion of galaxies over a region hundreds of millions of light-years across. These galaxies are all redshifted, in accordance with Hubble's law. This indicates that they are receding from us and from each other, but the variations in their redshift are sufficient to reveal the existence of a concentration of mass equivalent to tens of thousands of galaxies.
The Great Attractor, discovered in 1986, lies at a distance of between 150 million and 250 million light-years (250 million is the most recent estimate), in the direction of the Hydra and Centaurus constellations. In its vicinity there is a preponderance of large old galaxies, many of which are colliding with their neighbours, and/or radiating large amounts of radio waves.
In 1987 Astronomer R. Brent Tully of the University of Hawaii’s Institute of Astronomy identified what he called the Pisces-Cetus Supercluster Complex, a structure one billion light years long and 150 million light years across in which, he claimed, the Local Supercluster was embedded.
Matter content
The observable universe contains between 10²² and 10²⁴ stars (between 10 sextillion and 1 septillion stars). To be slightly more precise, according to the Sloan Digital Sky Survey, "[by] a conservative estimate.... the currently observable universe is home to of order 6 × 10²² stars". These stars are organized into roughly 100 to 200 billion galaxies (up to 1 trillion, depending on the source), which themselves form groups, clusters, superclusters, sheets, filaments, and walls.
Two approximate calculations give the number of atoms in the observable universe to be close to 10⁸⁰.
Method 1
Observations of the cosmic microwave background from the Wilkinson Microwave Anisotropy Probe suggest that the spatial curvature of the universe is very close to zero, which in current cosmological models implies that the value of the density parameter must be very close to a certain critical value. A NASA page gives this density, which includes dark energy, dark matter and ordinary matter all lumped together, as 9.9×10⁻²⁷ kg/m³, although the figure has not been updated since 2005 and a number of new estimates of the Hubble parameter have been made since then. The present value of the Hubble parameter is important because it is related to the present value of the critical density, ρc, by the equation ρc = 3H₀² / (8πG), where G is the gravitational constant.
Analysis of the WMAP results suggests that only about 4% of the critical density is in the form of normal atoms, while 22% is thought to be made of cold dark matter and 74% is thought to be dark energy, so if we make the simplifying assumption that all the atoms are hydrogen atoms (which in reality make up about 74% of the mass of all atoms in our galaxy; see Abundance of the chemical elements), each with a mass of about 1.67×10⁻²⁷ kg, this implies about 0.26 atoms/m³. Multiplying this by the volume of the visible universe (with a radius of 14 billion parsecs, the volume would be about 3.38×10⁸⁰ m³) gives an estimate of about 8.8×10⁷⁹ atoms in the visible universe, while multiplying it by the volume of the observable universe (with a radius of 14.3 billion parsecs, the volume would be about 3.60×10⁸⁰ m³) gives an estimate of about 9.4×10⁷⁹ atoms in the observable universe.
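The arithmetic of Method 1 can be written out in a few lines of Python. The baryon fraction used below (4.4%) is an assumption chosen to reproduce the quoted 0.26 atoms/m³; the text itself cites values between roughly 4% and 4.6%.

```python
# Sketch of Method 1: hydrogen number density from the baryon fraction of
# the critical density, then atoms = number density x volume.
critical_density = 9.9e-27        # kg/m^3, NASA figure quoted above
baryon_fraction = 0.044           # assumed ~4% share of the critical density in atoms
m_hydrogen = 1.67e-27             # kg per hydrogen atom

n_atoms_per_m3 = critical_density * baryon_fraction / m_hydrogen
print(f"{n_atoms_per_m3:.2f} atoms per cubic metre")   # ~0.26

for label, volume_m3 in [("visible universe", 3.38e80),
                         ("observable universe", 3.60e80)]:
    print(f"{label}: ~{n_atoms_per_m3 * volume_m3:.1e} atoms")
```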
Method 2
A typical star has a mass of about 2×10³⁰ kg, which is about 1×10⁵⁷ atoms of hydrogen per star. A typical galaxy has about 400 billion stars so that means each galaxy has 1×10⁵⁷ × 4×10¹¹ = 4×10⁶⁸ hydrogen atoms. There are possibly 80 billion galaxies in the universe, so that means that there are about 4×10⁶⁸ × 8×10¹⁰ = 3×10⁷⁹ hydrogen atoms in the observable universe. But this is definitely a lower limit calculation, and it ignores many possible atom sources such as intergalactic gas.
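Method 2 is a pure order-of-magnitude product, sketched below with the figures quoted above:

```python
# Sketch of Method 2: an order-of-magnitude product.
atoms_per_star = 1e57        # ~2e30 kg per star / 1.67e-27 kg per hydrogen atom
stars_per_galaxy = 4e11
galaxies = 8e10

atoms_per_galaxy = atoms_per_star * stars_per_galaxy   # ~4e68
total_atoms = atoms_per_galaxy * galaxies              # ~3e79
print(f"~{total_atoms:.0e} hydrogen atoms (lower limit)")
```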
Some care is required in defining what is meant by the total mass of the observable universe. In relativity, mass and energy are equivalent, and energy can take on a variety of forms, including energy that is associated with the curvature of spacetime itself, not with its contents such as atoms and photons. Defining the total energy of a large region of curved spacetime is problematic because there is no single agreed-upon way to define the energy due to gravity (the energy associated with spacetime curvature); for example, when photons are redshifted due to the expansion of the universe, they lose energy, and some physicists would say the energy has been converted to gravitational energy while others would say the energy has simply been lost. One can, however, derive an order-of-magnitude estimate of the mass due to sources other than gravity, namely visible matter, dark matter and dark energy, based on the volume of the observable universe and the mean density.
Estimation based on critical density
As noted in the previous section, since the universe seems to be close to spatially flat, the density should be close to the critical density, here taken as 9.30×10⁻²⁷ kg/m³. Multiplying this by (A) the estimated volume of the visible universe (3.38×10⁸⁰ m³) gives a total mass for the visible universe of 3.14×10⁵⁴ kg, while multiplying by (B) the estimated volume of the observable universe (3.60×10⁸⁰ m³) gives a total mass for the observable universe of 3.35×10⁵⁴ kg. The WMAP 7-year results estimate that 4.56% of the universe's mass is made up of normal atoms, so this would give an estimate (A) of 1.43×10⁵³ kg, or (B) 1.53×10⁵³ kg, for all the atoms in the observable universe. The fraction of these atoms that make up stars is probably less than 10%.
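A short sketch of this estimate, using the density, volumes, and baryon fraction quoted in this section:

```python
# Mass estimate: total mass = critical density x volume, then the
# baryonic (atomic) share using the WMAP 7-year fraction of 4.56%.
rho_critical = 9.30e-27       # kg/m^3, as used in the text
baryon_fraction = 0.0456

for label, volume_m3 in [("visible universe", 3.38e80),
                         ("observable universe", 3.60e80)]:
    total_mass = rho_critical * volume_m3
    atomic_mass = total_mass * baryon_fraction
    print(f"{label}: total ~{total_mass:.2e} kg, atoms ~{atomic_mass:.2e} kg")
```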
Estimation based on the measured stellar density
One way to calculate the mass of the visible matter that makes up the observable universe is to assume a mean stellar mass and to multiply that by an estimate of the number of stars in the observable universe, as seen in the paper 'On the Expansion of the Universe' from the Mathematical Thinking in Physics section of a former NASA educational site, the Glenn Learning Technologies Project. The paper derives its estimate of the number of stars in the Universe from its value for the volume of the "observable universe".
Note however that this volume is not derived from the 46 billion light year radius given by most authors, but rather from the Hubble volume, which is the volume of a sphere with radius equal to the Hubble length (the distance at which galaxies would currently be receding from us at the speed of light), which the paper gives as 13 billion light years. In any case, the paper combines this volume with an estimate of the average stellar density calculated from observations by the Hubble Space Telescope
- roughly one star per 10⁹ cubic light-years (that is, one star per cube 1,000 light-years on a side)
Taking the mass of Sol (2×10³⁰ kg) as the mean stellar mass (on the basis that the large population of dwarf stars balances out the population of stars whose mass is greater than Sol) and rounding the estimate of the number of stars up to 10²² yields a total mass for all the stars in the observable universe of 3×10⁵² kg. However, aside from the issue that the calculation is based on the Hubble volume, as noted above the WMAP results in combination with the Lambda-CDM model predict that less than 5% of the total mass of the observable universe is made up of baryonic matter (atoms), the rest being made up of dark matter and dark energy, and it is also estimated that less than 10% of baryonic matter consists of stars.
Estimation based on steady-state universe
In a steady-state model, the mass of the observable universe can be taken as the critical density, 3H²/(8πG), times the volume of the Hubble sphere, (4/3)π(c/H)³, which can also be stated as c³/(2GH),
or approximately 8 × 10⁵² kg.
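A numeric check of the c³/(2GH) expression, with an assumed Hubble constant of 70 km/s/Mpc (the quoted ~8 × 10⁵² kg corresponds to a slightly different choice of H):

```python
# Order-of-magnitude check of c^3 / (2 G H0).
c = 2.998e8                   # m/s
G = 6.674e-11                 # m^3 kg^-1 s^-2
H0 = 70 * 1000 / 3.0857e22    # 70 km/s/Mpc converted to 1/s (assumed value)

mass = c**3 / (2 * G * H0)
print(f"~{mass:.1e} kg")      # on the order of 10^53 kg
```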
Most distant objects
The most distant astronomical object yet announced as of January 2011 is a galaxy candidate classified UDFj-39546284. In 2009, a gamma ray burst, GRB 090423, was found to have a redshift of 8.2, which indicates that the collapsing star that caused it exploded when the universe was only 630 million years old. The burst happened approximately 13 billion years ago, so a distance of about 13 billion light years was widely quoted in the media (or sometimes a more precise figure of 13.035 billion light years). That figure, however, is the "light travel distance" (see Distance measures (cosmology)) rather than the "proper distance" used in both Hubble's law and in defining the size of the observable universe; cosmologist Ned Wright argues against the common use of light travel distance in astronomical press releases and offers online calculators that can be used to calculate the current proper distance to a distant object in a flat universe based on either the redshift z or the light travel time. The proper distance for a redshift of 8.2 would be about 9.2 Gpc, or about 30 billion light years. Another record-holder for most distant object is a galaxy observed through and located beyond Abell 2218, also with a light travel distance of approximately 13 billion light years from Earth, with observations from the Hubble telescope indicating a redshift between 6.6 and 7.1, and observations from Keck telescopes indicating a redshift towards the upper end of this range, around 7. The galaxy's light now observable on Earth would have begun to emanate from its source about 750 million years after the Big Bang.
Particle horizon
The particle horizon (also called the cosmological horizon, the light horizon, or the cosmic light horizon) is the maximum distance from which particles could have traveled to the observer in the age of the universe. It represents the boundary between the observable and the unobservable regions of the universe, so its distance at the present epoch defines the size of the observable universe. The existence, properties, and significance of a cosmological horizon depend on the particular cosmological model being discussed.
In terms of comoving distance, the particle horizon is given by χ(t) = c ∫ dt′/a(t′), with the integral taken from t′ = 0 to t′ = t, where a(t) is the scale factor of the Friedmann–Lemaître–Robertson–Walker metric and we have taken the Big Bang to be at t = 0. In other words, the particle horizon recedes constantly as time passes, and the observed fraction of the universe always increases. Since proper distance at a given time is just comoving distance times the scale factor (with comoving distance normally defined to be equal to proper distance at the present time, so a = 1 at present), the proper distance to the particle horizon at time t is d(t) = a(t) χ(t).
The particle horizon differs from the cosmic event horizon, in that the particle horizon represents the largest comoving distance from which light could have reached the observer by a specific time, while the event horizon is the largest comoving distance from which light emitted now can ever reach the observer in the future. At present, this cosmic event horizon is thought to be at a comoving distance of about 16 billion light years. In general, the proper distance to the event horizon at time t is given by the same kind of integral taken into the future rather than the past: d(t) = a(t) c ∫ dt′/a(t′), with the integral taken from t′ = t to t′ = tmax,
where tmax is the time-coordinate of the end of the universe, which would be infinite in the case of a universe that expands forever.
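As a rough illustration, the particle-horizon distance can be evaluated numerically for an assumed flat Lambda-CDM model. The parameter values below (H₀ = 67.7 km/s/Mpc, Ωm = 0.31, Ωr = 9×10⁻⁵) are illustrative assumptions rather than figures taken from this article; the result comes out near the 46 billion light-years quoted earlier.

```python
# Rough numerical evaluation of the particle-horizon distance for an
# assumed flat Lambda-CDM model.
from math import sqrt, inf
from scipy.integrate import quad

H0 = 67.7 * 1000 / 3.0857e22           # Hubble constant in 1/s (assumed)
c = 2.998e8                             # m/s
Om, Orad = 0.31, 9.0e-5                 # assumed matter and radiation densities
Ol = 1.0 - Om - Orad                    # flatness fixes the dark-energy share

def E(z):
    return sqrt(Orad * (1 + z)**4 + Om * (1 + z)**3 + Ol)

# Comoving distance to the particle horizon: c * integral_0^inf dz / H(z)
integral, _ = quad(lambda z: 1.0 / E(z), 0.0, inf)
d_m = c / H0 * integral
print(f"particle horizon ~ {d_m / 9.461e24:.0f} billion light-years")
```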
See also
- Causality (physics)
- Dark flow
- Event horizon of the universe
- Hubble volume
- Orders of magnitude (length)
- Gott III, J. Richard; Mario Jurić, David Schlegel, Fiona Hoyle, Michael Vogeley, Max Tegmark, Neta Bahcall, Jon Brinkmann (2005). "A Map of the Universe". The Astrophysical Journal 624 (2): 463. arXiv:astro-ph/0310571. Bibcode:2005ApJ...624..463G. doi:10.1086/428890.
- Planck collaboration (2013). "Planck 2013 results. XVI. Cosmological parameters". Submitted to Astronomy & Astrophysics. arXiv:1303.5076.
- Davis, Tamara M.; Charles H. Lineweaver (2004). "Expanding Confusion: common misconceptions of cosmological horizons and the superluminal expansion of the universe". Publications of the Astronomical Society of Australia 21 (1): 97. arXiv:astro-ph/0310808. Bibcode:2004PASA...21...97D. doi:10.1071/AS03040.
- Itzhak Bars; John Terning (November 2009). Extra Dimensions in Space and Time. Springer. pp. 27–. ISBN 978-0-387-77637-8. Retrieved 1 May 2011.
- Frequently Asked Questions in Cosmology. Astro.ucla.edu. Retrieved on 2011-05-01.
- Lineweaver, Charles; Tamara M. Davis (2005). "Misconceptions about the Big Bang". Scientific American. Retrieved 2008-11-06.
- Is the universe expanding faster than the speed of light? (see the last two paragraphs)
- Krauss, Lawrence M.; Robert J. Scherrer (2007). "The Return of a Static Universe and the End of Cosmology". General Relativity and Gravitation 39 (10): 1545–1550. arXiv:0704.0221. Bibcode:2007GReGr..39.1545K. doi:10.1007/s10714-007-0472-9.
- Using Tiny Particles To Answer Giant Questions. Science Friday, 3 Apr 2009. According to the transcript, Brian Greene makes the comment "And actually, in the far future, everything we now see, except for our local galaxy and a region of galaxies will have disappeared. The entire universe will disappear before our very eyes, and it's one of my arguments for actually funding cosmology. We've got to do it while we have a chance."
- See also Faster than light#Universal expansion and Future of an expanding universe#Galaxies outside the Local Supercluster are no longer detectable.
- Loeb, Abraham (2002). "The Long-Term Future of Extragalactic Astronomy". Physical Review D 65 (4). arXiv:astro-ph/0107568. Bibcode:2002PhRvD..65d7301L. doi:10.1103/PhysRevD.65.047301.
- Alan H. Guth (17 March 1998). The inflationary universe: the quest for a new theory of cosmic origins. Basic Books. pp. 186–. ISBN 978-0-201-32840-0. Retrieved 1 May 2011.
- Constraints on the Topology of the Universe
- Mota; Reboucas; Tavakol (2010). "Observable circles-in-the-sky in flat universes". arXiv:1007.3466 [astro-ph.CO].
- "WolframAlpha". Retrieved 29 November 2011.
- "WolframAlpha". Retrieved 29 November 2011.
- "Seven-Year Wilson Microwave Anisotropy Probe (WMAP) Observations: Sky Maps, Systematic Errors, and Basic Results" (PDF). nasa.gov. Retrieved 2010-12-02. (see p. 39 for a table of best estimates for various cosmological parameters)
- Abbott, Brian (May 30, 2007). "Microwave (WMAP) All-Sky Survey". Hayden Planetarium. Retrieved 2008-01-13.
- Paul Davies (28 August 1992). The new physics. Cambridge University Press. pp. 187–. ISBN 978-0-521-43831-5. Retrieved 1 May 2011.
- V. F. Mukhanov (2005). Physical foundations of cosmology. Cambridge University Press. pp. 58–. ISBN 978-0-521-56398-7. Retrieved 1 May 2011.
- Ned Wright, "Why the Light Travel Time Distance should not be used in Press Releases".
- Universe Might be Bigger and Older than Expected. Space.com (2006-08-07). Retrieved on 2011-05-01.
- Big bang pushed back two billion years – space – 04 August 2006 – New Scientist. Space.newscientist.com. Retrieved on 2011-05-01.
- 2 billion years added to age of universe. Worldnetdaily.com. Retrieved on 2011-05-01.
- Edward L. Wright, "An Older but Larger Universe?"
- Cornish; Spergel; Starkman; Eiichiro Komatsu (2003). "Constraining the Topology of the Universe". Physical Review Letters 92 (20): 201302. arXiv:astro-ph/0310233. Bibcode:2004PhRvL..92t1302C. doi:10.1103/PhysRevLett.92.201302.
- Levin, Janna. "In space, do all roads lead to home?". plus.maths.org. Retrieved 2012-08-15.
- Bob Gardner's "Topology, Cosmology and Shape of Space" Talk, Section 7. Etsu.edu. Retrieved on 2011-05-01.
- Vaudrevange; Starkman; Cornish; Spergel. Constraints on the Topology of the Universe: Extension to General Geometries. arXiv:1206.2939. Bibcode:2012arXiv1206.2939V.
- SPACE.com – Universe Measured: We're 156 Billion Light-years Wide!
- Roy, Robert. (2004-05-24) New study super-sizes the universe – Technology & science – Space – Space.com – msnbc.com. MSNBC. Retrieved on 2011-05-01.
- "Astronomers size up the Universe". BBC News. 2004-05-28. Retrieved 2010-05-20.
- "MSU researcher recognized for discoveries about universe". 2004-12-21. Retrieved 2011-02-08.
- Space.com – Universe Might be Bigger and Older than Expected
- M. J. Geller & J. P. Huchra, Science 246, 897 (1989).
- Biggest void in space is 1 billion light years across – space – 24 August 2007 – New Scientist. Space.newscientist.com. Retrieved on 2011-05-01.
- Wall, Mike (2013-01-11). "Largest structure in universe discovered". Fox News.
- LiveScience.com, "The Universe Isn't a Fractal, Study Finds", Natalie Wolchover, 22 August 2012
- Robert P Kirshner (2002). The Extravagant Universe: Exploding Stars, Dark Energy and the Accelerating Cosmos. Princeton University Press. p. 71. ISBN 0-691-05862-8.
- "Large Scale Structure in the Local Universe: The 2MASS Galaxy Catalog", Jarrett, T.H. 2004, PASA, 21, 396
- Massive Clusters of Galaxies Defy Concepts of the Universe N.Y. Times Tue. November 10, 1987:
- Map of the Pisces-Cetus Supercluster Complex:
- "Astronomers count the stars". BBC News. July 22, 2003. Retrieved 2006-07-18.
- van Dokkum, Pieter G.; Charlie Conroy (2010). "A substantial population of low-mass stars in luminous elliptical galaxies". Nature 468 (7326): 940–942. arXiv:1009.5992. Bibcode:2010Natur.468..940V. doi:10.1038/nature09578. PMID 21124316.
- "How many stars?"
- How many galaxies in the Universe? says "the Hubble telescope is capable of detecting about 80 billion galaxies. In fact, there must be many more than this, even within the observable universe, since the most common kind of galaxies in our own neighborhood are faint dwarfs, which are difficult enough to see nearby, much less at large cosmological distances."
- WMAP- Content of the Universe. Map.gsfc.nasa.gov (2010-04-16). Retrieved on 2011-05-01.
- archived version of page from 2005-04-30.
- Bernard F. Schutz (2003). Gravity from the ground up. Cambridge University Press. pp. 361–. ISBN 978-0-521-45506-0. Retrieved 1 May 2011.
- Matthew Champion, "Re: How many atoms make up the universe?", 1998
- 'Is Energy Conserved in General Relativity?' by Michael Weiss and John C. Baez
- National Solar Observatory. "The Universe".
- Houjun Mo; Frank van den Bosch; Simon White (28 June 2010). Galaxy Formation and Evolution. Cambridge University Press. ISBN 978-0-521-85793-2. Retrieved 1 May 2011.
- On the expansion of the Universe (PDF). NASA Glenn Research Centre.
- Helge Kragh (1999-02-22). Cosmology and Controversy: The Historical Development of Two Theories of the Universe. Princeton University Press. p. 212, Chapter 5. ISBN 0-691-00546-X.
- New Gamma-Ray Burst Smashes Cosmic Distance Record – NASA Science. Science.nasa.gov. Retrieved on 2011-05-01.
- More Observations of GRB 090423, the Most Distant Known Object in the Universe. Universetoday.com (2009-10-28). Retrieved on 2011-05-01.
- Attila Meszaros; Balázs; Bagoly; Veres (2009). "Impact on cosmology of the celestial anisotropy of the short gamma-ray bursts". Baltic Astronomy 18: 293–296. arXiv:1005.1558. Bibcode:2009BaltA..18..293M.
- Hubble and Keck team up to find farthest known galaxy in the Universe|Press Releases|ESA/Hubble. Spacetelescope.org (2004-02-15). Retrieved on 2011-05-01.
- MSNBC: "Galaxy ranks as most distant object in cosmos"
- Edward Robert Harrison (2000). Cosmology: the science of the universe. Cambridge University Press. pp. 447–. ISBN 978-0-521-66148-5. Retrieved 1 May 2011.
- Andrew R. Liddle; David Hilary Lyth (13 April 2000). Cosmological inflation and large-scale structure. Cambridge University Press. pp. 24–. ISBN 978-0-521-57598-0. Retrieved 1 May 2011.
- Michael Paul Hobson; George Efstathiou; Anthony N. Lasenby (2006). General relativity: an introduction for physicists. Cambridge University Press. pp. 419–. ISBN 978-0-521-82951-9. Retrieved 1 May 2011.
- Massimo Giovannini (2008). A primer on the physics of the cosmic microwave background. World Scientific. pp. 70–. ISBN 978-981-279-142-9. Retrieved 1 May 2011.
- Lars Bergström and Ariel Goobar: "Cosmology and Particle Physics", WILEY (1999), page 65. ISBN 0-471-97041-7
Further reading
- Vicent J. Martínez, Jean-Luc Starck, Enn Saar, David L. Donoho, Simon Reynolds, Pablo de la Cruz, and Silvestre Paredes (2005). "Morphology Of The Galaxy Distribution From Wavelet Denoising". The Astrophysical Journal 634 (2): 744–755. arXiv:astro-ph/0508326. Bibcode:2005ApJ...634..744M. doi:10.1086/497125.
- Mureika, J. R. and Dyer, C. C. (2004). "Review: Multifractal Analysis of Packed Swiss Cheese Cosmologies". General Relativity and Gravitation 36 (1): 151–184. arXiv:gr-qc/0505083. Bibcode:2004GReGr..36..151M. doi:10.1023/B:GERG.0000006699.45969.49.
- Gott, III, J. R. et al. (May 2005). "A Map of the Universe". The Astrophysical Journal 624 (2): 463–484. arXiv:astro-ph/0310571. Bibcode:2005ApJ...624..463G. doi:10.1086/428890.
- F. Sylos Labini, M. Montuori and L. Pietronero (1998). "Scale-invariance of galaxy clustering". Physics Reports 293 (1): 61–226. arXiv:astro-ph/9711073. Bibcode:1998PhR...293...61S. doi:10.1016/S0370-1573(97)00044-6.
- "Millennium Simulation" of structure forming Max Planck Institute of Astrophysics, Garching, Germany
- The Sloan Great Wall: Largest Known Structure? on APOD
- Cosmology FAQ
- Forming Galaxies Captured In The Young Universe By Hubble, VLT & Spitzer
- NASA featured Images and Galleries
- Star Survey reaches 70 sextillion
- Animation of the cosmic light horizon
- Inflation and the Cosmic Microwave Background by Charles Lineweaver
- Logarithmic Maps of the Universe
- List of publications of the 2dF Galaxy Redshift Survey
- List of publications of the 6dF Galaxy Redshift and peculiar velocity survey
- The Universe Within 14 Billion Light Years—NASA Atlas of the Universe (note—this map only gives a rough cosmographical estimate of the expected distribution of superclusters within the observable universe; very little actual mapping has been done beyond a distance of one billion light years):
- Video: "The Known Universe", from the American Museum of Natural History
- NASA/IPAC Extragalactic Database
MSP:MiddleSchoolPortal/Teaching Strategies for Middle School Math
From Middle School Portal
Introduction - Teaching Strategies for Middle School Math
In explaining its Teaching Principle, one of six principles from Principles and Standards for School Mathematics, the National Council of Teachers of Mathematics took care to emphasize that “teachers have different styles and strategies for helping students learn particular mathematical ideas, and there is no one ‘right way’ to teach” (NCTM, p. 18). Our aim in this publication is to provide resources that support your personal instructional style while, perhaps, introducing materials that encourage you to experiment with a wider range of teaching techniques.
In the section titled Assessment as Instruction, we offer resources that connect these seemingly opposing activities. In another section, Games That Teach, we add to your collection of math games. Each game selection deals with middle school content, such as fractions, linear equations, factors, and geometry.
We all want to teach mathematics that is relevant and interdisciplinary, but it can be difficult to find supporting resources. Connecting to the Wider World offers lesson ideas that integrate math across the school curriculum and beyond the classroom. Taking Advantage of Technology offers activities that use the Internet as a teaching tool, both to explore and to visualize math concepts.
If you are looking for problems that encourage your students to think outside the box, try Challenging with “Rich” Problems. Finally, Launching Through Literature recommends books that will involve your students in mathematics scenarios.
For professional resources, you will find interesting online books in Background Information if you like to dig into theory of teaching and learning. And in the section on The Teaching Principle, we discuss how the aim of this publication aligns with the NCTM Principles and Standards.
We hope these resources support your teaching strategies and add to your repertoire of effective instructional materials.
Background Information for Teachers
For those of you who are interested in exploring mathematics teaching and learning in depth, the books Measuring What Counts, Adding It Up, How Students Learn, and Beating a Path to the Brain provide the theory and research findings underlying current recommendations for reform in mathematics education. For practical, well-explained ideas for classroom teaching, Measuring Up offers exercises that challenge the usual types of assessment. Several of these resources are available free online. The final resource here is a unique site that presents strategies for teaching math to visually impaired students.
Measuring What Counts: A Conceptual Guide for Mathematics Achievement Arguing for a better balance between educational and measurement concerns in the development and use of mathematics assessment, this book sets forth three principles — related to content, learning, and equity — that can form the basis for new assessments that support national standards in mathematics education.
Adding It Up: Helping Children Learn Mathematics Adding It Up explores how students in pre-K through grade 8 learn mathematics and recommends ways of changing teaching, curricula, and teacher education to improve learning during these critical years. Based on research findings, the book details the processes by which students acquire proficiency with whole numbers, rational numbers, and integers, as well as beginning algebra, geometry, measurement, and probability and statistics.
How Students Learn: History, Mathematics, and Science in the Classroom In this book, questions that confront every classroom teacher are addressed using the latest research on cognition, teaching, and learning. Leading educators explain in detail how they developed successful interdisciplinary curricula and teaching approaches. Their recounting of personal teaching experiences lends strength and warmth to this volume.
Beating a Path to the Brain Written from one teacher to another, this short article outlines several instructional strategies based on recent research on the human brain. Practical ideas on "getting the brain's attention" and helping students retain the lesson over time.
Measuring Up: Prototypes for Mathematics Assessment This book features 13 classroom exercises that demonstrate the meaning of inquiry, performance, communication, and problem solving as standards for mathematics education. Even though the examples are at the fourth-grade level, all middle grades teachers can learn from the use of these genuine exercises to challenge and prepare students.
Teaching Math to Visually Impaired Students This site presents strategies for teaching visually impaired students as well as information about math tools, adaptive tools and technology, and the Nemeth code.
Assessment as Instruction
Assessment and instruction can seem opposed to each other, with assessment serving only as a measure of success of instruction. These resources show that assessment can be used as an essential part of the instructional process itself.
Balanced Assessment A set of more than 300 assessment tasks actually designed to inform teaching practice! Most tasks, indexed for grades K-12, incorporate a story problem and include hands-on activities. Some intriguing titles include Confetti Crush, Walkway, and Hockey Pucks. Rubrics for each task are provided.
Mathematics Assessment Instructional Support Modules Developed by the Washington education agency, these modules embed the content of the state assessment within the process of problem solving. Each module consists of problems that engage students in practicing/learning mathematical concepts. Following each module is a set of assessment items that test the content covered in that module. Rubrics are included. Although intended for high school students, most of the material is appropriate for the upper middle grades.
Math Partners: Mathematics Mentoring for America’s Youth The materials here were designed for use by mentors for K-9 students in after-school programs, but are useful in any teaching situation. In each grade band (K-2, 3-5, 6-8, and 8-9 algebra), you will find four units focused on number and operation, geometry and measurement, statistics and probability, and patterns and functions. Each unit has pre-assessment activities to help determine prior knowledge as well as student needs.
Games That Teach
You probably already incorporate games in your teaching. Games focus students’ attention as few other teaching strategies can. The ones selected here deal directly with the math content covered in the middle grades. Each has a learning objective; each could be embedded in a lesson plan. We believe that they will add to your store of games that teach.
Fraction Game For work on fractions, this applet is a winner! It allows students to individually practice working with relationships among fractions and ways of combining fractions. It helps them visualize what is meant by equivalence of fractions. A link to an applet for two-person play is also given here.
Polygon Capture This excellent lesson uses a game to review and stimulate conversation about properties of polygons. A player draws two cards, one about the sides of a polygon, such as "All sides are equal," and one about the angles, such as "Two angles are acute." The player then captures all the polygons on the table that fit both of the properties. Provided here are handouts of the game cards, the polygons, and the rules of the game.
The Factor Game A two-player game that immerses students in factors! To play, one person circles a number from 1 to 30 on a gameboard. The second person circles (in a different color) all the proper factors of that number. The roles are switched and play continues until there are no numbers remaining with uncircled factors. The person with the largest total wins. A lesson plan outlines how to help students analyze the best first move in the game, which leads to class discussion of primes and squares as well as abundant and deficient numbers.
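For teachers who want to preview the arithmetic behind the "best first move" discussion, here is a small Python sketch (not part of the lesson materials) that lists proper factors and scores a first move:

```python
# List the proper factors of a chosen number and score a first move
# in the spirit of the Factor Game: the opponent gets the factors.
def proper_factors(n):
    return [d for d in range(1, n) if n % d == 0]

for first_move in (29, 30):
    opponent = sum(proper_factors(first_move))
    print(f"circle {first_move}: you score {first_move}, "
          f"opponent scores {opponent} from {proper_factors(first_move)}")
```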
Planet Hop In this online one-person computer game, four planets are shown on a coordinate grid. A player must pass through each on a journey through space. The player must find the coordinates of the four planets and, finally, the equation of the line connecting them. Three levels of difficulty are available.
Towers of Hanoi: Algebra (Grades 6-8) This online version of the Towers of Hanoi puzzle features three spindles and a graduated stack of two to eight discs, a number decided by the player, with the largest disc on the bottom. The player must move all discs from the original spindle to a new spindle in the smallest number of moves possible, while never placing a larger disc on a smaller one. The algebra learning occurs as the player observes the pattern of number of discs to number of moves needed. Generalizing from this pattern, students can answer the question: What if you had 100 discs? The final step is expressing the pattern as a function.
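The pattern the applet is designed to reveal is that n discs require 2ⁿ − 1 moves, which answers the 100-disc question directly; a tiny sketch:

```python
# The Towers of Hanoi pattern: n discs take 2**n - 1 moves.
def minimum_moves(discs):
    return 2**discs - 1

for n in (2, 3, 8, 100):
    print(f"{n} discs -> {minimum_moves(n)} moves")
```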
Connecting to the Wider World
It is through an integrated unit of study that students can see measurement and data analysis in the context of science, or improve their sense of shape, symmetry, and similarity through the study of art. Applying mathematics to other subject areas helps students see where mathematics fits into the world at large.
The National Math Trail Ideas on this site lead students to explore the mathematics in their own communities; for example, the geometric shapes in buildings or the data available on monuments and in cemeteries. As students explore, they develop math problems related to their findings. Teachers can submit the problems to the web site, along with photos, drawings, sound recordings, and even videos, which are then made available to other educators and students.
Measuring the Circumference of the Earth Through this online project, students learn about Eratosthenes and actually do a similar measurement that yields a close estimate of the earth’s circumference. Even with access to only one computer, students can obtain data from other schools that lie approximately on their own longitude. Careful instructions guide the students in carrying out the experiment and analyzing the data collected. The project also provides activities, reference materials, online help, and a teacher area.
Connections: Linking Mathematics to Social Studies, Art, and Science Here you will find online resources that connect mathematics to social studies, art, and science. Each section contains lesson plans, problems to solve, and examples of mathematics at work in an interdisciplinary setting.
Taking Advantage of Technology
The computer can be a distraction and a frustration, but it can also be a teaching tool. Such commonplace but abstract concepts as fractional equivalence and the "size" of large numbers can be made visual through technology. And students can interact with virtual manipulatives to change algebraic variables on a balance scale, or rotate a 12-sided solid to see its regularity and symmetry. These resources are examples of the potential of the Internet as a teaching strategy.
The MegaPenny Project This site shows arrangements of large quantities of U.S. pennies. It begins with only 16 pennies, which measure one inch when stacked and one foot when laid in a row. The visuals build to a thousand pennies and in progressive steps to a million and even a quintillion pennies! All pages have tables at the bottom listing the value of the pennies on the page, size of the pile, weight, and area (if laid flat). The site can be used to launch lessons on large numbers, volume versus area, or multiplication by a factor of 10.
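A quick back-of-the-envelope script in the spirit of the site, assuming only its starting fact that 16 pennies stack one inch high:

```python
# Stack heights for large penny counts, assuming 16 pennies per inch.
PENNIES_PER_INCH = 16

def stack_height_miles(pennies):
    inches = pennies / PENNIES_PER_INCH
    return inches / (12 * 5280)

for count in (10**3, 10**6, 10**18):   # a thousand, a million, a quintillion
    print(f"{count:.0e} pennies -> stack ~{stack_height_miles(count):.2e} miles high")
```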
Cynthia Lanius' Fractal Unit In this unit developed for middle school students, the lessons begin with a discussion of why we study fractals and then provide step-by-step explanations of how to make fractals, first by hand and then using Java applets—an excellent strategy! But the unit goes further; it actually explains the properties of fractals in terms that make sense to students and teachers alike.
Project Interactivate Activities This site offers numerous opportunities for online exploration of middle school mathematics. The following two resources are examples.
Fraction Sorter Using this applet, the student represents two to four fractions by dividing and shading areas of squares or circles and then ordering the fractions from smallest to largest on a number line. The applet even checks if a fraction is correctly modeled and keeps score. A visual support to understanding the magnitude of fractions!
Transmorgrapher 2 Another way to "explain" geometric transformations! Using this applet, students can explore the world of translations, reflections, and rotations in the Cartesian coordinate system by transforming polygons on a coordinate plane.
National Library of Virtual Manipulatives In this impressive collection of applets, each applet presents a problem and prompts the student for a solution. The ease of use and clear purpose of each applet make this a truly exceptional site. Below is an example of an activity that fits well in the middle school curriculum.
Algebra Balance Scales — Negatives This virtual balance scale offers students an experimental way to learn about solving linear equations involving negative numbers. The applet presents an equation for the student to illustrate by balancing the scale using blue blocks for positives and red balloons for negatives. The student then solves the equation while a record of the steps taken, written in algebraic terms, is shown on the screen. The exercise reinforces the idea that what is done to one side of an equation must be done to the other side to maintain balance.
Illuminations, National Council of Teachers of Mathematics This site was developed to illuminate the vision for school mathematics set out in NCTM’s Principles and Standards for School Mathematics. The activities, lesson plans, and other resources are designed to improve the teaching and learning of mathematics for all students. Below are two examples of material for the middle grades level.
Exploring Angle Sums Students explore the sum of the interior angles of triangles, quadrilaterals and other polygons. To do this, they mark a midpoint on any side, then rotate the figure 180 degrees about that midpoint. They eventually get all interior angles together at one vertex and consider what the figure suggests about the angle sum.
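The relationship the activity leads toward is that the interior angles of an n-sided polygon sum to (n − 2) × 180 degrees, which a few lines of code can tabulate:

```python
# Interior angle sums for a few polygons: (n - 2) * 180 degrees.
for sides in (3, 4, 5, 6, 8):
    print(f"{sides}-gon: {(sides - 2) * 180} degrees")
```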
Geometric Solids This tool allows learners to investigate various geometric solids and their properties. They can manipulate and color each shape to explore the number of faces, edges, and vertices, and to answer the following question: For any polyhedron, what is the relationship between the number of faces, vertices, and edges?
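The relationship the closing question points toward is Euler's formula for convex polyhedra, F + V − E = 2; a quick check for a few solids:

```python
# Euler's formula check: faces + vertices - edges = 2 for convex polyhedra.
solids = {
    "tetrahedron": (4, 4, 6),     # (faces, vertices, edges)
    "cube": (6, 8, 12),
    "octahedron": (8, 6, 12),
    "dodecahedron": (12, 20, 30),
}
for name, (f, v, e) in solids.items():
    print(f"{name}: F + V - E = {f + v - e}")
```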
Challenging with Rich Problems
What makes a problem "rich?" In my opinion, rich problems have multiple entry points, force students to think outside the box, may have more than one solution, and open the way to new territory for further exploration. The problems in these resources can challenge your students and enliven their study of mathematics.
Ohio Resource Center for Mathematics, Science, and Reading Among this site’s resources is a collection of rich math problems intended generally for the high school level, but the following three could appropriately challenge middle school students.
What Is the Average Birth Month? What is the average month for births? The class may start out with assigning a number to each month — January = 1, February = 2, and so forth — and then find "the average". Examining just what "average" means in this case leads to selecting and graphing the best way to find an "average" with categorical data. Next, students examine class data and are asked if this information is representative of the entire population. In this way, students explore a question that engages them even as it leads to deeper understanding of basic statistical concepts. Questions for class discussion and teaching tips are included.
The Mouse City Hall has a rectangular lobby with a floor of black and white tiles. The tiles are square, in a checkerboard pattern, lined up with the walls: 93 tiles in one direction and 231 in the other. There are two mouse holes, at diagonally opposite corners of the floor. One night a mouse comes out of one mouse hole and runs straight across the floor, and into the other mouse hole. How many tiles does the mouse run across? A complete solution and handouts are provided.
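One standard way to count the tiles a straight diagonal crosses on an m-by-n grid is m + n − gcd(m, n); a two-line check (the provided handouts give a full solution path):

```python
# Tiles crossed by a straight diagonal on an m-by-n grid: m + n - gcd(m, n).
from math import gcd

def tiles_crossed(m, n):
    return m + n - gcd(m, n)

print(tiles_crossed(93, 231))   # 321
```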
SMARTR: Virtual Learning Experiences for Students
Visit our student site SMARTR to find related math-focused virtual learning experiences for your students! The SMARTR learning experiences were designed both for and by middle school aged students. Students from around the country participated in every stage of SMARTR’s development and each of the learning experiences includes multimedia content including videos, simulations, games and virtual activities.
The FunWorks Visit the FunWorks STEM career website to learn more about a variety of math-related careers (click on the Math link at the bottom of the home page).
Students learn mathematics through the experiences that teachers provide. — Principles and Standards for School Mathematics, p. 16
The Teaching Principle is one of six principles describing the National Council of Teachers of Mathematics’ vision of high-quality mathematics education. As noted in the discussion of this principle, teaching mathematics well requires several types of knowledge, including pedagogical knowledge, which "helps teachers understand how students learn mathematics, become facile with a range of different teaching techniques and instructional materials, and organize and manage the classroom" (NCTM, 2000, p. 17). The resources featured in Teaching Strategies in Middle School Math present a wide range of techniques and supporting materials in each type. They may give you the opportunity to explore a teaching strategy new to you, or simply build up your store of activities in a strategy already familiar to you.
In a discussion of professional standards for mathematics teachers, Mathematics Teaching Today, NCTM emphasizes that "the tasks and activities that teachers select are mechanisms for drawing students into the important mathematics that composes the curriculum. Worthwhile mathematical tasks are those that do not separate mathematical thinking from mathematical concepts or skills, that capture students’ curiosity, and that invite students to speculate and to pursue their hunches" (NCTM, 2007, p. 33).
The activities in the resources highlighted here, both online and offline, directly address middle school curriculum while challenging students to make sense of the mathematical concepts through their own reasoning. We hope these resources will add to your own list of "worthwhile mathematical tasks".
Author and Copyright
Terese Herrera taught math several years at middle and high school levels, then earned a Ph.D. in mathematics education. She is a resource specialist for the Middle School Portal 2: Math & Science Pathways project.
Please email any comments to [email protected].
Connect with colleagues at our social network for middle school math and science teachers at http://msteacher2.org.
Copyright December 2007 - The Ohio State University. This page was last updated January 5, 2012. This material is based upon work supported by the National Science Foundation under Grant No. 0424671 and since September 1, 2009 Grant No. 0840824. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
Chickamauga Wars (1776–94)
The Chickamauga Wars (1776–1794) were a series of raids, campaigns, ambushes, minor skirmishes, and several full-scale frontier battles which were a continuation of the Cherokee (Ani-Yunwiya, Ani-Kituwa, Tsalagi, Talligewi) struggle during and after the American Revolutionary War against encroachment by American frontiersmen from the former British colonies. Until the end of the Revolution, the Cherokee fought in part as British allies. After 1786, they also fought along with and as members of the Western Confederacy, organized by the Shawnee chief Tecumseh, in an effort to repulse European-American settlers from the area west of the Appalachian Mountains.
Open warfare broke out in the summer of 1776 between the Cherokee led by Dragging Canoe and frontier settlers along the Watauga, Holston, Nolichucky, and Doe rivers in East Tennessee. (The colonials first referred to these Cherokee as the "Chickamauga" or "Chickamauga-Cherokee," and later as the "Lower Cherokee".) The warfare spread to settlers along the Cumberland River in Middle Tennessee and in Kentucky, as well as to the colonies (later states) of Virginia, North Carolina, South Carolina, and Georgia.
The earliest phase of the conflicts, ending with the treaties of 1777, is sometimes called the "Second Cherokee War", a reference to the earlier Anglo-Cherokee War. Since Dragging Canoe was the dominant leader in both phases of the conflict, the period is sometimes called "Dragging Canoe's War."
Dragging Canoe and his forces fought alongside American Indians from several other tribes, both in the South and in the Northwest: Muscogee Creek and Shawnee, respectively. They had the support first of the British (often with the participation of British agents and regular soldiers) and later of the Spanish. The Cherokee were among the founding members of the Native Americans' Western Confederacy.
Though the Americans identified the followers of Dragging Canoe as "Chickamauga" as distinct from Cherokee who abided by the peace treaties of 1777, there was no separate tribe or band known as "Chickamauga". They identified as Cherokee.
Anglo-Cherokee War
At the outbreak of the French and Indian War (also known as the Seven Years' War) (1754–1763), the Cherokee were allies of the British against the French. They fought in distant campaigns, such as those against the French at Fort Duquesne (at modern-day Pittsburgh, Pennsylvania) and the Shawnee of the Ohio Country. In 1755, a band of Cherokee 130-strong under Ostenaco (Ustanakwa) of Tamali (Tomotley), took up residence in a fortified town at the confluence of the Ohio and Mississippi rivers at the behest of the Iroquois, who were fellow British allies.
For several years, French agents from Fort Toulouse had been visiting the Overhill Cherokee, especially those on the Hiwassee and Tellico rivers. They had built some alliances. The strongest pro-French sentiment among the Cherokee came from Mankiller (Utsidihi) of Great Tellico (Talikwa); Old Caesar of Chatuga (Tsatugi); and Raven (Kalanu) of Great Hiwassee (Ayuhwasi). The Principal Chief Kanagatucko or "Stalking Turkey", was very pro-French, as was his nephew Kunagadoga (Standing Turkey), who succeeded at his death in 1760.
The Anglo-Cherokee War was initiated in the colonies in 1758 in the midst of the Seven Years War by Moytoy (Amo-adawehi) of Citico. He was retaliating for British and colonial mistreatment of Cherokee warriors. The war lasted from 1758 to 1761.
In 1759 the chief named Big Mortar (Yayatustanage), a Muscogee, occupied with his warriors the former site of the Coosa chiefdom of the Mississippian culture. It had been long deserted since Spanish explorations in the mid-16th century. He reoccupied the site in support of his pro-French Cherokee allies in Great Tellico and Chatuga. The occupation was also a step toward an alliance with other Muscogee, Cherokee, Shawnee, Chickasaw, and Catawba warriors. His plans were the first recorded attempt at an intertribal alliance in the South. They were a precursor of the alliances of Dragging Canoe. After the end of the French and Indian War, Big Mortar rose to be the leading chief of the Muscogee.
During the Anglo-Cherokee War, the British murdered Cherokee hostages at Fort Prince George near Keowee. In retaliation, Cherokee attacked and massacred the garrison of Fort Loudoun near Chota. Those two events brought all the Cherokee nation into the war until the fighting ended in 1761. The Cherokee were led by the chiefs Oconostota (Aganstata) of Chota (Itsati); Attakullakulla (Atagulgalu) of Tanasi; Ostenaco of Tomotley; Wauhatchie (Wayatsi) of the Lower Towns; and "Round O" of the Middle Towns.
The Cherokee made separate peace treaties with the Colony of Virginia (Treaty of Long-Island-on-the-Holston, 1761) and the Province of South Carolina (Treaty of Charlestown, 1762). Standing Turkey was deposed and replaced by Attakullakulla, who was pro-British.
John Stuart, the only officer to escape the Fort Loudoun massacre, became the British Superintendent of Indian Affairs for the Southern District, out of Charlestown, South Carolina. He served as the main contact for the Cherokee with the British government. His first deputy, Alexander Cameron, lived among the Cherokee at Keowee, followed by Toqua on the Little Tennessee River. His second deputy, John McDonald, set up a base one hundred miles to the southwest on the west side of Chickamauga River, where it was crossed by the Great Indian Warpath.
During the war, the British forces under general James Grant destroyed a number of major Cherokee towns, which were never reoccupied. Kituwa was abandoned, and its former residents migrated west; they took up residence at Great Island Town on the Little Tennessee River among the Overhill Cherokee.
In the aftermath of the Seven Years' War, France, in defeat, ceded Canada and the part of the Louisiana Territory east of the Mississippi to the British. Spain took control of Louisiana west of the Mississippi. In exchange, Spain ceded Florida to Great Britain, which created the jurisdictions of East Florida and West Florida.
Valuing the support of Native Americans, King George III issued the Royal Proclamation of 1763. This prohibited colonial settlement west of the Appalachian Mountains, in an effort to preserve territory for the Native Americans. Many colonials resented the interference with their drive to the vast western lands. The proclamation was a major irritant that contributed to the American Revolution.
Treaty of Fort Stanwix
After Pontiac's War (1763–1764), the Iroquois Confederacy (Haudenosaunee) ceded to the British government its claims to the hunting grounds between the Ohio and Cumberland rivers, known to them and other Indians as Kain-tuck-ee (Kentucky), in the 1768 Treaty of Fort Stanwix.
The British had planned a colony to be called Charlotina in the lands of the Ohio Valley and Great Lakes regions. (The French had formerly claimed this area as part of Upper Louisiana; it was also known as the Illinois Country.) In 1774 the British claimed those lands as part of the Province of Quebec. After achieving independence, the United States called this area the Northwest Territory.
Watauga Association
The earliest colonial settlement in the vicinity of what became Upper East Tennessee was Sapling Grove (Bristol). The first of the North-of-Holston settlements, it was founded by Evan Shelby, who "purchased" the land in 1768 from John Buchanan. Jacob Brown began another settlement on the Nolichucky River, and John Carter in what became known as Carter's Valley (between Clinch River and Beech Creek), both in 1771. Following the Battle of Alamance in 1771, James Robertson led a group of some twelve or thirteen Regulator families from North Carolina to the Watauga River.
Each of the groups thought they were within the territorial limits of the colony of Virginia. After a survey proved their mistake, Alexander Cameron, Deputy Superintendent for Indian Affairs, ordered them to leave. Attakullakulla, now First Beloved Man (Principal Chief), interceded on their behalf. The settlers were allowed to remain, provided no additional people joined them.
In May 1772, the settlers on the Watauga signed the Watauga Compact to form the Watauga Association. Although the other settlements were not parties to it, all of them are sometimes referred to as "Wataugans". The next year, Daniel Boone led a group to establish a permanent settlement inside the hunting grounds of Kentucky. In retaliation the Shawnee, Lenape (Delaware), Mingo, and some Cherokee attacked a scouting and forage party, which included Boone's son James. The Indians ritually tortured to death their captives James Boone and Henry Russell. The colonists responded with the beginning of Dunmore's War (1773–1774).
Henderson Purchase
In 1775, a group of North Carolina speculators led by Richard Henderson negotiated the Treaty of Watauga at Sycamore Shoals with the older Overhill Cherokee leaders. Oconostota and Attakullakulla (now First Beloved Man), the most prominent among them, ceded the Cherokee claim to the Kain-tuck-ee (Ganda-giga'i) lands. The Transylvania Land Company believed it was gaining ownership of the land, not realizing that other tribes, such as the Lenape, Shawnee, and Chickasaw, also claimed these lands for hunting.
Dragging Canoe (Tsiyugunsini), headman of Great Island Town (Amoyeliegwayi) and son of Attakullakulla, refused to go along with the deal. He told the North Carolina men, "You have bought a fair land, but there is a cloud hanging over it; you will find its settlement dark and bloody". The governors of Virginia and North Carolina repudiated the Watauga treaty and Henderson fled to avoid arrest. George Washington also spoke out against it. The Cherokee appealed to John Stuart, the Indian Affairs Superintendent, for help, which he had provided on previous such occasions, but the outbreak of the American Revolution intervened.
The "Second Cherokee War"
Henderson and the frontiersmen thought the outbreak of the Revolution superseded the judgements of the royal governors. The Transylvania Company began recruiting settlers for the region it had "purchased".
As tensions rose, the Loyalist John Stuart, British Superintendent of Indian Affairs, was besieged by a mob at his house in Charleston and had to flee for his life. His first stop was St. Augustine in East Florida. He sent his deputy, Alexander Cameron, and his brother Henry to Mobile to obtain short-term supplies and arms for the Cherokee. Dragging Canoe took a party of 80 warriors to provide security for the packtrain. He met Henry Stuart and Cameron (whom he had adopted as a brother) at Mobile on 1 March 1776. He asked how he could help the British against their rebel subjects, and for help with the illegal settlers. The two men told him to wait for regular troops to arrive before taking any action.
When the two arrived at Chota, Henry Stuart sent out letters to the trespassers of the Washington District (Watauga and Nolichucky), Pendleton District (North-of-Holston), and Carter's Valley (in modern Hawkins County). He informed the settlers they were illegally on Cherokee land and gave them 40 days to leave. People sympathetic to the Revolution forged a letter indicating that a large force of regular troops, plus Chickasaw, Choctaw, and Muscogee, was on the march from Pensacola and planning to pick up reinforcements from the Cherokee. The forged letters alarmed the settlers, who began gathering together in closer, fortified groups, building stations (small forts), and otherwise preparing for an attack.
Visit from the northern tribes
In May 1776, partly at the behest of Henry Hamilton, the British governor in Detroit, the Shawnee chief Cornstalk led a delegation from the northern tribes (Shawnee, Lenape, Iroquois, Ottawa, others) to the southern tribes (Cherokee, Muscogee, Chickasaw, Choctaw). Cornstalk called for united action against those they called the "Long Knives", the squatters who settled and remained in Kain-tuck-ee (Ganda-gi), or, as the settlers called it, Transylvania. The northerners met with the Cherokee leaders at Chota. At the close of his speech, Cornstalk offered his war belt, and Dragging Canoe accepted it, along with Abraham (Osiuta) of Chilhowee (Tsulawiyi). Dragging Canoe also accepted belts from the Ottawa and the Iroquois, while Savanukah, the Raven of Chota, accepted the belt from the Lenape. The northern emissaries offered war belts to Stuart and Cameron, but they declined to accept.
The plan was for the Cherokee of the Middle, Out, and Valley Towns of what is now western North Carolina to attack South Carolina. Cameron would lead warriors of the Lower Towns against Georgia. Warriors of the Overhill Towns along the lower Little Tennessee and Hiwassee rivers were to attack Virginia and North Carolina. In the Overhill campaign, Dragging Canoe was to lead a force against the Pendleton District, Abraham one against the Washington District, and Savanukah one against Carter's Valley. Dragging Canoe led a small war party into Kentucky and returned with four scalps to present to Cornstalk before the northern delegation departed.
Jemima Boone and the Callaway sisters
The Cherokee soon began raiding into Kentucky, often together with the Shawnee. In one of these raids, a war party led by Hanging Maw (Skwala-guta) of Coyatee (Kaietiyi), captured three teenage girls in a canoe on the Kentucky River. The girls were Jemima Boone, daughter of the explorer; and Elizabeth and Frances Callaway, daughters of Richard Callaway. The war party hurried toward the Shawnee towns north of the Ohio River, but were overtaken by Boone and his rescue party after three days. After a brief firefight, the war party retreated and the girls were rescued. They were unharmed and Jemima said they had been treated reasonably well.
The incident was believed to have inspired a similar scene in James Fenimore Cooper's novel The Last of the Mohicans, in which Lieutenant-Colonel George Munro, the book's protagonist Hawkeye (Natty Bumppo), his adopted Mohican elder brother Chingachgook, Chingachgook's son Uncas, and David Gamut follow and overtake a Huron war party led by Magua in order to rescue the sisters Cora and Alice Munro.
The attacks
Traders warned the squatters in Upper East Tennessee of the impending Cherokee attacks. The traders had come from Chota bearing word from Nancy Ward (Agigaue), the Beloved Woman (a leader or Elder). The Cherokee offensive proved to be disastrous for the attackers, particularly those going up against the Holston settlements.
Finding Eaton's Station deserted, Dragging Canoe took his force up the Great Indian Warpath, where he had a small skirmish with 20 militia. Pursuing them and intending to take Fort Lee at Long-Island-on-the-Holston, his force advanced. Six miles from Fort Lee they encountered a force of militia about half the size of his own, but desperate and in a stronger position. During the "Battle of Island Flats," Dragging Canoe was wounded in the hip by a musket ball, and his brother Little Owl (Uku-usdi) was hit eleven times but survived. His force withdrew, raiding isolated cabins on the way. After raiding further north into southwestern Virginia, his party returned to the Overhill area with plunder and scalps.
The following week, Dragging Canoe led the attack on Black's Fort on the Holston (today Abingdon, Virginia). They killed the settler Henry Creswell on July 22, 1776, outside the stockade. More attacks continued the third week of July, with support from the Muscogee and Tories.
Abraham of Chilhowee was unsuccessful in trying to take Fort Caswell on the Watauga, and his warriors suffered heavy casualties. Instead of withdrawing, he put the garrison under siege. After two weeks, he gave it up. Savanukah raided from the outskirts of Carter's Valley far into Virginia, but those targets contained only small settlements and isolated farmsteads, so he did no real military damage.
Despite his wounds, Dragging Canoe led his warriors to South Carolina to join Alexander Cameron and the Cherokee from the Lower Towns.
Colonial response
The colonials quickly gathered militia who moved against the Cherokee. North Carolina sent General Griffith Rutherford with 2400 militia to scour the Oconaluftee and Tuckasegee river valleys, and the headwaters of the Little Tennessee and Hiwassee. South Carolina sent 1800 men to the Savannah, and Georgia sent 200 to attack Cherokee settlements along the Chattahoochee and Tugaloo rivers. In all, they destroyed more than 50 towns, burned the houses and food stores, destroyed the orchards, slaughtered livestock, and killed hundreds of Cherokee. They sold captives into slavery.
Virginia sent a large force accompanied by North Carolina volunteers, led by William Christian, to the lower Little Tennessee valley. By this time, Dragging Canoe and his warriors had returned to the Overhill Towns. Oconostota supported making peace with the colonists at any price. Dragging Canoe called for the women, children, and old to be sent below the Hiwassee and for the warriors to burn the towns, then ambush the Virginians at the French Broad River. Oconostota, Attakullakulla, and the older chiefs decided against that plan. Oconostota sent word to the approaching colonial army offering to hand over Dragging Canoe and Cameron in exchange for sparing the Overhill Towns.
Dragging Canoe spoke to the council of the Overhill Towns, denouncing the older leaders as rogues and "Virginians" for their willingness to cede land for an ephemeral safety. He concluded, "As for me, I have my young warriors about me. We will have our lands." He stalked out of the council. Afterward, he and other militant leaders, including Ostenaco, gathered like-minded Cherokee from the Overhill, Valley, and Hill towns, and migrated to what is now the Chattanooga, Tennessee area. Cameron had already transferred there.
Christian's Virginia force found Great Island, Citico (Sitiku), Toqua (Dakwayi), Tuskegee (Taskigi), Chilhowee, and Great Tellico virtually deserted. Only the older leaders remained. Christian limited the destruction in the Overhill Towns to the burning of the deserted towns.
The paramount mico Emistigo led the Upper Muscogee in alliance with the British; within a year he had become the strongest native ally of Dragging Canoe and his faction of Cherokee. After 1777, he was assisted by Alexander McGillivray (Hoboi-Hili-Miko), the mixed-blood son of a Coushatta woman and a Scots-Irish American trader. He was mico of the Coushatta, a former colonel in the British Army, and one of John Stuart's agents. Although the majority of the Lower Muscogee chose to remain neutral, the Loyalist Capt. William McIntosh, another of Stuart's agents, recruited a sizable unit of Hitchiti warriors to fight on the British side.
The Treaties of 1777
In 1777, the Cherokee in the Hill, Valley, Lower, and Overhill towns signed the Treaty of Dewitt's Corner with Georgia and South Carolina (Ostenaco was one of the Cherokee signatories) and the Treaty of Fort Henry with Virginia and North Carolina. They promised to stop warring, and the colonies promised in return to protect them from attack. Dragging Canoe raided within 15 miles of Fort Henry during the negotiations. One provision of the latter treaty required that James Robertson and a small garrison be quartered at Chota on the Little Tennessee. Neither treaty halted attacks by frontiersmen from the illegal settlements, nor stopped their encroachment onto Cherokee lands. The peace treaty required the Cherokee to give up the land of the Lower Towns in South Carolina and most of the area of the Out Towns. This was part of their movement to the south and west.
First migration, to the Chickamauga area
In the meantime, Alexander Cameron had suggested to Dragging Canoe and his dissenting Cherokee that they settle at the place where the Great Indian Warpath crossed the Chickamauga River (South Chickamauga Creek). The settlement there, led by Big Fool, became known as Chickamauga (Tsikamagi) Town. Since Dragging Canoe made that town his seat of operations, frontier Americans called his faction the "Chickamaugas".
As mentioned above, John McDonald already had a trading post across the Chickamauga River. This provided a link to Henry Stuart, brother of John, in the West Florida capital of Pensacola. Cameron, the British deputy Indian superintendent and blood brother to Dragging Canoe, accompanied him to Chickamauga. Nearly all the whites legally resident among the Cherokee were part of the exodus.
Dragging Canoe's band set up three other settlements on the Chickamauga River: "Toqua" (Dakwayi), at its mouth on the Tennessee River, "Opelika", a few kilometers upstream from Chickamauga Town; and "Buffalo Town" (Yunsayi, John Sevier called it "Bull Town") at the headwaters of the river in northwest Georgia (in the vicinity of the later Ringgold, Georgia). Other towns established were Cayoka, on Hiwassee Island; "Black Fox" (or Inaliyi) at the current community of the same name in Bradley County, Tennessee; "Ooltewah" (Ultiwa-Cherokee/italwa-Muskogee), under Ostenaco on Ooltewah (Wolftever) Creek; "Sawtee" (Itsati-Cherokee/Koasati-Muskogee), under Dragging Canoe's brother Little Owl on Laurel (North Chickamauga) Creek; "Citico" (Sitiku), along the creek of the same name; "Chatanuga" (Tsatanugi-Cherokee/cvto-nuga-Muskogee) at the foot of Lookout Mountain in what is now St. Elmo; and "Tuskegee" (Taskigi) under Bloody Fellow (Yunwigiga) on Williams' Island.
This newly settled Cherokee area was once the traditional land of the Muscogee who, after having been decimated by European diseases (including smallpox), had withdrawn farther south in the early 18th century to create a buffer zone with the Cherokee. In the intervening years, the two tribes used the region as hunting grounds. When the Carolina colonists began trading with the Cherokee in the late 17th century, the Cherokee's westernmost settlements were the twin towns of Great Tellico (Talikwa) and Chatuga (Tsatugi) at the current site of Tellico Plains, Tennessee. The Coosawattee townsite (Kuswatiyi, "Old Coosa Place"), reoccupied briefly by Big Mortar's Muscogee as mentioned above, was among the sites settled by the Cherokee migrants.
Many Cherokee resented the (largely Scots-Irish) settlers moving into Cherokee lands, and agreed with Dragging Canoe. The Cherokee towns of Great Hiwassee (Ayuwasi), Tennessee (Tanasi), Chestowee (Tsistuyi), Ocoee (Ugwahi), and Amohee (Amoyee) in the vicinity of the Hiwassee River were wholly in the camp of those rejecting the pacifism of the old chiefs, as were the Lower Cherokee in the North Georgia towns of Coosawatie (Kusawatiyi), Etowah (Itawayi), Ellijay (Elatseyi), Ustanari (or Ustanali), and others, who had been evicted from their homes in South Carolina by the Treaty of Dewitt's Corner. The Yuchi in the vicinity of the new settlement, on the upper Chickamauga, Pinelog, and Conasauga Creeks, likewise supported Dragging Canoe's policies.
In July 1776 Dragging Canoe had learned that guerrilla warfare was more suitable for his warriors than fighting like English regular troops. From their new bases, the Cherokee conducted raids against settlers on the Holston, Doe, Watauga, and Nolichucky Rivers, on the Cumberland and Red Rivers, and the isolated stations in between. Dragging Canoe called them all "Virginians". The Cherokee ambushed parties traveling on the Tennessee River, and on local sections of the many ancient trails that served as "highways", such as the Great Indian Warpath (Mobile to northeast Canada), the Cisca and St. Augustine Trail (St. Augustine to the French Salt Lick at Nashville), the Cumberland Trail (from the Upper Creek Path to the Great Lakes), and the Nickajack Trail (Nickajack to Augusta). Later, the Cherokee stalked the Natchez Trace and roads improved by the uninvited settlers, such as the Kentucky, Cumberland, and Walton roads. Occasionally, the Cherokee attacked targets in Virginia, the Carolinas, Georgia, Kentucky, and the Ohio country.
In 1778–1779, Savannah (see: Capture of Savannah) and Augusta, Georgia, were captured by the British with help from Dragging Canoe, John McDonald, and the Cherokee, along with McGillivray's Upper Muscogee force and McIntosh's band of Hitchiti warriors, who were being supplied with guns and ammunition through Pensacola and Mobile. Together they were able to gain control of parts of interior South Carolina and Georgia. In addition, the remaining neutral towns of the Lower Muscogee now threw in their lot with the British side, at least nominally.
First invasion of the Chickamauga Towns
In early 1779, James Robertson of Virginia received warning from Chota that Dragging Canoe's warriors were going to attack the Holston area. In addition, he had received intelligence that John McDonald's place was the staging area for a conference of Indians Governor Hamilton was planning to hold at Detroit, and that a stockpile of supplies equivalent to that of a hundred packhorses was stored there.
In response, he ordered a preemptive assault under Evan Shelby (father of Isaac Shelby, first governor of the State of Kentucky) and John Montgomery. Boating down the Tennessee in a fleet of dugout canoes, they disembarked and destroyed the eleven towns in the immediate Chickamauga area and most of their food supply, along with McDonald's home and store. Whatever was not destroyed was confiscated and sold at the point where the trail back to the Holston crossed what has since been known as Sale Creek.
In the meantime, Dragging Canoe and John McDonald were leading the Cherokee and fifty Loyalist Rangers in attacks on Georgia and South Carolina, so there was no resistance and only four deaths among the towns' inhabitants. Upon hearing of the devastation of the towns, Dragging Canoe, McDonald, and their men, including the Rangers, returned to Chickamauga and its vicinity.
The Shawnee sent envoys to Chickamauga to find out if the destruction had caused Dragging Canoe's people to lose the will to fight, along with a sizable detachment of warriors to assist them in the South. In response to their inquiries, Dragging Canoe held up the war belts he'd accepted when the delegation visited Chota in 1776, and said, "We are not yet conquered". To cement the alliance, the Cherokee responded to the Shawnee gesture with nearly a hundred of their warriors sent to the North.
The towns in the Chickamauga area were soon rebuilt and reoccupied by their former inhabitants. Dragging Canoe responded to the Shelby expedition with punitive raids on the frontiers of both North Carolina and Virginia.
Concord between the Lenape and the Cherokee
In spring 1779, Oconostota, Savanukah, and other non-belligerent Cherokee leaders travelled north to pay their respects after the death of White Eyes, the Lenape leader who had been encouraging his people to give up their fighting against the Americans. He had also negotiated, first with Lord Dunmore and later with the American government, for an Indian state with representatives seated in the Continental Congress, an agreement he finally won from that body, which he had addressed in person in 1776.
Upon the arrival of the Cherokee in the village of Goshocking, they were taken to the council house and began talks. The next day, the Cherokee present solemnly agreed with their "grandfathers" to take neither side in the ongoing conflict between the Americans and the British. Part of the reasoning was that thus "protected", neither tribe would find themselves subject to the vicissitudes of war. The rest of the world at conflict, however, remained heedless, and the provisions lasted as long as it took the ink to dry, as it were.
Death of John Stuart
About this same time, John Stuart, Indian Affairs Superintendent, died at Pensacola. The British assigned his deputy, Alexander Cameron, to work with the Chickasaw and Choctaw further west. His replacement, Thomas Browne, was assigned to the Cherokee, Muscogee, and Catawba. But, Cameron never went west, and he and Browne worked together until the latter departed for St. Augustine.
The Chickasaw
The Chickasaw came into the war on the side of the British and their Indian allies in 1779. The colonist George Rogers Clark and a party of over 200 built Fort Jefferson and a surrounding settlement near the mouth of the Ohio River, inside the Chickasaw hunting grounds. After learning of the trespass, the Chickasaw destroyed the settlement, laid siege to the fort, and began attacking settlers on the Kentucky frontier. They continued attacking along the Cumberland River and into Kentucky through the following year, making their last raid together with Dragging Canoe's Cherokee. Former animosities from the Cherokee-Chickasaw war of 1758–1769 were forgotten in the face of the common enemy.
Cumberland Settlements
Later that year, Robertson and John Donelson traveled overland along the Kentucky Road and founded Fort Nashborough at the French Salt Lick (which got its name from having previously been the site of a French outpost called Fort Charleville) on the Cumberland River. It was the first of many such settlements in the Cumberland area, which subsequently became the focus of attacks by all the tribes in the surrounding region. Leaving a small group there, both returned east.
Early in 1780, Robertson and a group of fellow Wataugans left the east down the Kentucky Road headed for Fort Nashborough. Meanwhile, Donelson journeyed down the Tennessee with a party that included his family, intending to go across to the mouth of the Cumberland, then upriver to Ft. Nashborough. Eventually, the group did reach its destination, but only after being ambushed several times.
In the first encounter near Tuskegee Island, the Cherokee warriors under Bloody Fellow focused their attention on the boat in the rear whose passengers had come down with smallpox. There was only one survivor, later ransomed. The victory, however, proved to be a Pyrrhic one for the Cherokee, as the ensuing epidemic wiped out several hundred in the vicinity.
Several miles downriver, beginning with the obstruction known as the Suck or the Kettle, the party was fired upon throughout its passage through the Tennessee River Gorge (also called Cash Canyon), losing one member, with several wounded. Several hundred kilometers downriver, the Donelson party ran up against Muscle Shoals, where it was attacked at one end by the Muscogee and at the other by the Chickasaw. The final attack was by the Chickasaw in the vicinity of modern Hardin County, Tennessee.
Shortly after the party's arrival at Fort Nashborough, Donelson, Robertson and others formed the Cumberland Compact.
John Donelson eventually moved to the Indiana country after the Revolution, where he and William Christian were captured while fighting in the Illinois country in 1786 and were burned at the stake by their captors.
Augusta and Kings Mountain
That summer, the new Indian superintendent, Thomas Browne, planned to hold a joint conference between the Cherokee and Muscogee to coordinate their attacks, but those plans were forestalled when the Americans made a concerted effort to retake Augusta, where he had his headquarters. The arrival of a war party from the Chickamauga Towns, joined by a sizable number of warriors from the Overhill Towns, prevented the capture of both Browne and the town, and they and Browne's East Florida Rangers chased Elijah Clarke's army into the arms of John Sevier, wreaking havoc on rebellious settlements along the way. This set the stage for the Battle of Kings Mountain, in which loyalist militia under Patrick Ferguson moved south trying to encircle Clarke and were defeated by a force of 900 frontiersmen under Sevier and William Campbell referred to as the Overmountain Men.
Alexander Cameron, aware of the absence from the settlements of nearly a thousand men, urged Dragging Canoe and other Cherokee leaders to strike while they had the opportunity. With Savanukah as their headman, the Overhill Towns gave their full support to the new offensive. Both Cameron and the Cherokee had been expecting a quick victory for Ferguson and were stunned that he had suffered such a resounding defeat so soon, but the assault was already in motion.
Hearing word of the new invasion from Nancy Ward, her second documented betrayal of Dragging Canoe, Virginia Governor Thomas Jefferson sent an expedition of seven hundred Virginians and North Carolinians against the Cherokee in December 1780, under the command of Sevier. It met a Cherokee war party at Boyd's Creek, and after the battle, joined by forces under Arthur Campbell and Joseph Martin, marched against the Overhill towns on the Little Tennessee and the Hiwassee, burning seventeen of them, including Chota, Chilhowee, the original Citico, Tellico, Great Hiwassee, and Chestowee. Afterwards, the Overhill leaders withdrew from further active conflict for the time being, though the Hill and Valley Towns continued to harass the frontier.
In the Cumberland area, the new settlements lost around forty people in attacks by the Cherokee, Muscogee, Chickasaw, Shawnee, and Lenape.
Second migration and expansion
By 1781, Dragging Canoe was working with the towns of the Cherokee from western South Carolina relocated on the headwaters of the Coosa River, and with the Muscogee, particularly the Upper Muscogee. The Chickasaw, Shawnee, Huron, Mingo, Wyandot, and Munsee-Lenape (who were the first to do so) were repeatedly attacking the Cumberland settlements as well as those in Kentucky. Three months after the first Chickasaw attack on the Cumberland, the Cherokee's largest attack of the wars against those settlements came in April of that year, and culminated in what became known as the Battle of the Bluff, led by Dragging Canoe in person. Afterwards, settlers began to abandon the settlements until only three stations were left, a condition which lasted until 1785.
Loss of British supply lines and territory
In February 1780, Spanish forces from New Orleans under Bernardo de Galvez, allied to the Americans but acting in the interests of Spain, captured Mobile in the Battle of Fort Charlotte. When they next moved against Pensacola the following month, McIntosh and McGillivray rallied 2000 Muscogee warriors to its defense. A British fleet arrived before the Spanish could take the port. A year later, the Spanish reappeared with an army twice the size of the garrison of British, Choctaw, and Muscogee defenders, and Pensacola fell two months later. Shortly thereafter, Augusta was also retaken by the revolutionaries when the Lower Muscogee relief force led by McIntosh was unable to arrive in time. The British and Muscogee garrison at Savannah fell to the Patriots in 1782. Emistigo died leading the Upper Muscogee attempt to relieve them; McGillivray, by then his right-hand man, succeeded him as the leading mico of the Upper Towns. Also in 1782, a successful campaign by Brigadier General Andrew Pickens led to the Treaty of Long Swamp Creek, which forced cessions of land between the Savannah and Chattahoochee Rivers to the State of Georgia.
Politics in the Overhill Towns
In the fall of 1781, the British engineered a coup d'état of sorts that installed Savanukah as First Beloved Man in place of the more pacifist Oconostota, who had succeeded Attakullakulla. For the next year or so, the Overhill Cherokee openly supported the efforts of Dragging Canoe and his militant Cherokee, as they had been doing covertly. In the fall of 1782, however, the older pacifist leaders replaced him with another of their number, Corntassel (Kaiyatsatahi, known to history as "Old Tassel"), and sent messages of peace along with complaints of settler encroachment to Virginia and North Carolina. Opposition from pacifist leaders, however, never stopped war parties from traversing the territories of any of the town groups, largely because the average Cherokee supported their cause, nor did it stop small war parties of the Overhill Towns from raiding settlements in East Tennessee, mostly those on the Holston.
Cherokee in the Ohio region
A party of Cherokee joined the Lenape, Shawnee, and Chickasaw in a diplomatic visit to the Spanish at Fort St. Louis in the Missouri country in March 1782 seeking a new avenue of obtaining arms and other assistance in the prosecution of their ongoing conflict with the Americans in the Ohio Valley. One group of Cherokee at this meeting led by Standing Turkey sought and received permission to settle in Spanish Louisiana, in the region of the White River.
By 1783, there were at least three major communities of Cherokee in the region. One lived among the Chalahgawtha (Chillicothe) Shawnee. The second Cherokee community lived among the mixed Wyandot-Mingo towns on the upper Mad River near the later Zanesfield, Ohio. A third group of Cherokee is known to have lived among and fought with the Munsee-Lenape, the only portion of the Lenape nation at war with the Americans.
Second invasion of the Chickamauga Towns
In September 1782, an expedition under Sevier once again destroyed the towns in the Chickamauga vicinity, though going no further west than the Chickamauga River, and those of the Lower Cherokee down to Ustanali (Ustanalahi), including what he called Vann's Town. The towns were deserted because, having advance warning of the impending attack, Dragging Canoe and his fellow leaders had chosen to relocate westward. Meanwhile, Sevier's army, guided by John Watts (Kunokeski), somehow never managed to cross paths with any parties of Cherokee.
Dragging Canoe and his people established what whites called the Five Lower Towns downriver from the various natural obstructions in the twenty-six-mile Tennessee River Gorge. Starting with Tuskegee (aka Brown's or Williams') Island and the sandbars on either side of it, these obstructions included the Tumbling Shoals, the Holston Rock, the Kettle (or Suck), the Suck Shoals, the Deadman's Eddy, the Pot, the Skillet, the Pan, and, finally, the Narrows, ending with Hale's Bar. The whole twenty-six miles was sometimes called The Suck, and the stretch of river was notorious enough to merit mention even by Thomas Jefferson. These navigational hazards were so formidable, in fact, that the French agents attempting to travel upriver to reach Cherokee country during the French and Indian War, intending to establish an outpost at the spot later occupied by British agent McDonald, gave up after several attempts.
The Five Lower Towns
The Five Lower Towns included Running Water (Amogayunyi), at the current Whiteside in Marion County, Tennessee, where Dragging Canoe made his headquarters; Nickajack (Ani-Kusati-yi, or Koasati place), eight kilometers down the Tennessee River in the same county; Long Island (Amoyeligunahita), on the Tennessee just above the Great Creek Crossing; Crow Town (Kagunyi) on the Tennessee, at the mouth of Crow Creek; and Stecoyee (Utsutigwayi, aka Lookout Mountain Town), at the current site of Trenton, Georgia. Tuskegee Island Town was reoccupied as a lookout post by a small band of warriors to provide advance warning of invasions, and eventually many other settlements in the area were resettled as well.
Because this was a move into the outskirts of Muscogee territory, Dragging Canoe, knowing such a move might be necessary, had previously sent a delegation under Little Owl to meet with Alexander McGillivray, the major Muscogee leader in the area, to gain their permission to do so. When he and his followers moved their base, so too did the British representatives Cameron and McDonald, making Running Water the center of their efforts throughout the Southeast. The Chickasaw were in the meantime trying to play off the Americans and the Spanish against each other with little interest in the British. Turtle-at-Home (Selukuki Woheli), another of Dragging Canoe's brothers, along with some seventy warriors, headed north to live and fight with the Shawnee.
Cherokee continued to migrate westward to join Dragging Canoe's followers, whose ranks were further swelled by runaway slaves, white Tories, Muscogee, Koasati, Kaskinampo, Yuchi, Natchez, and Shawnee, as well as a band of Chickasaw living at what was later known as Chickasaw Old Fields across from Guntersville, plus a few Spanish, French, Irish, and Germans.
Later major settlements of the Lower Cherokee (as they were called after the move) included Willstown (Titsohiliyi) near the later Fort Payne; Turkeytown (Gundigaduhunyi), at the head of the Cumberland Trail where the Upper Creek Path crossed the Coosa River near Centre, Alabama; Creek Path (Kusanunnahiyi), at the intersection of the Great Indian Warpath with the Upper Creek Path at the modern Guntersville, Alabama; Turnip Town (Ulunyi), seven miles from the present-day Rome, Georgia; and Chatuga (Tsatugi), nearer the site of Rome.
This expansion came about largely because of the influx of Cherokee from North Georgia, who fled the depredations of expeditions such as those of Sevier; a large majority of these were former inhabitants of the Lower Towns in northeast Georgia and western South Carolina. Cherokee from the Middle, or Hill, Towns also came, a group of whom established a town named Sawtee (Itsati) at the mouth of South Sauta Creek on the Tennessee. Another town, Coosada, was added to the coalition when its Koasati and Kaskinampo inhabitants joined Dragging Canoe's confederation. Partly because of the large influx from North Georgia, and because they no longer occupied the Chickamauga area as their main center, Dragging Canoe's followers and others in the area began to be referred to as the Lower Cherokee, with him and his lieutenants remaining in the leadership.
Another visit from the North
In November 1782, twenty representatives from four northern tribes (Wyandot, Ojibwa, Ottawa, and Potawatomi) travelled south to consult with Dragging Canoe and his lieutenants at his new headquarters in Running Water Town, which was nestled far back up the hollow from the Tennessee River onto which it opened. Their mission was to gain the help of Dragging Canoe's Cherokee in attacking Pittsburgh and the American settlements in Kentucky and the Illinois country.
After the revolution
Eventually, Dragging Canoe realized the only way for the various Indian nations to maintain their independence was to unite in an alliance against the Americans. In addition to increasing his ties to McGillivray and the Upper Muscogee, with whom he worked most often and in greatest numbers, he continued to send his warriors to fight alongside the Shawnee, Choctaw, and Lenape.
In January 1783, Dragging Canoe travelled to St. Augustine, the capital of East Florida, for a summit meeting with a delegation of northern tribes, and called for a federation of Indians to oppose the Americans and their frontier colonists. Browne, the British Indian Superintendent, approved the concept. At Tuckabatchee a few months later, a general council of the major southern tribes (Cherokee, Muscogee, Chickasaw, Choctaw, and Seminole) plus representatives of smaller groups (Mobile, Catawba, Biloxi, Houma, etc.) took place to follow up, but plans for the federation were cut short by the signing of the Treaty of Paris. In June, Browne received orders from London to cease and desist.
Following that treaty, Dragging Canoe turned to the Spanish (who still claimed all the territory south of the Cumberland and were now working against the Americans) for support, trading primarily through Pensacola and Mobile. What made this possible was the fact that the Spanish governor of Louisiana Territory in New Orleans had taken advantage of the British setback to seize those ports. Dragging Canoe maintained relations with the British governor at Detroit, Alexander McKee, through regular diplomatic missions there under his brothers Little Owl and The Badger (Ukuna).
Chickasaw and Muscogee treaties
In November 1783, the Chickasaw signed the Treaty of French Lick with the new United States of America and never again took up arms against it. The Lower Cherokee were also present at the conference and apparently made some sort of agreement to cease their attacks on the Cumberland, for after this American settlements in the area began to grow again. That same month, the pro-American camp in the Muscogee nation signed the Treaty of Augusta with the State of Georgia, enraging McGillivray, who wanted to keep fighting; he burned the houses of the leaders responsible and sent warriors to raid Georgia settlements.
Treaties of Hopewell and Coyatee
The Cherokee in the Overhill, Hill, and Valley Towns also signed a treaty with the new United States government, the 1785 Treaty of Hopewell, but in their case it was a treaty made under duress, the frontier colonials by this time having spread further along the Holston and onto the French Broad. Several leaders from the Lower Cherokee signed, including two from Chickamauga Town (which had been rebuilt) and one from Stecoyee. None of the Lower Cherokee, however, had any part in the Treaty of Coyatee, which the new State of Franklin forced Corntassel and the other Overhill leaders to sign at gunpoint, ceding the remainder of the lands north of the Little Tennessee. Nor did they have any part in the Treaty of Dumplin Creek, which ceded the remaining land within the claimed boundaries of Sevier County. The colonials could now shift military forces to Middle Tennessee in response to the increasing frequency of attacks by both the Chickamauga Cherokee (by now usually called the Lower Cherokee) and the Upper Muscogee.
State of Franklin
In May 1785, the settlements of Upper East Tennessee, then comprising four counties of western North Carolina, petitioned the Congress of the Confederation to be recognized as the "State of Franklin". Even though their petition failed to receive the two-thirds vote necessary to qualify, they proceeded to organize what amounted to a secessionist government, holding their first "state" assembly in December 1785. One of their chief motives was to retain the foothold they had recently gained in the Cumberland Basin.
Attacks on the Cumberland
In the summer of 1786, Dragging Canoe and his warriors along with a large contingent of Muscogee raided the Cumberland region, with several parties raiding well into Kentucky. John Sevier responded with a punitive raid on the Overhill Towns. One such occasion that summer was notable for the fact that the raiding party was led by Hanging Maw of Coyatee, who was supposedly friendly at the time.
Formation of the Western Confederacy
In addition to the small bands still operating with the Shawnee, Wyandot-Mingo, and Lenape in the Northwest, a large contingent of Cherokee led by The Glass attended and took an active role in a grand council of northern tribes (plus some Muscogee and Choctaw in addition to the Cherokee contingent) resisting the American advance into the western frontier. The council took place in November–December 1786 in the Wyandot town of Upper Sandusky, just south of the British capital of Detroit.
This meeting was initiated by Joseph Brant (Thayendanegea), the Mohawk leader who was head chief of the Iroquois Six Nations and who, like Dragging Canoe, had fought on the side of the British during the American Revolution. It led to the formation of the Western Confederacy to resist American incursions into the Old Northwest. Dragging Canoe and his Cherokee were full members of the Confederacy, whose purpose was to coordinate attacks and defense in the Northwest Indian War of 1785–1795.
According to John Norton (Teyoninhokovrawen), Brant's adopted son, it was at this council that The Glass formed a friendship with Brant that lasted well into the 19th century. The Glass apparently served as Dragging Canoe's envoy to the Iroquois, as the latter's brothers did to McKee and to the Shawnee.
The passage of the Northwest Ordinance by the Congress of the Confederation (subsequently affirmed by the United States Congress) in 1787, establishing the Northwest Territory and essentially giving away the land upon which they lived, only exacerbated the resentment of the tribes in the region.
Coldwater Town
The settlement of Coldwater was founded by a party of French traders who had come down from the Wabash to set up a trading center in 1783. It sat a few miles below the foot of the thirty-five-mile-long Muscle Shoals, near the mouth of Coldwater Creek and about three hundred yards back from the Tennessee River, close to the site of the modern Tuscumbia, Alabama. For the next couple of years, trade was all the French did, but then the business changed hands. Around 1785, the new management began covertly gathering Cherokee and Muscogee warriors into the town, whom they then encouraged to attack the American settlements along the Cumberland and its environs. The fighting contingent eventually numbered approximately nine Frenchmen, thirty-five Cherokee, and ten Muscogee.
Because the townsite was well-hidden and its presence unannounced, James Robertson, commander of the militia in the Cumberland's Davidson and Sumner Counties, at first accused the Lower Cherokee of the new offensives. In 1787, he marched his men to their borders in a show of force, but without an actual attack, then sent an offer of peace to Running Water. In answer, Dragging Canoe sent a delegation of leaders led by Little Owl to Nashville under a flag of truce to explain that his Cherokee were not the responsible parties.
Meanwhile, the attacks continued. At the time of the conference in Nashville, two Chickasaw out hunting game along the Tennessee in the vicinity of Muscle Shoals chanced upon Coldwater Town, where they were warmly received and spent the night. Upon returning home to Chickasaw Bluffs, now Memphis, Tennessee, they immediately informed their head man, Piomingo, of their discovery. Piomingo then sent runners to Nashville.
Just after these runners had arrived in Nashville, a war party attacked one of its outlying settlements, killing Robertson's brother Mark. In response, Robertson raised a group of one hundred fifty volunteers and proceeded south by a circuitous land route, guided by two Chickasaw. Somehow catching the town off guard, despite the fact that its inhabitants knew Robertson's force was approaching, they chased its would-be defenders to the river, killing about half of them and wounding many of the rest. They then gathered all the trade goods in the town to be shipped to Nashville by boat, burned the town, and departed.
After the wars, it became the site of Colbert's Ferry, owned by Chickasaw leader George Colbert, the crossing place over the Tennessee River of the Natchez Trace.
Muscogee council at Tuckabatchee
In 1786, McGillivray had convened a council of war at the dominant Upper Muscogee town of Tuckabatchee about recent incursions of Americans into their territory. The council decided to go on the warpath against the trespassers, starting with the recent settlements along the Oconee River. McGillivray had already secured support from the Spanish in New Orleans.
The following year, because of the perceived insult of the Cumberland settlers' incursion against Coldwater so near to their territory, the Muscogee also took up the hatchet against the Cumberland settlements. They continued their attacks until 1789, but the Cherokee did not join them for this round, due partly to internal matters but more to trouble from the State of Franklin.
Peak of Lower Cherokee power and influence
Dragging Canoe's last years, 1788–1792, were the peak of his influence and that of the rest of the Lower Cherokee, among the other Cherokee and among other Indian nations, both south and north, as well as with the Spanish of Pensacola, Mobile, and New Orleans, and the British in Detroit. He also sent regular diplomatic envoys to negotiations in Nashville, in Jonesborough and later Knoxville, and in Philadelphia.
Massacre of the Kirk family
In May 1788, a party of Cherokee from Chilhowee came to the house of John Kirk's family on Little River, while he and his oldest son, John Jr., were out. When Kirk and John Jr. returned, they found the other eleven members of their family dead and scalped.
Massacre of the Brown family
After a preliminary trip to the Cumberland at the end of which he left two of his sons to begin clearing the plot of land at the mouth of White's Creek, James Brown returned to North Carolina to fetch the rest of the family, with whom he departed Long-Island-on-the-Holston by boat in May 1788. When they passed by Tuskegee Island (Williams Island) five days later, Bloody Fellow stopped them, looked around the boat, then let them proceed, meanwhile sending messengers ahead to Running Water.
Upon the family's arrival at Nickajack, a party of forty under mixed-blood John Vann boarded the boat and killed Col. Brown, his two older sons on the boat, and five other young men travelling with the family. Mrs. Brown, the two younger sons, and three daughters were taken prisoner and distributed to different families.
When he learned of the massacre the following day, The Breath (Unlita), Nickajack's headman, was seriously displeased. He later adopted the Browns' son Joseph into his own family as a son. Joseph had originally been given to Kitegisky (Tsiagatali), who had adopted him as a brother and treated him well, and of whom Joseph had fond memories in later years.
Mrs. Brown and one of her daughters were given to the Muscogee and ended up in the personal household of Alexander McGillivray. George, the elder of the surviving sons, also ended up with the Muscogee, but elsewhere. Another daughter went to a Cherokee family near Nickajack and the third to one in Crow Town.
Murders of the Overhill chiefs
At the beginning of June 1788, John Sevier, no longer governor of the State of Franklin, raised a hundred volunteers and set out for the Overhill Towns. After a brief stop at the Little Tennessee, the group went to Great Hiwassee and burned it to the ground. Returning to Chota, Sevier sent a detachment led by James Hubbard to Chilhowee to punish those responsible for the Kirk massacre. Hubbard's force included John Kirk Jr. Hubbard brought along Corntassel and Hanging Man from Chota.
At Chilhowee, Hubbard raised a flag of truce and took Corntassel and Hanging Man to the house of Abraham, still headman of the town, who was there with his son; Long Fellow and Fool Warrior were also brought along. Hubbard posted guards at the door and windows of the cabin and gave John Kirk Jr. a tomahawk to take his revenge; Kirk killed the captive chiefs.
The murder of the pacifist Overhill chiefs under a flag of truce angered the entire Cherokee nation. Men who had been reluctant to participate took to the warpath. The increase in hostility lasted for several months. Doublehead, Corntassel's brother, was particularly incensed.
Highlighting the seriousness of the matter, Dragging Canoe came in to address the general council of the Nation, now meeting at Ustanali on the Coosawattee River (one of the former Lower Towns on the Keowee River, relocated to the vicinity of Calhoun, Georgia), to which the seat of the council had been moved. Little Turkey (Kanagita) was elected as First Beloved Man to succeed the murdered chief. The election was contested by Hanging Maw of Coyatee, who had been elected chief headman of the traditional Overhill Towns on the Little Tennessee River. Both men had been among those who originally followed Dragging Canoe into the southwest of the nation.
Dragging Canoe's presence at the Ustanali council, the fact that the council now met in what was then the area of the Lower Towns (to which Upper Cherokee from the Overhill towns were migrating in vast numbers), and his acceptance of the election of his former co-belligerent Little Turkey as principal leader over all the Cherokee nation are graphic proof that he and his followers remained Cherokee and were not a separate tribe, as some, following Brown, allege.
Houston's Station
In early August, the commander of the garrison at Houston's Station (near the present Maryville, Tennessee) received word that a Cherokee force of nearly five hundred was planning to attack his position. He therefore sent a large reconnaissance patrol to the Overhill Towns.
Stopping in the town of Citico on the south side of the Little Tennessee, which they found deserted, the patrol scattered throughout the town's orchard and began gathering fruit. Six of them died in the first fusillade, another ten while attempting to escape across the river.
With the loss of those men, the garrison at Houston's Station was seriously beleaguered. Only the arrival of a relief force under John Sevier saved the fort from being overrun and its inhabitants slaughtered. With the garrison joining his force, Sevier marched to the Little Tennessee and burned Chilhowee.
Invasion and counter-invasion
Later in August, Joseph Martin (who was married to Betsy, daughter of Nancy Ward, and living at Chota), with 500 men, marched to the Chickamauga area, intending to penetrate the edge of the Cumberland Mountains to get to the Five Lower Towns. He sent a detachment to secure the pass over the foot of Lookout Mountain (Atalidandaganu), which was ambushed and routed by a large party of Dragging Canoe's warriors, with the Cherokee in hot pursuit. One of the participants later referred to the spot as "the place where we made the Virginians turn their backs". According to one of the participants on the other side, Dragging Canoe, John Watts, Bloody Fellow, Kitegisky, The Glass, Little Owl, and Dick Justice were all present at the encounter.
Dragging Canoe raised an army of 3,000 Cherokee warriors, which he split into more flexible warbands of hundreds of warriors each. One band was headed by John Watts (Kunnessee-i, also known as 'Young Tassel'), with Bloody Fellow, Kitegisky (Tsiagatali), and The Glass. It included a young warrior named Pathkiller (Nunnehidihi), who later became known as The Ridge (Ganundalegi).
In October of that year, the band advanced across country toward White's Fort. Along the way, they attacked Gillespie's Station on the Holston River after capturing settlers who had left the enclosure to work in the fields, storming the stockade when the defenders' ammunition ran out, killing the men and some of the women, and taking 28 women and children prisoner. They proceeded to attack White's Fort and Houston's Station, only to be beaten back. Afterward, the warband wintered at an encampment on the Flint River in present-day Unicoi County, Tennessee as a base of operations.
In return, the settlers increased their retaliatory attacks. Troops under Sevier destroyed the Valley Towns in North Carolina. Bob Benge, with a group of Cherokee warriors, evacuated the general population from Ustalli, on the Hiwassee; they left a rearguard to ensure their escape. After firing the town, Sevier and his group pursued the fleeing inhabitants, and were ambushed at the mouth of the Valley River by Benge's party. The US soldiers went to the village of Coota-cloo-hee (Gadakaluyi) and burned down its cornfields, but they were chased off by 400 warriors led by Watts (Young Tassel).
Because of the destruction, the Overhill Cherokee and refugees from the Lower and Valley towns virtually abandoned the settlements on the Little Tennessee and dispersed south and west. Chota was the only town left with many inhabitants.
The Flint Creek band/Prisoner exchange
John Watts' band on Flint Creek fell upon serious misfortune early the next year. In early January 1789, they were surrounded by a force under John Sevier that was equipped with grasshopper cannons. The gunfire from the Cherokee was so intense, however, that Sevier abandoned his heavy weapons and ordered a cavalry charge that led to savage hand-to-hand fighting. Watts' band lost nearly 150 warriors.
Word of their defeat did not reach Running Water until April, when it arrived with an offer from Sevier for an exchange of prisoners which specifically mentioned the surviving members of the Brown family, including Joseph, who had been adopted first by Kitegisky and later by The Breath. Among those captured at Flint Creek were Bloody Fellow and Little Turkey's daughter.
Joseph and his sister Polly were brought immediately to Running Water, but when runners were sent to Crow Town to retrieve Jane, their youngest sister, her owner refused to surrender her. Bob Benge, present in Running Water at the time, mounted his horse and hefted his famous axe, saying, "I will bring the girl, or the owner's head". The next morning he returned with Jane. The three were handed over to Sevier at Coosawattee.
McGillivray delivered Mrs. Brown and Elizabeth to her son William during a trip to Rock Landing, Georgia, in November. George, the other surviving son from the trip, remained with the Muscogee until 1798.
Blow to the Western Confederacy
In January 1789, Arthur St. Clair, American governor of the Northwest Territory, concluded two separate peace treaties with members of the Western Confederacy. The first was with the Iroquois, except for the Mohawk, and the other was with the Wyandot, Lenape, Ottawa, Potawatomi, Sac, and Ojibway. The Mohawk, the Shawnee, the Miami, and the tribes of the Wabash Confederacy, who had been doing most of the fighting, not only refused to go along but became more aggressive, especially the Wabash tribes.
Chiksika's band of Shawnee
In early 1789, a band of thirteen Shawnee arrived in Running Water after spending several months hunting in the Missouri River country, led by Chiksika, a leader contemporary with the famous Blue Jacket (Weyapiersenwah). In the band was his brother, the later leader Tecumseh.
Their mother, a Muscogee, had left the north after her husband died at the Battle of Point Pleasant, the only major action of Dunmore's War, in 1774; homesick without him, she had gone to live in her old town, which was now near the Cherokee's Five Lower Towns. She had since died, but Chiksika's Cherokee wife and his daughter were living at nearby Running Water Town, so the band stayed.
They were warmly received by the Cherokee, and, based out of Running Water, they conducted and took part in raids and other actions, in some of which Cherokee warriors (most notably Bob Benge) also participated. Chiksika was killed in one of these actions in April, and Tecumseh became leader of the small Shawnee band, gaining his first experiences as a leader in warfare.
The "Miro Conspiracy"
Starting in 1786, the leaders of the State of Franklin and the Cumberland District began secret negotiations with Esteban Rodriguez Miro, governor of Spanish Louisiana, to deliver their regions to the jurisdiction of the Spanish Empire. Those involved included James Robertson, Daniel Smith, and Anthony Bledsoe of the Cumberland District, John Sevier and Joseph Martin of the State of Franklin, James White, recently-appointed American Superintendent for Southern Indian Affairs (replacing Thomas Browne), and James Wilkinson, governor of Kentucky.
The irony lay in the fact that the Spanish were backing the Cherokee and Muscogee raids harassing those very territories. The conspirators' main counterpart on the Spanish side in New Orleans was Don Diego de Gardoqui. Gardoqui's separate but simultaneous negotiations with Wilkinson, initiated by the latter, aimed to bring Kentucky (then a territory) into the Spanish orbit as well.
The "conspiracy" went as far as the Franklin and Cumberland officials promising to take the oath of loyalty to Spain and renounce allegiance to any other nation. Robertson even successfully petitioned the North Carolina assembly create the "Mero Judicial District" for the three Cumberland counties (Davidson, Sumner, Tennessee). There was a convention held in the failing State of Franklin on the question, and those present voted in its favor.
A large part of their motivation, besides the desire to secede from North Carolina, was the hope that this course of action would bring relief from Indian attacks. The negotiations involved Alexander McGillivray, with Robertson and Bledsoe writing to him of the Mero District's peaceful intentions toward the Muscogee while simultaneously sending White as emissary to Gardoqui to convey news of their overture.
The scheme fell apart for two main reasons. The first was the dithering of the Spanish government in Madrid. The second was the interception of a letter from Joseph Martin which fell into the hands of the Georgia legislature in January 1789.
North Carolina, to which the western counties in question belonged under the laws of the United States, took the simple expedient of ceding the region to the federal government, which established the Southwest Territory in May 1790. Of note is the fact that under the new regime the Mero District kept its name.
Wilkinson remained a paid Spanish agent until his death in 1825, including his years as one of the top generals in the U.S. army, and was involved in the Aaron Burr conspiracy. Ironically, he became the first American governor of Louisiana Territory in 1803.
The opposite end of Muscle Shoals from Coldwater Town, mentioned above, was occupied in 1790 by a roughly 40-strong warrior party under Doublehead (Taltsuska), plus their families. He had gained permission to establish his town at the head of the Shoals, which was in Chickasaw territory, because the local headman, George Colbert, the mixed-blood leader who later owned Colbert's Ferry at the foot of Muscle Shoals, was his son-in-law.
Like the former Coldwater Town, Doublehead's Town was diverse, with Cherokee, Muscogee, Shawnee, and a few Chickasaw. It quickly grew beyond the initial 40 warriors, who carried out many small raids against settlers on the Cumberland and into Kentucky. During one foray in June 1792, his warriors ambushed a canoe carrying the three sons of Valentine Sevier (brother of John) and three others on a scouting expedition searching for his party. They killed the three Seviers and another man; two escaped.
Doublehead conducted his operations largely independently of the Lower Cherokee, though he did take part in large operations with them on occasion, such as the invasion of the Cumberland in 1792 and that of the Holston in 1793.
Treaty of New York
Dragging Canoe's long-time ally among the Muscogee, Alexander McGillivray, led a delegation of twenty-seven leaders north, where they signed the Treaty of New York in August 1790 with the United States government on behalf of the "Upper, Middle, and Lower Creek and Seminole composing the Creek nation of Indians". The treaty ended the involvement of McGillivray (who was made an American brigadier general) in the wars, but the signers did not represent even half the Muscogee Confederacy. There was much resistance to the treaty, both from the peace faction he had attacked after the Treaty of Augusta and from the faction of the Confederacy that wished to continue the war, and did so.
Muscle Shoals
In January 1791, a group of land speculators from the Southwest Territory called the Tennessee Company, led by James Hubbard and Peter Bryant, attempted to gain control of the Muscle Shoals and its vicinity by building a settlement and fort at the head of the Shoals. They did so against an executive order of President Washington forbidding it, as relayed to them by the governor of the Southwest Territory, William Blount. The Glass came down from Running Water with sixty warriors, descended upon the defenders (captained by Valentine Sevier, brother of John), told them to leave immediately or be killed, and burned their blockhouse as they departed.
Bob Benge
Starting in 1791, Benge, and his brother The Tail (Utana; aka Martin Benge), based at Willstown, began leading attacks against settlers in East Tennessee, Southwest Virginia, and Kentucky, often in conjunction with Doublehead and his warriors from Coldwater. Eventually, he became one of the most feared warriors on the frontier.
Meanwhile, Muscogee scalping parties began raiding the Cumberland settlements again, though without mounting any major campaigns.
Treaty of Holston
The Treaty of Holston, signed in July 1791, required the Upper Towns to cede more land in return for continued peace because the US government proved unable to stop or roll back illegal settlements. As it appeared to guarantee Cherokee sovereignty, the chiefs of the Upper Cherokee believed they had the same status as states. Several representatives of the Lower Cherokee participated in the negotiations and signed the treaty, including John Watts, Doublehead, Bloody Fellow, Black Fox (Dragging Canoe's nephew), The Badger (his brother), and Rising Fawn (Agiligina; aka George Lowery).
Battle of the Wabash
Later in the summer, a small delegation of Cherokee under Dragging Canoe's brother Little Owl traveled north to meet with the Indian leaders of the Western Confederacy, chief among them Blue Jacket (Weyapiersenwah) of the Shawnee, Little Turtle (Mishikinakwa) of the Miami, and Buckongahelas of the Lenape. While they were there, word arrived that Arthur St. Clair, governor of the Northwest Territory, was planning an invasion against the allied tribes in the north. Little Owl immediately sent word south to Running Water.
Dragging Canoe quickly sent a 30-strong war party north under his brother The Badger. Along with the warriors of Little Owl and Turtle-at-Home, they took part in the decisive encounter of November 1791 known as the Battle of the Wabash, the worst defeat ever inflicted by Native Americans upon the American military, with a body count far surpassing that of the more famous Battle of the Little Bighorn in 1876.
After the battle, Little Owl, The Badger, and Turtle-at-Home returned south with most of the warriors who had accompanied the first two. The warriors who had come north in earlier years, both those with Turtle-at-Home and those who had arrived before him, remained in the Ohio region, but the returning warriors brought back a party of thirty Shawnee under a leader known as Shawnee Warrior, which frequently operated alongside the warriors under Little Owl.
Death of "the savage Napoleon"
Inspired by news of the northern victory, Dragging Canoe embarked on a mission to unite the native people of his area as had Little Turtle and Blue Jacket, visiting the other major tribes in the region. His embassies to the Lower Muscogee and the Choctaw were successful, but the Chickasaw in West Tennessee refused his overtures. Upon his return, which coincided with that of The Glass and Dick Justice (Uwenahi Tsusti), and of Turtle-at-Home, from successful raids on settlements along the Cumberland (in the case of the former two) and in Kentucky (in the case of the latter), a huge all-night celebration was held at Stecoyee at which the Eagle Dance was performed in his honor.
By morning, March 1, 1792, Dragging Canoe was dead. A procession of honor carried his body to Running Water, where he was buried. By the time of his death, the resistance of the Chickamauga/Lower Cherokee had led to grudging respect from the settlers, as well as the rest of the Cherokee nation. He was even memorialized at the general council of the Nation held in Ustanali in June by his nephew Black Fox (Inali):
The Dragging Canoe has left this world. He was a man of consequence in his country. He was friend to both his own and the white people. His brother [Little Owl] is still in place, and I mention it now publicly that I intend presenting him with his deceased brother's medal; for he promises fair to possess sentiments similar to those of his brother, both with regard to the red and the white. It is mentioned here publicly that both red and white may know it, and pay attention to him.
The final years
The last years of the Chickamauga Wars saw John Watts, who had spent much of the wars affecting friendship and pacifism towards his American counterparts while living most of the time among the Overhill Cherokee, drop his facade as he took over from his mentor, though deception and artifice still formed part of his diplomatic repertoire.
John Watts
At his own previous request, the old warrior was succeeded as leader of the Lower Cherokee by John Watts (Kunokeski), although The Bowl (Diwali) succeeded him as headman of Running Water, along with Bloody Fellow and Doublehead, who continued Dragging Canoe's policy of Indian unity, including an agreement with McGillivray of the Upper Muscogee to build joint blockhouses from which warriors of both tribes could operate at the junction of the Tennessee and Clinch Rivers, at Running Water, and at Muscle Shoals.
Watts, Tahlonteeskee, and 'Young Dragging Canoe' (whose actual name was Tsula, or "Red Fox") travelled to Pensacola in May at the invitation of Arturo O'Neill de Tyrone, Spanish governor of West Florida. They took with them letters of introduction from John McDonald. Once there, they forged a treaty with O'Neill for arms and supplies with which to carry on the war. Upon returning north, Watts moved his base of operations to Willstown in order to be closer to his Muscogee allies and his Spanish supply line.
Watts at the time of Dragging Canoe's death had been serving as an interpreter during negotiations in Chota between the American government and the Overhill Cherokee. Throughout the wars, up until the time he became principal chief of the Lower Cherokee, he continued to live in the Overhill Towns as much as in the Chickamauga and Lower Towns, and many whites mistook him for a non-belligerent, most notably John Sevier when he mistakenly contracted Watts to guide him to Dragging Canoe's headquarters in September 1782.
Meanwhile John McDonald, now British Indian Affairs Superintendent, moved to Turkeytown with his assistant Daniel Ross and their families. Some of the older chiefs, such as The Glass of Running Water, The Breath of Nickajack, and Dick Justice of Stecoyee, abstained from active warfare but did nothing to stop the warriors in their towns from taking part in raids and campaigns.
That summer, the band of Shawnee Warrior and the party of Little Owl began joining the raids of the Muscogee on the Mero District. In late June, they attacked a small fortified settlement called Ziegler's Station, swarming it, killing the men and taking the women and children prisoner.
Buchanan's Station
In September 1792, Watts orchestrated a large campaign intending to attack the Holston region with a large combined army in four bands of two hundred each. When the warriors were mustering at Stecoyee, however, he learned that their planned attack was expected and decided to aim for Nashville instead.
The army Watts led into the Cumberland region was nearly a thousand strong, including a contingent of cavalry. It was to be a four-pronged attack: Tahlonteeskee (Ataluntiski; Doublehead's brother) and Bob Benge's brother The Tail led a party to ambush the Kentucky Road, Doublehead led another against the Cumberland Road, and Middle Striker (Yaliunoyuka) led a third to do the same on the Walton Road, while Watts himself led the main force, made up of 280 Cherokee, Shawnee, and Muscogee warriors plus cavalry, against the fort at Nashville.
He sent out George Fields (Unegadihi; "Whitemankiller") and John Walker, Jr. (Sikwaniyoha) as scouts ahead of the army, and they killed the two scouts sent out by James Robertson from Nashville.
Near their target on the evening of 30 September, Watts's combined force came upon a small fort known as Buchanan's Station. Talotiskee, leader of the Muscogee, wanted to attack it immediately, while Watts argued in favor of saving it for the return south. After much bickering, Watts gave in around midnight. The assault proved to be a disaster for Watts. He himself was wounded, and many of his warriors were killed, including Talotiskee and some of Watts' best leaders; Shawnee Warrior, Kitegisky, and Dragging Canoe's brother Little Owl were among those who died in the encounter.
Doublehead's group of sixty ambushed a party of six, took one scalp, then headed toward Nashville. On their way, they were attacked by a militia force and lost thirteen men, only hearing of the disaster at Buchanan's Station afterwards. Tahlonteeskee's party, meanwhile, stayed out into early October, attacking Black's Station on Crooked Creek, killing three, wounding more, and capturing several horses. Middle Striker's party was more successful, ambushing a large armed force coming down the Walton Road to the Mero District in November and routing it completely without losing a single man.
In revenge for the deaths at Buchanan's Station, Benge, Doublehead, and his brother Pumpkin Boy led a party of sixty into southwestern Kentucky in early 1793 during which their warriors, in an act initiated by Doublehead, cooked and ate the enemies they had just killed. Afterwards, Doublehead's party returned south and held scalp dances at Stecoyee, Turnip Town, and Willstown, since warriors from those towns had also participated in the raid in addition to his and Benge's groups.
Joseph, of the Brown family discussed above, was a member of the station's garrison but had been at his mother's house three miles away at the time of the battle. When he learned of the death of his friend Kitegisky, he is reported to have mourned greatly.
Muscogee attack the Holston and the Cumberland
Meanwhile, a party of Muscogee under a mixed-breed named Lesley invaded the Holston region and began attacking isolated farmsteads. Lesley's party continued harassment of the Holston settlements until the summer of 1794, when Hanging Maw sent his men along with the volunteers from the Holston settlements to pursue them, killing two and handing over a third to the whites for trial and execution.
After the failed Cherokee attack on Buchanan's Station, the Muscogee increased their attacks on the Cumberland in both size and frequency. Besides scalping raids, two parties attacked Bledsoe's Station and Greenfield Station in April 1793. Another party attacked Hays' Station in June. In August, the Coushatta from Coosada raided the country around Clarksville, Tennessee, attacking the homestead of the Baker family, killing all but two who escaped and one taken prisoner who was later ransomed at Coosada Town. A war party of Tuskeegee from the Muscogee town of that name was also active in Middle Tennessee at this time.
Attack on a Cherokee diplomatic party
In early 1793, Watts began rotating large war parties back and forth between the Lower Towns and the North at the behest of his allies in the Western Confederacy, which was beginning to lose ground to the Legion of the United States created in the aftermath of the Battle of the Wabash. With the exception of the 1793 campaign against the Holston, his attention over the next two years was focused more on the north than on the Southwest Territory and its environs.
Shortly after a delegation of Shawnee stopped in Ustanali in that spring on their way to call on the Muscogee and Choctaw to punish the Chickasaw for joining St. Clair's army in the north, Watts sent envoys to Knoxville, then the capital of the Southwest Territory, to meet with Governor William Blount to discuss terms for peace. Blount in turn passed the offer to Philadelphia, which invited the Lower Cherokee leaders to a meeting with President Washington. The party that was sent from the Lower Towns that May included Bob McLemore, Tahlonteeskee, Captain Charley of Running Water, and Doublehead, among several others.
The party from the Lower Towns stopped in Coyatee because Hanging Maw and other chiefs from the Upper Towns were also going and had gathered there along with several whites who had arrived earlier. A large party of Lower Cherokee (Pathkiller, aka The Ridge, among them) had been raiding the Upper East, where they killed two men and stole twenty horses. On their way out, they passed through Coyatee, to which the pursuit party tracked them.
The militia violated their orders not to cross the Little Tennessee, then the border between the Cherokee nation and the Southwest Territory, and entered the town shooting indiscriminately. In the ensuing chaos, eleven leading men were killed, including Captain Charley, and several were wounded, including Hanging Maw, his wife and daughter, Doublehead, and Tahlonteeskee; one of the white delegates was among the dead. The Cherokee, even Watts' hostile warriors, agreed to await the outcome of the subsequent trial, which proved to be a farce, in large part because John Beard, the man responsible, was a close friend of John Sevier.
Invasion and Cavett's Station
Watts responded to Beard's acquittal by invading the Holston area with one of the largest Indian forces ever seen in the region, over one thousand Cherokee and Muscogee plus a few Shawnee, intending to attack Knoxville itself. The plan called for four bodies of troops to march toward Knoxville separately, converging at a previously agreed rendezvous point along the way.
In August, Watts attacked Henry's Station with a force of two hundred, but fell back due to overwhelming gunfire coming from the fort, not wanting to risk another misfortune like that at Buchanan's Station the previous year.
The four columns converged a month later near the present Loudon, Tennessee, and proceeded toward their target. On the way, the Cherokee leaders were discussing among themselves whether to kill all the inhabitants of Knoxville, or just the men, James Vann advocating the latter while Doublehead argued for the former.
Further on the way, they encountered a small settlement called Cavett's Station. After they had surrounded the place, Benge negotiated with the inhabitants, agreeing that if they surrendered, their lives would be spared. However, after the settlers had walked out, Doublehead's group and his Muscogee allies attacked and began killing them all over the pleas of Benge and the others. Vann managed to grab one small boy and pull him onto his saddle, only to have Doublehead smash the boy's skull with an axe. Watts intervened in time to save another young boy, handing him to Vann, who put the boy behind him on his horse and later handed him over to three of the Muscogee for safe-keeping; unfortunately, one of the Muscogee chiefs killed the boy and scalped him a few days later.
Because of this incident, Vann called Doublehead "Babykiller" (deliberately parodying the honorable title "Mankiller") for the remainder of his life; and it also began a lengthy feud which defined the politics of the early 19th century Cherokee Nation and only ended in 1807 with Doublehead's death at Vann's orders. By this time, tensions among the Cherokee broke out into such vehement arguments that the force broke up, with the main group retiring south.
Battle of Etowah
Sevier countered the invasion with an invasion and occupation of Ustanali, which had been deserted; there was no fighting there other than an indecisive skirmish with a Cherokee-Muscogee scouting party. He and his men then followed the Cherokee-Muscogee force south to the town of Etowah (Itawayi; near the site of present-day Cartersville, Georgia across the Etowah River from the Etowah Indian Mounds), leading to what Sevier called the "Battle of Hightower". His force defeated their opponents soundly, then went on to destroy several Cherokee villages to the west before retiring to the Southwest Territory. This was the last pitched battle of the Chickamauga Wars.
End of the Chickamauga Wars
In late June 1794, the federal government signed the Treaty of Philadelphia with the Cherokee. It reaffirmed the land cessions of the 1785 Treaty of Hopewell and the 1791 Treaty of Holston. The chiefs Doublehead and Bloody Fellow both signed it.
Muscle Shoals massacre
Later in the summer, a party of Cherokee under Whitemankiller (Unegadihi; aka George Fields) overtook a river party under William Scott at Muscle Shoals. They killed its white passengers, looted the goods, and took the African-American slaves as captives.
Final engagements
In August of that year, Thomas Browne (now working as the US Indian Agent to the Chickasaw) sent word from Chickasaw territory to General Robertson of the Mero District, as the Cumberland region was then called, that the Cherokee and Muscogee were going to attack settlements all along the river. Browne reported that a war party of 100 was going to take canoes down the Tennessee to the lower river, while another of 400 would attack overland after passing through the Five Lower Towns and picking up reinforcements.
The river party began the journey toward the targets, but there was much dissension in the larger mixed Muscogee-Cherokee overland party. They had divided over the actions of Hanging Maw, who had attacked the Lesley party in the Holston region. They divided their forces before reaching the settlements; only three small parties made it to the Cumberland area and they operated into at least September.
The Nickajack Expedition
Desiring to end the wars once and for all, Robertson sent a detachment of U.S. regular troops, Mero District militia, and Kentucky volunteers to the Five Lower Towns under U.S. Army Major James Ore. Guided by knowledgeable locals, including former captive Joseph Brown, Ore's army traveled down the Cisca and St. Augustine Trail toward the Five Lower Towns.
On 13 September, the army attacked Nickajack without warning, slaughtering many of the inhabitants, including its pacifist chief The Breath. After torching the houses, the soldiers went upriver and burned Running Water, whose residents had already fled. Joseph Brown fought with the soldiers but tried to spare women and children. Cherokee casualties were relatively light, as the majority of the population of both towns were in Willstown attending a major stickball game (similar to lacrosse).
Treaty of Tellico Blockhouse
Watts finally decided to call for peace: he was discouraged by the destruction of the two towns, the death of Bob Benge in April, and the recent defeat of the Western Confederacy by General "Mad Anthony" Wayne's army at the Battle of Fallen Timbers. More than 100 Cherokee had fought there.
The loss of support from the Spanish, who had their own problems with Revolutionary France in Europe, convinced Watts to end the fighting. Two months later, on 7 November 1794, he concluded the Treaty of Tellico Blockhouse, which finally ended the series of conflicts. The treaty was notable for requiring no further cession of land by the Cherokee, other than obliging the Lower (or Chickamauga) Cherokee to recognize the cessions of the Holston treaty. This led to a period of relative peace into the 19th century.
Following the peace treaty, leaders from the Lower Cherokee were dominant in national affairs. When the national government of all the Cherokee was organized, the first three persons to hold the office of Principal Chief of the Cherokee Nation – Little Turkey (1788–1801), Black Fox (1801–1811), and Pathkiller (Nunnehidihi; 1811–1827) – had previously served as warriors under Dragging Canoe, as had the first two Speakers of the Cherokee National Council, established in 1794, Doublehead and Turtle-at-Home.
The domination of the Cherokee Nation by the former warriors from the Lower Towns continued well into the 19th century. Even after the revolt of the young chiefs of the Upper Towns, the Lower Towns were a major voice, and the "young chiefs" of the Upper Towns who dominated that region had themselves previously been warriors with Dragging Canoe and Watts.
Muscogee-Chickasaw War
The Muscogee kept on fighting after the destruction of Nickajack and Running Water and the following peace between the Lower Cherokee and the United States. In October 1794, they attacked Bledsoe's Station again. In November, they attacked Sevier's Station and massacred fourteen of the inhabitants, Valentine Sevier being one of the few survivors.
In early January 1795, however, the Chickasaw, who had sent warriors to take part in the Army of the Northwest as allies of the United States, began killing and scalping Muscogee warriors found in Middle Tennessee. In March, despite the entreaties of the Cherokee and the Choctaw, the Muscogee turned their attention away from the Cumberland and toward the Chickasaw. The resulting Muscogee-Chickasaw War, begun partly at the behest of the Shawnee to punish the Chickasaw for joining the Army of the Northwest at the Battle of Fallen Timbers, ended in a truce negotiated by the U.S. government at Tellico Blockhouse in October of that year at a conference attended by the two belligerents and the Cherokee. The Muscogee signed their own peace treaty with the United States in June 1796.
Treaty of Greenville
The northern allies of the Lower Cherokee in the Western Confederacy signed the Treaty of Greenville with the United States in August 1795, ending the Northwest Indian War. The treaty required them to cede to the United States the territory that became the State of Ohio and part of what became the State of Indiana, and to acknowledge the United States rather than Great Britain as the predominant ruler of the Northwest.
None of the Cherokee in the North were present at the treaty. Later that month, Gen. Wayne sent a message to Long Hair (Gitlugunahita), leader of those who remained in the Ohio country, that they should come in and sue for peace. In response, Long Hair replied that all of them would return south as soon as they finished the harvest. However, they did not all do so; at least one, called Shoe Boots (Dasigiyagi), stayed in the area until 1803, so it's likely others did as well.
Counting the previous two years during which all the Cherokee fought openly as British allies, the Chickamauga Wars lasted nearly twenty years, making them one of the longest-running conflicts between Indians and Americans; considering that Cherokee had been involved at least in small numbers in all the conflicts beginning in 1758, the figure could be put at nearly forty years. Despite its length, its importance at the time, and its influence on later Native American leaders, the conflict has often been overlooked. Because of the continuing hostilities that followed the Revolution, the United States placed one of the two permanent garrisons of the new country at Fort Southwest Point at the confluence of the Tennessee and Clinch Rivers; the other was at Fort Pitt in Pennsylvania. Because the conflict has been overlooked, many historians have failed to include Dragging Canoe among the notable Native American war chiefs and diplomats, and some texts dealing with conflicts between "Americans" and "Indians" barely mention him.
On the "Chickamauga" or "Lower Cherokee" as a separate tribe
Brother Steiner, a Moravian missionary, met with Richard Fields at Tellico Blockhouse in 1799. Fields was a former Lower Cherokee warrior whom he had hired to serve as his guide and interpreter. Br. Steiner had been sent south by the Brethren to scout for a location for the mission and school which they planned to build in the Nation. They eventually placed it on land donated by James Vann at Spring Place. On one occasion, Br. Steiner asked his guide, "What kind of people are the Chickamauga?". Fields laughed, then replied, "They are Cherokee, and we know no difference."
Dragging Canoe spoke on several occasions to the National Council at Ustanali, he publicly acknowledged Little Turkey as the senior leader of all the Cherokee, and he was memorialized at the Council following his death in 1792, all events demonstrating that the "Chickamauga" were exactly as Richard Fields said, Cherokee. In addition, there was constant communication between the militant leaders with the Cherokee of other regions, warriors from the Overhill Towns and other groups participated in the warfare, and numerous men identified as "Chickamauga" signed treaties as Cherokee with the federal government, along with other leaders of the Cherokee.
See also
- Timeline of Cherokee removal
- Historic treaties of the Cherokee
- Eastern Band of Cherokee Indians
- United Keetoowah Band of Cherokee Indians
- Cherokee Nation of Oklahoma
- Principal Chiefs of the Cherokee
- Allen Manuscript. Note: Richard Fields, a mixed-race Cherokee, explained this to the Moravian missionary Brother Steiner, when they met at Tellico Blockhouse.
- Tanner, p. 95
- Brown, Eastern Cherokee Chiefs
- Klink and Talman, p. 62
- "Watauga Petition". Ensor Family Pages
- Evans (1977), "Dragging Canoe," p. 179
- Brown, Old Frontiers, p. 138
- Evans (1977), "Dragging Canoe", pp. 180–182
- Hoig, p. 59
- "the Killing of William [sic] Henry Creswell" http://www.rootsweb.ancestry.com/~varussel/indian/19.html
- Alderman, p. 38
- Brown, Old Frontiers, p. 161
- Moore and Foster, p. 168
- Evans, Dragging Canoe, p. 184
- Tanner, p. 98
- Brown, Old Frontiers, pp. 205–207
- Hoig, p. 68
- Moore, p. 175
- Moore, pp. 180–182
- Evans, Dragging Canoe, p. 185
- Mooney, Myths and Sacred Formulas, p. 60
- Tanner, p. 99
- Brown, Old Frontiers, pp. 204–205
- Moore, p. 182
- Braund, p. 171
- Klink and Talman, p. 49
- Moore, pp. 182–187
- Brown, Old Frontiers, pp. 272–275
- Evans, Last Battle, pp. 30–40
- Klink and Talman, p. 48
- Draper Mss. 16
- Moore, p. 204
- Brown, Old Frontiers, pp. 293–295
- Brown, Old Frontiers, p. 297
- Evans, Bob Benge, p. 100
- Brown, Old Frontiers, pp. 286–290
- Brown, Old Frontiers, pp. 297–299
- Brown, Old Frontiers, p. 275
- Brown, Old Frontiers, p. 299
- Moore, p. 201
- Wilson, pp. 47–48
- Drake, Chapt. II
- Eckert, pp. 379–387
- Henderson, Chap. XX
- Moore, p. 233
- Brown, Old Frontiers, pp. 318–319
- American State Papers, Vol. I, p. 263
- Starr, p. 35
- Starr, p. 36
- Moore, pp. 205–211
- Brown, Old Frontiers, pp. 344–366
- Hoig, p. 83
- Evans, Bob Benge, pp. 101–102
- Moore, pp. 225–231
- Moore, pp. 215–220
- Moore, pp. 220–225
- Evans, Bob Benge, pp. 103–104
- Moore, pp. 244–250
- American State Papers, p. 536
- Adair, James. History of the American Indian. (Nashville: Blue and Gray Press, 1971).
- Alderman, Pat. Dragging Canoe: Cherokee-Chickamauga War Chief. (Johnson City: Overmountain Press, 1978)
- Allen, Penelope. "The Fields Settlement". Penelope Allen Manuscript. Archive Section, Chattanooga-Hamilton County Bicentennial Library.
- American State Papers, Indian Affairs, Vol. I. (Washington: Government Printing Office, 1816).
- Braund, Kathryn E. Holland. Deerskins and Duffels: Creek Indian Trade with Anglo-America, 1685–1815. (Lincoln: University of Nebraska Press, 1986).
- Brown, John P. "Eastern Cherokee Chiefs". Chronicles of Oklahoma, Vol. 16, No. 1, pp. 3–35. (Oklahoma City: Oklahoma Historical Society, 1938).
- Brown, John P. Old Frontiers: The Story of the Cherokee Indians from Earliest Times to the Date of Their Removal to the West, 1838. (Kingsport: Southern Publishers, 1938).
- Drake, Benjamin. Life Of Tecumseh And Of His Brother The Prophet; With A Historical Sketch Of The Shawanoe Indians. (Mount Vernon : Rose Press, 2008).
- Eckert, Allan W. A Sorrow in Our Heart: The Life of Tecumseh. (New York: Bantam, 1992).
- Evans, E. Raymond, ed. "The Battle of Lookout Mountain: An Eyewitness Account, by George Christian". Journal of Cherokee Studies, Vol. III, No. 1. (Cherokee: Museum of the Cherokee Indian, 1978).
- Evans, E. Raymond. "Notable Persons in Cherokee History: Ostenaco". Journal of Cherokee Studies, Vol. 1, No. 1, pp. 41–54. (Cherokee: Museum of the Cherokee Indian, 1976).
- Evans, E. Raymond. "Notable Persons in Cherokee History: Bob Benge". Journal of Cherokee Studies, Vol. 1, No. 2, pp. 98–106. (Cherokee: Museum of the Cherokee Indian, 1976).
- Evans, E. Raymond. "Notable Persons in Cherokee History: Dragging Canoe". Journal of Cherokee Studies, Vol. 2, No. 2, pp. 176–189. (Cherokee: Museum of the Cherokee Indian, 1977).
- Evans, E. Raymond. "Was the Last Battle of the American Revolution Fought on Lookout Mountain?". Journal of Cherokee Studies, Vol. V, No. 1, pp. 30–40. (Cherokee: Museum of the Cherokee Indian, 1980).
- Evans, E. Raymond, and Vicky Karhu. "Williams Island: A Source of Significant Material in the Collections of the Museum of the Cherokee". Journal of Cherokee Studies, Vol. 9, No. 1, pp. 10–34. (Cherokee: Museum of the Cherokee Indian, 1984).
- Hamer, Philip M. Tennessee: A History, 1673–1932. (New York: American History Association, 1933).
- Haywood, W.H. The Civil and Political History of the State of Tennessee from its Earliest Settlement up to the Year 1796. (Nashville: Methodist Episcopal Publishing House, 1891).
- Henderson, Archibald. The Conquest Of The Old Southwest: The Romantic Story Of The Early Pioneers Into Virginia, The Carolinas, Tennessee And Kentucky 1740 To 1790. (Whitefish: Kessinger Publishing, 2004).
- Hoig, Stanley. The Cherokees and Their Chiefs: In the Wake of Empire. (Fayetteville: University of Arkansas Press, 1998).
- King, Duane H. The Cherokee Indian Nation: A Troubled History. (Knoxville: University of Tennessee Press, 1979).
- Klink, Karl, and James Talman, ed. The Journal of Major John Norton. (Toronto: Champlain Society, 1970).
- Kneberg, Madeline and Thomas M.N. Lewis. Tribes That Slumber. (Knoxville: University of Tennessee Press, 1958).
- McLoughlin, William G. Cherokee Renascence in the New Republic. (Princeton: Princeton University Press, 1992).
- Mooney, James. The Ghost Dance Religion and the Sioux Outbreak of 1890. (Washington: Government Printing Office, 1896).
- Mooney, James. Myths of the Cherokee and Sacred Formulas of the Cherokee, Smithsonian Institution, 1891 and 1900; reprinted, (Nashville: Charles and Randy Elder-Booksellers, 1982).
- Moore, John Trotwood and Austin P. Foster. Tennessee, The Volunteer State, 1769–1923, Vol. 1. (Chicago: S. J. Clarke Publishing Co., 1923).
- Ramsey, James Gettys McGregor. The Annals of Tennessee to the End of the Eighteenth Century. (Chattanooga: Judge David Campbell, 1926).
- Royce, C.C. "The Cherokee Nation of Indians: A narrative of their official relations with the Colonial and Federal Governments". Fifth Annual Report, Bureau of American Ethnology, 1883–1884. (Washington: Government Printing Office, 1889).
- Starr, Emmet. History of the Cherokee Indians, and their Legends and Folklore. (Fayetteville: Indian Heritage Assn., 1967).
- Tanner, Helen Hornbeck. "Cherokees in the Ohio Country". Journal of Cherokee Studies, Vol. III, No. 2, pp. 95–103. (Cherokee: Museum of the Cherokee Indian, 1978).
- Wilkins, Thurman. Cherokee Tragedy: The Ridge Family and the Decimation of a People. (New York: Macmillan Company, 1970).
- Williams, Samuel Cole. Early Travels in the Tennessee Country, 1540–1800. (Johnson City: Watauga Press, 1928).
- Wilson, Frazer Ells. The Peace of Mad Anthony. (Greenville: Chas. B. Kemble Book and Job Printer, 1907).
- The Cherokee Nation
- United Keetoowah Band
- Eastern Band of Cherokee Indians (official site)
- Annual report of the Bureau of Ethnology to the Secretary of the Smithsonian Institution (1897/98: pt.1), Contains The Myths of The Cherokee, by James Mooney
- Muscogee (Creek) Nation of Oklahoma (official site)
- Account of 1786 conflicts between Nashville-area settlers and natives (second item in historical column)
- The journal of Major John Norton
- Emmett Starr's History of the Cherokee Indians | http://en.wikipedia.org/wiki/Chickamauga_(tribe) | 13 |
61 | Carbon dioxide sink
A carbon dioxide (CO2) sink is a carbon reservoir that is increasing in size, and is the opposite of a carbon dioxide "source". The main natural sinks are the oceans and plants and other organisms that use photosynthesis to remove carbon from the atmosphere by incorporating it into biomass and release oxygen into the atmosphere. This concept of CO2 sinks has become more widely known because the Kyoto Protocol allows the use of carbon dioxide sinks as a form of carbon offset.
Carbon sequestration is the term describing processes that remove carbon dioxide from the atmosphere. To help mitigate global warming, a variety of means of artificially capturing and storing carbon (while releasing oxygen) — as well as of enhancing natural sequestration processes — are being explored.
Carbon dioxide is incorporated into forests and forest soils by trees and other plants. Through photosynthesis, plants absorb carbon dioxide from the atmosphere, store the carbon in sugars, starch and cellulose, and release the oxygen into the atmosphere. A young forest, composed of growing trees, absorbs carbon dioxide and acts as a sink. Mature forests, made up of a mix of various aged trees as well as dead and decaying matter, may be carbon neutral above ground. In the soil, however, the gradual build-up of slowly decaying organic material will continue to accumulate carbon, but at a slower rate than an immature forest. Organic material in the form of humus in the forest floor accumulates in greater quantity in cooler regions such as the boreal and taiga forests. At warmer temperatures humus is oxidized rapidly; this, in addition to high rainfall levels, is the reason why tropical jungles have very thin organic soils. The forest eco-system may eventually become carbon neutral. Forest fires release absorbed carbon back into the atmosphere, as does deforestation due to rapidly increased oxidation of soil organic matter.
The dead trees, plants, and moss in peat bogs undergo slow anaerobic decomposition below the surface of the bog. This decomposition is slow enough that in many cases the bog grows faster than it decays, fixing more carbon from the atmosphere than is released. Over time, the peat grows deeper. Peat bogs hold approximately one-quarter of the carbon stored in land plants and soils.
Under some conditions, forests and peat bogs may become sources of CO2, such as when a forest is flooded by the construction of a hydroelectric dam. Unless the forests and peat are harvested before flooding, the rotting vegetation is a source of CO2 and methane comparable in magnitude to the amount of carbon released by a fossil-fuel-powered plant of equivalent power.
Oceans are natural CO2 sinks, and represent the largest active carbon sink on Earth. This role as a sink for CO2 is driven by two processes, the solubility pump and the biological pump. The former is primarily a function of differential CO2 solubility in seawater and the thermohaline circulation, while the latter is the sum of a series of biological processes that transport carbon (in organic and inorganic forms) from the surface euphotic zone to the ocean's interior. A small fraction of the organic carbon transported by the biological pump to the seafloor is buried in anoxic conditions under sediments and ultimately forms fossil fuels such as oil and natural gas.
At the present time, approximately one third of anthropogenic emissions are estimated to be entering the ocean. The solubility pump is the primary mechanism driving this, with the biological pump playing a negligible role. This stems from the limitation of the biological pump by the ambient light and nutrients required by the phytoplankton that ultimately drive it. Total inorganic carbon is not believed to limit primary production in the oceans, so its increasing availability in the ocean does not directly affect production (the situation on land is different, since enhanced atmospheric levels of CO2 essentially "fertilize" land plant growth). However, ocean acidification by invading anthropogenic CO2 may affect the biological pump by negatively impacting calcifying organisms such as coccolithophores, foraminiferans and pteropods. Climate change may also affect the biological pump in the future by warming and stratifying the surface ocean, thus reducing the supply of limiting nutrients to surface waters. Although the buffering capacity of sea water is keeping the pH nearly constant at present, the pH will eventually drop, and at that point the disruption of life in the sea may turn it into a carbon source rather than a carbon sink. The characteristic of buffered systems is to hold the pH reasonably constant over a large introduction of acid and then drop suddenly with a small additional amount.
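To illustrate that buffering behaviour, the following sketch applies the Henderson-Hasselbalch relation to a generic bicarbonate buffer. The pKa and the starting amounts are assumed round numbers chosen for illustration; real seawater carbonate chemistry involves several coupled equilibria and is not modelled here.

```python
# Minimal sketch of buffer behaviour using the Henderson-Hasselbalch relation.
# Starting amounts and pKa are assumed illustrative values, not seawater chemistry.
import math

pKa = 6.35            # approximate first dissociation constant of carbonic acid
bicarbonate = 1.0     # mol of HCO3- available to neutralize acid (assumed)
carbonic_acid = 1.0   # mol of dissolved CO2/H2CO3 already present (assumed)

for added_acid in [0.0, 0.5, 0.9, 0.99, 1.0]:
    # Strong acid converts HCO3- into H2CO3 until the buffer is exhausted.
    hco3 = bicarbonate - added_acid
    h2co3 = carbonic_acid + added_acid
    if hco3 > 0:
        ph = pKa + math.log10(hco3 / h2co3)
        print(f"acid added: {added_acid:4.2f} mol -> pH ~ {ph:.2f}")
    else:
        print(f"acid added: {added_acid:4.2f} mol -> buffer exhausted, pH collapses")
```

The printed values fall slowly at first and then by a full pH unit over the last one percent of added acid, after which the buffer is spent, which is the qualitative point made above.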
Carbon in the form of plant organic matter is also sequestered in soils: soils contain more carbon than vegetation and the atmosphere combined. Soil organic carbon (humus) levels in many agricultural areas have been severely depleted. Organic material in the form of humus accumulates below about 25 degrees Celsius; above this temperature, humus is oxidized much more rapidly. This is part of the reason why tropical soils under jungles are so thin, despite the rapid accumulation of organic material on the jungle floor (the other being extensive rainfall leaching soluble components vital to organic soil structure). Areas where shifting cultivation or "slash-and-burn" agriculture is practised are generally fertile for only 2–3 years before they are abandoned. These tropical jungles are similar to coral reefs in that they are highly efficient at conserving and circulating necessary nutrients, which explains their lushness in a nutrient desert.
Grasslands contribute to soil organic matter, mostly in the form of their extensive fibrous root mats. Much of this organic matter can remain unoxidized for long periods of time, depending on rainfall conditions, the length of the winter season, and the frequency of naturally occurring lightning-induced grass fires necessary to recycle inorganic compounds from existing plant material. While these fires release carbon dioxide, they improve the quality of the grasslands overall, in turn increasing the amount of carbon retained in humic material. They also deposit carbon directly into the soil in the form of char that does not significantly degrade back to carbon dioxide.
Enhancing natural sequestration
Future sea level rise
In 2001, the Intergovernmental Panel on Climate Change's Third Assessment Report predicted that by 2100, global warming will lead to a sea level rise of 9 to 88 cm. At that time no significant acceleration in the rate of sea level rise during the 20th century had been detected. Subsequently, Church and White found acceleration of 0.013 ± 0.006 mm/yr².
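To get a rough feel for what an acceleration of this size means, the arithmetic sketch below projects the extra rise a constant 0.013 mm/yr² acceleration would add over a century. The 1.7 mm/yr baseline rate is an assumed illustrative figure, not a value taken from this article.

```python
# Rough arithmetic sketch of constant-acceleration sea level rise over a century.
# The baseline rate is an assumption for illustration only.
base_rate = 1.7        # mm/yr (assumed starting rate)
acceleration = 0.013   # mm/yr^2 (Church and White central estimate)
years = 100

linear_part = base_rate * years               # rise from the constant rate
accel_part = 0.5 * acceleration * years ** 2  # extra rise from the acceleration term
print(f"linear component:       {linear_part:.0f} mm")
print(f"acceleration component: {accel_part:.0f} mm")
print(f"total over {years} years: {linear_part + accel_part:.0f} mm")
```

Under these assumptions the acceleration contributes roughly 65 mm on top of a 170 mm linear rise, showing how even a small acceleration compounds over a century.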
These sea level rises could lead to difficulties for shore-based communities: for example, many major cities such as London and New Orleans already need storm-surge defenses, and would need more if sea level rose, though they also face issues such as sinking land.
Future sea level rise, like the recent rise, is not expected to be globally uniform (details below). Some regions show a sea level rise substantially more than the global average (in many cases of more than twice the average), and others a sea level fall. However, models disagree as to the likely pattern of sea level change.
Intergovernmental Panel on Climate Change results
The results from the IPCC (TAR) sea level chapter (convening authors John A. Church and Jonathan M. Gregory) are given below.
The sum of these components indicates a rate of eustatic sea level rise (corresponding to a change in ocean volume) from 1910 to 1990 ranging from –0.8 to 2.2 mm/yr, with a central value of 0.7 mm/yr. The upper bound is close to the observational upper bound (2.0 mm/yr), but the central value is less than the observational lower bound (1.0 mm/yr), i.e., the sum of components is biased low compared to the observational estimates. The sum of components indicates an acceleration of only 0.2 (mm/yr)/century, with a range from –1.1 to 0.7 (mm/yr)/century, consistent with observational finding of no acceleration in sea level rise during the 20th century. The estimated rate of sea level rise from anthropogenic climate change from 1910 to 1990 (from modeling studies of thermal expansion, glaciers and ice sheets) ranges from 0.3 to 0.8 mm/yr. It is very likely that 20th century warming has contributed significantly to the observed sea level rise, through thermal expansion of sea water and widespread loss of land ice.
A common perception is that the rate of sea level rise should have accelerated during the latter half of the 20th century, but tide gauge data for the 20th century show no significant acceleration. We have obtained estimates based on AOGCMs for the terms directly related to anthropogenic climate change in the 20th century, i.e., thermal expansion, ice sheets, glaciers and ice caps... The total computed rise indicates an acceleration of only 0.2 (mm/yr)/century, with a range from -1.1 to 0.7 (mm/yr)/century, consistent with observational finding of no acceleration in sea level rise during the 20th century. The sum of terms not related to recent climate change is -1.1 to 0.9 mm/yr (i.e., excluding thermal expansion, glaciers and ice caps, and changes in the ice sheets due to 20th century climate change). This range is less than the observational lower bound of sea level rise. Hence it is very likely that these terms alone are an insufficient explanation, implying that 20th century climate change has made a contribution to 20th century sea level rise.
Uncertainties and criticisms regarding IPCC results
- Tide records with a rate of 180 mm/century going back to the 19th century show no measurable acceleration throughout the late 19th and first half of the 20th century. The IPCC attributes about 60 mm/century to melting and other eustatic processes, leaving a residual of 120 mm of 20th century rise to be accounted for. Global ocean temperatures by Levitus et al are in accord with coupled ocean/atmosphere modeling of greenhouse warming, with heat-related change of 30 mm. Melting of polar ice sheets at the upper limit of the IPCC estimates could close the gap, but severe limits are imposed by the observed perturbations in Earth rotation. (Munk 2002)
- By the time of the IPCC TAR, attribution of sea level changes had a large unexplained gap between direct and indirect estimates of global sea level rise. Most direct estimates from tide gauges give 1.5–2.0 mm/yr, whereas indirect estimates based on the two processes responsible for global sea level rise, namely mass and volume change, are significantly below this range. Estimates of the volume increase due to ocean warming give a rate of about 0.5 mm/yr and the rate due to mass increase, primarily from the melting of continental ice, is thought to be even smaller. One study confirmed tide gauge data is correct, and concluded there must be a continental source of 1.4 mm/yr of fresh water. (Miller 2004)
- From (Douglas 2002): "In the last dozen years, published values of 20th century GSL rise have ranged from 1.0 to 2.4 mm/yr. In its Third Assessment Report, the IPCC discusses this lack of consensus at length and is careful not to present a best estimate of 20th century GSL rise. By design, the panel presents a snapshot of published analysis over the previous decade or so and interprets the broad range of estimates as reflecting the uncertainty of our knowledge of GSL rise. We disagree with the IPCC interpretation. In our view, values much below 2 mm/yr are inconsistent with regional observations of sea-level rise and with the continuing physical response of Earth to the most recent episode of deglaciation."
- The strong 1997-1998 El Niño caused regional and global sea level variations, including a temporary global increase of perhaps 20 mm. The IPCC TAR's examination of satellite trends says the major 1997/98 El Niño-Southern Oscillation (ENSO) event could bias the above estimates of sea level rise and also indicate the difficulty of separating long-term trends from climatic variability.
Effects of sea level rise
Based on the projected increases stated above, the IPCC TAR WG II report notes that current and future climate change would be expected to have a number of impacts, particularly on coastal systems. Such impacts may include increased coastal erosion, higher storm-surge flooding, inhibition of primary production processes, more extensive coastal inundation, changes in surface water quality and groundwater characteristics, increased loss of property and coastal habitats, increased flood risk and potential loss of life, loss of nonmonetary cultural resources and values, impacts on agriculture and aquaculture through decline in soil and water quality, and loss of tourism, recreation, and transportation functions.
There is an implication that many of these impacts will be detrimental. The report does, however, note that owing to the great diversity of coastal environments; regional and local differences in projected relative sea level and climate changes; and differences in the resilience and adaptive capacity of ecosystems, sectors, and countries, the impacts will be highly variable in time and space and will not necessarily be negative in all situations.
Statistical data on the human impact of sea level rise is scarce. A study in the April, 2007 issue of Environment and Urbanization reports that 634 million people live in coastal areas within 30 feet of sea level. The study also reported that about two thirds of the world's cities with over five million people are located in these low-lying coastal areas.
Are islands "sinking"
IPCC assessments have suggested that deltas and small island states may be particularly vulnerable to sea level rise. Relative sea level rise (mostly caused by subsidence) is causing substantial loss of lands in some deltas. However, sea level changes have not yet been implicated in any substantial environmental, humanitarian, or economic losses to small island states. Previous claims have been made that parts of the island nations of Tuvalu were "sinking" as a result of sea level rise. However, subsequent reviews have suggested that the loss of land area was the result of erosion during and following the actions of 1997 cyclones Gavin, Hina, and Keli. According to climate skeptic Patrick J. Michaels, "In fact, areas...such as [the island of] Tuvalu show substantial declines in sea level over that period."
Reuters has reported that other Pacific islands, including Tegua island in Vanuatu, face a severe risk. Claims that the Vanuatu data show no net sea level rise are not substantiated by tide gauge data. Vanuatu tide gauge data show a net rise of about 50 mm from 1994 to 2004. Linear regression of this short time series suggests a rate of rise of about 7 mm/yr, though there is considerable variability and the exact threat to the islands is difficult to assess using such a short time series.
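The difficulty of extracting a trend from such a short record can be illustrated with a small regression experiment. The sketch below fits a straight line to synthetic monthly data built from an assumed 5 mm/yr trend plus interannual noise; it is not the actual Vanuatu gauge record.

```python
# Sketch of why a short, noisy tide-gauge record yields an uncertain trend.
# The data are synthetic (assumed trend plus random noise), not real observations.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / 12)            # ten years of monthly samples
true_trend = 5.0                        # mm/yr, assumed for the synthetic record
sea_level = true_trend * t + rng.normal(0, 40, t.size)  # ~40 mm of variability

fitted_trend, intercept = np.polyfit(t, sea_level, 1)
print(f"fitted trend: {fitted_trend:.1f} mm/yr (true value used: {true_trend} mm/yr)")
```

With only ten years of data and tens of millimetres of natural variability, the fitted slope can easily differ from the underlying trend by a few mm/yr, which is why short series like this one are treated cautiously.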
Numerous options have been proposed that would assist island nations to adapt to rising sea level.
In Intergovernmental Panel on Climate Change (IPCC) reports, equilibrium climate sensitivity refers to the equilibrium change in global mean surface temperature following a doubling of the atmospheric (equivalent) CO2 concentration. The IPCC Fourth Assessment Report estimates this value as likely to be in the range 2 to 4.5°C, with a best estimate of about 3°C, and very unlikely to be less than 1.5°C; values substantially higher than 4.5°C cannot be excluded, but agreement of models with observations is not as good for those values. This is a slight change from the IPCC Third Assessment Report, which said it was "likely to be in the range of 1.5 to 4.5°C". More generally, equilibrium climate sensitivity refers to the equilibrium change in surface air temperature following a unit change in radiative forcing, expressed in units of °C/(W/m2). In practice, evaluating the equilibrium climate sensitivity from models requires very long simulations with coupled global climate models, or it may be deduced from observations.
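The two conventions above (°C per CO2 doubling and °C per W/m² of forcing) can be related through the commonly used simplified forcing expression ΔF ≈ 5.35 ln(C/C0) W/m², which gives roughly 3.7 W/m² for a doubling. The short sketch below converts the AR4 range between the two conventions; treat it as illustrative arithmetic rather than an IPCC calculation.

```python
# Convert sensitivity quoted per CO2 doubling into degC per (W/m^2) of forcing,
# using the widely cited simplified forcing formula dF = 5.35 * ln(C/C0) W/m^2.
import math

forcing_2x = 5.35 * math.log(2)   # ~3.7 W/m^2 for a CO2 doubling
for sensitivity_per_doubling in (1.5, 3.0, 4.5):   # degC per doubling (AR4 range)
    lam = sensitivity_per_doubling / forcing_2x    # degC per (W/m^2)
    print(f"{sensitivity_per_doubling} degC per doubling ~ {lam:.2f} degC/(W/m^2)")
```

A best estimate of 3°C per doubling therefore corresponds to roughly 0.8°C per W/m² of forcing.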
Gregory et al. (2002) estimate a lower bound of 1.6°C by estimating the change in Earth's radiation budget and comparing it to the global warming observed over the 20th century. Recent work by Annan and Hargreaves combines independent observational and model based estimates to produce a mean of about 3°C, and only a 5% chance of exceeding 4.5°C.
Shaviv (2005) carried out a similar analysis for 6 different time scales, ranging from the 11-yr solar cycle to the climate variations over geological time scales. He found a typical sensitivity of 2.0°C (ranging between 0.9°C and 2.9°C at 99% confidence) if there is no cosmic-ray climate connection, or a typical sensitivity of 1.3°C (between 0.9°C and 2.5°C at 99% confidence), if the cosmic-ray climate link is real.
Andronova and Schlesinger (2001) (using simple climate models) found that it could lie between 1 and 10°C, with a 54 percent likelihood that the climate sensitivity lies outside the IPCC range. The exact range depends on which factors are most important during the instrumental period: "At present, the most likely scenario is one that includes anthropogenic sulfate aerosol forcing but not solar variation. Although the value of the climate sensitivity in that case is most uncertain, there is a 70 percent chance that it exceeds the maximum IPCC value. This is not good news," said Schlesinger.
Forest et al. (2002) using patterns of change and the MIT EMIC estimated a 95% confidence interval of 1.4–7.7°C for the climate sensitivity, and a 30% probability that sensitivity was outside the 1.5 to 4.5°C range.
Frame et al. (2005) and Allen et al. note that the size of the confidence limits depends on the nature of the prior assumptions made.
Climate sensitivity is not the same as the expected climate change at, say, 2100: the TAR reports this to be an increase of 1.4 to 5.8°C over 1990.
The transient climate response (TCR), a term first used in the TAR, is the temperature change at the time of CO2 doubling in a model run with CO2 increasing at 1% per year.
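A quick way to see what the 1% per year scenario implies: compound growth at that rate doubles the CO2 concentration after roughly 70 years, which is the point at which the TCR is read off. A one-line check:

```python
# Time for CO2 to double under compound 1%/yr growth.
import math

years_to_double = math.log(2) / math.log(1.01)
print(f"CO2 doubles after about {years_to_double:.0f} years of 1%/yr growth")  # ~70 years
```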
The effective climate sensitivity is a related measure that circumvents the need for long equilibrium simulations. It is evaluated from model output for evolving non-equilibrium conditions, and is a measure of the strength of the feedbacks at a particular time; it may vary with forcing history and climate state.
Carbon dioxide is a chemical compound composed of two oxygen atoms covalently bonded to a single carbon atom. It is a gas at standard temperature and pressure and exists in that state in Earth's atmosphere, currently at a globally averaged concentration of approximately 385 ppm by volume, although this varies both by location and over time. Carbon dioxide's chemical formula is CO2.
In general, it is exhaled by animals and utilized by plants during photosynthesis. Additional carbon dioxide is created by the combustion of fossil fuels or vegetable matter, among other chemical processes.
Carbon dioxide is an important greenhouse gas because of its ability to absorb many infrared wavelengths of radiation and because of the length of time it stays in the Earth's atmosphere. Due to this, and the role it plays in the respiration and photosynthesis of plants, it is a major component of the carbon cycle.
In its solid state, carbon dioxide is commonly called dry ice. Carbon dioxide has no liquid state at pressures below 5.1 atm.
In the Earth's atmosphere
Figure: Atmospheric CO2 concentrations measured at Mauna Loa Observatory.
Carbon dioxide in earth's atmosphere is considered a trace gas, and is measured in parts per million. Current concentration levels average approximately 385 ppm, which represents a total of around 800 gigatons of carbon. Its concentration can vary considerably on a regional basis: in urban areas it is generally higher, and indoors can reach 10 times the atmospheric concentration.
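The ppm and gigaton figures quoted above can be cross-checked with the commonly used conversion of roughly 2.13 gigatonnes of carbon per ppm of atmospheric CO2; the factor is approximate and is used here only for illustration.

```python
# Rough consistency check of the figures in the paragraph above.
GT_CARBON_PER_PPM = 2.13   # approximate conversion factor (GtC per ppm of CO2)
concentration_ppm = 385
total_gtc = concentration_ppm * GT_CARBON_PER_PPM
print(f"~{total_gtc:.0f} GtC in the atmosphere at {concentration_ppm} ppm")
# prints roughly 820 GtC, consistent with the ~800 gigatons cited above
```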
Due to human activities such as the combustion of fossil fuels and deforestation, the concentration of atmospheric carbon dioxide has increased by about 35% since the beginning of the age of industrialization.
Up to 40% of the gas emitted by a volcano during a subaerial volcanic eruption is carbon dioxide. However, human activities currently release more than 130 times the amount of CO2 emitted by volcanoes. According to the best estimates, volcanoes release about 130-230 million tonnes (145-255 million tons) of CO2 into the atmosphere each year. Emissions of CO2 by human activities amount to about 27 billion tonnes per year (30 billion tons).
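The comparison above can be checked against the emission figures given in the same paragraph; the short calculation below simply redoes the division.

```python
# Ratio of human to volcanic CO2 emissions, using the figures quoted above.
volcanic_low, volcanic_high = 0.130, 0.230   # billion tonnes CO2 per year
human_emissions = 27.0                       # billion tonnes CO2 per year
low_ratio = human_emissions / volcanic_high
high_ratio = human_emissions / volcanic_low
print(f"human/volcanic ratio: roughly {low_ratio:.0f} to {high_ratio:.0f}")
# roughly 120 to 210 depending on the volcanic estimate used, consistent in
# order of magnitude with the "more than 130 times" figure in the text
```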
Carbon dioxide is an end product in organisms that obtain energy from breaking down sugars, fats and amino acids with oxygen as part of their metabolism, in a process known as cellular respiration. This includes all plants, animals, many fungi and some bacteria. In higher animals, the carbon dioxide travels in the blood from the body's tissues to the lungs where it is exhaled. In plants using photosynthesis, carbon dioxide is absorbed from the atmosphere.
Role in photosynthesis
Plants remove carbon dioxide from the atmosphere by photosynthesis, also called carbon assimilation, which uses light energy to produce organic plant materials by combining carbon dioxide and water. Free oxygen is released as gas from the decomposition of water molecules, while the hydrogen is split into its protons and electrons and used to generate chemical energy via photophosphorylation. This energy is required for the fixation of carbon dioxide in the Calvin cycle to form sugars. These sugars can then be used for growth within the plant through respiration. Carbon dioxide gas must be introduced into greenhouses to maintain plant growth, as even in vented greenhouses the concentration of carbon dioxide can fall during daylight hours to as low as 200 ppm, at which level photosynthesis is significantly reduced. Venting can help offset the drop in carbon dioxide, but will never raise it back to ambient levels of 340 ppm. Carbon dioxide supplementation is the only known method to overcome this deficiency. Direct introduction of pure carbon dioxide is ideal, but rarely done because of cost constraints. Most greenhouses burn methane or propane to supply the additional CO2, but care must be taken to have a clean burning system as increased levels of nitrogen oxides (NOx) result in reduced plant growth. Sensors for sulfur dioxide (SO2) and NOx are expensive and difficult to maintain; accordingly most systems come with a carbon monoxide (CO) sensor under the assumption that high levels of carbon monoxide mean that significant amounts of NOx are being produced. Plants can potentially grow up to 50 percent faster in concentrations of 1,000 ppm CO2 when compared with ambient conditions.
Plants also emit CO2 during respiration, so it is only during growth stages that plants are net absorbers. For example, a growing forest will absorb many tonnes of CO2 each year; a mature forest, however, will produce as much CO2 from respiration and the decomposition of dead specimens (e.g. fallen branches) as it uses in the biosynthesis of growing plants. Regardless of this, mature forests are still valuable carbon sinks, helping maintain balance in the Earth's atmosphere. Additionally, and crucially to life on earth, phytoplankton photosynthesis absorbs dissolved CO2 in the upper ocean and thereby promotes the absorption of CO2 from the atmosphere.
Carbon dioxide content in fresh air varies between 0.03% (300 ppm) and 0.06% (600 ppm), depending on the location. A person's exhaled breath is approximately 4.5% carbon dioxide. It is dangerous when inhaled in high concentrations (greater than 5% by volume, or 50,000 ppm). The current threshold limit value (TLV) or maximum level that is considered safe for healthy adults for an eight-hour work day is 0.5% (5,000 ppm). The maximum safe level for infants, children, the elderly and individuals with cardio-pulmonary health issues is significantly less.
These figures are valid for pure carbon dioxide. In indoor spaces occupied by people the carbon dioxide concentration will reach higher levels than in pure outdoor air. Concentrations higher than 1,000 ppm will cause discomfort in more than 20% of occupants, and the discomfort will increase with increasing CO2 concentration. The discomfort will be caused by various gases coming from human respiration and perspiration, and not by CO2 itself. At 2,000 ppm the majority of occupants will feel a significant degree of discomfort, and many will develop nausea and headaches. The CO2 concentration between 300 and 2,500 ppm is used as an indicator of indoor air quality.
Acute carbon dioxide toxicity is sometimes known by the names given to it by miners: black damp, choke damp, or stythe. Miners would try to alert themselves to dangerous levels of carbon dioxide in a mine shaft by bringing a caged canary with them as they worked; the more sensitive canary would die before CO2 reached levels toxic to people, giving warning. Choke damp caused a great loss of life at Lake Nyos in Cameroon in 1986, when an upwelling of CO2-laden lake water quickly blanketed a large surrounding populated area. The heavier carbon dioxide forced out the life-sustaining oxygen near the surface, killing nearly two thousand people.
Carbon dioxide ppm levels (CDPL) are a surrogate for measuring indoor pollutants that may cause occupants to grow drowsy, get headaches, or function at lower activity levels. To eliminate most indoor air quality complaints, total indoor CDPL must be reduced to below 600. NIOSH considers that indoor air concentrations that exceed 1,000 are a marker suggesting inadequate ventilation. ASHRAE recommends they not exceed 1,000 inside a space. OSHA limits concentrations in the workplace to 5,000 for prolonged periods. The U.S. National Institute for Occupational Safety and Health limits brief exposures (up to ten minutes) to 30,000 and considers CDPL exceeding 40,000 as "immediately dangerous to life and health." People who breathe 50,000 for more than half an hour show signs of acute hypercapnia, while breathing 70,000–100,000 can produce unconsciousness in only a few minutes. Accordingly, carbon dioxide, either as a gas or as dry ice, should be handled only in well-ventilated areas.
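The exposure figures quoted above lend themselves to a small summary routine. The sketch below is an editorial illustration only — the function name and the wording of each band are assumptions, while the numeric thresholds are taken directly from this section.

```python
# Illustrative sketch: classify a CO2 concentration (in ppm) against the
# thresholds quoted above. The bands and wording are assumptions for the example.

def classify_co2_ppm(ppm: float) -> str:
    """Return a rough description of an indoor CO2 level given in ppm."""
    if ppm < 600:
        return "typical target for complaint-free indoor air"
    if ppm < 1000:
        return "acceptable, though ASHRAE recommends staying below 1,000"
    if ppm < 5000:
        return "above the 1,000 ppm inadequate-ventilation marker (NIOSH)"
    if ppm < 30000:
        return "above the OSHA 5,000 ppm limit for prolonged exposure"
    if ppm < 40000:
        return "only brief exposures (up to ten minutes) considered tolerable"
    return "immediately dangerous to life and health (>40,000 ppm)"

if __name__ == "__main__":
    for level in (385, 800, 2000, 10000, 50000):
        print(level, "->", classify_co2_ppm(level))
```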
Greenhouse effect
The greenhouse effect, discovered by Joseph Fourier in 1829 and first investigated quantitatively by Svante Arrhenius in 1896, is the process in which the emission of infrared radiation by the atmosphere warms a planet's surface. The name comes from an analogy with the warming of air inside a greenhouse compared to the air outside it. The Earth's average surface temperature is about 20-30°C warmer than it would be without the greenhouse effect. In addition to the Earth, Mars and especially Venus have greenhouse effects.
A schematic representation of the exchanges of energy between outer space, the Earth's atmosphere, and the Earth surface. The ability of the atmosphere to capture and recycle energy emitted by the Earth surface is the defining characteristic of the greenhouse effect.
Anthropogenic greenhouse effect
CO2 production from increased industrial activity (fossil fuel burning) and other human activities such as cement production and tropical deforestation has increased the CO2 concentrations in the atmosphere. Measurements of carbon dioxide amounts from Mauna Loa observatory show that CO2 has increased from about 313 ppm (parts per million) in 1960 to about 375 ppm in 2005. The current observed amount of CO2 exceeds the geological record of CO2 maxima (~300 ppm) from ice core data (Hansen, J., Climatic Change, 68, 269, 2005 ISSN 0165-0009).
Because it is a greenhouse gas, elevated CO2 levels will increase global mean temperature; based on an extensive review of the scientific literature, the Intergovernmental Panel on Climate Change concludes that "most of the observed increase in globally averaged temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations".
Greenhouse gases (GHG) are components of the atmosphere that contribute to the greenhouse effect. Some greenhouse gases occur naturally in the atmosphere, while others result from human activities such as burning of fossil fuels such as coal. Greenhouse gases include water vapor, carbon dioxide, methane, nitrous oxide, and ozone.
The "Greenhouse effect"
When sunlight reaches the surface of the Earth, some of it is absorbed and warms the Earth. Because the Earth's surface is much cooler than the sun, it radiates energy at much longer wavelengths than does the sun. The atmosphere absorbs these longer wavelengths more effectively than it does the shorter wavelengths from the sun. The absorption of this longwave radiant energy warms the atmosphere; the atmosphere also is warmed by transfer of sensible and latent heat from the surface. Greenhouse gases also emit longwave radiation both upward to space and downward to the surface. The downward part of this longwave radiation emitted by the atmosphere is the "greenhouse effect." The term is a misnomer, as this process is not the mechanism that warms greenhouses.
The major natural greenhouse gases are water vapor, which causes about 36-70% of the greenhouse effect on Earth (not including clouds); carbon dioxide, which causes 9-26%; methane, which causes 4-9%, and ozone, which causes 3-7%. It is not possible to state that a certain gas causes a certain percentage of the greenhouse effect, because the influences of the various gases are not additive. (The higher ends of the ranges quoted are for the gas alone; the lower ends, for the gas counting overlaps.)
Other greenhouse gases include, but are not limited to, nitrous oxide, sulfur hexafluoride, hydrofluorocarbons, perfluorocarbons and chlorofluorocarbons.
The major atmospheric constituents (nitrogen, N2 and oxygen, O2) are not greenhouse gases. This is because homonuclear diatomic molecules such as N2 and O2 neither absorb nor emit infrared radiation, as there is no net change in the dipole moment of these molecules when they vibrate. Molecular vibrations occur at energies that are of the same magnitude as the energy of the photons of infrared light. Heteronuclear diatomics such as CO or HCl absorb IR; however, these molecules are short-lived in the atmosphere owing to their reactivity and solubility. As a consequence they do not contribute significantly to the greenhouse effect.
Late 19th century scientists experimentally discovered that N2 and O2 did not absorb infrared radiation (called, at that time, "dark radiation") and that CO2 and many other gases did absorb such radiation. It was recognized in the early 20th century that the known major greenhouse gases in the atmosphere caused the earth's temperature to be higher than it would have been without the greenhouse gases.
Anthropogenic greenhouse gases
The projected temperature increase for a range of greenhouse gas stabilization scenarios (the coloured bands). The black line in the middle of the shaded area indicates 'best estimates'; the red and the blue lines indicate the likely limits. From the work of IPCC AR4 2007.
The concentrations of several greenhouse gases have increased over time. Human activity increases the greenhouse effect primarily through release of carbon dioxide, but human influences on other greenhouse gases can also be important. Some of the main sources of greenhouse gases due to human activity include:
-burning of fossil fuels and deforestation leading to higher carbon dioxide concentrations;
-livestock and paddy rice farming, land use and wetland changes, pipeline losses, and covered vented landfill emissions leading to higher methane atmospheric concentrations. Many of the newer style fully vented septic systems that enhance and target the fermentation process also are major sources of atmospheric methane;
-use of chlorofluorocarbons (CFCs) in refrigeration systems, and use of CFCs and halons in fire suppression systems and manufacturing processes.
-agricultural activities, including the use of fertilizers, that lead to higher nitrous oxide concentrations.
The seven sources of CO2 from fossil fuel combustion are (with percentage contributions for 2000-2004):
1-Solid fuels (e.g. coal): 35%
2-Liquid fuels (e.g. petrol): 36%
3-Gaseous fuels (e.g. natural gas): 20%
4-Flaring gas industrially and at wells: <1%
5-Cement production: 3%
6-Non-fuel hydrocarbons: <1%
7-The "international bunkers" of shipping and air transport not included in national inventories: 4%
Greenhouse gas emissions from industry, transportation (about one third of total US global warming pollution) and agriculture are very likely the main cause of recently observed global warming. Major sources of an individual's GHG emissions include home heating and cooling, electricity consumption, and automobile use. Corresponding conservation measures include improving home insulation, installing cellular shades and compact fluorescent lamps, and choosing vehicles with high fuel economy (miles per gallon).
Carbon dioxide, methane, nitrous oxide and three groups of fluorinated gases (sulfur hexafluoride, HFCs, and PFCs) are the major greenhouse gases and the subject of the Kyoto Protocol, which entered into force in 2005.
CFCs, although greenhouse gases, are regulated by the Montreal Protocol, which was motivated by CFCs' contribution to ozone depletion rather than by their contribution to global warming. Note that ozone depletion has only a minor role in greenhouse warming though the two processes often are confused in the popular media.
The hyperboloid
In the diagram at the right, let AC be the axis of rotation, and let OP be any line not parallel to or intersecting the axis. Let CO be the shortest distance R between the two lines, which will be normal to both. Let α be the angle between OP and OB, parallel to AC. Let (x,y,z) be rectangular coordinates with origin at C, the z-axis along AC and the x-axis along CO. Take any point P on OP, which is a distance √[R² + (z tan α)²] from the axis of rotation. P will be a point on the hyperboloid, and will describe a circle about the axis as OP is rotated. The equation of the surface is then x² + y² = R² + z² tan²α. Rotation of a line making an angle -α with OB would give the same surface. The surface is then obviously doubly ruled. The smallest horizontal section is described by point O, a circle of radius R. This is called the throat or gorge of the hyperboloid.
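The rotating-line construction is easy to check numerically. The following sketch (an illustration added here, not part of the original exposition) places the generator through the point (R, 0, 0) with direction (0, sin α, cos α), rotates it about the z-axis, and verifies that every sampled point satisfies x² + y² − z² tan²α = R².

```python
import math

R, alpha = 1.0, math.radians(35.0)   # gorge radius and generator angle (assumed values)

def point_on_surface(t: float, phi: float):
    """Rotate the point P(t) = (R, t*sin(alpha), t*cos(alpha)) of the
    generating line by the angle phi about the z-axis."""
    x0, y0, z0 = R, t * math.sin(alpha), t * math.cos(alpha)
    x = x0 * math.cos(phi) - y0 * math.sin(phi)
    y = x0 * math.sin(phi) + y0 * math.cos(phi)
    return x, y, z0

# Every rotated point satisfies x^2 + y^2 - z^2 * tan(alpha)^2 = R^2
for t in (-2.0, -0.5, 0.0, 1.0, 3.0):
    for phi in (0.0, 1.0, 2.5):
        x, y, z = point_on_surface(t, phi)
        lhs = x * x + y * y - (z * math.tan(alpha)) ** 2
        assert abs(lhs - R * R) < 1e-9
print("all sampled points lie on x^2 + y^2 = R^2 + z^2 tan^2(alpha)")
```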
It is easy to demonstrate generating a hyperboloid by using an electric hand drill to rotate the generating line. I drilled a hole in one end of the plastic T-handle of a small gimlet and attached a length of stainless steel wire with a small loop at its centre with a 3/8 x 4 round head wood screw. The wire, about 150 mm long overall, was positioned at about a 45° angle. The effect is best seen against a dark background with a good light on the bright wire. It is remarkable to see the hyperboloid magically appear when the drill rotates at no great speed.
The equation of the hyperboloid can be expressed in the standard form (x/a)² + (y/b)² - (z/c)² = 1 if a = b = R, c = R/tan α. The exponents 2 make it a quadric surface (like the ellipsoid and sphere). If a ≠ b, the cross-sections normal to z are ellipses instead of circles. Although the hyperboloid is a (doubly) ruled surface, it cannot be laid out flat, a property shared with the sphere: it is said to be nondevelopable. It is even more difficult to map a hyperboloid than to map a sphere. When z is very large compared to R, we can neglect R and then the hyperboloid is asymptotic to a cone of angle α. The equation also shows that the sections with a plane passing through the z-axis are hyperbolas with foci in the z = 0 plane, so the hyperboloid can also be formed by rotating a hyperbola about the axis, a property not as interesting as the one we used to find the equation. If the hyperbolas have foci on the z-axis, rotation about the z-axis produces a different surface, the hyperboloid of two sheets, which also has asymptotic cones. The equation of this surface is the same, but with +1 replaced by -1.
The cylinder and the cone are degenerate cases of the hyperboloid. When α = 0 we have a cylinder of radius R, and when R = 0 we have a cone of angle α. The cylinder is generated by a generator parallel to the axis, while a cone is generated by a generator that intersects the axis. The hyperboloid is closely related to the sphere, since it is obtained by changing the sign of z², which gives a hyperboloid with α = 45° and a gorge radius of r. This reminds one of the relativistic metric, where s² = x² - c²t². The radius of curvature of any normal section of a sphere at a point on the sphere is the same in any direction, the radius of the sphere. The radius of curvature of a normal section of a hyperboloid changes sign: the centre of curvature of a vertical section lies outside the surface, while that of a horizontal section lies inside. The hyperboloid is negatively curved, the sphere positively. The curvature is zero for sections in which the generators lie.
A. N. Tolstoy (a distant relative of L. N. Tolstoy of War and Peace) wrote the science-fiction novel Engineer Garin's Hyperboloid in 1926-27. It wasn't a surface, but a ray gun, and Tolstoy really wanted a paraboloid, not a hyperboloid, to focus the rays.
The pioneer of hyperboloidal structures is the remarkable Russian engineer V. Shukhov (1853-1939) who, among other accomplishments, built a hyperboloidal water tower for the 1896 industrial exhibition in Nizhny Novgorod. Hyperboloidal towers can be built from reinforced concrete or as a steel lattice, and are the most economical such structures for a given diameter and height. The roof of the McDonnell Planetarium in St. Louis, the Brasilia Cathedral and the Kobe Port Tower are a few recent examples of hyperboloidal structures. The most familiar use, however, is in cooling towers used to cool the water for the condensers of a steam power plant, whether fuel burning or nuclear. The bottom of the tower is open, while the hot water to be cooled is sprayed on wooden baffles inside the tower. Potentially, the water can be cooled to the wet bulb temperature of the admitted air. Natural convection is established due to the heat added to the tower by the hot water. If the air is already of moderate humidity when admitted, a vapor plume is usually emitted from the top of the tower. The ignorant often call this plume "smoke", but it is nothing but water. Smokestacks are the high, thin columns emitting at most a slight haze. The hyperboloidal cooling towers have nothing to do with combustion or nuclear materials. Two such towers can be seen at the Springfield Nuclear Plant on The Simpsons. The large coal facility at Didcot, UK also has hyperboloidal cooling towers easily visible to the north of the railway west of the station. Hyperboloidal towers of lattice construction have the great advantage that the steel columns are straight.
Suppose two nonparallel shafts that do not intersect are to be connected by gearing. Let C be the shortest distance between the two shafts. This line is perpendicular to both shafts. At some point on the line, let there be a straight line. When this line is rotated around the shafts, two hyperboloids are generated which will be tangent along this line. If the successive positions of this line on the hyperboloids are provided with teeth, the rotation of one hyperboloid will compel the rotation of the other. There will be sliding along the tooth, as we shall see. If the two shafts intersect, the hyperboloids degenerate to cones, and the gears are bevel gears, with no sliding along the teeth. In the general case, the gears are called skew bevel or hypoid gears. The gears will have line contact, like spur gears, which allows them to handle large powers and run smoothly. Helical gears can serve the same purpose, but they have only point contact.
Let the angle between the two shafts be θ as projected on a plane normal to the shortest distance, and let α and β be the angles between the projections of the shafts and the generator of the hyperboloids. For the moment, let the shafts intersect at O so that C = 0, as shown in the figure. Now we have cones rolling on each other, which would be pitch surfaces for bevel gears. Let the gear whose pitch radius is AQ have N teeth, and that with pitch radius BQ have N' teeth. The velocity ratio is then fixed by N and N'. The teeth must have the same pitch on each gear, so the radius AQ = N/2P, where P is the diametral pitch, and BQ = N'/2P. The distance OQ is common to both triangles, so N/2P = OQ sin α and N'/2P = OQ sin β. Therefore, N/N' = sin α/sin β. If N/N' and θ are given, we can calculate α and β as follows: eliminate β using β = θ - α and solve for α. The result is tan α = sin θ/(N'/N + cos θ). Then β = θ - α.
For general hyperboloidal gears, let us now calculate α and β in the same way in terms of the velocity ratio. Let us focus our attention on the position of closest approach, which will give us the gorge radii R, R' of the hyperboloids, such that R + R' = C. However, the ratio R/R' is not the same as the velocity ratio. In order for the gears to mesh, they must have the same normal pitch. However, the radii are determined by the pitch in the plane of rotation, which differs from the normal pitch by a factor cos α or cos β. This means that R/R' = (N/cos α)/(N'/cos β) = N cos β/N' cos α = tan α/tan β. The velocity ratio is the ratio of the sines, but the gorge radius ratio is the ratio of the tangents. Then, the equations of the two hyperboloids are of the form x² + y² = R²[1 + λz²], where λ is the same for both, and only the gorge radius is different.
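A short numerical sketch of the two relations just stated — the shaft angles follow from the tooth ratio through sin α/sin β = N/N', and the gorge radii divide the centre distance C in the ratio tan α : tan β. The function and the sample numbers are illustrative assumptions, not values from the referenced texts.

```python
import math

def hyperboloidal_gear_geometry(N, N_prime, theta_deg, C):
    """Return (alpha_deg, beta_deg, R, R_prime) for two skew shafts.

    N, N_prime : tooth counts of the two gears
    theta_deg  : angle between the shafts, projected normal to the centre line
    C          : shortest distance between the shafts (R + R_prime = C)
    """
    theta = math.radians(theta_deg)
    # From sin(alpha)/sin(beta) = N/N' with beta = theta - alpha:
    alpha = math.atan2(math.sin(theta), N_prime / N + math.cos(theta))
    beta = theta - alpha
    # Gorge radii are in the ratio tan(alpha) : tan(beta)
    R = C * math.tan(alpha) / (math.tan(alpha) + math.tan(beta))
    return math.degrees(alpha), math.degrees(beta), R, C - R

a, b, R, Rp = hyperboloidal_gear_geometry(N=40, N_prime=20, theta_deg=90.0, C=60.0)
print(f"alpha = {a:.2f} deg, beta = {b:.2f} deg, R = {R:.2f}, R' = {Rp:.2f}")
# sanity check: sin(alpha)/sin(beta) reproduces the tooth ratio N/N'
assert abs(math.sin(math.radians(a)) / math.sin(math.radians(b)) - 40 / 20) < 1e-9
```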
It may be helpful to review what is meant by pitch. The linear or circumferential pitch P is the distance between corresponding points on successive teeth, expressed, say, in inches per tooth. Its reciprocal is the number of teeth per inch of circumference on the pitch circle (since we are generally talking about circular gears). Multiplying by π gives the number of teeth per inch of diameter. This is the diametral pitch, D = π/P, which is the most commonly used expression of pitch, since it is very convenient in specifying gears. A gear, of course, must have an integral number of teeth. For example, a 4-pitch gear has 4 teeth for each inch of pitch diameter, so that a 32-tooth gear has a pitch diameter of 8 inches. The circumferential pitch is P = 0.7854". When a tooth is inclined at an angle of θ to the axis, the diametral pitch is reduced by a factor cos θ. The normal pitch is important because the teeth may be generated by a cutter that would produce spur gear teeth of this pitch. The pitch diameter is the diameter of the equivalent cylinder for friction gearing.
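The pitch bookkeeping can be expressed directly; this reproduces the arithmetic of the worked example above (a 4-pitch, 32-tooth gear) and is included only as an illustration.

```python
import math

def gear_from_diametral_pitch(diametral_pitch: float, teeth: int):
    """Return (pitch_diameter, circumferential_pitch), both in inches."""
    pitch_diameter = teeth / diametral_pitch            # e.g. 32 / 4 = 8 in
    circumferential_pitch = math.pi / diametral_pitch   # P = pi / D = 0.7854 in
    return pitch_diameter, circumferential_pitch

d, p = gear_from_diametral_pitch(4, 32)
print(f"pitch diameter = {d} in, circumferential pitch = {p:.4f} in")
```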
The figure at the left should give a better idea of how hyperboloids roll on one another. The axis of the hyperboloid of gorge radius R1 is a-a, the axis of the hyperboloid of gorge radius R2 is b-b, and the common generator of the two is t-t. The hyperboloids are in contact at the pitch point P on the shortest distance between the two axes, and at all points along t-t, such as Q. Note that the lines AQ and BQ in the top view are not shown in true length. Let the angle between the shafts in a plane passing through the pitch point and normal to the shortest distance be θ. The line t-t also passes through the pitch point and lies in this plane, making angles α and β with the two shafts, so that α + β = θ. These angles are, of course, the angle parameters of the two hyperboloids.
If ω is the angular velocity and R the gorge radius of the hyperboloid whose axis makes an angle α with the line of contact, and ω', R' similar quantities for the other hyperboloid, then considering the velocities at the pitch point, the equality of the components of the velocities normal to the line of contact gives us ωR cos α = ω'R' cos β. From this we easily can find the angular velocity ratio: ω/ω' = (R'/R)(cos β/cos α) = sin β/sin α. Starting from the desired angular velocity ratio, we can find the angles α and β, and from them the ratio R/R'. Since the distance between the shafts is R + R', we can find the individual gorge radii. Usually only a small part of each hyperboloid is used in a pair of gears, and these parts are not usually near the gorges. They look a lot like bevel gears, but the teeth are quite different.
The pitch of hyperboloidal gear teeth increases with distance from the gorge, analogously to the increase of pitch of bevel gear teeth with distance from the point of intersection. This makes the teeth difficult to design and manufacture. For gears of limited thickness, the change in pitch can be neglected and the teeth generated as parallel teeth.
Use Wikipedia Search for "hyperboloid" and visit the interesting links displayed there. There are many illustrations of hyperboloids, including an excellent one made by strings stretched between two circular discs.
P. Schwamb, A. L. Merrill, W. H. James and V. L. Doughtie, Elements of Mechanism, 6th ed. (New York: John Wiley & Sons, 1947). Sec. 9-17, pp. 219-224.
E. Buckingham, Analytical Mechanics of Gears (New York: Dover, 1988). Chapter 17, pp. 352-382.
Composed by J. B. Calvert
Created 6 March 2007
Last revised 13 March 2007
A nucleic acid sequence is a succession of letters that indicate the order of nucleotides within a DNA (using GACT) or RNA (GACU) molecule. By convention, sequences are usually presented from the 5' end to the 3' end. Because nucleic acids are normally linear (unbranched) polymers, specifying the sequence is equivalent to defining the covalent structure of the entire molecule. For this reason, the nucleic acid sequence is also termed the primary structure.
The sequence has capacity to represent information. Biological DNA represents the information which directs the functions of a living thing. In that context, the term genetic sequence is often used. Sequences can be read from the biological raw material through DNA sequencing methods.
Nucleic acids also have a secondary structure and tertiary structure. Primary structure is sometimes mistakenly referred to as primary sequence. Conversely, there is no parallel concept of secondary or tertiary sequence.
Nucleic acids consist of a chain of linked units called nucleotides. Each nucleotide consists of three subunits: a phosphate group and a sugar (ribose in the case of RNA, deoxyribose in DNA) make up the backbone of the nucleic acid strand, and attached to the sugar is one of a set of nucleobases. The nucleobases are important in base pairing of strands to form higher-level secondary and tertiary structure such as the famed double helix.
The possible letters are A, C, G, and T, representing the four nucleotide bases of a DNA strand — adenine, cytosine, guanine, thymine — covalently linked to a phosphodiester backbone. In the typical case, the sequences are printed abutting one another without gaps, as in the sequence AAAGTCTGAC, read left to right in the 5' to 3' direction. With regards to transcription, a sequence is on the coding strand if it has the same order as the transcribed RNA.
One sequence can be complementary to another sequence, meaning that they have the base on each position is the complementary (i.e. A to T, C to G) and in the reverse order. For example, the complementary sequence to TTAC is GTAA. If one strand of the double-stranded DNA is considered the sense strand, then the other strand, considered the antisense strand, will have the complementary sequence to the sense strand.
While A, T, C, and G represent a particular nucleotide at a position, there are also letters that represent ambiguity which are used when more than one kind of nucleotide could occur at that position. The rules of the International Union of Pure and Applied Chemistry (IUPAC) are as follows:
- A = adenine
- C = cytosine
- G = guanine
- T = thymine
- R = G A (purine)
- Y = T C (pyrimidine)
- K = G T (keto)
- M = A C (amino)
- S = G C (strong bonds)
- W = A T (weak bonds)
- B = G T C (all but A)
- D = G A T (all but C)
- H = A C T (all but G)
- V = G C A (all but T)
- N = A G C T (any)
These symbols are also valid for RNA, except with U (uracil) replacing T (thymine).
Apart from adenine (A), cytosine (C), guanine (G), thymine (T) and uracil (U), DNA and RNA also contain bases that have been modified after the nucleic acid chain has been formed. In DNA, the most common modified base is 5-methylcytidine (m5C). In RNA, there are many modified bases, including pseudouridine (Ψ), dihydrouridine (D), inosine (I), ribothymidine (rT) and 7-methylguanosine (m7G). Hypoxanthine and xanthine are two of the many bases created through mutagen presence, both of them through deamination (replacement of the amine-group with a carbonyl-group). Hypoxanthine is produced from adenine, xanthine from guanine. Similarly, deamination of cytosine results in uracil.
Biological significance
In biological systems, nucleic acids contain information which is used by a living cell to construct specific proteins. The sequence of nucleobases on a nucleic acid strand is translated by cell machinery into a sequence of amino acids making up a protein strand. Each group of three bases, called a codon, corresponds to a single amino acid, and there is a specific genetic code by which each possible combination of three bases corresponds to a specific amino acid.
The central dogma of molecular biology outlines the mechanism by which proteins are constructed using information contained in nucleic acids. DNA is transcribed into mRNA molecules, which travels to the ribosome where the mRNA is used as a template for the construction of the protein strand. Since nucleic acids can bind to molecules with complementary sequences, there is a distinction between "sense" sequences which code for proteins, and the complementary "antisense" sequence which is by itself nonfunctional, but can bind to the sense strand.
Sequence determination
DNA sequencing is the process of determining the nucleotide sequence of a given DNA fragment. The sequence of the DNA of a living thing encodes the necessary information for that living thing to survive and reproduce. Therefore, determining the sequence is useful in fundamental research into why and how organisms live, as well as in applied subjects. Because of the importance of DNA to living things, knowledge of a DNA sequence may be useful in practically any biological research. For example, in medicine it can be used to identify, diagnose and potentially develop treatments for genetic diseases. Similarly, research into pathogens may lead to treatments for contagious diseases. Biotechnology is a burgeoning discipline, with the potential for many useful products and services.
RNA is not sequenced directly. Instead, it is copied to a DNA by reverse transcriptase, and this DNA is then sequenced.
Current sequencing methods rely on the discriminatory ability of DNA polymerases, and therefore can only distinguish four bases. An inosine (created from adenosine during RNA editing) is read as a G, and 5-methyl-cytosine (created from cytosine by DNA methylation) is read as a C. With current technology, it is difficult to sequence small amounts of DNA, as the signal is too weak to measure. This is overcome by polymerase chain reaction (PCR) amplification.
Digital representation
Once a nucleic acid sequence has been obtained from an organism, it is stored in silico in digital format. Digital genetic sequences may be stored in sequence databases, be analyzed (see Sequence analysis below), be digitally altered and/or be used as templates for creating new actual DNA using artificial gene synthesis.
Sequence analysis
Digital genetic sequences may be analyzed using the tools of bioinformatics to attempt to determine its function.
Genetic testing
The DNA in an organism's genome can be analyzed to diagnose vulnerabilities to inherited diseases, and can also be used to determine a child's paternity (genetic father) or a person's ancestry. Normally, every person carries two variations of every gene, one inherited from their mother, the other inherited from their father. The human genome is believed to contain around 20,000 - 25,000 genes. In addition to studying chromosomes to the level of individual genes, genetic testing in a broader sense includes biochemical tests for the possible presence of genetic diseases, or mutant forms of genes associated with increased risk of developing genetic disorders.
Genetic testing identifies changes in chromosomes, genes, or proteins. Usually, testing is used to find changes that are associated with inherited disorders. The results of a genetic test can confirm or rule out a suspected genetic condition or help determine a person's chance of developing or passing on a genetic disorder. Several hundred genetic tests are currently in use, and more are being developed.
Sequence alignment
In bioinformatics, a sequence alignment is a way of arranging the sequences of DNA, RNA, or protein to identify regions of similarity that may be due to functional, structural, or evolutionary relationships between the sequences. If two sequences in an alignment share a common ancestor, mismatches can be interpreted as point mutations and gaps as insertion or deletion mutations (indels) introduced in one or both lineages in the time since they diverged from one another. In sequence alignments of proteins, the degree of similarity between amino acids occupying a particular position in the sequence can be interpreted as a rough measure of how conserved a particular region or sequence motif is among lineages. The absence of substitutions, or the presence of only very conservative substitutions (that is, the substitution of amino acids whose side chains have similar biochemical properties) in a particular region of the sequence, suggest that this region has structural or functional importance. Although DNA and RNA nucleotide bases are more similar to each other than are amino acids, the conservation of base pairs can indicate a similar functional or structural role.
Computational phylogenetics makes extensive use of sequence alignments in the construction and interpretation of phylogenetic trees, which are used to classify the evolutionary relationships between homologous genes represented in the genomes of divergent species. The degree to which sequences in a query set differ is qualitatively related to the sequences' evolutionary distance from one another. Roughly speaking, high sequence identity suggests that the sequences in question have a comparatively young most recent common ancestor, while low identity suggests that the divergence is more ancient. This approximation, which reflects the "molecular clock" hypothesis that a roughly constant rate of evolutionary change can be used to extrapolate the elapsed time since two genes first diverged (that is, the coalescence time), assumes that the effects of mutation and selection are constant across sequence lineages. Therefore it does not account for possible difference among organisms or species in the rates of DNA repair or the possible functional conservation of specific regions in a sequence. (In the case of nucleotide sequences, the molecular clock hypothesis in its most basic form also discounts the difference in acceptance rates between silent mutations that do not alter the meaning of a given codon and other mutations that result in a different amino acid being incorporated into the protein.) More statistically accurate methods allow the evolutionary rate on each branch of the phylogenetic tree to vary, thus producing better estimates of coalescence times for genes.
Sequence motifs
Frequently the primary structure encodes motifs that are of functional importance. Some examples of sequence motifs are: the C/D and H/ACA boxes of snoRNAs, Sm binding site found in spliceosomal RNAs such as U1, U2, U4, U5, U6, U12 and U3, the Shine-Dalgarno sequence, the Kozak consensus sequence and the RNA polymerase III terminator.
See also
- Nomenclature for Incompletely Specified Bases in Nucleic Acid Sequences, NC-IUB, 1984.
- BIOL2060: Translation
- T Nguyen, D Brunson, C L Crespi, B W Penman, J S Wishnok, and S R Tannenbaum, DNA damage and mutation in human cells exposed to nitric oxide in vitro, Proc Natl Acad Sci U S A. 1992 April 1; 89(7): 3030–3034
- What is genetic testing? - Genetics Home Reference
- Genetic Testing: MedlinePlus
- "Definitions of Genetic Testing". Definitions of Genetic Testing (Jorge Sequeiros and Bárbara Guimarães). EuroGentest Network of Excellence Project. 2008-09-11. Retrieved 2008-08-10.[dead link]
- Mount DM. (2004). Bioinformatics: Sequence and Genome Analysis (2nd ed.). Cold Spring Harbor Laboratory Press: Cold Spring Harbor, NY. ISBN 0-87969-608-7.
- Ng PC, Henikoff S. Predicting deleterious amino acid substitutions. Genome Res. 2001 May;11(5):863-74.
- Samarsky, DA; Fournier MJ, Singer RH, Bertrand E (1998). "The snoRNA box C/D motif directs nucleolar targeting and also couples snoRNA synthesis and localization". EMBO 17 (13): 3747–3757. doi:10.1093/emboj/17.13.3747. PMC 1170710. PMID 9649444.
- Ganot, Philippe; Caizergues-Ferrer, Michèle; Kiss, Tamás (1 April 1997). "The family of box ACA small nucleolar RNAs is defined by an evolutionarily conserved secondary structure and ubiquitous sequence elements essential for RNA accumulation". Genes & Development 11 (7): 941–956. doi:10.1101/gad.11.7.941. PMID 9106664.
- Shine J, Dalgarno L (1975). "Determinant of cistron specificity in bacterial ribosomes". Nature 254 (5495): 34–8. doi:10.1038/254034a0. PMID 803646.
- Kozak M (October 1987). "An analysis of 5'-noncoding sequences from 699 vertebrate messenger RNAs". Nucleic Acids Res. 15 (20): 8125–8148. doi:10.1093/nar/15.20.8125. PMC 306349. PMID 3313277.
- Bogenhagen DF, Brown DD (1981). "Nucleotide sequences in Xenopus 5S DNA required for transcription termination.". Cell 24 (1): 261–70. doi:10.1016/0092-8674(81)90522-5. PMID 6263489. | http://en.wikipedia.org/wiki/Genetic_sequence | 13 |
Momentum
In classical mechanics, linear momentum or translational momentum (pl. momenta; SI unit kg m/s, or equivalently, N s) is the product of the mass and velocity of an object. For example, a heavy truck moving fast has a large momentum—it takes a large and prolonged force to get the truck up to this speed, and it takes a large and prolonged force to bring it to a stop afterwards. If the truck were lighter, or moving more slowly, then it would have less momentum.
Like velocity, linear momentum is a vector quantity, possessing a direction as well as a magnitude:
Linear momentum is also a conserved quantity, meaning that if a closed system is not affected by external forces, its total linear momentum cannot change. In classical mechanics, conservation of linear momentum is implied by Newton's laws; but it also holds in special relativity (with a modified formula) and, with appropriate definitions, a (generalized) linear momentum conservation law holds in electrodynamics, quantum mechanics, quantum field theory, and general relativity.
Newtonian mechanics
Momentum has a direction as well as magnitude. Quantities that have both a magnitude and a direction are known as vector quantities. Because momentum has a direction, it can be used to predict the resulting direction of objects after they collide, as well as their speeds. Below, the basic properties of momentum are described in one dimension. The vector equations are almost identical to the scalar equations (see multiple dimensions).
Single particle
The units of momentum are the product of the units of mass and velocity. In SI units, if the mass is in kilograms and the velocity in meters per second, then the momentum is in kilograms meters/second (kg m/s). Being a vector, momentum has magnitude and direction. For example, a model airplane of 1 kg, traveling due north at 1 m/s in straight and level flight, has a momentum of 1 kg m/s due north measured from the ground.
Many particles
The momentum of a system of particles is the sum of their momenta. If two particles have masses m1 and m2, and velocities v1 and v2, the total momentum is
The momenta of more than two particles can be added in the same way.
A system of particles has a center of mass, a point determined by the weighted sum of their positions:
If all the particles are moving, the center of mass will generally be moving as well. If the center of mass is moving at velocity vcm, the momentum is:
Relation to force
If a force F is applied to a particle for a time interval Δt, the momentum of the particle changes by an amount
If the force depends on time, the change in momentum (or impulse) between times t1 and t2 is
The second law only applies to a particle that does not exchange matter with its surroundings, and so it is equivalent to write
Example: a model airplane of 1 kg accelerates from rest to a velocity of 6 m/s due north in 2 s. The thrust required to produce this acceleration is 3 newtons. The change in momentum is 6 kg m/s. The rate of change of momentum is 3 (kg m/s)/s = 3 N.
In a closed system (one that does not exchange any matter with the outside and is not acted on by outside forces) the total momentum is constant. This fact, known as the law of conservation of momentum, is implied by Newton's laws of motion. Suppose, for example, that two particles interact. Because of the third law, the forces between them are equal and opposite. If the particles are numbered 1 and 2, the second law states that F1 = dp1/dt and F2 = dp2/dt. Therefore
If the velocities of the particles are u1 and u2 before the interaction, and afterwards they are v1 and v2, then
This law holds no matter how complicated the force is between particles. Similarly, if there are several particles, the momentum exchanged between each pair of particles adds up to zero, so the total change in momentum is zero. This conservation law applies to all interactions, including collisions and separations caused by explosive forces. It can also be generalized to situations where Newton's laws do not hold, for example in the theory of relativity and in electrodynamics.
Dependence on reference frame
Momentum is a measurable quantity, and the measurement depends on the motion of the observer. For example, if an apple is sitting in a glass elevator that is descending, an outside observer looking into the elevator sees the apple moving, so to that observer the apple has a nonzero momentum. To someone inside the elevator, the apple does not move, so it has zero momentum. The two observers each have a frame of reference in which they observe motions, and if the elevator is descending steadily they will see behavior that is consistent with the same physical laws.
Suppose a particle has position x in a stationary frame of reference. From the point of view of another frame of reference moving at a uniform speed u, the position (represented by a primed coordinate) changes with time as
This is called a Galilean transformation. If the particle is moving at speed dx/dt = v in the first frame of reference, in the second it is moving at speed
Since u does not change, the accelerations are the same:
Thus, momentum is conserved in both reference frames. Moreover, as long as the force has the same form in both frames, Newton's second law is unchanged. Forces such as Newtonian gravity, which depend only on the scalar distance between objects, satisfy this criterion. This independence of reference frame is called Newtonian relativity or Galilean invariance.
A change of reference frame can often simplify calculations of motion. For example, in a collision of two particles a reference frame can be chosen where one particle begins at rest. Another commonly used reference frame is the center of mass frame, one that is moving with the center of mass. In this frame, the total momentum is zero.
Application to collisions
By itself, the law of conservation of momentum is not enough to determine the motion of particles after a collision. Another property of the motion, kinetic energy, must be known. This is not necessarily conserved. If it is conserved, the collision is called an elastic collision; if not, it is an inelastic collision.
Elastic collisions
An elastic collision is one in which no kinetic energy is lost. Perfectly elastic "collisions" can occur when the objects do not touch each other, as for example in atomic or nuclear scattering where electric repulsion keeps them apart. A slingshot maneuver of a satellite around a planet can also be viewed as a perfectly elastic collision from a distance. A collision between two pool balls is a good example of an almost totally elastic collision, due to their high rigidity; but when bodies come in contact there is always some dissipation.
A head-on elastic collision between two bodies can be represented by velocities in one dimension, along a line passing through the bodies. If the velocities are u1 and u2 before the collision and v1 and v2 after, the equations expressing conservation of momentum and kinetic energy are:
A change of reference frame can often simplify the analysis of a collision. For example, suppose there are two bodies of equal mass m, one stationary and one approaching the other at a speed v (as in the figure). The center of mass is moving at speed v/2 and both bodies are moving towards it at speed v/2. Because of the symmetry, after the collision both must be moving away from the center of mass at the same speed. Adding the speed of the center of mass to both, we find that the body that was moving is now stopped and the other is moving away at speed v. The bodies have exchanged their velocities. Regardless of the velocities of the bodies, a switch to the center of mass frame leads us to the same conclusion. Therefore, the final velocities are given by
In general, when the initial velocities are known, the final velocities are given by
If one body has much greater mass than the other, its velocity will be little affected by a collision while the other body will experience a large change.
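The one-dimensional elastic-collision result is easy to verify numerically. The sketch below uses the standard textbook expressions for the final velocities (reproduced here for illustration; the sample masses and speeds are arbitrary) and checks that momentum and kinetic energy are both conserved, and that equal masses simply exchange velocities.

```python
def elastic_collision_1d(m1, u1, m2, u2):
    """Final velocities for a head-on elastic collision of two bodies."""
    v1 = ((m1 - m2) * u1 + 2 * m2 * u2) / (m1 + m2)
    v2 = ((m2 - m1) * u2 + 2 * m1 * u1) / (m1 + m2)
    return v1, v2

m1, u1, m2, u2 = 2.0, 3.0, 1.0, -1.0
v1, v2 = elastic_collision_1d(m1, u1, m2, u2)

# momentum and kinetic energy are both conserved
assert abs(m1 * u1 + m2 * u2 - (m1 * v1 + m2 * v2)) < 1e-12
assert abs(0.5 * m1 * u1**2 + 0.5 * m2 * u2**2
           - (0.5 * m1 * v1**2 + 0.5 * m2 * v2**2)) < 1e-12

# equal masses simply exchange velocities, as described above
print(elastic_collision_1d(1.0, 5.0, 1.0, 0.0))   # -> (0.0, 5.0)
```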
Inelastic collisions
In an inelastic collision, some of the kinetic energy of the colliding bodies is converted into other forms of energy such as heat or sound. Examples include traffic collisions, in which the effect of lost kinetic energy can be seen in the damage to the vehicles; electrons losing some of their energy to atoms (as in the Franck–Hertz experiment); and particle accelerators in which the kinetic energy is converted into mass in the form of new particles.
In a perfectly inelastic collision (such as a bug hitting a windshield), both bodies have the same motion afterwards. If one body is motionless to begin with, the equation for conservation of momentum is
In a frame of reference moving at the speed v, the objects are brought to rest by the collision and 100% of the kinetic energy is converted to other forms of energy.
One measure of the inelasticity of the collision is the coefficient of restitution CR, defined as the ratio of relative velocity of separation to relative velocity of approach. In applying this measure to ball sports, this can be easily measured using the following formula:
The momentum and energy equations also apply to the motions of objects that begin together and then move apart. For example, an explosion is the result of a chain reaction that transforms potential energy stored in chemical, mechanical, or nuclear form into kinetic energy, acoustic energy, and electromagnetic radiation. Rockets also make use of conservation of momentum: propellant is thrust outward, gaining momentum, and an equal and opposite momentum is imparted to the rocket.
Multiple dimensions
Real motion has both direction and magnitude and must be represented by a vector. In a coordinate system with x, y, z axes, velocity has components vx in the x direction, vy in the y direction, vz in the z direction. The vector is represented by a boldface symbol:
Similarly, the momentum is a vector quantity and is represented by a boldface symbol:
The equations in the previous sections work in vector form if the scalars p and v are replaced by vectors p and v. Each vector equation represents three scalar equations. For example,
represents three equations:
The kinetic energy equations are exceptions to the above replacement rule. The equations are still one-dimensional, but each scalar represents the magnitude of the vector, for example,
Each vector equation represents three scalar equations. Often coordinates can be chosen so that only two components are needed, as in the figure. Each component can be obtained separately and the results combined to produce a vector result.
A simple construction involving the center of mass frame can be used to show that if a stationary elastic sphere is struck by a moving sphere, the two will head off at right angles after the collision (as in the figure).
Objects of variable mass
The concept of momentum plays a fundamental role in explaining the behavior of variable-mass objects such as a rocket ejecting fuel or a star accreting gas. In analyzing such an object, one treats the object's mass as a function that varies with time: m(t). The momentum of the object at time t is therefore p(t) = m(t)v(t). One might then try to invoke Newton's second law of motion by saying that the external force F on the object is related to its momentum p(t) by F = dp/dt, but this is incorrect, as is the related expression found by applying the product rule to d(mv)/dt:
This equation does not correctly describe the motion of variable-mass objects. The correct equation is
where u is the velocity of the ejected/accreted mass as seen in the object's rest frame. This is distinct from v, which is the velocity of the object itself as seen in an inertial frame.
This equation is derived by keeping track of both the momentum of the object as well as the momentum of the ejected/accreted mass. When considered together, the object and the mass constitute a closed system in which total momentum is conserved.
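As a sketch of how the variable-mass equation is used in practice, the following toy integration models a rocket burning propellant at a constant rate with no external force; the exhaust leaves at a fixed speed relative to the rocket. All numbers are invented for the illustration, and the result is compared with the ideal rocket equation Δv = u ln(m0/m).

```python
import math

def integrate_rocket(m0, burn_rate, v_exhaust, burn_time, dt=1e-3):
    """Euler integration of m dv/dt = u dm/dt with no external force.

    Here u = -v_exhaust is the velocity of the ejected propellant relative to
    the rocket (it leaves backwards), and dm/dt = -burn_rate.
    """
    m, v, t = m0, 0.0, 0.0
    while t < burn_time:
        dm_dt = -burn_rate
        dv_dt = (-v_exhaust) * dm_dt / m   # = v_exhaust * burn_rate / m > 0
        v += dv_dt * dt
        m += dm_dt * dt
        t += dt
    return v, m

v, m_final = integrate_rocket(m0=1000.0, burn_rate=5.0, v_exhaust=2500.0, burn_time=60.0)
ideal = 2500.0 * math.log(1000.0 / m_final)   # ideal rocket equation, for comparison
print(f"numerical dv = {v:.1f} m/s, ideal rocket equation dv = {ideal:.1f} m/s")
```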
Generalized coordinates
Newton's laws can be difficult to apply to many kinds of motion because the motion is limited by constraints. For example, a bead on an abacus is constrained to move along its wire and a pendulum bob is constrained to swing at a fixed distance from the pivot. Many such constraints can be incorporated by changing the normal Cartesian coordinates to a set of generalized coordinates that may be fewer in number. Refined mathematical methods have been developed for solving mechanics problems in generalized coordinates. They introduce a generalized momentum, also known as the canonical or conjugate momentum, that extends the concepts of both linear momentum and angular momentum. To distinguish it from generalized momentum, the product of mass and velocity is also referred to as mechanical, kinetic or kinematic momentum. The two main methods are described below.
Lagrangian mechanics
If the generalized coordinates are represented as a vector q = (q1, q2, ... , qN) and time differentiation is represented by a dot over the variable, then the equations of motion (known as the Lagrange or Euler–Lagrange equations) are a set of N equations:
If a coordinate qi is not a Cartesian coordinate, the associated generalized momentum component pi does not necessarily have the dimensions of linear momentum. Even if qi is a Cartesian coordinate, pi will not be the same as the mechanical momentum if the potential depends on velocity. Some sources represent the kinematic momentum by the symbol Π.
In this mathematical framework, a generalized momentum is associated with the generalized coordinates. Its components are defined as
Each component pj is said to be the conjugate momentum for the coordinate qj.
Now if a given coordinate qi does not appear in the Lagrangian (although its time derivative might appear), then
This is the generalization of the conservation of momentum.
Even if the generalized coordinates are just the ordinary spatial coordinates, the conjugate momenta are not necessarily the ordinary momentum coordinates. An example is found in the section on electromagnetism.
Hamiltonian mechanics
In Hamiltonian mechanics, the Lagrangian (a function of generalized coordinates and their derivatives) is replaced by a Hamiltonian that is a function of generalized coordinates and momentum. The Hamiltonian is defined as
where the momentum is obtained by differentiating the Lagrangian as above. The Hamiltonian equations of motion are
As in Lagrangian mechanics, if a generalized coordinate does not appear in the Hamiltonian, its conjugate momentum component is conserved.
Symmetry and conservation
Conservation of momentum is a mathematical consequence of the homogeneity (shift symmetry) of space (position in space is the canonical conjugate quantity to momentum). That is, conservation of momentum is a consequence of the fact that the laws of physics do not depend on position; this is a special case of Noether's theorem.
Relativistic mechanics
Lorentz invariance
Newtonian physics assumes that absolute time and space exist outside of any observer; this gives rise to the Galilean invariance described earlier. It also results in a prediction that the speed of light can vary from one reference frame to another. This is contrary to observation. In the special theory of relativity, Einstein keeps the postulate that the equations of motion do not depend on the reference frame, but assumes that the speed of light c is invariant. As a result, position and time in two reference frames are related by the Lorentz transformation instead of the Galilean transformation.
Consider, for example, a reference frame moving relative to another at velocity v in the x direction. The Galilean transformation gives the coordinates of the moving frame as
while the Lorentz transformation gives
where γ is the Lorentz factor:
Newton's second law, with mass fixed, is not invariant under a Lorentz transformation. However, it can be made invariant by making the inertial mass m of an object a function of velocity:
The modified momentum,
obeys Newton's second law:
Within the domain of classical mechanics, relativistic momentum closely approximates Newtonian momentum: at low velocity, γm0v is approximately equal to m0v, the Newtonian expression for momentum.
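A small numerical illustration of the modified momentum p = γm0v (the values below are arbitrary), showing how it reduces to the Newtonian expression m0v at everyday speeds and departs from it near the speed of light:

```python
import math

C = 299_792_458.0   # speed of light, m/s

def relativistic_momentum(m0: float, v: float) -> float:
    """p = gamma * m0 * v for motion along one axis."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return gamma * m0 * v

m0 = 1.0  # kg
for v in (30.0, 3.0e7, 0.9 * C):
    ratio = relativistic_momentum(m0, v) / (m0 * v)
    print(f"v = {v:10.3e} m/s   p_relativistic / p_newtonian = {ratio:.6f}")
# At everyday speeds the ratio is essentially 1; at 0.9c it equals gamma, about 2.29.
```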
Four-vector formulation
In the theory of relativity, physical quantities are expressed in terms of four-vectors that include time as a fourth coordinate along with the three space coordinates. These vectors are generally represented by capital letters, for example R for position. The expression for the four-momentum depends on how the coordinates are expressed. Time may be given in its normal units or multiplied by the speed of light so that all the components of the four-vector have dimensions of length. If the latter scaling is used, an interval of proper time, τ, defined by
is invariant under Lorentz transformations (in this expression and in what follows the (+ − − −) metric signature has been used, different authors use different conventions). Mathematically this invariance can be ensured in one of two ways: by treating the four-vectors as Euclidean vectors and multiplying time by the square root of -1; or by keeping time a real quantity and embedding the vectors in a Minkowski space. In a Minkowski space, the scalar product of two four-vectors U = (U0,U1,U2,U3) and V = (V0,V1,V2,V3) is defined as
In all the coordinate systems, the (contravariant) relativistic four-velocity is defined by
and the (contravariant) four-momentum is
where m0 is the invariant mass. If R = (ct,x,y,z) (in Minkowski space), then[note 1]
Using Einstein's mass-energy equivalence, E = mc2, this can be rewritten as
Thus, conservation of four-momentum is Lorentz-invariant and implies conservation of both mass and energy.
The magnitude of the momentum four-vector is equal to m0c:
and is invariant across all reference frames.
The relativistic energy–momentum relationship holds even for massless particles such as photons; by setting m0 = 0 it follows that
In a game of relativistic "billiards", if a stationary particle is hit by a moving particle in an elastic collision, the paths formed by the two afterwards will form an acute angle. This is unlike the non-relativistic case where they travel at right angles.
Classical electromagnetism
In Newtonian mechanics, the law of conservation of momentum can be derived from the law of action and reaction, which states that the forces between two particles are equal and opposite. Electromagnetic forces violate this law. Under some circumstances one moving charged particle can exert a force on another without any return force. Moreover, Maxwell's equations, the foundation of classical electrodynamics, are Lorentz-invariant. However, momentum is still conserved.
In Maxwell's equations, the forces between particles are mediated by electric and magnetic fields. The electromagnetic force (Lorentz force) on a particle with charge q due to a combination of electric field E and magnetic field (as given by the "B-field" B) is
This force imparts a momentum to the particle, so by Newton's second law the particle must impart a momentum to the electromagnetic fields.
In a vacuum, the momentum per unit volume is
where μ0 is the vacuum permeability and c is the speed of light. The momentum density is proportional to the Poynting vector S which gives the directional rate of energy transfer per unit area:
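In the notation of this section, the standard vacuum expressions are

\mathbf{g} = \frac{1}{\mu_0 c^2} \, \mathbf{E} \times \mathbf{B} = \frac{\mathbf{S}}{c^2}, \qquad \mathbf{S} = \frac{1}{\mu_0} \, \mathbf{E} \times \mathbf{B}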
If momentum is to be conserved in a volume V, changes in the momentum of matter through the Lorentz force must be balanced by changes in the momentum of the electromagnetic field and outflow of momentum. If Pmech is the momentum of all the particles in a volume V, and the particles are treated as a continuum, then Newton's second law gives
The electromagnetic momentum is
and the equation for conservation of each component i of the momentum is
The term on the right is an integral over the surface S representing momentum flow into and out of the volume, and nj is a component of the surface normal of S. The quantity Ti j is called the Maxwell stress tensor, defined as
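Written out in SI units (with ρ here denoting the charge density, J the current density, ε0 the vacuum permittivity, and δij the Kronecker delta), the statements of the last two paragraphs take their standard form:

\frac{d \mathbf{P}_{\mathrm{mech}}}{dt} = \int_V \left( \rho \mathbf{E} + \mathbf{J} \times \mathbf{B} \right) dV

\mathbf{P}_{\mathrm{field}} = \frac{1}{c^2} \int_V \mathbf{S} \, dV = \epsilon_0 \int_V \mathbf{E} \times \mathbf{B} \, dV

\frac{d}{dt} \left( \mathbf{P}_{\mathrm{mech}} + \mathbf{P}_{\mathrm{field}} \right)_i = \oint_S T_{ij} \, n_j \, d\Sigma

T_{ij} = \epsilon_0 \left( E_i E_j - \tfrac{1}{2} \delta_{ij} E^2 \right) + \frac{1}{\mu_0} \left( B_i B_j - \tfrac{1}{2} \delta_{ij} B^2 \right)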
The above results are for the microscopic Maxwell equations, applicable to electromagnetic forces in a vacuum (or on a very small scale in media). It is more difficult to define momentum density in media because the division into electromagnetic and mechanical is arbitrary. The definition of electromagnetic momentum density is modified to
where the H-field H is related to the B-field and the magnetization M by
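The constitutive relation is the standard one; the momentum density usually quoted in this context (one side of the Abraham–Minkowski question mentioned later in the article) is given alongside it:

\mathbf{H} = \frac{1}{\mu_0} \mathbf{B} - \mathbf{M}, \qquad \mathbf{g} = \frac{1}{c^2} \, \mathbf{E} \times \mathbf{H}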
The electromagnetic stress tensor depends on the properties of the media.
Particle in field
If a charged particle q moves in an electromagnetic field, its kinematic momentum m v is not conserved. However, it has a canonical momentum that is conserved.
Lagrangian and Hamiltonian formulation
The kinetic momentum p is different from the canonical momentum P (synonymous with the generalized momentum) conjugate to the ordinary position coordinates r, because P includes a contribution from the electric potential φ(r, t) and vector potential A(r, t):
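For a particle of charge q, the standard relation between the two momenta is

\mathbf{p} = m \mathbf{v}, \qquad \mathbf{P} = \mathbf{p} + q \mathbf{A}(\mathbf{r}, t)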
[Table: the Lagrangian, canonical momentum, kinetic momentum, and Hamiltonian of a charged particle, in classical and in relativistic mechanics.]
The classical Hamiltonian for a particle in any field equals the total energy of the system: the kinetic energy T = p2/2m (where p2 = p·p, see dot product) plus the potential energy V. For a particle in an electromagnetic field, the potential energy is V = qφ, and since the kinetic energy T always corresponds to the kinetic momentum p, replacing the kinetic momentum by the above equation (p = P − qA) leads to the Hamiltonian in the table.
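For the classical (non-relativistic) case, with q again the charge, this substitution gives

H = \frac{\left( \mathbf{P} - q \mathbf{A} \right)^2}{2m} + q \phi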
From these Lagrangian and Hamiltonian expressions, the Lorentz force can be derived.
Canonical commutation relations
Quantum mechanics
In quantum mechanics, momentum is defined as an operator on the wave function. The Heisenberg uncertainty principle defines limits on how accurately the momentum and position of a single observable system can be known at once. In quantum mechanics, position and momentum are conjugate variables.
For a single particle described in the position basis the momentum operator can be written as
where ∇ is the gradient operator, ħ is the reduced Planck constant, and i is the imaginary unit. This is a commonly encountered form of the momentum operator, though the momentum operator in other bases can take other forms. For example, in momentum space the momentum operator is represented as
where the operator p acting on a wave function ψ(p) yields that wave function multiplied by the value p, in an analogous fashion to the way that the position operator acting on a wave function ψ(x) yields that wave function multiplied by the value x.
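Explicitly, in the two bases just described:

\hat{\mathbf{p}} = -i \hbar \nabla \quad \text{(position basis)}, \qquad \left( \hat{\mathbf{p}} \, \psi \right)(\mathbf{p}) = \mathbf{p} \, \psi(\mathbf{p}) \quad \text{(momentum basis)}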
For both massive and massless objects, relativistic momentum is related to the de Broglie wavelength λ by
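With h Planck's constant, the relation is

p = \frac{h}{\lambda}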
Electromagnetic radiation (including visible light, ultraviolet light, and radio waves) is carried by photons. Even though photons (the particle aspect of light) have no mass, they still carry momentum. This leads to applications such as the solar sail. The calculation of the momentum of light within dielectric media is somewhat controversial (see Abraham–Minkowski controversy).
Deformable bodies and fluids
In fields such as fluid dynamics and solid mechanics, it is not feasible to follow the motion of individual atoms or molecules. Instead, the materials must be approximated by a continuum in which there is a particle or fluid parcel at each point that is assigned the average of the properties of atoms in a small region nearby. In particular, it has a density ρ and velocity v that depend on time t and position r. The momentum per unit volume is ρv.
Consider a column of water in hydrostatic equilibrium. All the forces on the water are in balance and the water is motionless. On any given drop of water, two forces are balanced. The first is gravity, which acts directly on each atom and molecule inside. The gravitational force per unit volume is ρg, where g is the gravitational acceleration. The second force is the sum of all the forces exerted on its surface by the surrounding water. The force from below is greater than the force from above by just the amount needed to balance gravity. The normal force per unit area is the pressure p. The average force per unit volume inside the droplet is the gradient of the pressure, so the force balance equation is
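With g the downward gravitational acceleration and p the pressure, this balance reads

-\nabla p + \rho \mathbf{g} = 0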
If the forces are not balanced, the droplet accelerates. This acceleration is not simply the partial derivative ∂v/∂t because the fluid in a given volume changes with time. Instead, the material derivative is needed:
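For the velocity field itself, the material derivative is

\frac{D \mathbf{v}}{Dt} = \frac{\partial \mathbf{v}}{\partial t} + \left( \mathbf{v} \cdot \nabla \right) \mathbf{v}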
Applied to any physical quantity, the material derivative includes the rate of change at a point and the changes due to advection as fluid is carried past the point. Per unit volume, the rate of change in momentum is equal to ρDv/Dt. This is equal to the net force on the droplet.
Forces that can change the momentum of a droplet include the gradient of the pressure and gravity, as above. In addition, surface forces can deform the droplet. In the simplest case, a shear stress τ, exerted by a force parallel to the surface of the droplet, is proportional to the rate of deformation or strain rate. Such a shear stress occurs if the fluid has a velocity gradient because the fluid is moving faster on one side than another. If the speed in the x direction varies with z, the tangential force in direction x per unit area normal to the z direction is
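Writing μ for the dynamic viscosity (not to be confused with the permeability μ0 used earlier), the Newtonian-fluid relation described here is

\tau_{xz} = \mu \, \frac{\partial v_x}{\partial z}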
The momentum balance equations can be extended to more general materials, including solids. For each surface with normal in direction i and force in direction j, there is a stress component σij. The nine components make up the Cauchy stress tensor σ, which includes both pressure and shear. The local conservation of momentum is expressed by the Cauchy momentum equation:
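With σ the Cauchy stress tensor and f the body force per unit volume (for example ρg), the equation reads

\rho \, \frac{D \mathbf{v}}{Dt} = \nabla \cdot \boldsymbol{\sigma} + \mathbf{f}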
The Cauchy momentum equation is broadly applicable to deformations of solids and liquids. The relationship between the stresses and the strain rate depends on the properties of the material (see Types of viscosity).
Acoustic waves
The flux, or transport per unit area, of a momentum component ρvj by a velocity vi is equal to ρvivj. In the linear approximation that leads to the acoustic wave equation, the time average of this flux is zero. However, nonlinear effects can give rise to a nonzero average. It is possible for momentum flux to occur even though the wave itself does not have a mean momentum.
History of the concept
In about 530 A.D., working in Alexandria, Byzantine philosopher John Philoponus developed a concept of momentum in his commentary to Aristotle's Physics. Aristotle claimed that everything that is moving must be kept moving by something. For example, a thrown ball must be kept moving by motions of the air. Most writers continued to accept Aristotle's theory until the time of Galileo, but a few were skeptical. Philoponus pointed out the absurdity in Aristotle's claim that motion of an object is promoted by the same air that is resisting its passage. He proposed instead that an impetus was imparted to the object in the act of throwing it. Ibn Sīnā (also known by his Latinized name Avicenna) read Philoponus and published his own theory of motion in The Book of Healing in 1020. He agreed that an impetus is imparted to a projectile by the thrower; but unlike Philoponus, who believed that it was a temporary virtue that would decline even in a vacuum, he viewed it as persistent, requiring external forces such as air resistance to dissipate it. The work of Philoponus, and possibly that of Ibn Sīnā, was read and refined by the European philosophers Peter Olivi and Jean Buridan. Buridan, who in about 1350 was made rector of the University of Paris, referred to impetus being proportional to the weight times the speed. Moreover, Buridan's theory was different from his predecessor's in that he did not consider impetus to be self-dissipating, asserting that a body would be arrested by the forces of air resistance and gravity which might be opposing its impetus.
René Descartes believed that the total "quantity of motion" in the universe is conserved, where the quantity of motion is understood as the product of size and speed. This should not be read as a statement of the modern law of momentum, since he had no concept of mass as distinct from weight and size, and more importantly he believed that it is speed rather than velocity that is conserved. So for Descartes if a moving object were to bounce off a surface, changing its direction but not its speed, there would be no change in its quantity of motion. Galileo, later, in his Two New Sciences, used the Italian word impeto.
The first correct statement of the law of conservation of momentum was by English mathematician John Wallis in his 1670 work, Mechanica sive De Motu, Tractatus Geometricus: "the initial state of the body, either of rest or of motion, will persist" and "If the force is greater than the resistance, motion will result". Wallis used momentum for quantity of motion and vis for force. Newton's Philosophiæ Naturalis Principia Mathematica, when it was first published in 1687, showed a similar casting around for words to use for the mathematical momentum. His Definition II defines quantitas motus, "quantity of motion", as "arising from the velocity and quantity of matter conjointly", which identifies it as momentum. Thus when in Law II he refers to mutatio motus, "change of motion", being proportional to the force impressed, he is generally taken to mean momentum and not motion. It remained only to assign a standard term to the quantity of motion. The first use of "momentum" in its proper mathematical sense is not clear, but by the time of Jennings's Miscellanea in 1721, five years before the final edition of Newton's Principia Mathematica, momentum M or "quantity of motion" was being defined for students as "a rectangle", the product of Q and V, where Q is "quantity of material" and V is "velocity", s/t.
See also
- Here the time coordinate comes first. Several sources put the time coordinate at the end of the vector.
- Feynman Vol. 1, Chapter 9
- "Euler's Laws of Motion". Retrieved 2009-03-30.
- McGill and King (1995). Engineering Mechanics, An Introduction to Dynamics (3rd ed.). PWS Publishing Company. ISBN 0-534-93399-8.
- Plastino, Angel R.; Muzzio, Juan C. (1992). "On the use and abuse of Newton's second law for variable mass problems". Celestial Mechanics and Dynamical Astronomy (Netherlands: Kluwer Academic Publishers) 53 (3): 227–232. Bibcode:1992CeMDA..53..227P. doi:10.1007/BF00052611. ISSN 0923-2958. "We may conclude emphasizing that Newton's second law is valid for constant mass only. When the mass varies due to accretion or ablation, [an alternate equation explicitly accounting for the changing mass] should be used."
- Feynman Vol. 1, Chapter 10
- Goldstein 1980, pp. 54–56
- Goldstein 1980, p. 276
- Carl Nave (2010). "Elastic and inelastic collisions". Hyperphysics. Retrieved 2 August 2012.
- Serway, Raymond A.; John W. Jewett, Jr (2012). Principles of physics : a calculus-based text (5th ed.). Boston, MA: Brooks/Cole, Cengage Learning. p. 245. ISBN 9781133104261.
- Carl Nave (2010). "Forces in car crashes". Hyperphysics. Retrieved 2 August 2012.
- Carl Nave (2010). "The Franck-Hertz Experiment". Hyperphysics. Retrieved 2 August 2012.
- McGinnis, Peter M. (2005). Biomechanics of sport and exercise Biomechanics of sport and exercise (2nd ed.). Champaign, IL [u.a.]: Human Kinetics. p. 85. ISBN 9780736051019.
- Sutton, George (2001), "1", Rocket Propulsion Elements (7th ed.), Chichester: John Wiley & Sons, ISBN 978-0-471-32642-7
- Feynman Vol. 1, Chapter 11
- Rindler 1986, pp. 26–27
- Kleppner; Kolenkow. An Introduction to Mechanics. pp. 135–139.
- Goldstein 1980, pp. 11–13
- Jackson 1975, p. 574
- Feynman Vol. 3, Chapter 21-3
- Goldstein 1980, pp. 20–21
- Lerner, Rita G., ed. (2005). Encyclopedia of physics (3rd ed.). Weinheim: Wiley-VCH-Verl. ISBN 978-3527405541.
- Goldstein 1980, pp. 341–342
- Goldstein 1980, p. 348
- Hand, Louis N.; Finch, Janet D. (1998). Analytical mechanics (7th print ed.). Cambridge, England: Cambridge University Press. Chapter 4. ISBN 9780521575720.
- Rindler 1986, Chapter 2
- Feynman Vol. 1, Chapter 15-2
- Rindler 1986, pp. 77–81
- Rindler 1986, p. 66
- Misner, Charles W.; Kip S. Thorne, John Archibald Wheeler (1973). Gravitation. 24th printing. New York: W. H. Freeman. p. 51. ISBN 9780716703440.
- Rindler 1986, pp. 86–87
- Goldstein 1980, pp. 7–8
- Jackson 1975, pp. 238–241 Expressions, given in Gaussian units in the text, were converted to SI units using Table 3 in the Appendix.
- Feynman Vol. 1, Chapter 27-6
- Barnett, Stephen M. (2010). "Resolution of the Abraham-Minkowski Dilemma". Physical Review Letters 104 (7). Bibcode:2010PhRvL.104g0401B. doi:10.1103/PhysRevLett.104.070401. PMID 20366861.
- Tritton 2006, pp. 48–51
- Feynman Vol. 2, Chapter 40
- Tritton 2006, p. 54
- Bird, R. Byron; Warren Stewart; Edwin N. Lightfoot (2007). Transport phenomena (2nd revised ed.). New York: Wiley. p. 13. ISBN 9780470115398.
- Tritton 2006, p. 58
- Acheson, D. J. (1990). Elementary Fluid Dynamics. Oxford University Press. p. 205. ISBN 0-19-859679-0.
- Gubbins, David (1992). Seismology and plate tectonics (Repr. (with corr.) ed.). Cambridge [England]: Cambridge University Press. p. 59. ISBN 0521379954.
- LeBlond, Paul H.; Mysak, Lawrence A. (1980). Waves in the ocean (2. impr. ed.). Amsterdam [u.a.]: Elsevier. p. 258. ISBN 9780444419262.
- McIntyre, M. E. (1981). "On the 'wave momentum' myth". J. Fluid. Mech 106: 331–347.
- "John Philoponus". Standford Encyclopedia of Philosophy. 8 June 2007. Retrieved 26 July 2012.
- Espinoza, Fernando (2005). "An analysis of the historical development of ideas about motion and its implications for teaching". Physics Education 40 (2): 141.
- Seyyed Hossein Nasr & Mehdi Amin Razavi (1996). The Islamic intellectual tradition in Persia. Routledge. p. 72. ISBN 0-7007-0314-4.
- Aydin Sayili (1987). "Ibn Sīnā and Buridan on the Motion of the Projectile". Annals of the New York Academy of Sciences 500 (1): 477–482. Bibcode:1987NYASA.500..477S. doi:10.1111/j.1749-6632.1987.tb37219.x.
- T.F. Glick, S.J. Livesey, F. Wallis. "Buridan, John". Medieval Science, Technology and Medicine: an Encyclopedia. p. 107.
- Park, David (1990). The how and the why : an essay on the origins and development of physical theory. With drawings by Robin Brickman (3rd print ed.). Princeton, N.J.: Princeton University Press. pp. 139–141. ISBN 9780691025087.
- Daniel Garber (1992). "Descartes' Physics". In John Cottingham. The Cambridge Companion to Descartes. Cambridge: Cambridge University Press. pp. 310–319. ISBN 0-521-36696-8.
- Rothman, Milton A. (1989). Discovering the natural laws : the experimental basis of physics (2nd ed., revised and enlarged). New York: Dover Publications. pp. 83–88. ISBN 9780486261782.
- Scott, J.F. (1981). The Mathematical Work of John Wallis, D.D., F.R.S. Chelsea Publishing Company. p. 111. ISBN 0-8284-0314-7.
- Grimsehl, Ernst; Leonard Ary Woodward, Translator (1932). A Textbook of Physics. London, Glasgow: Blackie & Son limited. p. 78.
- Rescigno, Aldo (2003). Foundation of Pharmacokinetics. New York: Kluwer Academic/Plenum Publishers. p. 19. ISBN 0306477041.
- Jennings, John (1721). Miscellanea in Usum Juventutis Academicae. Northampton: R. Aikes & G. Dicey. p. 67.
Further reading
- Halliday, David; Robert Resnick (1960-2007). Fundamentals of Physics. John Wiley & Sons. Chapter 9.
- Dugas, René (1988). A history of mechanics. Translated into English by J.R. Maddox (Dover ed.). New York: Dover Publications. ISBN 9780486656328.
- Feynman, Richard P.; Leighton, Robert B.; Sands, Matthew (2005). The Feynman lectures on physics, Volume 1: Mainly Mechanics, Radiation, and Heat (Definitive ed.). San Francisco, Calif.: Pearson Addison-Wesley. ISBN 978-0805390469.
- Feynman, Richard P.; Leighton, Robert B.; Sands, Matthew (2005). The Feynman lectures on physics, Volume III: Quantum Mechanics (Definitive ed.). New York: BasicBooks. ISBN 978-0805390490.
- Goldstein, Herbert (1980). Classical mechanics (2d ed.). Reading, Mass.: Addison-Wesley Pub. Co. ISBN 0201029189.
- Hand, Louis N.; Finch, Janet D. Analytical Mechanics. Cambridge University Press. Chapter 4.
- Jackson, John David (1975). Classical electrodynamics (2d ed.). New York: Wiley. ISBN 047143132X.
- Landau, L.D.; E.M. Lifshitz (2000). The classical theory of fields. 4th rev. English edition, reprinted with corrections; translated from the Russian by Morton Hamermesh. Oxford: Butterworth Heinemann. ISBN 9780750627689.
- Rindler, Wolfgang (1986). Essential Relativity : Special, general and cosmological (Rev. 2. ed.). New York u.a.: Springer. ISBN 0387100903.
- Serway, Raymond; Jewett, John (2003). Physics for Scientists and Engineers (6 ed.). Brooks Cole. ISBN 0-534-40842-7
- Stenger, Victor J. (2000). Timeless Reality: Symmetry, Simplicity, and Multiple Universes. Prometheus Books. Chpt. 12 in particular.
- Tipler, Paul (1998). Physics for Scientists and Engineers: Vol. 1: Mechanics, Oscillations and Waves, Thermodynamics (4th ed.). W. H. Freeman. ISBN 1-57259-492-6
- Tritton, D.J. (2006). Physical fluid dynamics (2nd ed.). Oxford: Clarendon Press. p. 58. ISBN 0198544936.
- Conservation of momentum – A chapter from an online textbook
- MinutePhysics - Another Physics Misconception - MinutePhysics explaining a common misconception of momentum | http://www.digplanet.com/wiki/Momentum | 13
82 | We have seen that functions can consume functions and how important that is for creating single points of control in a function. But functions not only can consume functions, they can also produce them. More precisely, expressions in the new Scheme can evaluate to functions. Because the body of a function definition is also an expression, a function can produce a function. In this section, we first discuss this surprising idea and then show how it is useful for abstracting functions and in other contexts.
While the idea of producing a function may seem strange at first, it is extremely useful. Before we can discuss the usefulness of the idea, though, we must explore how a function can produce a function. Here are three examples:
(define (f x) first) (define (g x) f) (define (h x) (cond ((empty? x) f) ((cons? x) g)))
The body of f is first, a primitive operation, so applying f to any argument always evaluates to first. Similarly, the body of g is f, so applying g to any argument always evaluates to f. Finally, depending on what kind of list we supply as an argument to h, it produces either f or g.
None of these examples is useful but each illustrates the basic idea. In
the first two cases, the body of the function definition is a function. In
the last case, it evaluates to a function. The examples are useless
because the results do not contain or refer to the argument. For a function f to produce a function that contains one of f's arguments, f must define a function and return it as the result. That is, f's body must be a local-expression.
Recall that local-expressions group definitions and ask DrScheme to evaluate a single expression in the context of these definitions. They can occur wherever an expression can occur, which means the following definition is legal:
(define (add x) (local ((define (x-adder y) (+ x y))) x-adder))
The function add consumes a number; after all, x is added to y. It then defines the function x-adder with a local-expression. The body of the local-expression is x-adder, which means the result of add is the function x-adder. To understand add better, let us look at how an application of add to some number evaluates:
(define f (add 5)) = (define f (local ((define (x-adder y) (+ 5 y))) x-adder)) = (define f (local ((define (x-adder5 y) (+ 5 y))) x-adder5)) = (define (x-adder5 y) (+ 5 y)) (define f x-adder5)
The last step adds the function
x-adder5 to the collection of our
definitions; the evaluation continues with the body of the local-expression,
x-adder5, which is the name of a function and thus a value. Now
f is defined and we can use it:
(f 10) = (x-adder5 10) = (+ 5 10) = 15
f stands for
x-adder5, a function, which adds
5 to its argument.
Using this example, we can write
add's contract and a purpose statement:
;; add : number -> (number -> number)
;; to create a function that adds x to its input
(define (add x) (local ((define (x-adder y) (+ x y))) x-adder))
The most interesting property of
add is that its result
``remembers'' the value of
x. For example, every time we use
f, it uses
5, the value of x that was used to define f. This form of ``memory'' is the key to our simple recipe for defining abstract functions, which we discuss in the next section.
The combination of local-expressions and functions-as-values simplifies our
recipe for creating abstract functions. Consider our very first example in
figure 53 again. If we replace the contents of the boxes with rel-op, we get a function that has a free variable. To avoid this, we can either add rel-op to the parameters or we can wrap the definition in a local and prefix it with a function that consumes rel-op. Figure 59 shows what happens when we use this idea with filter. If we also make the locally defined function the result of the function, we have defined an abstraction of the two original functions.
Put differently, we follow the example of add in the preceding section: filter2 consumes an argument, defines a function, and returns this function as a result. The result remembers the rel-op argument for good, as the following evaluation shows:
(filter2 <) = (local ((define (abs-fun alon t) (cond [(empty? alon) empty] [else (cond [(< (first alon) t) (cons (first alon) (abs-fun (rest alon) t))] [else (abs-fun (rest alon) t)])]))) abs-fun)
= (define (below3 alon t) (cond [(empty? alon) empty] [else (cond [(< (first alon) t) (cons (first alon) (below3 (rest alon) t))] [else (below3 (rest alon) t)])])) below3
Remember that as we lift a
local definition to the top-level
definitions, we also rename the function in case the same local is evaluated again. Here we choose the name below3 to indicate what the function does. And indeed, a comparison between below and below3 reveals that the only difference is the name of the function.
From the calculation, it follows that we can give the result of
(filter2 <) a name and use it as if it were below:
(define below2 (filter2 <))
is equivalent to
(define (below3 alon t) (cond [(empty? alon) empty] [else (cond [(< (first alon) t) (cons (first alon) (below3 (rest alon) t))] [else (below3 (rest alon) t)])])) (define below2 below3)
below2 is just another name for below3, which directly proves that our abstract function correctly implements below.
The example suggests a variant of the abstraction recipe from section 21:
(local ((define (concrete-fun x y z) ... op1 ... op2 ...)) concrete-fun)
From that, we can create the abstract function by listing the names in the boxes as parameters:
(define (abs-fun op1 op2) (local ((define (concrete-fun x y z) ... op1 ... op2 ...)) concrete-fun))
If op1 or op2 is a special symbol, say <, we name it something that is more meaningful in the new context.
Defining below and above as instances of filter2 is now straightforward:
(define below2 (filter2 <)) (define above2 (filter2 >))
We simply apply
filter2 to the contents of the box in the
respective concrete function, and that application produces the old function. Here is the contract for filter2:
filter2 : (X Y -> boolean) -> ((listof X) Y -> (listof X))
It consumes a comparison function and produces a concrete filter-style function.
The generalization of the contract works as before.
Given our experience with the first design recipe, the second one is only a question of practice.
Define an abstraction of the functions
names from section 21.1 using the new recipe for abstraction.
Define an abstract version of
(exercise 19.1.6) using the new recipe for abstraction.
Define fold using the new recipe for abstraction. Recall that
fold abstracts the following pair of functions:
Functions as first-class values play a central role in the design of graphical user interfaces. The term ``interface'' refers to the boundary between the program and a user. As long as we are the only users, we can apply functions to data in DrScheme's Interactions window. If we want others to use our programs, though, we must provide a way to interact with the program that does not require any programming knowledge. The interaction between a program and a casual user is the USER INTERFACE.
A GRAPHICAL USER INTERFACE (GUI) is the most convenient interface for casual users. A GUI is a window that contains GUI items. Some of these items permit users to enter text; others are included so that users can apply a specific function; and yet others exist to display a function's results. Examples include buttons, which the user can click with the mouse and which trigger a function application; choice menus, from which the user can choose one of a collection of values; text fields, into which the user can type arbitrary text; and message fields, into which a program can draw text.
Take a look at the simple GUI in figure 60. The left-most picture shows its initial state. In that state, the GUI contains a text field labeled ``Name'' and a message field labeled ``Number'' plus a ``LookUp'' button. In the second picture, the user has entered the name ``Sean'' but hasn't yet clicked the ``LookUp'' button.48 Finally, the right-most picture shows how the GUI displays the phone number of ``Sean'' after the user clicks the ``LookUp'' button.
The core of the program is a function that looks up a phone number for a name in a list. We wrote several versions of this function in part II but always used it with DrScheme's Interactions window. Using the GUI of figure 60, people who know nothing about Scheme can now use our function, too.
To build a graphical user interface, we build structures49 that correspond to the GUI items and hand them over to a GUI manager. The latter constructs the visible window from these items. Some of the structures' fields describe the visual properties of the GUI's elements, such as the label of a button, the initial content of a message field, or the available choices on a menu. Other fields stand for functions. They are called CALL-BACK FUNCTIONS because the GUI manager calls -- or applies -- these functions when the user manipulates the corresponding GUI element. Upon application, a call-back function obtains strings and (natural) numbers from the elements of the GUI and then applies the function proper. This last step computes answers, which the call-back function can place into GUI elements just like graphics functions draw shapes on a canvas.
The ideal program consists of two completely separate components: the MODEL, which is the kind of program we are learning to design, and a VIEW, which is the GUI program that manages the display of information and the user's mouse and keyboard manipulations. The bridge between the two is the CONTROL expression. Figure 61 graphically illustrates the organization, known as the MODEL-VIEW-CONTROL architecture. The lowest arrow indicates how a program makes up a button along with a call-back function. The left-to-right arrow depicts the mouse-click event and how it triggers an application of the call-back function. It, in turn, uses other GUI functions to obtain user input before it applies a core function or to display results of the core function.
The separation of the program into two parts means that the definitions for the model contain no references to the view, and that the definitions for the view contain no references to the data or the functionality of the model. The organization principle evolved over two decades from many good and bad experiences. It has the advantage that, with an adaptation of just the bridge expression, we can use one and the same program with different GUIs and vice versa. Furthermore, the construction of views requires different tools than does the construction of models. Constructing views is a labor-intensive effort and involves graphical design, but fortunately, it is often possible to generate large portions automatically. The construction of models, in contrast, will always demand a serious program design effort.
Here we study the simplified GUI world of the teachpack
gui.ss. Figure 62 specifies the operations that the
teachpack provides.50 The GUI
manager is represented by the function
create-window. Its contract
and purpose statement are instructive. They explain that we create a window
from a list of lists of gui-items. The function arranges these lists in a corresponding number of rows on the visible window. Each row is specified as a list of gui-items. The data definition for gui-items in figure 62 shows that there are four kinds:
- text fields are created with (make-text a-string) and allow users to enter arbitrary text into an area in the window;
- buttons are created with (make-button a-string a-function) and allow users to apply a function with the click of a mouse button;
- choice menus are created with (make-choice a-list-of-strings) and allow users to pick a choice from a specified set of choices; and
- message fields are created with (make-message a-string) and enable the model to inform users of results.
The function that goes with a button is a function of one argument: an event. For most uses, we can ignore the event; it is simply a token that signals the user's click action.
How all this works is best illustrated with examples. Our first example is a canonical GUI program:
(create-window (list (list (make-button "Close" hide-window))))
It creates a window with a single button and equips it with the simplest of all call-backs: hide-window, the function that hides the window. When
the user clicks the button labeled
"Close", the window disappears.
The second sample GUI copies what the user enters into a text field to a message field. We first create a text field and a message field:
(define a-text-field (make-text "Enter Text:")) (define a-message (make-message "`Hello World' is a silly program."))
Now we can refer to these fields in a call-back function:
;; echo-message : X -> true
;; to extract the contents of a-text-field and to draw it into a-message
(define (echo-message e) (draw-message a-message (text-contents a-text-field)))
The definition of the call-back function is based on our (domain) knowledge of gui-items. Specifically, the function
echo-message obtains the current contents of the text field with
text-contents as a string, and it draws this string into the
message field with the
draw-message function. To put everything
together, we create a window with two rows:
(create-window (list (list a-text-field a-message) (list (make-button "Copy Now" echo-message))))
The first row contains the text and the message field; the second one
contains a button with the label
"Copy Now" whose call-back
echo-message. The user can now enter text into the
text field, click the button, and see the text appear in the message
field of the window.
The purpose of the third and last example is to create a window with a
choice menu, a message field, and a button. Clicking the button puts the
current choice into the message field. As before, we start by defining the
input and output gui-items:
(define THE-CHOICES (list "green" "red" "yellow")) (define a-choice (make-choice THE-CHOICES)) (define a-message (make-message (first THE-CHOICES)))
Because the list of choices is used more than once in the program, it is specified in a separate variable definition.
As before, the call-back function for the button interacts with these gui-items:
;; echo-choice : X -> true
;; to determine the current choice of a-choice and
;; to draw the corresponding string into a-message
(define (echo-choice e) (draw-message a-message (list-ref THE-CHOICES (choice-index a-choice))))
Specifically, the call-back function finds the index of the user's current choice with
choice-index, uses Scheme's
list-ref function to extract the corresponding string from
THE-CHOICES, and then draws the result into the message field of
the window. To create the window, we arrange a-choice and a-message in one row and the button in a row below:
(create-window (list (list a-choice a-message) (list (make-button "Confirm Choice" echo-choice))))
Now that we have examined some basic GUI programs, we can study a program
with full-fledged core and GUI components. Take a look at the definitions
in figure 63. The program's purpose is to echo the values
of several digit choice menus as a number into some message field. The
model consists of the
build-number function, which converts a list
of (three) digits into a number. We have developed several such functions,
so the figure mentions only what it does. The GUI component of the program
sets up three choice menus, a message field, and a button. The control
part consists of a single call-back function, which is attached to the
single button in the window. It determines the (list of) current choice
indices, hands them over to
build-number, and draws the result as
a string into the message field.
Let's study the organization of the call-back functions in more detail. Each one composes three kinds of functions, as the sketch following this list illustrates:
The innermost function determines the current state of the
gui-items. This is the user's input. With the given functions, we
can determine the string that the user entered into either a text field or
the 0-based index of a choice menu.
This user input is consumed by the main function of the model. The call-back function may convert the user's string into some other data, say, a symbol or a number.
The result of the model function, in turn, is drawn into a message field, possibly after converting it to a string first.
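As a sketch only (a-choice and a-message stand for gui-items like the ones defined above, and model-function stands for whichever function the model provides; both names are placeholders), a call-back of this shape might read:

(define (call-back e)
  ;; obtain the user's input, hand it to the model, display the result
  (draw-message a-message
                (number->string
                 (model-function (choice-index a-choice)))))

Reading from the inside out: choice-index obtains the user's input, model-function computes the answer, and draw-message displays it after the number is converted to a string.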
The control component of a program is also responsible for the visual
composition of the window. The teachpack provides only one function for this purpose: create-window. Standard GUI toolboxes provide many
more functions, though all of these toolboxes differ from each other and
are changing rapidly.
Exercise 22.3.1. Modify the program of figure 63 so that it implements the number-guessing game from exercises 5.1.2, 5.1.3, and 9.5.5. Make sure that the number of digits that the player must guess can be changed by editing a single definition in the program.
Hint: Recall that exercise 11.3.1 introduces a function that generates random numbers.
Develop a program for looking up phone numbers. The program's GUI should
consist of a text field, a message field, and a button. The text field
permits users to enter names. The message field should display the number
that the model finds or the message
"name not found", if the model
Generalize the program so that a user can also enter a phone number (as a sequence of digits containing no other characters).
Hints: (1) Scheme provides the function string->symbol for converting a string to a symbol. (2) It also provides the function
string->number, which converts a string to a number if
possible. If the function consumes a string that doesn't represent a
number, it produces false:
( string-> number "6670004") = 6670004
( string-> number "667-0004") = false
The generalization demonstrates how one and the same GUI can use two distinct models.
Real-world GUIs: The graphical user interface in
figure 60 was not constructed from the items provided by
the teachpack. GUIs constructed with the teachpack's gui-items are primitive. They are sufficient, however, to study the basic
principles of GUI programming. The design of real-world GUIs involves
graphics designers and tools that generate GUI programs (rather than making
them by hand).
Develop the function pad->gui. The function consumes a title (string) and a gui-table. It turns the table into a list of lists of gui-items that create-window can consume. Here is the data definition for gui-tables:
A cell is either a number or a symbol.
A gui-table is a (listof (listof cell)).
Here are two examples of gui-tables:
(define pad '((1 2 3) (4 5 6) (7 8 9) (\# 0 *)))
(define pad2 '((1 2 3 +) (4 5 6 -) (7 8 9 *) (0 = \. /)))
pad->gui should turn each cell into a button. The
resulting list should be prefixed with two messages. The first one displays
the title and never changes. The second one displays the latest button that
the user clicked. The two examples above should produce the following two windows.
Hint: The second message header requires a short string, for example,
"N", as the initial value.
48 The program has also cleared the result field to avoid any misunderstanding. Similarly, the user could also just hit the enter key instead of clicking the button. We ignore such subtleties here.
49 More precisely, we construct an object, but we do not need to understand the distinction between a structure and an object here.
50 gui-items aren't really structures, which explains the font of the operations' names. | http://www.htdp.org/2003-09-26/Book/curriculum-Z-H-28.html | 13
68 | Working With Circles and Circular Figures Study Guide
Introduction to Working With Circles and Circular Figures
In this lesson, you will learn about the irrational number pi, or π. You will also learn to use formulas to find the circumference and area of circles, the surface area and volume of cylinders and spheres, and the volume of cones.
Before you begin to work with circles and circular figures, you need to know about the irrational number, π (pronounced "pie"). Over 2,000 years ago, mathematicians approximated the value of the ratio of the distance around a circle to the distance across a circle to be approximately 3. Years later, this value was named with the Greek letter π. The exact value of π is still a mathematical mystery. π is an irrational number. A rational number is a number that can be written as a ratio, a fraction, or a terminating or repeating decimal. Although its value has been computed in various ways over the past several hundred years, no one has been able to find a decimal value of π where the decimal terminates or develops a repeating pattern. Computers have been used to calculate the value of π to over fifty billion decimal places, but there is still no termination or repeating group of digits.
The most commonly used approximations for π are 22/7 and 3.14. These are not the true values of π, only rounded approximations. You may have a π key on your calculator. This key will give you an approximation for π that varies according to how many digits your calculator displays.
Circumference of a Circle
Now that you know about the irrational number π, it's time to start working with circles. The distance around a circle is called its circumference. There are many reasons why people need to find a circle's circumference. For example, the amount of lace edge around a circular skirt can be found by using the circumference formula. The amount of fencing for a circular garden is another example of when you need the circumference formula.
Since π is the ratio of circumference to diameter, the approximation of π times the diameter of the circle gives you the circumference of the circle. The diameter of a circle is the distance across a circle through its center. A radius is the distance from the center to the edge. One-half the diameter is equal to the radius or two radii are equal to the length of the diameter.
Here is a theorem that will help you solve circumference problems:
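Stated as formulas, with C the circumference, d the diameter, and r the radius of the circle:

C = πd     or, equivalently,     C = 2πr

Use the first form when you know the diameter and the second when you know the radius.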
Since π is approximately (not exactly) equal to 3.14, after you substitute the value 3.14 for π in the formula, you should use ≈ instead of =. The symbol ≈ means approximately equal to.
Find the circumference of each circle. Use the approximation of 3.14 for π.
Notice that these two circles have the same circumference because a circle with a diameter of 10 cm has a radius of 5 cm. You pick which formula to use based on what information you are given—the circle's radius or its diameter.
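For example, for the circles just described (diameter 10 cm, radius 5 cm), both formulas give the same answer:

C = πd ≈ 3.14 × 10 = 31.4 cm
C = 2πr ≈ 2 × 3.14 × 5 = 31.4 cm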
Area of a Circle
To understand the area of a circle, take a look at the following figure. Imagine a circle that is cut into wedges and rearranged to form a shape that resembles a parallelogram.
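The base of that parallelogram-like shape is about half the circumference, or πr, and its height is about the radius r, which leads to the area formula:

A = πr × r = πr²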
Notice that this formula squares the radius, not the diameter, so if you are given the diameter, you should divide it by two or multiply it by one-half to obtain the radius. Some people will mistakenly think that squaring the radius and doubling it to get the diameter are the same. They are the same only when the radius is 2. Otherwise, two times a number and the square of a number are very different.
Find the approximate area for each circle. Use 3.14 for π.
| http://www.education.com/study-help/article/working-circles-circular-figures/ | 13