score | text | url | year
---|---|---|---
56 | Uniform Circular Motion Activity Sheet
The purpose of this activity is to explore the characteristics of the motion of an object in a circle at a constant speed.
Procedure and Questions:
1. Navigate to the Uniform Circular Motion page and experiment with the on-screen buttons in order to gain familiarity with the control of the animation. The object speed, radius of the circle, and object mass can be varied by using the sliders or the buttons. The vector nature of velocity and acceleration can be depicted on the screen. A trace of the object's motion can be turned on, turned off, and erased. The acceleration of and the net force acting upon the object are displayed at the bottom of the screen. The animation can be started, paused, continued, or rewound.
After gaining familiarity with the program, use it to answer the following questions:
2. Velocity is a vector quantity which has both magnitude and direction. Using complete sentences, describe the body's velocity. Comment on both the magnitude and the direction.
3. TRUE or FALSE?
If an object moves in a circle at a constant speed, its velocity vector will be constant.
Explain your answer.
4. In the diagram at the right, a variety of positions about a circle are shown. Draw the velocity vector at the various positions; direct the v arrows in the proper direction and label them as v. Draw the acceleration vector at the various positions; direct the a arrows in the proper direction and label them as a.
5. Describe the relationship between the direction of the velocity vector and the direction of the acceleration for a body moving in a circle at constant speed.
6. A Puzzling Question to Think About: If an object is in uniform circular motion, then it is accelerating towards the center of the circle; yet the object never gets any closer to the center of the circle. It maintains a circular path at a constant radius from the circle's center. Suggest a reason as to how this can be. How can an object accelerate towards the center without ever getting any closer to the center?
7. A "Thought Experiment": Suppose that an object is moving in a clockwise circle (or at least trying to move in a circle).
- Suppose that at point A the object traveled in a straight line at constant speed towards B'. In what direction must a force be applied to force the object back towards B? Draw an arrow on the diagram in the direction of the required force.
- Repeat the above procedure for an object moving from C to D'. In what direction must a force be applied in order for the object to move back to point D along the path of the circle? Draw an arrow on the diagram.
- If the acceleration of the body is towards the center, what is the direction of the unbalanced force? Using a complete sentence, describe the direction of the net force which causes the body to travel in a circle at constant speed.
8. Thinking Mathematically: Explore the quantitative dependencies of the acceleration upon the speed and the radius of curvature. Then answer the following questions; a short numerical sketch follows them.
a. For the same speed, the acceleration of the object varies _____________ (directly, inversely) with the radius of curvature.
b. For the same radius of curvature, the acceleration of the object varies _____________ (directly, inversely) with the speed of the object.
c. As the speed of an object is doubled, the acceleration is __________________ (one-fourth, one-half, two times, four times) the original value.
d. As the speed of an object is tripled, the acceleration is __________________ (one-third, one-ninth, three times, nine times) the original value.
e. As the radius of the circle is doubled, the acceleration is __________________ (one-fourth, one-half, two times, four times) the original value.
f. As the radius of the circle is tripled, the acceleration is __________________ (one-third, one-ninth, three times, nine times) the original value.
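As a rough illustration (not part of the original activity sheet), the following Python sketch applies the standard relation a = v^2 / r to show how the acceleration scales when the speed or the radius is changed; the baseline speed and radius are arbitrary values chosen only for the demonstration.

```python
# A minimal numerical sketch (not part of the original activity) using the
# centripetal-acceleration relation a = v^2 / r to check how the acceleration
# scales when the speed or the radius is changed.

def centripetal_acceleration(speed, radius):
    """Return the magnitude of the centripetal acceleration in m/s^2."""
    return speed ** 2 / radius

v, r = 10.0, 5.0          # baseline speed (m/s) and radius (m); arbitrary illustrative values
a0 = centripetal_acceleration(v, r)

print(a0)                                        # baseline: 20.0 m/s^2
print(centripetal_acceleration(2 * v, r) / a0)   # doubling the speed  -> 4.0
print(centripetal_acceleration(3 * v, r) / a0)   # tripling the speed  -> 9.0
print(centripetal_acceleration(v, 2 * r) / a0)   # doubling the radius -> 0.5
print(centripetal_acceleration(v, 3 * r) / a0)   # tripling the radius -> 0.333...
```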
Write a conclusion to this lab in which you completely and intelligently describe the characteristics of an object that is traveling in uniform circular motion. Give attention to the quantities speed, velocity, acceleration and net force.
Start Circular Motion Activity. | http://www.physicsclassroom.com/shwave/ucmdirns.cfm | 13 |
89 | Relativity and Invariance ---- Paradoxes ---- Space-Time diagrams ---- Speculations
"Different" velocities mean different frames. That includes the case where they have the same speed but are moving in different directions. In special relativity if a frame is noninertial it must be accelerating; it's either moving at constant velocity or it's accelerating. Two noninertial frames will also be accelerating with respect to each other, unless they are accelerating in exactly the same way (i.e. two observers could be in the same accelerating frame at different locations and at rest with respect to each other). (The situation is a bit more complicated in general relativity because inertial reference frames can generally be defined only locally.)
Velocities don't add up like they do in Newtonian mechanics. The relativity of space and time intervals extends to velocity. Equation 7.8 describes how to add two velocities together. For your example we have (.75+.75)/(1+.75*.75) = 1.5/1.5625 = 0.96.
One can prove this relation by examining the Lorentz transform of velocity = dx/dt. This involves some mildly complicated algebra.
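As an illustration (not from the text), the velocity-addition formula quoted above can be written as a tiny Python function; it works in units where c = 1, so speeds are expressed as fractions of the speed of light.

```python
# Illustrative sketch of relativistic velocity addition, w = (u + v) / (1 + u*v/c^2),
# in units where c = 1 so that speeds are fractions of c.

def add_velocities(u, v, c=1.0):
    """Combine two collinear velocities relativistically."""
    return (u + v) / (1.0 + u * v / c**2)

print(add_velocities(0.75, 0.75))   # 0.96, reproducing the example above
print(add_velocities(0.9, 0.9))     # ~0.9945, still below c
print(add_velocities(0.5, 1.0))     # 1.0: adding anything to c still gives c
```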
The second postulate of relativity requires the speed of light to be
the same in all reference frames. Since speed is a change of distance
divided by a change of time, if time changes (time-dilation) then
length must change too to maintain the constancy of light.
A question sometimes arises: is this contraction real or just an optical illusion of some sort? The answer is that it is real in every sense. A measurement of something moving will be shorter than of that same object at rest according to every possible test.
Is there any way to visualize length contraction rather than just considering the mathematical equation for it?
The spacetime diagram can provide a graphical illustration of length contraction, i.e. you can see how a standard length is shorter in other frames compared to its rest frame.
We are used to thinking of time unfolding, and time being the same for
all possible observers. This isn't the way it is in SR. Simultaneity
is defined as "things happening at the same time." We might also
define it in this way: Two events (two points in spacetime) are
simultaneous in a given frame if a light signal emitted midway between
the spatial locations of those two events arrives at those events. An
example would be a lightbulb flashing at the center of a train car.
In the train-car's frame the light hits the front and the back of the
car at the same time, hence those two events are simultaneous. But to
an observer watching the train car go by at some speed the two events
cannot be simultaneous because the rear of the train comes forward to
meet the light signal while the front of the train moves ahead of
the light signal (put another way, the flashing of the light doesn't
occur midway between the two events, according to the ground-based
observer, because the train is in motion).
Because events that are simultaneous must be spacelike separated, they can have no causal relationship; hence, in a sense, simultaneity is a convention. When we say that a set of clocks is synchronized we are making a statement about how the clocks are set relative to one another in a certain frame. In another frame the clocks will show different times, although all the clocks in a given inertial frame will run at the same rate.
The first thing to understand is that time intervals and space intervals are relative, i.e., the value an inertial observer would get for a measurement of either of them depends on the observer's frame. This is different from what we are used to thinking. Space intervals and time intervals vary between frames according to the Lorentz transformation formula (the gamma or boost factor).
If you were to take the definition for the space-time interval and plug in the Lorentz transformation terms as appropriate to change delta t and delta x to a different frame, you would find that the terms cancel in such a way as to leave delta s unchanged, despite the transformation to a different frame. This means that all observers, no matter what their relative motion, will measure the same delta s even though their individual delta t and delta x intervals will be different. The agreement between all the inertial observers means that the quantity is invariant.
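A short numerical check can make this concrete. The sketch below is my illustration, not something from the text: it Lorentz-transforms a time and space separation to several moving frames and shows that (delta t)^2 - (delta x)^2 comes out the same each time. Units with c = 1; the chosen separations and boost speeds are arbitrary.

```python
import math

# Sketch: numerically verify that the spacetime interval
# delta_s^2 = (delta_t)^2 - (delta_x)^2 (with c = 1) is unchanged by a Lorentz boost.

def boost(dt, dx, v):
    """Lorentz-transform a (delta_t, delta_x) separation into a frame moving at speed v."""
    gamma = 1.0 / math.sqrt(1.0 - v**2)
    dt_prime = gamma * (dt - v * dx)
    dx_prime = gamma * (dx - v * dt)
    return dt_prime, dx_prime

dt, dx = 5.0, 3.0                      # a timelike separation (arbitrary example)
for v in (0.3, 0.6, 0.9):
    dtp, dxp = boost(dt, dx, v)
    print(v, dtp**2 - dxp**2)          # always 16.0 = 5^2 - 3^2, whatever the frame
```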
But what is delta s, the space time interval? In a mathematical
sense, it is just a thing constructed in such a way that the Lorentz
transformation terms cancel out; that is, it is designed to be an
invariant quantity. Does it have a physical interpretation?
It is a measure of a "distance" in spacetime. In ordinary
space, a distance can be defined that is invariant under the Galilean
transformation: the familiar Pythagorean rule is one such
mathematical rule that gives an invariant distance even though separate
observers' definition of x and y might be different. The spacetime
interval extends this concept to spacetime.
There is one useful interpretation of the spacetime interval. The spacetime interval measured along the timelike worldline of an observer is equal to the proper time of that observer, i.e., what that observer's clock records. Since the spacetime interval is invariant, all observers will agree on the proper time intervals along a worldline, even though they may disagree upon the intervals of space and time along that worldline.
Since proper time is a distance in a geometry, it differs along different
paths in that geometry, just as the spatial distance along a path through
space depends upon the path.
Consider two events in spacetime, separated by a timelike interval,
and suppose that several clocks travel along separate worldlines
between those two events. Each clock will measure a different proper time
along its worldline, but the clock which travels along an inertial path
(a straight line in spacetime) will register the longest elapsed
proper time. This property of spacetime is in some respects opposite to
the familiar Euclidean geometry, in which the straight line is the shortest
distance between two points.
In general relativity we will invert this relationship and use the maximization of the spacetime interval to define inertial motion. Spacetimes in general relativity are curved, which means that inertial motion is not necessarily linear in the special-relativistic sense, but we can still use the maximization of the proper time interval to determine which path through the spacetime represents inertial motion. But the details of this are a topic for general relativity.
Is density relative?
Yes. Density is mass per unit volume, so if a block of some density is moving past you, length contraction would decrease its volume while its relativistic mass (energy) would increase by the boost factor; the measured density therefore increases.
Speeds are indeed relative, so you must compute the change in speed from the frame of the spaceship. In this case you wish to accelerate from 0.7c to 0.8c, both measured with respect to the Earth. By the relativistic velocity addition law, the velocity change is not 0.1c in the frame of the spaceship, but is 0.23c. In the spaceship frame, the energy required is given by the corresponding boost factor, which in this case is about 1.03. The additional energy needed is thus about 0.03mc2. The change in boost factor as measured from the Earth frame is Gamma(0.8c) - Gamma(0.7c) = 1.67 - 1.40 = 0.27, so the additional energy required will be 0.27mc2, as measured in Earth's frame. Why the difference? The straightforward answer is that a measurement of energy is relative. However, in either frame the amount of energy required to accelerate by the same factor will increase as the ship's speed increases, because of the dependence of the boost factor upon the square of the speed. Accelerating from 0.8c to 0.9c in the Earth's frame corresponds to an increase of about 0.36c in the spaceship's frame. In the spaceship frame, the energy required for the increase is about 0.07mc2, while in the Earth frame it is about 0.63mc2; in both cases substantially more energy is required to go from 0.8c to 0.9c than to go from 0.7c to 0.8c.
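The bookkeeping above is easy to reproduce. Here is an illustrative Python sketch (not from the text) that uses kinetic energy = (gamma - 1) m c^2 and the velocity-addition law to recover the quoted numbers.

```python
import math

# Sketch: kinetic energy in units of m*c^2 is (gamma - 1), so the energy needed
# for a speed change is the change in the boost factor gamma, evaluated in
# whichever frame you choose.

def gamma(beta):
    """Boost factor for a speed beta = v/c."""
    return 1.0 / math.sqrt(1.0 - beta**2)

def relative_speed(b1, b2):
    """Speed of b2 as measured in the frame moving at b1 (collinear motion)."""
    return (b2 - b1) / (1.0 - b1 * b2)

# Earth frame: accelerate from 0.7c to 0.8c
print(gamma(0.8) - gamma(0.7))              # ~0.27 (in units of m c^2)
# Ship frame: the same maneuver is a boost from rest to ~0.23c
print(relative_speed(0.7, 0.8))             # ~0.227
print(gamma(relative_speed(0.7, 0.8)) - 1)  # ~0.027

# And from 0.8c to 0.9c
print(gamma(0.9) - gamma(0.8))              # ~0.63
print(relative_speed(0.8, 0.9))             # ~0.357
print(gamma(relative_speed(0.8, 0.9)) - 1)  # ~0.07
```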
Yes, but only if your frame of reference is an accelerated one, and/or you are deep inside a gravitational field. Gravitational time dilation then causes your clock to run slow as you see it compared to a clock up at high altitude.
It is important to realize that it really is the case that each of these people measures the other's clock as running slowly. So who is really running slowly? So long as both remain within their fixed inertial reference frames, the answer is relative and always the other guy. Ah, you say, what if they meet up again sometime? But they will never meet again if they remain in their inertial reference frames: inertial motion means linear, constant velocity travel, and straight lines meet only once. But what if they accelerate, or what if space is curved so they do meet again? In that case you can always compute how much proper time will have elapsed for each person. The result when acceleration must be considered is no longer as simple as the comparison between two inertial frames, but there need not be reciprocity between general accelerated (or curved spacetime) frames. Hence one or the other observer will be younger, and the answer is unequivocal.
Time dilation and length contraction are not just optical illusions, but neither do they represent a physical contraction. These effects are the result of a measurement from a given inertial frame that is performed on a body moving with respect to that frame. We assume that the measurements always take into account the finite travel time of light. Consider two observers moving relative to one another. You have no difficulty with the idea of their velocities being relative - each thinks the other is "really moving." In SR, time intervals and space intervals are also relative. You don't shrink or see your own clock run slow. The other observer sees your clocks slow and meter sticks contracted from his frame. Similarly you will observe his clocks slow and meter sticks short from your frame. The time dilation and length contraction are inherent properties of the way measurements must be performed in spacetime.
Note: It is a complicated question to ponder how things would appear visually if they were moving close to the speed of light. It is necessary to trace light rays carefully to determine what would actually impinge on your retina.
Light is certainly subject to relativity. Light has zero rest mass, and relativity says that anything with zero rest mass always has to go at the speed of light along lightlike trajectories in spacetime. If you want to be anthropomorphic about it, a photon doesn't experience the passage of time. To it, it is everywhere at once.
No. The distance to Alpha Cen is length contracted, so even though the clock is running normally it doesn't take so long to get there.
In the frame of the traveler, the distance between the Earth and Alpha Centauri is length contracted. As a concrete example, suppose the spaceship is moving with a boost factor of 10 relative to the Earth. The Earth-based observer would say that the spaceship clock is running slow by a factor of 10, so it takes only 0.4 years on the spaceship clock for it to reach Alpha Cen. The spaceship, on the other hand, observes its clock to run normally but sees the Earth and Alpha Cen whipping past at a boost factor of 10, causing the distance between them to be length contracted by such an amount that it takes 0.4 years between the time the Earth passes the window and the time Alpha Cen appears. Both observers agree on the amount of time required for the journey in the frame of the traveler, but in one case the effect is due to time dilation and in the other, length contraction.
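Here is an illustrative sketch (not from the text) of that example, taking the Earth-Alpha Centauri distance to be roughly 4.3 light-years (an approximate value supplied here) and the boost factor to be 10, as assumed above; both accountings give the same traveler time.

```python
import math

# Sketch of the Alpha Centauri example: distance ~4.3 light-years, boost factor 10.

distance_ly = 4.3
boost_factor = 10.0
beta = math.sqrt(1.0 - 1.0 / boost_factor**2)   # speed as a fraction of c, ~0.995

earth_time = distance_ly / beta                 # coordinate time in Earth's frame, ~4.3 yr

# Earth's view: the ship clock runs slow by the boost factor.
traveler_time_dilation = earth_time / boost_factor               # ~0.43 yr

# Traveler's view: the Earth-Alpha Cen distance is contracted by the boost factor.
traveler_time_contraction = (distance_ly / boost_factor) / beta  # ~0.43 yr

print(traveler_time_dilation, traveler_time_contraction)         # the two accounts agree
```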
Newton's Laws were formulated for inertial frames, but they work in noninertial frames so long as one adds terms to account for the accelerated frame (e.g., "fictitious" or inertial forces). The same is true for relativity. Length contraction and time dilation also occur in noninertial frames; things are just a bit more complicated. And noninertial reference frames are not equivalent to inertial reference frames, so the principle of reciprocity does not apply. (E.g. in the Twin Paradox the traveling twin is noninertial and is younger at the end of the journey; there is no symmetry between the twins.) General relativity provides the formalism to deal with all frames, including inertial, noninertial, and even curved spacetimes.
If light doesn't move through a medium (the ether) how does it have a measurable speed?
Set up a bulb. Pace off a distance. Set up a receiver. Set up sophisticated timing apparatus. You can measure the time it takes light to travel from the bulb to the receiver. Divide the distance by the time. There is your speed.
Anything that has zero rest mass must travel at the speed of light. Photons carry the electromagnetic force and have zero rest mass. Now what is so special about this maximum speed? In the relativistic point of view, time and space intervals derive their meaning from interactions and physical processes which involve forces - for example, two particles interacting electromagnetically by the exchange of a photon. This could happen at some maximum finite speed (which must be the same for all frames by the relativity principle), or at an infinite speed. In our universe it happens at a finite speed.
Let's speculate for a moment. Would it be possible to have a universe where interactions took place at an infinite speed? Every particle in the universe would immediately interact with every other particle. Everything would have to be fixed into some grand equilibrium. It doesn't seem to me that it would then be possible to have such a universe evolve in time.
If the speed of light is the same in all frames, why does light travel in water at a speed less than c?
The speed of light in vacuum is the same in all frames. When light travels through water it is interacting with the water. These interactions reduce the net speed through the water.
The phase velocity refers to the velocity one gets by multiplying the wavelength times the frequency of a wave. It is the propagation speed of a particular wavelength component of a general wave. However, the rate at which the energy in a wave pulse propagates is the group velocity. This is also the speed at which information can be carried by a wave pulse. The group velocity is always less than c. Certain velocities can exceed c but nothing physical (information, energy etc.) can be transported faster than c.
In answer to the second part of the question, all of physical law is grounded in the principles of relativity as postulated by Einstein. It may be difficult to accept or understand, but modern technology (computers, video, internet) is based on relativity.
Is there a relationship between F=ma and E=mc2?
One is an equation for force (Newtonian) the other for Energy (Einsteinian). They have different units: Force times distance is Energy. Even in Special Relativity you need a force to produce an acceleration. To apply a force over some distance requires energy (energy is the ability to do work). The equivalent Newtonian equation for the kinetic energy (energy of motion) is E = 1/2 m v2.
Einstein's equation as written above applies only to particles with rest mass. The photon has no rest mass, but it does have momentum and inertia and other properties normally associated with mass. Rest mass is an intrinsic property of certain particles, their "energy of being" if you will, the minimum invariant energy that they must have to exist. The photon simply requires no minimum energy to exist. In exchange, it must travel at the speed of light; a photon that stops moving ceases to exist.
Not that much instantaneous acceleration is required actually, just modest acceleration for a long period of time. At one gee of acceleration you could get to the Andromeda galaxy in only about 20 years as you would measure time.
If two twins travel in opposite directions for 5 light years and then return will they be the same age?
If their journeys are symmetrical, i.e., they both go at the same speeds (relative to the Earth) and for the same time (as measured by the Earth), then yes they will be the same age (and younger compared to a third "twin" who stayed home).
Noninertial frames can be analyzed in special relativity; they just require more complicated equations than the ones we have presented in this book. A noninertial frame can be described in special relativity using the proper mathematical forms for relativistic acceleration. Special relativity assumes the existence of inertial frames to which these measurements are related. General relativity tells us what constitutes inertial motion in general, as well as in the presence of gravity.
Not as I see it. Ptolemaic/Aristotelian cosmology assumed the presence of a universal standard of rest, namely the Earth. Physics was different in the celestial and Earthly realms. These ideas are fundamentally counter to the relativity principle.
Andy and Betty will have the same elapsed time (their journeys were symmetric) and they will both have less elapsed time than on the Earth.
This is a variation on the twin paradox.
If you travel in a circle (at least in SR) you are accelerated, so you are not in one inertial frame the whole time. Strict reciprocity only holds between two inertial frames. When there is acceleration all observers can agree on who was "really accelerated." Both observers agree that the person who did the circular journey accelerated.
In any situation one can draw the world lines for travelers and compute the space-time interval (proper time) along their worldlines between two events (e.g. departure and reunion) and this will tell you who has aged least between those two events.
The important point in this thought experiment, as stated, was that the blast hits both ends of the tunnel simultaneously - but this is true only in the tunnel rest frame. Remember that simultaneity is relative. In the tunnel frame the train is entirely within the tunnel when the blast hits. But an observer in the train frame sees events differently - the blast hits the front end of the tunnel before it hits the back end. Because of this different order of events, the train still escapes destruction. But the bomb was the same distance from the front and back of the tunnel, you say. Yes, but, to the train the whole system is moving; the tunnel and the bomb are passing by at high speed. And remember that the speed of light is constant in all frames (I am assuming that it is the light pulse we are talking about here). So the situation is the same as the thought experiment in which a light bulb goes off in the center of the moving train. The pulse hits the front and back of the train simultaneously in the train frame but at two different times as measured by an observer in the ground frame.
What does it mean to be completely incompressible? It means that if you push on it, it doesn't give at all. If you had a completely incompressible rod, you could push at one end and have it move at the other end instantaneously. So is this a way to get around the finite speed of light? Make a big long rod one light year long and push on one end. If the rod cannot be compressed, then the whole thing will move at once, including the end a light year away. You have sent an effectively instantaneous signal across a distance of one light year.
The problem is that the structural properties of something are determined by the intermolecular forces which are electromagnetic in nature. So when you push on one end of a rod, you apply forces to the molecules at that end, which in turn transmit forces on down the rod. How compressible a rod (or a train) is, is fundamentally limited by the need to transmit the force down its length, which must be limited by the speed of light. Real materials cannot evade this limit.
In the train case there is no way for the back end of the train to "know" about the front end being stopped by the blockage, except at the speed of light. If you suddenly stop the locomotive the rear of the train continues to come forward until it encounters the backward traveling signal (compression wave, shock wave, whatever). It can't avoid being crushed in this scenario.
In special relativity space and time both transform between one inertial frame and another. Furthermore, they are "mixed" because the amount of length contraction or time dilation depends on a velocity, and velocity is a change in time over a change in space. It is convenient, therefore, to adopt a convention in which the units of time and space are the same, e.g. tc and x or t and x/c. Then space and time have the same footing and a light beam will correspond to a line for which delta x = c delta t, i.e. a 45 degree line. This is only for convenience, however, and has no special physical significance. The spacetime units could be meters and seconds, but in that case a light beam would be so close to horizontal on the graph that it would barely be possible to distinguish between spacelike and timelike.
Well when you think about it, what does it mean to say that the space and time axes in YOUR frame are perpendicular? Spacetime diagrams are sometimes useful for representing certain ideas, but their utility is limited by the fact that we are trying to represent graphically the properties of the geometry of Minkowski space on a flat sheet of paper. The essential point is this: all inertial reference frames are equivalent, so if we draw a spacetime diagram to illustrate our frame, we can draw another one to illustrate the moving observer's frame, and each is a valid representation of spacetime.
What does a spacetime diagram actually mean?
It is just a way of graphically illustrating relations between points in space and time (events). In this sense it is like any other graph that might illustrate the behavior of some relationship (mathematical function).
Think about this: if you were driving along you would think that your steering wheel was always in the same spot (right in front of you) for a long period of time. Someone watching on the side of the road says your steering wheel is not at the same spot as time goes along. So in your frame the wheel is always at the same spatial location, and in the other frame it isn't. There is nothing tricky here: the two frames are moving relative to one another. In relativity the strange thing is that something similar holds for things that are at different spatial locations at the same time in one frame (simultaneous events). In another frame these same points are at different spatial locations and different time locations.
A vertical world line would never get out to larger radius. Those world lines tipped toward vertical take longer to get out than those directed at 45 degrees.
In physics spacetime diagrams are very important for mapping out the interactions of particles. In cosmology we can use spacetime diagrams to determine which parts of the universe are "causally connected" - that is, given any spot in the universe, what was in that spot's past? Our backward light cone gives us the picture we have of the universe. We call this the "observable universe."
Why isn't it possible to go faster than light? Why can't you just keep accelerating?
If you keep accelerating (which you can do) you go faster and faster in the sense of having greater gamma factors and more and more energy. You see the universe length-contract more and more. But you never go faster than c as you or anyone else measures it. Things interact by the exchange of massless particles. Massless particles move at the speed of light. These interactions are more fundamental than our definitions of time and space intervals. In an axiomatic sense, the second postulate of relativity means you can't go faster than light. The principle of causality would be violated if things could go faster than light. Interestingly, these things are true even in Galilean relativity, except that in Galilean relativity the limiting speed is infinite.
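One way to see "you can keep accelerating but never pass c" quantitatively: for constant proper acceleration a, special relativity gives the speed after proper time tau as v = c tanh(a tau / c), and the hyperbolic tangent never reaches 1. The sketch below illustrates that relation (it is my illustration, not something from the text), using one gee as the acceleration.

```python
import math

# For constant proper acceleration a, the speed after proper time tau is
# v = c * tanh(a * tau / c).  tanh never reaches 1, so v never reaches c.

c = 299_792_458.0        # m/s
a = 9.81                 # one gee, m/s^2
year = 365.25 * 24 * 3600.0

for years in (1, 2, 5, 10, 20):
    tau = years * year
    v = c * math.tanh(a * tau / c)
    print(years, v / c)  # fraction of c grows toward 1 but never reaches it
```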
Have we completely and totally ruled out the possibility of faster than light travel? Is it even theoretically possible in our universe?
I have to go with "yes" on this (the ruling out part). "Going faster than light" is almost a semantic error rather than a valid physical idea. It is equivalent to asking (in Galilean relativity) "Is it possible to go faster than infinite speed?" The speed of massless particles is the ultimate speed, but instead of thinking of that as a "velocity barrier" one might do better thinking about that as a way that time and space are defined and related. The speed of light is logically equivalent to "infinite speed" as far as massive particles are concerned.
Why is light the ultimate speed limit? If anything with mass travels less than the speed of light, what about antimatter?
The ultimate speed limit would either be some finite value, call it c, or infinity. If we adopt the relativity postulate that all inertial frames are equivalent, then the ultimate speed limit must be the same for all inertial frames. If it were infinity, then we would have Galilean relativity. If it is equal to some finite value then we have Einstein relativity. Massless particles move at the maximum speed; massive particles move at a lower speed. Since light is massless, it moves at the maximum speed.
Antimatter still has positive mass and travels slower than light. Experiments have even been done to confirm that antimatter falls in a gravitational field.
First, you can't go faster than light. Worldlines of material particles must be timelike -- this is the way that reality is constructed. The basis for the idea of traveling in time by going faster than light can be seen in the spacetime diagram. If you draw a spacelike arrow you can easily make it go back in time. You can even make it go back in time by one observer's coordinate t, and forward in time according to another observer's coordinates. Wouldn't that be strange?
In SciFi the "warp speed" is often superluminal. Let's leave that and concentrate instead on the more realistic idea that a space-traveling civilization could travel close to the speed of light. This is a topic treated less often by scifi writers but a very interesting one. Some examples are "The Forever War" by Joe Haldemann, and "Childhood's End" by Arthur C. Clarke.
Human lifetimes are sufficiently short that travelers would essentially leave their generation and society behind forever. What would the returning traveler experience? What if Ben Franklin had headed out on a journey and was just now returning? How many people would be willing to go on a journey knowing that not only would everyone they know be long dead when they return, but their society would probably be gone as well? This suggests that perhaps most journeys would be one-way. People would have to establish new colonies and stay there.
On the lighter side, if you left your money in an interest bearing account, when you returned you'd be pretty rich (assuming that the civilization you returned to still used money)!
Yes, in a sense, classic Sci-Fi time travel is possible but only if you go forward. And yes you would get time dilation simply going around the Earth. (Note that this will be an accelerated frame.) We shall see (Chapter 8, 9) that the best way to get a large dilation factor is to sit close to a black hole for a while. Gravitational time dilation could quickly propel you into the Earth's distant future. What would you find then? A planet where apes evolved from men??
Is time travel a serious idea and do physicists devote any time to studying it?
Well, yes and no. Time travel is studied as a way to investigate the implications of general relativity and possibly quantum gravity, but nobody is actually looking for a time machine. They are looking for insights into the theory under the presumption that time travel isn't possible, so if one seems to appear in a given scenario, what is wrong?
If time travel existed what (when) would you like to visit?
Most time travel stories focus on visiting recent past events in human history (e.g., stopping John Wilkes Booth). But I would like to go back and see how the solar system formed, galaxy formed, universe formed....all your cosmological questions could be answered.
If someday someone invents a time machine and went back in time would we notice anything different? Would we simply cease to exist?
Many sci-fi stories have been written around this idea. It seems to me that if someone did go into the past and "changed the future" then we wouldn't notice it here and now. All our memories in the altered time line would be consistent with the altered timeline and the previous timeline would cease to have ever existed. This happens all the time in Star Trek but the characters always seem to sense that "something is wrong" with the altered timeline - this seems bogus to me.
Although traveling backward in time seems impossible, is traveling forward in time any more likely?
Well you are "traveling forward" in time right now aren't you? You could use relativistic time dilation (high speeds or strong gravity) to travel forward in time relative to (say) the Earth's frame. In that way you could time travel forward 100 years in a day. But you couldn't come back to this time to report your findings.
Can you slow the growth of cancer by traveling close to the speed of light?
Yes, as measured by another frame, but no as measured from the point of view of the person with the cancer. All life processes proceed as "normal" according to one's proper time.
Let's return to Einstein's famous equation, but now write it as E^2 = (pc)^2 + (m0c^2)^2. The term p refers to the relativistic momentum. Particles with positive rest mass travel slower than light. Light has zero rest mass, and travels at the speed of light. For light E = pc (light does have momentum). Now if tachyons were real, they would have rest-mass squared less than 0 because they always have v > c. This would mean that E^2 - (pc)^2 < 0. So if p goes up E must go down, meaning that a zero energy tachyon moves infinitely fast. You would have trouble keeping them around to observe.
If tachyons existed they could produce observable effects through interactions with matter. No such effects have ever been seen in experiments. There are also profound difficulties with the existence of tachyons in quantum field theory. The vacuum tends to break down into tachyon-antitachyon pairs. So it seems exceedingly unlikely that tachyons exist. If you are going to suggest that tachyons exist but don't interact in any way with the universe or its contents, then we are going to get into a debate about the meaning of existence. I have some pink elephants with equally valid claims.
It's rather anthropomorphic to talk of how a photon "perceives" the universe, but.... Since the spacetime interval along a lightlike trajectory is zero, there is zero proper time. The photon does not experience the passage of time (or space for that matter). The entire worldline of the photon simply is, from its beginning to its end (e.g. from the emission of the photon to its absorption).
If it takes less time to travel somewhere the faster you go, why does it take light 4 years to go 4 lightyears?
It takes light 4 years to go 4 lightyears as you measure it. Light does it in zero proper time. (The spacetime interval along a light beam is zero.)
If particles are moving at the speed of light you would need to use relativistic equations to describe their properties, but the particles in your body aren't moving near the speed of light relative to you. The subatomic scale is described by quantum mechanics rather than classical mechanics, so it isn't really accurate to think of the electron whipping around the nucleus at high speed (to take one example). The atoms in your body are pretty well described by nonrelativistic quantum mechanics. Now, as a second point, remember one would never personally experience a time dilation in the sense of thinking one's own time is running slow. One sees time dilation in another frame's clock if that frame is moving relative to yours.
How likely is human colonization of space? Will we ever be able to travel around at high speed through the galaxy?
It's difficult to assess how likely is human colonization of space, but it seems pretty unlikely in the immediate future. The only other planet in the solar system that is remotely suitable for human habitation is Mars. Humans would not be able to live in the open on Mars, but would require spacesuits and habitable bubbles. Water would have to be extracted from permafrost. It would be a hard life even with technology, so one might ask what is the point, except for scientific exploration, or to relieve population pressures.
Colonization of extrasolar planets would be even more difficult. At the present time, we do not know of any candidates. Several generations would probably have to spend their lives on spaceships to search for and arrive at somewhat habitable planets. Where all the energy to support the space travelers would come from is not obvious; they couldn't tow a star behind them.
Energy is a major consideration of limitations on traveling at "high speed" through the galaxy. It depends, of course, on what is meant by "high speed," but presumably the expression refers to near-lightspeed travel. The amount of energy to attain such velocities, for any spaceship capable of supporting humans for extended periods (perhaps millennia), is staggering. Even at a speed of 0.99c, the distances and times involved are huge. The Galaxy is some 100,000 light years in diameter. Just to get to the center from the location of the Sun would require about 30,000 years in the Galaxy's rest frame. The travelers' time would be relativistically dilated, of course, and to them such a journey would take "only" about 4300 years, but again one might ask what is the point? There are presumably better ways to deal with population issues, though whether humanity has the self-discipline to do so is another question. Colonization could presumably preserve our species, or some descendant of it, after the Earth becomes uninhabitable due to the aging of the Sun, but ultimately the entropy of the universe itself will be too high for chemistry to occur, at which point life will become impossible anywhere. In any case, the Earth should be habitable for at least another billion years; it seems hard to imagine that any species, even an intelligent one, would last that long.
If one is still determined to undertake a program of galactic colonization, one can exchange higher speed for longer time intervals. Then the number of generations of space travelers becomes so large that one might wonder whether they could even maintain a civilization. Keep in mind that human civilization as we usually define it has existed for barely 6,000 years; we do not even know for sure that it can be maintained on Earth for tens or hundreds of thousands of years, much less on a wandering spaceship.
What are phasers and photon torpedoes, and could they really exist?
Phasers are props for certain science fiction television shows and movies. Their method of operation is generally left vague by the writers, or else some pseudoscientific mumbo-jumbo is invoked, so whether they could exist is an open question. Photon torpedoes (in Star Trek) appear to be matter-antimatter bombs. The explosion of such a weapon would produce very high-energy photons (generally at least X-ray, but gamma ray is better) for destructive purposes.
Now as to reality: some American weapons laboratories have experimented with devices such as X-ray lasers as a kind of photon cannon. There are many technical problems with these weapons, not the least of which is obtaining and maintaining enough energy to send a sufficiently powerful beam over a great enough distance. The high energy of the photons alone is generally inadequate; there also has to be a lot of them. Another problem is that photon weapons generally dissipate most of their energy into making their path in air very hot. They were intended to be space-based weapons but this created even more acute problems with energy generation.
Returning to science fiction, the weapons would be deployed in a vacuum, removing the energy loss due to intervening molecules, but creating and transporting a large quantity of antimatter without destroying the ship would be a technological challenge, to say the least. More conventional sources of photons, such as lasers, require such a huge input of energy, only a tiny fraction of which is converted into photon energy, that their practical application in galactic weaponry is questionable.
Yeah. Too bad they blew that one, since that's the only physics they got wrong in that movie.
Copyright © 2003 John F. Hawley | http://www.astro.virginia.edu/~jh8h/Foundations/Foundations_1/quest7.html | 13 |
120 | Density (symbol: ρ - Greek: rho) is a measure of mass per volume. The average density of an object equals its total mass divided by its total volume. An object made from a comparatively dense material (such as iron) will have less volume than an object of equal mass made from some less dense substance (such as water). The SI unit of density is the kilogram per cubic metre (kg/m3). In symbols, ρ = m / V, where:
- ρ is the object's density (measured in kilograms per cubic metre)
- m is the object's total mass (measured in kilograms)
- V is the object's total volume (measured in cubic metres)
Under specified conditions of temperature and pressure, the density of a fluid is defined as described above. However, the density of a solid material can be different, depending on exactly how it is defined. Take sand for example. If you gently fill a container with sand and divide the mass of sand by the container volume, you get a value termed loose bulk density. If you take this same container and tap on it repeatedly, allowing the sand to settle and pack together, and then repeat the calculation, you get a value termed tapped or packed bulk density. Tapped bulk density is always greater than or equal to loose bulk density. In both types of bulk density, some of the volume is taken up by the spaces between the grains of sand.
Also, in terms of candy making, density is affected by the melting and cooling processes. Loose granular sugar, like sand, contains a lot of air and is not tightly packed, but when it has melted and starts to boil, the sugar loses its granularity and entrained air and becomes a fluid. When you mold it to make a smaller, compacted shape, the syrup tightens up and loses more air. As it cools, it contracts and gains moisture, making the already heavy candy even more dense.
A more theoretical definition is also available. Density can be calculated from crystallographic information and molar mass; for a unit cell with mutually perpendicular edges the relation is ρ = N M / (L · a · b · c), where:
- M is the molar mass
- N is the number of atoms (or formula units) in a unit cell
- L is Loschmidt's or Avogadro's number
- a, b, c are the lattice parameters
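As an illustration of this formula (the example substance and its numbers are textbook values supplied here, not taken from the article), the sketch below computes the density of rock salt from its cubic unit cell.

```python
# Illustrative use of the crystallographic density formula with rock salt (NaCl).

AVOGADRO = 6.022e23          # 1/mol (L in the notation above)

def crystal_density(n_formula_units, molar_mass_kg, a, b, c):
    """Density of a crystal with an orthogonal unit cell of edge lengths a, b, c (metres)."""
    cell_volume = a * b * c
    return n_formula_units * molar_mass_kg / (AVOGADRO * cell_volume)

a = 5.64e-10                 # NaCl cubic lattice parameter, about 5.64 angstroms
rho = crystal_density(n_formula_units=4, molar_mass_kg=58.44e-3, a=a, b=a, c=a)
print(rho)                   # ~2.16e3 kg/m^3, close to the measured density of halite
```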
The density varies with temperature, T. Since a temperature change ΔT expands the volume by a factor of (1 + C ΔT), the density obeys approximately ρ(T) = ρ0 / (1 + C ΔT), where ρ0 is the density at the reference temperature and C is the coefficient of cubic expansion.
Experimentally, density can be found by measuring the dry weight (Wd), the wet weight (Ww), and the submersed weight (Ws), usually in water.
Density in terms of the SI base units is expressed in kilograms per cubic meter (kg/m3). Other units fully within the SI include grams per cubic centimeter (g/cm3) and megagrams per cubic metre (Mg/m3). Since both the litre and the tonne or metric ton are also acceptable for use with the SI, a wide variety of units such as kilograms per litre (kg/L) are also used. In Imperial or U.S. customary units, the units of density include pounds per cubic foot (lb/ft³), pounds per cubic yard (lb/yd³), pounds per cubic inch (lb/in³), ounces per cubic inch (oz/in³), pounds per gallon (for U.S. or imperial gallons) (lb/gal), pounds per U.S. bushel (lb/bu), in some engineering calculations slugs per cubic foot, and other less common units.
The maximum density of pure water at a pressure of one standard atmosphere is 999.972 kg/m3; this occurs at a temperature of about 3.98 °C (277.13 K).
From 1901 to 1964, a litre was defined as exactly the volume of 1 kg of water at maximum density, and the maximum density of pure water was 1.000 000 kg/L (now 0.999 972 kg/L). However, while that definition of the litre was in effect, just as it is now, the maximum density of pure water was 0.999 972 kg/dm3. During that period students had to learn the esoteric fact that a cubic centimeter and a milliliter were slightly different volumes, with 1 mL = 1.000 028 cm³. (Often stated as 1.000 027 cm³ in earlier literature).
Density will determine the "order" in which each substance will appear in a bottle. For example, if substance A has a density of 0.64 g/cm3 and substance B has a density of 0.84 g/cm3, substance A will be above substance B in a container because its density is lower. One example of this is oil and water, where the less dense oil floats above the water.
Measurement of Density
A common device for measuring fluid density is a pycnometer. A device for measuring absolute density of a solid is a gas pycnometer.
For a rectangular solid, the formula Mass / (Length x Width x Height) can be used. For an irregularly shaped solid, fluid displacement can be used in place of L x W x H to determine the volume.
Relative density (known as specific gravity when water is the referent) is a measure of the density of a material. It is dimensionless, equal to the density of the material divided by some reference density (most often the density of water, but sometimes air when comparing gases): RD = ρ(substance) / ρ(reference), where:
- ρ denotes density.
Since water's density is 1.0 × 10^3 kg/m3 in SI units, the relative density of a material is approximately the density of the material measured in kg/m3 divided by 1000 (the density of water). There are no units of measurement.
Water's density can also be measured as nearly one gram per cubic centimeter (at maximum density) in non-SI units. The relative density therefore has nearly the same value as density of the material expressed in grams per cubic centimeter, but without any units of measurement.
Relative density or specific gravity is often an ambiguous term. This quantity is often stated for a certain temperature. Sometimes when this is done, it is a comparison of the density of the commodity being measured at that temperature, with the density of water at the same temperature. But they are also often compared to water at a different temperature.
Relative density is often expressed in forms similar to this:
- relative density: RD (20 °C / 4 °C), or specific gravity: SG (20 °C / 4 °C)
In this notation, the first temperature (conventionally written as a superscript) is the temperature at which the density of the material is measured, and the second (written as a subscript) is the temperature of the water to which it is compared.
Density of water
[Table: density of water at 1 atm (101.325 kPa, 14.7 psi)]
Water is nearly incompressible. But it does compress a little; it takes pressures over about 400 kPa or 4 atmospheres before water can reach a density of 1,000.000 kg/m3 at any temperature.
Relative density is often used by geologists and mineralogists to help determine the mineral content of a rock or other sample. Gemologists use it as an aid in the identification of gemstones. The reason that relative density is measured in terms of the density of water is that this is the easiest way to measure it in the field. Basically, density is defined as the mass of a sample divided by its volume. With an irregularly shaped rock, the volume can be very difficult to measure accurately. One way is to put it in a water-filled graduated cylinder and see how much water it displaces. Relative density is more easily, and perhaps more accurately, measured without measuring volume: simply suspend the sample from a spring scale and weigh it under water. The relative density is then given by G = W / (W − F), where:
- G is the relative density,
- W is the weight of the sample (measured in pounds-force, newtons, or some other unit of force),
- F is the force, measured in the same units, while the sample was submerged.
Note that with this technique it is difficult to measure relative densities less than one, because in order to do so, the sign of F must change, requiring the measurement of the downward force needed to keep the sample underwater.
Another practical method uses three measurements. The mineral sample is weighed dry. Then a container filled to the brim with water is weighed, and weighed again with the sample immersed, after the displaced water has overflowed and been removed. Subtracting the last reading from the sum of the first two readings gives the weight of the displaced water. The relative density result is the dry sample weight divided by that of the displaced water. This method works with scales that can't easily accommodate a suspended sample, and also allows for measurement of samples that are less dense than water. Surface tension of the water may keep a significant amount of water from overflowing, which is especially problematic for small objects being immersed. A workaround would be to use a water container with as small a mouth as possible.
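The two procedures just described translate directly into arithmetic. The sketch below is an illustration with made-up scale readings, not data from the article.

```python
# Illustrative sketch of the two field methods for relative density (specific gravity).

def sg_suspension(weight_dry, weight_submerged):
    """Spring-scale method: G = W / (W - F), with F the submerged reading."""
    return weight_dry / (weight_dry - weight_submerged)

def sg_overflow(weight_dry, weight_full_container, weight_container_with_sample):
    """Three-weighing method: displaced water = (Wdry + Wfull) - Wwith_sample."""
    displaced_water = (weight_dry + weight_full_container) - weight_container_with_sample
    return weight_dry / displaced_water

print(sg_suspension(weight_dry=5.40, weight_submerged=3.40))          # 2.7, a granite-like rock
print(sg_overflow(weight_dry=5.40, weight_full_container=20.00,
                  weight_container_with_sample=23.40))                # also 2.7 for consistent readings
```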
Specific Gravity of water
The specific gravity is defined as the ratio of the specific weight of the material to the specific weight of distilled water (S = specific weight of the material / specific weight of water). This implies that if the specific gravity is approximately equal to 1.000, then the specific weight of the material is close to the specific weight of water. If the specific gravity is large, the specific weight of the material is much larger than the specific weight of water, and if the specific gravity is small, the specific weight of the material is much smaller than the specific weight of water. The specific gravity of a gas is generally defined by comparison with air at a temperature of 20 degrees Celsius and a pressure of 101.325 kPa absolute, where the density of air is 1.205 kg/m3. Specific gravity is unitless.
Specific gravity of biogas
The density of biogas with a 50 percent methane proportion is about 1.227 kg/m3; dividing by the density of air (1.205 kg/m3) gives a specific gravity of roughly 1.02.
The kidneys and specific gravity
The role of the kidneys in the human is to aid the body in its riddance of bodily toxins. The body effectively excretes these toxins via urination, and the role of the kidney is to concentrate as many toxins as it can into the least amount of urine to provide for a more efficient emission. The specific gravity of urine is the measurement of density of these minerals and toxins in the urine in relation to the density of the water; basically, specific gravity is measuring the concentration of solutes in the solution.
The body generates countless toxins every moment. In the kidneys, these toxins are dissolved in water so the body can filter them out through urination. A healthy kidney will use fewer fluids to eliminate these toxins to promote fluid concentration. In an unhealthy kidney, however, more water might be required to dissolve these toxins.
Such is the case in a person with renal failure. A person with this problem would drink more water to account for the excess water loss and his specific gravity would be lower. If the kidneys fail over an extended period of time, more water would be needed in order to concentrate the same amount of urine. Toxin levels in the body would rise, and ultimately, one could not keep up with the amount of water necessary to excrete the toxins. The rising toxin levels in the body do not increase the specific gravity in the urine because these toxins are not manifesting themselves in the urine which is still heavily diluted. The urine will have the same fixed gravity regardless of water intake.
Lowered specific gravity can also occur in diabetics that are lacking an anti-diuretic hormone. This hormone generally sends an appropriate amount of fluids into the bloodstream, and less water is available for urination. A lack of ADH would increase the water volume in the kidneys. A person with this issue could urinate up to fifteen or twenty liters a day with a low specific gravity. Another occurrence resulting in low specific gravity is when the kidney tubules are damaged and can no longer absorb water. Such an instance would also result in a higher water volume in urine.
A high specific gravity is most often indicative of dehydration. If a person has gone without water for a day, his water level in his blood is lowered, and his brain signals the release of an anti-diuretic hormone which redirects water from urine into the bloodstream. Naturally, a lesser volume of liquid provided for urination with the same amount of toxins would result in a higher specific gravity—a higher density of the solutes. There are also other instances where the specific gravity might be raised. When the renal blood pressure is lowered, the artery must compensate with other fluids. Water is reabsorbed into the bloodstream to balance out the volume of blood and the volume of water in urine is subsequently lowered. As water is also used to control body temperature, when the body temperature goes up, less water is in the kidneys as it is used to aid in perspiration.
When testing for specific gravity, one should be aware that enzymes or dyes used in diagnostic tests can increase specific gravity. A pattern presented throughout this discussion indicates that when urine volume is increased, the specific gravity is lowered. This can be understood by noting that when there is an identical amount of a solute in two solutions, the solution with the greater volume of liquid will be less dense than that of the lesser liquid. As stated before, specific gravity measures the concentration of the solute in the solution, so the solution of greater volume has a lower specific gravity.
Density of substances
Perhaps the highest density known is reached in neutron star matter (neutronium). The singularity at the centre of a black hole, according to general relativity, does not have any volume, so its density is undefined.
The densest naturally occurring substance on Earth appears to be iridium, at about 22650 kg/m3. However, because this calculation requires a strong theoretical basis, and the difference between iridium and osmium is so small, definitively stating that one or the other is more dense is not possible at this time.
A table of densities of various substances:
Substance | Density in kg/m3 | Particles per cubic metre
---|---|---
Gold (0 °C) | 19300 | 5.90 × 10^28
Water (25 °C) | 998 | 3.34 × 10^28
Ice (0 °C) | 917 | 3.07 × 10^28
Ethyl alcohol | 790 | 1.03 × 10^28
Liquid hydrogen | 68 | 4.06 × 10^28
Any gas | 0.0446 times the average molecular mass (in g/mol), hence between 0.09 and ca. 13.1 (at 0 °C and 1 atm) |
For example, air (0 °C, 25 °C) | 1.29, 1.17 |
[Table: density of air ρ (kg/m3) vs. temperature T (°C)]
Note the low density of aluminium compared to most other metals. For this reason, aircraft are largely made of aluminium. Also note that air has a nonzero, albeit small, density. Aerogel is the world's lightest solid.
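The "any gas" entry in the table above is simply the ideal-gas law evaluated at 0 °C and 1 atm. The sketch below is an illustration (not from the article) that reproduces the 0.0446 factor and the quoted air and hydrogen values.

```python
# Illustrative check: rho = P * M / (R * T) at 0 degrees C and 1 atm gives the
# "0.0446 times the molar mass" rule quoted in the table.

R = 8.314          # J/(mol K)
P = 101_325.0      # Pa (1 atm)
T = 273.15         # K (0 degrees C)

def gas_density(molar_mass_g_per_mol, pressure=P, temperature=T):
    """Ideal-gas density in kg/m^3 for a given molar mass in g/mol."""
    return pressure * (molar_mass_g_per_mol / 1000.0) / (R * temperature)

print(P / (R * T) / 1000.0)     # ~0.0446 kg/m^3 per (g/mol), the factor in the table
print(gas_density(28.96))       # air:      ~1.29 kg/m^3, matching the table
print(gas_density(2.016))       # hydrogen: ~0.09 kg/m^3, the lower end of the quoted range
```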
- Standard temperature and pressure
- Number density
- Relative density (specific gravity)
- Charge density
- Energy density
| http://www.newworldencyclopedia.org/entry/Density | 13
79 | Airplane aerodynamics: fundamentals and flight principles.
Fundamentals of aerodynamics
Air, like any other fluid, is able to flow and change its shape when subjected to even minute pressures because of the lack of strong molecular cohesion. For example, gas will completely fill any container into which it is placed, expanding or contracting to adjust its shape to the limits of the container.
Because air has mass and weight, it is a body, and as a body, it reacts to the scientific laws of bodies in the same manner as other gaseous bodies. This body of air resting upon the surface of the earth has weight and at sea level develops an average pressure of 14.7 pounds on each square inch of surface, or 29.92 inches of mercury—but as its thickness is limited, the higher the altitude, the less air there is above. For this reason, the weight of the atmosphere at 18,000 feet is only one-half what it is at sea level.
Though there are various kinds of pressure, this discussion is mainly concerned with atmospheric pressure. It is one of the basic factors in weather changes, helps to lift the airplane, and actuates some of the important flight instruments in the airplane. These instruments are the altimeter, the airspeed indicator, the rate-of-climb indicator, and the manifold pressure gauge.
Though air is very light, it has mass and is affected by the attraction of gravity. Therefore, like any other substance, it has weight, and because of its weight, it has force. Since it is a fluid substance, this force is exerted equally in all directions, and its effect on bodies within the air is called pressure. Under standard conditions at sea level, the average pressure exerted on the human body by the weight of the atmosphere around it is approximately 14.7 lb./sqin.
The density of air has significant effects on the airplane’s capability. As air becomes less dense, it reduces (1) power because the engine takes in less air, (2) thrust because the propeller is less efficient in thin air, and (3) lift because the thin air exerts less force on the airfoils.
Effects of pressure on density
Since air is a gas, it can be compressed or expanded. When air is compressed, a greater amount of air can occupy a given volume. Conversely, when pressure on a given volume of air is decreased, the air expands and occupies a greater space. That is, the original column of air at a lower pressure contains a smaller mass of air. In other words, the density is decreased. In fact, density is directly proportional to pressure. If the pressure is doubled, the density is doubled, and if the pressure is lowered, so is the density. This statement is true, only at a constant temperature.
Effects of temperature on density
The effect of increasing the temperature of a substance is to decrease its density. Conversely, decreasing the temperature has the effect of increasing the density. Thus, the density of air varies inversely as the absolute temperature varies. This statement is true, only at a constant pressure.
In the atmosphere, both temperature and pressure decrease with altitude, and have conflicting effects upon density. However, the fairly rapid drop in pressure as altitude is increased usually has the dominating effect. Hence, density can be expected to decrease with altitude.
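These two effects can be combined in the ideal-gas approximation ρ = PM/(RT). The short Python sketch below, using illustrative pressure and temperature values rather than exact standard-atmosphere figures, shows density falling with temperature at fixed pressure and falling with altitude even though the air up there is colder:

```python
# Sketch (ideal-gas approximation): air density from pressure and temperature.
# rho = P * M / (R * T), with M the molar mass of dry air (~0.02896 kg/mol).

R = 8.314        # J/(mol*K)
M_AIR = 0.02896  # kg/mol, approximate

def air_density(pressure_pa, temp_c):
    return pressure_pa * M_AIR / (R * (temp_c + 273.15))

print(round(air_density(101325, 15), 3))   # sea level, 15 C -> about 1.225 kg/m^3
print(round(air_density(101325, 35), 3))   # hotter air, same pressure -> less dense
print(round(air_density(50600, -25), 3))   # near 18,000 ft: pressure roughly halved,
                                           # colder air, yet density still drops:
                                           # the pressure effect dominates
```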
Effects of humidity on density
The preceding paragraphs have assumed that the air was perfectly dry. In reality, it is never completely dry. The small amount of water vapor suspended in the atmosphere may be almost negligible under certain conditions, but in other conditions humidity may become an important factor in the performance of an airplane. Water vapor is lighter than air; consequently, moist air is lighter than dry air. It is lightest or least dense when, in a given set of conditions, it contains the maximum amount of water vapor. The higher the temperature, the greater amount of water vapor the air can hold. When comparing two separate air masses, the first warm and moist (both qualities tending to lighten the air) and the second cold and dry (both qualities making it heavier), the first necessarily must be less dense than the second. Pressure, temperature, and humidity have a great influence on airplane performance, because of their effect upon density.
Bernoulli's principle of pressure
Three centuries ago, Mr. Daniel Bernoulli, a Swiss mathematician, explained how the pressure of a moving fluid (liquid or gas) varies with its speed of motion. Specifically, he stated that an increase in the speed of movement or flow would cause a decrease in the fluid’s pressure. This is exactly what happens to air passing over the curved top of the airplane wing.
A practical application of Bernoulli’s theorem is the venturi tube. The venturi tube has an air inlet which narrows to a throat (constricted point) and an outlet section which increases in diameter toward the rear. The diameter of the outlet is the same as that of the inlet. At the throat, the airflow speeds up and the pressure decreases; at the outlet, the airflow slows and the pressure increases.
Figure 1: Air pressure decreases in a venturi.
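A minimal sketch of the venturi numbers, assuming steady incompressible flow and illustrative inlet conditions, applies the continuity equation and Bernoulli's principle to estimate the pressure drop at the throat:

```python
# Sketch: incompressible Bernoulli + continuity applied to a venturi throat.
# The numbers used here are illustrative only.

RHO = 1.225  # kg/m^3, approximate sea-level air density

def throat_pressure_drop(v_inlet, area_inlet, area_throat, rho=RHO):
    """Pressure drop between inlet and throat for steady, incompressible flow."""
    v_throat = v_inlet * area_inlet / area_throat      # continuity: A1*v1 = A2*v2
    return 0.5 * rho * (v_throat**2 - v_inlet**2)      # Bernoulli: p1 - p2

# 50 m/s at the inlet, throat area half the inlet area -> speed doubles at throat
dp = throat_pressure_drop(v_inlet=50.0, area_inlet=2.0, area_throat=1.0)
print(round(dp, 1), "Pa lower at the throat")          # about 4593.8 Pa
```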
Air has viscosity, and will encounter resistance to flow over a surface. The viscous nature of airflow reduces the local velocities on a surface and is responsible for skin friction drag. As the air passes over the wing’s surface, the air particles nearest the surface come to rest. The next layer of particles is slowed down but not stopped. Some small but measurable distance from the surface, the air particles are moving at free stream velocity. The layer of air over the wing’s surface, which is slowed down or stopped by viscosity, is termed the “boundary layer.” Typical boundary layer thicknesses on an airplane range from small fractions of an inch near the leading edge of a wing to the order of 12 inches at the aft end of a large airplane such as a Boeing 747.
There are two different types of boundary layer flow: laminar and turbulent. The laminar boundary layer is a very smooth flow, while the turbulent boundary layer contains swirls or “eddies.” The laminar flow creates less skin friction drag than the turbulent flow, but is less stable. Boundary layer flow over a wing surface begins as a smooth laminar flow. As the flow continues back from the leading edge, the laminar boundary layer increases in thickness. At some distance back from the leading edge, the smooth laminar flow breaks down and transitions to a turbulent flow. From a drag standpoint, it is advisable to have the transition from laminar to turbulent flow as far aft on the wing as possible, or have a large amount of the wing surface within the laminar portion of the boundary layer. The low energy laminar flow, however, tends to break down more suddenly than the turbulent layer.
Figure 2: Boundary layer.
Another phenomenon associated with viscous flow is separation. Separation occurs when the airflow breaks away from an airfoil. The natural progression is from laminar boundary layer to turbulent boundary layer and then to airflow separation. Airflow separation produces high drag and ultimately destroys lift. The boundary layer separation point moves forward on the wing as the angle of attack is increased.
Lift can best be explained by looking at a cylinder rotating in an airstream. The local velocity near the cylinder is composed of the airstream velocity and the cylinder's rotational velocity, which decreases with distance from the cylinder. On a cylinder which is rotating in such a way that the top surface area is rotating in the same direction as the airflow, the local velocity at the surface is high on top and low on the bottom.
As shown in figure 3, at point “A,” a stagnation point exists where the airstream line that impinges on the surface splits; some air goes over and some under. Another stagnation point exists at “B,” where the two airstreams rejoin and resume at identical velocities. We now have upwash ahead of the rotating cylinder and downwash at the rear.
The difference in surface velocity accounts for a difference in pressure, with the pressure being lower on the top than the bottom. This low pressure area produces an upward force known as the “Magnus Effect.” This mechanically induced circulation illustrates the relationship between circulation and lift.
An airfoil with a positive angle of attack develops air circulation as its sharp trailing edge forces the rear stagnation point to be aft of the trailing edge, while the front stagnation point is below the leading edge.
Figure 3: The Magnus effect is a lifting effect produced, when a rotating cylinder produces a pressure differential.
If air is recognized as a body and it is accepted that it must follow the above laws, one can begin to see how and why an airplane wing develops lift as it moves through the air.
It has already been discussed in general terms the question of how an airplane wing can sustain flight when the airplane is heavier than air. Perhaps the explanation can best be reduced to its most elementary concept by stating that lift (flight) is simply the result of fluid flow (air) about an airfoil—or in everyday language, the result of moving an airfoil (wing), by whatever means, through the air.
Since it is the airfoil which harnesses the force developed by its movement through the air, a discussion and explanation of this structure will be presented.
An airfoil is a structure designed to obtain reaction upon its surface from the air through which it moves or that moves past such a structure. Air acts in various ways when submitted to different pressures and velocities; but this discussion will be confined to the parts of an airplane that a pilot is most concerned with in flight—namely, the airfoils designed to produce lift.
By looking at a typical airfoil profile, such as the cross section of a wing, one can see several obvious characteristics of design.
Figure 4: Typical airfoil section.
Notice that there is a difference in the curvatures of the upper and lower surfaces of the airfoil (the curvature is called camber). The camber of the upper surface is more pronounced than that of the lower surface, which is somewhat flat in most instances.
In figure 4, note that the two extremities of the airfoil profile also differ in appearance. The end which faces forward in flight is called the leading edge, and is rounded; while the other end, the trailing edge, is quite narrow and tapered.
A reference line often used in discussing the airfoil is the chord line, a straight line drawn through the profile connecting the extremities of the leading and trailing edges. The distance from this chord line to the upper and lower surfaces of the wing denotes the magnitude of the upper and lower camber at any point. Another reference line, drawn from the leading edge to the trailing edge, is the “mean camber line.” This mean line is equidistant at all points from the upper and lower contours.
The construction of the wing, so as to provide actions greater than its weight, is done by shaping the wing so that advantage can be taken of the air’s response to certain physical laws, and thus develop two actions from the air mass; a positive pressure lifting action from the air mass below the wing, and a negative pressure lifting action from lowered pressure above the wing.
The fact that most lift is the result of the airflow’s downwash from above the wing, must be thoroughly understood in order to continue further in the study of flight. It is neither accurate nor does it serve a useful purpose, however, to assign specific values to the percentage of lift generated by the upper surface of an airfoil versus that generated by the lower surface. These are not constant values and will vary, not only with flight conditions, but with different wing designs.
It should be understood that different airfoils have different flight characteristics. Many thousands of airfoils have been tested in wind tunnels and in actual flight, but no one airfoil has been found that satisfies every flight requirement. The weight, speed, and purpose of each airplane dictate the shape of its airfoil. It was learned many years ago that the most efficient airfoil for producing the greatest lift was one that had a concave, or “scooped out” lower surface. Later it was also learned that as a fixed design, this type of airfoil sacrificed too much speed while producing lift and, therefore, was not suitable for high-speed flight. It is interesting to note, however, that through advanced progress in engineering, today’s high-speed jets can again take advantage of the concave airfoil’s high lift characteristics. Leading edge (Kreuger) flaps and trailing edge (Fowler) flaps, when extended from the basic wing structure, literally change the airfoil shape into the classic concave form, thereby generating much greater lift during slow flight conditions.
On the other hand, an airfoil that is perfectly streamlined and offers little wind resistance sometimes does not have enough lifting power to take the airplane off the ground. Thus, modern airplanes have airfoils which strike a medium between extremes in design, the shape varying according to the needs of the airplane for which it is designed. Figure 5 shows some of the more common airfoil sections.
Figure 5: Airfoil designs.
Momentum effects of airflow
In a wind tunnel or in flight, an airfoil is simply a streamlined object inserted into a moving stream of air. If the airfoil profile were in the shape of a teardrop, the speed and the pressure changes of the air passing over the top and bottom would be the same on both sides. But if the teardrop shaped airfoil were cut in half lengthwise, a form resembling the basic airfoil (wing) section would result. If the airfoil were then inclined so the airflow strikes it at an angle (angle of attack), the air molecules moving over the upper surface would be forced to move faster than would the molecules moving along the bottom of the airfoil, since the upper molecules must travel a greater distance due to the curvature of the upper surface. This increased velocity reduces the pressure above the airfoil.
Bernoulli’s principle of pressure by itself does not explain the distribution of pressure over the upper surface of the airfoil. A discussion of the influence of momentum of the air as it flows in various curved paths near the airfoil will be presented.
Figure 6: Momentum influences airflow over an airfoil.
Momentum is the resistance a moving body offers to having its direction or amount of motion changed. When a body is forced to move in a circular path, it offers resistance in the direction away from the center of the curved path. This is “centrifugal force.” While the particles of air move in the curved path AB, centrifugal force tends to throw them in the direction of the arrows between A and B and hence, causes the air to exert more than normal pressure on the leading edge of the airfoil. But after the air particles pass B (the point of reversal of the curvature of the path) the centrifugal force tends to throw them in the direction of the arrows between B and C (causing reduced pressure on the airfoil). This effect is held until the particles reach C, the second point of reversal of curvature of the airflow. Again the centrifugal force is reversed and the particles may even tend to give slightly more than normal pressure on the trailing edge of the airfoil, as indicated by the short arrows between C and D.
Therefore, the air pressure on the upper surface of the airfoil is distributed so that the pressure is much greater on the leading edge than the surrounding atmospheric pressure, causing strong resistance to forward motion; but the air pressure is less than surrounding atmospheric pressure over a large portion of the top surface (B to C).
Fluid flow or airflow then, is the basis for flight in airplanes, and is a product of the velocity of the airplane. The velocity of the airplane is very important to the pilot since it affects the lift and drag forces of the airplane. The pilot uses the velocity (airspeed) to fly at a minimum glide angle, at maximum endurance, and for a number of other flight maneuvers. Airspeed is the velocity of the airplane relative to the air mass through which it is flying.
From experiments conducted on wind tunnel models and on full size airplanes, it has been determined that as air flows along the surface of a wing at different angles of attack, there are regions along the surface where the pressure is negative, or less than atmospheric, and regions where the pressure is positive, or greater than atmospheric. This negative pressure on the upper surface creates a relatively larger force on the wing than is caused by the positive pressure resulting from the air striking the lower wing surface. Figure 7 shows the pressure distribution along an airfoil at three different angles of attack. In general, at high angles of attack the center of pressure moves forward, while at low angles of attack the center of pressure moves aft. In the design of wing structures, this center of pressure travel is very important, since it affects the position of the airloads imposed on the wing structure in low angle-of-attack conditions and high angle-of-attack conditions. The airplane’s aerodynamic balance and controllability are governed by changes in the center of pressure.
Figure 7: Pressure distribution on an airfoil.
The center of pressure is determined through calculation and wind tunnel tests by varying the airfoil’s angle of attack through normal operating extremes. As the angle of attack is changed, so are the various pressure distribution characteristics.
Figure 8: Force vectors on an airfoil.
Positive (+) and negative (–) pressure forces are totaled for each angle of attack and the resultant force is obtained. The total resultant pressure is represented by the resultant force vector shown in figure 8.
The point of application of this force vector is termed the “center of pressure” (CP). For any given angle of attack, the center of pressure is the point where the resultant force crosses the chord line. This point is expressed as a percentage of the chord of the airfoil. A center of pressure at 30 percent of a 60-inch chord would be 18 inches aft of the wing’s leading edge. It would appear then that if the designer would place the wing so that its center of pressure was at the airplane’s center of gravity, the airplane would always balance. The difficulty arises, however, that the location of the center of pressure changes with change in the airfoil’s angle of attack.
Figure 9: CP changes with an angle of attack.
In the airplane’s normal range of flight attitudes, if the angle of attack is increased, the center of pressure moves forward; and if decreased, it moves rearward. Since the center of gravity is fixed at one point, it is evident that as the angle of attack increases, the center of lift (CL) moves ahead of the center of gravity, creating a force which tends to raise the nose of the airplane or tends to increase the angle of attack still more. On the other hand, if the angle of attack is decreased, the center of lift (CL) moves aft and tends to decrease the angle a greater amount. It is seen then, that the ordinary airfoil is inherently unstable, and that an auxiliary device, such as the horizontal tail surface, must be added to make the airplane balance longitudinally.
The balance of an airplane in flight depends, therefore, on the relative position of the center of gravity (CG) and the center of pressure (CP) of the airfoil. Experience has shown that an airplane with the center of gravity in the vicinity of 20 percent of the wing chord can be made to balance and fly satisfactorily.
Airplane loading and weight distribution also affect center of gravity and cause additional forces, which in turn affect airplane balance.
| http://www.free-online-private-pilot-ground-school.com/aerodynamics.html | 13
65 | In mathematics, the integral of a function of several variables defined on a line or curve that has been expressed in terms of arc length (see length of a curve). An ordinary definite integral is defined over a line segment, whereas a line integral may use a more general path, such as a parabola or a circle. Line integrals are used extensively in the theory of functions of a complex variable.
The function to be integrated may be a scalar field or a vector field. The value of the line integral is the sum of values of the field at all points on the curve, weighted by some scalar function on the curve (commonly arc length or, for a vector field, the scalar product of the vector field with a differential vector in the curve). This weighting distinguishes the line integral from simpler integrals defined on intervals. Many simple formulas in physics (for example, the work W = F · s done by a constant force) have natural continuous analogs in terms of line integrals (W = ∫C F · ds). The line integral finds the work done on an object moving through an electric or gravitational field, for example.
The function f is called the integrand, the curve C is the domain of integration, and the symbol ds may be heuristically interpreted as an elementary arc length. Line integrals of scalar fields do not depend on the chosen parametrization r.
A line integral of a scalar field is thus a line integral of a vector field where the vectors are always tangential to the line.
Line integrals of vector fields are independent of the parametrization r in absolute value, but they do depend on its orientation. Specifically, a reversal in the orientation of the parametrization changes the sign of the line integral.
which happens to be the integrand for the line integral of F on r(t). It follows that, given a path C from r(a) to r(b), ∫C F · dr = G(r(b)) − G(r(a)).
In words, the integral of F over C depends solely on the values of G in the points r(b) and r(a) and is thus independent of the path between them.
For this reason, a line integral of a vector field which is the gradient of a scalar field is called path independent.
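A small numerical check of path independence, using an assumed scalar field G(x, y) = x²y (so that F = ∇G), might look like the following Python sketch; both paths share the same endpoints, and both integrals agree with G evaluated at the endpoints:

```python
# Sketch: numerical check of path independence for a gradient field.
# Take G(x, y) = x**2 * y, so F = grad G = (2*x*y, x**2).
import numpy as np

def line_integral_of_F(path, n=20000):
    """Approximate integral of F . dr along a parametrized path r(t), t in [0, 1]."""
    t = np.linspace(0.0, 1.0, n)
    x, y = path(t)
    Fx, Fy = 2 * x * y, x**2
    # integrate F(r(t)) . r'(t) dt, with r'(t) from finite differences
    dx, dy = np.gradient(x, t), np.gradient(y, t)
    return np.trapz(Fx * dx + Fy * dy, t)

straight = lambda t: (t, t)        # straight line from (0, 0) to (1, 1)
curved = lambda t: (t, t**3)       # a different path with the same endpoints

G = lambda x, y: x**2 * y
print(line_integral_of_F(straight))   # ~1.0
print(line_integral_of_F(curved))     # ~1.0
print(G(1, 1) - G(0, 0))              # exactly 1, matching both paths
```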
The line integral ∫C f ds may be defined by subdividing the interval [a, b] into a = t0 < t1 < ... < tn = b and considering the expression ∑ f(r(ti)) |r(ti+1) − r(ti)|, where r is a parametrization of the curve C.
The integral is then the limit of this sum, as the lengths of the subdivision intervals approach zero.
If r : [a, b] → C is a continuously differentiable parametrization of the curve, the line integral can be evaluated as an integral of a function of a real variable: ∫C f ds = ∫ab f(r(t)) |r′(t)| dt.
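As a concrete sketch of that evaluation formula, the following Python code (using an assumed integrand f(x, y) = x + y and the upper unit semicircle as the curve) computes the line integral numerically:

```python
# Sketch: evaluate the integral of f(x, y) = x + y over the upper unit semicircle,
# using the parametrization r(t) = (cos t, sin t), t in [0, pi], and
# integral over C of f ds = integral from a to b of f(r(t)) * |r'(t)| dt.
import numpy as np
from scipy.integrate import quad

def integrand(t):
    x, y = np.cos(t), np.sin(t)
    speed = 1.0                      # |r'(t)| = sqrt(sin^2 t + cos^2 t) = 1
    return (x + y) * speed

value, _ = quad(integrand, 0.0, np.pi)
print(value)                         # 2.0: the cosine part integrates to 0, the sine part to 2
```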
When C is a closed curve, that is, when its initial and final points coincide, the notation ∮C f ds is often used for the line integral of f along C.
The line integrals of complex functions can be evaluated using a number of techniques: the integral may be split into real and imaginary parts, reducing the problem to that of evaluating two real-valued line integrals, and the Cauchy integral formula may be used in other circumstances. If the line integral is taken over a closed curve in a region where the function is analytic and which contains no singularities, then the value of the integral is simply zero; this is a consequence of the Cauchy integral theorem. Because of the residue theorem, one can often use contour integrals in the complex plane to find integrals of real-valued functions of a real variable (see residue theorem for an example).
Consider the function f(z) = 1/z, and let the contour C be the unit circle about 0, which can be parametrized by e^(it), with t in [0, 2π]. Substituting, we find that the integral equals 2πi.
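The same calculation can be checked numerically; the sketch below parametrizes the unit circle and approximates the contour integral, recovering a value very close to 2πi:

```python
# Sketch: numerical check that the contour integral of 1/z around the unit
# circle equals 2*pi*i, using z(t) = exp(i*t) and dz = i*exp(i*t) dt.
import numpy as np

t = np.linspace(0.0, 2.0 * np.pi, 200001)
z = np.exp(1j * t)
dz_dt = 1j * np.exp(1j * t)
integral = np.trapz((1.0 / z) * dz_dt, t)
print(integral)          # approximately 0 + 6.283185...j, i.e. 2*pi*i
```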
provided that both integrals on the right hand side exist, and that the parametrization of C has the same orientation as the original curve.
Due to the Cauchy-Riemann equations, the curl of the vector field corresponding to the conjugate of a holomorphic function is zero. Through Stokes' theorem, this relates the vanishing of both types of line integral.
Also, the line integral can be evaluated using the change of variables.
The "path integral formulation" of quantum mechanics actually refers not to path integrals in this sense but to functional integrals, that is, integrals over a space of paths, of a function of a possible path. However, path integrals in the sense of this article are important in quantum mechanics; for example, complex contour integration is often used in evaluating probability amplitudes in quantum scattering theory.
60 | In mathematics, a binary operation is commutative if changing the order of the operands does not change the result. It is a fundamental property of many binary operations, and many mathematical proofs depend on it. The commutativity of simple operations, such as multiplication and addition of numbers, was for many years implicitly assumed and the property was not named until the 19th century when mathematics started to become formalized. By contrast, division and subtraction are not commutative.
Common uses
The commutative property (or commutative law) is a property associated with binary operations and functions. Similarly, if the commutative property holds for a pair of elements under a certain binary operation then it is said that the two elements commute under that operation.
Propositional logic
Rule of replacement
In standard truth-functional propositional logic, commutation, or commutativity, names two valid rules of replacement. The rules allow one to transpose propositional variables within logical expressions in logical proofs. The rules are: (P ∨ Q) ⇔ (Q ∨ P) and (P ∧ Q) ⇔ (Q ∧ P).
Truth functional connectives
Commutativity is a property of some logical connectives of truth functional propositional logic. The following logical equivalences demonstrate that commutativity is a property of particular connectives. The following are truth-functional tautologies.
Commutativity of conjunction: (P ∧ Q) ↔ (Q ∧ P)
Commutativity of disjunction: (P ∨ Q) ↔ (Q ∨ P)
Commutativity of implication (also called the Law of permutation): (P → (Q → R)) ↔ (Q → (P → R))
Commutativity of equivalence (also called the Complete commutative law of equivalence): (P ↔ Q) ↔ (Q ↔ P)
Set theory
In group and set theory, many algebraic structures are called commutative when certain operands satisfy the commutative property. In higher branches of mathematics, such as analysis and linear algebra the commutativity of well known operations (such as addition and multiplication on real and complex numbers) is often used (or implicitly assumed) in proofs.
Mathematical definitions
1. A binary operation ∗ on a set S is called commutative if: x ∗ y = y ∗ x for all x, y in S.
An operation that does not satisfy the above property is called noncommutative.
2. One says that x commutes with y under ∗ if: x ∗ y = y ∗ x.
3. A binary function f : A × A → B is called commutative if: f(x, y) = f(y, x) for all x, y in A. (A brute-force check of these definitions over sample values is sketched below.)
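The check referred to above is a minimal Python sketch that tests x ∗ y = y ∗ x over a small sample of integers; the operations shown are ordinary addition, multiplication, and subtraction.

```python
# Sketch: brute-force check of the definition x * y == y * x over sample values.

def is_commutative(op, sample):
    return all(op(x, y) == op(y, x) for x in sample for y in sample)

sample = range(-5, 6)
print(is_commutative(lambda x, y: x + y, sample))   # True: addition commutes
print(is_commutative(lambda x, y: x * y, sample))   # True: multiplication commutes
print(is_commutative(lambda x, y: x - y, sample))   # False: subtraction does not
```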
History and etymology
Records of the implicit use of the commutative property go back to ancient times. The Egyptians used the commutative property of multiplication to simplify computing products. Euclid is known to have assumed the commutative property of multiplication in his book Elements. Formal uses of the commutative property arose in the late 18th and early 19th centuries, when mathematicians began to work on a theory of functions. Today the commutative property is a well known and basic property used in most branches of mathematics.
The first recorded use of the term commutative was in a memoir by François Servois in 1814, which used the word commutatives when describing functions that have what is now called the commutative property. The word is a combination of the French word commuter meaning "to substitute or switch" and the suffix -ative meaning "tending to" so the word literally means "tending to substitute or switch." The term then appeared in English in Philosophical Transactions of the Royal Society in 1844.
Related properties
The associative property is closely related to the commutative property. The associative property of an expression containing two or more occurrences of the same operator states that the order operations are performed in does not affect the final result, as long as the order of terms doesn't change. In contrast, the commutative property states that the order of the terms does not affect the final result.
Most commutative operations encountered in practice are also associative. However, commutativity does not imply associativity. A counterexample is the averaging function f(x, y) = (x + y)/2,
which is clearly commutative (interchanging x and y does not affect the result), but it is not associative (since, for example, f(f(1, 2), 3) = 2.25 but f(1, f(2, 3)) = 1.75).
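Assuming the averaging function given above as the counterexample, a short Python check confirms that it commutes but does not associate:

```python
# Sketch: the averaging function is commutative but not associative.

def f(x, y):
    return (x + y) / 2

print(f(1, 2) == f(2, 1))            # True: the order of the operands does not matter
print(f(f(1, 2), 3), f(1, f(2, 3)))  # 2.25 versus 1.75: the grouping does matter
```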
Some forms of symmetry can be directly linked to commutativity. When a commutative operator is written as a binary function, the resulting function is symmetric across the line y = x. As an example, if we let a function f represent addition (a commutative operation) so that f(x, y) = x + y, then f is a symmetric function.
For relations, a symmetric relation is analogous to a commutative operation, in that if a relation R is symmetric, then a R b implies b R a.
Commutative operations in everyday life
- Putting on socks resembles a commutative operation, since which sock is put on first is unimportant. Either way, the result (having both socks on), is the same.
- The commutativity of addition is observed when paying for an item with cash. Regardless of the order the bills are handed over in, they always give the same total.
Commutative operations in mathematics
Two well-known examples of commutative binary operations:
- The addition of real numbers is commutative: for example, 4 + 5 = 5 + 4, since both expressions equal 9.
- The multiplication of real numbers is commutative: for example, 3 × 5 = 5 × 3, since both expressions equal 15.
- Some truth functions are also commutative, since the truth tables for the functions are the same when one changes the order of the operands.
- For example, EVpqVqp; EApqAqp; EDpqDqp; EEpqEqp; EJpqJqp; EKpqKqp; EXpqXqp; EOpqOqp.
- Further examples of commutative binary operations include addition and multiplication of complex numbers, addition and scalar multiplication of vectors, and intersection and union of sets.
Noncommutative operations in everyday life
- Concatenation, the act of joining character strings together, is a noncommutative operation. For example, concatenating "ab" with "c" gives "abc", whereas concatenating "c" with "ab" gives "cab".
- Washing and drying clothes resembles a noncommutative operation; washing and then drying produces a markedly different result to drying and then washing.
- Rotating a book 90° around a vertical axis then 90° around a horizontal axis produces a different orientation than when the rotations are performed in the opposite order.
- The twists of the Rubik's Cube are noncommutative. This can be studied using group theory.
Noncommutative operations in mathematics
Some noncommutative binary operations:
- Subtraction is noncommutative, since, for example, 5 − 3 ≠ 3 − 5.
- Division is noncommutative, since, for example, 6 ÷ 3 ≠ 3 ÷ 6.
- Some truth functions are noncommutative, since the truth tables for the functions are different when one changes the order of the operands.
- For example, EBpqCqp; ECpqBqp; EFpqGqp; EGpqFqp; EHpqIqp; EIpqHqp; ELpqMqp; EMpqLqp.
- Matrix multiplication is noncommutative, since in general AB ≠ BA for two matrices A and B (a numerical check appears after this list).
- The vector product (or cross product) of two vectors in three dimensions is anti-commutative, i.e., b × a = −(a × b).
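The last two items can be illustrated numerically. The following sketch, using NumPy and arbitrarily chosen small matrices and vectors, shows that AB ≠ BA and that the cross product changes sign when its operands are swapped:

```python
# Sketch: numerical illustration of noncommutative matrix multiplication
# and the anti-commutative cross product.
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [1, 0]])
print(np.array_equal(A @ B, B @ A))        # False: matrix multiplication does not commute

a = np.array([1, 0, 0])
b = np.array([0, 1, 0])
print(np.cross(a, b), np.cross(b, a))      # [0 0 1] and [0 0 -1]: anti-commutative
```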
Mathematical structures and commutativity
- A commutative semigroup is a set endowed with a total, associative and commutative operation.
- If the operation additionally has an identity element, we have a commutative monoid
- An abelian group, or commutative group is a group whose group operation is commutative.
- A commutative ring is a ring whose multiplication is commutative. (Addition in a ring is always commutative.)
- In a field both addition and multiplication are commutative.
Non-commuting operators in quantum mechanics
In quantum mechanics as formulated by Schrödinger, physical variables are represented by linear operators such as x (meaning multiply by x), and d/dx. These two operators do not commute, as may be seen by considering the effect of their products x(d/dx) and (d/dx)x on a one-dimensional wave function ψ(x): x(d/dx)ψ = x ψ′, whereas (d/dx)(xψ) = ψ + x ψ′, so the two products differ by ψ.
According to the uncertainty principle of Heisenberg, if the two operators representing a pair of variables do not commute, then that pair of variables are mutually complementary, which means they cannot be simultaneously measured or known precisely. For example, the position and the linear momentum of a particle are represented respectively (in the x-direction) by the operators x and (ħ/i)d/dx (where ħ is the reduced Planck constant). This is the same example except for the constant (ħ/i), so again the operators do not commute and the physical meaning is that the position and linear momentum in a given direction are complementary.
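A symbolic check of the non-commutation of x and d/dx, written as a SymPy sketch with an arbitrary wave function ψ(x), reproduces the result above:

```python
# Sketch: symbolic check that x and d/dx do not commute, using SymPy.
import sympy as sp

x = sp.symbols('x')
psi = sp.Function('psi')(x)

x_then_ddx = x * sp.diff(psi, x)        # apply d/dx first, then multiply by x
ddx_then_x = sp.diff(x * psi, x)        # multiply by x first, then apply d/dx

print(sp.simplify(x_then_ddx - ddx_then_x))   # -psi(x): the commutator acts as -1
```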
See also
- Binary operation
- Commutative diagram
- Commutative (neurophysiology)
- Particle statistics (for commutativity in physics)
- Truth function
- Truth table
- Abstract algebra theory. Covers commutativity in that context. Uses property throughout book.
- Goodman, Frederick (2003). Algebra: Abstract and Concrete, Stressing Symmetry, 2e. Prentice Hall. ISBN 0-13-067342-0.
- Abstract algebra theory. Uses commutativity property throughout book.
- Gallian, Joseph (2006). Contemporary Abstract Algebra, 6e. Boston, Mass.: Houghton Mifflin. ISBN 0-618-51471-6.
- Linear algebra theory. Explains commutativity in chapter 1, uses it throughout.
- Lumpkin, B. (1997). The Mathematical Legacy Of Ancient Egypt - A Response To Robert Palter. Unpublished manuscript. http://www.ethnomath.org/resources/lumpkin1997.pdf
- Article describing the mathematical ability of ancient civilizations.
- Robins, R. Gay, and Charles C. D. Shute. 1987. The Rhind Mathematical Papyrus: An Ancient Egyptian Text. London: British Museum Publications Limited. ISBN 0-7141-0944-4
- Translation and interpretation of the Rhind Mathematical Papyrus.
Online resources
- Hazewinkel, Michiel, ed. (2001), "Commutativity", Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4
- Krowne, Aaron, Commutative at PlanetMath, Accessed 8 August 2007.
- Definition of commutativity and examples of commutative operations
- Explanation of the term commute
- Examples proving some noncommutative operations
- O'Conner, J J and Robertson, E F. MacTutor history of real numbers, Accessed 8 August 2007
- Article giving the history of the real numbers
- Cabillón, Julio and Miller, Jeff. Earliest Known Uses Of Mathematical Terms, Accessed 22 November 2008
- Page covering the earliest uses of mathematical terms
- O'Conner, J J and Robertson, E F. MacTutor biography of François Servois, Accessed 8 August 2007
- Biography of Francois Servois, who first used the term | http://en.wikipedia.org/wiki/Commutativity | 13 |
The history of advertising has to be understood psychologically. Sophisticated advertising elements introduced over the years have shaped motivational research to define consumer behavior, media analysis to reach targeted consumers, and creative strategy to enhance selling messages. Advertising is a communications tool that functions most efficiently in combination with centralized exchange: when goods and services are no longer sold directly between buyers and sellers but are handled by merchants as intermediaries. Advertising was needed to make potential consumers aware of the availability of goods. In an economy in which supply exceeds demand, advertising creates demand by introducing new products or suggesting how consumers can solve some problem with existing products.
The beginning of the fulfillment of democratic ideals created a more literate society, which in turn created more need and support for newspapers and magazines. The Industrial Revolution swept across America, allowing for more efficient manufacturing of goods and mass production of newspapers and magazines. This created a larger working class that urbanized and needed more information. Railroads and telegraphic communication began to pave the way for brands to communicate more widely. National brands began to emerge and changed the structure of advertising: products were now differentiated, and consumers gained loyalty to particular brands. The availability of branded goods, the ability to provide national distribution, and a growing middle class with income to spend provided a market for more products. This supported the growth of the advertising industry.
Advertising’s modern era arose out of research and responsibility. In early advertising, there was neither an ethical framework for creating promotional messages nor reliable research to measure advertising effects. It used to be that people were considered responsible for discerning truth from falsehood themselves, so there was thought to be little need for regulation of information. By the late 19th century the public called for greater consumer protection, mainly in patent medicines. Research at this time often provided unreliable information and sometimes spread false fears about the power of subliminal advertising and motivational research.
The earliest known evidence of advertising can be found on Babylonian clay tablets from about 3000 B.C. Inns in England were the first to use hoardings, the first printed outdoor signs and the forerunner of modern outdoor advertising. The foundations of modern advertising can be divided into four parts:
- The premarketing era: from start of product exchange to the 18th century. During this time, buyers and sellers communicated very primitively. They used clay tablets, town criers, tavern signs, or primitive printing.
- Mass communication era: from the 1700s to the early 20th century, advertisers were increasingly able to reach large segments of the population through mass media. They used mass newspapers, national magazines, and radio, and brands began to differentiate themselves.
- Research era: in recent years, from the 1920s to now, advertisers increasingly have been able to identify narrowly defined audience segments through sophisticated research methods. Early research emphasized information on broad demographics such as age, sex, and geography. Now it takes lifestyles and motivation into account.
- Interactive era: communication will increasingly become controlled by consumers who will determine when and where they can be reached with promotional messages.
People no longer have to watch commercials, so advertisers have to become more sensitive to feedback and more focused on keeping people’s trust in order to advertise successfully. In today’s world there is a move toward creativity in advertising. Originally, ad agencies were media space brokers, buying bulk space from newspapers and reselling small allotments to advertisers. By the end of the 20th century, persuasive advertising had become important because serious brand competition began to take place. The shift from commodity goods to branded goods pushed strong promotional offers to accompany product advertising, which introduced the emotional appeal.
John Wanamaker of Wanamaker Department Stores began to sell products based on style and luxury rather than utility. He hired the “first” true copywriter, John Powers, and this began the move toward full-service advertising. John Watson of Johns Hopkins, the father of behavioral research, was hired to perform market research that attempted to determine the underlying reasons for purchase behavior. Advertisers began to understand the needs and wants of consumers. Then Alfred Sloan Jr. ushered in the idea of planned obsolescence, in which products are discarded not because they have lost utility but because they have lost status. By the 1950s, almost all national companies had accepted that it was the position of a brand in consumers’ minds that sells the product rather than the superiority of its utility. Brands could last forever, whereas products have a life cycle and will die.
The development of print media created a symbiotic relationship with advertising, with the newspaper as an advertising medium. The forerunners of modern want ads were siquis, handwritten posters in 16th- and 17th-century England that sought people to fill positions. With the introduction of the rotary press, the era of the penny press, the forerunner of the mass newspaper, began in the US. Newspapers established a model for financial support from advertising that continues for the majority of media today.
At first, magazines were not as successful as newspapers, but when they did become successful, they could reach people beyond the borders of a particular city. The modern consumer magazine did not begin until the later part of the 19th century. Many magazines were about health, fashion, and food, and later about problems of social reform and patent-medicine advertising. At the time, advertising support for magazines came from already successful brands that had a national following. Magazines became the preeminent medium for national advertisers because they offered national circulation, both editorial and advertising credibility, color availability, and an extremely low-cost means of reaching millions of readers. Magazines got money from advertisers to make up for the cost of production.
Mass production, a manufacturing technique using specialization and interchangeable parts to achieve production efficiencies, began to take hold in America, starting in textiles and furniture and reaching a peak with automobiles. Henry Ford adopted a successful formula in which mass production was based on high volume, affordability, and mass selling through advertising. Mass production made products readily available and improved the lifestyles and standards of living of almost all Americans. Consumption had to keep these mass-producing factories running; to encourage such consumption, the advertising industry grew to create demand. Hence the rise of advertising agencies.
Volney Palmer is generally credited with starting the first advertising agency in 1841, buying bulk newspaper space at a discount and selling it to individual advertisers at a profit. In 1869, George Rowell published newspapers’ circulation estimates and started the movement toward published rate cards and verified circulation, which made it harder for space brokers to make a profit. By the end of the 19th century, major companies were providing creative services, media placement, and basic research, as well as developing the functions of the full-service ad agencies of the future. In 1917, the American Association of Advertising Agencies (AAAA) was founded with 111 charter members. Today, it has more than 5,000 members who place approximately 75 percent of all advertising dollars.
By the 1930s, some companies had begun to expand overseas, starting the movement toward global advertising. During the rise of America, especially during the presidency of Ulysses S. Grant, the excesses of big business and the advertising that contributed to an environment of immorality (think Mad Men) forced the public and Congress to demand stricter regulation of advertising and other business practices. The Pure Food and Drug Act of 1906 was passed to protect consumers from fraudulent food producers and advertisers, but it did not address the problem of faulty food labels, and it was not strictly enforced by the FDA.
Then came the Federal Trade Commission Act of 1914, which created the agency of the federal government empowered to prevent unfair competition and to prevent fraudulent, misleading, or deceptive advertising in interstate commerce. Today, the FTC primarily ensures that advertising claims and sales practices meet reasonable standards of honesty and truthfulness.
Advertising comes of age. Some advertising executives, trying to win back the public’s trust and stop faulty ads, created the American Advertising Federation and launched a campaign to promote truth within the industry. This committee is now the Council of Better Business Bureaus, a national organization that coordinates a number of local and national initiatives to protect consumers.
The Printers’ Ink Model Statute was initiated. This act, directed at fraudulent advertising and prepared and sponsored by Printers’ Ink, the pioneer advertising magazine, was adopted and still exists today. The Audit Bureau of Circulations (ABC), the organization sponsored by publishers, agencies, and advertisers for securing accurate circulation statements, conducts its own audits and issues its own circulation reports.
During WWI, advertising agencies promoted patriotism, US government bonds, conservation, and other war-related activities rather than products. The iconic Uncle Sam ad to recruit soldiers was created.
During the 1920s, radio advertisements grew as more and more people owned radios. The Great Depression of the 1930s left the advertising industry deeply devastated. During World War II, the industry essentially repeated its WWI involvement in intensified form, creating the War Advertising Council in 1942 to promote wartime mobilization; it evolved into the Advertising Council. This would ultimately lead to the creation of the famous Rosie the Riveter ad promoting women in the workforce.
It urged people to buy US bonds, promoted rationing, and so on. The Advertising Council is a nonprofit network of agencies, media, and advertisers dedicated to promoting social programs through advertising. Today, it produces campaigns on everything from environmental issues to educational concerns, and it created Smokey the Bear and McGruff the Crime Dog.
Advertising from after WWII to 1975 was a time of growth. After the war, pent-up demand led to an unprecedented growth rate in consumer spending. Once everyone had what they needed, advertisers were called on to persuade consumers to replace the items they already had. Televisions became a household must, creating another portal for advertising. Almost everything increased in the US: population, disposable income, automobile registrations, homes with air conditioning, and so on, as well as advertising itself. During this time, many developments arose in advertising. Ad agencies began to negotiate commissions with clients, which encouraged the growth of specialized companies. Creativity (especially humor) became a hallmark of the period. Legislation limited outdoor ads along interstate highways and banned cigarette ads from TV, and the FTC ruled that ads could compare competitors but were accountable for making honest claims.
Radio took a dive when TV came along. Advertising in the fragmented 1980s became a volatile business that was constantly changing and adapting to economic conditions, technology, and the social and cultural environment. During this period, new technology (cable, home video recorders, specialized magazines, direct mail, home shopping, sales promotion) changed the fundamentals of advertising. Today, ad agents are more likely to know how to evaluate research and understand the psychology of consumers as well as being able to execute ads. Other developments of the period included:
- Audience fragmentation: the segmentation of mass-media audiences into smaller groups because of the diversity of media outlets. This time period marked the beginning of the end of traditional mass-market strategies. Advertisers began to see consumers as individuals; this changed the way market research was carried out.
- Consolidation: as media and audiences proliferated, ownership of brands, ad agencies, and media was consolidated by a few giant firms. This created a unique dynamic as agencies, media, and companies that used to be rivals were acquired and combined into giant global conglomerates.
- Credit: the “buy now, pay later” mentality that plagued everyone also hit the ad industry. Media saw ad revenues fall, ads were harder to sell, and merchants dealt with consumers looking for discounts, not fancy ads.
- America becomes a service economy
The 2000s have been marked by two significant developments in marketing and advertising. The first is defining and using new technology to reach prospects: technology allows consumers to determine when, where, and if they will invite advertisers to deliver their message. This era of permission marketing, asking consumers for permission or to opt in before sending them ads and other forms of marketing communications, requires companies to rewrite the old rules of marketing and fundamentally redefine exactly what constitutes advertising; cell phones are one example. The second is measuring the value of investing in various communication channels as it relates to the changes in how we reach prospects. With media fragmentation and consumers’ increased control over how, when, and where they receive ad messages, marketers have to come up with new strategies such as product integration, viral marketing, and contextual internet advertising.
The change from mass media to class media has increased the cost of advertising, individualized the audience, and forced advertisers to change their mind-set concerning audience measurement. Content is harder to deliver when companies that used to be competitors are now all owned by the same conglomerate. Branding in the 21st century includes a return to strong branding, with companies searching for new means to differentiate their products and move away from price competition and generic brands. Finally, the globalization and diversity of the world make understanding the language, culture, economy, and political environment of countries around the world necessary for research into international markets.
Three key points:
- Although today’s advertising industry is sophisticated and worth billions of dollars, the idea of using persuasive communication to sell is as old as trade and commerce.
- Advertising cannot be studied in the abstract. Developments in technology, research, and society as a whole must be understood alongside developments in advertising.
- Today’s advertising and promotion are no longer confined to the rules of traditional media that dominated the 20th century.
Advertising and the Changing Communication Environment
Finding a cost-efficient plan for reaching increasingly in-control and demanding consumers is the major challenge of contemporary advertising. Convergence refers to the coming together or intersecting of different components of some related system.
1. Technological Convergence: radio programs on computer
2. Business Convergence: (consolidation) merging companies
3. Content Convergence: primary expense, using commercial content
Dual problem: advertisers must choose a media plan from an ever-expanding number of options, and they must develop advertising messages that consumers will choose to spend time with. Citizen media is the term for the new relationship between advertisers and consumers. User-generated media (such as blogs) have become important in marketing plans as free information. New technology and interactive media are a central component.
Advertising as a Communicating Tool. The fundamental principle of good advertising is that it must be built around the overall marketing plan and execute the communication elements of a more far-reaching marketing program. (ex: increasing brand awareness 25% increases growth). The Marketing Plan: Overall goal(s) of plan, marketing objective, marketing strategy, situational analysis, problems/opportunities, financial plan, research. The Advertising Plan: Prospect Identification, Consumer Motivation, Advertising Execution, The Advertising Budget and Allocation
Advertising’s profitability is judged by its return on investment (ROI), which measures the efficiency of a company: how many dollars are produced for every dollar spent. Involvement-based ROI measures may give advertising less than its full worth in the marketing process. Clients demand that media and ad agencies measure advertising success on the basis of effective communication rather than audience exposure. The emphasis on short-term audience involvement runs the risk of devaluing the long-term value of advertising.
Integrated Marketing is built on the Marketing Mix: the combination of four marketing functions (product, price, distribution, and communication), plus advertising, used to sell a product. Communication is broken down into four parts:
1. Personal Selling: The most effective means of persuading but also the most expensive which makes it impractical in initial selling stages. It is used in business to business marketing as a follow-up, to close a sale, or develop a long-term relationship resulting in a sale.
2. Sales Promotion: Activities that supplement both personal selling and marketing, coordinate the two, and help to make them effective (e.g., displays, coupons, giveaways). They are used to persuade distributors to carry a brand/product. They are kept short-term so as not to lose money.
3. Public Relations: Communication with various internal and external publics to create an image for a product or corporation. It is one of the fastest growing sectors and is perceived as having higher audience credibility than advertising, but it is viewed as a complement, not a competitor. The media control where, when, and if the story will be carried.
4. Advertising: A message paid for by an identified sponsor and usually delivered through some medium of mass communication. It is persuasive communication.
The reliance on the marketing mix strategy allows evaluations that view promotion as a coordinated mix of elements. Companies are demanding programs that speak with one voice, meaning they demonstrate a consistent overall theme. Integrated Marketing Communication (IMC) is the joint planning, execution, and coordination of all areas of marketing communication, and it is more concerned with total effectiveness.
Advertising: An Institutional Approach
For Consumers: Advertising’s economic role is to disseminate product information that allows consumers to know that products exist, to give consumers information about competing brands, and to permit consumers to make intelligent choices among product options. Its social and cultural (inadvertent) role is to communicate not only product information but also social values. Advertising can appeal to need-driven behavior, usually aimed at solving a problem. That behavior can be utilitarian (buy a new car for transportation) or hedonistic (buy a new car to impress the neighbors). Advertising’s role is to provide information as efficiently and economically as possible to potential buyers and to introduce new products and services.
What Advertising Does For Business
Primary roles of advertising include contributing to new-product launches, increasing consumer brand loyalty for existing brands, and maintaining the sales of mature brands. Exchange theory suggests that market transactions will take place only to the extent that both buyers and sellers see value in the process; it is built on positive mutual relationships.
What Advertising Does For Society
Advertisers convey subtle messages about society by the manner in which their advertising portrays products and services. Challenges for contemporary marketing are monitoring changes so that a company is aware of what is happening in a society, creating products and services compatible with changing values, and designing marketing messages that reflect and build on the values target markets and individual customers hold. Advertising benefits society by providing revenues to support a diverse and independent press system protected from government and special interest control.
Advertising to Diverse Publics
Regardless of its intended recipients, advertising communicates messages to various groups and individuals who in turn interpret these messages in the context of their own interests. A single advertisement might be directed to a number of publics:
1. Distribution Channel: The various intermediaries, like retailers, that control the flow of goods from manufacturers to consumers.
2. Employees: most important assets of any company. Advertising messages may mention quality workmanship that goes into a product and feature employees in it.
3. Current and Potential Customers: building brand awareness to attract new customers and enhancing brand loyalty to current customers.
4. Stockholders: high brand awareness and company’s good reputation contribute to maintaining higher stock prices.
5. The Community at Large: local companies use advertising to be viewed as a “good neighbor.” It’s important to keep ALL publics in mind during advertising.
The Components of Advertising Strategy
Brand Name: the written or spoken part of a trademark, and one of the most valued assets of a company; one-third to one-half of a company’s value can come from its brand names. Brands represent attitudes and feelings about products. Familiarity is enhanced by emphasizing consumer benefits.
Brand Extension: As new companies enter the global marketplace, brands will take on even more importance as a competitive tool to differentiate one product from another. Brand Extension: new product introductions under an existing brand to take advantage of brand equity, as opposed to brand innovation. Advantages: saving money by not needing to build awareness for a new and unknown brand name, and adding equity to an existing brand name (upon success). Disadvantages: damaging a core brand in the minds of loyal consumers with a failed introduction, and losing marketing focus on your existing brand and/or diluting marketing efforts and budget across several brands. Brand identity is crucial to a product's success.
Fulfilling Perceived Needs: Successful products are those that solve a consumer problem better and/or more economically than an available alternative. The marketer's fundamental task is not so much to understand the customer as it is to understand what jobs the customers need to do and build products that serve those specific purposes. For example, consumers didn't say they wanted a microwave oven; they said they were tired and hungry and didn't want to spend 45 minutes cooking dinner.
Assessing Needs: A key ingredient in determining product success is reliable research. Conjoint Analysis: a research technique designed to determine what consumers perceive as a product's most important benefits. It can prevent costly mistakes when companies emphasize product characteristics that are of little value to consumers.
Sales, Revenues, and Profit Potential: There are four major approaches used by established companies to achieve long-term revenues and profits.
- Emphasizing and expanding new-product niches to reach current customers. For example, Walmart attempting to increase its customer base by advertising and trying to win over a more fashion-based customer.
- Emphasizing profits over sales volume. For example, Henry Ford deciding to produce cars to satisfy the customer base, not just to fill a factory, and thus preventing bankruptcy.
- Emphasizing short-term market share rather than profitability.
- Customer tracking: are all customers created equal? For example, rewarding the most profitable customers (with coupons and free gifts) and discouraging less-valuable buyers (by use of restrictive return policies).
Advertisers are focusing on the role of advertising in maintaining sales and market share as a goal of equal importance to increasing sales.
Product Timing: Advertising timing involves the interaction between stages of product development and probability of marketplace acceptance.
Product Life Cycle: the process of a brand moving from introduction to maturity and, eventually, to either adaptation or demise. Product introduction and the advertising that accompanies it are among the most important decisions that determine the long-term success of a company. The key is not just to spot new trends but to creatively develop new products/services to take advantage of them. No matter how good a product is, it can rarely be forced on consumers before they are ready to accept it. Market timing may be a matter of doing something first rather than doing something different. Timing is often strategic, involving long-term decisions by both customers and marketers. It can also be tactical, involving sales related to specific events or occasions. It is also a major factor in the everyday function of advertising: placing TV commercials in prime time, daytime, or late night, choosing 30-second or 1-minute spots, and so on.
Product Differentiation: the circumstance in which a target audience regards a product as different from others in a category. Differences may result from tangible attributes of the physical product or intangible elements of the product's brand image. Meaningful product differentiation exists only if consumers perceive it as an important distinction. If there are no real perceived differences among brands, products are viewed as interchangeable. Brands built on exclusive product attributes have an advantage. One of the most important elements of differentiation is keeping an open mind about how to achieve it; it often involves minor changes in either a product or the position communicated by the advertising. Problems arise when companies focus strictly on function. Product differentiation is also a means of target marketing. Advertisers have an obligation to promote meaningful differences. Price is dictated by favorable consumer perceptions of the value of a product and is closely related to product differentiation.
Value Gap: a positive gap between the price of a product and the value the average consumer assigns to the product. The greater the gap the more insulated the product is from price competition. Price defines who a company’s competitors are. Yield Management: a product pricing strategy used to control (or even out) supply and demand. The goal is to neither lose sales by offering price-sensitive customers merchandise at too high a cost nor lose profits by selling goods below what premium buyers would pay. Pricing strategy can be both a means of market entry for new products and a means of product differentiation for mature products. The pricing strategy for a brand determines to a significant degree the type of marketing strategy that can be used and the success that advertising will have in promoting and selling a specific brand
Variations in the Importance of Advertising
The role that advertising plays in a company's promotional strategy depends on a number of factors. Corporate preferences for various segments of marketing communication will differ, especially in taking advantage of the Internet. High sales volume tends to lower advertising-to-sales ratios; consumers need to be reminded of the brand, not "sold" on it. Advertising matters most in industries with a number of competing firms and extensive competition, and in product categories with widespread competition and little perceived product differentiation. It is also used for reversing sales or market share declines. Smaller companies have to spend more money on advertising to compete with larger companies.
Advertising and the Marketing Channel
The marketing channel illustration is an important aspect of marketing. The channel runs from industrial-goods manufacturers to manufacturers of finished products and on through wholesalers and retailers to consumers. A well-organized channel creates efficiencies through specialization in the movement of goods through the channel. Effective communication, including advertising, is crucial for marketing channel efficiency. The technology of the Internet has changed the longtime relationships among various elements of the marketing channel. For example, travel agents have been phased out and digital cameras have taken over the film industry. Mainly, marketing and communication channels have changed; the Internet has not changed the fundamental decision-making process, but the way we communicate. Regardless of its audience, effective advertising must be successful on two levels: (1) communicating and (2) carrying out marketing goals.
There are different forms of directness and duration in relation to advertising. How much of the total selling job should be accomplished by advertising, and over what time frame, must be planned. Advertising designed to produce an immediate response in the form of a product purchase is called direct-action, short-term advertising. Advertising used as a direct sales tool but designed to operate over a longer time frame is called direct-action, long-term advertising. Indirect advertising affects the sales of a product over the long term by promoting general attributes of the manufacturer, not specific product characteristics. Long-term advertising is hard to measure.
Advertising to the Consumer
National Advertising: refers to advertising by the owner of a trademarked product, brand, or service sold through different distributors or stores. It tends to be general in terms of product information. National advertisers have begun to identify and reach more narrowly defined market segments and, in some cases, individual consumers. The Internet has allowed messages to be tailored specifically to consumers based on individual lifestyle and product usage.
Retail Advertising: advertising by a merchant who sells directly to the consumer. Includes price information, service and return policies, store locations, and hours of operation. The most important change in retail advertising is consolidation. Customers are now doing more and more "one-stop shopping" rather than patronizing several independent retailers. There are significant ramifications for newspapers and local radio stations: advertising spending shifts dramatically as retailers move to national promotional plans, with a decline in ad pages. Manufacturers find themselves competing with in-store brands.
End-Product Advertising: branded-ingredient advertising that builds consumer demand by promoting ingredients in a product. End-product advertising began in the 1940s with DuPont and Teflon nonstick coatings. Successful end-product advertising builds consumer demand for an ingredient that will help in the sale of a product and encourages companies to use that ingredient in their consumer products. Consumers must be convinced that it offers an added value to the final product, and extensive advertising is required to make consumers aware. Successful advertisements are those that create meaningful differentiation for consumer purchase decisions.
Direct-Response Advertising: any form of advertising done in direct marketing. It uses all types of media: direct mail, TV, magazines, newspapers, and radio (the term replaces "mail-order advertising"). Benjamin Franklin is credited with the first direct-sales catalog, published in 1744. The largest sector is direct mail, which accounts for about one-third of the total. The fastest-growing area is Internet advertising, which is providing a catalyst for future growth.
Advertising to Business and Professions
Business-to-business (B2B) is one of the fastest-growing categories of advertising and it requires a much different strategy. Personal selling, telemarketing, and other forms of direct response, and the Internet are the methods most often used. Messages tend to be more fact oriented with little emotional appeal and are addressed to specific industries and job classifications within those industries. Profit oriented appeals are very common. B2B purchase decisions tend to have distinct differences compared to typical consumer purchases. Purchase decisions made by companies frequently involve many people. Organizational and industrial products are often bought according to precise technical specifications that require significant knowledge. Impulse buying is rare and the dollar volume of purchases is often substantial.
Categories of Business Advertising
Trade Advertising: directed to the wholesale or retail merchants or sales agencies through whom the product is sold. It emphasizes product profitability and the consumer advertising support retailers will receive from manufacturers. Promotes products/services that retailers need to operate their businesses. The objectives are to gain additional distribution, increase trade support, and announce consumer promotions.
Industrial Advertising: addressed to manufacturers who buy machinery, equipment, raw materials, and the components needed to produce the goods they sell. Directed at a very small, specialized audience. It rarely seeks to sell a product directly; the purchase of industrial equipment is usually a complex process that includes a number of decision makers. The goal is often introducing a product or gaining brand awareness.
Professional Advertising: directed at those in professions such as medicine, law, or architecture who are in a position to recommend the use of a particular product or service to their clients. Primary difference is the degree of control exercised.
Institutional Advertising: advertising done by an organization speaking of its work, views, and problems as a whole, to gain public goodwill and support rather than to sell a specific product. Sometimes called public relations advertising. Objectives are establishing a public identity, explaining a company's diverse missions, boosting corporate identity and image, gaining awareness with target audiences for sales across a number of brands, and associating a company's brands with some distinctive corporate character.
Idea Advertising: used to promote an idea or cause rather than to sell a product or service. Idea advertising is often controversial. The increasing ability of media to narrowly target audiences, by ideology as well as product preference, will make this type of advertising prevalent in the future.
Service Advertising: advertising that promotes a service. It features tangibles, personalized in some way, with testimonials of good service. It features employees as an important aspect of the company and shows how they develop trust with customers in a service message featuring real employees. It stresses quality. Advertisements should emphasize consistency and high levels of competency by using words like caring, professional, and convenient.
Government Advertising: In the past 20 years, the growth of government services and programs has resulted in a greater use of traditional advertising by government agencies. Millions of dollars are spent each year.
Brands are a company's most valuable assets. The product is not the brand: the product is manufactured, and a brand is created. Orlando is a brand and has to compete with other cities for convention revenues. The product may change over time, but the brand remains.
“A brand represents the most powerful link between the offer and consumer.” – Antonio Marazza, Landor
Every product, service, and PR Company with a recognized brand name stands for something slightly different from anything else in the same product category.
“A brand, for every intent and purpose, is a promise that it will help the user make money, look better, or feel great.” – Allen Adamson
For example, Dove promises women they will feel gorgeous and Volvo vows to deliver your family safely. Manufacturers have to offer the best deal to wholesalers to get their products distributed, creating a squeeze on profits. The result is that some manufacturers decided to differentiate their products from the competition by giving their products names, then obtained patents to protect their exclusivity and used advertising to take the news about them to consumers over the heads of the wholesalers and retailers.
In the mid-1800s, during the rise of great brands such as Maxwell House coffee in 1873 and Budweiser in 1876, advertising was going through its first true paradigm shift: from making a connection in the mid-19th century to the disconnection caused by technology.
“Branding is about the idea that should be the organizing principle, and it should inform everything you do to help consumers grasp your brand promise in whatever channel you're using to reach them.” – Michael Mendenhall
In the digital age, it is absolutely critical to understand the value of each branding channel and its relevance to a particular audience. In today's digital world, your brand and brand organization must perform, behave, and satisfy consumers' needs as they expect. Integrated marketing communications means that all communications emanate from a single strategic platform and will generate a significantly greater return on the communication investment than would be the case with traditional, independent media executions. IMC refers to all the messages directed to the consumer on behalf of the brand.
Brand equity: the value of how people such as consumers, distributors, and salespeople think and feel about a brand relative to its competition. The most important factor in determining the actual value of a brand is its equity in the market.
Young & Rubicam's Brand Asset Valuator (BAV) is a diagnostic tool for determining how a brand is performing relative to all other brands. BAV demonstrates that brands are built in a very specific progression of four primary consumer perceptions: differentiation, relevance, esteem, and knowledge.
- Differentiation is the basis for choice as the essence of the brand and source of margin.
- Relevance relates to usage and subsumes the five P’s of marketing related to sales.
- Esteem deals with consumer respect, regard and reputation and relates to the fulfillment of perceived consumer promise.
- Knowledge is the culmination of brand-building efforts and relates to consumer experiences. A brand's vitality lies in a combination of differentiation and relevance (lack of relevance is the reason fads come and go). The components of brand stature are esteem and familiarity. BAV is based on the fact that almost every successful brand began by being very simple, according to Steve Owens. A market is a group of people who can be identified by some common characteristic, interest, or problem (they can use a certain product to advantage, can afford to buy it, and can be reached through some medium).
Steps for creating advertisements for a brand
- Brand Equity Audit Analysis
- Market context: What is our market and with whom do we compete? What are the other brands and product categories? Are products highly differentiated? These questions help us understand the stature and role of brands in a given market.
- Brand equity weaknesses and strengths: brand awareness, brand sensitivity, and brand loyalty.
- Brand equity descriptions: descriptions of the personal relationship between the consumer and the brand provide the most meaningful description of brand equity.
First, review all the available research to get as close a feeling as possible for how consumers view the brand and how they feel about it. You must analyze in depth our brand's and its competitors' communications over a period of time. Provide a clear summary of the current communication strategies and tactics of our brand and of key competitors. Include an analysis of all integrated communication in relation to brand equity (an assessment of problems and opportunities).
Strategic options draw on the conclusions from the analysis: communication objectives (the primary goal of the message), audience (who we are speaking to), source of business (where customers are going to come from), brand position and benefits (the benefits of the brand that build equity), marketing mix (the mix of advertising and other communications), and rationale (how the plan affects brand equity).
Conduct brand equity research using proprietary, qualitative research. Determine which element or elements of brand equity must be created, altered, or reinforced to achieve our recommended strategy, and how far we can stretch each of these components without risking the brand's credibility.
Make a creative brief: a short statement that clearly defines our audience; how consumers think, feel, and behave; what the communication is intended to achieve; and the promise that will create a bond with the consumer. Synthesize all the information and understanding into an action plan for the development of all communication for the brand.
- Key observation: the most important market factor that dictates the strategy.
- Communication objective: the primary goal the advertising aims to achieve.
- Consumer insight: the consumer "hot button" our communication will trigger.
- Promise: what the brand should represent in the consumer's mind.
- Support: the reason the promise is true.
- Audience: who we are speaking to and how they feel about the brand.
- Mandatories: items used as compulsory constraints.
There isn't only one approach to developing an integrated strategic plan for a brand.
Avrett, Free and Ginsberg's planning cycle:
- Brand market status
- Brand mission
- Strategic development
- Creative exploration
- Brand valuation
- Brand vision
Some other typical steps agencies and clients take in the planning process:
- Current brand status (evaluate the brand's overall appeal)
- Brand insight (the agency uses a series of tools designed to help it develop insights to better understand the customer's view)
- Brand vision (strategic planners look for the consumer's hot button to identify the most powerful connection between brand and consumer)
- Big idea [creative expression of the brand vision – foundation of all communication briefs]
- Evolution (an essential aspect of communications planning is accountability). The consumer has to be an important part of the strategic planning process; how the advertiser engages consumers is critical to the process.
“Just do it” -Scott Bedbury, brand builder for Nike
A great brand is in it for the long haul. By using a long-term approach, a great brand can create economies of scale by which it can earn solid margins over the long term. A great brand can be anything (almost any product offers an opportunity to create a frame of mind that is unique). A great brand knows itself (keep the brand vital by doing something new and unexpected related to the brand's core position). A great brand invents or reinvents an entire category (such brands aim to dominate their entire category, e.g., Disney, Apple, Nike). A great brand taps into emotions (it is an emotional connection that transcends the product). A great brand is a story that's never completely told (stories create the emotional context people need to locate themselves in a larger experience). A great brand is relevant, with ideas that satisfy people's wants and perform the way they want. Advertisers need a clear understanding of the product and of consumer wants and needs when setting advertising strategy. The developmental stage of a product determines the advertising message. As products pass through the stages, the manner in which advertising presents the product to consumers depends largely on the degree of acceptance the product has earned within its life cycle. It is this degree of acceptance that determines the advertising stage of the product.
Pioneering Stage: the advertising stage of a product in which the need for such a product is not recognized and must be established or in which the need has been established but the success of the commodity in filling that need has to be established. Advertising in the pioneering stage must show that methods once accepted as the only ones possible have been improved and that the limitations long tolerated as normal have now been overcome.
Purposes of the pioneering stage of a product's life cycle:
- Educate consumers about the new product or service
- To show that people have a need they did not appreciate before and that the advertised product fulfills that need
- To show that a product now exists that is actually capable of meeting a need that already had been recognized but previously could not have been fulfilled
- Many new products are simply advertisers trying to get a piece of the pie in an established product category (during the past 25 years, almost 60% of the companies on the Fortune 500 list have been replaced)
- Companies' success is based on the fact that they created new markets or reinvented existing ones (for example, Procter & Gamble introduced Tide, the first disposable diaper, and the first shampoo-conditioner combination). New ideas travel through cultures at much slower rates, especially if the ideas require throwing something away and replacing it with something else, relearning skills, or coordination by large independent organizations.
Pioneering expense: in the early introduction of a new product, heavy advertising and promotional expenses are required to create awareness and acquaint the target with the product benefits. Usually the main advantage of being a pioneer is that you become the leader with a substantial head start over others (for example, established pain relievers such as aspirin, Tylenol, and Advil had a head start over the later entrant Aleve).
COMPETITIVE STAGE: the advertising stage a product reaches when its general usefulness is recognized but its superiority over similar brands has to be established in order to gain preference. In the short term, the pioneer usually has an advantage of leadership that can give it dominance in the market. Generally, in the early competitive stage, the combined impact of many competitors, each spending to gain a substantial market position, creates significant growth for the whole product category (if the pioneer grows, it can more than make up for the earlier expense associated with its pioneering efforts). The purpose of competitive-stage advertising is to communicate the product's position or differentiate it to the consumer; the advertising features the differences of the product.
RETENTIVE STAGE: the third advertising stage of a product, reached when its general usefulness is widely known, its individual qualities are thoroughly appreciated, and it is satisfied to retain its patronage merely on the strength of its past reputation. The chief goal of advertising may be to retain the customers the brand has won for itself. The ad goal is to maintain market share and ward off consumer trial of competing products. Generally, products in the retentive stage are at their most profitable level because developmental costs have been amortized, distribution channels established, and sales contacts made.
REMINDER ADVERTISING: advertising that simply reminds consumers that the brand exists (usually highly visual; it is basically name advertising, giving little reason to buy the product). If your product is alone in the retentive stage, this is cause for alarm because it may mean the product category is in decline and competition sees little future in challenging you for consumers. The advertiser's goal in the retentive stage is to maintain market share and ward off consumer trial of competing products. Products in the retentive stage do not necessarily cut back on their advertising expenditures, but they adopt different marketing and promotional strategies than those used in the pioneering and competitive stages.
The advertising spiral is an expanded version of the advertising stages of products, providing a point of reference for determining which stage or stages a product has reached at a given time in a given market and what the thrust of the advertising message should be. The advertising spiral parallels the life cycle of the product. The development of new types of products or categories does not take place frequently. In using the advertising spiral, we deal with one group of consumers at a time; advertising depends on the attitude of that group toward the product. Pioneering and competitive advertising could be going on simultaneously. Products in the retentive stage usually get the least amount of advertising. As long as the operation of a competitive product does not change, the product continues to be in the competitive stage despite any pioneering improvements (once the principle of its operation changes, the product itself enters the pioneering stage). Whenever a brand in the competitive stage is revitalized with a new feature aimed at differentiating it, pioneering advertising may be needed to make consumers appreciate the new feature. A product can coast for only a short time before declining; no business can rely only on its old customers over a period of time and survive. The retentive stage is the most profitable one for a product, which can go two ways afterward:
- The manufacturer determines that the product has outlived its effective market life and should be allowed to die (the manufacturer quits advertising it and withdraws other types of support). The product will gradually lose market share but may remain profitable because of the cuts in spending.
- The manufacturer does not accept that the product must decline and seeks to expand the market into a newer pioneering stage.
The newer pioneering stage attempts to get more people to use the product. Ways to enter the stage include making a product change or a complete overhaul of a product, such as a radical model change for an automobile. Smart advertisers will initiate a change in the direction of their advertising when their product is enjoying great success (showing new ways to use it). A product entering the new pioneering stage is actually in different stages in different markets: longtime consumers will perceive the product to be in the competitive or retentive stage, while new consumers will perceive it as a pioneer.
The newest pioneering stage focuses on getting more people to use this type of product. Products in this stage are faced with new problems and opportunities. Advertising in the newer pioneering stage must focus on getting consumers to understand what the product is about, while advertising in the newer competitive stage aims to get more people to buy the brand. A product may try to retain its consumers in one competitive area while at the same time seeking new markets with pioneering advertising aimed at other groups. The life cycle of a product or brand may be affected by many conditions.
Immersive media are media that involve the audience and are fun. Environments (real or virtual) create involvement and exploration. Social tie-ins should be used with real-place ads; for example, Lay's placed potato chips on the ceilings of subways. Some of the best games have been the Monopoly online street games that used Google Maps. Facial profiling picks up your body movements. Remote tests let you actually use a remote-control car. Events are the best, hands down; for example, the Sony Vaio PC. Other examples include virtual places, games with smartphone capabilities, the "iPhone baby that gets prevented," and pranks such as the "Dexter" treatment. KEY POINTS: Not information delivery, but involvement. It should be user directed, where you explore to do things. This helps connect the brand/product to everyday life and personal experience.
Planning and Context: Single ads are not seen in isolation; advertisers create many ads. Context includes campaigns, other ads for the product/service, ads for your product in other media, and ads for competitor products/services. Integrated Marketing Communications (IMC) adds non-advertising media and mentions of the product: news coverage, lawsuits, etc. Campaigns are many separate ads that need continuity/relationship. Similarities in ads: Visual similarity (similar appearance: layout, typeface, and style of image). Verbal similarity (content: key points and the same "voice"). Aural similarity (music/song, announcer's voice, and sound design). Attitudinal similarity (perspective: stance toward the brand; brand personality).
Integrated Marketing Communications: a strategic plan to connect all communication activities, built around an existing, compelling story/situation. Campaign criteria are extended throughout all marketing communications, including public relations, events, packaging, direct response, digital, promotions, and sponsorships.
Gatorade Replay: KEY POINTS: campaign continuity needs the four similarities, IMC, and media presence.
ROLE OF ADVERTISING
What is effectiveness? Ads that cause you to buy, persuade, or pressure. Under the behaviorist view, ads need to be POWERFUL. Behaviorism: needs, urges, and instincts we cannot control (food, water, shelter; romance, fear, aggression). Conditioning can be controlled by others; classical conditioning is the modification of involuntary reflex behavior. Behaviorist view of advertising: create stimuli (ads) to condition responses that benefit the product, whether overt (very sexual) or subliminal.
Behaviorist theory of representation: symbolism; isolated textual characteristics carry specific meanings. Persuades you to buy: NOT powerful. Rationalism: needs and wants are individual, known, and controllable (the "enlightenment" individual).
Rationalist view of advertising: advertising delivers dependable information. Its theory of representation is denotation: a clear reason why. Pressures you to buy: POTENTIALLY powerful. Culturalism: we are born into and taught general ways of understanding the world; common sense feels "natural." "Pressure" can be negative (for example, if you don't want to be alone, buy this product) or positive (buy this to have more friends). Culturalist view of advertising: it uses existing pressures to the benefit of the campaign. Its theory of representation is signification: meaning is not in the message but is made through associations, both textual and social; we learn and internalize the meaning of particular associations, and ads use and remake meanings.
The legal environment includes the laws and regulations that determine what advertisers can and cannot do. It opens up communication (rationalism) to broaden the range of ideas and debate and to expand the market economy and democratic society. It also limits communication (behaviorism) in order to enforce standards of truthfulness and morality and to protect the economy and society.
Sources of the legal environment are the Supreme Court, regulations created by elected officials, and First Amendment issues.
Kinds of speech: commercial speech promotes a commercial transaction (ads); political speech advocates a cause or point of view; corporate speech is the paid publication of a point of view and receives the same protections as political speech. Commercial speech is less protected (more regulated) than political speech because it is hardier (getting the word out is a business necessity) and its falsehoods are verifiable.
New York Times v. Sullivan: paid advertising that advocated a point of view; the New York Times was not held liable, and the case led to protection for corporate speech.
Central Hudson Four-Part Test: Is the message eligible for First Amendment protection? Is the government interest asserted in regulating the expression substantial? Does the proposed regulation advance the regulatory interest? Is the proposed regulation narrow enough? If the answer to all four is YES, the regulation is constitutional. Elected/appointed government officials:
Local councils, state/national legislatures, and the Federal Trade Commission. In the early 20th century, the move toward regulation came as agencies grew concerned with truthfulness, the industry sought regulation to enhance its value, and the Pure Food and Drug Act of 1906 was passed. The aim was to correct false messages, not the product.
The Federal Trade Commission was established in 1914 to protect consumers from deceptive and unsubstantiated ads. Remedies include stopping or changing ads, paying fines, and running public correction ads. Deceptive advertising is advertising that is likely to mislead; the representation, omission, or practice must be "material."
Legal vs. ethical: the legal realm deals with DECEPTION, while the ethical realm deals with MANIPULATION and pressure. Ads can be legal but still unethical. Defining ethics and moral conduct: "What is the right thing to do?"
Ethical dilemmas: advertising in schools. Pressures: budgets, money, and revenues. Forms: signs, product contracts, lesson books. Channel One: lends satellite dishes, VCRs, and TV sets in exchange for schools showing 12-minute newscasts containing 2 minutes of ads. Another dilemma: fast food. Ethics in the workplace: the ethically impaired and "moral myopia" (being nearsighted about ethics and rationalizing one's decisions).
"Moral muteness" is where one can see ethical dilemmas but doesn't say anything. Compartmentalization separates workplace ethics from personal life. Common rationalizations: the client is always right; ethics is bad for business.
Ethically active: agencies that openly encourage ethical actions and recognize moral issues. The steps to take are recognition, communication, saying no, and moral imagination. KEY POINTS: in the legal realm, behaviorism protects people and rationalism supports dependable information; the ethical realm is addressed fully only by culturalism and the recognition of pressures.
CHALLENGING THE AGENCY
The agency organization model since the 1870s has been the exclusive, skilled professional: a producer-led paradigm. The model is less and less adequate today: a consumer-led paradigm, Web 2.0, immersive and participatory execution, and the "new agency," moving from professional to crowdsourced and from originators to coordinators. Current situation: a change from producer-led to consumer-led in manufacturing and design, marketing, and the form of advertising.
Change today in: advertising work, the structure of the agency, and fan communities. Fan communities: Facebook groups (620 million), identification. What connects communities together? Who I think I am ("my identity") and how I can connect to other like-minded people (identifying with others). Early forms were intrinsic: family/clan, race, nationality, and religion. Modern forms: people can decide for themselves, e.g., religious converts, politics. Popular culture: fans, sports fans, and media. Multimedia. Crowdsourced advertising: allowing individuals to create advertisements. Victors and Spoils: competitive participation, the client judges, and participants are paid for results as well as for their "reputational score" in the community.
Giant Hydra: Management team coordinates, Creative collaborations to develop ideas, Client oversees process and chooses best.
Zoopa: Work independently, Compete for awards
MOFILM: Aspiring filmmakers, competition, and client judges.
PRINCIPLES IN THE REAL WORLD
Major principles of advertising in the real world. Strategy: all good, effective advertising has a strong strategic basis. Media: the best media choices are creative choices. Creative: the best campaigns have fresh, original, imaginative executions. Advertising in society: study not only ads but also the industry and society, then go back and study the emergence of all three over time. Normative vs. historical. Normative: guidelines or ideals, how things "should" operate; systematic and consistent.
Historical: the "real world," how things are actually done; irrational and unpredictable. Normative is what we learn in class; historical is the real world, learning on the job. What good are principles? Justification for practice, tools to generate options, ammunition for arguing your case, and a way of explaining your choices to people.
Digital and Direct-Response Advertising
Virtually every advertiser is using the techniques of direct response as a key ingredient of its marketing strategy.
Digital media are a relatively inexpensive, quick, targeted, measurable, and easily available interactive medium: a combination of interactive audio and video capabilities that can engage a customer. They are among the most flexible media, with an ability to change messages immediately in reaction to market and competitive conditions.
Early failures made some advertisers cautious about exploring the unique possibilities offered by this medium. Some consumers are still reluctant to use the Internet as a means of buying products and services; they are hesitant to give their credit card numbers over the Internet even though secure sites are available. There are so many websites that it is difficult for consumers to know what is available or to spend much time with any single site.
Direct Contact with Consumers
Marketers have moved toward a more personal relationship with their consumers. They have progressed steadily from mass marketing, wherein prospects were reached relatively indiscriminately at the lowest possible CPM, to: category marketing (broad demographic targeting, e.g., women aged 18-34), niche marketing (more narrow categories, e.g., women aged 18-34 with children), and group or community marketing (groups with common interests, e.g., tennis players). While this was happening, the competitive market was reducing the distinction among brands, resulting in price competition with shrinking profit margins for sellers and a reliance on trusted brands to provide customers with a perception of consistent quality.
Customer relationship marketing (CRM): a management concept that organizes a business according to the needs of the consumer. From the standpoint of customers, it is clear that the audience feels empowered by interactive media, and they use this empowerment in a proactive manner. Customers are also embracing online couponing, entering sweepstakes online, and participating in other targeted sales promotion activities. Customers respond to promotions tailored to their interests and businesses are happy to avoid the expense of waste circulation. Although CRM sacrifices some control to customers, there are 5 advantages:
- More effective cross-selling and upselling from current customers
- Higher customer retention and loyalty
- Higher customer profitability
- Higher response to marketing campaigns
- More effective investment of resources
The use of interactive technology allows businesses to deal with the unique purchasing, lifestyle, and behavioral histories of each customer; businesses now have the capability of one-to-one marketing. The end results are that the consumer gains better value and the company engenders continued customer loyalty.
Advertising and Digital Media
It is estimated that in 2009 almost 74% of the U.S. population, or more than 220 million people, used the Internet. There are five primary types of advertising in digital media: search, online display, email marketing, social media marketing, and mobile marketing.
- Search: search, or keyword advertising, dominates all forms of digital advertising. Marketers can bid through auction systems on words or phrases related to their offering – advertisers only pay the search engine a fee when consumers click on sponsored links. This is a viable option for small advertisers with limited budgets.
- Online display: the second-largest type of digital advertising; includes banners, buttons, microbars, skyscrapers, etc. These ads can be static, animated, or contain video, but most contain the direct-response feature of a link to the advertiser. Click-through rate (CTR) is the traditional measure of success for online display ads: CTR = number of clicks / number of times the ad is seen. CTR fails to consider a display ad's influence on branding and other forms of marketing communication. Display ads are monetized in several ways, but cost per thousand impressions (CPM) still represents the industry norm; CPMs on ad networks range from $0.60 to $1.10. (A small worked example of this arithmetic follows this list.)
- Email marketing: "the rock star of direct marketing because it is the most cost-effective and most trackable." Click-through is higher for consumer products and financial services. Marketers are moving from mass emailings to much more targeted and personalized approaches by using demographic data. New vocabulary for online marketers:
- Spam: online advertising messages that are usually unsolicited by the recipient
- Opt-in: a form of permission marketing in which online customers are sent messages only after they have established a relationship with a company. Only consumers who have granted permission are contacted.
- Opt-Out: procedures that recipients use to notify advertisers that they no longer wish to receive advertising messages. A term usually associated with online promotions.
- Congress passed the Controlling the Assault of Non-Solicited Pornography and Marketing (CAN-SPAM) Act of 2003, which establishes requirements for those who send commercial email and gives consumers the right to ask emailers to stop spamming them.
- Social media marketing: includes blogs, social network sites, chat rooms, message boards, podcasts, and video and sharing sites. There are two primary ways to advertise on Facebook: set up a profile page and actively enlist friends and fans to sign up, and then send bulletins about special events and discounts, OR companies can run traditional online display ads on member pages. They can target audiences based on the information Facebook users provide on their profile. Advertisers on Facebook can choose to pay-per-click or pay by the number of impressions generated. Advertisers use video-sharing sites like YouTube as well. The Internet makes it possible for advertisers to run edgier and more consumer-produced content than they would be able to run in the broadcast media. Advertisers are finding ways to leverage the popularity of social media in such a way as to engage customers rather than making them feel as though the advertiser is invading their personal space
- Mobile Marketing: advertisers see the potential in mobile but must tread carefully because research shows that consumers are annoyed by unwanted mobile solicitations. Smartphone apps are a marketer’s dream because advertisers can use mobile in creative ways to deliver value and advertising to users while developing new customer databases.
- Digital as a complement to other media: digital advertising can stand alone, but it is most often used in a complementary role in integrated campaigns. TV seems to work well with online messages because the medium's high credibility can build brand equity among large groups of prospects very quickly. Many online promotions must deal with three groups of prospects: the first and largest is surfers and casual users, so web advertising must gain attention quickly; the second group is the entertainment/information-oriented regular users of a site, who are looking for an interactive activity and will return to a website often if it provides the entertainment and information they seek; the third group is those who are actively in the market for a product/service, for whom the site should provide relevant, up-to-date product information.
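The CTR and CPM arithmetic mentioned under online display above is simple enough to show concretely. The sketch below (Python) computes CTR and the cost of a display buy at the quoted $0.60-$1.10 CPM range; the impression and click counts are hypothetical, not figures from the text.

```python
# Minimal sketch of the two display-ad metrics described above.
# The impression and click figures are hypothetical examples.

def click_through_rate(clicks: int, impressions: int) -> float:
    """CTR = number of clicks / number of times the ad is seen."""
    return clicks / impressions

def cost_from_cpm(impressions: int, cpm_dollars: float) -> float:
    """CPM prices ads per thousand impressions."""
    return (impressions / 1000) * cpm_dollars

if __name__ == "__main__":
    impressions = 250_000   # times the banner was served (hypothetical)
    clicks = 500            # clicks it received (hypothetical)
    print(f"CTR:  {click_through_rate(clicks, impressions):.2%}")   # 0.20%
    print(f"Cost: ${cost_from_cpm(impressions, 0.60):.2f}-"
          f"${cost_from_cpm(impressions, 1.10):.2f}")               # $150.00-$275.00
```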
Branding accomplishes two goals in the online world:
1. It gains awareness for the website itself ("dot-com" is a generic designation that refers to companies engaged in some type of online commerce).
2. Branding is important for companies that want high visibility on the web.
A number of companies have found that traditional advertising works better than online messages at encouraging website visits.
The Role of Digital Media and Marketing: there are a number of uses for the internet and commercial websites:
As a source of direct sales
As a source of advertising-supported communication
As a source of marketing and promotion information
As builders of consumer engagement
Digital Media and Marketing Research: clearly one of the primary benefits of internet marketing is the ability to gain information about individual buying habits and product preferences. However, one of the emerging advantages of internet technology is the ability to collect market research quickly and inexpensively from a larger respondent base than might be possible with conventional research methodology; a marketer can now sample globally. One danger associated with conducting research online is that online survey participants are typically younger and less tech-phobic than other members of the population.
Privacy concerns: the FTC has launched a number of inquiries about online policies and practices. The FTC advocates 4 elements for online privacy policies:
1. Disclosure of what information is collected
2. Choice for customers to opt-out
3. Access by consumers to their personal information
4. Security standards for information use and access
Digital Branding, Audience, and Daypart: although the internet presents problems for multinational companies attempting to execute localized strategies on a global basis, it offers a number of major advantages as well. For example, companies can provide information about their products as well as receive orders from their customers in other countries without incurring additional expense. 13- to 24-year-olds spend 16.7 hours per week online and 13.6 hours watching television.
Direct-Response Marketing: direct-response marketing has embraced many of the techniques and technologies of interactive media, but it would be a mistake to think that traditional means of direct response such as direct mail and infomercials are going away in the near future.
Direct Response – Pros:
Direct response has the potential to reach virtually any prospect on a geographical, product-use, or demographic basis. Direct response is a measurable medium with opportunities for short-term, sales-related response. Direct response also allows advertisers to personalize their messages and build ongoing relationships with prime target audiences in ways that are often impossible in traditional mass media vehicles.
Direct Response – Cons: Higher cost per contact is a major problem with many forms of direct response, especially direct mail. Expenses in printing, production, and postage all have increased significantly in recent years. To keep up with an increasingly mobile population, prospect lists must be updated constantly at considerable expense to advertisers. Public and government concerns with privacy issues have become a major problem for the direct-response industry. Internet marketers, in particular, are facing restrictive legislation and regulations at both the state and federal levels that have limited their ability to reach new prospects through certain types of contacts.
There are three objectives of direct response marketing:
1. Direct orders. Includes all direct-response advertising that is designed to solicit and close a sale. All the information necessary for the prospective buyer to make a decision and complete a transaction is provided in the offer.
2. Lead generation. Includes all direct-response advertising that is designed to generate interest in a product and provide a prospective buyer with a means to request additional information about an item or to qualify as a sales lead for future follow-up
3. Traffic generation. Includes all direct-response advertising conducted to motivate buyers to visit a business to make a purchase. The advertisement provides detailed information about a product but usually no order form.
Three Important Features of Direct Response:
Direct response is targeted communication
Direct response is measurable
The message of direct response is personal.
The growth of direct response is directly attributable to a changing marketplace. We have moved from a manufacturer-driven economy to one that is dominated by huge retailers.
Database Marketing: using computer technology, businesses are sorting through large amounts of data to look for new consumer insights, market segments, and patterns of behavior. This more sophisticated research and data cross-checking is called data mining. It allows companies to enhance profitability by examining how they have been successful in the past and applying those lessons in the future. A number of predictive models (called behavior maps) are used by businesses to predict future purchases. Data warehouses are centralized, company-wide data storage and retrieval systems. The key to CRM and the benefits of database marketing is to provide information that allows a company to maintain the loyalty and profitability of its customers. The primary point of data mining is that the end result is a relationship that is beneficial to both the company and the consumer.
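As an illustration of the kind of scoring a database marketer might run, the sketch below implements RFM (recency, frequency, monetary) scoring, one common and simple technique for ranking customers by likely value. It is not necessarily the "behavior map" models the notes refer to, and the customer records and score thresholds are invented for the example.

```python
# Illustrative sketch only: RFM scoring ranks customers by recency,
# frequency, and monetary value. Records and thresholds are made up.
from dataclasses import dataclass

@dataclass
class Customer:
    name: str
    days_since_last_purchase: int   # recency
    purchases_last_year: int        # frequency
    total_spend: float              # monetary

def rfm_score(c: Customer) -> int:
    """Score each dimension 1-3 and sum them (higher = more valuable)."""
    recency = 3 if c.days_since_last_purchase <= 30 else 2 if c.days_since_last_purchase <= 90 else 1
    frequency = 3 if c.purchases_last_year >= 12 else 2 if c.purchases_last_year >= 4 else 1
    monetary = 3 if c.total_spend >= 1000 else 2 if c.total_spend >= 250 else 1
    return recency + frequency + monetary

customers = [
    Customer("A", 15, 14, 1800.0),   # recent, frequent, high spend
    Customer("B", 200, 2, 120.0),    # lapsed, infrequent, low spend
]
for c in sorted(customers, key=rfm_score, reverse=True):
    print(c.name, rfm_score(c))      # A 9, B 3
```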
Television and direct response marketing: Direct-response television (DRTV) comes in a variety of formats, but the most familiar are the short-form spot (30 seconds to 2 minutes) and the program-length infomercial. DRTV marketers are designing their messages with a twofold purpose: (1) immediate sales response and (2) bringing prospects to a company’s website in order to have them bookmark the site and become regular customers. The most used forms of DRTV are:
1. The traditional 30 second format with a tagline allowing consumers to order merchandise.
2. The 2 or 3 minute commercial
3. The infomercial: long-form television advertising that promotes products within the context of a program-length commercial. Regardless of its format, DRTV has certain inherent advantages as an advertising tool:
It shows the product in use and provides opportunities for product demonstrations in realistic circumstances
DRTV can create excitement for a product.
DRTV offers immediate results. Within 15 minutes of a commercial spot, a company will receive 75 percent of its orders. Because most DRTV spots are not time sensitive, they can be scheduled in fringe dayparts for significant discounts. In addition, production costs of most DRTV are less than traditional television commercials. DRTV complements retail sales. DRTV is a great technique for testing various product benefits and measuring sales response. Direct-response advertising is sold both on a paid and per inquiry (PI) basis. PI advertising can be very beneficial, especially to companies with good products but little capital.
Television shopping networks: a number of major retailers and designers use home shopping networks to sell their products to this niche consumer market. These networks will continue to grow and occupy a larger place in general retailing.
Radio and Direct Response: Despite its targeted audiences and niche programming, radio has not been a major player in direct-response marketing. Although the economic promise of “visual” radio is in the near future, for the present, radio can serve as a valuable supplement for a variety of direct-response marketers. The combination of low commercial rates and tightly targeted audience composition can saturate prime prospects with frequently occurring ads.
Magazines and Direct response: it is in the area of business and trade publications that direct response is especially important. Magazines with editorial objectives geared specifically toward some particular business or profession can be extremely beneficial to direct-response marketers. Despite the importance of B2B magazine direct-response, consumer magazines also can provide an important means of reaching prospects. Magazine direct response provides the intimacy of direct response with the traditional advertising virtues of magazines.
Catalogs: one of the oldest and most popular forms of direct-response selling. The role of the internet is a chief concern of catalogers, which mail billions of catalogs each year. Another challenge is the increasing number of companies that are entering the ranks with either catalog sales or internet sales. Virtually every cataloger has a strong e-commerce site. The internet may offer catalog companies relief from catalog-related production and postage costs that account for a large percentage of their total operating costs. As catalogers begin to take advantage of online options, they are discovering that their basic marketing techniques must be adapted to these new channels. One of the primary challenges involves dealing with the customer-controlled online environment and finding ways of encouraging prospects to visit the cataloger's website. There are a number of keys to the successful process of moving a person from prospect to buyer:
- The right product. Generally, catalogers have a hard time moving merchandise that is easily obtainable at retail outlets, so there has been increased consideration of the uniqueness of products. Product differentiation is always important, but it is doubly so for catalog products.
- Exciting creative execution. People can't try out what they see, so the sales story has to be conveyed in attention-getting messages that grab the imagination of the reader.
- Reaching a targeted group of prospects. Waste circulation is more expensive in direct marketing, so every effort has to be made to keep it to a minimum.
- Fulfillment and customer service. The process of successful selling doesn't end with a single purchase. Catalogers must establish a means of database management that will allow product inventory management as well as a means of determining the quality of customers on a lifetime-value basis.
Direct-mail advertising: because of the rising expenses associated with direct mail, it is anticipated that direct mail's share of direct-response advertising will decrease in the future. One of the problems facing direct mail is that the sheer volume of mail coming to households makes gaining a competitive advantage very difficult. This challenge is all the more reason that direct mailers must take steps to reach targeted prospects with an interesting message and a worthwhile product. There are a number of organizations involved in the direct-mail list process:
List brokers: in direct-mail advertising, an agent who rents the prospect lists of one advertiser to another advertiser. The broker receives a commission from the seller for this service.
List compilers: usually a broker who obtains a number of lists from published sources and combines them into a single list and then rents them to advertisers.
List managers: promote a client's lists to potential renters and buyers. The primary job of the list manager is to maximize income for the list owner by promoting the list to as many advertisers as possible.
Service bureaus: one of the primary jobs of the service bureau is to improve the quality of lists. This is called list enhancement, and one of the most important steps is the merge/purge: a system used to eliminate duplication by direct-response advertisers who use different mailing lists for the same mailing. Mailing lists are sent to a central merge/purge office that electronically picks out duplicate names. This saves mailing costs, which is especially important to firms that send out a million pieces in one mailing, and it avoids damage to the goodwill of the public. (A minimal code sketch of a merge/purge pass appears after this list of organizations.)
Lettershop: a firm that not only addresses the mailing envelop but also is mechanically equipped to insert material, seal and stamp envelopes, and deliver them to the post office according to mailing requirements.
Response lists: lists of prospects who have previously responded to direct-mail offers. These lists are more productive and the rental charges are higher than for compiled lists.
List protection: the most common list abuse is multiple mailings beyond an agreed-upon limit. To monitor this, the list owner will include a number of fictitious names that can be tracked to see the number of mailings – this is known as list decoying. The much greater problem is that the mailing offer may be too closely competitive with the list owner’s products (the owner wants profit, not competition).
Testing direct-mail advertising: the most commonly tested e-mail elements, in descending order of importance, are subject lines, calls to action, design, body copy, offers, and timing. The key elements in testing direct mail are the list, the offer, and the creative presentation. Direct-mail testing can be expensive, so it is important to concentrate on major elements that normally determine the success or failure of a mail campaign. For research validity, it is crucial to test only one element at a time.
Other direct-mail techniques:
Package insert: these messages, called bounce-back circulars, are delivered to customers who are proven direct-marketing users. The cost is much less than solo mailings because the sales message is being delivered as part of another package
Ride alongs: direct mail pieces that are sent with other mailings, such as bills. Same advantage as package inserts except they are going to a company’s loyal customers with whom a company has a proven and recent sales relationship.
Statement stuffers: these cost nothing to deliver because the mailing expense is being incurred anyway. They are at least seen, because everyone eventually gets around to opening their bills. Most recipients are credit qualified and have already dealt with the company before or they would not be getting a statement.
Ticket jackets: airline, bus line, train ticket jackets; companies such as car rental firms find these to be an ideal way to reach their prime target audience.
Cooperative (joint) mail advertising: high postage costs result in advertisers joining together and dividing up the cost among them. Two major drawbacks: it is extremely impersonal and it is difficult to reach specific customers through joint mailings with the same precision that marketers would have with their own lists.
Chapter 22 – The Complete Campaign
Today’s advertisers usually create campaigns that fit into their integrated marketing communication program. They don’t create only an ad by itself. The four components (creative brief, brand equity probe, strategic options and recommended plan, and brand equity audit) are synthesized into an action plan for developing all communications for a brand – it must maintain a consistent identity. Advertisers’ main concern is reaching every consumer’s “touch point.”
A campaign versus an ad: there is no magic time frame for a campaign, but generally, campaigns are designed to run over a longer period of time than an individual ad. The average length of a regional or national campaign is about 17 months, although it is not uncommon for a campaign to last 3 or 4 years – a few have lasted much longer.
Changing campaign risk: there is never a guarantee that the next campaign will be as strong as, let alone stronger than, the original. Still, some believe that most successful campaigns need refreshing over time – people change, products change, and markets change. Adding online advertising to a television campaign boosts brand awareness, but the inclusion does little to impact sales. Broadcast ads upped the linking of brand to a message or value proposition by nearly 13 points, while the web added 7 points. Television spots increased the ability to influence purchase decisions by nearly 6 points, whereas the web contributed a mere 0.4-point incremental boost. The web was stronger at raising awareness and association than at influencing purchasing decisions.
Campaign Diversity: many campaigns have purposely highlighted models with racially indeterminate features – “Generation E.A.: Ethnically Ambiguous.” Good advertising starts with a clear understanding of both short and long-term marketing goals
Situation Analysis: establishes a current benchmark or starting point. It has two orientations: the past and the present. The situation analysis is the first step in developing a campaign. The Product: successful advertising and marketing begin with a good product or service. At this point, we need to analyze our product’s strengths and weaknesses objectively. Among the questions usually asked are the following:
- What are the unique consumer benefits?
- What is the value of the product relative to the proposed price?
- Are adequate distribution channels available?
- Can quality control be maintained?
Prime prospect identification: the next step is to identify our prime prospects and determine if there are enough of them to market the product profitably. We also must identify the prime prospects' problem: What are their needs and wants in the product or product type?
Competitive atmosphere and marketing climate: we carefully review every aspect of the competition, including direct and indirect competitors. Recognizing the market climate during the recession in 2009, Subway promoted their $5 sandwiches and the promotion was so successful it became a campaign.
Creative objective and strategy: we begin to select those advertising themes and selling appeals that are most likely to move our prime prospects to action. Advertising motivates people by appealing to their problems, desires, and goals – it is not creative if it does not sell. Once we establish an objective, we are ready to implement the copy strategy by outlining how the creative plan will contribute to accomplishing our predetermined goals:
- Determine the specific claim that will be used in the advertising copy (if more than one, list in order of priority)
- Consider various advertising executions
- In the final stage of the creative process, develop the advertising copy and production
Continuity: term used to describe the relationship of one ad to another ad throughout a campaign. This similarity or continuity can be visual, verbal, aural, or attitudinal. Visual similarity: all print ads in a campaign should use the same typeface or virtually the same layout format; stress continuity, not sameness. Another device is for all ads in a campaign to use the same spokesperson or continuing character – strong continuity strengthens communication. Verbal similarity: great words and great strategies make great campaigns – “What can Brown do for you?” (UPS), “Mmm mm Good” (Campbell’s). Aural similarity: the same sound effect can make a campaign very distinctive (“This is Tom Bodett for Motel 6”). Attitudinal similarity: some ads express a consistent attitude toward a brand and the people using it. Nike is a strong brand name that signifies status, glamour, and a competitive edge – its presence and identity are so strong that many people want to connect with the brand.
Media Objectives: creative planning and media planning have the same foundations – marketing strategy and prospect identification – and they cannot be isolated from each other. Media attempts to be “media neutral.” The media plan involves three primary areas: strategy, tactics, and scheduling.
Media strategy: at the initial stages of media planning, the general approach and role of media in the finished campaign are determined:
i. Prospect identification.
ii. Timing. The media planner must consider many aspects of timing, including media closing dates, production time required for ads and commercials, campaign length, number of exposures desired
iii. Creative considerations. Media and creative teams must accommodate each other. Media has to be creative in finding a way to reach and engage consumers
Media tactics: media planner decides on media vehicles and the advertising weight each is to receive.
Media scheduling: the actual media schedule and its justification are developed.
The promotion plan: discussed very early, and its relationship to the advertising plan is determined. Promotion activities may involve dealer displays, in-store promotions, premiums, cooperative advertising, and coupon offers
Other Integrated Elements: don’t forget the importance of every aspect of your IMC functioning as one voice. You need to maintain focus on the brand or positioning throughout the marketing mix.
Getting the Campaign approved: for approval, it is wise to present a statement of the company’s marketing goals. Next, the philosophy and strategy of the advertising are described, together with the reasons for believing that the proposed plan will help attain those objectives. Not until then are the ads or commercials presented.
Research – Posttests: the final part of a campaign entails testing its success. First, the expected results are defined in specific and measurable terms. Then, the actual research is conducted to see if these goals were met. The pretest is intended not only to provide a benchmark for the campaign but also to determine reasonable goals for future advertising.
“Glocal” describes how advertisers consider language and culture when tailoring global campaigns to populations around the world – or within the United States. American consumers prefer foreign-made goods in a number of major product categories; a significant percentage of future growth in marketing and advertising will be more concentrated in the Far East, Middle East, and Latin America as sellers seek new customers for their goods and services. Nowhere is the expansion of global marketing more apparent than in the increase in international advertising spending levels. Among the major challenges is overcoming significant anti-American opinions abroad. The roots of these attitudes are varied, but many are related to the marketing of American products.
- Exploitation: The sense that American companies take more than they give
- Corrupting influence: The view that American brands encourage thinking and behavior that clash with local customs and debase cultural or religious norms
- Gross insensitivity and arrogance: In many cultures there is a perception that Americans believe everyone wants to be like them
- Hyper-consumerism: Agencies try to find those shared attitudes that are consistent from country to country, but these companies must be sensitive to the distinct differences and pride that consumers feel toward their national and cultural heritage.
- The Multinational Corporation: The basic motivation is to make profit and expand their customer base.
The fundamental economic model remains the same regardless of the company’s size. Another significant feature of global marketing and advertising is the paradox of a greater ability to communicate across international borders accompanied by growing nationalism and suspicion of outside influences in many countries. Whatever the reason, a growing sense of isolationism is a factor that the international marketers must weigh with care.
In some cases, isolationism can be resolved in part by greater sensitivity to other cultures, a willingness to learn local customs, and openness to new ideas. Glocalization is a hybrid term meaning to adapt global marketing efforts (product and advertising messages) to local markets and cultures. Glocalization can involve the creation or distribution of products or services intended for a global or trans-regional market but customized to suit local laws or customs. It can also use electronic communication technologies, such as the Internet, to provide local messages on a global or trans-regional basis.
Glocalization seeks to emphasize the belief that international sales of a product are more likely to succeed when companies adapt the product and/or promotion specifically to a particular locale. For example, Nokia launched the “kosher phone” for Orthodox Jews in Israel, and McDonald’s adjusts its menu according to local food habits.
Localized market strategies are good business and good corporate citizenship.
A greater sense of local country traditions may be more important than business expertise, and even the most astute marketers sometimes misunderstand local customs. For example, Wal-Mart stumbled in Germany and South Korea, where its one-stop, quick-shopping format did not match local shopping habits.
The global economy will continue to grow, fueled by the emergence of middle-class societies in an increasing number of countries. In addition, communication technology is permitting advertisers to reach prospective buyers. A growing global demand and a rising standard of living will combine to make advertising more prevalent worldwide.
China will become a leading global market. For many years, China has been known for its exports rather than its imports, but today, China is recognized for its potential as a growth market because of its size and the capacity of its growing middle class to buy goods. However, marketers have been warned to approach the market with caution because China is a very diverse, heterogeneous market. For example, advertising within China has been compared to global marketing because of its more than 22 provinces, 5 time zones, and almost 300 cities with populations greater than 1 million. As more companies around the world seek growth opportunities, entry into the Chinese domestic market will become more attractive.
A necessary element in the expansion of global business activity has been the growth of communication technology, particularly in developing regions. The introduction of satellite TV brought the possibility of reaching the majority of the world’s population with information, entertainment, and advertising. Business could expand much faster and efficiently. As impressive as satellite is, it pales in comparison to the distribution and usage of Internet technology such as social media.
No communication technology has grown at the rate of the internet (1.6 billion users). The potential for future growth is outside the United States and Europe. The key to successful global marketing is finding those common appeals that will work on a universal basis (a mother’s love for her child).
International legal issues: marketers face the problem of country-to-country variances in matters such as privacy and legal and regulatory standards. Online marketers must be aware that regulations differ widely from country to country. Internet promotions sent to an international audience should provide the opportunity to opt out, give honest and credible information, prohibit the use of randomly generated addresses, and set standards to prevent relaying email from computers without authorization.
America no longer dominates global commerce. It is difficult to develop rigid guidelines for global commerce because each company faces unique issues and deals with them in a variety of ways. Traditionally, a major failing of American companies going abroad (especially those with little multinational experience) has been treating a market as if consumers were homogeneous in terms of demography and product preference based on a lack of research and appalling ignorance of local cultures and traditions.
Global Marketing is a term that denotes the use of advertising and marketing strategies on an international basis. A company should adopt a single advertising and marketing strategy throughout the world where the product development, advertising themes and media, distribution channels, and target market are identical from one country to another. See Open Happiness campaign from Coke.
European Union (EU) is the developing economic integration of Europe, potentially a single market of some 500 million consumers in 2010. The North American Free Trade Agreement (NAFTA) is a treaty designed to eliminate trade barriers among the US, Mexico, and Canada; some see this as being anti-environmental and exploitative of workers and small farmers. In developing a successful multinational marketing organization, management, advertising execution, and sensitivity must be addressed.
- Management: consolidating strategic management decisions at corporate headquarters and giving local managers flexibility to develop specific tactics within these general principles. A way for management to tackle the issue of creating a branding or advertising theme that can be adapted globally is to look at brands as an umbrella under which a number of related products can be marketed rather than treating each brand as a unique product. The execution of international marketing and advertising is divided into two camps: agencies that are willing to tolerate dealing with a number of local agencies, and those that believe the efficiencies of centralized control employing a few agencies are worth the potential loss of local character in advertising. Companies usually develop a management style that integrates both of these approaches.
- Advertising execution: international advertising and marketing strategy must adapt in some way to almost every country it enters. Problems such as language differences, media research and usage, and cultural considerations are the most frequently encountered. It can be argued that the misapplication of global marketing places the well-being of the firm ahead of the consumer. Global marketing involves a number of steps requiring decisions about both products and marketing strategy. The following are some of the strategic options that may be executed:
- Export both the product and the advertising to other countries
- Adapt a product and marketing plan exclusively for the international market or large geographical area
- Export the product, but change the name
- Keep the brand name and advertising strategy, but adapt the product country by country
- Keep the brand name and product, but adapt the advertising in each country (one of the more common strategies)
- Adapt both the product and the advertising to each country (the most expensive)
- Sensitivity: in recent years, multinational businesses have had to deal with numerous complaints about their operations in developing countries. Even corporations acknowledge that there is a fine line between opening new markets for their products and exploitation. More companies are demonstrating that they understand that there is an important balance to be reached between maximizing sales and profits and being a responsible marketer.
The formation of joint ventures in the 1970s recognized the growing expertise of local advertising talent and the fact that around the world many overseas agencies were providing client services on par with major American agencies. The most significant change in international advertising on the agency side is the growing trend for clients to bring much of the marketing communication function in-house. The most significant result of this change is that multinational companies are now calling on smaller agencies for creative work rather than relying solely on large multinational advertising agencies.
Advertising revenue in the US accounts for approximately 40% of the world total; however, much of this advertising is placed by foreign-owned agencies. The primary motivation for marketers is that a number of other regions offer more growth potential than the more mature US and Western European economies.
The effects of integrated marketing communication on worldwide agencies have left marketers much more interested in reaching consumers through the most effective venue rather than pigeonholing their marketing communication into artificial categories of advertising and promotion, and have moved them toward the decentralization of agencies. Many major multinational marketers have opted to use a number of smaller agencies rather than one or two large ones for their brands (more evident on the creative side). This provides more choice for the client and it promotes interagency competition. The downside of using a multiagency approach is the problem of keeping the brand image and positioning consistent, and there are potential difficulties with management control of diverse voices.
Research and the Multinational Consumer: the changing relationship between international agencies and clients is most apparent in market research. Research for multinational brands is one of the primary areas in which centralization works far better and offers efficiencies in both cost and reliability. Marketing translation is the process of adapting a general marketing plan to multinational environments. A vital element in the translation process is the brand audit, which attempts to define what a brand means to consumers worldwide and then develop market strategies that will enhance the brand’s potential. Regardless of the function – creative, media, or research – agencies face problems in meeting the demands of clients. They must accommodate the organizational structure of their clients. What is needed for account management often differs from one client to another, so some agencies find they need different management organizations for each client. Agencies must also figure out how to manage centrally and communicate locally – they must translate broad client marketing strategies to the level of the individual customer in each country they serve.
The Multinational Advertising plan: all advertising begins with sound planning and an adherence to basic marketing strategy. All forms of marketing communication should be coordinated and integrated rather than presented as a series of unrelated advertising messages. There are two areas of primary concern to international agencies and their clients
- Creative and cultural considerations: advertisers can no longer introduce their products and depend on latent demand for sales and profits; they must look at the social and cultural implications of their advertising. Successful creative execution begins with an in-depth awareness of a country’s culture, customs, and buying habits.
- Media planning from a global perspective: the media function has suffered from three primary problems in international advertising:
- Media availability and/or usage levels
- Legal prohibitions – some of the more general restrictions common to advertising regulators throughout the world are: comparative advertising, advertising to children, internet advertising, standards of truth
- The lack of reliable audience research: Rapid expansion of efficient global communication and the expansion of global retailers have dramatically improved the ability of advertisers to reach target markets.
Advertising and Ethnic Diversity in the United States is a huge factor to take into consideration. More than 19% of US residents prefer a language other than English when at home. The task of marketing to a diverse population can be difficult. With a majority-minority transition, the marketplace is changing more quickly than many companies can adapt, and the ethnic makeup is constantly evolving. Race is more difficult to determine. For example, the 2000 census had almost 7 million respondents list more than one racial background. Finally, the Internet and technology allow more immigrants to stay connected with their family and homeland, making assimilation and acculturation a longer process.
Multicultural market power: for companies, it is not enough to simply identify ethnic market segments; they must relate to them as well. Companies generally give greater attention to the Hispanic market than to either the African American or Asian American market for two reasons: the rate of assimilation is lower and there are language differences. Even after Hispanics become fluent in English, they still use Spanish on a frequent basis. How advertisers deal with U.S. diversity is a matter of sensitivity and profitability. Language does not define how a person regards his or her cultural roots – even Hispanics who speak little or no Spanish are as likely to have strong values in the Hispanic community as those who speak Spanish. The emergence of value-driven, as contrasted to language-driven, marketing strategy is a primary change in the strategy and execution of ethnic marketing. Marketers are facing a dilemma: on one hand, firms know they must acknowledge consumer diversity in order to profitably market their products, but on the other hand, there is a growing acceptance of diverse cultures that suggests the future will be one of constant borrowing from a number of cultural identities.
Marketing to various groups in Media:
Because of the relatively low-cost entry into the market, ethnic newspapers are the largest medium in terms of number of vehicles (although they are far surpassed by the audiences of ethnic-oriented television). Most ethnic-targeted publications have experienced significant circulation and advertising increases during the past decade, with the exception of the African American press. This decline can be attributed to the fact that the civil rights movement pushed mainstream media to begin more balanced coverage of the African American community, and to the fact that it is easier for major newspapers to assimilate African Americans because there is no language barrier.
The Message: marketers must tailor their messages to specific target audiences, but they must also understand that subgroups exist within ethnicities. For example, when General Motors ran a Saturn commercial in Miami with a woman in a Mexican dress dancing in front of the Alamo, the commercial had little or no relevance to Miami's Hispanic community because the overwhelming majority of them are Cuban American.
The Product: products from abroad are being introduced into the United States because companies believe there is a ready-made U.S. market of Mexican immigrants who are already familiar with the brand. They have a dual strategy of appealing to the Hispanic market and at the same time building future sales through crossover purchases from the general population.
Research: for many years, reliable ethnic research did not exist, but as companies spend more money on ethnic advertising, they have a lot invested in making sure that each dollar spent results in profitability among their target ethnic audience.
Economic, Social, and Legal Effects of Advertising
The History of Advertising Criticism
I. The Era of exaggerated Claims (1865 – 1900)
II. The Era of Public Awareness (1900 – 1965)
III. The Era of Social Responsibility (1965 – present)
A number of studies indicate that buyers support those companies and brands that have gained a reputation for being good citizens and actively promote their good works.
The Economic Role of Advertising: the primary role of advertising is communication, but there is a constant “persuasion versus information” debate that will never be resolved because of the biases of the pro-advertising and anti-advertising camps and the fact that advertising functions in both roles (persuasive and informative).
Economic Arguments in Favor of Advertising:
- Advertising provides consumers with information to make informed decisions about new products, availability of products, price, and product benefits
- Advertising supports largely unrestricted media that disseminate news and entertainment and provides employment to thousands
- By promoting product differentiation, advertising encourages continuing product improvements and the introduction of new and innovative goods and services
- Mass advertising ultimately results in lower prices
- Advertising contributes to increases in the overall economy by increasing generic as well as brand consumption
Economic Arguments against Advertising:
- The intent is to persuade, not to inform
- On a macroeconomic basis, advertising spending is largely wasted because it primarily causes consumers to switch from one brand to another without any net economic gain to society
- Many economists challenge the notion that advertising lowers product price. They charge that one of the primary goals of advertising is to insulate a brand from price competition by emphasizing emotional appeals so that price comparisons become less important to product decisions
- The high rate of advertising expenditures in many product categories makes it difficult, if not impossible, for new products to enter the market
The fact is that there is evidence to support each of these claims and counterclaims of advertising.
The Social Role of Advertising: perhaps the fundamental question raised in the social context of advertising criticism is whether advertising shapes and defines culture or simply mirrors an evolving society. The answer is some of both. Cultural effects of advertising on an audience include:
- Advertising’s inadvertent social role: this studies advertising from the viewpoint that the ubiquitous, redundant messages presented by advertising through mass media created various changes in the way the audience responded to its environment. By the sheer weight of exposure, advertising sets a social agenda of what is expected, what is fashionable, and what is tasteful for a number of people.
- Advertising’s overt social role: a second, less studied area of advertising’s social and cultural roles deals with advertising as an agent of social change – that is, those campaigns whose primary objective is the promotion of a social agenda.
As products become homogenized, sellers look to advertising and brand image as the principal difference among competing products. As we discussed earlier, differentiation is not found in the product itself but in the mind of the consumer. Once a brand achieves a dominant position based on image and cultural associations, that position is much more difficult for competitors to match than real product differences are.
In recent years, social criticism of advertising has taken precedent over its economic effects. Some representative examples of social criticisms include privacy concerns, product placement, and advertising’s role in obesity.
Advertising Content is by far the most criticized area of advertising. Critics charge that advertising is more likely to provide misinformation, exaggerated claims, negative content, and in some cases outright falsehoods than useful consumer information. One of the long-standing areas of advertising criticism is the portrayal of various segments of society. There is also a growing awareness that advertising should present a more realistic image of society. For example, look at Dove’s “Campaign for Real Beauty.”
Advertising of certain product categories is also criticized. Now that tobacco advertising is virtually nonexistent in mainstream media, some of the remaining products and product categories that garner most controversy are distilled spirits, condoms, and advertising to children.
Excessive Advertising receives most of its criticism directed at TV, with approximately 25% of television network time devoted to commercials. There are also many within the advertising industry that see excessive advertising as having the potential to dilute ROI. Communication research confirms that the number of messages in a commercial block and the order in which they are seen has an effect on the recall and impact of the advertisement. Excessive advertising is a concern for both the audience and the advertising industry.
Finally, Advertising’s unwanted influences can have an effect on society. Among the major theses of this school of thought are that advertising makes people buy things they don’t want or need, lowers morals, and generally exploits the most susceptible segments of society. The idea that consumers will take some action solely because of advertising is contrary to virtually every theory of communication.
The Advertising Council has the most organized efforts of social advocacy, which has for many years marshaled the advertising industry to support a number of causes – addressing issues such as racial tolerance, equal rights, job and fair housing opportunities, health awareness, and education. The council depends on volunteers across the advertising spectrum. Major agencies produce most of the advertising on a pro bono basis, and the media donate time and space to carry these advertisements and commercials. Because of the council’s success, a number of other organizations have begun advertising (MADD, Planned Parenthood). Another challenge for the council is finding the type of topics that are so prevalent in today’s society like the war on terrorism and the fight against AIDS.
Issue Advocacy Advertising is used to influence public opinion and legislation regarding a range of political issues from health care reform to energy and trade policy. Unlike most brand advertising, the tone of many of these ads is negative as groups seek to derail proposed legislation or emphasize shortcomings in an opponent’s plan. Advertising and cause-related marketing can be traced back to 1983, when American Express sponsored a campaign promising to donate to the renovation of the Statue of Liberty each time someone used an American Express card. This initiative is considered to be the introduction of cause-related marketing. Today, large corporations are engaging in strategic philanthropy in which they market their good deeds in the same way they market their products. Most research indicates that consumers rarely make purchasing decisions solely because a company is supporting some favored cause. Some companies have refrained from cause-related marketing because they feared that consumers would view their efforts as exploitative. However, a number of studies indicate that consumers welcome the opportunity to be a part of a worthy cause, and they reward companies for their efforts. Cause-related marketing falls into transactional programs, message promotions, and licensing programs.
The theory behind the adoption of advertising as the primary source of media funding is that, by having economic support spread out over numerous advertisers, it assures that no one entity can exercise undue influence over editorial content – but this wall has been breached. A number of research studies indicate that a large majority of the public believes that advertisers are influencing editorial content. When the media lose credibility with their audience, they harm both themselves and the advertisers that use them. These are some of the ways in which the relationship between advertisers and the media is changing:
- Withholding advertising as an attempt to control editorial decisions: in some cases, an advertiser may want favorable coverage of a firm, and in other instances a company demands that a medium kill a story critical of the company. Such a demand can involve major ethical as well as financial decisions on the part of the media.
- Advertiser-financed productions: a number of advertisers have worked with local and network television outlets to jointly produce programming – one being the Family Friendly Programming Forum, which promotes family-friendly programming between 8 and 10 p.m. because that is when adults and children are likely to watch TV together. Few people question the motives of these companies to bring family-friendly entertainment to television, but many critics see any involvement between advertising and programming as a cause for concern. The Today show was also criticized for not revealing that the companies whose brands were recommended were paying some product experts appearing on the show.
- Product placement: this has been the most prevalent editorial-promotion alliance in the past decade. It is not confined to traditional media. The popularity of video games has provided a ready-made market for the placement of products, especially those directed toward young, male audiences.
- The Advertorial: the use of advertising to promote an idea rather than a product or service.
Advertising’s Legal and Regulatory Environment
The public and the advertising industry itself agree that companies that use illegal or unethical advertising tactics should be dealt with severely. Not only is deceptive advertising wrong, but it also creates a lack of trust in all advertising, making it difficult for honest businesses to effectively promote their products and services. Constraints on advertising include laws and regulations of legally constituted bodies such as Congress and the FCC, control by the media through advertising acceptability guidelines, and self-regulation by advertisers and agencies using various trade practice recommendations and codes of conduct.
Caveat emptor is Latin for “let the buyer beware”; it represents the notion that there should be no government interference in the marketplace. Many of the principles of caveat emptor have been rejected. Rather, both businesses and the public realize that buyers have far less information than sellers, and they must be protected by legal guarantees of the authenticity of advertising campaigns.
In 1922, in Federal Trade Commission (FTC) vs. Winsted Hosiery Company, the Supreme Court held that false advertising was an unfair trade practice. In 1938, the Wheeler-Lea Amendments broadened the scope of the FTC to include consumer advertising. One of the primary concerns of the FTC is to ensure that consumers are protected from deceptive advertising. The key to FTC enforcement is that advertisers must be able to prove the claims made in their advertising – be able to substantiate what they say. There is a three-part test to determine whether a claim is deceptive:
- There must be a representation, omission, or practice that is likely to mislead a consumer
- The act or practice must be considered from the perspective of a consumer who is acting reasonably
- The representation, omission, or practice must be material (in other words, the claim, even if not true, must be judged to have had some influence over a consumer’s decision)
An FTC intervention in alleged deception starts with a claim of deceptive practices being filed with the FTC. The FTC then begins to investigate with a request for substantiation from the advertiser. If the FTC finds the practice to be unsubstantiated and therefore deceptive, a complaint is issued. The advertiser is asked to sign a consent decree, under which it stops the practice under investigation but admits no guilt. If the advertiser refuses to sign a consent decree, the FTC issues a cease-and-desist order. This order can carry a $10,000-per-day fine.
Even if an advertiser agrees to abide by a cease-and-desist order, the FTC may find that simply stopping the practice does not repair past damages to consumers. They may be required to run corrective advertising to counteract the past residual effect of previous deceptive advertising (this began around the 1960s).
If a company cannot reach agreement with the FTC, its next recourse is the federal courts. It is extremely rare that cases go beyond the cease-and-desist stage.
FTC Rules and Guidelines: the FTC is responsible for enforcement and education in:
- Federal Laws passed by congress
- Formal FTC industry rules: the telemarketing sales rule requires that callers transmit caller identification information, the used car rule is intended to prevent oral misrepresentations and unfair omissions of material facts by used car dealers, and the contact lens rule made it mandatory that patients receive copies of their prescriptions.
Among the most common areas of FTC inquiry for which guidelines have been issued are the following:
- Environmental claims, the term “free” in advertising, “Made in the USA” label, Advertising as a Contract, facts versus puffery (meaning the advertiser’s opinion of a product that is considered a legitimate expression of biased opinion), testimonials, and warranties and guarantees.
Robinson-Patman Act: a three-part “package” that evolved over a period of almost 50 years:
- 1890 Federal Sherman Antitrust Act: designed to prevent alliances of firms conceived to restrict competition
- 1914 Clayton Antitrust Act: amended the Sherman Act; it eliminated preferential price treatment when manufacturers sold merchandise to retailers
- 1936 Robinson-Patman Act: in turn, the Robinson-Patman Act amended the Clayton Act. It requires a manufacturer to give proportionate discounts and advertising allowances to all competing dealers in a market. It protects smaller merchants from unfair competition by larger buyers. For example, a manufacturer may not limit co-op dollars to television, knowing that many retailers in smaller markets might not have practical access to television advertising. Such an offer is known as an improperly structured program.
Slotting fees are payments to retailers by manufacturers to gain shelf space; the FTC has a continuing review of the role of slotting fees because it fears that they have the potential to prevent marketplace entry of new brands or to prevent small retailers from gaining access to established brands because of disproportionately high slotting fees.
The Federal Food, Drug, and Cosmetic Act was passed by Congress in 1938 and established the Food and Drug Administration (FDA); it superseded the original legislation (which prohibited interstate commerce in misbranded and adulterated foods, drinks, and drugs) and gave the FDA increased responsibility. One of the most active and controversial areas of FDA regulation is consumer prescription drug advertising. Until 1997, pharmaceutical companies could advertise prescription drugs only to doctors; after that, consumer advertising was permitted. By 2005, $5 billion was being spent on direct-to-consumer drug advertising. The FDA has begun an aggressive campaign of enforcing promotional regulations, often sending formal letters of warning to drug companies. The jurisdiction of the FDA to control and regulate labeling was enhanced when Congress passed the Nutritional Labeling and Education Act of 1990. Beginning on January 1, 2006, the agency was given greater enforcement authority over labeling, and labeling information was enhanced to include data about trans fat, allergen groups, and whole-grain ingredients.
Despite a more open environment for commercial messages, judicial opinions supporting commercial speech still deny full first amendment protection to advertising.
In 1980, the court articulated a set of guidelines concerning the constitutional protection that would be afforded commercial speech. These guidelines were set forth in the case of Central Hudson Gas and Electric v. Public Service Commission of NY. The court established a four-part test to determine when commercial speech is constitutionally protected and when regulation is permissible – known as the Hudson Four-Part Test:
- Is the commercial expression eligible for first amendment protection? Is it neither deceptive nor promoting of illegal activity?
- Is the government interest asserted in regulating the expression substantial? The stated reason for regulating must be of primary interest to the state rather than of a trivial, arbitrary nature
- If the first two tests are met, the court then considers if the regulation of advertising imposed advances the cause of the government interest asserted. If we assume that an activity is of legitimate government concern, will the prohibition of commercial speech further the government’s goals?
- If the first three tests are met, the court must finally decide if the regulation is more extensive than necessary to serve the government’s interest. Is there a less severe restriction that could accomplish the same goals?
Because of the unique nature of communication, it may be that the Supreme Court will never be able to issue a totally definitive decision that will cover every instance of commercial speech.
CAN-SPAM Act of 2003: the Controlling the Assault of Non-Solicited Pornography and Marketing Act is enforced by the FTC and establishes requirements for those who send commercial email. Provisions of the act include: false or misleading header information is banned, deceptive subject lines are prohibited, the email must give recipients an opt-out method, and commercial email must be identified as an advertisement and include the sender's valid physical postal address.
Advertising of professional services: one of the most controversial areas of commercial speech involves advertising by professionals; especially attorneys and health care providers.
Comparison Advertising raises several primary concerns. Comparative advertising runs the risk of inadvertently promoting competitive brands and/or appearing to offer credibility to them by including their names. Some comparison advertising may appear unfair to consumers and damage the reputation of the brand as well as advertising in general. Firms fear that comparative advertising claims will precipitate lawsuits by companies that think their brand is unfairly disparaged. Comparison advertising can also invite counterattacks from competitive brands.
The Advertising Clearance Process: the internal process of clearing ads for publication and broadcast, conducted primarily by ad agencies and clients. In children's toy advertising, for example, the toy must be presented logically and realistically, animation is limited to about 10 seconds of a spot, and copy must clearly disclose whether parts are sold separately and whether batteries are not included.
Self-Regulation by Industry-wide groups serves two important purposes beyond ensuring more informative and truthful advertising: it seeks to overcome the relatively poor public perception of advertising by showing that there is a concerted attempt within the industry to foster responsible advertising, and strong self-regulation may ward off even stricter government control.
Better Business Bureaus are one of the best-known, most aggressive, and most successful organizations in the fight for honest and truthful advertising. The primary responsibility for truthful and non-deceptive advertising rests with the advertiser. Advertisements that are untrue, misleading, deceptive, fraudulent, falsely disparaging of competitors, or insincere offers shall not be used. An advertisement as a whole may be misleading although every sentence separately considered is literally true. Although the BBBs have no legal authority, they are a major influence on truth and accuracy in advertising.
The National Advertising Review Council (NARC)’s primary purpose was to develop a structure which would effectively apply the persuasive capacities of peers to seek the voluntary elimination of national advertising which professionals would consider deceptive. Its objective was to sustain high standards of truth and accuracy in national advertising through voluntary self-regulation.
National Advertising Division (NAD) is the primary investigating unit of the NARC self-regulation program. The NAD is staffed by full-time lawyers who respond to complaints from competitors and consumers and to referrals from local BBBs. They also monitor national advertising. Primary areas of challenges are product testing, consumer perception studies, taste/sensory claims, pricing, testimonial/anecdotal evidence, and demonstrations. The NAD/NARB process cannot order an advertiser to stop an ad, impose a fine, bar anyone from advertising, or boycott an advertiser or product.
The National Advertising Review Board (NARB) provides an advertiser with a jury of peers if it chooses to appeal a NAD decision.
The Children’s Advertising Review Unit (CARU) was established in 1974 to review the special advertising concerns of advertising directed to children. The CARU primarily deals with product presentations and claims, sales pressure, disclosures and disclaimers, comparative claims, endorsements and promotions by program or editorial characters, safety, and interactive electronic media.
Obesity concerns: food advertising and its alleged contribution to childhood obesity is an area of concern that demonstrates many similarities with the battle over tobacco advertising in the 1990s.
The Children’s Food and Beverage Advertising Initiative (CFBAI): launched by CBBB in 2006 to provide transparent and accountable self-regulatory guidelines for companies that advertise foods and beverages to children. The initiative’s goal is to ensure that food and beverage advertising messages directed to children younger than 12 encourage healthy dietary choices and lifestyles.
| http://bkstrategic.com/marketing-strategy/advertising/ | 13
51 | Pascal - Units
A Pascal program can consist of modules called units. A unit might consist of some code blocks which in turn are made up of variables and type declarations, statements, procedures etc. There are many built-in units in Pascal and Pascal allows programmers to define and write their own units to be used later in various programs.
Using Built-in Units
Both the built-in units and user defined units are included in a program by the uses clause. We have already used the variants unit in the Pascal - Variants tutorial. This tutorial explains creating and including user-defined units. However, let us first see how to include a built-in unit crt in your program:
program myprog;
uses crt;
The following example illustrates using the crt unit:
Program Calculate_Area (input, output);
uses crt;
var
   a, b, c, s, area: real;
begin
   textbackground(white);  (* gives a white background *)
   clrscr;                 (* clears the screen *)
   textcolor(green);       (* text color is green *)
   gotoxy(30, 4);          (* moves the cursor to column 30, row 4 *)
   writeln('This program calculates area of a triangle:');
   writeln('Area = sqrt(s(s-a)(s-b)(s-c))');
   writeln('S stands for semi-perimeter');
   writeln('a, b, c are sides of the triangle');
   writeln('Press any key when you are ready');
   readkey;
   clrscr;
   gotoxy(20, 3);
   write('Enter a: ');
   readln(a);
   gotoxy(20, 5);
   write('Enter b: ');
   readln(b);
   gotoxy(20, 7);
   write('Enter c: ');
   readln(c);
   s := (a + b + c)/2.0;
   area := sqrt(s * (s - a)*(s - b)*(s - c));
   gotoxy(20, 9);
   writeln('Area: ', area:10:3);
   readkey;
end.
It is the same program we used right at the beginning of the Pascal tutorial; compile and run it to see the effects of the change.
Creating and Using a Pascal Unit
To create a unit, you need to write the modules, or subprograms, you want to store in it and save it in a file with the .pas extension. The first line of this file should start with the keyword unit followed by the name of the unit. For example:
unit CalculateArea;
Following are three important steps in creating a Pascal unit:
The name of the file and the name of the unit should be exactly same. So our unit calculateArea will be saved in a file named calculateArea.pas
The next line should consist of a single keyword interface. After this line, you will write the declarations for all the functions and procedures that will come in this unit.
Right after the function declarations, write the word implementation, which is again a keyword. After the line containing the keyword implementation, provide definition of all the subprograms.
The following code creates the unit named CalculateArea:
unit CalculateArea;

interface

function RectangleArea(length, width: real): real;
function CircleArea(radius: real): real;
function TriangleArea(side1, side2, side3: real): real;

implementation

function RectangleArea(length, width: real): real;
begin
   RectangleArea := length * width;
end;

function CircleArea(radius: real): real;
const
   PI = 3.14159;
begin
   CircleArea := PI * radius * radius;
end;

function TriangleArea(side1, side2, side3: real): real;
var
   s, area: real;
begin
   s := (side1 + side2 + side3)/2.0;
   area := sqrt(s * (s - side1)*(s - side2)*(s - side3));
   TriangleArea := area;
end;

end.
Next, let us write a simple program that would use the unit we defined above:
program AreaCalculation;
uses CalculateArea, crt;
var
   l, w, r, a, b, c, area: real;
begin
   clrscr;
   l := 5.4;
   w := 4.7;
   area := RectangleArea(l, w);
   writeln('Area of Rectangle 5.4 x 4.7 is: ', area:7:3);
   r := 7.0;
   area := CircleArea(r);
   writeln('Area of Circle with radius 7.0 is: ', area:7:3);
   a := 3.0;
   b := 4.0;
   c := 5.0;
   area := TriangleArea(a, b, c);
   writeln('Area of Triangle 3.0 by 4.0 by 5.0 is: ', area:7:3);
end.
When the above code is compiled and executed, it produces the following result:
Area of Rectangle 5.4 x 4.7 is: 25.380
Area of Circle with radius 7.0 is: 153.938
Area of Triangle 3.0 by 4.0 by 5.0 is: 6.000
| http://www.tutorialspoint.com/pascal/pascal_units.htm | 13
56 | The next class of synthesis tools deals with layout external to the cell. Since most active circuitry is inside the cells, these tools handle the tasks of intercell wiring (routing), cell placement, and pad layout. Since routing can be very complex and can consume more area than the actual cells do, it is important to plan the connections in advance. Although proper cell planning and placement can make the routing simpler, tools that do this automatically are rare.
A router is given sets of points on unconnected cells that must be connected. The space between these cells can be as simple as a rectangle or as complex as a maze (see Fig. 4.7). When the routing space is rectangular and contains connection points on two facing sides, it is called a channel. When routing is done on four sides of a rectangular area, it is called a switchbox. More complex routing areas usually must be decomposed into channels and switchboxes. Many different techniques exist for routing, depending on the number of wires to run, the complexity of the routing space, and the number of layers available for crossing.
FIGURE 4.7 Routing areas.
Maze routing is the simplest way to run wires because it considers only one wire at a time. Given the connection points and a set of obstacles, the maze router finds a path. The obstacles may be cells, contacts that cannot be crossed, or wires that have already been run.
The first maze-routing technique was based on a grid of points to be traversed by the wire [Moore; Lee]. Now known as the Lee-Moore algorithm, it works by growing concentric rings of monotonically increasing values around the starting point. When a ring hits the endpoint, a path is defined by counting downward through the rings (see Fig. 4.8). This method is so simple that hardware implementations have been built [Blank]. It can also employ weighting factors that increment by more than one and use the lowest value when two paths merge. This can prevent wires from running outside of an area when they could run inside.
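As a rough illustration of the wavefront idea only — not of the hardware implementations or the weighted variants just mentioned — the following Free Pascal sketch expands rings of increasing values from a start point over a small grid containing one obstacle wall, then backtraces from the endpoint by stepping to any neighbor whose value is one lower. The grid size, the obstacle, and the start and end coordinates are invented for the example.

program LeeMooreSketch;
{ Minimal Lee-Moore wavefront expansion and backtrace on a small grid.
  Grid values: -1 = obstacle, 0 = unlabeled, >0 = wavefront ring number. }
const
   W = 8;
   H = 6;
   dx: array[0..3] of integer = (1, -1, 0, 0);
   dy: array[0..3] of integer = (0, 0, 1, -1);
var
   grid: array[0..H-1, 0..W-1] of integer;
   qx, qy: array[0..W*H-1] of integer;   { simple FIFO queue of labeled cells }
   head, tail: integer;
   x, y, cx, cy, nx, ny, d: integer;
   sx, sy, ex, ey: integer;
begin
   for y := 0 to H-1 do                  { start with an empty grid }
      for x := 0 to W-1 do
         grid[y, x] := 0;
   for y := 0 to H-2 do                  { wall in column 4 with a gap at the bottom }
      grid[y, 4] := -1;
   sx := 1; sy := 1;                     { start point }
   ex := 6; ey := 1;                     { end point }

   { expansion: concentric rings of monotonically increasing values }
   grid[sy, sx] := 1;
   head := 0; tail := 0;
   qx[tail] := sx; qy[tail] := sy; tail := tail + 1;
   while (head < tail) and (grid[ey, ex] = 0) do
   begin
      cx := qx[head]; cy := qy[head]; head := head + 1;
      for d := 0 to 3 do
      begin
         nx := cx + dx[d]; ny := cy + dy[d];
         if (nx >= 0) and (nx < W) and (ny >= 0) and (ny < H) then
            if grid[ny, nx] = 0 then
            begin
               grid[ny, nx] := grid[cy, cx] + 1;
               qx[tail] := nx; qy[tail] := ny; tail := tail + 1;
            end;
      end;
   end;

   { backtrace: count downward through the rings from the endpoint }
   if grid[ey, ex] <= 0 then
      writeln('no path found')
   else
   begin
      cx := ex; cy := ey;
      while grid[cy, cx] > 1 do
      begin
         writeln('path cell (', cx, ',', cy, ')');
         for d := 0 to 3 do
         begin
            nx := cx + dx[d]; ny := cy + dy[d];
            if (nx >= 0) and (nx < W) and (ny >= 0) and (ny < H) then
               if grid[ny, nx] = grid[cy, cx] - 1 then
               begin
                  cx := nx; cy := ny;
                  break;
               end;
         end;
      end;
      writeln('reached start at (', cx, ',', cy, ')');
   end;
end.

This sketch stores a full integer at every grid point, which already hints at the memory cost discussed next.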
One problem with Lee-Moore routing is that it demands a work array that is very large. Moore observed that the integers 0, 1, and 2 can be used cyclically since the adjacent values are always plus or minus one. However, even a two-bit work array can be very large. To avoid this problem, maze routing can be done with traditional maze-solving techniques that walk along the obstacle walls. Figure 4.9 illustrates this technique: The router moves from the starting point toward the finish until it hits an obstacle. It then chooses to walk to the left (although it could choose to walk to the right, so long as it is consistent). When the obstacle is cleared, it heads for the endpoint--but once again hits an obstacle, this time a wire. Crawling along these obstacles to the left makes the path go all the way around the cell to get to the endpoint. Note that the extra loop made near the wire obstacle can be optimized away so that there are only seven turns rather than nine. The Hightower router is a variation of the maze technique that constructs paths from both ends until they intersect and complete a connection [Hightower].
FIGURE 4.9 Maze routing.
The basic maze router is inadequate for many tasks, as the figure illustrates. For example, if the router chooses to walk to the right instead of to the left when it hits an obstacle, then it will quickly loop back to its starting point. This is an indication that it should change directions and start again. If the router then returns for the second time after walking both to the left and to the right, the wire cannot be routed. This is the case in Fig. 4.9 when an attempt is made to connect "start" to "aux." The only solution in such a situation is to change layers so that the wire can cross other wire obstacles.
Multilayer routing is an obvious extension to this basic technique. Typically the wire changes layers only long enough to bridge other wires that form an obstacle. Some schemes have one layer that runs horizontally and another layer that runs vertically [Doreau and Koziol; Kernighan, Schweikert, and Persky]. Routing on each layer is not difficult because all wires are parallel, so the only question is how to order sets of parallel paths. Since horizontal and vertical paths are already known, the location of layer contacts can be fixed so that the remaining problem is how to wire between them. This is done by swapping wires until they all fit through the available spaces and around the layer contacts. This scheme employs a constraint network that indicates how many wires can pass between any two obstacles. The network simplifies the computation of wire width and design-rule spacing. It ensures that wires will never form obstacles, but it creates a large number of layer contacts.
Other considerations must be addressed when doing multilayer routing. The most important is to minimize the number of layer changes. Contacts to other layers take up space on both layers and form obstacles for other wires. Also, excessive layer switching can delay a signal in the final circuit. Therefore the router should optimize the layer switches by refusing to make a change if it is not necessary for obstacle avoidance. A useful option to consider in multilayer routing is that there may be different starting and ending layers desired for a wire so that, once a layer change is made, the wire can remain in the layer to which it was switched. Of course, an alternative to switching layers is to route around the outside of an obstacle, which can consume much space. To allow for this without making absurd detours, some routers have parameters to choose between extra run length and extra layer changes for obstacle avoidance [Doreau and Koziol].
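The trade-off between extra run length and extra layer changes is naturally expressed as a weighted cost. The sketch below shows one plausible form; the weights are illustrative parameters, not values taken from any router cited here.

```python
def path_cost(run_length, via_count, length_weight=1.0, via_weight=10.0):
    """Compare candidate paths: extra wire length versus extra layer changes."""
    return run_length * length_weight + via_count * via_weight

# A detour of 8 extra units with no via beats a direct path needing one via
# under these weights:
detour = path_cost(run_length=28, via_count=0)
direct = path_cost(run_length=20, via_count=1)
print(detour < direct)   # True: 28.0 < 30.0
```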
Another multilayer-routing consideration is the priority of layers. Since not all layers are the same, the speed of signals on a given layer should bias that layer's use. Also, some signals are restricted to certain layers. In IC layout, the power and ground wires must always run on metal layers [Keller]. When these wires cross other signals, it is the other signal that must change layers. One way to ensure this is to run the power and ground wires first; however, there must still be special code to ensure the correct placement of these wires. The PI system uses a traveling-salesman technique to find a path that runs through each cell only once in an optimal distance [Rivest]. Since this path never crosses itself, it divides the layout into two sides on which power and ground can be run without intersecting.
In printed-circuit routing, there are some signals that must run on specific layers. For example, those wires that are important in board testing must run on outside layers so that they can be probed easily. Therefore the multilayer router must allow special requests for certain wires' paths.
Routing an entire circuit one wire at a time can be difficult, because each wire is run without consideration for others that may subsequently be routed. Without this global consideration, total consistency is hard to achieve. Preordering of the wires to be routed can help in some circumstances. For example, if the wires are sorted by length and are placed starting with shortest, then the simple and direct wiring will get done without trouble and the longer wires will work around them. Alternatively, if the longest wires are run first, then they will be guaranteed to run and the short connections will have to struggle to complete the circuit. Another solution is to run wires in the most congested areas first [Doreau and Koziol]. This is a good choice because it most effectively avoids the problem of being unable to complete all the wiring.
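A preordering step might look like the following sketch; the per-net fields (estimated length, congestion) are hypothetical, since the text only names the strategies.

```python
def order_nets(nets, mode="shortest_first"):
    """Pre-order nets for one-at-a-time routing; each net is assumed to carry
    an estimated length and a congestion estimate for the region it crosses."""
    if mode == "shortest_first":
        return sorted(nets, key=lambda n: n["est_length"])
    if mode == "longest_first":
        return sorted(nets, key=lambda n: n["est_length"], reverse=True)
    if mode == "congested_first":
        return sorted(nets, key=lambda n: n["congestion"], reverse=True)
    raise ValueError("unknown ordering: " + mode)
```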
Another option when routing many wires is the use of alternative routes for a wire. If each wire that is routed is kept on a stack, then this information can be used to back up when a new wire cannot be run. By backing up through the stack and undoing wires up to the offending wire that causes the blockage, an alternate path can be made so that subsequent wires can have a different environment that may be better (see Fig. 4.10). The nature of this routing change must be saved on the stack so that the same choice will not be made again if the router backtracks to that point a second time. If repeated backtracking to redo a wire exhausts the possibilities for that wire, backtracking must then go beyond the offending wire to an earlier one that allows even more possibilities. This process can be extremely time consuming as it repeatedly routes wires one at a time. With a little bookkeeping, much of the computing can be saved so that wires that reroute after a backtrack are routed faster. Nevertheless, there are no guarantees that recursive juggling of these wires will result in a good solution. One way to improve the chances for routing is to search the space of possible wire paths so that only those wires that promise no space increase are placed [Kernighan, Schweikert, and Persky].
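The stack-based rip-up scheme can be sketched as follows. The helpers try_route and alternatives are assumed to be supplied by the surrounding router (including the geometric undo of a popped wire); only the backtracking bookkeeping is shown.

```python
def route_with_backtracking(nets, try_route, alternatives):
    """try_route(net, choice) returns a path or None; alternatives(net) gives
    the number of distinct path choices available for that net."""
    next_choice = [0] * len(nets)     # choices already ruled out for each net
    stack = []                        # indices of nets routed so far, in order
    i = 0
    while i < len(nets):
        routed = False
        for choice in range(next_choice[i], alternatives(nets[i])):
            if try_route(nets[i], choice) is not None:
                next_choice[i] = choice       # remember the choice actually used
                stack.append(i)
                routed = True
                break
        if routed:
            i += 1
            continue
        # Blocked: undo wires back through the stack until one of them still
        # has an untried alternative, then resume routing from that wire.
        next_choice[i] = 0                    # this net will be retried later
        while stack:
            j = stack.pop()                   # (its wire is ripped up here)
            if next_choice[j] + 1 < alternatives(nets[j]):
                next_choice[j] += 1           # never repeat the same choice
                i = j
                break
            next_choice[j] = 0                # exhausted; back up even further
        else:
            raise RuntimeError("no combination of alternatives completes the routing")
    return [(nets[k], next_choice[k]) for k in stack]
```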
FIGURE 4.10 Backtracking in routing: (a) First attempt to route net 2 (b) Backtracking reroutes net 1.
Another way to manipulate single paths to obtain global consistency is with the simulated-annealing technique used in compaction [Kirkpatrick, Gelatt, and Vecchi; Sechen and Sangiovanni-Vincentelli]. Having a global measure of routing quality and making random changes allows each change to be seen either to help or to hinder the routing. Major changes, which are tried first, establish the overall routing configuration; minor changes are done last to get the details of the wiring correct. This method also consumes vast amounts of time, but achieves very good results.
An alternative to single-wire routing is channel routing, in which all the wires in a channel are placed at once. This technique essentially sweeps through the routing area, extending all wires toward their final destination. Although this routing method can get stuck at the end of the channel and fail to complete many of the wires, it does make global considerations during the sweep so the results are often better than they are with maze routing. Also, it is faster because it considers all wires in parallel, examines the routing area only once, and avoids backtracking. Those routers that can handle connections on all sides of a routing area are called switchbox routers [Soukup] and they generally work the same way as channel routers. If the routing area is not rectangular, it must be broken into multiple rectangles and connection points assigned along the dividing lines (see Fig. 4.11). This step is called global routing or loose routing [Lauther].
Channel and switchbox routing typically make the assumption that wires run on a fixed-pitch grid, regardless of wire width or contact size. All contact points are also on these grid locations. Although this may consume extra space because of the need to use worst-case design rules, the area can always be compacted in a postprocessing step. Some routers are able to create new grid lines between the initial rows and columns if the new lines are necessary to complete the wiring successfully [Luk; Lauther].
Although there are many channel and switchbox routers [Deutsch; Rivest and Fiduccia; Doreau and Koziol], only one will be discussed; it combines good features from many others [Hamachi and Ousterhout]. This router works with a switchbox that contains connection points, obstacles, and obstructions and makes connections using two layers of wire (see Fig. 4.12). Obstructions are those areas that can be crossed only on certain layers. Obstacles block all layers, forcing the router to work around them.
FIGURE 4.12 Switchbox routing: (a) Initial configuration (b) After routing.
The routing task is viewed as follows: Each net is in a particular horizontal track and must eventually get to a different horizontal track to complete its wiring. The wiring is run one column at a time, starting from the left and sweeping across the area. At each column, there are rules to determine which nets will switch tracks. When the sweep reaches the right side of the area, the layout is complete and all nets have been routed.
In order to implement this model for switchboxes, the nets connected at the top and bottom must be brought into the first horizontal track, and any connections to nets that are not on track centers must jog to the correct place. Once the nets are all on horizontal tracks, routing can begin.
As each column is processed, priority is given to those vertical wirings that can collapse a net. This is the case with the first column on the left in Fig. 4.12 that collapses net 2. Notice that the layer changes when running vertical wires. This allows vertical and horizontal tracks to cross without fear of connection. To prevent excessive layer switching, a postprocessing step eliminates unnecessary switches and moves contacts to maximize use of the preferred layer (if any).
The next priority in processing a column is to shift tracks toward a destination. Net 3 in the figure makes a downward jog as soon as it has cleared the obstacle so that it can approach its destination. Similarly, net 4 jogs down to its correct track immediately. Special rules must be used to handle the righthand side of the channel; otherwise wires like net 5 will not know their destination until it is too late. These rules reserve an appropriate number of columns to route the right edge.
In addition to these basic rules, there are much more complex ones that dictate when to create a contact, when to vacate a track in anticipation of an obstruction, and when to split a net in preparation for multiple end connections. When the rules are followed in the proper order, channel routing is achieved.
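A heavily simplified sketch of the column-by-column sweep is shown below, with each net reduced to one current track and one destination track. The real router also handles split nets, top and bottom pins, obstacles, and obstructions, so this is only an illustration of the rule priorities.

```python
def greedy_sweep(current, target, num_columns):
    """`current` and `target` map each net to a horizontal track index.
    The area is swept one column at a time; vertical jogs in a column may
    cross occupied tracks (they are on the other layer) but may not overlap
    each other or land on an occupied track."""
    for _ in range(num_columns):
        claimed = []                          # vertical spans used in this column

        def free(a, b):
            lo, hi = min(a, b), max(a, b)
            return all(hi < s or lo > e for s, e in claimed)

        occupied = set(current.values())
        # Priority 1: jog a net all the way onto its destination track.
        for net in list(current):
            cur, tgt = current[net], target[net]
            if cur != tgt and tgt not in occupied and free(cur, tgt):
                claimed.append((min(cur, tgt), max(cur, tgt)))
                occupied.discard(cur)
                occupied.add(tgt)
                current[net] = tgt
        # Priority 2: otherwise shift one track toward the destination.
        for net in list(current):
            cur, tgt = current[net], target[net]
            step = (tgt > cur) - (tgt < cur)
            nxt = cur + step
            if step and nxt not in occupied and free(cur, nxt):
                claimed.append((min(cur, nxt), max(cur, nxt)))
                occupied.discard(cur)
                occupied.add(nxt)
                current[net] = nxt
    return current
```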
In some situations the routing needs are specialized and can be met more easily because of the problem's constraints. One such simplified technique is river routing, which connects many wires that are guaranteed not to cross, but instead run parallel to each other. River routing can be used to connect two cells that have incorrect pitch and alignment [Wardle et al.]. The issue in river routing is how to adjust the spacing properly if one end is wider than the other. One method runs the wires of the bus serially and constructs areas of avoidance after each wire [Tompa]. Figure 4.13 shows this process: The dotted lines are the areas of avoidance that are created after the wires are placed. Special methods such as this allow all of a circuit's unusual routing needs to be met.
Placement, or floor-planning as it is sometimes called, is the process of positioning cells such that they connect well and do not waste space. Automatic placement tools therefore need to know the size of each cell and the complete set of routing requirements. Since this information is the same as what is required of routers, the two are often thought of as two parts of the same process. In fact, placement and routing comprise a single step in automatic printed-circuit-board layout. However, the two are often independent and have different algorithms driving them. Also, they must function separately as each produces results needed by the other: First, placement establishes an initial location of cells; next, routing connects the cells and indicates, by its success, how changes to the placement can help. The process is repeated until a suitable layout is produced. Although an ideal placement and routing system would do these tasks uniformly, no such systems exist today. What is needed is a way to share the placement and routing goals in a common data structure so that the system can alternate between the two as the priorities change.
One system does approach this unification by dividing the overall placement and routing task into smaller pieces that are invoked alternately [Supowitz and Slutz]. This system observes that the vertical, lower channel of a T-shaped routing area needs to be routed before the horizontal, upper area. This T-shaped routing area is created by three cells on the left, right, and top. Routing the vertical, lower channel allows the cells on the left and right to be placed correctly. Then the upper area can be routed and the top cell can be placed correctly. The assumption is that a crude placement has already been done so that the placement and routing step can simply connect while fine-tuning the cell locations. Another assumption is that there is no cycle of T-shaped routing areas, which would prevent a good ordering of the placement and routing steps.
Automatic placement is difficult because of the extremely complex constraints that it must satisfy. For example, if the wiring is ignored and only placement is considered, the task is simply one of tiling an area optimally, which is NP-complete [Garey and Johnson]. This means that the program for packing cells tightly can take an unreasonable amount of time to complete. If wiring considerations are added to this task, then every combination of cell placements that is proposed will also require very large amounts of time in routing. Because of these problems, placement cannot be done optimally; rather, it is done with heuristics that produce tolerably good results in small amounts of time.
One heuristic for automatic placement is min-cut [Breuer]. This technique divides the unplaced cells into two groups that belong on opposite sides of the chip. Each side is further subdivided until a tree-structured graph is formed that properly organizes all cells. Determination of the min-cut division is based on the wiring between the cells. The goal of min-cut is to make a division of cells that cuts the fewest wires (see Fig. 4.14). The number of cells should be divided approximately in two, such that each half has a high amount of internal connectivity and the two halves are minimally interconnected.
FIGURE 4.14 Min-cut placement method. The goal is to find a partitioning of the cells that results in the fewest wire cuts.
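A crude sketch of the idea: count the nets cut by a partition and greedily move the cell whose move reduces the cut most while keeping the halves balanced. Production partitioners (Kernighan-Lin, Fiduccia-Mattheyses) are far more careful; the data layout below is an assumption for the example.

```python
def cut_size(nets, side):
    """Number of nets that span both halves; `side` maps cell -> 0 or 1 and
    each net is a set of cells."""
    return sum(1 for net in nets if len({side[c] for c in net}) > 1)

def greedy_min_cut(cells, nets, max_moves=100):
    """Start from an arbitrary even split and repeatedly move the single cell
    whose move reduces the cut the most, keeping the halves roughly balanced."""
    side = {c: i % 2 for i, c in enumerate(cells)}
    for _ in range(max_moves):
        base = cut_size(nets, side)
        ones = sum(side.values())
        best_gain, best_cell = 0, None
        for c in cells:
            new_ones = ones + (1 if side[c] == 0 else -1)
            if abs(2 * new_ones - len(cells)) > 1:
                continue                      # would unbalance the two halves
            side[c] ^= 1                      # tentatively move the cell
            gain = base - cut_size(nets, side)
            side[c] ^= 1                      # undo the tentative move
            if gain > best_gain:
                best_gain, best_cell = gain, c
        if best_cell is None:
            break                             # no single move helps any more
        side[best_cell] ^= 1
    return side
```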
The min-cut technique cannot do the whole job of placement. It considers neither cell sizes nor the relative orientation of cells within a group. However, it is one method of reducing the size of the problem by breaking it up into smaller subproblems. Another way of doing the same thing, called bottom-up [Rivest; Heller, Sorkin, and Maling], is conceptually the opposite of the min-cut method because it grows clusters rather than cutting them down.
The bottom-up method of automatic placement presumes that all cells are distinct and must be grouped according to their needs. A weighting function is used to indicate how closely two cells match. This function, like the one for min-cut, counts the number of wires that can be eliminated by clustering two cells. It can also take cell sizes and other weights into account. Clustering then proceeds to group more and more cells until there is no savings to be had by further work. Figure 4.15 illustrates this technique. Initially, there are four unrelated cells. Bottom-up planning finds the two that fit together best by eliminating the greatest number of wires. In addition to considering wiring outside of cells, it is necessary to examine the complexity of wiring inside a cluster. If this were not done, then the optimal bottom-up clustering would be a single cluster with all cells and no external wires.
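A sketch of such a weighting function and a greedy clustering loop follows; the size penalty is an illustrative assumption, since the text only says that the wiring and size inside a cluster must be taken into account.

```python
def affinity(a, b, nets, size_penalty=0.1):
    """How well clusters a and b fit together: the number of nets that become
    entirely internal when they merge, discounted by the size of the result."""
    internal = sum(1 for net in nets
                   if net <= (a | b) and not (net <= a or net <= b))
    return internal - size_penalty * len(a | b)

def cluster(cells, nets):
    """Greedily merge the best-matching pair until no merge is worthwhile."""
    clusters = [frozenset([c]) for c in cells]
    while len(clusters) > 1:
        best = max(((affinity(clusters[i], clusters[j], nets), i, j)
                    for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda t: t[0])
        if best[0] <= 0:
            break                   # no savings to be had by further work
        _, i, j = best
        merged = clusters[i] | clusters[j]
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
        clusters.append(merged)
    return clusters
```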
Once min-cut or bottom-up methods have organized the cells into their relative orientations, a more precise placement is needed. This can be done by converting the placement graph into a dual graph that includes position [Kozminski and Kinnen]. The method recursively picks off edge members of the original graph and creates a new graph with position and size information. By proceeding in adjacency order, the new graph is built in a "brickwork" style.
An alternative to converting the placement graph into a more precise placement is to compute this information as the initial placement graph is built. In one system, the initial layout is seen as a square, and each min-cut division divides the square proportionally to the area of cells on each side of the cut [Lauther]. When min-cut is done, the square indicates proper area and position, but incorrect aspect ratio. The problem then is to fit the actual cells into the idealized areas of the square. A number of approximate methods are used, including rotating cells and spreading open the area. Since routing requirements may spread the layout further, this approximation is not damaging.
Besides min-cut and bottom-up methods, those that find analogies to physical systems that behave in similar ways also can be used to do placement. For example, if a circuit is viewed as a resistive network, in which each cell is a constant voltage node and each collection of intercell wires is a resistor, then the problem becomes one of finding the voltage at each node. Nodes whose voltages are close indicate cells that should be placed near each other. To implement this technique, the sum of squared wire lengths becomes the objective function, which is minimized with matrix techniques [Cheng and Kuh]. The resulting graph can then be adjusted into actual placement using the techniques already described.
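Assuming two-pin connections and a few fixed terminals (pads), the quadratic objective leads to a linear system, which the sketch below solves for one dimension. The data layout is an assumption for the example, and each connected group of cells must touch at least one fixed terminal for the system to be solvable; the two-dimensional case solves x and y independently in the same way.

```python
import numpy as np

def quadratic_placement_1d(n_movable, connections, fixed):
    """Minimize the sum of weighted squared wire lengths in one dimension.
    `connections` is a list of (i, j, weight) pairs; indices < n_movable are
    movable cells, larger indices are fixed terminals whose coordinates are
    given in the `fixed` dict."""
    A = np.zeros((n_movable, n_movable))
    b = np.zeros(n_movable)
    for i, j, w in connections:
        for p, q in ((i, j), (j, i)):
            if p < n_movable:
                A[p, p] += w
                if q < n_movable:
                    A[p, q] -= w
                else:
                    b[p] += w * fixed[q]      # fixed terminal moves to the RHS
    return np.linalg.solve(A, b)

# Two movable cells (0, 1) chained between two pads fixed at 0.0 and 10.0:
xs = quadratic_placement_1d(
    2, [(0, 2, 1.0), (0, 1, 1.0), (1, 3, 1.0)], fixed={2: 0.0, 3: 10.0})
print(xs)     # roughly [3.33, 6.67]
```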
Another physical emulation that is used for placement is simulated annealing [Kirkpatrick, Gelatt, and Vecchi; Sechen and Sangiovanni-Vincentelli]. This method was shown to be useful in compaction and routing, so it should be no surprise that it also works for placement. Given an initial floor-plan, the annealer makes severe changes and evaluates the results. The severity of the changes decreases until there is nothing but detail to adjust. Like the other annealing tasks, this method consumes large amounts of time but produces good results.
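A generic annealing skeleton for placement might look like the sketch below; the cost and perturb helpers, the starting temperature, and the cooling schedule are all assumptions for the example.

```python
import math
import random

def anneal_placement(initial, cost, perturb, t0=100.0, cooling=0.95, steps=200):
    """Generic simulated-annealing skeleton.  `cost` scores a floor-plan and
    `perturb` returns a modified copy (e.g. swap two cells or move one);
    severe changes are accepted at high temperature, only minor ones later."""
    state, best = initial, initial
    t = t0
    while t > 0.01:
        for _ in range(steps):
            candidate = perturb(state, t)        # severity can scale with t
            delta = cost(candidate) - cost(state)
            if delta < 0 or random.random() < math.exp(-delta / t):
                state = candidate
                if cost(state) < cost(best):
                    best = state
        t *= cooling
    return best
```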
Other information can be used to produce more intelligent cell placement. For example, there can be special code to detect cells that align well and should be placed together. This can be done before the general-purpose placement begins. An idiomatic floor-planner is one that recognizes these special situations and uses appropriate algorithms in each case [Deas and Nixon]. Determining which idioms exist in the layout is done by examining the routing needs and applying specialized rules that recognize arrays, trees, complex switchboxes, and so on.
Once a basic placement has been found, it is possible to use other methods to fine-tune the layout. The results of routing will indicate the amount of channel utilization so that the cell spacing can be adjusted. This information can be used to control another iteration of placement or it can simply indicate the need for compaction of the final layout. It has also been shown that the placement and routing cycle can be shortened by having routing take place after every step of min-cut division [Burstein, Hong and Pelavin]. Once again, the results of routing help with subsequent placement.
Besides routing, other tools can be used to help improve the results of placement. One system even integrates timing information to help placement and routing iterations [Teig, Smith, and Seaton]. This timing analysis, described more fully in Chapter 5, finds the worst-case circuit delay so that the placed modules and routed wires can optimize that path.
Automatic placement is a difficult problem that has not yet been solved well. However, it is one of the few tasks in which hand design excels over machine design, so it is worth trying to solve. With good placement, automatic layout can become a useful reality. Until such time, these systems will provide alternative layouts from which a designer can choose.
The pads of an integrated-circuit chip are large areas of metal that are left unprotected by the overglass layer so they can be connected to the leads of the IC package. Although large compared to other circuit elements, the pads are very small and make a difficult surface on which to bond connecting wires. Automatic wire-bonding machines can make accurate connections, but they must be programmed for each pad configuration. The chip designer needs a tool that will help with all aspects of bonding pads.
The immediate objection of many designers to the offer of automatic pad layout is that they want to place the pads manually. Given that pads consume large areas, the designer often wants total placement control to prevent wasted space. Chips have not typically had many pads anyway, so manual techniques do not consume much time.
The answer to such objections is that any task, no matter how simple, must be automated in a complete design system. As the number of special-purpose chips increases and their production volume decreases, it will be more important to design automatically than to design with optimal space. Also, the programming of the bonding machine can become bottlenecked if it is not directly linked to the CAD system. In modern chips there can be hundreds of pads; placing them manually will lead to tedium and confusion. Finally, there are only a few considerations that must be kept in mind when doing pad layout and these can be easily automated.
One consideration in pad placement is that the pads should be near the perimeter of the chip. The bonding machine may spatter metal and do damage to the circuitry if it has to reach into the center. Some fabrication processes even require that no active circuitry be placed outside of the pads. Others, however, allow pads anywhere on the chip. Another reason to place pads on the edge is to keep the bonding wires from crossing. The pads must present an uncluttered view from the outside of the chip.
In addition to being located on the edge of the chip, pads should also be placed uniformly. This means that there must be approximately the same number of pads on all four edges and that they must be spaced evenly along each edge. On special chips that have rectangular layout, the pads may be evenly spaced along only two edges. Equal spacing makes automatic bonding easier, and uniform pad density keeps the bonding wires from cluttering and possibly shorting. The proper limitations of pad spacing must also be taken into consideration (currently no less than 200 microns between centers [Keller]).
One algorithm for pad placement is called roto-routing [Johannsen]. This technique starts by sorting the pads into the same circular sequence as are their connection points in the chip. The pads are then placed into a uniform spacing around the chip and rotated through all positions. The one position that minimizes overall wire length to the pads is used.
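A sketch of the rotation search: with the pads already sorted into the same circular order as their connection points, try every rotation around the evenly spaced perimeter slots and keep the cheapest. The wire_length helper and the assumption that there are as many slots as pads are illustrative.

```python
def roto_route(pads, slots, wire_length):
    """Assign circularly ordered pads to evenly spaced perimeter slots by
    trying every rotation and keeping the one with the smallest total wire
    length.  wire_length(pad, slot) is an assumed helper (e.g. the distance
    from the slot to the pad's connection point)."""
    n = len(slots)
    best_rotation, best_cost = None, None
    for r in range(n):
        cost = sum(wire_length(pad, slots[(i + r) % n])
                   for i, pad in enumerate(pads))
        if best_cost is None or cost < best_cost:
            best_rotation, best_cost = r, cost
    return {pad: slots[(i + best_rotation) % n] for i, pad in enumerate(pads)}
```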
Another concern in proper pad layout is the location of power and ground pads. Although some applications demand multiple power and ground pads for high power consumption, there is typically only one of each, and they should generally be on opposite sides of the chip, because they are usually bonded to leads that are on opposite sides of the package. In addition, they must be correctly connected to the other pads in order to get proper distribution of power and ground. Figure 4.16 shows one configuration of power, ground, and pads. The ground runs outside of the pads because it is less critical than the power rail, which runs just inside. This arrangement allows each pad to have easy access to the supply voltages it needs to function properly. In order to get these voltages to the chip, however, there must be a gap in the inner power rail.
Routing of pad signals to the main chip circuitry can be done automatically with standard switchbox techniques. It is important to try to place the pads close to their destination so that signal wires do not run the length and width of the chip. Techniques for automatically assigning pads to signals can take into account both wire length and the importance of that signal's timing. For example, a two-phase clock brought in from outside should be placed on two pads that have similar distances to the active circuit.
Although bonding-pad placement is a minor aspect of integrated-circuit design, it is a necessary part of any good CAD system. Changing design methodologies call for tools that can do all the work and prevent minor oversights from turning into major failures. There are detailed constraints in pad layout, and failure to observe any one of them can cause the chip to fail totally.
Abraham Lincoln, the son of a farmer, was born near Hodgenville, Kentucky, on 12th February, 1809. Although his parents were virtually illiterate, and he spent only a year at school, he developed a love of reading. In March 1830, the Lincoln family moved to Illinois.
After helping his father clear and fence the new farm, Lincoln moved to New Salem, where he worked as a storekeeper, postmaster and surveyor. He took a keen interest in politics and supported the Whig Party. In 1834 Lincoln was elected to the Illinois State Legislature where he argued that the role of federal government was to encourage business by establishing a national bank, imposing protective tariffs and improving the country's transport system.
In his spare time Lincoln continued his studies and became a lawyer after passing his bar examination in 1836. There was not much legal work in New Salem and the following year he moved to Springfield, the new state capital of Illinois.
In November, 1842, Lincoln married Mary Todd, the daughter of a prosperous family from Kentucky. The couple had four sons: Robert Lincoln (1843-1926), Edward Baker Lincoln (1846-50), William Lincoln (1850-62) and Thomas Lincoln (1853-1871). Three of the boys died young and only Robert lived long enough to marry and have children.
In 1844 Lincoln formed a legal partnership with William Herndon. The two men worked well together and shared similar political views. Herndon later claimed that he was instrumental in changing Lincoln's views on slavery.
Lincoln continued to build up his legal work and in 1850 obtained the important role of attorney for the Illinois Central Railroad. He also defended the son of a friend, William Duff Armstrong, who had been charged with murder. Lincoln successfully undermined the testimony of the prosecution's star witness, Charles Allen, and Armstrong was found not guilty.
In the Illinois State Legislature Lincoln spoke against slavery but believed that Southern states had the right to maintain their current system. When Elijah Lovejoy, an anti-slavery newspaperman, was killed, Lincoln refused to condemn lynch-law and instead criticized the extreme policies of the American Anti-Slavery Society.
In 1856 Lincoln joined the Republican Party and two years later challenged Stephen A. Douglas for his seat in the Senate. Lincoln was opposed to Douglas's proposal that the people living in the Louisiana Purchase (Louisiana, Arkansas, Oklahoma, Kansas, Missouri, Nebraska, Iowa, the Dakotas, Montana, and parts of Minnesota, Colorado and Wyoming) should be allowed to own slaves. Lincoln argued that the territories must be kept free for "poor people to go and better their condition".
Lincoln raised the issue of slavery again in 1858 when he made a speech at Quincy, Illinois. Lincoln commented: "We have in this nation the element of domestic slavery. The Republican Party think it wrong - we think it is a moral, a social, and a political wrong. We think it is wrong not confining itself merely to the persons of the States where it exists, but that it is a wrong which in its tendency, to say the least, affects the existence of the whole nation. Because we think it wrong, we propose a course of policy that shall deal with it as a wrong. We deal with it as with any other wrong, insofar as we can prevent it growing any larger, and so deal with it that in the run of time there may be some promise of an end to it."
Lincoln's speech upset Southern slaveowners and poor whites, who valued the higher social status they enjoyed over slaves. However, with rapid European immigration taking place in the North, they knew they had a declining influence over federal government. Their concern turned into outrage when in 1860 the Republican Party nominated Lincoln as its presidential candidate. Hannibal Hamlin of Maine, a Radical Republican, was selected as his running mate.
The Democratic Party that met in Charleston in April, 1860, was deeply divided. Stephen A. Douglas was the choice of most northern Democrats but was opposed by those in the Deep South. When Douglas won the nomination, Southern delegates decided to hold another convention in Baltimore and in June selected John Breckinridge of Kentucky as their candidate. The situation was further complicated by the formation of the Constitutional Union Party and the nomination of John Bell of Tennessee as its presidential candidate.
Lincoln won the presidential election with 1,866,462 votes (18 free states) and beat Stephen A. Douglas (1,375,157 - 1 slave state), John Breckinridge (847,953 - 13 slave states) and John Bell (589,581 - 3 slave states).
Lincoln selected his Cabinet carefully as he knew he would need a united government to face the serious problems ahead. His team included William Seward (Secretary of State), Salmon Chase (Secretary of the Treasury), Simon Cameron (Secretary of War), Gideon Welles (Secretary of the Navy), Edward Bates (Attorney General), Caleb Smith (Secretary of the Interior) and Montgomery Blair (Postmaster General).
Over the course of his presidency he made only seven changes to his Cabinet: Edwin M. Stanton (Secretary of War - 1862), John Usher (Secretary of the Interior - 1863), William Fessenden (Secretary of the Treasury - 1864), James Speed (Attorney General - 1864), William Dennison (Postmaster General - 1864), Hugh McCulloch (Secretary of the Treasury - 1865) and James Harlan (Secretary of the Interior - 1865).
In the three months that followed the election of Lincoln, seven states seceded from the Union: South Carolina, Mississippi, Florida, Alabama, Georgia, Louisiana and Texas. Representatives from these seven states quickly established a new political organization, the Confederate States of America.
On 8th February the Confederate States of America adopted a constitution and within ten days had elected Jefferson Davis as its president and Alexander Stephens as vice-president. Montgomery, Alabama, became its capital and the Stars and Bars was adopted as its flag. Davis was also authorized to raise 100,000 troops.
At his inaugural address, Abraham Lincoln attempted to avoid conflict by announcing that he had no intention "to interfere with the institution of slavery in the states where it exists. I believe I have no lawful right to do so, and I have no inclination to do so." He added: "The government will not assail you. You can have no conflict without yourselves being the aggressors."
President Jefferson Davis took the view that after a state seceded, federal forts became the property of the state. On 12th April, 1861, General Pierre T. Beauregard demanded that Major Robert Anderson surrender Fort Sumter in Charleston harbour. Anderson replied that he would be willing to leave the fort in two days when his supplies were exhausted. Beauregard rejected this offer and ordered his Confederate troops to open fire. After 34 hours of bombardment the fort was severely damaged and Anderson was forced to surrender.
On hearing the news Lincoln called a special session of Congress and proclaimed a blockade of Gulf of Mexico ports. This strategy was based on the Anaconda Plan developed by General Winfield Scott, the commanding general of the Union Army. It involved the army occupying the line of the Mississippi and blockading Confederate ports. Scott believed that if this was done successfully the South would negotiate a peace deal. However, at the start of the war, the US Navy had only a small number of ships and was in no position to guard all 3,000 miles of Southern coast.
On 15th April, 1861, Lincoln called on the governors of the Northern states to provide 75,000 militia to serve for three months to put down the insurrection. Virginia, North Carolina, Arkansas and Tennessee, all refused to send troops and joined the Confederacy. Kentucky and Missouri were also unwilling to supply men for the Union Army but decided not to take sides in the conflict.
Some states responded well to Lincoln's call for volunteers. The governor of Pennsylvania offered 25 regiments, whereas Ohio provided 22. Most men were encouraged to enlist by bounties offered by the state governments. This money attracted the poor and the unemployed. Many black Americans also attempted to join the army. However, the War Department quickly announced that it had "no intention to call into service of the Government any coloured soldiers." Instead, black volunteers were given jobs as camp attendants, waiters and cooks.
General Winfield Scott was seventy-five when the Civil War started, so Lincoln persuaded him to retire and appointed General Irvin McDowell as head of the Union Army. Lincoln sent McDowell to take Richmond, the new base of the Confederate government. On 21st July McDowell engaged the Confederate Army at Bull Run. The Confederate troops, led by Joseph E. Johnston, Thomas Stonewall Jackson, James Jeb Stuart and Pierre T. Beauregard, easily defeated the inexperienced soldiers of the Union Army. The South had won the first great battle of the war and the Northern casualties totaled 1,492 with another 1,216 missing.
On 30th August, 1861, Major General John C. Fremont, commander of the Union Army in St. Louis, proclaimed that all slaves owned by Confederates in Missouri were free. Lincoln was furious when he heard the news as he feared that this action would force slave-owners in border states to join the Confederate Army. Lincoln asked Fremont to modify his order and free only slaves owned by Missourians actively working for the South. When Fremont refused, he was sacked and replaced by the more conservative, General Henry Halleck.
In November, 1861, Lincoln decided to appoint George McClellan, who was only 34 years old, as commander in chief of the Union Army. He developed a strategy to defeat the Confederate Army that included an army of 273,000 men. His plan was to invade Virginia from the sea and to seize Richmond and the other major cities in the South. McClellan believed that to keep resistance to a minimum, it should be made clear that the Union forces would not interfere with slavery and would help put down any slave insurrections.
In January 1862 the Union Army began to push the Confederates southward. The following month Ulysses S. Grant took his army along the Tennessee River with a flotilla of gunboats and captured Fort Henry. This broke the communications of the extended Confederate line and Joseph E. Johnston decided to withdraw his main army to Nashville. He left 15,000 men to protect Fort Donelson on the Cumberland River but this was not enough and Grant had no difficulty taking this prize as well. With western Tennessee now secured, Lincoln was able to set up a Union government in Nashville by appointing Andrew Johnson as its new governor.
George McClellan appointed Allan Pinkerton to employ his agents to spy on the Confederate Army. His reports exaggerated the size of the enemy and McClellan was unwilling to launch an attack until he had more soldiers available. Under pressure from Radical Republicans in Congress, Abraham Lincoln decided in January, 1862, to appoint Edwin M. Stanton as his new Secretary of War.
Soon after this Lincoln ordered George McClellan to appear before a committee investigating the way the war was being fought. On 15th January, 1862, McClellan had to face the hostile questioning of Benjamin Wade and Zachariah Chandler. Wade asked McClellan why he was refusing to attack the Confederate Army. He replied that he had to prepare the proper routes of retreat. Chandler then said: "General McClellan, if I understand you correctly, before you strike at the rebels you want to be sure of plenty of room so that you can run in case they strike back." Wade added "Or in case you get scared". After McClellan left the room, Wade and Chandler came to the conclusion that McClellan was guilty of "infernal, unmitigated cowardice".
As a result of this meeting Lincoln decided he must find a way to force George McClellan into action. On 31st January he issued General War Order Number One. This ordered McClellan to begin the offensive against the enemy before the 22nd February. Lincoln also insisted on being consulted about McClellan's military plans. Lincoln disagreed with McClellan's desire to attack Richmond from the east. Lincoln only gave in when the division commanders voted 8 to 4 in favour of McClellan's strategy. However, Lincoln no longer had confidence in McClellan and removed him from supreme command of the Union Army. He also insisted that McClellan leave 30,000 men behind to defend Washington.
In May, 1862 General David Hunter began enlisting black soldiers in the occupied districts of South Carolina. He was ordered to disband the 1st South Carolina (African Descent) but eventually got approval from Congress for his action. Hunter also issued a statement that all slaves owned by Confederates in the area were free. Lincoln quickly ordered Hunter to retract his proclamation as he still feared that this action would force slave-owners in border states to join the Confederates.
Radical Republicans were furious and John Andrew, the governor of Massachusetts, said that "from the day our government turned its back on the proclamation of General Hunter, the blessing of God has been withdrawn from our arms." The actions of General David Hunter and Lincoln's reaction stimulated a discussion on the recruitment of black soldiers in the Northern press. Wendell Phillips asked, "How many times are we to save Kentucky and lose the war?" This debate was also taking place in the Cabinet, as Edwin M. Stanton was now advocating the creation of black regiments in the Union Army.
Horace Greeley, editor of the New York Tribune, one of the leaders of the anti-slavery movement, urged Lincoln to "convert the war into a war on slavery". Lincoln replied that he would continue to place the Union ahead of all else. "My paramount object in this struggle is to save the Union, and is not either to save or destroy slavery. If I could save the Union without freeing any slave, I would do it; and if I could save it by freeing all the slaves, I would do it; and if I could do it by freeing some and leaving others alone, I would also do that."
During the summer of 1862, George McClellan and the Army of the Potomac, took part in what became known as the Peninsular Campaign. The main objective was to capture Richmond, the base of the Confederate government. McClellan and his 115,000 troops encountered the Confederate Army at Williamsburg on 5th May. After a brief battle the Confederate forces retreated South.
McClellan moved his troops into the Shenandoah Valley and, along with John C. Fremont, Irvin McDowell and Nathaniel Banks, surrounded Thomas Stonewall Jackson and his 17,000-man army. First Jackson attacked John C. Fremont at Cross Keys before turning on Irvin McDowell at Port Republic. Jackson then rushed his troops east to join up with Joseph E. Johnston and the Confederate forces fighting McClellan.
General Joseph E. Johnston with some 41,800 men counter-attacked McClellan's slightly larger army at Fair Oaks. The Union Army lost 5,031 men and the Confederate Army 6,134. Johnston was badly wounded during the battle and General Robert E. Lee now took command of the Confederate forces.
Major General John Pope, the commander of the new Army of Virginia, was instructed to move east toward the Blue Ridge Mountains and Charlottesville. It was hoped that this move would help McClellan by drawing Robert E. Lee away from defending Richmond. Lee's 80,000 troops were now faced with the prospect of fighting two large armies: McClellan (90,000) and Pope (50,000).
Joined by Thomas Stonewall Jackson, the Confederate troops constantly attacked McClellan and on 27th June they broke through at Gaines Mill. Convinced he was outnumbered, McClellan retreated to James River. Lincoln, frustrated by McClellan's lack of success, sent in Major General John Pope, but he was easily beaten back by Jackson.
George McClellan wrote to Abraham Lincoln complaining that a lack of resources was making it impossible to defeat the Confederate forces. He also made it clear that he was unwilling to employ tactics that would result in heavy casualties. He claimed that "every poor fellow that is killed or wounded almost haunts me!" On 1st July, 1862, McClellan and Lincoln met at Harrison Landing. McClellan once again insisted that the war should be waged against the Confederate Army and not slavery.
Salmon Chase (Secretary of the Treasury), Edwin M. Stanton (Secretary of War) and vice president Hannibal Hamlin, who were all strong opponents of slavery, led the campaign to have George McClellan sacked. Unwilling to do this, Abraham Lincoln decided to put McClellan in charge of all forces in the Washington area.
George McClellan became a field commander again when the Confederate Army invaded Maryland in September. McClellan and Major General Ambrose Burnside attacked the armies of Robert E. Lee and Thomas Stonewall Jackson at Antietam on 17th September. Outnumbered, Lee and Jackson held out until Ambrose Hill and reinforcements arrived. It was the most costly day of the war with the Union Army having 2,108 killed, 9,549 wounded and 753 missing.
Although far from an overwhelming victory, Lincoln realized the significance of Antietam and on 22nd September, 1862, he felt strong enough to issue his Emancipation Proclamation. Lincoln told the nation that from the 1st January, 1863, all slaves in states or parts of states, still in rebellion, would be freed. However, to keep the support of the conservatives in the government, this proclamation did not apply to those border slave states such as Delaware, Maryland, Kentucky and Missouri that had remained loyal to the Union.
Lincoln now wanted George McClellan to go on the offensive against the Confederate Army. However, McClellan refused to move, complaining that he needed fresh horses. Radical Republicans now began to openly question McClellan's loyalty. "Could the commander be loyal who had opposed all previous forward movements, and only made this advance after the enemy had been evacuated" wrote George W. Julian. Whereas William P. Fessenden came to the conclusion that McClellan was "utterly unfit for his position".
Frustrated by McClellan's unwillingness to attack, Lincoln recalled him to Washington with the words: "My dear McClellan: If you don't want to use the Army I should like to borrow it for a while." On 7th November Lincoln removed McClellan from all commands and replaced him with Ambrose Burnside.
In January 1863 it was clear that state governors in the North could not raise enough troops for the Union Army. On 3rd March, the federal government passed the Enrollment Act. This was the first example of conscription or compulsory military service in United States history. The decision to allow men to avoid the draft by paying $300 to hire a substitute resulted in the accusation that this was a rich man's war and a poor man's fight.
Lincoln was also now ready to give his approval to the formation of black regiments. He had objected in May, 1862, when General David Hunter began enlisting black soldiers into the 1st South Carolina (African Descent) regiment. However, nothing was said when Hunter created two more black regiments in 1863 and soon afterwards Lincoln began encouraging governors and generals to enlist freed slaves.
John Andrew, the governor of Massachusetts, and a passionate opponent of slavery, began recruiting black soldiers and established the 5th Massachusetts (Colored) Cavalry Regiment and the 54th Massachusetts (Colored) and the 55th Massachusetts (Colored) Infantry Regiments. In all, six regiments of US Colored Cavalry, eleven regiments and four companies of US Colored Heavy Artillery, ten batteries of the US Colored Light Artillery, and 100 regiments and sixteen companies of US Colored Infantry were raised during the war. By the end of the conflict nearly 190,000 black soldiers and sailors had served in the Union forces.
In December, 1862, Ambrose Burnside, commander of the Army of the Potomac, attacked General Robert E. Lee at Fredericksburg, Virginia. Sharpshooters based in the town initially delayed the Union Army from building a pontoon bridge across the Rappahannock River. After clearing out the snipers the federal forces had the problem of mounting frontal assaults against troops commanded by James Longstreet. At the end of the day the Union Army had 12,700 men killed or wounded. The well-protected Confederate Army suffered losses of 5,300. Ambrose Burnside wanted to renew the attack the following morning but was talked out of it by his commanders.
After the disastrous battle at Fredericksburg, Burnside was replaced by Joseph Hooker. Three months later Hooker, with over 104,000 men, began to move towards the Confederate Army. In April, 1863, Hooker decided to attack the Army of Northern Virginia that had been entrenched on the south side of the Rappahannock River. Hooker crossed the river and took up position at Chancellorsville.
Although outnumbered two to one, Robert E. Lee, opted to split his Confederate Army into two groups. Lee left 10,000 men under Jubal Early, while he and Thomas Stonewall Jackson on 2nd May, successfully attacked the flank of Hooker's army. However, after returning from the battlefield Jackson was accidentally shot by one of his own men. Jackson's left arm was successfully amputated but he developed pneumonia and he died eight days later.
On the 3rd May, James Jeb Stuart, who had taken command of Jackson's troops, mounted another attack and drove Hooker back further. The following day Robert E. Lee and Jubal Early joined the attack on the Union Army. By 6th May, Hooker had lost over 11,000 men and decided to retreat from the area.
Later that month Joseph E. Johnston ordered General John Pemberton to attack Ulysses S. Grant at Clinton, Mississippi. Considering this too risky, Pemberton decided to attack Grant's supply train on the road between Grand Gulf and Raymond. Discovering Pemberton's plans, Grant attacked the Confederate Army at Champion's Hill. Pemberton was badly defeated and with the remains of his army returned to their fortifications around Vicksburg. After two failed assaults, Grant decided to starve Pemberton out. This strategy proved successful and on 4th July, Pemberton surrendered the city. The western Confederacy was now completely isolated from the eastern Confederacy and the Union Army had total control of the Mississippi River.
During the summer of 1863 Robert E. Lee decided to take the war to the north. The Confederate Army reached Gettysburg, Pennsylvania on 1st July. The town was quickly taken but the Union Army, led by Major General George Meade, arrived in force soon afterwards and for the next two days the town was the scene of bitter fighting. Attacks led by James Jeb Stuart and James Longstreet proved costly and by the 5th July, Lee decided to retreat south. Both sides suffered heavy losses with Lee losing 28,063 men and Meade 23,049.
Lincoln was encouraged by the army's victories at Vicksburg and Gettysburg, but was dismayed by the news of the Draft Riots in several American cities. There was heavy loss of life in Detroit but the worst rioting took place in New York City in July. The mob set fire to an African American church and orphanage, and attacked the office of the New York Tribune. The riots were started by Irish immigrants, and the main victims were African Americans and activists in the anti-slavery movement. The Union Army was sent in and had to open fire on the rioters in order to gain control of the city. By the time the riot was over, nearly 1,000 people had been killed or wounded.
In September, 1863, General Braxton Bragg and his troops attacked Union armies led by George H. Thomas and William Rosecrans at Chickamauga. Thomas was able to hold firm but Rosecrans and his men fled to Chattanooga. Bragg followed and was attempting to starve Rosecrans out when Union forces led by Ulysses S. Grant, Joseph Hooker and William Sherman arrived. Bragg was now forced to retreat and did not stop until he reached Dalton, Georgia.
Major General George Meade also followed the army of Robert E. Lee back south. Lee ordered several counter-attacks but was unable to prevent the Union Army advance taking place. Lee decided to dig in along the west bank of the Mine Run. Considering the fortifications too strong, Meade decided against an assault and spent the winter on the north bank of the Rapidan.
In March, 1864, Ulysses S. Grant was named lieutenant general and the commander of the Union Army. He joined the Army of the Potomac where he worked with George Meade and Philip Sheridan. They crossed the Rapidan and entered the Wilderness. When Lee heard the news he sent in his troops, hoping that the Union's superior artillery and cavalry would be offset by the heavy underbrush of the Wilderness. Fighting began on the 5th May and two days later smoldering paper cartridges set fire to dry leaves and around 200 wounded men were either suffocated or burned to death. Of the 88,892 men that Grant took into the Wilderness, 14,283 were casualties and 3,383 were reported missing. Robert E. Lee lost 7,750 men during the fighting.
On 7th May Ulysses S. Grant gave William Sherman the task of destroying the Confederate Army in Tennessee. Joseph E. Johnston and his army retreated and after some brief skirmishes the two sides fought at Resaca (14th May), Adairsville (17th May), New Hope Church (25th May), Kennesaw Mountain (27th June) and Marietta (2nd July). President Jefferson Davis was unhappy about Johnston's withdrawal policy and on 17th July replaced him with the more aggressive John Hood. He immediately went on the attack and hit George H. Thomas and his men at Peachtree Creek. He was badly beaten and lost 2,500 men. Two days later he took on William Sherman at the Battle of Atlanta and lost another 8,000 men.
Attempts to clear out the Shenandoah Valley by Major General Franz Sigel in May and Major General David Hunter in June, ended in failure. Major General Jubal Early, who defeated Hunter, was sent north with 14,000 men in an attempt to draw off troops from Grant's army. Major General Lew Wallace encountered Early by the Monacacy River and although defeated was able to slow his advance to Washington. His attempts to breakthrough the ring forts around the city ended in failure. Lincoln, who witnessed the attack from Fort Stevens, became the first president in American history to see action while in office.
In the summer of 1864 the supporters of the Union became more confident they would win the war. Politicians began to debate what should happen to the South after the war. Radical Republicans were worried that Lincoln would be too lenient on the supporters of the Confederacy. Benjamin Wade and Henry Winter Davis decided to sponsor a bill that provided for the administration of the affairs of southern states by provisional governors. They argued that civil government should only be re-established when half of the male white citizens took an oath of loyalty to the Union.
The Wade-Davis Bill was passed on 2nd July, 1864, with only one Republican voting against it. However, Lincoln refused to sign it. Lincoln defended his decision by telling Zachariah Chandler, one of the bill's supporters, that it was a question of time: "this bill was placed before me a few minutes before Congress adjourns. It is a matter of too much importance to be swallowed in that way." Six days later Lincoln issued a proclamation explaining his views on the bill. He argued that he had rejected it because he did not wish "to be inflexibly committed to any single plan of restoration".
The Radical Republicans were furious with Lincoln's decision. On 5th August, Benjamin Wade and Henry Winter Davis published an attack on Lincoln in the New York Tribune. In what became known as the Wade-Davis Manifesto, the men argued that Lincoln's actions had been taken "at the dictation of his personal ambition" and accused him of "dictatorial usurpation". They added that: "he must realize that our support is of a cause and not of a man."
In August, 1864, the Union Army made another attempt to take control of the Shenandoah Valley. Philip Sheridan and 40,000 soldiers entered the valley and soon encountered troops led by Jubal Early who had just returned from Washington. After a series of minor defeats, Sheridan eventually gained the upper hand. Sheridan now burnt and destroyed anything of value in the area and after defeating Early in another large-scale battle on 19th October, the Union Army took control of the Shenandoah Valley.
With the South on the verge of defeat, a growing number of politicians in the North began to criticize Lincoln for not negotiating a peace deal with Jefferson Davis. Even former supporters such as Horace Greeley, editor of the New York Tribune, accused him of prolonging the war to satisfy his personal ambition. Others on the right, such as Clement Vallandigham, claimed that Lincoln was waging a "wicked war in order to free the slaves". Other critics such as Fernando Wood, the mayor of New York, advocated that if Lincoln did not change his policies the city should secede from the Union.
Leading members of the Republican Party began to suggest that Lincoln should replace Hannibal Hamlin as his running mate in the 1864 presidential election. Hamlin was a Radical Republican and it was felt that Lincoln was already sure to gain the support of this political group. It was argued that what Lincoln needed was the votes of those who had previously supported the Democratic Party in the North.
Lincoln's original choice as his vice-president was General Benjamin Butler. Butler, a war hero, had been a member of the Democratic Party, but his experiences during the American Civil War had made him increasingly radical. Simon Cameron was sent to talk to Butler at Fort Monroe about joining the campaign. However, Butler rejected the offer, jokingly saying that he would only accept if Lincoln promised "that within three months after his inauguration he would die".
It was now decided that Andrew Johnson, the governor of Tennessee, would make the best candidate for vice president. By choosing the governor of Tennessee, Lincoln would emphasize that the Southern states were still part of the Union. He would also gain the support of the large War Democrat faction. At a convention of the Republican Party on 8th June, 1864, Johnson received 200 votes to Hamlin's 150 and became Lincoln's running mate. This upset Radical Republicans as Johnson had previously made it clear that he was a supporter of slavery.
The military victories of Ulysses S. Grant, William Sherman, George Meade, Philip Sheridan and George H. Thomas in the American Civil War reinforced the idea that the Union Army was close to bringing the war to an end. This helped Lincoln's presidential campaign and, with 2,216,067 votes, he comfortably beat General George McClellan (1,808,725) in the election.
By the beginning of 1865, Fort Fisher, North Carolina, was the last port under the control of the Confederate Army. Fort Fisher fell to a combined effort of the Union Army and the US Navy on 15th January. William Sherman removed all resistance in the Shenandoah Valley and then marched to South Carolina. On 17th February, Columbia, the capital of South Carolina, was taken. Columbia was virtually burnt to the ground; some people claimed the damage was done by Sherman's men, while others said it was carried out by the retreating Confederate Army.
In March, 1865, William Sherman joined Ulysses S. Grant and the main army surrounding Richmond. On 1st April Sherman attacked the Confederate Army at Five Forks. The Confederates, led by Major General George Pickett, were overwhelmed and lost 5,200 men. On hearing the news, Robert E. Lee decided to abandon Richmond and join Joseph E. Johnston and his forces in South Carolina.
President Jefferson Davis, his family and government officials, were forced to flee from Richmond. Soon afterwards the Union Army took the city and Lincoln arrived on 4th April. Protected by ten seamen, he walked the streets and when one black man fell to his knees in front of him, Lincoln told him: "Don't kneel to me. You must kneel to God only and thank him for your freedom." Lincoln travelled to the Confederate Executive Mansion and sat for a while in the former leader's chair before heading back to Washington.
Robert E. Lee, with an army of 8,000 men, probed the Union Army at Appomattox but faced by 110,000 men he decided the cause was hopeless. He contacted Ulysses S. Grant and after agreeing terms on 9th April, surrendered his army at Appomattox Court House. Grant issued a brief statement: "The war is over; the rebels are our countrymen again and the best sign of rejoicing after the victory will be to abstain from all demonstrations in the field."
At his Cabinet meeting on 14th April, Lincoln commented: "There are many in Congress who possess feelings of hate and vindictiveness in which I do not sympathize and cannot participate." He added that enough blood had been shed and would do what he could to prevent any "vengeful actions".
That night Lincoln went to Ford's Theatre with his wife, Mary Lincoln, Clara Harris and Major Henry Rathbone to see a play called Our American Cousin. Lincoln asked Thomas Eckert, chief of the War Department telegraph office, to be his bodyguard. However, Edwin M. Stanton refused permission for Eckert to go claiming he had an important task for him to perform that night. In fact, this was not true and Eckert spent the evening at home.
John Parker, a constable in the Washington Metropolitan Police Force, was detailed to sit on the chair outside the presidential box. During the third act Parker left to get a drink. Soon afterwards, John Wilkes Booth entered Lincoln's box and shot the president in the back of the head. Booth then jumped to the stage eleven feet below. Despite fracturing his ankle, he was able to reach his horse and gallop out of the city.
Lincoln was carried to a house across the street from the theatre, where he died early the next morning. Over the next few days Mary Surratt, Lewis Powell, George Atzerodt, David Herold, Samuel Mudd, Michael O'Laughlin, Edman Spangler and Samuel Arnold were all arrested and charged with conspiring to murder Lincoln. Edwin M. Stanton, the Secretary of War, argued that they should be tried by a military court as Lincoln had been Commander in Chief of the army. Several members of the cabinet, including Gideon Welles (Secretary of the Navy), Edward Bates (the former Attorney General), Orville H. Browning (Secretary of the Interior), and Henry McCulloch (Secretary of the Treasury), disapproved, preferring a civil trial. However, James Speed, the Attorney General, agreed with Stanton, and the new president, Andrew Johnson, ordered the formation of a nine-man military commission to try the conspirators involved in the assassination of Lincoln.
The trial began on 10th May, 1865. The military commission included leading generals such as David Hunter, Lewis Wallace, Thomas Harris and Alvin Howe. Joseph Holt was chosen as the government's chief prosecutor. During the trial Holt attempted to persuade the military commission that Jefferson Davis and the Confederate government had been involved in the conspiracy.
Joseph Holt attempted to obscure the fact that there were two plots: the first to kidnap and the second to assassinate. It was important for the prosecution not to reveal the existence of a diary taken from the body of John Wilkes Booth. The diary made it clear that the assassination plan dated from 14th April. The defence surprisingly did not call for Booth's diary to be produced in court.
On 29th June, 1865 Mary Surratt, Lewis Powell, George Atzerodt, David Herold, Samuel Mudd, Michael O'Laughlin, Edman Spangler and Samuel Arnold were found guilty of being involved in the conspiracy to murder Lincoln. Surratt, Powell, Atzerodt and Herold were hanged at Washington Penitentiary on 7th July, 1865. Surratt, who was expected to be reprieved, was the first woman in American history to be executed.
The decision to hold a military court received further criticism when John Surratt, who faced a civil trial in 1867, was not convicted by the jury. Michael O'Laughlin died in prison but Samuel Mudd, Edman Spangler and Samuel Arnold were all pardoned by President Andrew Johnson in 1869.
(1) Annie L. Burton, Abraham Lincoln (1909)
In a little clearing in the backwoods of Harding County, Kentucky, there stood years ago a rude cabin within whose walls Abraham Lincoln passed his childhood. An "unaccountable" man he has been called, and the adjective was well chosen, for who can account for a mind and nature like Lincoln's with the ancestry he owned? His father was a thriftless, idle carpenter, scarcely supporting his family, and with but the poorest living. His mother was an uneducated woman, but must have been of an entirely different nature, for she was able to impress upon her boy a love of learning. During her life, his chief, in fact his only book, was the Bible, and in this he learned to read. Just before he was nine years old, the father brought his family across the Ohio River into Illinois, and there in the unfloored log cabin, minus windows and doors, Abraham lived and grew. It was during this time that the mother died, and in a short time the shiftless father with his family drifted back to the old home, and here found another for his children in one who was a friend of earlier days. This woman was of a thrifty nature, and her energy made him floor the cabin, hang doors, and open up windows. She was fond of the children and cared for them tenderly, and to her the boy Abraham owed many pleasant hours.
As he grew older, his love for knowledge increased and he obtained whatever books he could, studying by the firelight, and once walking six miles for an English Grammar. After he read it, he walked the six miles to return it. He needed the book no longer, for with this as with his small collection of books, what he once read was his. He absorbed the books he read.
During these early years he did "odd jobs" for the neighbors. Even at this age, his gift of story telling was a notable one, as well as his sterling honesty. His first knowledge of slavery in all its horrors came to him when he was about twenty-one years old. He had made a trip to New Orleans, and there in the old slave market he saw an auction. His face paled, and his spirits rose in revolt at the coarse jest of the auctioneer, and there he registered a vow within himself, "If ever I have a chance to strike against slavery, I will strike and strike hard." To this end he worked and for this he paid "the last full measure of devotion."
(2) Abraham Lincoln, debate with Stephen Douglas in Alton, Illinois (15th October, 1858)
Stephen Douglas assumes that I am in favor of introducing a perfect social and political equality between the white and black races. These are false issues. The real issue in this controversy is the sentiment on the part of one class that looks upon the institution of slavery as a wrong, and of another class that does not look upon it as a wrong. One of the methods of treating it as a wrong is to make provision that it shall grow no larger.
(3) Abraham Lincoln, speech at Quincy, Illinois (1858)
We have in this nation the element of domestic slavery. The Republican Party think it wrong - we think it is a moral, a social, and a political wrong. We think it is wrong not confining itself merely to the persons of the States where it exists, but that it is a wrong which in its tendency, to say the least, affects the existence of the whole nation. Because we think it wrong, we propose a course of policy that shall deal with it as a wrong. We deal with it as with any other wrong, insofar as we can prevent it growing any larger, and so deal with it that in the run of time there may be some promise of an end to it.
(4) The journalist, Henry Villard, described the Abraham Lincoln and Stephen A. Douglas debate at Ottawa, Illinois, on 21st August, 1858.
The first joint debate between Douglas and Lincoln, which I attended, took place on the afternoon of August 21, 1858, at Ottawa, Illinois. It was the great event of the day, and attracted an immense concourse of people from all parts of the State.
Senator Douglas was very small, not over four and a half feet in height, and there was a noticeable disproportion between the long trunk of his body and his short legs. His chest was broad and indicated great strength of lungs. It took but a glance at his face and head to convince one that they belonged to no ordinary man. No beard hid any part of his remarkable, swarthy features. His mouth, nose, and chin were all large and clearly expressive of much boldness and power of will. The broad, high forehead proclaimed itself the shield of a great brain. The head, covered with an abundance of flowing black hair just beginning to show a tinge of grey, impressed one with its massiveness and leonine expression. His brows were shaggy, his eyes a brilliant black.
Douglas spoke first for an hour, followed by Lincoln for an hour and a half; upon which the former closed in another half hour. The Democratic spokesman commanded a strong, sonorous voice, a rapid, vigorous utterance, a telling play of countenance, impressive gestures, and all the other arts of the practiced speaker.
As far as all external conditions were concerned, there was nothing in favour of Lincoln. He had a lean, lank, indescribably gawky figure, an odd-featured, wrinkled, inexpressive, and altogether uncomely face. He used singularly awkward, almost absurd, up-and-down and sidewise movements of his body to give emphasis to his arguments. His voice was naturally good, but he frequently raised it to an unnatural pitch.
Yet the unprejudiced mind felt at once that, while there was on the one side a skillful dialectician and debater arguing a wrong and weak cause, there was on the other a thoroughly earnest and truthful man, inspired by sound convictions in consonance with the true spirit of American institutions. There was nothing in all Douglas's powerful effort that appealed to the higher instincts of human nature, while Lincoln always touched sympathetic cords. Lincoln's speech excited and sustained the enthusiasm of his audience to the end.
(5) Henry Villard reported on the Republican Party Convention in 1860. Villard supported William H. Seward and was surprised when Abraham Lincoln won the nomination.
I was enthusiastically for the nomination of William H. Seward, who seemed to me the proper and natural leader of the Republican Party ever since his great "irrepressible conflict" speech in 1858. The noisy demonstrations of his followers, and especially of the New York delegation in his favour, had made me sure, too, that his candidacy would be irresistible. I therefore shared fully the intense chagrin of the New York and other State delegations when, on the third ballot, Abraham Lincoln received a larger vote than Seward.
I had not got over the prejudice against Lincoln with which my personal contact with him in 1858 imbued me. It seemed to me incomprehensible and outrageous that the uncouth, common Illinois politician, whose only experience in public life had been service as a member of the State legislature and in Congress for one term, should carry the day over the eminent and tried statesman, the foremost figure, indeed, in the country.
(6) In his book, Life and Times, Frederick Douglass described the 1860 Presidential Election.
The presidential canvass of 1860 was three sided, and each side had its distinctive doctrine as to the question of slavery and slavery extension. We had three candidates in the field. Stephen A. Douglas was the standard bearer of what may be called the western faction of the old divided democratic party, and John C. Breckenridge was the standard-bearer of the southern or slaveholding, faction of that party. Abraham Lincoln represented the then young, growing, and united republican party. The lines between these parties and candidates were about as distinctly and clearly drawn as political lines are capable of being drawn. The name of Douglas stood for territorial sovereignty, or in other words, for the right of the people of a territory to admit or exclude, to establish or abolish, slavery, as to them might seem best. The doctrine of Breckenridge was that slaveholders were entitled to carry their slaves into any territory of the United States and to hold them there, with or without the consent of the people of the territory; that the Constitution of its own force carried slavery and protected it into any territory open for settlement in the United States. To both these parties, factions, and doctrines, Abraham Lincoln and the republican party stood opposed. They held that the Federal Government had the right and the power to exclude slavery from the territories of the United States, and that that right and power ought to be exercised to the extent of confining slavery inside the slave States, with a view to its ultimate extinction.
(7) Thomas Johnson, Twenty-Eight Years a Slave (1909)
In the year 1860, there was great excitement in Richmond over the election of Mr. Abraham Lincoln as President of the United States. The slaves prayed to God for his success, and they prayed very especially the night before the election. We knew he was in sympathy with the abolition of Slavery. The election was the signal for a great conflict for which the Southern States were ready. The question was: Shall there be Slavery or no Slavery in the United States? The South said: Yes, there shall be Slavery.
(8) James Garfield met Abraham Lincoln for the first time in 1861. He wrote a letter to Burke A. Hinsdale describing his thoughts on the man (17th February, 1861)
On the whole I am greatly pleased with the man. He clearly shows his want of culture - and the marks of western life. But there is no touch of affectation in him and he has a peculiar power of impressing you that he is frank, direct and thoroughly honest. His remarkable good sense, simple and condensed style of expression and evident marks of indomitable will, give me great hopes for the country.
(9) Abraham Lincoln, inaugural speech (4th March, 1861)
I have no purpose, directly or indirectly, to interfere with the institution of slavery in the states where it exists. I believe I have no lawful right to do so, and I have no inclination to do so.
I consider the Union is unbroken. I shall take care that the laws of the Union be faithfully executed in all States. There need be no bloodshed or violence; and there shall be none, unless it be forced upon the national authority.
The government will not assail you. You can have no conflict without yourselves being the aggressors. You have no oath registered in heaven to destroy the government, while I shall have the most solemn one to preserve, protect, and defend it.
(10) Zachariah Chandler, letter to Henry W. Lord (27th October, 1861)
Lincoln means well but has no force of character. He is surrounded by Old Fogy Army officers more than half of whom are downright traitors and the other one half sympathize with the South. One month ago I began to doubt whether this accursed rebellion could be put down with a Revolution in the present Administration.
(11) Abraham Lincoln, speech on why he ordered General David Hunter to retract his proclamation (12th July, 1862)
General Hunter is an honest man. He was, and I hope, still is, my friend. I valued him none the less for his agreeing with me in the general wish that all men everywhere, could be free. He proclaimed all men free within certain states and I repudiated the proclamation. Yet in repudiating it, I gave dissatisfaction, if not offence, to many whose support the country can not afford to lose. And that is not the end of it. The pressure, in this direction, is still upon me, and is increasing.
(12) Horace Greeley, letter to President Abraham Lincoln (19th August, 1862)
I do not intrude to tell you - for you must know already - that a great proportion of those who triumphed in your election, and of all who desire the unqualified suppression of the rebellion now desolating our country, are sorely disappointed and deeply pained by the policy you seem to be pursuing with regard to the slaves of the Rebels.
We think you are strangely and disastrously remiss in the discharge of your official and imperative duty with regard to the emancipating provisions of the new Confiscation Act. Those provisions were designed to fight slavery with liberty. They prescribe that men loyal to the Union, and willing to shed their blood in the behalf, shall no longer be held, with the nation's consent, in bondage to persistent, malignant traitors, who for twenty years have been plotting and for sixteen months have been fighting to divide and destroy our country. Why these traitors should be treated with tenderness by you, to the prejudice of the dearest rights of loyal men, we cannot conceive.
Fremont's Proclamation and Hunter's Order favoring emancipation were promptly annulled by you; while Halleck's Number Three, forbidding fugitives from slavery to Rebels to come within his lines - an order as unmilitary as inhuman, and which received the hearty approbation of every traitor in America - with scores of like tendency, have never provoked even your remonstrance.
(13) President Abraham Lincoln, letter to Horace Greeley (22nd August, 1862)
If there be those who would not save the Union unless they could at the same time destroy slavery, I do not agree with them. My paramount object in this struggle is to save the Union, and is not either to save or destroy slavery. If I could save the Union without freeing any slave, I would do it; and if I could save it by freeing all the slaves, I would do it; and if I could do it by freeing some and leaving others alone, I would also do that.
(14) Abraham Lincoln, proclamation (22nd September, 1862)
That on the 1st day of January, A.D. 1863, all persons held as slaves within any State or designated part of a State the people whereof shall then be in rebellion against the United States shall be then, thenceforward, and forever free; and the executive government of the United States, including the military and naval authority thereof, will recognize and maintain the freedom of such persons and will do no act or acts to repress such persons, or any of them, in any efforts they may make for their actual freedom.
(15) In his diary Walt Whitman wrote how he often saw Abraham Lincoln during the American Civil War (12th August, 1863)
I see the president almost every day, as I happen to live where he passes to or from his lodgings out of town. He never sleeps at the White House during the hot season, but has quarters at a healthy location some three miles north of the city, the Soldiers' Home, a United States military establishment. I saw him this morning about 8.30 coming in to business, riding on Vermont Avenue. He always has a company of twenty-five or thirty cavalry, with sabres drawn and held upright over their shoulders. They say this guard was against his personal wish, but he let his counselors have their way. Mr. Lincoln on the saddle generally rides a good-sized, easy-going grey horse, is dressed in plain black, somewhat rusty and dusty, wears a black stiff hat, and looks about as ordinary in attire, etc., as the commonest man. I see very plainly Abraham Lincoln's dark brown face, with the deep-cut lines, the eyes, always to me with a deep latent sadness in the expression.
(16) Abraham Lincoln, letter to James C. Conkling, defending his decision to emancipate slaves being held in the Deep South (26th August, 1863)
I know, as fully as one can know the opinions of others, that some of the commanders of our armies in the field who have given us our most important successes believe the emancipation policy and the use of the colored troops constitute the heaviest blow yet dealt to the rebellion, and that at least one of these important successes could not have been achieved when it was but for the aid of black soldiers. Among the commanders holding these views are some who have never had any affinity with what is called Abolitionism or with the Republican Party politics, but who hold them purely as military opinions.
(17) Benjamin F. Butler, Autobiography and Reminiscences (1892)
In the spring of 1863, I had another conversation with President Lincoln upon the subject of the employment of negroes. The question was, whether all the negro troops then enlisted and organized should be collected together and made a part of the Army of the Potomac and thus reinforce it.
We then talked of a favourite project he had of getting rid of the negroes by colonization, and he asked me what I thought of it. I told him that it was simply impossible; that the negroes would not go away, for they loved their homes as much as the rest of us, and all efforts at colonization would not make a substantial impression upon the number of negroes in the country.
Reverting to the subject of arming the negroes, I said to him that it might be possible to start with a sufficient army of white troops, and, avoiding a march which might deplete their ranks by death and sickness, to take in ships and land them somewhere on the Southern coast. These troops could then come up through the Confederacy, gathering up negroes, who could be armed at first with arms that they could handle, so as to defend themselves and aid the rest of the army in case of rebel charges upon it. In this way we could establish ourselves down there with an army that would be a terror to the whole South.
Our conversation then turned upon another subject which had been frequently a source of discussion between us, and that was the effect of his clemency in not having deserters speedily and universally punished by death.
I called his attention to the fact that the great bounties then being offered were such a temptation for a man to desert in order to get home and enlist in another corps where he would be safe from punishment, that the army was being continually depleted at the front even if replenished at the rear.
He answered with a sorrowful face, which always came over him when he discussed this topic: "But I can't do that, General." "Well, then," I replied, "I would throw the responsibility upon the general-in-chief and relieve myself of it personally."
With a still deeper shade of sorrow he answered: "The responsibility would be mine, all the same."
(18) Abraham Lincoln, Gettysburg Address (19th November, 1863)
Four score and seven years ago, our fathers brought forth upon this continent a new nation: conceived in liberty, and dedicated to the proposition that all men are created equal. Now we are engaged in a great civil war testing whether that nation, or any nation so conceived and so dedicated can long endure. We are met on a great battlefield of that war.
We have come to dedicate a portion of that field as a final resting place for those who here gave their lives that this nation might live. It is altogether fitting and proper that we should do this. But, in a larger sense, we cannot dedicate, we cannot consecrate, we cannot hallow this ground. The brave men, living and dead, who struggled here have consecrated it, far above our poor power to add or detract. The world will little note, nor long remember, what we say here, but it can never forget what they did here.
It is for us the living, rather, to be dedicated here to the unfinished work which they who fought here have thus far so nobly advanced. It is rather for us to be here dedicated to the great task remaining before us that from these honored dead we take increased devotion to that cause for which they gave the last full measure of devotion, that we here highly resolve that these dead shall not have died in vain, that this nation, under God, shall have a new birth of freedom and that government of the people, by the people, for the people shall not perish from this earth.
(19) Thaddeus Stevens, letter to Edward McPherson about Abraham Lincoln's proclamation after his rejection of the Wade-Davis Bill (10th July, 1864)
What an infamous proclamation! The president is determined to have the electoral votes of the seceded States. The idea of pocketing a bill and then issuing a proclamation as to how far he will conform to it is matched only by signing a bill and then sending in a veto. How little of the rights of war and the law of nations our president knows.
(20) Benjamin Wade and Henry Winter Davis issued a joint statement in the New York Tribune after Abraham Lincoln vetoed the Wade-Davis Bill (5th August, 1864)
The bill directed the appointment of provisional government by and with the advice and consent of the Senate. The President, after defeating the law, proposes to appoint, without law and without the advice and consent of the Senate, military governors for the rebel States!
Whatever is done will be at his will and pleasure, by persons responsible to no law, and more interested to secure the interests and execute the will of the President than of the people; and the will of Congress is to be "held for naught unless the loyal people of the rebel States choose to adopt it."
The President must realize that our support is of a cause and not of a man and that the authority of Congress is paramount and must be respected; and if he wishes our support, he must confine himself to his executive duties - to obey and execute, not make the laws - to suppress by armed rebellion, and leave political reorganization to Congress.
(21) Sojourner Truth, letter to a friend about her meeting with President Abraham Lincoln (17th November, 1864)
Upon entering his reception room we found about a dozen persons in waiting, among them two coloured women. I had quite a pleasant time waiting until he was disengaged, and enjoyed his conversation with others; he showed as much kindness and consideration to the colored persons as to the white. One case was that of a colored woman who was sick and likely to be turned out of her house on account of her inability to pay her rent. The president listened to her with much attention, and spoke to her with kindness and tenderness.
He then congratulated me on my having been spared. Then I said, I appreciate you, for you are the best president who has ever taken the seat. He replied: "I expect you have reference to my having emancipated the slaves in my proclamation". But, said he, mentioning the names of several of his predecessors, "they were all just as good, and would have done just as I have done if the time had come."
(22) Gideon Welles went to Abraham Lincoln's bedside when he heard he had been shot. He later recorded his thoughts in his diary (15th April, 1865)
The night was dark, cloudy, and damp, and about six it began to rain. I remained in the room until then without sitting or leaving it, when, there being a vacant chair which someone left at the foot of the bed, I occupied it for nearly two hours, listening to the heavy groans and witnessing the wasting life of the good and great man who was expiring before me.
About 6 a.m. I experienced a feeling of faintness and, for the first time after entering the room, a little past eleven, I left it and the house, and took a short walk in the open air. It was a dark and gloomy morning, and rain set in before I returned to the house, some fifteen minutes later. Large groups of people were gathered every few yards, all anxious and solicitous. Some one or more from each group stepped forward as I passed to inquire into the condition of the President and to ask if there was no hope. Intense grief was on every countenance when I replied that the President could survive but a short time. The colored people especially - and there were at this time more of them, perhaps, than of whites - were overwhelmed with grief.
A little before seven, I went into the room where the dying President was rapidly drawing near the closing moments. His wife soon after made her last visit to him. The death struggle had begun. Robert, his son, stood with several others at the head of his bed. He bore himself well, but on two occasions gave way to overpowering grief and sobbed aloud, turning his head and leaning on the shoulder of Senator Sumner. The respiration of the President became suspended at intervals and at last entirely ceased at twenty-two minutes past seven.
(23) Elizabeth Keckley, Thirty Years a Slave (1868)
At 11 o'clock at night I was awakened by an old friend and neighbor, Miss M. Brown, with the startling intelligence that the entire Cabinet had been assassinated, and Mr. Lincoln shot, but not mortally wounded. When I heard the words I felt as if the blood had been frozen in my veins, and that my lungs must collapse for the want of air. Mr. Lincoln shot! the Cabinet assassinated!
I waked Mr. and Mrs. Lewis, and told them that the President was shot, and that I must go to the White House. We walked rapidly towards the White House, and on our way passed the residence of Secretary Seward, which was surrounded by armed soldiers, keeping back all intruders with the point of the bayonet.
We learned that the President was mortally wounded - that he had been shot down in his box at the theatre, and that he was not expected to live till morning; when we returned home with heavy hearts. I could not sleep. I wanted to go to Mrs. Lincoln, as I pictured her wild with grief; but then I did not know where to find her, and I must wait till morning. Never did the hours drag so slowly. Every moment seemed an age, and I could do nothing but walk about and hold my arms in mental agony.
Morning came at last, and a sad morning was it. The flags that floated so gaily yesterday now were draped in black, and hung in silent folds at half-mast. The President was dead, and a nation was mourning for him. Every house was draped in black, and every face wore a solemn look. People spoke in subdued tones, and glided whisperingly, wonderingly, silently about the streets.
The last time I saw him he spoke kindly to me, but alas! the lips would never move again. The light had faded from his eyes, and when the light went out the soul went with it. What a noble soul was his--noble in all the noble attributes of God! Never did I enter the solemn chamber of death with such palpitating heart and trembling footsteps as I entered it that day. No common mortal had died. The Moses of my people had fallen in the hour of his triumph. Fame had woven her choicest chaplet for his brow. Though the brow was cold and pale in death, the chaplet should not fade, for God had studded it with the glory of the eternal stars.
When I entered the room, the members of the Cabinet and many distinguished officers of the army were grouped around the body of their fallen chief. They made room for me, and, approaching the body, I lifted the white cloth from the white face of the man that I had worshipped as an idol--looked upon as a demi-god. Notwithstanding the violence of the death of the President, there was something beautiful as well as grandly solemn in the expression of the placid face. There lurked the sweetness and gentleness of childhood, and the stately grandeur of godlike intellect. I gazed long at the face, and turned away with tears in my eyes and a choking sensation in my throat. Ah! never was man so widely mourned before. The whole world bowed their heads in grief when Abraham Lincoln died.
(24) Reverend S. D. Brown, sermon in Troy (April, 1865)
God has a purpose in permitting this great evil. It is a singular fact that the two most favorable to leniency to the rebels, Lincoln and Seward, have been stricken. Other members of the Cabinet were embraced in the fiendish plan, but as to them, it failed.
(25) In her autobiography, Thirty Years a Slave, Elizabeth Keckley described an encounter between Mary Lincoln and John Parker, the man who should have been guarding Abraham Lincoln at the Ford's Theatre.
There were many surmises as to who was implicated with J. Wilkes Booth in the assassination of the President. A new messenger had accompanied Mr. and Mrs. Lincoln to the theatre on that terrible Friday night. It was the duty of this messenger to stand at the door of the box during the performance, and thus guard the inmates from all intrusion. It appears that the messenger was carried away by the play, and so neglected his duty that Booth gained easy admission to the box. Mrs. Lincoln firmly believed that this messenger was implicated in the assassination plot.
Soon after the assassination Mrs. Lincoln said to him fiercely: "So you are on guard tonight - on guard in the White House after helping to murder the President!"
"Pardon me, but I did not help to murder the President. I could never stoop to murder--much less to the murder of so good and great a man as the President."
"But it appears that you did stoop to murder."
"No, no! don't say that," he broke in. "God knows that I am innocent."
"I don't believe you. Why were you not at the door to keep the assassin out when be rushed into the box?"
"I did wrong, I admit, and I have bitterly repented it, but I did not help to kill the President. I did not believe that any one would try to kill so good a man in such a public place, and the belief made me careless. I was attracted by the play, and did not see the assassin enter the box."
"But you should have seen him. You had no business to be careless. I shall always believe that you are guilty. Hush! I shan't hear another word," she exclaimed, as the messenger essayed to reply. "Go now and keep your watch," she added, with an imperious wave of her hand. With mechanical step and white face the messenger left the room, and Mrs. Lincoln fell back on her pillow, covered her face with her hands, and commenced sobbing.
(26) Samuel Gompers, Seventy Years of Life and Labour (1925)
I remember very vividly the morning that brought the news of President Lincoln's death. It was Saturday. Like some cataclysm came the report that an assassin had struck down the great Emancipator. It seemed to me that some great power for good had gone out of the world. A master mind had been taken at a time when most needed. I cried and cried all that day and for days I was so depressed that I could scarcely force myself to work. I had heard Lincoln talked about in London. In the minds of the working people of the world Lincoln symbolized the spirit of humanity - the great leader of the struggle for human freedom.
(27) Carl Schurz wrote about the differences between Abraham Lincoln and Andrew Johnson in his autobiography published in 1906.
It was pretended at the time and it has since been asserted by historians and publicists that Mr. Johnson's Reconstruction policy was only a continuation of that of Mr. Lincoln. This is true only in a superficial sense, but not in reality. Mr. Lincoln had indeed put forth reconstruction plans which contemplated an early restoration of some of the rebel states. But he had done this while the Civil War was still going on, and for the evident purpose of encouraging loyal movements in those States and of weakening the Confederate State government there. Had he lived, he would have as ardently wished to stop bloodshed and to reunite as he ever did. But is it to be supposed for a moment that, seeing the late master class in the South intent upon subjecting the freedmen again to a system very much akin to slavery, Lincoln would have consented to abandon those freedmen to the mercies of that master class? | http://www.spartacus.schoolnet.co.uk/USAlincoln.htm | 13
441 | Using a figure published in 1960 of 14,300,000 tons per year as the meteoritic dust influx rate to the earth, creationists have argued that the thin dust layer on the moon’s surface indicates that the moon, and therefore the earth and solar system, are young. Furthermore, it is also often claimed that before the moon landings there was considerable fear that astronauts would sink into a very thick dust layer, but subsequently scientists have remained silent as to why the anticipated dust wasn’t there. An attempt is made here to thoroughly examine these arguments, and the counter arguments made by detractors, in the light of a sizable cross-section of the available literature on the subject.
Of the techniques that have been used to measure the meteoritic dust influx rate, chemical analyses (of deep sea sediments and dust in polar ice), and satellite-borne detector measurements appear to be the most reliable. However, upon close examination the dust particles range in size from fractions of a micron in diameter and fractions of a microgram in mass up to millimetres and grams, whence they become part of the size and mass range of meteorites. Thus the different measurement techniques cover different size and mass ranges of particles, so that to obtain the most reliable estimate requires an integration of results from different techniques over the full range of particle masses and sizes. When this is done, most current estimates of the meteoritic dust influx rate to the earth fall in the range of 10, 000-20, 000 tons per year, although some suggest this rate could still be as much as up to 100,000 tons per year.
Apart from the same satellite measurements, with a focusing factor of two applied so as to take into account differences in size and gravity between the earth and moon, two main techniques for estimating the lunar meteoritic dust influx have been trace element analyses of lunar soils, and the measuring and counting of microcraters produced by impacting micrometeorites on rock surfaces exposed on the lunar surface. Both these techniques rely on uniformitarian assumptions and dating techniques. Furthermore, there are serious discrepancies between the microcrater data and the satellite data that remain unexplained, and that require the meteoritic dust influx rate to be higher today than in the past. But the crater-saturated lunar highlands are evidence of a higher meteorite and meteoritic dust influx in the past. Nevertheless the estimates of the current meteoritic dust influx rate to the moon’s surface group around a figure of about 10,000 tons per year.
Prior to direct investigations, there was much debate amongst scientists about the thickness of dust on the moon. Some speculated that there would be very thick dust into which astronauts and their spacecraft might "disappear", while the majority of scientists believed that there was minimal dust cover. Then NASA sent up rockets and satellites and used earth-bound radar to make measurements of the meteoritic dust influx, with the results suggesting there was only sufficient dust for a thin layer on the moon. In mid-1966 the Americans successively soft-landed five Surveyor spacecraft on the lunar surface, and so three years before the Apollo astronauts set foot on the moon NASA knew that they would only find a thin dust layer on the lunar surface into which neither the astronauts nor their spacecraft would "disappear". This was confirmed by the Apollo astronauts, who only found up to a few inches of loose dust.
The Apollo investigations revealed a regolith at least several metres thick beneath the loose dust on the lunar surface. This regolith consists of lunar rock debris produced by impacting meteorites mixed with dust, some of which is of meteoritic origin. Apart from impacting meteorites and micrometeorites it is likely that there are no other lunar surface processes capable of both producing more dust and transporting it. It thus appears that the amount of meteoritic dust and meteorite debris in the lunar regolith and surface dust layer, even taking into account the postulated early intense meteorite and meteoritic dust bombardment, does not contradict the evolutionists’ multi-billion year timescale (while not proving it). Unfortunately, attempted counter-responses by creationists have so far failed because of spurious arguments or faulty calculations. Thus, until new evidence is forthcoming, creationists should not continue to use the dust on the moon as evidence against an old age for the moon and the solar system.
One of the evidences for a young earth that creationists have been using now for more than two decades is the argument about the influx of meteoritic material from space and the so-called “dust on the moon” problem. The argument goes as follows:
“It is known that there is essentially a constant rate of cosmic dust particles entering the earth’s atmosphere from space and then gradually settling to the earth’s surface. The best measurements of this influx have been made by Hans Pettersson, who obtained the figure of 14 million tons per year.1 This amounts to 14 x 1019 pounds in 5 billion years. If we assume the density of compacted dust is, say, 140 pounds per cubic foot, this corresponds to a volume of 1018 cubic feet. Since the earth has a surface area of approximately 5.5 x 1015 square feet, this seems to mean that there should have accumulated during the 5-billion- year age of the earth, a layer of meteoritic dust approximately 182 feet thick all over the world!
There is not the slightest sign of such a dust layer anywhere of course. On the moon’s surface it should be at least as thick, but the astronauts found no sign of it (before the moon landings, there was considerable fear that the men would sink into the dust when they arrived on the moon, but no comment has apparently ever been made by the authorities as to why it wasn’t there as anticipated).
Even if the earth is only 5,000,000 years old, a dust layer of over 2 inches should have accumulated.
Lest anyone say that erosional and mixing processes account for the absence of the 182-foot meteoritic dust layer, it should be noted that the composition of such material is quite distinctive, especially in its content of nickel and iron. Nickel, for example, is a very rare element in the earth’s crust and especially in the ocean. Pettersson estimated the average nickel content of meteoritic dust to be 2.5 per cent, approximately 300 times as great as in the earth’s crust. Thus, if all the meteoritic dust layer had been dispersed by uniform mixing through the earth’s crust, the thickness of crust involved (assuming no original nickel in the crust at all) would be 182 x 300 feet, or about 10 miles!
Since the earth's crust (down to the mantle) averages only about 12 miles thick, this tells us that practically all the nickel in the crust of the earth would have been derived from meteoritic dust influx in the supposed (5 x 10⁹ year) age of the earth!"2
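The arithmetic in the passage just quoted is easy to check. The short Python sketch below simply re-runs the quoted figures (14 million tons per year, 140 pounds per cubic foot, a surface area of 5.5 x 10¹⁵ square feet, and a 5-billion-year timescale); the use of short tons of 2,000 pounds is an assumption, since the quotation does not say which ton is meant.

```python
# A re-run of the arithmetic in the passage quoted above, using only the
# figures given there. Short tons of 2,000 lb are assumed, since the
# quotation does not specify which ton is meant.

influx_tons_per_yr = 14e6       # quoted influx rate (tons per year)
lb_per_ton         = 2000       # assumed short tons
density_lb_ft3     = 140        # quoted density of compacted dust
earth_area_ft2     = 5.5e15     # quoted surface area of the earth
years              = 5e9        # quoted age assumption

total_lb     = influx_tons_per_yr * lb_per_ton * years   # ~1.4 x 10^20 lb
volume_ft3   = total_lb / density_lb_ft3                 # ~1.0 x 10^18 ft^3
thickness_ft = volume_ft3 / earth_area_ft2               # ~182 ft

thickness_in_5Myr_inches = thickness_ft * (5e6 / years) * 12   # ~2.2 inches
nickel_equivalent_miles  = thickness_ft * 300 / 5280           # ~10 miles

print(f"dust layer over 5 billion years  : {thickness_ft:.0f} ft")
print(f"dust layer over 5 million years  : {thickness_in_5Myr_inches:.1f} in")
print(f"crust needed to dilute the nickel: {nickel_equivalent_miles:.0f} miles")
```

Under those quoted inputs the 182-foot, 2-inch and 10-mile figures all follow directly; the question examined below is whether the inputs themselves are sound.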
This is indeed a powerful argument, so powerful that it has upset the evolutionist camp. Consequently, a number of concerted efforts have been recently made to refute this evidence.3-9 After all, in order to be a credible theory, evolution needs plenty of time (that is, billions of years) to occur because the postulated process of transforming one species into another certainly can’t be observed in the lifetime of a single observer. So no evolutionist could ever be happy with evidence that the earth and the solar system are less than 10,000 years old.
But do evolutionists have any valid criticisms of this argument? And if so, can they be answered?
Criticisms of this argument made by evolutionists fall into three categories:-
The man whose work is at the centre of this controversy is Hans Pettersson of the Swedish Oceanographic Institute. In 1957, Pettersson (who then held the Chair of Geophysics at the University of Hawaii) set up dust-collecting units at 11,000 feet near the summit of Mauna Loa on the island of Hawaii and at 10,000 feet on Mt Haleakala on the island of Maui. He chose these mountains because
“occasionally winds stir up lava dust from the slopes of these extinct volcanoes, but normally the air is of an almost ideal transparency, remarkably free of contamination by terrestrial dust.”10
With his dust-collecting units, Pettersson filtered measured quantities of air and analysed the particles he found. Despite his description of the lack of contamination in the air at his chosen sampling sites, Pettersson was very aware and concerned that terrestrial (atmospheric) dust would still swamp the meteoritic (space) dust he collected, for he says: "It was nonetheless apparent that the dust collected in the filters would come preponderantly from terrestrial sources."11 Consequently he adopted the procedure of having his dust samples analysed for nickel and cobalt, since he reasoned that both nickel and cobalt were rare elements in terrestrial dust compared with the high nickel and cobalt contents of meteorites, and therefore, by implication, of meteoritic dust also.
Based on the nickel analysis of his collected dust, Pettersson finally estimated that about 14 million tons of dust land on the earth annually. To quote Petterson again:
“Most of the samples contained small but measurable quantities of nickel along with the large amount of iron. The average for 30 filters was 14.3 micrograms of nickel from each 1,000 cubic metres of air. This would mean that each 1,000 cubic metres of air contains .6 milligram of meteoritic dust. If meteoritic dust descends at the same rate as the dust created by the explosion of the Indonesian volcano Krakatoa in 1883, then my data indicate that the amount of meteoritic dust landing on the earth every year is 14 million tons. From the observed frequency of meteors and from other data Watson (F.G. Watson of Harvard University) calculates the total weight of meteoritic matter reaching the earth to be between 365,000 and 3,650,000 tons a year. His higher estimate is thus about a fourth of my estimate, based upon theHawaiian studies. To be on the safe side, especially in view of the uncertainty as to how long it takes meteoritic dust to descend, I am inclined to find five million tons per year plausible.”12
Now several evolutionists have latched onto Pettersson's conservatism with his suggestion that a figure of 5 million tons per year is more plausible and have thus promulgated the idea that Pettersson's estimate was "high",13 "very speculative",14 and "tentative".15 One of these critics has even gone so far as to suggest that "Pettersson's dust-collections were so swamped with atmospheric dust that his estimates were completely wrong"16 (emphasis mine). Others have said that "Pettersson's samples were apparently contaminated with far more terrestrial dust than he had accounted for."17 So what does Pettersson say about his 5 million tons per year figure?:
“The five-million-ton estimate also squares nicely with the nickel content of deep-ocean sediments. In 1950 Henri Rotschi of Paris and I analysed 77 samples of cores raised from the Pacific during the Swedish expedition. They held an average of. 044 per cent nickel. The highest nickel content in any sample was .07 per cent. This, compared to the average .008- per-cent nickel content of continental igneous rocks, clearly indicates a substantial contribution of nickel from meteoritic dust and spherules.
If five million tons of meteoritic dust fall to the earth each year, of which 2.5 per cent is nickel, the amount of nickel added to each square centimetre of ocean bottom would be .000000025 gram per year, or .017 per cent of the total red-clay sediment deposited in a year. This is well within the .044-per-cent nickel content of the deep-sea sediments and makes the five-million-ton figure seem conservative."18
In other words, as a reputable scientist who presented his assumptions and warned of the unknowns, Pettersson was happy with his results.
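Pettersson's per-square-centimetre figure can likewise be re-derived in a few lines. In the sketch below the whole-earth surface area (about 5.1 x 10¹⁸ cm²) and the use of metric tons are assumptions; the influx rate and nickel fraction come from the quotation above.

```python
# A quick check of the per-square-centimetre nickel figure in the quotation
# above. The whole-earth surface area (~5.1 x 10^18 cm^2) and the use of
# metric tons are assumptions; the influx rate and nickel fraction are the
# quoted values.

influx_g_per_yr = 5e6 * 1e6      # 5 million tons/yr, assumed metric tons
ni_fraction     = 0.025          # 2.5 per cent nickel
earth_area_cm2  = 5.1e18         # assumed whole-earth surface area

ni_per_cm2_yr = influx_g_per_yr * ni_fraction / earth_area_cm2
print(f"nickel added per cm^2 per year: {ni_per_cm2_yr:.1e} g")   # ~2.5e-08 g, as quoted

# Pettersson's "0.017 per cent of the red-clay sediment deposited in a year"
# then implies a red-clay deposition rate of roughly:
sediment_rate = ni_per_cm2_yr / 0.00017
print(f"implied red-clay deposition: {sediment_rate:.1e} g/cm^2/yr")  # ~1.4e-04
```

The implied red-clay deposition rate of roughly 1.4 x 10⁻⁴ g/cm²/yr is simply what Pettersson's 0.017 per cent figure requires; it is not an independent measurement.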
But what about other scientists who were aware of Pettersson and his work at the time he did it? Dr Isaac Asimov’s comments,19 for instance, confirm that other scientists of the time were also happy with Pettersson’s results. Of Pettersson’s experiment Asimov wrote:-
“At a 2-mile height in the middle of the Pacific Ocean one can expect the air to be pretty free of terrestrial dust. Furthermore, Pettersson paid particular attention to the cobalt content of the dust, since meteor dust is high in cobalt whereas earthly dust is low in it.”20
Indeed, Asimov was so confident in Pettersson’s work that he used Pettersson’s figure of 14,300,000 tons of meteoritic dust falling to the earth’s surface each year to do his own calculations. Thus Asimov suggested:
“Of course, this goes on year after year, and the earth has been in existence as a solid body for a good long time: for perhaps as long as 5 billion years. If, through all that time, meteor dust has settled to the earth at the same rate as it does, today, then by now, if it were undisturbed, it would form a layer 54 feet thick over all of the earth.”21
This sounds like very convincing confirmation of the creationist case, but of course, the year that Asimov wrote those words was 1959, and a lot of other meteoritic dust influx measurements have since been made. The critics are also quick to point this out -
“. ..we now have access to dust collection techniques using aircraft, high-altitude balloons and spacecraft. These enable researchers to avoid the problems of atmospheric dust which plagued Pettersson.”22
However, the problem is to decide which technique for estimating the meteoritic dust influx gives the “true” figure. Even Phillips admits this when he says:
“(Techniques vary from the use of high altitude rockets with collecting grids to deep-sea core samples. Accretion rates obtained by different methods vary from 102 to 109 tons/year. Results from identical methods also differ because of the range of sizes of the measured particles.”23
One is tempted to ask why it is that Pettersson's 5-14 million tons per year figure is slammed as being "tentative", "very speculative" and "completely wrong", when one of the same critics openly admits the results from the different, more modern methods vary from 100 to 1 billion tons per year, and that even results from identical methods differ? Furthermore, it should be noted that Phillips wrote this in 1978, some two decades and many moon landings after Pettersson's work!
| Particle size category | Measurement technique | Measured/estimated influx |
|---|---|---|
| (a) Small size in space (<0.1 cm) | Penetration satellites; Al²⁶ (sea sediment) | 36,500-182,500 tons/yr |
| (b) Cometary meteors (10⁻⁴-10² g) in space | Cometary meteors | 73,000 tons/yr |
| (c) "Any" size in space | Barbados meshes (total winter; total annual); dust counter; Ni (Antarctic ice); Ni (sea sediment); Os (sea sediment); Cl³⁶ (sea sediment); sea-sediment spherules; zodiacal light (two estimates) | individual values range from <110 tons/yr to <91,500 tons/yr |
| (d) Large size in space | | 36,500 tons/yr |
Table 1. Measurements and estimates of the meteoritic dust influx to the earth. (The data are adapted from Parkin and Tilles,24 who have fully referenced all their data sources.) (All figures have been rounded off.)
In 1968, Parkin and Tilles summarised all the measurement data then available on the question of influx of meteoritic (interplanetary) material (dust) and tabulated it.24 Their table is reproduced here as Table 1, but whereas they quoted influx rates in tons per day, their figures have been converted to tons per year for ease of comparison with Pettersson’s figures.
Even a quick glance at Table 1 confirms that most of these experimentally-derived measurements are well below Pettersson’s 5-14 million tons per year figure, but Phillips’ statement (quoted above) that results vary widely, even from identical methods, is amply verified by noting the range of results listed under some of the techniques. Indeed, it also depends on the experimenter doing the measurements (or estimates, in some cases). For instance, one of the astronomical methods used to estimate the influx rate depends on calculation of the density of the very fine dust in space that causes the zodiacal light. In Table 1, two estimates by different investigators are listed because they differ by 2-3 orders of magnitude.
On the other hand, Parkin and Tilles’ review of influx measurements, while comprehensive, was not exhaustive, there being other estimates that they did not report. For example, Pettersson25 also mentions an influx estimate based on meteorite data of 365,000-3,650,000 tons/year made by F. G. Watson of Harvard University (quoted earlier), an estimate which is also 2-3 orders of magnitude different from the estimate listed by Parkin and Tilles and reproduced in Table 1. So with such a large array of competing data that give such conflicting orders-of-magnitude different estimates, how do we decide which is the best estimate that somehow might approach the “true” value?
Another significant research paper was also published in 1968. Scientists Barker and Anders were reporting on their measurements of iridium and osmium concentration in dated deep-sea sediments (red clays) of the central Pacific Ocean Basin, which they believed set limits to the influx rate of cosmic matter, including dust.26 Like Pettersson before them, Barker and Anders relied upon the observation that whereas iridium and osmium are very rare elements in the earth’s crustal rocks, those same two elements are present in significant amounts in meteorites.
Table 2. Estimates of the accretion rate of cosmic matter by chemical methods, normalized to the composition of C1 carbonaceous chondrites (one class of meteorites) (after Barker and Anders,26 who have fully referenced all their data sources).
Their results are included in Table 2 (last four estimates), along with earlier reported estimates from other investigators using similar and other chemical methods. They concluded that their analyses, when compared with iridium concentrations in meteorites (C1 carbonaceous chondrites), corresponded to a meteoritic influx rate for the entire earth of between 30,000 and 90,000 tons per year. Furthermore, they maintained that a firm upper limit on the influx rate could be obtained by assuming that all the iridium and osmium in deep-sea sediments is of cosmic origin. The value thus obtained is between 50,000 and 150,000 tons per year. Notice, however, that these scientists were careful to allow for error margins by using a range of influx values rather than a definitive figure. Some recent authors though have quoted Barker and Anders' result as 100,000 tons, instead of 100,000 ± 50,000 tons. This may not seem a critical distinction, unless we realise that we are talking about a 50% error margin either way, and that's quite a large error margin in anyone's language regardless of the magnitude of the result being quoted.
Even though Barker and Anders’ results were published in 1968, most authors, even fifteen years later, still quote their influx figure of 100,000 ± 50,000 tons per year as the most reliable estimate that we have via chemical methods. However, Ganapathy’s research on the iridium content of the ice layers at the South Pole27 suggests that Barker and Anders’ figure underestimates the annual global meteoritic influx.
Ganapathy took ice samples from ice cores recovered by drilling through the ice layers at the US Amundsen-Scott base at the South Pole in 1974, and analysed them for iridium. The rate of ice accumulation at the South Pole over the last century or so is now particularly well established, because two very reliable precision time markers exist in the ice layers for the years 1884 (when debris from the August 26, 1883 Krakatoa volcanic eruption was deposited in the ice) and 1953 (when nuclear explosions began depositing fission products in the ice). With such an accurately known time reference framework to put his iridium results into, Ganapathy came up with a global meteoritic influx figure of 400,000 tons per year, four times higher than Barker and Anders' estimate from mid-Pacific Ocean sediments.
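The scaling behind an estimate of this kind is straightforward, even though the measurements themselves are delicate. The sketch below shows only the general form of the calculation; the iridium concentration, ice accumulation rate and C1-chondrite iridium abundance used here are illustrative assumptions, not Ganapathy's actual data.

```python
# A sketch of the scaling involved in turning an iridium concentration in
# polar ice into a whole-earth influx figure. The input numbers here (iridium
# concentration in the ice, ice accumulation rate, and C1-chondrite iridium
# abundance) are illustrative assumptions only, not Ganapathy's actual data.

ir_in_ice        = 5e-15     # g of iridium per g of ice (assumed, femtogram level)
ice_accumulation = 8.0       # g of ice deposited per cm^2 per year (assumed)
ir_in_chondrite  = 4.8e-7    # g of iridium per g of C1 chondrite (assumed)
earth_area_cm2   = 5.1e18    # surface area of the earth

ir_flux   = ir_in_ice * ice_accumulation     # g of iridium per cm^2 per year
dust_flux = ir_flux / ir_in_chondrite        # g of meteoritic material per cm^2 per year
global_influx_tons = dust_flux * earth_area_cm2 / 1e6   # metric tons per year

print(f"implied global influx: {global_influx_tons:.1e} tons/yr")  # order of 4 x 10^5
```

With inputs of this order the calculation lands near a few hundred thousand tons per year, which is why the accuracy of the ice-accumulation time markers matters so much to the final figure.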
In support of his estimate, Ganapathy also pointed out that Barker and Anders had suggested that their estimate could be stretched up to three times its value (that is, to 300,000 tons per year) by compounding several unfavorable assumptions. Furthermore, more recent measurements by Kyte and Wasson of iridium in deep-sea sediment samples obtained by drilling have yielded estimates of 330,000-340,000 tons per year.28 So Ganapathy’s influx estimate of 400,000 tons of meteoritic material per year seems to represent a fairly reliable figure, particularly because it is based on an accurately known time reference framework.
So much for chemical methods of determining the rate of annual meteoritic influx to the earth's surface. But what about the data collected by high-flying aircraft and spacecraft, which some critics29,30 are adamant give the most reliable influx estimates because of the elimination of the likelihood of terrestrial dust contamination? Indeed, on the basis of the dust collected by the high-flying U-2 aircraft, Bridgstock dogmatically asserts that the influx figure is only 10,000 tonnes per year.31,32 To justify his claim Bridgstock refers to the reports by Bradley, Brownlee and Veblen,33 and Dixon, McDonnell and Carey34 who state a figure of 10,000 tons for the annual influx of interplanetary dust particles. To be sure, as Bridgstock says,35 Dixon, McDonnell and Carey do report that "...researchers estimate that some 10,000 tonnes of them fall to Earth every year."36 However, such is the haste of Bridgstock to prove his point that, even if it means quoting out of context, he obviously didn't carefully read, fully comprehend, and/or deliberately ignored all of Dixon, McDonnell and Carey's report, otherwise he would have noticed that the figure "some 10,000 tonnes of them fall to Earth every year" refers only to a special type of particle called Brownlee particles, not to all cosmic dust particles. To clarify this, let's quote Dixon, McDonnell and Carey:
“Over the past 10 years, this technique has landed a haul of small fluffy cosmic dust grains known as ‘Brownlee particles’ after Don Brownlee, an American researcher who pioneered the routine collection of particles by aircraft, and has led in their classification. Their structure and composition indicate that the Brown lee particles are indeed extra-terrestrial in origin (see Box 2), and researchers estimate that some 10,000 tonnes of them fall to Earth every year. But Brownlee particles represent only part of the total range of cosmic dust particles”37 (emphasis mine).
And further, speaking of these “fluffy” Brownlee particles:
“The lightest and fluffiest dust grains, however, may enter the atmosphere on a trajectory which subjects them to little or no destructive effects, and they eventually drift to the ground. There these particles are mixed up with greater quantities of debris from the larger bodies that burn up as meteors, and it is very difficult to distinguish the two”38 (emphasis ours).
What Bridgstock has done, of course, is to say that the total quantity of cosmic dust that hits the earth each year according to Dixon, McDonnell and Carey is 10,000 tonnes, when these scientists quite clearly stated they were only referring to a part of the total cosmic dust influx, and a lesser part at that. A number of writers on this topic have unwittingly made similar mistakes.
But this brings us to a very crucial aspect of this whole issue, namely, that there is in fact a complete range of sizes of meteoritic material that reaches the earth, and the moon for that matter, all the way from large meteorites metres in diameter that produce large craters upon impact, right down to the microscopic-sized “fluffy” dust known as Brownlee particles, as they are referred to above by Dixon, McDonnell and Carey. Furthermore, each of the various techniques used to detect this meteoritic material does not necessarily give the complete picture of all the sizes of particles that come to earth, so researchers need to be careful not to equate influx measurements made with a technique sensitive to a particular particle size range with the total influx of meteoritic particles. This is of course why the more experienced researchers in this field are always careful to stipulate the particle size range over which their measurements were made.
Figure 1. The mass ranges of interplanetary (meteoritic) dust particles as detected by various techniques (adapted from Millman39). The particle penetration, impact and collection techniques make use of satellites and rockets. The techniques shown in italics are based on lunar surface measurements.
Millman39 discusses this question of the particle size ranges over which the various measurement techniques are operative. Figure 1 is an adaptation of Millman’s diagram. Notice that the chemical techniques, such as analyses for iridium in South Pole ice or Pacific Ocean deep-sea sediments, span nearly the full range of meteoritic particle sizes, leading to the conclusion that these chemical techniques are the most likely to give us an estimate closest to the “true” influx figure. However, Millman40 and Dohnanyi41 adopt a different approach to obtain an influx estimate. Recognising that most of the measurement techniques only measure the influx of particles of particular size ranges, they combine the results of all the techniques so as to get a total influx estimate that represents all the particle size ranges. Because of overlap between techniques, as is obvious from Figure 1, they plot the relation between the cumulative number of particles measured (or cumulative flux) and the mass of the particles being measured, as derived from the various measurement techniques. Such a plot can be seen in Figure 2. The curve in Figure 2 is the weighted mean flux curve obtained by comparing, adding together and taking the mean at any one mass range of all the results obtained by the various measurement techniques. A total influx estimate is then obtained by integrating mathematically the total mass under the weighted mean flux curve over a given mass range.
Figure 2. The relation between the cumulative number of particles and the lower limit of mass to which they are counted, as derived from various types of recording - rockets, satellites, lunar rocks, lunar seismographs (adapted from Millman39). The crosses represent the Pegasus and Explorer penetration data.
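For readers unfamiliar with the procedure, the following sketch shows in schematic form how a total mass influx is obtained from such a cumulative flux curve: the cumulative counts are differenced across logarithmic mass bins, each bin’s particle count is multiplied by a representative particle mass, and the products are summed and scaled over the earth’s surface. The power-law curve used here is purely hypothetical, standing in for the weighted mean curve of Figure 2, so the numerical result only illustrates the bookkeeping and is not Millman’s or Dohnanyi’s actual data.

```python
import math

EARTH_SURFACE_M2 = 5.1e14     # surface area of the earth, m^2
SECONDS_PER_YEAR = 3.156e7

def cumulative_flux(mass_g):
    """Hypothetical cumulative flux N(>m): particles per m^2 per second with
    mass greater than mass_g. A simple power law stands in for the real
    weighted mean flux curve."""
    return 6.0e-14 * mass_g ** -0.9   # illustrative only

def total_influx_tons_per_year(m_min=1e-12, m_max=1e3, bins_per_decade=10):
    """Numerically integrate particle mass under the cumulative flux curve."""
    decades = math.log10(m_max / m_min)
    n_bins = int(decades * bins_per_decade)
    edges = [m_min * 10 ** (i * decades / n_bins) for i in range(n_bins + 1)]
    total_g_per_m2_s = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        particles_in_bin = cumulative_flux(lo) - cumulative_flux(hi)  # flux of particles in [lo, hi)
        mean_mass = math.sqrt(lo * hi)                                # geometric mean mass of the bin
        total_g_per_m2_s += particles_in_bin * mean_mass
    return total_g_per_m2_s * EARTH_SURFACE_M2 * SECONDS_PER_YEAR / 1.0e6

print(total_influx_tons_per_year())   # ~1.7 x 10^4 tons per year for this assumed curve
```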
By this means Millman42 estimated that in the mass range 10^-12 to 10^3 g only a mere 30 tons of meteoritic material reach the earth each day, equivalent to an influx of 10,950 tons per year. Not surprisingly, the same critic (Bridgstock) who erroneously latched onto the 10,000 tonnes per year figure of Dixon, McDonnell and Carey to defend his belief that the moon and the earth are billions of years old, also latched onto Millman’s 10,950 tons per year figure.43 But what Bridgstock has failed to grasp is that Dixon, McDonnell and Carey’s figure refers only to the so-called Brownlee particles in the mass range of 10^-12 to 10^-6 g, whereas Millman’s figure, as he stipulates himself, covers the mass range of 10^-12 to 10^3 g. The two figures cannot be compared as if they somehow support each other, because they refer to different particle mass ranges.
Furthermore, the close correspondence between these two figures when they refer to different mass ranges, the 10,000 tonnes per year figure of Dixon, McDonnell and Carey representing only 40% of the mass range of Millman’s 10,950 tons per year figure, suggests that something has to be wrong with the techniques used to derive these figures. Even from a glance at the curve in Figure 2, it is obvious that the total mass represented by the area under the curve in the mass range 10^-6 to 10^3 g can hardly be 950 or so tons per year (that is, the difference between Millman’s and Dixon, McDonnell and Carey’s figures and mass ranges), particularly if the total mass represented by the area under the curve in the mass range 10^-12 to 10^-6 g is supposed to be 10,000 tonnes per year (Dixon, McDonnell and Carey’s figure and mass range). And Millman even maintains that the evidence indicates that two-thirds of the total mass of the dust complex encountered by the earth is in the form of particles with masses between 10^-6.5 and 10^-3.5 g, or in the three orders of magnitude 10^-6, 10^-5 and 10^-4 g, respectively,44 outside the mass range for the so-called Brownlee particles. So if Dixon, McDonnell and Carey are closer to the truth with their 1985 figure of 10,000 tonnes per year of Brownlee particles (mass range 10^-12 to 10^-6 g), and if two-thirds of the total particle influx mass lies outside the Brownlee particle size range, then Millman’s 1975 figure of 10,950 tons per year must be drastically short of the “real” influx figure, which thus has to be at least 30,000 tons per year.
Millman admits that if some of the finer dust particles do not register, by either penetrating or cratering satellite or aircraft collection panels, it could well be that we should allow for this by raising the flux estimate. Furthermore, he states that it should also be noted that the Prairie Network fireballs (McCrosky45), which are outside his (Millman’s) mathematical integration calculations because they are outside the mass range of his mean weighted influx curve, could add appreciably to his flux estimate.46 In other words, Millman is admitting that his influx estimate would be greatly increased if the mass range used in his calculations took into account both particles finer than 10^-12 g and particularly particles greater than 10^3 g.
Figure 3. Cumulative flux of meteoroids and related objects into the earth’s atmosphere having a mass of M (kg) or greater (adapted from Dohnanyi41). His data sources used to derive this plot are listed in his bibliography.
Unlike Millman, Dohnanyi47 did take into account a much wider mass range and smaller cumulative fluxes, as can be seen in his cumulative flux plot in Figure 3, and so he obtained a much higher total influx estimate of some 20,900 tons of dust per year coming to the earth. Once again, if McCrosky’s data on the Prairie Network fireballs had been included by Dohnanyi, then his influx estimate would have been greater. Furthermore, Dohnanyi’s estimate is primarily based on supposedly more reliable direct measurements obtained using collection plates and panels on satellites, but Millman maintains that such satellite penetration methods may not be registering the finer dust particles because they neither penetrate nor crater the collection panels, and so any influx estimate based on such data could be underestimating the “true” figure. This is particularly significant since Millman also highlights the evidence that there is another concentration peak in the mass range 10^-13 to 10^-14 g, at the lower end of the theoretical effectiveness of satellite penetration data collection (see Figure 1 again). Thus even Dohnanyi’s influx estimate is probably well below the “true” figure.
This leads us to a consideration of the representativeness both physically and statistically of each of the influx measurement dust collection techniques and the influx estimates derived from them. For instance, how representative is a sample of dust collected on the small plates mounted on a small satellite or U-2 aircraft compared with the enormous volume of space that the sample is meant to represent? We have already seen how Millman admits that some dust particles probably do not penetrate or crater the plates as they are expected to, and so the final particle count is thereby reduced by an unknown amount. And how representative is a drill core or grab sample from the ocean floor? After all, aren’t we analysing a split from a 1-2 kilogram sample and suggesting this represents the tonnes of sediments draped over thousands of square kilometres of ocean floor to arrive at an influx estimate for the whole earth?! To be sure, careful repeat samplings and analyses over several areas of the ocean floor may have been done, but how representative both physically and statistically are the results and the derived influx estimate?
Of course, Pettersson’s estimate from dust collected atop Mauna Loa also suffers from the same question of representativeness. In many of their reports, the researchers involved have failed to discuss such questions. Admittedly there are so many potential unknowns that any statistical quantification is well-nigh impossible, but some discussion of sample representativeness should be attempted and should translate into some “guesstimate” of error margins in their final reported dust influx estimate. Some like Barker and Anders with their deep-sea sediments48 have indicated error margins as high as ±50%, but even then such error margins only refer to the within and between sample variations of element concentrations that they calculated from their data set, and not to any statistical “guesstimate” of the physical representativeness of the samples collected and analysed. Yet the latter is vital if we are trying to determine what the “true” figure might be.
But there is another consideration that can be even more important, namely, any assumptions that were used to derive the dust influx estimate from the raw measurements or analytical data. The most glaring example of this is with respect to the interpretation of deep-sea sediment analyses to derive an influx estimate. In common with all the chemical methods, it is assumed that all the nickel, iridium and osmium in the samples, over and above the average respective contents of appropriate crustal rocks, is present in the cosmic dust in the deep-sea sediment samples. Although this seems to be a reasonable assumption, there is no guarantee that it is completely correct or reliable. Furthermore, in order to calculate how much cosmic dust is represented by the extra nickel, iridium and osmium concentrations in the deep-sea sediment samples, it is assumed that the cosmic dust has nickel, iridium and osmium concentrations equivalent to the average respective concentrations in Type I carbonaceous chondrites (one of the major types of meteorites). But is that type of meteorite representative of all the cosmic matter arriving at the earth’s surface? Researchers like Barker and Anders assume so because everyone else does! To be sure there are good reasons for making that assumption, but it is by no means certain that Type I carbonaceous chondrites are representative of all the cosmic material arriving at the earth’s surface, since it has been almost impossible so far to exclusively collect such material for analysis. (Some has been collected by spacecraft and U-2 aircraft, but these samples still do not represent the total composition of cosmic material arriving at the earth’s surface, since they only represent a specific particle mass range in a particular path in space or the upper atmosphere.)
However, the most significant assumption is yet to come. In order to calculate an influx estimate from the assumed cosmic component of the nickel, iridium and osmium concentrations in the deep-sea sediments it is necessary to determine what time span is represented by the deep-sea sediments analysed. In other words, what is the sedimentation rate in that part of the ocean floor sampled and how old therefore are our sediment samples? Based on the uniformitarian and evolutionary assumptions, isotopic dating and fossil contents are used to assign long time spans and old ages to the sediments. This is seen not only in Barker and Anders’ research, but in the work of Kyte and Wasson who calculated influx estimates from iridium measurements in so-called Pliocene and Eocene-Oligocene deep-sea sediments.49 Unfortunately for these researchers, their influx estimates depend absolutely on the validity of their dating and age assumptions. And this is extremely crucial, for if they obtained influx estimates of 100,000 tons per year and 330,000-340,000 tons per year respectively on the basis of uniformitarian and evolutionary assumptions (slow sedimentation and old ages), then what would these influx estimates become if rapid sedimentation has taken place over a radically shorter time span? On that basis, Pettersson’s figure of 5-14 million tons per year is not far-fetched!
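The dependence of these sediment-based estimates on the assumed dust composition and, above all, on the assumed deposition time can be made explicit with a short calculation. The sketch below writes out the conversion from an excess iridium inventory in a sediment column to an annual global influx; the iridium inventory and the chondritic iridium fraction are illustrative assumptions of ours, and the point to note is simply that the derived influx is inversely proportional to the time span assigned to the sediment.

```python
EARTH_SURFACE_CM2 = 5.1e18       # surface area of the earth, cm^2
CHONDRITIC_IR_FRACTION = 4.8e-7  # assumed Ir mass fraction of the cosmic dust (~480 ppb)

def influx_from_sediment(excess_ir_g_per_cm2, assumed_years):
    """Global dust influx (tons per year) implied by an excess iridium
    inventory (g of Ir per cm^2 of sea floor) deposited over an assumed time span."""
    dust_g_per_cm2 = excess_ir_g_per_cm2 / CHONDRITIC_IR_FRACTION
    return dust_g_per_cm2 / assumed_years * EARTH_SURFACE_CM2 / 1.0e6

# The same (hypothetical) iridium inventory yields very different influx figures
# depending on the age assigned to the sediment column:
inventory = 1.0e-8   # g Ir per cm^2, illustrative only
for years in (1.0e6, 1.0e5, 1.0e4):
    print(f"{years:>10.0f} years -> {influx_from_sediment(inventory, years):,.0f} tons per year")
```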
On the other hand, however, Ganapathy’s work on ice cores from the South Pole doesn’t suffer from any assumptions as to the age of the analysed ice samples, because he was able to correlate his analytical results with two time-marker events of recent recorded history. Consequently his influx estimate of 400,000 tons per year has to be taken seriously. Furthermore, one of the advantages of the chemical methods of influx estimating, such as Ganapathy’s analyses of iridium in ice cores, is that the technique in theory, and probably in practice, spans the complete mass range of cosmic material (unlike the other techniques - see Figure 1 again) and so should give a better estimate. Of course, in practice this is difficult to verify, since statistically the likelihood of sampling a macroscopic cosmic particle in, for example, an ice core is virtually nonexistent. In other words, there is the question of representativeness again, since the ice core is taken to represent a much larger area of ice sheet, and it may well be that the cross-sectional area intersected by the ice core has an anomalously high or low concentration of cosmic dust particles, or in fact an average concentration - who knows which?
Finally, an added problem not appreciated by many working in the field is that there is an apparent variation in the dust influx rate according to the latitude. Schmidt and Cohen reported50 that this apparent variation is most closely related to geomagnetic latitude, so that at the poles the resultant influx is higher than in equatorial regions. They suggested that electromagnetic interactions could cause only certain charged particles to impinge preferentially at high latitudes. This may well explain the difference between Ganapathy’s influx estimate of 400,000 tons per year from the study of the dust in Antarctic ice and, for example, Kyte and Wasson’s estimate of 330,000-340,000 tons per year based on iridium measurements in deep-sea sediment samples from the mid-Pacific Ocean.
A number of other workers have made estimates of the meteoritic dust influx to the earth that are often quoted with some finality. Estimates have continued to be made up until the present time, so it is important to contrast these in order to see what consensus, if any, has emerged.
In reviewing the various estimates by the different methods up until that time, Singer and Bandermann51 argued in 1967 that the most accurate method for determining the meteoritic dust influx to the earth was by radiochemical measurements of radioactive Al-26 in deep-sea sediments. Their confidence in this method rested on the fact that it can be shown that the only source of this radioactive nuclide is interplanetary dust, and that therefore its presence in deep-sea sediments was a more certain indicator of dust than any other chemical evidence. From measurements made by others they concluded that the influx rate is 1,250 tons per day, the error margins being such that they indicated the influx rate could be as low as 250 tons per day or as high as 2,500 tons per day. These figures equate to an influx rate of over 450,000 tons per year, ranging from 91,300 tons per year to 913,000 tons per year.
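The annual figures quoted here are simply the daily rates scaled by the number of days in a year, as this short check shows:

```python
DAYS_PER_YEAR = 365.25

for tons_per_day in (250, 1250, 2500):   # Singer and Bandermann's lower, central and upper daily rates
    print(tons_per_day, "tons/day ->", round(tons_per_day * DAYS_PER_YEAR), "tons/year")
# 250 -> ~91,300;  1,250 -> ~456,600;  2,500 -> ~913,100 tons per year
```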
They also defended their estimate, and the method behind it, against the alternatives. For example, satellite experiments, they said, never measured a concentration, nor even a simple flux of particles, but rather a flux of particles having a particular momentum or energy greater than some minimum threshold which depended on the detector being used. Furthermore, they argued that the impact rate near the earth should increase by a factor of about 1,000 compared with the value far away from the earth. And whereas dust influx can also be measured in the upper atmosphere, by then the particles have already begun slowing down, so that any vertical mass motions of the atmosphere may result in an increase in concentration of the dust particles, thus producing a spurious result. For these and other reasons, therefore, Singer and Bandermann were adamant that their estimate based on radioactive Al-26 in ocean sediments was a reliable determination of the mass influx rate to the earth and thus of the mass concentration of dust in interplanetary space.
Other investigators continued to rely upon a combination of satellite, radio and visual measurements of the different particle masses to arrive at a cumulative flux rate. Thus in 1974 Hughes reported52 that
“from the latest cumulative influx rate data the influx of interplanetary dust to the earth’s surface in the mass range 10^-13 - 10^6 g is found to be 5.7 x 10^9 g yr^-1”,
or 5,700 tons per year, drastically lower than the Singer and Bandermann estimate from Al-26 in ocean sediments. Yet within a year Hughes had revised his estimate upwards to 1.62 x 10^10 g yr^-1, with error calculations indicating that the upper and lower limits are about 3.0 and 0.8 x 10^10 g yr^-1 respectively.53 Again this was for the particle mass range between 10^-13 g and 10^6 g, and this estimate translates to 16,200 tons per year between lower and upper limits of 8,000 - 30,000 tons per year. So confident now was Hughes in the data he had used for his calculations that he submitted an easier-to-read account of his work to the widely-read, popular science magazine, New Scientist.54 Here he again argued that
“as the earth orbits the sun it picks up about 16,000 tonnes of interplanetary material each year. The particles vary in size from huge meteorites weighing tonnes to small microparticles less than 0.2 micron in diameter. The majority originate from decaying comets.”
Figure 4. Plot of the cumulative flux of interplanetary matter (meteorites, meteors, and meteoritic dust, etc.) into the earth’s atmosphere (adapted from Hughes54). Note that he has subdivided the debris into two modes of origin - cometary and asteroidal - based on mass, with the former category being further subdivided according to detection techniques. From this plot Hughes calculated a flux of 16,000 tonnes per year.
Figure 4 shows the cumulative flux curve built from the various sources of data that he used to derive his calculated influx of about 16,000 tons per year. However, it should be noted here that, using the same methodology with similar data, Millman55 in 1975 and Dohnanyi56 in 1972 had produced influx estimates of 10,950 tons per year and 20,900 tons per year respectively (Figures 2 and 3 can be compared with Figure 4). Nevertheless, it could be argued that these two estimates still fall within the range of 8,000 - 30,000 tons per year suggested by Hughes. In any case, Hughes’ confidence in his estimate is further illustrated by his again quoting the same 16,000 tons per year influx figure in a paper published in an authoritative book on the subject of cosmic dust.58
Meanwhile, in a somewhat novel approach to the problem, Wetherill in 1976 derived a meteoritic dust influx estimate by looking at the possible dust production rate at its source.59 He argued that whereas the present sources of meteorites are probably multiple, it being plausible that both comets and asteroidal bodies of several kinds contribute to the flux of meteorites on the earth, the immediate source of meteorites is those asteroids, known as Apollo objects, that in their orbits around the sun cross the earth’s orbit. He then went on to calculate the mass yield of meteoritic dust (meteoroids) and meteorites from the fragmentation and cratering of these Apollo asteroids. He found the combined yield from both cratering and complete fragmentation to be 7.6 x 10^10 g yr^-1, which translates into a figure of 76,000 tonnes per year. Of this figure he calculated that 190 tons per year would represent meteorites in the mass range of 10^2 - 10^6 g, a figure which compared well with terrestrial meteorite mass impact rates obtained by various other calculation methods, and also with other direct measurement data, including observation of the actual meteorite flux. This figure of 76,000 tons per year is of course much higher than those estimates based on cumulative flux calculations such as those of Hughes,60 but still below the range of results gained from various chemical analyses of deep-sea sediments, such as those of Barker and Anders,61 Kyte and Wasson,62 and Singer and Bandermann,63 and of the Antarctic ice by Ganapathy.64 No wonder a textbook in astronomy compiled by a worker in the field and published in 1983 gave a figure for the total meteoroid flux of about 10,000 - 1,000,000 tons per year.65
In an oft-quoted paper published in 1985, Grün and his colleagues66 reported on yet another cumulative flux calculation, but this time based primarily on satellite measurement data. Because these satellite measurements had been made in interplanetary space, the figure derived from them would be regarded as a measure of the interplanetary dust flux. Consequently, to calculate from that figure the total meteoritic mass influx on the earth, both the gravitational increase at the earth and the surface area of the earth had to be taken into account. The result was an influx figure of about 40 tons per day, which translates to approximately 14,600 tons per year. This of course still equates fairly closely to the influx estimate made by Hughes.67
As well as satellite measurements, one of the other major sources of data for cumulative flux calculations has been measurements made using ground-based radars. In 1988 Olsson-Steel68 reported that previous radar meteor observations made in the VHF band had rendered a flux of particles in the 10^-6 - 10^-2 g mass range that was anomalously low when compared to the fluxes derived from optical meteor observations or satellite measurements. He therefore found that HF radars were necessary in order to detect the total flux into the earth’s atmosphere. Consequently he used radar units near Adelaide and Alice Springs in Australia to make measurements at a number of different frequencies in the HF band. Indeed, Olsson-Steel believed that the radar near Alice Springs was at that time the most powerful device ever used for meteor detection, and because of its sensitivity the meteor count rates were extremely high. From this data he calculated a total influx of particles in the range 10^-6 - 10^-2 g of 12,000 tons per year, which as he points out is almost identical to the flux in the same mass range calculated by Hughes.69,70 He concluded that this implies that, neglecting the occasional asteroid or comet impact, meteoroids in this mass range dominate the total flux to the atmosphere, which he says amounts to about 16,000 tons per year as calculated by Thomas et al.71
In a different approach to the use of ice as a meteoritic dust collector, in 1987 Maurette and his colleagues72 reported on their analyses of meteoritic dust grains extracted from samples of black dust collected from the melt zone of the Greenland ice cap. The reasoning behind this technique was that the ice now melting at the edge of the ice cap had, during the time since it formed inland and flowed outwards to the melt zone, been collecting cosmic dust of all sizes and masses. The quantity thus found by analysis represents the total flux over that time period, which can then be converted into an annual influx rate. While their analyses of the collected dust particles were based on size fractions, they relied on the mass-to-size relationship established by Grün et al.73 to convert their results to flux estimates. They calculated that each kilogram of black dust they collected for extraction and analysis of its contained meteoritic dust corresponded to a collector surface of approximately 0.5 square metres which had been exposed for approximately 3,000 years to meteoritic dust infall. Adding together their tabulated flux estimates for each size fraction below 300 microns yields a total meteoritic dust influx estimate of approximately 4,500 tons per year, well below that calculated from satellite and radar measurements, and drastically lower than that calculated by chemical analyses of ice.
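Their conversion from grains recovered per kilogram of black dust to a global annual influx can be followed with a short calculation. The collector area (about 0.5 square metres per kilogram of black dust) and the exposure time (about 3,000 years) are taken from their description above; the mass of meteoritic grains recovered per kilogram used below is a placeholder of ours, since their results were tabulated by size fraction rather than as a single figure.

```python
EARTH_SURFACE_M2 = 5.1e14   # surface area of the earth, m^2

def greenland_influx_tons_per_year(meteoritic_g_per_kg_black_dust,
                                   collector_area_m2=0.5,    # per kg of black dust (from the text)
                                   exposure_years=3000.0):   # approximate exposure time (from the text)
    """Scale the meteoritic dust mass recovered per kilogram of melt-zone
    'black dust' up to a global annual influx (tons per year)."""
    flux_g_per_m2_yr = meteoritic_g_per_kg_black_dust / (collector_area_m2 * exposure_years)
    return flux_g_per_m2_yr * EARTH_SURFACE_M2 / 1.0e6

# A placeholder recovery of ~13 mg of meteoritic grains per kg of black dust
# reproduces the order of magnitude of their published estimate:
print(greenland_influx_tons_per_year(0.013))   # ~4,400 tons per year
```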
However, in their defense it can at least be said that in comparison to the chemical method this technique is based on actual identification of the meteoritic dust grains, rather than expecting the chemical analyses to represent the meteoritic dust component in the total samples of dust analysed. Nevertheless, an independent study in another polar region at about the same time came up with a higher influx rate more in keeping with that calculated from satellite and radar measurements. In that study, Tuncel and Zoller74 measured the iridium content in atmospheric samples collected at the South Pole. During each 10-day sampling period, approximately 20,000-30,000 cubic metres of air was passed through a 25-centimetre-diameter cellulose filter, which was then submitted for a wide range of analyses. Thirty such atmospheric particulate samples were collected over an 11 month period, which ensured that seasonal variations were accounted for. Based on their analyses they discounted any contribution of iridium to their samples from volcanic emissions, and concluded that iridium concentrations in their samples could be used to estimate both the meteoritic dust component in their atmospheric particulate samples and thus the global meteoritic dust influx rate. Thus they calculated a global flux of 6,000 - 11,000 tons per year.
In evaluating their result they tabulated other estimates from the literature via a wide range of methods, including the chemical analyses of ice and sediments. In defending their estimate against the higher estimates produced by those chemical methods, they suggested that samples (particularly sediment samples) that integrate large time intervals include, in addition to background dust particles, the fragmentation products from large bodies. They reasoned that this meant the chemical methods do not discriminate between background dust particles and fragmentation products from large bodies, and so a significant fraction of the flux estimated from sediment samples may be due to such large body impacts. On the other hand, their estimate of 6,000-11,000 tons per year for particles smaller than 10^6 g, they argued, is in reasonable agreement with estimates from satellite and radar studies.
Finally, in a follow-up study, Maurette with another group of colleagues75 investigated a large sample of micrometeorites collected by the melting and filtering of approximately 100 tons of ice from the Antarctic ice sheet. The grains in the sample were first characterised by visual techniques to sort them into their basic meteoritic types, and then selected particles were submitted for a wide range of chemical and isotopic analyses. Neon isotopic analyses, for example, were used to confirm which particles were of extraterrestrial origin. Drawing also on their previous work they concluded that a rough estimate of the meteoritic dust flux, for particles in the size range 50-300 microns, as recovered from either the Greenland or the Antarctic ice sheets, represents about a third of the total mass influx on the earth at approximately 20,000 tons per year.
| Investigator(s) | Measurement technique | Influx estimate (tons per year) |
| --- | --- | --- |
| Pettersson | Ni in atmospheric dust | 14,300,000 |
| Barker and Anders | Ir and Os in deep-sea sediments | 100,000 (50,000 - 150,000) |
| Ganapathy | Ir in Antarctic ice | 400,000 |
| Kyte and Wasson | Ir in deep-sea sediments | 330,000 - 340,000 |
| Millman | Satellite, radar, visual | 10,950 |
| Dohnanyi | Satellite, radar, visual | 20,900 |
| Singer and Bandermann | Al-26 in deep-sea sediments | 456,000 (91,300 - 913,000) |
| Hughes (1975 - 1978) | Satellite, radar, visual | 16,200 (8,000 - 30,000) |
| Wetherill | Fragmentation of Apollo asteroids | 76,000 |
| Grün et al. | Satellite data particularly | 14,600 |
| Olsson-Steel | Radar data primarily | 16,000 |
| Maurette et al. | Dust from melting Greenland ice | 4,500 |
| Tuncel and Zoller | Ir in Antarctic atmospheric particulates | 6,000 - 11,000 |
| Maurette et al. | Dust from melting Antarctic ice | 20,000 |

Table 3. Summary of the earth’s meteoritic dust influx estimates via the different measurement techniques.
Over the last three decades numerous attempts have been made using a variety of methods to estimate the meteoritic dust influx to the earth. Table 3 is the summary of the estimates discussed here, most of which are repeatedly referred to in the literature.
Clearly, there is no consensus in the literature as to what the annual influx rate is. Admittedly, no authority today would agree with Pettersson’s 1960 figure of 14,000,000 tons per year. However, there appear to be two major groupings - those chemical methods which give results in the 100,000-400,000 tons per year range or thereabouts, and those methods, particularly cumulative flux calculations based on satellite and radar data, that give results in the range 10,000-20,000 tons per year or thereabouts. There are those who would claim the satellite measurements give results that are too low because of the sensitivities of the techniques involved, whereas there are those on the other hand who would claim that the chemical methods give results that are too high because they include fragmentation products from large bodies as well as background dust particles.
Perhaps the “safest” option is to quote the meteoritic dust influx rate as within a range. This is exactly what several authorities on this subject have done when producing textbooks. For example, Dodd76 has suggested a daily rate of between 100 and 1,000 tons, which translates into 36,500-365,000 tons per year, while Hartmann,77 who refers to Dodd, quotes an influx figure of 10,000-1 million tons per year. Hartmann’s quoted influx range certainly covers the range of estimates in Table 3, but is perhaps a little generous with the upper limit. Probably to avoid this problem and yet still cover the wide range of estimates, Henbest writing in New Scientist in 199178 declares:
“Even though the grains are individually small, they are so numerous in interplanetary space that the Earth sweeps up some 100,000 tons of cosmic dust every year.”79
Perhaps this is a “safe” compromise!
However, on balance we would have to say that the chemical methods when reapplied to polar ice, as they were by Maurette and his colleagues, gave a flux estimate similar to that derived from satellite and radar data, but much lower than Ganapathy’s earlier chemical analysis of polar ice. Thus it would seem more realistic to conclude that the majority of the data points to an influx rate within the range 10,000-20,000 tons per year, with the outside possibility that the figure may reach 100,000 tons per year.
Van Till et al. suggest:
“To compute a reasonable estimate for the accumulation of meteoritic dust on the moon we divide the earth’s accumulation rate of 16,000 tons per year by 16 for the moon’s smaller surface area, divide again by 2 for the moon’s smaller gravitational force, yielding an accumulation rate of about 500 tons per year on the moon.”80
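Their scaling is easy to check; the factors of 16 (surface area) and 2 (gravitational attraction) are theirs, as quoted above:

```python
earth_rate_tons_per_year = 16000
moon_rate = earth_rate_tons_per_year / 16 / 2   # smaller surface area, then weaker gravitational pull
print(moon_rate)   # 500.0 tons per year
```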
However, Hartmann81 suggests a figure of 4,000 tons per year from his own published work,82 although this estimate is again calculated from the terrestrial influx rate taking into account the smaller surface area of the moon.
These estimates are of course based on the assumption that the density of meteoritic dust in the area of space around the earth-moon system is fairly uniform, an assumption verified by satellite measurements. However, with the US Apollo lunar exploration missions of 1969-1972 came the opportunities to sample the lunar rocks and soils, and to make more direct measurements of the lunar meteoritic dust influx.
One of the earliest estimates based on actual moon samples was that made by Keays and his colleagues,83 who analysed for trace elements twelve lunar rock and soil samples brought back by the Apollo 11 mission. From their results they concluded that there was a meteoritic or cometary component to the samples, and that component equated to an influx rate of 2.9 x 10^-9 g cm^-2 yr^-1 of carbonaceous-chondrite-like material. This equates to an influx rate of over 15,200 tons per year. However, it should be kept in mind that this estimate is based on the assumption that the meteoritic component represents an accumulation over a period of more than 1 billion years, the figure given being the anomalous quantity averaged over that time span. These workers also cautioned about making too much of this estimate because the samples were only derived from one lunar location.
Within a matter of weeks, four of the six investigators published a complete review of their earlier work along with some new data.84 To obtain their new meteoritic dust influx estimate they compared the trace element contents of their lunar soil and breccia samples with the trace element contents of their lunar rock samples. The assumption then was that the soil and breccia is made up of the broken-down rocks, so that therefore any trace element differences between the rocks and soils/breccias would represent material that had been added to the soils/breccias as the rocks were mechanically broken down. Having determined the trace element content of this “extraneous component” in their soil samples, they sought to identify its source. They then assumed that the exposure time of the region (the Apollo 11 landing site or Tranquillity Base) was 3.65 billion years, so that in that time the proton flux from the solar wind would account for some 2% of this extraneous trace element component in the soils, leaving the remaining 98% or so to be of meteoritic (to be exact, “particulate”) origin. Upon further calculation, this approximately 98% portion of the extraneous component seemed to be due to an approximately 1.9% admixture of carbonaceous-chondrite-like material (in other words, meteoritic dust of a particular type), and the quantity involved thus represented, over a 3.65 billion year history of soil formation, an average influx rate of 3.8 x 10^-9 g cm^-2 yr^-1, which translates to over 19,900 tons per year. However, they again added a note of caution because this estimate was only based on a few samples from one location.
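The way an admixture percentage becomes an influx rate can be illustrated as follows: the mass of the regolith column per unit area is multiplied by the meteoritic fraction and divided by the assumed exposure age. In the sketch below only the 1.9% admixture and the 3.65-billion-year age come from the work described above; the regolith depth and bulk density are illustrative assumptions of ours.

```python
def influx_g_per_cm2_per_yr(admixture_fraction, regolith_depth_cm,
                            bulk_density_g_cm3, age_years):
    """Average meteoritic influx per unit area implied by a given admixture of
    meteoritic material in a regolith column accumulated over an assumed age."""
    column_mass_g_per_cm2 = regolith_depth_cm * bulk_density_g_cm3
    return admixture_fraction * column_mass_g_per_cm2 / age_years

# With an assumed 4 m deep regolith of bulk density 1.8 g/cm^3 (our assumptions):
print(influx_g_per_cm2_per_yr(admixture_fraction=0.019,
                              regolith_depth_cm=400.0,
                              bulk_density_g_cm3=1.8,
                              age_years=3.65e9))
# ~3.7 x 10^-9 g cm^-2 yr^-1, the same order as the published figure
```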
Nevertheless, within six months the principal investigators of this group were again in print publishing further results and an updated meteoritic dust influx estimate.85 By now they had obtained seven samples from the Apollo 12 landing site, which included two crystalline rock samples, four samples from core “drilled” from the lunar regolith, and a soil sample. Again, all the samples were submitted for analyses of a suite of trace elements, and by again following the procedure outlined above they estimated that for this site the extraneous component represented an admixture of about 1.7% meteoritic dust material, very similar to the soils at the Apollo 11 site. Since the trace element content of the rocks at the Apollo 12 site was similar to that at the Apollo 11 site, even though the two sites are separated by 1,400 kilometres, other considerations aside, they concluded that this
“spatial constancy of the meteoritic component suggests that the influx rate derived from our Apollo 11 data, 3.8 x 10^-9 g cm^-2 yr^-1, is a meaningful average for the entire moon.”86
So in the abstract to their paper they reported that
“an average meteoritic influx rate of about 4 x 10^-9 g per square centimetre per year thus seems to be valid for the entire moon.”87
This latter figure translates into an influx rate of approximately 20,900 tons per year.
Ironically, this is the same dust influx rate estimate as for the earth made by Dohnanyi using satellite and radar measurement data via a cumulative flux calculation.88 As for the moon’s meteoritic dust influx, Dohnanyi estimated that using “an appropriate focusing factor of 2,” it is thus half of the earth’s influx, that is, 10,450 tons per year.89 Dohnanyi defended his estimate, even though in his words it “is slightly lower than the independent estimates” of Keays, Ganapathy and their colleagues. He suggested that in view of the uncertainties involved, his estimate and theirs were “surprisingly close”.
While to Dohnanyi these meteoritic dust influx estimates based on chemical studies of the lunar rocks seem very close to his estimate based primarily on satellite measurements, in reality the former are between 50% and 100% greater than the latter. This difference is significant, reasons already having been given for the higher influx estimates for the earth based on chemical analyses of deep-sea sediments compared with the same cumulative flux estimates based on satellite and radar measurements. Many of the satellite measurements were in fact made from satellites in earth orbit, and it has consequently been assumed that these measurements are automatically applicable to the moon. Fortunately, this assumption has been verified by measurements made by the Russians from their moon-orbiting satellite Luna 19, as reported by Nazarova and his colleagues.90 Those measurements plot within the field of near-earth satellite data as depicted by, for example, Hughes.91 Thus there seems no reason to doubt that the satellite measurements in general are applicable to the meteoritic dust influx to the moon. And since Nazarova et al.’s Luna 19 measurements are compatible with Hughes’ cumulative flux plot of near-earth satellite data, then Hughes’ meteoritic dust influx estimate for the earth is likewise applicable to the moon, except that when the relevant focusing factor, as outlined and used by Dohnanyi,92 is taken into account we obtain a meteoritic dust influx to the moon estimate from this satellite data (via the standard cumulative flux calculation method) of half the earth’s figure, that is, about 8,000-9,000 tons per year.
Apart from satellite measurements using various techniques and detectors to actually measure the meteoritic dust influx to the earth-moon system, the other major direct detection technique used to estimate the meteoritic dust influx to the moon has been the study of the microcraters that are found in the rocks exposed at the lunar surface. It is readily apparent that the moon’s surface has been impacted by large meteorites, given the sizes of the craters that have resulted, but craters of all sizes are found on the lunar surface right down to the micro-scale. The key factors are the impact velocities of the particles, whatever their size, and the lack of an atmosphere on the moon to slow down (or burn up) the meteorites. Consequently, provided their mass is sufficient, even the tiniest dust particles will produce microcraters on exposed rock surfaces upon impact, just as they do when impacting the windows on spacecraft (the study of microcraters on satellite windows being one of the satellite measurement techniques). Additionally, the absence of an atmosphere on the moon, combined with the absence of water on the lunar surface, has meant that chemical weathering as we experience it on the earth just does not happen on the moon. There is of course still physical erosion, again due to impacting meteorites of all sizes and masses, and due to the particles of the solar wind, but these processes have also been studied as a result of the Apollo moon landings. However, it is the microcraters in the lunar rocks that have been used to estimate the dust influx to the moon.
Perhaps one of the first attempts to try and use microcraters on the moon’s surface as a means of determining the meteoritic dust influx to the moon was that of Jaffe,93 who compared pictures of the lunar surface taken by Surveyor 3 and then 31 months later by the Apollo 12 crew. The Surveyor 3 spacecraft sent thousands of television pictures of the lunar surface back to the earth between April 20 and May 3, 1967, and subsequently on November 20, 1969 the Apollo 12 astronauts visited the same site and took pictures with a hand camera. Apart from the obvious signs of disturbance of the surface dust by the astronauts, Jaffe found only one definite change in the surface. On the bottom of an imprint made by one of the Surveyor footpads when it bounced on landing, all of the pertinent Apollo pictures showed a particle about 2mm in diameter that did not appear in any of the Surveyor pictures. After careful analysis he concluded that the particle was put in place subsequent to the Surveyor picture-taking. Furthermore, because of the resolution of the pictures any crater as large as 1.5mm in diameter should have been visible in the Apollo pictures. Two pits were noted along with other particles, but as they appeared on both photographs they must have been produced at the time of the Surveyor landing. Thus Jaffe concluded that no meteorite craters as large as 1.5 mm in diameter appeared on the bottom of the imprint, 20cm in diameter, during those 31 months, and therefore the rate of meteorite impact was less than 1 particle per square metre per month. This corresponds to a flux of 4 x 10^-7 particles m^-2 sec^-1 of particles with a mass of 3 x 10^-8 g, a rate near the lower limit of meteoritic dust influx derived from spacecraft measurements, and many orders of magnitude lower than some previous estimates. He concluded that the absence of detectable craters in the imprint of the Surveyor 3 footpad implied a very low meteoritic dust influx onto the lunar surface.
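Jaffe’s quoted flux limit follows directly from the imprint area and the elapsed time, as the short check below shows (an average calendar month is assumed for the month length):

```python
import math

imprint_area_m2 = math.pi * (0.20 / 2) ** 2   # 20 cm diameter footpad imprint
elapsed_seconds = 31 * 30.44 * 24 * 3600      # 31 months between the two sets of pictures

# At most one new particle (and no craters as large as 1.5 mm) appeared in that
# area over that time, so the flux is bounded by:
flux_upper_limit = 1 / (imprint_area_m2 * elapsed_seconds)
print(flux_upper_limit)   # ~4 x 10^-7 particles per m^2 per second
```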
With the sampling of the lunar surface carried out by the Apollo astronauts and the return of rock samples to the earth, much attention focused on the presence of numerous microcraters on exposed rock surfaces as another means of calculating the meteoritic dust influx. These microcraters range in diameter from less than 1 micron to more than 1 cm, and their ubiquitous presence on exposed lunar rock surfaces suggests that microcratering has affected literally every square centimetre of the lunar surface. However, in order to translate quantified descriptive data on microcraters into data on interplanetary dust particles and their influx rate, a calibration has to be made between the lunar microcrater diameters and the masses of the particles that must have impacted to form the craters. Hartung et al.94 suggest that several approaches using the results of laboratory cratering experiments are possible, but narrowed their choice to two of these approaches based on microparticle accelerator experiments. Because the crater diameter for any given particle diameter increases proportionally with increasing impact velocity, the calibration procedure employs a constant impact velocity, which is chosen as 20km/sec. Furthermore, that figure is chosen because the velocity distribution of interplanetary dust or meteoroids based on visual and radar meteors is bounded by the earth and the solar system escape velocities, and has a maximum at about 20km/sec, which thus conventionally is considered to be the mean velocity for meteoroids. Particles impacting the moon may have a minimum velocity of 2.4km/sec, the lunar escape velocity, but the mean is expected to remain near 20km/sec because of the relatively low effective cross-section of the moon for slower particles. In-flight velocity measurements of micron-sized meteoroids are generally consistent with this distribution. So using a constant impact velocity of 20km/sec gives a calibration relationship between the diameters of the impacting particles and the diameters of the microcraters. Assuming a density of 3 g/cm^3 allows this calibration relationship to be expressed between the diameters of the microcraters and the masses of the impacting particles.
After determining the relative masses of micrometeoroids, their flux on the lunar surface may then be obtained by correlating the areal density of microcraters on rock surfaces with surface exposure times for those sample rocks. In other words, in order to convert crater populations on a given sample into the interplanetary dust flux, the sample’s residence time at the lunar surface must be known.95 These residence times at the lunar surface, or surface exposure times, have been determined either by cosmogenic Al-26 radioactivity measurements or by cosmic ray track density measurements,96 or more often by solar-flare particle track density measurements.97
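The two steps just described, calibrating crater size against particle mass and then converting crater counts into a flux, can be sketched as follows. The 20km/sec impact velocity and 3 g/cm^3 particle density are the calibration assumptions noted above, and a crater-to-particle diameter ratio of about two (an assumption discussed further below) is used for the calibration; the crater count, counting area and exposure age in the example are hypothetical.

```python
import math

PARTICLE_DENSITY = 3.0          # g/cm^3, the density assumed in the calibration described above
CRATER_TO_PARTICLE_RATIO = 2.0  # assumed ratio of crater diameter to particle diameter (see below)
SECONDS_PER_YEAR = 3.156e7

def particle_mass_from_crater(crater_diameter_cm):
    """Estimate the impacting particle mass (g) from a microcrater diameter,
    assuming spherical particles and a fixed 20 km/sec impact velocity."""
    particle_diameter = crater_diameter_cm / CRATER_TO_PARTICLE_RATIO
    return PARTICLE_DENSITY * math.pi * particle_diameter ** 3 / 6.0

def flux_particles_per_m2_per_s(crater_count, area_cm2, exposure_age_years):
    """Convert an areal density of microcraters and an assumed surface exposure
    age into a cumulative particle flux."""
    per_cm2_per_yr = crater_count / (area_cm2 * exposure_age_years)
    return per_cm2_per_yr * 1.0e4 / SECONDS_PER_YEAR   # cm^-2 yr^-1 -> m^-2 s^-1

# Hypothetical example: a 100-micron (0.01 cm) crater, and 50 craters counted
# on 2 cm^2 of rock with an assumed 1-million-year exposure age:
print(particle_mass_from_crater(0.01))               # ~2 x 10^-7 g
print(flux_particles_per_m2_per_s(50, 2.0, 1.0e6))   # ~8 x 10^-9 particles m^-2 s^-1
```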
On this basis Hartung et al.98 concluded that an average minimum flux of particles 25 micrograms and larger is 2.5 x 10^-6 particles per cm^2 per year on the lunar surface supposedly over the last 1 million years, and that a minimum cumulative flux curve over the range of masses 10^-12 - 10^-4 g based on lunar data alone is about an order of magnitude less than independently derived present-day flux data from satellite-borne detector experiments. Furthermore, they found that particles of masses 10^-7 - 10^-4 g are the dominant contributors to the cross-sectional area of interplanetary dust particles, and that these particles are largely responsible for the exposure of fresh lunar rock surfaces by superposition of microcraters. Also, they suggested that the overwhelming majority of all energy deposited at the surface of the moon by impact is delivered by particles 10^-6 - 10^-2 g in mass.
A large number of other studies have been done on microcraters on lunar surface rock samples, and from them calculations made to estimate the meteoritic dust (micrometeoroid) influx to the moon. For example, Fechtig et al. investigated in detail a 2 cm^2 portion of a particular sample using optical and scanning electron microscope (SEM) techniques. Microcraters were measured and counted optically, the results being plotted to show the relationship between microcrater diameters and the cumulative crater frequency. Like other investigators, they found that in all large microcraters 100-200 microns in diameter there were on average one or two “small” microcraters about 1 micron in diameter within them, while in all “larger” microcraters (200-1,000 microns in diameter), of which there are many on almost all lunar rocks, there are large numbers of these “smaller” microcraters. The counting of these “small” microcraters within the “larger” microcraters was found to be statistically significant in estimating the overall microcratering rate and the distribution of particle sizes and masses that have produced the microcraters. The reason is that, assuming an unchanging impacting particle size or energy distribution with time, an equal probability exists for the case when a large crater superimposes itself upon a small crater, thus making its observation impossible, and the case when a small crater superimposes itself upon a larger crater, thus enabling the observation of the small crater. In other words, during the random cratering process, on the average, for each small crater observable within a larger microcrater, there must have existed one small microcrater rendered unobservable by the subsequent formation of the larger microcrater. Thus they reasoned it is necessary to correct the number of observed small craters upwards to account for this effect. Using a correction factor of two they found that their resultant microcrater size distribution plot agreed satisfactorily with that found in another sample by Schneider et al.100 Their measuring and counting of microcraters on other samples also yielded size distributions similar to those reported by other investigators on other samples.
Fechtig et al. also conducted their own laboratory simulation experiments to calibrate microcrater size with impacting particle size, mass and energy. Once the cumulative microcrater number for a given area was calculated from this information, the cumulative meteoroid flux per second for this given area was easily calculated by again dividing the cumulative microcrater number by the exposure ages of the samples, previously determined by means of solar-flare track density measurements. Thus they calculated a cumulative meteoroid flux on the moon of 4 (±3) x 10^-5 particles m^-2 sec^-1, which they suggested is fairly consistent with in situ satellite measurements. Their plot comparing micrometeoroid fluxes derived from lunar microcrater measurements with those attained from various satellite experiments (that is, the cumulative number of particles per square metre per second across the range of particle masses) is reproduced in Figure 5.
Mandeville101 followed a similar procedure in studying the microcraters in a breccia sample collected at the Apollo 15 landing site. Crater numbers were counted and diameters measured. Calibration curves were experimentally derived to relate impact velocity and microcrater diameter, plus impacting particle mass and microcrater diameter. The low solar-flare track density suggested a short and recent exposure time, as did the low density of microcraters. Consequently, in calculating the cumulative micrometeoroid flux a 3,000-year exposure time was assumed because of this measured solar-flare track density and the assumed solar-flare track production rate. The resultant cumulative particle flux was 1.4 x 10^-5 particles per square metre per second for particles greater than 2.5 x 10^-10 g at an impact velocity of 20km/sec, a value which again appears to be in close agreement with flux values obtained by satellite measurements, but at the lower end of the cumulative flux curve calculated from microcraters by Fechtig et al.
Figure 5. Comparison of micrometeoroid fluxes derived from lunar microcrater measurements (cross-hatched and labelled “MOON”) with those obtained in various satellite in situ experiments (adapted from Fechtig et al.99). The range of masses/sizes has been subdivided into dust and meteors.
Schneider et al.102 also followed the same procedure in looking at microcraters on Apollo 15 and 16, and Luna 16 samples. After counting and measuring microcraters and calibration experiments, they used both optical and scanning electron microscopy to determine solar-flare track densities and derive solar-flare exposure ages. They plotted their resultant cumulative meteoritic dust flux on a flux versus mass diagram, such as Figure 5, rather than quantifying it. However, their cumulative flux curve is close to the results of other investigators, such as Hartung et al.103 Nevertheless, they did raise some serious questions about the microcrater data and the derivation of it, because they found that flux values based on lunar microcrater studies are generally less than those based on direct measurements made by satellite-borne detectors, which is evident on Figure 5 also. They found that this discrepancy is not readily resolved but may be due to one or more factors. First on their list of factors was a possible systematic error existing in the solar-flare track method, perhaps related to our present-day knowledge of the solar-flare particle flux. Indeed, because of uncertainties in applying the solar-flare flux derived from solar-flare track records in time-controlled situations such as the Surveyor 3 spacecraft, they concluded that these implied their solar-flare exposure ages were systematically too low by a factor of between two and three. Ironically, this would imply that the calculated cumulative dust flux from the microcraters is systematically too high by the same factor, which would mean that there would then be an even greater discrepancy between flux values from lunar microcrater studies and the direct measurements made by the satellite-borne detectors. However, they suggested that part of this systematic difference may be because the satellite-borne detectors record an enhanced flux due to particles ejected from the lunar surface by impacting meteorites of all sizes. In any case, they argued that some of this systematic difference may be related to the calibration of the lunar microcraters and the satellite-borne detectors. Furthermore, because we can only measure the present flux, for example by satellite detectors, it may in fact be higher than the long-term average, which they suggest is what is being derived from the lunar microcrater data.
Morrison and Zinner104 also raised questions regarding solar-flare track density measurements and derived exposure ages. They were studying samples from the Apollo 17 landing area and counted and measured microcraters on rock sample surfaces whose original orientation on the lunar surface was known, so that their exposure histories could be determined to test any directional variations in both the micrometeoroid flux and solar-flare particles. Once measured, they compared their solar-flare track density versus depth profiles against those determined by other investigators on other samples and found differences in the steepnesses of the curves, as well as their relative positions with respect to the track density and depth values. They found that differences in the steepnesses of the curves did not correlate with differences in supposed exposure ages, and thus although they couldn’t exclude these real differences in slopes reflecting variations in the activity of the sun, it was more probable that these differences arose from variations in observational techniques, uncertainties in depth measurements, erosion, dust cover on the samples, and/or the precise lunar surface exposure geometry of the different samples measured. They then suggested that the weight of the evidence appeared to favour those curves (track density versus depth profiles) with the flatter slopes, although such a conclusion could be seriously questioned as those profiles with the flatter slopes do not match the Surveyor 3 profile data even by their own admission.
Rather than calculating a single cumulative flux figure, Morrison and Zinner treated the smaller microcraters separately from the larger microcraters, quoting flux rates of approximately 900 craters of 0.1 micron diameter per square centimetre per year and approximately 10-15 x 10^-6 craters of 500 micron diameter or greater per square centimetre per year. They found that these rates were independent of the pointing direction of the exposed rock surface relative to the lunar sky, and thus that there was no directional variation in the micrometeorite flux. These rates also appeared to be independent of the supposed exposure times of the samples. They also suggested that the ratio of microcrater numbers to solar-flare particle track densities would make a convenient measure for comparing flux results of different laboratories/investigators and varying sampling situations. Comparing such ratios from their data with those of other investigations showed that some other investigators had ratios lower than theirs by a factor of as much as 50, which can only raise serious questions about whether the microcrater data are really an accurate measure of meteoritic dust influx to the moon. However, it can’t be the microcraters themselves that are the problem, but rather the underlying assumptions involved in the determination/estimation of the supposed ages of the rocks and their exposure times.
Another relevant study is that made by Cour-Palais,105 who examined the heat-shield windows of the command modules of the Apollo 7 - 17 (excluding Apollo 11) spacecraft for meteoroid impacts as a means of estimating the interplanetary dust flux. As part of the study he also compared his results with data obtained from the Surveyor 3 lunar-lander’s TV shroud. In each case, the length of exposure time was known, which removed the uncertainty and assumptions that are inherent in the estimation of exposure times in the study of microcraters on lunar rock samples. Furthermore, results from the Apollo spacecraft represented interplanetary space measurements very similar to the satellite-borne detector techniques, whereas the Surveyor 3 TV shroud represented a lunar surface detector. In all, Cour-Palais found a total of 10 micrometeoroid craters of various diameters on the windows of the Apollo spacecraft. Calibration tests were conducted by impacting these windows with microparticles of various diameters and masses, and the results were used to plot a calibration curve between the diameters of the micrometeoroid craters and the estimated masses of the impacting micrometeoroids. Because the Apollo spacecraft had variously spent time in earth orbit, and some in lunar orbit also, as well as transit time in interplanetary space between the earth and the moon, correction factors had to be applied so that the Apollo window data could be taken as a whole to represent measurements in interplanetary space. He likewise applied a modification factor to the Surveyor 3 TV shroud results so that with the Apollo data the resultant cumulative mass flux distribution could be compared to results obtained from satellite-borne detector systems, with which they proved to be in good agreement.
He concluded that the results represent an average micrometeoroid flux as it exists at the present time away from the earth’s gravitational sphere of influence for masses < 10^-7 g. However, he noted that the satellite-borne detector measurements which represent the current flux of dust are an order of magnitude higher than the flux supposedly recorded by the lunar microcraters, a record which is interpreted as the “prehistoric” flux. On the other hand, he corrected the Surveyor 3 results to discount the moon’s gravitational effect and bring them into line with the interplanetary dust flux measurements made by satellite-borne detectors. But if the Surveyor 3 results are taken to represent the flux at the lunar surface then that flux is currently an order of magnitude lower than the flux recorded by the Apollo spacecraft in interplanetary space. In any case, the number of impact craters measured on these respective spacecraft is so small that one wonders how statistically representative these results are. Indeed, given the size of the satellite-borne detector systems, one could argue likewise as to how representative these detector results are of the vastness of interplanetary space.
Figure 6. Cumulative fluxes (numbers of micrometeoroids with mass greater than the given mass which will impact every second on a square metre of exposed surface one astronomical unit from the sun) derived from satellite and lunar microcrater data (adapted from Hughes106).
Others had been noticing this disparity between the lunar microcrater data and the satellite data. For example, Hughes reported that this disparity had been known “for many years”.106 His diagram to illustrate this disparity is shown here as Figure 6. He highlighted a number of areas where he saw there were problems in these techniques for measuring micrometeoroid influx. For example, he reported that new evidence suggested that the meteoroid impact velocity was about 5 km/sec rather than the 20 km/sec that had hitherto been assumed. He suggested that taking this into account would only move the curves in Figure 6 to the right by factors varying with the velocity dependence of microphone response and penetration hole size (for the satellite-borne detectors) and crater diameter (the lunar microcraters), but because these effects are only functions of meteoroid momentum or kinetic energy their use in adjusting the data is still not sufficient to bring the curves in Figure 6 together (that is, to overcome this disparity between the two sets of data). Furthermore, with respect to the lunar microcrater data, Hughes pointed out that two other assumptions, namely, the ratio of the diameter of the microcrater to the diameter of the impacting particle being fairly constant at two, and the density of the particle being 3 g per cm^3, needed to be reconsidered in the light of laboratory experiments which had shown the ratio decreases with particle density and particle density varies with mass. He suggested that both these factors make the interpretation of microcraters more difficult, but that “the main problem” lies in estimating the time the rocks under consideration have remained exposed on the lunar surface. Indeed, he pointed to the assumption that solar activity has remained constant in the past, the key assumption required for calculation of an exposure age, as “the real stumbling block” - the particle flux could have been lower in the past or the solar-flare flux could have been higher. He suggested that because laboratory simulation indicates that solar-wind sputter erosion is the dominant factor determining microcrater lifetimes, knowing this enables the micrometeoroid influx to be derived by only considering rock surfaces with an equilibrium distribution of microcraters. He concluded that this line of research indicated that the micrometeoroid influx had supposedly increased by a factor of four in the last 100,000 years and that this would account for the disparity between the lunar microcrater data and the satellite data as shown by the separation of the two curves in Figure 6. However, this “solution”, according to Hughes, “creates the question of why this flux has increased”, a problem which appears to remain unsolved.
In a paper reviewing the lunar microcrater data and the lunar micrometeoroid flux estimates, Hörz et al.107 discuss some key issues that arise from their detailed summary of micrometeoroid fluxes derived by various investigators from lunar sample analyses. First, the directional distribution of micrometeoroids is extremely non-uniform, the meteoroid flux differing by about three orders of magnitude between the direction of the earth’s apex and anti-apex. Since the moon may collect particles greater than 10^-12 g predominantly from the apex direction only, fluxes derived from lunar microcrater statistics, they suggest, may have to be increased by as much as a factor of π for comparison with satellite data that were taken in the apex direction. On the other hand, apex-pointing satellite data generally have been corrected upward because of an assumed isotropic flux, so the actual anisotropy has led to an overestimation of the flux, thus making the satellite results seem to represent an upper limit for the flux. Second, the micrometeoroids coming in at the apex direction appear to have an average impact velocity of only 8 km/sec, whereas the fluxes calculated from lunar microcraters assume a standard impact velocity of 20 km/sec. If as a result corrections are made, then the projectile mass necessary to produce any given microcrater will increase, and thus the moon-based flux for masses greater than 10^-10 g will effectively be enhanced by a factor of approximately 5. Third, particles of mass less than 10^-12 g generally appear to have relative velocities of at least 50 km/sec, whereas lunar flux curves for these masses are based again on a 20 km/sec impact velocity. So again, if appropriate corrections are made the lunar cumulative micrometeoroid flux curve would shift towards smaller masses by a factor of possibly as much as 10. Nevertheless, Hörz et al. conclude that
“as a consequence the fluxes derived from lunar crater statistics agree within the order of magnitude with direct satellite results if the above uncertainties in velocity and directional distribution are considered.”
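The size of the velocity correction mentioned in the second point can be illustrated with a short sketch. It assumes, purely for illustration (Hörz et al.’s actual crater-scaling relation is not reproduced here, and the function name is mine), that crater-forming efficiency scales with projectile kinetic energy, so that for a microcrater of fixed size the inferred projectile mass varies inversely with the square of the assumed impact velocity:

```python
# Hypothetical illustration only: how lowering the assumed impact velocity
# raises the projectile mass inferred from a microcrater of fixed size.
# Assumed scaling (not taken from Hoerz et al.): crater size ~ projectile
# kinetic energy, so m * v**2 = constant for a given crater.

def mass_enhancement(v_old_km_s, v_new_km_s, exponent=2.0):
    """Factor by which inferred projectile masses increase when the assumed
    impact velocity is lowered from v_old to v_new (energy scaling: exponent 2)."""
    return (v_old_km_s / v_new_km_s) ** exponent

# Lunar microcrater fluxes assumed 20 km/sec; apex particles may arrive at ~8 km/sec.
print(mass_enhancement(20.0, 8.0))        # 6.25 with energy scaling
print(mass_enhancement(20.0, 8.0, 1.0))   # 2.5 with momentum scaling
```

With an energy-type scaling the correction comes out at about 6, of the same order as the factor of approximately 5 they quote; a purely momentum-based scaling would give 2.5.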
Although these comments appeared in a review paper published in 1975, the footnote on the first page signifies that the paper was presented at a scientific meeting in 1973, the same meeting at which three of those investigators also presented another paper in which they made some further pertinent comments. Both there and in a previous paper, Gault, Hörz and Hartung108,109 had presented what they considered was a “best” estimate of the cumulative meteoritic dust flux based on their own interpretation of the most reliable satellite measurements. This “best” estimate they expressed mathematically in the form
N = 9.14 x 10^-6 m^-1.213, for 10^-7 < m < 10^3.
Figure 7. The micrometeoroid flux measurements from spacecraft experiments which were selected to define the mass-flux distribution (adapted from Gault et al.109). Also shown is the incremental mass flux contained within each decade of m, which sums to approximately 10,000 tonnes per year. The data sources used are listed in their bibliography.
They commented that the use of two such exponential expressions with the resultant discontinuity is an artificial representation for the flux and not intended to represent a real discontinuity, being used for mathematical simplicity and for convenience in computational procedures. They also plotted this cumulative flux represented by these two exponential expressions, together with the incremental mass flux in each decade of particle mass, and that plot is reproduced here as Figure 7. Note that their flux curve is based on what they regard as the most reliable satellite measurements. Note also, as they did, that the fluxes derived from lunar rocks (the microcrater data) “are not necessarily directly comparable with the current satellite or photographic meteor data.”110 However, using their cumulative flux curve as depicted in Figure 7, and their histogram plot of incremental mass flux, it is possible to estimate (for example, by adding up each incremental mass flux) the cumulative mass flux, which comes to approximately 2 x 10^-9 g cm^-2 yr^-1 or about 10,000 tons per year. This is the same estimate that they noted in their concluding remarks:-
“We note that the mass of material contributing to any enhancement, which the earth-moon system is currently sweeping up, is of the order of 10^10 g per year.”111
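As a rough cross-check on that concluding figure, the cumulative mass flux of about 2 x 10^-9 g cm^-2 yr^-1 read from Figure 7 can be scaled by a collecting area. The sketch below assumes, purely for illustration, that the earth’s surface area dominates the collecting area of the earth-moon system; that assumption is mine, not Gault et al.’s, but it reproduces the order of 10^10 g per year quoted above.

```python
import math

# Figure taken from the text: cumulative mass flux ~2e-9 g per cm^2 per year.
mass_flux = 2e-9          # g cm^-2 yr^-1, as read from Gault et al.'s curve

# Assumption for illustration: the earth's surface area (R ~ 6371 km) is taken
# as the dominant collecting area of the earth-moon system.
R_earth_cm = 6.371e8
area_cm2 = 4.0 * math.pi * R_earth_cm ** 2      # ~5.1e18 cm^2

annual_mass_g = mass_flux * area_cm2
print(f"{annual_mass_g:.1e} g/yr")              # ~1e10 g/yr
print(f"{annual_mass_g / 1e6:.0f} tonnes/yr")   # ~10,000 tonnes/yr
```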
Having derived this “best” estimate flux from their mathematical modelling of the “most reliable satellite measurements”, their later comments in the same paper seem rather contradictory:-
“If we follow this line of reasoning, the basic problem then reduces to consideration of the validity of the ‘best’ estimate flux, a question not unfamiliar to the subject of micrometeoroids and a question not without considerable historical controversy. We will note here only that whereas it is plausible to believe that a given set of data from a given satellite may be in error for any number of reasons, we find the degree of correlation between the various spacecraft experiments used to define the ‘best’ flux very convincing, especially when consideration is given to the different techniques employed to detect and measure the flux. Moreover, it must be remembered that the abrasion rates, affected primarily by microgram masses, depend almost exclusively on the satellite data while the rupture times, affected only by milligram masses, depend exclusively on the photographic meteor determinations of masses. It is extremely awkward to explain how these fluxes from two totally different and independent techniques could be so similarly in error. But if, in fact, they are in error then they err by being too high, and the fluxes derived from lunar rocks are a more accurate description of the current near-earth micrometeoroid flux.” (emphasis theirs).112
One is left wondering how they can on the one hand emphasise the lunar microcrater data as being a more accurate description of the current micrometeoroid flux, when they based their “best” estimate of that flux on the “most reliable satellite measurements”. However, their concluding remarks are rather telling. The reason, of course, why the lunar microcrater data is given such emphasis is because it is believed to represent a record of the integrated cumulative flux over the moon’s billions-of-years history, which would at face value appear to be a more statistically reliable estimate than brief point-in-space satellite-borne detector measurements. Nevertheless, they are left with this unresolved discrepancy between the microcrater data and the satellite measurements, as has already been noted. So they explain the microcrater data as presenting the “prehistoric” flux, the fluxes derived from the lunar rocks being based on exposure ages derived from solar-flare track density measurements and assumptions regarding solar-flare activity in the past. As for the lunar microcrater data used by Gault et al., they state that the derived fluxes are based on exposure ages in the range 2,500 - 700,000 years, which leaves them with a rather telling enigma. If the current flux as indicated by the satellite measurements is an order of magnitude higher than the microcrater data representing a “prehistoric” flux, then the flux of meteoritic dust has had to have increased or been enhanced in the recent past. But they have to admit that
“if these ages are accepted at face value, a factor of 10 enhancement integrated into the long term average limits the onset and duration of enhancement to the past few tens of years.”
They note that of course there are uncertainties in both the exposure ages and the magnitude of an enhancement, but the real question is the source of this enhanced flux of particles, a question they leave unanswered and a problem they pose as the subject for future investigation. On the other hand, if the exposure ages were not accepted (on the grounds that they are too long), then the microcrater data could easily be reconciled with the “more reliable satellite measurements”.
Only two other micrometeoroid and meteor influx measuring techniques appear to have been tried. One of these was the Apollo 17 Lunar Ejecta and Micrometeorite Experiment, a device deployed by the Apollo 17 crew which was specifically designed to detect micrometeorites.113 It consisted of a box containing monitoring equipment with its outside cover being sensitive to impacting dust particles. Evidently, it was capable not only of counting dust particles, but also of measuring their masses and velocities, the objective being to establish some firm limits on the numbers of microparticles in a given size range which strike the lunar surface every year. However, the results do not seem to have added to the large database already established by microcrater investigations.
The other direct measurement technique used was the Passive Seismic Experiment in which a seismograph was deployed by the Apollo astronauts and left to register subsequent impact events.114 In this case, however, the particle sizes and masses were in the gram to kilogram range of meteorites that impacted the moon’s surface with sufficient force to cause the vibrations to be recorded by the seismograph. Between 70 and 150 meteorite impacts per year were recorded, with masses in the range 100g to 1,000 kg, implying a flux rate of
log N = -1.62 - 1.16 log m,
where N is the number of bodies that impact the lunar surface per square kilometre per year, with masses greater than m grams.115 This flux works out to be about one order of magnitude less than the average integrated flux from microcrater data. However, the data collected by this experiment have been used to cover that particle mass range in the development of cumulative flux curves (for example, see Figure 2 again) and the resultant cumulative mass flux estimates.
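To make the use of this relation concrete, the sketch below simply evaluates the quoted fit at a few masses; N is the cumulative number of impacts per square kilometre of lunar surface per year by bodies more massive than m grams, as defined above (the function name is mine).

```python
import math

def cumulative_impacts_per_km2_yr(m_grams):
    """Cumulative lunar impact rate from the Passive Seismic Experiment fit:
    log10 N = -1.62 - 1.16 log10 m, with N in impacts per km^2 per year
    for bodies of mass greater than m grams."""
    return 10 ** (-1.62 - 1.16 * math.log10(m_grams))

for m in (1e2, 1e3, 1e6):   # 100 g, 1 kg, 1 tonne
    print(f"m > {m:.0e} g : {cumulative_impacts_per_km2_yr(m):.2e} impacts per km^2 per yr")
```

On this fit, for example, a given square kilometre is struck by a body more massive than a kilogram only roughly once every hundred thousand years.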
Figure 8. Constraints on the flux of micrometeoroids and larger objects according to a variety of independent lunar studies (adapted from Hörz et al.107)
Hörz et al. summarised some of the basic constraints derived from a variety of independent lunar studies on the lunar flux of micrometeoroids and larger objects.116 They also plotted the broad range of cumulative flux curves that were bounded by these constraints (see Figure 8). Included are the results of the Passive Seismic Experiment and the direct measurements of micrometeoroids encountered by spacecraft windows. They suggested that an upper limit on the flux can be derived from the mare cratering rate and from erosion rates on lunar rocks and other cratering data. Likewise, the negative findings on the Surveyor 3 camera lens and the perfect preservation of the footpad print of the Surveyor 3 landing gear (both referred to above) also define an upper limit. On the other hand, the lower limit results from the study of solar and galactic radiation tracks in lunar soils, where it is believed the regolith has been reworked only by micrometeoroids, so because of presumed old undisturbed residence times the flux could not have been significantly lower than that indicated. The “geochemical” evidence is also based on studies of the lunar soils, where the abundances of trace elements are indicative of the type and amount of meteoritic contamination. Hörz et al. suggest that strictly, only the passive seismometer, the Apollo windows and the mare craters yield a cumulative mass distribution. All other parameters are either a bulk measure of a meteoroid mass or energy, the corresponding “flux” being calculated via the differential mass-distribution obtained from lunar microcrater investigations (‘lunar rocks’ on Figure 8). Thus the corresponding arrows on Figure 8 may be shifted anywhere along the lines defining the “upper” and “lower” limits. On the other hand, they point out that the Surveyor 3 camera lens and footpad analyses define points only.
| Source | Basis of estimate | Tons per year |
| --- | --- | --- |
|  | Calculated from estimates of influx to the earth | 4,000 |
| Keays et al. | Geochemistry of lunar soil and rocks | 15,200 |
| Ganapathy et al. | Geochemistry of lunar soil and rocks | 19,900 |
|  | Calculated from satellite, radar data | 10,450 |
| Nazarova et al. | Lunar orbit satellite data | 8,000 - 9,000 |
| (by comparison with Hughes) | Calculated from satellite, radar data | (4,000 - 15,000) |
| Gault et al. | Combination of lunar microcrater and satellite data | 10,000 |

Table 4. Summary of the lunar meteoritic dust influx estimates.
Table 4 summarises the different lunar meteoritic dust estimates. It is difficult to estimate a cumulative mass flux from Hörz et al.’s diagram showing the basic constraints for the flux of micrometeoroids and larger objects derived from independent lunar studies (see Figure 8), because the units on the cumulative flux axis are markedly different to the units on the same axis of the cumulative flux and cumulative mass diagram of Gault et al., from which a lunar meteoritic dust influx of about 10,000 tons per year was estimated. The Hörz et al. basic constraints diagram seems to have been partly constructed from the previous figure in their paper, which however includes some of the microcrater data used by Gault et al. in their diagram (Figure 7 here) and from which the cumulative mass flux calculation gave a flux estimate of 10,000 tons per year. Assuming, then, that the basic differences in the units used on the two cumulative flux diagrams (Figures 7 and 8 here) are merely a matter of the relative numbers in the two log scales, the Gault et al. cumulative flux curve should fall within a band between the upper and lower limits, that is, within the basic constraints, of Hörz et al.’s lunar cumulative flux summary plot (Figure 8 here). Thus a flux estimate from Hörz et al.’s broad lunar cumulative flux curve would still probably centre around the 10,000 tons per year estimate of Gault et al.
In conclusion, therefore, on balance the evidence points to a lunar meteoritic dust influx figure of around 10,000 tons per year. This seems to be a reasonable, approximate estimate that can be derived from the work of Hörz et al., who place constraints on the lunar cumulative flux by carefully drawing on a wide range of data from various techniques. Even so, as we have seen, Gault et al. question some of the underlying assumptions of the major measurement techniques from which they drew their data - in particular, the lunar microcrater data and the satellite measurement data. Like the “geochemical” estimates, the microcrater data depends on uniformitarian age assumptions, including the solar-flare rate, and in common with the satellite data, uniformitarian assumptions regarding the continuing level of dust in interplanetary space and as influx to the moon. Claims are made about variations in the cumulative dust influx in the past, but these also depend upon uniformitarian age assumptions and thus the argument could be deemed circular. Nevertheless, questions of sampling statistics and representativeness aside, the figure of approximately 10,000 tons per year has been stoutly defended in the literature based primarily on present-day satellite-borne detector measurements.
Finally, one is left rather perplexed by the estimate of the moon’s accumulation rate of about 500 tons per year made by Van Till et al.117 In their treatment of the “moon dust controversy”, they are rather scathing in their comments about creationists and their handling of the available data in the literature. For example, they state:
“The failure to take into account the published data pertinent to the topic being discussed is a clear failure to live up to the codes of thoroughness and integrity that ought to characterize professional science.”118
“The continuing publication of those claims by young-earth advocates constitutes an intolerable violation of the standards of professional integrity that should characterize the work of natural scientists.”119
Having been prepared to make such scathing comments, one would have expected that Van Till and his colleagues would have been more careful with their own handling of the scientific literature that they purport to have carefully scanned. Not so, because they failed to check their own calculation of 500 tons per year for lunar dust influx with those estimates that we have seen in the same literature which were based on some of the same satellite measurements that Van Till et al. did consult, plus the microcrater data which they didn’t. But that is not all - they failed to check the factors they used for calculating their lunar accumulation rate from the terrestrial figure they had established from the literature. If they had consulted, for example, Dohnanyi, as we have already seen, they would have realised that they only needed to use a focusing factor of two, the moon’s smaller surface area apparently being largely irrelevant. So much for lack of thoroughness! Had they surveyed the literature thoroughly, then they would have to agree with the conclusion here that the dust influx to the moon is approximately 10,000 tons per year.
The second major question to be addressed is whether NASA really expected to find a thick dust layer on the moon when their astronauts landed on July 20, 1969. Many have asserted that, because of meteoritic dust influx estimates made by Pettersson and others prior to the Apollo moon landings, NASA was cautious in case there really was a thick dust layer into which their lunar lander and astronauts might sink.
Asimov is certainly one authority at the time who is often quoted. Using the 14,300,000 tons of dust per year estimate of Pettersson, Asimov made his own dust on the moon calculation and commented:
“But what about the moon? It travels through space with us and although it is smaller and has a weaker gravity, it, too, should sweep up a respectable quantity of micrometeors.
To be sure, the moon has no atmosphere to friction the micrometeors to dust, but the act of striking the moon’s surface should develop a large enough amount of heat to do the job.
Now it is already known, from a variety of evidence, that the moon (or at least the level lowlands) is covered with a layer of dust. No one, however, knows for sure how thick this dust may be.
It strikes me that if this dust is the dust of falling micrometeors, the thickness may be great. On the moon there are no oceans to swallow the dust, or winds to disturb it, or life forms to mess it up generally one way or another. The dust that forms must just lie there, and if the moon gets anything like the earth’s supply, it could be dozens of feet thick.
In fact, the dust that strikes craters quite probably rolls down hill and collects at the bottom, forming ‘drifts’ that could be fifty feet deep, or more. Why not?
I get a picture, therefore, of the first spaceship, picking out a nice level place for landing purposes coming slowly downward tail-first … and sinking majestically out of sight.”120
Asimov certainly wasn’t the first to speculate about the thickness of dust on the moon. As early as 1897 Peal121 was speculating on how thick the dust might be on the moon given that “it is well known that on our earth there is a considerable fall of meteoric dust.” Nevertheless, he clearly expected only “an exceedingly thin coating” of dust. Several estimates of the rate at which meteorites fall to earth were published between 1930 and 1950, all based on visual observations of meteors and meteorite falls. Those estimates ranged from 26 metric tons per year to 45,000 tons per year.122 In 1956 Öpik123 estimated 25,000 tons per year of dust falling to the earth, the same year Watson124 estimated a total accumulation rate of between 300,000 and 3 million tons per year, and in 1959 Whipple125 estimated 700,000 tons per year.
However, it wasn’t just the matter of meteoritic dust falling to the lunar surface that concerned astronomers in their efforts to estimate the thickness of dust on the lunar surface, since the second source of pulverised material on the moon is the erosion of exposed rocks by various processes. The lunar craters are of course one of the most striking features of the moon and initially astronomers thought that volcanic activity was responsible for them, but by about 1950 most investigators were convinced that meteorite impact was the major mechanism involved.126 Such impacts pulverise large amounts of rock and scatter fragments over the lunar surface. Astronomers in the 1950s agreed that the moon’s surface was probably covered with a layer of pulverised material via this process, because radar studies were consistent with the conclusion that the lunar surface was made of fine particles, but there were no good ways to estimate its actual thickness.
Yet another contributing source to the dust layer on the moon was suggested by Lyttleton in 1956.127 He suggested that since there is no atmosphere on the moon, the moon’s surface is exposed to direct radiation, so that ultraviolet light and x-rays from the sun could slowly erode the surface of exposed lunar rocks and reduce them to dust. Once formed, he envisaged that the dust particles might be kept in motion and so slowly “flow” to lower elevations on the lunar surface where they would accumulate to form a layer of dust which he suggested might be “several miles deep”. Lyttleton wasn’t alone, since the main proponent of the thick dust view in British scientific circles was Royal Greenwich astronomer Thomas Gold, who also suggested that this loose dust covering the lunar surface could present a serious hazard to any spacecraft landing on the moon.128 Whipple, on the other hand, argued that the dust layer would be firm and compact so that humans and vehicles would have no trouble landing on and moving across the moon’s surface.129 Another British astronomer, Moore, took note of Gold’s theory that the lunar seas “were covered with layers of dust many kilometres deep” but flatly rejected this. He commented:
“The disagreements are certainly very marked. At one end of the scale we have Gold and his supporters, who believe in a dusty Moon covered in places to a great depth; at the other, people such as myself, who incline to the view that the dust can be no more than a few centimetres deep at most. The only way to clear the matter up once and for all is to send a rocket to find out.”150
So it is true that some astronomers expected to find a thick dust layer, but this was by no means universally supported in the astronomical community. The Russians too were naturally interested in this question at this time because of their involvement in the “space race”, but they also had not reached a consensus on this question of the lunar dust. Sharonov,131 for example, discussed Gold’s theory and arguments for and against a thick dust layer, admitting that “this theory has become the object of animated discussion.” Nevertheless, he noted that the “majority of selenologists” favoured the plains of the lunar “seas” (maria) being layers of solidified lavas with minimal dust cover.
The lunar dust question was also on the agenda of the December 1960 Symposium number 14 of the International Astronomical Union held at the Pulkovo Observatory near Leningrad. Green summed up the arguments as follows:
“Polarization studies by Wright verified that the surface of the lunar maria is covered with dust. However, various estimates of the depth of this dust layer have been proposed. In a model based on the radioastronomy techniques of Dicke and Beringer and others, a thin dust layer is assumed; Whipple assumes the covering to be less than a few meters thick.
On the other hand, Gold, Gilvarry, and Wesselink favor a very thick dust layer. … Because no polar homogenization of lunar surface details can be demonstrated, however, the concept of a thin dust layer appears more reasonable. … Thin dust layers, thickening in topographic basins near post-mare craters, are predicted for mare areas.”132
In a 1961 monograph on the lunar surface, Fielder discussed the dust question in some detail, citing many of those who had been involved in the controversy. Having discussed the lunar mountains where he said “there may be frequent pockets of dust trapped in declivities” he concluded that the mean dust cover over the mountains would only be a millimetre or so.133 But then he went on to say,
“No measurements made so far refer purely to mare-base materials. Thus, no estimates of the composition of maria have direct experimental backing. This is unfortunate, because the interesting question ‘How deep is the dust in the lunar seas?’ remains unanswered.”
In 1964 a collection of research papers was published in a monograph entitled The Lunar Surface Layer, and the consensus therein amongst the contributing authors was that there was not a thick dust layer on the moon’s surface. For example, in the introduction, Kopal stated that
“this layer of loose dust must extend down to a depth of at least several centimeters, and probably a foot or so; but how much deeper it may be in certain places remains largely conjectural.”134
In a paper on “Dust Bombardment on the Lunar Surface”, McCracken and Dubin undertook a comprehensive review of the subject, including the work of Öpik and Whipple, plus many others who had since been investigating the meteoritic dust influx to the earth and moon, but concluded that
“The available data on the fluxes of interplanetary dust particles with masses less than 10^4 gm show that the material accreted by the moon during the past 4.5 billion years amounts to approximately 1 gm/cm^2 if the flux has remained fairly constant.”135
(Note that this statement is based on the uniformitarian age constraints for the moon.) Thus they went on to say that
“The lunar surface layer thus formed would, therefore, consist of a mixture of lunar material and interplanetary material (primarily of cometary origin) from 10cm to 1m thick. The low value for the accretion rate for the small particles is not adequate to produce large-scale dust erosion or to form deep layers of dust on the moon. …”.136
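As a rough check on what the quoted figure implies, 1 gm/cm^2 accreted over 4.5 billion years corresponds to an average influx of only about 2 x 10^-10 g cm^-2 yr^-1, and to a column of meteoritic material of the order of a centimetre or less. The sketch below works this through, assuming (for illustration only) bulk densities of 1-3 g/cm^3 for the accreted material.

```python
# Rough check on the figure quoted by McCracken and Dubin: ~1 g/cm^2 of
# accreted interplanetary material over 4.5 billion years.
accreted_column = 1.0        # g per cm^2 (quoted)
time_yr = 4.5e9              # years (their assumed lunar age)

implied_flux = accreted_column / time_yr
print(f"implied average influx: {implied_flux:.1e} g cm^-2 yr^-1")   # ~2e-10

# Assumed bulk densities (illustrative only) for converting mass to thickness.
for density in (1.0, 3.0):   # g/cm^3
    thickness_cm = accreted_column / density
    print(f"density {density} g/cm^3 -> ~{thickness_cm:.2f} cm of meteoritic material")
```

This is consistent with their point that the accreted interplanetary component alone is far too small to account for the 10 cm to 1 m surface layer, most of which must therefore be pulverised lunar material.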
In another paper, Salisbury and Smalley state in their abstract:
“It is concluded that the lunar surface is covered with a layer of rubble of highly variable thickness and block size. The rubble in turn is mantled with a layer of highly porous dust which is thin over topographic highs, but thick in depressions. The dust has a complex surface and significant, but not strong, coherence.”137
In their conclusions they made a number of predictions.
“Thus, the relief of the coarse rubble layer expected in the highlands should be largely obliterated by a mantle of fine dust, no more than a few centimeters thick over near-level areas, but meters thick in steep-walled depressions. … The lunar dust layer should provide no significant difficulty for the design of vehicles and space suits. …”138
Expressing the opposing view was Hapke, who stated that
“recent analyses of the thermal component of the lunar radiation indicate that large areas of the moon may be covered to depths of many meters by a substance which is ten times less dense than rock. …Such deep layers of dust would be in accord with the suggestion of Gold.”139
He went on:
“Thus, if the radio-thermal analyses are correct, the possibility of large areas of the lunar surface being covered with thick deposits of dust must be given serious consideration.”140
However, the following year Hapke reported on research that had been sponsored by NASA, at a symposium on the nature of the lunar surface, and appeared to be more cautious on the dust question. In the proceedings he wrote:
“I believe that the optical evidence gives very strong indications that the lunar surface is covered with a layer of fine dust of unknown thickness.”141
There is no question that NASA was concerned about the presence of dust on the moon’s surface and its thickness. That is why they sponsored intensive research efforts in the 1960s on the questions of the lunar surface and the rate of meteoritic dust influx to the earth and the moon. In order to answer the latter question, NASA had begun sending up rockets and satellites to collect dust particles and to measure their flux in near-earth space. Results were reported at symposia, such as that which was held in August 1965 at Cambridge, Massachusetts, jointly sponsored by NASA and the Smithsonian Institution, the proceedings of which were published in 1967.142
A number of creationist authors have referred to this proceedings volume in support of the standard creationist argument that NASA scientists had found a lot of dust in space which confirmed the earlier suggestions of a high dust influx rate to the moon and thus a thick lunar surface layer of dust that would be a danger to any landing spacecraft. Slusher, for example, reported that he had been involved in an intensive review of NASA data on the matter and found
“that radar, rocket, and satellite data published in 1976 by NASA and the Smithsonian Institution show that a tremendous amount of cosmic dust is present in the space around the earth and moon.”143
(Note that the date of publication was incorrectly reported as 1976, when it in fact is the 1967 volume just referred to above.) Similarly, Calais references this same 1967 proceedings volume and says of it,
“NASA has published data collected by orbiting satellites which confirm a vast amount of cosmic dust reaching the vicinity of the earth-moon system.”144,145
Both these assertions, however, are far from correct, since the reports published in that proceedings volume contain results of measurements taken by detectors on board spacecraft such as Explorer XVI, Explorer XXIII, Pegasus I and Pegasus II, as well as references to the work on radio meteors by Elford and cumulative flux curves incorporating the work of people like Hawkins, Upton and Elsässer. These same satellite results and same investigators’ contributions to cumulative flux curves appear in the 1970s papers of investigators whose cumulative flux curves have been reproduced here as Figures 3, 5 and 7, all of which support the 10,000 - 20,000 tons per year and approximately 10,000 tons per year estimates for the meteoritic dust influx to the earth and moon respectively - not the “tremendous” and “vast” amounts of dust incorrectly inferred from this proceedings volume by Slusher and Calais.
The next stage in the NASA effort was to begin to directly investigate the lunar surface as a prelude to an actual manned landing. So seven Ranger spacecraft were sent up to transmit television pictures back to earth as they plummeted toward crash landings on selected flat regions near the lunar equator.146 The last three succeeded spectacularly, in 1964 and 1965, sending back thousands of detailed lunar scenes, thus increasing a thousand-fold our ability to see detail. After the first high-resolution pictures of the lunar surface were transmitted by television from the Ranger VII spacecraft in 1964, Shoemaker147 concluded that the entire lunar surface was blanketed by a layer of pulverised ejecta caused by repeated impacts and that this ejecta would range from boulder-sized rocks to finely-ground dust. After the remaining Ranger crash-landings, the Ranger investigators were agreed that a debris layer existed, although interpretations varied from virtually bare rock with only a few centimetres of debris (Kuiper, Strom and Le Poole) through to estimates of a layer from a few to tens of metres deep (Shoemaker).148 However, it cannot be implied, as some have done,149 that Shoemaker was referring to a dust layer so thick and unstable that it would swallow up a landing spacecraft. After all, the consolidation of dust and boulders sufficient to support a load has nothing to do with a layer’s thickness. In any case, Shoemaker was describing a surface layer composed of debris from meteorite impacts, the dust produced being from lunar rocks and not from falling meteoritic dust.
But still the NASA planners wanted to dispel any lingering doubts before committing astronauts to a manned spacecraft landing on the lunar surface, so the soft-landing Surveyor series of spacecraft were designed and built. However, the Russians just beat the Americans when they achieved the first lunar soft-landing with their Luna 9 spacecraft. Nevertheless, the first American Surveyor spacecraft successfully achieved a soft-landing in mid-1966 and returned over 11,000 splendid photographs, which showed the moon’s surface in much greater detail than ever before.150 Between then and January 1968 four other Surveyor spacecraft were successfully landed on the lunar surface and the pictures obtained were quite remarkable in their detail and high resolution, the last in the series (Surveyor 7) returning 21,000 photographs as well as a vast amount of scientific data. But more importantly,
“as each spindly, spraddle-legged craft dropped gingerly to the surface, its speed largely negated by retrorockets, its three footpads sank no more than an inch or two into the soft lunar soil. The bearing strength of the surface measured as much as five to ten pounds per square inch, ample for either astronaut or landing spacecraft.”151
Two of the Surveyors carried a soil mechanics surface sampler which was used to test the soil and any rock fragments within reach. All these tests and observations gave a consistent picture of the lunar soil. As Pasachoff noted:
“It was only the soft landing of the Soviet Luna and American Surveyor spacecraft on the lunar surface in 1966 and the photographs they sent back that settled the argument over the strength of the lunar surface; the Surveyor perched on the surface without sinking in more than a few centimeters.”152
Moore concurred, with the statement that
“up to 1966 the theory of deep dust-drifts was still taken seriously in the United States and there was considerable relief when the soft-landing of Luna 9 showed it to be wrong.”153
Referring to Gold’s deep-dust theory of 1955, Moore went on to say that although this theory had gained a considerable degree of respectability, with the successful soft-landing of Luna 9 in 1966 “it was finally discarded.”154 So it was in mid-1966, when Surveyor 1 landed on the moon three years before Apollo 11, that the long debate over the lunar surface dust layer was finally settled, and NASA officials then knew exactly how much dust there was on the surface and that it was capable of supporting spacecraft and men.
Since this is the case, creationists cannot say or imply, as some have,155-160 that most astronomers and scientists expected a deep dust layer. Some of course did, but it is unfair if creationists only selectively refer to those few scientists who predicted a deep dust layer and ignore the majority of scientists who on equally scientific grounds had predicted only a thin dust layer. The fact that astronomy textbooks and monographs acknowledge that there was a theory about deep dust on the moon,161,162 as they should if they intend to reflect the history of the development of thought in lunar science, cannot be used to bolster a lop-sided presentation of the debate amongst scientists at the time over the dust question, particularly as these same textbooks and monographs also indicate, as has already been quoted, that the dust question was settled by the Luna and Surveyor soft-landings in 1966. Nor should creationists refer to papers like that of Whipple,163 who wrote of a “dust cloud” around the earth, as if that were representative of the views at the time of all astronomers. Whipple’s views were easily dismissed by his colleagues because of subsequent evidence. Indeed, Whipple did not continue promoting his claim in subsequent papers, a clear indication that he had either withdrawn it or been silenced by the overwhelming response of the scientific community with evidence against it, or both.
Two further matters also need to be dealt with. First, there is the assertion that NASA built the Apollo lunar lander with large footpads because they were unsure about the dust and the safety of their spacecraft. Such a claim is inappropriate given the success of the Surveyor soft-landings, the Apollo lunar lander having footpads which were proportionally similar to the relative sizes of the respective spacecraft. After all, it stands to reason that since the design of the Surveyor spacecraft worked so well and survived landing on the lunar surface, the same basic design should be followed in the Apollo lunar lander.
As for what Armstrong and Aldrin found on the lunar surface, all are agreed that they found a thin dust layer. The transcript of Armstrong’s words as he stepped onto the moon is instructive:
“I am at the foot of the ladder. The LM [lunar module] footpads are only depressed in the surface about one or two inches, although the surface appears to be very, very fine grained, as you get close to it. It is almost like a powder. Now and then it is very fine. I am going to step off the LM now. That is one small step for man, one giant leap for mankind.”164
Moments later while taking his first steps on the lunar surface, he noted:
“The surface is fine and powdery. I can - I can pick it up loosely with my toe. It does adhere in fine layers like powdered charcoal to the sole and sides of my boots. I only go in a small fraction of an inch, maybe an eighth of an inch, but I can see the footprints of my boots and the treads in the fine sandy particles.”
And a little later, while picking up samples of rocks and fine material, he said:
“This is very interesting. It is a very soft surface, but here and there where I plug with the contingency sample collector, I run into a very hard surface, but it appears to be very cohesive material of the same sort. I will try to get a rock in here. Here’s a couple.”165
So firm was the ground, that Armstrong and Aldrin had great difficulty planting the American flag into the rocky and virtually dust-free lunar surface.
The fact that no further comments were made about the lunar dust by NASA or other scientists has been taken by some166-168 to represent some conspiracy of silence, hoping that some supposed unexplained problem will go away. There is a perfectly good reason why there was silence - three years earlier the dust issue had been settled, and Armstrong and Aldrin only confirmed what scientists already knew about the thin dust layer on the moon. So because it wasn’t a problem just before the Apollo 11 landing, there was no need for any talk about it to continue after the successful exploration of the lunar surface. Armstrong himself may have been a little concerned about the consistency and strength of the lunar surface as he was about to step onto it, as he appears to have admitted in subsequent interviews,169 but then he was the one on the spot and about to do it, so why wouldn’t he be concerned about the dust, along with lots of other related issues.
Finally, there is the testimony of Dr William Overn.170,171 Because he was working at the time for the Univac Division of Sperry Rand on the television sub-system for the Mariner IV spacecraft he sometimes had exchanges with the men at the Jet Propulsion Laboratory (JPL) who were working on the Apollo program. Evidently those he spoke to were assigned to the Ranger spacecraft missions which, as we have seen, were designed to find out what the lunar surface really was like; in other words, to investigate among other things whether there was a thin or thick dust layer on the lunar surface. In Bill’s own words:
“I simply told them that they should expect to find less than 10,000 years’ worth of dust when they got there. This was based on my creationist belief that the moon is young. The situation got so tense it was suggested I bet them a large amount of money about the dust. … However, when the Surveyor spacecraft later landed on the moon and discovered there was virtually no dust, that wasn’t good enough for these people to pay off their bet. They said the first landing might have been a fluke in a low dust area! So we waited until … astronauts actually landed on the moon. …”172
Neither the validity of this story nor Overn’s integrity is in question. However, it should be noted that the bet Overn made with the JPL scientists was entered into at a time when there was still much speculation about the lunar surface, the Ranger spacecraft just having been crash-landed on the moon and the Surveyor soft-landings yet to settle the dust issue. Furthermore, since these scientists involved with Overn were still apparently hesitant after the Surveyor missions, it suggests that they may not have been well acquainted with NASA’s other efforts, particularly via satellite measurements, to resolve the dust question, and that they were not “rubbing shoulders with” those scientists who were at the forefront of these investigations which culminated in the Surveyor soft-landings settling the speculations over the dust. Had they been more informed, they would not have entered into the wager with Overn, nor for that matter would they have seemingly felt embarrassed by the small amount of dust found by Armstrong and Aldrin, and thus conceded defeat in the wager. The fact remains that the perceived problem of what astronauts might face on the lunar surface was settled by NASA in 1966 by the Surveyor soft-landings.
The final question to be resolved is, now that we know how much meteoritic dust falls to the moon’s surface each year, then what does our current knowledge of the lunar surface layer tell us about the moon’s age? For example, what period of time is represented by the actual layer of dust found on the moon? On the one hand creationists have been using the earlier large dust influx figures to support a young age of the moon, and on the other hand evolutionists are satisfied that the small amount of dust on the moon supports their billions-of-years moon age.
To begin with, what makes up the lunar surface and how thick is it? The surface layer of pulverised material on the moon is now, after on-site investigations by the Apollo astronauts, not called moon dust, but lunar regolith, and the fine materials in it are sometimes referred to as the lunar soil. The regolith is usually several metres thick and extends as a continuous layer of debris draped over the entire lunar bedrock surface. The average thickness of the regolith on the maria is 4-5m, while the highlands regolith is about twice as thick, averaging about 10m.173 The seismic properties of the regolith appear to be uniform on the highlands and maria alike, but the seismic signals indicate that the regolith consists of discrete layers, rather than being simply “compacted dust”. The top surface is very loose due to stirring by micrometeorites, but the lower depths below about 20cm are strongly compacted, probably due to shaking during impacts.
The complex layered nature of the regolith has been studied in drill-core samples brought back by the Apollo missions. These have clearly revealed that the regolith is not a homogeneous pile of rubble. Rather, it is a layered succession of ejecta blankets.174 An apparent paradox is that the regolith is both well mixed on a small scale and also displays a layered structure. The Apollo 15 deep core tube, for example, was 2.42 metres long, but contained 42 major textural units from a few millimetres to 13cm in thickness. It has been found that there is usually no correlation between layers in adjacent core tubes, but the individual layers are well mixed. This paradox has been resolved by recognising that the regolith is continuously “gardened” by large and small meteorites and micrometeorites. Each impact inverts much of the microstratigraphy and produces layers of ejecta, some new and some remnants of older layers. The new surface layers are stirred by micrometeorites, but deeper stirring is rarer. The result is that a complex layered regolith is built up, but is in a continual state of flux, particles now at the surface potentially being buried deeply by future impacts. In this way, the regolith is turned over, like a heavily bombarded battlefield. However, it appears to only be the upper 0.5 - 1 mm of the lunar surface that is subjected to intense churning and mixing by the meteoritic influx at the present time. Nevertheless, as a whole, the regolith is a primary mixing layer of lunar materials from all points on the moon with the incoming meteoritic influx, both meteorites proper and dust.
Figure 9. Processes of erosion on the lunar surface today appear to be extremely slow compared with the processes on the earth. Bombardment by micrometeorites is believed to be the main cause. A large meteorite strikes the surface very rarely, excavating bedrock and ejecting it over thousands of square kilometres, sometimes as long rays of material radiating from the resulting crater. Much of the meteorite itself is vaporized on impact, and larger fragments of the debris produce secondary craters. Such an event at a mare site pulverizes and churns the rubble and dust that form the regolith. Accompanying base surges of hot clouds of dust, gas and shock waves might compact the dust into breccias. Cosmic rays continually bombard the surface. During the lunar day ions from the solar wind and unshielded solar radiation impinge on the surface. (Adapted from Eglinton et al.176)
So apart from the influx of the meteoritic dust, what other processes are active on the moon’s surface, particularly as there is no atmosphere or water on the moon to weather and erode rocks in the same way as they do on earth? According to Ashworth and McDonnell,
“Three major processes continuously affecting the surface of the moon are meteor impact, solar wind sputtering, and thermal erosion.”175
The relative contributions of these processes towards the erosion of the lunar surface depend upon various factors, such as the dimensions and composition of impacting bodies and the rate of meteoritic impacts and dust influx. These processes of erosion on the lunar surface are of course extremely slow compared with erosion processes on the earth. Figure 9, after Eglinton et al.,176 attempts to illustrate these lunar surface erosion processes.
Of these erosion processes the most important is obviously impact erosion. Since there is no atmosphere on the moon, the incoming meteoritic dust does not just gently drift down to the lunar surface, but instead strikes at an average velocity that has been estimated to be between 13 and 18 km/sec,177 or more recently as 20 km/sec,178 with a maximum reported velocity of 100 km/sec.179 Depending not only on the velocity but also on the mass of the impacting dust particles, more dust is produced as debris.
A number of attempts have been made to quantify the amount of dust-caused erosion of bare lunar rock on the lunar surface. Hörz et al.180 suggested a rate of 0.2-0.4 mm/10^6 yr (or 20-40 x 10^-9 cm/yr) after examination of micrometeorite craters on the surfaces of lunar rock samples brought back by the Apollo astronauts. McDonnell and Ashworth181 discussed the range of erosion rates over the range of particle diameters and the surface area exposed. They thus suggested a rate of 1-3 x 10^-7 cm/yr (or 100-300 x 10^-9 cm/yr), basing this estimate on Apollo moon rocks also, plus studies of the Surveyor 3 camera. They later revised this estimate, concluding that on the scale of tens of metres impact erosion accounts for the removal of some 10^-7 cm/yr (or 100 x 10^-9 cm/yr) of lunar material.182 However, in another paper, Gault et al.183 tabulated calculated abrasion rates for rocks exposed on the lunar surface compared with observed erosion rates as determined from solar-flare particle tracks. Discounting the early satellite data and just averaging the values calculated from the best, more recent satellite data and from lunar rocks gave an erosion rate estimate of 0.28 cm/10^6 yr (or 280 x 10^-9 cm/yr), while the average of the observed erosion rates they found from the literature was 0.03 cm/10^6 yr (or 30 x 10^-9 cm/yr). However, they naturally favoured their own “best” estimate from the satellite data of both the flux and the consequent abrasion rate, the latter being 0.1 cm/10^6 yr (or 100 x 10^-9 cm/yr), a figure identical with that of McDonnell and Ashworth. Gault et al. noted that this was higher, by a factor approaching an order of magnitude, than the “consensus” of the observed values, a discrepancy which mirrors the difference between the meteoritic dust influx estimates derived from the lunar rocks compared with the satellite data.
These estimates obviously vary from one to another, but 30-100 x 10^-9 cm/yr would seem to represent a “middle of the range” figure. However, this impact erosion rate only applies to bare, exposed rock. As McCracken and Dubin have stated, once a surface dust layer is built up initially from the dust influx and impact erosion, this initial surface dust layer would protect the underlying bedrock surface against continued erosion by dust particle bombardment.184 If continued impact erosion is going to add to the dust and rock fragments in the surface layer and regolith, then what is needed is some mechanism to continually transport dust away from the rock surfaces as it is produced, so as to keep exposing bare rock again for continued impact erosion. Without some active transporting process, exposed rock surfaces on peaks and ridges would be worn away to give a somewhat rounded moonscape (which is what the Apollo astronauts found), and the dust would thus collect in thicker accumulations at the bottoms of slopes. This is illustrated in Figure 9.
So bombardment of the lunar surface by micrometeorites is believed to be the main cause of surface erosion. At the current rate of removal, however, it would take a million years to remove an approximately 1 mm thick skin of rock from the whole lunar surface and convert it to dust. Occasionally a large meteorite strikes the surface (see Figure 9 again), excavating through the dust down into the bedrock and ejecting debris over thousands of square kilometres, sometimes as long rays of material radiating from the resulting crater. Much of the meteorite itself is vaporised on impact, and larger fragments of the debris create secondary craters. Such an event at a mare site pulverises and churns the rubble and dust that forms the regolith.
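The unit conversions used in the erosion-rate estimates above, and the figure of roughly a million years to strip a millimetre of exposed rock, can be checked with a few lines of Python (a minimal sketch; the function name is mine):

```python
# Check of the quoted impact-erosion rates and the ~1 mm per million years figure.

def per_Myr_to_1e9_cm_per_yr(rate_cm_per_Myr):
    """Convert an erosion rate in cm per 10^6 yr to units of 1e-9 cm per yr."""
    return rate_cm_per_Myr / 1e6 / 1e-9

print(per_Myr_to_1e9_cm_per_yr(0.02))   # 0.2 mm per 10^6 yr -> 20 (x 1e-9 cm/yr)
print(per_Myr_to_1e9_cm_per_yr(0.1))    # 0.1 cm per 10^6 yr -> 100 (x 1e-9 cm/yr)

# Time to erode a 1 mm (0.1 cm) skin of bare rock at 100 x 1e-9 cm/yr:
rate_cm_per_yr = 100e-9
print(round(0.1 / rate_cm_per_yr), "years")   # ~1,000,000 years
```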
The solar wind is the next major contributor to lunar surface erosion. The solar wind consists primarily of protons, electrons, and some alpha particles, that are continuously being ejected by the sun. Once again, since the moon has virtually no atmosphere or magnetic field, these particles of the solar wind strike the lunar surface unimpeded at velocities averaging 600 km/sec, knocking individual atoms from rock and dust mineral lattices. Since the major components of the solar wind are H+ (hydrogen) ions, and some He (helium) and other elements, the damage upon impact to the crystalline structure of the rock silicates creates defects and voids that accommodate the gases and other elements which are simultaneously implanted in the rock surface. But individual atoms are also knocked out of the rock surface, and this is called sputtering or sputter erosion. Since the particles in the solar wind strike the lunar surface with such high velocities,
“one can safely conclude that most of the sputtered atoms have ejection velocities higher than the escape velocity of the moon.”185
There would thus appear to be a net erosional mass loss from the moon to space via this sputter erosion.
As for the rate of this erosional loss, Wehner186 suggested a value for the sputter rate of the order of 0.4 angstrom (Å)/yr. However, with the actual measurement of the density of the solar wind particles at the surface of the moon, and with lunar rock samples available for analysis, the intensity of the solar wind used in sputter rate calculations was downgraded, and consequently the estimates of the sputter rate itself were revised downward by an order of magnitude. McDonnell and Ashworth187 estimated an average sputter rate of lunar rocks of about 0.02 Å/yr, which they later revised to 0.02-0.04 Å/yr.188 Further experimental work refined their estimate to 0.043 Å/yr,189 which was reported in Nature by Hughes.190 This figure of 0.043 Å/yr continued to be used and confirmed in subsequent experimental work,191 although Zook192 suggested that the rate may be higher, even as high as 0.08 Å/yr.193 Even so, if this sputter erosion rate continued at this pace in the past then it equates to less than one centimetre of lunar surface lowering in one billion years. This not only applies to solid rock, but to the dust layer itself, which would in fact decrease in thickness in that time, in opposition to the increase in thickness caused by the meteoritic dust influx. Thus sputter erosion doesn’t help by adding dust to the lunar surface, and in any case it is such a slow process that the overall effect is minimal.

Yet another potential form of erosion process on the lunar surface is thermal erosion, that is, the breakdown of the lunar surface around impact/crater areas due to the marked temperature changes that result from the lunar diurnal cycle. Ashworth and McDonnell194 carried out tests on lunar rocks, submitting them to cycles of changing temperature, but found it “impossible to detect any surface changes”. They therefore suggested that thermal erosion is probably “not a major force.” Similarly, McDonnell and Flavill195 conducted further experiments and found that their samples showed no sign of “degradation or enhancement” due to the temperature cycle that they had been subjected to. They reported that
“the conditions were thermally equivalent to the lunar day-night cycle and we must conclude that on this scale thermal cycling is a very weak erosion mechanism.”
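As a rough numerical check on the sputter figures quoted above, a minimal sketch in Python (using only the 0.043 Å/yr rate given in the text) confirms the "less than one centimetre per billion years" statement:

```python
# Convert the quoted sputter rate (0.043 angstrom/yr) into surface lowering per billion years.
ANGSTROM_IN_CM = 1e-8                        # 1 angstrom = 1e-8 cm
sputter_rate_cm_per_yr = 0.043 * ANGSTROM_IN_CM
lowering_cm = sputter_rate_cm_per_yr * 1e9   # over one billion years
print(f"Lowering in one billion years: {lowering_cm:.2f} cm")  # ~0.43 cm, i.e. less than 1 cm
```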
The only other possible erosion process that has ever been mentioned in the literature was that proposed by Lyttleton196 and Gold.197 They suggested that high-energy ultraviolet and x-rays from the sun would slowly pulverize lunar rock to dust, and that over millions of years this would create an enormous thickness of dust on the lunar surface. This was proposed in the 1950s and debated at the time, but since the direct investigations of the moon from the mid-1960s onwards, no further mention of this potential process has appeared in the technical literature, either for the idea or against it. One can only assume either that the idea has been ignored or forgotten, or that it is simply ineffective in producing any significant erosion, contrary to the suggestions of the original proposers. The latter is probably true, since, just as with impact erosion, the effectiveness of this radiation erosion depends critically on some mechanism to clean rock surfaces of the dust it produces. In any case, even a thin dust layer will more than likely simply absorb the incoming rays, while the fact that there are still exposed rock surfaces on the moon clearly suggests that Lyttleton and Gold's radiation erosion process has not been effective over the presumed millions of years, else all rock surfaces should long since have been pulverized to dust. Alternatively, of course, the fact that there are still exposed rock surfaces on the moon could instead mean that if this radiation erosion process does occur then the moon is quite young.
So how much dust is there on the lunar surface? Because of their apparently negligible or non-existent contributions, it may be safe to ignore thermal, sputter and radiation erosion. This leaves the meteoritic dust influx itself and the dust it generates when it hits bare rock on the lunar surface (impact erosion). However, our primary objective is to determine whether the amount of meteoritic dust in the lunar regolith and surface dust layer, when compared to the current meteoritic dust influx rate, is an accurate indication of the age of the moon itself, and by implication of the earth and the solar system also.
Now we concluded earlier that the consensus from all the available evidence, and from the estimation techniques employed by different scientists, is that the meteoritic dust influx to the lunar surface is about 10,000 tons per year, or 2x10-9 g cm-2 yr-1. Estimates of the density of micrometeorites vary widely, but an average value of 1 g/cm3 is commonly used. Thus at this apparent rate of dust influx it would take about a billion years for a dust layer a mere 2 cm thick to accumulate over the lunar surface. Now the Apollo astronauts apparently reported a surface dust layer of between less than 1/8 inch (3 mm) and 3 inches (7.6 cm). Thus, if this surface dust layer were composed only of meteoritic dust, then at the current rate of dust influx it would have accumulated over a period of between 150 million years (3 mm) and 3.8 billion years (7.6 cm). Obviously, this line of reasoning cannot be used as an argument for a young age for the moon and therefore the solar system.
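The figures in the preceding paragraph can be verified with a few lines of arithmetic (a minimal sketch in Python; the influx rate, dust density and layer thicknesses are simply the values quoted above):

```python
# Accumulation of a purely meteoritic dust layer at the quoted influx rate.
influx_g_per_cm2_yr = 2e-9          # ~10,000 tons/yr spread over the lunar surface
dust_density_g_per_cm3 = 1.0        # average micrometeorite dust density assumed above
growth_cm_per_yr = influx_g_per_cm2_yr / dust_density_g_per_cm3   # ~2e-9 cm/yr

print(f"Thickness after one billion years: {growth_cm_per_yr * 1e9:.1f} cm")   # ~2 cm
for thickness_cm, label in [(0.3, "1/8 inch (3 mm)"), (7.6, "3 inches (7.6 cm)")]:
    print(f"Years to accumulate {label}: {thickness_cm / growth_cm_per_yr:.2g}")
    # ~1.5e8 yr (150 million) and ~3.8e9 yr (3.8 billion) respectively
```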
However, as we have already seen, below the thin surface dust layer is the lunar regolith, which is up to 5 metres thick across the lunar maria and averages 10 metres thick in the lunar highlands. Evidently, the thin surface dust layer is very loose due to stirring by impacting meteoritic dust (micrometeorites), but the regolith beneath, which consists of rock rubble of all sizes down to fines (referred to as lunar soil), is strongly compacted. Nevertheless, the regolith appears to be continuously "gardened" by large and small meteorites and micrometeorites, particles now at the surface potentially being buried deeply by future impacts. This means that as the regolith is turned over, meteoritic dust particles in the thin surface layer will after some time end up being mixed into the lunar soil in the regolith below. It also cannot be assumed, therefore, that the thin loose surface layer is entirely composed of meteoritic dust, since lunar soil is also brought up into this loose surface layer by impacts.
However, attempts have been made to estimate the proportion of meteoritic material mixed into the regolith. Taylor198 reported that the meteoritic compositions recognised in the maria soils turn out to be surprisingly uniform at about 1.5%, and that the abundance patterns are close to those for primitive unfractionated Type I carbonaceous chondrites. As described earlier, this meteoritic component was identified by analysing for trace elements in the broken-down rocks and soils in the regolith and then assuming that any trace element differences represented the meteoritic material added to the soils. Taylor also adds that the compositions of other meteorites, the ordinary chondrites, the iron meteorites and the stony-irons, do not appear to be present in the lunar regolith, which may have some significance as to the origin of this meteoritic material, most of which is attributed to the influx of micrometeorites. It is unknown what the large crater-forming meteorites contribute to the regolith, but Taylor suggests possibly as much as 10% of the total regolith. Additionally, a further source of exotic elements is the solar wind, which is estimated to contribute between 3% and 4% to the soil. This means that the total contribution to the regolith from extra-lunar sources is around 15%, of which the meteoritic component (that is, excluding the solar wind contribution) amounts to about 12%. Thus in a five metre thick regolith over the maria, the thickness of the meteoritic component would be close to 60 cm, which at the current estimated meteoritic influx rate would have taken almost 30 billion years to accumulate, a timespan six times the claimed evolutionary age of the moon.
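The same accumulation rate gives the 30-billion-year figure just mentioned (again a sketch only; the 12% and 5 m values are those quoted in the paragraph above):

```python
# Time to accumulate the maria regolith's meteoritic component at the present influx rate.
regolith_thickness_cm = 500                    # 5 m of maria regolith
meteoritic_fraction = 0.12                     # ~12% meteoritic component
meteoritic_thickness_cm = regolith_thickness_cm * meteoritic_fraction   # ~60 cm
growth_cm_per_yr = 2e-9                        # from the influx rate discussed earlier
print(f"{meteoritic_thickness_cm:.0f} cm would take "
      f"{meteoritic_thickness_cm / growth_cm_per_yr:.1e} yr")  # ~3.0e10 yr, i.e. ~30 billion years
```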
The lunar surface is heavily cratered, the largest crater having a diameter of 295 km. The highland areas are much more heavily cratered than the maria, which suggested to early investigators that the lunar highland areas might represent the oldest exposed rocks on the lunar surface. This has been confirmed by radiometric dating of rock samples brought back by the Apollo astronauts, so that a detailed lunar stratigraphy and evolutionary geochronological framework has been constructed. This has led to the conclusion that early in its history the moon suffered intense bombardment from scores of meteorites, so that all highland areas presumed to be older than 3.9 billion years have been found to be saturated with craters 50-100 km in diameter, and beneath the 10 metre-thick regolith is a zone of breccia and fractured bedrock estimated in places to be more than 1 km thick.199
Figure 10. Cratering history of the moon (adapted from Taylor200). An aeon represents a billion years on the evolutionists’ time scale, while the vertical bar represents the error margin in the estimation of the cratering rate at each data point on the curve.
Following suitable calibration, a relative crater chronology has been established, which then allows the cratering rate through lunar history to be estimated and plotted, as in Figure 10.200 There thus appears to be a general correlation between crater densities across the lunar surface and radiometric "age" dates. However, the crater densities at the various sites cannot be fitted to a straightforward exponential decay curve of meteorite or asteroid populations.201 Instead, at least two separate groups of objects seem to be required. The first is believed to be approximated by the present-day meteoritic flux, while the second is believed to be that responsible for the intense early bombardment claimed to have occurred about four billion years ago. This intense early bombardment recorded by the crater-saturated surface of the lunar highland areas could thus explain the presence of the thicker regolith (up to 10 metres) in those areas.
It follows that this period of intense early bombardment resulted from a very high influx of meteorites and thus meteoritic dust, which should now be recognisable in the regolith. Indeed, Taylor202 lists three types of meteoritic debris in the highlands regolith: the micrometeoritic component, the debris from the large-crater-producing bodies, and the material added during the intense early bombardment. However, the latter has proven difficult to quantify. Again, the use of trace element ratios has enabled six classes of ancient meteoritic components to be identified, but these do not correspond to any of the currently known meteorite classes, whether iron or chondritic. It would appear that this material represents the debris from the large projectiles responsible for the saturation cratering in the lunar highlands during the intense bombardment early in the moon's history. It is this early intense bombardment, with its associated higher influx rate of meteoritic material, that would account not only for the thicker regolith in the lunar highlands, but also for the 12% meteoritic component in the thinner regolith of the maria which we calculated (above) would take up to 30 billion years to accumulate at the current meteoritic influx rate. Even though the maria are believed to be younger than the lunar highlands and haven't suffered the same saturation cratering, the cratering rate curve of Figure 10 suggests that the meteoritic influx rate soon after formation of the maria was still almost 10 times the current influx rate, so that much of the meteoritic component in the regolith could have accumulated much more rapidly in the early years after the maria's formation. This then removes the apparent accumulation timespan anomaly for the evolutionists' timescale, and suggests that the meteoritic component in the maria regolith is still consistent with its presumed 3 billion year age if uniformitarian assumptions are used. This of course is still far from satisfactory for those young earth creationists who believed that uniformitarian assumptions applied to moon dust could be used to deny the evolutionists' vast age for the moon.
Given that as much as 10% of the maria regolith may have been contributed by the large crater-forming meteorites,203 impact erosion by these large crater-producing meteorites may well have had a significant part in the development of the regolith, including the generation of dust, particularly if the meteorites strike bare lunar rock. Furthermore, any incoming meteorite, or micrometeorite for that matter, creates a crater much bigger than itself,204 and since most impacts are at an oblique angle the resulting secondary cratering may in fact be more important205 in generating even more dust. However, to do so the impacting meteorite or micrometeorite must strike bare exposed rock on the lunar surface. Therefore, if bare rock is to continue to be available at the lunar surface, then there must be some mechanism to move the dust off the rock as quickly as it is generated, coupled with some transport mechanism to carry it and accumulate it in lower areas, such as the maria.
Various suggestions have been made apart from the obvious effect of steep gradients, which in any case would only produce local accumulation. Gold, for example, listed five possibilities,206 but all were highly speculative and remain unverified. More recently, McDonnell207 has proposed that electrostatic charging on dust particle surfaces may cause those particles to levitate across the lunar surface up to 10 or more metres. As they lose their charge they float back to the surface, where they are more likely to settle in a lower area. McDonnell gives no estimate as to how much dust might be moved by this process, and it remains somewhat tentative. In any case, if such transport mechanisms were in operation on the lunar surface, then we would expect the regolith to be thicker over the maria because of their lower elevation. However, the fact is that the regolith is thicker in the highland areas where the presumed early intense bombardment occurred, the impact-generated dust just accumulating locally and not being transported any significant distance.
Having considered the available data, it is inescapably clear that the amount of meteoritic dust on the lunar surface and in the regolith is not at all inconsistent with the present meteoritic dust influx rate to the lunar surface operating over the multi-billion year time framework proposed by evolutionists, combined with a higher influx rate in the early history of the moon when intense bombardment occurred, producing many of the craters on the lunar surface. Thus, for the purpose of "proving" a young moon, the meteoritic dust influx as it appears to be currently known is at least two orders of magnitude too low. On the other hand, the dust influx rate has, appropriately enough, not been used by evolutionists to somehow "prove" their multi-billion year timespan for lunar history. (They have recognised some of the problems and uncertainties and so have relied more on their radiometric dating of lunar rocks, coupled with wide-ranging geochemical analyses of rock and soil samples, all within the broad picture of the lunar stratigraphic succession.) The present rate of dust influx does not, of course, disprove a young moon.
Some creationists have tentatively recognised that the moon dust argument has lost its original apparent force. For example, Taylor (Paul)208 follows the usual line of argument employed by other creationists, stating that based on published estimates of the dust influx rate and the evolutionary timescale, many evolutionists expected the astronauts to find a very thick layer of loose dust on the moon, so when they only found a thin layer this implied a young moon. However, Taylor then admits that the case appears not to be as clear cut as some originally thought, particularly because evolutionists can now point to what appear to be more accurate measurements of a smaller dust influx rate compatible with their timescale. Indeed, he says that the evidence for disproving an old age using this particular process is weakened, and that, furthermore, the case has been blunted by the discovery of what is said to be meteoritic dust within the regolith. However, like Calais,209,210 Taylor points to the NASA report211 that supposedly indicated a very large amount of cosmic dust in the vicinity of the earth and moon (a claim which cannot be substantiated by a careful reading of the papers published in that report, as we have already seen). He also takes up DeYoung's comment212 that because all evolutionary theories about the origin of the moon and the solar system predict a much larger amount of incoming dust in the moon's early years, a very thick layer of dust would be expected, and so is still missing. Such an argument cannot be sustained by creationists because, as we have seen above, the amount of meteoritic dust that appears to be in the regolith seems to be compatible with the evolutionists' view that there was a much higher influx rate of meteoritic dust early in the moon's history, at the time of the so-called "early intense bombardment".
Indeed, from Figure 10 it could be argued that since the cratering rate very early in the moon's history was more than 300 times today's cratering rate, the meteoritic dust influx early in the moon's history was likewise more than 300 times today's influx rate. That would amount to more than 3 million tons of dust per year, yet even at that rate only a little over six metres thickness of meteoritic dust would accumulate across the lunar surface in a billion years, no doubt mixed in with a lesser amount of dust and rock debris generated by the large-crater-producing meteorite impacts. However, in that one billion years, Figure 10 shows that the rate of meteoritic dust influx is postulated to have rapidly declined, so that in fact a considerably lesser amount of meteoritic dust and impact debris would have accumulated in that supposed billion years. In other words, the dust in the regolith and the surface layer is still compatible with the evolutionists' view that there was a higher influx rate early in the moon's history, so creationists cannot use that to shore up this considerably blunted argument.
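The early-bombardment figures in the paragraph above follow from the same simple scaling (a sketch in Python; the factor of 300 is the one read off Figure 10 in the text, and the present-day figures are those used earlier):

```python
# Scale the present-day influx figures by the ~300x early cratering rate.
present_influx_tons_per_yr = 10_000
present_growth_cm_per_yr = 2e-9
scale = 300

early_influx_tons_per_yr = scale * present_influx_tons_per_yr      # ~3 million tons/yr
early_thickness_m = scale * present_growth_cm_per_yr * 1e9 / 100   # over one billion years
print(f"Early influx: ~{early_influx_tons_per_yr:,} tons/yr")
print(f"Dust accumulated in one billion years at that rate: ~{early_thickness_m:.0f} m")  # ~6 m
```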
Coupled with this, it is irrelevant for both Taylor and DeYoung to imply that, because evolutionists say the sun and the planets were formed from an immense cloud of dust which was obviously much thicker in the past, their theory would thus predict a very thick layer of dust. On the contrary, all that is relevant is the postulated dust influx after the moon's formation, since it is only then that there is a lunar surface available to collect the dust, which we can now investigate along with that lunar surface. So unless there was a substantially greater dust influx after the moon formed than that postulated by the evolutionists (see Figure 10 and our calculations above), this objection also cannot be used by creationists.
DeYoung also adds a second objection in order to counter the evolutionists' case. He maintains that the revised value of a much smaller dust accumulation from space is open to question, and that scientists continue to make major adjustments in estimates of the meteors and space dust that fall upon the earth and moon.213 If this is meant to imply that the current dust influx estimate is open to question amongst evolutionists, then it is simply not the case, because there is general agreement that the earlier estimates were gross overestimates. As we have seen, there is much support for the current figure, which is two orders of magnitude lower than many of the earlier estimates. There may be minor adjustments to the current estimate, but certainly not anything major.
While DeYoung hints at it, Taylor (Ian)214 is quite open in suggesting that a drastic revision of the estimated meteoritic dust influx rate to the moon occurred straight after the Apollo moon landings, when the astronauts' observations supposedly debunked the earlier gross over-estimates, and that this was done quietly but methodically in some sort of deliberate way. This is simply not so. Taylor insinuates that the Committee on Space Research (COSPAR) was formed to work on drastically downgrading the meteoritic dust influx estimate, and that it did this based only on measurements from indirect techniques such as satellite-borne detectors, visual meteor counts and observations of zodiacal light, rather than dealing directly with the dust itself. That claim does not take into account that these different measurement techniques are all necessary to cover the full range of particle sizes involved, and that much of the data employed in this work was collected in the 1960s before the Apollo moon landings. Furthermore, that same data had been used in the 1960s to produce dust influx estimates, which were then found to be in agreement with the minor dust layer found by the astronauts subsequently. In other words, the data had already convinced most scientists before the Apollo moon landings that very little dust would be found on the moon, so there is nothing "fishy" about COSPAR's dust influx estimates just happening to yield the exact amount of dust actually found on the moon's surface. Furthermore, the COSPAR scientists did not ignore the dust on the moon's surface, but used lunar rock and soil samples in their work, for example in the study of lunar microcraters, which they regarded as representing a record of the historic meteoritic dust influx. Attempts were also made using trace element geochemistry to identify the quantity of meteoritic dust in the lunar surface layer and the regolith below.
A final suggestion from DeYoung is that perhaps there actually is a thick lunar dust layer present, but it has been welded into rock by meteorite impacts.215 This is similar and related to an earlier comment about efforts being made to re-evaluate dust accumulation rates and to find a mechanism for lunar dust compaction in order to explain the supposed absence of dust on the lunar surface that would be needed by the evolutionists' timescale.216 For support, Mutch217 is referred to, but in the cited pages Mutch only talks about the thickness of the regolith and the debris from cratering, the details of which are similar to what has previously been discussed here. As for the view that the thick lunar dust is actually present but has been welded into rock by meteorite impacts, no reference is cited, nor can one be found. Taylor describes a "mega-regolith" in the highland areas218 which is a zone of brecciation, fracturing and rubble more than a kilometre thick that is presumed to have resulted from the intense early bombardment, quite the opposite of the suggestion of meteorite impacts welding dust into rock. Indeed, Mutch,219 Ashworth and McDonnell220 and Taylor221 all refer to turning over of the soil and rubble in the lunar regolith by meteorite and micrometeorite impacts, making the regolith a primary mixing layer of lunar materials that have not been welded into rock. Strong compaction has occurred in the regolith, but this is virtually irrelevant to the issue of the quantity of meteoritic dust on the lunar surface, since that has been estimated using trace element analyses.
Parks222 has likewise argued that the disintegration of meteorites impacting the lunar surface over the evolutionists' timescale should have produced copious amounts of dust as they fragmented, which should, when added to calculations of the meteoritic dust influx over time, account for the dust in the regolith in only a short period of time. However, it has already been pointed out that this debris component in the maria regolith only amounts to 10%, a quantity which is also consistent with the evolutionists' postulated cratering rate over their timescale. He then repeats the argument that there should have been a greater rate of dust influx in the past, given the evolutionary theories for the formation of the bodies in the solar system from dust accretion, but that argument is likewise negated by the evolutionists having postulated an intense early bombardment of the lunar surface with a cratering rate, and thus a dust influx rate, over two orders of magnitude higher than the present (as already discussed above). Finally, he infers that even if the dust influx rate is far less than investigators had originally supposed, it should have contributed much more than 1.5% of the 1-2 inch thick layer of loose dust on the lunar surface. The reference cited for this percentage of meteoritic dust in the thin loose dust layer on the lunar surface is Ganapathy et al.223 However, when that paper is checked carefully to see where the samples for the analytical work were obtained, we find that the four soil samples that were enriched in a number of trace elements of meteoritic origin came from depths of 13-38 cm below the surface, from where they were extracted by a core tube. In other words, they came from the regolith below the 1-2 inch thick layer of loose dust on the surface, and so Parks' application of this analytical work is not even relevant to his claim. In any case, if one uses the current estimated meteoritic dust influx rate to calculate how much meteoritic dust should be within the lunar surface over the evolutionists' timescale, one finds the results to be consistent, as has already been shown above.
Parks may have been influenced by Brown, whose personal correspondence he cites. Brown, in his own publication,224 has stated that
“if the influx of meteoritic dust on the moon has been at just its present rate for the last 4.6 billion years, then the layer of dust should be over 2,000 feet thick.”
Furthermore, he indicates that he made these computations based on the data contained in Hughes225 and Taylor.226 This is rather baffling, since Taylor does not commit himself to a meteoritic dust influx rate, but merely refers to the work of others, while Hughes concentrates on lunar microcraters and only indirectly refers to the meteoritic dust influx rate. In any case, as we have already seen, at the currently estimated influx rate of approximately 10,000 tons per year a mere 2 cm thickness of meteoritic dust would accumulate on the lunar surface every billion years, so that in 4.6 billion years there would be a grand total of 9.2 cm thickness. One is left wondering where Brown's figure of 2,000 feet (approximately 610 metres) actually came from. If he is taking into account Taylor's reference to the intense early bombardment, then we have already seen that, even with a meteoritic dust influx rate of 300 times the present figure, we can still comfortably account for the quantity of meteoritic dust found in the lunar regolith and the loose surface layer over the evolutionists' timescale. While defence of the creationist position is totally in order, baffling calculations are not. Creation science should always be good science; it is better served by thorough use of the technical literature and by facing up to the real data with sincerity, as our detractors have often been quick to point out.
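For comparison, the 9.2 cm figure and the size of the discrepancy with Brown's 2,000 feet can be laid out explicitly (a sketch only; it uses just the numbers already quoted in the text):

```python
# Expected meteoritic dust thickness over 4.6 billion years at the current influx rate,
# compared with Brown's claimed 2,000 feet.
growth_cm_per_yr = 2e-9
expected_cm = growth_cm_per_yr * 4.6e9        # ~9.2 cm
brown_cm = 2_000 * 30.48                      # 2,000 feet in cm (~610 m)
print(f"Expected: {expected_cm:.1f} cm")
print(f"Brown's figure: {brown_cm:,.0f} cm, roughly {brown_cm / expected_cm:,.0f} times larger")
```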
So are there any loopholes in the evolutionists' case that the current apparent meteoritic dust influx to the lunar surface, and the quantity of dust found in the thin lunar surface dust layer and the regolith below, do not contradict their multi-billion year timescale for the moon's history? Based on the evidence we currently have, the answer has to be that it doesn't look like it. The uncertainties involved in the possible erosion process postulated by Lyttleton and Gold (that is, radiation erosion) still potentially leave that process as one possible explanation for the amount of dust in a young moon model, but the dust should no longer be used as if it were a major problem for evolutionists. Both the lunar surface and the lunar meteoritic influx rate seem to be fairly well characterised, even though it could be argued that direct geological investigations of the lunar surface have only been undertaken briefly at 13 sites (six by astronauts and seven by unmanned spacecraft) scattered across a portion of only one side of the moon.
Furthermore, there are some unresolved questions regarding the techniques and measurements of the meteoritic dust influx rate. For example, the surface exposure times for the rocks on whose surfaces microcraters were measured and counted are dependent on uniformitarian age assumptions. If the exposure times were in fact much shorter, then the dust influx estimates based on the lunar microcraters would need to be drastically revised, perhaps upwards by several orders of magnitude. As it is, we have seen that there is a recognised discrepancy between the lunar microcrater data and the satellite-borne detector data, the former being an order of magnitude lower than the latter. Hughes227 explains this in terms of the meteoritic dust influx having supposedly increased by a factor of four in the last 100,000 years, whereas Gault et al.228 admit that if the ages are accepted at face value then there had to be an increase in the meteoritic dust influx rate by a factor of 10 in the past few tens of years! How this could happen we are not told, yet according to estimates of the past cratering rate there was in fact a higher influx of meteorites, and by inference meteoritic dust, in the past. This is of course contradictory to the claims based on lunar microcrater data. This seems to leave the satellite-borne detector measurements as apparently the more reliable set of data, but it could still be argued that the dust collection areas on the satellites are tiny, and the dust collection timespans far too short, to be representative of the quantity of dust in the space around the earth-moon system.
Should creationists then continue to use the moon dust as apparent evidence for a young moon, earth and solar system? Clearly, the answer is no. The weight of the evidence as it currently exists shows no inconsistency within the evolutionists' case, so the burden of proof is squarely on creationists if they want to argue that, based on the meteoritic dust, the moon is young. Thus it is inexcusable for one creationist writer to have recently repeated verbatim an article of his published five years earlier,229,230 maintaining that the meteoritic dust is proof that the moon is young in the face of the overwhelming evidence against his arguments. Perhaps any hope of resolving this issue in the creationists' favour may have to wait for further direct geological investigations and direct measurements to be made by those manning a future lunar surface laboratory, from where scientists could actually collect and measure the dust influx, and investigate the characteristics of the dust in place and its interaction with the regolith and any lunar surface processes.
Over the last three decades numerous attempts have been made using a variety of methods to estimate the meteoritic dust influx to both the earth and the moon. On the earth, chemical methods give results in the range of 100,000-400,000 tons per year, whereas cumulative flux calculations based on satellite and radar data give results in the range 10,000-20,000 tons per year. Most authorities on the subject now favour the satellite data, although there is an outside possibility that the influx rate may reach 100,000 tons per year. On the moon, after assessment of the various techniques employed, on balance the evidence points to a meteoritic dust influx figure of around 10,000 tons per year.
Although some scientists had speculated prior to spacecraft landing on the moon that there would be a thick dust layer there, there were many scientists who disagreed and who predicted that the dust would be thin and firm enough for a manned landing. Then in 1966 the Russians with their Luna 9 spacecraft and the Americans with their five successful Surveyor spacecraft accomplished soft-landings on the lunar surface, the footpads of the latter sinking no more than an inch or two into the soft lunar soil and the photographs sent back settling the argument over the thickness of the dust and its strength. Consequently, before the Apollo astronauts landed on the moon in 1969 the moon dust issue had been settled, and their lunar exploration only confirmed the prediction of the majority, plus the meteoritic dust influx measurements that had been made by satellite-borne detector systems which had indicated only a minor amount.
Calculations show that the amount of meteoritic dust in the surface dust layer, and that which trace element analyses have shown to be in the regolith, is consistent with the current meteoritic dust influx rate operating over the evolutionists’ timescale. While there are some unresolved problems with the evolutionists’ case, the moon dust argument, using uniformitarian assumptions to argue against an old age for the moon and the solar system, should for the present not be used by creationists.
Research on this topic was undertaken spasmodically over a period of more than seven years by Dr Andrew Snelling. A number of people helped with the literature search and with obtaining copies of papers, in particular Tony Purcell and Paul Nethercott. Their help is acknowledged. Dave Rush undertook research independently on this topic while studying and working at the Institute for Creation Research, before we met and combined our efforts. We, of course, take responsibility for the conclusions, which unfortunately are not as encouraging or complimentary for us young earth creationists as we would have liked.
| http://www.answersingenesis.org/articles/tj/v7/n1/moondust | 13
54 | Douglas H. Clements
Close your eyes and picture students doing mathematics. Like those of many educators, your mental pictures may include manipulative objects such as cubes, geoboards, or colored rods. Does the use of such concrete objects really help students learn mathematics? What is meant by "concrete"? Are computer displays concrete, and can they play an important role in learning? By addressing these questions, the authors hope to change the mental picture of what manipulatives are and how they might be used effectively.
Are Manipulatives Helpful?
Helpful, yes... Students who use manipulatives in their mathematics classes usually outperform those who do not (Driscoll, 1983; Sowell, 1989; Suydam, 1986). This benefit holds across grade level, ability level, and topic, given that using a manipulative makes sense for the topic. Manipulative use also increases scores on retention and problem-solving tests. Finally, attitudes toward mathematics are improved when students are instructed with concrete materials by teachers knowledgeable about their use (Sowell, 1989).
...But no guarantee. Manipulatives, however, do not guarantee success (Baroody, 1989). One study showed that classes not using manipulatives outperformed classes using manipulatives on a test of transfer (Fennema, 1972). In this study, all teachers emphasized learning with understanding.
In contrast, students sometimes learn to use manipulatives only in a rote manner. They perform the correct steps, but have learned little more. For example, a student working on place value with beans and bean sticks used the bean as 10 and the bean stick as 1 (Hiebert and Wearne, 1992).
Similarly, students often fail to link their actions on base-ten blocks with the notation system used to describe the actions (Thompson and Thompson, 1990). For example, when asked to select a block to stand for 1 then put blocks out to represent 3.41, one fourth grader put out three flats, four longs, and one single after reading the decimal as "three hundred forty-one."
Although research suggests that instruction begin concretely, it also warns that concrete manipulatives are not sufficient to guarantee meaningful learning. This conclusion leads to the next question.
What Is Concrete?
Manipulatives are supposed to be good for students because they are concrete. The first question to consider might be, What does concrete mean? Does it mean something that students can grasp with their hands? Does this sensory character itself make manipulatives helpful? This view presents several problems.
First, it cannot be assumed that when children mentally close their eyes and picture manipulative-based concepts, they "see" the same picture that the teacher sees. Holt (1982, 138-39) said that he and his fellow teacher "were excited about the rods because we could see strong connections between the world of rods and the world of numbers. We therefore assumed that children, looking at the rods and doing things with them, could see how the world of numbers and numerical operations worked. The trouble with this theory is that [my colleague] and I already knew how the numbers worked. We could say, 'Oh, the rods behaved just the way numbers do.' But if we hadn't known how numbers behaved, would looking at the rods enable us to find out? Maybe so, maybe not."
Second, physical actions with certain manipulatives may suggest mental actions different from those that teachers wish students to learn. For example, researchers found a mismatch when students used the number line to perform addition. When adding 5 and 4, the students located 5, counted "one, two, three, four," and read the answer. This procedure did not help them solve the problem mentally, for to do so they must count "six, seven, eight, nine" and at the same time count the counts -- 6 is 1, 7 is 2, and so on. These actions are quite different (Gravemeijer, 1991, 59). These researchers also found that students' external actions on an abacus did not always match the mental activity intended by the teacher.
Although manipulatives have an important place in learning, they do not carry the meaning of the mathematical idea. They can even be used in a rote manner. Students may need concrete materials to build meaning initially, but they must reflect on their actions with manipulatives to do so. Later, they are expected to have a "concrete" understanding that goes beyond these physical manipulatives. For example, teachers like to see that numbers as mental objects ("I can think of 13 + 10 in my head") are "concrete" for sixth graders. It appears that "concrete" can be defined in different ways.
Types Of Concrete Knowledge
Students demonstrate sensory-concrete knowledge when they use sensory material to make sense of an idea. For example, at early stages, children cannot count, add, or subtract meaningfully unless they have actual objects to touch.
Integrated-concrete knowledge is built through learning. It is knowledge that is connected in special ways. This concept is the root of the word concrete -- "to grow together." Sidewalk concrete derives its strength from the combination of separate particles in an interconnected mass. Integrated-concrete thinking derives its strength from the combination of many separate ideas in an interconnected structure of knowledge. While still in primary school, Jacob read a problem on a restaurant place mat asking for the answer to 3/4 + 3/4. He solved the problem by thinking about the fractions in terms of money: 75¢ plus 75¢ is $1.50, so 3/4 + 3/4 is 1 1/2. When children have this type of interconnected knowledge, the physical objects, the actions they perform on the objects, and the abstractions they make all are interrelated in a strong mental structure. Ideas such as "four," "3/4," and "rectangle" become as real and tangible as a concrete sidewalk. Each idea is as concrete to a student as a ratchet wrench is to a plumber -- an accessible and useful tool. Jacob's knowledge of money was such a tool.
An idea, therefore, is not simply concrete or not concrete. Depending on what kind of relationship the student has with it (Wilensky 1991), an idea might be sensory-concrete, abstract, or integrated-concrete. The catch, however, is that mathematics cannot be packaged into sensory-concrete materials, no matter how clever our attempts are, because ideas such as number are not "out there." As Piaget has shown, they are constructions -- reinventions -- of each human mind. "Fourness" is no more "in" four blocks than it is "in" a picture of four blocks. The child creates "four" by building a representation of number and connecting it with either real or pictured blocks (Clements 1989; Clements and Battista 1990; Kamii 1973, 1985, 1986).
Mathematical ideas are ultimately made integrated-concrete not by their physical or real-world characteristics but rather by how "meaningful" -- connected to other ideas and situations -- they are. Holt (1982, 219) found that children who already understood numbers could perform the tasks with or without the blocks. "But children who could not do these problems without the blocks didn't have a clue about how to do them with the blocks. ... They found the blocks ... as abstract, as disconnected from reality, mysterious, arbitrary, and capricious as the numbers that these blocks were supposed to bring to life."
Are Computer Manipulatives Concrete?
The reader's earlier mental picture of students using manipulatives probably did not feature computer technology. But, as has been shown, "concrete" cannot be equated simply with physical manipulatives. Computers might supply representations that are just as personally meaningful to students as real objects; that is, they might help develop integrated-concrete knowledge. These representations may also be more manageable, "clean," flexible, and extensible. For example, one group of young students learned number concepts with a computer-felt-board environment. They constructed "bean stick pictures" by selecting and arranging beans, sticks, and number symbols. Compared with a real bean-stick environment, this computer environment offered equal, and sometimes greater, control and flexibility to students (Char 1989). The computer manipulatives were just as meaningful and were easier to use for learning. Both computer and physical bean sticks were worthwhile, but work with one did not need to precede work with the other.
The important point is that "concrete" is, quite literally, in the mind of the beholder. Ironically, Piaget's period of concrete operations is often used, incorrectly, as a rationalization for objects-for-objects' sake in elementary school. Good concrete activity is good mental activity (Clements 1989; Kamii 1989).
This idea can be made more concrete. Several computer programs allow children to manipulate on-screen base-ten blocks. These blocks are not physically concrete. However, no base-ten blocks "contain" place-value ideas (Kamii 1986). Students must build these ideas from working with the blocks and thinking about their actions.
Actual base-ten blocks can be so clumsy and the manipulations so disconnected one from the other that students see only the trees -- manipulations of many pieces -- and miss the forest -- place-value ideas. The computer blocks can be more manageable and "clean."
In addition, students can break computer base-ten blocks into ones or glue ones together to form tens. Such actions are more in line with the mental actions that students are expected to learn. The computer also links the blocks to the symbols. For example, the number represented by the base-ten blocks is usually dynamically linked to the students' actions on the blocks; when the student changes the blocks, the number displayed is automatically changed as well. This process can help students make sense of their activity and the numbers. Computers encourage students to make their knowledge explicit, which helps them build integrated-concrete knowledge. A summary of specific advantages follows.
Computers offer a manageable, clean manipulative. They avoid distractions often present when students use physical manipulatives. They can also mirror the desired mental actions more closely.
Computers afford flexibility. Some computer manipulatives offer more flexibility than do their noncomputer counterparts. For example, Elastic Lines (Harvey, McHugh, and McGlathery 1989) allows the student to change instantly both the size, that is, the number of pegs per row, and the shape of a computer-generated geoboard (fig. 1). The ease of accessing these computer geoboards allows the software user many more experiences on a wider variety of geoboards. Eventually, these quantitative differences become qualitative differences.
Fig. 1: Elastic Lines (Harvey, McHugh, and McGlathery 1989) allows a variety of arrangements of "nails" on its electronic geoboard; size can also be altered.
Computer manipulatives allow for changing the arrangement or representation. Another aspect of the flexibility afforded by many computer manipulatives is the ability to change an arrangement of the data. Most spreadsheet and data base software will sort and reorder the data in numerous different ways. Primary Graphing and Probability Workshop (Clements, Crown, and Kantowski 1991) allows the user to convert a picture graph to a bar graph with a single keystroke (fig. 2).
Fig. 2: Primary Graphing and Probability Workshop (Clements, Crown, and Kantowski 1991) converts a picture graph to a bar graph with a single keystroke.
Computers store and later retrieve configurations. Students and teachers can save and later retrieve any arrangement of computer manipulatives. Students who have partially solved a problem can pick up immediately where they left off. They can save a spreadsheet or database created for one project and use it for other projects.
Computers record and replay students' actions. Computers allow the storage of more than static configurations. Once a series of actions is finished, it is often difficult to reflect on it. But computers have the power to record and replay sequences of actions on manipulatives. The computer-programming commands can be recorded and later replayed, changed, and viewed. This ability encourages real mathematical exploration. Computer games such as Tetris allow students to replay the same game. In one version, Tumbling Tetrominoes, which is included in Clements, Russell et al. (1995), students try to cover a region with a random sequence of tetrominoes (fig. 3). If students believe that they can improve their strategy, they can elect to receive the same tetrominoes in the same order and try a new approach.
Fig. 3: When playing Tumbling Tetrominoes (Clements, Russell et al. 1995), students attempt to tile tetrominoes -- shapes that are like dominoes except that four squares are connected with full sides touching. Research indicates that playing such games involves conceptual and spatial reasoning (Bright, Usnick, and Williams 1992). Students can elect to replay a game to improve their strategy.
Computer manipulatives link the concrete and the symbolic by means of feedback. Other benefits go beyond convenience. For example, a major advantage of the computer is the ability to associate active experience with manipulatives to symbolic representations. The computer connects manipulatives that students make, move, and change with numbers and words. Many students fail to relate their actions on manipulatives with the notation system used to describe these actions. The computer links the two.
For example, students can draw rectangles by hand but never go further to think about them in a mathematical way. In Logo, however, students must analyze the figure to construct a sequence of commands, or a procedure, to draw a rectangle (see fig. 4). They have to apply numbers to the measures of the sides and angles, or turns. This process helps them become explicitly aware of such characteristics as "opposite sides equal in length." If instead of FD 75 they enter FD 90, the figure will not be a rectangle. The link between the symbols and the figure is direct and immediate. Studies confirm that students' ideas about shapes are more mathematical and precise after using Logo (Clements and Battista 1989; Clements and Battista 1992).
Fig. 4: Students use a new version of Logo, Turtle Math (Clements and Meredith 1994), to construct a rectangle. The commands are listed in the command center on the left (Clements, Battista et al. 1995; Clements and Meredith 1994).
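For readers without access to Logo or Turtle Math, the same analysis can be sketched in Python's standard turtle module, whose forward() and right() commands play the roles of FD and RT (this is an illustrative analogue, not the software described in the article):

```python
# Drawing a rectangle forces its properties to be made explicit: opposite sides
# get equal lengths, and every turn is 90 degrees. Changing one length (say 75
# to 90) breaks the rectangle, just as FD 90 in place of FD 75 does in Logo.
import turtle

t = turtle.Turtle()
for side_length in (75, 40, 75, 40):   # opposite sides equal in length
    t.forward(side_length)             # analogous to FD 75 / FD 40
    t.right(90)                        # analogous to RT 90

turtle.done()
```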
Some students understand certain ideas, such as angle measure, for the first time only after they have used Logo. They have to make sense of what is being controlled by the numbers they give to right- and left-turn commands. The turtle immediately links the symbolic command to a sensory-concrete turning action. Receiving feedback from their explorations over several tasks, they develop an awareness of these quantities and the geometric ideas of angle and rotation (Kieran and Hillel 1990).
Fortunately, students are not surprised that the computer does not understand natural language and that they must formalize their ideas to communicate them. Students formalize about five times more often using computers than they do using paper (Hoyles, Healy, and Sutherland 1991). For example, students struggled to express the number pattern that they had explored on spreadsheets. They used such phrases as "this cell equals the next one plus 2; and then that result plus this cell plus 3 equals this." Their use of the structure of the spreadsheet's rows and columns, and their incorporation of formulas in the cells of the spreadsheet, helped them more formally express the generalized pattern they had invented.
But is it too restrictive or too hard to operate on symbols rather than to operate directly on the manipulatives? Ironically, less "freedom" might be more helpful. In a study of place value, one group of students worked with a computer base-ten manipulative. The students could not move the computer blocks directly. Instead, they had to operate on symbols -- digits -- as shown in figure 5 (Thompson 1992; Thompson and Thompson 1990). Another group of students used physical base-ten blocks. Although teachers frequently guided students to see the connection between what they did with the blocks and what they wrote on paper, the physical-blocks group did not feel constrained to write something that represented what they did with blocks. Instead, they appeared to look at the two as separate activities. In comparison, the computer group used symbols more meaningfully, tending to connect them to the base-ten blocks.
Fig. 5: A screen display of the base-ten blocks computer microworld (Thompson 1992).
In computer environments, such as computer base-tens blocks or computer programming, students cannot overlook the consequences of their actions, which is possible to do with physical manipulatives. Computer manipulatives, therefore, can help students build on their physical experiences, tying them tightly to symbolic representations. In this way, computers help students link sensory-concrete and abstract knowledge so they can build integrated-concrete knowledge.
Computer manipulatives dynamically link multiple representations. Such computer links can help students connect many types of representations, such as pictures, tables, graphs, and equations. For example, many programs allow students to see immediately the changes in a graph as they change data in a table.
These links can also be dynamic. Students might stretch a computer geoboard's rectangle and see the measures of the sides, perimeter, and area change with their actions.
Computers change the very nature of the manipulative. Students can do things that they cannot do with physical manipulatives. Instead of trading a one hundred-block for ten ten-blocks, students can break the hundred-block pictured on the screen into ten ten-blocks, a procedure that mirrors the mathematical action closely. Students can expand computer geoboards to any size or shape. They can command the computer to draw automatically a figure symmetrical to any they create on the geoboard.
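As a minimal illustration of what such a dynamically linked manipulative involves (a sketch only; this class and its method names are invented for illustration and are not the interface of any program cited above):

```python
# Every action on the blocks immediately updates the displayed numeral,
# so breaking a hundred into ten tens visibly leaves the number unchanged.
class BaseTenBlocks:
    def __init__(self, hundreds=0, tens=0, ones=0):
        self.hundreds, self.tens, self.ones = hundreds, tens, ones
        self.show()

    def break_hundred(self):
        """Break one hundred-block into ten ten-blocks."""
        if self.hundreds > 0:
            self.hundreds -= 1
            self.tens += 10
        self.show()

    def glue_ones(self):
        """Glue ten one-blocks into a single ten-block."""
        if self.ones >= 10:
            self.ones -= 10
            self.tens += 1
        self.show()

    def show(self):
        value = 100 * self.hundreds + 10 * self.tens + self.ones
        print(f"{self.hundreds} hundreds, {self.tens} tens, {self.ones} ones -> {value}")

blocks = BaseTenBlocks(hundreds=1, tens=2, ones=14)   # 1 hundred, 2 tens, 14 ones -> 134
blocks.glue_ones()                                    # 1 hundred, 3 tens, 4 ones  -> 134
blocks.break_hundred()                                # 0 hundreds, 13 tens, 4 ones -> 134
```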
Advantages Of Computer Manipulatives for Teaching and Learning
In addition to the aforementioned advantages, computers and computer manipulatives possess other characteristics that enhance teaching and learning mathematics. Descriptions of these features follow.
Computer manipulatives link the specific to the general. Certain computer manipulatives help students view a mathematical object not just as one instance but as a representative of an entire class of objects. For example, in Geometric Supposer (Schwartz and Yerushalmy 1986) or Logo, students are more likely to see a rectangle as one of many that could be made rather than as just one rectangle.
This effect even extends to problem-solving strategies. In a series of studies, fourth-grade through high school students who used Logo learned problem-solving strategies better than those who were taught the same strategies with noncomputer manipulatives (Swan and Black 1989). Logo provided malleable representations of the strategies that students could inspect, manipulate, and test through practice. For example, in Logo, students broke a problem into parts by disembedding, planning, and programming each piece of a complex picture separately. They then generalized this strategy to other mathematics problems.
Computer manipulatives encourage problem posing and conjecturing. This ability to link the specific to the general also encourages students to make their own conjectures. "The essence of mathematical creativity lies in the making and exploring of mathematical conjectures" (Schwartz 1989). Computer manipulatives can furnish tools that allow students to explore their own conjectures while also decreasing the psychological cost of making incorrect conjectures.
Because students may themselves test their ideas on the computer, they can more easily move from naive to empirical to logical thinking as they make and test conjectures. In addition, the environments appear conducive not only to posing problems but to wondering and to playing with ideas. In early phases of problem solving, the environments help students explore possibilities, not become "stuck" when no solution path presents itself. Overall, research suggests that computer manipulatives can enable "teaching children to be mathematicians vs. teaching about mathematics" (Papert, 1980, 177).
For example, consider the following dialogue in which a teacher was overheard discussing students' Logo procedures for drawing equilateral triangles.
Teacher: Great. We got the turtle to draw bigger and smaller equilateral triangles. Who can summarize how we did it?
Student: We changed all the forward numbers with a different number -- but all the same. But the turns had to stay 120, 'cause they're all the same in equilateral triangles. (See fig. 6.)
Chris: We didn't make the biggest triangle.
Teacher: What do you mean?
Chris: What's the biggest one you could make?
Teacher: What do people think?
Rashad: Let's try 300.
The class did (see fig. 7).
Fig. 6: Students use Turtle Math (Clements and Meredith 1994) to construct an equilateral triangle.
Fig. 7: When commands are changed, the figure is automatically changed -- here, to an equilateral triangle with sides of length 300 turtle steps.
Student: It didn't fit on the screen. All we see is
Teacher: Where's the rest?
Student: Off here [gesturing]. The turtle doesn't wrap around the screen.
Tanisha: Let's try 900!
The student typing made a mistake, and changed the command to FD 3900.
Teacher: Whoa! Keep it! Before you try it, class, tell me what it will look like!
Student: It'll be bigger. Totally off the screen! You won't see it at all!
Student: No, two lines will still be there, but they'll be way far apart.
The children were surprised when it turned out the same! (See fig. 8.)
Fig. 8: Why did the figure not change when the side lengths were changed to 3900 turtle steps?
Teacher: Is that what you predicted?
Rashad: No! We made a mistake.
Student: Oh, I get it. It's right. It's just farther off the screen. See, it goes way off, there, like about past the ceiling.
The teacher challenged them to explore this and other problems they could think of.
Student: I'm going to find the smallest equilateral triangle.
Student: We're going to try to get all the sizes inside one another.
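The procedure the class keeps varying can be sketched with Python's turtle module as an analogue of the Logo/Turtle Math commands in the dialogue (illustrative only): every side gets the same forward number, and every turn stays 120.

```python
import turtle

def equilateral_triangle(t, side):
    for _ in range(3):
        t.forward(side)   # "all the forward numbers ... all the same"
        t.right(120)      # "the turns had to stay 120"

t = turtle.Turtle()
for side in (50, 100, 300):   # bigger and smaller triangles, as the class tried
    equilateral_triangle(t, side)

turtle.done()
```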
Computer manipulatives build scaffolding for problem solving. Computer environments may be unique in furnishing problem-solving scaffolding that allows students to build on their initial intuitive visual approaches and construct more analytic approaches. In this way, early concepts and strategies may be precursors of more sophisticated mathematics. In the realm of turtle geometry, research supports Papert's (1980) contention that ideas of turtle geometry are based on personal, intuitive knowledge (Clements and Battista 1991; Kynigos 1992). One boy, for example, wrote a procedure to draw a rectangle. He created a different variable for the length of each of the four sides. He gradually saw that he needed only two variables, since the lengths of the opposite sides are equal. In this way, he recognized that the variables could represent values rather than specific sides of the rectangle. No teacher intervened; Logo supplied the scaffolding by requiring a symbolic representation and by allowing the boy to link the symbols to the figure.
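The boy's insight can be expressed as a two-parameter procedure (again sketched with Python's turtle module rather than Logo; the function name is illustrative):

```python
import turtle

def rectangle(t, length, width):
    """Only two values are needed, because opposite sides are equal."""
    for _ in range(2):
        t.forward(length)
        t.right(90)
        t.forward(width)
        t.right(90)

t = turtle.Turtle()
rectangle(t, 120, 60)
turtle.done()
```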
Computer manipulatives may also build scaffolding by assisting students in getting started on a solution. For example, in a spreadsheet environment, typing headings or entering fixed numbers might help students organize their ideas.
Computer manipulatives focus attention and increase motivation. One group of researchers studied pairs of students as they worked on computers and found that the computer "somehow draws the attention of the pupils and becomes a focus for discussion," thus resulting in very little off-task talk (Hoyles, Healy, and Sutherland 1991). Although most children seem to enjoy working on the computer, such an activity can be especially motivating for some students who have been unsuccessful with mathematics. For example, two such third graders were observed as they eagerly worked in a Logo environment. They had gone forward twenty-three turtle steps, but then figured out that they needed to go forward sixty turtle steps in all. They were so involved that both of them wanted to do the necessary subtraction. One grabbed the paper from the other so he could compute the difference.
Computer manipulatives encourage and facilitate complete, precise explanations. Compared with students using paper and pencil, students using computers work with more precision and exactness (Butler and Close 1989; Clements and Battista 1991; Gallou-Dumiel 1989). For example, students can use physical manipulatives to perform such motions as slides, flips, and turns. However, they make intuitive movements and corrections without being aware of these geometric motions. Even young children can move puzzle pieces into place without a conscious awareness of the geometric motions that can describe these physical movements. In one study, researchers attempted to help a group of students using noncomputer manipulatives become aware of these motions. However, descriptions of the motions were generated from, and interpreted by, physical motions of students who understood the task. In contrast, students using the computer specified motions to the computer, which does not "already understand." The specification had to be thorough and detailed. The results of these commands were observed, reflected on, and corrected. This effort led to more discussion of the motions themselves, not just the shapes (Butler and Close 1989).
Firming Up Ideas about the Concrete
Manipulatives can play a role in students' construction of meaningful ideas. They should be used before formal instruction, such as teaching algorithms. However, teachers and students should avoid using manipulatives as an end -- without careful thought -- rather than as a means to that end.
The appropriate use of representations is important to mathematics learning. In certain topics, such as early number concepts, geometry, measurement, and fractions, proper use of manipulatives is especially crucial. However, manipulatives alone are not sufficient -- they must be used to actively engage children's thinking with teacher guidance -- and definitions of what constitute a "manipulative" may need to be expanded. Research offers specific guidelines for selecting and using manipulatives.
How Should Manipulatives Be Selected?
The following guidelines are offered to assist teachers in selecting appropriate and effective manipulatives.
Select manipulatives for children's use. Teacher demonstrations with manipulatives can be valuable; however, children should use the manipulatives to solve a variety of problems.
Select manipulatives that allow children to use their informal methods. Manipulatives should not prescribe or unnecessarily limit students' solutions or ways of making sense of mathematical ideas. Students should be in control.
Use caution in selecting "prestructured" manipulatives in which the mathematics is built in by the manufacturer, such as base-ten blocks as opposed to interlocking cubes. They can become what the colored rods were for Holt's students -- "another kind of numeral, symbols made of colored wood rather than marks on paper" (1982, 170). Sometimes the simpler, the better. For example, educators from the Netherlands found that students did not learn well using base-ten blocks and other structured base-ten materials. A mismatch may have occurred between trading one base-ten block for another and the actions of mentally separating a ten into ten ones or thinking of the same quantity simultaneously as "one ten" and "ten ones." The Netherlands students were more successful after hearing a story of a sultan who often wants to count his gold. The setting of the story gave students a reason for counting and grouping: the gold had to be counted, packed, and sometimes unwrapped -- and an inventory had to be constantly maintained (Gravemeijer 1991). Students, therefore, might best start by using manipulatives with which they create and break up groups of tens into ones, such as interlocking cubes, instead of base-ten blocks (Baroody 1990). Settings that give reasons for grouping are ideal.
Select manipulatives that can serve many purposes. Some manipulatives, such as interlocking cubes, can be used for counting, place value, arithmetic, patterning, and many other topics. This versatility allows students to find many different uses. However, a few single-purpose devices, such as mirrors or Miras, make a significant contribution.
Choose particular representations of mathematical ideas with care. Perhaps the most important criteria are that the experience be meaningful to students and that they become actively engaged in thinking about it.
To introduce a topic, use a single manipulative instead of many different manipulatives. One theory held that students had to see an idea presented by several different manipulatives to abstract the essence of this idea. However, in some circumstances, using the same material consistently is advantageous. "Using the tool metaphor for representations, perhaps a representation becomes useful for students as they handle it and work with it repeatedly" (Hiebert and Wearne 1992, 114). If the tool is to become useful, perhaps an advantage accrues in using the same tool in different situations rather than in using different tools in the same situation. Students gain expertise through using a tool over and over on different projects.
Should only one manipulative be used, then? No, different children may find different models meaningful (Baroody 1990). Further, reflecting on and discussing different models may indeed help students abstract the mathematical idea. Brief and trivial use, however, will not help; each manipulative should become a tool for thinking. Different manipulatives allow, and even encourage, students to choose their own representations. New material can also be used to assess whether students understand the idea or just have learned to use the previous material in a rote manner.
Select computer manipulatives when appropriate. Certain computer manipulatives may be more beneficial than any physical manipulative. Some are just the sort of tools that can lead to mathematical expertise. The following recommendations and special considerations pertain to computer manipulatives. Select programs that --
- have uncomplicated changing, repeating, and undoing actions;
- allow students to save configurations and sequences of actions;
- dynamically link different representations and maintain a tight connection between pictured objects and symbols;
- allow students and teachers to pose and solve their own problems; and
- allow students to develop increasing control of a flexible, extensible, mathematical tool. Such programs also serve many purposes and help form connections between mathematical ideas.
Select computer manipulatives that --
- encourage easy alterations of scale and arrangement,
- go beyond what can be done with physical manipulatives, and
- demand increasingly complete and precise specifications.
How Should Manipulatives Be Used?
The following suggestions are offered to assist teachers in effectively using manipulatives in their classrooms.
Increase students' use of manipulatives. Most students do not use manipulatives as often as needed. Thoughtful use can enhance almost every topic. Also, short sessions do not significantly enhance learning. Students must learn to use manipulatives as tools for thinking about mathematics.
Recognize that students may differ in their need for manipulatives. Teachers should be cautious about requiring all students to use the same manipulative. Many might be better off if allowed to choose their manipulatives or to just use paper and pencil. Some students in the Netherlands were more successful when they drew pictures of the sultan's gold pieces than when they used any physical manipulative. Others may need manipulatives for different lengths of time (Suydam 1986).
Encourage students to use manipulatives to solve a variety of problems and then to reflect on and justify their solutions. Such varied experience and justification helps students build and maintain understanding. Ask students to explain what each step in their solution means and to analyze any errors that occurred as they use manipulatives -- some of which may have resulted from using the manipulative.
Become experienced with manipulatives. Attitudes toward mathematics, as well as concepts, are improved when students have instruction with manipulatives, but only if their teachers are knowledgeable about their use (Sowell 1989).
Some recommendations are specific to computer manipulatives.
- Use computer manipulatives for assessment as mirrors of students' thinking.
- Guide students to alter and reflect on their actions, always predicting and explaining.
- Create tasks that cause students to see conflicts or gaps in their thinking.
- Have students work cooperatively in pairs.
- If possible, use one computer and a large-screen display to focus and extend follow-up discussions with the class.
- Recognize that much information may have to be introduced before moving to work on computers, including the purpose of the software, ways to operate the hardware and software, mathematics content and problem-solving strategies, and so on.
- Use extensible programs for long periods across topics when possible.
With both physical and computer manipulatives, teachers should choose meaningful representations, then guide students to make connections between these representations. No one yet knows what modes of presentations are crucial and what sequence of representations should be used before symbols are introduced (Baroody 1989; Clements 1989). Teachers should be careful about adhering blindly to an unproved, concrete-pictorial-abstract sequence, especially when more than one way of thinking about "concrete" is possible. It is known that students' knowledge is strongest when they connect real-world situations, manipulatives, pictures, and spoken and written symbols (Lesh 1990). They should relate manipulative models to their intuitive, informal understanding of concepts and translate between representations at all points of their learning. This process builds integrated-concrete ideas.
Now when teachers close their eyes and picture children doing mathematics, manipulatives should still be in the picture, but the mental image should include a new perspective on how to use them.
Baroody, Arthur J. "One Point of View: Manipulatives Don't Come with Guarantees." Arithmetic Teacher 37 (October 1989):4-5.
-----. "How and When Should Place-Value Concepts and Skills Be Taught?" Journal for Research in Mathematics Education 21 (July 1990):281-86.
Bright, George, Virginia E. Usnick, and Susan Williams. Orientation of Shapes in a Video Game. Hilton Head, S.C.: Eastern Educational Research Association, 1992.
Butler, Deirdre, and Sean Close. "Assessing the Benefits of a Logo Problem-Solving Course." Irish Educational Studies 8 (1989):168-90.
Char, Cynthia A. Computer Graphic Feltboards: New Software Approaches for Young Children's Mathematical Exploration. San Francisco: American Educational Research Association, 1989.
Clements, Douglas H. Computers in Elementary Mathematics Education. Englewood Cliffs, N.J.: Prentice-Hall, 1989.
Clements, Douglas H., and Michael T. Battista. "Learning of Geometric Concepts in a Logo Environment." Journal for Research in Mathematics Education 20 (November 1989):450-67.
-----. "Research into Practice: Constructivist Learning and Teaching." Arithmetic Teacher 38 (September 1990): 34-35.
-----. "The Development of a Logo-Based Elementary School Geometry Curriculum." Final report for NSF grant no. MDR-8651668. Buffalo, N.Y.: State University of New York at Buffalo, and Kent, Ohio: Kent State University, 1991.
-----. "Geometry and Spatial Reasoning." In Handbook of research on mathematics teaching and learning, edited by Douglas A. Grouws. 420-64. New York: Macmillan Publishing Co., 1992.
Clements, Douglas H., Michael T. Battista, Joan Akers, Virginia Woolley, Julie Sarama Meredith, and Sue McMillen. Turtle Paths. Cambridge, Mass.: Dale Seymour Publications, 1995. Includes software.
Clements, Douglas H., Warren D. Crown, and Mary Grace Kantowski. Primary Graphing and Probability Workshop. Glenview, Ill.: Scott, Foresman & Co., 1991. Software.
Clements, Douglas H., and Julie Sarama Meredith. Turtle Math. Montreal: Logo Computer Systems (LCSI), 1994. Software.
Clements, Douglas H., Susan Jo Russell, Cornelia Tierney, Michael T. Battista, and Julie Sarama Meredith. Flips, Turns, and Area. Cambridge, Mass.: Dale Seymour Publications, 1995. Includes software.
Driscoll, Mark J. Research within Reach: Elementary School Mathematics and Reading. St. Louis: CEMREL, 1983.
Fennema, Elizabeth. "The Relative Effectiveness of a Symbolic and a Concrete Model in Learning a Selected Mathematics Principle." Journal for Research in Mathematics Education 3 (November 1972):233-38.
Gallou-Dumiel, Elisabeth. "Reflections, Point Symmetry and Logo." In Proceedings of the Eleventh Annual Meeting, North American Chapter of the International Group for the Psychology of Mathematics Education, edited by C. A. Maher, G. A. Goldin, and R. B. Davis, 140-57. New Brunswick, N.J.: Rutgers University Press, 1989.
Gravemeijer, K. P. E. "An Instruction-Theoretical Reflection on the Use of Manipulatives." In Realistic Mathematics Education in Primary School, edited by L. Streefland. 57-76. Utrecht, The Netherlands: Freudenthal Institute, Utrecht University, 1991.
Harvey, Wayne, Robert McHugh, and Douglas McGlathery. Elastic Lines. Pleasantville, N.Y.: Sunburst Communications, 1989. Software.
Hiebert, James, and Diana Wearne. "Links between Teaching and Learning Place Value with Understanding in First Grade." Journal for Research in Mathematics Education 23 (March 1992):98-122.
Holt, John. How Children Fail. New York: Dell Publishing Co., 1982.
Hoyles, Celia, Lulu Healy, and Rosamund Sutherland. "Patterns of Discussion between Pupil Pairs in Computer and Non-Computer Environments." Journal of Computer Assisted Learning 7 (1991): 210-28.
Kamii, Constance. "Pedagogical Principles Derived from Piaget's Theory: Relevance for Educational Practice." In Piaget in the Classroom, edited by M. Schwebel and J. Raph, 199-215. New York: Basic Books, 1973.
-----. Young Children Reinvent Arithmetic: Implications of Piaget's Theory. New York: Teachers College Press, 1985.
-----. "Place Value: An Explanation of Its Difficulty and Educational Implications for the Primary Grades." Journal of Research in Childhood Education 1 (August 1986):75-86.
-----. Young Children Continue to Reinvent Arithmetic: 2nd grade. Implications of Piaget's Theory. New York: Teachers College Press, 1989.
Kieran, Carolyn, and Joel Hillel. "'It's Tough When You Have to Make the Triangles Angles': Insights from a Computer-Based Geometry Environment." Journal of Mathematical Behavior 9 (October 1990):99-127.
Kynigos, Chronis. "The Turtle Metaphor as a Tool for Children's Geometry." In Learning Mathematics and Logo, edited by C. Hoyles and R. Noss, 97-126. Cambridge, Mass.: MIT Press, 1992.
Lesh, Richard. "Computer-Based Assessment of Higher Order Understandings and Processes in Elementary Mathematics." In Assessing Higher Order Thinking in Mathematics, edited by G. Kulm, 81-110. Washington, D.C.: American Association for the Advancement of Science, 1990.
Papert, Seymour. Mindstorms: Children, Computers, and Powerful Ideas. New York: Basic Books, 1980.
Schwartz, Judah L. "Intellectual Mirrors: A Step in the Direction of Making Schools Knowledge-Making Places." Harvard Educational Review 59 (February 1989):51-61.
Schwartz, Judah L., and Michal Yerushalmy. The Geometric Supposer Series. Pleasantville, N.Y.: Sunburst Communications, 1986. Software.
Sowell, Evelyn J. "Effects of Manipulative Materials in Mathematics Instruction." Journal for Research in Mathematics Education 20 (November 1989):498-505.
Suydam, Marilyn N. "Research Report: Manipulative Materials and Achievement." Arithmetic Teacher 33 (February 1986):10, 32.
Swan, Karen, and John B. Black. "Logo Programming, Problem Solving, and Knowledge-Based Instruction." University of Albany, Albany, N.Y., 1989. Manuscript.
Thompson, Patrick W. "Notations, Conventions, and Constraints: Contributions to Effective Use of Concrete Materials in Elementary Mathematics." Journal for Research in Mathematics Education 23 (March 1992):123-47.
Thompson, Patrick W., and Alba G. Thompson. Salient Aspects of Experience with Concrete Manipulatives. Mexico City: International Group for the Psychology of Mathematics Education, 1990.
Wilensky, Uri. "Abstract Mediations on the Concrete and Concrete Implications for Mathematics Education." In Constructionism, edited by I. Harel and S. Papert, 193-199. Norwood, N.J.: Ablex Publishing Co., 1991.
Time to prepare this material was funded in part by the National Science Foundation under Grants No. MDR-9050210 and MDR-8954664. Any opinions, findings, and conclusions or recommendations expressed in this publication are those of the authors and do not necessarily reflect the views of the National Science Foundation.
The authors extend their appreciation to Arthur J. Baroody and several anonymous reviewers for their insightful comments and suggestions on earlier drafts of this article.
Douglas Clements teaches and conducts research at the State University of New York at Buffalo, Amherst, NY 14260; [email protected]. He develops educational software and elementary curriculum materials. Sue McMillen teaches at D'Youville College, Buffalo, NY 14201; [email protected]. Her current research interests are graphing calculators, educational software, and education for mathematically gifted students.
Other Articles by Douglas H. Clements:
Other Articles by Sue McMillen: | http://investigations.terc.edu/library/bookpapers/rethinking_concrete.cfm | 13 |
81 | Proteomics/Protein Separations - Centrifugation/How the Centrifuge Works
Forces of the Centrifuge
Centripetal Force
A centrifuge works by spinning mixtures around a central axis. As the sample spins, the inertia of the object tends to carry it along a straight-line path. However, due to its confinement within the centrifuge, the path of the object must be bent into a circular one. The body of the centrifuge, or the body of the container within the centrifuge, provides a normal force that pushes the object toward the center of the circular path of travel. This inward force is referred to as a centripetal force, and its magnitude and direction are exactly what is needed to keep the object moving in a circular path around the axis of rotation of the centrifuge.
Centrifugal Force
The strength of the outward force exerted by an object (due to its inertia trying to move it in a straight-line path) as it moves in a circle at a constant angular velocity depends on the angular velocity and the radius of rotation. This force is denoted F, the angular velocity is measured in radians per second and denoted ω (omega), and the radius of rotation, r, is measured in centimeters, so that F = mω²r for a particle of mass m.
Usually, the value cited for the force applied to a suspension of particles during centrifugation is a relative one, that is to say it is compared with the force that the earth's gravity would have on the same particles. This is referred to as relative centrifugal force (RCF). It was Galileo Galilei (1564-1642) who first systematically and scientifically investigated gravity as a natural phenomenon. The gravitational acceleration constant is customarily assigned the symbol "g" and for simplicity taken to be 980 cm/sec². Based on this measure, relative centrifugal force is expressed as:
RCF = F centrifugation / F gravity
The common way of denoting the operating speed of a centrifuge is in “revolutions per minute” or rpm. The formula above may be converted so that the relationship is expressed in terms of rpm. If you are expressing the radius in cm, then the formula for RCF is:
RCF = 1.118 × 10⁻⁵ × r × (rpm)²
If you are expressing the radius in inches, then the formula for RCF is:
RCF = 2.84 × 10⁻⁵ × r × (rpm)²
The centrifugal force created by the spinning of the centrifuge is greater at the bottom of the tube (away from the central axis) and less at the top of the tube (closer to the central axis). This difference is almost twofold. Because denser items have a greater mass, this causes their centripetal force to be greater (F=ma), and the overall result is that they settle near the outside of the circular path. This places denser objects at the bottom of a test tube that has been run in a centrifuge. Less dense objects remain closer to the center of the path, or the top of a test tube that has been run in a centrifuge.
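As a quick numerical illustration of the rpm formula above, the sketch below computes RCF at two radii; the rotor speed and the top- and bottom-of-tube radii are assumed example values, not figures taken from this text.

    def rcf(radius_cm, rpm):
        """Relative centrifugal force for a radius in cm and a speed in rpm."""
        return 1.118e-5 * radius_cm * rpm ** 2

    # Assumed example: a rotor at 10,000 rpm, with the top of the tube 4 cm
    # from the axis of rotation and the bottom of the tube 8 cm from it.
    for label, r in [("top of tube", 4.0), ("bottom of tube", 8.0)]:
        print(f"{label}: RCF = {rcf(r, 10_000):,.0f} x g")

The bottom-of-tube value comes out twice the top-of-tube value, matching the near-twofold difference described above.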
The Action of Centrifugal Force on Molecules
As samples spin in a centrifuge the particles in each sample are subjected to centrifugal force. However, this force is proportional to the mass of the particle. To express the centrifugal force applied to a particular molecule, its molecular weight (M) is used in the formula:
Centrifugal force = Mω²r
Because weight takes into account the force of gravity, using molecular weight in the formula for centrifugal force removes the need to divide by the force of gravity as shown above. This formula shows how particles within a centrifuge are separated based on their molecular weight. In addition to this, the size and shape of a particle also affect its migration in the gradient created by centrifugation. For example, plasmid DNA will travel farther down a gradient than chromosomal DNA. This is the result of the forces of buoyancy and friction counteracting the force of centrifugation.
Buoyancy and Friction
While the centrifugal force acts to accelerate a particle away from the axis of rotation, the particle is also subjected to additional forces including the force of buoyancy and the force of friction.
- Buoyancy force refers to the interaction of the molecule within the solvent. It is calculated as the centrifugal force multiplied by the volume of solvent the molecule displaces (V, the “partial specific volume”) and by the density of the solvent itself (rho, ρ). Together this gives the formula:
Buoyant force = Mω²rVρ
- As the particles move through the solvent, frictional force is also generated. The size and shape of a molecule are determinants in the measure of this force. These contribute to the rate of sedimentation, which is expressed as the change in distance from the axis of rotation over time (dr/dt). This force combines with the buoyant force to counteract the centrifugal force.
Frictional force = f(v) = f(dr/dt)
The final result of these forces acting together is that a particle will move through the solvent, away from the axis of rotation, until the centrifugal force is equivalent to the forced of buoyancy and friction. Using the formulas above the sedimentation coefficient (s) can be calculated from the molecular weight of a particle.
Sedimentation Coefficient (s) = M(1 − Vρ)D/RT (where D is the diffusion coefficient, R is the gas constant, and T is the absolute temperature)
The sedimentation coefficient of a molecule describes where it will settle in a gradient with the viscosity and density of water under centrifugation and is measured in seconds. For biological molecules these values range between 1 and 500 × 10⁻¹³ seconds. Instead of writing 10⁻¹³ seconds each time, this quantity is described in Svedberg units (S), named after Theodor Svedberg, so that 12 × 10⁻¹³ seconds is expressed as 12S.
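To make the units concrete, the sketch below evaluates the Svedberg relation s = M(1 − Vρ)D/RT for one set of assumed, order-of-magnitude values (roughly those of a mid-sized protein); none of these numbers come from the text above.

    # Illustrative only: all input values below are assumptions.
    R = 8.314        # gas constant, J/(mol*K)
    T = 293.15       # absolute temperature, K (20 C)
    M = 68.0         # molar mass, kg/mol (~68 kDa protein, assumed)
    V = 0.73e-3      # partial specific volume, m^3/kg (typical protein, assumed)
    rho = 998.0      # solvent density, kg/m^3 (water at 20 C)
    D = 6.0e-11      # diffusion coefficient, m^2/s (assumed)

    s = M * (1 - V * rho) * D / (R * T)
    print(f"s = {s:.2e} seconds = {s / 1e-13:.1f} Svedberg units (S)")

The result, a few Svedberg units, falls within the 1–500 S range quoted above.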
Coriolis Force
In addition to centrifugal force, particles in suspension (and the body of suspending fluid itself) within a spinning rotor are subjected to Coriolis force. The Coriolis force, which results from the inertia of the liquid and the suspended particles, is a small force directed at right angles to both the axis of rotation and the direction of particle motion. In a rotor spinning clockwise, this force acts to deflect particles in a counterclockwise direction (and vice versa). Under nearly all experimental conditions, the Coriolis force is very small in comparison with the centripetal force; however, its effects are magnified when the rotor's speed changes (e.g., during acceleration and deceleration). While the centrifugal force acts to accelerate a particle away from the axis of rotation, the particle is also subjected to additional forces including frictional force, the force of buoyancy and gravitational force.
Centrifuge – the Machine
The basic design of a centrifuge consists of a rotor which holds samples and rotates around a fixed axis driven by a motor (in modern centrifuges). More advanced centrifuges may also have lubrication and cooling systems. In addition to this, some centrifuges are capable of creating a vacuum environment around the rotor.
The most popular and widely used centrifuge rotors are the swinging-bucket and fixed-angle rotors. Two other types of rotors are the vertical rotor and the zonal rotor.
- Fixed-angle (or angle head) rotors are generally simpler in design than are swinging-bucket rotors. In this type of rotor, the centrifuge tubes are held at a specific and constant angle to the horizontal plane; that is, the tube does not reorient between the vertical and horizontal positions. This type of rotor works very well for simple pelleting centrifugation but has limited and variable success in rate-zonal and isopycnic sedimentation, respectively.
- Swinging-bucket rotors are able to pivot within the centrifuge. As speeds increase, the angle of the rotor perpendicular to the axis of rotation also increases, positioning it in a horizontal configuration. Conversely, as the centrifuge slows down, the rotor returns to a vertical position. When the bucket swings out, the pathlength is increased, allowing for improved separation of individual particles, particularly in density gradient centrifugation. This type of rotor is inefficient when used for pelleting. However, it is good for use with both rate-zonal and isopycnic sedimentation.
Most swinging-bucket rotors are interchangeable so that different size test tubes can be used. Additionally, some have the ability to hold multiple test tubes in a single arm.
- Vertical rotors hold samples in a vertical position within the centrifuge. This type of rotor is not suitable for pelleting centrifugation but it does a good job when used for rate-zonal sedimentation, and an excellent job with isopycnic sedimentation.
Centrifuge Tubes
Depending on the sample/ rotor size and speed of centrifugation different types of centrifuge tubes can be used. Proper selection of centrifuge tubes helps to ensure that leakage does not occur and none of the sample is lost, the chemical properties of the sample and the tube do not conflict, and that the sample can be recovered with little effort.
Rotor and Tube Materials
Early rotors such as the Svedberg rotors were made of steel and occasionally brass. The high density of these materials and the resulting high rotor weight produce an appreciable load on the centrifuge drive and significantly limit operating speeds. Most commercial rotors are now made partly or entirely of aluminum or titanium.
Lubrication and Cooling Systems
The high speeds at which centrifuges operate generate a great deal of frictional force. Along with this friction comes heat. To prevent damage to the centrifuge and/or samples within the centrifuge, many present day centrifuges have lubrication and cooling systems to combat friction and the heat it produces.
Convection occurs whenever uniform suspensions of particles are sedimented in a conventional centrifuge. The term "convection" means the bulk movement of solute and/or solvent within the centrifuge tube. Unwanted convection can be caused by variations in temperature in different parts of the centrifuge. By controlling the temperature, lubrication and cooling systems help to guard against this.
Superspeed vs. Ultraspeed Centrifuges
There are two basic types of preparative centrifuges; superspeed centrifuges and ultraspeed centrifuges or ultracentrifuges.
- Superspeed centrifuges generally operate at speeds up to about 20,000 rpm. These centrifuges usually do not require evacuation of the rotor chamber, and drive the rotor either directly or through belts or gears. The picture below is of a Sorvall RC-5B refrigerated superspeed centrifuge.
- Ultracentrifuges can be operated at much greater speeds (up to 65,000 or 75,000 rpm). Due to these high speeds, the rotor chamber must be evacuated of air to reduce friction and permit accurate rotor temperature control. In most ultracentrifuges the rotor is driven either by a motor and a set of gears or by an oil or air turbine system. The picture below is of a Beckman Coulter Optima LE-80K ultracentrifuge.
Next section: Density Gradient Centrifugation
- "Basics of Centrifugation" Cole-Parmer Technical Library (Published with permission of THERMO).
- Bloomfield, L. A. "How Things Work: Explaining the Physics of Everyday Life" University of Virginia.
- Buckley, Nancy. "Lecture 5: Centrifugation" Biological Sciences Department, California State Polytechnic University.
- "How do centrifuges work?" Physics Forum. | http://en.wikibooks.org/wiki/Proteomics/Protein_Separations_-_Centrifugation/How_the_Centrifuge_Works | 13 |
91 | From Math Images
Probability distributions reveal either the probability of a random variable being a particular outcome (as with discrete probability distributions) or the probability that a random variable will fall within a particular interval of outcomes (as with continuous probability distributions). In addition, probability distributions are such that the total sum of the probabilities over the set of outcomes must be equal to 1 and the probability corresponding to a single outcome or interval of outcomes must be between 0 and 1.
Discrete Probability Distributions
Illustration: Rolling a Fair, Six-Sided Dice. There are only 6 possible outcomes if you roll a six-sided dice: 1, 2, 3, 4, 5, and 6. The probability of rolling any one of these outcomes is 1/6, and the sum of the probabilities of the six different outcomes is 1. The discrete random variable here is the value of a roll.
The following graphs show the results of rolling a six-sided dice 1000 times.
The frequencies and probabilities of each outcome can be determined from these graphs (ex. Rolling a 4 had a frequency of 160, meaning that 160 out of the 1000 rolls resulted in a value of 4. Also, Rolling a 4 had probability of about 0.16).
We can see that the CDF graph is a step function that is defined at each individual possible outcome and increases towards 1. The probability that the dice rolls a 6 or lower must be 1, because this interval contains the entire set of outcome values. The cumulative probability of each outcome can be determined from this graph (ex. Rolling a 4 had a cumulative probability of approximately 0.65, meaning that the probability of rolling a 4 or lower was about 65%).
Mathematically, a discrete probability distribution can be defined as a distribution with a set of outcomes that are discrete values, and are usually also pre-defined and finite. If X represents the discrete random variable, while x represents a possible outcome of X and j represents the set of all outcomes of X, then a discrete probability distribution is such that:
- 0 ≤ p(x) ≤ 1 for every outcome x, where p(x) is the probability that X = x
- Σ p(x) = 1, summing over all x in the set j, where the sum of the probabilities of all possible outcomes of X is 1
Six-Sided Dice Demonstration
This applet demonstrates rolling a six-sided dice while recording the outcomes.
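Since the interactive applet itself is not reproduced here, a minimal Python sketch of the same experiment (1000 rolls of a fair dice, tallying frequencies, probabilities, and cumulative probabilities) might look like this:

    import random
    from collections import Counter

    def roll_dice(n_rolls=1000, seed=0):
        """Simulate n_rolls of a fair six-sided dice and tally the outcomes."""
        rng = random.Random(seed)
        return Counter(rng.randint(1, 6) for _ in range(n_rolls))

    counts = roll_dice()
    cumulative = 0.0
    for value in range(1, 7):
        p = counts[value] / 1000
        cumulative += p
        print(f"rolled {value}: frequency = {counts[value]:4d}  "
              f"probability = {p:.3f}  cumulative = {cumulative:.3f}")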
Continuous Probability Distributions
Illustration: Weighing Apples. The average weight of an apple is about 150 grams. However, if you were to measure a number of apples, the outcomes that you would obtain from measuring the weight of each apple would vary like so: 150.534... grams, 149.259... grams, 154.274... grams, 152.389... grams, and so on. There are an infinite number of outcomes that emerge from these measurements, so the probability that an apple would weigh exactly 152.234... grams is zero. The continuous random variable here is the weight of an apple.
The graphs below show the results of measuring the weights of 1000 apples.
It is unreasonable to plot the frequency of each outcome individually (each outcome is unique and would only have a frequency of 1), so the frequencies must be grouped by intervals to make a histogram that can be generalized into a function. Using the function, we can calculate the probability that the random variable will fall within an interval (ex. [a,b]) of values.
The cumulative probability graph for this illustration ends at 1, and the cumulative probability of each outcome (ex. b) can be determined from this graph. Also, the CDF graph is steeper in the middle where the frequencies are greater.
Mathematically, continuous probability distribution can be defined as a distribution with an infinite set of uncountable outcomes. The probability that a random outcome is equal to any real-value is zero, because there are an infinite number of outcomes that are possible. Thus, probabilities can only be calculated over intervals. If X represents the continuous random variable while x represents a possible outcome of X, then a continuous probability distribution is such that:
- p(a < x < b) = ∫ f(x) dx taken from a to b, where p(a < x < b) is the probability of the interval [a, b] and f(x) is the function describing the distribution
- p(x = a) = 0 for any single value a, where the probability of any single possible outcome is zero
- ∫ f(x) dx taken from −∞ to ∞ equals 1, where the sum of all the probabilities of the infinite set of outcomes is 1
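A numerical check of the first property can be done with any assumed density. The sketch below uses a normal distribution with mean 150 g and standard deviation 5 g as a stand-in model for the apple weights (the article does not specify the actual distribution) and approximates p(145 < X < 155) by integrating the PDF.

    import math

    def normal_pdf(x, mu=150.0, sigma=5.0):
        """PDF of a normal distribution (assumed model for apple weights)."""
        return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

    def probability_between(a, b, pdf, steps=10_000):
        """Approximate the probability of [a, b] by numerically integrating the PDF."""
        width = (b - a) / steps
        return sum(pdf(a + (i + 0.5) * width) for i in range(steps)) * width

    # About 0.68: one standard deviation on either side of the assumed mean.
    print(round(probability_between(145, 155, normal_pdf), 3))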
Cumulative Distribution Function
Another way to define these two types of distributions is by their relationship to the cumulative distribution function F(x). A cumulative distribution function (CDF) is used to find the probability that a random variable X is less than or equal to an outcome value a.
Mathematically, we can define the CDF to be F(a) = P(X ≤ a), where F(a) must be positive and must stay between 0 and 1 because the CDF represents accumulating probabilities.
To find the cumulative probability of X for an outcome value a:
- for discrete probability functions, we evaluate the expression F(a) = Σ p(x), summing over all outcomes x ≤ a
Therefore, discrete probability distributions must have fragmented CDFs that are uniquely defined at each outcome. Please refer back to the section on discrete probability distributions, and click to expand the CDF graph for the given illustration.
- for continuous probability functions, we evaluate the expression F(a) = ∫ f(x) dx taken from −∞ to a
Probability Density Function
The probability density function (PDF) of a random variable describes the probability of each point in the set of outcomes available to the random variable. The PDF can also be defined as the derivative of the cumulative distribution function, because the CDF corresponds with the gradual summation of the PDF. If the CDF is always increasing to 1, then the PDF must always be positive and the entire PDF must integrate to 1.
Mean, Median, and Mode
- Median: The middle value of a set of outcomes
- Mode: The most commonly appearing value in a set of outcomes
- Mean (Expected Value): The average value in a set of outcomes
- Mathematically, the mean, E(X), can be found:
- E(X) = Σ x·p(x), summed over the A values of x in the set j, for discrete probability functions, where A is the number of values in the set j of all outcomes
- E(X) = ∫ x·f(x) dx, taken over all possible values of x, for continuous probability functions
For example, let's suppose that the PDF of a situation is the function f(x) shown in the graph to the left.
The mean of the probability distribution is the average value of the set of outcomes. Graphically, the mean is the value at which the graph would "balance", where the outcomes on the right side of the mean and those on the left side would be equal in relative magnitude and amount.
- In this case, the mean is .
The median of the probability distribution is the middle value of the set of outcomes. Thus, it is the value on the graph where the area to the right of the median and the area to the left of the median are equal. In other words, where the sum of probabilities of the enclosed outcomes on either side is equal to 0.5 or 50%.
- In this case, the median is .
The mode of the probability distribution is the most frequent value in the set of outcomes. Since the function f(x) reveals the probability (relative frequency) of each outcome, the value with the highest probability or the maximum of the graph is the mode.
- In this case, the mode is .
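Because the specific PDF and its values are not reproduced above, here is a self-contained worked example with an assumed density f(x) = 2x on [0, 1]; its exact mean is 2/3, its median is 1/√2 ≈ 0.707, and its mode is 1, and the sketch below recovers these values numerically.

    # Assumed example density (not the one pictured above): f(x) = 2x on [0, 1].
    def f(x):
        return 2 * x if 0 <= x <= 1 else 0.0

    steps = 100_000
    dx = 1.0 / steps
    xs = [(i + 0.5) * dx for i in range(steps)]

    mean = sum(x * f(x) for x in xs) * dx        # E(X) = integral of x*f(x) dx

    cumulative, median = 0.0, None
    for x in xs:                                 # median: where the CDF reaches 0.5
        cumulative += f(x) * dx
        if cumulative >= 0.5:
            median = x
            break

    mode = max(xs, key=f)                        # mode: where f(x) is largest

    print(f"mean ~ {mean:.3f}, median ~ {median:.3f}, mode ~ {mode:.3f}")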
ReliaSoft Corporation, Basic Statistical Definitions
Engineering Statistics Handbook, What is a Probability Distribution
Statistics Help Online, Continuous distributions
Wikipedia, Probability Density Function
Math Forum - Ask Dr.Math, Visually Identifying Mean of a Probability Density Function | http://mathforum.org/mathimages/index.php?title=Probability_Distributions&oldid=20648 | 13 |
50 | The purpose of this essay is to explain to readers why clock and
watch gears are designed as they are. It addresses the criteria that
must be considered when designing a gear tooth.
When a gear tooth engages a pinion leaf (tooth), it
pushes the pinion leaf in its own direction of rotation,
thereby transferring power to the pinion. The direction of the force
that acts upon the pinion leaf could be seen as a tangent line on the
edge of the gear tooth’s circle (also called the pitch circle). This is
similar to the action of the escape wheel on a pallet: see Chapter 4
of "Clock and Watch Escapement Mechanics."
In order to minimize power losses, we want the gear and pinion
teeth to roll together as smoothly as possible, as when two well-honed disks roll together. The gear and pinion teeth must be designed to simulate the rolling action as closely as possible to maximize the efficiency of the power transferred from gear to pinion.
The ratio of the diameters of the two gears must be the same as the
ratio of the number of teeth of the two gears. For example, if one
gear has a diameter of 12 cm. and 120 teeth, and the other gear has
a diameter of 1 cm., the other gear should have 10 teeth because
the ratios are 12:1.
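A two-line sketch of this ratio rule, using the same example numbers as above:

    def pinion_teeth(gear_teeth, gear_diameter, pinion_diameter):
        # Tooth counts must be in the same ratio as the pitch diameters.
        return gear_teeth / (gear_diameter / pinion_diameter)

    print(pinion_teeth(120, 12.0, 1.0))   # -> 10.0 teeth, as in the example above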
At the midpoint of the impulse, the point of contact between the
gear tooth and pinion leaf must be on a part of the gear tooth (A)
that is at right angles to the direction of the force that the gear
tooth applies to the pinion leaf (B).
If the point of contact is not at right angles to the force, there is a
loss of power. For example, if the depthing were too shallow, with
the result that the point of contact were to be at 70º to the direction
of force (instead of 90º), the efficiency loss due to vector
forces would be about 12%, so the pinion would receive about 88% of
the power from the gear. In addition to this, the direction of the
force would result in some repulsion, causing undue wear of the
bearings (or bushings).
Since the point of contact must be at right angles to the direction of
force, and the direction of force could be seen as acting at a
tangent to the circumference of the gear’s circle, the lower part
of the tooth (called the "dedendum") must be parallel to the radius
line going from the circle center to the point of contact. Since the
gear’s circle is relatively large, the lines along the sides of the
tooth’s lower part (the dedendum) appear to be parallel, but they
are not: they point to the gear’s circle center. This is more obvious
when you observe the dedendum of the pinion leaf, which appears
to be tapered inwards, but both are designed on the same principle.
This drawing represents the basic design of the pinion leaf’s
dedendum, where the outer circle is the pitch circle, and each area
labeled with an "L" represents the area occupied by a pinion leaf.
The dedendum is the side of the leaf between the outer circle and
the inner circle (shown here as parts of the circle).
The addendum is the part of the leaf that extends beyond the pitch
circle. Each side of the addendum is a mirror image of the other
side. (The addendum in this drawing is sketched only approximately.)
The most important factor to consider is the angle of rotation of the
two gears during the engagement of each tooth. Since the angle of
rotation is greatest for the smaller gear (pinion), most attention is
paid here. As the gear pushes the pinion leaf beyond the mid-point
of the impulse, there are power losses caused by vector
forces (the direction of the impulse is not the same as the direction
of the pinion leaf receiving the impulse, so only a percentage of the
impulse is received). If the angle between the two directions were
small, the power losses would be small. The more teeth there are
on the smaller gear, the smaller the angle of rotation of the two
gears during the engagement of each tooth. This should be quite
obvious since the angle occupied by each tooth in a 6 tooth pinion
is 60º, whereas it is only 30º in a 12 tooth pinion
(minus one degree to avoid binding). A 12 tooth pinion is much
more efficient. A 12 tooth pinion is also much stronger because
two pinion leaves are engaged at all times (the mid-point of
engagement of the next leaf is reached before the first leaf is released).
The engagement of two gear teeth should be seen as two stages,
engaging and disengaging. There is much more friction and power
loss during engagement than during disengagement, so the teeth
must be designed such that the tooth of the smaller gear (the
pinion) is not released until the next tooth has reached the
mid-point of the impulse. This is why the teeth of the larger gear
extend beyond the pitch circle.
Take a moment to consider the effect of releasing the tooth of the
smaller gear before the next tooth has reached the mid-point of the
impulse. The next tooth is receiving the force of the tooth of the
larger gear as it engages more deeply into the larger tooth, having a
repelling effect and a grinding effect (the gear teeth are grinding
into one another). The result of this can readily be seen in clocks
with lantern pinions that have bent or damaged pinion wires, or
that have an unevenly spaced lantern pinion assembly (which may
not be visible to the eye!): the great wheel gears of many American
clocks that have lantern pinions have severe (abnormal) wear in the
gear teeth. This severe wear is not caused by a weakness of the
metal in the gear nor by the fact that lantern pinions are being used
in the clock, but caused by a defect in the lantern pinion of the
second wheel. If there is no defect visible in the lantern pinion
wires, then a new lantern pinion assembly should be made for the
second wheel as well as the great wheel being replaced.
You may have decided by now that the thing to do is to design the
pinion with 12 teeth or more and to design the dedendum part of
each tooth (the "flank") along the radial lines of the gear circle.
Since the angle of engagement of each tooth would be very small,
the design of the rest of the tooth would be less important and so
the addendum could simply be rounded off, right? Almost, but not
quite because of one more principle: during impulse, the angle of
rotation of each gear must be proportional to the ratio of the teeth
of each gear at each moment in time. If the gear providing the
impulse (the "driver") rotates a little more in one instant than in
another, while the rate of rotation of the pinion remains constant,
then the rate of power transferred from the driver gear to the
receiving gear would not be constant because work done is defined
as the product of force and displacement (in this case, angle
rotated). If the rotation of the pair of gears is not smooth and
continuously proportional, the power transferred will not be even
over the length of the impulse. In order to achieve this smooth
transfer of power, the design of the addendum of each tooth must
be such as to simulate the rolling action of two discs. When the gear teeth are designed so as to produce a constant angular-velocity ratio during meshing, they are said to have conjugate action.
Mathematicians have determined that this is best achieved by
considering the path traced by a point on a circle that rotates on a
flat plane (the Cycloidal Curve) and also on a curved plane (the
Go To Cycloidal Curve.
Clock Repair Main Page
Escapements in Motion | http://www.abbeyclock.com/gearing1.html | 13 |
58 | Trigonometry (from the Greek trigonon = three angles and metron = measure) is a part of elementary mathematics dealing with angles, triangles and trigonometric functions such as sine (abbreviated sin), cosine (abbreviated cos) and tangent (abbreviated tan). It has some connection to geometry, although there is disagreement on exactly what that connection is; for some, trigonometry is just a section of geometry.
Overview and definitions in Trigonometry
Trigonometry uses a large number of specific words to describe parts of a triangle. Some of the definitions in trigonometry are:
- Right-angled triangle - A right-angled triangle is a triangle that has one angle that is equal to 90 degrees. (A triangle can not have more than one right angle.) The standard trigonometric ratios can only be used on right-angled triangles.
- Hypotenuse - The hypotenuse of a triangle is the longest side, and the side that is opposite the right angle. For example, for the triangle on the right, the hypotenuse is side c.
- Opposite of an angle - The opposite side of an angle is the side that does not intersect with the vertex of the angle. For example, side a is the opposite of angle A in the triangle to the right.
- Adjacent of an angle - The adjacent side of an angle is the side that intersects the vertex of the angle but is not the hypotenuse. For example, side b is adjacent to angle A in the triangle to the right.
Trigonometric Ratios
Sine (sin) - The sine of an angle is equal to the length of the side opposite the angle divided by the length of the hypotenuse.
Cosine (cos) - The cosine of an angle is equal to the length of the side adjacent to the angle divided by the length of the hypotenuse.
Tangent (tan) - The tangent of an angle is equal to the length of the opposite side divided by the length of the adjacent side.
The reciprocals of these ratios are:
Cosecant (csc) - The cosecant of an angle is equal to the hypotenuse divided by the opposite side, or 1 ÷ sine.
Secant (sec) - The secant of an angle is equal to the hypotenuse divided by the adjacent side, or 1 ÷ cosine.
Cotangent (cot) - The cotangent of an angle is equal to the adjacent side divided by the opposite side, or 1 ÷ tangent.
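A quick check of these ratios with a 3-4-5 right triangle, using Python's math module (an illustration added here, not part of the original article):

    import math

    opposite, adjacent, hypotenuse = 3.0, 4.0, 5.0
    angle_A = math.atan2(opposite, adjacent)   # the angle whose opposite side is 3

    print(round(math.sin(angle_A), 3), opposite / hypotenuse)   # 0.6  0.6
    print(round(math.cos(angle_A), 3), adjacent / hypotenuse)   # 0.8  0.8
    print(round(math.tan(angle_A), 3), opposite / adjacent)     # 0.75 0.75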
Students often use a mnemonic to remember this relationship. The sine, cosine, and tangent ratios in a right triangle can be remembered by representing them as strings of letters, such as SOH-CAH-TOA:
- Sine = Opposite ÷ Hypotenuse
- Cosine = Adjacent ÷ Hypotenuse
- Tangent = Opposite ÷ Adjacent
Some Old Horse Caught A Horse Taking Oats Away
Some Officers Have Curly Auburn Hair Till Old Age
Stephen Opens His Car And Hits The Open Avenue | http://simple.wikipedia.org/wiki/Trigonometry | 13 |
62 | CURRICULUM MAP: 10108.map
Trigonometry (SCP) 493
25.1 MATHEMATICS - ALG REASONING: PATTERNS & FUNCTS
-- Students will explore and describe patterns and sequences using tables, graphs and charts.
-- Students will describe and compare properties and classes of functions including exponential, polynomial, rational, logarithmic and trigonometric.
-- Students will identify an appropriate symbolic representation for a function or relation displayed graphically or verbally.
-- Students will relate the graphical representation of a function to its function family and find equations, intercepts, maximum or minimum values, asymptotes and line of symmetry for that function.
-- Students will solve problems using concrete, verbal, symbolic, graphical and tabular representations.
-- Students will use logarithms, vectors and matrices to solve problems.
25.1 MATHEMATICS - ALG REASONING: PATTERNS & FUNCTS
-- Students will identify the characteristics of functions and relations including domain and range.
-- Students will use equations to describe the rules for number patterns and to model word problems.
25.2 MATHEMATICS - NUMERICAL & PROP REASONING
-- Students will select and use an appropriate form of number (integer, fraction, decimal, ratio, percent, exponential, scientific notation, irrational, complex) to solve practical problems involving order, magnitude, measures, labels, locations and scales.
-- Students will judge the effects of computations with powers and roots on the magnitude of results.
-- Students will perform operations with complex numbers, matrices, determinants, and logarithms.
-- Students will identify reasonable answers to problems that reflect real world experiences.
-- Students will select and use an appropriate form of number (integer, fraction, decimal, ratio, percent, exponential, scientific notation, irrational) to solve practical problems involving order, magnitude, measures, labels, locations and scales.
-- Students will use technological tools such as spreadsheets, probes, computer algebra systems and graphing utilities to organize and analyze large amounts of numerical information.
25.3 MATHEMATICS - GEOM & MEASUREMT
-- Students will use indirect methods including the Pythagorean Theorem, trigonometric ratios and proportions in similar figures to solve a variety of measurement problems.
-- Students will use properties of similarity and techniques of trigonometry to make indirect measurements of lengths and angles to solve a variety of problems.
-- Students will use the Pythagorean theorem to solve indirect measurement problems.
1. Where and how is trigonometry used in the real world?
2. How can students learn the behavior of and operations on families of functions?
3. How can students demonstrate increased skill in problem solving with applications using these functions?
4. How can students make the connection between radians and degrees, three basic trigonometric graphs and their cofunctions and inverse functions?
Acute angles and right triangles
Radian measure and circular functions
Graphs of the circular functions
Inverse circular functions and trigonometric equations
Applications of Trigonometry and vectors
Complex numbers and polar equations
Exponential and logarithmic functions
Identify angle rotations (from standard position)
Use geometry concepts and similar triangles to determine angle and side relationships
Use Pythagorean Theorem to define the six trigonometric functions
Find functional values using reciprocal, Pythagorean, or quotient identities
Identify signs and ranges of six trigonometric functions
Find functional values of six trigonometric functions for acute and non-acute angles (including special angles) with and without a calculator
Solve applied right angle trigonometry problems using significant digits including angles of elevation or depression and bearing
Convert angle values between degrees and radians as appropriate
Use radian measure to find arc length of a circle including using latitudes to find distance between two cities and area of a sector of a circle
Define trigonometric functions using circular functions
Find values of circular functions with and without exact values
Differentiate between linear and angular speed to solve application problems
Sketch the graphs of sine, cosine, tangent, and other periodic/ circular functions using transformations (shifts and reflections)
Identify and sketch the trigonometric functions with various amplitudes, periods, and translations
Determine a trigonometric model using curve fitting
Using simple harmonic motion identify the amplitude, the period, and the frequency of the motion of a spring
Using fundamental identities find trigonometric functional values given one value and the quadrant
Simplifying and verifying trigonometric identities
Simplifying, identifying, and using sum and difference, double angle, and half angle and cofunction formulas to find exact values
Identify and evaluate inverse trigonometric functions (arc) with and without a calculator
Solve trigonometric equations by linear methods, by factoring, by the quadratic formula
Solve trigonometric equations with half angles or multiple angles
Solving oblique triangles using law of sines or law of cosines with applications
Find the area of a triangle with specific information given
Analyzing data to determine the number of possible triangles with the ambiguous case
Sketching and performing operations with vectors,
Finding magnitude and direction of resultant with vectors, and resolving vectors into horizontal and vertical components
Solve equations using complex numbers
Operations with complex numbers
Graph complex numbers and converting between rectangular, trigonometric, and polar forms
Use DeMoivre’s Theorem to find powers of complex numbers
Find roots of complex numbers
Graph an exponential function with a base greater than 1 and an exponential equation with a base between zero and one
Solve equations using properties of exponents
Solve equations using the compound interest formula
Graph a logarithmic function using base 10
Solve a logarithmic equation by converting to exponential form
Simplifying logarithmic expressions using properties of logarithms
Solve applications of logarithms using pH or decibels
use a graphing calculator extensively
Students will be assessed by:
1. Daily homework assignments, which will account for 15% of their final grade.
2. Quizzes, approximately one every week
3. Unit tests, comprised of 85-90% computation, 10-15% application
4. Final exam that is comprehensive and will account for 20% of final grade
1. textbook course: Trigonometry (8th ed) by Lial, Hornsby, Schneider
2. Teacher's edition Trigonometry (8th ed)
3. Graphing calculator manual by Nester
4. Instructor's solution manual
5. Student solution manual
6. Course III: Integrated Math textbook
7. Functions modeling Change by Connally textbook for Pre-Calculus | http://www.woodstockacademy.org/ecsweb/maps/10108a.htm | 13 |
143 | 1. Perform basic operations on real numbers. (CCC 7)
1.1 Use the names of different types of numbers, add
signed numbers with same signs and add signed numbers with opposite signs.
1.2 Subtract signed numbers.
1.3 Multiply and divide signed numbers.
1.4 Write numbers in exponent form; evaluate numerical expressions that contain exponents.
1.5 Recognize polynomials; use the distributive property to multiply a
polynomial by a monomial.
1.6 Identify like terms, combine like terms.
1.7 Use the order of operations to simplify numerical expressions involving
addition, subtraction, multiplication, division and exponents.
1.8 Evaluate a variable expression for a specified value; evaluate a formula by substituting values for the variables.
1.9 Simplify variable expressions with several grouping symbols.
1.10 Evaluate a square root of a perfect square and approximate a square root to
the nearest thousandth.
1.11 Simplify radical expressions.
1.12 Add or subtract radical expressions.
1.13 Multiply radical expressions.
2.1 Solve equations of the form x + b = c using the addition principle.
2.2 Solve equations of the form x/a = b and ax = b.
2.3 Solve equations of the form ax + b = c and equations with parentheses.
2.4 Solve equations with fractions.
2.5 Solve formulas for a specified variable.
2.6 Interpret an inequality statement and graph an inequality on a number line.
2.7 Solve an inequality.
2.8 Write an algebraic expression for two or more quantities that are being compared.
2.9 Use equations to solve word problems.
2.10 Solve word problems involving comparisons.
2.11 Solve applied problems with periodic rate changes, percent problems,
investment problems involving simple interest, and coin problems.
2.12 Solve word problems using geometric formulas.
2.13 Use inequalities to solve word problems.
3. Perform basic operations with exponents and
scientific notation. (CCC 7)
3.1 Multiply and divide exponential expressions with like
bases and raise exponential expressions to a power.
3.2 Use negative exponents and write numbers in scientific notation.
4. Perform basic operations and factoring of
polynomials. (CCC 7)
4.1 Add and subtract polynomials.
4.2 Multiply polynomials.
4.3 Multiply binomials of the type (a + b)(a - b), (a + b)^2 and (a - b)^2;
multiply polynomials with more than two terms.
4.4 Divide a polynomial by a monomial and by a binomial.
4.5 Factor polynomials containing a common factor in each term.
4.6 Factor problems with four terms by grouping.
4.7 Factor polynomials of the form x^2 + bx + c.
4.8 Factor a trinomial of the form ax^2 + bx + c by the
trial and error method and by the grouping method.
4.9 Recognize and factor problems of the type a^2 - b^2 and the type a^2 + 2ab + b^2.
4.10 Identify and factor any polynomial that can be factored.
4.11 Solve a quadratic equation by factoring.
5. Graph functions. (CCC 7)
5.1 Plot a point given the coordinates, name the
coordinates of a plotted point, and find ordered pairs for a given linear
5.2 Graph a straight line by plotting points, by finding its x- and
y-intercepts, and graph horizontal and vertical lines.
5.3 Graph linear inequalities in two variables.
6. Solve equations and inequalities including
applications. (CCC 2, 7)
6.1 Find the slope given (a) two points and (b) the equation of a line; write
the equation of a line given the slope and y-intercept; graph using the slope and y-intercept; and find the slope of lines that are parallel or perpendicular.
6.2 Write the equation of a line given (a) a point and a slope, (b) two points,
and (c) from a graph of a line.
7. Solve systems of simultaneous equations including
applications. (CCC 2, 7)
7.1 Solve a system of linear equations by graphing.
7.2 Solve a system of linear equations by the substitution method.
7.3 Solve a system of linear equations by the addition method (elimination method).
7.4 Choose an appropriate method to solve a system of linear equations.
7.5 Solve word problems using a system of linear equations.
8. Perform operations on radicals and radical
expressions. (CCC 7)
8.1 Simplify a fraction involving radicals.
8.2 Use the Pythagorean Theorem and solve radical equations.
8.3 Solve problems involving direct and inverse variations.
8.4 Simplify an algebraic fraction by factoring.
8.5 Multiply and divide algebraic fractions and write the answer in simplest form.
8.6 Add and subtract algebraic fractions with the same denominator and with different denominators.
9. Perform basic operations on algebraic functions. (CCC
9.1 Simplify complex rational expressions.
9.2 Solve equations involving algebraic fractions.
9.3 Solve problems involving ratio and proportion, similar triangles, distance
problems and work problems.
Students will demonstrate proficiency on all Measurable
Performance Objectives at least to the 75% level. The final grade will be
determined using the College Grading System:
92 – 100
83 – 91
75 – 82
0 – 74
Students should refer to the Student Handbook for information on the Academic Standing Policy, the Academic Honesty Policy, Students' Rights and Responsibilities, and other policies relevant to their academic career.
Upon completion of this course, the student will be able to:
1. Identify the elements and properties of the real number system.
2. Add, subtract, multiply, and divide polynomials.
3. Simplify algebraic expressions.
4. Factor polynomials.
5. Add, subtract, multiply, and divide rational expressions, and write rational expressions in simplified form.
6. Simplify exponential expressions with integral and rational exponents.
7. Simplify radical expressions.
8. Solve linear equations.
9. Identify a complex number and elementary properties of the complex number system.
10. Perform operations on complex numbers.
11. Solve quadratic equations by appropriate methods: factoring, applying the
square root property, completing the square, and using the quadratic formula.
12. Solve equations involving rational expressions by clearing of fractions.
13. Solve formulas for an indicated variable.
14. Explain the steps in the derivation of the quadratic formula and theorems of algebra.
15. Apply theorems, rules, and definitions of algebra in problem solving.
16. Model real-world applications mathematically.
17. Solve real-world applications.
18. Demonstrate efficiency in problem solving.
19. Use induction and deduction as methods of reasoning.
20. Solve radical equations and equations that are quadratic in form.
21. Solve linear inequalities; graph solutions and write solutions in interval notation.
22. Solve quadratic and rational inequalities; graph solutions and write solutions in interval notation.
23. Solve equations and inequalities involving absolute value; graph solutions and write solutions in interval notation.
24. Find the distance and midpoint between two points.
25. Test an equation algebraically for symmetry with respect to the x-axis, the y-axis, and the origin.
26. Graph linear equations.
27. Analyze the graph of a linear equation and determine the slope, the intercepts, and the equation of the graph.
28. Determine algebraically if two lines are parallel, perpendicular, or neither.
29. Derive the general and standard form of the equation of a line under given
conditions: given a point and the slope, given two points, given a point and the
equation of a line parallel or perpendicular to the line.
30. Graph circles given the equation of a circle in either standard or general form.
31. Derive the equation of a circle given the center and radius of the circle.
32. Determine if a given relation is a function.
33. Determine if a given equation is a function.
34. Determine the domain and range of a given function and apply the rule of
maximum domain if the domain is not specified.
35. Evaluate functions.
36. Perform operations on functions.
37. Form a composite function given two functions and find the domain of the composite function.
38. Identify even and odd functions given an equation of a function.
39. Determine if a given graph is the graph of a function.
40. Analyze the graph of a function and determine if the function is even or odd; the symmetry of the function; the intercepts; the domain and range; if the function is continuous; intervals where the function is increasing, decreasing, or constant;
and the value of the function at a given point.
41. Graph functions using transformations.
42. Graph quadratic functions.
43. Analyze quadratic functions and determine the maximum or minimum point, the intercepts, points on the graph using symmetry, the range, if the graph will be concave up or down, and the intervals where the function is increasing or decreasing.
44. Graph polynomial functions.
45. Find the domain and range of rational functions.
46. Find and graph the asymptotes of rational functions.
47. Graph rational functions.
48. Use synthetic division and the remainder theorem to evaluate a polynomial.
49. Find the zeros of polynomial functions of degree greater than two.
50. Solve systems of linear equations and related applications by the following
methods: substitution, elimination, and matrices (if time permits).
3. Catalog Description: Fundamental operations;
factoring and fractions, exponents, and
radicals; functions and graphs; equations and inequalities; systems of equations.
Prerequisite: MAT 099 or high school algebra. (3 cr.)
4. Descriptive Overview of Course
1. Outline of Course Content:
1. Basic Algebraic Operations
(1) Review of the real number system
(3) Factoring polynomials
(4) Rational expressions
(5) Integral exponents
(6) Rational exponents
(7) Radical expressions
2. Equations and Inequalities
(1) Linear equations
(2) Applications of linear equations
(3) Linear inequalities
(4) Equations and inequalities involving absolute value
(5) Complex numbers
(6) Quadratic equations
(7) Radical equations and equations that are quadratic in form
(8) Polynomial and rational inequalities
3. Graphing and Functions
(1) Rectangular coordinate system
(a) Distance formula
(b) Midpoint formula
(2) Graphs of equations
(a) Introduction to graphing
(3) Equations and graphs of lines
(4) Parallel and perpendicular lines
(a) Determining where a function is increasing, decreasing, or constant
(b) Determining even and odd functions
(c) Graphs of functions
(7) Graphing using transformations
(8) Graphing quadratic functions
(9) Operations on functions
(10) Composite functions
4. Polynomial and Rational Functions
(1) Polynomial functions
(2) Rational functions
(3) Synthetic division
(4) The real zeros of a polynomial function
(5) Isolating real zeros
(6) Rational Root Theorem
(7) Approximating real zeros
(8) Fundamental Theorem of Algebra
5. Systems of Linear Equations
(1) Solving systems of linear equations by substitution and
(2) Solving systems of linear equations using matrices
2. Teaching Methodology: This course will be taught
using the lecture/discussion
format. Small group and individual work will be assigned at the discretion of
instructor. Use of appropriate technology for concept exploration will be used
at the discretion of the instructor.
3. Text and Other Support Materials
1. Barnett, R. & Ziegler, M. (1993). College Algebra, 5th ed. New York: McGraw Hill.
2. Scientific calculator
4. Methods of Evaluation and Assessment: The final
grade will be determined as a
percentage from the following evaluation methods with varying weights at the
discretion of the instructor:
8. Perform the following in regard to Relations and Functions
a. Find the value of a function using functional notation
b. Determine whether a relation represents a function (from a set of ordered
pairs and from a graph)
c. Find the domain and range of a function
d. Find the sum, difference, product and quotient of two functions
e. Find the composite of a pair of functions (and its domain)
f. Find the Difference Quotient of a Polynomial, Rational, and Radical Function
9. Hand graph the following functions
a. Linear using slope-intercept form (find the slope of a line and write an equation of a line).
b. Quadratic (Find x and y-intercepts, find the axis of symmetry, and find the vertex)
d. Rational and determine asymptotes
f. Absolute Value
g. Restricted domain, Split domain or Piecewise
h. Greatest Integer
10. Using the graphing calculator
a. To graph a function and find/determine (x- and y-intercepts, zeros, intervals on which the function is increasing and decreasing, maximum and minimum and local minima and maxima), and obtain information from or about the graph of a function
b. Regression Program to find the Curve of Best Fit (linear, quadratic, power,
polynomial – cubic, exponential, logarithmic)
11. Solve systems of equations algebraically and graphically, by hand and with the graphing calculator
a. That are linear and nonlinear, two variables
b. *That are linear, three variables
II. Trigonometry Students will be able to:
1. Solve a right triangle using Right Triangle Trig and The Pythagorean Theorem
2. Convert between degrees, minutes, seconds and decimal form for angles
3. Convert from degrees to radians and from radians to degrees
4. Find the trigonometric functions of an angle of any size
5. Solve oblique triangles using the Law of Sines and Law of Cosines
6. Graph trigonometric functions
7. *Graph and solve vector problems
"I ordered the Algebra Buster late one night when my daughter was having problems in her honors algebra class. After we ordered your software
she was able to see step by step how to solve the problems. Algebra Buster definitely saved the day." | http://www.algebra-online.com/alternative-math/math-software/math-015-elementary-algebra.html | 13 |
130 | Inside Computer Logic
Original article by Ken Bigelow and some more additions
Just what goes on inside logic gates to actually perform logic functions? Here are the internal schematics of various gates, as implemented by several different logic families.
I won't cover the internal operation of individual semiconductor devices in these pages, except to state the basic behavior of a given device under specific conditions. More detailed coverage of semiconductor physics and internal behavior is a job for another set of pages.
There are several different families of logic gates. Each family has its capabilities and limitations, its advantages and disadvantages. The following list describes the main logic families and their characteristics. You can follow the links to see the circuit construction of gates of each family.
Relay logic (RL)
The schematic diagrams for relay logic circuits are often called line diagrams, because the inputs and outputs are essentially drawn in a series of lines. A relay logic circuit is an electrical network consisting of lines, or rungs, in which each line or rung must have continuity to enable the output device. A typical circuit consists of a number of rungs, with each rung controlling an output. This output is controlled by a combination of input or output conditions, such as input switches and control relays. The conditions that represent the inputs are connected in series, parallel, or series-parallel to obtain the logic required to drive the output. The relay logic circuit forms an electrical schematic diagram for the control of input and output devices. Relay logic diagrams represent the physical interconnection of devices. The basic format for relay logic diagrams is as follows:
1. The two vertical lines that connect all devices on the relay logic diagram are labeled L1 and L2. The space between L1 and L2 represents the voltage of the control circuit.
2. Output devices are always connected to L2. Any electrical overloads that are to be included must be shown between the output device and L2; otherwise, the output device must be the last component before L2.
3. Control devices are always shown between L1 and the output device. Control devices may be connected either in series or in parallel with each other.
4. Devices which perform a STOP function are usually connected in series, while devices that perform a START function are connected in parallel.
5. Electrical devices are shown in their normal conditions. An NC contact would be shown as normally closed, and an NO contact would appear as a normally open device. All contacts associated with a device will change state when the device is energized.
Figure below shows a typical relay logic diagram. In this circuit, a STOP/START station is used to control two pilot lights. When the START button is pressed, the control relay energizes and its associated contacts change state. The green pilot light is now ON and the red lamp is OFF. When the STOP button is pressed, the contacts return to their resting state, the red pilot light is ON, and the green switches OFF.
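For readers who find it easier to follow logic in code, here is a minimal Python sketch of that rung. It assumes (as is usual in START/STOP circuits, though not stated explicitly above) that a normally-open seal-in contact of the control relay is wired in parallel with the START button; the names are illustrative.

```python
def scan(start_pressed: bool, stop_pressed: bool, cr_energized: bool) -> bool:
    """One 'scan' of the rung: returns the new state of control relay CR."""
    stop_contact = not stop_pressed          # STOP is normally closed (NC)
    start_contact = start_pressed            # START is normally open (NO)
    seal_in = cr_energized                   # assumed NO contact of CR in parallel with START
    return stop_contact and (start_contact or seal_in)

def pilot_lights(cr_energized: bool):
    """CR contacts select which pilot light is on."""
    green = cr_energized                     # NO contact closes when CR is energized
    red = not cr_energized                   # NC contact opens when CR is energized
    return green, red

cr = False
cr = scan(start_pressed=True, stop_pressed=False, cr_energized=cr)   # press START
print(pilot_lights(cr))   # (True, False) -> green ON, red OFF
cr = scan(start_pressed=False, stop_pressed=False, cr_energized=cr)  # release START
print(pilot_lights(cr))   # still (True, False): the seal-in contact holds CR energized
cr = scan(start_pressed=False, stop_pressed=True, cr_energized=cr)   # press STOP
print(pilot_lights(cr))   # (False, True) -> red ON, green OFF
```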
VTL is a term that I have given to describe the logic circuits that use vacuum tubes. I do not know whether it was actually called this back then. There is an analytical explanation of gates created using vacuum tubes in this article and this one too (pdf).
Here are two examples of gates that can be constructed using vacuum tubes. As you would expect, the voltage levels are high and also much power is consumed to keep the tube filaments hot.
DVTL is a term that I have given to describe the logic circuits that use vacuum tube gates in combination with diode gates. I do not know whether it was actually called this back then.
By letting diodes perform the logical AND or OR function and then amplifying the result with a vacuum tube, we can reduce cost significantly. DVTL takes diode logic gates and adds a vacuum tube to the output, in order to provide logic inversion and to restore the signal to full logic levels.
Here is an example of a combination of a tube circuit and diode gates to form a flip-flop, not a true tube gate but just for illustration. Again refer to this article for a more analytical explanation.
Diode Logic (DL)
Diode logic gates are very simple and inexpensive, and can be used effectively in specific situations. However, they cannot be used extensively, as they tend to degrade digital signals rapidly. In addition, they cannot perform a NOT function, so their usefulness is quite limited.
Diode Logic makes use of the fact that the electronic device known as a diode will conduct an electrical current in one direction, but not in the other. In this manner, the diode acts as an electronic switch.
To the left you see a basic Diode Logic OR gate. We'll assume that a logic 1 is represented by +5 volts, and a logic 0 is represented by ground, or zero volts. In this figure, if both inputs are left unconnected or are both at logic 0, output Z will also be held at zero volts by the resistor, and will thus be a logic 0 as well. However, if either input is raised to +5 volts, its diode will become forward biased and will therefore conduct. This in turn will force the output up to logic 1. If both inputs are logic 1, the output will still be logic 1. Hence, this gate correctly performs a logical OR function.
To the right is the equivalent AND gate. We use the same logic levels, but the diodes are reversed and the resistor is set to pull the output voltage up to a logic 1 state. For this example, +V = +5 volts, although other voltages can just as easily be used. Now, if both inputs are unconnected or if they are both at logic 1, output Z will be at logic 1. If either input is grounded (logic 0), that diode will conduct and will pull the output down to logic 0 as well. Both inputs must be logic 1 in order for the output to be logic 1, so this circuit performs the logical AND function.
In both of these gates, we have made the assumption that the diodes do not introduce any errors or losses into the circuit. This is not really the case; a silicon diode will experience a forward voltage drop of about 0.65v to 0.7v while conducting. But we can get around this very nicely by specifying that any voltage above +3.5 volts shall be logic 1, and any voltage below +1.5 volts shall be logic 0. It is illegal in this system for an output voltage to be between +1.5 and +3.5 volts; this is the undefined voltage region.
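As a rough illustration of these two gates and the logic thresholds just defined, here is a Python sketch. It treats each gate in isolation and ignores the resistor-divider effects discussed next; the 0.65 V diode drop and the 3.5 V / 1.5 V thresholds come from the text, everything else is illustrative.

```python
V_SUPPLY = 5.0      # volts
V_DIODE  = 0.65     # forward drop of a silicon diode, from the text

def dl_or(*inputs):
    """Diode-logic OR: the highest input pulls the output up through its diode."""
    highest = max(inputs)
    return max(highest - V_DIODE, 0.0)   # the resistor holds the output at ground otherwise

def dl_and(*inputs):
    """Diode-logic AND: the lowest input pulls the output down through its diode."""
    lowest = min(inputs)
    if lowest + V_DIODE < V_SUPPLY:      # that diode conducts
        return lowest + V_DIODE
    return V_SUPPLY                      # all inputs high: the pull-up resistor wins

def logic_level(volts):
    if volts > 3.5: return 1
    if volts < 1.5: return 0
    return None                          # the "undefined" region described above

print(dl_or(0.0, 5.0), logic_level(dl_or(0.0, 5.0)))    # ~4.35 V -> logic 1
print(dl_and(0.0, 5.0), logic_level(dl_and(0.0, 5.0)))  # ~0.65 V -> logic 0
```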
Individual gates like the two above can be used to advantage in specific circumstances. However, when DL gates are cascaded, as shown to the left, some additional problems occur. Here, we have two AND gates, whose outputs are connected to the inputs of an OR gate. Very simple and apparently reasonable.
But wait a minute! If we pull the inputs down to logic 0, sure enough the output will be held at logic 0. However, if both inputs of either AND gate are at +5 volts, what will the output voltage be? That diode in the OR gate will immediately be forward biased, and current will flow through the AND gate resistor, through the diode, and through the OR gate resistor.
If we assume that all resistors are of equal value (typically, they are), they will act as a voltage divider and equally share the +5 volt supply voltage. The OR gate diode will insert its small loss into the system, and the output voltage will be about 2.1 to 2.2 volts. If both AND gates have logic 1 inputs, the output voltage can rise to about 2.8 to 2.9 volts. Clearly, this is in the "forbidden zone," which is not supposed to be permitted.
If we go one step further and connect the outputs of two or more of these structures to another AND gate, we will have lost all control over the output voltage; there will always be a reverse-biased diode somewhere blocking the input signals and preventing the circuit from operating correctly. This is why Diode Logic is used only for single gates, and only in specific circumstances.
GDTL is a term that I have given to describe the logic circuits that use cold cathode gas discharge tube gates. I do not know whether it was actually called this back then. More analytical information about logic circuits using gas discharge tubes can be found in this article (pdf).
Two examples of logic gates are shown below.
Resistor-Transistor Logic (RTL)
RTL gates are almost as simple as DL gates, and remain inexpensive. They also are handy because both normal and inverted signals are often available. However, they do draw a significant amount of current from the power supply for each gate. Another limitation is that RTL gates cannot switch at the high speeds used by today's computers, although they are still useful in slower applications.
Although they are not designed for linear operation, RTL integrated circuits are sometimes used as inexpensive small-signal amplifiers, or as interface devices between linear and digital circuits.
Consider the most basic transistor circuit, such as the one shown to the left. We will only be applying one of two voltages to the input I: 0 volts (logic 0) or +V volts (logic 1). The exact voltage used as +V depends on the circuit design parameters; in RTL integrated circuits, the usual voltage is +3.6v. We'll assume an ordinary NPN transistor here, with a reasonable dc current gain, an emitter-base forward voltage of 0.65 volt, and a collector-emitter saturation voltage no higher than 0.3 volt. In standard RTL ICs, the base resistor is 470 Ω and the collector resistor is 640 Ω.
When the input voltage is zero volts (actually, anything under 0.5 volt), there is no forward bias to the emitter-base junction, and the transistor does not conduct. Therefore no current flows through the collector resistor, and the output voltage is +V volts. Hence, a logic 0 input results in a logic 1 output.
When the input voltage is +V volts, the transistor's emitter-base junction will clearly be forward biased. For those who like the mathematics, we'll assume a similar output circuit connected to this input. Thus, we'll have a voltage of 3.6 - 0.65 = 2.95 volts applied across a series combination of a 640 Ω output resistor and a 470 Ω input resistor. This gives us a base current of:

2.95 V / (640 Ω + 470 Ω) ≈ 2.66 mA
RTL is a relatively old technology, and the transistors used in RTL ICs have a dc forward current gain of around 30. If we assume a current gain of 30, 2.66 mA base current will support a maximum of 79.8 mA collector current. However, if we drop all but 0.3 volts across the 640 Ω collector resistor, it will carry 3.3 V/640 Ω ≈ 5.1 mA. Therefore this transistor is indeed fully saturated; it is turned on as hard as it can be.
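The arithmetic above is easy to check with a few lines of Python; the component values and the current gain of 30 are the ones quoted in the text.

```python
# Reproducing the RTL inverter arithmetic above (plain Ohm's law, values from the text).
V_SUPPLY, V_BE, V_CESAT = 3.6, 0.65, 0.3      # volts
R_COLLECTOR, R_BASE = 640.0, 470.0            # ohms
HFE = 30                                      # dc current gain assumed in the text

i_base = (V_SUPPLY - V_BE) / (R_COLLECTOR + R_BASE)      # driven through the previous stage's collector resistor
i_collector_max = HFE * i_base                           # the most the transistor could conduct
i_collector_actual = (V_SUPPLY - V_CESAT) / R_COLLECTOR  # what the 640-ohm load actually allows

print(f"base current        = {i_base*1000:.2f} mA")           # ~2.66 mA
print(f"supported collector = {i_collector_max*1000:.1f} mA")  # ~79.7 mA, matching the ~79.8 mA above
print(f"actual collector    = {i_collector_actual*1000:.1f} mA")  # ~5.2 mA (the text rounds to 5.1) -> far below the maximum, so the transistor saturates
```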
With a logic 1 input, then, this circuit produces a logic 0 output. We have already seen that a logic 0 input will produce a logic 1 output. Hence, this is a basic inverter circuit.
As we can see from the above calculations, the amount of current provided to the base of the transistor is far more than is necessary to drive the transistor into saturation. Therefore, we have the possibility of using one output to drive multiple inputs of other gates, and of having gates with multiple input resistors. Such a circuit is shown to the right.
In this circuit, we have four input resistors. Raising any one input to +3.6 volts will be sufficient to turn the transistor on, and applying additional logic 1 (+3.6 volt) inputs will not really have any appreciable effect on the output voltage. Remember that the forward bias voltage on the transistor's base will not exceed 0.65 volt, so the current through a grounded input resistor will not exceed 0.65 V/470 Ω ≈ 1.383 mA. This does provide us with a practical limit on the number of allowable input resistors to a single transistor, but doesn't cause any serious problems within that limit.
The RTL gate shown above will work, but has a problem due to possible signal interactions through the multiple input resistors. A better way to implement the NOR function is shown to the left.
Here, each transistor has only one input resistor, so there is no interaction between inputs. The NOR function is performed at the common collector connection of all transistors, which share a single collector load resistor.
This is in fact the pattern for all standard RTL ICs. The very commonly-used µL914 is a dual two-input NOR gate, where each gate is a two-transistor version of the circuit to the left. It is rated to draw 12 ma of current from the 3.6V power supply when both outputs are at logic 0. This corresponds quite well with the calculations we have already made.
Standard fan-out for RTL gates is rated at 16. However, the fan-in for a standard RTL gate input is 3. Thus, a gate can produce 16 units of drive current from the output, but requires 3 units to drive an input. There are low-power versions of these gates that increase the values of the base and collector resistors to 1.5K and 3.6K, respectively. Such gates demand less current, and typically have a fan-in of 1 and a fan-out of 2 or 3. They also have reduced frequency response, so they cannot operate as rapidly as the standard gates. To get greater output drive capabilities, buffers are used. These are typically inverters which have been designed with a fan-out of 80. They also have a fan-in requirement of 6, since they use pairs of input transistors to get increased drive.
We can get a NAND function in either of two ways. We can simply invert the inputs to the NOR/OR gate, thus turning it into an AND/NAND gate, or we can use the circuit shown to the right.
In this circuit, each transistor has its own separate input resistor, so each is controlled by a different input signal. However, the only way the output can be pulled down to logic 0 is if both transistors are turned on by logic 1 inputs. If either input is a logic 0 that transistor cannot conduct, so there is no current through either one. The output is then a logic 1. This is the behavior of a NAND gate. Of course, an inverter can also be included to provide an AND output at the same time.
The problem with this NAND circuit stems from the fact that transistors are not ideal devices. Remember that 0.3 volt collector saturation voltage? Ideally it should be zero. Since it isn't, we need to look at what happens when we "stack" transistors this way. With two, the combined collector saturation voltage is 0.6 volt -- only slightly less than the 0.65 volt base voltage that will turn a transistor on.
If we stack three transistors for a 3-input NAND gate, the combined collector saturation voltage is 0.9 volt. This is too high; it will promote conduction in the next transistor no matter what. In addition, the load presented by the upper transistor to the gate that drives it will be different from the load presented by the lower transistor. This kind of unevenness can cause some odd problems to appear, especially as the frequency of operation increases. Because of these problems, this approach is not used in standard RTL ICs.
Diode-Transistor Logic (DTL)
As we said in the page on diode logic, the basic problem with DL gates is that they rapidly deteriorate the logical signal. However, they do work for one stage at a time, if the signal is re-amplified between gates. Diode-Transistor Logic (DTL) accomplishes that goal.
The gate to the right is a DL OR gate followed by an inverter. The OR function is still performed by the diodes. However, regardless of the number of logic 1 inputs, there is certain to be a high enough input voltage to drive the transistor into saturation. Only if all inputs are logic 0 will the transistor be held off. Thus, this circuit performs a NOR function.
The advantage of this circuit over its RTL equivalent is that the OR logic is performed by the diodes, not by resistors. Therefore there is no interaction between different inputs, and any number of diodes may be used. A disadvantage of this circuit is the input resistor to the transistor. Its presence tends to slow the circuit down, thus limiting the speed at which the transistor is able to switch states.
At first glance, the NAND version shown on the left should eliminate this problem. Any logic 0 input will immediately pull the transistor base down and turn the transistor off, right?
Well, not quite. Remember that 0.65 volt base input voltage for the transistor? Diodes exhibit a very similar forward voltage when they're conducting current. Therefore, even with all inputs at ground, the transistor's base will be at about 0.65 volt, and the transistor can conduct.
To solve this problem, we can add a diode in series with the transistor's base lead, as shown to the right. Now the forward voltage needed to turn the transistor on is 1.3 volts. For even more insurance, we could add a second series diode and require 1.95 volts to turn the transistor on. That way we can also be sure that temperature changes won't significantly affect the operation of the circuit.
Either way, this circuit will work as a NAND gate. In addition, as with the NOR gate, we can use as many input diodes as we may wish without raising the voltage threshold. Furthermore, with no series resistor in the input circuit, there is less of a slowdown effect, so the gate can switch states more rapidly and handle higher frequencies. The next obvious question is, can we rearrange things so the NOR gate can avoid that resistor, and therefore switch faster as well?
The answer is, Yes, there is. Consider the circuit shown to the left. Here we use separate transistors connected together. Each has a single input, and therefore functions as an inverter by itself. However, with the transistor collectors connected together, a logic 1 applied to either input will force the output to logic 0. This is the NOR function.
We can use multiple input diodes on either or both transistors, as with the DTL NAND gate. This would give us an AND-NOR function, and is useful in some circumstances. Such a construction is also known as an AOI (for AND-OR-INVERT) circuit.
Transistor-Transistor Logic (TTL)
As the state of the art improved, TTL integrated circuits were adapted slightly to handle a wider range of requirements, but their basic functions remained the same. These devices comprise the 7400 family of digital ICs.
With the rapid development of integrated circuits (ICs), new problems were encountered and new solutions were developed. One of the problems with DTL circuits was that it takes as much room on the IC chip to construct a diode as it does to construct a transistor. Since "real estate" is exceedingly important in ICs, it was desirable to find a way to avoid requiring large numbers of input diodes. But what could be used to replace many diodes?
Well, looking at the DTL NAND gate to the right, we might note that the opposed diodes look pretty much like the two junctions of a transistor. In fact, if we were to have an inverter, it would have a single input diode, and we just might be able to replace the two opposed diodes with an NPN transistor to do the same job.
In fact, this works quite nicely. The figure to the left shows the resulting inverter.
In addition, we can add multiple emitters to the input transistor without greatly increasing the amount of space needed on the chip. This allows us to construct a multiple-input gate in almost the same space as an inverter. The resulting savings in real estate translates to a significant savings in manufacturing costs, which in turn reduces the cost to the end user of the device.
One problem shared by all logic gates with a single output transistor and a pull-up collector resistor is switching speed. The transistor actively pulls the output down to logic 0, but the resistor is not active in pulling the output up to logic 1. Due to inevitable factors such as circuit capacitances and a characteristic of bipolar transistors called "charge storage," it will take a certain amount of time for the transistor to turn completely off and the output to rise to a logic 1 level. This limits the frequency at which the gate can operate.
The designers of commercial TTL IC gates reduced that problem by modifying the output circuit. The result was the "totem pole" output circuit used in most of the 7400/5400 series TTL ICs. The final circuit used in most standard commercial TTL ICs is shown to the right. The number of inputs may vary — a commercial IC package might have six inverters, four 2-input gates, three 3-input gates, or two 4-input gates. An 8-input gate in one package is also available. But in each case, the circuit structure remains the same.
Emitter-Coupled Logic (ECL)
Emitter-Coupled Logic is based on the use of a multi-input differential amplifier to amplify and combine the digital signals, and emitter followers to adjust the dc voltage levels. As a result, none of the transistors in the gate ever enter saturation, nor do they ever get turned completely off. The transistors remain entirely within their active operating regions at all times. As a result, the transistors do not have a charge storage time to contend with, and can change states much more rapidly. Thus, the main advantage of this type of logic gate is extremely high speed.
The schematic diagram shown here is taken from Motorola's 1000/10,000 series of MECL devices. This particular circuit is of one 4-input OR/NOR gate. Standard voltages for this circuit are -5.2 volts (VEE) and ground (VCC). Unused inputs are connected to VEE. The bias circuit at the right side, consisting of one transistor and its associated diodes and resistors, can handle any number of gates in a single IC package. Typical ICs include dual 4-input, triple 3-input, and quad 2-input gates. In each case, the gates themselves differ only in how many input transistors they have. A single bias circuit serves all gates.
In operation, a logical output changes state by only 0.85 volt, from a low of -1.60 volts to a high of -0.75 volt. The internal bias circuit supplies a fixed voltage of -1.175 volts to the bias transistor in the differential amplifier. If all inputs are at -1.6 volts (or tied to VEE), the input transistors will all be off, and only the internal differential transistor will conduct current. This reduces the base voltage of the OR output transistor, lowering its output voltage to -1.60 volts. At the same time, no input transistors are affecting the NOR output transistor's base, so its output rises to -0.75 volt. This is simply the emitter-base voltage, VBE, of the transistor itself. (All transistors are alike within the IC, and are designed to have a VBE of 0.75 volt.)
When any input rises to -0.75 volt, that transistor siphons emitter current away from the internal differential transistor, causing the outputs to switch states.
The voltage changes in this type of circuit are small, and are dictated by the VBE of the transistors involved when they are on. Of greater importance to the operation of the circuit is the amount of current flowing through various transistors, rather than the precise voltages involved. Accordingly, Emitter-Coupled Logic is also known as Current Mode Logic (CML). This is not the only technology to implement CML by any means, but it does fall into that general description. In any case, this leads us to a major drawback of this type of gate: it draws a great deal of current from the power supply, and hence tends to dissipate a significant amount of heat.
To minimize this problem, some devices such as frequency counters use an ECL decade counter at the input end of the circuitry, followed by TTL or high-speed CMOS counters at the later digit positions. This puts the fast, expensive IC where it is absolutely required, and allows us to use cheaper ICs in locations where the signal will never be at that high a frequency.
CMOS gates are, however, severely limited in their speed of operation. Nevertheless, they are highly useful and effective in a wide range of battery-powered applications.
CMOS logic is a newer technology, based on the use of complementary MOS transistors to perform logic functions with almost no current required. This makes these gates very useful in battery-powered applications. The fact that they will work with supply voltages as low as 3 volts and as high as 15 volts is also very helpful.
CMOS gates are all based on the fundamental inverter circuit shown to the left. Note that both transistors are enhancement-mode MOSFETs; one N-channel with its source grounded, and one P-channel with its source connected to +V. Their gates are connected together to form the input, and their drains are connected together to form the output.
The two MOSFETs are designed to have matching characteristics. Thus, they are complementary to each other. When off, their resistance is effectively infinite; when on, their channel resistance is about 200 Ω. Since the gate is essentially an open circuit it draws no current, and the output voltage will be equal to either ground or to the power supply voltage, depending on which transistor is conducting.
When input A is grounded (logic 0), the N-channel MOSFET is unbiased, and therefore has no channel enhanced within itself. It is an open circuit, and therefore leaves the output line disconnected from ground. At the same time, the P-channel MOSFET is forward biased, so it has a channel enhanced within itself. This channel has a resistance of about 200 Ω, connecting the output line to the +V supply. This pulls the output up to +V (logic 1).
When input A is at +V (logic 1), the P-channel MOSFET is off and the N-channel MOSFET is on, thus pulling the output down to ground (logic 0). Thus, this circuit correctly performs logic inversion, and at the same time provides active pull-up and pull-down, according to the output state.
This concept can be expanded into NOR and NAND structures by combining inverters in a partially series, partially parallel structure. The circuit to the right is a practical example of a CMOS 2-input NOR gate.
In this circuit, if both inputs are low, both P-channel MOSFETs will be turned on, thus providing a connection to +V. Both N-channel MOSFETs will be off, so there will be no ground connection. However, if either input goes high, that P-channel MOSFET will turn off and disconnect the output from +V, while that N-channel MOSFET will turn on, thus grounding the output.
The structure can be inverted, as shown to the left. Here we have a two-input NAND gate, where a logic 0 at either input will force the output to logic 1, but it takes both inputs at logic 1 to allow the output to go to logic 0.
This structure is less limited than the bipolar equivalent would be, but there are still some practical limits. One of these is the combined resistance of the MOSFETs in series. As a result, CMOS totem poles are not made more than four inputs high. Gates with more than four inputs are built as cascading structures rather than single structures. However, the logic is still valid.
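A switch-level sketch (in Python) of the NOR and NAND structures just described: each input turns on either its P-channel device (input 0) or its N-channel device (input 1), and the series/parallel arrangement of those devices decides whether the output is pulled to +V or to ground. This is only a behavioural model, not a transistor-level one.

```python
def cmos_nor(*inputs):
    pull_up   = all(i == 0 for i in inputs)   # series P-channel path to +V
    pull_down = any(i == 1 for i in inputs)   # parallel N-channel path to ground
    assert pull_up != pull_down               # complementary: never both paths, never neither
    return 1 if pull_up else 0

def cmos_nand(*inputs):
    pull_up   = any(i == 0 for i in inputs)   # parallel P-channel path to +V
    pull_down = all(i == 1 for i in inputs)   # series N-channel path to ground
    assert pull_up != pull_down
    return 1 if pull_up else 0

# Truth tables for the 2-input gates
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "NOR:", cmos_nor(a, b), "NAND:", cmos_nand(a, b))
```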
Even with this limit, the totem pole structure still causes some problems in certain applications. The pull-up and pull-down resistances at the output are never the same, and can change significantly as the inputs change state, even if the output does not change logic states. The result is uneven and unpredictable rise and fall times for the output signal. This problem was addressed, and was solved with the buffered, or B-series CMOS gates.
The technique here is to follow the actual NAND gate with a pair of inverters. Thus, the output will always be driven by a single transistor, either P-channel or N-channel. Since they are as closely matched as possible, the output resistance of the gate will always be the same, and signal behavior is therefore more predictable.
One of the main problems with CMOS gates is their speed. They cannot operate very quickly, because of their inherent input capacitance. B-series devices help to overcome these limitations to some extent, by providing uniform output current, and by switching output states more rapidly, even if the input signals are changing more slowly.
Note that we have not gone into all of the details of CMOS gate construction here. For example, to avoid damage caused by static electricity, different manufacturers developed a number of input protection circuits, to prevent input voltages from becoming too high. However, these protection circuits do not affect the logical behavior of the gates, so we will not go into the details here.
One type of gate, shown to the left, is unique to CMOS technology. This is the bilateral switch, or transmission gate. It makes full use of the fact that the individual FETs in a CMOS IC are constructed to be symmetrical. That is, the drain and source connections to any individual transistor can be interchanged without affecting the performance of either the transistor itself or the circuit as a whole.
When the N- and P-type FETs are connected as shown here and their gates are driven from complementary control signals, both transistors will be turned on or off together, rather than alternately. If they are both off, the signal path is essentially an open circuit — there is no connection between input and output. If they are both on, there is a very low-resistance connection between input and output, and a signal will be passed through.
What is truly interesting about this structure is that the signal being controlled in this manner does not have to be a digital signal. As long as the signal voltage does not exceed the power supply voltages, even an analog signal can be controlled by this type of gate.
Most logic families share a common characteristic: their inputs require a certain amount of current in order to operate correctly. CMOS gates work a bit differently, but still represent a capacitance that must be charged or discharged when the input changes state. The current required to drive any input must come from the output supplying the logic signal. Therefore, we need to know how much current an input requires, and how much current an output can reliably supply, in order to determine how many inputs may be connected to a single output.
However, making such calculations can be tedious, and can bog down logic circuit design. Therefore, we use a different technique. Rather than working constantly with actual currents, we determine the amount of current required to drive one standard input, and designate that as a standard load on any output. Now we can define the number of standard loads a given output can drive, and identify it that way. Unfortunately, some inputs for specialized circuits require more than the usual input current, and some gates, known as buffers, are deliberately designed to be able to drive more inputs than usual. For an easy way to define input current requirements and output drive capabilities, we define two new terms: fan-in, the number of standard loads that one input places on the output driving it, and fan-out, the number of standard loads that one output can reliably drive.
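A small sketch of the bookkeeping this buys us, using the RTL figures quoted earlier (fan-out of 16 and 80, fan-in of 3 and 6); integer division gives the number of inputs one output can safely drive.

```python
def max_gates_driven(fan_out: int, fan_in_per_input: int) -> int:
    """How many inputs of the given load one output can drive."""
    return fan_out // fan_in_per_input

print(max_gates_driven(16, 3))   # standard RTL gate driving standard inputs -> 5
print(max_gates_driven(80, 3))   # RTL buffer driving standard inputs -> 26
print(max_gates_driven(16, 6))   # standard gate driving buffer inputs -> 2
```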
Remember, fan-in and fan-out apply directly only within a given logic family. If for any reason you need to interface between two different logic families, be careful to note and meet the drive requirements and limitations of both families, within the interface circuitry. | http://www.neazoi.com/technology/logic.htm | 13 |
54 | The development of CD offered truly digital technology for the first time and set new standards in sound quality. Digital recording achieves high quality with low cost by using measured values instead of the analogue signal manipulation.
Digital technology describes the music signal in the form of series of numbers in binary notation, called bits. The bits are recorded on the disc in small grooves, called pits and lands. Pits and lands represent 0's and 1's, respectively. From the digital numbers, the CD computer reconstructs the original audio signal. The measuring of the audio signal bit by bit is called sampling.
Bits make up digital words (numbers in binary notation). Binary notation has the base 2, and uses only 0's and 1's. Computers use binary notation since they can read a circuit as "on" or "off", as a "0" or as a "1".
16-bit coding means that 16 digits are used in forming the digital words. The sampling frequency is how often this value is measured.
The Compact Disc player. The laser beam reads coded information from the compact disc. The reading is kept accurate by the servo processor. The data from the reading is sent to the decoder, where it is converted to regular digital information. A digital filter then removes noise. The DAC, the most important part of the CD player, converts the digital data to an analogue audio wave. After an analogue filter (not shown in the figure) removes noise, this wave is sent to the loudspeakers (L and R) for reproduction as sound. The microprocessor controls features such as volume, balance, tone, etc.
The Digital-Analogue Converter (DAC) takes the digital word as input. It uses a series of comparators to measure the value of the word. One comparator (or a combination of comparators) comes "on" at a given signal strength. The signal measurement must be very accurate, since any error in the Least Significant Bit (LSB) means a large deviation in the Most Significant Bit (MSB). Two DACs may be applied to eliminate small phase errors in the signals of different channels.
A compact disc is scanned by a laser beam. The laser's light is reflected from a land to a photo-electric cell. The cell emits (gives or sends out) current and a 1 is registered. When the beam shines on a pit, half of the light is reflected from the surface and half from the depth of the pit. The interference between the two reflected beams eliminates the original beam. The photo-electric cell emits no current, and a 0 is registered.
Error correction works through parity bits. Groups of bits are such that adding certain series of them has only one result, e.g., a 1. If a 1 drops out, the result of the addition is different. A parity bit is then added to correct the error.
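A minimal illustration of the parity idea in Python. This shows only the principle described above; the error correction actually used on CD is considerably more elaborate.

```python
def add_even_parity(bits):
    """Append a parity bit so the total number of 1's is even."""
    return bits + [sum(bits) % 2]

def parity_ok(bits_with_parity):
    """True if no single-bit error is detected."""
    return sum(bits_with_parity) % 2 == 0

word = add_even_parity([1, 0, 1, 1])      # -> [1, 0, 1, 1, 1]
print(parity_ok(word))                    # True: nothing has dropped out

corrupted = word.copy()
corrupted[1] ^= 1                         # a single bit "drops out" (flips)
print(parity_ok(corrupted))               # False: the addition no longer comes out even
```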
Oversampling is a simple, inexpensive device to filter out conversion side effects. Oversampling takes the sampling farther away from the audible range (from 44.1 to 176.4 kHz), so that ringing is eliminated. Oversampling is another technology from Philips which has helped to achieve audio excellence. CDs may be enhanced by features such as track, time and index indications.
The disc is read on the reflecting side, but the more vulnerable side is the label side. Care should always be taken when handling compact discs. Some CDs are recorded and/or processed with analogue methods. The codes DDD, ADD, and AAD are used to indicate (respectively) completely digitally made discs, discs with analogue recording and digital editing and dubbing, and discs with analogue recording and editing, and only digital dubbing. ADD and AAD discs are not necessarily of inferior quality compared to DDD discs.
VideoCD is an extension of the capabilities of CD, used for video.
For the HiFi dealer and the consumer, the compact disc player is the first piece of equipment to which the term "digital" may be applied correctly and fully.
Since the introduction of the CD, "digital" has become the vogue word. It is even used loosely of amplifiers and tuners. Digitalisation is coming, slowly but surely. What does the term "digital" actually mean? To find the answer, let's look at compact disc technology.
Digital recording of sound is intended to achieve superb sound reproduction with less cost. Superb reproduction has been possible using analogue methods, but only with a great deal of cost and expertise. The secret of the digital advance in quality is in the fact that digital recording does not manipulate the music signal itself, as is the case with analogue processing. Digital technology instead uses measured values to record sound.
Figure 1: Digital technology uses measured values to record sound.
An analogue signal takes the form of a wave. At different times therefore, its amplitude will vary. When an analogue signal is converted into a digital one, the signal is measured at regular intervals. These measurements (or measured values) are then converted into digital (binary) numbers (a series of '0's' and '1's').
Example of a fragment of CD music encoded
There are in fact a number of different kinds of digital-analogue conversion in various CD players: 16-, 18-, 20-, or just 1-bit conversion.
The sampling frequency is the number of times a signal is measured per second. The number of bits determines the accuracy with which the measured value is converted into binary notation.
The number of bits used determines the accuracy of measurement, since it determines how much information is in each sampling. The more bits, the greater the accuracy. With an 8-bit system, the waveform is rough. Therefore, music is commonly coded on CD in 16 bits.
That is why the CD player operates with the higher value of 16 bits, and why nearly all CD players are equipped with a 16-bit Digital Analogue Convertor (DAC). CD players with 16-bit DACs can detect 65,536 different voltage values (0 + 2^0 + 2^1 + 2^2 + ... + 2^15).
Figure 2: The CD player operates with the higher value of 16 bits.
Decimal notation uses the base of 10, and is therefore also called Base 10 notation. The base is raised to an exponent which increases with each place to the left. In decimal notation, each place has a value which corresponds to a power of the base 10. Numbers are formed by positioning powers of the base 10. Compare the decimal notation numbers 1, 10, 100 and 1,000. The figure "1" means "1 x 10^0" when placed on the far right (= 1). In the second place from the right it means "1 x 10^1" (= 10). In the third place from the right it means "1 x 10^2" (= 100), and in the fourth place from the right it means "1 x 10^3" (= 1,000). The power, represented by the exponent, increases by 1 with each place to the left (10^1, 10^2, 10^3, etc.). In decimal (base 10) notation, the power is a power of 10. The base is 10. So when the figure moves one place to the left and the power increases by 1, the figure is worth 10 times more.
The calculation of the number 231 in decimal notation would therefore be:

(2 x 10^2) + (3 x 10^1) + (1 x 10^0) = (2 x 100) + (3 x 10) + (1 x 1) = 200 + 30 + 1 = 231
For our discussion of binary notation, we will call the power which increases by position the weight of the figure. Consider the binary figure: 1011. Converting this number into our common decimal notation is helpful for understanding what it means. The weights of the figures now are not powers of 10, but powers of 2. So when the power increases by 1, the figure is worth 2 times more.
To convert the number 1011 from binary notation into decimal notation, use this formula for each figure: (the figure: "1" or "0") x (the base "2" to the power of the place).

1011 = (1 x 2^3) + (0 x 2^2) + (1 x 2^1) + (1 x 2^0) = (1 x 8) + (0 x 4) + (1 x 2) + (1 x 1) = 8 + 0 + 2 + 1 = 11.
In the binary number 1011, the right "1" has the lowest
weight and the left "1" has the highest weight. They are therefore called
the Least Significant Bit (LSB) (the bit carrying the least weight) and the
Most Significant Bit (MSB) (the bit carrying the most weight).
1011 (the leftmost bit is the MSB; the rightmost bit is the LSB)
Computers, including the compact disc player, read electrical pulses. They interpret an electrical pulse to mean one of the figures of binary notation, a "1". They understand a lack of an electrical pulse to mean a "0". The consequence of having to convert these measured values to "1's" and "0's" is that long series of figures are needed. However, modern computer technology has no difficulty with long series of figures. All these series of data are recorded on the CD in the form of microscopically small pits and lands. At playback the pits and lands are scanned by a laserbeam. Pits are interpreted as "0's" and lands as "1's".
Suppose that, during recording, a volt metre is used
for measuring the analogue signal. This volt metre only indicates whether
the voltage is "negative" or "positive". This is a 1-bit system, the simplest
system possible. This is a very coarse and slow system which is unable to
transmit any signal details. A wavelike signal, of any form whatsoever, becomes
a block wave with such a simple converter.
Figure 3: A 1-bit system transmits only block waves
Instead of a volt metre, digital comparators are used in the digital-analogue converter (DAC). Each comparator tells whether the voltage being measured is higher or lower than a set value. The comparators, each adjusted to a different level, are connected in series. To illustrate, imagine four comparators called A, B, C and D. A is adjusted to 1 V (=2^0). B is adjusted to 2 V (=2^1), C to 4 V (=2^2) and D to 8 V (=2^3). (Remember that we are working with base 2 notation.)
The value of the signal is 1 V at the moment the measuring sample is taken (the moment of the sample pulse). The output of the 1 V comparator (A) is high: it emits a signal, a "1" in binary notation. The output of the next few comparators (B, C and D), which are all adjusted to a somewhat higher level, is 0. A digital signal is formed--a digital word, 0001.
At a signal value of 2 V, the comparator B has a high output. The digital word 0010 is therefore formed. At a voltage of 5 V (4+1), a high output appears at comparator A (for the "1") and C (for the "4"). The other two remain 0. The digital word is 0101. Remember that, in binary notation, each place to the left is worth one power of 2 more than the value immediately to its right. (The base is 2.)
That is how the A-D converter works during recording. In this example, there are four comparators. This is a 4-bit system for measuring different voltage values. Only steps of 1 V can be measured. The converter cannot be adjusted for smaller differences. The 4-bit system is too inaccurate to produce enjoyable music, so more than four digital comparators are commonly used. The system then has more than four bits. The absolute minimum for reproducing music is eight bits. The 8-bit system can measure 256 different voltage values (0 + 2^0 + 2^1 + 2^2 + ... + 2^7). Quite a few more than the 4-bit system!
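The end result of the comparator scheme described above can be modelled with a few lines of Python. The sketch reproduces the text's examples (1 V → 0001, 2 V → 0010, 5 V → 0101); it models only the outcome, not the analogue comparator network itself.

```python
def adc_4bit(volts: float) -> str:
    """Return the 4-bit digital word for a voltage, in whole 1 V steps."""
    steps = int(volts)                 # only whole 1 V steps can be resolved
    steps = max(0, min(steps, 15))     # 4 bits: values 0 .. 15
    return format(steps, "04b")        # e.g. 5 -> '0101'

for v in (1.0, 2.0, 5.0, 5.7):
    print(v, "->", adc_4bit(v))
# 1.0 -> 0001, 2.0 -> 0010, 5.0 -> 0101, 5.7 -> 0101 (differences smaller than 1 V are lost)
```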
In digitising an audio signal of which the maximum voltage is 1 V, the 8-bit system can only measure differences larger than .003906 V (1/256). This is still coarse, although it is effective in practice. With the 8- bit system, the wave form is rough after digital-analogue conversion.
Figure 5: With the 8-bit system, the waveform is rough after digital-analogue conversion.
As we saw earlier, the LSB is the bit carrying the least weight and the MSB is the bit carrying the most weight. The MSB is the first (on the far left) and most important bit of a digital word. In a 16-bit binary number, the MSB is 2^15 (=32,768) times bigger than the LSB (remembering that each place to the left in a binary number is worth 1 power of 2 more than the place to its right). An error of only 0.01% will shatter the value of the 15th and 16th bits completely. That is why every measurement must be very accurate. (see figure 2 above).
Just as the analogue-digital converter determines the quality of the digital recording, the DAC determines the quality of playback. With the standard 16-bit converter, the DAC converts the 16 bits of coded information into a series of voltages through a 16-step resistor ladder network to which a reference voltage is connected. This series of voltages is added by an "operational amplifier", which generates a certain output voltage.
Because of the analogue character of the resistors in the DAC, the accuracy is limited. Such a DAC, when going from a "0" to a "1" and vice versa, generates zero-crossing (or cross-over) distortion (distortion caused by amplifying the positive and negative half cycles separately), non-linear distortion and harmonic distortion. As a result, the sound loses part of its clarity and, especially in the silent passages, irregularities such as noise are heard.
The number of bits determines what the highest attainable, or theoretical, S/N ratio is. With 16 bits, this ratio is 96 dB S/N, allowing the CD to have a larger dynamic range than any other medium.
The S/N ratio is not a constant; it varies with the signal level. At low sound levels, the S/N ratio is lower; there is more noise. This can be heard in CD players of inferior quality. They produce a soft background noise as soon as they amplify even very weak signals. This is quantisation noise. Quantisation is the conversion of digital information to an analogue signal. Quantisation noise is the result of the digital system working with voltage steps.
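The 96 dB figure follows from a simple rule of thumb: each bit doubles the number of voltage steps, which is worth roughly 6 dB. A quick Python check (this is the idealised best case, not a measured value):

```python
import math

def theoretical_snr_db(bits: int) -> float:
    """Ratio of full scale to one quantisation step, expressed in dB."""
    steps = 2 ** bits
    return 20 * math.log10(steps)

for bits in (8, 16):
    print(bits, "bits ->", round(theoretical_snr_db(bits)), "dB")
# 8 bits -> ~48 dB, 16 bits -> ~96 dB, matching the 96 dB quoted for CD
```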
With the analogue
tape or cassette recorder something similar happens. When the system is recording
a tone, small side bands may be generated to the left and to the right of
the actual frequency. These side bands cause modulation noise. With
the digital signal, the many more voltage sources cause more side bands to
be generated. Some side bands are generated at very low levels, yet nevertheless
they may become quite audible, if not as noise, then as a certain coarseness in the sound.
Music is recorded on compact disc with 16 bits, so it may seem impossible to play the music back more accurately than with a 16-bit CD player. But in practice it is not even possible to use this 16-bit information capacity to a full 100%.
This is the reason why some converters are designed to work with more than 16 bits. These converters represent many manufacturers' attempts to reach the theoretical 100% use of information capacity as closely as possible. Most manufacturers try to do this by developing 18-bit, 20-bit or even 22-bit converters, and by oversampling (a technique to filter out noise, to be discussed later in this chapter). These refinements in the form of 18 or 20 bits aim at enlarging the S/N ratio and at constructing even more accurately the waveform to be transmitted by the DAC.
In theory, converters with more capacity and/or oversampling allow some gains in quality. In practice, however, it appears that the performance of these converters is often below that of a good 16-bit converter, because of their complexity. Even though Philips CD has obtained very good results with a 16-bit converter, Philips has developed a new converter to take one step closer to perfection. This new technique is called 1-bit Bitstream conversion.
The 16-bit DAC requires 16 transistors (current dividers), one for each bit. The currents are switched "on" or "off" according to the digital values (1 or 0) of the bits. With 16 bits, there are 65,536 possible current values. The current values range from half the total current for the MSB to 1/32,768th of the total for the LSB.
Any variation of more than a fraction of the LSB in any of these current values causes distortion and non-linearities. Noise is generated when the switches fail to operate in perfect synchronisation. Philips TDA 1541 series of multi-bit converters use the patented Dynamic Element matching principle to adjust the bit current values constantly and automatically, in order to maintain maximum performance.
But the Philips Pulse Density Modulation (PDM) Single-Bit D-A Conversion is an even better solution. In Philips PDM conversion, the 16-bit digital samples read from the CD are transformed into a high-speed, one-bit data stream (with 256 times oversampling). This stream is then converted into an analogue signal by a digital single-bit converter. This technique eliminates the non-linearities and distortion for which there is no ultimate solution in multi-bit converters.
The Philips single-bit converter generates positive and negative current values more than 11 million times per second, in accordance with the 1's and 0's of the high-speed bitstream. The ratio of 1's to 0's determines the actual level of the current. A data stream of all 1's produces the maximum positive current, and a data stream of all 0's results in the maximum negative current. Alternating 1's and 0's results in no current. The high speed of this single-bit conversion is what makes bitstream much better than the coarse, slow 1-bit system we mentioned earlier.
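The idea that the density of 1's carries the signal level can be illustrated with a first-order modulator in Python. This is only a sketch of the principle, not the actual Philips Bitstream circuit.

```python
def pdm_encode(samples):
    """samples in -1.0 .. +1.0; returns a stream of 0/1 bits whose density of 1's tracks the level."""
    integrator, feedback, bits = 0.0, 0.0, []
    for x in samples:
        integrator += x - feedback           # accumulate the error between input and output
        bit = 1 if integrator >= 0 else 0
        feedback = 1.0 if bit else -1.0      # the 1-bit DAC inside the loop
        bits.append(bit)
    return bits

for level in (1.0, 0.5, 0.0, -1.0):
    bits = pdm_encode([level] * 1000)
    density = sum(bits) / len(bits)
    print(level, "->", round(density, 2))
# 1.0 -> all 1's, 0.5 -> 0.75, 0.0 -> alternating 1's and 0's (density 0.5), -1.0 -> all 0's
```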
The single-bit Bitstream converter eliminates zero-crossing distortion, quantisation noise and other forms of distortion, producing better-quality sound. What these converters produce is a wide stereo image and almost tangible depth. Philips Bitstream technology is a new level of excellence in D-A conversion.
This 1-bit conversion technology is used to prevent problems like component non-linearities and cross-over distortion (zero crossing). The 1-bit DAC uses a high oversampling rate of 256 times. In the process the normal 16-bit input signal with a sampling frequency of 44.1 kHz is converted into a 1-bit data stream with a frequency of 11.2896 MHz. Since this 1-bit DAC must choose between only two possible values, instead of the 65,536 which are possible in a 16-bit converter, there can be no errors of this kind.
In order to avoid these kinds of side effects, oversampling is now used. Sometimes two, three, or even four oversampling devices are applied in one machine. The great value of oversampling lies in the possibility of applying a simple filter that leaves the high quality of the signal intact. Due to oversampling, a low-priced digital filter can be applied.
As a CD player reads information from the disc, the information is converted from its digital form ("1's" and "0's") to its analogue form (a series of electrical current values). Because the analogue signal is rebuilt from discrete quantisation steps, this conversion leaves quantisation noise in the output. So before sending the signal to the loudspeakers, it is necessary to remove this quantisation noise with a filter. However, some types of filters affect the quality of the music reproduction.
The brick-wall filters employed in many first-generation CD players cut off any information above a certain point, 24 kHz. Unfortunately, these filters were themselves found to cause ringing, a type of distortion. Ringing is actually a remainder of the sampling frequency in the output signal. Ringing causes the stereo image to become unclear.
The technology of oversampling, developed by Philips, virtually eliminates this problem through ultrasonic noise reduction. If the problem of ringing with brick-wall filters lies in their proximity to the audible range, why not move the filter further away from the audible range? By oversampling the digital data four times (at 176.4 kHz instead of 44.1 kHz), the quantisation noise is moved further away from the audible range. The noise is then removed by an analogue filter that also eliminates ringing. The rough analogue wave form of simple sampling is refined by oversampling.
Figure 11: By oversampling the digital data four times (at 176 kHz instead of 44.1 kHz), the quantisation noise is moved further away from the audible range. The noise is then removed by an analogue filter.
Figure 12: The rough analogue waveform of simple sampling is refined by oversampling.
In practice, 2-times oversampling works very well, as many CD players prove. Even better is 4-times oversampling, which most higher-quality CD players have. The more expensive models aim at 8-times or even 16-times oversampling, to attain even further refinement. Some CD players even have a 256-times oversampling system!
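A minimal sketch (added for illustration) of what 4-times oversampling does to the data rate: zeros are inserted between the original samples, raising the rate from 44.1 kHz to 176.4 kHz, and a digital filter then smooths the result. The moving-average filter used here is deliberately crude; real players use far more sophisticated digital filters.

```python
fs = 44100
factor = 4
print("oversampled rate:", fs * factor, "Hz")      # 176400

def oversample(samples, factor=4):
    # Zero stuffing: one real sample followed by (factor - 1) zeros.
    upsampled = []
    for s in samples:
        upsampled.append(s)
        upsampled.extend([0.0] * (factor - 1))
    # Crude interpolation filter: sum over a window of `factor` points,
    # which always contains exactly one real sample.
    out = []
    for i in range(len(upsampled)):
        window = upsampled[max(0, i - factor + 1):i + 1]
        out.append(sum(window))
    return out

print(oversample([1.0, 1.0, 1.0, 1.0])[:8])        # a constant stays a constant
```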
Higher oversampling levels are not, however, always better. Sampling systems which make use of higher rates can actually degrade the quality of music produced, since many DACs cannot "settle" from one input value to the next. Converters can only process a certain amount of information in a given period of time. Beyond a certain point, the faster the process is attempted, the more errors are introduced. Imagine that a person is told to compute the equation 2 + 2 in ten seconds. This is a simple matter. But if the same person must calculate a series of 20 equations in the same ten seconds, the chance of error is much greater. So it is with the hundreds of thousands of calculations which the DAC must perform. Beyond a certain point, sound quality is diminished rather than increased.
The digital signal transmitted by the CD contains the complete stereo information, both the left (L) and right (R) channels. This information is passed on alternately. After oversampling, the digital signal reaches the DAC, which converts it into an analogue signal. If only one DAC is applied, as is the case with many simple CD players, the signal is very quickly switched from L to R behind the DAC. This system appears to work well at first sight, but it results in a small time difference between L and R. This small time delay leads to phase shifts. Human ears are very sensitive to these differences, however small they may be. Stereo information is located precisely in these phase relations of the high tones. Through faults in the conversion procedure, a considerable part of the real depth of the stereo image is lost.
If, on the other hand, two DACs are applied, things will be quite different! At playback, the same phase relation as was recorded by the microphones will be preserved exactly. Then the playback will have real spaciousness, depth and complete naturalness. A double DAC is an essential feature of a good CD player.
Left and Right signals do not reach the DACs at the same time. By inserting a delay circuit in one of the two channels, the delay of the second signal is perfectly counterbalanced. Because this is done in the digital section, the approach is the same for all frequencies, so that higher frequencies are never delayed more than the lower frequencies.
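To get a feel for why the offset matters, the short calculation below converts a fixed interchannel delay into a phase error at several frequencies. The half-sample delay of roughly 11.3 µs is only an assumed example of the offset in a time-multiplexed single-DAC player; the formula itself is just phase = 360° · f · Δt.

```python
# Interchannel phase error caused by a fixed time offset between L and R.
delay = 0.5 / 44100                      # assumed example: half a sample period (~11.3 us)
for freq in (100, 1000, 10000, 20000):
    phase_deg = 360.0 * freq * delay
    print(f"{freq:>5} Hz: {phase_deg:6.2f} degrees between channels")
```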
Behind the single or double DAC there is a single or double analogue filter, as we saw above. We have the intact sound signal at our disposal, after it passes through this filter. We only have to connect the CD output to the AUX or CD input of the stereo amplifier.
This article describes some of the common coordinate systems that appear in elementary mathematics. For advanced topics, please refer to coordinate system. For more background, see Cartesian coordinate system.
The coordinates of a point are the components of a tuple of numbers used to represent the location of the point in the plane or space. A coordinate system is a plane or space where the origin and axes are defined so that coordinates can be measured.
In the two-dimensional Cartesian coordinate system, a point P in the xy-plane is represented by a tuple of two components (x,y).
- x is the signed distance from the y-axis to the point P, and
- y is the signed distance from the x-axis to the point P.
In the three-dimensional Cartesian coordinate system, a point P in the xyz-space is represented by a tuple of three components (x,y,z).
- x is the signed distance from the yz-plane to the point P,
- y is the signed distance from the xz-plane to the point P, and
- z is the signed distance from the xy-plane to the point P.
For advanced topics, please refer to Cartesian coordinate system.
The polar coordinate systems are coordinate systems in which a point is identified by a distance from some fixed feature in space and one or more subtended angles. They are the most common systems of curvilinear coordinates.
The term polar coordinates often refers to circular coordinates (two-dimensional). Other commonly used polar coordinates are cylindrical coordinates and spherical coordinates (both three-dimensional).
The circular coordinate system, commonly referred to as the polar coordinate system, is a two-dimensional polar coordinate system, defined by an origin, O, and a semi-infinite line L leading from this point. L is also called the polar axis. In terms of the Cartesian coordinate system, one usually picks O to be the origin (0,0) and L to be the positive x-axis (the right half of the x-axis).
- r (radius) is the distance from the origin to the point P, and
- θ (azimuth) is the angle between the positive x-axis and the line from the origin to the point P.
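The definitions above translate directly into the usual conversion formulas x = r·cosθ, y = r·sinθ and r = √(x² + y²), θ = atan2(y, x); a small sketch, added here for illustration:

```python
import math

def circular_to_cartesian(r, theta):
    return (r * math.cos(theta), r * math.sin(theta))

def cartesian_to_circular(x, y):
    return (math.hypot(x, y), math.atan2(y, x))

print(circular_to_cartesian(2.0, math.pi / 6))   # (1.732..., 1.0)
print(cartesian_to_circular(0.0, 3.0))           # (3.0, 1.5707...): r = 3, theta = pi/2
```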
Possible coordinate transformations from one circular coordinate system to another include:
- change of zero direction
- changing from the angle increasing anticlockwise to increasing clockwise or conversely
- change of scale
and combinations. More generally, transformations of the corresponding Cartesian coordinates can be translated into transformations from one circular coordinate system to another by transforming to Cartesian coordinates, transforming those, and transforming back to circular coordinates. This is needed, e.g., for:
- change of origin
- change of scale in one direction
A minor change is changing the range of the angular coordinate.
Circular coordinates can be convenient in situations where only the distance, or only the direction to a fixed point matters, rotations about a point, etc. (by taking the special point as the origin).
A complex number can be viewed as a point or a position vector on a plane, the so-called complex plane or Argand diagram. Here the circular coordinates are r = |z|, called the absolute value or modulus of z, and φ = arg(z), called the complex argument of z. These coordinates (mod-arg form) are especially convenient for complex multiplication and powers.
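A quick illustration (added here) of why the mod-arg form is convenient: when two complex numbers are multiplied, their moduli multiply and their arguments add.

```python
import cmath

z1 = cmath.rect(2.0, cmath.pi / 6)   # modulus 2, argument 30 degrees
z2 = cmath.rect(3.0, cmath.pi / 3)   # modulus 3, argument 60 degrees

r, phi = cmath.polar(z1 * z2)
print(r, phi)                        # about 6.0 and pi/2: moduli multiplied, arguments added
```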
The cylindrical coordinate system is a three-dimensional polar coordinate system.
In the cylindrical coordinate system, a point P is represented by a tuple of three components (r,θ,h). Using terms of the Cartesian coordinate system,
- r (radius) is the distance between the z-axis and the point P,
- θ (azimuth or longitude) is the angle between the positive x-axis and the line from the origin to the point P projected onto the xy-plane, and
- h (height) is the signed distance from the xy-plane to the point P.
- Note: some sources use z for h; there is no "right" or "wrong" convention, but it is necessary to be aware of the convention being used.
Cylindrical coordinates involve some redundancy; θ loses its significance if r = 0.
Cylindrical coordinates are useful in analyzing systems that are symmetrical about an axis. For example, the infinitely long cylinder that has the Cartesian equation x² + y² = c² has the very simple equation r = c in cylindrical coordinates.
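The same pattern extends to cylindrical coordinates; the sketch below (illustrative only) converts both ways and checks the cylinder example just given.

```python
import math

def cylindrical_to_cartesian(r, theta, h):
    return (r * math.cos(theta), r * math.sin(theta), h)

def cartesian_to_cylindrical(x, y, z):
    return (math.hypot(x, y), math.atan2(y, x), z)

# Every point of the cylinder x² + y² = c² has r = c, whatever theta and h are:
c = 5.0
x, y, z = cylindrical_to_cartesian(c, 1.234, -7.0)
print(x * x + y * y, "should equal", c * c)
```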
The spherical coordinate system is a three-dimensional polar coordinate system.
In the spherical coordinate system, a point P is represented by a tuple of three components (ρ,φ,θ). Using terms of the Cartesian coordinate system,
- ρ (radius) is the distance between the point P and the origin,
- φ (zenith, colatitude or polar angle) is the angle between the z-axis and the line from the origin to the point P, and
- θ (azimuth or longitude) is the angle between the positive x-axis and the line from the origin to the point P projected onto the xy-plane.
NB: The above convention is the standard used by American mathematicians and American calculus textbooks. However, most physicists, engineers, and non-American mathematicians interchange the symbols φ and θ above, using φ to denote the azimuth and θ the colatitude. One should be very careful to note which convention is being used by a particular author. One argument against the conventional American mathematical definition, regardless of how one labels the coordinates, is that it produces a left-handed coordinate system, rather than the usual convention of a right-handed coordinate system. One argument for it, on the other hand, is that it more closely resembles two-dimensional polar notation where θ ranges from 0 to 2π.
The latitude δ is the complement of the colatitude φ: δ = 90° - φ. The latitude is the angle between the xy-plane (the equator) and the line from the origin to the point P. Although here indicated with a δ, the latitude is usually also indicated with the symbol φ.
The spherical coordinate system also involves some redundancy; φ loses its significance if ρ = 0, and θ loses its significance if ρ = 0 or φ = 0 or φ = 180°.
To construct a point from its spherical coordinates: from the origin, go ρ along the positive z-axis, rotate φ about y-axis toward the direction of the positive x-axis, and rotate θ about the z-axis toward the direction of the positive y-axis.
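Written as a conversion (a sketch added here, using the American-mathematics convention above, with φ measured from the +z axis and θ the azimuth), the construction amounts to x = ρ·sinφ·cosθ, y = ρ·sinφ·sinθ, z = ρ·cosφ:

```python
import math

def spherical_to_cartesian(rho, phi, theta):
    # phi: angle from the +z axis (colatitude), theta: azimuth from the +x axis
    return (rho * math.sin(phi) * math.cos(theta),
            rho * math.sin(phi) * math.sin(theta),
            rho * math.cos(phi))

print(spherical_to_cartesian(1.0, 0.0, 0.0))            # (0.0, 0.0, 1.0): on the +z axis
print(spherical_to_cartesian(1.0, math.pi / 2, 0.0))    # approximately (1, 0, 0)
```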
Spherical coordinates are useful in analyzing systems that are symmetrical about a point; a sphere that has the Cartesian equation x² + y² + z² = c² has the very simple equation ρ = c in spherical coordinates.
Spherical coordinates are the natural coordinates for physical situations where there is spherical symmetry. In such a situation, one can describe waves using spherical harmonics. Another application is ergonomic design, where ρ is the arm length of a stationary person and the angles describe the direction of the arm as it reaches out.
The concept of spherical coordinates can be extended to higher-dimensional spaces, where it is referred to as hyperspherical coordinates.
- Frank Wattenberg has made some attractive animations illustrating spherical and cylindrical coordinate systems.
- http://www.physics.oregonstate.edu/bridge/papers/spherical.pdf is a description of the different conventions in use for naming components of spherical coordinates, along with a proposal for standardizing this.
Centripetal force (from Latin centrum "center" and petere "to seek") is a force that makes a body follow a curved path: its direction is always orthogonal to the velocity of the body, toward the fixed point of the instantaneous center of curvature of the path. Centripetal force is generally the cause of circular motion.
In simple terms, centripetal force is defined as a force which keeps a body moving with a uniform speed along a circular path and is directed along the radius towards the centre. The mathematical description was derived in 1659 by Dutch physicist Christiaan Huygens. Isaac Newton's description was: "A centripetal force is that by which bodies are drawn or impelled, or in any way tend, towards a point as to a centre."
The magnitude of the centripetal force on an object of mass m moving at tangential speed v along a path with radius of curvature r is:
- F = ma = mv²/r
where a = v²/r is the centripetal acceleration. The direction of the force is toward the center of the circle in which the object is moving, or the osculating circle, the circle that best fits the local path of the object, if the path is not circular. The speed in the formula is squared, so twice the speed needs four times the force. The inverse relationship with the radius of curvature shows that half the radial distance requires twice the force. This force is also sometimes written in terms of the angular velocity ω of the object about the center of the circle:
- F = mrω²
Expressed using the period for one revolution of the circle, T, the equation becomes:
- F = mr(2π/T)² = 4π²mr/T²
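The three forms above are algebraically equivalent; a small numerical check with made-up values (added here for illustration):

```python
import math

m, r, v = 2.0, 5.0, 10.0              # example values: kg, m, m/s
omega = v / r                         # angular velocity in rad/s
T = 2 * math.pi / omega               # period of one revolution in s

print(m * v**2 / r)                   # 40.0 N
print(m * r * omega**2)               # 40.0 N
print(4 * math.pi**2 * m * r / T**2)  # 40.0 N: all three forms agree
```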
Sources of centripetal force
For a satellite in orbit around a planet, the centripetal force is supplied by gravity. Some sources, including Newton, refer to the entire force as a centripetal force, even for eccentric orbits, for which gravity is not aligned with the direction to the center of curvature.
The gravitational force acts on each object toward the other, which is toward the center of mass of the two objects; for circular orbits, this center of gravity is the center of the circular orbits. For non-circular orbits or trajectories, only the component of gravitational force directed orthogonal to the path (toward the center of the osculating circle) is termed centripetal; the remaining component acts to speed up or slow down the satellite in its orbit. For an object swinging around on the end of a rope in a horizontal plane, the centripetal force on the object is supplied by the tension of the rope. For a spinning object, internal tensile stress provides the centripetal forces that make the parts of the object trace out circular motions.
The rope example is an example involving a 'pull' force. The centripetal force can also be supplied as a 'push' force such as in the case where the normal reaction of a wall supplies the centripetal force for a wall of death rider.
Another example of centripetal force arises in the helix which is traced out when a charged particle moves in a uniform magnetic field in the absence of other external forces. In this case, the magnetic force is the centripetal force which acts towards the helix axis.
Analysis of several cases
Below are three examples of increasing complexity, with derivations of the formulas governing velocity and acceleration.
Uniform circular motion
Uniform circular motion refers to the case of constant rate of rotation. Here are two approaches to describing this case.
Assume uniform circular motion, which requires three things.
- The object moves only on a circle.
- The radius of the circle does not change in time.
- The object moves with constant angular velocity ω around the circle. Therefore θ = ωt, where t is time.
In Cartesian coordinates the position is r(t) = R(cos ωt, sin ωt), and differentiating twice with respect to time gives the acceleration a(t) = -ω²R(cos ωt, sin ωt). Notice that the term in parentheses is the original expression of r(t) in Cartesian coordinates. Consequently,
- a = -ω²r
The negative shows that the acceleration is pointed towards the center of the circle (opposite the radius), hence it is called "centripetal" (i.e. "center-seeking"). While objects naturally follow a straight path (due to inertia), this centripetal acceleration describes the circular motion path caused by a centripetal force.
Derivation using vectors
The image at right shows the vector relationships for uniform circular motion. The rotation itself is represented by the angular velocity vector Ω, which is normal to the plane of the orbit (using the right-hand rule) and has magnitude given by:
- |Ω| = dθ/dt = ω
with θ the angular position at time t. In this subsection, dθ/dt is assumed constant, independent of time. The distance traveled dℓ of the particle in time dt along the circular path is
- dℓ = Ω × r(t) dt
which, by properties of the vector cross product, has magnitude rdθ and is in the direction tangent to the circular path.
In other words,
- v = dr/dt = dℓ/dt = Ω × r(t)
Differentiating with respect to time,
- a = dv/dt = Ω × dr/dt = Ω × [Ω × r(t)]
Lagrange's formula states:
- a × (b × c) = b(a · c) - c(a · b)
Applying Lagrange's formula with the observation that Ω • r(t) = 0 at all times,
- a = -|Ω|²r(t)
In words, the acceleration is pointing directly opposite to the radial displacement r at all times, and has a magnitude:
- |a| = |r(t)|(dθ/dt)² = Rω²
where vertical bars |...| denote the vector magnitude, which in the case of r(t) is simply the radius R of the path. This result agrees with the previous section, though the notation is slightly different.
When the rate of rotation is made constant in the analysis of nonuniform circular motion, that analysis agrees with this one.
A merit of the vector approach is that it is manifestly independent of any coordinate system.
Example: The banked turn
The upper panel in the image at right shows a ball in circular motion on a banked curve. The curve is banked at an angle θ from the horizontal, and the surface of the road is considered to be slippery. The object is to find what angle the bank must have so the ball does not slide off the road. Intuition tells us that on a flat curve with no banking at all, the ball will simply slide off the road; while with a very steep banking, the ball will slide to the center unless it travels the curve rapidly.
Apart from any acceleration that might occur in the direction of the path, the lower panel of the image above indicates the forces on the ball. There are two forces; one is the force of gravity vertically downward through the center of mass of the ball, mg, where m is the mass of the ball and g is the gravitational acceleration; the second is the upward normal force m|an| exerted by the road perpendicular to the road surface. The centripetal force demanded by the curved motion also is shown above. This centripetal force is not a third force applied to the ball, but rather must be provided by the net force on the ball resulting from vector addition of the normal force and the force of gravity. The resultant or net force on the ball found by vector addition of the normal force exerted by the road and vertical force due to gravity must equal the centripetal force dictated by the need to travel a circular path. The curved motion is maintained so long as this net force provides the centripetal force requisite to the motion.
The horizontal net force on the ball is the horizontal component of the force from the road, which has magnitude |Fh| = m|an|sinθ. The vertical component of the force from the road must counteract the gravitational force: |Fv| = m|an|cosθ = m|g|, which implies |an| = |g|/cosθ. Substituting into the above formula for |Fh| yields a horizontal force to be:
- |Fh| = m|g|(sinθ/cosθ) = m|g|tanθ
On the other hand, at velocity |v| on a circular path of radius R, kinematics says that the force needed to turn the ball continuously into the turn is the radially inward centripetal force Fc of magnitude:
- |Fc| = m|v|²/R
Consequently the ball is in a stable path when the angle of the road is set to satisfy the condition:
- tanθ = |v|²/(|g|R)
As the angle of bank θ approaches 90°, the tangent function approaches infinity, allowing larger values for |v|²/R. In words, this equation states that for faster speeds (bigger |v|) the road must be banked more steeply (a larger value for θ), and for sharper turns (smaller R) the road also must be banked more steeply, which accords with intuition. When the angle θ does not satisfy the above condition, the horizontal component of force exerted by the road does not provide the correct centripetal force, and an additional frictional force tangential to the road surface is called upon to provide the difference. If friction cannot do this (that is, the coefficient of friction is exceeded), the ball slides to a different radius where the balance can be realized.
These ideas apply to air flight as well. See the FAA pilot's manual.
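As a numerical illustration of the condition tanθ = |v|²/(|g|R) (the speed and radius below are made-up example values):

```python
import math

g = 9.81        # m/s^2
v = 25.0        # speed in m/s (90 km/h), example value
R = 150.0       # radius of the turn in m, example value

theta = math.atan(v**2 / (g * R))
print(round(math.degrees(theta), 1), "degrees of banking needed without relying on friction")
```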
Nonuniform circular motion
As a generalization of the uniform circular motion case, suppose the angular rate of rotation is not constant. The acceleration now has a tangential component, as shown the image at right. This case is used to demonstrate a derivation strategy based upon a polar coordinate system.
Let r(t) be a vector that describes the position of a point mass as a function of time. Since we are assuming circular motion, let r(t) = R·ur, where R is a constant (the radius of the circle) and ur is the unit vector pointing from the origin to the point mass. The direction of ur is described by θ, the angle between the x-axis and the unit vector, measured counterclockwise from the x-axis. The other unit vector for polar coordinates, uθ is perpendicular to ur and points in the direction of increasing θ. These polar unit vectors can be expressed in terms of Cartesian unit vectors in the x and y directions, denoted i and j respectively:
- ur = cosθ i + sinθ j
- uθ = -sinθ i + cosθ j.
We differentiate to find velocity:
- v = R dur/dt = R (dθ/dt) uθ = Rω uθ
where ω is the angular velocity dθ/dt.
This result for the velocity matches expectations that the velocity should be directed tangential to the circle, and that the magnitude of the velocity should be ωR. Differentiating again, and noting that
- duθ/dt = -(dθ/dt) ur = -ω ur ,
we find that the acceleration, a is:
- a = R (dω/dt) uθ - Rω² ur
Thus, the radial and tangential components of the acceleration are:
- ar = -ω²R ur   and   aθ = R (dω/dt) uθ = (d|v|/dt) uθ
where |v| = Rω is the magnitude of the velocity (the speed).
These equations express mathematically that, in the case of an object that moves along a circular path with a changing speed, the acceleration of the body may be decomposed into a perpendicular component that changes the direction of motion (the centripetal acceleration), and a parallel, or tangential component, that changes the speed.
General planar motion
The above results can be derived perhaps more simply in polar coordinates, and at the same time extended to general motion within a plane, as shown next. Polar coordinates in the plane employ a radial unit vector uρ and an angular unit vector uθ, as shown above. A particle at position r is described by:
- r = ρ uρ
where the notation ρ is used to describe the distance of the path from the origin instead of R to emphasize that this distance is not fixed, but varies with time. The unit vector uρ travels with the particle and always points in the same direction as r(t). Unit vector uθ also travels with the particle and stays orthogonal to uρ. Thus, uρ and uθ form a local Cartesian coordinate system attached to the particle, and tied to the path traveled by the particle. By moving the unit vectors so their tails coincide, as seen in the circle at the left of the image above, it is seen that uρ and uθ form a right-angled pair with tips on the unit circle that trace back and forth on the perimeter of this circle with the same angle θ(t) as r(t).
When the particle moves, its velocity is
- v = (dρ/dt) uρ + ρ duρ/dt
To evaluate the velocity, the derivative of the unit vector uρ is needed. Because uρ is a unit vector, its magnitude is fixed, and it can change only in direction, that is, its change duρ has a component only perpendicular to uρ. When the trajectory r(t) rotates an amount dθ, uρ, which points in the same direction as r(t), also rotates by dθ. See image above. Therefore the change in uρ is
- duρ = uθ dθ
In a similar fashion, the rate of change of uθ is found. As with uρ, uθ is a unit vector and can only rotate without changing size. To remain orthogonal to uρ while the trajectory r(t) rotates an amount dθ, uθ, which is orthogonal to r(t), also rotates by dθ. See image above. Therefore, the change duθ is orthogonal to uθ and proportional to dθ (see image above):
- duθ = -uρ dθ
The image above shows the sign to be negative: to maintain orthogonality, if duρ is positive with dθ, then duθ must decrease.
Substituting the derivative of uρ into the expression for velocity:
- v = (dρ/dt) uρ + ρ (dθ/dt) uθ = vρ uρ + vθ uθ
To obtain the acceleration, another time differentiation is done:
- a = (d²ρ/dt²) uρ + (dρ/dt) duρ/dt + (dρ/dt)(dθ/dt) uθ + ρ (d²θ/dt²) uθ + ρ (dθ/dt) duθ/dt
Substituting the derivatives of uρ and uθ, the acceleration of the particle is:
- a = [d²ρ/dt² - ρ(dθ/dt)²] uρ + [ρ d²θ/dt² + 2 (dρ/dt)(dθ/dt)] uθ
As a particular example, if the particle moves in a circle of constant radius R, then dρ/dt = 0, v = vθ, and:
- a = -ρ(dθ/dt)² uρ + ρ (d²θ/dt²) uθ = -(|v|²/R) uρ + (d|v|/dt) uθ
These results agree with those above for nonuniform circular motion. See also the article on non-uniform circular motion. If this acceleration is multiplied by the particle mass, the leading term is the centripetal force and the negative of the second term related to angular acceleration is sometimes called the Euler force.
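The decomposition above can be cross-checked numerically: the sketch below (an illustration with arbitrary smooth functions ρ(t) and θ(t)) compares the polar-coordinate formula with a direct finite-difference second derivative of the Cartesian coordinates.

```python
import math

def rho(t):   return 2.0 + 0.5 * math.sin(t)    # arbitrary smooth example
def theta(t): return 1.5 * t + 0.2 * t * t

def cartesian(t):
    return (rho(t) * math.cos(theta(t)), rho(t) * math.sin(theta(t)))

def d2(f, t, h=1e-4):                            # central second difference
    return (f(t + h) - 2 * f(t) + f(t - h)) / (h * h)

def d1(f, t, h=1e-4):                            # central first difference
    return (f(t + h) - f(t - h)) / (2 * h)

t = 0.7
ax = d2(lambda s: cartesian(s)[0], t)            # finite-difference acceleration
ay = d2(lambda s: cartesian(s)[1], t)

a_rad  = d2(rho, t) - rho(t) * d1(theta, t)**2                   # radial component
a_tran = rho(t) * d2(theta, t) + 2 * d1(rho, t) * d1(theta, t)   # transverse component

c, s = math.cos(theta(t)), math.sin(theta(t))
print(ax, a_rad * c - a_tran * s)                # the pairs agree to several decimals
print(ay, a_rad * s + a_tran * c)
```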
For trajectories other than circular motion, for example, the more general trajectory envisioned in the image above, the instantaneous center of rotation and radius of curvature of the trajectory are related only indirectly to the coordinate system defined by uρ and uθ and to the length |r(t)| = ρ. Consequently, in the general case, it is not straightforward to disentangle the centripetal and Euler terms from the above general acceleration equation. To deal directly with this issue, local coordinates are preferable, as discussed next.
By local coordinates is meant a set of coordinates that travel with the particle, and have orientation determined by the path of the particle. Unit vectors are formed as shown in the image at right, both tangential and normal to the path. This coordinate system sometimes is referred to as intrinsic or path coordinates or nt-coordinates, for normal-tangential, referring to these unit vectors. These coordinates are a very special example of a more general concept of local coordinates from the theory of differential forms.
Distance along the path of the particle is the arc length s, considered to be a known function of time.
A center of curvature is defined at each position s located a distance ρ (the radius of curvature) from the curve on a line along the normal un (s). The required distance ρ(s) at arc length s is defined in terms of the rate of rotation of the tangent to the curve, which in turn is determined by the path itself. If the orientation of the tangent relative to some starting position is θ(s), then ρ(s) is defined by the derivative dθ/ds:
- 1/ρ(s) = κ(s) = dθ/ds
The radius of curvature usually is taken as positive (that is, as an absolute value), while the curvature κ is a signed quantity.
Using these coordinates, the motion along the path is viewed as a succession of circular paths of ever-changing center, and at each position s constitutes non-uniform circular motion at that position with radius ρ. The local value of the angular rate of rotation then is given by:
- ω(s) = dθ/dt = (dθ/ds)(ds/dt) = v(s)/ρ(s)
with the local speed v given by:
- v(s) = ds/dt
As for the other examples above, because unit vectors cannot change magnitude, their rate of change is always perpendicular to their direction (see the left-hand insert in the image above):
- dun(s)/ds = ut(s) dθ/ds = ut(s)/ρ
- dut(s)/ds = -un(s) dθ/ds = -un(s)/ρ
Consequently, the velocity is v(t) = v(s) ut(s), and using the chain-rule of differentiation:
- a(t) = (dv/dt) ut(s) - (v(s)²/ρ(s)) un(s)
- with the tangential acceleration dv/dt = (dv/ds)(ds/dt) = (dv/ds) v(s)
In this local coordinate system the acceleration resembles the expression for nonuniform circular motion with the local radius ρ(s), and the centripetal acceleration is identified as the second term.
Looking at the image above, one might wonder whether adequate account has been taken of the difference in curvature between ρ(s) and ρ(s + ds) in computing the arc length as ds = ρ(s)dθ. Reassurance on this point can be found using a more formal approach outlined below. This approach also makes connection with the article on curvature.
To introduce the unit vectors of the local coordinate system, one approach is to begin in Cartesian coordinates and describe the local coordinates in terms of these Cartesian coordinates. In terms of arc length s let the path be described as:
- r(s) = [x(s), y(s)]
Then an incremental displacement along the path ds is described by:
- dr(s) = [dx(s), dy(s)] = [x′(s), y′(s)] ds
where primes are introduced to denote derivatives with respect to s. The magnitude of this displacement is ds, showing that:
- [x′(s)]² + [y′(s)]² = 1   (Eq. 1)
This displacement is necessarily tangent to the curve at s, showing that the unit vector tangent to the curve is:
- ut(s) = [x′(s), y′(s)]
while the outward unit vector normal to the curve is
- un(s) = [y′(s), -x′(s)]
Orthogonality can be verified by showing that the vector dot product is zero. The unit magnitude of these vectors is a consequence of Eq. 1. Using the tangent vector, the angle θ of the tangent to the curve is given by:
- sinθ = y′(s) and cosθ = x′(s)
The radius of curvature is introduced completely formally (without need for geometric interpretation) as:
- 1/ρ = dθ/ds
The derivative of θ can be found from that for sinθ:
- d(sinθ)/ds = cosθ dθ/ds = d/ds { y′(s) / ([x′(s)]² + [y′(s)]²)^(1/2) } = [y″(s)x′(s) - y′(s)x″(s)] x′(s) / ([x′(s)]² + [y′(s)]²)^(3/2)
in which the denominator is unity. With this formula for the derivative of the sine, the radius of curvature becomes:
- dθ/ds = y″(s)x′(s) - y′(s)x″(s) = y″(s)/x′(s) = -x″(s)/y′(s)
where the equivalence of the forms stems from differentiation of Eq. 1:
- x′(s)x″(s) + y′(s)y″(s) = 0
With these results, the acceleration can be found:
- a(s) = (d²s/dt²) ut(s) + (ds/dt)² [x″(s), y″(s)] = (d²s/dt²) ut(s) - (ds/dt)² (1/ρ) un(s)
as can be verified by taking the dot product with the unit vectors ut(s) and un(s). This result for acceleration is the same as that for circular motion based on the radius ρ. Using this coordinate system in the inertial frame, it is easy to identify the force normal to the trajectory as the centripetal force and that parallel to the trajectory as the tangential force. From a qualitative standpoint, the path can be approximated by an arc of a circle for a limited time, and for the limited time a particular radius of curvature applies, the centrifugal and Euler forces can be analyzed on the basis of circular motion with that radius.
This result for acceleration agrees with that found earlier. However, in this approach the question of the change in radius of curvature with s is handled completely formally, consistent with a geometric interpretation, but not relying upon it, thereby avoiding any questions the image above might suggest about neglecting the variation in ρ.
Example: circular motion
To illustrate the above formulas, let x, y be given as:
- x = α cos(s/α) ;  y = α sin(s/α)
which can be recognized as a circular path around the origin with radius α. The position s = 0 corresponds to [α, 0], or 3 o'clock. To use the above formalism the derivatives are needed:
- y′(s) = cos(s/α) ;  x′(s) = -sin(s/α)
- y″(s) = -sin(s/α)/α ;  x″(s) = -cos(s/α)/α
With these results one can verify that:
- [x′(s)]² + [y′(s)]² = 1 ;  1/ρ = y″(s)x′(s) - y′(s)x″(s) = 1/α
The unit vectors also can be found:
- ut(s) = [-sin(s/α), cos(s/α)] ;  un(s) = [cos(s/α), sin(s/α)]
which serve to show that s = 0 is located at position [ρ, 0] and s = ρπ/2 at [0, ρ], which agrees with the original expressions for x and y. In other words, s is measured counterclockwise around the circle from 3 o'clock. Also, the derivatives of these vectors can be found:
- dut(s)/ds = -(1/α) [cos(s/α), sin(s/α)] = -(1/α) un(s)
- dun(s)/ds = (1/α) [-sin(s/α), cos(s/α)] = (1/α) ut(s)
To obtain velocity and acceleration, a time-dependence for s is necessary. For counterclockwise motion at variable speed v(t):
- s(t) = ∫ v(t′) dt′ , integrated from 0 to t,
where v(t) is the speed and t is time, and s(t = 0) = 0. Then:
- v = v(t) ut(s)
- a = (dv/dt) ut(s) - (v²/α) un(s) = (dv/dt) ut(s) - (v²/ρ) un(s)
where it already is established that α = ρ. This acceleration is the standard result for non-uniform circular motion.
Notes and references
- Craig, John (1849). A new universal etymological, technological and pronouncing dictionary of the English language: embracing all terms used in art, science, and literature, Volume 1. Harvard University. p. 291., Extract of page 291
- Russell C. Hibbeler (2009). "Equations of Motion: Normal and tangential coordinates". Engineering Mechanics: Dynamics (12 ed.). Prentice Hall. p. 131. ISBN 0-13-607791-9.
- Paul Allen Tipler, Gene Mosca (2003). Physics for scientists and engineers (5th ed.). Macmillan. p. 129. ISBN 0-7167-8339-8.
- Newton, Isaac (2010). The principia : mathematical principles of natural philosophy. [S.l.]: Snowball Pub. p. 10. ISBN 978-1-60796-240-3.
- Chris Carter (2001). Facts and Practice for A-Level: Physics. S.l.: Oxford Univ Press. p. 30. ISBN 978-0-19-914768-7.
- Eugene Lommel and George William Myers (1900). Experimental physics. K. Paul, Trench, Trübner & Co. p. 63.
- Colwell, Catharine H. "A Derivation of the Formulas for Centripetal Acceleration". PhysicsLAB. Retrieved 31 July 2011.
- Theo Koupelis (2010). In Quest of the Universe (6th ed.). Jones & Bartlett Learning. p. 83. ISBN 978-0-7637-6858-4.
- Johnnie T. Dennis (2003). The Complete Idiot's Guide to Physics. Alpha Books. p. 91. ISBN 978-1-59257-081-2.
- A. V. Durrant (1996). Vectors in physics and engineering. CRC Press. p. 103. ISBN 978-0-412-62710-1.
- Lawrence S. Lerner (1997). Physics for Scientists and Engineers. Boston: Jones & Bartlett Publishers. p. 128. ISBN 0-86720-479-6.
- Arthur Beiser (2004). Schaum's Outline of Applied Physics. New York: McGraw-Hill Professional. p. 103. ISBN 0-07-142611-6.
- Alan Darbyshire (2003). Mechanical Engineering: BTEC National Option Units. Oxford: Newnes. p. 56. ISBN 0-7506-5761-8.
- Federal Aviation Administration (2007). Pilot's Encyclopedia of Aeronautical Knowledge. Oklahoma City OK: Skyhorse Publishing Inc. Figure 3–21. ISBN 1-60239-034-7.
- Note: unlike the Cartesian unit vectors i and j, which are constant, in polar coordinates the direction of the unit vectors ur and uθ depend on θ, and so in general have non-zero time derivatives.
- Although the polar coordinate system moves with the particle, the observer does not. The description of the particle motion remains a description from the stationary observer's point of view.
- Notice that this local coordinate system is not autonomous; for example, its rotation in time is dictated by the trajectory traced by the particle. Note also that the radial vector r(t) does not represent the radius of curvature of the path.
- John Robert Taylor (2005). Classical Mechanics. Sausalito CA: University Science Books. pp. 28–29. ISBN 1-891389-22-X.
- Cornelius Lanczos (1986). The Variational Principles of Mechanics. New York: Courier Dover Publications. p. 103. ISBN 0-486-65067-7.
- See, for example, Howard D. Curtis (2005). Orbital Mechanics for Engineering Students. Butterworth-Heinemann. p. 5. ISBN 0-7506-6169-0.
- S. Y. Lee (2004). Accelerator physics (2nd ed.). Hackensack NJ: World Scientific. p. 37. ISBN 981-256-182-X.
- The observer of the motion along the curve is using these local coordinates to describe the motion from the observer's frame of reference, that is, from a stationary point of view. In other words, although the local coordinate system moves with the particle, the observer does not. A change in coordinate system used by the observer is only a change in their description of observations, and does not mean that the observer has changed their state of motion, and vice versa.
- Zhilin Li & Kazufumi Ito (2006). The immersed interface method: numerical solutions of PDEs involving interfaces and irregular domains. Philadelphia: Society for Industrial and Applied Mathematics. p. 16. ISBN 0-89871-609-8.
- K L Kumar (2003). Engineering Mechanics. New Delhi: Tata McGraw-Hill. p. 339. ISBN 0-07-049473-8.
- Lakshmana C. Rao, J. Lakshminarasimhan, Raju Sethuraman & SM Sivakuma (2004). Engineering Dynamics: Statics and Dynamics. Prentice Hall of India. p. 133. ISBN 81-203-2189-8.
- Shigeyuki Morita (2001). Geometry of Differential Forms. American Mathematical Society. p. 1. ISBN 0-8218-1045-6.
- The osculating circle at a given point P on a curve is the limiting circle of a sequence of circles that pass through P and two other points on the curve, Q and R, on either side of P, as Q and R approach P. See the online text by Lamb: Horace Lamb (1897). An Elementary Course of Infinitesimal Calculus. University Press. p. 406. ISBN 1-108-00534-9.
- Guang Chen & Fook Fah Yap (2003). An Introduction to Planar Dynamics (3rd ed.). Central Learning Asia/Thomson Learning Asia. p. 34. ISBN 981-243-568-9.
- R. Douglas Gregory (2006). Classical Mechanics: An Undergraduate Text. Cambridge University Press. p. 20. ISBN 0-521-82678-0.
- Edmund Taylor Whittaker & William McCrea (1988). A Treatise on the Analytical Dynamics of Particles and Rigid Bodies: with an introduction to the problem of three bodies (4th ed.). Cambridge University Press. p. 20. ISBN 0-521-35883-3.
- Jerry H. Ginsberg (2007). Engineering Dynamics. Cambridge University Press. p. 33. ISBN 0-521-88303-2.
- Joseph F. Shelley (1990). 800 solved problems in vector mechanics for engineers: Dynamics. McGraw-Hill Professional. p. 47. ISBN 0-07-056687-9.
- Larry C. Andrews & Ronald L. Phillips (2003). Mathematical Techniques for Engineers and Scientists. SPIE Press. p. 164. ISBN 0-8194-4506-1.
- Ch V Ramana Murthy & NC Srinivas (2001). Applied Mathematics. New Delhi: S. Chand & Co. p. 337. ISBN 81-219-2082-5.
- The article on curvature treats a more general case where the curve is parametrized by an arbitrary variable (denoted t), rather than by the arc length s.
- Ahmed A. Shabana, Khaled E. Zaazaa, Hiroyuki Sugiyama (2007). Railroad Vehicle Dynamics: A Computational Approach. CRC Press. p. 91. ISBN 1-4200-4581-4.
- Serway, Raymond A.; Jewett, John W. (2004). Physics for Scientists and Engineers (6th ed.). Brooks/Cole. ISBN 0-534-40842-7.
- Tipler, Paul (2004). Physics for Scientists and Engineers: Mechanics, Oscillations and Waves, Thermodynamics (5th ed.). W. H. Freeman. ISBN 0-7167-0809-4.
- Centripetal force vs. Centrifugal force, from an online Regents Exam physics tutorial by the Oswego City School District
- Notes from University of Winnipeg
- Notes from Physics and Astronomy HyperPhysics at Georgia State University; see also home page
- Notes from Britannica
- Notes from PhysicsNet
- NASA notes by David P. Stern
- Notes from U Texas.
- Analysis of smart yo-yo
- The Inuit yo-yo
- Kinematic Models for Design Digital Library (KMODDL)
Movies and photos of hundreds of working mechanical-systems models at Cornell University. Also includes an e-book library of classic texts on mechanical design and engineering.
Newton thought that a force must act on the Moon because,
since it moves in a curved (almost, but not quite) circular path,
it is accelerating. After all, the direction of its
velocity is constantly changing, and acceleration is the rate
velocity changes. An acceleration requires a net force (2nd Law).
The Moon (and every other Earth satellite) is in free fall
toward the Earth, but it is a projectile whose tangential
velocity keeps it from getting closer to the Earth. In the
same time that the Moon falls a centimeter, the Earth curves a
centimeter out from under it!
In order for a scientific hypothesis to advance to the status
of a scientific theory, it must be thoroughly and extensively tested.
Since Fgrav = GMm/r², the gravitational force is directly proportional to G. The small size of the gravitational constant G tells you that
the gravitational force is actually quite weak compared to
other known forces like the electric, magnetic, and nuclear forces.
You need to know: (1) your mass, (2) the Earth's
mass, and (3) the radius of the Earth in order to
determine the gravitational force on you, which is your weight.
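A quick check (added here) that Fgrav = GMm/r² with the Earth's mass and radius reproduces the familiar w = mg; the 70 kg mass is just an example.

```python
G = 6.674e-11        # N m^2 / kg^2
M_earth = 5.972e24   # kg
R_earth = 6.371e6    # m
m = 70.0             # example mass in kg

w = G * M_earth * m / R_earth**2
print(round(w, 1), "N, compared with m*g =", round(m * 9.8, 1), "N")
```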
The gravitational force is an inverse-square-law force - the
strength of the force decreases with the square of the
distance between objects. In other words, twice the distance
means one-fourth of the force, three times the distance means
one-ninth of the force, etc.
(a) If you were five times farther from the center of the
Earth than you are now, your weight would be 1/25 (= 1/5²) of your current weight.
(b) If you were ten times farther from the center of the Earth
than you are now, your weight would be 1/100 (= 1/10²) of your current weight.
Ch 12 Plug & Chug Answers:
(Note that this is the same result you would get from w = mg.)
(Comparing the answers for this question and the last one, notice that the gravitational force that the Sun exerts on the Moon is roughly twice as large as the gravitational force that the Earth exerts on the Moon. Why, then, does the Moon orbit the Earth and not the Sun?)
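That ratio can be checked directly from the inverse-square law with standard values (a sketch added here; G and the Moon's mass cancel out of the ratio):

```python
M_sun, M_earth = 1.989e30, 5.972e24            # kg
d_sun_moon, d_earth_moon = 1.496e11, 3.844e8   # m, average distances

ratio = (M_sun / d_sun_moon**2) / (M_earth / d_earth_moon**2)
print("Sun's pull on the Moon / Earth's pull on the Moon =", round(ratio, 2))  # about 2.2
```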
Ch 12 Think & Explain Answers:
No, this label is not cause for alarm! The same thing
can be said about every object in the universe - every object attracts every other object gravitationally.
The same amount of force. The forces "Earth pulls Moon" and
"Moon pulls Earth" are a Newton's 3rd Law action/reaction
500 N toward the center of the Earth. This gravitational force
is the force we commonly call her weight.
If the gravitational force of the Sun on the planets suddenly
disappeared, they would move off with the velocity that they had
at that instant (Newton's 1st Law). So, they would move in a
straight line tangent to their orbit, at constant speed.
(a) Yes, if the Moon were twice as massive, the gravitational
force of the Earth on the Moon would be twice as much, since the
gravitational force is proportional to both masses involved.
(b) Yes, the gravitational force that the Moon exerts on the Earth
would double also (according to Newton's Third Law).
A rocket going from the Earth to the Moon would require more
fuel. Since the Earth has more mass than the Moon, the Earth will
exert a larger gravitational force on the rocket than the Moon
would (things weigh less on the Moon). This means that the rocket
would have to exert a larger force to balance its weight leaving
the Earth than leaving the Moon. Since the rocket exerts a larger
force leaving the Earth, it must do more work to leave the Earth,
which takes more energy (in the form of chemical potential energy
stored in the rocket fuel).
Gravitational forces on a galaxy near the "edge" of
the Universe. (Not all forces are shown.)
Net force on a galaxy near the "edge" of the Universe.
The observation that the expansion of the Universe is slowing down
is consistent with the law of gravity. Every object in the
Universe attracts every other object in the Universe with a
gravitational force. The diagram on the left above shows some of
the gravitational forces on a galaxy near the "edge" of the
universe. Looking at the diagram you can see that all of the
forces on this galaxy point "inward", since all gravitational
forces are attractive. This means that the net force on this
galaxy points "inward" toward the "center" of the Universe, as
shown in the diagram at right above. So, if the galaxy is moving
away from the "center" of the Universe (an expanding universe),
then the net force on the galaxy will act to slow the galaxy down,
and the expansion of the Universe should be slowing down.
However, recent observations (since the publication of your text)
seem to indicate that the expansion of the Universe is in fact NOT
slowing down - in fact, the Universe seems to be accelerating.
This is contrary to the behavior that the law of gravity predicts
as described above. This is an area of very heated and intense
research and discussion at the moment. If the Universe's expansion
is really speeding up, then there has to be some
previously-undetected force that is driving it. What could that force be?
Ch 12 Think & Solve Answers
If the Earth's diameter doubled and its mass also doubled, your weight would be half as much as now. Your weight is the gravitational force between you and the Earth. Doubling the mass of the Earth would double your weight, since gravitational force is directly proportional to mass, but doubling the radius (which doubles if the diameter doubles) would decrease your weight by a factor of 1/4, since gravitational force is inversely proportional to the square of the radius. Mathematically:
- Fnew = G(2M)m/(2r)² = (2/4)·GMm/r² = (1/2)·Fold
If you were twice as far from the center of the Earth as you are now, your weight would be 1/4 of its current value (See #32).
X-ray astronomy is a physical subfield of astronomy, more specifically radiation astronomy, that uses a variety of X-ray detectors fashioned into X-ray telescopes to observe natural sources that emit, reflect, transmit, or fluoresce X-rays. X-rays can only penetrate so far into a planetary atmosphere such as that surrounding the crustal and oceanic surface of the Earth. This limitation requires that these detectors and telescopes be lofted above nearly all of the atmosphere to function. Another alternative is to place them on astronomical bodies such as the Moon or in orbit.
Like the learning resource on Earth-based astronomy, this resource starts out at a secondary level, proceeds through a university undergraduate level, and engages the learner with the state of the art.
Notation: let the symbol Def. indicate that a definition is following.
Notation: let the symbols between [ and ] be replacement for that portion of a quoted text.
To help with definitions, their meanings and intents, there is learning resource theory of definition.
Def. evidence that demonstrates that a concept is possible is called proof of concept.
The proof-of-concept structure consists of
- findings, and
The findings demonstrate a statistically systematic change from the status quo or the control group.
X-ray astronomy consists of three fundamental parts:
- logical laws with respect to incoming X-rays, or X-radiation,
- natural X-ray sources, and
- the sky and associated realms with respect to X-rays.
Def. an action or process of throwing or sending out a traveling X-ray in a line, beam, or stream of small cross section is called X-radiation.
"X-rays span 3 decades in wavelength, frequency and energy. From 10 to 0.1 nanometers (nm) (about 0.12 to 12 keV) they are classified as soft x-rays, and from 0.1 nm to 0.01 nm (about 12 to 120 keV) as hard X-rays."
"Although the more energetic X-rays, photons with an energy greater than 30 keV (4,800 aJ) can penetrate the air at least for distances of a few meters (they would never have been detected and medical X-ray machines would not work if this was not the case) the Earth's atmosphere is thick enough that virtually none are able to penetrate from outer space all the way to the Earth's surface. X-rays in the 0.5 to 5 keV (80 to 800 aJ) range, where most celestial sources give off the bulk of their energy, can be stopped by a few sheets of paper; ninety percent of the photons in a beam of 3 keV (480 aJ) X-rays are absorbed by traveling through just 10 cm of air."
X-ray observations offer the possibility to detect (X-ray dark) planets as they eclipse part of the corona of their parent star while in transit. "Such methods are particularly promising for low-mass stars as a Jupiter-like planet could eclipse a rather significant coronal area."
"Auroras are produced by solar storms that eject clouds of energetic charged particles. These particles are deflected when they encounter the Earth’s magnetic field, but in the process large electric voltages are created. Electrons trapped in the Earth’s magnetic field are accelerated by these voltages and spiral along the magnetic field into the polar regions. There they collide with atoms high in the atmosphere and emit X-rays".
Theoretical X-radiation astronomy
Def. a theory of the science of the biological, chemical, physical, and logical laws (or principles) with respect to any natural X-ray source in the sky especially at night is called theoretical X-ray astronomy.
An individual science such as physics (astrophysics) is theoretical X-ray astrophysics.
"Theoretical X-ray astronomy is a branch of theoretical astronomy that deals with the theoretical astrophysics and theoretical astrochemistry of X-ray generation, emission, and detection as applied to astronomical objects."
"Like theoretical astrophysics, theoretical X-ray astronomy uses a wide variety of tools which include analytical models to approximate the behavior of a possible X-ray source and computational numerical simulations to approximate the observational data. Once potential observational consequences are available they can be compared with experimental observations. Observers can look for data that refutes a model or helps in choosing between several alternate or conflicting models."
"Theorists also try to generate or modify models to take into account new data. In the case of an inconsistency, the general tendency is to try to make minimal modifications to the model to fit the data. In some cases, a large amount of inconsistent data over time may lead to total abandonment of a model."
"Most of the topics in astrophysics, astrochemistry, astrometry, and other fields that are branches of astronomy studied by theoreticians involve X-rays and X-ray sources. Many of the beginnings for a theory can be found in an Earth-based laboratory where an X-ray source is built and studied."
"From the observed X-ray spectrum, combined with spectral emission results for other wavelength ranges, an astronomical model addressing the likely source of X-ray emission can be constructed. For example, with Scorpius X-1 the X-ray spectrum steeply drops off as X-ray energy increases up to 20 keV, which is likely for a thermal-plasma mechanism. In addition, there is no radio emission, and the visible continuum is roughly what would be expected from a hot plasma fitting the observed X-ray flux. The plasma could be a coronal cloud of a central object or a transient plasma, where the energy source is unknown, but could be related to the idea of a close binary."
"In the Crab Nebula X-ray spectrum there are three features that differ greatly from Scorpius X-1: its spectrum is much harder, its source diameter is in light-years (ly)s, not astronomical units (AU), and its radio and optical synchrotron emission are strong. Its overall X-ray luminosity rivals the optical emission and could be that of a nonthermal plasma. However, the Crab Nebula appears as an X-ray source that is a central freely expanding ball of dilute plasma, where the energy content is 100 times the total energy content of the large visible and radio portion, obtained from the unknown source."
"The "Dividing Line" as giant stars evolve to become red giants also coincides with the Wind and Coronal Dividing Lines. To explain the drop in X-ray emission across these dividing lines, a number of models have been proposed:
- low transition region densities, leading to low emission in coronae,
- high-density wind extinction of coronal emission,
- only cool coronal loops become stable,
- changes in a magnetic field structure to that of an open topology, leading to a decrease of magnetically confined plasma, or
- changes in the magnetic dynamo character, leading to the disappearance of stellar fields leaving only small-scale, turbulence-generated fields among red giants."
Astronomical X-ray entities are often discriminated further into sources or objects when more information becomes available, including that from other radiation astronomies.
A researcher who turns on an X-ray generator to study the X-ray emissions in a laboratory so as to understand an apparent astronomical X-ray source is an astronomical X-ray entity. So is one who writes an article about such efforts or a computer simulation to possibly represent such a source.
Def. a natural source usually of X-rays (X-radiation) in the sky especially at night is called an astronomical X-ray source.
The apparent source may be reflecting, generating and emitting, transmitting, or fluorescing X-rays which may be detectable.
"Apart from the Sun, the known X-ray emitters now include planets (Venus, Earth, Mars, Jupiter, and Saturn), planetary satellites (Moon, Io, Europa, and Ganymede), all active comets, the Io plasma torus, the rings of Saturn, the coronae (exospheres) of Earth and Mars, and the heliosphere."
Serpens X-1 is an X-ray source with an error circle fixed for all time on the celestial sphere. It is also an X-ray entity in the sense that it has an "independent, separate, or self-contained astronomical existence." from theoretical astronomy. It has a history, a spatial extent, and a spectral extent.
Solar coronal cloud
Although a coronal cloud (as part or all of a stellar or galactic corona) is usually "filled with high-temperature plasma at temperatures of T ≈ 1–2 (MK), ... [h]ot active regions and postflare loops have plasma temperatures of T ≈ 2–40 MK."
In the image at right, the photosphere of the Sun is dark in X-rays. However, apparently associated with the Sun is a high-temperature plasma that radiates in X-rays at temperatures 1,000 times as hot as the photosphere.
Super soft X-ray sources
"A super soft X-ray source (SSXS, or SSS) is an astronomical source of very low energy X-rays. Soft X-rays have energies in the 0.09 to 2.5 keV range, whereas hard X-rays are in the 1-20 keV range."
"SSXSs are in most cases only detected below 0.5 keV, so that within our own galaxy they are usually hidden by interstellar absorption in the galactic disk. They are readily evident in external galaxies, with ~10 found in the Magellanic Clouds and at least 15 seen in M31."
Ultraluminous X-ray sources
"Ultraluminous X-ray sources (ULXs) are pointlike, nonnuclear X-ray sources with luminosities above the Eddington limit of 3 × 1039 ergs s−1 for a 20 Mʘ black hole. Many ULXs show strong variability and may be black hole binaries. To fall into the class of intermediate-mass black holes (IMBHs), their luminosities, thermal disk emissions, variation timescales, and surrounding emission-line nebulae must suggest this. However, when the emission is beamed or exceeds the Eddington limit, the ULX may be a stellar-mass black hole. The nearby spiral galaxy NGC 1313 has two compact ULXs, X-1 and X-2. For X-1 the X-ray luminosity increases to a maximum of 3 × 1040 ergs s−1, exceeding the Eddington limit, and enters a steep power-law state at high luminosities more indicative of a stellar-mass black hole, whereas X-2 has the opposite behavior and appears to be in the hard X-ray state of an IMBH."
The X-ray/optical composite at right "highlights an ultraluminous X-ray source (ULX) shown in the box. ... The timing and regularity of these outbursts ... make the object one of the best candidates yet for a so-called intermediate-mass black hole. ... Chandra X-ray Observatory observations of this ULX have provided evidence that its X-radiation is produced by a disk of hot gas swirling around a black hole with a mass of about 10,000 suns." "Chandra observed M74 twice: once in June 2001 and again in October 2001. The XMM-Newton satellite also (a European Space Agency mission) observed this object in February 2002 and January 2003."
Many astronomical objects when studied with visual astronomy may not appear to also be X-ray objects.
The SIMBAD database "contains identifications, 'basic data', bibliography, and selected observational measurements for several million astronomical objects." Among these are some 209,612 astronomical X-ray objects. This information is found by going to the SIMBAD cite listed under 'External links', clicking on "Criteria query" and entering into the box "otype='X'", without the quotes, for an 'object count', and clicking on 'submit query'.
X-ray continuum emission "can arise both from a jet and from the hot corona of the accretion disc via a scattering process: in both cases it shows a power-law spectrum. In some radio-quiet [active galactic nuclei] AGN there is an excess of soft X-ray emission in addition to the power-law component."
X-ray line emission "is a result of illumination of cold heavy elements by the X-ray continuum that causes fluorescence of X-ray emission lines."
Using X-rays to determine a crystal structure results in diffraction intensities that are represented in reciprocal space as peaks. These have a finite width due to a variety of defects away from a perfectly periodic lattice. "[T]here may be significant diffuse scattering, a continuum of scattered X-rays that fall between the Bragg peaks."
"The X-ray continuum can arise from bremsstrahlung, black-body radiation, synchrotron radiation, or what is called inverse Compton scattering of lower-energy photons by relativistic electrons, knock-on collisions of fast protons with atomic electrons, and atomic recombination, with or without additional electron transitions."
The diffuse cosmic X-ray background is indicated in the figure at right with the notation CXB.
"In addition to discrete sources which stand out against the sky, there is good evidence for a diffuse X-ray background. During more than a decade of observations of X-ray emission from the Sun, evidence of the existence of an isotropic X-ray background flux was obtained in 1956. This background flux is rather consistently observed over a wide range of energies. The early high-energy end of the spectrum for this diffuse X-ray background was obtained by instruments on board Ranger 3 and Ranger 5. The X-ray flux corresponds to a total energy density of about 5 x 10−4 eV/cm3. The ROSAT soft X-ray diffuse background (SXRB) image shows the general increase in intensity from the Galactic plane to the poles. At the lowest energies, 0.1 - 0.3 keV, nearly all of the observed soft X-ray background (SXRB) is thermal emission from ~106 K plasma."
"On September 20, the X-Ray Laboratory at the Faculty of Geological Sciences, Mayor de San Andres University, La Paz, Bolivia, published a report of their analysis of a small sample of material recovered from the impact site. They detected iron, nickel, cobalt, and traces of iridium — elements characteristic of the elemental composition of meteorites. The quantitative proportions of silicon, aluminum, potassium, calcium, magnesium, and phosphorus are incompatible with rocks that are normally found at the surface of the Earth."
"In X-ray wavelengths, many scientists are investigating the scattering of X-rays by interstellar dust, and some have suggested that astronomical X-ray sources would possess diffuse haloes, due to the dust."
"[S]ome cosmic-ray observatories also look for high energy gamma rays and x-rays."
"Aluminium-26, 26Al, is a radioactive isotope of the chemical element aluminium, decaying by either of the modes beta-plus or electron capture, both resulting in the stable nuclide magnesium-26. The half-life of 26Al is 7.17×105 years. This is far too short for the isotope to survive to the present, but a small amount of the nuclide is produced by collisions of argon atoms with cosmic ray protons."
"Circinus X-1 is an X-ray binary star system that includes a neutron star. Observation of Circinus X-1 in July 2007 revealed the presence of X-ray jets normally found in black hole systems; it is the first of the sort to be discovered that displays this similarity to black holes."
"Following the discovery of the atomic nucleus by Ernest Rutherford in 1911, Antonius van den Broek proposed that the place of each element in the periodic table (its atomic number) is equal to its nuclear charge. This was confirmed experimentally by Henry Moseley in 1913 using X-ray spectra."
"X-rays remove electrons from atoms and ions, and those photoelectrons can provoke secondary ionizations. As the intensity is often low, this [X-ray] heating is only efficient in warm, less dense atomic medium (as the column density is small). For example in molecular clouds only hard x-rays can penetrate and x-ray heating can be ignored. This is assuming the region is not near an x-ray source such as a supernova remnant."
"In an X-ray tube, electrons are accelerated in a vacuum by an electric field and shot into a piece of metal called the "target". X-rays are emitted as the electrons slow down (decelerate) in the metal. The output spectrum consists of a continuous spectrum of X-rays, with additional sharp peaks at certain energies [characteristic of the elements of the target]."
"X-ray binaries are a class of binary stars that are luminous in X-rays. The X-rays are produced by matter falling from one component, called the donor (usually a relatively normal star) to the other component, called the accretor, which is compact: a white dwarf, neutron star, or black hole. The infalling matter releases gravitational potential energy, up to several tenths of its rest mass, as X-rays. (Hydrogen fusion releases only about 0.7 percent of rest mass.) An estimated 1041 positrons escape per second from a typical hard low-mass X-ray binary."
Of some 87,216 astronomical ultraviolet sources in the SIMBAD database, at least 2,767 are known X-ray sources.
"In 1994, a Galactic speed record was obtained with the discovery of a superluminal source in our own Galaxy, the cosmic x-ray source GRS 1915+105. The expansion occurred on a much shorter timescale. Several separate blobs were seen to expand in pairs within weeks by typically 0.5 arcsec. Because of the analogy with quasars, this source was called a microquasar."
"The frequency spectrum of Cherenkov radiation by a particle is given by the Frank–Tamm formula. Unlike fluorescence or emission spectra that have characteristic spectral peaks, Cherenkov radiation is continuous. Around the visible spectrum, the relative intensity per unit frequency is approximately proportional to the frequency. That is, higher frequencies (shorter wavelengths) are more intense in Cherenkov radiation. This is why visible Cherenkov radiation is observed to be brilliant blue. In fact, most Cherenkov radiation is in the ultraviolet spectrum—it is only with sufficiently accelerated charges that it even becomes visible; the sensitivity of the human eye peaks at green, and is very low in the violet portion of the spectrum."
"There is a cut-off frequency above which the equation cannot be satisfied. Since the refractive index is a function of frequency (and hence wavelength), the intensity does not continue increasing at ever shorter wavelengths even for ultra-relativistic particles (where v/c approaches 1). At X-ray frequencies, the refractive index becomes less than unity (note that in media the phase velocity may exceed c without violating relativity) and hence no X-ray emission (or shorter wavelength emissions such as gamma rays) would be observed. However, X-rays can be generated at special frequencies just below those corresponding to core electronic transitions in a material, as the index of refraction is often greater than 1 just below a resonance frequency (see Kramers-Kronig relation and anomalous dispersion)."
"The phase velocity of an electromagnetic wave, when traveling through a medium, can routinely exceed c, the vacuum velocity of light. For example, this occurs in most glasses at X-ray frequencies. However, the phase velocity of a wave corresponds to the propagation speed of a theoretical single-frequency (purely monochromatic) component of the wave at that frequency. Such a wave component must be infinite in extent and of constant amplitude (otherwise it is not truly monochromatic), and so cannot convey any information. Thus a phase velocity above c does not imply the propagation of signals with a velocity above c."
Def. a cloud, or cloud-like, natural astronomical entity, composed of plasma at least hot enough to emit X-rays is called a coronal cloud.
The Earth is a known astronomical object. It is usually not thought of as an X-ray source.
At right is a composite image which contains the first picture of the Earth in X-rays, taken in March, 1996, with the orbiting Polar satellite. The area of brightest X-ray emission is red.
Energetic charged particles from the Sun energize electrons in the Earth's magnetosphere. These electrons move along the Earth's magnetic field and eventually strike the ionosphere, causing the X-ray emission. Lightning strikes or bolts across the sky also emit X-rays.
Close inspection of the Chandra X-ray image of the Moon shows a region of X-rays in the dark region (the shadow region) trending toward the lower left corner of the X-ray image at second right. These X-rays only appear to come from the Moon. Instead, they originate from radiation of the Earth's geocorona (an extended outer atmosphere) through which orbiting spacecraft such as the Chandra satellite move.
The "image of Jupiter [at right] shows concentrations of auroral X-rays near the north and south magnetic poles." The Chandra X-ray Observatory accumulated X-ray counts from Jupiter for its entire 10-hour rotation on December 18, 2000.
The second image at right is a diagram describing the interaction with the local magnetic field. Jupiter's strong, rapidly rotating magnetic field (light blue lines in the figure) generates strong electric fields in the space around the planet. Charged particles (white dots), "trapped in Jupiter's magnetic field, are continually being accelerated (gold particles) down into the atmosphere above the polar regions, so auroras are almost always active on Jupiter. Electric voltages of about 10 million volts, and currents of 10 million amps - a hundred times greater than the most powerful lightning bolts - are required to explain the auroras at Jupiter's poles, which are a thousand times more powerful than those on Earth. On Earth, auroras are triggered by solar storms of energetic particles, which disturb Earth's magnetic field. As shown by the swept-back appearance in the illustration, gusts of particles from the Sun also distort Jupiter's magnetic field, and on occasion produce auroras."
Gaseous objects are astronomical objects in which gases are predominantly detected and apparently constitute the surface.
Depending primarily upon the gas temperature, the gas detected may be used to determine the composition of the gaseous object observed, at least for its outer layer.
The first ever X-ray image of Venus is shown at right. The "half crescent is due to the relative orientation of the Sun, Earth and Venus. The X-rays from Venus are produced by fluorescent radiation from oxygen and other atoms in the atmosphere between 120 and 140 kilometers above the surface of the planet. In contrast, the optical light from Venus is caused by the reflection from clouds 50 to 70 kilometers above the surface. Solar X-rays bombard the atmosphere of Venus, knock electrons out of the inner parts of atoms, and excite the atoms to a higher energy level. The atoms almost immediately return to their lower energy state with the emission of a fluorescent X-ray. A similar process involving ultraviolet light produces the visible light from fluorescent lamps."
"During the Soviet Venera program, the Venera 11 and Venera 12 probes detected a constant stream of lightning, and Venera 12 recorded a powerful clap of thunder soon after it landed. The European Space Agency's Venus Express recorded abundant lightning in the high atmosphere."
"In the sparse upper atmosphere of Mars, about 120 (75 miles) kilometers above its surface, the observed X-rays [shown in the image at right] are produced by fluorescent radiation from oxygen atoms."
"X-radiation from the Sun impacts oxygen atoms, knock electrons out of the inner parts of their electron clouds, and excite the atoms to a higher energy level in the process. The atoms almost immediately return to their lower energy state and may emit a fluorescent X-ray in this process with an energy characteristic of the atom involved - oxygen in this case. A similar process involving ultraviolet light produces the visible light from fluorescent lamps."
"The X-ray power detected from the Martian atmosphere is very small, amounting to only 4 megawatts, comparable to the X-ray power of about ten thousand medical X-ray machines. Chandra was scheduled to observe Mars when it was only 70 million kilometers from Earth, and also near the point in its orbit when it is closest to the Sun."
"At the time of the Chandra observation, a huge dust storm developed on Mars that covered about one hemisphere, later to cover the entire planet. This hemisphere rotated out of view over the course of the 9-hour observation but no change was observed in the X-ray intensity, implying that the dust storm did not affect the upper atmosphere."
"The astronomers also found evidence for a faint halo of X-rays that extends out to 7,000 kilometers above the surface of Mars. Scientists believe the X-rays are produced by collisions of ions racing away from the Sun (the solar wind) with oxygen and hydrogen atoms in the tenuous exosphere of Mars."
"One of the great surprises of Hyakutake's passage through the inner Solar System was the discovery that it was emitting X-rays [image at left], with observations made using the ROSAT satellite revealing very strong X-ray emission. This was the first time a comet had been seen to do so, but astronomers soon found that almost every comet they looked at was emitting X-rays. The emission from Hyakutake was brightest in a crescent shape surrounding the nucleus with the ends of the crescent pointing away from the Sun."
The image at right of Comet Lulin merges data acquired by Swift's Ultraviolet/Optical Telescope (blue and green) and X-Ray Telescope (red). At the time of the observation, the comet was 99.5 million miles from Earth and 115.3 million miles from the Sun.
"NASA's Swift Gamma-ray Explorer satellite was monitoring Comet Lulin as it closed to 63 Gm of Earth. For the first time, astronomers can see simultaneous UV and X-ray images of a comet. "The solar wind—a fast-moving stream of particles from the sun—interacts with the comet's broader cloud of atoms. This causes the solar wind to light up with X-rays, and that's what Swift's XRT sees", said Stefan Immler, of the Goddard Space Flight Center. This interaction, called charge exchange, results in X-rays from most comets when they pass within about three times Earth's distance from the Sun. Because Lulin is so active, its atomic cloud is especially dense. As a result, the X-ray-emitting region extends far sunward of the comet."
"An image of comet Hale-Bopp (C/1995 O1) in soft x-rays reveals a central emission offset from the nucleus, as well as an extended emission feature that does not correlate with the dust jets seen at optical wavelengths."
Note that in the Chandra observation covering Jupiter's full 10-hour rotation on December 18, 2000, X-rays are detected from the entire globe of Jupiter.
The X-ray astronomy image of Saturn is on the left in the composite at right. The Chandra X-ray Observatory "image of Saturn held some surprises for the observers. First, Saturn's 90 megawatts of X-radiation is concentrated near the equator. This is different from a similar gaseous giant planet, Jupiter, where the most intense X-rays are associated with the strong magnetic field near its poles. Saturn's X-ray spectrum, or the distribution of its X-rays according to energy, was found to be similar to that of X-rays from the Sun. This indicates that Saturn's X-radiation is due to the reflection of solar X-rays by Saturn's atmosphere. The intensity of these reflected X-rays was unexpectedly strong. ... The optical image of Saturn is also due to the reflection of light from the Sun - visible wavelength light in this case - but the optical and X-ray images obviously have dramatic differences. The optical image is much brighter, and shows the beautiful ring structures, which were not detected in X-rays. This is because the Sun emits about a million times more power in visible light than in X-rays, and X-rays reflect much less efficiently from Saturn's atmosphere and rings."
"[T]he soft X-ray emissions of Jupiter (and Saturn) can largely be explained by scattering and fluorescence of solar X-rays."
Rocky objects are astronomical objects with solid surfaces.
"Now, 205 measurements of Mercury's surface composition, made by the X-ray spectrometer onboard Messenger, reveal how much Mercury's surface differs from those of other planets in the solar system."
"The surface is dominated by minerals high in magnesium and enriched in sulfur, making it similar to partially melted versions of an enstatite chondrite, a rare type of meteorite that formed at high temperatures in low-oxygen conditions in the inner solar system."
""The similarity between the constituents of these meteorites and Mercury's surface leads us to believe that either Mercury formed via the accretion of materials somewhat like the enstatite chondrites, or that both enstatite chondrites and the Mercury precursors were built from common ancestors," [Shoshana] Weider [a planetary geologist at the Carnegie Institution of Washington] said."
Like the Earth, the Moon is generally not thought of as an astronomical X-ray source. But, as the image at right shows, the Chandra X-ray Observatory detects X-rays from the Moon. These X-rays are produced by fluorescence when solar X-rays bombard the Moon's surface.
With respect to the second image at right, "[t]his x-ray image of the Moon was made by the orbiting ROSAT (Röntgensatellit) Observatory [on June 29,] 1990. In this digital picture, pixel brightness corresponds to x-ray intensity. Consider the image in three parts: the bright hemisphere of the x-ray moon, the darker half of the moon, and the x-ray sky background. The bright lunar hemisphere shines in x-rays because it reflects x-rays emitted by the sun ... just as it shines at night by reflecting visible sunlight. The background sky has an x-ray glow in part due to the myriad of distant, powerful active galaxies, unresolved in the ROSAT picture but recently detected in Chandra Observatory x-ray images. But why isn't the dark half of the moon completely dark? It's true that the dark lunar face is in shadow and so is not reflecting solar x-rays. Still, the few x-ray photons which seem to come from the moon's dark half are currently thought to be caused by energetic particles in the solar wind bombarding the lunar surface." The measured lunar X-ray luminosity of ~ 1.2 x 10^12 erg/s makes the Moon one of the weakest known non-terrestrial X-ray sources. The scale on the picture says "16 arcmin".
The image at right is an X-ray diffraction pattern from Martian soil. The image is from "the Chemistry and Mineralogy (CheMin) experiment on NASA's Curiosity rover. The image reveals the presence of crystalline feldspar, pyroxenes and olivine mixed with some amorphous (non-crystalline) material. The soil sample, taken from a wind-blown deposit within Gale Crater, where the rover landed, is similar to volcanic soils in Hawaii."
"Curiosity scooped the soil on Oct. 15, 2012, the 69th sol, or Martian day, of operations. It was delivered to CheMin for X-ray diffraction analysis on October 17, 2012, the 71st sol. By directing an X-ray beam at a sample and recording how X-rays are scattered by the sample at an atomic level, the instrument can definitively identify and quantify minerals on Mars for the first time. Each mineral has a unique pattern of rings, or "fingerprint," revealing its presence. The colors in the graphic represent the intensity of the X-rays, with red being the most intense."
"Evidence from NEAR Shoemaker's x-ray measurements of Eros indicate an ordinary chondrite composition despite a red-sloped, S-type spectrum, again suggesting that some process has altered the optical properties of the surface. "
"Each element has electronic orbitals of characteristic energy. Following removal of an inner electron by an energetic photon provided by a primary radiation source, an electron from an outer shell drops into its place. There are a limited number of ways in which this can happen ... The main transitions are given names: an L→K transition is traditionally called Kα, an M→K transition is called Kβ, an M→L transition is called Lα, and so on. Each of these transitions yields a fluorescent photon with a characteristic energy equal to the difference in energy of the initial and final orbital. The wavelength of this fluorescent radiation can be calculated from Planck's Law:"
The second image at right "shows the typical form of the sharp fluorescent spectral lines obtained in the energy-dispersive method".
"[E]lemental abundances which cannot be determined from meteorites include several of the most important for interstellar X-ray absorption: H, He, C, N, O, Ne, and Ar."
"Single ionization and double ionization of elements heavier than helium have only a small effect on the magnitude of X-ray absorption cross sections."
For hydrogen, complete ionization "obviously reduces its cross section to zero, but ... the net effect of partial ionization of hydrogen on calculated absorption depends on whether or not observations of hydrogen [are] used to estimate the total gas. ... [A]t least 20 % of interstellar hydrogen at high galactic latitudes seems to be ionized".
"The 2.98-3.07 Å [0.298-0.307 nm] range is centered on the Lα Ca XX lines and includes associated satellite line emission from Ca XIX."
"The 3.14-3.24 Å [0.314-0.324 nm] region covers emission primarily from Ca XIX (He-like) and Ca XVIII."
"[I]n the X-ray region where high-temperature emission lines appear during solar flares ... The 1.82-1.97 Å [0.182-0.197 nm] range covers emission from Fe XXV (He-like) and similar transitions in lower degrees of ionization of iron."
For possibly locating X-ray sources above the Earth's atmosphere, there are a number of reasons to consider probing from different geographical locations:
- early visual observations of the solar corona are associated with eclipses of the Sun by the Moon,
- if the Sun is an X-ray source, then perhaps other stars are, and only so many can be observed from one location,
- laboratory measurements use a peak-intensity-to-background technique (the background possibly containing unknown sources), which demands measuring the X-ray background noise, and
- there may be X-ray scattering by the Earth's upper atmosphere.
Observatories on the Earth's surface do not seem like a useful place to conduct X-ray astronomy observations in view of the inability of X-rays to reach even the peaks of the highest mountains. From the earliest speculations about detecting X-rays above the Earth's atmosphere, the need to use an appropriate probe suggested a high altitude sounding rocket. The ending of World War II presented an opportunity to use a ballistic missile for just such a purpose. The White Sands Proving Grounds in New Mexico, at the time an army base, is the first location on land to test the concept. The image at the right shows the V-2 launch complex prior to the launch of V-2 number 6.
The first successful attempt to detect X-rays above the Earth's surface occurred at White Sands Proving Grounds on August 5, 1948, by lofting an X-ray detector with a V-2 rocket.
As with visual or optical astronomy observatories, there is a tendency to place them away from population centers. The photograph at right of the January 18, 1951, V-2 launch indicates one reason for doing so with X-ray observing. Rockets lofted upwards tend to return.
In the southern hemisphere at Woomera, South Australia, another X-ray observing location uses a famous and probably the most successful sounding rocket, the Skylark, to place X-ray detectors at suborbital altitudes. "[T]he first X-ray surveys of the sky in the Southern Hemisphere" are accomplished by Skylark launches.
The NRL and NASA establish another rocket launching facility outside Natal, Brazil to detect X-ray sources in the southern hemisphere. In addition to land-based surface launches of sounding rockets for X-ray detection, occasionally ocean surface ships served as stable platforms. The USS Point Defiance (LSD-31) is one of the first rocket-launching surface ships to support the 1958 IGY Solar Eclipse Expedition to the Danger Island region of the South Pacific. Launchers on deck fired eight Nike-Asp sounding rockets. Each rocket carried an X-ray detector to record X-ray emission from the Sun during the solar eclipse on October 12, 1958.
"The importance of X-ray astronomy is exemplified in the use of an X-ray imager such as the one on GOES 14 for the early detection of solar flares, coronal mass ejections (CME)s and other X-ray generating phenomena that impact the Earth."
In 1927, E.O. Hulburt of the US Naval Research Laboratory (NRL) and associates Gregory Breit and Merle Tuve of the Carnegie Institution of Washington considered the possibility of equipping Robert H. Goddard's rockets to explore the upper atmosphere. "Two years later, he proposed an experimental program in which a rocket might be instrumented to explore the upper atmosphere, including detection of ultraviolet radiation and X-rays at high altitudes."
In the late 1930s, "the presence of a very hot, tenuous gas surrounding the Sun ... was inferred indirectly from optical coronal lines of highly ionized species". In the mid-1940s "radio observations revealed a radio corona" around the Sun. "Of course, the sheer beauty of the solar corona has been admired in scattered visible light ever since humans first wondered about solar eclipses".
The beginning of the search for X-ray sources above the Earth's atmosphere is August 5, 1948, at 12:07 GMT (Greenwich Mean Time). As part of Project Hermes a US Army (formerly German) V-2 rocket number 43 is launched from White Sands Proving Grounds, launch complex (LC) 33, to an altitude of 166 km. This is "the first detection of solar X-rays." After detecting X-ray photons from the Sun in the course of the rocket flight, T.R. Burnight wrote, “The sun is assumed to be the source of this radiation although radiation of wave-length shorter than 4 angstroms would not be expected from theoretical estimates of black body radiation from the solar corona.”
For some plasma X-ray sources, "an exponential spectrum corresponding to a thermal bremsstrahlung source [may fit]", for example of the form
- f(E) = A exp(-E/kT),
where a least squares fit to the X-ray detection data yields a kT.
In terms of radiation detected, for example, the spectrum f(E) is in photons (cm^2 s keV)^-1 plotted versus energy in keV. As the photon flux decreases with increasing keV, the exponent (k) of a power-law fit is negative. Observations of X-rays have sometimes found the spectrum to have an upper portion with k ~ -2.3 and the lower portion being steeper with k ~ -4.7. This suggests a two-stage acceleration process.
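A minimal numerical sketch of the kind of fit described above: synthetic fluxes are generated from the exponential form f(E) = A exp(-E/kT) with a made-up kT of 20 keV and 5% scatter, and kT is then recovered from a straight-line least-squares fit to ln f versus E. A power-law spectrum would instead be fit as a straight line in log-log space, its slope giving the index k.

```python
# Minimal sketch: recovering kT from an exponential (thermal-bremsstrahlung-like)
# spectrum f(E) = A*exp(-E/kT) by a least-squares line fit to ln f versus E.
# The 'observed' fluxes are synthetic, generated with an assumed kT of 20 keV.
import numpy as np

rng = np.random.default_rng(0)
kT_true = 20.0                                    # keV, assumed for illustration
energies = np.linspace(5.0, 100.0, 20)            # keV, made-up energy channels
flux = 1e3 * np.exp(-energies / kT_true)          # photons (cm^2 s keV)^-1, arbitrary norm
flux *= rng.normal(1.0, 0.05, size=flux.size)     # 5% measurement scatter

# ln f = ln A - E/kT, so the slope of a degree-1 polynomial fit gives -1/kT.
slope, intercept = np.polyfit(energies, np.log(flux), 1)
kT_fit = -1.0 / slope
print(f"fitted kT ~ {kT_fit:.1f} keV (input was {kT_true} keV)")
```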
X-rays are electromagnetic radiation from a portion of the wavelength spectrum of about 5 to 8 nanometers (nm) down to approximately 5 to 8 picometers (pm). As the figure at the left indicates with respect to measurements at the surface of the Earth, they do not penetrate the atmosphere. Laboratory measurements with X-ray generating sources are used to determine atmospheric penetration.
There is an “extensive 1/4 keV emission in the Galactic halo”, an “observed 1/4 keV [X-ray emission originating] in a Local Hot Bubble (LHB) that surrounds the Sun. ... and an isotropic extragalactic component.” In addition to this “distribution of emission responsible for the soft X-ray diffuse background (SXRB) ... there are the distinct enhancements of supernova remnants, superbubbles, and clusters of galaxies.”
"The ROSAT soft X-ray diffuse background (SXRB) image shows the general increase in intensity from the Galactic plane to the poles. At the lowest energies, 0.1 - 0.3 keV, nearly all of the observed soft X-ray background (SXRB) is thermal emission from ~106 K plasma."
"Generally, a coronal cloud, a cloud composed of plasma, is usually associated with a star or other celestial or astronomical body, extending sometimes millions of kilometers into space, or thousands of light-years, depending on the associated body. The high temperature of the coronal cloud gives it unusual spectral features. These features have been traced to highly ionized atoms of elements such as iron which indicate a plasma's temperature in excess of 106 K (MK) and associated emission of X-rays."
A spectral distribution is often a plot of intensity, brightness, flux density, or other characteristic of a spectrum versus the spectral property such as wavelength, frequency, energy, particle speed, refractive or reflective index, for example.
The first three dozen or so astronomical X-ray objects detected other than the Sun "represent a brightness range of about a thousandfold from the most intense source, Sco XR-1, ca. 5 x 10^-10 J m^-2 s^-1, to the weakest sources at a few times 10^-13 J m^-2 s^-1."
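For orientation, these fluxes can be converted to the cgs units more commonly used in the X-ray literature (1 J m^-2 s^-1 = 10^3 erg cm^-2 s^-1), and the quoted thousandfold range checked directly. In the sketch below the weakest-source flux is taken as 5 x 10^-13 J m^-2 s^-1, one reading of "a few times 10^-13", purely for illustration.

```python
# Minimal sketch: converting the quoted X-ray source fluxes from J m^-2 s^-1
# to erg cm^-2 s^-1 and checking the quoted 'thousandfold' brightness range.
J_PER_M2_TO_ERG_PER_CM2 = 1e7 / 1e4      # 1 J = 1e7 erg; 1 m^2 = 1e4 cm^2

brightest = 5e-10       # Sco XR-1 [J m^-2 s^-1], from the text
weakest = 5e-13         # 'a few times 10^-13' [J m^-2 s^-1], illustrative reading

print(f"Sco XR-1: {brightest * J_PER_M2_TO_ERG_PER_CM2:.1e} erg cm^-2 s^-1")
print(f"weakest:  {weakest * J_PER_M2_TO_ERG_PER_CM2:.1e} erg cm^-2 s^-1")
print(f"brightness range: ~{brightest / weakest:.0f}x")
# -> 5.0e-07 and 5.0e-10 erg cm^-2 s^-1, a range of ~1000x.
```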
A temporal distribution is a distribution over time. Also known as a time distribution. A temporal distribution usually has the independent variable 'Time' on the abscissa and other variables viewed approximately orthogonal to it. The time distribution can move forward in time, for example, from the present into the future, or backward in time, from the present into the past. Usually, the abscissa is plotted forward in time with the earlier time at the intersection with the ordinate variable at left. Geologic time is often plotted on the abscissa versus phenomena on the ordinate or as a twenty-four hour clock analogy.
Supergiant fast X-ray transients (SFXTs)
"There are a growing number of recurrent X-ray transients, characterized by short outbursts with very fast rise times (tens of minutes) and typical durations of a few hours that are associated with OB supergiants and hence define a new class of massive X-ray binaries: Supergiant Fast X-ray Transients (SFXTs). XTE J1739–302 is one of these. Discovered in 1997, remaining active only one day, with an X-ray spectrum well fitted with a thermal bremsstrahlung (temperature of ∼20 keV), resembling the spectral properties of accreting pulsars, it was at first classified as a peculiar Be/X-ray transient with an unusually short outburst. A new burst was observed on April 8, 2008 with Swift."
An astronomical X-ray source may have one or more positional locations, plus associated error circles or boxes, from which incoming X-radiation (X-rays) has been detected. The location may be associated with a known astronomical object such as a source of electromagnetic radiation in another portion of the electromagnetic spectrum, for example, the visible or radio. An astronomical object previously detected say in the visible portion of the spectrum and later observed with an X-ray observatory in orbit around Earth is also an astronomical X-ray source. Striving to understand the generation of X-rays by the apparent source helps to understand the Sun, the universe as a whole, and how these affect us on Earth.
An astronomical X-ray source catalog or catalogue is a list or tabulation of astronomical objects that are X-ray sources, typically grouped together because they share a common type, morphology, origin, means of detection, or method of discovery. Astronomical X-ray source catalogs are usually the result of an astronomical survey of some kind, often performed using an X-ray astronomical observatory in orbit around Earth.
- "Distribution and Variability of Cosmic X-Ray Sources", published on April 1, 1967, describes 35 astronomical X-ray sources detected by sounding rocket launched with an X-ray detector on board by the X-ray astronomy group at the Naval Research Laboratory in the United States.
- "Development of a Catalogue of Galactic X-ray Sources", published in June 1967, lists 17 sources in order of right ascension as of October 5, 1966. It does not contain actual dates of initial observation.
- A Catalogue of Discrete Celestial X-ray Sources, contains 59 sources as of December 1, 1969, that at the least had an X-ray flux published in the literature.
- "The fourth Uhuru catalog of X-ray sources", contains 339 sources observed over the entire active period of the satellite, but not necessarily the earlier designation. It does not contain actual dates of observation for any sources. Sources detected during the final observation period from August 27, 1973, to January 12, 1974, are prefixed with "4U".
- "The Ariel V /3 A/ catalogue of X-ray sources. II - Sources at high galactic latitude |b| > 10°", contains sources with high galactic latitudes and includes some sources observed by HEAO 1, Einstein, OSO 7, SAS 3, Uhuru, and earlier, mainly rocket, observations.
Many devices have been developed to improve X-ray astronomy.
"In astronomy, the interstellar medium (or ISM) is the gas and cosmic dust that pervade interstellar space: the matter that exists between the star systems within a galaxy. It fills interstellar space and blends smoothly into the surrounding intergalactic medium. The interstellar medium consists of an extremely dilute (by terrestrial standards) mixture of ions, atoms, molecules, larger dust grains, cosmic rays, and (galactic) magnetic fields. The energy that occupies the same volume, in the form of electromagnetic radiation, is the interstellar radiation field."
"Of interest is the hot ionized medium (HIM) consisting of a coronal cloud ejection from star surfaces at 106-107 K which emits X-rays. The ISM is turbulent and full of structure on all spatial scales. Stars are born deep inside large complexes of molecular clouds, typically a few parsecs in size. During their lives and deaths, stars interact physically with the ISM. Stellar winds from young clusters of stars (often with giant or supergiant HII regions surrounding them) and shock waves created by supernovae inject enormous amounts of energy into their surroundings, which leads to hypersonic turbulence. The resultant structures are stellar wind bubbles and superbubbles of hot gas. The Sun is currently traveling through the Local Interstellar Cloud, a denser region in the low-density Local Bubble."
"To measure the spectrum of the diffuse X-ray emission from the interstellar medium over the energy range 0.07 to 1 keV, NASA launched a Black Brant 9 from White Sands Missile Range, New Mexico on May 1, 2008. The Principal Investigator for the mission is Dr. Dan McCammon of the University of Wisconsin."
"The US Naval Research Laboratory group launched an Aerobee 150 during April, 1965 that was equipped with a pair of geiger counters. This flight discovered seven candidate X-ray sources, including the first extragalactic X-ray source; designated Virgo X-1 as the first X-ray source detected in Virgo. A later Aerobee rocket launched from White Sands Missile Range on July 7, 1967, yielded further evidence that the source Virgo X-1 was the radio galaxy Messier 87. Subsequent X-ray observations by the HEAO 1 and Einstein Observatory showed a complex source that included the active galactic nucleus of Messier 87. However, there is little central concentration of the X-ray emission."
The MeV Auroral X-ray Imaging and Spectroscopy experiment (MAXIS) is carried aloft by a balloon for a 450 h flight from McMurdo Station, Antarctica. The MAXIS flight detected an auroral X-ray event possibly associated with the solar wind as it interacted with the upper atmosphere between January 22nd and 26th, 2000.
"Balloon flights can carry instruments to altitudes of up to 40 km above sea level, where they are above as much as 99.997% of the Earth's atmosphere. Unlike a rocket where data are collected during a brief few minutes, balloons are able to stay aloft for much longer. However, even at such altitudes, much of the X-ray spectrum is still absorbed. X-rays with energies less than 35 keV (5,600 aJ) cannot reach balloons."
"On July 21, 1964, the Crab Nebula supernova remnant was discovered to be a hard X-ray (15 – 60 keV) source by a scintillation counter flown on a balloon launched from Palestine, Texas, USA. This was likely the first balloon-based detection of X-rays from a discrete cosmic X-ray source."
High-energy focusing telescope
"The high-energy focusing telescope (HEFT) is a balloon-borne experiment to image astrophysical sources in the hard X-ray (20–100 keV) band. Its maiden flight took place in May 2005 from Fort Sumner, New Mexico, USA. The angular resolution of HEFT is ~1.5'. Rather than using a grazing-angle X-ray telescope, HEFT makes use of a novel tungsten-silicon multilayer coatings to extend the reflectivity of nested grazing-incidence mirrors beyond 10 keV. HEFT has an energy resolution of 1.0 keV full width at half maximum at 60 keV. HEFT was launched for a 25-hour balloon flight in May 2005. The instrument performed within specification and observed Tau X-1, the Crab Nebula."
High-resolution gamma-ray and hard X-ray spectrometer (HIREGS)
"One of the recent balloon-borne experiments is called the High-resolution gamma-ray and hard X-ray spectrometer (HIREGS). It is launched from McMurdo Station, Antarctica in December 1991, steady winds carried the balloon on a circumpolar flight lasting about two weeks."
Aircraft assisted launches
"The Array of Low Energy X-ray Imaging Sensors (ALEXIS) X-ray telescopes feature curved mirrors whose multilayer coatings reflect and focus low-energy X-rays or extreme ultraviolet light the way optical telescopes focus visible light. ... The Launch was provided by the United States Air Force Space Test Program on a Pegasus Booster on April 25, 1993."
"Satellites in use today include the XMM-Newton observatory (low to mid energy X-rays 0.1-15 keV) and the INTEGRAL satellite (high energy X-rays 15-60 keV). Both were launched by the European Space Agency. NASA has launched the Rossi X-ray Timing Explorer (RXTE), and the Swift and Chandra observatories. One of the instruments on Swift is the Swift X-Ray Telescope (XRT)."
"The GOES 14 spacecraft carries on board a Solar X-ray Imager to monitor the Sun's X-rays for the early detection of solar flares, coronal mass ejections, and other phenomena that impact the geospace environment. It was launched into orbit on June 27, 2009 at 22:51 GMT from Space Launch Complex 37B at the Cape Canaveral Air Force Station."
"On January 30, 2009, the Russian Federal Space Agency successfully launched the Koronas-Foton which carries several experiments to detect X-rays, including the TESIS telescope/spectrometer FIAN with SphinX soft X-ray spectrophotometer."
"The Italian Space Agency (ASI) gamma-ray observatory satellite Astro-rivelatore Gamma ad Imagini Leggero (AGILE) has on board the Super-AGILE 15-45 keV hard X-ray detector. It was launched on April 23, 2007 by the Indian PSLV-C8."
"A soft X-ray solar imaging telescope is on board the GOES-13 weather satellite launched using a Delta IV from Cape Canaveral LC37B on May 24, 2006. However, there have been no GOES 13 SXI images since December 2006."
"Although the Suzaku X-ray spectrometer (the first micro-calorimeter in space) failed on August 8, 2005 after launch on July 10, 2005, the X-ray Imaging Spectrometer (XIS) and Hard X-ray Detector (HXD) are still functioning."
The Solar and Heliospheric Observatory (SOHO) is launched at top left atop an ATLAS-IIAS expendable launch vehicle. The early Atlas was developed as an intercontinental ballistic missile (ICBM) for defense as part of the mutual assured destruction (MAD) effort, which helped to end the Cold War.
"The primary payload of mission STS-35 [December 1990] was ASTRO-1 ... The primary objectives were round-the-clock observations of the celestial sphere in ultraviolet and X-ray spectral wavelengths with the ASTRO-1 observatory ...The Broad Band X-Ray Telescope (BBXRT) and its Two-Axis Pointing System (TAPS) rounded out the instrument complement in the aft payload bay."
"Spacelab 1 was the first Spacelab mission in orbit in the payload bay of the Space Shuttle (STS-9) between November 28 and December 8, 1983. An X-ray spectrometer, measuring 2-30 keV photons (although 2-80 keV was possible), was on the pallet. The primary science objective was to study detailed spectral features in cosmic sources and their temporal changes. The instrument was a gas scintillation proportional counter (GSPC) with ~ 180 cm2 area and energy resolution of 9% at 7 keV. The detector was collimated to a 4.5° (FWHM) FOV. There were 512 energy channels."
"Spartan 1 was deployed from the Space Shuttle Discovery (STS-51G) on June 20, 1985, and retrieved 45.5 hours later. The X-ray detectors aboard the Spartan platform were sensitive to the energy range 1-12 keV. The instrument scanned its target with narrowly collimated (5' x 3°) GSPCs. There were 2 identical sets of counters, each having ~ 660 cm2 effective area. Counts were accumulated for 0.812 s into 128 energy channels. The energy resolution was 16% at 6 keV. During its 2 days of flight, Spartan-1 observed the Perseus cluster of galaxies and our galactic center region."
"Helios 1 and Helios 2 ... are a pair of probes launched into heliocentric orbit for the purpose of studying solar processes. ... The probes are notable for having set a maximum speed record among spacecraft at 252,792 km/h (157,078 mi/h or 43.63 mi/s or 70.22 km/s or 0.000234c). Helios 2 flew three million kilometers closer to the Sun than Helios 1, achieving perihelion on 17 April 1976 at a record distance of 0.29 AU (or 43.432 million kilometers), slightly inside the orbit of Mercury. Helios 2 was sent into orbit 13 months after the launch of Helios 1. ... The probes are no longer functional but still remain in their elliptical orbit around the Sun." On board, each probe carried an instrument for cosmic radiation investigation (the CRI) for measuring protons, electrons, and X-rays "to determine the distribution of cosmic rays."
"Usually observational astronomy is considered to occur on Earth's surface (or beneath it in neutrino astronomy). The idea of limiting observation to Earth includes orbiting the Earth. As soon as the observer leaves the cozy confines of Earth, the observer becomes a deep space explorer. Except for Explorer 1 and Explorer 3 and the earlier satellites in the series, usually if a probe is going to be a deep space explorer it leaves the Earth or an orbit around the Earth."
"For a satellite or space probe to qualify as a deep space X-ray astronomer/explorer or "astronobot"/explorer, all it needs to carry aboard is an XRT or X-ray detector and leave Earth orbit."
"Ulysses is launched October 6, 1990, and reached Jupiter for its "gravitational slingshot" in February 1992. It passed the south solar pole in June 1994 and crossed the ecliptic equator in February 1995. The solar X-ray and cosmic gamma-ray burst experiment (GRB) had 3 main objectives: study and monitor solar flares, detect and localize cosmic gamma-ray bursts, and in-situ detection of Jovian aurorae. Ulysses was the first satellite carrying a gamma burst detector which went outside the orbit of Mars. The hard X-ray detectors operated in the range 15–150 keV. The detectors consisted of 23-mm thick × 51-mm diameter CsI(Tl) crystals mounted via plastic light tubes to photomultipliers. The hard detector changed its operating mode depending on (1) measured count rate, (2) ground command, or (3) change in spacecraft telemetry mode. The trigger level was generally set for 8-sigma above background and the sensitivity is 10−6 erg/cm2 (1 nJ/m2). When a burst trigger is recorded, the instrument switches to record high resolution data, recording it to a 32-kbit memory for a slow telemetry read out. Burst data consist of either 16 s of 8-ms resolution count rates or 64 s of 32-ms count rates from the sum of the 2 detectors. There were also 16 channel energy spectra from the sum of the 2 detectors (taken either in 1, 2, 4, 16, or 32 second integrations). During 'wait' mode, the data were taken either in 0.25 or 0.5 s integrations and 4 energy channels (with shortest integration time being 8 s). Again, the outputs of the 2 detectors were summed."
"The Ulysses soft X-ray detectors consisted of 2.5-mm thick x 0.5 cm2 area Si surface barrier detectors. A 100 mg/cm2 beryllium foil front window rejected the low energy X-rays and defined a conical FOV of 75° (half-angle). These detectors were passively cooled and operate in the temperature range −35 to −55 °C. This detector had 6 energy channels, covering the range 5–20 keV."
"The International Cometary Explorer (ICE) spacecraft was originally known as [the] International Sun/Earth Explorer 3 (ISEE-3) satellite".
ISEE-3 was launched on August 12, 1978. It was inserted into a "halo" orbit about the libration point some 240 Earth radii upstream between the Earth and Sun. ISEE-3 was renamed ICE (International Cometary Explorer) when, after completing its original mission in 1982, it was gravitationally maneuvered to intercept the comet P/Giacobini-Zinner. On September 11, 1985, the veteran NASA spacecraft flew through the tail of the comet. The X-ray spectrometer aboard ISEE-3 was designed to study both solar flares and cosmic gamma-ray bursts over the energy range 5-228 keV.
"X-ray telescopes can use a variety of different designs to image X-rays. The most common methods used in X-ray telescopes are grazing incidence mirrors and coded apertures. The limitations of X-ray optics result in much narrower fields of view than visible or UV telescopes."
An extreme example of a reflecting telescope is demonstrated by the grazing incidence X-ray telescope (XRT) of the Swift satellite that focuses X-rays onto a state-of-the-art charge-coupled device (CCD), in red at the focal point of the grazing incidence mirrors (in black at the right).
"A Wolter telescope is a telescope for X-rays using only grazing incidence optics. ... X-rays mirrors can be built, but only if the angle from the plane of reflection is very low (typically 10 arc-minutes to 2 degrees). These are called glancing (or grazing) incidence mirrors. In 1952, Hans Wolter outlined three ways a telescope could be built using only this kind of mirror.. Not surprisingly, these are called Wolter telescopes of type I, II, and III. Each has different advantages and disadvantages."
"The mirrors can be made of ceramic or metal foil. The most commonly used grazing angle incidence materials for X-ray mirrors are gold and iridium. The critical reflection angle is energy dependent. For gold at 1 keV, the critical reflection angle is 3.72 degrees. A limit for this technology in the early 2000s with Chandra and XMM-Newton was about 15 keV light. Using new multi-layered coatings, computer aided manufacturing, and other techniques the [X-ray] mirror for the NuStar telescope pushed this up to 79 keV light. To reflect at this level, glass layers were multi-coated with Tungsten (W)/Silicon (Si) or Platinum(Pt)/Siliconcarbite(SiC).".
"Some X-ray telescopes use coded aperture imaging. This technique uses a flat aperture grille in front of the detector, which weighs much less than any kind of focusing X-ray lens, but requires considerably more post-processing to produce an image."
"Coded Apertures or Coded-Aperture Masks are grids, gratings, or other patterns of materials opaque to various wavelengths of light. The wavelengths are usually high-energy radiation such as X-rays and gamma rays. By blocking and unblocking light in a known pattern, a coded "shadow" is cast upon a plane of detectors. Using computer algorithms, properties of the original light source can be deduced from the shadow on the detectors. Coded apertures are used in X- and gamma rays because their high energies pass through normal lenses and mirrors."
A modulation collimator consists of “two or more wire grids [diffraction gratings] placed in front of an X-ray sensitive Geiger tube or proportional counter.” Relative to the path of incident X-rays (incoming X-rays) the wire grids are placed one beneath the other with a slight offset that produces a shadow of the upper grid over part of the lower grid.
Use of wire grids
Each grid consists of only parallel wires (like a diffraction grating, not a network of crossing wires) of diameter d and a center-to-center spacing of 2d. Let D be the distance between the grids for a bigrid, or the distance between the uppermost grid and the lower most grid (the grid immediately in front of the detector) in a multigrid system.
Incident parallel radiation from a distant point source "falls upon the first grid" so that "depending upon the angle of incidence, the portions of the beam ... transmitted by the first grid fall
- solely on the wires of the second grid,
- "solely [through] the open spaces, or
- upon both wires and spaces of the second grid."
The planes of 50% transmittance, planes of maximum transmittance (PMT), through the bigrid or multiple grid system, intersect "the celestial sphere [to] form [two or] multiple great circles ('lines-of-position') upon one of which the [astronomical] X-ray source must lie."
Net angular response
"[T]he net angular response of [a] two-grid or bigrid modulation collimator to a parallel X-ray beam is cyclic and trangular in shape with a peak transmission of 50%".
Def. the full width at half maximum (FWHM), θr = d/D, is called "[t]he response angle" (θr).
"The two-grid system unambiguously determines the angular size of an X-ray source with size between about θr/4 and 2θr, and clearly distinguishes sizes above and below this range."
The collimating effects of the grid enclosure or external metal slats determine the envelope for the triangular transmission peaks. The enclosure or slats, in general, slowly modulate the peak heights.
The multigrid collimator has the additional grid (third grid or more) inserted
- at a specified intermediate position between the two grids,
- aligned approximately parallel to them, and
- "positioned and rotated so that each [third] wire lies in a plane defined by a wire in [the] outer grid and a wire in the [inner] grid."
This positioning is such that every other triangular peak of the bigrid system is removed. An additional grid would be placed midway between one of the initial grids "and the adjacent intermediate grid."
"In electronics and telecommunications, modulation is the process of varying one or more properties of a high frequency periodic waveform, called the carrier signal, with a modulating signal ... This is done in a similar fashion as a musician modulating a tone (a periodic waveform) from a musical instrument by varying its volume, timing and pitch. The three key parameters of a periodic waveform are its amplitude ("volume"), its phase ("timing") and its frequency ("pitch"). Any of these properties can be modified in accordance with a low frequency signal to obtain the modulated signal. Typically a high-frequency sinusoid waveform is used as carrier signal, but a square wave pulse train may also be used."
Here with the 'modulation collimator' the amplitude (intensity) of the incoming X-rays is reduced by the presence of two or more 'diffraction gratings' of parallel wires that block or greatly reduce that portion of the signal incident upon the wires.
"A collimator is a device that narrows a beam of particles or waves. To "narrow" can mean either to cause the directions of motion to become more aligned in a specific direction (i.e. collimated or parallel) or to cause the spatial cross section of the beam to become smaller."
"The figure to the right illustrates how a Söller collimator is used in neutron and X-ray machines. The upper panel shows a situation where a collimator is not used, while the lower panel introduces a collimator. In both panels the source of radiation is to the right, and the image is recorded on the gray plate at the left of the panels."
For the modulation collimator, the collimating slats as represented in the diagram are replaced by wires (end on, ⊗←D→⊗, rather than a slat ▬).
Normal incidence X-ray optics
Detectors such as the X-ray detector at right collect individual X-rays (photons of X-ray light), count them, discern the energy or wavelength, or how fast they are detected. The detector and telescope system can be designed to yield temporal, spatial, or spectral information.
Energy-dispersive X-ray spectroscopy
The image at right is an EDS spectrum of the mineral crust of Rimicaris exoculata.
"Energy-dispersive X-ray spectroscopy (EDS or EDX) is an analytical technique used for the elemental analysis or chemical characterization of a sample. It relies on the investigation of an interaction of some source of X-ray excitation and a sample. Its characterization capabilities are due in large part to the fundamental principle that each element has a unique atomic structure allowing unique set of peaks on its X-ray spectrum. To stimulate the emission of characteristic X-rays from a specimen, a high-energy beam of charged particles such as electrons or protons (see PIXE), or a beam of X-rays, is focused into the sample being studied. At rest, an atom within the sample contains ground state (or unexcited) electrons in discrete energy levels or electron shells bound to the nucleus. The incident beam may excite an electron in an inner shell, ejecting it from the shell while creating an electron hole where the electron was. An electron from an outer, higher-energy shell then fills the hole, and the difference in energy between the higher-energy shell and the lower energy shell may be released in the form of an X-ray. The number and energy of the X-rays emitted from a specimen can be measured by an energy-dispersive spectrometer. ... X-ray beam excitation is used in X-ray fluorescence (XRF) spectrometers."
The MESSENGER X-ray spectrometer (XRS) "[m]aps mineral composition within the top millimeter of the surface on Mercury by detecting X-ray spectral lines from magnesium, aluminum, sulphur, calcium, titanium, and iron, in the 1-10 keV range."
Wavelength dispersive X-ray spectroscopy
A technique called wavelength dispersive X-ray spectroscopy (WDS) "is a method used to count the number of X-rays of a specific wavelength diffracted by a crystal. The wavelength of the impinging X-ray and the crystal's lattice spacings are related by Bragg's law ... [where the detector] counts only [X]-rays of a single wavelength". Many elements emit or fluoresce specific wavelengths of X-rays which in turn allow their identification.
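A minimal sketch of the Bragg relation used in WDS, n·λ = 2·d·sin(θ): given an analyzing-crystal lattice spacing and a photon wavelength, it returns the angle at which the detector counts only that wavelength. The 4.25 Å lattice spacing is a made-up example value; the 1.54 Å wavelength is the standard Cu Kα value used in the earlier fluorescence example, not a number from the text.

```python
# Minimal sketch: Bragg's law, n*lambda = 2*d*sin(theta), as used in WDS to
# select a single wavelength. The 4.25 angstrom lattice spacing is a made-up
# example; 1.54 angstrom is the standard Cu K-alpha wavelength.
import math

def bragg_angle_deg(wavelength_angstrom, d_spacing_angstrom, order=1):
    """Diffraction angle theta (degrees) satisfying n*lambda = 2*d*sin(theta), or None."""
    s = order * wavelength_angstrom / (2.0 * d_spacing_angstrom)
    if s > 1.0:
        return None       # this wavelength cannot be diffracted by this crystal spacing
    return math.degrees(math.asin(s))

theta = bragg_angle_deg(wavelength_angstrom=1.54, d_spacing_angstrom=4.25)
print(f"first-order Bragg angle ~ {theta:.1f} deg")
# Scanning the crystal and detector through a range of angles steps the
# spectrometer through wavelengths, so each element's characteristic lines
# can be counted in turn.
```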
"In order to interpret the data, the expected elemental wavelength peak locations need to be known. For the ESRO-2B WDS X-ray instruments, calculations of the expected solar spectrum had to be performed and were compared to peaks detected by rocket measurements."
- Amateur Astronomy (journal)
- Astronomy outline
- Astronomy Project
- Backyard Astronomy
- Coronal cloud
- First X-ray source in Andromeda
- First X-ray source in Antlia
- First X-ray source in Apus
- First X-ray source in Aquarius
- Introduction to Astronomy
- Mathematical astronomy
- Observational astronomy
- School:Physics and Astronomy
- Serpens X-1
- Sun as an X-ray source
- Topic:Astronomy/Help desk
- (April 15, 2013) "X-ray astronomy". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2013-05-10.
- Manuel Güdel (September 2004). "X-ray astronomy of stellar coronae". The Astronomy and Astrophysics Review 12 (2-3). doi:10.1007/s00159-004-0023-2. Bibcode: 2004A&ARv..12...71G. Retrieved on 2012-08-06.
- A. Bhardwaj & R. Elsner (February 20, 2009). "Earth Aurora: Chandra Looks Back At Earth". 60 Garden Street, Cambridge, MA 02138 USA: Harvard-Smithsonian Center for Astrophysics. Retrieved 2013-05-10.
- Marshallsumter (April 15, 2013). "X-ray astronomy". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2013-05-11.
- P Morrison (1967). "Extrasolar X-ray Sources". Annual Review of Astronomy and Astrophysics 5 (1): 325–50. doi:10.1146/annurev.aa.05.090167.001545. Bibcode: 1967ARA&A...5..325M.
- Kashyap V, Rosner R, Harnden FR Jr, Maggio A, Micela G, Sciortino S (1994). "X-ray emission on hybrid stars: ROSAT observations of alpha Trianguli Australis and IOTA Aurigae". The Astrophysical Journal 431. doi:10.1086/174494.
- A. Finoguenov, M.G. Watson, M. Tanaka, C.Simpson, M. Cirasuolo, J.S. Dunlop, J.A. Peacock, D. Farrah, M. Akiyama, Y. Ueda, V. Smolčič, G. Stewart, S. Rawlings, C.vanBreukelen, O. Almaini, L.Clewley, D.G. Bonfield, M.J. Jarvis, J.M. Barr, S. Foucaud, R.J. McLure, K. Sekiguchi, E. Egami (April 2010). "X-ray groups and clusters of galaxies in the Subaru-XMM Deep Field". Monthly Notices of the Royal Astronomical Society 403 (4): 2063-76. doi:10.1111/j.1365-2966.2010.16256.x. Retrieved on 2011-12-09.
- Markus J. Aschwanden (2007). "Fundamental Physical Processes in Coronae: Waves, Turbulence, Reconnection, and Particle Acceleration In: Waves & Oscillations in the Solar Atmosphere: Heating and Magneto-Seismology". Proceedings IAU Symposium 3 (S247): 257–68. doi:10.1017/S1743921308014956.
- "Supersoft X-Ray Sources".
- (March 8, 2013) "Super soft X-ray source". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2013-05-18.
- White NE, Giommi P, Heise J, Angelini L, Fantasia S (1995). "RX J0045.4+4154: A Recurrent Supersoft X-ray Transient in M31". The Astrophysical Journal Letters 445: L125.
- Marshallsumter (March 8, 2013). "Super soft X-ray source". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2013-05-18.
- Feng H, Kaaret P (2006). "Spectral state transitions of the ultraluminous X-RAY sources X-1 and X-2 in NGC 1313". Ap J 650 (1). doi:10.1086/508613. Bibcode: 2006ApJ...650L..75F.
- (June 12, 2012) "Astrophysical X-ray source". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2012-06-15.
- Jifeng Liu (March 26, 2005). "X-Rays Signal Presence Of Elusive Intermediate-Mass Black Hole". Ann Arbor, Michigan, USA: ScienceDaily. Retrieved 2012-11-25.
- Marc Wenger, François Ochsenbein, Daniel Egret, Pascal Dubois, François Bonnarel, Suzanne Borde, Françoise Genova, Gérard Jasniewicz, Suzanne Laloë, Soizick Lesteven, and Richard Monier (April 2000). "The SIMBAD astronomical database The CDS Reference Database for Astronomical Objects". Astronomy and Astrophysics 143 (4): 9-22. doi:10.1051/aas:2000332. Bibcode: 2000A&AS..143....9W. Retrieved on 2011-10-31.
- (April 29, 2013) "Active galactic nucleus". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2013-05-06.
- (May 1, 2013) "X-ray crystallography". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2013-05-06.
- Kupperian JE Jr, Friedman H (1958). "Experiment research US progr. for IGY to 1.7.58". IGY Rocket Report Ser. (1).
- Mario Blanco Cazas, "Informe Laboratorio de Rayos X — FRX-DRX" (in Spanish), Universidad Mayor de San Andres, Facultad de Ciencias Geologicas, Instituto de Investigaciones Geologicas y del Medio Ambiente, La Paz, Bolivia, September 20, 2007. Retrieved October 10, 2007.
- (April 26, 2013) "2007 Carancas impact event". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2013-05-12.
- Smith RK, Edgar RJ, Shafer RA (Dec 2002). "The X-ray halo of GX 13+1". Ap J 581 (1): 562–69. doi:10.1086/344151. Bibcode: 2002ApJ...581..562S.
- (April 27, 2013) "Cosmic dust". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2013-05-12.
- (April 30, 2013) "Cosmic-ray observatory". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2013-05-12.
- (March 18, 2013) "Aluminium-26". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2013-05-12.
- "Nuclide Safety Data Sheet Aluminum-26". www.nchps.org.
- (October 5, 2012) "Circinus X-1". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2013-01-06.
- (May 11, 2013) "Proton". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2013-05-12.
- (April 17, 2013) "Interstellar medium". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2013-05-12.
- (April 24, 2013) "Bremsstrahlung". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2013-05-12.
- Georg Weidenspointner (January 8, 2008). "An asymmetric distribution of positrons in the Galactic disk revealed by gamma-rays". Nature 451 (7175). doi:10.1038/nature06490. Retrieved on 2009-05-04.
- "Mystery of Antimatter Source Solved – Maybe" by John Borland 2008
- (March 12, 2013) "X-ray binary". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2013-05-12.
- I.F. Mirabel, L.F. Rodriguez (1994). "A superluminal source in the Galaxy". Nature 371 (6492): 46–8. doi:10.1038/371046a0. Bibcode: 1994Natur.371...46M.
- (April 22, 2013) "Superluminal motion". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2013-05-12.
- (May 7, 2013) "Cherenkov radiation". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2013-05-12.
- Eugene Hecht (1987). Optics (2nd ed.). Addison Wesley. p. 62. ISBN 0-201-11609-X.
- Arnold Sommerfeld (1907). "An Objection Against the Theory of Relativity and its Removal". Physikalische Zeitschrift 8 (23): 841–2.
- "MathPages - Phase, Group, and Signal Velocity". Retrieved 2007-04-30.
- (May 9, 2013) "Faster-than-light". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2013-05-12.
- Newitz, A. (2007) Educated Destruction 101. Popular Science magazine, September. pg. 61.
- NASA/CXC/SWRI/G.R.Gladstone et al. (February 27, 2002). "Jupiter Hot Spot Makes Trouble For Theory". Cambridge, Massachusetts: Harvard-Smithsonian Center for Astrophysics. Retrieved 2012-07-11.
- X-ray: NASA/CXC/MSFC/R.Elsner et al.; Illustration: CXC/M.Weiss (March 2, 2005). "Jupiter: Chandra Probes High-Voltage Auroras on Jupiter". Cambridge, Massachusetts: Harvard-Smithsonian Center for Astrophysics. Retrieved 2012-07-11.
- (March 27, 2013) "August 1975". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2013-05-12.
- K. Dennerl (November 29, 2001). "Venus: Venus in a New Light". Boston, Massachusetts, USA: Harvard University, NASA. Retrieved 2012-11-26.
- "Venus also zapped by lightning". CNN. 29 November 2007. Retrieved 2007-11-29.
- (May 19, 2013) "Venus". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2013-05-21.
- K. Dennerl et al. (November 7, 2002). "Mars: Mars Glows in X-rays". Boston, Massachusetts, USA: NASA, Harvard University. Retrieved 2012-11-26.
- J Glanz (1996). "Comet Hyakutake Blazes in X-rays". Science 272 (5259): 194–0. doi:10.1126/science.272.5259.194. Bibcode: 1996Sci...272..194G.
- (March 12, 2013) "Comet Hyakutake". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2013-05-21.
- F. Reddy. "NASA's Swift Spies Comet Lulin".
- Vladimir A. Krasnopolsky, Michael J. Mumma, Mark Abbott, Brian C. Flynn, Karen J. Meech, Donald K. Yeomans, Paul D. Feldman, Cristiano B. Cosmovici (September 5, 1997). "Detection of Soft X-rays and a Sensitive Search for Noble Gases in Comet Hale-Bopp (C/1995 O1)". Science 277 (5331): 1488-91. doi:10.1126/science.277.5331.1488. PMID 9278508. Retrieved on 2013-05-21.
- Samantha Harvey (August 19, 2008). "X-Ray Saturn". NASA. Retrieved 2012-07-21.
- G. Branduardi-Raymont, A. Bhardwaj, R.F. Elsner, G.R. Gladstone, G. Ramsay, P. Rodriguez, R. Soria, J.H. Waite Jr., T.E. Cravens (June 2007). "Latest results on Jovian disk X-rays from XMM-Newton". Planetary and Space Science 55 (9): 1126-34. doi:10.1016/j.pss.2006.11.017. Retrieved on 2013-05-23.
- Charles Q. Choi (September 24, 2012). "Mercury's Surface Resembles Rare Meteorites". SPACE.com. Retrieved 2012-09-24.
- Robert Burnham (2004). Moon Prospecting. Kalmbach Publishing Co.. http://elibrary.ru/item.asp?id=7602287. Retrieved 2012-01-11.
- Robert Nemiroff & Jerry Bonnell (September 2, 2000). "Astronomy Picture of the Day". LHEA at NASA/GSFC & Michigan Tech. U. Retrieved 2013-05-11.
- Sue Lavoie (October 30, 2012). "PIA16217: First X-ray View of Martian Soil". Pasadena, California, USA: NASA, JPL, California Institute of Technology. Retrieved 2012-11-26.
- (April 28, 2013) "Space weathering". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2013-05-12.
- (March 20, 2013) "X-ray fluorescence". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2013-05-12.
- Robert Morrison and Dan McCammon (July 1983). "Interstellar photoelectric absorption cross sections, 0.03-10 keV". The Astrophysical Journal 270 (7): 119-22. Bibcode: 1983ApJ...270..119M. Retrieved on 2011-11-11.
- G. A. Doschek, U. Feldman, and R. W. Kreplin and Leonard Cohen (July 15, 1980). "High-resolution X-ray spectra of solar flares. III - General spectral properties of X1-X5 type flares". The Astrophysical Journal 239 (07): 725-37. doi:10.1086/158158. Bibcode: 1980ApJ...239..725D. Retrieved on 2012-12-12.
- Ken Pounds (September 2002). "Forty years on from Aerobee 150: a personal perspective". Philosophical Transactions of the Royal Society London A 360 (1798): 1905-21. doi:10.1098/rsta.2002.1044. PMID 12804236. Retrieved on 2011-10-19.
- William R. Corliss (1971). NASA Sounding Rockets, 1958-1968 A Historical Summary NASA SP-4401. Washington, DC: NASA. pp. 158. http://history.nasa.gov/SP-4401/sp4401.htm. Retrieved 2011-10-19.
- (June 9, 2012) "Radiation astronomy". Wikiversity. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2012-06-14.
- Bruce Hevly (1994). Gregory Good. ed. Building a Washington Network for Atmospheric Research, In: The Earth, the Heavens, and the Carnegie Institution of Washington. Washington, DC: American Geophysical Union. pp. 143-8. ISBN 0-87590-279-0. http://books.google.com/books?hl=en&lr=&id=YTvlaU_Ot6AC&oi=fnd&pg=PA143&ots=OnxgivuQeK&sig=aWoylkajjpSpi8ZDFdCT3G2OnVI. Retrieved 2011-10-16.
- Rolf Mewe (December 1996). "X-ray Spectroscopy of Stellar Coronae: History - Present - Future". Solar Physics 169 (2): 335-48. doi:10.1007/BF00190610. Bibcode: 1996SoPh..169..335M. Retrieved on 2011-10-16.
- T. R. Burnight (1949). "Soft X-radiation in the upper atmosphere". Physical Review A 76: 165. Retrieved on 2011-10-16.
- R. M. Thomas (December 1968). "The Detection of High-Energy X-rays from Ara XR-1 and Nor XR-1". Proceedings of the Astronomical Society of Australia 1 (12): 156-6. Bibcode: 1968PASAu...1..165T. Retrieved on 2012-01-10.
- (November 10, 2011) "power law". Wiktionary. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2012-12-25.
- K. J. Frost and B. R. Dennis (May 1, 1971). "Evidence from Hard X-Rays for Two-Stage Particle Acceleration in a Solar Flare". The Astrophysical Journal 165 (5): 655. doi:10.1086/150932. Bibcode: 1971ApJ...165..655F. Retrieved on 2012-03-01.
- S. L. Snowden, R. Egger, D. P. Finkbiner, M. J. Freyberg, and P. P. Plucinsky (February 1, 1998). "Progress on Establishing the Spatial Distribution of Material Responsible for the 1/4 keV Soft X-Ray Diffuse Background Local and Halo Components". The Astrophysical Journal 493 (1): 715-29. doi:10.1086/305135. Bibcode: 1998ApJ...493..715S. Retrieved on 2012-06-14.
- Friedman H (November 1969). "Cosmic X-ray observations". Proceedings of the Royal Society of London. Series A, Mathematical and Physical Sciences 313 (1514): 301-15. Bibcode: 1969RSPSA.313..301F. Retrieved on 2011-11-25.
- Negueruela I, Smith DM, Reig P, Chaty S, Torrejon JM. "Supergiant Fast X-ray Transients: A new class of high mass X-ray binaries unveiled by INTEGRAL". arXiv:astro-ph/0511088.
- Sidoli L (2008). "Transient outburst mechanisms". arXiv:0809.3157.
- Friedman H, Byram ET, Chubb TA (April 1967). "Distribution and Variability of Cosmic X-Ray Sources". Science 156 (3773): 374-8. doi:10.1126/science.156.3773.374. PMID 17812381. Retrieved on 2009-11-25.
- Ouellette GA (June 1967). "Development of a catalogue of galactic x-ray sources". The Astronomical Journal 72 (5): 597-900. doi:10.1086/110278. Bibcode: 1967AJ.....72..597O. Retrieved on 2011-11-28.
- Dolan JF (April 1970). "A Catalogue of Discrete Celestial X-Ray Sources". Astronomical Journal 75 (4): 223-30. doi:10.1086/110966. Bibcode: 1970AJ.....75..223D. Retrieved on 2011-11-28.
- Forman W, Jones C, Cominsky L, Julien P, Murray S, Peters G (December 1978). "The fourth Uhuru catalog of X-ray sources". The Astrophysical Journal Supplemental Series 38 (12): 357-412. doi:0.1086/190561. Bibcode: 1978ApJS...38..357F. Retrieved on 2009-10-11.
- McHardy IM, Lawrence A, Pye JP, Pounds KA (December 1981). "The Ariel V /3 A/ catalogue of X-ray sources. II - Sources at high galactic latitude /absolute value of B greater than 10 deg/". Monthly Notices of the Royal Astronomical Society (MNRAS) 197: 893-919. Bibcode: 1981MNRAS.197..893M. Retrieved on 2010-01-10.
- L. Spitzer (1978). Physical Processes in the Interstellar Medium. Wiley. ISBN 0-471-29335-0.
- B. Wright. "36.223 UH MCCAMMON/UNIVERSITY OF WISCONSIN".
- Stephen A. Drake. "A Brief History of High-Energy Astronomy: 1965 - 1969". NASA HEASARC. Retrieved 2011-10-28.
- Charles, P. A.; Seward, F. D. (1995). Exploring the X-ray universe. Cambridge, England: Press Syndicate of the University of Cambridge. p. 9. ISBN 0-521-43712-1.
- Bradt, H.; Naranan, S.; Rappaport, S.; Spada, G. (June 1968). "Celestial Positions of X-Ray Sources in Sagittarius". The Astrophysical Journal 152 (6): 1005–13. doi:10.1086/149613. Bibcode: 1968ApJ...152.1005B.
- Lea, S. M.; Mushotzky, R.; Holt, S. S. (November 1982). "Einstein Observatory solid state spectrometer observations of M87 and the Virgo cluster". The Astrophysical Journal 262: 24–32. doi:10.1086/160392. Bibcode: 1982ApJ...262...24L.
- B. D.Turland (February 1975). "Observations of M87 at 5 GHz with the 5-km telescope". Monthly Notices of the Royal Astronomical Society 170: 281–94. Bibcode: 1975MNRAS.170..281T.
- RJHall (April 21, 2013). "Messier 87". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2013-05-12.
- R. M. Millan, R. P. Lin, D. M. Smith, K. R. Lorentzen, and M. P. McCarthy (December 2002). "X-ray observations of MeV electron precipitation with a balloon-borne germanium spectrometer". Geophysical Research Letters 29 (24): 2194-7. doi:10.1029/2002GL015922. Retrieved on 2011-10-26.
- S. A. Drake. "A Brief History of High-Energy Astronomy: 1960–1964".
- F. A. Harrison, Steven Boggs, Aleksey E. Bolotnikov, Finn E. Christensen, Walter R. Cook III, William W. Craig, Charles J. Hailey, Mario A. Jimenez-Garate, Peter H. Mao (2000). "Development of the High-Energy Focusing Telescope (HEFT) balloon experiment". Proc SPIE 4012. doi:10.1117/12.391608.
- "ALEXIS satellite marks fifth anniversary of launch". Los Alamos National Laboratory. 23 April 1998. Retrieved 17 August 2011.
- (December 18, 2011) "Array of Low Energy X-ray Imaging Sensors". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2012-12-09.
- (March 24, 2013) "X-ray astronomy satellites". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2013-05-11.
- "GOES Solar X-ray Imager".
- Marshallsumter (March 24, 2013). "X-ray astronomy satellites". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2013-05-11.
- M. Wade. "Chronology - Quarter 2 2007".
- M. Wade. "Chronology - Quarter 2 2006".
- (November 8, 2012) "STS-35". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2012-12-05.
- (September 19, 2012) "History of X-ray astronomy". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2012-12-10.
- John Wilkinson (2012). New Eyes on the Sun: A Guide to Satellite Images and Amateur Observation. Astronomers' Universe Series. Springer. p. 37. ISBN 3-642-22838-0. http://books.google.com/books?id=Ud2icgujz0wC&pg=PA37.
- "Solar System Exploration: Missions: By Target: Our Solar System: Past: Helios 2".
- (November 11, 2012) "Helios (spacecraft)". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2012-12-10.
- Kawakatsu Y (December 2007). "Concept study on Deep Space Orbit Transfer Vehicle". Acta Astronaut 61 (11–12): 1019–28. doi:10.1016/j.actaastro.2006.12.019. Bibcode: 2007AcAau..61.1019K.
- (October 14, 2012) "International Cometary Explorer". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2012-12-08.
- (April 17, 2012) "X-ray telescope". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2012-06-15.
- Kulinder Pal Singh. "Techniques in X-ray Astronomy" (pdf).
- Hans Wolter (1952). "Glancing Incidence Mirror Systems as Imaging Optics for X-rays". Ann. Physik 10: 94.
- Hans Wolter (1952). "A Generalized Schwarschild Mirror Systems For Use at Glancing Incidence for X-ray Imaging". Ann. Physik 10: 286.
- Rob Petre. "X-ray Imaging Systems". NASA.
- (February 20, 2012) "Wolter telescope". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2012-06-15.
- "Mirror Laboratory".
- NuStar: Instrumentation: Optics
- (May 30, 2012) "Coded aperture". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2012-06-15.
- H. Bradt, G. Garmire, M. Oda, G. Spada, and B.V. Sreekantan, P. Gorenstein and H. Gursky (September 1968). "The Modulation Collimator in X-ray Astronomy". Space Science Reviews 8 (4): 471-506. doi:10.1007/BF00175003. Bibcode: 1968SSRv....8..471B. Retrieved on 2011-12-10.
- Minoru Oda (January 1965). "High-Resolution X-Ray Collimator with Broad Field of View for Astronomical Use". Applied Optics 4 (1): 143. doi:10.1364/AO.4.000143. Bibcode: 1965ApOpt...4..143O. Retrieved on 2011-12-10.
- (April 15, 2013) "Modulation". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2013-05-11.
- (March 13, 2013) "Collimator". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2013-05-11.
- Hoover RB et al. (1991). "Solar Observations with the Multi-Spectral Solar Telescope Array". Proc. SPIE 1546: 175.
- Corbari, L et al. (2008). "Iron oxide deposits associated with the ectosymbiotic bacteria in the hydrothermal vent shrimp Rimicaris exoculata". Biogeosciences 5: 1295-1310. doi:10.5194/bg-5-1295-2008.
- J. Goldstein, D. Newbury, D. Joy, C. Lyman, P. Echlin, E. Lifshin, L. Sawyer, and J. Michael, Scanning Electron Microscopy and X-ray Microanalysis, 3rd Ed., Kluwer Academic/Plenum Publishers, New York (2002).
- (June 15. 2012) "Energy-dispersive X-ray spectroscopy". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2012-06-15.
- Hannu Parviainen, Jyri Näränen, Karri Muinonen (July 2011). "Soft X-Ray Fluorescence from Particulate Medium: Numerical Simulations". Journal of Quantitative Spectroscopy and Radiative Transfer 112 (11): 1907-18. doi:10.1016/j.jqsrt.2011.03.011. Retrieved on 2012-03-04.
- Charles Schlemm, Richard D. Starr, George C. Ho, Kathryn E. Bechtold, Sarah A. Hamilton, John D. Boldt, William V. Boynton, Walter Bradley, Martin E. Fraeman and Robert E. Gold, et al. (2007). "The X-Ray Spectrometer on the MESSENGER Spacecraft". Space Science Reviews 131 (1): 393–415. doi:10.1007/s11214-007-9248-5. Bibcode: 2007SSRv..131..393S. Retrieved on 2011-01-26.
- "X-ray Spectrometer (XRS)". NASA / National Space Science Data Center. Retrieved 2011-02-19.
- (June 14, 2012) "MESSENGER". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2012-06-15.
- (October 28, 2011) "Wavelength dispersive X-ray spectroscopy". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2012-06-15.
- H. Bradt, G. Garmire, M. Oda, G. Spada, and B.V. Sreekantan, P. Gorenstein and H. Gursky (September 1968). "The Modulation Collimator in X-ray Astronomy". Space Science Reviews 8 (4): 471-506. doi:10.1007/BF00175003. Bibcode: 1968SSRv....8..471B. Retrieved on 2011-12-10.
- Manuel Güdel (2004). "X-ray astronomy of stellar coronae". Astron Astrophys Rev 12 (2-3): 71-237. doi:10.1007/s00159-004-0023-2. Bibcode: 2004A&ARv..12...71G. Retrieved on 2011-10-16.
- Fiona A. Harrison, William W. Craig, Finn E. Christensen, Charles J. Hailey, Will W. Zhang, Steven E. Boggs, Daniel Stern, W. Rick Cook, Karl Forster, Paolo Giommi, Brian W. Grefenstette, Yunjin Kim, Takao Kitaguchi, Jason E Koglin, Kristin K. Madsen, Peter H. Mao, Hiromasa Miyasaka, Kaya Mori, Matteo Perri, Michael J. Pivovaroff, Simonetta Puccetti, Vikram R. Rana, Niels J. Westergaard, Jason Willis, Andreas Zoglauer, Hongjun An, Matteo Bachetti, Nicolas M. Barriere, Eric C. Bellm, Varun Bhalerao, Nicolai F. Brejnholt, Felix Fuerst, Carl C. Liebe, Craig B. Markwardt, Melania Nynka, Julia K. Vogel, Dominic J. Walton, Daniel R. Wik, David M. Alexander, Lynn R. Cominsky, Ann E. Hornschemeier, Allan Hornstrup, Victoria M. Kaspi, Greg M. Madejski, Giorgio Matt, Silvano Molendi, David M. Smith, et al. (June 2013). "The Nuclear Spectroscopic Telescope Array (NuSTAR) Mission". The Astrophysical Journal 770 (2): 19. doi:10.1088/0004-637X/770/2/103. Bibcode: 2013ApJ...770..103H. Retrieved on 2013-06-05.
- African Journals Online
- Bing Advanced search
- Google Books
- Google scholar Advanced Scholar Search
- International Astronomical Union
- Is My Favorite Object an X-ray, Gamma-Ray, or EUV Source?
- Lycos search
- NASA/IPAC Extragalactic Database - NED
- NASA's National Space Science Data Center.
- Office of Scientific & Technical Information
- Questia - The Online Library of Books and Journals
- SAGE journals online
- The SAO/NASA Astrophysics Data System
- Scirus for scientific information only advanced search
- SDSS Quick Look tool: SkyServer
- SIMBAD Astronomical Database
- Spacecraft Query at NASA.
- Taylor & Francis Online
- Universal coordinate converter
- Wiley Online Library Advanced Search
- Yahoo Advanced Web Search
Learn more about X-ray astronomy: http://en.wikiversity.org/wiki/X-ray_astronomy
The response of airplanes to gusts has been a subject of concern to airplane designers since the earliest days of aviation. The Wright Brothers, flying in high winds at low altitude on the seashore at Kill Devil Hills, North Carolina, purposely designed their gliders and their powered airplane with low or negative dihedral to avoid lateral upsets due to side gusts. Pilots soon found the necessity of seat belts to avoid being tossed out of the seats of their machines when flying through turbulence. One of the first women pilots, Harriet Quimby, and her passenger, flying without seat belts in a Bleriot airplane over Boston, Massachusetts, in 1911, were unfortunately thrown out of their machine and fell 1500 feet to their deaths. In more recent years, gusts have been recognized as one of the sources of critical design loads on airplanes, as well as a source of fatigue loads due to repeated small loads. In addition to these design problems, many people are susceptible to airsickness when flying through rough air.
Despite the long interest of airplane designers in the effects of turbulence, very few efforts have been made to design airplanes with reduced response to gusts. Even in 1995, none of the commonly used transport airplanes or general aviation aircraft were equipped with gust-alleviation systems.
In the past, several attempts have been made by airplane designers to build airplanes with reduced response to turbulence. All of these attempts were characterized by an intuitive approach with no attempt at analysis prior to flight tests, and all were notably unsuccessful.
One of these airplanes (figure 13.1) was designed by Waldo Waterman. It had wings attached to the fuselage with skewed hinges and restrained by pneumatic struts that acted as springs. The effect of the skewed hinge was to reduce the angle of attack of the wing panels when they deflected upward, and vice versa. The response to gusts was not noticeably reduced from that of the airplane with the wings locked, probably because the dynamic response of the system was not suitable. Also, the degree of flexibility of the wings was limited because deflection of the ailerons would deflect the wings to oppose the aileron rolling moment, which resulted in reduced or reversed roll response.
The effect of wings with skewed, spring-loaded hinges is similar to the effect of bending of a swept wing. Airplanes with swept wings do have smoother rides in certain frequency ranges and suffer from reduced aileron reversal speed when compared with airplanes with unswept wings.
A similar method that has been tried in flight is to incorporate springs in the struts of a conventional strut-braced high-wing monoplane. This method may be likened to the springs used in an automobile chassis to reduce bumps. This method has also proved ineffective, probably because of the slow dynamic response of the system.
Other schemes involving wing motion have been proposed from time to time, and some of them have been investigated in wind-tunnel tests or in flight. One method that has been given considerable attention is the "free wing" concept. In this method, the wing is pivoted with respect to the fuselage about a spanwise hinge ahead of its aerodynamic center, and its angle of attack is controlled by a flap on the trailing edge. A serious disadvantage of this method is that up flap deflection must be used to trim the wing at a high-lift coefficient for landing. This flap obviously reduces the maximum lift, just the opposite from what is normally obtained with a downward-deflected landing flap. Also, the dynamic response of the wing may be too slow to provide reduction of the accelerations due to high-frequency gusts.
In England, shortly after WW II, a large commercial airplane called the Brabazon was designed. In the design stage, a system was incorporated to reduce wing bending due to gusts by operating the ailerons symmetrically to oppose the bending. The ailerons were to be operated by a mechanical linkage connected to the wing so that they were driven by wing bending. The system was abandoned before the airplane was flown, and the airplane never went into production. Nevertheless, the project stimulated interest in a flight project at the Royal Aircraft Establishment (RAE) in which a system of this type was tried in a Lancaster bomber. This system used a vane ahead of the nose as a gust detector to operate the ailerons symmetrically through a hydraulic servomechanism. The system was built with little preliminary analysis, and when the pilot engaged the system in flight for the first time, the flight in rough air seemed noticeably more bumpy than without the system. By reversing the sign of the gain constant relating aileron deflection to vane deflection, the ride was made somewhat smoother. Later, an analysis by an RAE engineer named J. Zbrozek showed the reasons for the unexpected behavior. These reasons will be mentioned later in the presentation.
Another experimental program was conducted on a C-47 airplane by the Air Force. This system was similar to that originally planned for the Brabazon. The ailerons were arranged to deflect symmetrically upward with upward wing bending, and vice versa, by means of a linkage which added a component of this deflection to that of the conventional aileron linkage. Since the wing deflection provided a large driving force, no servomechanism was required, and as a result, a system of high reliability was expected. The system suffered from the same objections as the one tested on the Lancaster. In addition, the inertia of the ailerons combined with flexibility of the operating linkage caused the aileron deflection to lag behind the wing deflection. Such a system is very conducive to flutter. To avoid flutter, the ratio between the aileron deflection and wing bending had to be kept to a very low value. As a result, the system was unable to provide more than 9 percent reduction in wing bending moments, a rather small improvement.
Despite the discouraging results of these experiments, the advantages of gust alleviation remained worthwhile. As a result, a project was initiated at Langley to study this subject. These activities are described in the following section.
Background and Analysis of Gust Alleviation
I became head of the Stability and Control Section of the Flight Research Division in 1943. During the wartime years, there was little difficulty in deciding on the type of work to be done by the section. Most of the work was concerned with flying qualities or with improving the control systems of airplanes. Some of this work has been described in the preceding sections or may be further seen from my list of reports (appendix II). By 1947, jet airplanes with power control systems were being developed, and the use of automatic control to improve the stability characteristics of airplanes was a rapidly developing field. Applications of automatic control were therefore important subjects of research. The change in emphasis was recognized in 1952 when I was made head of the Guidance and Control Branch of the Flight Research Division.
In the military services, the new technology of automatic control was largely applied in the development of guided missiles. At Langley, the Pilotless Aircraft Research Division, under Gilruth, had been established to use missiles for aerodynamic research. Some of the engineers in that division, however, became interested in guided missiles. Gilruth and William N. Gardiner started to perform tests on a missile with infrared guidance that used gyroscopic stabilization and employed rocket propulsion techniques developed for aerodynamic testing. It later turned out that this missile was very similar in concept to the Sidewinder missile developed by the Navy, which was one of the first air-to-air guided missiles widely employed by the armed services.
The Deputy Director of the NACA at that time was Dr. Hugh L. Dryden, a noted scientist, who was also a lay preacher in the Methodist Church and a religious man. He did not like to see the NACA centers unnecessarily involved in military work. He issued a directive that no work with military applications should be done at the NACA centers unless requested by the military services.
From the standpoint of a stability and control engineer, the study of missile guidance systems was one of the most interesting and challenging fields available at that time. Nevertheless, I had the same feelings as Dr. Dryden concerning the desire to avoid emphasis on military projects. With this area of research ruled out, I was faced with the question of what type of automatic control research with peacetime applications was of most interest. At that time, airplanes with swept wings were starting to be fitted with yaw dampers to improve the damping of lateral oscillations, particularly at high altitudes. Yaw dampers, however, did not require much research. In most cases, companies were able to hook up a rate gyroscope through an analog amplifier to the power control actuator on the rudder, and the system worked very well in damping out lateral oscillations.
After much thought, I concluded that one field in which automatic control could be applied and which had not previously been studied to any great extent, was gust alleviation. A gust-alleviation system is one that provides the airplane with a smooth ride through rough air. At that time, all transport airplanes had piston engines and flew at altitudes below the top of storm clouds. Fear of airsickness was a common problem and was a deterrent to many people who wanted to travel on commercial airlines.
I was already familiar with one study of gust alleviation that had been made. Philip Donely, who was then a branch head in the Loads Division, had formerly been designer of the gust tunnel at Langley. In this wind tunnel, a model was catapulted at flying speed through a vertical jet of air to simulate a gust in the atmosphere. Evidently while doing this work, Donely had come across a report by Rene Hirsch in France on a study of gust alleviation done as a doctoral thesis and published in 1938 (ref. 13.1). Donely brought this report to my attention. Hirsch had devised a gust-alleviation system in which the halves of the horizontal tail were attached by chordwise hinges. These surfaces were connected by pushrods to flaps on the wing. On encountering an upward gust, the tail halves would deflect up, moving the flaps up and thereby offsetting the effect of the gust. Other features of the system made the airplane insensitive to horizontal gusts and to rolling gusts, all without adversely affecting the ability of the pilot to control the airplane. These many ingenious features are too complex to discuss herein.
I was very impressed by the report for two reasons. First, Hirsch had performed an analysis to determine the relations between the tail, elevator, and flap hinge-moment characteristics; linkage ratios; and other parameters so that the flaps moved exactly the right amount to offset the effect of a gust. This analysis required consideration of so-called stability derivatives for all these components. At that time, stability derivatives were used to describe the stability characteristics of an airplane, such as the variation of pitching moment with angle of attack and the variation of lift coefficient with angle of attack. Prior to 1938, very few people had ever considered similar quantities for control surfaces or flaps, such as variation of flap hinge moment with angle of attack or variation of elevator hinge moment with deflection. Hirsch's analysis required a multitude of these quantities, as well as linkage ratios and other quantities describing the mechanism, all tied together by algebraic equations that were solved to obtain the desired design characteristics of the system.
The second important contribution by Hirsch was that he tested his system with a dynamic model. I have previously mentioned some of my experiments with a dynamic model. In Hirsch's case, the model was mounted in a wind tunnel so that it was free to pitch and to slide up and down on a vertical rod. The tunnel had an open throat and was equipped with a series of slats ahead of the test section, similar to a venetian blind, that could be deflected to produce an abrupt change in the flow direction. When Hirsch tested his model in this artificial gust with the system locked, it immediately banged up against its stop at the top of the rod. With the system working, however, it showed only a small disturbance and settled back to stable flight.
Hirsch was so impressed by these results that he devoted most of his professional life to demonstrating the system in flight. I later corresponded with Hirsch, visited him in France, and saw his airplanes. This part of the story will be mentioned later. At the time I read the thesis, however, I simply realized that gust alleviation was a feasible idea and that with automatic controls, it might be possible to do the job more simply than Hirsch had done with his complex aeromechanical system.
I started, in 1948, to make analyses of the response of example airplanes to sinusoidal gusts and the control motions that would be required to reduce (alleviate) the response of the airplane or the accelerations that would be applied to the passengers. The method used was what has been described earlier as the frequency-response method, which had the advantage that the calculations were greatly simplified when compared with calculating response to discrete gust inputs. These early studies showed, as was known from experience, that control by the elevators alone was ineffective. Control by flaps on the wing was thought to be promising, but the analysis showed that in many cases, the flaps would produce excessive pitching response of the airplane. In general, control by a combination of flap and elevators was required. With flaps alone, successful results could be obtained only by careful attention to the pitching moments applied to the airplane by the gust and by the flaps.
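As a minimal illustration of the frequency-response approach, the sketch below evaluates the normal-acceleration response of a one-degree-of-freedom plunge model to sinusoidal vertical gusts. The model and all numerical values are assumptions chosen for illustration; the actual study used the full multi-degree-of-freedom equations described later.

```python
import numpy as np

# Minimal sketch (assumed plunge-only model, illustrative numbers in SI units):
#   m*dw/dt = -0.5*rho*V*S*CL_alpha*(w - w_g),  normal acceleration a_n = dw/dt
# which gives the transfer function a_n/w_g = Z*s/(s + Z), Z = 0.5*rho*V*S*CL_alpha/m.

rho, V, S, CL_alpha, m = 1.0, 70.0, 45.0, 5.0, 4000.0
Z = 0.5 * rho * V * S * CL_alpha / m          # 1/s

freq_hz = np.logspace(-1, 1, 51)              # 0.1 to 10 Hz
s = 1j * 2.0 * np.pi * freq_hz                # evaluate on the imaginary axis
accel_per_gust = Z * s / (s + Z)              # a_n / w_g for sinusoidal gusts

for f, mag in zip(freq_hz[::10], np.abs(accel_per_gust)[::10]):
    print(f"{f:6.2f} Hz   |a_n/w_g| = {mag:.3f} 1/s")
```

Sweeping frequency in this way, rather than integrating the response to a discrete gust shape, is what made the hand calculations of that era tractable.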
A survey was also made of previous attempts at gust alleviation on full-scale airplanes. These included an airplane made by Waterman with wings pivoted on skewed hinges, a DC-3 modified by the Air Force so that the ailerons deflected symmetrically in response to wing bending, and an Avro Lancaster bomber modified by the British in which the ailerons were moved symmetrically in response to gusts sensed by a vane on the nose. All these attempts, made without any theoretical analysis, had been unsuccessful in that very little alleviation of airplane accelerations was obtained. My analysis showed why each of the attempts had failed. In general, the lack of attention to pitching moments produced by the control surfaces was responsible. In the case of the Lancaster, for example, the symmetric deflection of the ailerons in the upward direction proportional to a positive change in angle of attack produced a positive, or upward, pitching moment. A positive variation of pitching moment with angle of attack represents a decrease in longitudinal stability. This decrease in stability increased the response to low-frequency gusts, which resulted in an increase in the bumpy ride experienced by the pilots. In addition, the aileron motion reduced the damping of the wing bending oscillation, which increased the effect of structural oscillations on the sensation of the pilots. In considering the effect of the ailerons on the pitching moments, it should be realized that a symmetric upward deflection of the ailerons produces a direct effect on the pitching moments of the wing. It also changes the wing lift distribution to produce an increased downwash on the tail due to up aileron deflection, which further increases the destabilizing variation of pitching moment with angle of attack. My analysis showed that the downwash effects on the tail due to the deflection of flaps or ailerons on the wing are very important in designing a gust-alleviation system.
After making these preliminary studies, I analyzed a system in which a gust-sensing vane mounted on a boom ahead of the nose was used to operate flaps on the wing through a hydraulic servomechanism. Any gust-alleviation system working on this principle reduces the lift produced by a change in angle of attack. For complete alleviation, the lift due to angle of attack is reduced to zero. Since the pilot maneuvers the airplane by changing its angle of attack, this system would prevent the pilot from making any longitudinal maneuvers. To restore this capability, the input from the control stick, normally used to move the elevators, was also fed to the flap servomechanism. With this arrangement, when the pilot moves the stick back to make a pull up, the flaps first go down to produce upward lift. Then, as the angle of attack increases in response to the elevator motion, the flaps move back to neutral and the pull up is continued with the airplane at a higher angle of attack. The result is a faster response to control motion than obtained with a conventional airplane. This type of control, in later years, has been called "direct lift control" and is advantageous for control situations requiring rapid response.
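A heavily simplified sketch of such a control law is given below: the flap command combines the gust-vane signal with the pilot's stick input (the direct-lift-control path), and the servomechanism is idealized as a first-order lag. The gains, time constant, and the crude angle-of-attack update are assumptions for illustration, not values from the flight system.

```python
# Sketch only: assumed gains, servo time constant, and a crude stand-in for the
# airplane's pitch response; not the gearings used in the flight program.

K_VANE = -1.0      # flap per unit vane angle of attack (sign chosen to cancel gust lift)
K_STICK = 0.5      # direct-lift-control gearing from stick input to flap
TAU_SERVO = 0.05   # s, assumed first-order servo lag

def flap_command(alpha_vane, stick):
    """Commanded symmetric flap deflection (rad)."""
    return K_VANE * alpha_vane + K_STICK * stick

def servo_step(flap, command, dt):
    """Advance the first-order-lag servo by one time step."""
    return flap + dt * (command - flap) / TAU_SERVO

# Example: a small stick input deflects the flap immediately for quick lift
# response; as the simulated angle of attack builds up, the vane term reduces
# the command and the flap washes back toward neutral.
dt, flap, alpha_vane, stick = 0.01, 0.0, 0.0, 0.1
for k in range(5):
    flap = servo_step(flap, flap_command(alpha_vane, stick), dt)
    alpha_vane += 0.004                       # crude stand-in for the pitch-up
    print(f"t = {k * dt:.2f} s   flap = {flap:+.4f} rad")
```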
The analysis also showed that provision had to be made for avoiding excessive pitching moments due to the gusts and flap deflection. Since the flaps were moved in proportion to angle of attack, any pitching moment from the flaps contributed directly to the variation of pitching moment with angle of attack, which determines the longitudinal stability of the airplane. For satisfactory stability, the pitching moment due to angle of attack must be kept within prescribed limits. The additional contribution due to the gust-alleviation system had the possibility of greatly exceeding these limits, which made the airplane either violently unstable or excessively stable. It was possible to solve the equations to determine the flap and elevator motion to completely offset both the lift and pitching moments applied to the airplane. This analysis showed that this objective could be attained in two ways. In one method, both the flap and elevator had to be moved in response to the gust, but the elevator motion was not in phase with the flap motion and generally had to lag behind the flap motion. In the other method, the elevator and flap moved in phase, but the downwash from the flaps on the tail had to be of opposite sign from that normally encountered. That is, down flap deflection had to produce an upwash at the horizontal tail.
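As a rough illustration of the quasi-static version of this problem, the two cancellation conditions form a pair of linear equations in the flap and elevator deflections per unit gust angle of attack. The derivative values below are invented for illustration (they are not the C-45 data), and the sketch ignores the penetration and downwash lags that the full analysis had to account for.

```python
import numpy as np

# Solve the quasi-static cancellation conditions (illustrative derivatives only):
#   CL_a*a_g + CL_df*d_f + CL_de*d_e = 0    (no net lift change)
#   Cm_a*a_g + Cm_df*d_f + Cm_de*d_e = 0    (no net pitching-moment change)

CL_a, CL_df, CL_de = 5.0, 2.0, 0.4    # lift derivatives, per rad (assumed)
Cm_a, Cm_df, Cm_de = -0.8, 0.3, -1.2  # pitching-moment derivatives, per rad (assumed)

A = np.array([[CL_df, CL_de],
              [Cm_df, Cm_de]])
b = np.array([-CL_a, -Cm_a])          # right-hand side, per unit gust angle of attack

d_flap, d_elev = np.linalg.solve(A, b)
print(f"flap deflection per unit gust angle of attack:     {d_flap:+.2f}")
print(f"elevator deflection per unit gust angle of attack: {d_elev:+.2f}")
```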
Having reached this stage in the analysis, the results appeared sufficiently promising to warrant a report and a flight program to demonstrate a gust-alleviation system. A job order request was submitted about June 1948 to obtain official approval for this work. This job order is presented here to illustrate the type of request required to get a research program approved.
Note the man-hour estimate. The value of 3000, about 1.5 man years, was just a guess based on the effort put into previous reports. The cost of $6000 was based on a standard rule of $2 per man-hour. Little effort was required for this phase of the job approval.
When the work described in this job order was undertaken, I appointed Christopher C. (Chris) Kraft, Jr. as project engineer. Kraft was then an engineer in my section with experience in flying qualities and with work using the free-fall and wing-flow methods. Later in his career, he was a flight controller during the Apollo missions and was made director of the Johnson Space Flight Center following Dr. Gilruth's retirement. Kraft and I made additional calculations to have a logical series of examples to place in the report, which was later entitled Theoretical Study of Some Methods for Increasing the Smoothness of Flight through Rough Air (ref. 13.2). The report starts with a review of the available data on the causes of airsickness. The main source of data, which is still believed to be the best available, was a series of tests made during WW II at Wesleyan University, in which subjects were tested with various wave forms of vertical acceleration in an elevator. The results showed that relatively low-frequency variations of acceleration that had periods of 1.4 seconds and greater were the most important causes of motion sickness.
Excellent control of the directional and rolling disturbances of airplanes could be obtained with conventional autopilots available at the time the report was written. These devices, however, were relatively ineffective in reducing the vertical accelerations of airplanes. The report therefore concentrated on the problems of longitudinal gust alleviation and control.
Following a section on the theoretical analysis, examples were included in the report to show the use of elevator alone, flaps alone, and a combination of flap and elevator motion to offset the effect of gusts. Next, two types of gust-sensing devices, a vane ahead of the nose and an accelerometer in the airplane, were considered. Because these systems affect the controllability of the airplane, detailed studies were made of static and dynamic longitudinal stability of airplanes incorporating these systems.
In various studies of gust-alleviation systems made by other researchers since the one described herein, the now popular approach of optimal control theory has been applied. In this method, some balance is sought between the amount of control motion required by the system and the amount of reduction of acceleration achieved. In the study made by Phillips and Kraft, however, an effort was made to achieve complete gust alleviation within the limits imposed by the assumptions of the theory. Complete alleviation was found to be possible with the vane-type sensor, but not with the accelerometer sensor. Inasmuch as complete alleviation was found to be possible with reasonable control motions, the use of optimal control theory when a vane sensor is used is really not optimal and represents an inappropriate application of this theory.
The theory used for the analysis is very similar to that taught by Professor Koppen in my courses at MIT. It was based, in turn, on the theory first presented by G. H. Bryan in England in 1903 and later in the textbook Stability in Aviation in 1911 (ref. 4.1). This theory shows that the longitudinal and lateral motions of the airplane can be considered separately and assumes small disturbances so that all aerodynamic forces and moments can be assumed to vary linearly with the magnitude of the disturbance. This theory was extended to calculate the response of an airplane to gusts and reported in NACA Report No. 1 and later reports in the period 1915-1918 by Edwin B. Wilson, then a professor of Physics at MIT. To study the effect of a gust-alleviation system, it was necessary only to add to Wilson's theory the additional forces and moments caused by the airplane's control surfaces as they were moved by the gust-alleviation system. Like Wilson, I considered that the gust was constant across the span, though this subject was studied in more detail later.
Two refinements were added to the theory, as presented by Wilson, that were found to be of importance for the study of gust-alleviation systems and that are believed to make the results very close to what would be obtained with an exact computer simulation such as would be possible with modern electronic computers. First, the penetration effect was considered, that is, the difference in the time of penetrating the gust by a vane ahead of the nose, the wing, and the tail. In Wilson's study, the gust was assumed to affect all parts of the airplane simultaneously. Second, the time lead or lag effects caused by gust penetration were approximated by a linearized representation to keep the equations linear. This technique had been used by Cowley and Glauert in a British report published in 1921 to improve the calculation of pitch damping of airplanes by taking into account the time for the downwash leaving the wing to reach the tail (ref. 13.3). A similar method was used in the present analysis to account for all the lead and lag effects, such as the lead of the vane in penetrating the gust, the lag in response of the servomechanism operating the flaps, and the lag of downwash from the wing and flaps in hitting the tail, in addition to the lag of the gust itself in reaching the tail. These lead and lag effects were found to be very important, particularly in affecting the damping of the short-period longitudinal motion of the alleviated airplane.
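The flavor of such a linearized lag representation can be seen in a generic example: a pure transport delay of tau seconds (for instance, the time for the gust or the wing downwash to travel some distance at the flight speed) is replaced by the first-order lag 1/(1 + tau*s), which matches the true delay closely at the low frequencies that dominate the gust response. The speed and distance below are assumed, and this is not the specific form of the terms in the report.

```python
import numpy as np

# Generic illustration: compare the phase of a pure delay exp(-tau*s) with the
# first-order-lag approximation 1/(1 + tau*s) over gust-response frequencies.

V = 70.0                  # m/s, assumed flight speed
distance = 10.0           # m, assumed travel distance (e.g., wing to tail)
tau = distance / V        # s, transport delay

freq_hz = np.array([0.1, 0.5, 1.0, 2.0, 5.0])
omega = 2.0 * np.pi * freq_hz

phase_delay = np.degrees(-omega * tau)           # exact delay phase
phase_lag = np.degrees(-np.arctan(omega * tau))  # first-order-lag phase

for f, p_exact, p_approx in zip(freq_hz, phase_delay, phase_lag):
    print(f"{f:4.1f} Hz: delay {p_exact:7.1f} deg, first-order lag {p_approx:7.1f} deg")
```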
A useful advantage was found in this approach in that all the effects of the alleviation systems studied could be considered as changes to stability derivatives of the basic airplane. Most of the effects of these derivatives were known from experience, or at least had simple physical interpretations. In addition, the order of the equations was not increased over that of the basic airplane.
These equations made it possible to solve for the characteristics of the system that would produce complete gust alleviation, that is, that would produce zero response to a gust. In examining these formulas, I suddenly realized that the results had a simple physical interpretation. This interpretation is shown in figure 13.2, in which an airplane with a vane-type gust-alleviation system is shown penetrating a region in the atmosphere where there is a change in vertical gust velocity (called a step gust). First, the vane is deflected by the gust. If the servomechanism operating the flaps has a time lag equal to the time for the gust to reach the wing, then the flap moves just at the right time and the right amount to offset the lift change on the wing due to the gust. For the airplane response to be zero, however, the pitching moment about the center of gravity caused by the flap deflection must be zero. This condition is not ordinarily obtained with conventional wing flaps, but may be obtained by moving the elevators the correct amount in phase with the flaps. A little later, the tail is affected by the gust and by the downwash from the flaps. The effect of the forces on the tail can be eliminated in two ways. Either the flap downwash must be equal and opposite to the effect of the gust on the tail or the elevator must be given an additional movement to cancel the combined effect of the flap downwash and the gust.
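Two of the numbers implied by this interpretation are easy to estimate: the servo lag must equal the gust travel time from the vane to the wing, and the flap gearing must cancel the wing lift change due to the gust. The values below are illustrative assumptions, not the C-45 geometry or derivatives.

```python
# Illustrative numbers only (assumed geometry and derivatives, not the C-45 data).

V = 70.0               # m/s, assumed flight speed
vane_to_wing = 6.0     # m, assumed distance from the nose vane to the wing

servo_lag = vane_to_wing / V                 # s, required servo time lag
print(f"required servo lag: {servo_lag * 1000:.0f} ms")

CL_alpha = 5.0         # wing lift-curve slope, per rad (assumed)
CL_flap = 2.0          # lift per unit symmetric flap deflection, per rad (assumed)
print(f"flap gearing: {-CL_alpha / CL_flap:+.2f} rad of flap per rad of gust alpha")
```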
The simple interpretation of the action of a perfect gust-alleviation system has interesting ramifications. First, the influence of the feedback of angle of attack from the vane makes the alleviation system a closed-loop control system, but the feedback decreases as the system approaches the condition of complete gust alleviation. In this condition, the system behaves as an open-loop control, because there is no motion of the airplane to be sensed by the vane. Second, the consideration of lag effects is seen to be exact in the limiting case of perfect alleviation, even though these results were obtained from an approximate linearized theory. Third, the discovery of this interpretation could presumably have been made a priori, without use of any theory, but in my case, working through the theory first and examining the resulting formulas was necessary for me to realize that this interpretation existed. Many simple physical principles in the history of physics have been realized only after long periods of thought and analysis by their discoverers.
The vane-type system studied has some disadvantages when adjusted to give complete alleviation. The system results in an airplane with zero lift and zero pitching moment due to angle of attack. The airplane would respond to pilot's commands as provided by the direct lift control system, but would have no inherent tendency to stabilize in a new equilibrium condition. For this reason, systems were studied that retained a small amount of longitudinal stability. Use of these systems was found to be feasible and did not seriously reduce the gust-alleviation properties of the perfect system.
The studies of the use of an accelerometer to sense the gusts showed that the gain of the system, that is, the amount of flap deflection used for a given change in acceleration, had to be limited; otherwise, a poorly damped short-period vertical oscillation would result. As a result, the amount of alleviation potentially available with this system was limited. On the other hand, the use of the accelerometer sensor inside the airplane avoids the problem of having a delicate vane exposed to potential damage from handling. In practice, the system with an accelerometer sensor would be much more likely to excite structural oscillations of the airplane, thereby further limiting the gain and requiring a more detailed analysis to insure the safety of the system.
Design and Test of a Gust-Alleviated Airplane
During the publication process of the report on the analysis, work was started on a program to demonstrate gust alleviation in flight. The airplane chosen for the program was a Beech B-18, a small twin-engine transport. The airplane was obtained from the Navy and had the Navy designation C-45. Because of the need for some major alterations to the airplane control surfaces, Kenneth Bush and Edwin C. Kilgore of the Engineering Division were called into the project to do the design work. Steve Rock of the Instrument Research Division was assigned to design the servomechanism to actuate the control surfaces.
At the time the design was started, about 1950, electronic control systems had a poor reputation for reliability. These systems used vacuum-tube amplifiers. Furthermore, the use of techniques of redundancy to improve reliability had not then been developed. For these reasons, many of the design features were governed by safety considerations. All autopilots in use at that time were designed so that the pilot could readily overpower the autopilot with his manual control system in the event of a failure. This method could not be used with the gust-alleviation system because the wing flaps, which required large operating forces, were not normally connected to the pilot's control stick. As a result, the design features described in the following paragraphs were incorporated.
A drawing of the airplane as modified for the gust-alleviation project is shown in figure 13.3. A boom was built on the nose to hold the angle-of-attack vane as shown in figure 13.4. The wing flaps, which normally deflect only downward, were modified to move up and down because both up and down gusts must be counteracted by the system. The elevator was split into three sections, the two outboard segments being linked to the flaps for use with the gust-alleviation system and the inboard segment being used in the normal manner for pitch control (figure 13.5). Finally, small segments of the flaps near the fuselage were driven separately from the rest of the flap system so that they could be geared to move either in the same direction or in the opposite direction from the rest of the flaps (figure 13.6).
As stated previously, perfect gust alleviation according to the theory could have been obtained either by driving the elevator separately from the flaps with a different phase relationship, or by gearing the elevator directly to the flaps and altering the downwash from the flaps to offset the gust at the tail. The latter method was selected for the following reasons. With a direct mechanical linkage between the flaps and the outboard elevators, the pitching moment due to flap deflection, a critical quantity for longitudinal stability, could be finely adjusted as required and would hold its setting. If a separate servomechanism and electronic amplifier had been used to operate the elevators, the gain of the amplifier might have drifted and caused the airplane to become unstable. In fact, the gain of the amplifier between the vane and the flaps often did vary in flight by amounts that could have caused violent instability if a similar amplifier had been used to operate the elevators.
The small inboard segments of the flaps were used to reverse the direction of the flap downwash at the tail, as required by the theory if the elevators moved in phase with the flaps. This method, of course, reduces the flap effectiveness in producing lift. To regain sufficient flap effectiveness, the flaps and ailerons were geared together so that they deflected symmetrically for gust alleviation. These surfaces were driven by an electrical-input, variable-displacement-pump hydraulic servomechanism, which was taken from a naval gun turret. Electrical signals from the vane and the pilot's control column were combined in a vacuum-tube amplifier and fed to the control valve of the servomechanism. For lateral control, the entire flap and aileron system was deflected asymmetrically through a separate servomechanism of the same type. The system was designed so that the airplane could be flown through its original manual control system if the alleviation system failed or was switched off. In this configuration, the control wheel on the pilot's side remained connected at all times to the inboard segment of the elevator and to the ailerons. In the gust-alleviation mode, the ailerons were driven symmetrically through preloaded spring struts. This system remained connected in the manual mode, but the pilot could overpower the forces in the preloaded struts with his inputs to the control wheel. With the system in the gust-alleviation mode, the control wheel on the copilot's side was used to apply control inputs through the electronic control system. In addition, it remained connected through the mechanical linkage to the inboard portion of the elevator.
When the pilot turned the alleviation system off, the actuator driving the flaps was bypassed, and a separate hydraulic actuator with its own accumulator forcibly drove the flaps to neutral with a caliper-like linkage that could capture the flaps in any position.
Changes in angle of attack due to change in airspeed or drift of the amplifier used to operate the flaps could have caused the trim position of the flaps to vary slowly in flight. To maintain the trim position of the flaps at zero deflection over long periods, a mechanical ball-disk integrator driven by the flap linkage was used to feed an additional signal into the flap servomechanism to slowly run the flaps to the neutral position. This system had a time constant of 10 seconds, which was slow enough to avoid interference with the gust-alleviation function or the control of the airplane. To provide automatic control of the lateral and directional motion of the airplane in the gust-alleviation mode, a Sperry A-12 autopilot was connected to the aileron and rudder systems. This autopilot was the most advanced type available at the time of the tests.
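The slow trim action can be pictured as a simple integral path that washes any steady flap offset back to neutral with the 10-second time constant mentioned above. The discrete form below is only a sketch of that behavior, not a model of the ball-disk mechanism.

```python
# Sketch of the slow automatic trim (assumed first-order wash-out with the
# 10 s time constant quoted in the text; not the ball-disk hardware).

TAU_TRIM = 10.0    # s
DT = 0.1           # s

trim_bias = 0.0
flap_offset = 2.0  # deg, a steady offset from, say, amplifier drift

for _ in range(300):                       # 30 seconds of simulated time
    flap = flap_offset + trim_bias
    trim_bias += DT * (-flap) / TAU_TRIM   # integrator slowly recenters the flap

print(f"mean flap position after 30 s: {flap_offset + trim_bias:.2f} deg")
```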
Finally, the pilots landed the airplane with the wing flaps in neutral. This operation did not pose any problem on the long runways at Langley Field, but the design of a high-lift flap system that can also provide upward deflection of the flap for gust alleviation remains one of the engineering problems of such systems that no one has yet tried to solve.
The airplane was instrumented with strain gauges to measure wing shear and bending moments at two stations and tail shear and bending moment at the root. The project became a joint project with the Aircraft Loads Branch. After the tests were completed, two reports were published by each group: an initial and a final report on the gust-alleviation characteristics and an initial and a final report on the loads (refs. 13.4, 13.5, 13.6, and 13.7).
The tests occupied a long period of time. One of the main problems encountered was finding suitable rough air. The NACA test pilots were understandably conservative in flying experimental airplanes and usually declined to do test flying in clouds or stormy weather. Clear-air turbulence occurred on occasions after passage of a cold front. These conditions were used as often as possible in making the tests, but frequently the turbulence was of low intensity. For these reasons, some of the data were not as extensive as might have been desired.
Despite these problems, results were obtained with various sets of gearings to obtain varying degrees of longitudinal stability, and cases with the inboard flaps in neutral and moving oppositely from the outboard flaps were studied. A time history comparing the airplane motions in flight through rough air with the system on and off is shown in figure 13.7. In these runs, obtained early in the test program, the inboard flap segments were locked.
The results of later tests in which the inboard flap segments moved oppositely from the rest of the flap system are shown in figure 13.8. These results are shown as power spectral densities of the normal acceleration and pitching moment plotted on log-log scales. The results show that the system was effective in reducing the response in both normal acceleration and pitching velocity at frequencies below about 2 hertz. These plots of power spectral density are the results of the usual evaluation procedure for randomly varying quantities, but they do not give a very clear comparison of the results obtained with the alleviation system on and off. The power spectra present the square of the recorded quantities, which tend to exaggerate the differences, while plotting on log-log paper tends to reduce the apparent differences. The question therefore arises as to how the data could be compared to give an impression of the effectiveness of the system that is more meaningful to the user.
For this reason, data on the normal acceleration responses are plotted in two different ways in figure 13.9. In part (a) of this figure, results are plotted on linear scales in the form of a transfer function, that is, the ratio of normal acceleration to gust angle of attack for sinusoidal inputs of various frequencies. This plot is of interest to the control engineer and shows correctly the relative magnitude of the normal acceleration for the two cases at various frequencies, but it does not include the variation of the actual gust forcing function with frequency. In part (b), the square root of the power spectral density of the response is plotted as a function of frequency on linear scales. This plot attempts to show the actual magnitude of the normal acceleration at each frequency and is believed to be more meaningful in interpreting the passengers' impression of a ride in the airplane.
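A brief sketch of the two presentations discussed here, using an invented acceleration record; the sample rate, spectral parameters, and signal content are assumptions made only to illustrate the plotting choices, not data from the tests.

```python
import numpy as np
from scipy import signal
import matplotlib.pyplot as plt

# Invented normal-acceleration record: a low-frequency gust-response component
# near 0.2 Hz plus broadband noise (for illustration only).
fs = 50.0                                   # samples per second (assumed)
t = np.arange(0.0, 600.0, 1.0 / fs)
rng = np.random.default_rng(0)
accel = 0.3 * np.sin(2.0 * np.pi * 0.2 * t) + 0.05 * rng.standard_normal(t.size)

f, psd = signal.welch(accel, fs=fs, nperseg=4096)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9.0, 3.5))
ax1.loglog(f[1:], psd[1:])                  # conventional log-log power spectrum
ax1.set(xlabel="frequency, Hz", ylabel="PSD of acceleration")
ax2.plot(f, np.sqrt(psd))                   # square root of PSD on linear scales
ax2.set(xlabel="frequency, Hz", ylabel="sqrt(PSD)", xlim=(0.0, 2.0))
plt.tight_layout()
plt.show()
# The log-log power spectrum compresses the comparison, while the square-root
# plot on linear scales shows the magnitude of the response at each frequency.
```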
As can be seen, the basic airplane has a large peak in the acceleration response at a frequency of about 0.2 hertz. This low-frequency peak is typical of the response of unalleviated airplanes and occurs because of the larger amplitude of the turbulence input at low frequencies. In the case of the alleviated airplane, the response is reduced to a fairly constant, relatively low value in the frequency range between 0 and 1.6 hertz. For the degree of turbulence encountered, this reduction greatly improved the subjective impression of ride comfort. Though the results are not shown on the plots, the response at frequencies above 2 hertz with the alleviation system operating was slightly increased above that of the basic airplane. Also, in turbulence of relatively large magnitude, the pilots noted a fore-and-aft oscillation caused by drag of the flaps at large deflections.
Though the results obtained were quite gratifying, the question arises as to why the performance of the system was not better, inasmuch as the theory predicted perfect gust alleviation. One of the main reasons was the nonlinear characteristics of the servomechanism used to drive the flap system. The designer of the system in the Instrument Research Division had been requested to provide a rather sharp cutoff in the response beyond 2 hertz, to avoid the possibility of exciting wing flutter. The C-45 was a very stiff airplane, with the wing primary bending mode at 8 hertz. It was intended, therefore, that the output of the servomechanism should be close to zero at a frequency of 8 hertz. It was not until after the device was installed that it was found that the cutoff in response had been obtained by rate limiting the output. This provided a steep cutoff, but the response was a function of amplitude. The result was that in a large amplitude gust, rate limiting was encountered and the alleviation was reduced just when it was needed most. A later unpublished investigation of the effects of rate limiting demonstrated these adverse effects and showed that with sufficiently severe rate limiting, a continuous sawtooth oscillation of the flap would be encountered in rough air, which resulted in response greater than that of the basic airplane.
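The amplitude dependence described here can be seen in a toy simulation of a rate-limited servo; the tracking gain, rate limit, and test frequency below are assumed values chosen only to show the effect, not parameters of the C-45 installation.

```python
import numpy as np

def rate_limited_servo(command, dt, gain=30.0, rate_limit=1.0):
    """First-order tracking servo whose output rate is clipped (assumed toy model)."""
    output = np.zeros_like(command)
    for i in range(1, command.size):
        rate = gain * (command[i - 1] - output[i - 1])    # proportional tracking
        rate = np.clip(rate, -rate_limit, rate_limit)     # rate limiting
        output[i] = output[i - 1] + rate * dt
    return output

dt = 0.001
t = np.arange(0.0, 10.0, dt)
freq = 2.0                                                # hertz, near the intended cutoff

for amplitude in (0.05, 0.2, 0.8):
    cmd = amplitude * np.sin(2.0 * np.pi * freq * t)
    out = rate_limited_servo(cmd, dt)
    ratio = np.ptp(out[t > 5.0]) / np.ptp(cmd[t > 5.0])   # steady-state amplitude ratio
    print(f"command amplitude {amplitude:4.2f} -> output/command ratio {ratio:4.2f}")
# Small commands are followed closely, but large commands saturate the rate
# limit, so the attenuation (and phase lag) grows with amplitude: the cutoff is
# steep, yet the response is a function of amplitude, as the text notes.
```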
A second problem was the nonlinear lift characteristics of the flaps. The flaps deflected about plus or minus 25 degrees, but the slope of the curve of lift versus flap deflection fell off markedly before this deflection was reached. As a result, a fixed gain between the vane and the flaps did not give a uniform value of the ratio of lift to gust angle of attack. This nonlinear response would be expected to introduce higher frequency harmonics into the response and probably accounts for the increase in response beyond 2 hertz. This and other characteristics were not known before the tests were made. I concluded that on any future project of this type, wind-tunnel tests of the airplane to determine the control characteristics would be advisable. Despite these deficiencies, enough was learned to show the feasibility of gust alleviation and to show how the results could be improved in a future attempt.
Analytical Studies of Additional Problems
The studies of gust alleviation introduced several problems that had not been considered previously in airplane design. One problem was the effect of variations in gust velocity across the wing span. The use of a single vane on the centerline of the airplane to sense the gusts would work perfectly if the gust velocity were constant across the wing span, but would be less effective if different values of gust velocity were encountered at different stations along the span. This problem can be studied if the turbulence in the atmosphere is assumed to be isotropic (or for this application, axisymmetric); that is, it has the same characteristics regardless of the direction in which the airplane is flying. This assumption appears reasonable for most types of turbulence. Isotropic turbulence had been studied theoretically, and the nature of the spectrum of turbulence, that is, the way the gust velocity varies with gust wavelength, had been determined. The gust velocity has been found experimentally to vary approximately directly with the wavelength even for wavelengths many times larger than the wing span of the largest existing airplanes. The most intense gusts, which cause the most disturbance to the airplane, have wavelengths long compared to the wing span and are therefore approximately constant across the span. Gusts with wavelengths short compared to the span have low intensity and therefore do not disturb the airplane much. The use of a single vane on the centerline is therefore quite effective. Gusts of wavelength short compared to the span may cause up and down loads that average out across the span. For these gusts, the response of the vane at the centerline would be too large. These gusts have such high frequency, however, that the vane response is filtered by the lag in response of the flap servomechanism. For a vane located ahead of the nose, the lag in the flap servomechanism is also beneficial in delaying the response sensed by the vane until the gust reaches the wing. Considering these factors, a gust-alleviation system using a vane on the centerline may be shown to be about 98 percent as effective in axisymmetric turbulence as it would be with gusts constant across the span. This problem was studied in more detail after the flight investigation was completed (refs. 13.8 and 13.9).
An interesting optimization problem, which to my knowledge has not been solved, is to determine the optimal filter to place between the vane and the flap to obtain the greatest amount of gust alleviation, considering the distance of the vane ahead of the wing, the wing span, and the spectrum of atmospheric turbulence. This problem is only of academic importance, however, as about 98 percent alleviation was obtained by using a second-order linear filter with reasonable frequency and 0.7 critical damping (refs. 13.6 and 13.7).
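For reference, the frequency response of a second-order filter with 0.7 of critical damping can be sketched as below; the 2 Hz natural frequency is an assumed value chosen to match the cutoff discussed earlier, not a figure from the cited reports.

```python
import numpy as np
from scipy import signal

zeta = 0.7                          # fraction of critical damping (from the text)
wn = 2.0 * np.pi * 2.0              # natural frequency, rad/s (assumed 2 Hz)

# Second-order low-pass filter between the vane signal and the flap command.
filt = signal.TransferFunction([wn**2], [1.0, 2.0 * zeta * wn, wn**2])
freqs_hz = np.array([0.2, 0.5, 1.0, 2.0, 4.0, 8.0])
_, mag_db, phase_deg = signal.bode(filt, w=2.0 * np.pi * freqs_hz)

for f, m, p in zip(freqs_hz, mag_db, phase_deg):
    print(f"{f:4.1f} Hz: gain {m:6.1f} dB, phase {p:7.1f} deg")
# Gust components well below the corner pass almost unchanged, while the
# response near an 8 Hz structural mode is attenuated by roughly 24 dB.
```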
Some time after completion of the tests, a summary report was given as part of a lecture series at Rensselaer Polytechnic Institute (ref. 13.10). This paper gives a more complete discussion of the subject of gust alleviation than is contained herein.
Future Possibilities of Gust Alleviation
Some review of later efforts in the field of gust alleviation may be of interest, inasmuch as the development of computers and automatic control technology would permit approaches quite different from that used in the early NACA tests. It was a disappointment to me that very little effort was made by aircraft companies to incorporate provision for gust alleviation, even after advances in control technology had made it more feasible.
To my knowledge, the only airplane in service that incorporates a system performing some gust-alleviation function is the Lockheed 1011. In some later models, the wing span was increased by extending the wing tips to allow the airplane to carry greater loads. To avoid changing the wing structure to withstand greater bending moments, the ailerons were operated symmetrically by an automatic control system to reduce bending moments due to gusts.
One reason for the lack of interest in gust alleviation is that following the NACA tests, jet transports were introduced. As a result of higher wing loading, swept wings, flight at higher altitudes, and the use of weather radar to avoid storms, these airplanes were much less likely to encounter violent airplane motions that would cause airsickness. In addition, the problem of gust alleviation became more difficult because the structural flexibility of these airplanes placed their structural frequencies closer to the frequency range of interest for gust-alleviation. As a result, structural response would have to be considered in designing the system. In recent years, these reasons for avoiding the use of gust-alleviation systems have become less significant. Extensive use is now made of commuter airplanes that fly at lower altitudes and frequently encounter rough air. In addition, methods have been developed to analyze the structural response and to damp out the structural modes by use of automatic control systems.
Review of Work by René Hirsch
In closing, a brief review is given of the work of René Hirsch, whose thesis was mentioned at the beginning of this chapter. Also, a few programs and studies applicable to gust alleviation that have occurred since the NACA program on the C-45 are reviewed.
After the review of Hirsch's thesis, nothing more was heard of his activity until the early 1950s, during the course of the NACA program. At this time, a French report was discovered revealing that, following WW II, Hirsch had made additional wind-tunnel tests in the French large-scale tunnel at Chalais-Meudon on a model of a proposed airplane and had built this small twin-engine airplane incorporating his system. The airplane was envisioned as a quarter-scale model of a piston-engine transport of a class similar to the Douglas DC-6 or the Lockheed Constellation, which were the largest transports in service in that period. Correspondence was established with Hirsch and additional reports and information were obtained (ref. 13.11). Hirsch's airplane had a wing span of 27 feet and had two 100-horsepower motors. It incorporated the same system described in his thesis, in which the halves of the horizontal tail moved on chordwise hinges to operate flaps on the wings. The conventional elevators provided not only pitching moments, but moved the tail halves about their chordwise hinges to cause the flaps to move in the direction to provide direct lift control. In this way, the loss of longitudinal control due to the gust-alleviation system was overcome. Hirsch's airplane incorporated many other ingenious features, including provisions for reducing rolling moments due to rolling gusts and lift due to horizontal gusts. His design also incorporated large pneumatic servos operated by dynamic pressure to restore damping in roll and to stabilize the rate of climb or descent. The airplane had good handling qualities and appeared to have been very successful in providing a smooth ride, as shown by some time histories in rough air with the system turned on and off. After about 30 flights, the airplane ran into a ditch at the end of the runway and was damaged.
No more was heard until 1967, when another report appeared showing that the airplane had been rebuilt and equipped with two 180-horsepower motors (ref. 13.12). In this condition, the airplane made numerous additional flights with somewhat more complete instrumentation. A photograph of the airplane in flight is shown in figure 13.10. From the data obtained, the results appeared very similar to those obtained with the NACA C-45 in that the accelerations due to gusts were reduced by about 60 percent at frequencies below about 2 hertz, but were increased somewhat at higher frequencies.
All of Hirsch's work in designing and building his airplanes was done with his own funds, though some help with instrumentation was obtained from ONERA, the French equivalent of the NACA. Outside of France, Hirsch's work was little known. His reports described the work in general, but were not sufficiently detailed to give engineering data on all of the ingenious ideas and systems incorporated on his airplanes.
In 1975 during a trip to France, I visited Hirsch. He had found at that time that a twin-engine airplane was too expensive for him to operate during the oil crisis, and he had donated it to the French Air Museum. When I saw the airplane, it stood like a little jewel amid a group of dilapidated antique airplanes in an old WW I hangar at Villaroche. It is now on display at the new French Air Museum at Le Bourget.
At that time, Hirsch was starting modification of a single-engine light plane, the Aerospatiale Rallye, to incorporate his gust-alleviation system. This airplane was completed and flying when I visited France again in 1980. Hirsch is shown standing in front of his Rallye airplane in figure 13.11. This airplane was never as successful, in Hirsch's opinion, as the first one, probably because of its lower airspeed and lower frequency of response of the flap systems. In recent years (1995), Hirsch modified a third airplane, the Socata Trinidad, with small canard surfaces ahead of the wing root to operate the flaps. The more forward position of these sensing surfaces was intended to improve the response to high-frequency gusts. Hirsch died in August 1995 at the age of 87 without having had the opportunity to test his latest design.
Hirsch's dedication to the pursuit of gust alleviation is a remarkable story in view of the general disregard of this subject by the rest of the aviation industry. Of course, the aeromechanical systems used by Hirsch have been superseded by automatic controls using computers and electro-hydraulic actuators. Hirsch readily admitted that he would prefer such systems but was unable to afford them.
Later Studies by Other Investigators
Though a number of studies of gust alleviation have been made during the years since the C-45 tests, most of them have not contributed any notable new developments. Only two are mentioned to bring the subject up to date. One is the so-called LAMS project, an acronym for Load Alleviation and Mode Stabilization, conducted at the Air Force Flight Dynamics Laboratory at Wright-Patterson Air Force Base, Ohio, about 1968-1969 (ref. 13.13). In this project, a B-52 bomber was equipped with an electronic analog-type flight control system to operate the existing flight controls to damp out structural modes. This work is important because consideration of damping of structural modes would be required in any attempt to install a gust-alleviation system in a high-speed airplane. The report illustrates the success of modern control analysis techniques (as they existed at that time) in designing a modal damping system and in predicting the results obtained.
The second contribution of note is the analytical work of Dr. Edmund G. Rynaski, formerly of Calspan and now at EGR Associates, in designing a system to alleviate both the rigid-body motions and selected structural modes of an airplane (ref. 13.14). Rynaski's work, based on matrix analysis techniques, shows how to provide essentially open-loop control of the rigid-body modes, as was done on the C-45 airplane, as well as to provide open-loop cancellation of a selected number of structural modes. This method also makes possible improved damping of higher order structural modes by use of closed-loop control.
With the availability of digital flight computers and modern control actuators, different approaches should be considered for gust alleviation. One approach would be to operate the flaps on the wing as a function of angle of attack sensed by a vane or similar device, but to operate the elevators by a modern reliable pitch damper as part of a longitudinal command control system to control the pitching response of the airplane. Another approach would be to calculate absolute gust velocity on line by a method similar to that referred to previously in the work by Crane and Chilton in measuring gust velocity (ref. 12.5). This method requires correcting the angle of attack measured by a vane for the inertial motions of the airplane at the vane location. This signal could be used as an input into a gust-alleviation system without the need to modify the normal control or handling qualities of the airplane. | http://history.nasa.gov/monograph12/ch13.htm | 13 |
58 | The term race or racial group usually refers to the concept of categorizing humans into populations or groups on the basis of various sets of characteristics. The most widely used human racial categories are based on visible traits (especially skin color, cranial or facial features and hair texture), and self-identification.
Conceptions of race, as well as specific ways of grouping races, vary by culture and over time, and are often controversial for scientific as well as social and political reasons. The controversy ultimately revolves around whether or not races are natural types or socially constructed, and the degree to which perceived differences in ability and achievement, categorized on the basis of race, are a product of inherited (i.e. genetic) traits or environmental, social and cultural factors.
Some argue that although race is a valid taxonomic concept in other species, it cannot be applied to humans. Many scientists have argued that race definitions are imprecise, arbitrary, derived from custom, have many exceptions, have many gradations, and that the numbers of races delineated vary according to the culture making the racial distinctions; thus they reject the notion that any definition of race pertaining to humans can have taxonomic rigour and validity. Today most scientists study human genotypic and phenotypic variation using concepts such as "population" and "clinal gradation". Many contend that while racial categorizations may be marked by phenotypic or genotypic traits, the idea of race itself, and actual divisions of persons into races or racial groups, are social constructs.
Given visually complex social relationships, humans presumably have always observed and speculated about the physical differences among individuals and groups. But different societies have attributed markedly different meanings to these distinctions. For example, the Ancient Egyptian sacred text called Book of Gates identifies four categories that are now conventionally labeled "Egyptians", "Asiatics", "Libyans", and "Nubians", but such distinctions tended to conflate differences as defined by physical features such as skin tone, with tribal and national identity. Classical civilizations from Rome to China tended to invest much more importance in familial or tribal affiliation than with one's physical appearance (Dikötter 1992; Goldenberg 2003). Ancient Greek and Roman authors also attempted to explain and categorize visible biological differences among peoples known to them. Such categories often also included fantastical human-like beings that were supposed to exist in far-away lands. Some Roman writers adhered to an environmental determinism in which climate could affect the appearance and character of groups (Isaac 2004). In many ancient civilizations, individuals with widely varying physical appearances became full members of a society by growing up within that society or by adopting that society's cultural norms (Snowden 1983; Lewis 1990).
Julian the Apostate was an early observer of the differences in humans, based upon ethnic, cultural, and geographic traits, but as the ideology of "race" had not yet been constructed, he believed that they were the result of "Providence":
Come, tell me why it is that the Celts and the Germans are fierce, while the Hellenes and Romans are, generally speaking, inclined to political life and humane, though at the same time unyielding and warlike? Why the Egyptians are more intelligent and more given to crafts, and the Syrians unwarlike and effeminate, but at the same time intelligent, hot-tempered, vain and quick to learn? For if there is anyone who does not discern a reason for these differences among the nations, but rather declaims that all this so befell spontaneously, how, I ask, can he still believe that the universe is administered by a providence? — Julian, the Apostate.
Medieval models of "race" mixed Classical ideas with the notion that humanity as a whole was descended from Shem, Ham and Japheth, the three sons of Noah, producing distinct Semitic (Asian), Hamitic (African), and Japhetic (European) peoples.
The first scientific attempts to classify humans by categories of race date from the 17th century, along with the development of European imperialism and colonization around the world. The first post-Classical published classification of humans into distinct races seems to be François Bernier's Nouvelle division de la terre par les différents espèces ou races qui l'habitent ("New division of Earth by the different species or races which inhabit it"), published in 1684.
These scientists made three claims about race: first, that races are objective, naturally occurring divisions of humanity; second, that there is a strong relationship between biological races and other human phenomena (such as forms of activity and interpersonal relations and culture, and by extension the relative material success of cultures), thus biologizing the notion of "race", as Foucault demonstrated in his historical analysis; third, that race is therefore a valid scientific category that can be used to explain and predict individual and group behavior. Races were distinguished by skin color, facial type, cranial profile and size, texture and color of hair. Moreover, races were almost universally considered to reflect group differences in moral character and intelligence.
The eugenics movement of the late 19th and early 20th centuries, inspired by Arthur Gobineau's An Essay on the Inequality of the Human Races (1853–1855) and Vacher de Lapouge's "anthroposociology", asserted as self-evident the biological inferiority of particular groups (Kevles 1985). In many parts of the world, the idea of race became a way of rigidly dividing groups by culture as well as by physical appearances (Hannaford 1996). Campaigns of oppression and genocide were often motivated by supposed racial differences (Horowitz 2001).
In Charles Darwin's most controversial book, The Descent of Man, he made strong suggestions of racial differences and European superiority. In Darwin's view, stronger tribes of humans always replaced weaker tribes. As savage tribes came in conflict with civilized nations, such as England, the less advanced people were destroyed. Nevertheless, he also noted the great difficulty naturalists had in trying to decide how many "races" there actually were (Darwin was himself a monogenist on the question of race, believing that all humans were of the same species and finding "race" to be a somewhat arbitrary distinction among some groups):
Man has been studied more carefully than any other animal, and yet there is the greatest possible diversity amongst capable judges whether he should be classed as a single species or race, or as two (Virey), as three (Jacquinot), as four (Kant), five (Blumenbach), six (Buffon), seven (Hunter), eight (Agassiz), eleven (Pickering), fifteen (Bory St. Vincent), sixteen (Desmoulins), twenty-two (Morton), sixty (Crawfurd), or as sixty-three, according to Burke. This diversity of judgment does not prove that the races ought not to be ranked as species, but it shews that they graduate into each other, and that it is hardly possible to discover clear distinctive characters between them.
In a recent article, Leonard Lieberman and Fatimah Jackson have suggested that any new support for a biological concept of race will likely come from another source, namely, the study of human evolution. They therefore ask what, if any, implications current models of human evolution may have for any biological conception of race.
Today, all humans are classified as belonging to the species Homo sapiens and sub-species Homo sapiens sapiens. However, this is not the first species of hominids: the first species of genus Homo, Homo habilis, evolved in East Africa at least 2 million years ago, and members of this species populated different parts of Africa in a relatively short time. Homo erectus evolved more than 1.8 million years ago, and by 1.5 million years ago had spread throughout the Old World. Virtually all physical anthropologists agree that Homo sapiens evolved out of Homo erectus. Anthropologists have been divided as to whether Homo sapiens evolved as one interconnected species from H. erectus (called the Multiregional Model, or the Regional Continuity Model), or evolved only in East Africa, and then migrated out of Africa and replaced H. erectus populations throughout the Old World (called the Out of Africa Model or the Complete Replacement Model). Anthropologists continue to debate both possibilities, and the evidence is technically ambiguous as to which model is correct, although most anthropologists currently favor the Out of Africa model.
Lieberman and Jackson have argued that while advocates of both the Multiregional Model and the Out of Africa Model use the word race and make racial assumptions, none define the term. They conclude that "Each model has implications that both magnify and minimize the differences between races. Yet each model seems to take race and races as a conceptual reality. The net result is that those anthropologists who prefer to view races as a reality are encouraged to do so" and conclude that students of human evolution would be better off avoiding the word race, and instead describe genetic differences in terms of populations and clinal gradations.
With the advent of the modern synthesis in the early 20th century, many biologists sought to use evolutionary models and population genetics in an attempt to formalise taxonomy. The Biological Species Concept (BSC) is the most widely used system for describing species; it defines a species as a group of organisms that interbreed in their natural environment and produce viable offspring. In practice, species are not classified according to the BSC but according to typology by the use of a holotype, because of the difficulty of determining whether all members of a group of organisms do or can in practice potentially interbreed. BSC species are routinely classified at a subspecific level, though this classification is conducted differently for different taxa; for mammals the normal taxonomic unit below the species level is usually the subspecies. More recently the Phylogenetic Species Concept (PSC) has gained a substantial following. The PSC is based on the idea of a least-inclusive taxonomic unit (LITU); in phylogenetic classification no subspecies can exist, because any monophyletic group would automatically constitute a LITU. Technically, species cease to exist, as do all hierarchical taxa: a LITU is effectively defined as any monophyletic taxon. Phylogenetics is strongly influenced by cladistics, which classifies organisms based on evolution rather than on similarities between groups of organisms. In biology the term "race" is very rarely used because it is ambiguous: "'Race' is not being defined or used consistently; its referents are varied and shift depending on context. The term is often used colloquially to refer to a range of human groupings. Religious, cultural, social, national, ethnic, linguistic, genetic, geographical and anatomical groups have been and sometimes still are called 'races'". When the term is used in biology, it is generally synonymous with subspecies. One of the main obstacles to identifying subspecies is that, while it is a recognised taxonomic term, it has no precise definition.
Species of organisms that are monotypic (i.e. form a single subspecies) display at least one of these properties:
A polytypic species has two or more subspecies. These are separate populations that are more genetically different from one another and more reproductively isolated; gene flow between these populations is much reduced, leading to genetic differentiation.
In 1978, Sewall Wright suggested that human populations that have long inhabited separated parts of the world should, in general, be considered to be of different subspecies by the usual criterion that most individuals of such populations can be allocated correctly by inspection. It does not require a trained anthropologist to classify an array of Englishmen, West Africans, and Chinese with 100% accuracy by features, skin color, and type of hair in spite of so much variability within each of these groups that every individual can easily be distinguished from every other. However, it is customary to use the term race rather than subspecies for the major subdivisions of the human species as well as for minor ones.
On the other hand, in practice subspecies are often defined by easily observable physical appearance, but there is not necessarily any evolutionary significance to these observed differences, so this form of classification has become less acceptable to evolutionary biologists. Likewise, this typological approach to "race" is generally regarded as discredited by biologists and anthropologists.
Because of the difficulty in classifying subspecies morphologically, many biologists reject the concept altogether, citing problems such as:
In their 2003 paper "Human Genetic Diversity and the Nonexistence of Biological Races", Jeffrey Long and Rick Kittles give a long critique of the application of FST to human populations. They find that the figure of 85% is misleading because it implies that all human populations contain on average 85% of all genetic diversity. This does not correctly reflect human population history, they claim, because it treats all human groups as independent. A more realistic portrayal of the way human groups are related is to understand that some human groups are parental to other groups and that these groups represent paraphyletic groups to their descent groups. For example, under the recent African origin theory the human population in Africa is paraphyletic to all other human groups because it represents the ancestral group from which all non-African populations derive, but more than that, non-African groups only derive from a small non-representative sample of this African population. This means that all non-African groups are more closely related to each other and to some African groups (probably east Africans) than they are to others, and further that the migration out of Africa represented a genetic bottleneck, with a great deal of the diversity that existed in Africa not being carried out of Africa by the emigrating groups. This view produces a version of human population movements that does not result in all human populations being independent, but rather produces a series of dilutions of diversity the further from Africa any population lives, each founding event representing a genetic subset of its parental population. Long and Kittles find that rather than 85% of human genetic diversity existing in all human populations, about 100% of human diversity exists in a single African population, whereas only about 70% of human genetic diversity exists in a population derived from New Guinea. Long and Kittles make the observation that this still produces a global human population that is genetically homogeneous compared to other mammalian populations.
Wright's F statistics are not used to determine whether a group can be described as a subspecies; although the statistic is used to measure the degree of differentiation between populations, the degree of genetic differentiation is not in itself a marker of subspecies status. Generally, taxonomists prefer to use phylogenetic analysis to determine whether a population can be considered a subspecies. Phylogenetic analysis relies on the concept of derived characteristics that are not shared between groups; this means that such populations are usually allopatric and therefore discretely bounded, which makes subspecies, evolutionarily speaking, monophyletic groups. The clinality of human genetic variation in general rules out any idea that human population groups can be considered monophyletic, as there appears always to have been a great deal of gene flow between human populations.
The first to challenge the concept of race on empirical grounds were anthropologists Franz Boas, who demonstrated phenotypic plasticity due to environmental factors (Boas 1912), and Ashley Montagu (1941, 1942), who relied on evidence from genetics. Zoologists Edward O. Wilson and W. Brown then challenged the concept from the perspective of general animal systematics, and further rejected the claim that "races" were equivalent to "subspecies" (Wilson and Brown 1953).
In a response to Livingstone, Theodore Dobzhansky argued that when talking about "race" one must be attentive to how the term is being used: "I agree with Dr. Livingstone that if races have to be 'discrete units,' then there are no races, and if 'race' is used as an 'explanation' of the human variability, rather than vice versa, then the explanation is invalid." He further argued that one could use the term race if one distinguished between "race differences" and "the race concept." The former refers to any distinction in gene frequencies between populations; the latter is "a matter of judgment." He further observed that even when there is clinal variation, "Race differences are objectively ascertainable biological phenomena .... but it does not follow that racially distinct populations must be given racial (or subspecific) labels." In short, Livingstone and Dobzhansky agree that there are genetic differences among human beings; they also agree that the use of the race concept to classify people, and how the race concept is used, is a matter of social convention. They differ on whether the race concept remains a meaningful and useful social convention.
In 1964, biologists Paul Ehrlich and Holm pointed out cases where two or more clines are distributed discordantly—for example, melanin is distributed in a decreasing pattern from the equator north and south; frequencies for the haplotype for beta-S hemoglobin, on the other hand, radiate out of specific geographical points in Africa (Ehrlich and Holm 1964). As anthropologists Leonard Lieberman and Fatimah Linda Jackson observe, "Discordant patterns of heterogeneity falsify any description of a population as if it were genotypically or even phenotypically homogeneous" (Lieberman and Jackson 1995).
Patterns such as those seen in human physical and genetic variation as described above have led to the consequence that the number and geographic location of any described races are highly dependent on the importance attributed to, and quantity of, the traits considered. For example, if only skin color and a "two race" system of classification were used, then one might classify Indigenous Australians in the same "race" as Black people, and Caucasians in the same "race" as East Asian people, but biologists and anthropologists would dispute that these classifications have any scientific validity. On the other hand, the greater the number of traits (or alleles) considered, the more subdivisions of humanity are detected, due to the fact that traits and gene frequencies do not always correspond to the same geographical location, or as Ossorio and Duster (2005) put it:
Richard Lewontin, claiming that 85 percent of human variation occurs within populations, and not among populations, argued that neither "race" nor "subspecies" were appropriate or useful ways to describe populations (Lewontin 1973). Nevertheless, barriers—which may be cultural or physical—between populations can limit gene flow and increase genetic differences. Recent work by population geneticists conducting research in Europe suggests that ethnic identity can be a barrier to gene flow. Others, such as Ernst Mayr, have argued for a notion of "geographic race". Some researchers report that the variation between racial groups (measured by Sewall Wright's population structure statistic FST) accounts for as little as 5% of human genetic variation. Sewall Wright himself commented that if differences this large were seen in another species, they would be called subspecies. In 2003 A. W. F. Edwards argued that cluster analysis supersedes Lewontin's arguments (see below).
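As a concrete illustration of the statistic mentioned above, Wright's FST for a single locus can be computed as the proportional reduction in expected heterozygosity when subpopulations are pooled; the allele frequencies below are invented for the example and are not drawn from any study cited here.

```python
import numpy as np

def fst(subpop_allele_freqs):
    """Wright's FST for one biallelic locus: (HT - HS) / HT, where HT is the
    expected heterozygosity of the pooled population and HS the mean expected
    heterozygosity within subpopulations (equal subpopulation sizes assumed)."""
    p = np.asarray(subpop_allele_freqs, dtype=float)
    p_bar = p.mean()
    h_total = 2.0 * p_bar * (1.0 - p_bar)        # heterozygosity of pooled population
    h_sub = np.mean(2.0 * p * (1.0 - p))         # mean within-subpopulation heterozygosity
    return (h_total - h_sub) / h_total

# Invented allele frequencies for one locus in three subpopulations.
print(f"FST = {fst([0.45, 0.55, 0.65]):.3f}")    # modest differentiation, FST near 0.03
print(f"FST = {fst([0.05, 0.50, 0.95]):.3f}")    # strong differentiation, FST above 0.5
```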
These empirical challenges to the concept of race forced evolutionary sciences to reconsider their definition of race. Mid-century, anthropologist William Boyd defined race as:
The distribution of many physical traits resembles the distribution of genetic variation within and between human populations (American Association of Physical Anthropologists 1996; Keita and Kittles 1997). For example, ~90% of the variation in human head shapes occurs within every human group, and ~10% separates groups, with a greater variability of head shape among individuals with recent African ancestors (Relethford 2002).
With the recent availability of large amounts of human genetic data from many geographically distant human groups, scientists have again started to investigate the relationships between people from various parts of the world. One method is to investigate DNA molecules that are passed down from mother to child (mtDNA) or from father to son (Y chromosomes); these form molecular lineages and can be informative regarding prehistoric population migrations. Alternatively, autosomal alleles are investigated in an attempt to understand how much genetic material groups of people share. This work has led to a debate amongst geneticists, molecular anthropologists and medical doctors as to the validity of concepts such as "race". Some researchers insist that classifying people into groups based on ancestry may be important from medical and social policy points of view, and claim to be able to do so accurately. Others claim that individuals from different groups share far too much of their genetic material for group membership to have any medical implications. This has reignited the scientific debate over the validity of human classification and concepts of "race".
Mitochondrial DNA and Y chromosome research has produced three reproducible observations relevant to race and human evolution.
Firstly all mtDNA and Y chromosome lineages derive from a common ancestral molecule. For mtDNA this ancestor is estimated to have lived about 140,000-290,000 years ago (Mitochondrial Eve), while for Y chromosomes the ancestor is estimated to have lived about 70,000 years ago (Y chromosome Adam). These observations are robust, and the individuals that originally carried these ancestral molecules are the direct female and male line most recent common ancestors of all extant anatomically modern humans. The observation that these are the direct female line and male line ancestors of all living humans should not be interpreted as meaning that either was the first anatomically modern human. Nor should we assume that there were no other modern humans living concurrently with mitochondrial Eve or Y chromosome Adam. A more reasonable explanation is that other humans who lived at the same time did indeed reproduce and pass their genes down to extant humans, but that their mitochondrial and Y chromosomal lineages have been lost over time, probably due to random events (e.g. producing only male or female children). It is impossible to know to what extent these non-extant lineages have been lost, or how much they differed from the mtDNA or Y chromosome of our maternal and paternal lineage MRCA. The difference in dates between Y chromosome Adam and mitochondrial Eve is usually attributed to a higher extinction rate for Y chromosomes. This is probably because a few very successful men produce a great many children, while a larger number of less successful men will produce far fewer children.
Secondly, mtDNA and Y chromosome work supports a recent African origin for anatomically modern humans, with the ancestors of all extant modern humans leaving Africa somewhere between 100,000 and 50,000 years ago.
Thirdly studies show that specific types (haplogroups) of mtDNA or Y chromosomes do not always cluster by geography, ethnicity or race, implying multiple lineages are involved in founding modern human populations, with many closely related lineages spread over large geographic areas, and many populations containing distantly related lineages. Keita et al. (2004) say, with reference to Y chromosome and mtDNA studies and their relevance to concepts of "race":
Human genetic variation is not distributed uniformly throughout the global population. The global range of human habitation means that there are great distances between some human populations (e.g. between South America and Southern Africa), and this will reduce gene flow between these populations. On the other hand, environmental selection is also likely to play a role in differences between human populations. Conversely, it is now believed that the majority of genetic differences between populations are selectively neutral. The existence of differences between peoples from different regions of the world is relevant to discussions about the concept of "race"; some biologists believe that the language of "race" is relevant in describing human genetic variation. It is now possible to reasonably estimate the continents of origin of an individual's ancestors based on genetic data.
Richard Lewontin has claimed that "race" is a meaningless classification because the majority of human variation is found within groups (~85%), and therefore two individuals from different "races" are almost as likely to be as similar to each other as either is to someone from their own "race". In 2003 A. W. F. Edwards rebutted this argument, claiming that Lewontin's conclusion ignores the fact that most of the information that distinguishes populations is hidden in the correlation structure of the data and not simply in the variation of the individual factors. Edwards concludes that "It is not true that 'racial classification is ... of virtually no genetic or taxonomic significance' or that 'you can't predict someone's race by their genes'." Researchers such as Neil Risch and Noah Rosenberg have argued that a person's biological and cultural background may have important implications for medical treatment decisions, both for genetic and non-genetic reasons.
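A toy simulation of the point attributed to Edwards: even when allele frequencies differ only slightly at each locus, aggregating many loci lets a simple likelihood classifier assign individuals to the correct population with high accuracy. The allele-frequency model, sample sizes, and classifier below are assumptions made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def classification_accuracy(n_loci, n_per_pop=200, delta=0.1):
    """Simulate two populations whose allele frequencies differ by `delta` at
    every locus (toy model), then classify individuals by which population's
    frequencies make their genotype more likely."""
    base = rng.uniform(0.2, 0.8, size=n_loci)
    p1, p2 = base, np.clip(base + delta, 0.01, 0.99)

    g1 = rng.binomial(2, p1, size=(n_per_pop, n_loci))   # diploid allele counts
    g2 = rng.binomial(2, p2, size=(n_per_pop, n_loci))
    genotypes = np.vstack([g1, g2])
    labels = np.array([0] * n_per_pop + [1] * n_per_pop)

    def log_likelihood(g, p):
        return (g * np.log(p) + (2 - g) * np.log(1.0 - p)).sum(axis=1)

    predicted = (log_likelihood(genotypes, p2) > log_likelihood(genotypes, p1)).astype(int)
    return (predicted == labels).mean()

for n_loci in (1, 10, 100, 1000):
    print(f"{n_loci:5d} loci -> classification accuracy {classification_accuracy(n_loci):.2f}")
# Accuracy is near chance with a single locus but approaches 100% with many
# loci, even though each locus on its own is almost uninformative.
```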
The results obtained by clustering analyses are dependent on several criteria:
Rosenberg et al.'s (2002) paper "Genetic Structure of Human Populations" in particular was taken up by Nicholas Wade in the New York Times as evidence that genetic studies supported the "popular conception" of race. On the other hand, Rosenberg's work used samples from the Human Genome Diversity Project (HGDP), a project that has collected samples from individuals from 52 ethnic groups from various locations around the world. The HGDP has itself been criticised for collecting samples on an "ethnic group" basis, on the grounds that ethnic groups represent constructed categories rather than categories which are solely natural or biological. Scientists such as the molecular anthropologist Jonathan Marks, the geneticists David Serre, Svante Pääbo, Mary-Claire King and medical doctor Arno G. Motulsky argue that this is a biased sampling strategy, and that human samples should have been collected geographically, i.e. that samples should be collected from points on a grid overlaying a map of the world; they maintain that human genetic variation is not partitioned into discrete racial groups (clustered), but is spread in a clinal manner (isolation by distance) that is masked by this biased sampling strategy. The existence of allelic clines and the observation that the bulk of human variation is continuously distributed have led scientists such as Kittles and Weiss (2003) to conclude that any categorization schema attempting to partition that variation meaningfully will necessarily create artificial truncations. It is for this reason, Reanne Frank argues, that attempts to allocate individuals into ancestry groupings based on genetic information have yielded varying results that are highly dependent on methodological design.
In a follow-up paper in 2005, "Clines, Clusters, and the Effect of Study Design on the Inference of Human Population Structure", Rosenberg et al. maintain that their clustering analysis is robust. They also agree, however, that there is evidence for clinality (isolation by distance). Finally, they distance themselves from the language of race and do not use the term "race" in any of their publications: "The arguments about the existence or nonexistence of 'biological races' in the absence of a specific context are largely orthogonal to the question of scientific utility, and they should not obscure the fact that, ultimately, the primary goals for studies of genetic variation in humans are to make inferences about human evolutionary history, human biology, and the genetic causes of disease."
One of the underlying questions regarding the distribution of human genetic diversity is related to the degree to which genes are shared between the observed clusters, and therefore the extent to which membership of a cluster can accurately predict an individual's genetic makeup or susceptibility to disease. This is at the core of Lewontin's argument. Lewontin used Sewall Wright's Fixation index (FST) to estimate that on average 85% of human genetic diversity is contained within groups. Are members of the same cluster always more genetically similar to each other than they are to members of a different cluster? Lewontin's argument is that within-group differences are almost as high as between-group differences, and therefore two individuals from different groups are almost as likely to be more similar to each other than they are to members of their own group. Can clusters correct for this finding? In 2004 Bamshad et al. used the data from Rosenberg et al. (2002) to investigate the extent of genetic differences between individuals within continental groups relative to genetic differences between individuals from different continental groups. They found that though these individuals could be classified very accurately to continental clusters, there was a significant degree of genetic overlap on the individual level.
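The degree of individual-level overlap can be made concrete with a toy version of the dissimilarity fraction discussed below: the probability that a randomly drawn pair of individuals from different populations is genetically more similar than a pair drawn from the same population. The allele-frequency model, distance measure, and parameter values are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(2)

def dissimilarity_fraction(n_loci, n_per_pop=100, delta=0.1, n_pairs=2000):
    """Toy estimate of omega: how often a cross-population pair of individuals is
    more similar (smaller allele-count distance) than a within-population pair."""
    base = rng.uniform(0.2, 0.8, size=n_loci)
    g1 = rng.binomial(2, base, size=(n_per_pop, n_loci))                             # population 1
    g2 = rng.binomial(2, np.clip(base + delta, 0.0, 1.0), size=(n_per_pop, n_loci))  # population 2

    hits = 0
    for _ in range(n_pairs):
        i, j = rng.integers(n_per_pop, size=2)
        cross = np.abs(g1[i] - g2[j]).sum()                # pair from different populations
        k, l = rng.choice(n_per_pop, size=2, replace=False)
        within = np.abs(g1[k] - g1[l]).sum()               # pair from the same population
        hits += cross < within
    return hits / n_pairs

for n_loci in (10, 100, 1000):
    print(f"{n_loci:5d} loci -> omega ~ {dissimilarity_fraction(n_loci):.2f}")
# With few loci, the cross-population pair is very often the more similar one;
# adding loci lowers the fraction, but for weakly differentiated populations it
# falls off only slowly, which is the overlap described in the text.
```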
This question was addressed in more detail in a 2007 paper by Witherspoon et al. entitled "Genetic Similarities Within and Between Human Populations", in which they make the following observations:
The paper states that "All three of the claims listed above appear in disputes over the significance of human population variation and 'race'" and asks "If multilocus statistics are so powerful, then how are we to understand this [last] finding?"
Witherspoon et al. (2007) attempt to reconcile these apparently contradictory findings, and show that the observed clustering of human populations into relatively discrete groups is a product of using what they call "population trait values". This means that each individual is compared to the "typical" trait for several populations, and assigned to a population based on the individual's overall similarity to one of the populations as a whole. They therefore claim that clustering analyses cannot necessarily be used to make inferences regarding the similarity or dissimilarity of individuals between or within clusters, but only for similarities or dissimilarities of individuals to the "trait values" of any given cluster. The paper measures the rate of misclassification using these "trait values" and calls this the "population trait value misclassification rate" (CT). The paper investigates the similarities between individuals by use of what they term the "dissimilarity fraction" (ω): "the probability that a pair of individuals randomly chosen from different populations is genetically more similar than an independent pair chosen from any single population." Witherspoon et al. show that two individuals can be more genetically similar to each other than to the typical genetic type of their own respective populations, and yet be correctly assigned to their respective populations. An important observation is that the likelihood that two individuals from different populations will be more similar to each other genetically than two individuals from the same population depends on several criteria, most importantly the number of genes studied and the distinctiveness of the populations under investigation. For example, when 10 loci are used to compare three geographically disparate populations (sub-Saharan African, East Asian and European), individuals are more similar to members of a different group about 30% of the time. If the number of loci is increased to 100, individuals are more genetically similar to members of a different population about 20% of the time, and even using 1,000 loci, ω is still about 10%. They do state that for these very geographically separated populations it is possible to reduce this statistic to 0% when tens of thousands of loci are used, meaning that individuals will then always be more similar to members of their own population. But the paper notes that humans are not distributed into discrete, geographically separated populations, and that omitting intermediate regions may produce a false impression of distinctiveness in human diversity. The paper supports the observation that "highly accurate classification of individuals from continuously sampled (and therefore closely related) populations may be impossible". Furthermore, the results indicate that clustering analyses and self-reported ethnicity may not be good predictors of genetic susceptibility to disease risk. Witherspoon et al. conclude that:
Summary of different biological definitions of race:
Essentialist (Hooton, 1926): "A great division of mankind, characterized as a group by the sharing of a certain combination of features, which have been derived from their common descent, and constitute a vague physical background, usually more or less obscured by individual variations, and realized best in a composite picture."
Taxonomic (Mayr, 1969): "An aggregate of phenotypically similar populations of a species, inhabiting a geographic subdivision of the range of a species, and differing taxonomically from other populations of the species."
Population (Dobzhansky, 1970): "Races are genetically distinct Mendelian populations. They are neither individuals nor particular genotypes, they consist of individuals who differ genetically among themselves."
Lineage (Templeton, 1998): "A subspecies (race) is a distinct evolutionary lineage within a species. This definition requires that a subspecies be genetically differentiated due to barriers to genetic exchange that have persisted for long periods of time; that is, the subspecies must have historical continuity in addition to current genetic differentiation."
Since 1932, some college textbooks introducing physical anthropology have increasingly come to reject race as a valid concept: from 1932 to 1976, only seven out of thirty-two rejected race; from 1975 to 1984, thirteen out of thirty-three rejected race; from 1985 to 1993, thirteen out of nineteen rejected race. According to one academic journal entry, where 78 percent of the articles in the 1931 Journal of Physical Anthropology employed the term "race" or nearly synonymous terms reflecting a bio-race paradigm, only 36 percent did so in 1965, and just 28 percent did in 1996. The American Anthropological Association, drawing on biological research, currently holds that "The concept of race is a social and cultural construction... Race simply cannot be tested or proven scientifically," and that, "It is clear that human populations are not unambiguous, clearly demarcated, biologically distinct groups. The concept of 'race' has no validity ... in the human species".
In an ongoing debate, some geneticists argue that race is neither a meaningful concept nor a useful heuristic device, and even that genetic differences among groups are biologically meaningless, on the grounds that more genetic variation exists within such races than among them, and that racial traits overlap without discrete boundaries. Other geneticists, in contrast, argue that categories of self-identified race/ethnicity or biogeographic ancestry are both valid and useful, that these categories correspond with clusters inferred from multilocus genetic data, and that this correspondence implies that genetic factors might contribute to unexplained phenotypic variation between groups.
In February 2001, the editors of the medical journal Archives of Pediatrics and Adolescent Medicine asked authors to no longer use "race" as an explanatory variable and not to use obsolescent terms. Some other peer-reviewed journals, such as the New England Journal of Medicine and the American Journal of Public Health, have made similar endeavours. Furthermore, the National Institutes of Health recently issued a program announcement for grant applications through February 1, 2006, specifically seeking researchers who can investigate and publicize among primary care physicians the detrimental effects on the nation's health of the practice of medical racial profiling using such terms. The program announcement quoted the editors of one journal as saying that "analysis by race and ethnicity has become an analytical knee-jerk reflex."
A survey, taken in 1985 (Lieberman et al. 1992), asked 1,200 American anthropologists whether they agreed or disagreed with the following proposition: "There are biological races in the species Homo sapiens." The responses were:
The figure for physical anthropologists at PhD-granting departments was slightly higher, rising from 41% to 42%, with 50% agreeing. This survey, however, did not specify any particular definition of race (although it did clearly specify biological race within the species Homo sapiens); it is difficult to say whether those who supported the statement thought of race in taxonomic or population terms.
The same survey, taken in 1999, showed the following changing results for anthropologists:
In Poland the race concept was rejected by only 25 percent of anthropologists in 2001, although: "Unlike the U.S. anthropologists, Polish anthropologists tend to regard race as a term without taxonomic value, often as a substitute for population."
In the face of these issues, some evolutionary scientists have simply abandoned the concept of race in favor of "population." What distinguishes population from previous groupings of humans by race is that it refers to a breeding population (essential to genetic calculations) and not to a biological taxon. Other evolutionary scientists have abandoned the concept of race in favor of cline (meaning, how the frequency of a trait changes along a geographic gradient). (The concepts of population and cline are not, however, mutually exclusive and both are used by many evolutionary scientists.)
According to Jonathan Marks,
In the face of this rejection of race by evolutionary scientists, many social scientists have replaced the word race with the word "ethnicity" to refer to self-identifying groups based on beliefs concerning shared culture, ancestry and history. Alongside empirical and conceptual problems with "race," following the Second World War, evolutionary and social scientists were acutely aware of how beliefs about race had been used to justify discrimination, apartheid, slavery, and genocide. This questioning gained momentum in the 1960s during the U.S. civil rights movement and the emergence of numerous anti-colonial movements worldwide. They thus came to understand that these justifications, even when expressed in language that sought to appear objective, were social constructs.
Even as the idea of "race" was becoming a powerful organizing principle in many societies, the shortcomings of the concept were apparent. In the Old World, the gradual transition in appearances from one group to adjacent groups emphasized that "one variety of mankind does so sensibly pass into the other, that you cannot mark out the limits between them," as Blumenbach observed in his writings on human variation (Marks 1995, p. 54). As anthropologists and other evolutionary scientists have shifted away from the language of race to the term population to talk about genetic differences, historians, anthropologists and social scientists have re-conceptualized the term "race" as a cultural category or social construct, in other words, as a particular way that some people have of talking about themselves and others. As Stephan Palmie has recently summarized, race "is not a thing but a social relation"; or, in the words of Katya Gibel Mevorach, "a metonym," "a human invention whose criteria for differentiation are neither universal nor fixed but have always been used to manage difference." As such it cannot be a useful analytical concept; rather, the use of the term "race" itself must be analyzed. Moreover, they argue that biology will not explain why or how people use the idea of race: history and social relationships will. For example, the fact that in many parts of the United States categories such as Hispanic or Latino are viewed as constituting a race (instead of an ethnic group) reflects this new idea of "race as a social construct". However, it may be in the interest of dominant groups to cluster Spanish speakers into a single, isolated population rather than classifying them according to race (as the rest of U.S. racial groups are classified), especially in the context of the debate over immigration: "According to the 2000 census, two-thirds [of Hispanics] are of Mexican heritage . . . So, for practical purposes, when we speak of Hispanics and Latinos in the U.S., we're really talking about Native Americans . . . [therefore] if being Hispanic carries any societal consequences that justify inclusion in the pantheon of great American racial minorities, they're the result of having Native American blood. [But imagine the] impact this would have on the illegal-immigration debate. It's one thing to blame the fall of western civilization on illegal Mexican immigration, but quite thornier to blame it on illegal Amerindian immigration from Mexico."
In the United States since its early history, Native Americans, African-Americans and European-Americans were classified as belonging to different races. For nearly three centuries, the criteria for membership in these groups were similar, comprising a person's appearance, his fraction of known non-White ancestry, and his social circle. But the criteria for membership in these races diverged in the late 19th century. During Reconstruction, increasing numbers of Americans began to consider anyone with "one drop" of known "Black blood" to be Black regardless of appearance. By the early 20th century, this notion of invisible blackness was made statutory in many states and widely adopted nationwide. In contrast, Amerindians continue to be defined by a certain percentage of "Indian blood" (called blood quantum) due in large part to American slavery ethics. Finally, for the past century or so, to be White one had to have perceived "pure" White ancestry.
Efforts to sort the increasingly mixed population of the United States into discrete categories generated many difficulties (Spickard 1992). By the standards used in past censuses, many millions of children born in the United States have belonged to a different race than have one of their biological parents. Efforts to track mixing between groups led to a proliferation of categories (such as "mulatto" and "octoroon") and "blood quantum" distinctions that became increasingly untethered from self-reported ancestry. A person's racial identity can change over time, and self-ascribed race can differ from assigned race (Kressin et al. 2003). Until the 2000 census, Latinos were required to identify with a single race despite the long history of mixing in Latin America; partly as a result of the confusion generated by the distinction, 32.9% (U.S. census records) of Latino respondents in the 2000 census ignored the specified racial categories and checked "some other race". (Mays et al. 2003 claim a figure of 42%)
The difference between how Native American and Black identities are defined today (blood quantum versus one-drop) has demanded explanation. According to anthropologists such as Gerald Sider, the goal of such racial designations was to concentrate power, wealth, privilege and land in the hands of Whites in a society of White hegemony and privilege (Sider 1996; see also Fields 1990). The differences have little to do with biology and far more to do with the history of racism and specific forms of White supremacy (the social, geopolitical and economic agendas of dominant Whites vis-à-vis subordinate Blacks and Native Americans), especially the different roles Blacks and Amerindians occupied in White-dominated 19th century America. The theory suggests that the blood quantum definition of Native American identity enabled Whites to acquire Amerindian lands, while the one-drop rule of Black identity enabled Whites to preserve their agricultural labor force. The contrast presumably emerged because, as peoples transported far from their land and kinship ties on another continent, Black laborers were relatively easy to control, which reduced Blacks to valuable commodities as agricultural laborers. In contrast, Amerindian labor was more difficult to control; moreover, Amerindians occupied large territories that became valuable as agricultural lands, especially with the invention of new technologies such as railroads; thus, the blood quantum definition enhanced White acquisition of Amerindian lands in a doctrine of Manifest Destiny that subjected them to marginalization and to repeated localized campaigns of extermination.
The political economy of race had different consequences for the descendants of aboriginal Americans and African slaves. The 19th century blood quantum rule meant that it was relatively easier for a person of mixed Euro-Amerindian ancestry to be accepted as White. The offspring of only a few generations of intermarriage between Amerindians and Whites likely would not have been considered Amerindian at all (at least not in a legal sense). Amerindians could have treaty rights to land, but because an individual with one Amerindian great-grandparent no longer was classified as Amerindian, they lost any legal claim to Amerindian land. According to the theory, this enabled Whites to acquire Amerindian lands. The irony is that the same individuals who could be denied legal standing because they were "too White" to claim property rights, might still be Amerindian enough to be considered as "breeds", stigmatized for their Native American ancestry.
The 20th century one-drop rule, on the other hand, made it relatively difficult for anyone of known Black ancestry to be accepted as White. The child of a Black sharecropper and a White person was considered Black. And, significant in terms of the economics of sharecropping, such a person also would likely be a sharecropper as well, thus adding to the employer's labor force.
In short, this theory suggests that in a 20th century economy that benefited from sharecropping, it was useful to have as many Blacks as possible. Conversely, in a 19th century nation bent on westward expansion, it was advantageous to diminish the numbers of those who could claim title to Amerindian lands by simply defining them out of existence.
It must be mentioned, however, that although some scholars of the Jim Crow period agree that the 20th century notion of invisible Blackness shifted the color line in the direction of paleness, thereby swelling the labor force in response to Southern Blacks' great migration northwards, others (Joel Williamson, C. Vann Woodward, George M. Fredrickson, Stetson Kennedy) see the one-drop rule as a simple consequence of the need to define Whiteness as being pure, thus justifying White-on-Black oppression. In any event, over the centuries when Whites wielded power over both Blacks and Amerindians and widely believed in their inherent superiority over people of color, it is no coincidence that the hardest racial group in which to prove membership was the White one.
In the United States, social and legal conventions developed over time that forced individuals of mixed ancestry into simplified racial categories (Gossett 1997). An example is the "one-drop rule" implemented in some state laws that treated anyone with a single known African American ancestor as black (Davis 2001). The decennial censuses conducted since 1790 in the United States also created an incentive to establish racial categories and fit people into those categories (Nobles 2000). In other countries in the Americas where mixing among groups was overtly more extensive, social categories have tended to be more numerous and fluid, with people moving into or out of categories on the basis of a combination of socioeconomic status, social class, ancestry, and appearance (Mörner 1967).
The term "Hispanic" as an ethnonym emerged in the 20th century with the rise of migration of laborers from American Spanish-speaking countries to the United States. It includes people who had been considered racially distinct (Black, White, Amerindian, Asian, and mixed groups) in their home countries. Today, the word "Latino" is often used as a synonym for "Hispanic". In contrast to "Latino"´or "Hispanic" "Anglo" is now used to refer to non-Hispanic White Americans or non-Hispanic European Americans, most of whom speak the English language but are not necessarily of English descent.
Typically, a consumer of a commercial PGH service sends in a sample of DNA, which is analyzed by molecular biologists, and receives a report, of which the following is a sample:
Through these kinds of reports, new advances in molecular genetics are being used to create or confirm stories people have about their social identities. Although these identities are not racial in the biological sense, they are in the cultural sense in that they link biological and cultural identities. Nadia Abu el-Haj has argued that the significance of genetic lineages in popular conceptions of race owes to the perception that, while genetic lineages, like older notions of race, suggest some idea of biological relatedness, unlike older notions of race they are not directly connected to claims about human behaviour or character. Abu el-Haj has thus argued that "postgenomics does seem to be giving race a new lease on life." Nevertheless, she argues that in order to understand what it means to think of race in terms of genetic lineages or clusters, one must understand that genomics and the mapping of lineages and clusters liberates "the new racial science from the older one by disentangling ancestry from culture and capacity." As an example, she refers to recent work by Hammer et al., which aimed to test the claim that present-day Jews are more closely related to one another than to neighbouring non-Jewish populations. Hammer et al. found that the degree of genetic similarity among Jews shifted depending on the locus investigated, and suggested that this was the result of natural selection acting on particular loci. They therefore focused on the non-recombining Y chromosome to "circumvent some of the complications associated with selection". As another example she points to work by Thomas et al., who sought to distinguish between the Y chromosomes of Jewish priests (in Judaism, membership in the priesthood is passed on through the father's line) and the Y chromosomes of non-Jews. Abu el-Haj concluded that this new "race science" calls attention to the importance of "ancestry" (narrowly defined, as it does not include all ancestors) in some religions and in popular culture, and to peoples' desire to use science to confirm their claims about ancestry; this "race science," she argues, is fundamentally different from older notions of race that were used to explain differences in human behaviour or social status:
On the other hand, there are tests that do not rely on molecular lineages, but rather on correlations between allele frequencies; when allele frequencies correlate in this way, the resulting groupings are often called clusters. Clustering analyses are less powerful than lineages because they cannot tell a historical story; they can only estimate the proportion of a person's ancestry from any given large geographical region. These tests use informative alleles called ancestry-informative markers (AIMs), which, although shared across all human populations, vary a great deal in frequency between groups of people living in geographically distant parts of the world. These tests use contemporary people sampled from certain parts of the world as references to determine the likely proportion of ancestry for any given individual. In a recent Public Broadcasting Service (PBS) programme on the subject of genetic ancestry testing, the academic Henry Louis Gates "wasn't thrilled with the results (it turns out that 50 percent of his ancestors are likely European)". Charles Rotimi, of Howard University's National Human Genome Center, is one of many who have highlighted the methodological flaws in such research - that "the nature or appearance of genetic clustering (grouping) of people is a function of how populations are sampled, of how criteria for boundaries between clusters are set, and of the level of resolution used" all bias the results - and concluded that people should be very cautious about relating genetic lineages or clusters to their own sense of identity. (see also above section How much are genes shared? Clustering analyses and what they tell us)
Thus, in analyses that assign individuals to groups it becomes less apparent that self-described racial groups are reliable indicators of ancestry. One cause of the reduced power of the assignment of individuals to groups is admixture. For example, self-described African Americans tend to have a mix of West African and European ancestry. Shriver et al. (2003) found that on average African Americans have ~80% African ancestry. Also, in a survey of college students who self-identified as “white” in a northeastern U.S. university, ~30% of whites had less than 90% European ancestry.
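To make the logic of allele-frequency-based ancestry estimation concrete, here is a minimal, purely illustrative Python sketch; it is not any published or commercial method, and the reference allele frequencies and genotypes are invented. It grid-searches for the single admixture proportion between two hypothetical reference populations that maximizes a simple Hardy-Weinberg likelihood of one individual's genotypes at a handful of AIMs.

```python
# Illustrative sketch (not any published method): maximum-likelihood estimate of a
# single admixture proportion from a handful of ancestry-informative markers (AIMs).
# All allele frequencies and genotypes below are invented for demonstration.

import math

# Hypothetical reference allele frequencies at five AIMs in two reference populations.
freq_pop_a = [0.90, 0.15, 0.80, 0.70, 0.05]
freq_pop_b = [0.10, 0.85, 0.20, 0.30, 0.95]

# One individual's genotypes: number of copies (0, 1 or 2) of the reference allele.
genotypes = [2, 1, 1, 2, 0]

def log_likelihood(m, geno, fa, fb):
    """Log-likelihood of admixture proportion m (fraction of ancestry from population A),
    assuming independent loci and Hardy-Weinberg proportions at the mixed allele frequency."""
    ll = 0.0
    for g, pa, pb in zip(geno, fa, fb):
        p = m * pa + (1 - m) * pb          # mixed allele frequency at this locus
        probs = {2: p * p, 1: 2 * p * (1 - p), 0: (1 - p) ** 2}
        ll += math.log(probs[g])
    return ll

# Simple grid search over admixture proportions between 1% and 99%.
best_m = max((m / 100 for m in range(1, 100)),
             key=lambda m: log_likelihood(m, genotypes, freq_pop_a, freq_pop_b))
print(f"Estimated proportion of ancestry from population A: {best_m:.2f}")
```

Real ancestry inference uses thousands of markers and richer statistical models, and the sampling and cluster-boundary caveats raised by Rotimi above apply just as strongly to a toy calculation like this one.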
Stephan Palmie has responded to Abu el-Haj's claim that genetic lineages make possible a new, politically, economically, and socially benign notion of race and racial difference by suggesting that efforts to link genetic history and personal identity will inevitably "ground present social arrangements in a time-hallowed past," that is, use biology to explain cultural differences and social inequalities.
Researchers have reported differences in the average IQ test scores of various ethnic groups. The interpretation, causes, accuracy and reliability of these differences are highly controversial. Some researchers, such as Arthur Jensen, Richard Herrnstein, and Richard Lynn, have argued that such differences are at least partially genetic. Others, for example Thomas Sowell, argue that the differences largely owe to social and economic inequalities. Still others, such as Stephen Jay Gould and Richard Lewontin, have argued that categories such as "race" and "intelligence" are cultural constructs that render any attempt to explain such differences (whether genetically or sociologically) meaningless.
The Flynn effect is the rise of average Intelligence Quotient (IQ) test scores over time, an effect seen in most parts of the world, although at varying rates. Scholars therefore believe that the rapid increases in average IQ seen in many places are much too fast to result from changes in brain physiology and are more likely the result of environmental changes. The significant effect of environment on IQ also undermines the case for using IQ data as a source of genetic information.
There is an active debate among biomedical researchers about the meaning and importance of race in their research. The primary impetus for considering race in biomedical research is the possibility of improving the prevention and treatment of diseases by predicting hard-to-ascertain factors on the basis of more easily ascertained characteristics. Some have argued that in the absence of cheap and widespread genetic tests, racial identification is the best way to predict risk for certain diseases, such as cystic fibrosis, lactose intolerance, Tay-Sachs disease and sickle cell anemia, which are genetically linked and more prevalent in some populations than others. The best-known examples of genetically determined disorders that vary in incidence among populations are sickle cell disease, thalassaemia, and Tay-Sachs disease.
There has been criticism of associating disorders with race. For example, in the United States sickle cell is typically associated with black people, but this trait is also found in people of Mediterranean, Middle Eastern or Indian ancestry. The sickle cell trait offers some resistance to malaria. In regions where malaria is present, sickle cell has been positively selected and consequently the proportion of people with it is greater. Therefore, it has been argued that sickle cell should not be associated with a particular race, but rather with having ancestors who lived in a malaria-prone region. Africans living in areas where there is no malaria, such as the East African highlands, have a prevalence of sickle cell as low as that in parts of Northern Europe.
Another example of the use of race in medicine is the recent U.S. FDA approval of BiDil, a medication for congestive heart failure targeted at black people in the United States. Several researchers have questioned the scientific basis for arguing the merits of a medication based on race, however. As Stephan Palmie has recently pointed out, black Americans were disproportionately affected by Hurricane Katrina, but for social and not climatological reasons; similarly, certain diseases may disproportionately affect different races, but not for biological reasons. Several researchers have suggested that BiDil was re-designated as a medicine for a race-specific illness because its manufacturer, Nitromed, needed to propose a new use for an existing medication in order to justify an extension of its patent and thus monopoly on the medication, not for pharmacological reasons.
Gene flow and intermixture also have an effect on predicting a relationship between race and "race linked disorders". Multiple sclerosis is typically associated with people of European descent and is of low risk to people of African descent. However, due to gene flow between the populations, African Americans have elevated levels of MS relative to Africans. Notable African Americans affected by MS include Richard Pryor and Montel Williams. As populations continue to mix, the role of socially constructed races may diminish in identifying diseases.
In an attempt to provide general descriptions that may facilitate the job of law enforcement officers seeking to apprehend suspects, the United States FBI employs the term "race" to summarize the general appearance (skin color, hair texture, eye shape, and other such easily noticed characteristics) of individuals whom they are attempting to apprehend. From the perspective of law enforcement officers, it is generally more important to arrive at a description that will readily suggest the general appearance of an individual than to make a scientifically valid categorization by DNA or other such means. Thus, in addition to assigning a wanted individual to a racial category, such a description will include height, weight, eye color, scars and other distinguishing characteristics. Scotland Yard uses a classification based on the ethnic background of British society: W1 (White-British), W2 (White-Irish), W9 (Any other white background); M1 (White and black Caribbean), M2 (White and black African), M3 (White and Asian), M9 (Any other mixed background); A1 (Asian-Indian), A2 (Asian-Pakistani), A3 (Asian-Bangladeshi), A9 (Any other Asian background); B1 (Black Caribbean), B2 (Black African), B3 (Any other black background); O1 (Chinese), O9 (Any other). Some of the characteristics that constitute these groupings are biological and some are learned (cultural, linguistic, etc.) traits that are easy to notice.
In many countries, such as France, the state is legally banned from maintaining data based on race, which often makes the police issue wanted notices to the public that include labels like "dark skin complexion", etc. One of the factors that encourages this kind of circuitous wording is that there is controversy over the actual relationship between crimes, their assigned punishments, and the division of people into the so-called "races", leading officials to try to deemphasize the alleged race of suspects. In the United States, the practice of racial profiling has been ruled to be both unconstitutional and a violation of civil rights. There is active debate regarding the cause of a marked correlation between recorded crimes, punishments meted out, and the country's "racially divided" people. Many consider de facto racial profiling an example of institutional racism in law enforcement. The history of misuse of racial categories to adversely impact one or more groups and/or to offer protection and advantage to another has a clear impact on debate of the legitimate use of known phenotypical or genotypical characteristics tied to the presumed race of both victims and perpetrators by the government.
More recent work in racial taxonomy based on DNA cluster analysis (see Lewontin's Fallacy) has led law enforcement to narrow their search for individuals based on a range of phenotypical characteristics found consistent with DNA evidence.
While controversial, DNA analysis has been successful in helping police identify both victims and perpetrators by giving an indication of what phenotypical characteristics to look for and what community the individual may have lived in. For example, in one case phenotypical characteristics suggested that the friends and family of an unidentified victim would be found among the Asian community, but the DNA evidence directed official attention to missing Native Americans, where her true identity was eventually confirmed. In an attempt to avoid potentially misleading associations suggested by the word "race," this classification is called "biogeographical ancestry" (BGA), but the terms for the BGA categories are similar to those used for race. The difference is that ancestry-informative DNA markers identify continent-of-ancestry admixture, not ethnic self-identity, and provide a wide range of phenotypical characteristics such that some people in a biogeographical category will not match the stereotypical image of an individual belonging to the corresponding race. To facilitate the work of officials trying to find individuals based on the evidence of their DNA traces, firms providing the genetic analyses also provide photographs showing a full range of phenotypical characteristics of people in each biogeographical group. Of special interest to officials trying to find individuals on the basis of DNA samples that indicate a diverse genetic background is what range of phenotypical characteristics people with that general mixture of genotypical characteristics may display.
Similarly, forensic anthropologists draw on highly heritable morphological features of human remains (e.g. cranial measurements) in order to aid in the identification of the body, including in terms of race. In a recent article, anthropologist Norman Sauer asked, "if races don't exist, why are forensic anthropologists so good at identifying them?" Sauer observed that the use of 19th century racial categories is widespread among forensic anthropologists:
According to Sauer, "The assessment of these categories is based upon copious amounts of research on the relationship between biological characteristics of the living and their skeletons." Nevertheless, he agrees with other anthropologists that race is not a valid biological taxonomic category, and that races are socially constructed. He argued there is nevertheless a strong relationship between the phenotypic features forensic anthropologists base their identifications on, and popular racial categories. Thus, he argued, forensic anthropologists apply a racial label to human remains because their analysis of physical morphology enables them to predict that when the person was alive, that particular racial label would have been applied to them.
Deoxyribonucleic acid (DNA) is a molecule that encodes the genetic instructions used in the development and functioning of all known living organisms and many viruses. Along with RNA and proteins, DNA is one of the three major macromolecules essential for all known forms of life. Genetic information is encoded as a sequence of nucleotides (guanine, adenine, thymine, and cytosine) recorded using the letters G, A, T, and C. Most DNA molecules are double-stranded helices, consisting of two long polymers of simple units called nucleotides, molecules with backbones made of alternating sugars (deoxyribose) and phosphate groups (related to phosphoric acid), with the nucleobases (G, A, T, C) attached to the sugars. DNA is well-suited for biological information storage, since the DNA backbone is resistant to cleavage and the double-stranded structure provides the molecule with a built-in duplicate of the encoded information.
These two strands run in opposite directions to each other and are therefore anti-parallel, one backbone being 3′ (three prime) and the other 5′ (five prime). This refers to the direction the 3rd and 5th carbon on the sugar molecule is facing. Attached to each sugar is one of four types of molecules called nucleobases (informally, bases). It is the sequence of these four nucleobases along the backbone that encodes information. This information is read using the genetic code, which specifies the sequence of the amino acids within proteins. The code is read by copying stretches of DNA into the related nucleic acid RNA in a process called transcription.
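As a toy illustration of that copying step, the following Python sketch (with a made-up template sequence) "transcribes" a DNA template strand into the corresponding RNA by applying the base-pairing rules; real transcription is carried out by RNA polymerase and involves promoters, termination and processing, none of which is modelled here.

```python
# Minimal sketch: "transcribing" a DNA template strand into the corresponding mRNA
# sequence by complementary base pairing (A->U, T->A, G->C, C->G). The template
# sequence is hypothetical and chosen only for illustration.

DNA_TO_RNA = {"A": "U", "T": "A", "G": "C", "C": "G"}

def transcribe(template_strand: str) -> str:
    """Return the mRNA complementary to the given DNA template strand."""
    return "".join(DNA_TO_RNA[base] for base in template_strand)

print(transcribe("TACGGCATT"))  # -> AUGCCGUAA
```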
Within cells, DNA is organized into long structures called chromosomes. During cell division these chromosomes are duplicated in the process of DNA replication, providing each cell its own complete set of chromosomes. Eukaryotic organisms (animals, plants, fungi, and protists) store most of their DNA inside the cell nucleus and some of their DNA in organelles, such as mitochondria or chloroplasts. In contrast, prokaryotes (bacteria and archaea) store their DNA only in the cytoplasm. Within the chromosomes, chromatin proteins such as histones compact and organize DNA. These compact structures guide the interactions between DNA and other proteins, helping control which parts of the DNA are transcribed.
DNA is a long polymer made from repeating units called nucleotides. DNA was first identified and isolated by Friedrich Miescher, and the double helix structure of DNA was first discovered by James Watson and Francis Crick. The structure of DNA of all species comprises two helical chains each coiled round the same axis, and each with a pitch of 34 ångströms (3.4 nanometres) and a radius of 10 ångströms (1.0 nanometres). According to another study, when measured in a particular solution, the DNA chain measured 22 to 26 ångströms wide (2.2 to 2.6 nanometres), and one nucleotide unit measured 3.3 Å (0.33 nm) long. Although each individual repeating unit is very small, DNA polymers can be very large molecules containing millions of nucleotides. For instance, the largest human chromosome, chromosome number 1, consists of approximately 220 million base pairs and would be 85 mm long if straightened.
In living organisms DNA does not usually exist as a single molecule, but instead as a pair of molecules that are held tightly together. These two long strands entwine like vines, in the shape of a double helix. The nucleotide repeats contain both the segment of the backbone of the molecule, which holds the chain together, and a nucleobase, which interacts with the other DNA strand in the helix. A nucleobase linked to a sugar is called a nucleoside and a base linked to a sugar and one or more phosphate groups is called a nucleotide. A polymer comprising multiple linked nucleotides (as in DNA) is called a polynucleotide.
The backbone of the DNA strand is made from alternating phosphate and sugar residues. The sugar in DNA is 2-deoxyribose, which is a pentose (five-carbon) sugar. The sugars are joined together by phosphate groups that form phosphodiester bonds between the third and fifth carbon atoms of adjacent sugar rings. These asymmetric bonds mean a strand of DNA has a direction. In a double helix the direction of the nucleotides in one strand is opposite to their direction in the other strand: the strands are antiparallel. The asymmetric ends of DNA strands are called the 5′ (five prime) and 3′ (three prime) ends, with the 5′ end having a terminal phosphate group and the 3′ end a terminal hydroxyl group. One major difference between DNA and RNA is the sugar, with the 2-deoxyribose in DNA being replaced by the alternative pentose sugar ribose in RNA.
The DNA double helix is stabilized primarily by two forces: hydrogen bonds between nucleotides and base-stacking interactions among aromatic nucleobases. In the aqueous environment of the cell, the conjugated π bonds of nucleotide bases align perpendicular to the axis of the DNA molecule, minimizing their interaction with the solvation shell and therefore, the Gibbs free energy. The four bases found in DNA are adenine (abbreviated A), cytosine (C), guanine (G) and thymine (T). These four bases are attached to the sugar/phosphate to form the complete nucleotide, as shown for adenosine monophosphate.
The nucleobases are classified into two types: the purines, A and G, being fused five- and six-membered heterocyclic compounds, and the pyrimidines, the six-membered rings C and T. A fifth pyrimidine nucleobase, uracil (U), usually takes the place of thymine in RNA and differs from thymine by lacking a methyl group on its ring. In addition to RNA and DNA a large number of artificial nucleic acid analogues have also been created to study the properties of nucleic acids, or for use in biotechnology.
Uracil is not usually found in DNA, occurring only as a breakdown product of cytosine. However, in a number of bacteriophages – Bacillus subtilis bacteriophages PBS1 and PBS2 and Yersinia bacteriophage piR1-37 – thymine has been replaced by uracil.
Base J (beta-d-glucopyranosyloxymethyluracil), a modified form of uracil, is also found in a number of organisms: the flagellates Diplonema and Euglena, and all the kinetoplastid genera. Biosynthesis of J occurs in two steps: in the first step a specific thymidine in DNA is converted into hydroxymethyldeoxyuridine (HOMedU); in the second step HOMedU is glycosylated to form J. Proteins that bind specifically to this base have been identified. These proteins appear to be distant relatives of the Tet1 oncogene that is involved in the pathogenesis of acute myeloid leukemia. J appears to act as a termination signal for RNA polymerase II.
Twin helical strands form the DNA backbone. Another double helix may be found tracing the spaces, or grooves, between the strands. These voids are adjacent to the base pairs and may provide a binding site. As the strands are not symmetrically located with respect to each other, the grooves are unequally sized. One groove, the major groove, is 22 Å wide and the other, the minor groove, is 12 Å wide. The narrowness of the minor groove means that the edges of the bases are more accessible in the major groove. As a result, proteins like transcription factors that can bind to specific sequences in double-stranded DNA usually make contacts to the sides of the bases exposed in the major groove. This situation varies in unusual conformations of DNA within the cell (see below), but the major and minor grooves are always named to reflect the differences in size that would be seen if the DNA is twisted back into the ordinary B form.
In a DNA double helix, each type of nucleobase on one strand bonds with just one type of nucleobase on the other strand. This is called complementary base pairing. Here, purines form hydrogen bonds to pyrimidines, with adenine bonding only to thymine in two hydrogen bonds, and cytosine bonding only to guanine in three hydrogen bonds. This arrangement of two nucleotides binding together across the double helix is called a base pair. As hydrogen bonds are not covalent, they can be broken and rejoined relatively easily. The two strands of DNA in a double helix can therefore be pulled apart like a zipper, either by a mechanical force or high temperature. As a result of this complementarity, all the information in the double-stranded sequence of a DNA helix is duplicated on each strand, which is vital in DNA replication. Indeed, this reversible and specific interaction between complementary base pairs is critical for all the functions of DNA in living organisms.
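Because the pairing rules are so simple, they can be stated in a few lines of code. The following Python fragment, using an arbitrary example sequence, derives the antiparallel partner strand and checks that every aligned position pairs A with T or G with C.

```python
# Small illustration of complementary base pairing: given one strand (written 5'->3'),
# derive the antiparallel complementary strand and verify the pairing rules.
# The sequence is arbitrary, chosen only for the example.

PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(strand: str) -> str:
    """Complement each base and reverse the order (the partner strand runs antiparallel)."""
    return "".join(PAIR[base] for base in reversed(strand))

top = "ATGCGTTA"
bottom = reverse_complement(top)
print(top, bottom)                      # ATGCGTTA TAACGCAT

# Every position pairs A with T or G with C when the strands are aligned antiparallel:
assert all(PAIR[a] == b for a, b in zip(top, reversed(bottom)))
```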
The two types of base pairs form different numbers of hydrogen bonds, AT forming two hydrogen bonds, and GC forming three hydrogen bonds (see figures, right). DNA with high GC-content is more stable than DNA with low GC-content.
As noted above, most DNA molecules are actually two polymer strands, bound together in a helical fashion by noncovalent bonds; this double stranded structure (dsDNA) is maintained largely by the intrastrand base stacking interactions, which are strongest for G,C stacks. The two strands can come apart – a process known as melting – to form two ssDNA molecules. Melting occurs when conditions favor ssDNA; such conditions are high temperature, low salt and high pH (low pH also melts DNA, but since DNA is unstable due to acid depurination, low pH is rarely used).
The stability of the dsDNA form depends not only on the GC-content (% G,C basepairs) but also on sequence (since stacking is sequence specific) and also length (longer molecules are more stable). The stability can be measured in various ways; a common way is the "melting temperature", which is the temperature at which 50% of the ds molecules are converted to ss molecules; melting temperature is dependent on ionic strength and the concentration of DNA. As a result, it is both the percentage of GC base pairs and the overall length of a DNA double helix that determines the strength of the association between the two strands of DNA. Long DNA helices with a high GC-content have stronger-interacting strands, while short helices with high AT content have weaker-interacting strands. In biology, parts of the DNA double helix that need to separate easily, such as the TATAAT Pribnow box in some promoters, tend to have a high AT content, making the strands easier to pull apart.
In the laboratory, the strength of this interaction can be measured by finding the temperature necessary to break the hydrogen bonds, their melting temperature (also called Tm value). When all the base pairs in a DNA double helix melt, the strands separate and exist in solution as two entirely independent molecules. These single-stranded DNA molecules (ssDNA) have no single common shape, but some conformations are more stable than others.
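As a rough illustration of the GC-content effect described above, the sketch below computes the GC fraction of two invented oligonucleotides and applies the crude "Wallace rule" estimate of melting temperature for short oligos (roughly 2 °C per A·T pair and 4 °C per G·C pair). Real melting temperatures also depend on ionic strength, DNA concentration and stacking context, as noted above, so the numbers are only indicative.

```python
# Rough illustration of how GC content relates to duplex stability: GC fraction plus
# the crude "Wallace rule" estimate of melting temperature for short oligonucleotides
# (~2 C per A/T base pair, ~4 C per G/C base pair). Treat the output only as a
# back-of-the-envelope sketch; the example sequences are invented.

def gc_content(seq: str) -> float:
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def wallace_tm(seq: str) -> int:
    seq = seq.upper()
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2 * at + 4 * gc   # degrees Celsius; meaningful only for short (<14 nt) oligos

for oligo in ("TATAATTATA", "GCGCGGCGCC"):     # AT-rich vs GC-rich example sequences
    print(oligo, f"GC={gc_content(oligo):.0%}", f"Tm~{wallace_tm(oligo)}C")
```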
A DNA sequence is called "sense" if its sequence is the same as that of a messenger RNA copy that is translated into protein. The sequence on the opposite strand is called the "antisense" sequence. Both sense and antisense sequences can exist on different parts of the same strand of DNA (i.e. both strands contain both sense and antisense sequences). In both prokaryotes and eukaryotes, antisense RNA sequences are produced, but the functions of these RNAs are not entirely clear. One proposal is that antisense RNAs are involved in regulating gene expression through RNA-RNA base pairing.
A few DNA sequences in prokaryotes and eukaryotes, and more in plasmids and viruses, blur the distinction between sense and antisense strands by having overlapping genes. In these cases, some DNA sequences do double duty, encoding one protein when read along one strand, and a second protein when read in the opposite direction along the other strand. In bacteria, this overlap may be involved in the regulation of gene transcription, while in viruses, overlapping genes increase the amount of information that can be encoded within the small viral genome.
DNA can be twisted like a rope in a process called DNA supercoiling. With DNA in its "relaxed" state, a strand usually circles the axis of the double helix once every 10.4 base pairs, but if the DNA is twisted the strands become more tightly or more loosely wound. If the DNA is twisted in the direction of the helix, this is positive supercoiling, and the bases are held more tightly together. If they are twisted in the opposite direction, this is negative supercoiling, and the bases come apart more easily. In nature, most DNA has slight negative supercoiling that is introduced by enzymes called topoisomerases. These enzymes are also needed to relieve the twisting stresses introduced into DNA strands during processes such as transcription and DNA replication.
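A small numerical sketch can make the bookkeeping concrete. Assuming the relaxed helical repeat of roughly 10.4 base pairs per turn mentioned above, the following Python fragment computes the superhelical density of a hypothetical plasmid from which topoisomerases have removed a few turns; the plasmid size and the number of turns removed are invented for illustration.

```python
# Back-of-the-envelope sketch of supercoiling numbers, assuming a relaxed helical
# repeat of ~10.4 base pairs per turn. The plasmid size and the number of turns
# removed are invented purely for illustration.

HELICAL_REPEAT = 10.4                       # base pairs per helical turn in relaxed B-DNA

plasmid_bp = 5200                           # hypothetical circular plasmid
lk_relaxed = plasmid_bp / HELICAL_REPEAT    # linking number of the fully relaxed circle
turns_removed = 25                          # suppose topoisomerases removed 25 turns
lk = lk_relaxed - turns_removed             # negative supercoiling: Lk < Lk0

sigma = (lk - lk_relaxed) / lk_relaxed      # superhelical density
print(f"Lk0 = {lk_relaxed:.0f}, Lk = {lk:.0f}, sigma = {sigma:.3f}")   # sigma = -0.050
```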
DNA exists in many possible conformations that include A-DNA, B-DNA, and Z-DNA forms, although, only B-DNA and Z-DNA have been directly observed in functional organisms. The conformation that DNA adopts depends on the hydration level, DNA sequence, the amount and direction of supercoiling, chemical modifications of the bases, the type and concentration of metal ions, as well as the presence of polyamines in solution.
The first published reports of A-DNA X-ray diffraction patterns – and also of B-DNA – used analyses based on Patterson transforms that provided only a limited amount of structural information for oriented fibers of DNA. An alternate analysis was then proposed by Wilkins et al. in 1953 for the in vivo B-DNA X-ray diffraction/scattering patterns of highly hydrated DNA fibers in terms of squares of Bessel functions. In the same journal, James Watson and Francis Crick presented their molecular modeling analysis of the DNA X-ray diffraction patterns to suggest that the structure was a double helix.
Although the `B-DNA form' is most common under the conditions found in cells, it is not a well-defined conformation but a family of related DNA conformations that occur at the high hydration levels present in living cells. Their corresponding X-ray diffraction and scattering patterns are characteristic of molecular paracrystals with a significant degree of disorder.
Compared to B-DNA, the A-DNA form is a wider right-handed spiral, with a shallow, wide minor groove and a narrower, deeper major groove. The A form occurs under non-physiological conditions in partially dehydrated samples of DNA, while in the cell it may be produced in hybrid pairings of DNA and RNA strands, as well as in enzyme-DNA complexes. Segments of DNA where the bases have been chemically modified by methylation may undergo a larger change in conformation and adopt the Z form. Here, the strands turn about the helical axis in a left-handed spiral, the opposite of the more common B form. These unusual structures can be recognized by specific Z-DNA binding proteins and may be involved in the regulation of transcription.
For a number of years exobiologists have proposed the existence of a shadow biosphere, a postulated microbial biosphere of Earth that uses radically different biochemical and molecular processes than currently known life. One of the proposals was the existence of lifeforms that use arsenic instead of phosphorus in DNA. A report in 2010 of the possibility in the bacterium GFAJ-1, was announced, though the research was disputed, and evidence suggests the bacterium actively prevents the incorporation of arsenic into the DNA backbone and other biomolecules.
At the ends of the linear chromosomes are specialized regions of DNA called telomeres. The main function of these regions is to allow the cell to replicate chromosome ends using the enzyme telomerase, as the enzymes that normally replicate DNA cannot copy the extreme 3′ ends of chromosomes. These specialized chromosome caps also help protect the DNA ends, and stop the DNA repair systems in the cell from treating them as damage to be corrected. In human cells, telomeres are usually lengths of single-stranded DNA containing several thousand repeats of a simple TTAGGG sequence.
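As a trivial illustration of what such repeat tracts look like in sequence data, the following Python snippet counts the tandem TTAGGG repeats at the end of a made-up telomeric fragment.

```python
# Tiny sketch: counting how many tandem TTAGGG repeats sit at the 3' end of a
# made-up telomeric sequence fragment.

import re

telomere_fragment = "ACGT" + "TTAGGG" * 6    # hypothetical: six repeats after some other sequence

match = re.search(r"(?:TTAGGG)+$", telomere_fragment)
repeats = len(match.group(0)) // 6 if match else 0
print(f"Terminal TTAGGG repeats: {repeats}")   # -> 6
```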
These guanine-rich sequences may stabilize chromosome ends by forming structures of stacked sets of four-base units, rather than the usual base pairs found in other DNA molecules. Here, four guanine bases form a flat plate and these flat four-base units then stack on top of each other, to form a stable G-quadruplex structure. These structures are stabilized by hydrogen bonding between the edges of the bases and chelation of a metal ion in the centre of each four-base unit. Other structures can also be formed, with the central set of four bases coming from either a single strand folded around the bases, or several different parallel strands, each contributing one base to the central structure.
In addition to these stacked structures, telomeres also form large loop structures called telomere loops, or T-loops. Here, the single-stranded DNA curls around in a long circle stabilized by telomere-binding proteins. At the very end of the T-loop, the single-stranded telomere DNA is held onto a region of double-stranded DNA by the telomere strand disrupting the double-helical DNA and base pairing to one of the two strands. This triple-stranded structure is called a displacement loop or D-loop.
In DNA, fraying occurs when non-complementary regions exist at the end of an otherwise complementary double strand of DNA. However, branched DNA can occur if a third strand of DNA is introduced and contains adjoining regions able to hybridize with the frayed regions of the pre-existing double strand. Although the simplest example of branched DNA involves only three strands of DNA, complexes involving additional strands and multiple branches are also possible. Branched DNA can be used in nanotechnology to construct geometric shapes; see the section on uses in technology below.
The expression of genes is influenced by how the DNA is packaged in chromosomes, in a structure called chromatin. Base modifications can be involved in packaging, with regions that have low or no gene expression usually containing high levels of methylation of cytosine bases. DNA packaging and its influence on gene expression can also occur by covalent modifications of the histone protein core around which DNA is wrapped in the chromatin structure or else by remodeling carried out by chromatin remodeling complexes (see Chromatin remodeling). There is, further, crosstalk between DNA methylation and histone modification, so they can coordinately affect chromatin and gene expression.
For example, cytosine methylation produces 5-methylcytosine, which is important for X-chromosome inactivation. The average level of methylation varies between organisms – the worm Caenorhabditis elegans lacks cytosine methylation, while vertebrates have higher levels, with up to 1% of their DNA containing 5-methylcytosine. Despite the importance of 5-methylcytosine, it can deaminate to leave a thymine base, so methylated cytosines are particularly prone to mutations. Other base modifications include adenine methylation in bacteria, the presence of 5-hydroxymethylcytosine in the brain, and the glycosylation of uracil to produce the "J-base" in kinetoplastids.
DNA can be damaged by many sorts of mutagens, which change the DNA sequence. Mutagens include oxidizing agents, alkylating agents and also high-energy electromagnetic radiation such as ultraviolet light and X-rays. The type of DNA damage produced depends on the type of mutagen. For example, UV light can damage DNA by producing thymine dimers, which are cross-links between pyrimidine bases. On the other hand, oxidants such as free radicals or hydrogen peroxide produce multiple forms of damage, including base modifications, particularly of guanosine, and double-strand breaks. A typical human cell contains about 150,000 bases that have suffered oxidative damage. Of these oxidative lesions, the most dangerous are double-strand breaks, as these are difficult to repair and can produce point mutations, insertions and deletions from the DNA sequence, as well as chromosomal translocations. These mutations can cause cancer. Because of inherent limitations in the DNA repair mechanisms, if humans lived long enough, they would all eventually develop cancer. DNA damages that are naturally occurring, due to normal cellular processes that produce reactive oxygen species, the hydrolytic activities of cellular water, etc., also occur frequently. Although most of these damages are repaired, in any cell some DNA damage may remain despite the action of repair processes. These remaining DNA damages accumulate with age in mammalian postmitotic tissues. This accumulation appears to be an important underlying cause of aging.
Many mutagens fit into the space between two adjacent base pairs, this is called intercalation. Most intercalators are aromatic and planar molecules; examples include ethidium bromide, acridines, daunomycin, and doxorubicin. For an intercalator to fit between base pairs, the bases must separate, distorting the DNA strands by unwinding of the double helix. This inhibits both transcription and DNA replication, causing toxicity and mutations. As a result, DNA intercalators may be carcinogens, and in the case of thalidomide, a teratogen. Others such as benzo[a]pyrene diol epoxide and aflatoxin form DNA adducts that induce errors in replication. Nevertheless, due to their ability to inhibit DNA transcription and replication, other similar toxins are also used in chemotherapy to inhibit rapidly growing cancer cells.
DNA usually occurs as linear chromosomes in eukaryotes, and circular chromosomes in prokaryotes. The set of chromosomes in a cell makes up its genome; the human genome has approximately 3 billion base pairs of DNA arranged into 46 chromosomes. The information carried by DNA is held in the sequence of pieces of DNA called genes. Transmission of genetic information in genes is achieved via complementary base pairing. For example, in transcription, when a cell uses the information in a gene, the DNA sequence is copied into a complementary RNA sequence through the attraction between the DNA and the correct RNA nucleotides. Usually, this RNA copy is then used to make a matching protein sequence in a process called translation, which depends on the same interaction between RNA nucleotides. In alternative fashion, a cell may simply copy its genetic information in a process called DNA replication. The details of these functions are covered in other articles; here we focus on the interactions between DNA and other molecules that mediate the function of the genome.
Genomic DNA is tightly packed in an organized way, in a process called DNA condensation, to fit within the small volume available in the cell. In eukaryotes, DNA is located in the cell nucleus, as well as in small amounts in mitochondria and chloroplasts. In prokaryotes, the DNA is held within an irregularly shaped body in the cytoplasm called the nucleoid. The genetic information in a genome is held within genes, and the complete set of this information in an organism is called its genotype. A gene is a unit of heredity and is a region of DNA that influences a particular characteristic in an organism. Genes contain an open reading frame that can be transcribed, as well as regulatory sequences such as promoters and enhancers, which control the transcription of the open reading frame.
In many species, only a small fraction of the total sequence of the genome encodes protein. For example, only about 1.5% of the human genome consists of protein-coding exons, with over 50% of human DNA consisting of non-coding repetitive sequences. The reasons for the presence of so much noncoding DNA in eukaryotic genomes and the extraordinary differences in genome size, or C-value, among species represent a long-standing puzzle known as the "C-value enigma". However, some DNA sequences that do not code protein may still encode functional non-coding RNA molecules, which are involved in the regulation of gene expression.
Some noncoding DNA sequences play structural roles in chromosomes. Telomeres and centromeres typically contain few genes, but are important for the function and stability of chromosomes. An abundant form of noncoding DNA in humans are pseudogenes, which are copies of genes that have been disabled by mutation. These sequences are usually just molecular fossils, although they can occasionally serve as raw genetic material for the creation of new genes through the process of gene duplication and divergence.
A gene is a sequence of DNA that contains genetic information and can influence the phenotype of an organism. Within a gene, the sequence of bases along a DNA strand defines a messenger RNA sequence, which then defines one or more protein sequences. The relationship between the nucleotide sequences of genes and the amino-acid sequences of proteins is determined by the rules of translation, known collectively as the genetic code. The genetic code consists of three-letter 'words' called codons formed from a sequence of three nucleotides (e.g. ACT, CAG, TTT).
In transcription, the codons of a gene are copied into messenger RNA by RNA polymerase. This RNA copy is then decoded by a ribosome that reads the RNA sequence by base-pairing the messenger RNA to transfer RNA, which carries amino acids. Since there are 4 bases in 3-letter combinations, there are 64 possible codons (4³ = 64 combinations). These encode the twenty standard amino acids, giving most amino acids more than one possible codon. There are also three 'stop' or 'nonsense' codons signifying the end of the coding region; these are the TAA, TGA and TAG codons.
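The genetic code is small enough to write out as a lookup table. The sketch below builds the standard 64-codon table in the conventional TCAG ordering and translates a short, invented coding-strand sequence, stopping at the first stop codon; it ignores start-codon selection, reading-frame choice and RNA processing, so it is only a toy model of translation.

```python
# Sketch of the standard genetic code as a lookup table, built in the usual TCAG
# ordering, plus a toy translation of a short open reading frame. The example DNA
# sequence is invented; '*' marks the three stop codons.

BASES = "TCAG"
AMINO_ACIDS = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"

# Map all 64 codons (written in DNA letters, coding strand) to one-letter amino-acid codes.
CODON_TABLE = {
    a + b + c: AMINO_ACIDS[16 * i + 4 * j + k]
    for i, a in enumerate(BASES)
    for j, b in enumerate(BASES)
    for k, c in enumerate(BASES)
}

def translate(dna: str) -> str:
    """Translate a coding-strand DNA sequence codon by codon, stopping at a stop codon."""
    protein = []
    for pos in range(0, len(dna) - 2, 3):
        aa = CODON_TABLE[dna[pos:pos + 3]]
        if aa == "*":
            break
        protein.append(aa)
    return "".join(protein)

print(translate("ATGACTCAGTTTTAA"))   # -> MTQF (ATG ACT CAG TTT, then TAA stop)
```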
Cell division is essential for an organism to grow, but, when a cell divides, it must replicate the DNA in its genome so that the two daughter cells have the same genetic information as their parent. The double-stranded structure of DNA provides a simple mechanism for DNA replication. Here, the two strands are separated and then each strand's complementary DNA sequence is recreated by an enzyme called DNA polymerase. This enzyme makes the complementary strand by finding the correct base through complementary base pairing, and bonding it onto the original strand. As DNA polymerases can only extend a DNA strand in a 5′ to 3′ direction, different mechanisms are used to copy the antiparallel strands of the double helix. In this way, the base on the old strand dictates which base appears on the new strand, and the cell ends up with a perfect copy of its DNA.
All the functions of DNA depend on interactions with proteins. These protein interactions can be non-specific, or the protein can bind specifically to a single DNA sequence. Enzymes can also bind to DNA and of these, the polymerases that copy the DNA base sequence in transcription and DNA replication are particularly important.
Structural proteins that bind DNA are well-understood examples of non-specific DNA-protein interactions. Within chromosomes, DNA is held in complexes with structural proteins. These proteins organize the DNA into a compact structure called chromatin. In eukaryotes this structure involves DNA binding to a complex of small basic proteins called histones, while in prokaryotes multiple types of proteins are involved. The histones form a disk-shaped complex called a nucleosome, which contains two complete turns of double-stranded DNA wrapped around its surface. These non-specific interactions are formed through basic residues in the histones making ionic bonds to the acidic sugar-phosphate backbone of the DNA, and are therefore largely independent of the base sequence. Chemical modifications of these basic amino acid residues include methylation, phosphorylation and acetylation. These chemical changes alter the strength of the interaction between the DNA and the histones, making the DNA more or less accessible to transcription factors and changing the rate of transcription. Other non-specific DNA-binding proteins in chromatin include the high-mobility group proteins, which bind to bent or distorted DNA. These proteins are important in bending arrays of nucleosomes and arranging them into the larger structures that make up chromosomes.
A distinct group of DNA-binding proteins are the DNA-binding proteins that specifically bind single-stranded DNA. In humans, replication protein A is the best-understood member of this family and is used in processes where the double helix is separated, including DNA replication, recombination and DNA repair. These binding proteins seem to stabilize single-stranded DNA and protect it from forming stem-loops or being degraded by nucleases.
In contrast, other proteins have evolved to bind to particular DNA sequences. The most intensively studied of these are the various transcription factors, which are proteins that regulate transcription. Each transcription factor binds to one particular set of DNA sequences and activates or inhibits the transcription of genes that have these sequences close to their promoters. The transcription factors do this in two ways. Firstly, they can bind the RNA polymerase responsible for transcription, either directly or through other mediator proteins; this locates the polymerase at the promoter and allows it to begin transcription. Alternatively, transcription factors can bind enzymes that modify the histones at the promoter. This changes the accessibility of the DNA template to the polymerase.
As these DNA targets can occur throughout an organism's genome, changes in the activity of one type of transcription factor can affect thousands of genes. Consequently, these proteins are often the targets of the signal transduction processes that control responses to environmental changes or cellular differentiation and development. The specificity of these transcription factors' interactions with DNA come from the proteins making multiple contacts to the edges of the DNA bases, allowing them to "read" the DNA sequence. Most of these base-interactions are made in the major groove, where the bases are most accessible.
Nucleases are enzymes that cut DNA strands by catalyzing the hydrolysis of the phosphodiester bonds. Nucleases that hydrolyse nucleotides from the ends of DNA strands are called exonucleases, while endonucleases cut within strands. The most frequently used nucleases in molecular biology are the restriction endonucleases, which cut DNA at specific sequences. For instance, the EcoRV enzyme recognizes the 6-base sequence 5′-GATATC-3′ and makes a blunt cut in the middle of this site. In nature, these enzymes protect bacteria against phage infection by digesting the phage DNA when it enters the bacterial cell, acting as part of the restriction modification system. In technology, these sequence-specific nucleases are used in molecular cloning and DNA fingerprinting.
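Computationally, a restriction digest is just a pattern search followed by cuts. The following illustrative Python sketch (with an invented input sequence) finds every EcoRV recognition site and splits the sequence at the blunt cut position in the middle of the site.

```python
# Illustrative sketch of a restriction digest: find every EcoRV recognition site
# (GATATC) in a made-up sequence and cut it. EcoRV leaves blunt ends, cutting
# between GAT and ATC, so the cut offset within the site is 3.

SITE = "GATATC"
CUT_OFFSET = 3   # EcoRV cuts GAT|ATC (blunt ends)

def digest(seq: str, site: str = SITE, offset: int = CUT_OFFSET) -> list[str]:
    """Return the fragments produced by cutting at every occurrence of the site."""
    fragments, start, pos = [], 0, seq.find(site)
    while pos != -1:
        fragments.append(seq[start:pos + offset])
        start = pos + offset
        pos = seq.find(site, pos + 1)
    fragments.append(seq[start:])
    return fragments

example = "AAAGATATCCCCTTTGATATCGGG"   # hypothetical sequence with two EcoRV sites
print(digest(example))                  # -> ['AAAGAT', 'ATCCCCTTTGAT', 'ATCGGG']
```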
Enzymes called DNA ligases can rejoin cut or broken DNA strands. Ligases are particularly important in lagging strand DNA replication, as they join together the short segments of DNA produced at the replication fork into a complete copy of the DNA template. They are also used in DNA repair and genetic recombination.
Topoisomerases are enzymes with both nuclease and ligase activity. These proteins change the amount of supercoiling in DNA. Some of these enzymes work by cutting the DNA helix and allowing one section to rotate, thereby reducing its level of supercoiling; the enzyme then seals the DNA break. Other types of these enzymes are capable of cutting one DNA helix and then passing a second strand of DNA through this break, before rejoining the helix. Topoisomerases are required for many processes involving DNA, such as DNA replication and transcription.
Helicases are proteins that are a type of molecular motor. They use the chemical energy in nucleoside triphosphates, predominantly ATP, to break hydrogen bonds between bases and unwind the DNA double helix into single strands. These enzymes are essential for most processes where enzymes need to access the DNA bases.
Polymerases are enzymes that synthesize polynucleotide chains from nucleoside triphosphates. The sequences of their products are copies of existing polynucleotide chains, which are called templates. These enzymes function by adding nucleotides onto the 3′ hydroxyl group of the previous nucleotide in a DNA strand. As a consequence, all polymerases work in a 5′ to 3′ direction. In the active site of these enzymes, the incoming nucleoside triphosphate base-pairs to the template: this allows polymerases to accurately synthesize the complementary strand of their template. Polymerases are classified according to the type of template that they use.
In DNA replication, a DNA-dependent DNA polymerase makes a copy of a DNA sequence. Accuracy is vital in this process, so many of these polymerases have a proofreading activity. Here, the polymerase recognizes the occasional mistakes in the synthesis reaction by the lack of base pairing between the mismatched nucleotides. If a mismatch is detected, a 3′ to 5′ exonuclease activity is activated and the incorrect base removed. In most organisms, DNA polymerases function in a large complex called the replisome that contains multiple accessory subunits, such as the DNA clamp or helicases.
RNA-dependent DNA polymerases are a specialized class of polymerases that copy the sequence of an RNA strand into DNA. They include reverse transcriptase, which is a viral enzyme involved in the infection of cells by retroviruses, and telomerase, which is required for the replication of telomeres. Telomerase is an unusual polymerase because it contains its own RNA template as part of its structure.
Transcription is carried out by a DNA-dependent RNA polymerase that copies the sequence of a DNA strand into RNA. To begin transcribing a gene, the RNA polymerase binds to a sequence of DNA called a promoter and separates the DNA strands. It then copies the gene sequence into a messenger RNA transcript until it reaches a region of DNA called the terminator, where it halts and detaches from the DNA. As with human DNA-dependent DNA polymerases, RNA polymerase II, the enzyme that transcribes most of the genes in the human genome, operates as part of a large protein complex with multiple regulatory and accessory subunits.
A DNA helix usually does not interact with other segments of DNA, and in human cells the different chromosomes even occupy separate areas in the nucleus called "chromosome territories". This physical separation of different chromosomes is important for the ability of DNA to function as a stable repository for information, as one of the few times chromosomes interact is during chromosomal crossover when they recombine. Chromosomal crossover is when two DNA helices break, swap a section and then rejoin.
Recombination allows chromosomes to exchange genetic information and produces new combinations of genes, which increases the efficiency of natural selection and can be important in the rapid evolution of new proteins. Genetic recombination can also be involved in DNA repair, particularly in the cell's response to double-strand breaks.
The most common form of chromosomal crossover is homologous recombination, where the two chromosomes involved share very similar sequences. Non-homologous recombination can be damaging to cells, as it can produce chromosomal translocations and genetic abnormalities. The recombination reaction is catalyzed by enzymes known as recombinases, such as RAD51. The first step in recombination is a double-stranded break caused by either an endonuclease or damage to the DNA. A series of steps catalyzed in part by the recombinase then leads to joining of the two helices by at least one Holliday junction, in which a segment of a single strand in each helix is annealed to the complementary strand in the other helix. The Holliday junction is a tetrahedral junction structure that can be moved along the pair of chromosomes, swapping one strand for another. The recombination reaction is then halted by cleavage of the junction and re-ligation of the released DNA.
DNA contains the genetic information that allows all modern living things to function, grow and reproduce. However, it is unclear how long in the 4-billion-year history of life DNA has performed this function, as it has been proposed that the earliest forms of life may have used RNA as their genetic material. RNA may have acted as the central part of early cell metabolism as it can both transmit genetic information and carry out catalysis as part of ribozymes. This ancient RNA world, where nucleic acid would have been used for both catalysis and genetics, may have influenced the evolution of the current genetic code based on four nucleotide bases. Such a code could have arisen because the number of different bases in such an organism is a trade-off: a small number of bases increases replication accuracy, while a large number of bases increases the catalytic efficiency of ribozymes.
However, there is no direct evidence of ancient genetic systems, as recovery of DNA from most fossils is impossible. This is because DNA survives in the environment for less than one million years, and slowly degrades into short fragments in solution. Claims for older DNA have been made, most notably a report of the isolation of a viable bacterium from a salt crystal 250 million years old, but these claims are controversial.
On 8 August 2011, a report, based on NASA studies with meteorites found on Earth, was published suggesting building blocks of DNA (adenine, guanine and related organic molecules) may have been formed extraterrestrially in outer space.
Methods have been developed to purify DNA from organisms, such as phenol-chloroform extraction, and to manipulate it in the laboratory, such as restriction digests and the polymerase chain reaction. Modern biology and biochemistry make intensive use of these techniques in recombinant DNA technology. Recombinant DNA is a man-made DNA sequence that has been assembled from other DNA sequences. Such sequences can be transformed into organisms in the form of plasmids or, in the appropriate format, by using a viral vector. The genetically modified organisms produced can be used to produce products such as recombinant proteins, used in medical research, or be grown in agriculture.
Forensic scientists can use DNA in blood, semen, skin, saliva or hair found at a crime scene to identify a matching DNA of an individual, such as a perpetrator. This process is formally termed DNA profiling, but may also be called "genetic fingerprinting". In DNA profiling, the lengths of variable sections of repetitive DNA, such as short tandem repeats and minisatellites, are compared between people. This method is usually an extremely reliable technique for identifying a matching DNA. However, identification can be complicated if the scene is contaminated with DNA from several people. DNA profiling was developed in 1984 by British geneticist Sir Alec Jeffreys, and first used in forensic science to convict Colin Pitchfork in the 1988 Enderby murders case.
The development of forensic science, and the ability to now obtain genetic matching on minute samples of blood, skin, saliva or hair has led to a re-examination of a number of cases. Evidence can now be uncovered that was not scientifically possible at the time of the original examination. Combined with the removal of the double jeopardy law in some places, this can allow cases to be reopened where previous trials have failed to produce sufficient evidence to convince a jury. People charged with serious crimes may be required to provide a sample of DNA for matching purposes. The most obvious defence to DNA matches obtained forensically is to claim that cross-contamination of evidence has taken place. This has resulted in meticulous, strict handling procedures for new cases of serious crime. DNA profiling is also used to identify victims of mass casualty incidents. As well as positively identifying bodies or body parts in serious accidents, DNA profiling is being successfully used to identify individual victims in mass war graves by matching to family members.
Bioinformatics involves the manipulation, searching, and data mining of biological data, and this includes DNA sequence data. The development of techniques to store and search DNA sequences has led to widely applied advances in computer science, especially string searching algorithms, machine learning and database theory. String searching or matching algorithms, which find an occurrence of a sequence of letters inside a larger sequence of letters, were developed to search for specific sequences of nucleotides. The DNA sequence may be aligned with other DNA sequences to identify homologous sequences and locate the specific mutations that make them distinct. These techniques, especially multiple sequence alignment, are used in studying phylogenetic relationships and protein function. Data sets representing entire genomes' worth of DNA sequences, such as those produced by the Human Genome Project, are difficult to use without the annotations that identify the locations of genes and regulatory elements on each chromosome. Regions of DNA sequence that have the characteristic patterns associated with protein- or RNA-coding genes can be identified by gene finding algorithms, which allow researchers to predict the presence of particular gene products and their possible functions in an organism even before they have been isolated experimentally. Entire genomes may also be compared, which can shed light on the evolutionary history of particular organisms and permit the examination of complex evolutionary events.
DNA nanotechnology uses the unique molecular recognition properties of DNA and other nucleic acids to create self-assembling branched DNA complexes with useful properties. DNA is thus used as a structural material rather than as a carrier of biological information. This has led to the creation of two-dimensional periodic lattices (both tile-based as well as using the "DNA origami" method) as well as three-dimensional structures in the shapes of polyhedra. Nanomechanical devices and algorithmic self-assembly have also been demonstrated, and these DNA structures have been used to template the arrangement of other molecules such as gold nanoparticles and streptavidin proteins.
Because DNA collects mutations over time, which are then inherited, it contains historical information, and, by comparing DNA sequences, geneticists can infer the evolutionary history of organisms, their phylogeny. This field of phylogenetics is a powerful tool in evolutionary biology. If DNA sequences within a species are compared, population geneticists can learn the history of particular populations. This can be used in studies ranging from ecological genetics to anthropology; for example, DNA evidence is being used to try to identify the Ten Lost Tribes of Israel.
DNA has also been used to look at modern family relationships, such as establishing family relationships between the descendants of Sally Hemings and Thomas Jefferson. This usage is closely related to the use of DNA in criminal investigations detailed above. Indeed, some criminal investigations have been solved when DNA from crime scenes has matched relatives of the guilty individual.
In a paper published in Nature in January, 2013, scientists from the European Bioinformatics Institute and Agilent Technologies proposed a mechanism to use DNA's ability to code information as a means of digital data storage. The group was able to encode 739 kilobytes of data into DNA code, synthesize the actual DNA, then sequence the DNA and decode the information back to its original form, with a reported 100% accuracy. The encoded information consisted of text files and audio files. A prior experiment was published in August 2012. It was conducted by researchers at Harvard University, where the text of a 54,000-word book was encoded in DNA.
DNA was first isolated by the Swiss physician Friedrich Miescher who, in 1869, discovered a microscopic substance in the pus of discarded surgical bandages. As it resided in the nuclei of cells, he called it "nuclein". In 1878, Albrecht Kossel isolated the non-protein component of "nuclein", nucleic acid, and later isolated its five primary nucleobases. In 1919, Phoebus Levene identified the base, sugar and phosphate nucleotide unit. Levene suggested that DNA consisted of a string of nucleotide units linked together through the phosphate groups. However, Levene thought the chain was short and the bases repeated in a fixed order. In 1937 William Astbury produced the first X-ray diffraction patterns that showed that DNA had a regular structure.
In 1927, Nikolai Koltsov proposed that inherited traits would be inherited via a "giant hereditary molecule" made up of "two mirror strands that would replicate in a semi-conservative fashion using each strand as a template". In 1928, Frederick Griffith discovered that traits of the "smooth" form of Pneumococcus could be transferred to the "rough" form of the same bacteria by mixing killed "smooth" bacteria with the live "rough" form. This system provided the first clear suggestion that DNA carries genetic information—the Avery–MacLeod–McCarty experiment—when Oswald Avery, along with coworkers Colin MacLeod and Maclyn McCarty, identified DNA as the transforming principle in 1943. DNA's role in heredity was confirmed in 1952, when Alfred Hershey and Martha Chase in the Hershey–Chase experiment showed that DNA is the genetic material of the T2 phage.
In 1953, James Watson and Francis Crick suggested what is now accepted as the first correct double-helix model of DNA structure in the journal Nature. Their double-helix, molecular model of DNA was then based on a single X-ray diffraction image (labeled as "Photo 51") taken by Rosalind Franklin and Raymond Gosling in May 1952, as well as the information that the DNA bases are paired — also obtained through private communications from Erwin Chargaff in the previous years. Chargaff's rules played a very important role in establishing double-helix configurations for B-DNA as well as A-DNA.
Experimental evidence supporting the Watson and Crick model was published in a series of five articles in the same issue of Nature. Of these, Franklin and Gosling's paper was the first publication of their own X-ray diffraction data and original analysis method, which partially supported the Watson and Crick model; the same issue also contained an article on DNA structure by Maurice Wilkins and two of his colleagues, whose analysis and in vivo B-DNA X-ray patterns likewise supported the double-helical configuration that Crick and Watson had proposed in the preceding two pages of Nature. In 1962, after Franklin's death, Watson, Crick, and Wilkins jointly received the Nobel Prize in Physiology or Medicine. Nobel Prizes were awarded only to living recipients at the time. A debate continues about who should receive credit for the discovery.
In an influential presentation in 1957, Crick laid out the central dogma of molecular biology, which foretold the relationship between DNA, RNA, and proteins, and articulated the "adaptor hypothesis". Final confirmation of the replication mechanism that was implied by the double-helical structure followed in 1958 through the Meselson–Stahl experiment. Further work by Crick and coworkers showed that the genetic code was based on non-overlapping triplets of bases, called codons, allowing Har Gobind Khorana, Robert W. Holley and Marshall Warren Nirenberg to decipher the genetic code. These findings represent the birth of molecular biology.
| http://www.mashpedia.com/DNA | 13
149 | A variable is an area of memory used to store values that can be used in a program. Before using a variable, you must inform the interpreter that you intend to use it. This is also referred to as declaring a variable. To declare a variable, use the DECLARE keyword with the following formula:
DECLARE @VariableName DataType
The DECLARE keyword lets the interpreter know that you are making a declaration. In Transact-SQL, the name of a variable starts with the @ sign. Whenever you need to refer to the variable, you must include the @ sign. The name of a variable identifies the area of memory where the value of the variable is stored. Transact-SQL is very flexible when it comes to names; for example, a name can be made of digits only.
There are additional rules and suggestions that govern variable names.
When declaring a variable, after giving a name, you must also specify its data type.
You can declare more than one variable at the same time. To do that, separate them with a comma. The formula would be:
DECLARE @Variable1 DataType1, @Variable2 DataType2, @Variable_n DataType_n;
Unlike many other languages like C/C++, C#, Java, or Pascal, if you declare many variables that use the same data type, the name of each variable must be followed by its own data type.
After declaring a variable, the interpreter reserves space in the computer memory for it but the space doesn't necessarily hold a recognizable value. This means that, at this time, the variable is null. One way you can change this is to give a value to the variable. This is referred to as initializing the variable.
To initialize a variable, in the necessary section, type the SELECT or the SET keyword followed by the name of the variable, followed by the assignment operator "=", followed by an appropriate value. The formula used is:
SELECT @VariableName = DesiredValue
SET @VariableName = DesiredValue
Once a variable has been initialized, you can make its value available or display it. This time, you can type the name of the variable to the right side of PRINT or SELECT.
After setting the name of a variable, you must specify the amount of memory that the variable will need to store its value. Since there are various kinds of information a database can deal with, SQL provides a set of data types. The types used for variables are exactly those we used for columns. This also means that the rules we reviewed for those data types are the same. The data types are reviewed here simply as reminders.
A Boolean variable is declared using the BIT or bit data type. Here is an example:
DECLARE @IsOrganDonor bit;
After declaring a Boolean variable, you can initialize it with 0 or another value. If the variable is initialized with 0, it receives the Boolean value of False. If it is initialized with any other number, it receives a True value.
Transact-SQL supports various types of natural numbers. If a variable would hold natural numbers in the range of -2,147,483,648 to 2,147,483,647, you can declare it with the int data type. Here is an example:
DECLARE @Category int; SET @Category = 1450; PRINT @Category; GO
This would produce 1450.
If the variable will hold very small positive numbers that range from 0 to 255, declare it using the tinyint data type; for slightly larger whole numbers, from -32,768 to 32,767, use the smallint data type. Here is an example that uses smallint:
1> DECLARE @NumberOfPages SMALLINT; 2> SET @NumberOfPages = 16; 3> SELECT @NumberOfPages AS [Number of Pages]; 4> GO Number of Pages --------------- 16 (1 rows affected)
The bigint data type is used for variables that hold whole numbers over a very large range, from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Here is an example:
1> DECLARE @CountryPopulation BigInt; 2> SET @CountryPopulation = 16500000; 3> SELECT @CountryPopulation AS 'Country Population'; 4> GO Country Population -------------------- 16500000 (1 rows affected)
Transact-SQL supports decimal numbers of all types. For example, you can use the numeric or decimal data type for a variable that would hold all types of numbers, whether natural or decimal. Here is an example:
1> DECLARE @Distance DECIMAL; 2> SET @Distance = 648.16; 3> PRINT @Distance; 4> GO 648
The precision of a decimal number specifies the number of digits used to display the value. As seen already, to specify the precision of a decimal or numeric data type, add parentheses to the data type. In the parentheses, enter a number between 1 and 38.
The scale specifies the fractional part of a decimal number, that is, the number of digits on the right side of the period (in US English). It is given as the second number in the parentheses, as in decimal(8, 2).
Transact-SQL supports floating-point numbers through the float and the real data types. Here is an example of declaring and using a variable of type float:
1> DECLARE @Radius FLOAT; 2> SET @Radius = 48.16; 3> SELECT @Radius AS Radius; 4> GO Radius ------------------------ 48.159999999999997 (1 rows affected)
If you want the variable to use monetary values, declare it with the money data type. Here is an example:
1> DECLARE @YearlyIncome Money; 2> SET @YearlyIncome = 48500.15; 3> SELECT @YearlyIncome AS [Yearly Income]; 4> GO Yearly Income --------------------- 48500.1500 (1 rows affected)
Remember that Transact-SQL also supports the smallmoney data type whose values range from -214,748.3648 to 214,748.3647. The precision and scale of a money or smallmoney variable are fixed by Microsoft SQL Server. The scale is fixed to 4.
To declare a variable that uses a character or any kind of symbol, use the char data type. To initialize the variable, include its value in single-quotes. Here is an example:
1> DECLARE @Gender char; 2> SET @GENDER = 'M'; 3> SELECT @Gender AS Gender; 4> GO Gender ------ M (1 rows affected)
If the variable deals with international characters or non-Latin symbols (Unicode), use the nchar data type. When initializing the variable, you should precede its value with N. Here is an example:
1> DECLARE @Gender nchar; 2> SET @GENDER = N'M'; 3> SELECT @Gender AS Gender; 4> GO Gender ------ M (1 rows affected)
A string is a combination of characters or symbols of any kind. To declare a variable for such a value, use the varchar data type. Here is an example:
DECLARE @FirstName varchar;
Remember that you can (in fact should always) specify the length of the string by passing a number in the parentheses of the data type. Here are examples:
DECLARE @Gender char(1); DECLARE @FirstName varchar(20);
You can then initialize the variable(s) by including its value in single-quotes. Here are examples:
DECLARE @Gender char(1); DECLARE @FirstName varchar(20); SET @Gender = 'Male'; SET @FirstName = 'Yolanda';
If you are using the Command Prompt (SQLCMD.EXE), include its value between double-quotes.
If you are using a Query Editor, don't include the string value in double-quotes; otherwise, you would receive an error.
If the variable may involve international characters or symbols (Unicode), you should declare it using the nvarchar data type. When initializing the variable, precede its value with N. Here are examples:
DECLARE @Gender char; DECLARE @Code nchar; DECLARE @FirstName varchar; DECLARE @LastName nvarchar; SET @Gender = N'Male'; SET @Code = N'7HHF-294'; SET @FirstName = 'Yolanda'; SET @LastName = N'Williamson';
Notice that you can initialize the variable with the N prefix whether it was declared as char, nchar, varchar, or nvarchar. You will not receive an error.
If you include more than one character in the single-quotes, only the first (left-most) character will be stored in the variable when it was declared as char or nchar without a length (the default length is 1).
If the variable will use large text, declare it using the varchar(max) data type. If the text may involve Unicode characters, declare it using the nvarchar(max) data type. Here is an example:
declare @TermPaper nvarchar(max);
You can initialize the variable using any of the rules we reviewed for strings.
Transact-SQL provides the sql_variant data type. It can be used to declare a variable that can hold any type of value. When initializing the variable, you must follow the rules of the actual data type the SQL variant represents. Here are examples:
DECLARE @FullName SQL_VARIANT, @DateHired Sql_Variant, @IsMarried SQL_variant, @YearlyIncome sql_variant; SET @FullName = N'Paul Yamo'; SET @DateHired = N'20110407'; SET @IsMarried = 1; SET @YearlyIncome = 48500.15; SELECT @FullName AS [Full Name]; SELECT @DateHired AS [Date Hired]; SELECT @IsMarried AS [Is Married?]; SELECT @YearlyIncome AS [Yearly Income]; GO
Transact-SQL supports geometric coordinates through the geometry data type. You can use it to declare a variable. Here is an example:
DECLARE @Location geometry;
The geometry type is a class with properties and methods. After declaring a geometry variable, you must initialize it. This is done through the STGeomFromText method, whose syntax is:
static geometry STGeomFromText('geometry_tagged_text', SRID)
The method is static. This means that, to access it, you use geometry::STGeomFromText. Here is an example:
DECLARE @Location geometry; SET @Location = geometry::STGeomFromText(. . .)
This method takes two arguments. The first argument holds a value identified as a Well-Known Text (WKT) value. The value follows a format defined by the OGC. There are various ways you can specify this value. As you may know already, a geometric point is an object that has two values: the horizontal coordinate x and the vertical coordinate y. The values can be integers or floating-point numbers.
If you know the coordinates of a point and you want to use it as the value of the geometry object, type point() (or POINT(), this is not case-sensitive) and, in the parentheses, type both values separated by a space. Here is an example:
DECLARE @Location geometry; SET @Location = geometry::STGeomFromText('point(6 4)', . . .
Instead of just one point, you may want to use a geometric value that is a line. In this case, specify the shape as linestring(x1 y1, x2 y2). In the parentheses and on both sides of the comma, type each point as its x and y coordinates. Here is an example:
DECLARE @Location geometry; SET @Location = geometry::STGeomFromText('linestring(1 4, 5 2)', . . .);
You can also use a complex geometric value, in which case you can pass the argument as a polygon. Use polygon(()) (or POLYGON(())) and pass the vertices in the parentheses. Each vertex should specify its x and y coordinates. The vertices are separated by commas. A last vertex should be used to close the polygon, in which case the first and the last vertices should be the same. Here is an example:
DECLARE @Location geometry; SET @Location = geometry::STGeomFromText('polygon((1 2, 2 5, 5 5, 4 2, 1 2))', . . );
The second argument of the geometry::STGeomFromText method is a constant integer known as the spatial reference ID (SRID).
After declaring and initializing the value, you can use a SELECT statement to display its value. Here is an example:
DECLARE @Location geometry; SET @Location = geometry::STGeomFromText('point(6 4)', 0); SELECT @Location;
Transact-SQL also supports geographic locations through the geography data type.
Transact-SQL allows you to define a type based on one of the existing data types. This is called a user-defined data type (UDT). We have already reviewed how to create it. Here are examples:
CREATE TYPE NaturalNumber FROM int; GO CREATE TYPE ShortString FROM nvarchar(20); GO CREATE TYPE ItemCode FROM nchar(10); GO CREATE TYPE LongString FROM nvarchar(80); GO CREATE TYPE Salary FROM decimal(8, 2); GO CREATE TYPE Boolean FROM bit; GO
After creating a UDT, you can declare a variable for it. Then, before using it, you must initialize it with the appropriate value. Here are examples:
DECLARE @EmployeeID NaturalNumber, @EmployeeNumber ItemCode, @FirstName ShortString, @LastName ShortString, @Address LongString, @HourlySalary Salary, @IsMarried Boolean; SET @EmployeeID = 1; SET @EmployeeNumber = N'28-380'; SET @FirstName = N'Gertrude'; SET @LastName = N'Monay'; SET @Address = N'1044 Alicot Drive'; SET @HourlySalary = 26.75; SET @IsMarried = 1; SELECT @EmployeeID AS [Empl ID], @EmployeeNumber AS [Empl #], @FirstName AS [First Name], @LastName AS [Last Name], @Address, @HourlySalary AS [Hourly Salary], @IsMarried AS [Is Married ?]; GO
Of course, you can mix Transact-SQL data types and your own defined type in your code.
A composite operation consists of performing an operation on a variable and assigning the result back to that same variable. For example, suppose you have a variable a that holds a value and you want to change that value by adding the variable's own value to itself. Composite operations use an operator that is in fact a combination of two operators. The variable can be of almost any type that supports the kind of operation you want to perform.
The addition composite operation uses the += operator. To add the value of a variable to itself, use the variable as both operands and place this operator between them. Here is an example:
DECLARE @Variable int; SET @Variable = 248; SELECT @Variable; SET @Variable += @Variable; SELECT @Variable;
Once you have performed the operation, the variable holds the new value.
As mentioned already, a variable that is involved in a composite operation can be of any type as long as the type supports that operation. For example, strings in Transact-SQL support the addition. This means that the variable can be of type char or any of its variants.
One variant of the composite operation is to add one variable to another. To do this, include the += operator between the operands. Here is an example:
DECLARE @Name nvarchar(50); DECLARE @LastName nvarchar(20); SET @Name = N'Paul'; SET @LastName = N' Yamaguchi'; SELECT @Name; SELECT @LastName; SET @Name += @LastName; SELECT @Name;
When the operation has been performed, the left operand holds its original value plus that of the other variable.
Another variant of the composite operation consists of adding a constant to a variable. In this case, on the right side of the += operator, use the constant. Here is an example:
DECLARE @FirstName nvarchar(50); SET @FirstName = N'Paul'; SELECT @FirstName; SET @FirstName += N' Motto'; SELECT @FirstName;
Once again, remember that when the operation has been performed, the variable holds the new value.
In the same way, you can perform this operation as many times as you want by adding right operands to a left operand. Here are examples:
DECLARE @Name nvarchar(50); DECLARE @MiddleName nvarchar(20); DECLARE @LastName nvarchar(20); SET @Name = N'Paul'; SET @MiddleName = N' Bertrand'; SET @LastName = N' Yamaguchi'; SET @Name += @MiddleName; SELECT @Name; SET @Name += @LastName; SELECT @Name;
One important thing you must keep in mind is the storage capacity of the left operand: It must be able to hold all values added to it.
The concept of composite operation can be applied to all arithmetic binary operations. As seen above, strings also support the addition composite operation. Composite operations are also available on all bit manipulation operations. The most important thing to remember is that not all data types support all operations. Overall:
You should know that these operations can be performed on natural or decimal numbers. | http://functionx.com/sqlserver/Lesson09.htm | 13 |
54 | v = rw
Linear Velocity....as the name implies....deals with speed in a straight line; the units are often km/hr, m/s, or mph (miles per hour).
Linear Velocity (v) = change in distance / change in time
v = Δd/Δt
Angular Velocity....as the name implies...deals with speed through an angle, or in a circular motion; the units are often radians per second or degrees per second.
Angular Velocity (w) = change in rotation / change in time
w = Δθ/Δt
To relate the two types of velocity to each other, we can use the following relationship:
v = rw
where r = radius.
Linear and angular velocities relate the speed of an object, dependent on the perspective taken. Linear velocity applies to any object or particle that moves, while angular velocity applies to those that turn (such as a wheel, the revolution of the earth, or a spinning top). Angular velocity is an expression of angular displacement over time, and can be expressed in degrees or radians (radians/hr, degrees/sec, and so on). Angular velocity is found with the equation ω = θ/t. To determine linear velocity (linear displacement over time) from angular velocity, apply the formula v = rω, where ω is expressed in radians/time and r is the radius of the path taken (or the radius of the object, if it is spinning).
Everything which turns or moves in the circular
direction has both linear velocity and angular velocity. The
angular velocity of a particle traveling on a circular path is
defined as ratio of the angle traversed to the amount of time it
takes to traverse that angle. So, the angular velocity (ω)
can be defined as;
ω = θ/t radians/sec
ω = angular velocity
θ = angle the object traversed
t = time in which object traversed θ angle
The given expression is called Angular Velocity Formula. The angular velocity can also be defined as the rate of change of angular displacement with respect to time.
Mathematically, if the displacement is very less;
ω = dθ/dt radians/sec
For finding the relation between the angular and linear velocity we must first find the relation between the linear displacement (x) and the angular displacement (θ).
Let’s consider that a body is moving in a circular track whose radius is ‘r’, then the distance (x) travelled by it completing one circle is equal to the circumference of the circle, so,
x = 2 π r
Here 2π is the total angle the body travelled from its initial position, so if the body travels only θ angle on the circle then;
x = θ × r
x = linear displacement
θ = angular displacement
r = radius of the curved surface
So, using this equation with the equation of linear velocity (v = x/t), we get v = (θ × r)/t = rω.
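As a quick numerical illustration of this v = rω relationship, here is a short Python sketch; the wheel radius and rotation rate are arbitrary values chosen only for the example.

import math

r = 0.35                          # radius of a wheel in metres (arbitrary example value)
rpm = 120                         # rotation rate in revolutions per minute (arbitrary)

omega = rpm * 2 * math.pi / 60    # angular velocity in radians per second
v = r * omega                     # linear (tangential) velocity in metres per second

print(f"angular velocity w = {omega:.2f} rad/s")
print(f"linear velocity  v = r*w = {v:.2f} m/s")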
Definition: Tangential Speed
The average tangential speed of such an object is defined to be the length of arc, Δs, travelled divided by the time interval, Δt:
v_t(average) = Δs / Δt . (11)
The instantaneous tangential speed is obtained by taking Δt to zero:
v_t = ds/dt . (12)
Using the fact that
s = rθ (13)
we obtain the relationship between the angular velocity of an object in circular motion and its tangential velocity:
v_t = r dθ/dt = rω. (14)
This relation holds for both average and instantaneous speeds.
The instantaneous tangential velocity vector is always perpendicular to the radius vector for circular motion.
Definition: Tangential Acceleration
Tangential acceleration is the rate of change of tangential speed. The average tangential acceleration is:
a_t(average) = Δv_t / Δt = r Δω / Δt = r α(average) (15)
where α(average) is the average angular acceleration. The instantaneous tangential acceleration is given by:
a_t = r α (16)
where α is the instantaneous angular acceleration.
90 | The last two decades have seen significant progress in our understanding of the universe. While previously we only knew the planets and moons in our own solar system, we are now aware of many planets orbiting distant stars. At last count, there are more than 800 of those extrasolar planets (see http://exoplanet.eu/). Even in the star system closest to our sun, the alpha Centauri system, an extrasolar planet candidate has been detected recently.
What would it take to visit another star? This question has been discussed seriously for several decades now, even before any extrasolar planet had been discovered. The point of this blog post will be to recall how hard interstellar travel is (answer: it is really, really hard).
The challenge is best illustrated by taking the speed of the fastest spacecraft flying today: That is the Voyager 1 space probe, flying at about 17 kilometers per second away from our sun. At this speed, it would take Voyager 70,000 years to reach alpha Centauri, if it were flying into that direction. It is safe to assume that no civilization is that patient (or that long-lived), waiting through several ice ages to hear back from a space probe launched tens of thousands of years ago. Thus, faster space probes are called for, preferably with a mission duration of only a few decades to reach the target star.
Why travel to another star?
First, however, we should ask: why would one want to travel there? The short answer is that trying to explore another planet without going there is quite challenging.
Of course, as the detection of extrasolar planets shows, one can at least obtain some information about the planet’s orbit and its mass just by looking at the star. It is even possible to figure out a bit about the planet’s atmosphere, by doing spectroscopy on the small amount of sunlight that is reflected from the planet. In other words, one tries to see the colors of the planet’s atmosphere. This has been achieved recently for a planet orbiting in a star system 130 light years from earth (see http://www.eso.org/public/news/eso1002/). The spectrum that can be teased out from this method is quite rough, since it is extremely challenging to see the planet’s spectrum right next to the significantly brighter star. In any case, this is a promising way to learn more about planetary atmospheres. In the best possible scenario, changes in the atmosphere might then hint towards life processes taking place on the planet.
However, measuring the spectrum in this way only gives an overall view of the planet’s color. It does not reveal the shape of the planet’s surface (the clouds, oceans, continents etc.). In order to take a snapshot of a planet’s surface with a good resolution, one would need to build gigantic telescopes (or telescope arrays). We can illustrate this by taking alpha Centauri as an example, which is our closest neighboring star system at only about 4 light years distance. If we wanted to resolve, say, 1km on the surface of the planet, then we would need a telescope with a diameter the size of the earth! At least, this is the result of a rough estimate based on the standard formula for the resolving power of a telescope.
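That telescope-size claim follows from a rough diffraction-limit estimate; the sketch below assumes visible light of 550 nm and a target 4.4 light years away, so the exact numbers are illustrative only.

# Diffraction limit: to resolve a feature of size x at distance L,
# a telescope needs a diameter D of roughly 1.22 * wavelength * L / x.
wavelength = 550e-9        # visible light, in metres (assumed)
L = 4.4 * 9.46e15          # distance to alpha Centauri, in metres (about 4.4 light years)
x = 1e3                    # desired resolution on the planet's surface: 1 km

D = 1.22 * wavelength * L / x
print(f"required telescope diameter: {D/1e3:,.0f} km")
# roughly 28,000 km, i.e. a few Earth diameters -- the same order of magnitude as quoted above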
Therefore, it seems one would need to travel there even if only to take a look and send back some pictures to earth. More ambitious projects would then involve sending a robotic probe down to the surface of the planet (just like the Curiosity rover currently exploring Mars), or even send a team of astronauts. However, as we will see that interstellar travel is really hard, we will be content in our estimates to take the most modest approach, i.e. an unmanned space probe, with the goal to take some close-up pictures. Probably that would mean a spacecraft of about a ton (1000 kg), since that is the size of probes like the Mars Global Surveyor or Voyager 1. Possibly that mass could be reduced, but even if the imaging and processing system were only a few tens of kilograms, one still needs a power source and a radio dish for communicating back to earth.
The Voyager spacecraft was carried up by a rocket and then used, in addition, the gravitational pull of the large planets Jupiter and Saturn to reach its present speed. However, as we have seen above, 17 km per second is just not fast enough. It is a thousand times too slow.
If you want to reach alpha Centauri within 40 years, you need to travel at 10 percent of the speed of light, since alpha Centauri is 4 light years away. Which concepts are out there that provide acceleration to speeds of this kind?
Usual rocket fuel is not good enough. What could work conceivably are ideas based on nuclear propulsion. In Project Orion, a study from the 50s, it was suggested to use nuclear bombs being ignited at the rear end of a spacecraft. Each explosion would give a push to the craft, against a plate. In this way, a spacecraft of about 100,000 tons could reach alpha Centauri in about 100 years. Later project studies (like Project Daedalus) envisaged nuclear fusion of small pellets in a reaction chamber. Again, the design called for a spacecraft on the order of about 50,000 tons. Since the helium-3 required for the fusion pellets is very scarce on earth, it would have to be mined from Jupiter.
If you think these designs sound crazy, you are not alone. Launching such a massive spaceship filled to the brim with nuclear bombs from the surface of the earth is probably not going to happen. And constructing these gigantic ships in space, even though safer, would probably require resources beyond what seems feasible.
All of these nuclear-powered designs are really large spaceships, because they have to carry along a large amount of fuel. The scientific payload would be only a very small fraction of the total mass.
Carrying along the fuel is obviously a nuisance, since a lot of energy is used up for accelerating the fuel and not the payload. This can be avoided in schemes where the power is generated on the home planet and “beamed” to the spaceship. That is the concept behind light sails.
One of the less obvious properties of light is that it can exert forces, so-called radiation forces. These forces are very feeble. For example, direct sun-light hitting a human body generates a push that is equivalent to the weight of a few grains of sand. That is why you will notice the heat and the brightness, but not the force. Nevertheless, the force can be made larger by increasing the surface area or by increasing the light intensity. And that is the concept behind light sails: Unfold a large reflecting sail and wait for the radiation pressure force to accelerate the sail. Even though the accelerations are still modest, they are good enough if you can afford to be patient. A constant small acceleration acting continuously over hundreds of days can bring you to considerable speeds.
The first proposals for light sails in space seem to have originated from the space flight pioneers Konstantin Tsiolkovsky and Friedrich Zander during the 1920s. The radiation pressure force had been predicted theoretically in the 19th century by James Clerk Maxwell, starting from his equations of electromagnetism, although very early speculations in this direction date back even to Johannes Kepler around 1610. The force had been demonstrated experimentally around 1900 by Lebedev in Moscow and by Nichols and Hull at Dartmouth College in the U.S.
For voyages within the solar system, one could use the light emanating from the Sun. In that case, the craft would be termed a solar sail.
First demonstrations of solar sails
The first demonstrations of solar sails failed, but the failure was not related to the sails themselves. In 2005, the Cosmos 1 mission was launched by the Planetary Society, with additional funding from Cosmos Studios. It was launched onboard a converted intercontinental ballistic missile from a Russian submarine in the Barents Sea. Unfortunately, the rocket failed and the mission was lost. The same fate awaited NanoSail-D, which was launched by NASA in 2008 but again was lost due to rocket failure.
In 2010, the Japanese space agency JAXA demonstrated the first solar sail that also uses solar panels to power onboard systems. This successful project is named IKAROS. It demonstrated propulsion by the radiation pressure force, and after half a year it passed by Venus, taking some pictures. IKAROS is still sailing on. The square-shaped IKAROS sail measures 20m along the diagonal, and it is made up of a very thin plastic membrane, about 10 times thinner than a human hair. The sail is stabilized by spinning around. In this way, the centrifugal force pushes the sail outward from the center, so it does not crumple.
The overall radiation pressure force on IKAROS is still tiny: Only about a milliNewton, which (at a mass of 315 kg) translates into an acceleration that is more than a million times smaller than the gravitational acceleration “g” on Earth. Nevertheless, in 100 days, such a tiny acceleration would already propel the craft over a distance of about 100,000 km. It should be noted that the motion of IKAROS towards Venus was due to the initial velocity given to the craft, not the radiation pressure force (which, as this example demonstrates, would have been too small to go to Venus in half a year).
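Taking the rounded figures quoted here at face value (a force of about one milliNewton on a 315 kg craft), a few lines of Python reproduce both the acceleration comparison and the 100-day distance.

F = 1e-3                    # radiation pressure force on the sail, in newtons (rounded value from the text)
m = 315.0                   # IKAROS mass, in kg
g = 9.81                    # standard gravitational acceleration, m/s^2

a = F / m                   # acceleration of the sail
t = 100 * 86400.0           # 100 days, in seconds
d = 0.5 * a * t**2          # distance covered from rest at constant acceleration

print(f"acceleration: {a:.2e} m/s^2 (about {g / a / 1e6:.0f} million times smaller than g)")
print(f"distance after 100 days: {d/1e3:,.0f} km")   # roughly 100,000 km, as stated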
For interstellar travel, however, the sun-light quickly becomes too dim, as the sail recedes from the sun. In that case one needs to focus the light onto the sail, such that the radiation power received by the sail does not diminish as the sail moves away. This could be done either via gigantic mirrors focussing a beam of sun-light, or by a large array of lasers. Laser sails were analyzed in the 1980s by the physicist and science-fiction writer Robert L. Forward and subsequently by others.
What are the challenges faced by laser sails?
In brief: In order to get a sufficient acceleration, one wants to have a large beam power and a very thin, light-weight material. However, the beam will tend to heat the sail and thus the material should be able to withstand large temperatures. In addition, the space craft will fly through the dust and gas of interstellar space, which rips holes into the sail and again tends to heat up further the material.
In the following, we are going to go through the most important points, illustrating them with estimates.
Since the radiation pressure force is so feeble, a lot of light power is needed. Of course, all of this depends on the mass that has to be accelerated. Suppose for the moment a very modest mass, of only 100 kg.
In addition, the power needed will depend on the acceleration we aim for.
How large is the acceleration we would need for a successful decades-long trip to Alpha Centauri? It turns out that the standard gravitational acceleration on earth (1 g) would be good enough by far: If a spacecraft is accelerated at 1 g for about 35 days, it will have already reached 10% of the speed of light. Since the whole mission takes a few decades, we can easily be more modest, and require only, say, 10% of g. Then it would take about a year to reach 10% of the speed of light. That acceleration amounts to increasing the speed by 1 meter per second every second.
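A one-line, non-relativistic estimate (time = target speed divided by acceleration) confirms these accelerations and travel-time trade-offs:

c = 3e8                     # speed of light, m/s
g = 9.81                    # m/s^2
v_target = 0.1 * c          # the desired cruise speed: 10% of light speed

for frac in (1.0, 0.1, 0.01):            # acceleration as a fraction of g
    a = frac * g
    t_days = v_target / a / 86400
    print(f"at {frac:g} g it takes about {t_days:,.0f} days to reach 10% of light speed")
# roughly 35 days, one year, and one decade, respectively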
So here is the question: how much light power do you need to accelerate 100 kg at a rate of 1 meter per second every second?
The number turns out to be: 15 Giga Watt!
And that is assuming the optimal situation, where the light gets completely reflected, so as to provide the maximum force.
How large is 15 Giga Watt? This amounts to the total electric power consumption of a country like Sweden (see Wikipedia power consumption article).
Still, there is some leeway here: We can also do with an acceleration phase that lasts a decade, at one percent of g. Then the power is reduced down to a tenth, i.e. 1.5 Giga Watt. This is roughly the power provided by a nuclear power plant, or by direct sun light hitting an area of slightly more than a square kilometer (if all of that power could be used).
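The 15 Giga Watt figure follows directly from the radiation-pressure relation for a perfectly reflecting sail, force = 2 × power / c. A minimal sketch, using the 100 kg payload assumed above:

c = 3e8                     # speed of light, m/s
m = 100.0                   # payload mass, in kg (the modest assumption from the text)

for a in (1.0, 0.1):        # target acceleration in m/s^2 (about a tenth of g and a hundredth of g)
    F = m * a               # required force
    P = F * c / 2           # beam power for a perfectly reflecting sail (F = 2P/c)
    print(f"a = {a} m/s^2  ->  beam power {P/1e9:.1f} GW")
# prints 15.0 GW and 1.5 GW, matching the numbers above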
The area and the mass
How large would one want to make the sail? In principle, it could be quite small, if all that light power were focussed on a small area.
However, as we will see, the heating of the structure is a serious concern, and so it is better to dilute the light power over a larger area. As a reasonable approach, let’s assume that the light intensity (power per area) should be like that of direct sunlight hitting an area on earth. That is 1 kilo Watt per square meter. In that case, the 1.5 Giga Watt would have to be distributed over an area of somewhat more than a square kilometer. So the sail would be roughly a kilometer on each side.
The area is important, since it also determines the total mass of the sail. In order to figure out the mass, we also need to know the thickness and the density of the sail. The current solar sails mentioned above each have a thickness of a few micrometer (millionths of a meter), thinner than a human hair.
Even if we just assume 1 micrometer thickness, a square kilometer sail would already have a total mass of 1000 kg (at the density of water). This shows that our modest payload mass of 100 kg is no longer relevant. It is rather the sail mass itself that needs to be accelerated.
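Putting the assumptions of this section together (an intensity of 1 kW per square metre, a 1 micrometre thick sail at the density of water, and a 1.5 Giga Watt beam) gives the quoted area and mass:

P = 1.5e9                   # beam power, in watts
I = 1e3                     # chosen intensity: 1 kW per square metre, like sunlight on Earth
thickness = 1e-6            # sail thickness: 1 micrometre (assumed)
density = 1000.0            # kg per cubic metre (the density of water, as in the text)

area = P / I                            # sail area needed to dilute the beam to this intensity
side = area ** 0.5                      # side length of a square sail
sail_mass = area * thickness * density

print(f"sail area: {area/1e6:.1f} square km (about {side/1e3:.1f} km on each side)")
print(f"sail mass: {sail_mass:.0f} kg")   # about 1000 kg per square kilometre of sail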
Once the sail mass is larger than the payload mass, we rather have to ask what is the acceleration at a given light intensity (e.g. 1 kW per square meter, as assumed above). If the light intensity is fixed, the acceleration becomes independent of the total sail area: Doubling the area doubles the force but also the mass.
Typical proposals for laser-sail missions to reach 10 % of the speed of light assume sail areas of a few square kilometers, total masses on the order of a ton (sail and payload), total power in the range of Giga Watt, and accelerations on the order of a few percent of g.
Focussing the beam
The light beam, originating from our own solar system, has to be focussed onto the sail of a few km diameter, across a distance measuring light years. Basic laws of wave optics dictate that any light beam, even that produced by a laser, will spread as it propagates (diffraction). To keep this spread as small as possible, the beam has to be focussed by a large lens or produced by a large array of lasers. Estimates show that one would need a lens measuring thousands of kilometers to focus the beam over a distance of a light year! Thus, any such system would have to fly in space, which again makes power production more difficult.
The requirements on the size of the lens can be relaxed a bit by having the acceleration only operate during a smaller fraction of the trip. However, even if the beam is switched on only during the first 0.1 light years of travel (as opposed to the full 4 light years), a thousand kilometers are the order of magnitude required for the size of the lens.
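The lens estimate is again a diffraction limit: a beam launched from an aperture of diameter D spreads to a spot of roughly 2.44·λ·L/D at distance L, so keeping the spot as small as the sail fixes D. The wavelength and sail size below are assumptions for illustration.

wavelength = 1e-6           # assumed laser wavelength: 1 micrometre
d_sail = 1e3                # sail diameter: about 1 km
ly = 9.46e15                # one light year, in metres

for L_ly in (0.1, 1.0):     # distance over which the beam must stay focussed, in light years
    L = L_ly * ly
    D_lens = 2.44 * wavelength * L / d_sail
    print(f"focussed over {L_ly} light years -> lens diameter of roughly {D_lens/1e3:,.0f} km")
# thousands of kilometres for 0.1 light years, tens of thousands for a full light year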
Suppose that power generation were no problem. Suppose you could have cheap access to a power source equivalent to the power consumption of a country like Germany (60 GW) or the US (400 GW). What would be the limit to the acceleration you can achieve? It turns out that powers of this magnitude would not even be needed, since at some point it is not the total power that provides the limiting factor.
The problem is that once you fix the material density and the thickness, you can increase the acceleration only by increasing the intensity, i.e. the light power impinging on a square meter. However, at least a small fraction of that power will be absorbed, and it will heat up the sail. The problem is known to anyone who has left their car in direct sunlight, which makes the metal surface very hot. In space, an equilibrium would be established between the power being absorbed and the power being re-radiated from the sail as thermal radiation. Typical materials considered for light sails, like aluminum, have melting points on the order of several hundred to a thousand degrees centigrade.
Ideally, the material would reflect most of the light it receives from the beam, absorbing very little. The little heat flux it receives should be re-radiated very efficiently at other wavelengths. Tailoring the optical properties of a sail in this way is possible in principle. However, usually it also means the thickness of the sail has to be increased, e.g. to incorporate different layers of material with a thickness matching the wavelength of light (leading to sails of some micrometers thickness).
In some of the current proposals of laser sails for interstellar travel, it is this heating effect that limits the admissible light intensity and therefore the acceleration.
Several different materials are being considered, among them dielectrics (rather thick but very good reflectivity, little absorption) and metals like aluminum. In addition, one may replace optical light beams by microwave beams, which can be generated more efficiently and be reflected by a thin mesh. The downside of using microwaves is that their wavelength is ten thousand times larger, so the size of the lens is correspondingly larger as well.
As the sail flies through space, it will encounter gas atoms and dust particles. Admittedly, matter in interstellar space is spread out very thin (that is why there is almost no friction to begin with!). Nevertheless, the Interstellar Medium is not completely devoid of matter. Somewhat fortunately for light sails, our sun (and its nearest stars) is sitting inside a low-density region, the so-called "Local Bubble". In the few light years around our sun (in the Local Interstellar Cloud), the density of hydrogen atoms is about 1 atom per ten cubic centimeters, vastly smaller than the density of air. That is more than a thousand times fewer atoms per cubic centimeter than in the best man-made vacuum. In addition, there are grains of dust, with sizes on the order of a micrometer.
It is quite simple to calculate how many atoms will hit the surface of the sail: Just take a single atom of the sail’s surface. As the sail flies through space, this atom will encounter a few of the hydrogen atoms of the interstellar medium. How many? That depends on the length of the trip (a few light years) and the density of hydrogen atoms. All told, for a typical atomic radius of 1 Angstrom (0.1 nanometer), our surface atom will encounter about 100 hydrogen atoms. That means roughly: If all of the atoms were to stick to the surface, they would pile up 100 layers thick, which would be on the order of 10 nanometers.
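The "about 100 hydrogen atoms" figure is just the column density of the interstellar gas times the cross-section of a single surface atom; here is a sketch with the same assumptions (0.1 atoms per cubic centimetre, a 4 light year trip, a 1 Angstrom atomic radius):

import math

n = 0.1 * 1e6               # hydrogen number density: 0.1 per cm^3, converted to atoms per m^3
L = 4 * 9.46e15             # trip length: about 4 light years, in metres
r_atom = 1e-10              # atomic radius of roughly 1 Angstrom, in metres

column = n * L                                   # atoms swept up per square metre of sail
hits_per_atom = column * math.pi * r_atom**2     # atoms hitting the area of one surface atom

print(f"column density: {column:.1e} atoms per square metre")
print(f"hits per surface atom: {hits_per_atom:.0f}")   # on the order of 100, as stated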
That in itself does not sound dramatic. The crucial significance of the problem is realized only when one takes into account the speed at which the hydrogen atoms and other particles are bombarding the sail: That is 10 percent of the speed of light, since the sail is zipping through space at that speed! Being hit by a shower of projectiles traveling at 10 percent the speed of light does not bode well for the integrity of the sail.
It turns out that the speed itself may actually help. This is because an atom zipping by at 10% of the speed of light has only very little time to interact with the atoms in the sail. For two atoms colliding at this speed, it is better not to view an atom as a solid, albeit fuzzy, object of about 1 Angstrom radius. Rather, each atom consists of a point-like nucleus and a few point-like electrons. When two such atoms zip through each other, it is very unlikely that any of those particles (electrons and nuclei) come very close to each other. They will exert Coulomb forces (on the order of a nanoNewton), but since they are flying by so fast, the forces do not have a lot of time to transfer energy. In this regard, faster atoms really do less damage. Nevertheless, there is some energy transfer, and the biggest part of it is due to incoming interstellar atoms kicking the electrons inside the sail. In a quick-and-dirty estimate (based on some data for proton bombardment of Silicon targets), the typical numbers here are tens or hundreds of keV of energy transferred to a sail of 1 micrometer thickness during one passage of an interstellar atom. This produces heating (and ionization, and some X ray radiation).
Powerful computer simulations are nowadays being used to study such processes in detail (see a 2012 Lawrence Livermore Lab study on energetic protons traveling through aluminum).
You can find a general (non-technical) discussion of this crucial problem for laser sails on the “Centauri Dreams Blog“. The overall conclusion there seems to be optimistic, but the story also does not seem to be completely settled.
The problem could be reduced somewhat by having the acceleration going on only for a shorter time, after which the sail is no longer needed. However, then the acceleration needs to be higher, with a correspondingly larger intensity and heating issues. In any case, the scientific payload needs to be protected all the way, even if the sail could be jettisoned early.
Once the space probe has reached the target system, things have to go very fast. At 10 % of the speed of light, the probe would cover the distance between the Earth and the Sun in a mere 80 minutes, and the distance between the Earth and the Moon in only about 13 seconds (!). That means, there is precious little time to take the pictures and do all the measurements for which one has been waiting several decades. Presumably the probe would first snap pictures and then later take its time to radio back the results to Earth, where they would arrive 4 years later.
While the probe is flying through the target star system, it is also in much greater danger of running into dust grains and gas atoms, and the scientific instruments need to be protected against that. If the probe were to hit the planet, that could be catastrophic, since even a 1000 kg probe traveling at 10% of the speed of light would set free an energy of about ten hydrogen bombs.
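Both the flyby timescales and the impact energy quoted here are easy to check; at 10% of light speed the ordinary (non-relativistic) kinetic energy formula is still a reasonable approximation.

c = 2.998e8                 # speed of light, m/s
v = 0.1 * c                 # flyby speed
AU = 1.496e11               # Earth-Sun distance, in metres
d_moon = 3.844e8            # Earth-Moon distance, in metres
m_probe = 1000.0            # probe mass, in kg

print(f"Earth-Sun distance crossed in {AU / v / 60:.0f} minutes")
print(f"Earth-Moon distance crossed in {d_moon / v:.0f} seconds")

E = 0.5 * m_probe * v**2                 # kinetic energy of the probe
megaton_TNT = 4.184e15                   # joules per megaton of TNT
print(f"impact energy: {E:.1e} J, about {E / megaton_TNT:.0f} megatons of TNT")
# comparable to a handful of large hydrogen bombs, as stated above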
Robert Forward has proposed an ingenious way to actually slow down the probe: it involves two sails, one of which is jettisoned and afterwards serves as a freely floating reflector, sending the light beam back onto the side of the remaining sail that faces away from the Earth. This approach, however, requires even larger sails and resources.
Interstellar travel is really, really hard if you are not very patient. However, with laser sails it is no longer a purely outlandish idea. In addition, concrete steps towards testing the concepts are being taken right now, with modest solar sails deployed and planned by the Japanese space agency JAXA, by NASA, and by the Planetary Society.
- A presentation on the Planetary Society’s LightSail-1 design.
- The Centauri Dreams Blog, commenting in a serious way on original research papers relevant for interstellar travel.
- The Icarus Interstellar website, a foundation dedicated to achieving interstellar travel within the next 100 years. See their “Project Forward“, which analyzes the potential of laser sails.
- The 100 Year Starship Initiative, seed-funded by DARPA, to make human interstellar travel possible.
- A July 2012 opinion piece “Alone in the Void” in the New York Times, by the astrophysicist Adam Frank, claiming that human interstellar travel will most likely not happen for thousands of years, though without any serious discussion of any interstellar travel concept. And a reply by Paul Gilster on Centauri Dreams, with many comments criticizing the somewhat superficial opinion piece.
- The report of a 1999 study performed by Geoffrey Landis, with many of the earlier references.
- Forward, R. L. (1984): “Roundtrip Interstellar Travel Using Laser-Pushed Lightsails”, Journal of Spacecraft and Rockets, Vol. 21, p. 187-195. The original pioneering paper. | http://coherence.wordpress.com/ | 13 |
54 | |Module 2: Describing, Clarifying and Presenting Data
4. Summarising data
4.1. Measuring the centre of a distribution
There are several measures of centre but here we consider three: the median, the mean and the mode.
What is the median?
The median is the middle value of a ranked data set - so that half of the data falls above it and half below it.
This is easy if there is an odd number of data, but when there is an even number of data you need to find the two central values, add them together and then divide them by two to obtain the median. Once again, let’s look at the student grades from the beginning of this module.
52, 64, 16, 48, 35, 52, 85, 96, 90, 87, 77, 78, 37, 68, 62, 60, 51, 55, 57, 64, 54, 51, 62, 43, 68, 71, 76, 68, 65, 83, 47, 44, 76
To restate the process: in this example, the total number of marks is 33. The middle value of the ranked set is the 17th mark and so the median mark is 62. Note that there are 16 data values below 62 and 16 data values above 62 (16 + 1 + 16 = 33)
Consider a smaller data set of 8 values:
Rank the data (smallest to largest):
When there are an even number of data, the data set splits evenly and the median is not a member of the data set.
In this case, the median will be at position 4.5 – halfway between the data value in the 4th position ($30) and the data value in the 5th position ($31). Therefore, the value (size) of the median is ($30 + $31) ÷ 2 = $30.50.
Note that there are 4 values lower than $30.50 and 4 values higher than $30.50
1. Identify the size of the data set (n).
2. Rank the values of the data set (usually lowest to highest).
3. Locate the position of the median – it is found at position (n + 1) ÷ 2.
4. Lastly, identify the size (value) of the data value at that position and quote it as the median.
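These four steps translate almost directly into code. Below is a minimal Python sketch; the eight-value wage list is hypothetical apart from its 4th and 5th entries ($30 and $31), which are the only values quoted above.

marks = [52, 64, 16, 48, 35, 52, 85, 96, 90, 87, 77, 78, 37, 68, 62,
         60, 51, 55, 57, 64, 54, 51, 62, 43, 68, 71, 76, 68, 65, 83,
         47, 44, 76]

def median(data):
    n = len(data)                      # step 1: size of the data set
    ranked = sorted(data)              # step 2: rank lowest to highest
    position = (n + 1) / 2             # step 3: position of the median
    if position == int(position):      # odd n: the middle value itself
        return ranked[int(position) - 1]
    lower = ranked[int(position) - 1]  # even n: average the two central values
    upper = ranked[int(position)]
    return (lower + upper) / 2

print(median(marks))                              # 62, as in the worked example
print(median([28, 29, 30, 30, 31, 33, 34, 35]))   # 30.5 (hypothetical wage data)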
Return to the student marks data set.
The structure drawn here is a table, but it can also be drawn in graphical form. In this type of frequency histogram (frequency is really just another name for counts), data have been collected into cells. This allows you to get an idea of the shape of the distribution of the data. A stem-and-leaf plot is a shorthand way of doing the same thing without sacrificing information. With a stem-and-leaf plot you must always include a key stating the size of the stems and leaves. In this example, the stems are tens, as shown in the key, and the leaves are units (values of one). This means the size of the ‘5’ in the stem is actually ‘50’, and that stem includes all marks from 50 to 59 inclusive.
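The plot itself is not reproduced here, so the short Python sketch below rebuilds a stem-and-leaf plot from the listed student marks, following the layout described above (stems are tens, leaves are units, and a key states their size).

from collections import defaultdict

marks = [52, 64, 16, 48, 35, 52, 85, 96, 90, 87, 77, 78, 37, 68, 62,
         60, 51, 55, 57, 64, 54, 51, 62, 43, 68, 71, 76, 68, 65, 83,
         47, 44, 76]

stems = defaultdict(list)
for m in sorted(marks):
    stems[m // 10].append(m % 10)      # stem = tens digit, leaf = units digit

print(f"Key: 5 | 2 means 52 (n = {len(marks)} marks)")
for stem in range(min(stems), max(stems) + 1):
    leaves = " ".join(str(leaf) for leaf in stems.get(stem, []))
    print(f"{stem} | {leaves}")
# The '6' stem ends up with nine leaves, which answers the quiz question below.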
Creating a Histogram from a Stem-and-Leaf Plot
|Test your knowledge|
How many marks are in the 60s?
- 68 marks
- 8 marks
- 9 marks
- 2 marks
Did you answer '9 marks'? If you did, then you are demonstrating the ability to read a stem-and-leaf plot. Stem-and-leaf plots also provide a picture of the spread and shape of the distribution of a particular variable within a sample. More about that later.
If you were to convert each leaf beside a stem (also called a class) into a rectangle, it would look something like this:
If you then rotated the histogram and removed the horizontal lines separating the rectangles in each class, you would end up with a classical graphical display of a histogram.
The height of the rectangles above each class is proportional to the number of data values that fall into that class.
ii. Finding the mean
The mean can be described as the arithmetic average. Statisticians use symbols and equations to show how the mean can be calculated: the mean is the sum of all the data values divided by the number of data values, often written as mean = Σx ÷ n, where Σx is the sum of the values and n is how many there are.
Don’t be put off by this equation. Remember, to calculate the mean is to calculate the mathematical average. Therefore, essentially you are adding together all the measurements and then dividing that total by the number of measurements. For this set of student marks the total number of measurements is 33. The sum of these 33 measurement values is 2042. The mean is calculated by dividing 2042 by 33 and is 61.9. Rounding gives a mean of approximately 62. It is often useful to round statistics, especially summary statistics such as the mean, for presentation purposes.
If your mark for the subject was 76, are you above or below the mean for the class?
You are above the mean of 62.
NOTE: In this case the mean of 61.9 is slightly smaller than the median of 62. This is because the mean is affected by the numerical value of every measurement, so a very low score like 16 pulls the mean down. Likewise, a very large data value will drag the mean upwards. The median is affected only by the relative position of measurements, and so 16 has the same effect on the median as any other number below 62. The median is not affected by the size of extreme data values; it is affected only by the number of data values in the data set.
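You can check this behaviour with Python's standard statistics module. Replacing the lowest mark by 0 below is purely an illustration: the mean drops further, while the median does not move at all.

import statistics

marks = [52, 64, 16, 48, 35, 52, 85, 96, 90, 87, 77, 78, 37, 68, 62,
         60, 51, 55, 57, 64, 54, 51, 62, 43, 68, 71, 76, 68, 65, 83,
         47, 44, 76]

print(round(statistics.mean(marks), 1))    # 61.9, just below the median
print(statistics.median(marks))            # 62

extreme = [0 if m == 16 else m for m in marks]   # make the lowest mark more extreme
print(round(statistics.mean(extreme), 1))        # 61.4: the mean is dragged down
print(statistics.median(extreme))                # still 62: the median is unaffected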
What is the Mode?
The mode is the most common value in a data list. It is the value with the highest frequency. In the example of student marks, the mode is 68 because it occurs three times (i.e. three students obtained 68). The mode can be useful with categorical or discrete variables. For example, if you managed a shoe shop you might find the mode a useful concept because it could tell you which men's and women's shoe sizes are the most common among your customers. | http://www.abs.gov.au/websitedbs/a3121120.nsf/4a256353001af3ed4b2562bb00121564/42abbf01e3e04fccca257612000957f1!OpenDocument&ExpandSection=2 | 13 |
73 | |The Simple English Wiktionary has a definition for: integral.|
In calculus, an integral is the space under the graph of an equation (often described as "the area under a curve"). An integral is the reverse of a derivative. A derivative is the steepness (or "slope") of a curve, that is, its rate of change. The word "integral" can also be used as an adjective meaning "related to integers".
The symbol for integration, in calculus, is ∫, drawn as a tall letter "S". This symbol was first used by Gottfried Wilhelm Leibniz, who used it as a stylized "ſ" (for summa, Latin for sum) to mean the summation of the area covered by an equation, such as y = f(x).
Integrals and derivatives are part of a branch of mathematics called calculus. The link between these two is very important, and is called the Fundamental Theorem of Calculus. The theorem says that an integral can be reversed by a derivative, similar to how an addition can be reversed by a subtraction.
Integration helps when trying to multiply units into a problem. For example, if a problem with a rate, such as speed (distance per unit of time), needs an answer with just distance, one solution is to integrate with respect to time. This means multiplying in time to cancel the time in the rate. This is done by adding small slices of the rate graph together. The slices are close to zero in width, but adding them forever makes them add up to a whole. This is called a Riemann Sum.
Adding these slices together gives back the equation that the first equation is the derivative of. Integrals are a way to add many tiny things together by hand. It is like summation, which is adding 1 + 2 + 3 + 4 + ... + n. The difference with integration is that we also have to add all the decimals and fractions in between.
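A small Python sketch makes the idea of adding thin slices concrete. The function 2x is just an illustrative choice; the exact area under it between 0 and 1 is 1, and the sums below close in on that value as the slices get thinner.

def riemann_sum(f, a, b, n):
    """Approximate the area under f between a and b using n thin slices."""
    width = (b - a) / n
    return sum(f(a + i * width) * width for i in range(n))

for n in (10, 100, 1000, 100000):
    print(n, riemann_sum(lambda x: 2 * x, 0, 1, n))
# 0.9, 0.99, 0.999, 0.99999 ... the sums approach the exact area, which is 1.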
Another time integration is helpful is when finding the volume of a solid. It can add two-dimensional (without width) slices of the solid together forever until there is a width. This means the object now has three dimensions: the original two and a width. This gives the volume of the three-dimensional object described.
Methods of Integration
If we take the function 2x, for example, and anti-differentiate it, we can say that an integral of 2x is x². We say an integral, not the integral, because the antiderivative of a function is not unique. For example, x² + 17 also differentiates to 2x. Because of this, when taking the antiderivative a constant C must be added. This is called an indefinite integral. This is because when finding the derivative of a function, constants equal 0, as in the function
- f(x) = x² + 17, whose derivative is 2x + 0. Note the 0: we cannot find it if we only have the derivative, so the integral is written as ∫ 2x dx = x² + C.
Simple Equations
A simple equation such as y = x² can be integrated with respect to x using the following technique. To integrate, you add 1 to the power x is raised to, and then divide by the value of this new power. Therefore, integration of a normal equation follows this rule: ∫ xⁿ dx = xⁿ⁺¹/(n + 1) + C
The dx at the end is what shows that we are integrating with respect to x, that is, as x changes. This can be seen to be the inverse of differentiation. However, there is a constant, C, added when you integrate. This is called the constant of integration. This is required because differentiating a constant results in zero, therefore integrating zero (which can be put onto the end of any integrand) produces a constant, C. The value of this constant would be found by using given conditions.
Equations with more than one term are simply integrated by integrating each individual term: for example, ∫ (x² + 5) dx = x³/3 + 5x + C.
Integration involving e and ln
There are certain rules for integrating using e and the natural logarithm. Most importantly, eˣ is the integral of itself (with the addition of a constant of integration): ∫ eˣ dx = eˣ + C
The natural logarithm, ln, is useful when integrating equations with 1/x. These cannot be integrated using the formula above (add one to the power, divide by the power), because adding one to the power -1 produces 0, and a division by 0 is not possible. Instead, the integral of 1/x is ln x:
In a more general form: ∫ (1/x) dx = ln|x| + C
The two vertical bars indicate an absolute value; the sign (positive or negative) of x is ignored. This is because there is no value for the natural logarithm of negative numbers.
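If the sympy library is available, these rules are easy to check symbolically. The sketch below confirms the power rule, the eˣ rule and the 1/x rule; note that sympy leaves out the constant of integration, so you must remember to add C yourself.

import sympy as sp

x = sp.symbols('x')

print(sp.integrate(x**2, x))        # x**3/3  (add 1 to the power, divide by it)
print(sp.integrate(sp.exp(x), x))   # exp(x)  (e^x is the integral of itself)
print(sp.integrate(1/x, x))         # log(x)  (sympy's log is the natural logarithm)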
Sum of functions
The integral of a sum of functions is the sum of each function's integral. That is, ∫ (f(x) + g(x)) dx = ∫ f(x) dx + ∫ g(x) dx.
Proof is straightforward: The definition of an integral is a limit of sums. Thus ∫ (f(x) + g(x)) dx = lim Σ (f(xᵢ) + g(xᵢ)) Δx = lim Σ f(xᵢ) Δx + lim Σ g(xᵢ) Δx = ∫ f(x) dx + ∫ g(x) dx.
Note that both integrals have the same limits.
Constants in integration
When a constant is in an integral with a function, the constant can be taken out. Further, when a constant c is not accompanied by a function, its value is c * x. That is, ∫ c f(x) dx = c ∫ f(x) dx and ∫ c dx = cx + C.
This can only be done with a constant.
Proof is again by the definition of an integral.
If a, b and c are in order (i.e. after each other on the x-axis), the integral of f(x) from point a to point b plus the integral of f(x) from point b to c equals the integral from point a to c. That is,
- ∫(a to b) f(x) dx + ∫(b to c) f(x) dx = ∫(a to c) f(x) dx if they are in order. (This also holds when a, b and c are not in order if we define ∫(b to a) f(x) dx = −∫(a to b) f(x) dx.)
- ∫(a to a) f(x) dx = 0. This follows the fundamental theorem of calculus (FTC): F(a) - F(a) = 0
- ∫(b to a) f(x) dx = −∫(a to b) f(x) dx. Again, following the FTC: F(a) - F(b) = −(F(b) - F(a)) | http://simple.wikipedia.org/wiki/Integral | 13 |
196 | Make a chain of a function and its inverse: f^-1(f(x)) = x starts with x and ends with x.
Take the slope using the Chain Rule. On the right side the slope of x is 1.
Chain Rule: dx/dy dy/dx = 1 Here this says that df^-1/dy times df/dx equals 1.
So the derivative of f^-1(y) is 1/ (df/dx) BUT you have to write df/dx in terms of y.
The derivative of ln y is 1/(derivative of f = e^x) = 1/e^x. This is 1/y, a neat slope!
Changing letters is OK: The derivative of ln x is 1/x. Watch this video for GRAPHS
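As a quick symbolic check of this summary (a sketch using the sympy library, which is not part of the lecture itself): the inverse-function rule gives the same 1/y slope as differentiating ln y directly, and the same idea reproduces the arc sine derivative worked out later in the lecture.

import sympy as sp

x, y = sp.symbols('x y', positive=True)

dfdx = sp.diff(sp.exp(x), x)                    # slope of f(x) = e^x
inverse_slope = (1 / dfdx).subs(x, sp.log(y))   # 1/(df/dx), rewritten in terms of y
print(sp.simplify(inverse_slope))               # 1/y
print(sp.diff(sp.log(y), y))                    # 1/y, so the direct derivative agrees

print(sp.diff(sp.asin(y), y))                   # 1/sqrt(1 - y**2), the arc sine slope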
Professor Strang's Calculus textbook (1st edition, 1991) is freely available here.
Subtitles are provided through the generous assistance of Jimmy Ren.
Lecture summary and Practice problems (PDF)
PROFESSOR: OK, earlier lecture introduced the logarithm as the inverse function to the exponential. And now it's time to do calculus, find its derivative. And we spoke about other inverse functions, and here is an important one: the inverse sine function, or sometimes called the arc sine. We'll find its derivative, too.
OK, so what we're doing really is sort of completing the list of important rules for derivatives. We know about the derivative of a sum. Just add the derivatives. f minus g, just subtract the derivatives. We know the product rule and the quotient rule, so that's add, subtract, multiply, divide functions.
And then very, very important is the chain of functions, the chain rule. This is never to be mixed up with that. You wouldn't do such a thing. And now we're adding one more f inverse. That's today, the derivative of the inverse. That will really complete the rules. Out of simple functions like exponential, sine and cosine, powers of x, this creates all the rest of the functions that we typically use.
OK, so let's start with the most important of all: the f of x is e to the x. And then the inverse function, we named the natural logarithm. And notice, remember how I reversed the letters. Here, x is the input and y is the output from the exponential function. So for the inverse function, y is the input. I go backwards to x, and the thing to remember, the one thing to remember, just tell yourself x is the exponent. The logarithm is the exponent.
OK, so a chain of functions is coming here, and it's a perfectly terrific chain. This is the rule for inverse functions. If I start with an x and I do f of x, that gets me to y. And now I do the inverse function, and it brings me back to x. So I really have a chain of functions. The chain has a very special result. And our situation is if we know how to take the derivative of f, this ought to tell us-- the chain rule-- how to take the derivative of the inverse function, f inverse. Let me try it with this all-important example. So which-- and notice also, chain goes the other way. If I start with y, do the inverse function, then I've reached x. Then f of x is y.
Maybe before I take e to the x, let me take the function that always is the starting point. So practice is-- I'll call this example of-- just to remember how inverse functions work-- linear functions. y equals ax plus b. That's f of x there. Linear. What's the inverse of that? Now the point about inverses is I want to solve for x in terms of y. I want to get x by itself. So I move b to the opposite side. So in the end, I want to get x equals something.
And how do I do that? I move b to the opposite side, and then I still have ax, so I divide by a. Then I've got x by itself. This is the inverse. This is f inverse of y.
Notice something about inverse functions, that here we did-- this function f of x was created in two steps. it was sort of a chain in itself. The first step was multiply by a. We multiply and then we add. What's inverse function do? The inverse function takes y. Subtracts b. So it does subtract first to get y minus b, and then it divides.
What's my point? My point is that if a function is built in two steps, multiply and then add in this nice case, the inverse function does the inverse steps. Instead of multiplying it divides. Instead of add, it subtracts. What it does though is the opposite order. Notice the multiply was done first, then the divide is last. The add was done second, the subtract was done first.
When you invert things, the order-- well, you know it. It has to be that way. It's that way in life, right? If we're standing on the beach and we walk to the water and then we swim to the dock, so that's our function f of x from where we were to the dock, then how do we get back? Well, wise to swim first, right? You don't want to walk first. From the dock, you swim back. So you swam one way at the end, and then in the inverse, you swim-- oh, you get it.
OK, so now I'm ready for the real thing. I'll take one of those chains and take its derivative. Let me take the first one. So the first one says that the log of e to the x is x. That's-- The logarithm was defined that way. That's what the log is.
Let's take the derivative of that equation, the derivative of both sides. We will be learning what's the derivative of the log. That's what we don't know. So if I take the derivative, well, on the right-hand side, I certainly get 1. On the left-hand side, this is where it's interesting. So it's the chain rule. The log of something, so I should take the derivative-- oh, I need a little more space here, over here.
The derivative of log y, y is e to the x. Everybody's got that, right? So this is log y, and I'm taking its derivative with respect to y. But then I have to, as the chain rule tells me, take the derivative of what's inside. What's inside is e to the x, so I have dy dx. So I put dy dx. And the derivative on the right-hand side, the neat point here is that the x-derivative of that is 1.
OK, now I'm going to learn what this is because I know what this is: the derivative of dy dx, the derivative of the exponential. Well, now comes the most important property that we use to construct this exponential. dy dx is e to the x. No problem, OK? And now, I'm going to divide by it to get what I want-- almost. Almost, I say. Well, I've got the derivative of log y here. Correct, but there's a step to take still.
I have to write-- I want a function of y. The log is a function of y. It's derivative is-- the answer is a function of y. So I have to go back from x to y. But that's simple. e to the x is y. Oh! Look at this fantastic answer. The derivative of the log, the thing we wanted, is 1 over y.
Why do I say fantastic? Because out of the blue almost, we've discovered the function log y, which has that derivative 1 over y. And the point is this is the minus 1 power. It's the only power that we didn't produce earlier as a derivative. I have to make that point.
You remember the very first derivatives we knew were the derivatives of x to the n-th, powers of x. Everybody knows that that's n times x to the n minus 1. The derivative of every power is one power below. With one exception. With one exception. If n is 0, so I have to put except n equals 0.
Well, it's true when n is 0, so I don't mean the formula doesn't fail. What fails is when n is 0, this right-hand side is 0, and I don't get the minus 1 power. No power of x produces the minus 1 power when I take the derivative. So that was like an open hole in the list of derivatives. Nobody was giving the derivative to be the minus 1 power when we were looking at the powers of x.
Well, here it showed up. Now, you'll say the letter y's there. OK, that's the 25th letter of the alphabet. I'm perfectly happy if you prefer the 26th letter. You can write d log z dz equals 1/z if you want to. You can write, as you might like to, d by dx. Use the 24th letter of log x is 1/x. I'm perfectly OK for you to do that, to write the x there, now after we got the formula.
Up to this point, I really had to keep x and y straight because I was beginning from y is e to the x. That was my starting point. OK, so that keeping them straight got me the derivative of log y as 1/y. End. Now, I'm totally happy if you use any other letter. Use t if you have things growing.
And remember about the logarithm now. We can see why it grows so slowly. Because its slope is 1/y. Or let's look at this one, because we're used to thinking of graphs with x along the axis. And this is telling us that the slope of the log curve-- the log curve is increasing, but the slope is decreasing, getting smaller and smaller. As x gets very small, it's just barely increasing. It does keep going on up to infinity, but very, very slowly. And why is that? That's because the exponential is going very, very quickly. And you remember that the one graph is just the flip of the other graph, so if one is climbing like mad, the other one is growing slowly.
OK, that's the main facts, the most important formula of today's lecture. I could-- do you feel like practice to take the chain in the opposite direction just to see what would happen? So what's the opposite direction?
I guess the opposite direction is to start with-- which did I start with? I started with log of e to the x is x. The opposite direction would be to start with e to the log y is y, right? That's the same chain. That's the f inverse coming before the f. What do I do? Take derivatives. Take the derivative of everything, OK?
So take the derivative, the y-derivative. I get the nice thing. I mean, that's the fun part, taking the derivative on the right-hand side. On the left side, a little more work, but I know how to take the derivative of e to the something. It's the chain rule. Of course it's the chain rule. We got a chain here.
So the derivative of e to the something, now you remember with the chain rule, is e to that same something times the derivative of what's inside. The derivative and what's inside is this guy: the derivative of log y dy. This is what we want to know, the one we know, and what is e to the log y? It's sitting up there on the line before. e to the log y is y. So this parenthesis is just containing y. Bring it down. Set it under there, and you have it again. The derivative of log y dy is 1 over e to the log y, which is y.
OK, we sort of have done more about inverse functions than typical lectures might, but I did it really because they're kind of not so simple. And yet, they're crucially important in this situation of connecting exponential with log. And by the way, I prefer to start with exponential. The logic goes also just fine. In fact, some steps are a little smoother if you start with a logarithm function, define that somehow, and then take its inverse, which will be the exponential. But for me, the exponential is so all important. The logarithm is important, but it's not in the league of e to the x. So I prefer to do it this way to know e to the x.
Now if you bear with me, I'll do the other derivative for today. The other derivative is this one. Can we do that? OK, so I want the derivative of this arc sine function, all right?
So I'm going to-- let me bring that. This side of the board is now going to be x is the inverse sine of y, or it's often called the arc sine of y. OK, good. All right. So again, I have a chain. I start with x. I create y. So y is sine x. So y is the sine of x, but x is the arc sine of y. That's the chain. Start with a y. Do f inverse. Do f, and you got y again, all right?
Now, I'm interested in the derivative, the derivative of this arc sine of y. I want the y-derivative. I'm just going to copy this plan, but instead of e, I've got sines here. So take the y-derivative of both sides, the y-derivative of both sides. Well, I always like that one. The y-derivative of this is the chain rule. So I have the sine of some inside function. So the derivative is the cosine of that inside function times the derivative of the inside function, which is the guy we want. OK, so I have to figure out that thing. In other words, I guess I've got to think a little bit about these inverse trig functions.
OK, so what's the story with the inverse trig functions? The point will be this is an angle. Ha! That's an angle. Let me draw the triangle. Here is my angle theta. Here is my sine theta. Here is my cos theta, and everybody knows that now the hypotenuse is 1. So here is theta. OK, whoa! Wait a minute. I would love theta to be the angle whose-- oh, maybe it is. This is the angle whose sine-- theta should be the angle whose sine is y, right? OK, theta is the angle whose sine is y. OK, let me make that happen.
And now, tell me the other side because I got to get a cosine in here somewhere. What is this side? Back to Pythagoras, the most important fact about a right triangle. This side will be the square root of-- this squared plus this squared is 1, so this is the square root of 1 minus y squared. And that's the cosine. The cosine of this angle theta is this guy divided by 1. We're there, and all I've used pretty quickly was I popped up a triangle there. I named an angle theta. I took its sine to be y, and I figured out what its cosine had to be. OK, so there's the theta. Its cosine has to be this, and now I'm ready to write out the answer. I'm ready to write down the answer there.
That has a 1 equals-- the cosine of theta, that's this times the derivative of the inverse sine. You see, I had to get this expression into something-- I had to solve it for y. I had to figure out what that quantity is as a function of y. And now I just put this down below. So if I cross this out and put it down here, I've got the answer. There is the derivative of the arc sine function: 1 over the square root of 1 minus y squared.
OK, it's not as beautiful as 1/y, but it shows up in a lot of problems. As we said earlier, sines and cosines are involved with repeated motion, going around a circle, going up and down, going across and back, in and out. And it will turn out that this quantity, which is really coming from the Pythagoras, is going to turn up, and we'll need to know that it's the derivative of the arc sine.
And may I just write down what's the derivative of the arc cosine as long as we're at it? And then I'm done. The derivative of the arc cosine, well, you remember what-- what's the difference between sines and cosines when we take derivatives? The cosine has a minus. So there'll be a minus 1 over the square root of 1 minus y squared.
That's sort of unexpected. This function has this derivative. This function has the same derivative but with a minus sign. That suggests that somehow if I add those, yeah, let's just think about that for the last minute here. That says that if I add sine inverse y to cosine inverse y, their derivatives will cancel. So the derivative of that sum of this one-- can I do a giant plus sign there?-- is 0. The derivative of that plus the derivative of that is a plus thing and a minus thing, giving 0.
So how could that be? Have you ever thought about what functions have derivative 0? Well, actually, you have. You know what functions have no slope. Constant functions. So I'm saying that it must happen that the arc sine function plus the arc cosine function is a constant. Then its derivative is 0, and we are happy with our formulas.
And actually, that's true. The arc sine function gives me this angle. The arc cosine function would give me-- shall I give that angle another name like alpha? This one would be the theta. That one would be the alpha. And do you believe that in that triangle theta plus alpha is a constant and therefore has derivative 0? In fact, yes, you know what it is. Theta plus alpha in a right triangle, if I add that angle and that angle, I get 90 degrees. A constant. Well, 90 degrees, but I shouldn't allow myself to write that. I must write it in radians. A constant.
OK, don't forget the great result from today. We filled in the one power that was missing, and we're ready to go. Thank you.
NARRATOR: This has been a production of MIT OpenCourseWare and Gilbert Strang. Funding for this video was provided by the Lord Foundation. To help OCW continue to provide free and open access to MIT courses, please make a donation at ocw.mit.edu/donate. | http://ocw.mit.edu/resources/res-18-005-highlights-of-calculus-spring-2010/derivatives/derivatives-of-ln-y-and-sin-1-y/ | 13 |
80 | Let's graph the following function: y = √(2 − x)
First we have to consider the domain of the function. We must note that we cannot have a negative value under the square root sign or we will end up with a complex number. Therefore, we set whatever is under the root sign greater than or equal to 0: 2 − x ≥ 0, so −x ≥ −2, and dividing by −1 gives x ≤ 2.
Remember that when we divide by a negative number, we flip the inequality. This result means that the domain of x, or the input, is any value less than or equal to 2.
Next, we can go ahead and plot our points, but we must be careful not to plot points that are close together since we will not get an accurate picture. A method we can use is to set the function equal to different positive integers to see what their x value is.
For instance, we want to see what x value gives a y value of 4, we can ask ourselves, "The square root of what value gives us 4?" We know it is 16. Then we can ask, "What value when subtracted from 2 gives us 16?" We can see 2 - (-14) = 16. Therefore, to get a y value of 4, we need an x value of -14.
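A few lines of Python reproduce this way of choosing points; the inputs below are simply the x values that give whole-number y values, as described above.

from math import sqrt

def f(x):
    return sqrt(2 - x)     # only defined for x <= 2

for x in (2, 1, -2, -7, -14):
    print(x, f(x))
# 2 -> 0, 1 -> 1, -2 -> 2, -7 -> 3, -14 -> 4  (e.g. x = -14 gives y = 4)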
We can see why x cannot be greater than 2 on the graph, and we can also see why there are no negative y values. If x is greater than 2, we would end up with a complex number and we cannot yield a negative y value from an expression under a square root.
We can also see that this looks somewhat like a sideways parabola, with the negative y values omitted. This is true, and if we square both sides of the function and isolate x, we end up with the equation of the parabola in terms of y: y² = 2 − x, or x = 2 − y².
This way, our range is not restricted to only the positive y values. However, we must realize that this equation is different from the original function, because it is in fact not a function. Recall that to be a function, the graph must pass the vertical line test. It is important to be aware of this difference, and understand how radical functions in terms of x algebraically and geometrically relate to equations in terms of y.
Next, let's graph the functions y = √(x² − 9) and y = √(x² + 9):
Remember, the first thing we need to do is see if we have any restrictions on our domain. We cannot have a negative value inside of the square root, so we could set both expressions inside the root sign greater than or equal to 0. Instead, we can graph both of the expressions inside the roots as functions on the graph and see if any x values yield a negative y value.
We can see the graph of g(x) = x² + 9 is always above the x axis, meaning that for all x values, the function yields positive y values. Since we will always have a positive y value for this function, there are no restrictions on our domain.
The graph of f(x) = x² − 9 dips below the x axis in between -3 and 3. This means we have restrictions on our domain in between the values of x = -3 and x = 3.
When dealing with polynomials inside a root sign, graphing the polynomial function is the easiest way to see where it dips below the x axis and to find the x intercepts. Wherever the graph is below the x axis marks the restriction on the domain of the original function.
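The same check can be done symbolically. The sympy sketch below solves the "radicand greater than or equal to 0" inequality for both expressions and returns the interval(s) making up each domain.

import sympy as sp
from sympy import solve_univariate_inequality

x = sp.symbols('x', real=True)

# Domain of sqrt(x**2 - 9): where is the inside expression non-negative?
print(solve_univariate_inequality(x**2 - 9 >= 0, x, relational=False))
# union of (-oo, -3] and [3, oo), i.e. x <= -3 or x >= 3

# Domain of sqrt(x**2 + 9): the inside expression is never negative.
print(solve_univariate_inequality(x**2 + 9 >= 0, x, relational=False))
# the whole real line, so there is no restriction at all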
To graph the functions, we need to keep the domain in mind for f(x) and graph points less than -3 and greater than 3. For the the graph of g(x), we can plot any x value we want. We can do this by making an xy table or plugging the equations into a graphing calculator.
If we observe the nature of the graphs, they look very similar to two different hyperbolas without their negative y values. We can do some manipulation and see that the functions can be represented as hyperbolic equations: squaring gives y² = x² − 9 and y² = x² + 9, which rearrange to x² − y² = 9 and y² − x² = 9.
Every radical function of this type (a square root of a linear or quadratic expression) will be part of a conic section. This is because when we manipulate the function to be in terms of both x and y, we will always have a y². Let's do another example illustrating this point.
Next, graph the function y = √(16 − x²):
First, we check the domain by graphing the expression inside the root and setting it equal to y.
We can see that the x intercepts are (-4, 0) and (4, 0), and when x is less than -4 or greater than positive 4, we have negative y values. These are our restrictions.
Let's plot our original function with our domain restrictions in mind (in other words, plot points between x = -4 and x = 4).
We should observe that this is a semicircle missing its negative y values. With some manipulation, we can come up with the equation of a circle with radius 4: x² + y² = 16.
We have evaluated radical functions involving square roots. When graphing these functions, we must be aware of the domain before we graph them. Some radical functions, however, will never have domain constraints. Let's look at a cube-root function.
By way of example, graph a cube-root function such as y = ∛x:
There are no domain restraints because we can take the cube root of a negative number. Therefore, our domain is "all real numbers," and we can plot any x value we want.
What if we have a function with a 4th root, such as y = ∜x (the 4th root of x)?
We cannot take an even root of a negative number. For example, (2)⁴ and (-2)⁴ both yield positive 16, so no real number raised to the 4th power gives -16. If we plug in -16 for x, we will get a complex number. This means we need to think about our domain before we graph.
We can deduce that for any radical function y = ⁿ√x (the nth root of x):
If n is odd, our domain is not restricted. If n is even, we must consider constraints on our domain.
For the next example, we want to find the domain of the function:
We can graph the inside functions, but let's set the expression inside the radical greater than or equal to 0.
We can then write the domain as (-∞, -3] ∪ [3, ∞) to indicate that the domain is any x value less than or equal to -3 or greater than or equal to 3.
Sometimes the expression under the radical in a radical function never takes a non-negative value, and therefore the graph will not exist for real numbers.
For example, find the domain and solution set to the following function
We can already see by inspection that the expression inside the square root will never be positive. Let's set it greater than or equal to 0.
We cannot have a square root of a negative, so the domain is empty and therefore the graph of the function does not exist in the real xy plane. We can also check by graphing the expression and setting it equal to y.
Finding the zeros is another way of saying finding the roots. Finding the zeros of radical functions is unique because sometimes the roots that we find do not actually satisfy the function. These roots are called extraneous zeros.
The strategy for finding roots of radical functions is to isolate the radical expression and then square both sides to solve for x. In doing this, we square a quantity, and squaring its opposite would give the same result even though the opposite may not satisfy the original function. As we observed in the first three examples, the equation we ended up with when we solved in terms of both x and y is different from the original function we had. Because of this, we must always check our results when finding the roots.
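The sketch below walks through that strategy with sympy on a hypothetical radical equation, sqrt(3x + 7) = x + 1 (not the article's own final example, which is not reproduced here). It is chosen so that, just as in the worked example that follows, x = 3 checks out while x = -2 turns out to be extraneous.

import sympy as sp

x = sp.symbols('x')

lhs, rhs = sp.sqrt(3*x + 7), x + 1       # the radical is already isolated on the left

# Square both sides and solve the resulting quadratic for candidate roots.
candidates = sp.solve(sp.expand(lhs**2 - rhs**2), x)
print(candidates)                        # [-2, 3]

# Always substitute each candidate back into the ORIGINAL equation.
for c in candidates:
    genuine = sp.simplify(lhs.subs(x, c) - rhs.subs(x, c)) == 0
    print(c, "root" if genuine else "extraneous")
# -2 is extraneous, 3 is a genuine root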
Last, let's look at the function:
First, to find our domain, we set each expression under the radical greater than or equal to 0.
Since we have two constraints, we take the one that is most restrictive, and thus the domain is [-7/3, ∞).
To find our x intercepts, we set the function equal to 0 and solve for x.
We have two roots - x = 3 and x = -2. Let's plug them in and check to see if they satisfy the function.
Since our function equals 0, 3 is a root.
Since our function does not equal 0, -2 is not a root.
Looking at our function, we can clearly see our x intercept and our domain restriction. | http://www.wyzant.com/help/math/precalculus/radical_functions | 13 |
68 | Any attempt to assess the distribution of volcanism through time must take into account the variable definitions of the word eruption. We consider an eruption to consist of the arrival of solid volcanic products at the Earth's surface. This can be in the form of either the explosive ejection of fragmental material or the effusion of initially liquid lava. This definition excludes energetic, but non-ash-bearing steam eruptions. The ejection of fragmental material, however, does not require magmatic explosions producing fresh (juvenile) pyroclastics; phreatic explosions of greatly variable intensity are produced by the interaction of volcanically generated heat and near-surface water and can eject significant amounts of old material. Most eruptions in fact result from a combination of magmatic and non-magmatic processes and are referred to as phreatomagmatic.
The duration of eruptive events also influences eruption documentation. The word eruption has variously been applied to events ranging from an individual explosion to eruptive periods lasting up to hundreds of years. Quiescent periods are common during eruptions, and we have attempted to standardize eruption data by considering clearly linked events separated by surface quiet of up to three months to be part of the same eruption. This distinction is possible at volcanoes in more populated areas, but can be problematical in the case of scattered observations from travelers who witness an ongoing eruption from a remote volcano at separate times. Furthermore, the end of an eruption is often less dramatic than its start and therefore is often not documented; consequently many eruptions have only a start date. Further discussion of the uncertainties of eruption reporting and documentation can be found in Simkin and Siebert (1994).
Eruptions are documented in a wide variety of ways. The initial IAVCEI volcano catalogs were almost entirely restricted to historical eruptions documented at or near the time of their occurrence. Even historically documented eruptions are subject to vagaries such as the extent of monitoring, the proximity (and experience) of observers, and inclement weather that can inhibit observations. Tabular compilations are devoid of essential caveats and explanatory words and underscore the cautions necessary in interpreting these events.
Eruptions preceding human observation have been documented with a variety of techniques. These are shown by an alphabetical code in the table below. The dating methods range from radiometric procedures such as radiocarbon or fission track to tephrochronology, the careful study of the stratigraphic relationships of dated and undated tephra layers. Users should note in particular the distinction between uncorrected radiocarbon dates (C) and dates corrected for past variations in carbon isotopic ratios of atmospheric carbon dioxide (G). These dates are comparable (<100-150 years) for the last 2500 years, but begin to diverge, reaching as much as 700-900 years by the early Holocene. Some eruption reports, established in the volcanological literature such as the Catalog of Active Volcanoes of the World, have subsequently been found to be incorrect. Rather than delete these events, which could appear to be mistaken omissions, they have been flagged with an "X" to note that they have been discredited. These events (further distinguished by the fact that the eruption year is unbolded) should not be included in any eruption totals. Bolding is visible using most browsers when style sheets are enabled.
Caution is also necessary in the interpretation of historical eruption dates. In sparsely populated regions, reported eruption dates (even in recent years) may be that of major eruptive events likely to be noticed by distant observers, and minor preceding eruptive activity may go unreported. Even in populated regions, the likelihood that only major events are reported increases for events prior to the past few centuries.
Earlier historical eruption reports are further complicated by the great temporal and spatial variability in usage of calendars to document time. The Roman Julian calendar (referred to as the Old Style calendar) used in western Europe for more than 1500 years was supplanted by Papal decree in 1582 by the more precise Gregorian calendar (referred to as the New Style calendar). However, adoption of the New Style calendar was regionally variable and in some cases did not take place until the first part of the 20th century. The type of calendar used is rarely specified in the literature, and consequently it is often not known whether eruption dates are reported using Julian or Gregorian calendars (or even using regionally adopted lunar calendars). Japanese eruption dates from the Catalog of Active Volcanoes of the World have been converted to New Style dates, even for pre-1582 eruptions (Hayakawa 1996, pers. comm.); however, elsewhere the type of calendar used is often not known. Consequently we have not attempted to convert dates to the New Style calendar, but have accepted the dates used in earlier published papers or compilations. Users should note that Gregorian dates progressively diverge from Julian dates beginning with the adoption of the Julian calendar in the mid-1st century BC. Old-Style (Julian) dates are 3 days earlier during the 7th century, for example, and increase to 10 days earlier at the time of the adoption of the New-Style (Gregorian) calendar in the 16th century.
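For readers who want a feel for the size of this calendar drift, the small Python sketch below gives the approximate number of days by which Old Style dates lag New Style dates in a given year. It is only a rule of thumb (the offset actually changes at the end of February in certain century years) and not a full conversion routine.

def julian_lag_days(year):
    """Approximate days by which the Julian (Old Style) calendar lags the
    Gregorian (New Style) calendar during the given year; roughly valid
    from about 200 AD onward, ignoring month-level details."""
    century = year // 100
    return century - century // 4 - 2

print(julian_lag_days(650))     # 3 days in the 7th century
print(julian_lag_days(1582))    # 10 days when the Gregorian calendar was introduced
print(julian_lag_days(1900))    # 13 days by the early 20th century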
The area of activity is listed for eruptions originating from known locations other than the central summit conduit. The names of flank vents and/or their location on the edifice are listed here when known. Comments (enclosed in parentheses) sometimes denote uncertainties in eruption validity or are used to identify the designated labels of specific tephra deposits.
Volcanologists have used a wide range of procedures to date the prehistorical eruptions that are critical in determining the geologic history of a volcano. When an eruption date in this compilation is not historical, the dating technique used is shown by a letter code immediately preceding the start year. By far the most commonly used techniques are radiocarbon dating (corrected and uncorrected) and tephrochronology. These and other techniques are shown in the table below and then briefly described, along with associated age uncertainties.
A "?" before the eruption date denotes uncertainty about the validity of the eruption. This is applied, for example, to common reports that a volcano was "smoking," which could denote either simple steam emission or ash-bearing eruption plumes. The year columns of valid eruptions are bolded (visible using most browser settings) to distinguish them from the unbolded dates of both uncertain and discredited eruption reports. In some cases eruptive activity was observed from a distance, without clear indication of which volcano it originated from. These events are attached to the most probable volcanic source, but a "@" precedes the date to indicate uncertainty about the source of the eruption.
Some events--although once established in the volcanological literature, such as the CAVW--have since been discredited. These are included in none of our eruption totals, but great effort is often invested in proving a reported eruption to be false, and we thought it better to retain these "non-events"--in a form that allows easy identification (and removal)--rather than have them appear to readers of earlier compilations as mistaken omissions. Discredited eruptions are flagged by an "X" before the event date. Both discredited and uncertain eruptions can be further distinguished in the eruption table by the absence of bolding of dates.
- = BC date
? = eruption itself uncertain
@ = eruption locality uncertain
X = discredited eruption
A = anthropology
C = carbon-14 (uncorrected)
D = dendrochronology (tree ring)
E = surface exposure
F = fission track
G = carbon-14 (corrected)
H = hydration rind - glass
I = ice core
K = K-Ar
L = lichenometry
M = magnetism
N = thermoluminescence
R = Ar-Ar
S = SOFAR (hydrophonic)
T = tephrochronology
U = Uranium-Series
V = varve count
A = "ANTHROPOLOGY." Eruption dates carrying this designation include native legend (or "traditional" dates) or dates obtained from the age of human artifacts or structures buried or entrained in tephra layers or lava flows. Some are entered without a date uncertainty code (e.g., Bedouin legends of a 640 AD Arabian eruption), while other uncertainties range to 50 years (the "11th century" eruption of Mexico's Michoacán-Guanajuato), but all should be treated with some caution, recognizing the human ability to misremember an undocumented date. Still other dates have been obtained by anthropologists but entered in our file under the dating technique used (commonly 14C).
C = "14C," or UNCORRECTED RADIOCARBON. This is the most common dating technique used for prehistorical eruptions. The technique is based upon the 1951 discovery that wood and other organic matter contains minute amounts of carbon's radioactive isotope (of atomic weight 14). When the organism dies, however, its radioactive carbon is no longer replenished and the proportion of 14C in its carbon begins to decrease by radioactive decay. Because this decay rate is accurately known, careful laboratory measurement of the 14C/12C ratio in prehistoric wood can accurately date that wood's death. Although the half-life of 14C is about 5568 years and its initial concentration is only one part in a trillion (10¹²) parts of 12C, ages to 100,000 years are now being successfully measured.
Radiocarbon dates are normally expressed in years BP ("before present"), and we have followed the standard convention of treating 1950 as "present" (unless otherwise stated) in converting to calendar year dates. Some uncertainty in radiocarbon dates is guaranteed by analytical error and the fact that the 14C decay rate is known only to within 30 years. Most authors combine these and other factors in a single uncertainty, or "±" value, after each radiocarbon date presented. We then accept the author's reported date and attach the appropriate uncertainty code upon entry to our file. Many eruptions' radiocarbon dates in this compilation would have been "historical" if they had taken place in southern Italy where the written record extends to 1500 BC. The "uncorrected" adjective applied to this technique is important: note the distinction between uncorrected radiocarbon dates (C) from corrected radiocarbon ages (G) discussed below under "corrected radiocarbon."
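The arithmetic behind an uncorrected radiocarbon date is short enough to show directly. The sketch below uses the conventional Libby half-life of 5568 years quoted above and treats 1950 as "present"; it applies no calibration, so its output is an uncorrected age of the kind flagged with a C.

from math import log

LIBBY_HALF_LIFE = 5568.0                  # years (conventional value)
DECAY_CONSTANT = log(2) / LIBBY_HALF_LIFE

def radiocarbon_age(fraction_modern):
    """Uncorrected age in years BP from the measured 14C/12C ratio,
    expressed as a fraction of the modern atmospheric ratio."""
    return -log(fraction_modern) / DECAY_CONSTANT

def bp_to_calendar_year(age_bp):
    """Convert years BP to a calendar year, with 1950 AD as 'present'.
    Negative results correspond to BC dates."""
    return 1950 - age_bp

age = radiocarbon_age(0.5)                # half the modern ratio = one half-life
print(round(age))                         # 5568 years BP
print(round(bp_to_calendar_year(age)))    # about -3618, i.e. roughly 3600 BC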
D = "DENDROCHRONOLOGY." The annual character of tree rings was first noticed by the ancient Greeks, and precise chronologies have been developed from comparison of tree-ring growth patterns. Eruptions frequently perturb the growth cycle of nearby trees, and comparison of narrowed tree ring intervals from affected trees with regional tree-ring chronologies allows the precise dating of volcanic eruptions. Distant but long-lived trees bear frost rings from known historical eruptions, and the hope is strong that tree-ring chronologies will help establish a detailed record of the planet's largest eruptions.
Other paleobotanical techniques can also be useful to volcanology. The famous eruption resulting in Oregon's Crater Lake, for example, is dated only to 50 years (nearly 6000 years ago), but careful study of pollen associated with its volcanic ash in a far-away Montana bog shows that the eruption began in the autumn and apparently continued for at least 3 years. Analysis of annual layers in Irish peat bogs is revealing detailed records (including fine particles of volcanic ash) from Icelandic eruptions, and leaf impressions under Japanese ash layers are dating prehistoric eruptions to the exact season of the year. In New Zealand, insect remains preserved by the famous Taupo eruption of the second century AD have shown that the eruption took place in the early afternoon. The application of biology to eruptive deposits holds great promise for unraveling the recent histories of many volcanoes.
E = "SURFACE EXPOSURE" This relatively new technique measures the exposure ages of rocks at the earth's surface to cosmic-ray production. Cosmic-ray production rates are dependent on both altitude and latitude, but if local cosmogenic helium (3He) production rates can be determined, helium isotopes can be used to determine the ages of rocks exposed to the surface since their formation. Chlorine isotopes (36Cl) have also been used to date young volcanic rocks. Careful sampling is required, but surface exposure procedures such as these have been used to date late-Pleistocene and Holocene lava flows. The technique has somewhat larger uncertainties (many hundreds to more than a thousand years) than ages from calibrated radiocarbon dating (see below). It has been relatively infrequently used but is useful where organic material for radiocarbon dating is unavailable and correlates fairly well with radiocarbon ages where both techniques have been used.
F = "FISSION TRACK." Another relatively new technique depends upon the natural spontaneous fission decay of uranium. The resulting heavy fission particles leave minute damage tracks in volcanic glass that can be revealed by chemical etching of a cut and polished surface. The number of tracks per unit area, counted microscopically, is proportional to the age of the glass (for any given uranium content) and can therefore provide eruption dates. Although the technique is capable of better accuracy, one fission track date included here--1000 BP from Canada's Mount Edziza--carries the largest uncertainty in the VRF: 6000 years!
G = 14C, or "CORRECTED RADIOCARBON." Careful radiocarbon dating has been done on selected portions of long-lived bristlecone pine trees that can be independently dated by tree-ring techniques. This work shows generally close agreement between the two methods for the last 2,500 years, but they then start to diverge until "true" tree-ring dates exceed radiocarbon dates by about 900 years for specimens at the limit of the tree-ring time scale (about 7500 years ago). The reason for this divergence is apparent variation in past content of atmospheric radiocarbon. When a corrected date is available we have preceded it by the letter G (which can be thought of, mnemonically, as a slightly altered C), but many published dates are not accompanied by all the information required for an accurate correction, and we have not applied a correction factor to uncorrected dates in our file. The mixing of uncorrected radiocarbon dates with a growing number of calendar dates can be very misleading to readers who do not pay attention to the letter code in front of prehistorical dates. It is imperative that readers be aware of the significant age difference between C and G dates: to 100-150 years during the last 2500 years rising to 900 years in the early Holocene.
H = "HYDRATION RIND." Obsidian flows are formerly-molten liquids that cooled too quickly to permit growth of the crystals that make up most volcanic rocks. The resulting glass is unstable and gradually decomposes by the addition of moisture from the atmosphere. The thickness of the hydration rind on an obsidian flow surface is proportional to the time that it has been exposed to the atmosphere, and this thickness has been used to date 10 flows in our file, mainly from Oregon's Newberry Caldera and California's Mono Craters. Uncertainties are large (several hundred to more than a thousand years) for this technique.
I = "ICE CORE." The far-traveled aerosol of major eruptions eventually settles to the earth's surface, leaving a chemical trace in glaciers and ice caps that grow by annual accumulation of snow. Cores through these annual layers then provide an important record of past volcanism that can extend, as with the new cores from Greenland, over 250,000 years. Whereas tree ring studies give unequivocal link to volcanism only if close to the source, strong sulphate layers are formed in the ice of both polar regions by major historical eruptions, and similar [even larger] layers in prehistoric portions of the core point clearly to volcanism with global distribution as the sulphate source. This gives the exciting potential of establishing a complete chronology of large eruptions, but the difficulty lies in determining what volcano was the source of a specific sulphate layer.
K = "K-Ar", Potassium-Argon dating. One of the most widely used methods of geochronometry. Like radiocarbon dating, it depends upon the relative proportions of parent (40K) to daughter (40Ar) isotopes, and the well-established half-life of that constant decay. It has been used to date rocks approaching the age of the earth [4.5 x 10⁹ years], but is rarely used on materials younger than 100,000 years. The technique has been applied to some Holocene dates, but the associated uncertainties are large (often several thousand years).
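The age equation shared by these parent-daughter methods is itself very simple. The Python sketch below shows it in its most basic closed-system form; a real K-Ar age must also account for the fraction of 40K decays that actually produce 40Ar and for any argon present at the start, which this illustration ignores, and the half-life value used is approximate.

from math import log

def radiometric_age(daughter_parent_ratio, half_life_years):
    """Simplest closed-system decay age: t = ln(1 + D/P) / lambda."""
    decay_constant = log(2) / half_life_years
    return log(1 + daughter_parent_ratio) / decay_constant

K40_HALF_LIFE = 1.25e9                      # years, approximate
# When the daughter/parent ratio reaches 1, exactly one half-life has elapsed.
print(radiometric_age(1.0, K40_HALF_LIFE))  # 1.25e9 years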
L = "LICHENOMETRY." The slow but rather regular growth rate of lichens on a lava flow surface has been used to date two eruptions on Penguin Island, Antarctica [1683 and 1905 AD]. The technique is useful for establishing relative ages on young lava flows, but absolute ages require accurate baseline growth rates, under comparable conditions of climate and substrate, that are rarely available over more than a century.
M = "MAGNETISM." When lava cools from its molten state, it often retains an accurate "memory" of the earth's magnetic field at that time. Secular variation, or historical wander of the earth's magnetic poles, has been large enough that careful study of a lava's magnetic "memory" may reveal its approximate date of cooling. Most dates carry uncertainties in the 25-150-year range. The accuracy of the technique decreases greatly for events older than a few thousand years, and the oldest eruption in this compilation dated by magnetics--Oregon's Mount Bachelor around 5800 BC--carries a ~750-year uncertainty.
N = "THERMOLUMINESCENCE" dating depends on the effects of radioactive decay (like the Fission Track technique) rather than direct counts of isotopic ratios. Some electrons freed during decay are trapped in crystal defects, and laboratory heating frees them, with light being produced in the process. The amount of light depends, in part, on the age of the crystal. This technique is much used by archeologists, but has uncertainties often larger than those from radiocarbon dating. This dating technique was referred to in previous compilations by the dating method code "U," a letter now used for Uranium-series dating.
R = "ARGON-ARGON" (40Ar/39Ar) dating was first developed in the late 1980s. It offers greater precision than typical K-Ar and requires much smaller amounts of material. During stepwise heating, a spectrum of apparent ages is shown by changing isotopic ratios (reflecting contamination) until reaching a plateau representing the crystal's true age. The technique is particularly useful for relatively young materials, and is bringing new order to geologic time scales over the past few tens of millions of years. Holocene dates can have uncertainties of several thousand years, but stratigraphically consistent 40Ar/39Ar ages have successfully been obtained for Holocene volcanic rocks.
S = SOFAR, or submarine "Hydrophone" detection. Explosive eruptions on the sea floor send out shock waves through the water in much the same way that earthquakes send shock waves through the solid earth's crust. The velocities are slower, about 5300 km/hr, but they travel for long distances through the SOFAR channel (a layer of water within 1200 m of the surface) and their arrival times at submarine hydrophones can be used to locate the eruption in the same way that seismologists locate earthquake epicenters. Study of hydrophone records from observed submarine eruptions has shown features characteristic of volcanism, and when these features appear on records from more remote parts of the sea floor they have been used to locate and to date (often to the hour and minute) volcanism that would otherwise have been completely missed.
Although the quiet, nonexplosive effusion of lava that typifies most seafloor volcanism is difficult to detect by hydrophones, earthquake swarms commonly accompany these more gentle eruptions in places such as Hawaii and Iceland, and such swarms from submerged seamounts have been interpreted as submarine eruptions. We have included several volcanoes because of earthquake swarms (STATUS entered as "Seismicity") and the fresh glass dredged from their submerged summits. However, the earthquake swarms might represent magma movement without eruption, so we have preceded these dates with a question mark rather than a symbol representing a "seismic" dating technique.
T = "TEPHROCHRONOLOGY." Aristotle used the Greek word for ash, "tephra," in describing an eruption on the island of Vulcano. Because modern volcanologists define "ash" as particles smaller than 2 mm in diameter, a broader term is useful for describing material of all sizes explosively ejected by volcanoes. In 1944, Sigurdur Thorarinsson proposed the word "tephra" for this purpose and it is widely accepted today. Tephra from large explosive eruptions may be distributed over enormous distances, forming a distinctive layer that later proves useful as a "marker" horizon dating nearby layers of sediment. Careful mapping of layers throughout a volcanic area can develop a relative sequence of overlapping ash layers. When some of these ash layers are dated, either historically or by some other technique, then dates (generally with large uncertainty) can be assigned to the intervening layers in this relative sequence. The technique is a broad one, embracing a variety of field geologic and stratigraphic methods, and we have used this designation to cover prehistoric dates for which our source specified no technique. Uncertainties are often large (hundreds to a few thousand years), and those dates without listed uncertainties should likewise be treated with caution.
U = "URANIUM-SERIES." Several dating techniques utilize Uranium-series disequilibrium ratios. The Uranium-Thorium disequilibrium series is often used to date carbonate materials such as speleothems, travertines, corals, deep sea sediments, bones, teeth, peat, or evaporites. More complex applications of this technique have also been applied to volcanic rocks. 230Thorium, part of the 238Uranium decay series, has a half-life of about 75,000 years, in comparison to the half-life of 238Uranium of 4,470,000,000 years. When the amounts of Uranium and Thorium isotopes are compared, an estimation of the age of an object can be obtained. This technique has been applied to volcanic rocks as young as the end of the Pleistocene and the beginning of the Holocene and has relatively large uncertainties (from hundreds to a few thousand years) during these time intervals. Other Uranium-series nuclides have shorter half-lives. 226Ra-230Th ratios have been used to date eruptions during the mid-Holocene, and 210Po-210Pb ratios have been applied to eruptions as young as a few decades or less.
V = "VARVE COUNT." Seasonal changes affect the sediment accumulation in many small lakes, particularly where the spring melting of ice provides an annual layer of coarse sandy particles to the lake floor in alternation with the finer clay deposited through the rest of the year. These layers, or varves, can later be counted to establish the date for a layer of volcanic ash in their midst. Like tree rings and ice-core layers, these annual layers provide very accurate dates under ideal conditions and careful work, but uncertainty increases with age and non-ideal conditions. Few dates in the file carry stated uncertainties (up to several hundred years). The sediments of Turkey's Lake Van provide a remarkable record--16 eruptions since 8104 BC--of nearby Nemrut volcano, but uncertainties are not listed.
Codes after dates denote uncertainties about the date itself. When the date is known only to the year or month, the following columns are left blank. Letter codes are used when the size of the dating uncertainty is known. This allows us to deal with eruption dates known only between two observations ("after July 10 but before July 24" would be shown as 0717g). Frequently used codes include a "t" in the year column (± 50); a 17th century eruption would appear as 1650t. A "p" in the month column (± 30) likewise would be used for an eruption known only to have begun in July or August. Larger uncertainties (such as those accompanying radiocarbon dates) may not exactly match one of the codes below; in these cases the closest available letter is used.
A ">" symbol after the year or the day indicates that the eruption was continuing as of that date. There is substantially less interest in documenting the end of an eruption than either its beginning or its most vigorous phases; consequently many eruptions (even in recent years) are listed as "continuing." The waning stages of an eruption are often not considered noteworthy, and unreported eruptive activity may occur after the departure of observers from an isolated volcano. When an eruption is reported to be continuing on one date, and on a later date activity is observed to have ceased, the mid-point of the range is entered as the stop date (along with the appropriate uncertainty range). If the time between these observations is long, however, the eruption is generally listed as continuing on the date of the last observation. All these factors emphasize the substantial caution necessary when interpreting eruption stop dates.
Code   ±Years   ±Days
a      1        1
b      2        2
c      3        3
d      4        4
e      5        5
f      6        6
g      7        7
h      8        8
i      9        9
j      10       10
k      12       12
m      14       15
n      16       20
o      18       25
p      20       30 (1 mo)
q      25       45
r      30       60 (2 mo)
s      40       75
t      50       90 (3 mo)
u      75       120
v      100      150
w      150      180 (6 mo)
x      200      270
y      300      365 (1 yr)
z      500      545
*      1000     730 (2 yr)
?      Date uncertain (no data)
<      Before date listed
>      After date listed
EXAMPLES:
1731<        = on or before 1731
1731a        = between 1730 & 1732
1731 1105d   = between Nov 1 & 9, 1731
1750t        = 18th century
1790j        = late 18th century
1778 02 ?    = February (?) 1778
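For readers who work with these dates digitally, the following minimal Python sketch (our own illustration, not code from the catalog; the function and dictionary names are hypothetical) decodes a year field carrying one of the letter codes tabled above:

# Decode a catalog year field such as '1731a', '1750t', '1731<' or '1731>'.
YEAR_UNCERTAINTY = {
    'a': 1, 'b': 2, 'c': 3, 'd': 4, 'e': 5, 'f': 6, 'g': 7, 'h': 8, 'i': 9,
    'j': 10, 'k': 12, 'm': 14, 'n': 16, 'o': 18, 'p': 20, 'q': 25, 'r': 30,
    's': 40, 't': 50, 'u': 75, 'v': 100, 'w': 150, 'x': 200, 'y': 300,
    'z': 500, '*': 1000,
}

def decode_year(field):
    """Return (year, qualifier) where qualifier is a +/- uncertainty in years
    or a descriptive string for the non-numeric symbols."""
    year, code = field[:-1], field[-1]
    if code.isdigit():                 # no trailing code at all
        return int(field), None
    if code == '<':
        return int(year), 'on or before'
    if code == '>':
        return int(year), 'continuing after'
    if code == '?':
        return int(year), 'uncertain (no data)'
    return int(year), YEAR_UNCERTAINTY.get(code)

print(decode_year('1731a'))   # (1731, 1)  i.e. between 1730 and 1732
print(decode_year('1750t'))   # (1750, 50) i.e. 18th century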
Twenty common eruptive characteristics designated by the IAVCEI originators of the Catalog of Active Volcanoes of the World are shown in these tables. The reported presence of a particular characteristic is shown by an "X" in the appropriate column, a "?" marks uncertain occurrence, and a "-" indicates that this characteristic was not reported. This tabular format allows quick visual inspection of the occurrence of a particular eruptive characteristic, but the quality of eruption reporting is highly variable, and the absence of an "X" does not necessarily mean that this characteristic did not occur.
Eruptive characteristics are shown in five groups of four. The first four characteristics relate to vent location, and note activity originating from the central vent and/or from flank vent(s). Some eruptions may originate from long fissures cutting the summit or flanks of the volcano. These may be either radial to the central conduit or parallel to regional tectonic trends. The second four characteristics relate to interaction with water, and document submarine eruptions (and their occasional formation of new islands), subglacial eruptions, and those from crater lakes. The third group covers tephra-related processes, such as explosive eruptions, the formation of pyroclastic flows and surges (hot glowing avalanches--sometimes referred to as nuées ardentes--that can move down slopes at hurricane velocities), phreatic explosions, and fumarolic activity. The fourth group documents processes related to lava extrusion, and includes lava flows, lava lakes (molten lakes over submerged vents that may keep lava circulating for years), lava domes (the extrusion of viscous lava that accumulates around the vent), and lava spines. The last group documents the impact of eruptions on humans, and notes the occurrence of fatalities, damage to land, property, etc., as well as the formation of often destructive mudflows (also referred to by the Indonesian term lahar) and tsunamis. Mudflows directly associated with glacier outbursts (often known by the Icelandic term jökulhlaups) are identified by a "J" rather than the "X" used to indicate other characteristics.
Place:    C = Central crater eruption   E = Flank (excentric) vent   R = Radial fissure eruption   F = Regional fissure eruption
Water:    S = Submarine eruption   I = New island formation   G = Subglacial eruption   C = Crater lake eruption
Tephra:   E = Explosive   N = Pyroclastic flows   P = Phreatic explosions   F = Fumarolic activity
Lava:     F = Lava flow(s)   L = Lava lake eruption   D = Dome extrusion   S = Spine extrusion
Damage:   F = Fatalities   D = Damage (land, property, etc)   M = Mudflows (lahars)   T = Tsunami (giant sea waves)
Symbol key:   X = recorded   ? = uncertain   - = not recorded
The 20 standardized eruptive characteristics of the IAVCEI volcano catalog displayed in the "Eruptive History (table)" format are supplemented by three additional eruptive characteristics in the "Eruptive History (expanded)" format. These are "Caldera collapse," "Evacuation," and "Debris avalanche(s)." Caldera collapse is restricted to caldera formation by magma chamber collapse and is not used for large horseshoe-shaped avalanche calderas formed by sector collapse of the volcanic edifice. The latter can often be distinguished by the occurrence of the "Debris avalanche(s)" characteristic that accompanies these edifice failures, although sometimes this characteristic is attached to smaller slope failures.
Particular caution should be used in the interpretation of several eruptive characteristics. Although the Catalog of Active Volcanoes of the World volumes distinguished phreatic explosions from "normal" explosions, many historical reports are inadequate to distinguish magmatic from phreatic explosions, and the presence of an "X" in the Explosive column should not be taken as an indication of magmatic eruptions. Solfataric or fumarolic activity accompanies most eruptions, but the Fumarolic column has been mostly restricted to cases where original accounts are unclear as to whether explosive activity or only fumarolic activity occurred. The formation of lava spines was included by the Catalog of Active Volcanoes of the World compilers in part as a response to the spectacular 311-m-high spine that temporarily formed during the 1902 Pelée eruption in the West Indies. However, this typically minor process accompanying the growth of lava domes is often not documented, and this column does not reflect the actual frequency of spine formation.
The reported size, or "bigness," of historical eruptions depends very much on both the experience and vantage point of the observer. To meet the need for a meaningful magnitude measure that can be easily applied to eruption sizes, Newhall and Self (1982) integrated quantitative data with the subjective descriptions of observers, resulting in the Volcanic Explosivity Index (VEI). It is a simple 0-to-8 index of increasing explosivity, with each successive integer representing about an order of magnitude increase. Criteria for VEI assignments are shown in the table below, which is followed by examples of eruptions in different VEI size classes. VEI assignments have been updated from those in Newhall and Self (1982) and Simkin and Siebert (1994).
VEI   Tephra Volume (km³)   Example
0     Effusive              Masaya (Nicaragua), 1570
1     > 0.00001             Poás (Costa Rica), 1991
2     > 0.001               Ruapehu (New Zealand), 1971
3     > 0.01                Nevado del Ruiz (Colombia), 1985
4     > 0.1                 Pelée (West Indies), 1902
5     > 1                   Mount St. Helens (United States), 1980
6     > 10                  Krakatau (Indonesia), 1883
7     > 100                 Tambora (Indonesia), 1815
8     > 1000                Yellowstone (United States), Pleistocene
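As an illustration only (not an official Global Volcanism Program tool; real VEI assignments also weigh descriptive and observational criteria, not just volume), a short Python sketch mapping a bulk tephra volume onto the 0-8 scale using the thresholds in the table:

def vei_from_tephra_km3(volume_km3):
    """Return the VEI integer implied by a bulk tephra volume in cubic km."""
    lower_bounds = [0.00001, 0.001, 0.01, 0.1, 1.0, 10.0, 100.0, 1000.0]  # VEI 1..8
    vei = 0
    for bound in lower_bounds:
        if volume_km3 > bound:
            vei += 1
        else:
            break
    return vei

print(vei_from_tephra_km3(1.2))    # 5, comparable to Mount St. Helens 1980
print(vei_from_tephra_km3(0.05))   # 3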
A "*" following a VEI indicates that there are two or more VEI assignments for that eruption, as in the common example of one or more short, paroxysmal eruptions preceded by lower level activity. The "*" follows the maximum VEI recorded between the indicated start and stop dates, and alerts the user to the fact that more information on the eruption exists.
A "?" accompanies those VEIs that were particularly difficult to assign, and those that are based on purely circumstantial evidence. For example, a VEI of 1? might have been assigned to an undescribed eruption because a nearby contemporaneous eruption received sufficient historical comment to confidently assign a VEI of 2. When there was simply no evidence on which to base a VEI, this column has normally been left empty (20% of the eruptions in our file).
A "+" following a VEI indicates an eruption volume in the upper third of the range for that particular VEI designation. It shows those eruptions known to be larger than most others sharing the same VEI numeral, but its absence does not necessarily indicate a relatively small event. The designation is used only for VEIs > 4, volume data permit adding it to only 22 events globally, but it is helpful to identify the obviously larger events in volume ranges that span a full order of magnitude.
A very few eruptions, mostly before 1500 AD, have been upgraded by one VEI unit with the assumption that early in the historical record only relatively large eruptions would have been documented. These are shown by a "^" following the VEI.
Eruptions associated with caldera collapse are normally large (probably VEI >4), and those for which data are lacking to assign a specific VEI are indicated by a "C" in the VEI column. Likewise, Plinian eruptions in the absence of more quantitative data are marked with a "P" in the VEI column.
Eruptions that were definitely explosive, but lack other descriptive information to assess their magnitude, have been assigned a default VEI of 2, that of "moderate" eruptions. Conversely, other eruptions in which substantial tephra volumes were accumulated over long periods of time and/or much of the tephra volume was in near-vent cone construction, have been downgraded by one VEI unit.
Accurate measurement of eruptive volumes requires careful field work and is often subject to unresolvable uncertainties. Consequently, volume information is available for only a small proportion of eruptions. Volume data is displayed in two different formats. Because of space constraints in the Eruptive History table view, only the order of magnitude (in cubic meters) of calculated lava and/or tephra volumes is displayed. An entry of 8/9 under the L/T header, for example, indicates an eruption with 10⁸ m³ of lava and 10⁹ m³ of tephra. Because only the exponent is displayed, this means that the eruption volume may be nearly 10 times larger than shown. The Eruptive History expanded view, in contrast, has room to display the full volume data, including in some cases uncertainty ranges. It is important to note that tephra and lava volumes are listed without correction for vesicularity (the void space occupied by air bubbles, or vesicles), the extraneous fragments of older rock included accidentally in the deposit, or compaction of ash layers with time. The tephra volumes displayed, therefore, are bulk tephra volumes, and not Dense Rock Equivalents (DRE), or volumes of new magma erupted.
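A tiny sketch (again our own hypothetical helper, not part of the database) shows how the order-of-magnitude L/T field expands back into bulk volumes in cubic meters:

def decode_lt(field):
    """Expand an L/T exponent pair such as '8/9' into bulk volumes in m^3."""
    lava_exp, tephra_exp = field.split('/')
    return 10 ** int(lava_exp), 10 ** int(tephra_exp)

print(decode_lt('8/9'))   # (100000000, 1000000000): 10^8 m^3 lava, 10^9 m^3 tephra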
The volcano and eruption data of this digital version of Volcanoes of the World (Siebert and Simkin, 2002-) are updated from its hardcopy predecessor (Simkin and Siebert, 1994) and originate from more than 3500 references. These references are accessible in this website through both regional and volcano-specific listings. The basic building block of the Smithsonian's volcano database is the Catalog of Active Volcanoes of the World (CAVW), a series of regional volcano catalogs published by IAVCEI beginning in 1951. In order to more easily locate these important compilations (which contain many primary references not listed in our compilation), these IAVCEI regional catalog references are bolded in our regional and volcano-specific listings.
The listings appearing here are not intended to be a comprehensive bibliography of references for a particular volcano or region, but represent those references that are cited as the sources of the volcano and eruption data in Volcanoes of the World. Several other global compilations have been helpful: among them are IAVCEI data sheets of post-Miocene volcanoes (1975-80), Volcano Letter reports of the U S Geological Survey from 1926-1955 (compiled in Fiske et al., 1987), independent compilations by Latter (1975) and Gushchenko (1979), and a caldera compilation by Newhall and Dzurisin (1988). Major sources of eruption data subsequent to or supplementing the CAVW can be found in a series of annual summaries by Gustav Hantke published between 1939 and 1962 (mostly in the IAVCEI publication Bulletin of Volcanology), and annual eruption compilations by the Volcanological Society of Japan (1960-96) and Smithsonian Institution reports (since 1968) in various formats, compiled in McClelland et al. (1985) and in the Activity Reports section of this website (Venzke et al., 2002-). The data sources referenced focus almost exclusively on Holocene volcanism and emphasize papers on volcanic stratigraphy and physical volcanology. Abstracts are typically not referenced unless they contain significant data not in other sources. As with the GeoRef bibliographic database, diacritical marks are not used.
References are linked directly to data in our Volcano Reference File. This sometimes results in apparently incorrect citations in lists of data sources for a volcano or a region. Discussion of another volcano or eruption (sometimes far from the one that is the subject of the manuscript) may produce a citation that is not at all apparent from the title. Alert readers will note a backlog of uncited references for publications in recent years, which we will continue to address.
Volcano locations are shown in two symbol sizes, with the smaller triangles representing volcanoes with uncertain Holocene eruptions. Red triangles on each map mark volcanoes of that region; yellow triangles indicate volcanoes of other regions. The physiography of the world and regional maps on this website originates from two data sets, plotted using ER Mapper. Subaerial topography uses the GTOPO30 data set of the U S Geological Survey, and submarine topography originates from satellite altimetry data (Smith and Sandwell, 1997) of sea-surface topography, which mimics that of the sea floor.
Volcano photos by Smithsonian scientists are supplemented by many other images by volcanologists from the U.S. Geological Survey and other organizations around the world. Photographers are acknowledged with individual photo credits, and their collective contributions have greatly helped to give a visual footprint to the world's volcanoes and their eruptions. Photo galleries for volcanoes show volcano morphology images first, followed by eruption images linked to the start date of the eruption. For each eruption (which may have lasted for multiple years), an image with a summary caption appears first, followed by additional images for that eruption in chronological order.
CAVW Editors (1951-1975). Catalog of Active Volcanoes of the World. Rome: International Association of Volcanology and Chemistry of the Earth's Interior, 22 volumes.
Fiske R S, Simkin T, Nielsen E A (eds) (1987). The Volcano Letter. Washington, DC: Smithsonian Inst Press, 1536 p (Reprinting of 1926-1955 issues of the U S Geological Survey's Hawaiian Volcano Observatory).
Gushchenko I I (1979). Eruptions of Volcanoes of the World: A Catalog. Moscow: Nauka Pub, Acad Sci USSR Far Eastern Sci Center, 474 p (in Russian).
Hantke G (1939-62). Übersicht über die vulkanische Tätigkeit. Eruption summaries published in the Zeitschrift der Deutschen Geologischen Gesellschaft in 1939 and the Bulletin of Volcanology in 1951, 1953, 1955, 1959, and 1962.
IAVCEI (1973-80). Post-Miocene Volcanoes of the World. IAVCEI data sheets, Rome: Internatl Assoc Volc Chem Earth's Interior.
Latter J H (1975). The history and geography of active and dormant volcanoes. A worldwide catalogue and index of active and potentially active volcanoes, with an outline of their eruptions. Unpublished manuscript.
McClelland L, Simkin T, Summers M, Nielsen E, and Stein T C (eds.) (1989). Global Volcanism 1975-1985. Prentice-Hall and American Geophysical Union, 653 p.
Newhall C G, and Dzurisin D (1988). Historical unrest at large calderas of the world. U S Geol Surv Bull, 1855: 1108 p, 2 vol.
Newhall C G, and Self S (1982). The volcanic explosivity index (VEI): an estimate of explosive magnitude for historical volcanism. J Geophys Res (Oceans & Atmospheres), 87: 1231-38.
Siebert L, and Simkin T (2002-). Volcanoes of the World: an Illustrated Catalog of Holocene Volcanoes and their Eruptions. Smithsonian Institution. Global Volcanism Program Digital Information Series, GVP-3, (http://www.volcano.si.edu/world/).
Simkin T, and Siebert L (1994). Volcanoes of the World, 2nd edition. Geoscience Press in association with the Smithsonian Institution Global Volcanism Program, Tucson AZ, 368 p.
Smith W H F, and Sandwell D T (1997). Global seafloor topography from satellite altimetry and ship depth soundings. Science, 277: 1957-1962.
U S Geological Survey (2002). GTOPO30. Land Processes Distributed Active Archive Center (LP DAAC), U S Geol Surv EROS Data Center http://edcdaac.usgs.gov.
Venzke E, Wunderman R W, McClelland L, Simkin, T, Luhr, J F, Siebert L, and Mayberry G (eds.) (2002-). Global Volcanism, 1968 to the Present. Smithsonian Institution, Global Volcanism Program Digital Information Series, GVP-4 (http://www.volcano.si.edu/reports/).
Volcanological Society of Japan (1960-96). Bulletin of Volcanic Eruptions, no 1-33 [Annual reports issued 1 to 3 years after event year, published since 1986 in the Bulletin of Volcanology].
Centrifugal force (rotating reference frame)
Centrifugal force (from Latin centrum "center" and fugere "to flee") can generally be any force directed outward relative to some origin. More particularly, in classical mechanics, the centrifugal force is an outward force which arises when describing the motion of objects in a rotating reference frame. Because a rotating frame is an example of a non-inertial reference frame, Newton's laws of motion do not accurately describe the dynamics within the rotating frame. However, a rotating frame can be treated as if it were an inertial frame so that Newton's laws can be used if so-called fictitious forces (also known as inertial or pseudo-forces) are included in the sum of external forces on an object. The centrifugal force is what is usually thought of as the cause for apparent outward movement like that of passengers in a vehicle turning a corner, of the weights in a centrifugal governor, and of particles in a centrifuge. From the standpoint of an observer in an inertial frame, the effects can be explained as results of inertia without invoking the centrifugal force. Centrifugal force should not be confused with centripetal force or the reactive centrifugal force, both of which are real forces independent of the frame of the observer.
Analysis of motion within rotating frames can be greatly simplified by the use of the fictitious forces. By starting with an inertial frame, where Newton's laws of motion hold, and keeping track of how the time derivatives of a position vector change when transforming to a rotating reference frame, the various fictitious forces and their forms can be identified. Rotating frames and fictitious forces can often reduce the description of motion in two dimensions to a simpler description in one dimension (corresponding to a co-rotating frame). In this approach, circular motion in an inertial frame, which only requires the presence of a centripetal force, becomes the balance between the real centripetal force and the frame-determined centrifugal force in the rotating frame where the object appears stationary. If a rotating frame is chosen so that just the angular position of an object is held fixed, more complicated motion, such as elliptical and open orbits, appears because the centripetal and centrifugal forces will not balance. The general approach, however, is not limited to these co-rotating frames, but can be equally applied to objects in motion in any rotating frame.
In classical Newtonian physics
Although Newton's laws of motion hold exclusively in inertial frames, often it is far more convenient and more advantageous to describe the motion of objects within a rotating reference frame. Sometimes the calculations are simpler (an example is inertial circles), and sometimes the intuitive picture coincides more closely with the rotational frame (an example is sedimentation in a centrifuge). By treating the extra acceleration terms due to the rotation of the frame as if they were forces, subtracting them from the physical forces, it's possible to treat the second time derivative of position (relative to the rotating frame) as absolute acceleration. Thus the analysis using Newton's laws of motion can proceed as if the reference frame was inertial, provided the fictitious force terms are included in the sum of external forces. For example, centrifugal force is used in the FAA pilot's manual in describing turns. Other examples are such systems as planets, centrifuges, carousels, turning cars, spinning buckets, and rotating space stations.
If objects are seen as moving within a rotating frame, this movement results in another fictitious force, the Coriolis force; and if the rate of rotation of the frame is changing, a third fictitious force, the Euler force is experienced. Together, these three fictitious forces allow for the creation of correct equations of motion in a rotating reference frame.
For the following formalism, the rotating frame of reference is regarded as a special case of a non-inertial reference frame that is rotating relative to an inertial reference frame denoted the stationary frame.
In a rotating frame of reference, the time derivatives of the position vector r, such as velocity and acceleration vectors, of an object will differ from the time derivatives in the stationary frame according to the frame's rotation. If the rotating frame shares its origin with the stationary frame but rotates with the absolute angular velocity ω, the first time derivative [dr/dt] evaluated within the rotating frame is:

    \left[\frac{d\mathbf{r}}{dt}\right] = \frac{d\mathbf{r}}{dt} - \boldsymbol{\omega} \times \mathbf{r}

where × denotes the vector cross product and square brackets [...] denote evaluation in the rotating frame of reference. In other words, the apparent velocity in the rotating frame is altered by the amount of the apparent rotation at each point, which is perpendicular to both the vector from the origin r and the axis of rotation ω and directly proportional in magnitude to each of them. The vector ω has magnitude ω equal to the rate of rotation and is directed along the axis of rotation according to the right-hand rule.
Newton's law of motion for a particle of mass m written in vector form is:

    \mathbf{F} = m\,\frac{d^2\mathbf{r}}{dt^2}

where F is the vector sum of the physical forces applied to the particle and r is the position vector of the particle.
By twice applying the transformation above from the stationary to the rotating frame, the absolute acceleration of the particle can be written as:

    \frac{d^2\mathbf{r}}{dt^2} = \left[\frac{d^2\mathbf{r}}{dt^2}\right] + 2\,\boldsymbol{\omega}\times\left[\frac{d\mathbf{r}}{dt}\right] + \frac{d\boldsymbol{\omega}}{dt}\times\mathbf{r} + \boldsymbol{\omega}\times(\boldsymbol{\omega}\times\mathbf{r})
The apparent acceleration in the rotating frame is [d²r/dt²]. An observer unaware of the rotation would expect this to be zero in the absence of outside forces. However, Newton's laws of motion apply only in the stationary frame and describe dynamics in terms of the absolute acceleration d²r/dt². Therefore the observer perceives the extra terms as contributions due to fictitious forces. These terms in the apparent acceleration are independent of mass; so it appears that each of these fictitious forces, like gravity, pulls on an object in proportion to its mass. When these forces are added, the equation of motion has the form:

    \mathbf{F} - m\frac{d\boldsymbol{\omega}}{dt}\times\mathbf{r} - 2m\,\boldsymbol{\omega}\times\left[\frac{d\mathbf{r}}{dt}\right] - m\,\boldsymbol{\omega}\times(\boldsymbol{\omega}\times\mathbf{r}) = m\left[\frac{d^2\mathbf{r}}{dt^2}\right]

From the perspective of the rotating frame, the additional force terms are experienced just like the real external forces and contribute to the apparent acceleration. The additional terms on the force side of the equation can be recognized as, reading from left to right, the Euler force −m (dω/dt) × r, the Coriolis force −2m ω × [dr/dt], and the centrifugal force −m ω × (ω × r), respectively. Unlike the other two fictitious forces, the centrifugal force always points radially outward from the axis of rotation of the rotating frame, with magnitude mω²r, and unlike the Coriolis force in particular, it is independent of the motion of the particle in the rotating frame. As expected, for a non-rotating inertial frame of reference the centrifugal force and all other fictitious forces disappear.
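As a numerical illustration of these three terms, the following short Python sketch (illustrative variable names, assuming NumPy is available) evaluates the Euler, Coriolis, and centrifugal forces for a particle whose position and velocity are given in the rotating frame:

import numpy as np

def fictitious_forces(m, r, v, omega, domega_dt):
    """Euler, Coriolis, and centrifugal forces for mass m at position r with
    velocity v (both measured in the rotating frame), for a frame rotating at
    angular velocity omega with angular acceleration domega_dt."""
    euler = -m * np.cross(domega_dt, r)
    coriolis = -2.0 * m * np.cross(omega, v)
    centrifugal = -m * np.cross(omega, np.cross(omega, r))
    return euler, coriolis, centrifugal

m = 1.0
r = np.array([1.0, 0.0, 0.0])       # 1 m from the rotation axis
v = np.array([0.0, 0.0, 0.0])       # at rest in the rotating frame
omega = np.array([0.0, 0.0, 2.0])   # 2 rad/s about the z axis
alpha = np.array([0.0, 0.0, 0.0])   # steady rotation

print(fictitious_forces(m, r, v, omega, alpha))
# the centrifugal term comes out as [4, 0, 0]: magnitude m*omega^2*r, pointing outward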
Absolute rotation
Three scenarios were suggested by Newton to answer the question of whether the absolute rotation of a local frame can be detected; that is, if an observer can decide whether an observed object is rotating or if the observer is rotating.
- The shape of the surface of water rotating in a bucket. The shape of the surface becomes concave to balance the centrifugal force against the other forces upon the liquid.
- The tension in a string joining two spheres rotating about their center of mass. The tension in the string will be proportional to the centrifugal force on each sphere as it rotates around the common center of mass.
In these scenarios, the effects attributed to centrifugal force are only observed in the local frame (the frame in which the object is stationary) if the object is undergoing absolute rotation relative to an inertial frame. By contrast, in an inertial frame, the observed effects arise as a consequence of the inertia and the known forces without the need to introduce a centrifugal force. Based on this argument, the privileged frame, wherein the laws of physics take on the simplest form, is a stationary frame in which no fictitious forces need to be invoked.
Within this view of physics, any other phenomenon that is usually attributed to centrifugal force can be used to identify absolute rotation. For example, the oblateness of a sphere of freely flowing material is often explained in terms of centrifugal force. The oblate spheroid shape reflects, following Clairaut's theorem, the balance between containment by gravitational attraction and dispersal by centrifugal force. That the Earth is itself an oblate spheroid, bulging at the equator where the radial distance and hence the centrifugal force is larger, is taken as evidence of its absolute rotation.
Below several examples illustrate both the stationary and rotating frames of reference, and the role of centrifugal force and its relation to Coriolis force in rotating frameworks. For more examples see Fictitious force, rotating bucket and rotating spheres.
Dropping ball
An example of straight-line motion as seen in a stationary frame is a ball that steadily drops at a constant rate parallel to the axis of rotation. From a stationary frame of reference it moves in a straight line, but from the rotating frame it moves in a helix. The projection of the helical motion in a rotating horizontal plane is shown at the right of the figure. Because the projected horizontal motion in the rotating frame is a circular motion, the ball's motion requires an inward centripetal force, provided in this case by a fictitious force that produces the apparent helical motion. This force is the sum of an outward centrifugal force and an inward Coriolis force. The Coriolis force overcompensates the centrifugal force by exactly the required amount to provide the necessary centripetal force to achieve circular motion.
Banked turn
Riding a car around a curve, we take a personal view that we are at rest in the car, and should be undisturbed in our seats. Nonetheless, we feel sideways force applied to us from the seats and doors and a need to lean to one side. To explain the situation, we propose a centrifugal force that is acting upon us and must be combated. Interestingly, we find this discomfort is reduced when the curve is banked, tipping the car inward toward the center of the curve.
A different point of view is that of the highway designer. The designer views the car as executing curved motion and therefore requiring an inward centripetal force to impel the car around the turn. By banking the curve, the force exerted upon the car in a direction normal to the road surface has a horizontal component that provides this centripetal force. That means the car tires no longer need to apply a sideways force to the car, but only a force perpendicular to the road. By choosing the angle of bank to match the car's speed around the curve, the car seat transmits only a perpendicular force to the passengers, and the passengers no longer feel a need to lean nor feel a sideways push by the car seats or doors.
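For completeness, the standard textbook relation between the ideal bank angle θ, the speed v, and the radius R of the curve (an addition here, not stated explicitly in the text above) follows from requiring the normal force N from the road to supply both the vertical support and the horizontal centripetal force:

    N\sin\theta = \frac{m v^2}{R}, \qquad N\cos\theta = m g \quad\Longrightarrow\quad \tan\theta = \frac{v^2}{g R}

At this angle the seats and doors exert no sideways force on the passengers, matching the description above.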
Earth

A calculation for Earth at the equator (sidereal rotation period ≈ 86164 seconds, equatorial radius ≈ 6378100 meters) shows that an object experiences a centrifugal force equal to approximately 1/289 of standard gravity. Because centrifugal force increases with the square of the speed at a given radius, one would expect gravity to be cancelled for an object travelling 17 times faster than the Earth's rotation, and in fact satellites in low orbit at the equator complete 17 full orbits in one day.
Gravity diminishes according to the inverse square of distance, but centrifugal force (for an object co-rotating at the Earth's angular rate) increases in direct proportion to the distance, so the ratio of gravity to centrifugal force falls as the cube of the distance. Thus a circular geosynchronous orbit has a radius of 42164 km; 42164/6378.1 = 6.61, the cube root of 289.
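These figures are easy to check numerically; the sketch below uses commonly quoted values for the sidereal day and equatorial radius (our assumptions, not values taken from the original text):

import math

T = 86164.0      # sidereal rotation period, s
R = 6.3781e6     # equatorial radius, m
g = 9.81         # standard gravity, m/s^2

omega = 2 * math.pi / T
a_centrifugal = omega**2 * R
print(g / a_centrifugal)            # ~289, i.e. centrifugal acceleration ~ g/289

# Gravity falls as 1/r^2 while centrifugal force at the Earth's spin rate grows
# as r, so their ratio falls as r^3; they balance at r = 289**(1/3) * R.
r_geo = 289 ** (1.0 / 3.0) * R
print(r_geo / 1000)                 # ~42,200 km, close to the geosynchronous radius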
Planetary motion
Centrifugal force arises in the analysis of orbital motion and, more generally, of motion in a central-force field: in the case of a two-body problem, it is easy to convert to an equivalent one-body problem with force directed to or from an origin, and motion in a plane, so we consider only that.
The symmetry of a central force lends itself to a description in polar coordinates. The dynamics of a mass, m, expressed using Newton's second law of motion (F = ma), becomes in polar coordinates:

    \mathbf{F} = m\left(\ddot{r} - r\dot{\theta}^2\right)\hat{\mathbf{r}} + m\left(r\ddot{\theta} + 2\dot{r}\dot{\theta}\right)\hat{\boldsymbol{\theta}}

where \mathbf{F} is the force accelerating the object and the "hat" variables are unit direction vectors (\hat{\mathbf{r}} points in the centrifugal or outward direction, and \hat{\boldsymbol{\theta}} is orthogonal to it).

In the case of a central force, relative to the origin of the polar coordinate system, \mathbf{F} can be replaced by F(r)\hat{\mathbf{r}}, meaning the entire force is the component in the radial direction. An inward force of gravity would therefore correspond to a negative-valued F(r).

The components of F = ma along the radial direction therefore reduce to

    m\left(\ddot{r} - r\dot{\theta}^2\right) = F(r)

in which the term proportional to the square of the rate of rotation appears on the acceleration side as a "centripetal acceleration", that is, a negative acceleration term in the \hat{\mathbf{r}} direction. In the special case of a planet in circular orbit around its star, for example, where \ddot{r} is zero, the centripetal acceleration alone is the entire acceleration of the planet, curving its path toward the sun under the force of gravity, the negative F(r).
As pointed out by Taylor, for example, it is sometimes convenient to work in a co-rotating frame, that is, one rotating with the object so that the angular rate of the frame, ω, equals the dθ/dt of the object in the stationary frame. In such a frame, the observed dθ/dt is zero and d²r/dt² alone is treated as the acceleration: so in the equation of motion, the term −mr(dθ/dt)² is "reincarnated on the force side of the equation (with opposite signs, of course) as the centrifugal force mω²r in the radial equation":

    m\ddot{r} = F(r) + m\omega^2 r

where the term mω²r is known as the centrifugal force. The "reincarnation" on the force side of the equation is necessary because, without this force term, observers in the rotating frame would find they could not predict the motion correctly; they would be left with the incorrect radial equation m\ddot{r} = F(r), missing the outward term. The centrifugal force term in this equation is called a "fictitious force", "apparent force", or "pseudo force", as its value varies with the rate of rotation of the frame of reference. When the centrifugal force term is expressed in terms of parameters of the rotating frame, replacing dθ/dt with ω, it can be seen that it is the same centrifugal force previously derived for rotating reference frames.
Because of the absence of a net force in the azimuthal direction, conservation of angular momentum allows the radial component of this equation to be expressed solely with respect to the radial coordinate, r, and the angular momentum L = m r² dθ/dt, yielding the radial equation (a "fictitious one-dimensional problem" with only an r dimension):

    m\ddot{r} = F(r) + \frac{L^2}{m r^3}

The term L²/(mr³) is again the centrifugal force, a force component induced by the rotating frame of reference. The equations of motion for r that result from this equation for the rotating 2D frame are the same that would arise from a particle in a fictitious one-dimensional scenario under the influence of the force in the equation above. If F(r) represents gravity, it is a negative term proportional to 1/r², so the net acceleration in r in the rotating frame depends on a difference of reciprocal square and reciprocal cube terms, which are in balance in a circular orbit but otherwise typically not. This equation of motion is similar to one originally proposed by Leibniz. Given r, the rate of rotation is easy to infer from the constant angular momentum L, so a 2D solution can be easily reconstructed from a 1D solution of this equation.
When the angular velocity of this co-rotating frame is not constant, that is, for non-circular orbits, other fictitious forces—the Coriolis force and the Euler force—will arise, but can be ignored since they will cancel each other, yielding a net zero acceleration transverse to the moving radial vector, as required by the starting assumption that the vector co-rotates with the planet. In the special case of circular orbits, in order for the radial distance to remain constant the outward centrifugal force must cancel the inward force of gravity; for other orbit shapes, these forces will not cancel, so r will not be constant.
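As a brief numerical sketch of the "fictitious one-dimensional problem" above (illustrative units with GM = 1 and unit mass, semi-implicit Euler integration, and names of our own choosing), the radial equation can be integrated directly and the angle recovered from the conserved angular momentum:

import numpy as np

GM = 1.0
m = 1.0
r, rdot, theta = 1.0, 0.0, 0.0
L = 1.2 * m * r * np.sqrt(GM / r)    # a bit more than the circular-orbit momentum
dt, steps = 1e-4, 200000

for _ in range(steps):
    accel = -GM / r**2 + L**2 / (m**2 * r**3)   # gravity plus the centrifugal term
    rdot += accel * dt
    r += rdot * dt
    theta += L / (m * r**2) * dt                # reconstruct the 2D angle

print(r, theta)   # r oscillates between the perihelion and aphelion of an ellipse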
Concepts of centripetal and centrifugal force played a key early role in establishing the set of inertial frames of reference and the significance of fictitious forces, even aiding in the development of general relativity in which gravity itself becomes a fictitious force.
The operations of numerous common rotating mechanical systems are most easily conceptualized in terms of centrifugal force. For example:
- A centrifugal governor regulates the speed of an engine by using spinning masses that move radially, adjusting the throttle, as the engine changes speed. In the reference frame of the spinning masses, centrifugal force causes the radial movement.
- A centrifugal clutch is used in small engine-powered devices such as chain saws, go-karts and model helicopters. It allows the engine to start and idle without driving the device but automatically and smoothly engages the drive as the engine speed rises. Inertial drum brake ascenders used in rock climbing and the inertia reels used in many automobile seat belts operate on the same principle.
- Centrifugal forces can be used to generate artificial gravity, as in proposed designs for rotating space stations. The Mars Gravity Biosatellite will study the effects of Mars-level gravity on mice with gravity simulated in this way.
- Spin casting and centrifugal casting are production methods that use centrifugal force to disperse liquid metal or plastic throughout the negative space of a mold.
- Centrifuges are used in science and industry to separate substances. In the reference frame spinning with the centrifuge, the centrifugal force induces a hydrostatic pressure gradient in fluid-filled tubes oriented perpendicular to the axis of rotation, giving rise to large buoyant forces which push low-density particles inward. Elements or particles denser than the fluid move outward under the influence of the centrifugal force. This is effectively Archimedes' principle as generated by centrifugal force as opposed to being generated by gravity.
- Some amusement rides make use of centrifugal forces. For instance, a Gravitron's spin forces riders against a wall and allows riders to be elevated above the machine's floor in defiance of Earth's gravity.
Nevertheless, all of these systems can also be described without requiring the concept of centrifugal force, in terms of motions and forces in a stationary frame, at the cost of taking somewhat more care in the consideration of forces and motions within the system.
References
- Stephen T. Thornton & Jerry B. Marion (2004). Classical Dynamics of Particles and Systems (5th ed.). Belmont CA: Brook/Cole. Chapter 10. ISBN 0-534-40896-6.
- John Robert Taylor (2004). Classical Mechanics. Sausalito CA: University Science Books. Chapter 9, pp. 327 ff. ISBN 1-891389-22-X.
- Robert Resnick & David Halliday (1966). Physics. Wiley. p. 121. ISBN 0-471-34524-5.
- Federal Aviation Administration (2007). Pilot's Encyclopedia of Aeronautical Knowledge. Oklahoma City OK: Skyhorse Publishing Inc. Figure 3–21. ISBN 1-60239-034-7.
- Richard Hubbard (2000). Boater's Bowditch: The Small Craft American Practical Navigator. NY: McGraw-Hill Professional. p. 54. ISBN 0-07-136136-7.
- Lawrence K. Wang & Norman C. Pereira (1979). Handbook of Environmental Engineering: Air and Noise Pollution Control. Humana Press. p. 63. ISBN 0-89603-001-6.
- Lee M. Grenci & Jon M. Nese (2001). A World of Weather: Fundamentals of Meteorology. Kendall Hunt. p. 272. ISBN 0-7872-7716-9.
- Jerrold E. Marsden & Tudor S. Ratiu (1999). Introduction to Mechanics and Symmetry: A Basic Exposition of Classical Mechanical Systems. Springer. p. 251. ISBN 0-387-98643-X.
- Alexander L. Fetter & John Dirk Walecka (2003). Theoretical Mechanics of Particles and Continua. Courier Dover Publications. pp. 38–39. ISBN 0-486-43261-0.
- John L. Synge (2007). Principles of Mechanics (Reprint of Second Edition of 1942 ed.). Read Books. p. 347. ISBN 1-4067-4670-3.
- Taylor (2005). p. 342.
- LD Landau and LM Lifshitz (1976). Mechanics (Third ed.). Oxford: Butterworth-Heinemann. p. 128. ISBN 978-0-7506-2896-9.
- Louis N. Hand, Janet D. Finch (1998). Analytical Mechanics. Cambridge University Press. p. 267. ISBN 0-521-57572-9.
- Mark P Silverman (2002). A universe of atoms, an atom in the universe (2 ed.). Springer. p. 249. ISBN 0-387-95437-6.
- Taylor (2005). p. 329.
- Cornelius Lanczos (1986). The Variational Principles of Mechanics (Reprint of Fourth Edition of 1970 ed.). Dover Publications. Chapter 4, §5. ISBN 0-486-65067-7.
- Morton Tavel (2002). Contemporary Physics and the Limits of Knowledge. Rutgers University Press. p. 93. ISBN 0-8135-3077-6. "Noninertial forces, like centrifugal and Coriolis forces, can be eliminated by jumping into a reference frame that moves with constant velocity, the frame that Newton called inertial."
- Louis N. Hand, Janet D. Finch (1998). Analytical Mechanics. Cambridge University Press. p. 324. ISBN 0-521-57572-9.
- I. Bernard Cohen, George Edwin Smith (2002). The Cambridge companion to Newton. Cambridge University Press. p. 43. ISBN 0-521-65696-6.
- Simon Newcomb (1878). Popular astronomy. Harper & Brothers. pp. 86–88.
- Lawrence S. Lerner (1996). Physics for Scientists and Engineers. Jones & Bartlett Publishers. p. 129. ISBN 0-7637-0253-6.
- Bowser, Edward Albert (1920). An elementary treatise on analytic mechanics: with numerous examples. D. Van Nostrand Company.
- Robert and Gary Ehrlich (1998). What if you could unscramble an egg?. Rutgers University Press. ISBN 978-0-8135-2548-8.
- Herbert Goldstein (1950). Classical Mechanics. Addison-Wesley. pp. 24–25, 61–64. ISBN 0-201-02918-9.
- John Clayton Taylor (2001). Hidden unity in nature's laws. Cambridge University Press. p. 26. ISBN 0-521-65938-8.
- Henry M. Stommel and Dennis W. Moore (1989). An introduction to the Coriolis force. Columbia University Press. pp. 28–40. ISBN 978-0-231-06636-5.
- Taylor (2005). p. 358-9.
- Taylor (2005). p. 359.
- Frank Swetz, John Fauvel, Otto Bekken, Bengt Johansson, and Victor Katz (1997). Learn from the masters!. Mathematical Association of America. pp. 268–269. ISBN 978-0-88385-703-8.
- Whiting, J.S.S. (November 1983). "Motion in a central-force field". Physics Education 18 (6): pp. 256–257. Bibcode:1983PhyEd..18..256W. doi:10.1088/0031-9120/18/6/102. ISSN 0031-9120. Retrieved May 7, 2009.
- Hans Christian Von Baeyer (2001). The Fermi Solution: Essays on science (Reprint of 1993 ed.). Courier Dover Publications. p. 78. ISBN 0-486-41707-7.
- Myers, Rusty L. (2006). The basics of physics. Greenwood Publishing Group. p. 57. ISBN 0-313-32857-9.
- Centripetal force and Centrifugal force, from an online Regents Exam physics tutorial by the Oswego City School District
- Centrifugal force at the HyperPhysics concepts site
- Centripetal and Centrifugal Forces at MathPages
- Motion over a flat surface and a parabolic surface Java physlets by Brian Fiedler (from School of Meteorology at the University of Oklahoma) illustrating fictitious forces.
- Animation clip showing scenes as viewed from both a stationary and a rotating frame of reference, visualizing the Coriolis and centrifugal forces.
- John Baez: Does centrifugal force hold the Moon up?
Horner's method

In mathematics, Horner's method (also known as Horner scheme in the UK or Horner's rule in the U.S.) is either of two things: (i) an algorithm for calculating polynomials, which consists in transforming the monomial form into a computationally efficient form; or (ii) a method for approximating the roots of a polynomial. The latter is also known as Ruffini–Horner's method.
These methods are named after the British mathematician William George Horner, although they were known before him by Paolo Ruffini and, six hundred years earlier, by the Chinese mathematician Qin Jiushao.
Description of the algorithm
Given the polynomial

    p(x) = a₀ + a₁x + a₂x² + a₃x³ + ... + aₙxⁿ,

where a₀, ..., aₙ are real numbers, we wish to evaluate the polynomial at a specific value of x, say x₀.

To accomplish this, we define a new sequence of constants as follows:

    bₙ   := aₙ
    bₙ₋₁ := aₙ₋₁ + bₙx₀
      ⋮
    b₀   := a₀ + b₁x₀

Then b₀ is the value of p(x₀).

To see why this works, note that the polynomial can be written in the form

    p(x) = a₀ + x(a₁ + x(a₂ + ... + x(aₙ₋₁ + aₙx)...))

Thus, by iteratively substituting the bᵢ into the expression,

    p(x₀) = a₀ + x₀(a₁ + x₀(a₂ + ... + x₀(aₙ₋₁ + bₙx₀)...))
          = a₀ + x₀(a₁ + x₀(a₂ + ... + x₀bₙ₋₁...))
            ⋮
          = a₀ + x₀b₁
          = b₀.
For example, to evaluate f(x) = 2x³ − 6x² + 2x − 1 at x = 3, we use synthetic division as follows:

     x₀ │ x³   x²   x¹   x⁰
      3 │ 2   −6    2   −1
        │      6    0    6
        └─────────────────────
          2    0    2    5

The entries in the third row are the sum of those in the first two. Each entry in the second row is the product of the x-value (3 in this example) with the third-row entry immediately to the left. The entries in the first row are the coefficients of the polynomial to be evaluated. Then the remainder of f(x) on division by x − 3 is 5.

But by the remainder theorem, we know that the remainder is f(3). Thus f(3) = 5.

In this example, with a₃ = 2, a₂ = −6, a₁ = 2 and a₀ = −1, we can see that b₃ = 2, b₂ = 0, b₁ = 2 and b₀ = 5 — the entries in the third row. So, synthetic division is based on Horner's method.

As a consequence of the polynomial remainder theorem, the entries in the third row are the coefficients of the second-degree polynomial 2x² + 0x + 2, the quotient of f(x) on division by x − 3. The remainder is 5. This makes Horner's method useful for polynomial long division.
Divide x³ − 6x² + 11x − 6 by x − 2:

      2 │ 1   −6   11   −6
        │      2   −8    6
        └─────────────────────
          1   −4    3    0

The quotient is x² − 4x + 3.
Let f₁(x) = 4x⁴ − 6x³ + 3x − 5 and f₂(x) = 2x − 1. Divide f₁(x) by f₂(x) using Horner's method.

      2 │ 4   −6    0    3 │ −5
     ───┼───────────────────┼─────
      1 │      2   −2   −1 │  1
        │                   │
        └───────────────────┼─────
          2   −2   −1    1 │ −4

The third row is the sum of the first two rows, divided by 2. Each entry in the second row is the product of 1 with the third-row entry to the left. The answer is

    f₁(x)/f₂(x) = 2x³ − 2x² − x + 1 − 4/(2x − 1).
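The bookkeeping in these examples is easy to automate. The following small Python helper (our own illustration, with a hypothetical name, not code from the article) performs synthetic division by (x − x₀):

def synthetic_division(coefficients, x0):
    """Divide a polynomial (coefficients listed from highest to lowest degree)
    by (x - x0); return the quotient coefficients and the remainder."""
    quotient = [coefficients[0]]
    for c in coefficients[1:]:
        quotient.append(c + x0 * quotient[-1])
    return quotient[:-1], quotient[-1]

print(synthetic_division([2, -6, 2, -1], 3))    # ([2, 0, 2], 5)
print(synthetic_division([1, -6, 11, -6], 2))   # ([1, -4, 3], 0)

Division by 2x − 1, as in the last example, can be handled by calling synthetic_division with x₀ = 1/2 and then halving the quotient coefficients; the remainder is left as computed (−4).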
Floating point multiplication and division
Horner's method is a fast, code-efficient method for multiplication and division of binary numbers on a microcontroller with no hardware multiplier. One of the binary numbers to be multiplied is represented as a trivial polynomial, where (using the above notation) the coefficients aᵢ are the binary digits (each 0 or 1) and x = 2. Then, x (or x to some power) is repeatedly factored out. In this binary numeral system (base 2), x = 2, so powers of 2 are repeatedly factored out.

For example, to find the product of two numbers, 0.15625 and m:

    0.15625 m = (0.00101₂) m = (2⁻³ + 2⁻⁵) m = 2⁻³ (m + 2⁻² m)
To find the product of two binary numbers, d and m:
- 1. A register holding the intermediate result is initialized to d.
- 2. Begin with the least significant (rightmost) non-zero bit in m.
- 2b. Count (to the left) the number of bit positions to the next most significant non-zero bit. If there are no more-significant bits, then take the value of the current bit position.
- 2c. Using that value, perform a right-shift operation by that number of bits on the register holding the intermediate result
- 3. If all the non-zero bits were counted, then the intermediate result register now holds the final result. Otherwise, add d to the intermediate result, and continue in step #2 with the next most significant bit in m.
In general, for a binary multiplier m with bit values (m₃m₂m₁m₀) the product with d is:

    m·d = m₃·2³·d + m₂·2²·d + m₁·2¹·d + m₀·2⁰·d

At this stage in the algorithm, it is required that terms with zero-valued coefficients are dropped, so that only binary coefficients equal to one are counted; thus the problem of multiplication or division by zero is not an issue, despite the implication in the factored equation:

    m·d = m₃·2³·(d + (m₂/m₃)·2⁻¹·(d + (m₁/m₂)·2⁻¹·(d + (m₀/m₁)·2⁻¹·d)))

The denominators all equal one (or the term is absent), so this reduces to:

    m·d = m₃·2³·(d + m₂·2⁻¹·(d + m₁·2⁻¹·(d + m₀·2⁻¹·d)))

or equivalently (as consistent with the "method" described above):

    m·d = 2³·(m₃·d + 2⁻¹·(m₂·d + 2⁻¹·(m₁·d + 2⁻¹·(m₀·d))))
In binary (base 2) math, multiplication by a power of 2 is merely a register shift operation. Thus, multiplying by 2 is calculated in base-2 by an arithmetic shift. A factor of 2⁻¹ is a right arithmetic shift, a factor of 2⁰ results in no operation (since 2⁰ = 1 is the multiplicative identity element), and a factor of 2¹ results in a left arithmetic shift. The multiplication product can now be quickly calculated using only arithmetic shift operations, addition and subtraction.
The method is particularly fast on processors supporting a single-instruction shift-and-addition-accumulate. Compared to a C floating-point library, Horner's method sacrifices some accuracy, however it is nominally 13 times faster (16 times faster when the "canonical signed digit" (CSD) form is used), and uses only 20% of the code space.
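A minimal sketch of the shift-and-add idea follows, written here for non-negative integers processed from the most significant bit (the fractional, right-shifting variant described above is the mirror image of the same scheme); this is simply Horner's rule with x = 2. The function name is ours:

def shift_add_multiply(d, m):
    """Multiply d by m using only shifts and additions (Horner's rule, base 2)."""
    result = 0
    for bit in format(m, 'b'):        # most significant bit first
        result = (result << 1) + (d if bit == '1' else 0)
    return result

print(shift_add_multiply(23, 41))     # 943
print(23 * 41)                        # 943, for comparison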
Polynomial root finding
Using Horner's method in combination with Newton's method, it is possible to approximate the real roots of a polynomial. The algorithm works as follows. Given a polynomial pₙ(x) of degree n with zeros zₙ ≤ zₙ₋₁ ≤ ... ≤ z₁, make some initial guess x₀ such that x₀ > z₁. Now iterate the following two steps:

1. Using Newton's method, find the largest zero z₁ of pₙ(x) using the guess x₀.

2. Using Horner's method, divide out (x − z₁) to obtain pₙ₋₁(x). Return to step 1 but use the polynomial pₙ₋₁(x) and the initial guess z₁.
These two steps are repeated until all real zeros are found for the polynomial. If the approximated zeros are not precise enough, the obtained values can be used as initial guesses for Newton's method but using the full polynomial rather than the reduced polynomials.
Consider the polynomial

    p₆(x) = (x + 8)(x + 5)(x + 3)(x − 2)(x − 3)(x − 7),

which can be expanded to

    p₆(x) = x⁶ + 4x⁵ − 72x⁴ − 214x³ + 1127x² + 1602x − 5040.

From the above we know that the largest root of this polynomial is 7, so we are able to make an initial guess of 8. Using Newton's method the first zero of 7 is found as shown in black in the figure to the right. Next p₆(x) is divided by (x − 7) to obtain

    p₅(x) = x⁵ + 11x⁴ + 5x³ − 179x² − 126x + 720,

which is drawn in red in the figure to the right. Newton's method is used to find the largest zero of this polynomial with an initial guess of 7. The largest zero of this polynomial, which corresponds to the second largest zero of the original polynomial, is found at 3 and is circled in red. The degree 5 polynomial is now divided by (x − 3) to obtain

    p₄(x) = x⁴ + 14x³ + 47x² − 38x − 240,

which is shown in yellow. The zero for this polynomial is found at 2, again using Newton's method, and is circled in yellow. Horner's method is now used to obtain

    p₃(x) = x³ + 16x² + 79x + 120,

which is shown in green and found to have a zero at −3. This polynomial is further reduced to

    p₂(x) = x² + 13x + 40,

which is shown in blue and yields a zero of −5. The final root of the original polynomial may be found by either using the final zero as an initial guess for Newton's method, or by reducing p₂(x) and solving the linear equation x + 8 = 0. As can be seen, the expected roots of −8, −5, −3, 2, 3, and 7 were found.
Octave implementation
The following Octave code was used in the example above to implement Horner's method.
function [y b] = horner(a, x)
  % Input a is the polynomial coefficient vector, x the value to be evaluated at.
  % The output y is the evaluated polynomial and b the divided coefficient vector.
  b(1) = a(1);
  for i = 2:length(a)
    b(i) = a(i) + x*b(i-1);
  end
  y = b(length(a));
  b = b(1:length(b)-1);
end
Python implementation
The following Python code implements Horner's method.
def horner(x, *polynomial):
    """A function that implements the Horner Scheme for evaluating a
    polynomial of coefficients *polynomial in x."""
    result = 0
    for coefficient in polynomial:
        result = result * x + coefficient
    return result
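To show how the pieces fit together, here is a sketch of the full root-finding loop (helper names are our own, and it assumes a polynomial whose roots are all real, as in the example above) combining Horner evaluation, Newton's method, and deflation:

def horner_with_quotient(coefficients, x0):
    """Evaluate at x0 and return (value, quotient coefficients), with the
    coefficients listed from highest to lowest degree."""
    b = [coefficients[0]]
    for c in coefficients[1:]:
        b.append(c + x0 * b[-1])
    return b[-1], b[:-1]

def newton_root(coefficients, guess, tol=1e-12, max_iter=100):
    """Newton's method for one root, starting to the right of the largest root."""
    derivative = [c * (len(coefficients) - 1 - i)
                  for i, c in enumerate(coefficients[:-1])]
    x = guess
    for _ in range(max_iter):
        fx, _ = horner_with_quotient(coefficients, x)
        dfx, _ = horner_with_quotient(derivative, x)
        step = fx / dfx
        x -= step
        if abs(step) < tol:
            break
    return x

def real_roots(coefficients, guess):
    """Find all roots of a polynomial with only real roots, by repeated
    Newton iteration and deflation."""
    roots = []
    while len(coefficients) > 2:
        r = newton_root(coefficients, guess)
        roots.append(r)
        _, coefficients = horner_with_quotient(coefficients, r)  # divide out (x - r)
        guess = r
    roots.append(-coefficients[1] / coefficients[0])   # the final linear factor
    return roots

print(real_roots([1, 4, -72, -214, 1127, 1602, -5040], 8))
# approximately [7, 3, 2, -3, -5, -8]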
Horner's method can be used to convert between different positional numeral systems – in which case x is the base of the number system, and the aᵢ coefficients are the digits of the base-x representation of a given number – and can also be used if x is a matrix, in which case the gain in computational efficiency is even greater. In fact, when x is a matrix, further acceleration is possible which exploits the structure of matrix multiplication: only on the order of √n matrix–matrix multiplications are needed instead of n (at the expense of requiring more storage), using the 1973 method of Paterson and Stockmeyer.
Evaluation using the monomial form of a degree-n polynomial requires at most n additions and (n² + n)/2 multiplications, if powers are calculated by repeated multiplication and each monomial is evaluated individually. (This can be reduced to n additions and 2n − 1 multiplications by evaluating the powers of x iteratively.) If numerical data are represented in terms of digits (or bits), then the naive algorithm also entails storing approximately 2n times the number of bits of x (the evaluated polynomial has approximate magnitude xⁿ, and one must also store xⁿ itself). By contrast, Horner's method requires only n additions and n multiplications, and its storage requirements are only n times the number of bits of x. Alternatively, Horner's method can be computed with n fused multiply–adds. Horner's method can also be extended to evaluate the first k derivatives of the polynomial with kn additions and multiplications.
Horner's method is optimal, in the sense that any algorithm to evaluate an arbitrary polynomial must use at least as many operations. Alexander Ostrowski proved in 1954 that the number of additions required is minimal. Victor Pan proved in 1966 that the number of multiplications is minimal. However, when x is a matrix, Horner's method is not optimal.
This assumes that the polynomial is evaluated in monomial form and no preconditioning of the representation is allowed, which makes sense if the polynomial is evaluated only once. However, if preconditioning is allowed and the polynomial is to be evaluated many times, then faster algorithms are possible. They involve a transformation of the representation of the polynomial. In general, a degree-n polynomial can be evaluated using only ⌊n/2⌋ + 2 multiplications and n additions (see Knuth: The Art of Computer Programming, Vol. 2).
Horner's paper entitled "A new method of solving numerical equations of all orders, by continuous approximation" was read before the Royal Society of London at its meeting on July 1, 1819, with Davies Gilbert, Vice-President and Treasurer, in the chair; this was the final meeting of the session before the Society adjourned for its Summer recess. When a sequel was read before the Society in 1823, it was again at the final meeting of the session. On both occasions, papers by James Ivory, FRS, were also read. In 1819, it was Horner's paper that got through to publication in the Philosophical Transactions later in the year, Ivory's paper falling by the way, despite Ivory being a Fellow; in 1823, when a total of ten papers were read, fortunes as regards publication were reversed. But Gilbert, who had strong connections with the West of England and may have had social contact with Horner, resident as Horner was in Bristol and Bath, published his own survey of Horner-type methods earlier in 1823.
Horner's paper in Part II of Philosophical Transactions of the Royal Society of London for 1819 was warmly and expansively welcomed by a reviewer in the issue of The Monthly Review: or, Literary Journal for April, 1820; in comparison, a technical paper by Charles Babbage is dismissed curtly in this review. However, the reviewer noted that another, similar method had also recently been promoted by the architect and mathematical expositor, Peter Nicholson. This theme is developed in a further review of some of Nicholson's books in the issue of The Monthly Review for December, 1820, which in turn ends with notice of the appearance of a booklet by Theophilus Holdred, from whom Nicholson acknowledges he obtained the gist of his approach in the first place, although claiming to have improved upon it. The sequence of reviews is concluded in the issue of The Monthly Review for September, 1821, with the reviewer reasserting both Horner's priority and the primacy of his method, judiciously observing that had Holdred published forty years earlier, his contribution could more easily be recognized. The reviewer is exceptionally well-informed, even having sighted Horner's preparatory correspondence with Peter Barlow in 1818, seeking the work of Budan. The Bodleian Library, Oxford, has the Editor's annotated copy of The Monthly Review from which it is clear that the most active reviewer in mathematics in 1814 and 1815 (the last years for which this information has been published) was none other than Peter Barlow, one of the foremost specialists on approximation theory of the period, suggesting that it was Barlow who wrote this sequence of reviews. As it also happened, Henry Atkinson, of Newcastle, devised a similar approximation scheme in 1809; he had consulted his fellow Geordie, Charles Hutton, another specialist and a senior colleague of Barlow at the Royal Military Academy, Woolwich, only to be advised that, while his work was publishable, it was unlikely to have much impact. J. R. Young, writing in the mid-1830s, concluded that Holdred's first method replicated Atkinson's while his improved method was only added to Holdred's booklet some months after its first appearance in 1820, when Horner's paper was already in circulation.
The feature of Horner's writing that most distinguishes it from his English contemporaries is the way he draws on the Continental literature, notably the work of Arbogast. The advocacy, as well as the detraction, of Horner's Method has this as an unspoken subtext. Quite how he gained that familiarity has not been determined. Horner is known to have made a close reading of John Bonnycastle's book on algebra. Bonnycastle recognizes that Arbogast has the general, combinatorial expression for the reversion of series, a project going back at least to Newton. But Bonnycastle's main purpose in mentioning Arbogast is not to praise him, but to observe that Arbogast's notation is incompatible with the approach he adopts. The gap in Horner's reading was the work of Paolo Ruffini, except that, as far as awareness of Ruffini goes, citations of Ruffini's work by authors, including medical authors, in Philosophical Transactions speak volumes: there are none - Ruffini's name only appears in 1814, recording a work he donated to the Royal Society. Ruffini might have done better if his work had appeared in French, as had Malfatti's Problem in the reformulation of Joseph Diaz Gergonne, or had he written in French, as had Antonio Cagnoli, a source quoted by Bonnycastle on series reversion (today, Cagnoli is in the Italian Wikipedia, but has yet to make it into either French or English).
Fuller develops the thesis that Horner did not publish his method until after Holdred had published it. But this is at variance with the contemporary reception of the works of both Horner and Holdred, as indicated in the previous paragraph, besides the numerous internal flaws in Fuller's paper, flaws so strange as to raise doubt about Fuller's purpose (see the Talk page). Fuller also takes aim at Augustus De Morgan. Precocious though De Morgan was, he was not the reviewer for The Monthly Review, while several others - Thomas Stephens Davies, J. R. Young, Stephen Fenwick, T. T. Wilkinson - wrote Horner firmly into their records, not least Horner himself, as he published extensively up until the year of his death in 1837. His paper in 1819 was one that would have been difficult to miss. In contrast, the only other mathematical sighting of Holdred is a single named contribution to The Gentleman's Mathematical Companion, an answer to a problem.
It is questionable to what extent it was De Morgan's advocacy of Horner's priority in discovery that led to "Horner's method" being so called in textbooks, but it is true that those suggesting this tend themselves to know of Horner largely through intermediaries, of whom De Morgan made himself a prime example. However, this method qua method was known long before Horner. In reverse chronological order, Horner's method was already known to:
- Paolo Ruffini in 1809 (see Ruffini's rule)
- Isaac Newton in 1669 (but precise reference needed)
- the Chinese mathematician Zhu Shijie in the 14th century
- the Chinese mathematician Qin Jiushao in his Mathematical Treatise in Nine Sections in the 13th century
- the Persian mathematician Sharaf al-Dīn al-Tūsī in the 12th century
- the Chinese mathematician Jia Xian in the 11th century (Song Dynasty)
- The Nine Chapters on the Mathematical Art, a Chinese work of the Han Dynasty (202 BC – 220 AD) edited by Liu Hui (fl. 3rd century).
However, this observation on its own masks significant differences in conception and also, as noted with Ruffini's work, issues of accessibility.
Qin Jiushao, in his Shu Shu Jiu Zhang (Mathematical Treatise in Nine Sections; 1247), presents a portfolio of methods of Horner-type for solving polynomial equations, which was based on earlier works of the 11th century Song dynasty mathematician Jia Xian; for example, one method is specifically suited to bi-quintics, of which Qin gives an instance, in keeping with the then Chinese custom of case studies. The first person writing in English to note the connection with Horner's method was Alexander Wylie, writing in The North China Herald in 1852; perhaps conflating and misconstruing different Chinese phrases, Wylie calls the method Harmoniously Alternating Evolution (which does not agree with his Chinese, linglong kaifang, not that at that date he uses pinyin), working the case of one of Qin's quartics and giving, for comparison, the working with Horner's method. Yoshio Mikami, in Development of Mathematics in China and Japan, published in Leipzig in 1913, gave a detailed description of Qin's method, using one of Qin's quartics in a worked example; he wrote: "who can deny the fact of Horner's illustrious process being used in China at least nearly six long centuries earlier than in Europe ... We of course don't intend in any way to ascribe Horner's invention to a Chinese origin, but the lapse of time sufficiently makes it not altogether impossible that the Europeans could have known of the Chinese method in a direct or indirect way." However, as Mikami is also aware, it was not altogether impossible that a related work, Si Yuan Yu Jian (Jade Mirror of the Four Unknowns; 1303) by Zhu Shijie, might make the shorter journey across to Japan, but seemingly it never did, although another work of Zhu, Suan Xue Qi Meng, had a seminal influence on the development of traditional mathematics in the Edo period, starting in the mid-1600s. Ulrich Libbrecht (at the time teaching in school, but subsequently a professor of comparative philosophy) gave a detailed description of Qin's method in his doctoral thesis; he concluded: "It is obvious that this procedure is a Chinese invention ... the method was not known in India." He suggested that Fibonacci probably learned of it from the Arabs, who perhaps borrowed it from the Chinese. Here, the problem is that there is no more evidence for this speculation than there is of the method being known in India. Of course, the extraction of square and cube roots along similar lines is already discussed by Liu Hui in connection with Problems IV.16 and 22 in Jiu Zhang Suan Shu, while Wang Xiaotong in the 7th century supposes his readers can solve cubics by an approximation method he does not specify.
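For readers who want the computational core rather than the history: all of these methods rest on evaluating a polynomial by nested multiplication (synthetic division). The sketch below is a generic Python illustration, not a transcription of Horner's or Qin's worked examples; the coefficients are made up for the demonstration.

```python
def horner_eval(coeffs, x):
    """Evaluate a polynomial by nested multiplication.

    coeffs holds the coefficients in descending order of degree,
    so [2, -6, 2, -1] represents 2x^3 - 6x^2 + 2x - 1.
    """
    result = 0
    for c in coeffs:
        result = result * x + c   # one multiply and one add per coefficient
    return result

# Illustrative only: p(x) = 2x^3 - 6x^2 + 2x - 1 evaluated at x = 3
print(horner_eval([2, -6, 2, -1], 3))   # 5
```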
See also
- Clenshaw algorithm to evaluate polynomials in Chebyshev form
- De Boor's algorithm to evaluate splines in B-spline form
- De Casteljau's algorithm to evaluate polynomials in Bézier form
- Estrin's scheme to facilitate parallelization on modern computer architectures
- Lill's method to approximate roots graphically
- Ruffini's rule to divide a polynomial by a binomial of the form x − r
- Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein (2009). Introduction to Algorithms (3rd ed.). MIT Press. pp. 41, 900, 990.
- "Wolfram MathWorld: Horner's Rule".
- "Wolfram MathWorld: Horner's Method".
- "French Wikipedia: Méthode de Ruffini-Horner".
- Florian Cajori, Horner's method of approximation anticipated by Ruffini, Bulletin of the American Mathematical Society, Vol. 17, No. 9, pp. 409–414, 1911 (read before the Southwestern Section of the American Mathematical Society on November 26, 1910).
- It is obvious that this procedure is a Chinese invention, Ulrich Libbrecht, Chinese Mathematics in the Thirteenth Century, Chapter 13, Equations of Higher Degree, p. 178, Dover, ISBN 0-486-44619-0
- Kripasagar, March 2008, "Efficient Micro Mathematics", Circuit Cellar, issue 212, p. 62.
- Kress, Rainer, "Numerical Analysis", Springer, 1991, p.112.
- Higham, Nicholas. (2002). Accuracy and Stability of Numerical Algorithms. Philadelphia: SIAM. ISBN 0-89871-521-0. Section 5.4.
- W. Pankiewicz. "Algorithm 337: calculation of a polynomial and its derivative values by Horner scheme".
- Ostrowski, A. M. (1954). "On two problems in abstract algebra connected with Horner's rule", Studies in Math. Mech., pp. 40-48. New York: Academic Press.
- Pan, V. Ya. (1966). "On means of calculating values of polynomials", Russian Math. Surveys 21, pp. 105–136.
- Ulrich Libbrecht, Chinese Mathematics in the Thirteenth Century, pp. 181–191, Dover, ISBN 0-486-44619-0
- William George Horner (July 1819). "A new method of solving numerical equations of all orders, by continuous approximation". Philosophical Transactions (Royal Society of London): pp. 308–335.
- JSTOR archive of the Royal Society of London. JSTOR 107508.
- Fuller A. T. :Horner versus Holdred: An Episode in the History of Root Computation, Historia Mathematica 26 (1999), 29–51
- O'Connor, John J.; Robertson, Edmund F., "Horner's method", MacTutor History of Mathematics archive, University of St Andrews.
- J. L. Berggren (1990). "Innovation and Tradition in Sharaf al-Din al-Tusi's Muadalat", Journal of the American Oriental Society 110 (2), p. 304–309.
- Temple, Robert. (1986). The Genius of China: 3,000 Years of Science, Discovery, and Invention. With a forward by Joseph Needham. New York: Simon and Schuster, Inc. ISBN 0-671-62028-2. Page 142.
- Yoshio Mikami, The Development of Mathematics in China and Japan, Chapter 11, Chin Chiu Shao, p. 77, Chelsea Publishing Co
- Ulrich Libbrecht, Chinese Mathematics in the Thirteenth Century, Chapter 13, Numerical Equations of Higher Degree, p. 208, Dover, ISBN 0-486-44619-0
- Horner, William George (July 1819). "A new method of solving numerical equations of all orders, by continuous approximation". Philosophical Transactions (Royal Society of London): pp. 308–335. Directly available online via the link, but also reprinted with appraisal in D.E.Smith: A Source Book in Mathematics, McGraw-Hill, 1929; Dover reprint, 2 vols 1959
- Spiegel, Murray R. (1956). Schaum's Outline of Theory and Problems of College Algebra. McGraw-Hill Book Company.
- Knuth, Donald (1997). The Art of Computer Programming. Vol. 2: Seminumerical Algorithms (3rd ed.). Addison-Wesley. pp. 486–488 in section 4.6.4. ISBN 0-201-89684-2.
- Kripasagar, Venkat (March 2008). "Efficient Micro Mathematics – Multiplication and Division Techniques for MCUs". Circuit Cellar magazine (212): p. 60.
- Mikami, Yoshio (1913). "11". The Development of Mathematics in China and Japan (1st ed.). Chelsea Publishing Co reprint. pp. 74–77.
- Libbrecht, Ulrich (2005). "13". Chinese Mathematics in the Thirteenth Century (2nd ed.). Dover. pp. 175–211. ISBN 0-486-44619-0.
- Wylie, Alexander (1897). Chinese Researches. Printed in Shanghai., Jottings on the Science of Chinese Arithmetic (reprinted from issues of The North China Herald (1852).
- T. Holdred (1820), A New Method of Solving Equations with Ease and Expedition; by which the True Value of the Unknown Quantity is Found Without Previous Reduction. With a Supplement, Containing Two Other Methods of Solving Equations, Derived from the Same Principle Richard Watts. Sold by Davis and Dickson, mathematical and philosophical booksellers, 17, St. Martin's-le-Grand; and by the author, 2, Denzel Street, Clare-Market, 56pp.. | http://en.wikipedia.org/wiki/Horner_scheme | 13 |
51 | Sections 10.7 - 10.9
Fluid dynamics is the study of how fluids behave when they're in motion. This can get very complicated, so we'll focus on one simple case, but we should briefly mention the different categories of fluid flow.
Fluids can flow steadily, or be turbulent. In steady flow, the fluid passing a given point maintains a steady velocity. For turbulent flow, the speed and/or the direction of the flow varies. In steady flow, the motion can be represented with streamlines showing the direction the water flows in different areas. The density of the streamlines increases as the velocity increases.
Fluids can be compressible or incompressible. This is the big difference between liquids and gases, because liquids are generally incompressible, meaning that they don't change volume much in response to a pressure change; gases are compressible, and will change volume in response to a change in pressure.
Fluid can be viscous (pours slowly) or non-viscous (pours easily).
Fluid flow can be rotational or irrotational. In irrotational flow, the fluid elements have no net rotation as they move; in rotational flow, they do (the flow swirls).
For most of the rest of the chapter, we'll focus on irrotational, incompressible, steady streamline non-viscous flow.
The equation of continuity states that for an incompressible fluid flowing in a tube of varying cross-section, the mass flow rate is the same everywhere in the tube. The mass flow rate is simply the rate at which mass flows past a given point, so it's the total mass flowing past divided by the time interval. The equation of continuity can be reduced to: ρ1 A1 v1 = ρ2 A2 v2.
Generally, the density stays constant and then it's simply the flow rate (Av) that is constant.
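As a quick numerical illustration (a sketch with assumed values, not data from the text): for an incompressible fluid the product Av is the same at any two points, so narrowing the pipe speeds up the flow.

```python
rho = 1000.0          # kg/m^3, water (assumed)
A1, v1 = 0.010, 2.0   # cross-section (m^2) and speed (m/s) at point 1 (assumed values)
A2 = 0.004            # narrower cross-section at point 2 (assumed value)

v2 = A1 * v1 / A2                 # continuity: A1*v1 = A2*v2  ->  v2 = 5.0 m/s
mass_flow_rate = rho * A1 * v1    # 20 kg/s, the same at both points
print(v2, mass_flow_rate)
```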
There are basically two ways to make fluid flow through a pipe. One way is to tilt the pipe so the flow is downhill, in which case gravitational potential energy is transformed to kinetic energy. The second way is to make the pressure at one end of the pipe larger than the pressure at the other end. A pressure difference is like a net force, producing acceleration of the fluid.
As long as the fluid flow is steady, and the fluid is non-viscous and incompressible, the flow can be looked at from an energy perspective. This is what Bernoulli's equation does, relating the pressure, velocity, and height of a fluid at one point to the same parameters at a second point. The equation is very useful, and can be used to explain such things as how airplanes fly, and how baseballs curve.
The pressure, speed, and height (y) at two points in a steady-flowing, non-viscous, incompressible fluid are related by the equation: P1 + ½ ρ v1² + ρ g y1 = P2 + ½ ρ v2² + ρ g y2.
Some of these terms probably look familiar...the second term on each side looks something like kinetic energy, and the third term looks a lot like gravitational potential energy. If the equation was multiplied through by the volume, the density could be replaced by mass, and the pressure could be replaced by force x distance, which is work. Looked at in that way, the equation makes sense: the difference in pressure does work, which can be used to change the kinetic energy and/or the potential energy of the fluid.
Bernoulli's equation has some surprising implications. For our first look at the equation, consider a fluid flowing through a horizontal pipe. The pipe is narrower at one spot than along the rest of the pipe. By applying the continuity equation, the velocity of the fluid is greater in the narrow section. Is the pressure higher or lower in the narrow section, where the velocity increases?
Your first inclination might be to say that where the velocity is greatest, the pressure is greatest, because if you stuck your hand in the flow where it's going fastest you'd feel a big force. The force does not come from the pressure there, however; it comes from your hand taking momentum away from the fluid.
The pipe is horizontal, so both points are at the same height. Bernoulli's equation can be simplified in this case to: P1 + ½ ρ v1² = P2 + ½ ρ v2².
The kinetic energy term on the right is larger than the kinetic energy term on the left, so for the equation to balance the pressure on the right must be smaller than the pressure on the left. It is this pressure difference, in fact, that causes the fluid to flow faster at the place where the pipe narrows.
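A minimal sketch of that pressure difference, using the speeds from the continuity example above (assumed values):

```python
rho = 1000.0        # kg/m^3, water (assumed)
v1, v2 = 2.0, 5.0   # m/s in the wide and narrow sections (assumed values)

# Horizontal pipe: P1 + 0.5*rho*v1^2 = P2 + 0.5*rho*v2^2
delta_P = 0.5 * rho * (v2**2 - v1**2)   # P1 - P2 = 10500 Pa
print(delta_P)
```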
Consider a geyser that shoots water 25 m into the air. How fast is the water traveling when it emerges from the ground? If the water originates in a chamber 35 m below the ground, what is the pressure there?
To figure out how fast the water is moving when it comes out of the ground, we could simply use conservation of energy, and set the potential energy of the water 25 m high equal to the kinetic energy the water has when it comes out of the ground. Another way to do it is to apply Bernoulli's equation, which amounts to the same thing as conservation of energy. Let's do it that way, just to convince ourselves that the methods are the same.
Bernoulli's equation says: P1 + ½ ρ v1² + ρ g y1 = P2 + ½ ρ v2² + ρ g y2.
But the pressure at the two points is the same; it's atmospheric pressure at both places. We can measure the potential energy from ground level, so the potential energy term goes away on the left side, and the kinetic energy term is zero on the right hand side. This reduces the equation to: ½ ρ v1² = ρ g y2.
The density cancels out, leaving: ½ v1² = g y2, so v1 = √(2 g y2).
This is the same equation we would have found if we'd done it using the chapter 6 conservation of energy method, and canceled out the mass. Solving for velocity gives v = 22.1 m/s.
To determine the pressure 35 m below ground, which forces the water up, apply Bernoulli's equation, with point 1 being 35 m below ground, and point 2 being either at ground level, or 25 m above ground. Let's take point 2 to be 25 m above ground, which is 60 m above the chamber where the pressurized water is.
We can take the velocity to be zero at both points (the acceleration occurs as the water rises up to ground level, coming from the difference between the chamber pressure and atmospheric pressure). The pressure on the right-hand side is atmospheric pressure, and if we measure heights from the level of the chamber, the height on the left side is zero, and on the right side is 60 m. This gives: P1 = Patm + ρ g (60 m) = 1.013 × 10⁵ Pa + (1000 kg/m³)(9.8 m/s²)(60 m) ≈ 6.9 × 10⁵ Pa.
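The same two calculations in code (using g = 9.8 m/s², ρ = 1000 kg/m³, and standard atmospheric pressure):

```python
import math

g = 9.8                 # m/s^2
rho = 1000.0            # kg/m^3, water
P_atm = 1.013e5         # Pa, atmospheric pressure at the surface

h_rise = 25.0           # m, height the geyser reaches above ground
h_chamber = 60.0        # m, chamber depth (35 m) plus the 25 m rise

v_exit = math.sqrt(2 * g * h_rise)         # ~22.1 m/s at ground level
P_chamber = P_atm + rho * g * h_chamber    # ~6.9e5 Pa in the chamber
print(v_exit, P_chamber)
```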
Bernoulli's equation can be used to explain why curveballs curve. Let's say the ball is thrown so it spins. As air flows over the ball, the seams of the ball cause the air to slow down a little on one side and speed up a little on the other. The side where the air speed is higher has lower pressure, so the ball is deflected toward that side. To throw a curveball, the rotation of the ball should be around a vertical axis.
It's a little more complicated than that, actually. Although the picture here shows nice streamline flow as the air moves left relative to the ball, in reality there is some turbulence. The air does exert a force down on the ball in the figure above, so the ball must exert an upward force on the air. This causes air that travels below the ball in the picture to move up and fill the space left by the ball as it moves by, which reduces drag on the ball. | http://physics.bu.edu/~duffy/py105/Bernoulli.html | 13 |
51 | In 1964, a geologist in the Nevada wilderness discovered the oldest living thing on earth, after he killed it. The young man was Donald Rusk Currey, a graduate student studying ice-age glaciology in Eastern Nevada; the tree he cut down was of the Pinus longaeva species, also known as the Great Basin bristlecone pine. Working on a grant from the National Science Foundation, Currey was compiling the ages of ancient bristlecone trees to develop a glacial timeline for the region.
“Bristlecones are slow-growing and conservative, not the grow-fast, die-young types.”
Currey’s ring count for this particular tree reached backward from the present, past the founding of the United States, the Great Crusades, and even the Greek and Roman Empires, to the time of the ancient Egyptians. Sheltered in an unremarkable grove near Wheeler Peak, the bristlecone he cut down was found to be nearly 5,000 years old, taking root only a few hundred years after human history was first recorded. How could a half-dead pine barely 20 feet tall outdo the skyscraper-height sequoias, commonly thought to be the oldest trees alive?
The longevity of Great Basin bristlecones was first recognized in the 1950s by Dr. Edward Schulman, who shocked a scientific community that believed in a correlation between long lifespan and great size. Schulman systematically sampled Great Basin bristlecones in California and Nevada, and published his findings in a 1958 National Geographic article, which revealed several of the trees to be more than 4,000 years old. Schulman’s analysis supported the idea that “adversity begets longevity,” or that the severe conditions in which the bristlecone pine evolved actually helped extend its lifespan.
Bristlecone pine trees thrive in a rough, high-altitude environment, where there is little competition for resources from other plants. “Bristlecones tend to produce denser wood, which is more resistant to decay and damage from microbes,” says Anna Schoettle, an ecophysiologist with the U.S. Forest Service, Rocky Mountain Research Station. “Because their habitats are so harsh, the trees grow far apart, so fire and other disturbances tend not to sweep through those areas as frequently. Lightning may strike and kill a single tree, but it won’t necessarily spread to the neighboring trees.”
Even if a large portion of a bristlecone is damaged by erosion or fire, small strips of living bark, which Schulman called “life lines,” are able to function and keep the tree alive.
“Bristlecones will grow a thousand years or so, and then the bark will start dying off on one side,” says Tom Harlan, a researcher at the Laboratory of Tree-Ring Research at the University of Arizona. “Therefore, the tree can’t support the branches directly above that area, and they die. Pretty soon you’re left with a small strip of bark, which is supporting all of the foliage. It might be only 2 inches wide, but the pine is still considered a growing, healthy tree.”
Bristlecones generally don’t even reach seed-producing maturity until between 30 and 50 years of age, and can take hundreds of years to reach only a few feet in height. ”Their focus is on survival and not on competitive ability,” says Schoettle. “They’re slow-growing and conservative, not the grow-fast, die-young types.”
Perhaps to the benefit of the bristlecone, our cultural obsession with all things big has meant that we’ve mostly left them alone. During the 19th century, the lure of the Western United States depended on promises of vast farmland, sweeping mountain vistas, and gigantic trees. Towering far above the human plain, the enormous sequoias became a tourist destination in their own right.
But conservation hadn’t caught up with capitalism, and before the significance of these trees was fully understood, people were chopping them down to build houses or wacky roadside amusements. In 1881, a hole large enough to accommodate horse-drawn vehicles was cut through the living Wawona tree in Yosemite National Park. Wawona remained one of the park’s top attractions until it collapsed in 1969.
Despite their rugged beauty, the bristlecone’s incredible age isn’t obvious to the untrained eye, meaning the species is largely ignored by a snapshot-hungry public looking for the tallest tree around. “That’s something that people tell us all the time,” says Harlan. “‘You want an old tree?’ they say. ‘We got an old tree.’ In the majority of cases, they’re referring to a big tree and not necessarily an old tree.”
But the bristlecones have other enemies besides humans, like a deadly, non-native fungus called white pine blister rust, whose impact has been worsened by accelerated changes in the regional climate. As much as the Forest Service cares about protecting our oldest living trees, they’re also worried about the longevity of the species as a whole. “We’re trying to maintain the adaptive capacity and survival of the species,” Schoettle says. “And that’s not an easy feat with the changing environment that we’re in now.” Besides protecting living trees against human destruction, the Forest Service is engaged in projects to preserve the tree’s reproductive abilities by collecting seeds and studying its natural resistance to white pine blister rust.
We now know that the Great Basin bristlecones are the oldest surviving species of non-clonal organisms in the world. Technically, the eldest of all trees are actually clonal plants, or those which continually replicate themselves through layering (where a limb sprouts new roots) and vegetative cloning (where the root system expands underground and sends up new trunks). Though some clonal organisms have been estimated to be at least 80,000 years old, no individual part of the organism is actually alive for more than a few hundred years.
In contrast, non-clonal trees like sequoias and bristlecones rely on the structure established by their original seedlings for their entire lives, depending on the deceased wood created by earlier growth in a way that clonal plants do not. Every year during its growing season, a non-clonal tree adds cells to its trunk, which function to send water up to the leaves and sugar down to the roots. During the late autumn, the bristlecone produces cells much smaller in size than those during the active summer growth, resulting in the appearance of concentric rings when a trunk is cut crosswise.
Dendrochronology, or tree-ring dating, is still a fairly young field, established in the 1890s by astronomer A.E. Douglass while working at the Lowell Observatory in Northern Arizona. Attempting to uncover a correlation between sunspots and Earth’s weather patterns, Douglass hypothesized that tree rings would reflect these major climatic changes. While scientists had long understood the correlation between age and a tree’s ring count, Douglass was the first to piece together a detailed analysis of common patterns in the rings of different trees.
Douglass surveyed thousands of felled trees across the southwestern United States and used a method of cross-dating, or overlaying these patterns from trees felled during different eras, to trace major climatic events back thousands of years. When word got out about his timeline, Douglass was suddenly in high demand by archaeologists studying famous Native American sites like Chaco Canyon and Mesa Verde, whose age was previously determined by mere guesswork. Douglass cross-dated the rings on wooden beams found among the ruins to catalog the various sites in chronological order.
Decades later, Douglass established the Laboratory of Tree-Ring Research at the University of Arizona, which became the world leader in tree-ring analysis. Scientists affiliated with the Tree-Ring Lab, like Edward Schulman, eventually undertook the most comprehensive studies of Great Basin bristlecone pines.
A few years before Donald Currey began work in the Great Basin, several of the area’s most impressive trees were given individual names by activists working to establish a national park in the region. The bristlecone Currey would eventually fell was called Prometheus, after the Greek mythological character who stole fire from the gods and gave it to humans. Currey chose this particular specimen for his study because it seemed older than much of the surrounding forest, with a single living bark strip supporting a branch of lush foliage.
By the time of Currey’s survey, trees were typically dated using core samples taken with a hollow threaded bore screwed into a tree’s trunk. No larger than a soda straw, these cores then received surface preparations in a lab to make them easier to read under a microscope. While taking core samples from the Prometheus tree, which Currey labeled WPN-114, his boring bit snapped in the bristlecone’s dense wood. After requesting assistance from the Forest Service, a team was sent to fell the tree using chainsaws. Only days later, when Currey individually counted each of the tree’s rings, did he realize the gravity of his act.
Currey downplayed the discovery in a dry essay for Ecology magazine in 1965, in which he stated, “Allowing for the likelihood of missing rings and for the 100-inch height of the innermost counted ring, it may be tentatively concluded that WPN-114 began growing about 4,900 years ago.” Though its exact age is still debated, the Prometheus tree was certainly the oldest single tree scientists had ever encountered.
The Prometheus tree’s felling made it doubly symbolic, as the myth of its namesake captures both the human hunger for knowledge and the unintended negative consequences that often result from this desire. Though members of the scientific community and press were outraged that the tree was killed, Currey’s mistake ultimately provided the impetus to establish Great Basin National Park to protect the bristlecones. The death of the Prometheus tree also helped to change our larger perception of trees as an infinitely replenishing resource. “It’s not going to happen again,” says Schoettle. “But it wasn’t something that I think they struggled with at the time, because it was just a tree, and the mindset was that trees were a renewable resource and they would grow back. And it didn’t seem like it was any particularly special tree.”
To this day, Prometheus still holds the count for the most rings of any tree, at 4,862. The next oldest tree, called Methuselah, was identified by Edward Schulman in the 1950s and is still alive today. Though initially made public, Methuselah’s whereabouts in the White Mountains of California have now been purposely obscured to prevent the tree’s destruction. Schoettle says the Forest Service conceals Methuselah’s location “so that we don’t love it to death.” In fact, visibility is a threat to any natural tourist destination—just this past January, a woman accidentally burned down ‘the Senator,’ the world’s fifth-oldest living tree, at a highly trafficked site in Central Florida.
But Tom Harlan knows of a tree somewhere in the White Mountains that might be even older. “There is one that Schulman cored, but he died before he could count the rings,” says Harlan. “It’s the oldest thing he ever collected, older than Methuselah and probably older than Prometheus. But I won’t show anybody which one it is.” | http://www.collectorsweekly.com/articles/oldest-living-tree-tells-all/ | 13 |
104 | Inverse trigonometric functions
In mathematics, the inverse trigonometric functions (occasionally called cyclometric functions) are the inverse functions of the trigonometric functions with suitably restricted domains. They are the inverse sine, cosine, tangent, cosecant, secant and cotangent functions. They are used for computing the angle, from any of its trigonometric ratios. These functions have a wide range of use in navigation, physics, engineering, etc.
There are many notations used for the inverse trigonometric functions. The notations sin−1 (x), cos−1 (x), tan−1 (x), etc. are often used, but this convention logically conflicts with the common semantics for expressions like sin²(x), which refers to numeric power rather than function composition, and therefore may result in confusion between the multiplicative inverse and the compositional inverse. Another convention used by some authors is to use a majuscule first letter along with a −1 superscript, e.g., Sin−1 (x), Cos−1 (x), etc., which avoids confusing them with the multiplicative inverse, which must be represented as sin−1 (x), cos−1 (x), etc. Yet another convention is to use an arc- prefix, so that the confusion with the −1 superscript is resolved completely, e.g., arcsin x, arccos x, etc. This convention is used throughout the article. In computer programming languages the inverse trigonometric functions are usually called asin, acos, atan.
Etymology of the arc- prefix
When measuring in radians, an angle of θ radians will correspond to an arc whose length is rθ, where r is the radius of the circle. Thus, in the unit circle, "the arc whose cosine is x" is the same as "the angle whose cosine is x", because the measurement of the length of the arc of the circle is the same as the measurement of the angle in radians.
Principal values
Since none of the six trigonometric functions are one-to-one, they are restricted in order to have inverse functions. Therefore, the ranges of the inverse functions are proper subsets of the domains of the original functions.
For example, using function in the sense of multivalued functions, just as the square root function y = √x could be defined by y² = x, the function y = arcsin(x) is defined so that sin(y) = x. There are multiple numbers y such that sin(y) = x; for example, sin(0) = 0, but also sin(π) = 0, sin(2π) = 0, etc. It follows that the arcsine function is multivalued: arcsin(0) = 0, but also arcsin(0) = π, arcsin(0) = 2π, etc. When only one value is desired, the function may be restricted to its principal branch. With this restriction, for each x in the domain the expression arcsin(x) will evaluate only to a single value, called its principal value. These properties apply to all the inverse trigonometric functions.
The principal inverses are listed in the following table.
|Name||Usual notation||Definition||Domain of x for real result||Range of usual principal value (radians)||Range of usual principal value (degrees)|
|arcsine||y = arcsin x||x = sin y||−1 ≤ x ≤ 1||−π/2 ≤ y ≤ π/2||−90° ≤ y ≤ 90°|
|arccosine||y = arccos x||x = cos y||−1 ≤ x ≤ 1||0 ≤ y ≤ π||0° ≤ y ≤ 180°|
|arctangent||y = arctan x||x = tan y||all real numbers||−π/2 < y < π/2||−90° < y < 90°|
|arccotangent||y = arccot x||x = cot y||all real numbers||0 < y < π||0° < y < 180°|
|arcsecant||y = arcsec x||x = sec y||x ≤ −1 or 1 ≤ x||0 ≤ y < π/2 or π/2 < y ≤ π||0° ≤ y < 90° or 90° < y ≤ 180°|
|arccosecant||y = arccsc x||x = csc y||x ≤ −1 or 1 ≤ x||−π/2 ≤ y < 0 or 0 < y ≤ π/2||-90° ≤ y < 0° or 0° < y ≤ 90°|
If x is allowed to be a complex number, then the range of y applies only to its real part.
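For instance, the standard-library inverse trigonometric functions in most languages return only the principal value; a small Python check (illustrative, not part of the article):

```python
import math

print(math.asin(0.0))    # 0.0, even though sin(pi), sin(2*pi), ... are also 0
print(math.asin(1.0))    # 1.5707963... = pi/2, the top of the range [-pi/2, pi/2]
print(math.acos(-1.0))   # 3.1415926... = pi, the top of the range [0, pi]
```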
Relationships among the inverse trigonometric functions
Complementary relations:
arccos x = π/2 − arcsin x
arccot x = π/2 − arctan x
arccsc x = π/2 − arcsec x
If you only have a fragment of a sine table:
arccos x = arcsin(√(1 − x²)), valid for 0 ≤ x ≤ 1
arctan x = arcsin( x / √(x² + 1) )
Whenever the square root of a complex number is used here, we choose the root with the positive real part (or positive imaginary part if the square was negative real).
From the half-angle formula tan(θ/2) = sin θ / (1 + cos θ), we get:
arcsin x = 2 arctan( x / (1 + √(1 − x²)) )
arccos x = 2 arctan( √(1 − x²) / (1 + x) ), valid for −1 < x ≤ 1
arctan x = 2 arctan( x / (1 + √(1 + x²)) )
Relationships between trigonometric functions and inverse trigonometric functions
sin(arccos x) = √(1 − x²)        cos(arcsin x) = √(1 − x²)
sin(arctan x) = x / √(1 + x²)    cos(arctan x) = 1 / √(1 + x²)
tan(arcsin x) = x / √(1 − x²)    tan(arccos x) = √(1 − x²) / x
General solutions
Each of the trigonometric functions is periodic in the real part of its argument, running through all its values twice in each interval of 2π. Sine and cosecant begin their period at 2πk − π/2 (where k is an integer), finish it at 2πk + π/2, and then reverse themselves over 2πk + π/2 to 2πk + 3π/2. Cosine and secant begin their period at 2πk, finish it at 2πk + π, and then reverse themselves over 2πk + π to 2πk + 2π. Tangent begins its period at 2πk − π/2, finishes it at 2πk + π/2, and then repeats it (forward) over 2πk + π/2 to 2πk + 3π/2. Cotangent begins its period at 2πk, finishes it at 2πk + π, and then repeats it (forward) over 2πk + π to 2πk + 2π.
This periodicity is reflected in the general inverses, where k is some integer:
sin y = x ⇔ y = arcsin x + 2kπ or y = π − arcsin x + 2kπ
- Which, written in one equation, is: y = (−1)^k arcsin x + kπ
cos y = x ⇔ y = arccos x + 2kπ or y = −arccos x + 2kπ
- Which, written in one equation, is: y = ± arccos x + 2kπ
tan y = x ⇔ y = arctan x + kπ
cot y = x ⇔ y = arccot x + kπ
sec y = x ⇔ y = arcsec x + 2kπ or y = −arcsec x + 2kπ
csc y = x ⇔ y = arccsc x + 2kπ or y = π − arccsc x + 2kπ
Extension to complex plane
Since the inverse trigonometric functions are analytic functions, they can be extended from the real line to the complex plane. This results in functions with multiple sheets and branch points. One possible way of defining the extensions is:
arctan z = ∫_0^z dt / (1 + t²),
where the part of the imaginary axis which does not lie strictly between −i and +i is the cut between the principal sheet and other sheets;
arcsin z = arctan( z / √(1 − z²) ),
where (the square-root function has its cut along the negative real axis and) the part of the real axis which does not lie strictly between −1 and +1 is the cut between the principal sheet of arcsin and other sheets;
arccos z = π/2 − arcsin z,
which has the same cut as arcsin;
arccot z = π/2 − arctan z,
which has the same cut as arctan;
arcsec z = arccos(1/z),
where the part of the real axis between −1 and +1 inclusive is the cut between the principal sheet of arcsec and other sheets;
arccsc z = arcsin(1/z),
which has the same cut as arcsec.
Derivatives of inverse trigonometric functions
The derivatives for complex values of z are as follows:
d/dz arcsin z = 1 / √(1 − z²)
d/dz arccos z = −1 / √(1 − z²)
d/dz arctan z = 1 / (1 + z²)
d/dz arccot z = −1 / (1 + z²)
d/dz arcsec z = 1 / ( z² √(1 − 1/z²) )
d/dz arccsc z = −1 / ( z² √(1 − 1/z²) )
Only for real values of x:
d/dx arcsec x = 1 / ( |x| √(x² − 1) ), |x| > 1
d/dx arccsc x = −1 / ( |x| √(x² − 1) ), |x| > 1
For a sample derivation: if θ = arcsin x, we get:
x = sin θ, so dx/dθ = cos θ = √(1 − sin² θ) = √(1 − x²), and therefore d(arcsin x)/dx = dθ/dx = 1 / √(1 − x²).
Expression as definite integrals
Integrating the derivative and fixing the value at one point gives an expression for the inverse trigonometric function as a definite integral:
arcsin x = ∫_0^x dt / √(1 − t²), |x| ≤ 1
arccos x = ∫_x^1 dt / √(1 − t²), |x| ≤ 1
arctan x = ∫_0^x dt / (1 + t²)
arccot x = ∫_x^∞ dt / (1 + t²)
arcsec x = ∫_1^x dt / ( t √(t² − 1) ), x ≥ 1
arccsc x = ∫_x^∞ dt / ( t √(t² − 1) ), x ≥ 1
When x equals 1, the integrals with limited domains are improper integrals, but still well-defined.
Infinite series
Like the sine and cosine functions, the inverse trigonometric functions can be calculated using infinite series, as follows:
arcsin z = z + z³/6 + 3z⁵/40 + 15z⁷/336 + … = Σ_{n≥0} [ (2n)! / (4^n (n!)² (2n+1)) ] z^(2n+1), |z| ≤ 1
arccos z = π/2 − arcsin z
arctan z = z − z³/3 + z⁵/5 − z⁷/7 + …, |z| ≤ 1, z ≠ ±i
arccot z = π/2 − arctan z
Leonhard Euler found a more efficient series for the arctangent, which is:
arctan z = Σ_{n≥0} [ 2^(2n) (n!)² / (2n+1)! ] · z^(2n+1) / (1 + z²)^(n+1)
Alternatively, this can be expressed:
arctan z = ( z / (1 + z²) ) · Σ_{n≥0} ∏_{k=1..n} [ 2k z² / ( (2k+1)(1 + z²) ) ]
(Notice that the term in the sum for n = 0 is the empty product, which is 1.)
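A short numerical check of the basic power series (an illustration; the number of terms is chosen arbitrarily):

```python
import math

def arctan_series(x, terms=60):
    """Partial sum of arctan x = x - x^3/3 + x^5/5 - ...  (converges for |x| <= 1)."""
    return sum((-1) ** n * x ** (2 * n + 1) / (2 * n + 1) for n in range(terms))

print(arctan_series(0.5))   # 0.46364760900...
print(math.atan(0.5))       # 0.46364760900... (library value for comparison)
```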
Continued fractions for arctangent
Two alternatives to the power series for arctangent are these generalized continued fractions:
The second of these, due to Gauss, is
arctan(z) = z / (1 + (1z)² / (3 + (2z)² / (5 + (3z)² / (7 + (4z)² / (9 + ⋱))))),
which is valid in the cut complex plane. There are two cuts, from −i to the point at infinity, going down the imaginary axis, and from i to the point at infinity, going up the same axis. It works best for real numbers running from −1 to 1. The partial denominators are the odd natural numbers, and the partial numerators (after the first) are just (nz)², with each perfect square appearing once. The first was developed by Leonhard Euler; the second by Carl Friedrich Gauss utilizing the Gaussian hypergeometric series.
Indefinite integrals of inverse trigonometric functions
For real and complex values of x:
∫ arcsin x dx = x arcsin x + √(1 − x²) + C
∫ arccos x dx = x arccos x − √(1 − x²) + C
∫ arctan x dx = x arctan x − ½ ln(1 + x²) + C
∫ arccot x dx = x arccot x + ½ ln(1 + x²) + C
For real x ≥ 1:
∫ arcsec x dx = x arcsec x − ln( x + √(x² − 1) ) + C
∫ arccsc x dx = x arccsc x + ln( x + √(x² − 1) ) + C
All of these can be derived using integration by parts and the simple derivative forms shown above.
For example, to derive ∫ arcsin x dx, integrate by parts. Using u = arcsin x and dv = dx, set du = dx / √(1 − x²) and v = x, so that
∫ arcsin x dx = x arcsin x − ∫ x / √(1 − x²) dx.
The remaining integral is evaluated with the substitution w = 1 − x² (dw = −2x dx), giving −√w.
Back-substitute for x to yield
∫ arcsin x dx = x arcsin x + √(1 − x²) + C.
Two-argument variant of arctangent
The two-argument atan2 function computes the arctangent of y / x given y and x, but with a range of (−π, π]. In other words, atan2(y, x) is the angle between the positive x-axis of a plane and the point (x, y) on it, with positive sign for counter-clockwise angles (upper half-plane, y > 0), and negative sign for clockwise angles (lower half-plane, y < 0). It was first introduced in many computer programming languages, but it is now also common in other fields of science and engineering.
In terms of the standard arctan function, that is with range of (−π/2, π/2), it can be expressed as follows:
atan2(y, x) = arctan(y/x) when x > 0
atan2(y, x) = arctan(y/x) + π when x < 0 and y ≥ 0
atan2(y, x) = arctan(y/x) − π when x < 0 and y < 0
atan2(y, x) = π/2 when x = 0 and y > 0
atan2(y, x) = −π/2 when x = 0 and y < 0
atan2(y, x) is undefined when x = 0 and y = 0
This function may also be defined using the tangent half-angle formulae as follows:
atan2(y, x) = 2 arctan( y / ( √(x² + y²) + x ) )
provided that either x > 0 or y ≠ 0. However, this fails if x ≤ 0 and y = 0, so the expression is unsuitable for computational use.
The above argument order (y, x) seems to be the most common, and in particular is used in ISO standards such as the C programming language, but a few authors may use the opposite convention (x, y) so some caution is warranted. These variations are detailed at Atan2.
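A brief demonstration of the quadrant handling (Python's math.atan2 follows the (y, x) argument order described above):

```python
import math

print(math.atan2(1.0, 1.0))     #  0.785... =  pi/4   (upper half-plane, x > 0)
print(math.atan2(1.0, -1.0))    #  2.356... =  3*pi/4 (upper half-plane, x < 0)
print(math.atan2(-1.0, -1.0))   # -2.356... = -3*pi/4 (lower half-plane, x < 0)
print(math.atan2(1.0, 0.0))     #  1.570... =  pi/2   (x = 0 is handled, unlike atan(y/x))
```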
Arctangent function with location parameter
In many applications the solution y of the equation x = tan y should come as close as possible to a given value η. The adequate solution is produced by the parameter-modified arctangent function
y = arctan_η(x) = arctan(x) + π · rni( (η − arctan(x)) / π ).
The function rni rounds to the nearest integer.
Logarithmic forms
The inverse trigonometric functions can be expressed in terms of complex logarithms:
arcsin x = −i ln( ix + √(1 − x²) )
arccos x = −i ln( x + i √(1 − x²) ) = π/2 − arcsin x
arctan x = (i/2) [ ln(1 − ix) − ln(1 + ix) ]
arccot x = π/2 − arctan x, arcsec x = arccos(1/x), arccsc x = arcsin(1/x)
Elementary proofs of these relations proceed via expansion to exponential forms of the trigonometric functions.
Example proof
Using the exponential definition of sine, x = sin θ = (e^(iθ) − e^(−iθ)) / (2i), let ξ = e^(iθ). Then
x = (ξ − ξ⁻¹) / (2i), so ξ² − 2ix ξ − 1 = 0 and ξ = ix + √(1 − x²)
(the positive branch is chosen). Taking logarithms gives θ = arcsin x = −i ln( ix + √(1 − x²) ).
Example proof (variant 2)
- Start instead from e^(iθ) = ix + √(1 − x²); remove the exponential by taking logarithms, multiply by −i, and substitute θ = arcsin x to obtain the same logarithmic form.
Arctangent addition formula
arctan u + arctan v = arctan( (u + v) / (1 − uv) ) (mod π), provided uv ≠ 1.
This is derived from the tangent addition formula tan(α + β) = (tan α + tan β) / (1 − tan α tan β), by setting α = arctan u and β = arctan v.
Application: finding the angle of a right triangle
Inverse trigonometric functions are useful when trying to determine the remaining two angles of a right triangle when the lengths of the sides of the triangle are known. Recalling the right-triangle definitions of sine, for example, it follows that θ = arcsin( opposite / hypotenuse ).
Often, the hypotenuse is unknown and would need to be calculated before using arcsine or arccosine, via the Pythagorean theorem: h = √(a² + b²), where h is the length of the hypotenuse and a and b are the lengths of the other two sides. Arctangent comes in handy in this situation, as the length of the hypotenuse is not needed.
For example, suppose a roof drops 8 feet as it runs out 20 feet. The roof makes an angle θ with the horizontal, where θ may be computed as follows:
θ = arctan( 8 / 20 ) ≈ 21.8° ≈ 0.38 radians.
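The same computation in code:

```python
import math

rise, run = 8.0, 20.0             # feet
theta = math.atan2(rise, run)     # equivalently math.atan(rise / run) here, since run > 0
print(math.degrees(theta))        # ~21.8 degrees
```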
Practical considerations
For angles near 0 and π, arccosine is ill-conditioned and will thus calculate the angle with reduced accuracy in a computer implementation (due to the limited number of digits). Similarly, arcsine is inaccurate for angles near −π/2 and π/2. To achieve full accuracy for all angles, arctangent or atan2 should be used for the implementation.
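A small illustration of the loss of accuracy (Python, double precision):

```python
import math

theta = 1e-9                        # a very small angle, in radians
x, y = math.cos(theta), math.sin(theta)

print(math.acos(x))                 # 0.0: cos(theta) was rounded to 1.0, so the angle is lost
print(math.atan2(y, x))             # 1e-09: full accuracy retained
```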
See also
- Trigonometric function
- Tangent half-angle formula
- List of trigonometric identities
- Complex logarithm
- Argument (complex analysis)
- Square root
- Gauss's continued fraction
- Inverse hyperbolic function
- List of integrals of inverse trigonometric functions | http://en.wikipedia.org/wiki/Arctangent | 13 |
89 | Release 2 (9.2)
Part Number A96571-02
This chapter explains the concepts related to rules.
This chapter contains these topics:
A rule is a database object that enables a client to perform an action when an event occurs and a condition is satisfied. Rules are evaluated by a rules engine, which is a built-in part of Oracle. Both user-created applications and Oracle features, such as Streams, can be clients of the rules engine.
A rule consists of the following components:
- Rule condition
- Rule evaluation context (optional)
- Rule action context (optional)
Each rule is specified as a condition that is similar to the condition in the WHERE clause of a SQL query. You can group related rules together into rule sets. A single rule can be in one rule set, multiple rule sets, or no rule sets.
A rule condition combines one or more expressions and operators and returns a Boolean value, which is a value of TRUE, FALSE, or NULL (unknown). An expression is a combination of one or more values and operators that evaluate to a value. A value can be data in a table, data in variables, or data returned by a SQL function or a PL/SQL function. For example, the following condition consists of two expressions (department_id and 30) and an operator (=):
department_id = 30
This logical condition evaluates to TRUE for a given row when the department_id column is 30. Here, the value is data in the department_id column of a table.
A single rule condition may include more than one condition combined with the AND, OR, and NOT conditional operators to form compound conditions. For example, consider a compound condition that consists of two conditions joined by the OR conditional operator. If either condition evaluates to TRUE, then the rule condition evaluates to TRUE. If the conditional operator were AND instead of OR, then both conditions would have to evaluate to TRUE for the entire rule condition to evaluate to TRUE.
Rule conditions may contain variables. When you use variables in rule conditions, precede each variable with a colon (:), for example :x. Variables enable you to refer to data that is not stored in a table. A variable may also improve performance by replacing a commonly occurring expression. Performance may improve because, instead of evaluating the same expression multiple times, the variable is evaluated once.
A rule condition may also contain an evaluation of a call to a subprogram. These conditions are evaluated in the same way as other conditions. That is, they evaluate to a value of TRUE, FALSE, or unknown. For example, a condition may contain a call to a simple function named is_manager that determines whether an employee is a manager. Here, the value of employee_id passed to the function is determined by data in a table where employee_id is a column.
You can use user-defined types for variables. Therefore, variables can have attributes. When a variable has attributes, each attribute contains partial data for the variable. In rule conditions, you specify attributes using dot notation. For example, a condition might test whether attribute z of a variable v (written :v.z in the condition) equals a specified constant, and evaluate to TRUE when it does.
See Oracle9i Application Developer's Guide - Object-Relational Features for more information about user-defined types.
A simple rule condition is a condition that has either of the following forms:
simple_rule_expression operator constant
constant operator simple_rule_expression
In a simple rule condition, a simple_rule_expression is a table column, a variable, or a variable attribute.
For table columns, variables, and variable attributes, all numeric (such as NUMBER and INTEGER) and character (such as CHAR and VARCHAR2) types are supported. Use of other types of expressions results in non-simple rule conditions.
In a simple rule condition, an operator is a simple comparison operator, such as =, <, >, <=, or >=. Use of other operators results in non-simple rule conditions.
A constant is a fixed value, such as a number (for example, 10 or 5.4), a character (for example, x), or a character string (for example, 'sales'). Therefore, conditions such as department_id = 30 are simple rule conditions.
Rules with simple rule conditions are called simple rules. You can combine two or more simple rule conditions with the conditional operators AND and OR, and the rule remains simple. However, using the NOT conditional operator in a rule's condition causes the rule to be non-simple. For example, a rule whose condition joins two conditions of the form shown above with AND or OR is still a simple rule.
Simple rules are important because the rules engine can evaluate them more efficiently than non-simple rules. In addition, when a client uses DBMS_RULE.EVALUATE to evaluate an event, the client can specify that only simple rules should be evaluated by specifying true for the simple_rules_only parameter.
See Oracle9i SQL Reference for more information about conditions, expressions, and operators.
A rule evaluation context is a database object that defines external data that can be referenced in rule conditions. The external data can exist as variables, table data, or both. The following analogy may be helpful: if the rule condition were the WHERE clause in a SQL query, then the external data in the rule's evaluation context would be the information referenced in the FROM clause of the query. That is, the expressions in the rule condition should reference the tables, table aliases, and variables in the evaluation context to make a valid WHERE clause.
A rule evaluation context provides the necessary information for interpreting and evaluating the rule conditions that reference external data. For example, if a rule refers to a variable, then the information in the rule evaluation context must contain the variable type. Or, if a rule refers to a table alias, then the information in the evaluation context must define the table alias.
The objects referenced by a rule are determined by the rule evaluation context associated with it. The rule owner must have the necessary privileges to access these objects, such as SELECT privilege on tables, EXECUTE privilege on types, and so on. The rule condition is resolved in the schema that owns the evaluation context.
For example, consider a rule evaluation context named hr_evaluation_context that contains the following information:
- The table alias dep corresponds to the hr.departments table.
- The variables loc_id1 and loc_id2 are both of a numeric type.
The hr_evaluation_context rule evaluation context provides the necessary information for evaluating a rule condition such as the following:
:loc_id1 = dep.location_id OR :loc_id2 = dep.location_id
In this case, the rule condition evaluates to TRUE for a row in the hr.departments table if that row has a value in the location_id column that corresponds to either of the values passed in by the loc_id1 and loc_id2 variables. The rule cannot be interpreted or evaluated properly without the information in the hr_evaluation_context rule evaluation context. Also, notice that dot notation is used to specify the column location_id in the dep table alias.
The value of a variable referenced in a rule condition may be explicitly specified when the rule is evaluated, or the value of a variable may be implicitly available given the event.
Explicit variables are supplied by the caller at evaluation time. These values are specified by the variable_values parameter when the DBMS_RULE.EVALUATE procedure is run.
Implicit variables are not given a value at evaluation time. The value of an implicit variable is obtained by calling the variable value evaluation function. You define this function when you specify the variable_types list during the creation of an evaluation context using the DBMS_RULE_ADM.CREATE_EVALUATION_CONTEXT procedure. If the value for an implicit variable is specified during evaluation, then the specified value overrides the value returned by the variable value evaluation function.
The variable_types list is of type SYS.RE$VARIABLE_TYPE_LIST, which is a list of variables of type SYS.RE$VARIABLE_TYPE. Within each instance of SYS.RE$VARIABLE_TYPE in the list, the function used to determine the value of an implicit variable is specified as the variable_value_function.
Whether variables are explicit or implicit is the choice of the designer of the application using the rules engine. The following are reasons for using an implicit variable:
- The caller of the DBMS_RULE.EVALUATE procedure does not need to know anything about the variable, which may reduce the complexity of the application using the rules engine. For example, a variable may call a function that returns a value based on the data being evaluated.
- The caller of the DBMS_RULE.EVALUATE procedure does not know the variable value based on the event, which may improve security if the variable value contains confidential information.
- The caller of the DBMS_RULE.EVALUATE procedure does not need to specify values for many uncommon variables.
For example, in a rule condition that references variables x, y, and max, the values of variable x and variable y could be specified explicitly, but the value of the variable max could be returned by running a function that supplies it. Alternatively, variables x and y could be implicit variables, and variable max could be an explicit variable. There is no syntactic difference between explicit and implicit variables in the rule condition. You can determine whether a variable is explicit or implicit by querying the DBA_EVALUATION_CONTEXT_VARS data dictionary view. For explicit variables, the VARIABLE_VALUE_FUNCTION field is NULL. For implicit variables, this field contains the name of the function called by the implicit variable.
A single rule evaluation context can be associated with multiple rules or rule sets. The following list describes which evaluation context is used when a rule is evaluated:
- If an evaluation context was specified for the rule when the rule was added to a rule set using the ADD_RULE procedure in the DBMS_RULE_ADM package, then the evaluation context specified in the ADD_RULE procedure is used for the rule when the rule set is evaluated.
- If no evaluation context was specified for the rule by the ADD_RULE procedure, then the evaluation context of the rule set is used for the rule when the rule set is evaluated.
You have the option of creating an evaluation function to be run with a rule evaluation context. You may choose to use an evaluation function for the following reasons:
You can associate the function with the rule evaluation context by specifying the function name for the evaluation_function parameter when you create the rule evaluation context with the CREATE_EVALUATION_CONTEXT procedure in the DBMS_RULE_ADM package. Then, the rules engine invokes the evaluation function during the evaluation of any rule set that uses the evaluation context. The function must have each parameter in the DBMS_RULE.EVALUATE procedure, and the type of each parameter must be the same as the type of the corresponding parameter in the DBMS_RULE.EVALUATE procedure, but the names of the parameters may be different.
An evaluation function has the following return values:
- DBMS_RULE_ADM.EVALUATION_SUCCESS: The user-specified evaluation function completed the rule set evaluation successfully. The rules engine returns the results of the evaluation obtained by the evaluation function to the rules engine client using the DBMS_RULE.EVALUATE procedure.
- DBMS_RULE_ADM.EVALUATION_CONTINUE: The rules engine evaluates the rule set as if there were no evaluation function. The evaluation function is not used, and any results returned by the evaluation function are ignored.
- DBMS_RULE_ADM.EVALUATION_FAILURE: The user-specified evaluation function failed. Rule set evaluation stops, and the rules engine returns the results of the evaluation obtained by the evaluation function to the rules engine client using the DBMS_RULE.EVALUATE procedure.
If you always want to bypass the rules engine, then the evaluation function should return either EVALUATION_SUCCESS or EVALUATION_FAILURE. However, if you want to filter events so that some events are evaluated by the evaluation function and other events are evaluated by the rules engine, then the evaluation function may return all three return values, and it returns EVALUATION_CONTINUE when the rules engine should be used for evaluation.
If you specify an evaluation function for an evaluation context, then the evaluation function is run during evaluation when the evaluation context is used by a rule set or rule.
See Oracle9i Supplied PL/SQL Packages and Types Reference for more information about the evaluation function specified in the DBMS_RULE_ADM.CREATE_EVALUATION_CONTEXT procedure.
A rule action context contains optional information associated with a rule that is interpreted by the client of the rules engine when the rule is evaluated for an event. The client of the rules engine can be a user-created application or an internal feature of Oracle, such as Streams. Each rule has only one action context. The information in an action context is of type SYS.RE$NV_LIST, which is a type that contains an array of name-value pairs.
The rule action context information provides a context for the action taken by a client of the rules engine when a rule evaluates to TRUE. The rules engine does not interpret the action context. Instead, it returns the action context information when a rule evaluates to TRUE. Then, a client of the rules engine can interpret the action context information.
For example, suppose an event is defined as the addition of a new employee to a company. If the employee information is stored in the hr.employees table, then the event occurs whenever a row is inserted into this table. The company wants to specify that a number of actions are taken when a new employee is added, but the actions depend on which department the employee joins. One of these actions is that the employee is registered for a course relating to the department.
In this scenario, the company can create a rule for each department with an appropriate action context. Here, an action context returned when a rule evaluates to TRUE specifies the number of a course that an employee should take. Here are the rule conditions and the action contexts for three departments:
|Rule Name||Rule Condition||Action Context Name-Value Pair|
|rule_dep_10||department_id = 10||course_number, <course number for department 10>|
|rule_dep_20||department_id = 20||course_number, <course number for department 20>|
|rule_dep_30||department_id = 30||NULL (no name-value pair)|
These action contexts return the following instructions to the client application:
- The rule_dep_10 rule instructs the client application to enroll the new employee in the course whose number is carried in its action context.
- The rule_dep_20 rule instructs the client application to enroll the new employee in the course whose number is carried in its action context.
- The NULL action context for the rule_dep_30 rule instructs the client application not to enroll the new employee in any course.
Each action context can contain zero or more name-value pairs. If an action context contains more than one name-value pair, then each name in the list must be unique. In this example, the client application to which the rules engine returns the action context registers the new employee in the course with the returned course number. The client application does not register the employee for a course if a NULL action context is returned or if the action context does not contain a course number.
If multiple clients use the same rule, or if you want an action context to return more than one name-value pair, then you can list more than one name-value pair in an action context. For example, suppose the company also adds a new employee to a department electronic mailing list. In this case, the action context for the rule_dep_10 rule might contain two name-value pairs: one carrying the course number and one identifying the department mailing list.
The following are considerations for names in name-value pairs:
Streams uses action contexts for rule-based transformations and, when subset rules are specified, for internal transformations that may be required on LCRs containing UPDATE operations.
You can add a name-value pair to an action context using the ADD_PAIR member procedure of the RE$NV_LIST type. You can remove a name-value pair from an action context using the REMOVE_PAIR member procedure of the RE$NV_LIST type. If you want to modify an existing name-value pair in an action context, then you should first remove it using the REMOVE_PAIR member procedure and then add an appropriate name-value pair using the ADD_PAIR member procedure.
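The following plain-Python sketch is a conceptual illustration of the idea only; it is not the Oracle rules engine API, and the rule conditions and course number used are hypothetical. It shows the division of labor described above: the "engine" (here, a loop) only returns the action context of rules that evaluate to TRUE, and the client decides what the name-value pairs mean.

```python
# Hypothetical, simplified model of rules with action contexts (not Oracle's API).
rules = {
    "rule_dep_10": {"condition": lambda ev: ev["department_id"] == 10,
                    "action_context": {"course_number": 1057}},   # hypothetical course number
    "rule_dep_30": {"condition": lambda ev: ev["department_id"] == 30,
                    "action_context": None},                      # NULL action context
}

def handle_new_employee(event):
    for name, rule in rules.items():
        if rule["condition"](event):              # the rule evaluates to TRUE
            ctx = rule["action_context"]          # returned to the client uninterpreted
            if ctx and "course_number" in ctx:
                print(f"{name}: enroll employee in course {ctx['course_number']}")
            else:
                print(f"{name}: no course enrollment for this department")

handle_new_employee({"department_id": 10})   # -> enroll in course 1057
handle_new_employee({"department_id": 30})   # -> no course enrollment
```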
The rules engine evaluates rule sets based on events. An event is an occurrence that is defined by the client of the rules engine. The client initiates evaluation of an event by calling the DBMS_RULE.EVALUATE procedure. The information specified by the client when it calls the DBMS_RULE.EVALUATE procedure includes the following:
- The rule set to use for the evaluation and the evaluation context to use with it.
- Data relevant to the event, such as table rows and variable values referenced in the evaluation context.
- An optional event context of type SYS.RE$NV_LIST that contains name-value pairs that contain information about the event. This optional information is not directly used or interpreted by the rules engine. Instead, it is passed to client callbacks, such as an evaluation function, a variable value evaluation function (for implicit variables), and a variable method function.
The client can also send other information about the event and about how to evaluate the event using the DBMS_RULE.EVALUATE procedure. For example, the caller may specify if evaluation must stop as soon as the first TRUE rule or the first MAYBE rule (if there are no TRUE rules) is found.
The rules engine uses the rules in the specified rule set to evaluate the event. Then, the rules engine returns the results to the client. The rules engine returns rules using the two OUT parameters of the EVALUATE procedure: true_rules and maybe_rules. That is, the true_rules parameter returns rules that evaluate to TRUE, and, optionally, the maybe_rules parameter returns rules that may evaluate to TRUE given more information.
Figure 5-1 shows the rule set evaluation process:
- The client initiates evaluation of an event by calling the DBMS_RULE.EVALUATE procedure. Only rules that are in the specified rule set and use the specified evaluation context are used for evaluation.
- The rules engine evaluates the event using those rules and returns the rules that evaluate to TRUE to the client. Each returned rule is returned with its entire action context, which may contain information or may be NULL.
Partial evaluation occurs when the DBMS_RULE.EVALUATE procedure is run without data for all the tables and variables in the specified evaluation context. During partial evaluation, some rules may reference columns, variables, or attributes that are unavailable, while some other rules may reference only available data.
For example, consider a scenario where, during evaluation, values are supplied for one table column and for attribute a1 of variable v1, but no value is supplied for attribute a2 of v1 or for the variable v1 itself.
The following rules are used for evaluation:
- R1 has a simple condition that references only the available table column.
- R2 has a condition that references the unavailable attribute v1.a2.
- R3 has a compound condition: a simple condition on the available attribute v1.a1, combined using OR with another condition that references unavailable data.
- R4 has a non-simple condition that references variable v1.
Given this scenario, R1 and R4 reference available data, R2 references unavailable data, and R3 references both available data and unavailable data.
Partial evaluation always evaluates only simple conditions within a rule. If the rule condition has parts which are not simple, then the rule may or may not be evaluated completely, depending on the extent to which data is available. If a rule is not completely evaluated, then it can be returned as a MAYBE rule.
For example, given the rules in the previous scenario, R1 and the first part of R3 are evaluated, but R2, the remainder of R3, and R4 are not evaluated. The following results are returned to the client:
- R1 evaluates to FALSE, and so is not returned.
- R2 is returned as MAYBE because information about attribute v1.a2 is not available.
- R3 is returned as TRUE because R3 is a simple rule and the value of v1.a1 matches the first part of the rule condition.
R4is returned as
MAYBEbecause the rule condition is not simple. The client must supply the value of variable
v1for this rule to evaluate to
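To make the TRUE/MAYBE/FALSE behavior concrete, here is a small conceptual sketch of three-valued partial evaluation written in Python. This is not Oracle code and does not call the DBMS_RULE API; the rule conditions, event data, and helper names are illustrative assumptions chosen only to mirror the scenario above.

```python
# Conceptual sketch of three-valued (TRUE/FALSE/MAYBE) partial evaluation.
# NOT Oracle PL/SQL; rules and data below are hypothetical illustrations.

TRUE, FALSE, MAYBE = "TRUE", "FALSE", "MAYBE"

def eval_simple(term, data):
    """Evaluate a simple 'name = value' condition; MAYBE if name is unavailable."""
    name, expected = term
    if name not in data:
        return MAYBE
    return TRUE if data[name] == expected else FALSE

def eval_or(terms, data):
    """OR of simple terms: TRUE if any term is TRUE, FALSE if all are FALSE, else MAYBE."""
    results = [eval_simple(t, data) for t in terms]
    if TRUE in results:
        return TRUE
    return FALSE if all(r == FALSE for r in results) else MAYBE

# Partially available event data (v1.a2 and v2 are unavailable).
available = {"tab1.col": 7, "v1.a1": "ABC"}
rules = {
    "R1": [("tab1.col", 5)],              # references available data
    "R2": [("v1.a2", "aaa")],             # references unavailable data
    "R3": [("v1.a1", "ABC"), ("v2", 5)],  # mixed availability, OR'd together
}

for name, terms in rules.items():
    print(name, eval_or(terms, available))
# R1 FALSE (not returned), R2 MAYBE, R3 TRUE
```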
You can create the following types of database objects directly using the DBMS_RULE_ADM package:
You can create rules and rule sets indirectly using the
DBMS_STREAMS_ADM package. You control the privileges for these database objects using the following procedures in the DBMS_RULE_ADM package:
To allow a user to create rule sets, rules, and evaluation contexts in the user's own schema, grant the user the following system privileges:
These privileges, and the privileges discussed in the following sections, can be granted to the user directly or through a role.
When you grant a privilege on
If you want to grant access to an object in the
To create an evaluation context, rule, or rule set in a schema, a user must meet at least one of the following conditions:
To alter an evaluation context, rule, or rule set, a user must meet at least one of the following conditions:
ALTER_ON_RULE_SET object privilege on the rule set.
To drop an evaluation context, rule, or rule set, a user must meet at least one of the following conditions:
This section describes the privileges required to place a rule in a rule set.
The user must meet at least one of the following conditions for the rule:
hr schema in a rule set, a user must be granted the EXECUTE_ON_RULE privilege for the rule.
The user also must meet at least one of the following conditions for the rule set:
human_resources rule set in the hr schema, a user must be granted the ALTER_ON_RULE_SET privilege for the rule set.
To evaluate a rule set, a user must meet at least one of the following conditions:
hr schema, a user must be granted the EXECUTE_ON_RULE_SET privilege for the rule set.
Granting the EXECUTE object privilege on a rule set requires that the grantor have the EXECUTE privilege specified WITH GRANT OPTION on all rules currently in the rule set.
To use an evaluation context, a user must meet at least one of the following conditions for the evaluation context: | http://docs.oracle.com/cd/B10500_01/server.920/a96571/rules.htm | 13 |
70 | In quantum mechanics, the particle is described by a wave. The position is where the wave is concentrated and the momentum, a measure of the velocity, is the wavelength. Neither the position nor the velocity is precisely defined; the position is uncertain to the degree that the wave is spread out, and the momentum is uncertain to the degree that the wavelength is ill-defined.
The only kind of wave with a definite position is concentrated at one point, and such a wave has no wavelength. Conversely, the only kind of wave with a definite wavelength is an infinite regular periodic oscillation over all space, which has no definite position. So in quantum mechanics, there are no states which describe a particle with both a definite position and a definite momentum. The narrower the probability distribution is for the position, the wider it is in momentum.
For example, the uncertainty principle requires that when the position of an atom is measured with a photon, the reflected photon will change the momentum of the atom by an uncertain amount inversely proportional to the accuracy of the position measurement. The amount of uncertainty can never be reduced below the limit set by the principle, regardless of the experimental setup.
A mathematical statement of the principle is that every quantum state has the property that the root-mean-square (RMS) deviation of the position from its mean (the standard deviation of the X-distribution), $\sigma_x$, times the RMS deviation of the momentum from its mean (the standard deviation of P), $\sigma_p$, can never be smaller than a small fixed multiple of Planck's constant:
$$\sigma_x \,\sigma_p \;\ge\; \frac{\hbar}{2}$$
The uncertainty principle is related to the observer effect, with which it is often conflated. In the Copenhagen interpretation of quantum mechanics, the uncertainty principle is a theoretical limitation of how small this observer effect can be. Any measurement of the position with accuracy $\Delta x$ collapses the quantum state, making the standard deviation of the momentum larger than $\hbar/(2\Delta x)$.
While this is true in all interpretations, in many modern interpretations of quantum mechanics (many-worlds and variants), the quantum state itself is the fundamental physical quantity, not the position or momentum. Taking this perspective, while the momentum and position are still uncertain, the uncertainty is an effect caused not just by observation, but by any entanglement with the environment.
In 1925, following pioneering work with Hendrik Kramers, Heisenberg developed matrix mechanics, which replaced the ad-hoc old quantum theory with modern quantum mechanics. The central assumption was that the classical motion was not precise at the quantum level, and electrons in an atom did not travel on sharply defined orbits. Rather, the motion was smeared out in a strange way: the time Fourier transform only involving those frequencies which could be seen in quantum jumps.
Heisenberg's paper did not admit any unobservable quantities, like the exact position of the electron in an orbit at any time, he only allowed the theorist to talk about the Fourier components of the motion. Since the Fourier components were not defined at the classical frequencies, they could not be used to construct an exact trajectory, so that the formalism could not answer certain overly precise questions about where the electron was or how fast it was going.
The most striking property of Heisenberg's infinite matrices for the position and momentum is that they do not commute. His central result was the canonical commutation relation:
$$XP - PX = i\hbar$$
and this result does not have a clear physical interpretation.
In March 1926, working in Bohr's institute, Heisenberg formulated the principle of uncertainty thereby laying the foundation of what became known as the Copenhagen interpretation of quantum mechanics. Heisenberg showed that the commutation relations implies an uncertainty, or in Bohr's language a complementarity. Any two variables which do not commute cannot be measured simultaneously — the more precisely one is known, the less precisely the other can be known.
One way to understand the complementarity between position and momentum is by wave-particle duality. If a particle described by a plane wave passes through a narrow slit in a wall, like a water-wave passing through a narrow channel, the particle will diffract, and its wave will come out in a range of angles. The narrower the slit, the wider the diffracted wave and the greater the uncertainty in momentum afterwards. The laws of diffraction require that the spread in angle is about $\lambda/d$, where $d$ is the slit width and $\lambda$ is the wavelength. From de Broglie's relation $p = h/\lambda$, the size of the slit and the range in momentum of the diffracted wave are related by Heisenberg's rule:
$$\Delta x \,\Delta p \gtrsim h$$
In his celebrated paper (1927), Heisenberg established this expression as the minimum amount of unavoidable momentum disturbance caused by any position measurement, but he did not give a precise definition for the uncertainties Δx and Δp. Instead, he gave some plausible estimates in each case separately. In his Chicago lecture he refined his principle:
But it was Kennard in 1927 who first proved the modern inequality
$$\sigma_x \,\sigma_p \;\ge\; \frac{\hbar}{2}$$
where $\hbar = h/2\pi$, and σx, σp are the standard deviations of position and momentum. Heisenberg himself only proved relation (2) for the special case of Gaussian states.
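As a rough numerical illustration of the Kennard bound (a sketch, not from the original text): an electron confined to a region about the size of an atom, σx ≈ 1 Å, must have a momentum spread of at least ħ/(2σx), which corresponds to a sizable velocity spread. The confinement length below is an assumed value.

```python
# Minimal numerical sketch of the Kennard inequality sigma_x * sigma_p >= hbar/2.
# The 1-angstrom confinement length is an illustrative assumption.
hbar = 1.054571817e-34   # J*s
m_e = 9.1093837015e-31   # kg, electron mass

sigma_x = 1e-10                      # m, roughly atomic size
sigma_p_min = hbar / (2 * sigma_x)   # kg*m/s, minimum momentum spread
sigma_v_min = sigma_p_min / m_e      # m/s, corresponding velocity spread

print(f"minimum sigma_p = {sigma_p_min:.3e} kg*m/s")
print(f"minimum sigma_v = {sigma_v_min:.3e} m/s")   # ~5.8e5 m/s
```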
The uncertainty principle is often explained as the statement that the measurement of position necessarily disturbs a particle's momentum, and vice versa—i.e., that the uncertainty principle is a manifestation of the observer effect.
This explanation is sometimes misleading in a modern context, because it makes it seem that the disturbances are somehow conceptually avoidable--- that there are states of the particle with definite position and momentum, but the experimental devices we have today are just not good enough to produce those states. In fact, states with both definite position and momentum just do not exist in quantum mechanics, so it is not the measurement equipment that is at fault.
It is also misleading in another way, because sometimes it is a failure to measure the particle that produces the disturbance. For example, if a perfect photographic film contains a small hole, and an incident photon is not observed, then its momentum becomes uncertain by a large amount. By not observing the photon, we discover that it went through the hole, revealing the photon's position.
It is misleading in yet another way, because sometimes the measurement can be performed far away. If two photons are emitted in opposite directions from the decay of positronium, the momentum of the two photons is opposite. By measuring the momentum of one particle, the momentum of the other is determined. This case is subtler, because it is impossible to introduce more uncertainties by measuring a distant particle, but it is possible to restrict the uncertainties in different ways, with different statistical properties, depending on what property of the distant particle you choose to measure. By restricting the uncertainty in p to be very small by a distant measurement, the remaining uncertainty in x stays large.
But Heisenberg did not focus on the mathematics of quantum mechanics, he was primarily concerned with establishing that the uncertainty is actually a property of the world--- that it is in fact physically impossible to measure the position and momentum of a particle to a precision better than that allowed by quantum mechanics. To do this, he used physical arguments based on the existence of quanta, but not the full quantum mechanical formalism.
The reason is that this was a surprising prediction of quantum mechanics, which was not yet accepted. Many people would have considered it a flaw that there are no states of definite position and momentum. Heisenberg was trying to show that this was not a bug, but a feature--- a deep, surprising aspect of the universe. In order to do this, he could not just use the mathematical formalism, because it was the mathematical formalism itself that he was trying to justify.
One way in which Heisenberg originally argued for the uncertainty principle is by using an imaginary microscope as a measuring device. He imagines an experimenter trying to measure the position and momentum of an electron by shooting a photon at it.
If the photon has a short wavelength, and therefore a large momentum, the position can be measured accurately. But the photon will be scattered in a random direction, transferring a large and uncertain amount of momentum to the electron. If the photon has a long wavelength and low momentum, the collision will not disturb the electron's momentum very much, but the scattering will reveal its position only vaguely.
If a large aperture is used for the microscope, the electron's location can be well resolved (see Rayleigh criterion); but by the principle of conservation of momentum, the transverse momentum of the incoming photon and hence the new momentum of the electron will be poorly resolved. If a small aperture is used, the accuracy of the two resolutions is the other way around.
The trade-offs imply that no matter what photon wavelength and aperture size are used, the product of the uncertainty in measured position and measured momentum is greater than or equal to a lower bound, which is up to a small numerical factor equal to Planck's constant. Heisenberg did not care to formulate the uncertainty principle as an exact bound, and preferred to use it as a heuristic quantitative statement, correct up to small numerical factors.
The Copenhagen interpretation of quantum mechanics and Heisenberg's Uncertainty Principle were seen as twin targets by detractors who believed in an underlying determinism and realism. Within the Copenhagen interpretation of quantum mechanics, there is no fundamental reality which the quantum state is describing, just a prescription for calculating experimental results. There is no way to say what the state of a system fundamentally is, only what the result of observations might be.
Albert Einstein believed that randomness is a reflection of our ignorance of some fundamental property of reality, while Niels Bohr believed that the probability distributions are fundamental and irreducible, and depend on which measurements we choose to perform. Einstein and Bohr debated the uncertainty principle for many years.
The first of Einstein's thought experiments challenging the uncertainty principle went as follows:
Bohr's response was that the wall is quantum mechanical as well, and that to measure the recoil to accuracy $\Delta p$, the momentum of the wall must be known to this accuracy before the particle passes through. This introduces an uncertainty in the position of the wall, and therefore the position of the slit, equal to $h/\Delta p$, and if the wall's momentum is known precisely enough to measure the recoil, the slit's position is uncertain enough to disallow a position measurement.
Another of Einstein's thought experiments was designed to challenge the time/energy uncertainty principle. It is very similar to the slit experiment in space, except here the narrow window through which the particle passes is in time:
Bohr spent a day considering this setup, but eventually realized that if the energy of the box is precisely known, the time at which the shutter opens is uncertain. In the case that the scale and the box are placed in a gravitational field, then in some cases it is the uncertainty of the position of the clock in the gravitational field that will alter the ticking rate, and this can introduce the right amount of uncertainty. This was ironic, because it was Einstein himself who first discovered gravity's effect on clocks.
Bohr was compelled to modify his understanding of the uncertainty principle after another thought experiment by Einstein. In 1935, Einstein, Podolsky and Rosen published an analysis of widely separated entangled particles. Measuring one particle, Einstein realized, would alter the probability distribution of the other, yet here the other particle could not possibly be disturbed. This example led Bohr to revise his understanding of the principle, concluding that the uncertainty was not caused by a direct interaction.
But Einstein came to much more far reaching conclusions from the same thought experiment. He felt that a complete description of reality would have to predict the results of experiments from locally changing deterministic quantities, and therefore would have to include more information than the maximum possible allowed by the uncertainty principle.
In 1964 John Bell showed that this assumption can be tested, since it implies a certain inequality between the probability of different experiments. Experimental results confirm the predictions of quantum mechanics, ruling out local hidden variables.
While it is possible to assume that quantum mechanical predictions are due to nonlocal hidden variables, and in fact David Bohm invented such a formulation, this is not a satisfactory resolution for the vast majority of physicists. The question of whether a random outcome is predetermined by a nonlocal theory can be philosophical, and potentially intractable. If the hidden variables are not constrained, they could just be a list of random digits that are used to produce the measurement outcomes. To make it sensible, the assumption of nonlocal hidden variables is sometimes augmented by a second assumption--- that the size of the observable universe puts a limit on the computations that these variables can do. A nonlocal theory of this sort predicts that a quantum computer will encounter fundamental obstacles when it tries to factor numbers of approximately 10000 digits or more, an achievable task in quantum mechanics.
Popper thinks of these rare events as falsifications of the uncertainty principle in Heisenberg's original formulation. In order to preserve the principle, he concludes that Heisenberg's relation does not apply to individual particles or measurements, but only to many identically prepared particles, to ensembles. Popper's criticism applies to nearly all probabilistic theories, since a probabilistic statement requires many measurements to either verify or falsify.
Popper's criticism does not trouble physicists. Popper's presumption is that the measurement is revealing some preexisting information about the particle, the momentum, which the particle already possesses. In the quantum mechanical description the wavefunction is not a reflection of ignorance about the values of some more fundamental quantities, it is the complete description of the state of the particle. In this philosophical view, the Copenhagen interpretation, Popper's example is not a falsification, since after the particle diffracts through the slit and before the momentum is measured, the wavefunction is changed so that the momentum is still as uncertain as the principle demands.
While formulating the many-worlds interpretation of quantum mechanics in 1957, Hugh Everett III discovered a much stronger formulation of the uncertainty principle. In the inequality of standard deviations, some states, like the wavefunction:
The interpretation of $I$ is that the number of bits of information an observer acquires when the value of x is given to accuracy $\epsilon$ is equal to $I_x + \log_2(1/\epsilon)$. The second part is just the number of bits past the decimal point, the first part is a logarithmic measure of the width of the distribution. For a uniform distribution of width $\Delta x$ the information content is $\log_2 \Delta x$. This quantity can be negative, which means that the distribution is narrower than one unit, so that learning the first few bits past the decimal point gives no information since they are not uncertain.
Taking the logarithm of Heisenberg's formulation of uncertainty in natural units,
Everett conjectured that for all quantum states:
He did not prove this, but he showed that Gaussian states are minima in function space for the left hand side, and that they saturate the inequality. Similar relations with less restrictive right hand sides were rigorously proven many decades later.
When linear operators A and B act on a function $\psi(x)$, they don't always commute. A clear example is when operator B multiplies by x, while operator A takes the derivative with respect to x. Then, acting on any $\psi$,
$$(AB - BA)\,\psi = \frac{d}{dx}\bigl(x\,\psi(x)\bigr) - x\,\frac{d\psi}{dx} = \psi(x),$$
so in this case $AB - BA = 1$.
For any two operators A and B:
$$\|A\psi\|\,\|B\psi\| \;\ge\; \tfrac{1}{2}\bigl|\langle\psi|(AB - BA)|\psi\rangle\bigr|$$
The inequality above acquires its physical interpretation:
$\langle X\rangle_\psi = \langle\psi|X|\psi\rangle$ is the mean of observable X in the state ψ and
$\sigma_X = \sqrt{\langle X^2\rangle_\psi - \langle X\rangle_\psi^2}$ is the standard deviation of observable X in the system state ψ.
The position-momentum uncertainty relation follows by substituting $X - \langle X\rangle$ for A and $P - \langle P\rangle$ for B in the general operator norm inequality, since the imaginary part of the product, the commutator, is unaffected by the shift:
$$[\,X - \langle X\rangle,\; P - \langle P\rangle\,] = [X, P] = i\hbar$$
The big side of the inequality is the product of the norms of $(X - \langle X\rangle)\psi$ and $(P - \langle P\rangle)\psi$, which in quantum mechanics are the standard deviations of the position and the momentum. The small side is the norm of the commutator, which for the position and momentum is just $\hbar/2$.
In matrix mechanics, the commutator of the matrices X and P is always nonzero, it is a constant multiple of the identity matrix. This means that it is impossible for a state to have a definite values x for X and p for P, since then XP would be equal to the number xp and would equal PX.
The commutator of two matrices is unchanged when they are shifted by a constant multiple of the identity--- for any two real numbers x and p
$$[\,X - x,\; P - p\,] = [X, P] = i\hbar$$
Given any quantum state $\psi$, define the number x
$$x = \langle\psi|X|\psi\rangle$$
to be the expected value of the position, and
$$p = \langle\psi|P|\psi\rangle$$
to be the expected value of the momentum. The quantities $X - x$ and $P - p$ are only nonzero to the extent that the position and momentum are uncertain, to the extent that the state contains some values of X and P which deviate from the mean. The expected value of the commutator
$$\langle\psi|(XP - PX)|\psi\rangle = i\hbar$$
can only be nonzero if the deviations in X in the state times the deviations in P are large enough.
The size of the typical matrix elements can be estimated by summing the squares over the energy states:
So in order to produce the canonical commutation relations, the product of the deviations in any state has to be about $\hbar$.
This heuristic estimate can be made into a precise inequality using the Cauchy-Schwartz inequality, exactly as before. The inner product of the two vectors in parentheses:
is bounded above by the product of the lengths of each vector:
so, rigorously, for any state:
The real part of a matrix M is $\tfrac{1}{2}(M + M^\dagger)$, so that the real part of the product of two Hermitian matrices A and B is:
$$\mathrm{Re}(AB) = \tfrac{1}{2}(AB + BA)$$
while the imaginary part is
$$\mathrm{Im}(AB) = \tfrac{1}{2i}(AB - BA)$$
The magnitude of $\langle\psi|AB|\psi\rangle$ is bigger than the magnitude of its imaginary part, which is the expected value of the imaginary part of the matrix:
Note that the uncertainty product is for the same reason bounded below by the expected value of the anticommutator, which adds a term to the uncertainty relation. The extra term is not as useful for the uncertainty of position and momentum, because it has zero expected value in a gaussian wavepacket, like the ground state of a harmonic oscillator. The anticommutator term is useful for bounding the uncertainty of spin operators though.
In Schrödinger's wave mechanics, the quantum mechanical wavefunction contains information about both the position and the momentum of the particle. The position of the particle is where the wave is concentrated, while the momentum is determined by the typical wavelength.
The wavelength of a localized wave cannot be determined very well. If the wave extends over a region of size $L$ and the wavelength is approximately $\lambda$, the number of cycles in the region is approximately $L/\lambda$. The inverse of the wavelength can be changed by about $1/L$ without changing the number of cycles in the region by a full unit, and this is approximately the uncertainty in the inverse of the wavelength:
$$\Delta\!\left(\frac{1}{\lambda}\right) \approx \frac{1}{L}$$
This is an exact counterpart to a well known result in signal processing --- the shorter a pulse in time, the less well defined the frequency. The width of a pulse in frequency space is inversely proportional to the width in time. It is a fundamental result in Fourier analysis, the narrower the peak of a function, the broader the Fourier transform.
Multiplying by Planck's constant $h$, and identifying $\Delta p = h\,\Delta(1/\lambda)$ and $\Delta x = L$, gives
$$\Delta p \,\Delta x \gtrsim h$$
The uncertainty Principle can be seen as a theorem in Fourier analysis: the standard deviation of the squared absolute value of a function, times the standard deviation of the squared absolute value of its Fourier transform, is at least 1/(16π²) (Folland and Sitaram, Theorem 1.1).
An instructive example is the (unnormalized) gaussian wave-function
$$\psi(x) = e^{-A x^2/2}$$
The expectation value of X is zero by symmetry, and so the variance is found by averaging over all positions with the weight $|\psi(x)|^2$, careful to divide by the normalization factor.
The Fourier transform of the gaussian is the wavefunction in k-space, where k is the wavenumber and is related to the momentum by de Broglie's relation $p = \hbar k$:
The last integral does not depend on p, because there is a continuous change of variables which removes the dependence, and this deformation of the integration path in the complex plane does not pass any singularities. So up to normalization, the answer is again a Gaussian.
The width of the distribution in k is found in the same way as before, and the answer just flips A to 1/A.
so that for this example
$$\sigma_x \,\sigma_p = \frac{\hbar}{2},$$
which shows that the uncertainty relation inequality is tight. There are wavefunctions which saturate the bound.
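The Gaussian example can also be checked numerically. The short sketch below (an illustration, not part of the original article) builds a Gaussian wave packet on a grid, computes σx directly and σk from the discrete Fourier transform, and confirms that σx·σk ≈ 1/2, i.e. σx·σp ≈ ħ/2. The grid size and the width parameter A are arbitrary choices.

```python
# Numerical check that a Gaussian wave packet saturates sigma_x * sigma_k = 1/2.
import numpy as np

A = 2.0                                  # width parameter of psi(x) = exp(-A x^2 / 2)
x = np.linspace(-20, 20, 4096)
dx = x[1] - x[0]
psi = np.exp(-A * x**2 / 2)

def stddev(grid, weights):
    """Standard deviation of a distribution sampled on a uniform grid."""
    w = weights / weights.sum()
    mean = (grid * w).sum()
    var = ((grid - mean) ** 2 * w).sum()
    return float(np.sqrt(var))

sigma_x = stddev(x, np.abs(psi) ** 2)

# Fourier transform to k-space (k is the wavenumber; p = hbar * k).
psi_k = np.fft.fftshift(np.fft.fft(psi))
k = np.fft.fftshift(np.fft.fftfreq(x.size, d=dx)) * 2 * np.pi
sigma_k = stddev(k, np.abs(psi_k) ** 2)

print(sigma_x, sigma_k, sigma_x * sigma_k)   # product is ~0.5
```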
The Robertson-Schrödinger relation gives the uncertainty relation for any two observables A and B that do not commute:
$$\sigma_A^2\,\sigma_B^2 \;\ge\; \left(\frac{1}{2}\langle\{A,B\}\rangle - \langle A\rangle\langle B\rangle\right)^2 + \left(\frac{1}{2i}\langle[A,B]\rangle\right)^2$$
One well-known uncertainty relation is not an obvious consequence of the Robertson-Schrödinger relation: the energy-time uncertainty principle,
$$\Delta E \,\Delta t \gtrsim \hbar,$$
but it was not obvious what Δt is, because the time at which the particle has a given state is not an operator belonging to the particle; it is a parameter describing the evolution of the system. As Lev Landau once joked, "To violate the time-energy uncertainty relation all I have to do is measure the energy very precisely and then look at my watch!"
Nevertheless, Einstein and Bohr understood the heuristic meaning of the principle. A state which only exists for a short time cannot have a definite energy. In order to have a definite energy, the frequency of the state needs to be accurately defined, and this requires the state to hang around for many cycles, the reciprocal of the required accuracy.
For example, in spectroscopy, excited states have a finite lifetime. By the time-energy uncertainty principle, they do not have a definite energy, and each time they decay the energy they release is slightly different. The average energy of the outgoing photon has a peak at the theoretical energy of the state, but the distribution has a finite width called the natural linewidth. Fast-decaying states have a broad linewidth, while slow decaying states have a narrow linewidth.
The broad linewidth of fast decaying states makes it difficult to accurately measure the energy of the state, and researchers have even used microwave cavities to slow down the decay-rate, to get sharper peaks. The same linewidth effect also makes it difficult to measure the rest mass of fast decaying particles in particle physics. The faster the particle decays, the less certain is its mass.
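As a hedged numerical illustration of the linewidth-lifetime connection (the lifetime chosen below is a typical order of magnitude, not a value from the text): a state that lives for about 10 ns has a natural linewidth of roughly ħ/τ, on the order of 10⁻⁷ eV, or tens of MHz in frequency.

```python
# Order-of-magnitude sketch: natural linewidth from an assumed excited-state lifetime.
hbar_J = 1.054571817e-34      # J*s
h = 6.62607015e-34            # J*s
eV = 1.602176634e-19          # J per eV

tau = 10e-9                   # s, assumed lifetime (illustrative)
delta_E = hbar_J / tau        # J, energy width  (Delta E ~ hbar / tau)
delta_f = delta_E / h         # Hz, frequency linewidth

print(f"Delta E ~ {delta_E / eV:.2e} eV")     # ~6.6e-8 eV
print(f"Delta f ~ {delta_f / 1e6:.1f} MHz")   # ~15.9 MHz
```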
One false formulation of the energy-time uncertainty principle says that measuring the energy of a quantum system to an accuracy $\Delta E$ requires a time interval $\Delta t > h/\Delta E$. This formulation is similar to the one alluded to in Landau's joke, and was explicitly invalidated by Y. Aharonov and D. Bohm in 1961. The time Δt in the uncertainty relation is the time during which the system exists unperturbed, not the time during which the experimental equipment is turned on.
In 1936, Dirac offered a precise definition and derivation of the time-energy uncertainty relation, in a relativistic quantum theory of "events". In this formulation, particles followed a trajectory in spacetime, and each particle's trajectory was parametrized independently by a different proper time. The many-times formulation of quantum mechanics is mathematically equivalent to the standard formulations, but it was in a form more suited for relativistic generalization. It was the inspiration for Shin-Ichiro Tomonaga's covariant perturbation theory for quantum electrodynamics.
But a better-known, more widely used formulation of the time-energy uncertainty principle was given only in 1945 by L. I. Mandelshtam and I. E. Tamm, as follows. For a quantum system in a non-stationary state $\psi$ and an observable represented by a self-adjoint operator $B$, the following formula holds:
$$\sigma_E \,\frac{\sigma_B}{\left|\tfrac{d\langle B\rangle}{dt}\right|} \;\ge\; \frac{\hbar}{2},$$
where $\sigma_E$ is the standard deviation of the energy operator in the state $\psi$, $\sigma_B$ stands for the standard deviation of the operator $B$, and $\langle B\rangle$ is the expectation value of $B$ in that state. Although the second factor on the left-hand side has the dimension of time, it is different from the time parameter that enters the Schrödinger equation. It is a lifetime of the state $\psi$ with respect to the observable $B$. In other words, this is the time after which the expectation value $\langle B\rangle$ changes appreciably. | http://www.reference.com/browse/exact+counterpart | 13
63 | In classical mechanics, the momentum (SI unit kg m/s) of an object is the product of the mass and velocity of the object. Conceptually, the momentum of a moving object can be thought of as how difficult it would be to stop the object. As such, it is a natural consequence of Newton's first and second laws of motion. Having a lower speed or having less mass (how we measure inertia) results in having less momentum.
Momentum is a conserved quantity, meaning that the total momentum of any closed system (one not affected by external forces, and whose internal forces are not dissipative as heat or light) cannot be changed.
The concept of momentum in classical mechanics was originated by a number of great thinkers and experimentalists. René Descartes referred to mass times velocity as the fundamental force of motion. Galileo in his Two New Sciences used the term "impeto" (Italian), while Newton's Laws of Motion uses motus (Latin), which has been interpreted by subsequent scholars to mean momentum. (For accurate measures of momentum, see the section "modern definitions of momentum" on this page.)
Momentum in Newtonian mechanics
If an object is moving in any reference frame, then it has momentum in that frame. It is important to note that momentum is frame dependent. That is, the same object may have a certain momentum in one frame of reference, but a different amount in another frame. For example, a moving object has momentum in a reference frame fixed to a spot on the ground, while at the same time having zero momentum in a reference frame that is moving along with the object.
The amount of momentum that an object has depends on two physical quantities—the mass and the velocity of the moving object in the frame of reference. In physics, momentum is usually denoted by a small bold p (bold because it is a vector); so this can be written:
p = m v
- p is the momentum
- m is the mass
- v the velocity
(using bold text for vectors).
The origin of the use of p for momentum is unclear. It has been suggested that, since m had already been used for "mass," the p may be derived from the Latin petere ("to go") or from "progress" (a term used by Leibniz).
The velocity of an object at a particular instant is given by its speed and the direction of its motion at that instant. Because momentum depends on and includes the physical quantity of velocity, it too has a magnitude and a direction and is a vector quantity. For example, the momentum of a five-kg bowling ball would have to be described by the statement that it was moving westward at two m/s. It is insufficient to say that the ball has ten kg m/s of momentum because momentum is not fully described unless its direction is also given.
Momentum for a system
Relating to mass and velocity
The momentum of a system of objects is the vector sum of the momenta of all the individual objects in the system:
P = m1 v1 + m2 v2 + ... + mn vn
- P is the momentum of the system
- mi is the mass of object i
- vi is the vector velocity of object i
- n is the number of objects in the system
Relating to force
Force is equal to the rate of change of momentum:
F = dp/dt
In the case of constant mass and velocities much less than the speed of light, this definition results in the equation F = ma—commonly known as Newton's second law.
If a system is in equilibrium, then the change in momentum with respect to time is equal to zero:
dp/dt = 0
Conservation of momentum
The principle of conservation of momentum states that the total momentum of a closed system of objects (which has no interactions with external agents) is constant. One of the consequences of this is that the center of mass of any system of objects will always continue with the same velocity unless acted on by a force outside the system.
In an isolated system (one where external forces are absent) the total momentum will be constant—this is implied by Newton's first law of motion. Newton's third law of motion, the law of reciprocal actions, which dictates that the forces acting between systems are equal in magnitude, but opposite in sign, is due to the conservation of momentum.
Since momentum is a vector quantity it has direction. Thus, when a gun is fired, although overall movement has increased compared to before the shot was fired, the momentum of the bullet in one direction is equal in magnitude, but opposite in sign, to the momentum of the gun in the other direction. These then sum to zero which is equal to the zero momentum that was present before either the gun or the bullet was moving.
Momentum has the special property that, in a closed system, it is always conserved, even in collisions. Kinetic energy, on the other hand, is not conserved in collisions if they are inelastic (where two objects collide and move off together at the same velocity). Since momentum is conserved it can be used to calculate unknown velocities following a collision.
A common problem in physics that requires the use of this fact is the collision of two particles. Since momentum is always conserved, the sum of the momenta before the collision must equal the sum of the momenta after the collision:
m1 u1 + m2 u2 = m1 v1 + m2 v2
- u signifies vector velocity before the collision
- v signifies vector velocity after the collision.
Usually, we either only know the velocities before or after a collision and would like to also find out the opposite. Correctly solving this problem means you have to know what kind of collision took place. There are two basic kinds of collisions, both of which conserve momentum:
- Elastic collisions conserve kinetic energy as well as total momentum before and after collision.
- Inelastic collisions don't conserve kinetic energy, but total momentum before and after collision is conserved.
A collision between two pool balls is a good example of an almost totally elastic collision. In addition to momentum being conserved when the two balls collide, the sum of kinetic energy before a collision must equal the sum of kinetic energy after:
(1/2) m1 u1² + (1/2) m2 u2² = (1/2) m1 v1² + (1/2) m2 v2²
Since the one-half factor is common to all the terms, it can be taken out right away.
Head-on collision (1 dimensional)
In the case of two objects colliding head on, conservation of momentum and of kinetic energy can be solved for the final velocities, which can then easily be rearranged to
v1,final = ((m1 − m2) v1 + 2 m2 v2) / (m1 + m2)
v2,final = ((m2 − m1) v2 + 2 m1 v1) / (m1 + m2)
(here v1 and v2 denote the velocities before the collision).
Special Case: m1 much greater than m2
Now consider the case where the mass of one body, say m1, is far greater than that of the other (m1 >> m2). In that case m1 + m2 is approximately equal to m1, and m1 − m2 is approximately equal to m1.
Put these values in the above equation to calculate the value of v2,final after the collision. The expression reduces to v2,final = 2 v1 − v2. Its physical interpretation is that, in a collision between two bodies where one is very heavy, the lighter body ends up moving with twice the velocity of the heavier body minus its own original velocity; if the heavy body is essentially at rest, the lighter body simply rebounds in the opposite direction.
Special Case: m1 equal to m2
Another special case is when the collision is between two bodies of equal mass. Say body m1 moving at velocity v1 strikes body m2 moving at velocity v2. Putting this case into the equation derived above, we see that after the collision the body that was moving at v1 (m1) will move with velocity v2, and the mass m2 will move with velocity v1. So there will be an exchange of velocities.
Now suppose one of the masses, say m2, was at rest. In that case, after the collision the moving body, m1, will come to rest and the body that was at rest, m2, will start moving with the velocity that m1 had before the collision.
Please note that all of these observations are for an elastic collision.
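A short Python sketch can verify these special cases numerically; the masses and velocities below are arbitrary illustrative values, not quantities from the article.

```python
# 1-D elastic collision: final velocities from masses and initial velocities.
def elastic_1d(m1, v1, m2, v2):
    v1f = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    v2f = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return v1f, v2f

# Equal masses: velocities are exchanged.
print(elastic_1d(1.0, 3.0, 1.0, -2.0))        # (-2.0, 3.0)

# Very heavy m1 at rest hit by light m2: the light body simply bounces back (~ -v2).
print(elastic_1d(1e6, 0.0, 1.0, 5.0))         # (~0.00001, ~-4.99999)

# Momentum and kinetic energy are both conserved.
m1, v1, m2, v2 = 2.0, 1.5, 3.0, -0.5
v1f, v2f = elastic_1d(m1, v1, m2, v2)
assert abs(m1*v1 + m2*v2 - (m1*v1f + m2*v2f)) < 1e-12
assert abs(0.5*m1*v1**2 + 0.5*m2*v2**2 - (0.5*m1*v1f**2 + 0.5*m2*v2f**2)) < 1e-12
```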
This phenomenon called “Newton's cradle,” one of the most well-known examples of conservation of momentum, is a real life example of this special case.
In the case of objects colliding in more than one dimension, as in oblique collisions, the velocity is resolved into orthogonal components with one component perpendicular to the plane of collision and the other component or components in the plane of collision. The velocity components in the plane of collision remain unchanged, while the velocity perpendicular to the plane of collision is calculated in the same way as the one-dimensional case.
For example, in a two-dimensional collision, the momenta can be resolved into x and y components. We can then calculate each component separately, and combine them to produce a vector result. The magnitude of this vector is the final momentum of the isolated system.
A common example of a perfectly inelastic collision is when two snowballs collide and then stick together afterwards. This equation describes the conservation of momentum:
m1 u1 + m2 u2 = (m1 + m2) vfinal
It can be shown that a perfectly inelastic collision is one in which the maximum amount of kinetic energy is converted into other forms. For instance, if both objects stick together after the collision and move with a final common velocity, one can always find a reference frame in which the objects are brought to rest by the collision and 100 percent of the kinetic energy is converted.
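Here is a comparable sketch for the perfectly inelastic case (illustrative numbers again), showing that momentum is conserved while some kinetic energy is converted to other forms.

```python
# Perfectly inelastic 1-D collision: the bodies stick together.
def inelastic_1d(m1, v1, m2, v2):
    return (m1 * v1 + m2 * v2) / (m1 + m2)

m1, v1 = 0.2, 4.0     # snowball 1 (kg, m/s) -- illustrative values
m2, v2 = 0.3, -1.0    # snowball 2

vf = inelastic_1d(m1, v1, m2, v2)
p_before = m1 * v1 + m2 * v2
p_after = (m1 + m2) * vf
ke_before = 0.5 * m1 * v1**2 + 0.5 * m2 * v2**2
ke_after = 0.5 * (m1 + m2) * vf**2

print(vf)                     # 1.0 m/s
print(p_before, p_after)      # both 0.5 kg*m/s
print(ke_before - ke_after)   # 1.5 J converted to heat, sound, deformation
```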
Momentum in relativistic mechanics
In relativistic mechanics, momentum is defined as:
p = γ m v
- m is the mass of the object moving,
- γ = 1/√(1 − v²/c²) is the Lorentz factor,
- v is the relative velocity between an object and an observer, and
- c is the speed of light.
Relativistic momentum becomes Newtonian momentum at low speeds.
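The low-speed limit can be illustrated numerically; the sketch below compares γmv with mv for a 1 kg object at a few assumed speeds.

```python
import math

C = 299_792_458.0   # m/s, speed of light

def momentum_newtonian(m, v):
    return m * v

def momentum_relativistic(m, v):
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return gamma * m * v

m = 1.0  # kg (illustrative)
for v in (30.0, 3.0e6, 0.9 * C):          # car speed, 1% of c, 90% of c
    ratio = momentum_relativistic(m, v) / momentum_newtonian(m, v)
    print(f"v = {v:.3e} m/s: p_rel / p_newton = {ratio:.6f}")
# At everyday speeds the ratio is 1 to many decimal places; at 0.9c it is ~2.29.
```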
Momentum of massless objects
Massless objects such as photons also carry momentum. The formula is:
p = h/λ = E/c
- h is Planck's constant,
- λ is the wavelength of the photon,
- E is the energy the photon carries, and
- c is the speed of light.
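For instance (a numerical sketch with an assumed wavelength), a green photon at 532 nm carries a momentum of about 1.2 × 10⁻²⁷ kg·m/s.

```python
# Photon momentum p = h / lambda = E / c for an assumed 532 nm (green) photon.
h = 6.62607015e-34    # J*s, Planck's constant
c = 299_792_458.0     # m/s

wavelength = 532e-9                   # m (illustrative choice)
p = h / wavelength                    # kg*m/s
E = p * c                             # J, photon energy

print(f"p = {p:.3e} kg*m/s")          # ~1.25e-27
print(f"E = {E:.3e} J")               # ~3.73e-19 J (about 2.3 eV)
```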
Momentum in electromagnetism
When electric and/or magnetic fields move, they carry momentum. Light (visible light, UV, radio) is an electromagnetic wave and also has momentum. Even though photons (the particle aspect of light) have no mass, they still carry momentum. This leads to applications such as the solar sail.
Momentum is conserved in an electrodynamic system (it may change from momentum in the fields to mechanical momentum of moving parts).
- Halliday, David, Robert Resnick, and Jearl Walker. Fundamentals of physics. New York: Wiley, 1993. ISBN 0471524611
- Myers, Rusty L. The basics of physics. Basics of the hard sciences. Westport, Conn: Greenwood Press, 2006. ISBN 0313328579
- Serway, Raymond A., and John W. Jewett. Physics for scientists and engineers. Belmont, CA: Thomson-Brooks/Cole, 2004. ISBN 0534408427
- Tipler, Paul. Physics for Scientists and Engineers: Vol. 1: Mechanics, Oscillations and Waves, Thermodynamics (4th ed.). W. H. Freeman, 1998. ISBN 1-57259-492-6
- Young, Hugh D, and Roger A. Freedman. Physics for Scientists and Engineers, 11th ed. San Francisco, CA: Pearson, 2003. ISBN 080538684X.
- Conservation of momentum A chapter from an online textbook Retrieved August 15, 2007.
- Newton's Cradle Retrieved August 15, 2007.
| http://www.newworldencyclopedia.org/entry/Momentum | 13
67 | Conservation of Mass
Glenn Research Center
The conservation of mass is a fundamental concept of physics, along with the conservation of energy and the conservation of momentum. Within some problem domain, the amount of mass remains constant--mass is neither created nor destroyed. This seems quite obvious, as long as we are not talking about black holes or very exotic physics problems. The mass of any object can be determined by multiplying the volume of the object by the density of the object. When we move a solid object, as shown at the top of the slide, the object retains its shape, density, and volume. The mass of the object, therefore, remains a constant between state "a" and state "b."
In the center of the figure, we consider an amount of a static liquid or gas. If we change the fluid from some state "a" to another state "b" and allow it to come to rest, we find that, unlike a solid, a fluid may change its shape. The amount of fluid, however, remains the same. We can calculate the amount of fluid by multiplying the density times the volume. Since the mass remains constant, the product of the density and volume also remains constant. (If the density remains constant, the volume also remains constant.) The shape can change, but the mass remains the same.
Finally, at the bottom of the slide, we consider the changes for a fluid that is moving through our domain. There is no accumulation or depletion of mass, so mass is conserved within the domain. Since the fluid is moving, defining the amount of mass gets a little tricky. Let's consider an amount of fluid that passes through point "a" of our domain in some amount of time t. If the fluid passes through an area A at velocity V, we can define the volume Vol to be:
Vol = A * V * t
A units check gives area x length/time x time = area x length = volume. Thus the mass at point "a" ma is simply density r times the volume at "a":
ma = (r * A * V * t)a
If we compare the flow through another point in the domain, point "b," for the same amount of time t, we find the mass at "b" mb to be the density times the velocity times the area times the time at "b":
mb = (r * A * V * t)b
From the conservation of mass, these two masses are the same, and since the times are the same, we can eliminate the time dependence.
(r * A * V)a = (r * A * V)b
r * A * V = constant
The conservation of mass gives us an easy way to determine the velocity of flow in a tube if the density is constant. If we can determine (or set) the velocity at some known area, the equation tells us the value of velocity for any other area. In our animation, the area of "b" is one half the area of "a." Therefore, the velocity at "b" must be twice the velocity at "a." If we desire a certain velocity in a tube, we can determine the area necessary to obtain that velocity. This information is used in the design of wind tunnels. The quantity density times area times velocity has the dimensions of mass/time and is called the mass flow rate. This quantity is an important parameter in determining the thrust produced by a propulsion system.
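A small Python sketch of this relation (the areas, density, and inlet velocity below are made-up illustrative numbers) shows how the constant-density mass flow rate fixes the velocity at a second station.

```python
# Incompressible continuity: r * A * V = constant (constant mass flow rate).
def velocity_at_b(area_a, velocity_a, area_b):
    """Velocity at station b given the flow at station a (constant density)."""
    return velocity_a * area_a / area_b

rho = 1.225          # kg/m^3, sea-level air density (illustrative)
area_a = 2.0         # m^2
area_b = 1.0         # m^2, half the area of station a
velocity_a = 10.0    # m/s

velocity_b = velocity_at_b(area_a, velocity_a, area_b)   # 20.0 m/s: twice as fast
mdot = rho * area_a * velocity_a                          # kg/s, mass flow rate

print(velocity_b, mdot)   # 20.0 m/s, 24.5 kg/s
```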
As the speed of the flow approaches the speed of sound, the density of the flow is no longer a constant and we must then use a compressible form of the mass flow rate equation.
The conservation of mass equation also occurs in a differential form as part of the Navier-Stokes equations of fluid flow.
| http://www.grc.nasa.gov/WWW/K-12/airplane/mass.html | 13
67 | The following are some formulas for calculating the area of additional shapes we'll use to define weld volume in a joint:
For the Triangle on the right: Area = B * H / 2
Note it is like calculating the Area of a Rectangle and dividing by 2. This formula works for any size and shape Triangle. Look at the Triangle below left. Below right are picture examples that help show why this equation works for any shape Triangle:
By duplicating the Blue Triangle and rearranging the pieces we can construct a Rectangle. Referring to the three pictures on the right:
1) In the top picture the Blue Triangle is copied and turned upside down. It is shown in Green.
2) In the middle picture we make a small Red Triangle to create a straight perpendicular side on the Blue Triangle.
3) In the bottom picture the Red Triangle is moved to the left side, making a straight side on the Green Triangle.
4) That makes a Rectangle with one side still equal to B and the other H.
5) The AREA as defined for a Rectangle is B * H.
But remember we duplicated the Blue Triangle, so to get the AREA of the original Blue Triangle we have to divide by 2, hence the formula: Area = B * H / 2
Area of a Segment (Weld Reinforcement)
This is one area that is often used in calculating weld metal area and volume; it is called the area of a Circle Segment and is shown in Red on the photo left. This is what is used to calculate the area of Weld Reinforcement. The accurate way to do this is to calculate the area of a segment of Radius R (that is, the combined Green and Red Areas), use the length of the Chord W (width of weld), and subtract the area of the triangle formed by the Chord and the distance from the Center of the Circle to the Chord (the Green Area). This leaves the Area between the Chord and the outer edge of the Circle (Red Area), or Weld Reinforcement in our case.
You would have to estimate the Radius of the circle making the reinforcement and the angle of that Segment. Not an easy item to estimate. Since we know the weld bead height (or the desired maximum height by code, usually 3/32 or 1/8 inch), and the weld height is much smaller than the weld width, we can use a method that estimates the area and is better than estimating the radius. The formula is:
Approximate Area of a Segment (Weld Reinforcement) = (2 * H * W) / 3 + H³ / (2 * W)
Since weld reinforcement is not a perfect circle, the value obtained is sufficiently accurate for any engineering calculations needed. In fact, since weld reinforcement is not a portion of a perfect circle, this approach may be closer to the actual area. Having checked several typical weld reinforcement dimensions, you can use 72% of the Area of a Rectangle for a quick estimate of the reinforcement area. With these basic shapes you can calculate the area of almost all welds.
In the following examples of weld joints, the weld area can be arranged into Triangles, Rectangles, and Segments.
To calculate the weld metal volume that must be added to a weld joint, you simply multiply the Area times the Length, in the same dimensions. Therefore, if the length is given in feet, convert it to inches so all dimensions are in inches: Area (in²) * Length (in) = Volume (in³). Remember, dimensional analysis works to check your work: in² * in = in³.
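As a worked sketch (the joint dimensions below are illustrative assumptions, not values from the text), a fillet-style triangular weld cross-section plus an approximate reinforcement segment can be turned into a volume and then into pounds of weld metal using the carbon-steel density quoted in the next paragraph (0.284 lb/in³).

```python
# Weld metal estimate: triangle cross-section + approximate reinforcement segment.
# Dimensions are illustrative assumptions; steel density per the text.

def triangle_area(base, height):
    return base * height / 2.0

def segment_area(width, height):
    """Approximate circle-segment (reinforcement) area for a low, wide cap."""
    return (2.0 * height * width) / 3.0 + height**3 / (2.0 * width)

leg = 0.25            # in, fillet weld leg size (assumed)
reinf_w = 0.35        # in, reinforcement width (assumed)
reinf_h = 3.0 / 32.0  # in, maximum reinforcement height per the text
length_ft = 10.0      # ft of weld (assumed)

area = triangle_area(leg, leg) + segment_area(reinf_w, reinf_h)   # in^2
volume = area * length_ft * 12.0                                  # in^3
pounds = volume * 0.284                                           # lb, steel density

print(f"area = {area:.4f} in^2, volume = {volume:.2f} in^3, weight = {pounds:.2f} lb")
```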
Calculate Pounds of Welding Materials Needed:
Now that the Volume of weld metal you'll need to add is known, how much wire will you need? The following are some material densities: Steel weighs 0.284 lb/in³; Aluminum is about 0.098 lb/in³, depending somewhat on alloy; and Stainless Steel is about 0.29 lb/in³, again depending on the alloy. | http://www.netwelding.com/Calculate_Weld_Metal_Volume.htm | 13
111 | - "Atomic mass unit" redirects here. "Atomic mass" should not be confused with "atomic weight."
The atomic mass (ma) is the mass of a single atom, when the atom is at rest at its lowest energy level (or "ground state"). Given that a chemical element may exist as various isotopes, possessing different numbers of neutrons in their atomic nuclei, atomic mass is calculated for each isotope separately. Atomic mass is most often expressed in unified atomic mass units, where one unified atomic mass unit is defined as one-twelfth the mass of a single atom of the carbon-12 isotope.
Clarification of terminology
The atomic mass should be distinguished from other terms such as relative atomic mass and mass number.
- Relative atomic mass and atomic weight: The relative atomic mass (Ar) of an element is the ratio of the mass of an atom of the element to one-twelfth the mass of an atom of carbon-12. Because an element in nature is usually a mixture of isotopes, the relative atomic mass is also the weighted mean of the atomic masses of all the atoms in a particular sample of the element, weighted by isotopic abundance. In this sense, relative atomic mass was once known as atomic weight.
- Mass number: The mass number of an isotope is the total number of nucleons (neutrons plus protons) in the nucleus of each atom of the isotope. Rounding the atomic mass of an isotope usually gives the total nucleon count. The neutron count can then be derived by subtracting the atomic number (number of protons) from the mass number.
Often an element has one predominant isotope. In such a case, the actual numerical difference between the atomic mass of that main isotope and the relative atomic mass or standard atomic weight of the element can be very small, such that it does not affect most bulk calculations; but such an error can be critical when considering individual atoms. For elements with more than one common isotope, the difference between the atomic mass of the most common isotope and the relative atomic mass of the element can be as much as half a mass unit or more (as in the case of chlorine). The atomic mass of an uncommon isotope can differ from the relative atomic mass or standard atomic weight by several mass units.
An element may have different atomic weights depending on the source. Nevertheless, given the cost and difficulty of isotope analysis, it is usual to use the tabulated values of standard atomic weights, which are ubiquitous in chemical laboratories.
Unified atomic mass unit
The unified atomic mass unit (u), or dalton (Da), or, sometimes, universal mass unit, is a unit of mass used to express atomic and molecular masses. It is defined as one-twelfth of the mass of an unbound atom of carbon-12 (12C) at rest and in its ground state.
- 1 u = 1/NA gram = 1/ (1000 NA) kg (where NA is Avogadro's number)
- 1 u = 1.660538782(83)×10−27 kg = 931.494027(23) MeV/c2
The atomic mass unit (amu) is an older name for a very similar scale. The symbol amu for atomic mass unit is not a symbol for the unified atomic mass unit. It may be observed as an historical artifact, written during the time when the amu scale was used, or it may be used correctly when referring to its historical use. Occasionally, it may be used in error (possibly deriving from confusion about its historical usage).
The unified atomic mass unit, or dalton, is not an SI unit of mass, but it is accepted for use with SI under either name. Atomic masses are often written without any unit, and then the unified atomic mass unit is implied.
In biochemistry and molecular biology, when referring to macromolecules such as proteins or nucleic acids, the term "kilodalton" is used, with the symbol kDa. Because proteins are large molecules, their masses are given in kilodaltons, where one kilodalton is 1000 daltons.
The unified atomic mass unit is approximately equal to the mass of a hydrogen atom, a proton, or a neutron.
Technically, atomic mass is equal to the total mass of protons, neutrons, and electrons in the atom (when the atom is motionless), plus the mass contained in the binding energy of the atom's nucleus. However, the mass of an electron (being approximately 1/1836 of the mass of a proton) and the mass contained in nuclear binding (which is generally less than 0.01 u) may be considered negligible when compared with the masses of protons and neutrons. Thus, atomic mass is approximately equal to the total mass of the protons and neutrons in the nucleus of the atom. Thus, in general terms, an atom or molecule that contains n protons and neutrons will have a mass approximately equal to n u.
Chemical element masses, as expressed in u, would all be close to whole number values (within 2 percent and usually within 1 percent) were it not for the fact that atomic weights of chemical elements are averaged values of the various stable isotope masses in the abundances which they naturally occur. For example, chlorine has an atomic weight of 35.45 u because it is composed of 76 percent 35Cl (34.96 u) and 24 percent 37Cl (36.97 u).
Another reason for using the unified atomic mass unit is that it is experimentally much easier and more precise to compare masses of atoms and molecules (that is, determine relative masses) than to measure their absolute masses. Masses are compared with a mass spectrometer (see below).
Measurement of atomic masses
Direct comparison and measurement of the masses of atoms is achieved by the technique known as mass spectrometry. The equation is:
- mass contribution = (percent abundance) (mass)
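The weighted-average idea can be sketched in a few lines of Python; the chlorine figures below are the approximate values quoted earlier in this article.

```python
# Relative atomic mass as an abundance-weighted mean of isotopic masses.
# Chlorine figures are the approximate values quoted earlier in the article.
isotopes = [
    (34.96, 0.76),   # 35Cl: mass in u, fractional abundance
    (36.97, 0.24),   # 37Cl
]

relative_atomic_mass = sum(mass * abundance for mass, abundance in isotopes)
print(f"Cl relative atomic mass ~ {relative_atomic_mass:.2f} u")   # ~35.44 u
```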
In the history of chemistry, the first scientists to determine atomic weights were John Dalton, between 1803 and 1805, and Jöns Jakob Berzelius, between 1808 and 1826. Atomic weight was originally defined relative to that of the lightest element, hydrogen, which was assigned the unit 1.00. In the 1860s, Stanislao Cannizzaro refined atomic weights by applying Avogadro's law (notably at the Karlsruhe Congress of 1860). He formulated a law to determine atomic weights of elements: The different quantities of the same element contained in different molecules are all whole multiples of the atomic weight. On that basis, he determined atomic weights and molecular weights by comparing the vapor density of a collection of gases with molecules containing one or more of the chemical element in question.
In the first half of the twentieth century, up until the 1960s, chemists and physicists used two different atomic mass scales. The chemists used a scale such that the natural mixture of oxygen isotopes had an atomic mass 16, while the physicists assigned the same number 16 to the atomic mass of the most common oxygen isotope (containing eight protons and eight neutrons). However, because oxygen-17 and oxygen-18 are also present in natural oxygen, this led to two different tables of atomic mass.
The unified atomic mass unit was adopted by the International Union of Pure and Applied Physics in 1960 and by the International Union of Pure and Applied Chemistry in 1961. Hence, before 1961 physicists as well as chemists used the symbol amu for their respective (and slightly different) atomic mass units. The accepted standard is now the unified atomic mass unit (symbol u).
Comparison of u with the physical and chemical amu scales:
- 1 u = 1.000 317 9 amu (physical scale) = 1.000 043 amu (chemical scale).
The unified scale based on carbon-12, 12C, met the physicists' need to base the scale on a pure isotope, while being numerically close to the old chemists' scale.
Conversion factor between atomic mass units and grams
The standard scientific unit for dealing with atoms in macroscopic quantities is the mole (mol), which is defined arbitrarily as the amount of a substance with as many atoms or other units as there are in 12 grams of the carbon isotope C-12. The number of atoms in a mole is called Avogadro's number (NA), the value of which is approximately 6.022 × 1023 mol-1.
One mole of a substance always contains almost exactly the relative atomic mass or molar mass of that substance (which is the concept of molar mass), expressed in grams; however, this is almost never true for the atomic mass. For example, the standard atomic weight of iron is 55.847 g/mol, and therefore one mole of iron as commonly found on earth has a mass of 55.847 grams. The atomic mass of an 56Fe isotope is 55.935 u and one mole of 56Fe will in theory weigh 55.935g, but such amounts of the pure 56Fe isotope have never existed.
The formulaic conversion between atomic mass and SI mass in grams for a single atom is:
m (in grams) = ma × (1 u) = ma / NA
where u is the atomic mass unit and NA is Avogadro's number.
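A quick numeric check of this conversion, using the 56Fe mass quoted above, is sketched below.

```python
# Convert an atomic mass in u to grams: m(g) = m_a / N_A  (since 1 u = 1/N_A gram).
N_A = 6.02214076e23        # Avogadro's number, 1/mol

m_fe56_u = 55.935          # u, atomic mass of 56Fe as quoted in the article
m_fe56_g = m_fe56_u / N_A  # grams per atom

print(f"one 56Fe atom ~ {m_fe56_g:.3e} g")   # ~9.288e-23 g
```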
Relationship between atomic and molecular masses
Similar definitions apply to molecules. One can compute the molecular mass of a compound by adding the atomic masses of its constituent atoms (nuclides). One can compute the molar mass of a compound by adding the relative atomic masses of the elements given in the chemical formula. In both cases the multiplicity of the atoms (the number of times it occurs) must be taken into account, usually by multiplication of each unique mass by its multiplicity.
Mass defects in atomic masses
The pattern of the amounts by which the atomic masses deviate from their mass numbers is as follows: the deviation starts positive at hydrogen-1, becomes negative until a minimum is reached at iron-56, iron-58, and nickel-62, then increases to positive values in the heavy isotopes, with increasing atomic number. This amounts to the following: nuclear fission in an element heavier than iron produces energy, and fission in any element lighter than iron requires energy. The opposite is true of nuclear fusion reactions: fusion in elements lighter than iron produces energy, and fusion in elements heavier than iron requires energy.
- ↑ relative atomic mass (atomic weight). IUPAC Gold Book. Retrieved December 20, 2008.
- ↑ unified atomic mass unit. IUPAC Gold Book. Retrieved December 20, 2008.
- ↑ A 12C atom contains 6 protons, 6 neutrons and 6 electrons, with the protons and neutrons having about the same mass and the electron mass being negligible in comparison.
- ↑ Isotopic Listing of Elements: Exact Masses and Isotopic Abundances. Scientific Instrument Services. Retrieved December 12, 2008.
- ↑ Andrew Williams, 2007. Origin of the Formulas of Dihydrogen and Other Simple Molecules. J. Chem. Ed. 84:1779.
- Bransden, B. H., and C. J. Joachain. 2003. Physics of Atoms and Molecules, 2nd ed. Harlow, UK: Prentice Hall. ISBN 058235692X.
- Demtröder, W. 2006. Atoms, Molecules and Photons: An Introduction to Atomic-, Molecular-, and Quantum-Physics. Berlin: Springer. ISBN 978-3540206316.
- Foot, Christopher J. 2005. Atomic Physics. (Oxford Master Series in Atomic, Optical and Laser Physics.) Oxford, UK: Oxford Univ. Press. ISBN 0198506961.
- Williams, Andrew. 2007. Origin of the Formulas of Dihydrogen and Other Simple Molecules. J. Chem. Ed. 84:1779.
All links retrieved November 24, 2012.
- Atomic Weights and Isotopic Compositions for All Elements. NIST.
- AME2003 Atomic Mass Evaluation National Nuclear Data Center.
- Non-SI units accepted for use with the SI, and units based on fundamental constants. BIPM.
- Fundamental Physical Constants: atomic mass unit-kilogram relationship. NIST.
- unified atomic mass unit. sizes.com.
| http://www.newworldencyclopedia.org/entry/Atomic_mass | 13
62 | Measuring round trip return timing is fundamental to radar, but it can be difficult to distinguish returns from the target of interest and other objects or background located at similar distances. The use of Doppler processing allows another characteristic of the return to be used – relative velocity. Doppler processing became possible with digital computers and today, nearly all radar systems incorporate Doppler processing.
By measuring the Doppler rate, the radar is able to measure the relative velocity of all objects returning echoes to the radar system – whether planes, vehicles, or ground features. Doppler filtering can be used to discriminate between objects moving at different relative velocities. An example is airborne radar trying to track a moving vehicle on the ground. Since the ground returns will be at the same range as the vehicle, the difference in velocity will be the means of discrimination.
The relationship between wavelength and frequency is:
λ = v / f
f = wave frequency (Hz or cycles per second)
λ = wavelength (meters)
v = speed of light (approximately 3 x 108 meters/second)
What happens in a radar system is that the pulse frequency is modified by the process of being reflected by a moving object. Consider the transmission of a sinusoidal wave. The distance from the crest of each wave to the next is the wavelength, which is inversely proportional to the frequency.
Each successive wave is reflected from the target object of interest. When this object is moving towards the radar system, the next wave crest reflected has a shorter round trip distance to travel, from the radar to the target and back to the radar. This is because the target has moved closer in the interval of time between the previous and current wave crest.
As long as this motion continues, the distance between the arriving wave crests is shorter than the distance between the transmitted wave crests. Since frequency is inversely proportional to wavelength, the frequency of the sinusoidal wave appears to have increased. If the target object is moving away from the radar system, then the opposite happens. Each successive wave crest has a longer round trip distance to travel, so the time between arrival of receive wave crests is lengthened, resulting in a longer (larger) wavelength, and a lower frequency.
Figure 1. Doppler frequency shifting
This effect only applies to the motion relative to the radar and the target object. If the object is moving at right angles to the radar there will be no Doppler frequency shift. An example of this would be airborne radar directed at the ground immediately below the aircraft. Assuming level terrain and the aircraft is at a constant altitude, the Doppler shift would be zero, as there is no change in the distance between the plane and ground.
If the radar is ground-based, then all Doppler frequency shifts will be due to the target object motion. If the radar is a vehicle or airborne-based, then the Doppler frequency shifts will be due to the relative motion between the radar and target object.
This can be of great advantage in a radar system. By binning the receive echoes both over range and Doppler frequency offset, target speed as well as range can be determined. Also, this allows easy discrimination between moving objects, such as an aircraft or vehicle, and the back ground clutter, which is generally stationary.
For example, imagine there is a radar operating in the X band at 10 GHz (λ = 0.03m or 3cm). The radar is airborne, traveling at 500 mph, is tracking a target ahead moving at 800 mph in the same direction. In this case, the speed differential is –300 mph, or –134 m/s.
Another target is traveling head on toward the airborne radar at 400 mph. This gives a speed differential of 900 mph, or 402 m/s. The Doppler frequency shift can be calculated as follows:
First target Doppler shift = 2 (–134m/s) / (0.03m) = –8.93 kHz
Second target Doppler shift = 2 (402m/s) / (0.03m) = 26.8 kHz
The receive signal will be offset from 10 GHz by the Doppler frequency. Notice that the Doppler shift is negative when the object is moving away (opening range) from the radar, and is positive when the object is moving towards the radar (closing range).
Pulsed frequency spectrum
For this to be of any use, the Doppler shift must be measured. First, the spectral representation of the pulse must be considered.
The frequency response of an infinite train of pulses is composed of discrete spectral lines in the envelope of the pulse frequency spectrum. The spectrum repeats at intervals of the PRF.
Figure 2. Pulse frequency spectrum
What is important is that this will impose restrictions on the detectable Doppler frequency shifts. In order to unambiguously identify the Doppler frequency shift, it must be less than the PRF frequency. Doppler frequency shifts greater than this will alias to a lower Doppler frequency. This ambiguity is similar to radar range returns beyond the range of the PRF interval time, as they alias into lower range bins.
Doppler frequency detection is performed by using a bank of narrow digital filters, with overlapping frequency bandwidth (so there are no nulls or frequencies that could go undetected). This is done separately for each range bin. Therefore, at each allowable range, Doppler filtering is applied. Just as the radar looks for peaks from the matched filter detector at every range bin, within every range it will test across the Doppler frequency band to determine the Doppler frequency offset in the receive pulse.
Doppler ambiguities can occur if the Doppler range is larger than the PRF. For example, in military airborne radar, the fastest closing rates will be with targets approaching, as both speeds of the radar-bearing aircraft and the target aircraft are summed. This should assume the maximum speed of both aircraft.
The highest opening rates might be when a target is flying away from the radar-bearing aircraft. Here, the radar-bearing aircraft is assumed to be traveling at minimum speed, as well as the target aircraft flying at maximum speed. It is also assumed that the target aircraft is flying a large angle θ from the radar-bearing aircraft flight path, which further reduces the radar-bearing aircraft speed in the direction of the target.
The maximum positive Doppler frequency (fastest closing rate) at 10 GHz / 3 cm is:
Radar –bearing aircraft maximum speed: 1200 mph = 536 m/s
Target aircraft maximum speed: 1200 mph = 536 m/s
Maximum positive Doppler = 2 (1072m/s) / (0.03m) = 71.5 kHz
The maximum negative Doppler frequency (fastest opening rate) at 10 GHz / 3 cm is:
Radar-bearing aircraft minimum speed: 300 mph = 134 m/s
Effective radar-bearing aircraft minimum speed with θ = 60 degree angle from target track (sin (60) = 0.5): 150 mph = 67 m/s
Target aircraft maximum speed: 1200 mph = 536 m/s
Maximum negative Doppler = 2 (67–536 m/s) / (0.03m) = 31.3 kHz
This results in a total Doppler range of 71.5 + 31.3 = 102.8 kHz. Unless the PRF exceeds 102.8 kHz, there will be aliasing of the detected Doppler rates, and the associated ambiguities.
If the PRF is assumed at 80 kHz, then Doppler aliasing will occur as shown in Figure 3.
Figure 3. Doppler aliasing example
There are two categories of radar clutter. There is mainlobe clutter and sidelobe clutter. Mainlobe clutter occurs when there are undesirable returns in the mainlobe or within the radar beamwidth. This usually occurs when the mainlobe intersects the ground. This can occur because the radar is aimed downward (negative elevation), there is higher ground such as mountains in the radar path, or even if the radar beam is aimed level and as the beam spreads with distance hits intersects the ground. Because the area of ground in the radar beam is often large, the ground return can be much larger than target returns.
Sidelobe clutter is unwanted returns that are coming from a direction outside the mainlobe. Sidelobe clutter is usually attenuated by 50 dB or more, due to the antenna directional selectivity or directional radiation pattern. A very common source of sidelobe clutter is ground return. When radar is pointed toward the horizon, there is a very large area of ground area covered by the sidelobes in the negative elevation region. The large reflective area covered by the sidelobe can cause significant sidelobe returns despite the antenna attenuation.
Different types of terrain will have a different “reflectivity”, which is a measure of how much radar energy is reflected back. This also depends on the angle of the radar energy relative to the ground surface. Some surfaces, like smooth water, reflect most of the radar energy away from the radar transmitter, particularly at shallow angles. A desert would reflect more of the energy back to the radar, while wooded terrain would reflect even more. Man made surfaces, such as in urban areas; tend to reflect the most energy back to the radar system.
Often targets are moving, and Doppler processing is an effective method to distinguish the target from the background clutter of the ground. However, the Doppler frequency of the ground will can be non-zero if the radar is in motion. Different points on the ground will have different Doppler returns, depending on how far ahead or behind the radar-bearing aircraft that a particular patch of ground is located. Doppler sidelobe clutter can be present over a wide range of Doppler frequencies.
Mainlobe clutter is more likely to be concentrated at a specific frequency, since the mainlobe is far more concentrated (typically 3 to 6 degrees of beam width), so the patch of ground illuminated is likely to be far smaller and all the returns at or near the same relative velocity.
A simple example (as shown in Figure 4) can help illustrate how the radar can combine range and Doppler returns to obtain a more complete picture of the target environment.
Figure 4. Interpreting Doppler radar returns
Figure 4 illustrates unambiguous range and Doppler returns. This assumes the PRF is low enough to receive all the returns in a single PRF interval and the PRF is high enough to include all Doppler return frequencies.
The ground return comes though the antenna sidelobe, known as sidelobe clutter. The reason ground return is often high is due to the amount of reflective area at close range, which results in a strong return despite the sidelobe attenuation of the antenna. The ground return will be at short range, essentially the altitude of the aircraft. In the mainlobe, the range return of the mountains and closing target are close together, due to similar ranges. It is easy to see how if just using the range return, it is easy for a target return to be lost in high terrain returns, known as mainlobe clutter.
The Doppler return gives a different view. The ground return is centered around 0 Hz. The ground slightly ahead of the radar-bearing plane is at slightly positive relative velocity, and the ground behind the plane is at slightly negative relative velocity. As the horizontal distance from the radar-bearing plane increases, the ground return weakens due to increased range.
The Doppler return from mountain terrain is now very distinct from the nearby closing aircraft target. The mountain terrain is moving at a relative velocity equal to the radar-bearing plane’s velocity. The closing aircraft relative velocity is the sum of both aircrafts velocity, which is much higher, producing a Doppler return with a high velocity. The other target aircraft, which is slowly opening the range with radar-bearing aircraft, is represented as a negative Doppler frequency return.
Different PRF frequencies have different advantages and disadvantages. The following discussion summarizes the trade-offs.
Low PRF operation is generally used for maximum range detection. It usually requires a high power transmit power, in order to receive returns of sufficient power for detection at a long range. To get the highest power, long transmit pulses are sent, and correspondingly long matched filter processing (or pulse compression) is used. This mode is useful for precise range determination. Strong sidelobe returns can often be determined by their relatively close ranges (ground area near radar system) and filtered out.
Disadvantages are that Doppler processing is relatively ineffective due to so many overlapping Doppler frequency ranges. This limits the ability to detect moving objects in the presence of heavy background clutter, such as moving objects on the ground.
High PRF operation spreads out the frequency spectrum of the receive pulse, allowing a full Doppler spectrum without aliasing or ambiguous Doppler measurements. A high PRF can be used to determine Doppler frequency and therefore relative velocity for all targets. It can also be used when a moving object of interest is obscured by a stationary mass, such as the ground or a mountain, in the radar return. The unambiguous Doppler measurements will make a moving target stand out from a stationary background. This is called mainlobe clutter rejection or filtering. Another benefit is that since more pulses are transmitted in a given interval of time, higher average transmit power levels can be achieved. This can help improve the detection range of a radar system in high PRF mode.
Medium PRF operation is a compromise. Both range and Doppler measurements are ambiguous, but each will not be aliased or folded as severely as the more extreme low or high PRF modes. This can provide a good overall capability for detecting both range and moving targets. However, the folding of the ambiguous regions can also bring a lot of clutter into both range and Doppler measurements. Small shifts in PRFs can be used to resolve ambiguities, as has been discussed, but if there is too much clutter, the signals may be undetectable or obscured in both range and Doppler.
One solution is to use the high PRF mode to identify moving targets, especially fast moving targets, and then switch to a low PRF operation to determine range. Another alternative is to use a technique called FM ranging. In this mode, the transmit duty cycle becomes 100% and the radar transmits and receives continuously.
Figure 5. FM ranging
The transmission is a continuously increasing frequency signal, and then at the maximum frequency, abruptly begins to continuously decrease in frequency until it reaches the minimum frequency. This cycle then repeats. The frequency over time looks like a “saw tooth wave”. The receiver can operate while during transmit operation, as the receiver is detecting time delayed versions of the transmit signal, which is at a different frequency than current transmit operation. Therefore, the receiver is not desensitized by the transmitter’s high power at the received signal frequency.
Through Doppler detection of what frequency is received, and knowing the transmitter frequency ramp timing, can be used to determine round-trip delay time, and therefore range. And the receive frequency “saw tooth” will be offset by the Doppler frequency. On a rapidly closing target, the receive frequencies will be all offset by a positive fDoppler
, which can be measured by the receiver once the peak receive frequency is detected.
In summary, Doppler processing enables radar systems to discriminate in target velocity, as well as range and angle of the target. This is critical to distinguish moving targets from the background clutter. Doppler processing depends on frequency domain processing, which can be efficiently computed using an algorithm known as the Fast Fourier Transform, or FFT. In Part 3
of the series on radar, an examination of how beamforming, pulse compression and Doppler processing can be implemented in radar systems will be examined.
Also see Part 1
of this five-part mini-series on “Radar Basics”.
About the author
As senior DSP technical marketing manager, Michael Parker is responsible for Altera’s DSP-related IP, and is also involved in optimizing FPGA architecture planning for DSP applications.
Mr. Parker joined Altera in January 2007, and has over 20 years of DSP wireless engineering design experience with Alvarion, Soma Networks, TCSI, Stanford Telecom and several startup companies. | http://www.eetimes.com/design/programmable-logic/4216419/Radar-Basics---Part-2--Pulse-Doppler-Radar | 13 |
117 | Computers are often used to automate repetitive tasks. Repeating identical or similar tasks without making errors is something that computers do well and people do poorly.
Repeated execution of a sequence of statements is called iteration. Because iteration is so common, Python provides several language features to make it easier. We’ve already seen the for statement in Chapter 3. This is a very common form of iteration in Python. In this chapter we are also going to look at the while statement — another way to have your program do iteration.
Recall that the for loop processes each item in a list. Each item in turn is (re-)assigned to the loop variable, and the body of the loop is executed. We saw this example in an earlier chapter.
We have also seen iteration paired with the update idea to form the accumulator pattern. For example, to compute the sum of the first n integers, we could create a for loop using the range to produce the numbers 1 thru n. Using the accumulator pattern, we can start with a running total and on each iteration, add the current value of the loop variable. A function to compute this sum is shown below.
To review, the variable theSum is called the accumulator. It is initialized to zero before we start the loop. The loop variable, aNumber will take on the values produced by the range(1,aBound+1) function call. Note that this produces all the integers starting from 1 up to the value of aBound. If we had not added 1 to aBound, the range would have stopped one value short since range does not include the upper bound.
The assignment statement, theSum = theSum + aNumber, updates theSum each time thru the loop. This accumulates the running total. Finally, we return the value of the accumulator.
There is another Python statement that can also be used to build an iteration. It is called the while statement. The while statement provides a much more general mechanism for iterating. Similar to the if statement, it uses a boolean expression to control the flow of execution. The body of while will be repeated as long as the controlling boolean expression evaluates to True.
The following figure shows the flow of control.
We can use the while loop to create any type of iteration we wish, including anything that we have previously done with a for loop. For example, the program in the previous section could be rewritten using while. Instead of relying on the range function to produce the numbers for our summation, we will need to produce them ourselves. To to this, we will create a variable called aNumber and initialize it to 1, the first number in the summation. Every iteration will add aNumber to the running total until all the values have been used. In order to control the iteration, we must create a boolean expression that evaluates to True as long as we want to keep adding values to our running total. In this case, as long as aNumber is less than or equal to the bound, we should keep going.
Here is a new version of the summation program that uses a while statement.
You can almost read the while statement as if it were in natural language. It means, while aNumber is less than or equal to aBound, continue executing the body of the loop. Within the body, each time, update theSum using the accumulator pattern and increment aNumber. After the body of the loop, we go back up to the condition of the while and reevaluate it. When aNumber becomes greater than aBound, the condition fails and flow of control continues to the return statement.
The same program in codelens will allow you to observe the flow of execution.
The names of the variables have been chosen to help readability.
More formally, here is the flow of execution for a while statement:
The body consists of all of the statements below the header with the same indentation.
This type of flow is called a loop because the third step loops back around to the top. Notice that if the condition is False the first time through the loop, the statements inside the loop are never executed.
The body of the loop should change the value of one or more variables so that eventually the condition becomes False and the loop terminates. Otherwise the loop will repeat forever. This is called an infinite loop. An endless source of amusement for computer scientists is the observation that the directions on shampoo, lather, rinse, repeat, are an infinite loop.
In the case shown above, we can prove that the loop terminates because we know that the value of n is finite, and we can see that the value of v increments each time through the loop, so eventually it will have to exceed n. In other cases, it is not so easy to tell.
Introduction of the while statement causes us to think about the types of iteration we have seen. The for statement will always iterate through a sequence of values like the list of names for the party or the list of numbers created by range. Since we know that it will iterate once for each value in the collection, it is often said that a for loop creates a definite iteration because we definitely know how many times we are going to iterate. On the other hand, the while statement is dependent on a condition that needs to evaluate to False in order for the loop to terminate. Since we do not necessarily know when this will happen, it creates what we call indefinite iteration. Indefinite iteration simply means that we don’t know how many times we will repeat but eventually the condition controlling the iteration will fail and the iteration will stop. (Unless we have an infinite loop which is of course a problem)
What you will notice here is that the while loop is more work for you — the programmer — than the equivalent for loop. When using a while loop you have to control the loop variable yourself. You give it an initial value, test for completion, and then make sure you change something in the body so that the loop terminates.
So why have two kinds of loop if for looks easier? This next example shows an indefinite iteration where we need the extra power that we get from the while loop.
Check your understanding
7.2.1: True or False: You can rewrite any for-loop as a while-loop.
7.2.2: The following code contains an infinite loop. Which is the best explanation for why the loop does not terminate?
n = 10 answer = 1 while ( n > 0 ): answer = answer + n n = n + 1 print answer
Suppose we want to entertain ourselves by watching a turtle wander around randomly inside the screen. When we run the program we want the turtle and program to behave in the following way:
Notice that we cannot predict how many times the turtle will need to flip the coin before it wanders out of the screen, so we can’t use a for loop in this case. In fact, although very unlikely, this program might never end, that is why we call this indefinite iteration.
So based on the problem description above, we can outline a program as follows:
create a window and a turtle while the turtle is still in the window: generate a random number between 0 and 1 if the number == 0 (heads): turn left else: turn right move the turtle forward 50
Now, probably the only thing that seems a bit confusing to you is the part about whether or not the turtle is still in the screen. But this is the nice thing about programming, we can delay the tough stuff and get something in our program working right away. The way we are going to do this is to delegate the work of deciding whether the turtle is still in the screen or not to a boolean function. Lets call this boolean function isInScreen We can write a very simple version of this boolean function by having it always return True, or by having it decide randomly, the point is to have it do something simple so that we can focus on the parts we already know how to do well and get them working. Since having it always return true would not be a good idea we will write our version to decide randomly. Lets say that there is a 90% chance the turtle is still in the window and 10% that the turtle has escaped.
Now we have a working program that draws a random walk of our turtle that has a 90% chance of staying on the screen. We are in a good position, because a large part of our program is working and we can focus on the next bit of work – deciding whether the turtle is inside the screen boundaries or not.
We can find out the width and the height of the screen using the window_width and window_height methods of the screen object. However, remember that the turtle starts at position 0,0 in the middle of the screen. So we never want the turtle to go farther right than width/2 or farther left than negative width/2. We never want the turtle to go further up than height/2 or further down than negative height/2. Once we know what the boundaries are we can use some conditionals to check the turtle position against the boundaries and return False if the turtle is outside or True if the turtle is inside.
Once we have computed our boundaries we can get the current position of the turtle and then use conditionals to decide. Here is one implementation:
def isInScreen(wn,t): leftBound = - wn.window_width()/2 rightBound = wn.window_width()/2 topBound = wn.window_height()/2 bottomBound = -wn.window_height()/2 turtleX = t.xcor() turtleY = t.ycor() stillIn = True if turtleX > rightBound or turtleX < leftBound: stillIn = False if turtleY > topBound or turtleY < bottomBound: stillIn = False return stillIn
There are lots of ways that the conditional could be written. In this case we have given stillIn the default value of True and use two if statements to set the value to False. You could rewrite this to use nested conditionals or elif statements and set stillIn to True in an else clause.
Here is the full version of our random walk program.
We could have written this program without using a boolean function, You might try to rewrite it using a complex condition on the while statement, but using a boolean function makes the program much more readable and easier to understand. It also gives us another tool to use if this was a larger program and we needed to have a check for whether the turtle was still in the screen in another part of the program. Another advantage is that if you ever need to write a similar program, you can reuse this function with confidence the next time you need it. Breaking up this program into a couple of parts is another example of functional decomposition.
Check your understanding
7.3.1: Which type of loop can be used to perform the following iteration: You choose a positive integer at random and then print the numbers from 1 up to and including the selected integer.
7.3.2: In the random walk program in this section, what does the isInScreen function do?
As another example of indefinite iteration, let’s look at a sequence that has fascinated mathematicians for many years. The rule for creating the sequence is to start from some given n, and to generate the next term of the sequence from n, either by halving n, whenever n is even, or else by multiplying it by three and adding 1 when it is odd. The sequence terminates when n reaches 1.
This Python function captures that algorithm. Try running this program several times supplying different values for n.
The condition for this loop is n != 1. The loop will continue running until n == 1 (which will make the condition false).
Each time through the loop, the program prints the value of n and then checks whether it is even or odd using the remainder operator. If it is even, the value of n is divided by 2 using integer division. If it is odd, the value is replaced by n * 3 + 1. Try some other examples.
Since n sometimes increases and sometimes decreases, there is no obvious proof that n will ever reach 1, or that the program terminates. For some particular values of n, we can prove termination. For example, if the starting value is a power of two, then the value of n will be even each time through the loop until it reaches 1.
You might like to have some fun and see if you can find a small starting number that needs more than a hundred steps before it terminates.
Particular values aside, the interesting question is whether we can prove that this sequence terminates for all values of n. So far, no one has been able to prove it or disprove it!
Think carefully about what would be needed for a proof or disproof of the hypothesis “All positive integers will eventually converge to 1”. With fast computers we have been able to test every integer up to very large values, and so far, they all eventually end up at 1. But this doesn’t mean that there might not be some as-yet untested number which does not reduce to 1.
You’ll notice that if you don’t stop when you reach one, the sequence gets into its own loop: 1, 4, 2, 1, 4, 2, 1, 4, and so on. One possibility is that there might be other cycles that we just haven’t found.
Choosing between for and while
Use a for loop if you know the maximum number of times that you’ll need to execute the body. For example, if you’re traversing a list of elements, or can formulate a suitable call to range, then choose the for loop.
So any problem like “iterate this weather model run for 1000 cycles”, or “search this list of words”, “check all integers up to 10000 to see which are prime” suggest that a for loop is best.
By contrast, if you are required to repeat some computation until some condition is met, as we did in this 3n + 1 problem, you’ll need a while loop.
As we noted before, the first case is called definite iteration — we have some definite bounds for what is needed. The latter case is called indefinite iteration — we are not sure how many iterations we’ll need — we cannot even establish an upper bound!
Check your understanding
7.4.1: Consider the code that prints the 3n+1 sequence in ActiveCode box 6. Will the while loop in this code always terminate for any value of n?
Loops are often used in programs that compute numerical results by starting with an approximate answer and iteratively improving it.
For example, one way of computing square roots is Newton’s method. Suppose that you want to know the square root of n. If you start with almost any approximation, you can compute a better approximation with the following formula:
better = 1/2 * (approx + n/approx)
Execute this algorithm a few times using your calculator. Can you see why each iteration brings your estimate a little closer? One of the amazing properties of this particular algorithm is how quickly it converges to an accurate answer.
The following implementation of Newton’s method requires two parameters. The first is the value whose square root will be approximated. The second is the number of times to iterate the calculation yielding a better result.
You may have noticed that the second and third calls to newtonSqrt in the previous example both returned the same value for the square root of 10. Using 10 iterations instead of 5 did not improve the the value. In general, Newton’s algorithm will eventually reach a point where the new approximation is no better than the previous. At that point, we could simply stop. In other words, by repeatedly applying this formula until the better approximation gets close enough to the previous one, we can write a function for computing the square root that uses the number of iterations necessary and no more.
This implementation, shown in codelens, uses a while condition to execute until the approximation is no longer changing. Each time thru the loop we compute a “better” approximation using the formula described earlier. As long as the “better” is different, we try again. Step thru the program and watch the approximations get closer and closer.
The while statement shown above uses comparison of two floating point numbers in the condition. Since floating point numbers are themselves approximation of real numbers in mathematics, it is often better to compare for a result that is within some small threshold of the value you are looking for.
Newton’s method is an example of an algorithm: it is a mechanical process for solving a category of problems (in this case, computing square roots).
It is not easy to define an algorithm. It might help to start with something that is not an algorithm. When you learned to multiply single-digit numbers, you probably memorized the multiplication table. In effect, you memorized 100 specific solutions. That kind of knowledge is not algorithmic.
But if you were lazy, you probably cheated by learning a few tricks. For example, to find the product of n and 9, you can write n - 1 as the first digit and 10 - n as the second digit. This trick is a general solution for multiplying any single-digit number by 9. That’s an algorithm!
Similarly, the techniques you learned for addition with carrying, subtraction with borrowing, and long division are all algorithms. One of the characteristics of algorithms is that they do not require any intelligence to carry out. They are mechanical processes in which each step follows from the last according to a simple set of rules.
On the other hand, understanding that hard problems can be solved by step-by-step algorithmic processess is one of the major simplifying breakthroughs that has had enormous benefits. So while the execution of the algorithm may be boring and may require no intelligence, algorithmic or computational thinking is having a vast impact. It is the process of designing algorithms that is interesting, intellectually challenging, and a central part of what we call programming.
Some of the things that people do naturally, without difficulty or conscious thought, are the hardest to express algorithmically. Understanding natural language is a good example. We all do it, but so far no one has been able to explain how we do it, at least not in the form of a step-by-step mechanical algorithm.
One of the things loops are good for is generating tabular data. Before computers were readily available, people had to calculate logarithms, sines and cosines, and other mathematical functions by hand. To make that easier, mathematics books contained long tables listing the values of these functions. Creating the tables was slow and boring, and they tended to be full of errors.
When computers appeared on the scene, one of the initial reactions was, “This is great! We can use the computers to generate the tables, so there will be no errors.” That turned out to be true (mostly) but shortsighted. Soon thereafter, computers and calculators were so pervasive that the tables became obsolete.
Well, almost. For some operations, computers use tables of values to get an approximate answer and then perform computations to improve the approximation. In some cases, there have been errors in the underlying tables, most famously in the table the Intel Pentium processor chip used to perform floating-point division.
Although a power of 2 table is not as useful as it once was, it still makes a good example of iteration. The following program outputs a sequence of values in the left column and 2 raised to the power of that value in the right column:
The string '\t' represents a tab character. The backslash character in '\t' indicates the beginning of an escape sequence. Escape sequences are used to represent invisible characters like tabs and newlines. The sequence \n represents a newline.
An escape sequence can appear anywhere in a string. In this example, the tab escape sequence is the only thing in the string. How do you think you represent a backslash in a string?
As characters and strings are displayed on the screen, an invisible marker called the cursor keeps track of where the next character will go. After a print function, the cursor normally goes to the beginning of the next line.
The tab character shifts the cursor to the right until it reaches one of the tab stops. Tabs are useful for making columns of text line up, as in the output of the previous program. Because of the tab characters between the columns, the position of the second column does not depend on the number of digits in the first column.
Check your understanding
7.7.1: What is the difference between a tab (\t) and a sequence of spaces?
Two dimensional tables have both rows and columns. You have probably seen many tables like this if you have used a spreadsheet program. Another object that is organized in rows and columns is a digital image. In this section we will explore how iteration allows us to manipulate these images.
A digital image is a finite collection of small, discrete picture elements called pixels. These pixels are organized in a two-dimensional grid. Each pixel represents the smallest amount of picture information that is available. Sometimes these pixels appear as small “dots”.
Each image (grid of pixels) has its own width and its own height. The width is the number of columns and the height is the number of rows. We can name the pixels in the grid by using the column number and row number. However, it is very important to remember that computer scientists like to start counting with 0! This means that if there are 20 rows, they will be named 0,1,2, and so on thru 19. This will be very useful later when we iterate using range.
In the figure below, the pixel of interest is found at column c and row r.
Each pixel of the image will represent a single color. The specific color depends on a formula that mixes various amounts of three basic colors: red, green, and blue. This technique for creating color is known as the RGB Color Model. The amount of each color, sometimes called the intensity of the color, allows us to have very fine control over the resulting color.
The minimum intensity value for a basic color is 0. For example if the red intensity is 0, then there is no red in the pixel. The maximum intensity is 255. This means that there are actually 256 different amounts of intensity for each basic color. Since there are three basic colors, that means that you can create 2563 distinct colors using the RGB Color Model.
Here are the red, green and blue intensities for some common colors. Note that “Black” is represented by a pixel having no basic color. On the other hand, “White” has maximum values for all three basic color components.
Color Red Green Blue Red 255 0 0 Green 0 255 0 Blue 0 0 255 White 255 255 255 Black 0 0 0 Yellow 255 255 0 Magenta 255 0 255
In order to manipulate an image, we need to be able to access individual pixels. This capability is provided by a module called image. The image module defines two classes: Image and Pixel.
Each Pixel object has three attributes: the red intensity, the green intensity, and the blue intensity. A pixel provides three methods that allow us to ask for the intensity values. They are called getRed, getGreen, and getBlue. In addition, we can ask a pixel to change an intensity value using its setRed, setGreen, and setBlue methods.
Method Name Example Explanation Pixel(r,g,b) Pixel(20,100,50) Create a new pixel with 20 red, 100 green, and 50 blue. getRed() r = p.getRed() Return the red component intensity. getGreen() r = p.getGreen() Return the green component intensity. getBlue() r = p.getBlue() Return the blue component intensity. setRed() p.setRed(100) Set the red component intensity to 100. setGreen() p.setGreen(45) Set the green component intensity to 45. setBlue() p.setBlue(156) Set the blue component intensity to 156.
In the example below, we first create a pixel with 45 units of red, 76 units of green, and 200 units of blue. We then print the current amount of red, change the amount of red, and finally, set the amount of blue to be the same as the current amount of green.
Check your understanding
188.8.131.52: If you have a pixel whose RGB value is (20, 0, 0), what color will this pixel appear to be?
To access the pixels in a real image, we need to first create an Image object. Image objects can be created in two ways. First, an Image object can be made from the files that store digital images. The image object has an attribute corresponding to the width, the height, and the collection of pixels in the image.
It is also possible to create an Image object that is “empty”. An EmptyImage has a width and a height. However, the pixel collection consists of only “White” pixels.
We can ask an image object to return its size using the getWidth and getHeight methods. We can also get a pixel from a particular location in the image using getPixel and change the pixel at a particular location using setPixel.
The Image class is shown below. Note that the first two entries show how to create image objects. The parameters are different depending on whether you are using an image file or creating an empty image.
Method Name Example Explanation Image(filename) img = image.Image(“cy.png”) Create an Image object from the file cy.png. EmptyImage() img = image.EmptyImage(100,200) Create an Image object that has all “White” pixels getWidth() w = img.getWidth() Return the width of the image in pixels. getHeight() h = img.getHeight() Return the height of the image in pixels. getPixel(col,row) p = img.getPixel(35,86) Return the pixel at column 35, row 86d. setPixel(col,row,p) img.setPixel(100,50,mp) Set the pixel at column 100, row 50 to be mp.
Consider the image shown below. Assume that the image is stored in a file called “luther.jpg”. Line 2 opens the file and uses the contents to create an image object that is referred to by img. Once we have an image object, we can use the methods described above to access information about the image or to get a specific pixel and check on its basic color intensities.
When you run the program you can see that the image has a width of 400 pixels and a height of 244 pixels. Also, the pixel at column 45, row 55, has RGB values of 165, 161, and 158. Try a few other pixel locations by changing the getPixel arguments and rerunning the program.
Check your understanding
184.108.40.206: In the example in ActiveCode box 10, what are the RGB values of the pixel at row 100, column 30?
Image processing refers to the ability to manipulate the individual pixels in a digital image. In order to process all of the pixels, we need to be able to systematically visit all of the rows and columns in the image. The best way to do this is to use nested iteration.
Nested iteration simply means that we will place one iteration construct inside of another. We will call these two iterations the outer iteration and the inner iteration. To see how this works, consider the simple iteration below.
for i in range(5): print(i)
We have seen this enough times to know that the value of i will be 0, then 1, then 2, and so on up to 4. The print will be performed once for each pass. However, the body of the loop can contain any statements including another iteration (another for statement). For example,
for i in range(5): for j in range(3): print(i,j)
The for i iteration is the outer iteration and the for j iteration is the inner iteration. Each pass thru the outer iteration will result in the complete processing of the inner iteration from beginning to end. This means that the output from this nested iteration will show that for each value of i, all values of j will occur.
Here is the same example in activecode. Try it. Note that the value of i stays the same while the value of j changes. The inner iteration, in effect, is moving faster than the outer iteration.
Another way to see this in more detail is to examine the behavior with codelens. Step thru the iterations to see the flow of control as it occurs with the nested iteration. Again, for every value of i, all of the values of j will occur. You can see that the inner iteration completes before going on to the next pass of the outer iteration.
Our goal with image processing is to visit each pixel. We will use an iteration to process each row. Within that iteration, we will use a nested iteration to process each column. The result is a nested iteration, similar to the one seen above, where the outer for loop processes the rows, from 0 up to but not including the height of the image. The inner for loop will process each column of a row, again from 0 up to but not including the width of the image.
The resulting code will look like the following. We are now free to do anything we wish to each pixel in the image.
for col in range(img.getWidth()): for row in range(img.getHeight()): #do something with the pixel at position (col,row)
One of the easiest image processing algorithms will create what is known as a negative image. A negative image simply means that each pixel will be the opposite of what it was originally. But what does opposite mean?
In the RGB color model, we can consider the opposite of the red component as the difference between the original red and 255. For example, if the original red component was 50, then the opposite, or negative red value would be 255-50 or 205. In other words, pixels with alot of red will have negatives with little red and pixels with little red will have negatives with alot. We do the same for the blue and green as well.
The program below implements this algorithm using the previous image. Run it to see the resulting negative image. Note that there is alot of processing taking place and this may take a few seconds to complete. In addition, here are two other images that you can use. Change the name of the file in the image.Image() call to see how these images look as negatives. Also, note that there is an exitonclick method call at the very end which will close the window when you click on it. This will allow you to “clear the screen” before drawing the next negative.cy.png goldygopher.png
Lets take a closer look at the code. After importing the image module, we create two image objects. The first, img, represents a typical digital photo. The second, newimg, is an empty image that will be “filled in” as we process the original pixel by pixel. Note that the width and height of the empty image is set to be the same as the width and height of the original.
Lines 8 and 9 create the nested iteration that we discussed earlier. This allows us to process each pixel in the image. Line 10 gets an individual pixel.
Lines 12-14 create the negative intensity values by extracting the original intensity from the pixel and subtracting it from 255. Once we have the newred, newgreen, and newblue values, we can create a new pixel (Line 16).
Finally, we need to insert the new pixel into the empty image in the same location as the original pixel that it came from in the digital photo.
Other pixel manipulation
There are a number of different image processing algorithms that follow the same pattern as shown above. Namely, take the original pixel, extract the red, green, and blue intensities, and then create a new pixel from them. The new pixel is inserted into an empty image at the same location as the original.
For example, you can create a gray scale pixel by averaging the red, green and blue intensities and then using that value for all intensities.
From the gray scale you can create black white by setting a threshold and selecting to either insert a white pixel or a black pixel into the empty image.
You can also do some complex arithmetic and create interesting effects, such as Sepia Tone
You have just passed a very important point in your study of Python programming. Even though there is much more that we will do, you have learned all of the basic building blocks that are necessary to solve many interesting problems. From and algorithm point of view, you can now implement selection and iteration. You can also solve problems by breaking them down into smaller parts, writing functions for those parts, and then calling the functions to complete the implementation. What remains is to focus on ways that we can better represent our problems in terms of the data that we manipulate. We will now turn our attention to studying the main data collections provided by Python.
Check your understanding
220.127.116.11: What will the following nested for-loop print? (Note, if you are having trouble with this question, review CodeLens 3).
for i in range(3): for j in range(2): print(i,j)a.
0 0 0 1 1 0 1 1 2 0 2 1b.
0 0 1 0 2 0 0 1 1 1 2 1c.
0 0 0 1 0 2 1 0 1 1 1 2d.
0 1 0 1 0 1
18.104.22.168: What would the image produced from ActiveCode box 12 look like if you replaced the lines:
newred = 255-p.getRed() newgreen = 255-p.getGreen() newblue = 255-p.getBlue()with the lines:
newred = p.getRed() newgreen = 0 newblue = 0
If you want to try some image processing on your own, outside of the textbook you can do so using the cImage module. You can download cImage.py from The github page . If you put cImage.py in the same folder as your program you can then do the following to be fully compatible with the code in this book.
import cImage as image img = image.Image("myfile.gif")
One important caveat about using cImage.py is that it will only work with GIF files unless you also install the Python Image Library. The easiest version to install is called Pillow. If you have the pip command installed on your computer this is really easy to install, with pip install pillow otherwise you will need to follow the instructions on the Python Package Index page. With Pillow installed you will be able to use almost any kind of image that you download.
This chapter showed us how to sum a list of items, and how to count items. The counting example also had an if statement that let us only count some selected items. In the previous chapter we also showed a function find_first_2_letter_word that allowed us an “early exit” from inside a loop by using return when some condition occurred. We now also have break to exit a loop (but not the enclosing function, and continue to abandon the current iteration of the loop without ending the loop.
Composition of list traversal, summing, counting, testing conditions and early exit is a rich collection of building blocks that can be combined in powerful ways to create many functions that are all slightly different.
The first six questions are typical functions you should be able to write using only these building blocks.
Add a print function to Newton’s sqrt function that prints out better each time it is calculated. Call your modified function with 25 as an argument and record the results.
Write a function print_triangular_numbers(n) that prints out the first n triangular numbers. A call to print_triangular_numbers(5) would produce the following output:
1 1 2 3 3 6 4 10 5 15
(hint: use a web search to find out what a triangular number is.)
Write a function, is_prime, which takes a single integer argument and returns True when the argument is a prime number and False otherwise.
Modify the the Random turtle walk program so that the turtle turns around when it hits the wall and goes the other direction. This bouncing off the walls should continue until the turtle has hit the wall 4 times.
Modify the previous program so that you have two turtles each with a random starting location. Keep the turtles moving and bouncing off the walls until they collide with each other.
Modify the previous program so that rather than a left or right turn the angle of the turn is determined randomly at each step. When the turtle hits the wall you must calculate the correct angle for the bounce.
Write a function to remove all the red from an image.
Write a function to convert the image to grayscale.
Write a function to convert an image to black and white.
Sepia Tone images are those brownish colored images that may remind you of times past. The formula for creating a sepia tone is as follows:
newR = (R × 0.393 + G × 0.769 + B × 0.189) newG = (R × 0.349 + G × 0.686 + B × 0.168) newB = (R × 0.272 + G × 0.534 + B × 0.131)
Write a function to convert an image to sepia tone. Hint: Remember that rgb values must be integers between 0 and 255.
Write a function to uniformly shrink or enlarge an image. Your function should take an image along with a scaling factor. To shrink the image the scale factor should be between 0 and 1 to enlarge the image the scaling factor should be greater than 1.
Write a function to rotate an image. Your function should take an image object along with the number of degrees to rotate. The rotational degrees can be positive or negative, and should be multiples of 90.
After you have scaled an image too much it looks blocky. One way of reducing the blockiness of the image is to replace each pixel with the average values of the pixels around it. This has the effect of smoothing out the changes in color. Write a function that takes an image as a parameter and smooths the image. Your function should return a new image that is the same as the old but smoothed.
When you scan in images using a scanner they may have lots of noise due to dust particles on the image itself or the scanner itself, or the images may even be damaged. One way of eliminating this noise is to replace each pixel by the median value of the pixels surrounding it.
Research the Sobel edge detection algorithm and implement it. | http://interactivepython.org/courselib/static/thinkcspy/MoreAboutIteration/moreiteration.html | 13 |
76 | Working with MathML
MathML is an XML-based markup language for representing mathematics. It was developed by the W3C to provide an effective way to display math in web pages and facilitate the transfer and reuse of mathematical content between applications. The great advantage is that it can encode information about both the meaning and appearance of mathematical notation. This makes it an ideal data format for storing and exchanging mathematical information. For example, a MathML equation can be copied out of a web page and directly pasted into an application like Mathematica for evaluation.
As a common and widely accepted standard for representing mathematics, MathML provides the foundation for many interesting and useful applications. For example, you can use MathML to create dynamic mathematical websites featuring interactive equations, set up a database of technical documents whose contents can be easily searched, indexed, and archived, or develop speech synthesis software for audio rendering of mathematics.
MathML has grown rapidly in popularity since it was first released in 1998, gaining broad support in both industry and academia. It is currently possible to view MathML equations in the leading web browsers, either directly or using freely available plugins. As more tools for authoring, viewing, and processing MathML become available, its importance is only expected to grow.
Wolfram Research was a key participant in the development of MathML and is committed to supporting this important web technology. Mathematica includes full support for MathML 2.0. You can import MathML equations into a Mathematica notebook and evaluate them, or export equations from a notebook as MathML and paste them into an HTML document. There are also several kernel commands for converting between MathML and the boxes and expressions used by Mathematica to represent mathematics.
Syntax of MathML
Since it is an XML application, the syntax rules of MathML are defined by the XML specification. Each MathML expression consists of a series of elements, written in the angle bracket syntax similar to HTML. Each element can take several attributes. The allowed elements and attributes are determined by the MathML DTD.
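For instance, a single identifier token carrying an attribute that controls its rendering follows the usual angle-bracket form. This is a small sketch; mathvariant is one of the attributes the DTD defines for token elements:

<mi mathvariant="bold">x</mi>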
All MathML elements fall into one of three categories: interface elements, presentation elements, and content elements.
Interface elements, such as the top-level math element, determine how a MathML expression is embedded in other XML documents.
Presentation elements encode information about the visual two-dimensional structure of a mathematical expression. For example, the mrow, mfrac, msqrt, and msub elements represent a row, a fraction, a square root, and a subscripted expression, respectively.
Content elements encode information about the logical meaning of a mathematical expression. For example, plus and sin represent addition and the trigonometric sine function, and apply represents the operation of applying a function.
A given equation can be represented in several different ways in MathML:
- Presentation MathML—presentation elements only. It is useful in situations where only the display of mathematics is important. For example, you can use it to include equations in a web page that are intended only for viewing.
- Content MathML—content elements only. It is useful in situations where it is important to encode mathematical meaning. For example, you can use it to post an equation on a web page that readers can copy and paste into Mathematica for evaluation.
- Combined markup—combination of content and presentation elements. It is used when you want to encode both the appearance and meaning of equations. For example, you can use combined markup to specify a nonstandard notation for a common mathematical construct or to associate a specific mathematical meaning with a notation that usually has a different meaning.
Using Mathematica, you can generate presentation, content, or combined markup for any equation.
Presentation MathML consists of about 30 elements and 50 attributes, which encode the visual two-dimensional structure of a mathematical expression. For example, the Mathematica typeset expression would have the following MathML representation.
The entire expression is enclosed in a math element. This must be the root element for every instance of MathML markup. The other presentation elements are:
- mrow—displays its subelements in a horizontal row
- mi—represents an identifier such as the name of a function or variable
- mo—represents an operator or delimiter
Identifiers, operators, and numbers are each represented by different elements because each has slightly different typesetting conventions for fonts, spacing, and so on. For example, variables are typically rendered in an italic font, numbers are displayed in a normal font, and operators are rendered with extra space around them, depending on whether they occur in a prefix, postfix, or infix position.
In addition to the mi, mn, and mo elements, there are presentation elements corresponding to common notational structures such as fractions, square roots, subscripts, superscripts, and matrices. Any given formula can be represented by decomposing it into its constituent parts and replacing each notational construct by the corresponding presentation elements. For example, the typeset expression would have the following MathML representation.
Here, the mfrac, msqrt, and msup elements represent a fraction, a square root, and a superscripted expression, respectively. Each of these elements takes a fixed number of child elements, which have a specific meaning based on their position. These child elements are called arguments. For example, both the mfrac and msup elements take two arguments, with the following syntax.
<mfrac> numerator denominator </mfrac>
<msup> base superscript </msup>
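The typeset expressions referred to in this copy are not reproduced, so as a purely illustrative stand-in (the expression x²/2 and the use of Python's standard xml.etree module are assumptions, not part of the original tutorial), here is a short sketch that assembles presentation markup for x²/2 from the elements just described:
# Build presentation MathML for x^2/2 using only the Python standard library.
import xml.etree.ElementTree as ET

math = ET.Element("math")             # every MathML instance needs a math root element
frac = ET.SubElement(math, "mfrac")   # first child = numerator, second child = denominator
sup = ET.SubElement(frac, "msup")     # first child = base, second child = superscript
ET.SubElement(sup, "mi").text = "x"   # mi: an identifier
ET.SubElement(sup, "mn").text = "2"   # mn: a number (here the exponent)
ET.SubElement(frac, "mn").text = "2"  # the denominator

print(ET.tostring(math, encoding="unicode"))
# prints: <math><mfrac><msup><mi>x</mi><mn>2</mn></msup><mn>2</mn></mfrac></math>
The printed markup makes the argument conventions visible: the first child of mfrac is the numerator and the second is the denominator, exactly as in the syntax summary above.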
The mrow element is used to enclose other elements that appear in a horizontal row. For example, the typeset expression would have the following MathML representation.
Here, the limits of the integral are shown using the presentation element msubsup, which takes three arguments, with the following syntax.
<msubsup> base subscript superscript </msubsup>
Another notable feature is that the symbols representing the integral sign, the exponential e, and the differential d are represented using the character entities &int;, &ExponentialE;, and &DifferentialD; (rendered as ∫, ⅇ, and ⅆ). These are among approximately 2,000 special symbols defined by the MathML DTD. These can be included in a document using a named entity reference or a character entity reference, which uses the Unicode character code.
The mstyle element is used for applying styles to an equation. Any attributes specified in an mstyle element are inherited by all its child elements. You can use this element to specify properties like the font size and color for an equation. Note the use of the entity to denote multiplication.
The examples are intended only to illustrate how presentation markup works through a sampling of some of its elements. To see a complete listing of all the presentation elements and attributes, see the MathML specification at http://www.w3.org/TR/MathML2.
Content MathML consists of about 140 elements and 12 attributes, which encode the logical meaning of a mathematical expression. The content elements ci and cn are used to represent identifiers and numbers, respectively. They are analogous to the mi and mn elements in presentation markup. For example, the typeset expression would have the following content MathML representation.
The apply element is used to apply operators or functions to expressions. The first argument of the apply element is usually an empty element indicating an operator or function. The remaining arguments represent one or more expressions to which the first argument is applied. In this example, the first argument of the apply function is the empty element plus, which denotes addition.
The type attribute of cn describes the type of number encoded. It can take values real, integer, rational, complex-polar, complex-cartesian, and constant. The empty element sep is used to separate different parts of a number such as the numerator and denominator of a fraction or the real and imaginary parts of a complex number. For example:
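The example itself is not reproduced in this copy; as a hedged stand-in (the expression x + 3/4 and the use of Python's xml.etree are illustrative choices, not Mathematica output), this sketch builds content markup that applies plus to an identifier and a rational number written with cn and sep:
# Build content MathML for x + 3/4: apply/plus with a ci and a rational cn.
import xml.etree.ElementTree as ET

math = ET.Element("math")
ap = ET.SubElement(math, "apply")
ET.SubElement(ap, "plus")                           # the operator comes first inside apply
ET.SubElement(ap, "ci").text = "x"                  # ci: an identifier
cn = ET.SubElement(ap, "cn", {"type": "rational"})  # a rational number, 3/4
cn.text = "3"
ET.SubElement(cn, "sep").tail = "4"                 # sep separates numerator and denominator

print(ET.tostring(math, encoding="unicode"))
# prints: <math><apply><plus /><ci>x</ci><cn type="rational">3<sep />4</cn></apply></math>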
The majority of content elements are empty elements representing specific operators or functions. The various elements are organized into groups named after the specific elementary subfields of mathematics.
- Arithmetic, Algebra, and Logic
There are elements corresponding to most operators and functions that are encountered in high school mathematics. For example, basic arithmetic operators are represented by plus, minus, times, divide, and power.
Integrals are specified using the int element. The variable of integration is represented using the element bvar. The upper and lower limits of integration are usually specified using the elements lowlimit and uplimit.
The interval element is used to specify closed and open intervals. It takes the attribute closure, which can take the values closed, open, closed-open, and open-closed corresponding to the four types of intervals possible. The default value for closure is closed.
You can also use the interval element to specify the limits of a definite integral as an alternative to using uplimit and lowlimit.
The matrix and matrixrow elements are used to represent a matrix and a row of a matrix, respectively. The eq element is used to express equality.
The examples are intended only to illustrate how content markup works through a representative sampling of some of its elements. To see a complete listing of all the content elements and attributes, see the MathML specification at http://www.w3.org/TR/MathML2.
There are two ways to import MathML equations into Mathematica:
- Copy and paste MathML equations from another application, such as a web browser, directly into a notebook. When you paste a valid MathML expression into a notebook, Mathematica brings up a dialog box asking if you want to paste the literal markup or interpret it. If you choose to interpret the markup, it is automatically converted into a Mathematica expression.
- Use the Import function to read MathML expressions from a file.
By default, MathML markup is imported as a Mathematica box expression. You can convert the boxes into an expression using the ToExpression function.
MathML Import Options
The standard Import options can be used for greater control over the import process. The syntax for specifying a conversion option is as follows.
Import[file, "MathML", option1->value1, option2->value2, ...]
Mathematica includes several functions for generating MathML from the boxes and expressions used internally by Mathematica to represent equations. You can enter an equation in a notebook using palettes, menus, or keyboard shortcuts and then convert it into MathML using one of these conversion functions. All the MathML conversion functions are located in the XML`MathML` context.
Use BoxesToMathML to generate MathML from a box structure. By default, this generates presentation markup only.
Use ExpressionToMathML to convert a typeset equation into MathML. By default, this generates presentation MathML.
Requesting both formats will generate presentation and content MathML enclosed in a semantics element.
The annotation-xml element is used to provide additional information of the type specified by its encoding attribute. Here, the encoding attribute has the value "MathML-Content" indicating that the annotation-xml element contains content MathML.
You can also choose to generate either presentation MathML or content MathML only, or to suppress the header information.
ExportString evaluates its first argument before converting it to MathML. An expression that can be simplified on evaluation may give unexpected results.
Generate the presentation markup for the following definite integral.
Since the integral evaluates to 1, this command generates the MathML representation of 1 instead of the integral.
To get the MathML representation of the integral, force the integral to remain unevaluated by wrapping the Unevaluated function around it.
Export and ExportString accept the MathML conversion options described below.
Using these options, you can control various features of the generated MathML, such as including an XML or DTD declaration, generating presentation markup, content markup, or both, and using an explicit namespace declaration and prefix.
You can specify the options explicitly each time you evaluate one of the MathML functions. Or use the SetOptions command to change the default values of the options for a particular function. The option values you set are then used for all subsequent evaluations of that function.
Use Mathematica's sophisticated typesetting capabilities to create properly formatted equations and then convert them into MathML for display on the web. There are several ways to export mathematical expressions from a Mathematica notebook as MathML.
- Use the Edit > Copy As > MathML menu command, which copies the selected expression onto the clipboard in MathML format. This is a convenient way to copy a specific mathematical formula from a notebook and paste it into an HTML document.
- Use File > Save As, choosing XML - XHTML+MathML (*.xml) from the Save as Type: submenu. This converts your entire notebook into XHTML with all equations in the notebook saved as MathML. The equations are embedded in the XHTML file in the form of MathML "data islands", which can be displayed by a web browser, either directly or using a special plugin.
MathML Export Options
The standard options of the Export or ExportString functions can be used for greater control over the export process. The syntax for specifying a conversion option is:
Export[file, expr, "MathML", option1->value1, option2->value2, ...]
ExportString[expr, "MathML", option1->value1, option2->value2, ...].
Options can also be specified directly in any function that produces MathML as output:
XML`MathML`ExpressionToMathML[expr, option1->value1, option2->value2, ...].
When exporting as MathML, you can use any of the Export options available for exporting general XML documents.
- "Presentation"—controls whether to export presentation MathML
- "Content"—controls whether to export content MathML
- "IncludeMarkupAnnotations"—controls whether to include the Mathematica encoding of the expression as an annotation
- —provides a way to insert additional attributes into the root tag of a MathML expression
- —controls whether to include a namespace prefix for each MathML element
This option controls which annotations are added to the output MathML. The value is a list whose elements name the document header, XML declaration, and document type declaration annotations. The order of the elements in the list is not relevant.
When the XML declaration annotation is one of the annotations, an XML declaration, <?xml version="1.0"?>, is included in the header.
When the document type annotation is one of the annotations, an XML document type declaration of the form <!DOCTYPE ...> appears in the header. This statement specifies the DTD for the XML application in which the output is written.
The default setting automatically adds a header containing an XML declaration and a document type declaration for the MathML DTD to the output.
When the list does not contain the document header annotation, the output MathML has no header. This is true even if the list contains other elements such as the XML or document type declarations.
"Presentation" and "Content"
These options control which type of MathML markup is generated. The default settings are "Presentation"->True and "Content"->False.
Export presentation MathML only.
Export content MathML only.
This option determines whether an extra annotation should be added when exporting a formula containing constructs specific to Mathematica that do not have a clear counterpart in MathML.
"IncludeMarkupAnnotations"->True: Mathematica-specific information is included in a separate annotation element (default)
"IncludeMarkupAnnotations"->False: an extra annotation element is not added
Values for "IncludeMarkupAnnotations".
With "IncludeMarkupAnnotations"->True, the Mathematica annotation is enclosed in a semantics element. This allows lossless import back into Mathematica.
The Mathematica Rule arrow does not have a corresponding character in Unicode. (Unicode's right arrow character looks the same but does not have the same code point.) When exporting an expression containing such a character with "IncludeMarkupAnnotations"->True, an extra annotation is added to the markup. With the option set to False, no extra annotation is included.
This option lets you add attributes to the root element of a MathML expression.
Export a MathML expression and specify that it should be displayed in inline form.
Export a MathML expression in display form.
This option controls whether plane 1 Unicode characters should be replaced with similar plane 0 characters. This is useful because currently most browsers cannot properly display plane 1 characters.
"UseUnicodePlane1Characters"->True: special characters belonging to plane 1 of Unicode are exported without being replaced (default)
"UseUnicodePlane1Characters"->False: special characters belonging to plane 1 of Unicode are replaced by plane 0 characters with an attached attribute
Values for "UseUnicodePlane1Characters".
With "UseUnicodePlane1Characters" set to True, special plane 1 Unicode characters (e.g. Gothic, scripted, and double-struck characters) are written out with their plane 1 numeric character codes. With the option set to False, any special plane 1 Unicode character is replaced by a corresponding plane 0 character with a suitable value of the mathvariant attribute.
Symbols for MathML Elements
Since certain content elements in MathML do not have a direct analog in Mathematica, a few symbols are specially defined in the context.
Symbols with the MathML markup they are meant to represent.
Examples that use these symbols. These all have a traditional typeset form. | http://reference.wolfram.com/mathematica/XML/tutorial/MathML.zh.html | 13 |
65 | See also the
Dr. Math FAQ:
Browse High School Higher-Dimensional Geometry
Stars indicate particularly interesting answers or
good places to begin browsing.
Selected answers to common questions:
Do cones or cylinders have edges?
Latitude and longitude.
Maximizing the volume of a cylinder.
- Changing Angle of a Tank [06/11/2003]
Points A and B represent pressure sensors in fixed positions on the
base of a round tank. The chord through CD represents the water level
in the tank. Lines a and b are the heights of water registered by each
- Cones, Pyramids: Surface Area and Volume Formulas [03/11/2003]
How do you get the formula for surface area and volume for cones and
- Coordinate Systems, Longitude, Latitude [04/19/1997]
Can you explain pitch, roll, and yaw to me? Are there other systems for
measuring an object's position in space?
- Does a Cone have an Edge? A Vertex? [03/12/2002]
Our 4th grade math textbook defines a cone as "A solid figure with one
circular face and one vertex." This sounds reasonable until you read the
textbook's definitions for face, edge, and vertex.
- Fourth Dimension [05/13/1997]
Can you help me understand the fourth dimension?
- HyperCubes [3/21/1996]
Do you folks know of any videos that show the hyper-cube in action that
would be appropriate for the high school level?
- Is a Sphere 2-D or 3-D? [8/8/1996]
Is a sphere a two- or a three-dimensional object?
- Number of Cylinder Edges [04/01/2002]
My son was asked "how many edges are there on a solid cylinder?" on a
recent math examination. His answer was "2" and it was marked incorrect.
- Three-dimensional Plane Diagrams [03/10/1999]
Draw: two parallel planes with another plane intersecting them; two
parallel planes with an intersecting line.
- Volume equations for a sphere and pyramid [6/10/1996]
I am now thoroughly confused: we just learned the formulas for volume of
a sphere and volume of a pyramid, but, he wouldn't tell me how to do it.
- Volume of a Pyramid [05/16/1999]
Can you give a step-by-step proof for the volume of a pyramid?
- 3D Figures and Intersections [03/04/1999]
Determining whether a line and plane intersect, and where, using vectors.
- 3D Geometry [11/17/1997]
You can draw a line of minimum distance between and perpendicular to two
lines in 3space. I know how to get the distance and direction of this
line, but I want to locate the line in 3space so that I can find its
- A 3-D Object that Fits 3 Holes [06/03/1999]
How can I make an object that can fit in 3 holes of different shapes,
blocking all light, and sliding all the way through without forcing it?
- 3-D Shape That Can Be a Circle, Square, or Triangle in 2-D [09/08/2006]
Is there a shape that can fill and pass through a circular hole, a
square hole, and a triangular hole?
- The 4th Dimension [2/10/1996]
In my Geometry class we read the book, Sphereland . I couldn't
visualize Overcubes and Overspheres. Even though I know we really
shouldn't be able to see 4D figures, just as 2D creatures can't visualize
3D creatures, is there any way you could describe an oversphere?
- Adding a 6-Inch Layer of Gravel [03/13/2003]
Your company is constructing a soccer field for a high school. The
field is 110 yards long and 80 yards wide...
- Angle Between Two Points on the Globe [7/17/1996]
Given their longitude and latitude, how can you determine the angle in
radians between two cities?
- Angle Between Two Sides of a Pyramid [10/29/1999]
How can I compute the angle formed by two sides of a frustum of a
- Area and Volume of a Football [3/28/1995]
How would one find the area of a football? Or then again, how would one
find the volume of a football?
- Area and Volume of a Pear [6/4/1996]
How do you find the area and volume of a pear?
- Area and Volume of Cuboid [09/09/2002]
How can I find the new volume and surface area of a cuboid after its
volume has been reduced by 10%?
- Area of a Cone [11/5/1994]
What is the area of a cone when given the height and the angle at the
- Area of a Pentagonal Pyramid [11/16/1995]
I would like to know the area of a pentagonal pyramid. The dimensions of
the sides of the base are 5cm ea. and the height is 23cm.
- The Area of a Roof [05/18/2002]
How can I determine the area of a hipped roof for a building 100
feet long and 80 feet wide, with a pitch of 6/12, and 3 feet of
overhang for the eaves?
- Area, Volume of a Cone [6/4/1996]
What are the formulae to find the volume and area of a cone?
- Bases and Faces [12/05/2001]
I can't figure out the difference between a base and a face on the shapes
we are learning.
- Bearing Between Two Points [12/19/2001]
Is there an easy way to calculate the "heading" (relative to North=0)
between two coordinates?
- Bearing Calculation [09/01/1997]
Given two cities at geographic coordinates (xA,yA) and (xB,yB), is there
a formula to calculate the bearing from city A to city B?
- Beyond the Third Dimension [5/16/1996]
I am searching for information on 'beyond the third dimension'.
- Beyond Three-Dimensional Geometry [06/17/2001]
If the first dimension is a line and the second dimension is a flat
figure, the the third dimension is, say, a cube, then what is the fourth
dimension? What is the fifth dimension?
- Board Feet from a Log [03/19/2003]
What is the board feet of a 10-foot log if the diameter is 16 inches
at one end and 14 inches at the other end?
- Box for a Ball [09/26/2002]
The volume of a ball is 36pi cm^3. How do I find the dimensions of a
rectangular box that is just large enough to hold the ball?
- Bricks to Cover a Steeple [08/23/1997]
How many 6"x12" bricks would it take to cover an octagonal steeple with a
diameter of 130 ft. and a height of 370 ft?
- Building a Cone [01/28/2002]
I am trying to draw cone (frustum) with a larger radius size.
- Building a Cone [10/28/2001]
I am trying to find a formula for building a cone for a chimney flashing.
It should be 21" tall with a top opening of 8", a bottom opening of 20",
and a vertical seam overlap of 2".
- Building a Manger [12/03/2001]
Given a base of 11" and two walls 7 1/2' and 6" high, both meeting the
base a 90-degree angles, what is the length of the roof and what are the
angle measures where the walls meet the roof?
- Building a Skateboard Ramp [9/19/1995]
I'm trying to build a skateboard ramp, pyramid with flat top, height one
foot, angle of ascent thirty degrees, other angles ninety degrees and
- Building a Wooden Square-Based Pyramid [01/26/2001]
I want to build a wooden pyramid with a square base 8"by 8".
- CADAEIBFEC and Other NCTM Questions [10/27/1998]
CADAEIBFEC is a mnemonic for an important piece of mathematical
information. What is it? | http://mathforum.org/library/drmath/sets/high_3d.html?start_at=1&num_to_see=40&s_keyid=38684425&f_keyid=38684426 | 13 |
58 | ODD MAN REFORMS
Character redefinition techniques
Last month we described Odd Man Out, an educational game designed to help preschool children develop visual discrimination skills. This month, we will look at the initial stages of the program for Odd Man Out and the special features of the Atari computers that make the creation of the program so simple.
WHAT IS A CHARACTER?
The basis for Odd Man Out is the ability to easily redefine the Atari character set. Once this skill is mastered, the rest is easy, since most of the program involves the manipulation of characters to create desired displays. This may sound mysterious, but it is quite simple. With that in mind, let's look at how the Atari creates the characters of the alphabet.
Turn on your computer and examine the characters displayed on the screen. If you look closely, you'll see that each character is composed of a series of dots; every character in the Atari character set is defined by an eight-by-eight dot matrix. In terms of computer memory, each character is represented by eight bytes, one byte for each row of dots in the character.
The computer interprets each byte as a series of ones and zeros. If the individual bit is a one, the computer places a dot on the screen.
Suppose the value of one byte of a character is 24. This is stored in your computer as 00011000, since computers only recognize binary numbers. When your Atari encounters this byte, it interprets it by lighting dots that correspond to the 1's, which creates a pattern on the screen. A series of eight such bytes could create something like this:
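(The dot-pattern figure from the original article is not reproduced here.) As an independent illustration, this short Python sketch, which is not part of the original BASIC program, converts an 8-row dot drawing into the eight byte values a redefined character needs and confirms that the row 00011000 comes out as 24:
# Convert an 8x8 dot drawing into the eight byte values of an Atari character.
# 'X' marks a lit dot (a 1 bit), '.' marks a dark dot (a 0 bit).
drawing = [
    "........",
    "...XX...",
    "..XXXX..",
    ".XX..XX.",
    "XX....XX",
    "XX....XX",
    "XXXXXXXX",
    "........",
]

byte_values = [int(row.replace("X", "1").replace(".", "0"), 2) for row in drawing]
print(byte_values)          # [0, 24, 60, 102, 195, 195, 255, 0]
print(int("00011000", 2))   # 24, the example byte from the text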
Armed with this knowledge, we can begin to develop our own character set.
CONSTRUCTING A CUSTOM CHARACTER SET
Let's examine one of the objects to be displayed in level 1 of Odd Man Out. In order to make the objects large enough to be identified by young children, we decided to form each object by combining four characters in a two by two matrix.
The house is divided into four sections, with each section representing one character. Converting this picture into its numerical representation results in the following:
Upper left section -- 1,2,4,8,16,63,32
Upper right section - 128,64,32,16,8,4,252,4
Lower left section -- 32,32,35,34,34,34,34,63
Lower right section - 4,4,196,68,68,68,68,252
Each of the objects displayed in the first three levels of the game was created in this manner. We drew our characters by hand and calculated the proper numerical values ourselves, but several commercial products let you create your characters on the screen rather than on graph paper.
CHANGING CHARACTER SETS
The standard character set requires four pages of memory. (There are 128 characters in the set, each of which is represented by eight bytes. Each page of memory contains 256 bytes. Thus, 128 characters x eight bytes per character/256 bytes per page=four pages.)
Now that we have defined a new character set, we need to store it at the beginning of a memory page. To accomplish this, we can use memory location 106, which is the pointer to the top of RAM (Random Access Memory). There is no memory available at the page pointed to by location 106. Because of this, we can fool the computer into thinking there is less available memory by decreasing the value stored in this location. This is exactly what we will do. However, since the computer had been using the area at the top of memory to store display information, we will need to reassign this information. To do this, we need only issue a Graphics command. Thus, we can safely reserve the four pages of memory we need by using this three-step process:
1) RAMTOP = PEEK(106)
2) POKE 106, RAMTOP - 4
3) GRAPHICS 2 (or any valid graphics command)
After we have reserved this area, the next step is to place one character set into it. The easiest and most straightforward way to accomplish this is to POKE the new character set into the area with the following program:
10 GRTOP = (RAMTOP-4)*256
20 FOR I=0 TO 1023
30 READ X
40 POKE GRTOP+I,X
50 NEXT I
60 DATA ---------------------
This method is simple, but it has two major disadvantages. Not only is it slow, but more importantly, in order to use a number of custom character sets, as we will in this program, you need to POKE in a new character set every time you use it. So, let's look at another, more versatile method of storing our character sets.
VARIABLE AND ARRAY TABLES
The Atari keeps track of the variables that have been used in a program by means of a table that holds information on as many as 128 variables. This information consists of eight entries per variable. The first entry identifies the type of variable involved - string, array or numerical. The second entry is the variable number (the first variable in the table is number zero, the second number one, and so on). The third and fourth entries determine where the information in the variable is stored. The fifth and sixth entries form a sixteen-bit number that represents the dimensional length of the variable. The seventh and eighth entries also form a sixteen- bit number. This represents the in-use length of the variable. Thus, the variable table we're discussing looks like this:
VT+0----------Type of variable
VT+1----------Variable number
VT+2----------Low order byte of the offset to the value of the variable
VT+3----------High order byte of the offset to the value of the variable
VT+4----------Low order byte of the dimensioned length of the variable
VT+5----------High order byte of the dimensioned length of the variable
VT+6----------Low order byte of the in-use length of the variable
VT+7----------High order byte of the in-use length of the variable
Once the variable table has been located, we are almost able to determine where a particular value is stored. Almost. The Atari stores the actual string data in another table, the array table. The values stored in locations VT + 2 and VT + 3 of the variable table are an offset from the start of the array table. In order to find the actual location of these values, we must find the beginning of the array table and apply the offset found in the variable table. This sounds complicated, but it isn't. A pair of memory locations holds the address of each of these tables. We can find the beginning of the variable table by using the following:
Similarly, we can find the beginning of the array table in this way:
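The two one-line BASIC formulas are not reproduced in this copy. Assuming the conventional Atari BASIC pointers, VVTP at locations 134/135 for the variable table and STARP at 140/141 for the string/array area (an assumption worth checking against a memory map), the low-byte/high-byte arithmetic they perform looks like this Python sketch:
# Two-byte Atari pointers are stored low byte first; combine them as low + 256*high.
# The addresses 134/135 and 140/141 are assumptions, not taken from the article text.
memory = {134: 0x60, 135: 0x1C, 140: 0xA0, 141: 0x1D}  # made-up sample contents

VT = memory[134] + 256 * memory[135]   # start of the variable table
AT = memory[140] + 256 * memory[141]   # start of the array table
print(VT, AT)                          # 7264 7584 for the sample values above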
We should now have enough information to store our custom character set.
CHARACTER SET MANIPULATIONS
To use our custom character set, we need to undertake the following steps:
1) Reserve memory space
2) Locate and modify the variable table
3) Place the new character set into memory
4) Change the character set pointer
We already know how to reserve space and locate the variable table; the next step is modification of the variable table.
In Odd Man Out, we will be using both the standard and custom character sets to form a modified character set. We will set up two string variables, RAM$ and ROM$. ROM$ will hold the standard character set, RAM$ the modified character set. It is important that RAM$ and ROM$ be the first two variables introduced in the program. This can be accomplished by using the following as the first program statement:
10 DIM RAM$(1),ROM$(1)
This way, we know that RAM$ and ROM$ will be the first two entries in the variable table.
Even though each character set contains 1024 entries, we have purposely dimensioned these two variables to have length one, because we are going to reassign the memory locations to store this information. Had we dimensioned both variables to be 1,024 characters long, BASIC would have reserved 1024 bytes of memory for each variable in the array table. Since this offset is not where we want those values to be stored, 2K of RAM would have been wasted. By using the above method, however, only two bytes of RAM will not be used. Now, let's make the variable table modifications.
We want to store the modified character set in the four pages of reserved memory. To do this, we need to modify the address, dimensioned length, and in-use length of the RAM$ in the variable table. Listing 1 will accomplish this.
Let's look at what this listing does. Line 10 introduces RAM$ and ROM$ and ensures that they are the first two entries in the variable table. Line 20 reserves four pages of memory for the character set. The graphics command in line 30 moves the display information out of this reserved area. Line 40 finds the locations of the variable and array tables. Line 50 converts the page number to the memory location for the start of the character set. Line 60 calculates the offset from the start of the character set to the start of the array table. This is the value that will be stored in locations two and three of the variable table. Lines 70 and 80 convert the length and offset from a decimal to a two-byte representation. Due to a bug in Atari BASIC, it is necessary to dimension the string variable to 1025 instead of 1024. Lines 90, 100, and 110 store the offset, dimensioned, and in-use lengths into the variable table.
Type in Listing 1 and RUN it. When the computer prints READY, type: PRINT LEN(RAM$). The computer should respond: 1025. Thus, even though we dimensioned RAM$ to be one character long, because we changed the variable table the computer thinks RAM$ is 1025 characters in length.
The modification for R0M$ is similar; the only change required is: OFFROM = 57344-AT. (57344 is the start of the ROM character set.) Now we are ready to read the new character set into memory.
Since we have stored the existing and modified character sets in string variables, we will use string variables to store the custom character set we have created. This will allow us to change the modified character set by simple string variable assignments. We also will set up three string variables to hold the redefined characters used in the first three levels: OBJECT$, GEO$, and E$. As a result, at the beginning of the program we can read in all the data for the redefined characters and store it in these string variables. The creation of the modified character set required for a particular level of the game will then involve only a few string variable assignment statements. Because of this, character set modifications can be accomplished very rapidly.
The Atari already has a character set stored in ROM (Read Only Memory) that displays normal text and graphics characters. When you change text modes from Graphics 0 to Graphics 1 or 2, you can access only one half of this character set. Memory location 756 serves as a pointer to the beginning of the half of the character set in use. As a result, by changing the value stored in location 756, you can change the characters used by the computer. To display lower case letters and graphics characters, you POKE 756,226. Capital letters and punctuation characters can be displayed by POKEing 756,224 (the values 226 and 224 are the page numbers that contain the start of the character set). We can also change the value stored in location 756 so that it points to the start of our custom character set.
Now, we are ready to put theory into practice. The program in Listing 2 reserves space for the modified character set (lines 20 to 200). The variable table (lines 340 to 540) reads in the redefined characters (lines 580 to 680), and displays four objects on the screen at a time (lines 1460 to 1880). To change the display, press [RETURN]. This program is the heart of Odd Man Out.
This completes our introduction of the Atari character sets. No other microcomputer allows character sets to be manipulated so quickly and easily from BASIC.
There will be two more installments of code to complete the game. Once you have Listing 2 typed in and debugged, save it so you can add the next portion of the program next month. In our next article, we will look at joystick routines and character animation. | http://www.atarimagazines.com/v2n9/Oddmanreforms.html | 13 |
53 | In Tessellations, artist/instructor Thomas Freese shows students how they can make a
tessellating stamp and use it to create an interlocking pattern. The lesson integrates mathematics
and art as the children use geometry, measurement, repetition, and patterning to create unusual,
An introduction to the world of tessellations: definition, examples, and the construction of a
tessellating stamp. Children create a paper template of a modified square, a grid based on that
square, a foam and Plexiglas stamp, and a checkerboard-stamped pattern on their grid page.
Two 90-minute sessions would allow for the hands-on work and a minimum of discussion on the
Tessellations encourages children to explore the cognitive and observational skills
required to understand relative conservation of space in a two-dimensional, repetitive pattern
of an interlocking shape or unit. The activity allows children to learn and review basic geometric
terms, definitions, and theory, including regular polygons, lines, angles, points, etc. Children
employ basic mathematical skills in creating their stamps and their tessellating art: They use a
ruler to measure and form a grid with sections and parallel borders, they find the center of the
page, and they construct a uniform stamping grid of same-size squares.
The activity requires dexterity and coordination of craft materials to create a stamp within
acceptable standards. This is a basic lesson in understanding the printing process: how images
reverse and the need to register the mini-prints precisely within the grid guidelines. And it
allows the child to create a recognizable creature or symbol from an irregular tessellating contour.
This lesson helps students understand tessellations through a combination of art and mathematics
concepts and then put their intellectual understanding to work through the construction of a paper
template of tessellating shape, the use of measurement to make a stamping grid, and the creation
of a stamp with a unique and possibly recognizable image.
Sources for Examples of Tessellations
- M.C. Escher
- Bridget Riley
- Victor Vasarely
- Johannes Kepler
- tile and/or construction patterns
- geometric quilt patterns
- soccer balls
Connections to Educational Standards
The following Kentucky Academic Expectations are all related to Tessellations:
- 2.9: Students understand space and dimensionality concepts and use them appropriately and accurately.
- 2.10: Students understand measurement concepts and use measurements appropriately and accurately.
- 5.1: Students use critical thinking skills such as analyzing, etc.
- 5.2: Students use creative thinking skills to develop or invent novel, constructive ideas or products.
- index cards
- rulers and scissors
- 9" X 12" light-colored construction paper
- craft foam
- doublestick foam
- glue (if doublestick foam is not available)
- 2-1/2" Plexiglas squares
- stamp pads
- scrap paper to protect surfaces from ink
Alternative Materials/Sources for Materials
- index cards: any card stock that is crisp, not soft like construction paper. Tag board or
scrap paper from a print shop is fine.
- Plexiglas: Plexiglas scraps are available from hardware, glass, or frame shops. They are
cheap; in fact, store owners often will donate them to teachers or schools. Other materials,
such as cardboard, could be used to mount the stamps; the advantage of Plexiglas is
that you can see through it.
- Dale Seymour (see Resources below) makes materials that stamp: a foam sponge
class kit for printing on paper and fabric. Students could also carve potatoes, erasers,
or linoleum blocks to create stamps for this activity.
- doublestick foam: If doublestick foam is not immediately available, you can glue together
two layers of craft foam to make the stamps. Craft foam is available from arts and crafts
stores. If you dont have an arts and crafts store in your community, look for craft foam
and/or doublestick foam at office supply stores.
- If your students carve rubber into stamps, you wont need Plexiglas, craft foam, or
doublestick foam at all; but this method is recommended for older students only (4th grade and up).
Vocabulary Related to the Lesson
- acute angle: an angle that measures less than 90º
- congruent angles: angles that have the same measure
- equiangular triangle: a triangle with all three angles congruent (of equal measure)
- equilateral triangle: a triangle with all three sides congruent (of equal length)
- glide reflection: a transformation that moves a figure in a slide and mirrors it
- hexagon: a polygon with six sides
- interior (stamp) details: within the exterior line of the shape used for the stamp;
the lines and pattern that create an artistic image
- line of reflection: a line in a plane that lies equidistant from any two corresponding
opposite points in a figure that has reflective symmetry; also called mirror line
- modified square: the original polygon (square) that has been changed according to
geometric rules in order to tessellate
- mosaic: synonym for tessellation or tiling
- obtuse angle: an angle that measures more than 90º but less than 180º
- octagon: a polygon with eight sides
- paper template: the cut-out tracing form of a tessellating shape used to construct
- parallelogram: a quadrilateral whose opposite sides are congruent and parallel
- pentagon: a polygon with five sides
- perpendicular lines: lines that meet at right angles in a plane
- plane (surface): a two-dimensional, flat surface that is infinite
- polygon: a simple closed shape, bounded by line segments
- print registration: lining up the stamped image according to the grid guidelines and shape
- quadrilateral: a polygon with four sides
- rectangle: a quadrilateral that contains four right angles
- reflection (in a plane): a transformation that mirrors a figure in a plane
- regular polygon: a polygon with all its sides and all its angles congruent
- rhombus: an equilateral quadrilateral
- rotation (in a plane): a transformation that turns a figure about a point in a plane
- scalene triangle: a triangle with sides of three different lengths
- tessellation (plane): a covering of a plane, without any gaps or overlaps, by a pattern of
one or more congruent shapes
- tessellation (space): a filling of space, without any gaps or overlaps, by a pattern of
one or more three-dimensional shapes
- tiling: synonym for tessellation or mosaic
- transformation: in this lesson, a movement of a figure to a new location, leaving the
figure unchanged in size and shape
- translation: a transformation involving a slide of a rigid figure without rotation
- translational symmetry: characteristic of a figure that coincides with itself after an
appropriate translation or slide
- trapezoid: a quadrilateral with exactly two parallel sides
- vertex (of a polygon): the point of intersection of any two adjacent sides of the polygon
- vertex (of an angle): the point of intersection of the two rays that form the angle
Guide students through the following process:
- Place your finger on the bottom right corner of an index card or other piece of stiff paper stock.
- Using a ruler to measure the distance, place a pencil mark 1-1/2" to the left on the bottom
of the card and 1-1/2" above the right corner of the card, on the right edge.
- Place your ruler upright on the bottom 1-1/2" mark so the ruler is parallel to the right
edge of the card.
- Using the 1-1/2" mark on the right edge of the card as a guide, mark a point 1-1/2" up.
- Using the ruler, connect the three 1-1/2" marks to form a 1-1/2" square at the corner
of the index card.
- Along each side of the square, measure off 1/2" spaces, making two marks evenly spaced
along the side. Carefully connect the marks to form two sets of parallel linestwo horizontal
lines and two vertical lines.
- Now make two changes to the square, going from its two open sides. For example, you
could draw a simple half-circle going in from the bottom side and a triangle going in from the
Hints: Avoid placing changes on the corner. Also, simpler figures will be easier to cut out
with your scissors.
- Cut out the two pencil-drawn changes, taking care to cut along the drawn lines and to remove the
cut-outs in one piece.
- Slide the two cut-out pieces to their respective opposite sides and place them, pointing the same
direction as the cut-out sections, even with the squares line. Trace the cut-out pieces and then
cut out the entire shape for a final, tessellating template or tracing form.
Making a Full-Page Grid
- On a piece of light-colored 9" X 12" construction paper, find the center by marking
a point 4-1/2" up on each of the 9" sides. Then connect these two points with a
straight line. Then place a mark at the 6" center point of the 12" sides and
connect these two points with a straight line.
- Draw a tiny circle around the point where these two lines intersect. This is the center point
of the paper.
- Measure and mark every 1-1/2" along the two center lines. Then use these marks to draw
horizontal and vertical lines.
Assembling the Stamp
- Place your tracing form on top of the craft foam and trace and cut out the shape.
- Repeat Step 1 with the doublestick foam.
- Check to see whether the two cut-out foam pieces are accurate to the tracing form.
- Remove one side of the doublestick foams paper covering and mount the craft foam.
- Remove the other side of the doublestick foams paper covering and attach the combined forms to the Plexiglas. The Plexiglas provides a mount for your stamp.
- If you wish, use a pencil to press into the foam and draw line patterns. These will stamp out white and provide interior details to the tessellating shape.
The Stamping Process
- Always line up the corners of your 1-1/2" square stamp with the inside corners of the
squares on your 9" X 12" construction paper grid.
- Working with a partner and using a single color of ink, fully stamp the page in an alternating
pattern to create a checkerboard effect (half of the squares will be stamped and half blank).
Place scrap paper under your grid to protect your desk or table from ink stains. Keep your
stamp in the same position; dont flip or rotate it.
- Rinse your stamp in running water, dry it, and then start stamping the blank squares in a
second color. Once again, keep your stamp in the same position, neither flipping nor rotating it.
Response to Art
Children can create creatures, designs, or messages within the form of their tessellating shape.
They can talk about these embellishments and how they fit in the contour of the shape. They also
can review and report on the basics of the lesson: the definition of tessellation, examples, how
they created a modified parent polygon (from a square), how they made a grid and stamp, what
challenges to stamping or printing they encountered, and how they solved these problems (they may
have learned to inscribe letters in reverse into the foam).
- Make cards for a sale.
- Stamp on large, foam-core sheets and suspend them as mobiles in a large indoor space.
- Make T-shirts to wear.
- Put the 9" X 12" stamped paper sheets all together in a hallway mosaic.
- Laminate the prints and use them as placemats.
- Put prints on posters (along with student-written instructions) for display and for teaching
other students about tessellations.
What would the students like to do next with tessellations? Possibilities include more stamping;
creating a different stamp, different details, or a different parent polygon and grid; stamping
Additional extension ideas:
- constructing larger stamps for a paper-print mural
- designing tessellating figures on a computer
- making tessellating shapes out of wood for a puzzle
- finding and photo-documenting tessellations in building construction
- researching Islamic tile patterns
- doing a study of M.C. Escher and the influences that led to his work with tessellations
- doing a video interview with a quilt maker
- creating new kinds of tessellations by starting with different parent polygons (rectangles,
- researching the geometry (the sum of the angle measurements) that proves the tiling theory
- creating prints on fabrics
- creating tessellating shapes from fabric and sewing them together
- locating a tessellation in your home or town, sketching it, and writing a basic analysis
- studying patterning in nature (Have a beekeeper visit and show the hexagons of the honeycomb.)
- cutting out non-tessellating polygons and experimenting to find repeating shapes that could
fill in the gaps
- exploring M.C. Eschers books to discover and analyze underlying grids
Students papers about tessellations, along with examples of the tessellating patterns they have created, make excellent portfolio entries.
- books by and about the Dutch artist M.C. Escher
- local quilt makers
- mathematics teachers
- Books, materials, manipulatives, and posters related to tessellations are available from
Dale Seymour Publications, P.O. Box 10888, Palo Alto, CA 95303, (800) 872-1100. The company
offers an extensive catalog that includes these items as well as other K-8 educational and
teacher resource materials in mathematics, science, and the arts.
- Symmetry and Tessellations activities links
- Craft foam (the non-sticky variety) is available from local craft stores or from S&S Arts
and Crafts, Norwich Avenue, Colchester, CT 06415, (800) 937-3482.
| http://www.ket.org/artonair/artists/freeseguide.htm | 13
Let's consider a vector v whose initial point is the origin in an xy - coordinate system and whose terminal point is (a, b). We say that the vector is in standard position and refer to it as a position vector. Note that the ordered pair (a, b) defines the vector uniquely. Thus we can use (a, b) to denote the vector. To emphasize that we are thinking of a vector and to avoid the confusion of notation with ordered - pair and interval notation, we generally write
v = < a, b >.
The coordinate a is the scalar horizontal component of the vector, and the coordinate b is the scalar vertical component of the vector. By scalar, we mean a numerical quantity rather than a vector quantity. Thus, < a, b > is considered to be the component form of v. Note that a and b are NOT vectors and should not be confused with the vector component definition.
Now consider vector AC with A = (x1, y1) and C = (x2, y2). Let's see how to find the position vector equivalent to AC. As you can see in the figure below, the initial point A is relocated to the origin (0, 0). The coordinates of P are found by subtracting the coordinates of A from the coordinates of C. Thus, P = (x2 - x1, y2 - y1) and the position vector is OP = < x2 - x1, y2 - y1 >.
It can be shown that OP and AC have the same magnitude and direction and are therefore equivalent. Thus, AC = OP = < x2 - x1, y2 - y1 >.
The component form of AC with A = (x1, y1) and C = (x2, y2) is
AC = < x2 - x1, y2 - y1 >.
Example 1 Find the component form of CF if C = (- 4, - 3) and F = (1, 5).
Solution We have
CF = < 1 - (- 4), 5 - (- 3) > = < 5, 8 >.
Note that vector CF is equivalent to the position vector OP with P = (5, 8), as shown in the figure above.
Now that we know how to write vectors in component form, letís restate some definitions.
The length of a vector v is easy to determine when the components of the vector are known. For v = < v1, v2 >, we have
|v|² = v1² + v2²     Using the Pythagorean theorem
|v| = √(v1² + v2²).
The length, or magnitude, of a vector v = < v1, v2 > is given by |v| = √(v1² + v2²).
Two vectors are equivalent if they have the same magnitude and the same direction.
Let u = < u1, u2 > and v = < v1, v2 >. Then
< u1, u2 > = < v1, v2 > if and only if u1 = v1 and u2 = v2.
Operations on Vectors
To multiply a vector v by a positive real number, we multiply its length by the number. Its direction stays the same. When a vector v is multiplied by 2 for instance, its length is doubled and its direction is not changed. When a vector is multiplied by 1.6, its length is increased by 60% and its direction stays the same. To multiply a vector v by a negative real number, we multiply its length by the absolute value of the number and reverse its direction. When a vector is multiplied by - 2, its length is doubled and its direction is reversed. Since real numbers work like scaling factors in vector multiplication, we call them scalars and the products kv are called scalar multiples of v.
For a real number k and a vector v = < v1, v2 >, the scalar product of k and v is
kv = k.< v1, v2 > = < kv1, kv2 >.
The vector kv is a scalar multiple of the vector v.
Example 2 Let u = < - 5, 4 > and w = < 1, - 1 >.Find - 7w, 3u, and - 1w.
- 7w = - 7.< 1, - 1 > = < - 7, 7 >,
3u = 3.< - 5, 4 > = < -15, 12 >,
- 1w = - 1.< 1, - 1 > = < - 1, 1 >.
Now we can add two vectors using components. To add two vectors given in component form, we add the corresponding components. Let u = < u1, u2 > and v = < v1, v2 >. Then
u + v = < u1 + v1, u2 + v2 >
For example, if v = < - 3, 2 > and w = < 5, - 9 >, then
v + w = < - 3 + 5, 2 + (- 9) > = < 2, - 7 >
If u = < u1, u2 > and v = < v1, v2 >, then
u + v = < u1 + v1, u2 + v2 >.
Before we define vector subtraction, we need to define - v. The opposite of v = < v1, v2 >, shown below, is
- v = (- 1).v = (- 1)< v1, v2 > = < - v1, - v2 >
Vector subtraction such as u - v involves subtracting corresponding components. We show this by rewriting u - v as u + (- v). If u = < u1, u2 > and v = < v1, v2 >, then
u - v = u + (- v) = < u1, u2 > + < - v1, - v2 > = < u1 + (- v1), u2 + (- v2) > = < u1 - v1, u2 - v2 >
We can illustrate vector subtraction with parallelograms, just as we did vector addition.
If u = < u1, u2 > and v = < v1, v2 >, then
u - v = < u1 - v1, u2 - v2 >.
It is interesting to compare the sum of two vectors with the difference of the same two vectors in the same parallelogram. The vectors u + v and u - v are the diagonals of the parallelogram.
Example 3 Do the following calculations, where u = < 7, 2 > and v = < - 3, 5 >.
a) u + v
b) u - 6v
c)3u + 4v
d)|5v - 2u|
a) u + v = < 7, 2 > + < - 3, 5 > = < 7 + (- 3), 2 + 5 > = < 4, 7 >;
b)u - 6v = < 7, 2 > - 6.< - 3, 5 > = < 7, 2 > - < - 18, 30 > = < 25, - 28 >;
c) 3u + 4v = 3.< 7, 2> + 4.< - 3, 5 > = < 21, 6 > + < - 12, 20 > = < 9, 26 >;
d) |5v - 2u| = |5.< - 3, 5 > - 2.< 7, 2 >| = |< - 15, 25 > - < 14, 4 >| = |< - 29, 21 >| = √((- 29)² + 21²) = √1282 ≈ 35.8
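The same arithmetic is easy to check mechanically. Here is a small Python sketch, added purely as an illustration and not part of the original lesson, that redoes Example 3 with plain tuples:
# Component-wise vector arithmetic used in Example 3.
from math import sqrt

def add(u, v):    return (u[0] + v[0], u[1] + v[1])
def scale(k, v):  return (k * v[0], k * v[1])
def sub(u, v):    return add(u, scale(-1, v))
def length(v):    return sqrt(v[0] ** 2 + v[1] ** 2)

u, v = (7, 2), (-3, 5)
print(add(u, v))                              # (4, 7)
print(sub(u, scale(6, v)))                    # (25, -28)
print(add(scale(3, u), scale(4, v)))          # (9, 26)
print(length(sub(scale(5, v), scale(2, u))))  # about 35.8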
Before we state the properties of vector addition and scalar multiplication, we need to define another special vector: the zero vector. The vector whose initial and terminal points are both (0, 0) is the zero vector, denoted by O, or < 0, 0 >. Its magnitude is 0. In vector addition, the zero vector is the additive identity vector:
v + O = v. < v1, v2 > + < 0, 0 > = < v1, v2 >
Operations on vectors share many of the same properties as operations on real numbers.
Properties of Vector Addition and Scalar Multiplication
For all vectors u, v, and w, and for all scalars b and c:
1. u + v = v + u.
2. u + (v + w) = (u + v) + w.
3. v + O = v.
4. 1.v = v; 0.v = O.
5. v + (- v) = O.
6. b(cv) = (bc)v.
7. (b + c)v = bv + cv.
8. b(u + v) = bu + bv.
A vector of magnitude, or length, 1 is called a unit vector. The vector v = < - 3/5, 4/5 > is a unit vector because
|v| = |< - 3/5, 4/5 >| = √((- 3/5)² + (4/5)²) = √(9/25 + 16/25) = √(25/25) = √1 = 1.
Example 4 Find a unit vector that has the same direction as the vector w = < - 3, 5 >.
Solution We first find the length of w:
|w| = √((- 3)² + 5²) = √34. Thus we want a vector whose length is 1/√34 of w and whose direction is the same as vector w. That vector is
u = w/√34 = < - 3, 5 >/√34 = < - 3/√34, 5/√34 >.
The vector u is a unit vector because
|u| = |w/√34| = |w|/√34 = √34/√34 = √1 = 1.
If v is a vector and v ≠ O, then
(1/|v|)• v, or v/|v|,
is a unit vector in the direction of v.
Although unit vectors can have any direction, the unit vectors parallel to the x - and y - axes are particularly useful. They are defined as
i = < 1, 0 > and j = < 0, 1 >.
Any vector can be expressed as a linear combination of unit vectors i and j. For example, let v = < v1, v2 >. Then
v = < v1, v2 > = < v1, 0 > + < 0, v2 > = v1< 1, 0 > + v2 < 0, 1 > = v1i + v2j.
Example 5 Express the vector r = < 2, - 6 > as a linear combination of i and j.
r = < 2, - 6 > = 2i + (- 6)j = 2i - 6j.
Example 6 Write the vector q = - i + 7j in component form.
Solution q = - i + 7j = - 1i + 7j = < - 1, 7 >
Vector operations can also be performed when vectors are written as linear combinations of i and j.
Example 7 If a = 5i - 2j and b = -i + 8j, find 3a - b.
3a - b = 3(5i - 2j) - (- i + 8j) = 15i - 6j + i - 8j = 16i - 14j.
The terminal point P of a unit vector in standard position is a point on the unit circle denoted by (cosθ, sinθ). Thus the unit vector can be expressed in component form,
u = < cosθ, sinθ >,
or as a linear combination of the unit vectors i and j,
u = (cosθ)i + (sinθ)j,
where the components of u are functions of the direction angle θ measured counterclockwise from the x - axis to the vector. As θ varies from 0 to 2π, the point P traces the circle x2 + y2 = 1. This takes in all possible directions for unit vectors so the equation u = (cosθ)i + (sinθ)j describes every possible unit vector in the plane.
Example 8 Calculate and sketch the unit vector u = (cosθ)i + (sinθ)j for θ = 2π/3. Include the unit circle in your sketch.
u = (cos(2π/3))i + (sin(2π/3))j = (- 1/2)i + (√3/2)j
Let v = < v1, v2 > with direction angle θ. Using the definition of the tangent function, we can determine the direction angle from the components of v: tanθ = v2/v1.
Example 9 Determine the direction angle θ of the vector w = - 4i - 3j.
Solution We know that
w = - 4i - 3j = < - 4, - 3 >.
Thus we have
tanθ = (- 3)/(- 4) = 3/4 and θ = tan⁻¹(3/4).
Since w is in the third quadrant, we know that θ is a third-quadrant angle. The reference angle is
tan⁻¹(3/4) ≈ 37°, and θ ≈ 180° + 37°, or 217°.
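The quadrant bookkeeping in Example 9 is exactly what a two-argument arctangent does automatically; the following Python sketch (an illustrative addition) reproduces the 217° answer:
# Direction angle of w = <-4, -3>, measured counterclockwise from the positive x-axis.
from math import atan2, degrees

v1, v2 = -4, -3
theta = degrees(atan2(v2, v1)) % 360  # atan2 picks the correct quadrant
print(round(theta))                   # 217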
It is convenient for work with applied problems and in subsequent courses, such as calculus, to have a way to express a vector so that both its magnitude and its direction can be determined, or read, easily. Let v be a vector. Then v/|v| is a unit vector in the same direction as v. Thus we have
v/|v| = (cosθ)i + (sinθ)j
v = |v|[(cosθ)i + (sinθ)j] Multiplying by |v|
v = |v|(cosθ)i + |v|(sinθ)j.
Example 10 Airplane Speed and Direction. An airplane travels on a bearing of 100° at an airspeed of 190 km/h while a wind is blowing 48 km/h from 220°. Find the ground speed of the airplane and the direction of its track, or course, over the ground.
Solution We first make a drawing. The wind is represented by and the velocity vector of the airplane by . The resultant velocity vector is v, the sum of the two vectors:
v = + .
The bearing (measured from north) of the airspeed vector is 100°. Its direction angle (measured counterclockwise from the positive x - axis) is 350°. The bearing (measured from north) of the wind vector is 220°. Its direction angle (measured counterclockwise from the positive x - axis) is 50°. The magnitudes of and are 190 and 48, respectively.We have
= 190(cos350°)i + 190(sin350°)j, and
= 48(cos50°)i + 48(sin50°)j.
v = +
= [190(cos350°)i + 190(sin350°)j] + [48(cos50°)i + 48(sin50°)j]
= [190(cos350°)i + 48(cos50°)i] + [190(sin350°)j + 48(sin50°)j]
≈ 217.97i + 3.78j.
From this form, we can determine the ground speed and the course:
Ground speed ≈ √(217.97² + 3.78²) ≈ 218 km/h.
We let α be the direction angle of v. Then
tanα = 3.78/217.97
α = tan⁻¹(3.78/217.97) ≈ 1°.
Thus the course of the airplane (the direction from north) is 90° - 1°, or 89°.
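For readers who want to reproduce the numbers, this Python sketch (an illustrative addition; the rounding matches the text only to the stated precision) recomputes the ground speed and the course:
# Ground speed and course for the airplane in Example 10.
from math import cos, sin, radians, hypot, degrees, atan2

ax, ay = 190 * cos(radians(350)), 190 * sin(radians(350))  # airspeed vector
wx, wy = 48 * cos(radians(50)), 48 * sin(radians(50))      # wind vector
vx, vy = ax + wx, ay + wy                                   # resultant velocity

print(round(hypot(vx, vy)))     # ground speed, about 218 km/h
alpha = degrees(atan2(vy, vx))  # direction angle of v
print(round(90 - alpha))        # course measured from north, about 89 degrees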
Angle Between Vectors
When a vector is multiplied by a scalar, the result is a vector. When two vectors are added, the result is also a vector. Thus we might expect the product of two vectors to be a vector as well, but it is not. The dot product of two vectors is a real number, or scalar. This product is useful in finding the angle between two vectors and in determining whether two vectors are perpendicular.
The dot product of two vectors u = < u1, u2 > and v = < v1, v2 > is
u • v = u1.v1 + u2.v2
(Note that u1v1 + u2v2 is a scalar, not a vector.)
Example 11 Find the indicated dot product when
u = < 2, - 5 >, v = < 0, 4 > and w = < - 3, 1 >.
a)u • w
b)w • v
a) u • w = 2(- 3) + (- 5)1 = - 6 - 5 = - 11;
b) w • v = (- 3)0 + 1(4) = 0 + 4 = 4.
The dot product can be used to find the angle between two vectors. The angle between two vectors is the smallest positive angle formed by the two directed line segments. Thus the angle θ between u and v is the same angle as between v and u,and 0 ≤ θ ≤ π.
If θ is the angle between two nonzero vectors u and v, then
cosθ = (u • v)/(|u||v|).
Example 12 Find the angle between u = < 3, 7 > and v = < - 4, 2 >.
Solution We begin by finding u • v, |u|, and |v|:
u • v = 3(- 4) + 7(2) = 2,
|u| = √(3² + 7²) = √58, and
|v| = √((- 4)² + 2²) = √20.
Letting α denote the angle between u and v, we have
cosα = (u • v)/(|u||v|) = 2/(√58 · √20)
α = cos⁻¹(2/(√58 · √20))
α ≈ 86.6°.
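The same computation is easy to script. The Python sketch below (with a function name chosen only for illustration) reproduces the value found in Example 12:

```python
import math

def angle_between(u, v):
    """Angle, in degrees, between two nonzero vectors, using cos(theta) = (u . v)/(|u||v|)."""
    dot = u[0] * v[0] + u[1] * v[1]
    return math.degrees(math.acos(dot / (math.hypot(*u) * math.hypot(*v))))

print(round(angle_between((3, 7), (-4, 2)), 1))   # 86.6
```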
Forces in Equilibrium
When several forces act through the same point on an object, their vector sum must be O in order for a balance to occur. When a balance occurs, then the object is either stationary or moving in a straight line without acceleration. The fact that the vector sum must be O for a balance, and vice versa, allows us to solve many applied problems involving forces.
Example 13 Suspended Block. A 350-lb block is suspended by two cables, as shown at left. At point A, there are three forces acting: W, the block pulling down, and R and S, the two cables pulling upward and outward. Find the tension in each cable.
Solution We draw a force diagram with the initial points of each vector at the origin. For there to be a balance, the vector sum must be the vector O:
R + S + W = O.
We can express each vector in terms of its magnitude and its direction angle:
R = |R|[(cos125°)i + (sin125°)j],
S = |S|[(cos37°)i + (sin37°)j], and
W = |W|[(cos270°)i + (sin270°)j]
= 350(cos270°)i + 350(sin270°)j
= - 350j. (Here cos270° = 0 and sin270° = - 1.)
Substituting for R, S, and W in R + S + W = O, we have
[|R|(cos125°) + |S|(cos37°)]i + [|R|(sin125°) + |S|(sin37°) - 350]j = 0i + 0j.
This gives us a system of equations:
|R|(cos125°) + |S|(cos37°) = 0,
|R|(sin125°) + |S|(sin37°) - 350 = 0.
Solving this system, we get
|R| ≈ 280 and |S| ≈ 201.
The tensions in the cables are 280 lb and 201 lb.
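The solution above simply reports the result of solving the system. One way to reproduce the numbers is sketched below in Python, using Cramer's rule for the 2-by-2 system; the variable names are illustrative, and this is only one of several approaches that would work:

```python
import math

# Coefficients of the system from Example 13:
#   |R| cos(125°) + |S| cos(37°) = 0
#   |R| sin(125°) + |S| sin(37°) = 350
a11, a12 = math.cos(math.radians(125)), math.cos(math.radians(37))
a21, a22 = math.sin(math.radians(125)), math.sin(math.radians(37))
b1, b2 = 0, 350

det = a11 * a22 - a12 * a21          # determinant of the coefficient matrix
R = (b1 * a22 - a12 * b2) / det      # Cramer's rule for |R|
S = (a11 * b2 - b1 * a21) / det      # Cramer's rule for |S|
print(round(R), round(S))            # approximately 280 and 201
```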
There are literally dozens of proofs for the Pythagorean Theorem. The
proof shown here is probably the clearest and easiest to understand.
The Pythagorean Theorem states that for any right triangle the square of the hypotenuse equals the sum of the squares of the other two sides.
If we draw a right triangle having sides 'a', 'b' and 'c' (with 'c' being the hypotenuse), then according to the theorem,
c² = a² + b²
In order to prove the theorem, we construct squares on each of the
sides of the triangle.
It is important to realize that squaring the length of side a is exactly the same thing as determining the area of the green square. (The same applies to side b with the red square and side c with the blue square and this is the important concept of this proof). Basically, if we can show that the area of the green square plus the area of the red square equals the area of the blue square, we have proven the Pythagorean Theorem.
Now let's construct those same squares around the remaining three sides of the blue square.
Gee, that diagram sure looks confusing, doesn't it? However, you can see that those eight squares have "drawn" a square around the blue square.
When we remove the 6 squares we just added, we'll have a diagram very similar to the first one except now the blue square is surrounded by a square with sides of a length 'a' plus 'b'.
The area of this new square would be (a + b)² and the new diagram would look like the one drawn below.
Area of green square = a²; area of red square = b²
The area of the larger square surrounding the blue square equals (a+b)²
which equals a² + 2ab + b².
Note that the blue square is surrounded by 4 right triangles, the area of
each being ½(a·b), making the total area of all 4 triangles equal to 2ab.
So, the area of the blue square = area of the surrounding square minus
the area of the 4 triangles.
Area of blue square = a² + 2ab + b² minus 2ab
Blue Square Area = c² = a² + b²
We have just proven the Pythagorean Theorem.
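For readers who want to double-check the algebra in the last two steps, a computer algebra system does it in a few lines. The sketch below uses SymPy, which is simply a convenient tool here and not part of the proof itself:

```python
from sympy import symbols, expand, simplify

a, b = symbols('a b', positive=True)

outer_square = (a + b)**2          # area of the large square with side a + b
four_triangles = 4 * (a * b) / 2   # total area of the four right triangles
blue_square = outer_square - four_triangles

print(expand(blue_square))                     # a**2 + b**2
print(simplify(blue_square - (a**2 + b**2)))   # 0, confirming the area bookkeeping
```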
This discussion addresses several different aspects of proof and includes many links to additional readings. You may want to jump to the activities, try some out, and then double back to the readings once you have had a chance to reflect on how you approach proofs.
In everyday life, we frequently reach conclusions based on anecdotal evidence. This habit also guides our work in the more abstract realm of mathematics, but mathematics requires us to adopt a greater level of skepticism. Examples, no matter how many, are never a proof of a claim that covers an infinite number of instances.
A proof is a logical argument that establishes the truth of a statement. The argument derives its conclusions from the premises of the statement, other theorems, definitions, and, ultimately, the postulates of the mathematical system in which the claim is based. By logical, we mean that each step in the argument is justified by earlier steps. That is, that all of the premises of each deduction are already established or given. In practice, proofs may involve diagrams that clarify, words that narrate and explain, symbolic statements, or even a computer program (as was the case for the Four Color Theorem (MacTutor)). The level of detail in a proof varies with the author and the audience. Many proofs leave out calculations or explanations that are considered obvious, manageable for the reader to supply, or which are cut to save space or to make the main thread of a proof more readable. In other words, often the overarching objective is the presentation of a convincing narrative.
Postulates are a necessary part of mathematics. We cannot prove any statement if we do not have a starting point. Since we base each claim on other claims, we need a property, stated as a postulate, that we agree to leave unproven. The absence of such starting points would force us into an endless circle of justifications. Similarly, we need to accept certain terms (e.g., "point" or "set") as undefined in order to avoid circularity (see Writing Definitions). In general, however, proofs use justifications many steps removed from the postulates.
Before the nineteenth century, postulates (or axioms) were accepted as true but regarded as self-evidently so. Mathematicians tried to choose statements that seemed irrefutably truean obvious consequence of our physical world or number system. Now, when mathematicians create new axiomatic systems, they are more concerned that their choices be interesting (in terms of the mathematics to which they lead), logically independent (not redundant or derivable from one another), and internally consistent (theorems which can be proven from the postulates do not contradict each other). (Download Axiomatic Systems (Lee) and see sections 6.1, 8.1, and 8.4 in book 3b of Math Connections (Berlinghoff) for further explanations, activities, and problem sets on axiomatic systems, consistency, and independence). For example, non-Euclidean geometries have been shown to be as consistent as their Euclidean cousin. The equivalence between these systems does not mean that they are free of contradictions, only that each is as dependable as the other. This modern approach to axiomatic systems means that we consider statements to be true only in the context of a particular set of postulates.
To Establish a Fact with Certainty
There are many possible motives for trying to prove a conjecture. The most basic one is to find out if what one thinks is true is actually true. Students are used to us asking them to prove claims that we already know to be true. When students investigate their own research questions, their efforts do not come with a similar guarantee. Their conjecture may not be true or the methods needed may not be accessible. However, the only way that they can be sure that their conjecture is valid, that they have in fact solved a problem, is to come up with a proof.
Students' confidence in a fact comes from many sources. At times, they appeal to an authoritative source as evidence for a claim: "it was in the text" or "Ms. Noether told us this last year." It has been my experience that such justifications carry little practical persuasive value. For example, a class discussed the irrationality of a particular number and proofs of that fact, yet an essay assignment on a proposal to obtain that number's complete decimal expansion still generated student comments such as, "if it eventually turns out not to be irrational then that project would be interesting." Thus, an authoritative claim of proof is only good until some other authority shows otherwise. Mathematical truths do tend to stand the test of time. When students create a proof themselves, they are less likely to think of the result as ephemeral. A proof convinces the prover herself more effectively than it might if generated by someone else.
To Gain Understanding
"I would be grateful if anyone who has understood this demonstration would explain it to me."
Fields Medal winner Pierre Deligne, regarding a theorem that he proved using methods that did not provide insight into the question.
There are proofs that simply prove and those that also illuminate. As in the case of the Deligne quote above, certain proofs may leave one unclear about why a result is true but still confident that it is. Proofs with some explanatory value tend to be more satisfying and appealing. Beyond our interest in understanding a given problem, our work on a proof may produce techniques and understandings that we can apply to broader questions. Even if a proof of a theorem already exists, an alternative proof may reveal new relationships between mathematical ideas. Thus, proof is not just a source of validation, but an essential research technique in mathematics.
If our primary consideration for attempting a proof is to gain insight, we may choose methods and types of representations that are more likely to support that objective. For example, the theorem that the midpoints of the sides of any quadrilateral are the vertices of a parallelogram can be proven algebraically using coordinates or synthetically (figure 1).
Figure 1. The diagrams for coordinate and synthetic proofs
A synthetic proof rests on the fact that the segment connecting the midpoints of two sides of a triangle, the midline, is parallel to the third side. In quadrilateral ABCD (right side of figure 1), the midlines of triangles ABD and CBD are both parallel to the quadrilateral diagonal BD and, therefore, to each other. It is clear that if point C were to move, the midline for triangle BCD would remain parallel to both BD and the midline of triangle ABD. To complete the proof, one would consider the midlines of triangle ADC and triangle ABC as well. The coordinate proof uses the coordinates of the midpoints to show that the slopes of opposite midlines are equal.
For many people, the synthetic proof is more revealing about why any asymmetries of the original quadrilateral do not alter the properties of the inner parallelogram. It also illustrates how a proof can be a research tool by answering other questions, such as "when will the inner quadrilateral be a rhombus?" Because midlines are one half the length of the parallel side, the inner parallelogram will have equal sides only when the diagonals of the original quadrilateral are congruent.
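For readers who would like to see the coordinate version carried out in full generality, here is a sketch in Python with SymPy (the tool and the coordinate names are choices made here, not part of the original discussion). Comparing displacement vectors of opposite sides, rather than slopes, sidesteps the case of vertical sides:

```python
from sympy import symbols, simplify

# A generic quadrilateral ABCD with symbolic coordinates
xa, ya, xb, yb, xc, yc, xd, yd = symbols('xa ya xb yb xc yc xd yd')

def midpoint(px, py, qx, qy):
    return ((px + qx) / 2, (py + qy) / 2)

M1 = midpoint(xa, ya, xb, yb)   # midpoint of AB
M2 = midpoint(xb, yb, xc, yc)   # midpoint of BC
M3 = midpoint(xc, yc, xd, yd)   # midpoint of CD
M4 = midpoint(xd, yd, xa, ya)   # midpoint of DA

# Opposite sides of the midpoint quadrilateral as displacement vectors
side12 = (M2[0] - M1[0], M2[1] - M1[1])
side43 = (M3[0] - M4[0], M3[1] - M4[1])

# Both differences reduce to 0, so M1M2 and M4M3 are equal vectors:
# the midpoint quadrilateral is a parallelogram.
print(simplify(side12[0] - side43[0]), simplify(side12[1] - side43[1]))   # 0 0
```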
Sometimes our inability to develop a proof is revealing and leads us to reconsider our examples or intuitions. After countless attempts to prove that Euclid's fifth postulate (the parallel postulate) was dependent on the other four, mathematicians in the nineteenth century finally asked what the consequences would be if the postulate were independent. The doubts that arose from the failure to obtain a proof led to the creation of non-Euclidean geometries.
To Communicate an Idea to Others
Often, mathematicians (of both the student and adult variety) have a strong conviction that a conjecture is true. Their belief may stem from an informal explanation or some convincing cases. They do not harbor any internal doubt, but there is a broader audience that retains some skepticism. A proof allows the mathematician to convince others of the correctness of their idea. A Making Mathematics teacher, in the midst of doing research with colleagues, shared his feelings about proof:
Just so I can get it off of my chest, I hate doing proofs with a passion. It's the part of mathematics that I grew to hate when I was an undergraduate, and it's what so many of my former students come back and tell me turned them off to continuing on as a math major. I remember having a professor who held us responsible for every proof he did in class. We'd probably have a dozen or more to know for each exam, in addition to understanding the material itself. I can remember just memorizing the steps, because the approaches were so bizarre that no "normal person" would ever think of them in a million years (yes, I know I'm stereotyping).
This teacher's frustrations with proofs involved having to memorize arguments that were neither revealing (and therefore, not entirely convincing) nor sufficiently transparent about the process by which they were created. Yet, this same teacher, on encountering collegial doubts about his conjecture concerning Pascal's triangle, wrote, "Well, I decided to try and convince you all that the percentage of odds does in fact approach zero as the triangle grows by proving it." His efforts over several days produced a compelling proof. His conflicting attitudes and actions highlight the distinction between proofs as exercises and proofs as tools for communication and validation. A genuine audience can make an odious task palatable.
For the Challenge
Difficult tasks can be enjoyable. Many mathematical problems are not of profound significance, yet their resolution provides the person who solves them with considerable gratification. Such success can provide a boost in self-esteem and mathematical confidence. The process of surmounting hurdles to a proof can have all of the thrill of a good mystery. Students (and adults) are justifiably excited when they solve a problem unlike any they have previously encountered and which no one else may have ever unraveled.
To Create Something Beautiful
The more students engage in mathematics research, the more they develop their own aesthetic for mathematical problems and methods. The development of a proof that possesses elegance, surprises us, or provides new insight is a creative act. It is rewarding to work hard to make a discovery or develop a proof that is appealing. The mathematician Paul Erdös spoke of proofs that were "straight from the Book," the Book being God's collection of all the perfect proofs for every theorem. Although Erdös did not actually believe in God, he did believe that there were beautiful truths waiting to be uncovered (Hoffman).
To Construct a Larger Mathematical Theory
We rarely consider mathematical ideas in a vacuum. Our desire to advance a broader mathematical problem is often a source of motivation when we attempt a proof. For example, a number of mathematicians spent many years attempting to characterize a class of objects known as simple groups (Horgan). Their cumulative efforts resulted in thousands of pages of proofs that together accomplished the task. Many of these proofs, significant in their own right, were of even greater value because of their contribution to the larger understanding that the mathematics community sought.
For a further discussion of the role of proof in school curricula, see Do We Need Proof in School Mathematics? (Schoenfeld, 1994).
We can prove many different types of claims.
In general, students should attempt a proof in response to one of the motivations listed in the Why Do We Prove? section. If students only attempt proofs as exercises, they come to see proof as an after-the-fact verification of what someone else already knowsit becomes disconnected from the process of acquiring new knowledge. However, students derive considerable satisfaction from proving a claim that has arisen from their own investigations.
If students in a class disagree about a conjecture, then that is a good time for the individuals who support it to look for a proof in order to convince the doubters. If a student seems particularly taken with a problem and starts to feel some sense of ownership for the idea, then she should attempt a proof in response to her own mathematical tastes. If two student claims have a connection, the students may want to prove the one that is a prerequisite for proving the other.
A focus on formal proof should grow gradually. When we emphasize formal proof too soon and too often, before students have developed a rich repertoire of proof techniques and understanding, their frustration with, and subsequent dislike of, the challenge can become an obstacle to further progress. It is always appropriate to ask students what led them to their conjectures and why they think they are true. We begin by asking for reasons, not formal proofs, and establish the expectation that explanations should be possible and are important. Note that we ask "why" regardless of the correctness of a claim and not just for false propositions. As we highlight that they always should be interested in why an idea is true, students begin to develop the habit of asking "why?" themselves.
A good time to ask a student to write out a proof is when you think that she has already grasped the connections within a problem that are essential to the development of a more formal argument. This timing will not only lead to an appreciation for how proofs can arise organically during research, it will also lead to some confidence regarding the creation of proofs.
It is not necessary for students to prove all of their claims just for the sake of thoroughness. Published articles often prove the hard parts and leave the easier steps "for the reader." In contrast, a student should begin by trying to prove her simpler assertions (although it may be difficult to figure out how hard a problem will be in advance). When students have conjectures, label them with the students' names and post them in the class as a list of open problems. Then, as students grow in the rigor and complexity of their proofs, they can return to questions that have become accessible.
When a student does create a proof, have her describe it to a peer, give an oral presentation to the class, or write up her thinking and hand it out for peer review. The students should come to see themselves as each other's editorial board, as a group of collaborating mathematicians. They should not be satisfied if their classmates do not understand their argument. It is a long struggle getting to the point where we can write intelligible yet efficient mathematics. One of my students once presented proofs of a theorem four times before the class gave him the "official Q.E.D". Each of the first three presentations generated questions that helped him to refine his thinking, his definitions, and his use of symbols.
Learning to prove conjectures is a lifelong process, but there are some basic considerations and methods that students should focus on as they begin to develop rigorous arguments. The first concern is that they be clear about what they are trying to provethat they unambiguously identify the premises and the conclusions of their claim (see Conditional Statements in Conjectures).
The next goal should be to try to understand some of the connections that explain why the conjecture might be true. As we study examples or manipulate symbolic representations, we gain understanding that may lead to a proof. Because understanding and proof often evolve together, if a student wants to prove a conjecture that a classmate or teacher has presented, she should consider undertaking an investigation that will help her recreate the discovery of the result. This process may provide insight into how a proof might be produced. (See Schoenfeld (1992) for more discussion of problem solving and proof.)
Often, a proof involves a large number of steps that, in our thinking about the problem, we organize into a smaller number of sequences of related steps (similar to when computer programmers turn a number of commands into a single procedure). This "chunking" of many steps into one line of reasoning makes it possible to grasp the logic of a complicated proof. It also helps us to create an outline of a potential proof before we have managed to fill in all of the needed connections (see Proof Pending a Lemma below).
When we create a proof, we seek to build a bridge between our conjecture's premise and its conclusion. The information in the premise will have a number of possible consequences that we can use. Similarly, we try to identify the many conditions that would suffice to prove our conclusion. For example, if we know that a number is prime, there are numerous properties of prime numbers that we might bring into play. If we seek to show that two segments are congruent, we might first show that they are corresponding sides of congruent figures, that they are both congruent to some third segment, or that it is impossible for one to be either shorter or longer than the other. Once we have considered the possibilities that stem from our premises and lead to our conclusions, we have shortened the length of our proof from "if premise, then conclusion" to "if consequence-of-premise, then conditions-leading-to-conclusion" (figure 2). A main task comes in trying to determine if any of these new statements (one for each combination of consequence and condition) is likely to be easier to prove than the original.
Figure 2. Searching for a path to a proof
Some conjectures' conclusions involve more than one claim. Recognizing all of these requirements can be a challenge. For example, to show that the formula (n² − m², 2mn, n² + m²) is a complete solution to the problem of identifying Pythagorean triples, we need to show both that it always generates such triples and that no triples are missed by the formula. Cases such as this, in which we need to demonstrate both a claim and its converse, are common.
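For the Pythagorean-triple formula above, the first half of the job, showing that the formula always generates triples, amounts to a polynomial identity that a computer algebra system can confirm in one line. The SymPy sketch below checks only that direction; showing that no triple is missed still needs its own argument:

```python
from sympy import symbols, expand

n, m = symbols('n m', positive=True)

legs_squared = (n**2 - m**2)**2 + (2*m*n)**2
hypotenuse_squared = (n**2 + m**2)**2

# expand() reduces the difference to 0, so the two sides agree identically.
print(expand(legs_squared - hypotenuse_squared))   # 0
```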
Sometimes, two approaches to proving a result will differ in both their method and what they teach us. A student working on the Amida-kuji project defined a minimal configuration of horizontal rungs as one that results in a particular rearrangement of the numbers using the fewest rungs possible. He then conjectured that the number of distinct minimal configurations would always be greater for the reversal of n items (1 2 3 ... n goes to n ... 3 2 1) than for any other permutation of the n values. Does this student need to find and prove a formula for the number of minimal configurations for each permutation? Can he somehow compare the number of minimal configurations without actually counting them explicitly and show that one set is larger? These two approaches might both prove his claim, but they require distinctly different findings along the way.
Just as we make decisions about the sequencing of ideas that we use to construct a proof, so, too, do we choose from among an array of different technical tools. In the quadrilateral proof above, we represented the same setting using coordinates as well as synthetically. We transform our mathematical ideas into diagrams, numeric examples, symbolic statements, and words. Within those broad categories, there are numerous ways of representing information and relationships and each representation offers the possibility of new understandings.
We may further our understanding of a problem by looking at a simpler version of it. We can apply this same approach to proof: prove a special case or subset of cases before taking on the entire problem. For example, a student working on the Raw Recruits project first proved theorems about the cases with one or two misaligned recruits and then worked up to the general solution. Choosing the right simplification of a problem is important. Had the student focused on a fixed number of total recruits rather than of misaligned ones, she might not have been as successful finding patterns.
The list of proof techniques is endless. Providing students with a repertoire of a few powerful, general methods can give them the tools that they need to get started proving their conjectures. These first techniques also whet students' appetites to learn more. Each student's own research and reading of mathematics articles (see Reading Technical Literature in Getting Information) will provide additional models to consider when constructing a proof. When students begin work within a new mathematical domain, they will need to learn about the tools (representations, techniques, powerful theorems) common to the problems that they are studying.
It is not possible to give ironclad rules for when a given approach to proof will prove fruitful. Therefore, in addition to providing guidance ("It might be worthwhile holding one of your variables constant"), our job mentoring students engaged in proof is to ask questions that will help them reflect on their thinking. Is planning a part of their process (are they considering alternative strategies or just plowing ahead with the first approach that occurs to them)? Are they connecting the steps that they are exploring with the goal that they are trying to reach (can they explain how their current course of action might produce a useful result)? Are they periodically revisiting the terms of their conjecture to see that they have not drifted off course in their thinking? See Getting Stuck, Getting Unstuck: Coaching and Questioning for further questions.
The most basic approach that students can use to develop understanding and then a proof is to study specific cases and seek to generalize them. For example, a student was exploring recursive functions of the form f(n) = a·f(n - 1) + b. She wanted to find an explicit formula for f and began by looking at a particular choice of a, b, and starting value. Her first values revealed some patterns, but no breakthrough. She then took an algebraic perspective on the problem by looking at the form and not the value of the results. She decided to keep her examples general by not doing the arithmetic at each step.
This form revealed an explicit formula, which pointed the way to a general rule for all a, b, and starting values. This example demonstrates why it is sometimes advantageous not to simplify an expression.
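The same keep-it-symbolic strategy can be mimicked with a computer algebra system. The sketch below assumes the general recursion f(n) = a·f(n - 1) + b with starting value f(0) = c; the particular numbers the student used are not reproduced here, and the closed form shown is the standard one for a ≠ 1:

```python
from sympy import symbols, expand, simplify

a, b, c = symbols('a b c')

# Unroll f(n) = a*f(n-1) + b from f(0) = c without collapsing the arithmetic.
f = c
for step in range(1, 5):
    f = a * f + b
    print(f"f({step}) =", expand(f))
# f(1) = a*c + b
# f(2) = a**2*c + a*b + b
# f(3) = a**3*c + a**2*b + a*b + b   ... the geometric-series pattern appears

# Conjectured closed form (for a != 1): f(n) = a**n*c + b*(a**n - 1)/(a - 1)
n = symbols('n', integer=True, positive=True)
closed = a**n * c + b * (a**n - 1) / (a - 1)
print(simplify(closed.subs(n, 4) - f))   # 0 for the unrolled f(4)
```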
Algebra is a familiar, all-purpose tool that we should encourage students to use more often. Many students primarily think of variables as specific unknowns and not as placeholders for an infinite number of examples (see the practice proofs and their solutions for examples of algebraic expressions used in this manner).
Examples as Disproof and Proof
An example cannot prove an affirmative statement about an infinite class of objects. However, a single example, called a counterexample, is sufficient to disprove a conjecture and prove the alternative possibility. For example, we know of many even perfect numbers (Weisstein). The discovery of a single odd perfect number would be an important proof that such numbers, conjectured not to exist, are possible.
When a conjecture involves a finite set of objects, we can prove the conjecture true by showing that it is true for every one of those objects. This exhaustive analysis is sometimes the only known means for answering a question. It may not be elegant, but it can get the job done if the number of instances to test is not overwhelmingly large. The mathematicians who proved the Four Color Theorem (MacTutor) broke the problem into 1476 cases and then programmed a computer to verify each one. Such proofs are not entirely satisfying because they are less likely than a proof that covers all cases simultaneously to have explanatory value.
We often break a problem down into categories of instances or cases and not all the way down to individual instances. For example, a theorem about triangles may require separate analyses for acute, right, and obtuse triangles. One challenge when proving via a case-by-case analysis is to have a rigorous means of showing that you have identified all of the different possible cases.
One of the more exciting experiences in mathematics is the recognition that two ideas are connected and that the truth of one is dependent on the truth of the other. Often a student will be working on a proof and discover that they have a line of reasoning that will work if some other claim is true. Encourage the student to develop their main argument and then return to see if they can fill in the missing link. A claim that is not a focus of your interest, but which you need for a larger proof, is called a lemma. As students working on a common problem share their discoveries through oral and written reports, they may recognize that a fellow researcher has already proven a needed lemma. Alternatively, they may realize that their conjecture is a straightforward consequence of a general result that another classmate has proven. We call a theorem that readily follows from an important result a corollary. These events contribute enormously to students understanding of mathematics as a communal activity.
There are many well-known cases of theorems that mathematicians have proven pending some other result. Of course, that means that they are not actually theorems until the lemma has been established. What is a theorem in these situations is the connection between two unproven results. For example, Gerhard Frey proved that if a long-standing problem known as the Taniyama-Shimura conjecture were true, then Fermat's Last Theorem (MacTutor) must be as well. This connection inspired Andrew Wiles to look for a proof of the Taniyama-Shimura conjecture.
How do we know that we have proven our conjecture? For starters, we should check the logic of each claim in our proof. Are the premises already established? Do we use the conclusions to support a later claim? Do we have a rigorous demonstration that we have covered all cases?
We next need to consider our audience. Is our writing clear enough for someone else to understand it? Have we taken any details for granted that our readers might need clarified? Ultimately, the acceptance of a proof is a social process. Do our mathematical peers agree that we have a successful proof? Although we may be confident in our work, unless others agree, no one will build upon or disseminate our proof. Our theorem may even be right while our proof is not. Only when our peers review our reasoning can we be assured that it is clear and does not suffer from logical gaps or flaws.
If a proof is unclear, mathematical colleagues may not accept it. Their clarifying questions can help us improve our explanations and repair any errors. On the other hand, mathematical truth is not democratically determined. We have seen many classes unanimously agree that a false assertion was true because the students failed to test cases that yielded counterexamples. Likewise, there have been classes with one voice of reason trying to convince an entire class of non-believers. The validity of a proof is determined over timereaders need time to think, ask questions, and judge the thoroughness of an exposition. Students should expect to put their proofs through the peer review process.
When do peers accept a proof? When they have understood it, tested its claims, and found no logical errors. When there are no intuitive reasons for doubting the result and it does not contradict any established theorems. When time has passed and no counterexamples have emerged. When the author is regarded as capable ("I don't understand this, but Marge is really good at math"). Some of these reasons are more important than others, but all have a role in practice.
See Davis and Hersh's (1981) The Mathematical Experience for a fine collection of essays on the nature of proof, on methods of proof, and on important mathematical conjectures and theorems.
Since one reason we tackle proofs is for the challenge, we are entitled to a modest "celebration" when a proof is completed. The nicest honor is to name a theorem after the student or students who proved it. If you dub proofs after their creators (e.g., Laura's Lemma or the Esme-Reinhard Rhombus Theorem) and have them posted with their titles, students will be justifiably proud. Give conjectures titles, as well, in order to highlight their importance and as a way to promote them so that others will try to work on a proof.
Introduce students to the traditional celebration: ending a proof with "Q.E.D." Q.E.D. is an acronym for "quod erat demonstrandum," Latin for "that which was to be demonstrated." At the end of a proof by contradiction, students can use "Q.E.A.," which stands for "quod est absurdum" and means "that which is absurd" or "we have a contradiction here." These endings are the understated mathematical versions of "TaDa!" or "Eureka!" Modern, informal equivalents include AWD ("and we're done") and W5 ("which was what we wanted") (Zeitz, p. 45). We have also seen "MATH is PHAT!" at the end of student proofs. Professional publications are now more likely to end a proof with a rectangle (∎) or to indent the proof to distinguish it from the rest of a discussion, but these are no fun at all.
Do remind students that once their celebration is over, their work is not necessarily done. They may still need to explore their theorem further to understand why it is true and not just that it is true, to come up with a clearer or more illuminating proof, or to extend their result in new directions. Additionally, proofs sometimes introduce new techniques that we can productively apply to other problems. In other words, the completion of a proof is a good time to take stock and figure out where to go next in one's research. Like movies that leave a loose strand on which to build a sequel, most math problems have natural next steps that we can follow.
We are not very pleased when we are forced to accept a mathematical truth by virtue of a complicated chain of formal conclusions and computations, which we traverse blindly, link by link, feeling our way by touch. We want first an overview of the aim and of the road; we want to understand the idea of the proof, the deeper context. - Hermann Weyl (1932)
The standard form for a mathematical proof is prose interwoven with symbolic demonstrations and diagrams. Students who write paragraph explanations will often comment that they do not yet have a "real proof." However, the two-column style that they believe to be the only acceptable format is often not as clear or informative as a proof with more English in it. Encourage them to add narrative to their proofs and to use whatever form seems most effective at communicating their ideas. Let them know that written language is a part of mathematics.
Weyl encourages us to tell the story of our proof at the start so that each step in the presentation can be located on that roadmap. We should be able to say to ourselves "Oh, I see why she did that. She is setting up for this next stage" rather than "Where on Earth did that come from? Why did she introduce that variable?" Our goal is not to build suspense and mystery, but to provide the motivation for the important steps in our proofs. As noted earlier, we improve a lengthy proof's story by considering how the pieces of the proof fit together into connected chunks that we can present as separate theorems or lemmas. These chapters in the story reduce the number of arguments that our readers have to manage at any given stage in their effort to understand our proof.
Published proofs are often overly refined and hide from the reader the process by which the mathematician made her discoveries. As teachers, we want to encourage students to share the important details of that process. What methods did they consider? Why did some work and others not? What were the examples or special cases that informed their thinking? What dead ends did they run into? The teacher mentioned above, who disliked proof, was frustrated because the proofs that he had read were too polished to be a guide for how to develop a proof. The more we include our data, insights, experimentation, and derivations in our proofs, the more they will help others with their own mathematics.
We want to find a balance between the desire to convey the process of discovery, which is often circuitous, and the need to present a coherent argument. Students should develop an outline for each proof that reflects which ideas are dependent on which others. They should punctuate their narrative with clearly labeled definitions, conjectures, and theorems. Proofs should include examples that reveal both the general characteristics of the problem as well as interesting special cases. Examples are particularly helpful, not as a justification, but because they provide some context for understanding the more abstract portions of a proof. Examples may also help clarify imprecise notation or definitions.
Some additional recommendations for making proofs more readable:
If any parts of your research were carried out collaboratively or based on someone else's thinking, be sure to acknowledge their work and how you built upon it. For a full discussion on how to write up your results, see Writing a Report in Presenting Your Research.
We evaluate proofs at several levels. First, we need to see if we can understand what the proof says. If our mathematical background is sufficient to understand the proof, then, with effort, we should be able to make sense of it (see Reading Technical Literature in Getting Information). Next, we want to decide whether the proof is actually a successful proof. Do all of the pieces fit together? Are the explanations clear? Convincing? A good proof does not over-generalize. If a proof does not work in all cases, is it salvageable for some meaningful subset of cases?
Students should be given time to read each other's proofs. They should be skeptical readers who are trying to help their classmate improve their work. They should be supportive by offering helpful questions about claims that are unclear or steps that would improve the proof. The writer of a proof should expect to address any concerns and to work through several drafts before the class declares her work completed. Although we are tempted to believe in our own discoveries, we are also obliged to look for exceptions and holes in our reasoning and not leave the doubting just to our peers.
Once a proof passes the first hurdle and we believe it is correct, we come to a different set of criteria for judging proofs. These criteria are both aesthetic and functional and help us to understand why we would want to find different ways to prove a particular theorem. Here are some considerations that students might apply to proofs that they study (see Evaluating Conjectures for further considerations):
Each of us has our own aesthetic for which areas of mathematics and ways of solving problems are most appealing. Mathematicians will often call a proof "elegant" or "kludgy" based on their standards of mathematical beauty. Is a substitution, offered without motivation, that quickly resolves a problem (e.g., let f(x) = cotan(1 x/2)) magical, concise, or annoying? Whichever of the standards above move us to call a proof beautiful, it is an important recognition that judgments of beauty are part of mathematics. Share your own aesthetics with students and encourage them to develop their own. It is perfectly reasonable simply to enjoy geometric or number theoretic problems and solutions more than some other area of mathematics. Some students may love problems that start out complicated but then sift down to simple results. Help them to recognize and celebrate these interests while broadening their aesthetics through the sharing of ideas with each other.
Class Activity: One way to highlight the different characteristics of proofs is to ask students to study and compare alternative proofs of the same theorem. Hand out Three Proofs that √2 Is Irrational (table 1, below) and give students time to read all three slowly (note: students should be familiar with proof by contradiction). Ask them to write down questions that they have about the different steps in the proofs. Next have them work in small groups trying to answer the questions that they recorded and clarify how each proof achieves its goal. Have each student then write an evaluation of the proofs: Does each proof seem to be valid? If not, where do they identify a problem? Which proof appealed to them the most? Why? Ask them to consider the other criteria above and choose one or more to address in evaluating the proofs.
Students may note that there are similarities among the proofs. All three proofs are indirect, and all three begin by eliminating the root and converting the problem to one of disproving the possibility that a² = 2b² for counting numbers a and b. These first steps reduce the problem to one involving only counting numbers instead of roots and remove the likelihood that any not-yet-proven assumptions about roots or irrational numbers will creep into the reasoning.
Students are drawn to different parts of the three proofs. Some prefer Proof B because it does not rely on the assumption (kids may call it a gimmick) that a and b have no common factors. This objection is a good occasion to discuss the "story" of how that assumption comes into proofs A and C. It is essential to establishing the contradiction later on in the proof, but how did the prover know it was needed? The answer is that they didn't and that it was put in place once the need was discovered (we have watched students develop this proof themselves and then stick in the condition in order to force the contradiction). If the authors of these proofs included details of their derivations (the story of how they thought up the proofs), they would avoid the discomfort that the austere versions create.
Proof C relies on a case-by-case analysis showing that the possible final digits of the two sides cannot match. Again, despite the bluntness of its means, it seems to explain why a²/b² cannot reduce to 2. This method becomes more elegant, with fewer cases, when we look at the final digit in a smaller base such as base 3.
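In that spirit, a short enumeration shows how few cases base 3 requires; the Python below is just a convenience, and the final divisibility (descent) step still has to be argued separately:

```python
# Possible final base-3 digits (residues mod 3) of a*a and of 2*b*b.
squares_mod_3 = {(a * a) % 3 for a in range(3)}               # {0, 1}
doubled_squares_mod_3 = {(2 * b * b) % 3 for b in range(3)}   # {0, 2}

print(squares_mod_3, doubled_squares_mod_3)
# The only shared digit is 0, so a*a = 2*b*b forces both a and b to be
# multiples of 3, which is what sets up the contradiction.
print(squares_mod_3 & doubled_squares_mod_3)   # {0}
```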
The point of the above discussion is not to have your students choose one "best" proof, but to have them weigh the pros and cons of each. We want them to discover that not everyone in the class has the same mathematical tastes. However, some criteria are more objective than others. For example, one important criterion is how easily a proof may be generalized to related problems. In the case of the three proofs in table 1, you may ask students to decide which extend readily to show that the roots of other integers (or all non-perfect squares) are irrational. We might also inquire why the same proof methods do not show that the square root of a perfect square, such as √4, is irrational.
Another objective criterion is the sophistication of the mathematics needed to support a proof. Proof A requires fewer lemmas than the other two. Despite students' frequent preference for proof B, it relies on the comparatively "heavy machinery" of the fundamental theorem of arithmetic (positive integers have a unique prime factorization). Mathematicians often applaud a proof that uses more elementary methods, but, in this case, the elementary approach is not necessarily easier to understand.
You can introduce the activity described above with other accessible theorems. Pythagorean Theorem and its Many Proofs (Bogomolny) and Pythagorean Theorem (Weisstein) provide several dozen different proofs of the Pythagorean Theorem. Make handouts of a variety of these proofs and have each student pick three to study. Which did they like best? Why? Do they prefer those that involved geometric dissections or algebraic calculations? Those that were shorter and skipped steps or those that explained each step carefully? Can the class provide the missing arguments for the less rigorous "proof without words" diagrams? Encourage them to see the particular appeal of each proof.
Earlier in this section, we suggested that students' proof experiences are most effective when they emerge organically from student investigations. Nevertheless, for a number of reasons, there is value to students practicing creating proofs as well. For example, practice helps students hone techniques and instincts that they can use in work that is more open-ended. Additionally, some of the reasons given in Why Do We Prove? remain relevant even if we are told what to prove. When students share their proofs with each other, they get further practice reading proofs and comparing the different types of reasoning used to justify theorems.
The transfer of understandings derived from practice problems is particularly likely if the practice is not overly structured. Proof exercises not connected to the study of a particular content area (e.g., triangle congruence or induction) force students to think about which of their many skills might help solve the problem. For each one, they might ask, "Should we introduce a variable? Will an indirect proof work?" This way, they are practicing methods and making thoughtful choices. If students do not have a clear reason for choosing one approach over another, point out to them that they do not have to be paralyzed in the face of this uncertainty. They can just start experimenting with different representations of the information and different proof methods until one of them works.
Students' first proofs are rarely polished or precise. They may over-emphasize one point while omitting an important consideration (see, for example, the student proof below). Without experience devising symbolic representations of their ideas, students' representations are often inefficient or unhelpful. For example, a student working on the Amida Kuji project was asked by her teacher to clarify and strengthen an English argument using symbols. She devised substitutes for her words ("h_i is a horizontal rung"), but the symbols had no value facilitating her computations and led to an argument that was more difficult to read. The proof had that "mathy" look to it, but, until the student had a better grasp of the underlying structures of the problem and their properties, she was in no position to develop a useful system of symbols.
When we respond to students' early proofs, our emphasis should be on the proof's clarity and persuasiveness. Their arguments may take many forms: paragraphs, calculations, diagrams, lists of claims. Any of these may be appropriate. We want to help them identify any assumptions or connections that they have left unstated, but we also have to judge how convincing and complete a line of reasoning has to be. Can steps that are obvious be skipped? To whom must they be obvious? Does a proof have to persuade a peer, a teacher, or a less knowledgeable mathematics student? We want to help younger students develop rigor without bludgeoning them on specifics that they may not be ready to attend to. Can students adopt the attitude of the textbook favorite, "we will leave it as an exercise for the reader to verify that ..."? Fine readings on this topic include "I would consider the following to be a proof ..." and "Types of Students' Justifications" in the NCTM Focus Issue on the Concept of Proof (1998).
One answer to the above questions is that a student's classmates should be able to understand and explain their proofs. If classmates are confused, they should explain where they lose the thread of an argument or what they think a sentence means so that the author can rewrite her proof to address these confusions. Once a proof has passed the peer test, we can note additional possible refinements that will help our students develop greater sophistication in their thinking and presentation over time. Try to focus on certain areas at a time and expand students' rigor and use of symbols incrementally. We try to emphasize proper vocabulary first (see Definitions). The development of original and effective symbolic representations tends to take more time to appear.
Be aware of "hand-waving" in proofs. Hand-waving is what a magician does to distract his audience from a maneuver that he does not want them to notice. For mathematicians, hand-waving is a, perhaps unintentional, misdirection during a questionable part of an argument. The written equivalent often involves the words "must" or "could" (e.g., "the point must be in the circle " ) without justification of the claimed imperative. Sometimes we need to note, but accept, a bit of hand-waving because a gap is beyond a students ability to fill.
Many of the proof exercises provided here are more suitable for high school than middle school students. The whole class settings described below as well as practice problems 1, 4, 6, 7, 15, and 16 are likely to work with middle school students (although others may also be useful depending on the students' background). Particularly with younger students, doing proof within explorations that help them see how a proof evolves naturally from questions and observations is more valuable than exercises that ask them to prove someone else's claims. When we are given a "to prove", we have to go back and explore the setting anyway in order to develop some intuition about the problem. Older students, who have a broader array of techniques from which to choose, are more likely to benefit from proof exercises.
Once a class has proven theorems in the context of longer research explorations, you can use the practice problems as a shorter activity. Choose a few problems to put on a handout and distribute them to each student. Give the students a few days to work on the problems and then discuss and compare their discoveries and proofs. Based on these discussions and peer responses, each student can then rewrite one of their proofs to produce a polished solution.
Kids need more experience trying to prove or disprove claims without knowing the outcome ahead of time. In genuine mathematical work, we pose a conjecture, but we are not sure that it is true until we have a proof or false until we have a counterexample. The practice problems below sometimes call attention to this indeterminate status by asking students to "prove or disprove" the claim. Some of them actually ask for a proof even though the claim is false. We include these red herrings because students are often overly confident about their own conjectures and need to develop greater skepticism. Students should not consider this feature foul play, but good training in skeptical thinking. We are often taught to see texts as unerring authorities, but even the most prestigious journals of mathematics and science occasionally publish results that turn out to be false or incomplete. We have found that students are delighted when, and will put great effort into proving that, a textbook or teacher is wrong. We are simply building in that opportunity.
Once a false statement has captured students' attention, challenge them to turn it into a true claim. Can they identify a significant set of cases for which the claim is true (e.g., by changing the domain to remove the counterexamples, see problem 10)? Can they generalize the claim (e.g., problem 7 is false, but the more general claim for two relatively prime divisors is true)?
The related games Yucky Chocolate and Chomp are good settings for early work with proof. These games are effective with both middle and high school classes. Both games begin with an n-by-m array of chocolate squares (n not necessarily different from m) in which the top left square of chocolate has become moldy.
Rules for the game of Yucky Chocolate: On each turn in the game of Yucky Chocolate, a player chooses to break the bar of chocolate along a horizontal or vertical line. These breaks must be between the rows of squares (figure 3). The rectangle that is broken off is "eaten" by that player. The game continues with the rectangle that includes the yucky square. You can introduce this game with real chocolate, but the incentive to break off large pieces for consumption may overwhelm any other strategic thinking. Players take turns until one player, the loser, is left with just the yucky piece to eat.
Figure 3. A horizontal break in the game of Yucky Chocolate leaves a 2 by 4 board
Introduce your class to the rules of the game and then have them pair off to play several rounds starting with a 4 by 6 board. They can play the game on graph paper, mark off the starting size of the chocolate bar, and then shade in eaten portions each turn. After a few rounds of play, students will start to notice winning end-game strategies. In one fifth-grade class, the students observed that when a player faced a 2-by-2 board, they always lost. Given that observation, additional play led them to see why a 3-by-3 board was also a losing position. They were able to turn these conjectures into theorems with simple case-by-case analyses. For the 3-by-3 board, the symmetry of the situation meant that there were really only two distinct moves possible (leaving a 2-by-3 or 1-by-3 board). Each of these moves gave the other player a winning move (reducing the board to a 1-by-1 or 2-by-2 case).
After the class realized that the smaller square positions were losers, some students took the inductive leap to conjecture that all n-by-n boards represented losing positions. One girl, who had never studied proof by induction, excitedly began explaining how each larger square array could be turned into the next smaller one and that she could always force the game down to the proven losing square positions. She had an intuitive understanding of the validity of an inductive argument. She then stopped and realized that her opponent might not oblige her by carving off just one column and that she did not know how big the next board might be. She had cast doubt on the reasoning of her own argument. She was facing another form of inductive proof in which one builds not just from the next smallest case but all smaller cases. After a while, the class was able to show that regardless of the move that an opponent facing an n-by-n board takes, there was always a symmetrical move that made a smaller square board. Therefore, they could inexorably force a win. This argument made possible a full analysis of the games that led to a win for the first player (n ≠ m) and those that should always be won by the second player.
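For classes that want to double-check this analysis on small boards, a brute-force search is easy to write. The Python sketch below (function names chosen for illustration) is a verification aid for small cases, not a substitute for the inductive proof:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def first_player_wins(rows, cols):
    """True if the player about to move on a rows-by-cols bar (yucky square at
    the top left) can force a win in Yucky Chocolate."""
    if rows == 1 and cols == 1:
        return False          # only the yucky square remains: this player loses
    # Try every break: shrink either the number of rows or the number of columns.
    for r in range(1, rows):
        if not first_player_wins(r, cols):
            return True
    for c in range(1, cols):
        if not first_player_wins(rows, c):
            return True
    return False

# The class's theorem: square bars lose for the player to move, all others win.
for rows in range(1, 7):
    print([first_player_wins(rows, cols) for cols in range(1, 7)])
```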
Once students have a complete understanding of Yucky Chocolate, the game provides a nice opportunity for practicing problem posing. Ask the students to each develop one or more variations of the game. What characteristics can they change? Does the game remain interesting? Does it become more complicated? Do they have to change any rules to make it still make sense? Some of the changes that students have explored include moving the location of the moldy square, making the problem three-dimensional, changing the number of players, or playing with a triangular grid of chocolate.
Rules for the game of Chomp: The game of Chomp starts with the same slightly moldy chocolate bar, only the players take turns biting the chocolate bar with a right-angled mouth. These bites remove a chosen square and all remaining squares below and/or to the right of that square (figure 4. See Joyce for further examples).
Figure 4. Two turns in a game of Chomp
These bites can leave behind boards with complicated shapes that make it difficult to analyze which player should win for a given starting board. Student investigations can identify many sets of initial configurations (e.g., the 2-by-n or n-by-n cases) where a winning strategy can be determined and a proof produced (see Keeley and Zeilberger). Zeilberger's Three-Rowed Chomp provides an elegant existence proof that the first player in a game must always have a winning strategy. Being an existence proof, it provides no hint at how the winning strategy might be found. See Gardner, Joyce, Keeley, and Stewart for more on the game of Chomp. The article by Stewart also discusses Yucky Chocolate. The Keeley article provides a lovely discussion of one class's definitions, conjectures, and theorems about the game of Chomp.
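A similar brute-force search handles small Chomp boards. The sketch below represents a position by its non-increasing tuple of row lengths (an encoding chosen here for convenience); it confirms the first-player-wins result on small rectangles without, of course, revealing what the winning strategy is:

```python
from functools import lru_cache

def bite(rows, i, j):
    """Remove the square in row i, column j and everything below and to its right.
    rows is a non-increasing tuple of row lengths; empty rows are dropped."""
    new_rows = tuple(length if k < i else min(length, j) for k, length in enumerate(rows))
    return tuple(length for length in new_rows if length > 0)

@lru_cache(maxsize=None)
def first_player_wins(rows):
    """True if the player to move can force the opponent to eat the moldy square."""
    if rows == (1,):
        return False                      # only the moldy square remains: mover loses
    for i, length in enumerate(rows):
        for j in range(length):
            if i == 0 and j == 0:
                continue                  # eating the moldy square itself is an instant loss
            if not first_player_wins(bite(rows, i, j)):
                return True
    return False

for m in range(1, 5):
    for n in range(1, 5):
        print((m, n), first_player_wins(tuple([n] * m)))   # True except for (1, 1)
```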
Rather than work through an exploration of each quadrilateral type sequentially, provide the class with standard definitions of each and have them draw (or construct) examples of each. Point out that each shape has a number of properties that are a consequence of their definition (e.g., reflection symmetry) that are not explicitly part of their definition. The handout Quadrilateral Properties will encourage a systematic exploration of these properties, each of which can be turned into a conjecture (e.g., "if the diagonals of a quadrilateral are congruent and bisect each other, then the figure is a rectangle" or "if a figure is a rhombus then it is a parallelogram") that students can try to prove (see writing conjectures for more on this topic). For each proof, they should produce a labeled diagram and a statement of the given information in terms of those labels. The given information should be strictly limited to that provided in the definitions of the terms in the premise of the conjecture.
Once students have generated a number of proofs using the above activity, they can move on to explore the properties of the perpendicular bisectors or midpoints of the sides or the bisectors of the angles of the different quadrilaterals. They might even explore dividing the angles or sides into multiple equal parts (n-secting them). Dynamic geometry programs, such as Geometer's Sketchpad, are particularly helpful in creating clear diagrams and taking accurate measurements that aid students in making discoveries in these settings.
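One conjecture that often surfaces in this kind of exploration is that the midpoints of the sides of any quadrilateral form a parallelogram. The short Python experiment below imitates the measure-and-compare workflow of a dynamic geometry program; the random coordinates are an arbitrary choice, and agreement to rounding error is evidence that suggests the conjecture, not a proof of it.

import random

def midpoint(p, q):
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

for trial in range(5):
    quad = [(random.uniform(-10, 10), random.uniform(-10, 10)) for _ in range(4)]
    mids = [midpoint(quad[i], quad[(i + 1) % 4]) for i in range(4)]
    # compare one pair of opposite sides of the midpoint figure as vectors
    side_a = (mids[1][0] - mids[0][0], mids[1][1] - mids[0][1])
    side_b = (mids[2][0] - mids[3][0], mids[2][1] - mids[3][1])
    print(side_a, side_b)  # the two vectors match (up to rounding error)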
Diagrams play a complex role in mathematics. Many mathematicians think about even quite abstract ideas using visual images. Algebraic ideas often have natural geometric representations. We will often try to draw a picture of a problem that we are exploring because the image conveys a great deal of information organized according to a set of meaningful relationships. However, pictures do have limitations that students need to appreciate. In trying to gain insight from a diagram, we are restricted by its static nature. It shows us just one instance. The appeal of dynamic programs such as Geometer's Sketchpad is, in part, that they allow us to quickly view multiple examples. Diagrams can mislead us if they are not created with precision, and even accurate pictures may possess properties that are not typical of all cases. While diagrams may persuade and inform us, they do not constitute proofs. As with other types of examples, a picture may look convincing simply because we have not yet imagined how to construct a counterexample.
We want to help our students learn how to use diagrams as tools for furthering their investigations and how to extract information from them. As they work on problems, we can prompt them to consider whether a graph or other visual representation can be generated and studied. When they are reading other people's proofs, encourage them to study all labels and features and to connect those details to the text and symbolic statements in the discussion, to see how they illuminate that discussion and whether they serve as an effective visual counterpart.
Students can also get practice interpreting diagrams by studying "proofs without words." These "proofs" are pictures that their author considers so enlightening that they readily convince us that we can dependably generalize the pattern to all cases. Depending on how wordless a proof without words is (and some do have the occasional accompanying text), the pictures can take some effort to analyze. Effective pictures can be the inspiration for a more formal proof. Winicki-Landman (1998, p. 724) cautions that some students may respond negatively to proofs without words if they feel that they will have to come up with such elegant diagrams themselves. Be sure to emphasize the value of working with diagrams and the purpose of these activities. When "proof pictures" do not even have variable labels, encourage students to choose variables for the different quantities in the picture and to see what the pictures tell them about those variables.
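A classic example to practice on is the picture of an n-by-n square array of dots split into nested L-shaped borders, usually read as a proof that the first n odd numbers sum to n squared. The small stand-alone LaTeX fragment below shows one way students might translate that picture into symbols; the variable names and the induction write-up are our own illustrative choices rather than a rendering of any particular published proof without words.

\documentclass{article}
\begin{document}
\[ 1 + 3 + 5 + \cdots + (2n-1) = n^2 \]
Reading the picture: the $k$-th L-shaped border of an $n \times n$ square of
dots contains $2k-1$ dots, and the borders for $k = 1, \dots, n$ together
fill the square. Turning this into an induction: the case $n = 1$ is clear,
and if $1 + 3 + \cdots + (2n-1) = n^2$, then adding the next odd number
gives $n^2 + (2n+1) = (n+1)^2$.
\end{document}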
See Proof without words (Bogomolny) for further discussion and additional examples.
Solutions to these problems are provided below as a way for you to gauge the difficulty of the problems and to determine their appropriateness for your class. Do not expect or require the students' solutions to match the ones provided here. Alternatively, try to work on the problems yourself and with your students so that you can model how you think about analyzing problems and constructing proofs. After you and the students have your own results, you can use the solutions to make interesting comparisons. As the class discusses the different solutions to the problems, be sure to highlight the different methods (e.g., induction, proof by contradiction, case-by-case analysis) that they used. This emphasis will reinforce the message that there are common techniques that are often effective.
Once students have worked through some initial proofs, it is good to anticipate the frustrations and barriers that they will face as they attempt longer and harder problems. The NOVA (1997) video The Proof, which details Andrew Wiles's work on Fermat's Last Theorem, provides a motivational lesson that also tells students about one of the great mathematical accomplishments of the past century. Although Wiles's proof is intimidating in its inaccessibility, his personal struggle and emotional attachment to the task are inspiring. After watching the video about his seven-year journey, students have a greater appreciation for the role that persistence plays in successful endeavors. The article Ten lessons from the proof of Fermat's Last Theorem (Kahan) can be used as a teacher's guide for a follow-up discussion. See Student and Teacher Affect for a further discussion of motivational considerations.
Note: Some of these problems may ask you to prove claims that are not true. Be sure to approach each with some skepticism; test the claims and make sure that a proof attempt is called for. If you disprove a statement, try to salvage some part of the claim by changing a condition.
Bogomolny, Alexander (2001). Pythagorean triples. Cut-the-knot. Available online at http://www.cut-the-knot.com/pythagoras/pythTriple.html.
Bogomolny, Alexander (2001). Infinitude of primes. Cut-the-knot. Available online at http://www.cut-the-knot.com/proofs/primes.html.
Bogomolny, Alexander (2001). Non-Euclidean geometries, models. Cut-the-knot. Available online at http://www.cut-the-knot.com/triangle/pythpar/Model.html.
Bogomolny, Alexander (2001). Integer iterations on a circle. Cut-the-knot. Available online at http://www.cut-the-knot.com/SimpleGames/IntIter.html
Bogomolny, Alexander (2001). Proof without words. Cut-the-knot. Available online at http://www.cut-the-knot.com/ctk/pww.shtml.
Bogomolny, Alexander (2001). Proofs in Mathematics. Cut-the-knot. Available online at http://www.cut-the-knot.com/proofs/index.html
Berlinghoff, William, Clifford Slover, & Eric Wood (1998). Math connections. Armonk, N.Y.: It's About Time, Inc.
Brown, Stephen & Walters, Marion (1983). The art of problem posing. Hillsdale, NJ: Lawrence Erlbaum Associates.
Carmony, Lowell (1979, January). Odd pie fights. Mathematics Teacher, 61-64.
Chaitin, G. J. The Berry paradox. Available online at http://www.cs.auckland.ac.nz/CDMTCS/chaitin/unm2.html.
Davis, Philip and Reuben Hersh (1981). The mathematical experience. Boston, Massachusetts: Houghton Mifflin Company.
Deligne, Pierre (1977, 305(3)). Lecture notes in mathematics, 584. Springer Verlag.
Erickson, Martin & Joe Flowers (1999). Principles of mathematical problem solving. New Jersey, USA: Prentice Hall.
Flores, Alfinio (2000, March). Mathematics without words. The College Mathematics Journal, 106.
Focus Issue on The Concept of Proof (1998, November). Mathematics Teacher. Available from NCTM at http://poweredge.nctm.org/nctm/itempg.icl?secid=1&subsecid=12&orderidentifier=
Gardner, Martin (1986). Knotted doughnuts and other mathematical recreations. New York, N.Y.: W. H. Freeman and Company, 109-122.
Hoffman, Paul (1998). The man who loved only numbers. New York, New York: Hyperion.
Horgan, John (1996, April). The not so enormous theorem. Scientific American.
Joyce, Helen (2001, March). Chomp. Available on-line at http://plus.maths.org/issue14/xfile/.
Kahan, Jeremy (1999, September). Ten lessons from the proof of Fermat's Last Theorem. Mathematics Teacher, 530-531.
Keeley, Robert J. (1986, October). Chomp: an introduction to definitions, conjectures, and theorems. Mathematics Teacher, 516-519.
Knott, Ron (2000) Easier Fibonacci puzzles. Available online at http://www.mcs.surrey.ac.uk/Personal/R.Knott/Fibonacci/fibpuzzles.html.
Lee, Carl (2002). Axiomatic Systems. Available for download at ../../../handbook/teacher/Proof/AxiomaticSystems.pdf.
MacTutor History of Mathematics Archive (1996). The four colour theorem. Available online at http://www-history.mcs.st-andrews.ac.uk/history/HistTopics/The_four_colour_theorem.html.
MacTutor History of Mathematics Archive (1996). Fermats last theorem. Available online at http://www-groups.dcs.st-and.ac.uk/~history/HistTopics/Fermat's_last_theorem.html.
MegaMathematics (2002). Algorithms and ice cream for all. Available online at http://www.cs.uidaho.edu/~casey931/mega-math/workbk/dom/dom.html.
Nelsen, Roger (1993). Proofs without words. Washington, D.C.: The Mathematical Association of America.
NOVA (1997). The proof. WGBH/PBS. See http://www.pbs.org/wgbh/nova/proof/ for more information and http://www.pbs.org/wgbh/shop/novavidedu06detect.html#proof to order.
Peterson, Ivars (1996, December 23). Prime theorem of the century. MAA online: MathTrek. Available online at http://www.maa.org/mathland/mathland_12_23.html.
Peterson, Ivars (1998, February 23). The limits of mathematics. MAA online: MathTrek. Available online at http://www.maa.org/mathland/mathtrek_2_23_98.html.
Platonic Realms Interactive Mathematics Encyclopedia (PRIME). Gödel's theorems. Available online at http://www.mathacademy.com/pr/prime/articles/godel/index.asp.
Schoenfeld, Alan (1992). Learning to think mathematically: problem solving, metacognition, and sense-making in mathematics. Available online at http://www-gse.berkeley.edu/faculty/aschoenfeld/LearningToThink/Learning_to_think_Math06.html#Heading18. Read from this point in the essay to the end.
Schoenfeld, Alan (1994, 13(1)). Do we need proof in school mathematics? in What do we know about mathematics curricula? Journal of Mathematical Behavior, 55-80. Available online at http://www-gse.berkeley.edu/Faculty/aschoenfeld/WhatDoWeKnow/What_Do_we_know 02.html#Heading4.
Stewart, Ian (1998, October). Mathematical recreations: playing with chocolate. Scientific American, 122-124.
Weisstein, Eric (2002). Perfect Number. Eric Weisstein's World of Mathematics. Available online at http://mathworld.wolfram.com/PerfectNumber.html.
Weisstein, Eric (2002). Pythagorean Theorem. Eric Weisstein's World of Mathematics. Available online at http://mathworld.wolfram.com/PythagoreanTheorem.html.
Weyl, Hermann (1932). Unterrichtsblätter für Mathematik und Naturwissenschaften, 38, 177-188. Translation by Abe Shenitzer (1995, August-September) appeared in The American Mathematical Monthly, 102:7, 646. Quote available online at http://www-groups.dcs.st-and.ac.uk/~history/Quotations/Weyl.html.
Winicki-Landman, Greisy (1998, November). On proofs and their performance as works of art. Mathematics Teacher, 722-725.
Zeilberger, Doron (2002). Three-rowed Chomp. Available online at http://www.math.rutgers.edu/~zeilberg/mamarim/mamarimPDF/chomp.pdf.
Zeitz, Paul (1999). The art and craft of problem solving. New York: John Wiley and Sons.
Translations of mathematical formulas for web display were created by tex4ht. | http://www2.edc.org/makingmath/handbook/teacher/proof/proof.asp | 13 |
266 | From Uncyclopedia, the content-free encyclopedia
Please note that as Proof has been shot dead, all information below has been rendered obsolete.
Methods of proof
There are several methods of proof which are commonly used:
Proof by Revenge
"2+2=5" "no it doesn't" "REVENGE!"
Proof by Adding a Constant
2 = 1 if we add a constant C such that 2 = 1 + C.
Multiplicative Identity Additive Identity
Multiply both expressions by zero, e.g.,
- 1 = 2
- 1 × 0 = 2 × 0
- 0 = 0
Since the final statement is true, so is the first.
See also Proof by Pedantry.
Proof by Altering (or Destroying) the Original Premise (or Evidence)
- A = 1 and B = 1 and A + B = 3
- [Long list of confusing statements]
- [Somewhere down the line, stating B = 2 and covering up the previous definition]
- [Long list of other statements]
- A + B = 3
Works best over a long period of time.
Proof by Analogy
Draw a poor analogy. Say you have two cows. But one is a bull. After the proper gestation period, 1 + 1 = 3.
Proof by Anti-proof
If there is proof that has yet to be accounted for in your opponent's argument, then it is wholly discreditable and thus proof of your own concept. It also works if you claim to be unable to comprehend their proof. Example:
- I can't see how a flagellum can evolve by itself, therefore the theory of evolution is incorrect, therefore someone must have put them together, therefore convert now!
Note: This generally works equally well in both directions:
- I can't see how someone could have put a flagellum together, therefore the theory of Creation is incorrect, therefore it must have evolved by itself, therefore Let's Party!
Proof by August
Since August is such a good time of year, no one will disagree with a proof published then, and therefore it is true. Of course, the converse is also true, i.e., January is crap, and all the logic in the world will not prove your statement then.
Proof by Assumption
An offshoot of Proof by Induction, one may assume the result is true. Therefore, it is true.
Proof by Axiom
Assert an axiom A such that the proposition P you are trying to prove is true. Thus, any statement S contradicting P is false, so P is true. Q.E.D.
Proof by Belief
"I believe assertion A to hold, therefore it does. Q.E.D."
Proof by Bijection
This is a method of proof made famous by P. T. Johnstone. Start with a completely irrelevant fact. Construct a bijection from the irrelevant fact to the thing you are trying to prove. Talk about rings for a few minutes, but make sure you keep their meaning a secret. When the audience are all confused, write Q.E.D. and call it trivial. Example:
- To prove the Chinese Remainder Theorem, observe that if p divides q, we have a well-defined function. Z/qZ → Z/qZ is a bijection. Since f is a homomorphism of rings, φ(mn) = φ(m) × φ(n) whenever (n, m) = 1. Using IEP on the hyperfield, there is a unique integer x, modulo mn, satisfying x = a (mod m) and x = b (mod n). Thus, Q.E.D., and we can see it is trivial.
Proof by B.O.
This method is a fruitful attack on a wide range of problems: don't have a shower for several weeks and play lots of sports.
Proof by Calling the Other Guy an Idiot
"I used to respect his views, but by stating this opinion, he has now proven himself to be an idiot." Q.E.D.
Proof by Arbitration
Oftentimes in mathematics, it is useful to create arbitrary "Where in the hell did that come from?" type theorems which are designed to make the reader become so confused that the proof passes as sound reasoning.
Proof by Faulty Logic
Math professors and logicians sometimes rely on their own intuition to prove important mathematical theorems. The following is an especially important theorem which opened up the multi-disciplinary field of YouTube.
- Let k and l be the two infinities: namely, the negative infinity and the positive infinity. Then, there exists a real number c, such that k and l cease to exist. Such a c is zero. We conclude that the zero infinity exists and is in between the positive and negative infinities. This theorem opens up many important ideas. For example, primitive logic would dictate that the square root of infinity, r, is a number less than r.
"I proved, therefore I am proof." – Isaac Newton, 1678, American Idol.
Proof by Canada
Like other proofs, but replace Q.E.D. with Z.E.D. Best when submitted with a bowl of Kraft Dinner.
Proof by Cantona
Conduct the proof in a confident manner, as if you are convinced that what you are saying is correct, but which is absolute bollocks – and try to involve seagulls in some way. Example:
- If sin x < x … for all x > 0 … and when … [pause to have a sip of water] … the fisherman … throws sardines off the back of the trawler … and x > 0 … then … you can expect the seagulls to follow … and so sin x = 0 for all x.
Proof by Cases
AN ARGUMENT MADE IN CAPITAL LETTERS IS CORRECT. THEREFORE, SIMPLY RESTATE THE PROPOSITION YOU ARE TRYING TO PROVE IN CAPITAL LETTERS, AND IT WILL BE CORRECT!!!!!1 (USE TYPOS AND EXCLAMATION MARKS FOR ESPECIALLY DIFFICULT PROOFS)
Proof by Chocolate
By writing what seems to be an extensive proof and then smearing chocolate to stain the most crucial parts, the reader will assume that the proof is correct so as not to appear to be a fool.
Proof by Complexity
Remember, something is not true when its proof has been verified, it is true as long as it has not been disproved. For this reason, the best strategy is to limit as much as possible the number of people with the needed competence to understand your proof.
Be sure to include very complex elements in your proof. Infinite numbers of dimensions, hypercomplex numbers, indeterminate forms, graphs, references to very old books/movies/bands that almost nobody knows, quantum physics, modal logic, and chess opening theory are to be included in the thesis. Make sentences in Latin, Ancient Greek, Sanskrit, Ithkuil, and invent languages.
Refer to the Cumbersome Notation to make it more complex.
Again, the goal: nobody must understand, and this way, nobody can disprove you.
Proof by (a Broad) Consensus
If enough people believe something to be true, then it must be so. For even more emphatic proof, one can use the similar Proof by a Broad Consensus.
Proof by Contradiction (reductio ad absurdum)
- Assume the opposite: "not p".
- Bla, bla, bla …
- … which leads to "not p" being false, which contradicts assumption (1). Whatever you say in (2) and (3), (4) will make p true.
Useful to support other proofs.
Proof by Coolness (ad coolidum)
Let C be the coolness function
- C(2 + 2 = 4) < C(2 + 2 = 5)
- Therefore, 2 + 2 = 5
Let ACB be A claims B.
- C(Y) > C(X)
- Therefore Q unless there is Z, C(Z) > C(Y) and (ZC¬Q)
Let H be the previous demonstration, N nothingness, and M me.
- C(M) < C(N)
- Therefore ¬H
and all this is false since nothingness is cooler.
Let J be previous counter-argument and K be HVJ.
- Substitute K for H in J
- Therefore ¬K
- ¬K implies ¬J and ¬H
- ¬J implies H
- Therefore H and ¬H
- Therefore non-contradiction is false ad coolidum
- Therefore C(Aristotle) < C(M)
Proof by Cumbersome Notation
Best done with access to at least four alphabets and special symbols. Matrices, Tensors, Lie algebra and the Kronecker-Weyl Theorem are also well-suited.
Proof by Default
Proof by Definition
Define something as such that the problem falls into grade one math, e.g., "I am, therefore I am".
Proof by Delegation
"The general result is left as an exercise to the reader."
Proof by Dessert
The proof is in the pudding.
Philosophers consider this to be the tastiest possible proof.
Proof by Diagram
Reducing problems to diagrams with lots of arrows. Particularly common in category theory.
Proof by Disability
Proof conducted by the principle of not drawing attention to somebody's disability – like a speech impediment for oral proofs, or a severed tendon in the arm for written proofs.
Proof by Disgust
State two alternatives and explain how one is disgusting. The other is therefore obviously right and true.
- Do we come from God or from monkeys? Monkeys are disgusting. Ergo, God made Adam.
- Is euthanasia right or wrong? Dogs get euthanasia. Dogs smell and lick their butts. Ergo, euthanasia is wrong.
- Is cannibalism right or wrong? Eeew, blood! Ergo, cannibalism is wrong.
Proof by Dissent
If there is a consensus on a topic, and you disagree, then you are right because people are stupid. See global warming sceptics, creationists, tobacco companies, etc., for applications of this proof.
Proof by Distraction
Be sure to provide some distraction while you go on with your proof, e.g., some third-party announces, a fire alarm (a fake one would do, too) or the end of the universe. You could also exclaim, "Look! A distraction!", meanwhile pointing towards the nearest brick wall. Be sure to wipe the blackboard before the distraction is presumably over so you have the whole board for your final conclusion.
Don't be intimidated if the distraction takes longer than planned – simply head over to the next proof.
An example is given below.
- Look behind you!
- … and proves the existence of an answer for 2 + 2.
- Look! A three-headed monkey over there!
- … leaves 5 as the only result of 2 + 2.
- Therefore 2 + 2 = 5. Q.E.D.
This is related to the classic Proof by "Look, a naked woman!"
Proof by Elephant
Q: For all rectangles, prove diagonals are bisectors. A: None: there is an elephant in the way!
Proof by Engineer's Induction
- See also: Proof by Induction.
Suppose P(n) is a statement.
- Prove true for P(1).
- Prove true for P(2).
- Prove true for P(3).
- Therefore P(n) is true for all n.
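For anyone tempted to take this method seriously, here is a quick Python illustration of why checking the first few cases proves nothing; the polynomial n*n + n + 41 is a standard counterexample and is not part of the original article.

def is_prime(k):
    return k >= 2 and all(k % d for d in range(2, int(k ** 0.5) + 1))

# "Theorem" (by engineer's induction): n*n + n + 41 is prime for every n.
claim = lambda n: is_prime(n * n + n + 41)

print(all(claim(n) for n in (1, 2, 3)))  # True: P(1), P(2), P(3) all hold
print(claim(40))                         # False: 40*40 + 40 + 41 = 41*41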
Proof by Exhaustion
This method of proof requires all possible values of the expression to be evaluated and due to the infinite length of the proof, can be used to prove almost anything since the reader will either get bored whilst reading and skip to the conclusion or get hopelessly lost and thus convinced that the proof is concrete.
Proof by Eyeballing
Quantities that look similar are indeed the same. Often drawing random pictures will aid with this process.
Corollary: If it looks like a duck and acts like a duck, then it must be a duck.
Proof by Flutterby Effect
Proofs to the contrary that you can (and do) vigorously and emphatically ignore therefore you don't know about, don't exist. Ergo, they can't and don't apply.
Corollary: If it looks like a duck, acts like a duck and quacks like a duck, but I didn't see it (and hey, did you know my Mom ruptured my eardrums), then it's maybe … an aardvark?
Proof by Gun
A special case of Proof by Intimidation: "I have a gun and you don't. I'm right, you're wrong. Repeat after me: Q.E.D."
Proof by Global Warming
If it doesn't contribute to Global Warming, it is null and void.
Proof by God
Also a special case of proof. "Don't question my religion, or you are supremely insensitive and God will smite you." Similar to Proof by Religion, but sanctioned by George W. Bush.
Proof by Hand Waving
- See main article: Hand waving.
Commonly used in calculus, hand waving dispenses with the pointless notion that a proof need be rigorous.
Proof by Hitler Analogy
The opposite of Proof by Wikipedia. If Hitler said,
- "I like cute kittens."
then – automatically – cute kittens are evil, and liking them proves that you caused everything that's wrong in the world for the last 50 years.
Simple Proof by Hubris
I exist, therefore I am correct.
Proof by Hypnosis
Try to relate your proof to simple harmonic motion in some way and then convince people to look at a swinging pendulum.
Proof by Imitation
Make a ridiculous imitation of your opponent in a debate. Arguments cannot be seriously considered when the one who proposes them was laughed at a moment before.
Make sure to use puppets and high-pitched voices, and also have the puppet repeat "I am a X", replacing X with any minority that the audience might disregard: gay, lawyer, atheist, creationist, zoophile, paedophile … the choice is yours!
Proof by Immediate Danger
Having a fluorescent green gas gently seep into the room through the air vents will probably be beneficial to your proof.
Proof by Impartiality
If you, Y, disagree with X on issue I, you can invariably prove yourself right by the following procedure:
- Get on TV with X.
- Open with an ad hominem attack on X and then follow up by saying that God hates X for X's position on I.
- When X attempts to talk, interrupt him very loudly, and turn down his microphone.
- Remind your audience that you are impartial where I is concerned, while X is an unwitting servant of Conspiracy Z, e.g., the Liberal Media, and that therefore X is wrong. Then also remind your audience that I is binary, and since your position on I is different from X's, it must be right.
- That sometimes fails to prove the result on the first attempt, but by repeatedly attacking figures X1, X2, …, Xn – and by proving furthermore (possibly using Proof by Engineer's Induction) that Xn is wrong implies Xn+1 is wrong, and by demonstrating that you cannot be an Xi because your stance on I differs due to a change in position i, demonstrating that while the set of Xi's is countable, the set containing you is uncountable by the diagonal argument, and from there one can apply Proof by Consensus, as your set is infinitely bigger – you can prove yourself right.
A noted master of the technique is Bill O'Reilly.
Proof by Induction
Proof by Induction claims that
where is the number of pages used to contain the proof and is the time required to prove something, relative to the trivial case.
For the common, but special case of generalising the proof,
where is the number of pages used to contain the proof, is the number of things which are being proved and is the time required to prove something, relative to the trivial case.
The actual method of constructing the proof is irrelevant.
Proof by Intimidation
One of the principal methods used to prove mathematical statements. Remember, even if your achievements have nothing to do with the topic, you're still right. Also, if you spell even slightly better, make less typos, or use better grammar, you've got even more proof. The exact statement of proof by intimidation is given below.
Suppose a mathematician F is at a position n in the following hierarchy:
- Fields Medal winner
- Tenured Professor
- Non-tenured professor
- Graduate Student
- Undergraduate Student
If a second mathematician G is at any position p such that p < n, then any statement S given to F by G is true.
Alternatively: Theorem 3.6. All zeros of the Riemann Zeta function lie on the critical line (have a real component of 1/2).
Proof: "… trivial …"
Proof by Irrelevant References
A proof that is backed up by citations that may or may not contain a proof of the assertion. This includes references to documents that don't exist. (Cf. Schott, Wiggenmeyer & Pratt, Annals of Veterinary Medicine and Modern Domestic Plumbing, vol. 164, Jul 1983.)
Proof by Jack Bauer
If Jack Bauer says something is true, then it is. No ifs, ands, or buts about it. End of discussion.
This is why, for example, torture is good.
Proof by Lecturer
It's true because my lecturer said it was true. QED.
Proof by Liar
If a liar says that he is a liar, he lies, because liars always lie, so he is not a liar.
Simple, ain't it?
Proof by Kim G. S. Øyhus' Inference
- and .
- Ergo, .
- Therefore I'm right and you're wrong.
Proof by LSD
Wow! That is sooo real, man!
Proof by Margin Too Small
"I have discovered a truly marvelous proof of this, which this margin is too narrow to contain."
Proof by Mathematical Interpretive Dance
Proof by Misunderstanding
"2 is equal to 3 for sufficiently large values of 2."
Proof by Mockery
Let the other state his claim in detail, wait while he lists and explains all his arguments and, at any time, explode in laughter and ask, "No, are you serious? That must be a joke. You can't really think that, do you?" Then you leave the debate in laughter and shout, "If you all want to listen to this parody of argument, I shan't prevent you!"
Proof by Narcotics Abuse
Spike the drinks/food of all people attending with psychoactive or hallucinogenic chemicals.
Proof by Obama
Yes, we can.
Proof by Obfuscation
A long, plotless sequence of true and/or meaningless syntactically related statements.
Proof by Omission
Make it easier on yourself by leaving it up to the reader. After all, if you can figure it out, surely they can. Examples:
- The reader may easily supply the details.
- The other 253 cases are analogous.
- The proof is left as an exercise for the reader.
- The proof is left as an exercise for the marker (guaranteed to work in an exam).
Proof by Ostention
- 2 + 2 = 5
Proof by Outside the Scope
All the non-trivial parts of the proof are left out, stating that proving them is outside the scope of the book.
Proof by Overwhelming Errors
A proof in which there are so many errors that the reader can't tell whether the conclusion is proved or not, and so is forced to accept the claims of the writer. Most elegant when the number of errors is even, thus leaving open the possibility that all the errors exactly cancel each other out.
Proof by Ødemarksism
- See also: Proof by Consensus.
- The majority thinks P.
- Therefore P is true (and dissenters should be silenced in order to reduce conflict from diversity).
The silencing of dissenters can be made easier with convincing arguments.
Proof by Penis Size
My dick's much bigger than yours, so I'm right.
Corollary: You don't have a penis, so I'm right.
Proof by Pornography
Include pornographic pictures or videos in the proof – preferably playing a porno flick exactly to the side of where you are conducting the proof. Works best if you pretend to be oblivious to the porn yourself and act as if nothing is unusual.
Proof by Process of Elimination
so 2 + 2 = 5
Proof by Promiscuity
I get laid much more than you, so I'm right.
Proof by Proving
Well proven is the proof that all proofs need not be unproven in order to be proven to be proofs. But where is the real proof of this? A proof, after all, cannot be a good proof until it has been proven. Right?
Proof by Question
If you are asking me to prove something, it must be true. So why bother asking?
Proof by Realization
A form of proof where something is proved by realizing that it is true. Therefore, the proof holds.
Proof by Reduction
Show that the theorem you are attempting to prove is equivalent to the trivial problem of not getting laid. Particularly useful in axiomatic set theory.
Proof by Reduction to the Wrong Problem
Why prove this theorem when you can show it's identical to some other, already proven problem? Plus a few additional steps, of course …
- Example: "To prove the four colour theorem, we reduce it to the halting problem."
Proof by Religion
Related to Proof by Belief, this method of attacking a problem involves the principle of mathematical freedom of expression by asserting that the proof is part of your religion, and then accusing all dissenters of religiously persecuting you, due to their stupidity of not accepting your obviously correct and logical proof. See also Proof by God.
Proof by Repetition
AKA the Socratic method.
If you say something is true enough times, then it is true. Repeatedly asserting something to be true makes it so. To repeat many times and at length the veracity of a given proposition adds to the general conviction that such a proposition might come to be truthful. Also, if you say something is true enough times, then it is true. Let n be the times any given proposition p was stated, preferably in different forms and ways, but not necessarily so. Then it comes to pass that the higher n comes to be, the more truth-content t it possesses. Recency bias and fear of ostracism will make people believe almost anything that is said enough times. If something has been said to be true again and again, it must definitely be true, beyond any shadow of doubt. The very fact that something is stated endlessly is enough for any reasonable person to believe it. And, finally, if you say something is true enough times, then it is true. Q.E.D.
Exactly how many times one needs to repeat the statement for it to be true is debated widely in academic circles. Generally, the point is reached when those around die through boredom.
- E.g., let A = B. Since A = B, and B = A, and A = B, and A = B, and A = B, and B = A, and A = B, and A = B, then A = B.
Proof by Restriction
If you prove your claim for one case, and make sure to restrict yourself to this one, you thus avoid any case that could compromise you. You can hope that people won't notice the omission.
Example: Prove the four-color theorem.
- Take a map of only one region. Only 1 color is needed to color it, and 1 ≤ 4. End of the proof.
If someone questions the completeness of the proof, other methods of proof can be used.
Proof by the Rovdistic Principle
- See also: Proof by Belief.
- I like to think that 2 + 2 = 5.
- Therefore, 2 + 2 = 5. Q.E.D.
Proof by Russian Reversal
In Soviet Russia, proof gives YOU!
Proof by Self-evidence
Claim something and tell how self-evident it is: you are right!
Proof by Semantics
Proof by semantics is simple to perform and best demonstrated by example. Using this method, I will prove the famous Riemann Hypothesis as follows:
We seek to prove that the Riemann function defined off of the critical line has no non-trivial zeroes. It is known that all non-trivial zeroes lie in the region with 0 < Re(z) < 1, so we need not concern ourselves with numbers with negative real parts. The Riemann zeta function is defined for Re(z) > 1 by sum over k of 1/k^z, which can be written 1 + sum over k from 2 of 1/k^z.
Consider the group (C, +). There is a trivial action theta from this group to itself by addition. Hence, by applying theta and using the fact that it is trivial, we can conclude that sum (1/k^z) over k from 2 is the identity element 0. Hence, the Riemann zeta function for Re(z) > 0 is simply the constant function 1. This has an obvious analytic continuation to Re(z) > 0 minus the critical line, namely that zeta(z) = 1 for all z in the domain.
Hence, zeta(z) is not equal to zero anywhere with Re(z) > 0 and Re(z) not equal to 1/2. Q.E.D.
Observe how we used the power of the homonyms "trivial" meaning ease of proof and "trivial" as in "the trivial action" to produce a brief and elegant proof of a classical mathematical problem.
Proof by Semitics
If it happened to the Jews and has been confirmed by the state of Israel, then it must be true.
Proof by Staring
- x² − 1 = (x + 1)(x − 1)
This becomes obvious after you stare at it for a while and the symbols all blur together.
Proof by Substitution
One may substitute any arbitrary value for any variable to prove something. Example:
- Assume that 2 = P.
- Substitute 3 for P.
- Therefore, 2 = 3. Q.E.D.
Proof by Superior IQ
- See also: Proof by Intimidation.
If your IQ is greater than that of the other person in the argument, you are right and what you say is proven.
Proof by Surprise
The proof is accomplished by stating completely random and arbitrary facts that have nothing to do with the topic at hand, and then using these facts to mysteriously conclude the proof by appealing to the Axiom of Surprise. The most known user of this style of proof is Walter Rudin in Principles of Mathematical Analysis. To quote an example:
Theorem: If and is real, then .
Proof: Let be an integer such that , . For , . Hence, . Since , . Q.E.D.
Walter Rudin, Principles of Mathematical Analysis, 3rd Edition, p. 58, middle.
Proof by Tarantino
Proof by Tension
Try to up the tension in the room by throwing in phrases like "I found my wife cheating on me … with another woman", or "I wonder if anybody would care if I slit my wrists tomorrow". The more awkward the situation you can make, the better.
Proof by TeX
Proof by … Then a Miracle Happens
Similar to Proof by Hand Waving, but without the need to wave your hand.
Example: Prove that .
- … then a miracle happens.
- . Q.E.D.
Proof by Triviality
The Proof of this theorem/result is obvious, and hence left as an exercise for the reader.
Proof by Uncyclopedia
Uncyclopedia is the greatest storehouse of human knowledge that has ever existed. Therefore, citing any fact, quote or reference from Uncyclopedia will let your readers know that you are no intellectual lightweight. Because of Uncyclopedia's steadfast adherence to accuracy, any proof with an Uncyclopedia reference will defeat any and all detractors.
(Hint: In any proof, limit your use of Oscar Wilde quotes to a maximum of five.)
Proof by Volume
If you shout something really, really loud often enough, it will be accepted as true.
Also, if the proof takes up several volumes, then any reader will get bored and go do something more fun, like math.
Proof by War
My guns are much bigger than yours, therefore I'm right.
See also Proof by Penis Size.
Proof by Wolfram Alpha
If Wolfram Alpha says it is true, then it is true.
Proof by Wikipedia
If the Wikipedia website states that something is true, it must be true. Therefore, to use this proof method, simply edit Wikipedia so that it says whatever you are trying to prove is true, then cite Wikipedia for your proof.
Proof by Yoda
If stated the proof by Yoda is, then true it must be.
Proof by Your Mom
You don't believe me? Well, your mom believed me last night!
Proof by Actually Trying and Doing It the Honest W– *gunshot*
Let this be a lesson to you do-gooders.
Proof by Reading the Symbols Carefully
Proving the contrapositive theorem: Let (p→q), (¬q→¬p) be true statements. (p→q) if (and only if) (¬q→¬p).
The symbols → may also mean shooting and ¬ may also represent a gun. The symbols would then be read as this:
If statement p shoots statement q, then statement q possibly did not shoot statement p at all, because statement q is a n00b player for pointing the gun in the opposite direction of statement p.
Also, if statement q didn't shoot statement p in the right direction in time (due to n00biness), p would then shoot q.
Oh! I get it now. The power of symbol reading made the theorem make sense. Therefore, the theorem is true.
The Ultimate Proof
However, despite all of these methods of proof, there is only one way of ensuring not only that you are 100% correct, but 1000 million per cent correct, and that everyone, no matter how strong or how argumentative they may be, will invariably agree with you. That, my friends, is being a girl. "I'm a girl, so there" is the line that all men dread, and no reply has been discovered which doesn't result in a slap/dumping/strop being thrown/brick being thrown/death being caused. Guys, when approached by such a form of proof, must destroy all evidence of it and hide all elements of its existence.
Some other terms one may come across when working with proofs:
A method of proof attempted at 3:00 A.M. the day a problem set is due, which generally seems to produce far better results at that time than when looked at in the light of day.
Q.E.D. stands for "Quebec's Electrical Distributor", commonly known as Hydro Quebec. It is commonly used to indicate where the author has given up on the proof and moved onto the next problem.
Can be substituted for the phrase "So there, you bastard!" when you need the extra bit of proof.
When handling or working with proofs, one should always wear protective gloves (preferably made of LaTeX).
The Burden of Proof
In recent years, proofs have gotten extremely heavy (see Proof by Volume, second entry). As a result, in some circles, the process of providing actual proof has been replaced by a practice known as the Burden of Proof. A piece of luggage of some kind is placed in a clear area, weighted down with lead weights approximating the hypothetical weight of the proof in question. The person who was asked to provide proof is then asked to lift this so-called "burden of proof". If he cannot, then he loses his balance and the burden of proof falls on him, which means that he has made the fatal mistake of daring to mention God on an Internet message board. | http://uncyclopedia.wikia.com/wiki/Proof?direction=prev&oldid=5603130 | 13 |
306 | IN APRIL 1985, the general secretaries of the communist and workers' parties of the Soviet Union, Bulgaria, Czechoslovakia, the German Democratic Republic (East Germany), Hungary, Poland, and Romania gathered in Warsaw to sign a protocol extending the effective term of the 1955 Treaty on Friendship, Cooperation, and Mutual Assistance, which originally established the Soviet-led political-military alliance in Eastern Europe. Their action ensured that the Warsaw Pact, as it is commonly known, will remain part of the international political and military landscape well into the future. The thirtieth anniversary of the Warsaw Pact and its renewal make a review of its origins and evolution particularly appropriate.
The Warsaw Pact alliance of the East European socialist states is the nominal counterweight to the North Atlantic Treaty Organization (NATO) on the European continent (see fig. A, this Appendix). Unlike NATO, founded in 1949, however, the Warsaw Pact does not have an independent organizational structure but functions as part of the Soviet Ministry of Defense. In fact, throughout the more than thirty years since it was founded, the Warsaw Pact has served as one of the Soviet Union's primary mechanisms for keeping its East European allies under its political and military control. The Soviet Union has used the Warsaw Pact to erect a facade of collective decision making and action around the reality of its political domination and military intervention in the internal affairs of its allies. At the same time, the Soviet Union also has used the Warsaw Pact to develop East European socialist armies and harness them to its military strategy.
Since its inception, the Warsaw Pact has reflected the changing pattern of Soviet-East European relations and manifested problems that affect all alliances. The Warsaw Pact has evolved into something other than the mechanism of control the Soviet Union originally intended it to be, and it has become increasingly less dominated by the Soviet Union since the 1960s. The organizational structure of the Warsaw Pact has grown and has provided a forum for greater intra-alliance debate, bargaining, and conflict between the Soviet Union and its allies over the issues of national independence, policy autonomy, and East European participation in alliance decision making. While the Warsaw Pact retains its internal function in Soviet-East European relations, its non-Soviet members have also developed sufficient military capabilities to become useful adjuncts of Soviet power against NATO in Europe.
Long before the establishment of the Warsaw Pact in 1955, the Soviet Union had molded the East European states into an alliance serving its security interests. While liberating Eastern Europe from Nazi Germany in World War II, the Red Army established political and military control over that region. The Soviet Union's size, economic weight, and sheer military power made its domination inevitable in this part of Europe, which historically had been dominated by great powers. The Soviet Union intended to use Eastern Europe as a buffer zone for the forward defense of its western borders and to keep threatening ideological influences at bay. Continued control of Eastern Europe became second only to defense of the homeland in the hierarchy of Soviet security priorities. The Soviet Union ensured its control of the region by turning the East European countries into subjugated allies.
During World War II, the Soviet Union began to build what Soviet sources refer to as history's first coalition of a progressive type when it organized or reorganized the armies of Eastern Europe to fight with the Red Army against the German Wehrmacht. The command and control procedures established in this military alliance would serve as the model on which the Soviet Union would build the Warsaw Pact after 1955. During the last years of the war, Soviet commanders and officers gained valuable experience in directing multinational forces that would later be put to use in the Warsaw Pact. The units formed between 1943 and 1945 also provided the foundation on which the Soviet Union could build postwar East European national armies.
The Red Army began to form, train, and arm Polish and Czechoslovak national units on Soviet territory in 1943. These units fought with the Red Army as it carried its offensive westward into German-occupied Poland and Czechoslovakia and then into Germany itself. By contrast, Bulgaria, Hungary, and Romania were wartime enemies of the Soviet Union. Although ruled by ostensibly fascist regimes, these countries allied with Nazi Germany mainly to recover territories lost through the peace settlements of World War I or seized by the Soviet Union under the terms of the 1939 Nazi- Soviet Non-Aggression Pact. However, by 1943 the Red Army had destroyed the Bulgarian, Hungarian, and Romanian forces fighting alongside the Wehrmacht. In 1944 it occupied Bulgaria, Hungary, and Romania, and shortly thereafter it began the process of transforming the remnants of their armies into allied units that could re-enter the war on the side of the Soviet Union. These allied units represented a mix of East European nationals fleeing Nazi occupation, deportees from Soviet-occupied areas, and enemy prisoners-of-war. Red Army political officers organized extensive indoctrination programs in the allied units under Soviet control and purged any politically suspect personnel. In all, the Soviet Union formed and armed more than 29 divisions and 37 brigades or regiments, which included more than 500,000 East European troops.
The allied national formations were directly subordinate to the headquarters of the Soviet Supreme High Command and its executive body, the Soviet General Staff. Although the Soviet Union directly commanded all allied units, the Supreme High Command included one representative from each of the East European forces. Lacking authority, these representatives simply relayed directives from the Supreme High Command and General Staff to the commanders of East European units. While all national units had so-called Soviet advisers, some Red Army officers openly discharged command and staff responsibilities in the East European armies. Even when commanded by East European officers, non-Soviet contingents participated in operations against the Wehrmacht only as part of Soviet fronts.
At the end of World War II, the Red Army occupied Bulgaria, Romania, Hungary, Poland, and eastern Germany, and Soviet front commanders headed the Allied Control Commission in each of these occupied countries. The Soviet Union gave its most important occupation forces a garrison status when it established the Northern Group of Forces (NGF) in 1947 and the Group of Soviet Forces in Germany (GSFG) in 1949. By 1949 the Soviet Union had concluded twenty-year bilateral treaties of friendship, cooperation, and mutual assistance with Bulgaria, Czechoslovakia, Hungary, Poland, and Romania. These treaties prohibited the East European regimes from entering into relations with states hostile to the Soviet Union, officially made these countries Soviet allies, and granted the Soviet Union rights to a continued military presence on their territory. The continued presence of Red Army forces guaranteed Soviet control of these countries. By contrast, the Soviet Union did not occupy either Albania or Yugoslavia during or after the war, and both countries remained outside direct Soviet control.
The circumstances of Soviet occupation facilitated the installation of communist-dominated governments called "people's democracies" in Eastern Europe. The indoctrinated East European troops that had fought with the Red Army to liberate their countries from Nazi occupation became politically useful to the Soviet Union as it established socialist states in Eastern Europe. The East European satellite regimes depended entirely on Soviet military power--and the continued deployment of 1 million Red Army soldiers--to stay in power. In return, the new East European political and military elites were obliged to respect Soviet political and security interests in the region.
While transforming the East European governments, the Soviet Union also continued the process of strengthening its political control over the East European armed forces and reshaping them along Soviet military lines after World War II. In Eastern Europe, the Soviet Union instituted a system of local communist party controls over the military based on the Soviet model. The East European communist parties thoroughly penetrated the East European military establishments to ensure their loyalty to the newly established political order. At the same time, the Soviet Union built these armies up to support local security and police forces against domestic disorder or other threats to communist party rule. Reliable East European military establishments could be counted on to support communist rule and, consequently, ensure continued Soviet control of Eastern Europe. In fact, in the late 1940s and the 1950s the Soviet Union was more concerned about cultivating and monitoring political loyalty in its East European military allies than increasing their utility as combat forces.
The postwar military establishments in Eastern Europe consisted of rival communist and noncommunist wartime antifascist resistance movements, national units established on Soviet territory during the war, prewar national military commands, and various other armed forces elements that spent the war years in exile or fighting in the West. Using the weight of the Red Army and its occupation authority, the Soviet Union purged or co-opted the noncommunist nationalists in the East European armies and thereby eliminated a group likely to oppose their restructuring along Soviet lines. In the case of communist forces, the Soviet Union trusted and promoted personnel who had served in the national units formed on its territory over native communists who had fought in the East European underground organizations independent of Soviet control.
After 1948 the East European armies adopted regular political education programs. This Soviet-style indoctrination was aimed primarily at raising communist party membership within the officer corps and building a military leadership cadre loyal to the socialist system and the national communist regime. Unquestionable political loyalty was more important than professional competence for advancement in the military hierarchy. Appropriate class origin became the principal criterion for admission to the East European officer corps and military schools. The Soviet Union and national communist party regimes transformed the East European military establishments into a vehicle of upward mobility for the working class and peasantry, who were unaccustomed to this kind of opportunity. Many of the officers in the new East European armed forces supported the new regimes because their newly acquired professional and social status hinged on the continuance of communist party rule.
The Soviet Union assigned trusted national communist party leaders to the most important East European military command positions despite their lack of military qualifications. The East European ministries of defense established political departments on the model of the Main Political Administration of the Soviet Army and Navy. Throughout the 1950s, prewar East European communists served as political officers, sharing command prerogatives with professional officers and evaluating their loyalty to the communist regime and compliance with its directives. Heavily armed paramilitary forces under the control of the East European internal security networks became powerful rivals for the national armies and checked their potentially great influence within the political system. The Soviet foreign intelligence apparatus also closely monitored the allied national military establishments.
Despite the great diversity of the new Soviet allies in terms of military history and traditions, the Sovietization of the East European national armies, which occurred between 1945 and the early 1950s, followed a consistent pattern in every case. The Soviet Union forced its East European allies to emulate Soviet Army ranks and uniforms and abandon all distinctive national military customs and practices; these allied armies used all Soviet-made weapons and equipment. The Soviet Union also insisted on the adoption of Soviet Army organization and tactics within the East European armies. Following the precedent established during World War II, the Soviet Union assigned Soviet officers to duty at all levels of the East European national command structures, from the general (main) staffs down to the regimental level, as its primary means of military control. Although officially termed advisers, these Soviet Army officers generally made the most important decisions within the East European armies. Direct Soviet control over the national military establishments was most complete in strategically important Poland. Soviet officers held approximately half the command positions in the postwar Polish Army despite the fact that few spoke Polish. Soviet officers and instructors staffed the national military academies, and the study of Russian became mandatory for East European army officers. The Soviet Union also accepted many of the most promising and eager East European officers into Soviet mid-career military institutions and academies for the advanced study essential to their promotion within the national armed forces command structures.
Despite Soviet efforts to develop political and military instruments of control and the continued presence of Soviet Army occupation forces, the Soviet Union still faced resistance to its domination of Eastern Europe. The Soviet troops in the GSFG acted unilaterally when the East German Garrisoned People's Police refused to crush the June 1953 workers' uprising in East Berlin. This action set a precedent for the Soviet use of force to retain control of its buffer zone in Eastern Europe.
In May 1955, the Soviet Union institutionalized its East European alliance system when it gathered together representatives from Albania, Bulgaria, Czechoslovakia, Hungary, Poland, and Romania in Warsaw to sign the multilateral Treaty on Friendship, Cooperation, and Mutual Assistance, which was identical to their existing bilateral treaties with the Soviet Union. Initially, the Soviets claimed that the Warsaw Pact was a direct response to the inclusion of the Federal Republic of Germany (West Germany) in NATO in 1955. The formation of a legally defined, multilateral alliance organization also reinforced the Soviet Union's claim to power status as the leader of the world socialist system, enhanced its prestige, and legitimized its presence and influence in Eastern Europe. However, as events inside the Soviet alliance developed, this initial external impetus for the formation of the Warsaw Pact lost its importance, and the Soviet Union found a formal alliance useful for other purposes. The Soviet Union created a structure for dealing with its East European allies more efficiently when it superimposed the multilateral Warsaw Pact on their existing bilateral treaty ties.
In the early 1950s, the United States and its Western allies carried out an agreement to re-arm West Germany and integrate it into NATO. This development threatened a vital Soviet foreign policy objective: the Soviet Union was intent on preventing the resurgence of a powerful German nation and particularly one allied with the Western powers. In an effort to derail the admission of West Germany to NATO, the Soviet representative at the 1954 Four-Power Foreign Ministers Conference in Berlin, Viacheslav Molotov, went so far as to propose the possibility of holding simultaneous elections in both German states that might lead to a re-unified, though neutral and unarmed, Germany. At the same time, the Soviet Union also proposed to the Western powers a general treaty on collective security in Europe and the dismantling of existing military blocs (meaning NATO). When this tactic failed and West Germany joined NATO on May 5, 1955, the Soviet Union declared that West Germany's membership in the Western alliance created a special threat to Soviet interests. The Soviet Union also declared that this development made its existing network of bilateral treaties an inadequate security guarantee and forced the East European socialist countries to "combine efforts in a strong political and military alliance." On May 14, 1955, the Soviet Union and its East European allies signed the Warsaw Pact.
While the Soviets had avoided formalizing their alliance to keep the onus of dividing Europe into opposing blocs on the West, the admission into NATO of the European state with the greatest potential military power forced the Soviet Union to take NATO into account for the first time. The Soviet Union also used West Germany's membership in NATO for propaganda purposes. The Soviets evoked the threat of a re-armed, "revanchist" West Germany seeking to reverse its defeat in World War II to remind the East European countries of their debt to the Soviet Union for their liberation, their need for Soviet protection against a recent enemy, and their corresponding duty to respect Soviet security interests and join the Warsaw Pact.
The Soviet Union had important reasons for institutionalizing the informal alliance system established through its bilateral treaties with the East European countries, concluded before the 1949 formation of NATO. As a formal organization, the Warsaw Pact provided the Soviet Union an official counterweight to NATO in East-West diplomacy. The Warsaw Pact gave the Soviet Union an equal status with the United States as the leader of an alliance of ostensibly independent nations supporting its foreign policy initiatives in the international arena. The multilateral Warsaw Pact was an improvement over strictly bilateral ties as a mechanism for transmitting Soviet defense and foreign policy directives to the East European allies. The Warsaw Pact also helped to legitimize the presence of Soviet troops--and overwhelming Soviet influence--in Eastern Europe.
The 1955 Treaty on Friendship, Cooperation, and Mutual Assistance between the Soviet Union and its East European allies, which established the Warsaw Pact, stated that relations among the signatories were based on total equality, mutual noninterference in internal affairs, and respect for national sovereignty and independence. It declared that the Warsaw Pact's function was collective self-defense of the member states against external aggression, as provided for in Article 51 of the United Nations Charter. The terms of the alliance specified the Political Consultative Committee (PCC) as the highest alliance organ. The founding document formed the Joint Command to organize the actual defense of the Warsaw Pact member states, declared that the national deputy ministers of defense would act as the deputies of the Warsaw Pact commander in chief, and established the Joint Staff, which included the representatives of the general (main) staffs of all its member states. The treaty set the Warsaw Pact's duration at twenty years with an automatic ten-year extension, provided that none of the member states renounced it before its expiration. The treaty also included a standing offer to disband simultaneously with other military alliances, i.e., NATO, contingent on East-West agreement about a general treaty on collective security in Europe. This provision indicated that the Soviet Union either did not expect that such an accord could be negotiated or did not consider its new multilateral alliance structure very important.
Until the early 1960s, the Soviet Union used the Warsaw Pact more as a tool in East-West diplomacy than as a functioning political-military alliance. Under the leadership of General Secretary Nikita Khrushchev, the Soviet Union sought to project a more flexible and less threatening image abroad and, toward this end, used the alliance's PCC to publicize its foreign policy initiatives and peace offensives, including frequent calls for the formation of an all-European collective security system to replace the continent's existing military alliances. The main result of Western acceptance of these disingenuous Soviet proposals would have been the removal of American troops from Europe, the weakening of ties among the Western states, and increasingly effective Soviet pressure on Western Europe. The Soviet Union also used the PCC to propose a nonaggression pact between NATO and the Warsaw Pact and the establishment of a nuclear-free zone in Central Europe.
In the first few years after 1955, little of the Warsaw Pact's activity was directed at building a multilateral military alliance. The Soviet Union concentrated primarily on making the Warsaw Pact a reliable instrument for controlling the East European allies. In fact, the putatively supranational military agencies of the Warsaw Pact were completely subordinate to a national agency of the Soviet Union. The Soviet General Staff in Moscow housed the alliance's Joint Command and Joint Staff and, through these organs, controlled the entire military apparatus of the Warsaw Pact as well as the allied armies. Although the highest ranking officers of the alliance were supposed to be selected through the mutual agreement of its member states, the Soviets unilaterally appointed a first deputy Soviet minister of defense and first deputy chief of the Soviet General Staff to serve as Warsaw Pact commander in chief and chief of staff, respectively. While these two Soviet officers ranked below the Soviet minister of defense, they still outranked the ministers of defense in the non-Soviet Warsaw Pact (NSWP) countries. The Soviet General Staff also posted senior colonel generals as resident representatives of the Warsaw Pact commander in chief in all East European capitals. Serving with the "agreement of their host countries," these successors to the wartime and postwar Soviet advisers for the allied armies equaled the East European ministers of defense in rank and provided a point of contact for the commander in chief, Joint Command, and Soviet General Staff inside the national military establishments. They directed and monitored the military training and political indoctrination programs of the national armies to synchronize their development with the Soviet Army. The strict Soviet control of the Warsaw Pact's high military command positions, established at this early stage, clearly indicated the subordination of the East European allies to the Soviet Union.
In 1956 the Warsaw Pact member states admitted East Germany to the Joint Command and sanctioned the transformation of its Garrisoned People's Police into a full-fledged army. But the Soviet Union took no steps to integrate the allied armies into a multinational force. The Soviet Union organized only one joint Warsaw Pact military exercise and made no attempt to make the alliance functional before 1961 except through the incorporation of East European territory into the Soviet national air defense structure.
In his 1956 secret speech at the Twentieth Congress of the Communist Party of the Soviet Union, General Secretary Khrushchev denounced the arbitrariness, excesses, and terror of the Joseph Stalin era. Khrushchev sought to achieve greater legitimacy for communist party rule on the basis of the party's ability to meet the material needs of the Soviet population. His de-Stalinization campaign quickly influenced developments in Eastern Europe. Khrushchev accepted the replacement of Stalinist Polish and Hungarian leaders with newly rehabilitated communist party figures, who were able to generate genuine popular support for their regimes by molding the socialist system to the specific historical, political, and economic conditions in their countries. Pursuing his more sophisticated approach in international affairs, Khrushchev sought to turn Soviet-controlled East European satellites into at least semisovereign countries and to make Soviet domination of the Warsaw Pact less obvious. The Warsaw Pact's formal structure served Khrushchev's purpose well, providing a facade of genuine consultation and of joint defense and foreign-policy decision making by the Soviet Union and the East European countries.
De-Stalinization in the Soviet Union made a superficial renationalization of the East European military establishments possible. The Soviet Union allowed the East European armies to restore their distinctive national practices and to re-emphasize professional military opinions over political considerations in most areas. Military training supplanted political indoctrination as the primary task of the East European military establishments. Most important, the Soviet Ministry of Defense recalled many Soviet Army officers and advisers from their positions within the East European armies. Although the Soviet Union still remained in control of its alliance system, these changes in the Warsaw Pact and the NSWP armies removed some of the most objectionable features of Sovietization.
In October 1956, the Polish and Hungarian communist parties lost control of the de-Stalinization process in their countries. The ensuing crises threatened the integrity of the entire Soviet alliance system in Eastern Europe. Although Khrushchev reacted quickly to rein in the East European allies and thwart this challenge to Soviet interests, his response in these two cases led to a significant change in the role of the Warsaw Pact as an element of Soviet security.
The October 1956 workers' riots in Poland defined the boundaries of national communism acceptable to the Soviet Union. The Polish United Workers Party found that the grievances that inspired the riots could be ameliorated without presenting a challenge to its monopoly on political power or its strict adherence to Soviet foreign policy and security interests. At first, when the Polish Army and police forces refused to suppress rioting workers, the Soviet Union prepared its forces in East Germany and Poland for an intervention to restore order in the country. However, Poland's new communist party leader, Wladyslaw Gomulka, and the Polish Army's top commanders indicated to Khrushchev and the other Soviet leaders that any Soviet intervention in the internal affairs of Poland would meet united, massive resistance. While insisting on Poland's right to exercise greater autonomy in domestic matters, Gomulka also pointed out that the Polish United Workers Party remained in firm control of the country and expressed his intention to continue to accept Soviet direction in external affairs. Gomulka even denounced the simultaneous revolution in Hungary and Hungary's attempt to leave the Warsaw Pact, which nearly ruptured the Soviet alliance system in Eastern Europe. Gomulka's position protected the Soviet Union's most vital interests and enabled Poland to reach a compromise with the Soviet leadership to defuse the crisis. Faced with Polish resistance to a possible invasion, the Soviet Union established its minimum requirements for the East European allies: upholding the leading role of the communist party in society and remaining a member of the Warsaw Pact. These two conditions ensured that Eastern Europe would remain a buffer zone for the Soviet Union.
By contrast, the full-scale revolution in Hungary, which began in late October with public demonstrations in support of the rioting Polish workers, openly flouted these Soviet stipulations. An initial domestic liberalization acceptable to the Soviet Union quickly focused on nonnegotiable issues like the communist party's exclusive hold on political power and genuine national independence. With overwhelming support from the Hungarian public, the new communist party leader, Imre Nagy, instituted multiparty elections. More important, Nagy withdrew Hungary from the Warsaw Pact and ended Hungary's alliance with the Soviet Union. The Soviet Army invaded with 200,000 troops, crushed the Hungarian Revolution, and brought Hungary back within limits tolerable to the Soviet Union. The five days of pitched battles left 25,000 Hungarians dead.
After 1956 the Soviet Union practically disbanded the Hungarian Army and reinstituted a program of political indoctrination in the units that remained. In May 1957, unable to rely on Hungarian forces to maintain order, the Soviet Union increased its troop level in Hungary from two to four divisions and forced Hungary to sign a status-of-forces agreement, placing the Soviet military presence on a solid and permanent legal basis. The Soviet Army forces stationed in Hungary officially became the Southern Group of Forces (SGF).
The events of 1956 in Poland and Hungary forced a Soviet re-evaluation of the reliability and roles of the NSWP countries in its alliance system. Before 1956 the Soviet leadership believed that the Stalinist policy of heavy political indoctrination and enforced Sovietization had transformed the national armies into reliable instruments of the Soviet Union. However, the events of 1956 showed that the East European armies were still likely to remain loyal to national causes. Only one Hungarian Army unit fought beside the Soviet troops that put down the 1956 revolution. In both the Polish and the Hungarian military establishments, a basic loyalty to the national communist party regime was mixed with a strong desire for greater national sovereignty. With East Germany still a recent enemy and Poland and Hungary now suspect allies, the Soviet Union turned to Czechoslovakia as its most reliable junior partner in the late 1950s and early 1960s. Czechoslovakia became the Soviet Union's first proxy in the Third World when its military pilots trained Egyptian personnel to fly Soviet-built MiG fighter aircraft. The Soviet Union thereby established a pattern of shifting the weight of its reliance from one East European country to another in response to various crises.
After the very foundation of the Soviet alliance system in Eastern Europe was shaken in 1956, Khrushchev sought to shore up the Soviet Union's position. Several developments made the task even more difficult. Between 1956 and 1962, the growing Soviet-Chinese dispute threatened to break up the Warsaw Pact. In 1962 Albania severed relations with the Soviet Union and terminated Soviet rights to the use of a valuable Mediterranean naval base on its Adriatic Sea coast. That same year, Albania ended its active participation in the Warsaw Pact and sided with the Chinese against the Soviets. Following the example of Yugoslavia in the late 1940s, Albania was able to resist Soviet pressures. Lacking a common border with Albania and having neither occupation troops nor overwhelming influence in that country, the Soviet Union was unable to use either persuasion or force to bring Albania back into the Warsaw Pact. Khrushchev used Warsaw Pact meetings to mobilize the political support of the Soviet Union's East European allies against China and Albania, as well as to reinforce its control of Eastern Europe and its claim to leadership of the communist world. More important, however, after Albania joined Yugoslavia and Hungary on the list of defections and near-defections from the Soviet alliance system in Eastern Europe, the Soviets began to turn the Warsaw Pact into a tool for militarily preventing defections in the future.
Although Khrushchev invoked the terms of the Warsaw Pact as a justification for the Soviet invasion of Hungary, the action was in no sense a cooperative allied effort. In the early 1960s, however, the Soviets took steps to turn the alliance's Joint Armed Forces (JAF) into a multinational invasion force. In the future, an appeal to the Warsaw Pact's collective self-defense provisions and the participation of allied forces would put a multilateral cover over unilateral Soviet interventions to keep errant member states in the alliance and their communist parties in power. The Soviet Union sought to legitimize its future policing actions by presenting them as the product of joint Warsaw Pact decisions. In this way, the Soviets hoped to deflect the kind of direct international criticism they were subjected to after the invasion of Hungary. However, such internal deployments were clearly contrary to the Warsaw Pact's rule of mutual noninterference in domestic affairs and conflicted with the alliance's declared purpose of collective self-defense against external aggression. To circumvent this semantic difficulty, the Soviets merely redefined external aggression to include any spontaneous anti-Soviet, anticommunist uprising in an allied state. Discarding domestic grievances as a possible cause, the Soviet Union declared that such outbreaks were a result of imperialist provocations and thereby constituted external aggression.
In the 1960s, the Soviet Union began to prepare the Warsaw Pact for its internal function of keeping the NSWP member states within the alliance. The Soviet Union took a series of steps to transform the Warsaw Pact into its intra-alliance intervention force. Although it had previously worked with the East European military establishments on a bilateral basis, the Soviet Union started to integrate the national armies under the Warsaw Pact framework. Marshal of the Soviet Union Andrei Grechko, who became commander in chief of the alliance in 1960, was uniquely qualified to serve in his post. During World War II, he commanded a Soviet Army group that included significant Polish and Czechoslovak units. Beginning in 1961, Grechko made joint military exercises between Soviet forces and the allied national armies the primary focus of Warsaw Pact military activities.
The Soviet Union arranged these joint exercises to prevent any NSWP member state from fully controlling its national army and to reduce the possibility that an East European regime could successfully resist Soviet domination and pursue independent policies. The Soviet-organized series of joint Warsaw Pact exercises was intended to prevent other East European national command authorities from following the example of Yugoslavia and Albania and adopting a territorial defense strategy. Developed in the Yugoslav and Albanian partisan struggles of World War II, territorial defense entailed a mobilization of the entire population for a prolonged guerrilla war against an intervening power. Under this strategy, the national communist party leadership would maintain its integrity to direct the resistance, seek international support for the country's defense, and keep an invader from replacing it with a more compliant regime. Territorial defense deterred invasions by threatening considerable opposition and enabled Yugoslavia and Albania to assert their independence from the Soviet Union. By training and integrating the remaining allied armies in joint exercises for operations only within a multinational force, however, the Soviet Union reduced the ability of the other East European countries to conduct military actions independent of Soviet control or to hinder a Soviet invasion, as Poland and Hungary had done in October 1956.
Large-scale multilateral exercises provided opportunities for Soviet officers to command troops of different nationalities and trained East European national units to take orders from the Warsaw Pact or Soviet command structure. Including Soviet troops stationed in the NSWP countries and the western military districts of the Soviet Union, joint maneuvers drilled Soviet Army forces for rapid, massive invasions of allied countries with the symbolic participation of NSWP units. Besides turning the allied armies into a multinational invasion force for controlling Eastern Europe, joint exercises also gave the Warsaw Pact armies greater capabilities for a coalition war against NATO. In the early 1960s, the Soviet Union modernized the NSWP armies with T-54 and T-55 tanks, self-propelled artillery, short-range ballistic missiles (SRBMs) equipped with conventional warheads, and MiG-21 and Su-7 ground attack fighter aircraft. The Soviet Union completed the mechanization of East European infantry divisions, and these new motorized rifle divisions trained with the Soviet Army for combined arms combat in a nuclear environment. These changes greatly increased the military value and effectiveness of the NSWP forces. In the early 1960s, the Soviet Union gave the East European armies their first real supporting role in its European theater operations.
Ironically, at the very time that the Soviet Union gave the Warsaw Pact more substance and modernized its force structure, resentment of Soviet political, organizational, and military domination of the Warsaw Pact and the NSWP armies increased. There was considerable East European dissatisfaction with a Warsaw Pact hierarchy that placed a subordinate of the Soviet minister of defense over the East European defense ministers. The Soviets considered the national ministers of defense, with the rank of colonel general, equivalent only to Soviet military district commanders. The strongest objections to the subordinate status of the NSWP countries inside the Warsaw Pact came from the Romanian Communist Party (Partidul Comunist Roman) and the military leadership under Nicolae Ceausescu.
The first indications of an independent Romanian course appeared while the Soviet Union was shoring up its hold on Eastern Europe through formal status-of-forces agreements with its allies. In 1958 Romania moved in the opposite direction by demanding the withdrawal from its territory of all Soviet troops, advisers, and the Soviet resident representative. To cover Soviet embarrassment, Khrushchev called this a unilateral troop reduction contributing to greater European security. Reducing its participation in Warsaw Pact activities considerably, Romania also refused to allow Soviet or NSWP forces, which could serve as Warsaw Pact intervention forces, to cross or conduct exercises on its territory.
In the 1960s Romania demanded basic changes in the Warsaw Pact structure to give the East European member states a greater role in alliance decision making. At several PCC meetings, Romania proposed that the leading Warsaw Pact command positions, including its commander in chief, rotate among the top military leaders of each country. In response, the Soviet Union tried again to mollify its allies and deemphasize its control of the alliance by moving the Warsaw Pact military organization out of the Soviet General Staff and making it a distinct entity, albeit still within the Soviet Ministry of Defense. The Soviet Union also placed some joint exercises held on NSWP territory under the nominal command of the host country's minister of defense. However, Soviet Army commanders still conducted almost two-thirds of all Warsaw Pact maneuvers, and these concessions proved too little and too late.
With the aim of ending Soviet domination and guarding against Soviet encroachments, Romania reasserted full national control over its armed forces and military policies in 1963 when, following the lead of Yugoslavia and Albania, it adopted a territorial defense strategy called "War of the Entire People." This nation-in-arms strategy entailed compulsory participation in civilian defense organizations, militias, and reserve and paramilitary forces, as well as rapid mobilization. The goal of Romania's strategy was to make any Soviet intervention prohibitively protracted and costly. Romania rejected any integration of Warsaw Pact forces that could undercut its ability to resist a Soviet invasion. For example, it ended its participation in Warsaw Pact joint exercises because multinational maneuvers required the Romanian Army to assign its forces to a non-Romanian command authority. Romania stopped sending its army officers to Soviet military schools for higher education. When the Romanian military establishment and its educational institutions assumed these functions, training focused strictly on Romania's independent military strategy. Romania also terminated its regular exchange of intelligence with the Soviet Union and directed counterintelligence efforts against possible Soviet penetration of the Romanian Army. These steps combined to make it a truly national military establishment responsive only to domestic political authorities and ensured that it would defend the country's sovereignty.
Romania's independent national defense policy helped to underwrite its assertion of greater policy autonomy. In the only Warsaw Pact body in which it continued to participate actively, the PCC, Romania found a forum to make its disagreements with the Soviet Union public, to frustrate Soviet plans, and to work to protect its new autonomy. The Soviet Union could not maintain the illusion of Warsaw Pact harmony when Romanian recalcitrance forced the PCC to adopt "coordinated" rather than unanimous decisions. Romania even held up PCC approval for several weeks of the appointment of Marshal of the Soviet Union Ivan Iakubovskii as Warsaw Pact commander in chief. However, Romania did not enjoy the relative geographical isolation from the Soviet Union that made Yugoslav and Albanian independence possible, and the Soviet Union would not tolerate another outright withdrawal from the Warsaw Pact.
In 1968 an acute crisis in the Soviet alliance system suddenly overwhelmed the slowly festering problem of Romania. The Prague Spring represented a more serious challenge than that posed by Romania because it occurred in an area more crucial to Soviet security. The domestic liberalization program of the Czechoslovak communist regime led by Alexander Dubcek threatened to generate popular demands for similar changes in the other East European countries and even parts of the Soviet Union. The Soviet Union believed it necessary to forestall the spread of liberalization and to assert its right to enforce the boundaries of ideological permissibility in Eastern Europe. However, domestic change in Czechoslovakia also began to affect defense and foreign policy, just as it had in Hungary in 1956, despite Dubcek's declared intention to keep Czechoslovakia within the Warsaw Pact. This worrying development was an important factor in the Soviet decision to invade Czechoslovakia in 1968--one that Western analysts have generally overlooked.
The new political climate of the Prague Spring and the lifting of press censorship brought into the open a longstanding debate within the Czechoslovak military establishment over the nature of the Warsaw Pact and Czechoslovakia's membership in it. In the mid-1960s, this debate centered on Soviet domination of the NSWP countries and of the Warsaw Pact and its command structure. Czechoslovakia had supported Romania in its opposition to Soviet calls for greater military integration and backed its demands for a genuine East European role in alliance decision making at PCC meetings.
In 1968 high-ranking Czechoslovak officers and staff members at the Klement Gottwald Military Academy began to discuss the need for a truly independent national defense strategy based on Czechoslovakia's national interests rather than the Soviet security interests that always prevailed in the Warsaw Pact. The fundamental premise of such an independent military policy was that an all-European collective security system, mutual nonaggression agreements among European states, the withdrawal of all troops from foreign countries, and a Central European nuclear-free zone could guarantee the country's security against outside aggression better than its membership in the Warsaw Pact. Although the Soviet Union had advocated these same arrangements in the 1950s, Czechoslovakia was clearly out of step with the Soviet line in 1968. Czechoslovakia threatened to complicate Soviet military strategy in Central Europe by becoming a neutral country dividing the Warsaw Pact into two parts along its front with NATO.
The concepts underpinning this developing Czechoslovak national defense strategy were formalized in the Gottwald Academy Memorandum circulated to the general (main) staffs of the other Warsaw Pact armies. The Gottwald Memorandum received a favorable response from Poland, Hungary, and Romania. In a televised news conference, at the height of the 1968 crisis, the chief of the Communist Party of Czechoslovakia's military department, Lieutenant General Vaclav Prchlik, denounced the Warsaw Pact as an unequal alliance and declared that the Czechoslovak Army was prepared to defend the country's sovereignty by force, if necessary. In the end, the Soviet Union intervened to prevent the Czechoslovak Army from fully developing the military capabilities to implement its newly announced independent defense strategy, which could have guaranteed national independence in the political and economic spheres. The August 1968 invasion preempted the possibility of the Czechoslovak Army's mounting a credible deterrent against future Soviet interventions. The Soviet decision in favor of intervention focused, in large measure, on ensuring its ability to maintain physical control of its wayward ally in the future.
In contrast to its rapid, bloody suppression of the 1956 Hungarian Revolution, the Soviet Union engaged in a lengthy campaign of military coercion against Czechoslovakia. In 1968 the Soviet Union conducted more joint Warsaw Pact exercises than in any other year since the maneuvers began in the early 1960s. The Soviet Union used these exercises to mask preparations for, and threaten, a Warsaw Pact invasion of Czechoslovakia that would occur unless Dubcek complied with Soviet demands and abandoned his political liberalization program. Massive Warsaw Pact rear services and communications exercises in July and August enabled the Soviet General Staff to execute its plan for the invasion without alerting Western governments. Under the pretext of exercises, Soviet and NSWP divisions were brought up to full strength, reservists were called up, and civilian transportation resources were requisitioned. The cover that these exercises provided allowed the Soviet Union to deploy forces along Czechoslovakia's borders in Poland and East Germany and to demonstrate to the Czechoslovak leadership its readiness to intervene.
On August 20, a force consisting of twenty-three Soviet Army divisions invaded Czechoslovakia. Token NSWP contingents, including one Hungarian, two East German, and two Polish divisions, along with one Bulgarian brigade, also took part in the invasion. In the wake of its invasion, the Soviet Union installed a more compliant communist party leadership and concluded a status-of-forces agreement with Czechoslovakia, which established a permanent Soviet presence in that country for the first time. Five Soviet Army divisions remained in Czechoslovakia to protect the country from future "imperialist threats." These troops became the Central Group of Forces (CGF) and added to Soviet strength directly bordering NATO. The Czechoslovak Army, having failed to oppose the Soviet intervention and defend the country's sovereignty, suffered a tremendous loss of prestige after 1968. At Soviet direction, reliable Czechoslovak authorities conducted a purge and political re-education campaign in the Czechoslovak Army and cut its size. After 1968 the Soviet Union closed and reorganized the Klement Gottwald Military Academy. With its one-time junior partner now proven unreliable, the Soviet Union turned to Poland as its principal East European ally.
The Warsaw Pact invasion of Czechoslovakia showed the hollowness of the Soviet alliance system in Eastern Europe in both its political and its military aspects. The Soviet Union did not convene the PCC to invoke the Warsaw Pact during the 1968 crisis because a formal PCC session would have revealed a deep rift in the Soviet alliance and given Czechoslovakia an international platform from which it could have defended its reform program. The Soviet Union did not allow NSWP officers to direct the Warsaw Pact exercises that preceded the intervention in Czechoslovakia, and Soviet Army officers commanded all multinational exercises during the crisis. While the intervention force was mobilized and deployed under the Warsaw Pact's commander in chief, the Soviet General Staff transferred full operational command of the invasion to the commander in chief of the Soviet ground forces, Army General I. G. Pavlovskii. Despite the participation of numerous East European army units, the invasion of Czechoslovakia was not in any sense a multilateral action. The Soviet invasion force carried out all important operations on Czechoslovakia's territory. Moreover, the Soviet Union quickly withdrew all NSWP troops from Czechoslovakia to forestall the possibility of their ideological contamination. NSWP participation served primarily to make the invasion appear to be a multinational operation and to deflect direct international criticism of the Soviet Union.
While the participation of four NSWP armies in the Soviet-led invasion of Czechoslovakia demonstrated considerable Warsaw Pact cohesion, the invasion also served to erode it. The invasion of Czechoslovakia proved that the Warsaw Pact's internal mission of keeping orthodox East European communist party regimes in power--and less orthodox ones in line--was more important than the external mission of defending its member states against external aggression. The Soviet Union was unable to conceal the fact that the alliance served as the ultimate mechanism for its control of Eastern Europe. Formulated in response to the crisis in Czechoslovakia, the so-called Brezhnev Doctrine declared that the East European countries had "limited" sovereignty to be exercised only as long as it did not damage the interests of the "socialist commonwealth" as a whole. Since the Soviet Union defined the interests of the "socialist commonwealth," it could force its NSWP allies to respect its overwhelming security interest in keeping Eastern Europe as its buffer zone.
The Romanian leader, Ceausescu, after refusing to contribute troops to the Soviet intervention force as the other East European countries had done, denounced the invasion of Czechoslovakia as a violation of international law and the Warsaw Pact's cardinal principle of mutual noninterference in internal affairs. Ceausescu insisted that collective self-defense against external aggression was the only valid mission of the Warsaw Pact. Albania also objected to the Soviet invasion and indicated its disapproval by withdrawing formally from the Warsaw Pact after six years of inactive membership.
The Warsaw Pact administers both the political and the military activities of the Soviet alliance system in Eastern Europe. A series of changes beginning in 1969 gave the Warsaw Pact the structure it retained through the mid-1980s.
The general (first) secretaries of the communist and workers' parties and heads of state of the Warsaw Pact member states meet in the PCC (see table A, this Appendix). The PCC provides a formal point of contact for the Soviet and East European leaders in addition to less formal bilateral meetings and visits. As the highest decision-making body of the Warsaw Pact, the PCC is charged with assessing international developments that could affect the security of the allied states and warrant the execution of the Warsaw Pact's collective self-defense provisions. In practice, however, the Soviet Union has been unwilling to rely on the PCC to perform this function, fearing that Hungary, Czechoslovakia, and Romania could use PCC meetings to oppose Soviet plans and policies. The PCC is also the main center for coordinating the foreign policy activities of the Warsaw Pact countries. Since the late 1960s, when several member states began to use the alliance structure to confront the Soviets and assert more independent foreign policies, the Soviet Union has had to bargain and negotiate to gain support for its foreign policy within Warsaw Pact councils.
In 1976 the PCC established the permanent Committee of Ministers of Foreign Affairs (CMFA) to regularize the previously ad hoc meetings of Soviet and East European representatives to the Warsaw Pact. Given the official task of preparing recommendations for and executing the decisions of the PCC, the CMFA and its permanent Joint Secretariat have provided the Soviet Union an additional point of contact to establish a consensus among its allies on contentious issues. Less formal meetings of the deputy ministers of foreign affairs of the Warsaw Pact member states represent another layer of alliance coordination. If alliance problems can be resolved at these working levels, they will not erupt into embarrassing disputes between the Soviet and East European leaders at PCC meetings.
The Warsaw Pact's military organization is larger and more active than the alliance's political bodies. Several different organizations are responsible for implementing PCC directives on defense matters and developing the capabilities of the national armies that constitute the JAF. However, the principal task of the military organizations is to link the East European armies to the Soviet armed forces. The alliance's military agencies coordinate the training and mobilization of East European national forces assigned to the Warsaw Pact. In turn, these forces can be deployed in accordance with Soviet military strategy against an NSWP country or NATO.
Soviet control of the Warsaw Pact as a military alliance is scarcely veiled. The Warsaw Pact's JAF has no command structure, logistics network, air defense system, or operations directorate separate from the Soviet Ministry of Defense. The 1968 invasion of Czechoslovakia demonstrated how easily control of the JAF could be transferred in wartime to the Soviet General Staff and Soviet field commanders. The dual roles of the Warsaw Pact commander in chief, who is a first deputy Soviet minister of defense, and the Warsaw Pact chief of staff, who is a first deputy chief of the Soviet General Staff, facilitate the transfer of Warsaw Pact forces to Soviet control. The subordination of the Warsaw Pact to the Soviet General Staff is also shown clearly in the Soviet military hierarchy. The chief of the Soviet General Staff is listed above the Warsaw Pact commander in chief in the Soviet order of precedence, even though both positions are filled by first deputy Soviet ministers of defense.
Ironically, the first innovations in the Warsaw Pact's structure since 1955 came after the invasion of Czechoslovakia, which had clearly underlined Soviet control of the alliance. At the 1969 PCC session in Budapest, the Soviet Union agreed to cosmetic alterations in the Warsaw Pact designed to address East European complaints about Soviet domination of the alliance. These changes included the establishment of the formal Committee of Ministers of Defense (CMD) and the Military Council as well as the addition of more non-Soviet officers to the Joint Command and the Joint Staff (see fig. B, this Appendix).
The CMD is the leading military body of the Warsaw Pact. In addition to the ministers of defense of the Warsaw Pact member states, the commander in chief and the chief of staff of the JAF are statutory members of the CMD. With its three seats on the CMD, the Soviet Union can exercise a working majority in the nine-member body with the votes of only two of its more loyal East European allies. The chairmanship of the CMD supposedly rotates among the ministers of defense. In any event, the brief annual meetings of the CMD severely limit its work to pro forma pronouncements or narrow guidelines for the Joint Command, Military Council, and Joint Staff to follow.
The Joint Command develops the overall training plan for joint Warsaw Pact exercises and for the national armies to promote the assimilation of Soviet equipment and tactics. Headed by the Warsaw Pact's commander in chief, the Joint Command is divided into distinct Soviet and East European tiers. The deputy commanders in chief include Soviet and East European officers. The Soviet officers serving as deputy commanders in chief are specifically responsible for coordinating the East European navies and air forces with the corresponding Soviet service branches. The East European deputy commanders in chief are the deputy ministers of defense of the NSWP countries. While providing formal NSWP representation in the Joint Command, the East European deputies also assist in the coordination of Soviet and non-Soviet forces. The commander in chief, deputy commanders in chief, and chief of staff of the JAF gather in the Military Council on a semiannual basis to plan and evaluate operational and combat training. With the Warsaw Pact's commander in chief acting as chairman, the sessions of the Military Council rotate among the capitals of the Warsaw Pact countries.
The Joint Staff is the only standing Warsaw Pact military body and the official executive organ of the CMD, commander in chief, and Military Council. As such, it performs the bulk of the Warsaw Pact's work in the military realm. Like the Joint Command, the Joint Staff has both Soviet and East European officers. These non-Soviet officers also serve as the principal link between the Soviet and East European armed forces. The Joint Staff organizes all joint exercises and arranges multilateral meetings and contacts of Warsaw Pact military personnel at all levels.
The PCC's establishment of official CMD meetings and the Military Council, together with the bifurcation of the Joint Command and Joint Staff, allowed for greater formal East European representation, as well as more working-level positions for senior non-Soviet officers, in the alliance. Increased NSWP input into the alliance decision-making process ameliorated East European dissatisfaction with continued Soviet dominance of the Warsaw Pact and even facilitated the work of the JAF. However, a larger NSWP role in the alliance did not reduce actual Soviet control of the Warsaw Pact command structure.
The 1969 PCC meeting also approved the formation of two more Warsaw Pact military bodies, the Military Scientific-Technical Council and the Technical Committee. These innovations in the Warsaw Pact structure represented a Soviet attempt to harness NSWP weapons and military equipment production, which had greatly increased during the 1960s. The Military Scientific-Technical Council assumed responsibility for directing armaments research and development within the Warsaw Pact, while the Technical Committee coordinated standardization. Comecon's Military-Industrial Commission supervised NSWP military production facilities (see Appendix B).
After 1969 the Soviet Union insisted on tighter Warsaw Pact military integration as the price for greater NSWP participation in alliance decision making. Under the pretext of directing Warsaw Pact programs and activities aimed at integration, officers from the Soviet Ministry of Defense penetrated the East European armed forces. Meetings between senior officers from the Soviet and East European main political administrations allowed the Soviets to monitor the loyalty of the national military establishments. Joint Warsaw Pact exercises afforded ample opportunity for the evaluation and selection of reliable East European officers for promotion to command positions in the field, the national military hierarchies, and the Joint Staff. Warsaw Pact military science conferences, including representatives from each NSWP general (main) staff, enabled the Soviets to check for signs that an East European ally was formulating a national strategy or developing military capabilities beyond Soviet control. In 1973 the deputy ministers of foreign affairs signed the "Convention on the Capacities, Privileges, and Immunities of the Staff and Other Administrative Organs of the Joint Armed Forces of the Warsaw Pact Member States," which established the principle of extraterritoriality for alliance agencies, legally sanctioned the efforts of these Soviet officers to penetrate the East European military establishments, and prevented any host government interference in their work. Moreover, the Warsaw Pact commander in chief still retained his resident representatives in the national ministries of defense as direct sources of information on the situation inside the allied armies.
The crisis in Czechoslovakia and Romania's recalcitrance gave a new dimension to the challenge facing the Soviet Union in Eastern Europe. The Soviet Union's East European allies had learned that withdrawing from the Warsaw Pact and achieving independence from Soviet control were unrealistic goals, and they aimed instead at establishing a greater measure of autonomy within the alliance. Romania had successfully carved out a more independent position within the bounds of the Warsaw Pact. In doing so, it provided an example to the other East European countries of how to use the Warsaw Pact councils and committees to articulate positions contrary to Soviet interests. Beginning in the early 1970s, the East European allies formed intra-alliance coalitions in Warsaw Pact meetings to oppose the Soviet Union, defuse its pressure on any one NSWP member state, and delay or obstruct Soviet policies. The Soviets could no longer use the alliance to transmit their positions to, and receive an automatic endorsement from, the subordinate NSWP countries. While still far from genuine consultation, Warsaw Pact policy coordination between the Soviet Union and the East European countries in the 1970s was a step away from the blatant Soviet control of the alliance that had characterized the 1950s. East European opposition forced the Soviet Union to treat the Warsaw Pact as a forum for managing relations with its allies and bidding for their support on issues like détente, the Third World, the Solidarity crisis in Poland, alliance burden-sharing, and relations with NATO.
In the late 1960s, the Soviet Union abandoned its earlier efforts to achieve the simultaneous dissolution of the two European military blocs and concentrated instead on legitimizing the territorial status quo in Europe. The Soviets asserted that the official East-West agreements reached during the détente era "legally secured the most important political-territorial results of World War II." Under these arrangements, the Soviet Union allowed its East European allies to recognize West Germany's existence as a separate state. In return the West, and West Germany in particular, explicitly accepted the inviolability of all postwar borders in Eastern Europe and tacitly recognized Soviet control of the eastern half of both Germany and Europe. The Soviets claim the 1975 Helsinki Conference on Security and Cooperation in Europe (CSCE), which ratified the existing political division of Europe, as a major victory for Soviet diplomacy and the realization of longstanding Soviet calls, issued through the PCC, for a general European conference on collective security.
The consequences of détente, however, also posed a significant challenge to Soviet control of Eastern Europe. First, détente caused a crisis in Soviet-East German relations. East Germany's leader, Walter Ulbricht, opposed improved relations with West Germany and, following Ceausescu's tactics, used Warsaw Pact councils to attack the Soviet détente policy openly. In the end, the Soviet Union removed Ulbricht from power, in 1971, and proceeded unhindered into détente with the West. Second, détente blurred the strict bipolarity of the cold war era, opened Eastern Europe to greater Western influence, and loosened Soviet control over its allies. The relaxation of East-West tensions in the 1970s reduced the level of threat perceived by the NSWP countries, along with their perceived need for Soviet protection, and eroded Warsaw Pact alliance cohesion. After the West formally accepted the territorial status quo in Europe, the Soviet Union was unable to point to the danger of "imperialist" attempts to overturn East European communist party regimes to justify its demand for strict Warsaw Pact unity behind its leadership, as it had in earlier years. The Soviets resorted to occasional propaganda offensives, accusing West Germany of revanchism and aggressive intentions in Eastern Europe, to remind its allies of their ultimate dependence on Soviet protection and to reinforce the Warsaw Pact's cohesion against the attraction of good relations with the West.
Despite these problems, the détente period witnessed relatively stable Soviet-East European relations within the Warsaw Pact. In the early 1970s, the Soviet Union greatly expanded military cooperation with the NSWP countries. The joint Warsaw Pact exercises conducted in the 1970s gave the Soviet allies their first real capability for offensive operations other than intra-bloc policing actions. The East European countries also began to take an active part in Soviet strategy in the Third World.
With Eastern Europe in a relatively quiescent phase, the Soviet Union began to build an informal alliance system in the Third World during the 1970s. In this undertaking the Soviets drew on their experiences in developing allies in Eastern Europe after 1945. Reflecting this continuity, the Soviet Union called its new Third World allies "people's democracies" and their armed forces "national liberation armies." The Soviets also drew on their East European resources directly by enlisting the Warsaw Pact allies as proxies to "enhance the role of socialism in world affairs," that is, to support Soviet interests in the Middle East and Africa. Since the late 1970s, the NSWP countries have been active mainly in Soviet-allied Angola, Congo, Ethiopia, Libya, Mozambique, the People's Democratic Republic of Yemen (South Yemen), and Syria.
The Soviet Union employed its Warsaw Pact allies as surrogates primarily because their activities would minimize the need for direct Soviet involvement and obviate possible international criticism of Soviet actions in the Third World. Avowedly independent East European actions would be unlikely to precipitate or justify a response by the United States. The Soviet Union also counted on closer East European economic ties with Third World countries to alleviate some of Eastern Europe's financial problems. From the East European perspective, involvement in the Third World offered an opportunity for reduced reliance on the Soviet Union and for semiautonomous relations with other countries.
In the 1970s, the East European allies followed the lead of Soviet diplomacy and signed treaties of friendship, cooperation, and mutual assistance with most of the important Soviet Third World allies. These treaties established a "socialist division of labor" among the East European countries, in which each specialized in the provision of certain aspects of military or economic assistance to different Soviet Third World allies. The most important part of the treaties concerned military cooperation; the Soviets have openly acknowledged the important role of the East European allies in providing weapons to the "national armies of countries with socialist orientation."
In the 1970s and 1980s, Bulgaria, Czechoslovakia, and East Germany were the principal Soviet proxies for arms transfers to the Third World. These NSWP countries supplied Soviet-manufactured equipment, spare parts, and training personnel to various Third World armies. The Soviet Union used these countries to transship weapons to the Democratic Republic of Vietnam (North Vietnam) in the early 1970s, Soviet-backed forces in the 1975 Angolan civil war, and Nicaragua in the 1980s. The Soviet Union also relied on East German advisers to set up armed militias, paramilitary police forces, and internal security and intelligence organizations for selected Third World allies. The Soviets considered this task especially important because an efficient security apparatus would be essential for suppressing opposition forces and keeping a ruling regime, allied to the Soviet Union, in power. In addition to on-site activities, Bulgaria, Czechoslovakia, and particularly East Germany trained Third World military and security personnel in Eastern Europe during the 1980s. During this period, the Soviet Union also relied on its East European allies to provide the bulk of Soviet bloc economic aid and credits to the countries of the Third World. Perhaps revealing their hesitancy about military activities outside the Warsaw Pact's European operational area, Hungary and Poland have confined their Third World involvement to commercial assistance. Both countries sent economic and administrative advisers to assist in the management of state-directed industrial enterprises in the Third World as part of a Soviet campaign to demonstrate the advantages of the "socialist path of development" to potential Third World allies.
The Warsaw Pact has added no new member states in the more than thirty years of its existence. Even at the height of its Third World activities in the mid- to late 1970s, the Soviet Union did not offer Warsaw Pact membership to any of its important Third World allies. In 1986, after the United States bombed Libya in retaliation for its support of international terrorism, the Soviet Union was reported to have strongly discouraged Libyan interest in Warsaw Pact membership, expressed through one or more NSWP countries, and limited its support of Libya to bilateral consultations after the raid. Having continually accused the United States of attempting to extend NATO's sphere of activity beyond Europe, the Soviets did not want to open themselves to charges of broadening the Warsaw Pact. In any event, the Soviet Union would be unlikely to accept a noncommunist, non-European state into the Warsaw Pact. Moreover, the Soviets have already had considerable success in establishing strong allies throughout the world, outside their formal military alliance.
Beginning in the late 1970s, mounting economic problems sharply curtailed the contribution of the East European allies to Soviet Third World activities. In the early 1980s, when turmoil in Poland reminded the Soviet Union that Eastern Europe remained its most valuable asset, the Third World became a somewhat less important object of Soviet attention.
The rise of the independent trade union Solidarity shook the foundation of communist party rule in Poland and, consequently, Soviet control of a country the Soviet Union considers critical to its security and alliance system. Given Poland's central geographic position, this unrest threatened to isolate East Germany, sever vital lines of communication to Soviet forces deployed against NATO, and disrupt Soviet control in the rest of Eastern Europe.
As in Czechoslovakia in 1968, the Soviet Union used the Warsaw Pact to carry out a campaign of military coercion against the Polish leadership. In 1980 and 1981, the Soviet Union conducted joint Warsaw Pact exercises with a higher frequency than at any time since 1968 to exert pressure on the Polish regime to solve the Solidarity problem. Under the cover that the exercises afforded, the Soviet Union mobilized and deployed its reserve and regular troops in the Byelorussian Military District as a potential invasion force. In the West-81 and Union-81 exercises, Soviet forces practiced amphibious and airborne assault landings on the Baltic Sea coast of Poland. These maneuvers demonstrated a ready Soviet capability for intervention in Poland.
In the midst of the Polish crisis, Warsaw Pact commander in chief Viktor Kulikov played a crucial role in intra-alliance diplomacy on behalf of the Soviet leadership. Kulikov maintained almost constant contact with the Polish leadership and conferred with the leaders of Bulgaria, East Germany, and Romania about a possible multilateral Warsaw Pact military action against Poland. In December 1981, Kulikov pressed Polish United Workers Party first secretary Wojciech Jaruzelski to activate his contingency plan for declaring martial law with the warning that the Soviet Union was ready to intervene in the absence of quick action by Polish authorities. As it turned out, the Polish government instituted martial law and suppressed Solidarity just as the Soviet press was reporting that these steps were necessary to ensure that Poland could meet its Warsaw Pact commitment to the security of the other member states.
From the Soviet perspective, the imposition of martial law by Polish internal security forces was the best possible outcome. Martial law made the suppression of Solidarity a strictly domestic affair and spared the Soviet Union the international criticism that an invasion would have generated. However, the use of the extensive Polish paramilitary police and riot troops suggested that the Soviet Union could not count on the Polish Army to put down Polish workers. Moreover, while the Brezhnev Doctrine of using force to maintain the leading role of the communist party in society was upheld in Poland, it was not the Soviet Union that enforced it.
Some question remains as to whether the Soviet Union could have used force successfully against Poland. An invasion would have damaged the Soviet Union's beneficial détente relationship with Western Europe. Intervention would also have added to the evidence that the internal police function of the Warsaw Pact was more important than the putative external collective self-defense mission it had never exercised. Moreover, Romania, and conceivably Hungary, would have refused to contribute contingents to a multinational Warsaw Pact force intended to camouflage a Soviet invasion. Failure to gain the support of its allies would have represented a substantial embarrassment to the Soviet Union. In stark contrast to the unopposed intervention in Czechoslovakia, the Soviets probably also anticipated tenacious resistance from the general population and the Polish Army to any move against Poland. Finally, an invasion would have placed a weighty economic and military burden on the Soviet Union; the occupation and administration of Poland would have tied down at least ten Soviet Army divisions for an extended period of time. Nevertheless, had there been no other option, the Soviet Union would certainly have invaded Poland to eliminate Solidarity's challenge to communist party rule in that country.
Although the Polish Army had previously played an important role in Soviet strategy for a coalition war against NATO, the Soviet Union had to revise its plans and estimates of Poland's reliability after 1981, and it turned to East Germany as its most reliable ally. In the early 1980s, because of its eager promotion of Soviet interests in the Third World and its importance in Soviet military strategy, East Germany completed its transformation from defeated enemy and dependent ally into the premier junior partner of the Soviet Union. Ironically, East Germany's efficiency and loyalty have made the Soviet Union uncomfortable. Encroaching somewhat on the leading role of the Soviet Union in the Warsaw Pact, East Germany has been the only NSWP country to institute the rank of marshal, matching the highest Soviet Army rank and implying its equality with the Soviet Union.
In the late 1970s and early 1980s, the West grew disenchanted with détente, which had failed to prevent Soviet advances in the Third World, the deployment of SS-20 intermediate-range ballistic missiles (IRBMs) aimed at West European targets, the invasion of Afghanistan, or the suppression of Solidarity. The Soviet Union used the renewal of East-West conflict as a justification for forcing its allies to close ranks within the Warsaw Pact. But restoring the alliance's cohesion and renewing its confrontation with Western Europe proved difficult after several years of good East-West relations. The East European countries had acquired a stake in maintaining détente for various reasons. In the early 1980s, internal Warsaw Pact disputes centered on relations with the West after détente, NSWP contributions to alliance defense spending, and the alliance's reaction to IRBM deployments in NATO. The resolution of these disputes produced significant changes in the Warsaw Pact as, for the first time, two or more NSWP countries simultaneously challenged Soviet military and foreign policy preferences within the alliance.
In the PCC meetings of the late 1970s and early 1980s, Soviet and East European leaders of the Warsaw Pact debated about the threat emanating from NATO. When the Soviet Union argued that a new cold war loomed over Europe, the East European countries insisted that the improved European political climate of détente still prevailed. On several occasions, the Soviets had to compromise on the relative weight of these two alternatives in the language of PCC declarations. Although the Soviet Union succeeded in officially ending détente for the Warsaw Pact, it was unable to achieve significantly greater alliance cohesion or integration.
Discussions of the "NATO threat" also played a large part in Warsaw Pact debates about an appropriate level of NSWP military expenditure. The Soviet Union used the 1978 PCC meeting to try to force its allies to match a scheduled 3-percent, long-term increase in the military budgets of the NATO countries. Although the East European countries initially balked at this Soviet demand, they eventually agreed to the increase. However, only East Germany actually honored its pledge, and the Soviet Union failed to achieve its goal of increased NSWP military spending.
The debate on alliance burden-sharing did not end in 1978. Beginning in the late 1970s, the Soviets carefully noted that one of the Warsaw Pact's most important functions was monitoring the "fraternal countries and the fulfillment of their duties in the joint defense of socialism." In 1983 Romania adopted a unilateral three-year freeze on its military budget at its 1982 level. In 1985 Ceausescu frustrated the Soviet Union by calling for a unilateral Warsaw Pact reduction in arms expenditures, ostensibly to put pressure on NATO to follow its example. At the same time, Hungary opposed Soviet demands for increased spending, arguing instead for more rational use of existing resources. In the mid-1980s, East Germany was the only Soviet ally that continued to expand its military spending.
The refusal of the NSWP countries to meet their Warsaw Pact financial obligations in the 1980s clearly indicated diminished alliance cohesion. The East European leaders argued that the costs of joint exercises, their support for Soviet Army garrisons, and the drain of conscription represented sufficient contributions to the alliance at a time of hardship in their domestic economies. In addition to providing access to bases and facilities opposite NATO, the East European communist regimes were also obligated to abide by Soviet foreign policy and security interests to earn a Soviet guarantee against domestic challenges to their continued rule. For its part, the Soviet Union paid a stiff price in terms of economic aid and subsidized trade with the NSWP countries to maintain its buffer zone in Eastern Europe.
The issue of an appropriate Warsaw Pact response to NATO's 1983 deployment of American Pershing II and cruise missiles, matching the Soviet SS-20s, proved to be the most divisive one for the Soviet Union and its East European allies in the early and mid-1980s. After joining in a vociferous Soviet propaganda campaign against the deployment, the East European countries split with the Soviet Union over how to react when their "peace offensive" failed to forestall it.
In 1983 East Germany, Hungary, and Romania indicated their intention to "limit the damage" to East-West ties that could have resulted from the deployment of NATO's new missiles. In doing so, these countries raised the possibility of an independent role for the smaller countries of both alliances in reducing conflicts between the two superpowers. In particular, East Germany sought to insulate its profitable economic ties with West Germany, established through détente, against the general deterioration in East-West political relations. While East Germany had always been the foremost proponent of "socialist internationalism," that is, strict adherence to Soviet foreign policy interests, its position on this issue caused a rift in the Warsaw Pact. In effect, East Germany asserted that the national interests of the East European countries did not coincide exactly with those of the Soviet Union.
The Soviet Union and Czechoslovakia attacked the East German stand, accusing the improbable intra-bloc alliance of East Germany, Hungary, and Romania of undermining the class basis of Warsaw Pact foreign policy. The Soviet Union indicated that it would not permit its allies to become mediators between East and West. The Soviet Union forced East Germany to accept its "counterdeployments" of SS-21 and SS-23 SRBMs and compelled SED general secretary Erich Honecker to cancel his impending visit to West Germany. The Soviets thereby reaffirmed their right to determine the conditions under which the Warsaw Pact member states would conduct relations with the NATO countries. However, the Soviet Union also had to forego any meeting of the PCC in 1984 that might have allowed its recalcitrant allies to publicize their differences on this issue.
As late as 1985, Soviet leaders still had not completely resolved the question of the proper connection between the national and international interests of the socialist countries. Some Soviet commentators adopted a conciliatory approach toward the East European position by stating that membership in the Warsaw Pact did not erase a country's specific national interests, which could be combined harmoniously with the common international interests of all the member states. Others, however, simply repeated the Brezhnev Doctrine and its stricture that a socialist state's sovereignty involves not only the right to independence but also a responsibility to the "socialist commonwealth" as a whole.
The 1968 Soviet invasion of Czechoslovakia was, tangentially, a warning to Romania about its attempts to pursue genuine national independence. But Ceausescu, in addition to refusing to contribute Romanian troops to the Warsaw Pact invasion force, openly declared that Romania would resist any similar Soviet intervention on its territory. Romania pronounced that henceforth the Soviet Union represented its most likely national security threat. After 1968 the Romanian Army accelerated its efforts to make its independent defense strategy a credible deterrent to a possible Soviet invasion of the country. In the 1970s Romania also established stronger ties to the West, China, and the Third World. These diplomatic, economic, and military relations were intended to increase Romania's independence from the Warsaw Pact and the Soviet Union, while guaranteeing broad international support for Romania in the event of a Soviet invasion.
Throughout the 1970s, Romania continued to reject military integration within the Warsaw Pact framework and military intervention against other member states, while insisting on the right of the East European countries to resolve their internal problems without Soviet interference. Romanian objections to the Soviet line within the Warsaw Pact forced the Soviet Union to acknowledge the "possibility of differences arising in the views of the ruling communist parties on the assessment of some international developments." To obtain Romanian assent on several questions, the Soviet Union also had to substitute the milder formulation "international solidarity" for "socialist internationalism"--the code phrase for the subordination of East European national interests to Soviet interests--in PCC declarations. Pursuing a policy opposed to close alliance integration, Romania resisted Soviet domination of Warsaw Pact weapons production as a threat to its autonomy and refused to participate in the work of the Military Scientific-Technical Council and Technical Committee (see The Military Organization of the Warsaw Pact, this Appendix). Nevertheless, the Soviets have insisted that a Romanian Army officer holds a position on the Technical Committee; his rank, however, is not appropriate to that level of responsibility. The Soviet claims are probably intended to obscure the fact that Romania does not actually engage in joint Warsaw Pact weapons production efforts.
Despite continued Romanian defiance of Soviet policies in the Warsaw Pact during the 1980s, the Soviet Union successfully exploited Romania's severe economic problems and bribed Romania with energy supplies on several occasions to gain its assent, or at least silence, in the Warsaw Pact. Although Romania raised the price the Soviet Union had to pay to bring it into line, Romanian dependence on Soviet economic support may foreshadow Romania's transformation into a more cooperative Warsaw Pact ally. Moreover, in 1985 Ceausescu dismissed Minister of Foreign Affairs Stefan Andrei and Minister of Defense Constantin Olteanu, who helped establish the country's independent policies and would have opposed closer Romanian involvement with the Warsaw Pact.
In his first important task after becoming general secretary of the Communist Party of the Soviet Union in March 1985, Mikhail S. Gorbachev organized a meeting of the East European leaders to renew the Warsaw Pact, which was due to expire that May after thirty years. There was little doubt that the Warsaw Pact member states would renew the alliance. However, there was some speculation that the Soviet Union might unilaterally dismantle its formal alliance structure to improve the Soviet image in the West and put pressure on NATO to disband. The Soviets could still have relied on the network of bilateral treaties in Eastern Europe, which predated the formation of the Warsaw Pact and had been renewed regularly. Combined with later status-of-forces agreements, these treaties ensured that the essence of the Soviet alliance system and buffer zone in Eastern Europe would remain intact, regardless of the Warsaw Pact's status. But despite their utility, the bilateral treaties could never substitute for the Warsaw Pact. Without a formal alliance, the Soviet Union would have to coordinate foreign policy and military integration with its East European allies through cumbersome bilateral arrangements. Without the Warsaw Pact, the Soviet Union would have no political equivalent of NATO for international negotiations like the CSCE and Mutual and Balanced Force Reduction talks, or for issuing its arms control pronouncements. The Soviet Union would also have to give up its equal status with the United States as an alliance leader.
Although the Soviet and East European leaders debated the terms of the Warsaw Pact's renewal at their April 1985 meeting--Ceausescu reportedly proposed that it be renewed for a shorter period--they did not change the original 1955 document, or the alliance's structure, in any way. The Soviets concluded that this outcome proved that the Warsaw Pact truly embodied the "fundamental long-term interests of the fraternal countries." The decision to leave the Warsaw Pact unamended was probably the easiest alternative for the Soviet Union and its allies; the alliance was renewed for another twenty-year term with an automatic ten-year extension.
In the mid- to late 1980s, the future of the Warsaw Pact hinged on Gorbachev's developing policy toward Eastern Europe. At the Twenty-seventh Congress of the Communist Party of the Soviet Union in 1986, Gorbachev acknowledged that differences existed among the Soviet allies and that it would be unrealistic to expect them to have identical views on all issues. There has been no firm indication, as yet, of whether Gorbachev would be willing to grant the Soviet allies more policy latitude or insist on tighter coordination with the Soviet Union. However, demonstrating a greater sensitivity to East European concerns than previous Soviet leaders, Gorbachev briefed the NSWP leaders in their own capitals after the 1985 Geneva and 1986 Reykjavik superpower summit meetings.
According to many Western analysts, mounting economic difficulties in the late 1980s and the advanced age of trusted, long-time communist party leaders, like Gustav Husak in Czechoslovakia, Todor Zhivkov in Bulgaria, and Janos Kadar in Hungary, presented the danger of domestic turmoil and internal power struggles in the NSWP countries. These problems had the potential to monopolize Soviet attention and constrain Soviet global activities. But the Soviet Union could turn these potential crises into opportunities, using its economic leverage to pressure its East European allies to adhere more closely to Soviet positions or to influence the political succession process to ensure that a new generation of leaders in Eastern Europe would respect Soviet interests. Soviet insistence on greater NSWP military spending could fuel further economic deterioration, leading to political unrest and even threats to the integrity of the Soviet alliance system in several countries simultaneously. Conversely, limited, Soviet-sanctioned deviation from orthodox socialism could make the East European regimes more secure and reduce the Soviet burden of policing the Warsaw Pact.
The Soviet ground forces constitute the bulk of the Warsaw Pact's military power. In 1987 the Soviet Union provided 73 of the 126 Warsaw Pact tank and motorized rifle divisions. Located in the Soviet Groups of Forces (SGFs) and four westernmost military districts of the Soviet Union, these Soviet Army divisions comprise the majority of the Warsaw Pact's combat-ready, full-strength units. Looking at the numbers of Soviet troops stationed in or near Eastern Europe, and the historical record, one could conclude that the Warsaw Pact is only a Soviet mechanism for organizing intra-alliance interventions or maintaining control of Eastern Europe and does not significantly augment Soviet offensive power vis-à-vis NATO. Essentially a peacetime structure for NSWP training and mobilization, the Warsaw Pact has no independent role in wartime nor a military strategy distinct from Soviet military strategy. However, the individual NSWP armies play important parts in Soviet strategy for war, outside the formal context of the Warsaw Pact.
The goal of Soviet military strategy in Europe is a quick victory over NATO in a nonnuclear war. The Soviet Union would attempt to defeat NATO decisively before its political and military command structure could consult and decide how to respond to an attack. Under this strategy, success would hinge on inflicting a rapid succession of defeats on NATO to break its will to fight, knock some of its member states out of the war, and cause the collapse of the Western alliance. A quick victory would also keep the United States from escalating the conflict to the nuclear level by making retaliation against the Soviet Union futile. A rapid defeat of NATO would preempt the mobilization of its superior industrial and economic resources, as well as reinforcement from the United States, which would enable NATO to prevail in a longer war. Most significant, in a strictly conventional war the Soviet Union could conceivably capture its objective, the economic potential of Western Europe, relatively intact.
In the 1970s, Soviet nuclear force developments increased the likelihood that a European war would remain on the conventional level. By matching the United States in intercontinental ballistic missiles and adding intermediate-range SS-20s to its nuclear forces, the Soviet Union undercut NATO's option to employ nuclear weapons to avoid defeat in a conventional war. After the United States neutralized the Soviet SS-20 IRBM advantage by deploying Pershing II and cruise missiles, the Soviet Union tried to use its so-called "counterdeployments" of SS-21 and SS-23 SRBMs to gain a nuclear-war fighting edge in the European theater. At the same time, the Soviet Union made NATO's dependence on nuclear weapons less tenable by issuing Warsaw Pact proposals for mutual no-first-use pledges and the establishment of nuclear-free zones.
The Soviet plan for winning a conventional war quickly to preclude the possibility of a nuclear response by NATO and the United States was based on the deep-strike concept Soviet military theoreticians first proposed in the 1930s. After 1972 the Soviet Army put deep strike into practice in annual joint Warsaw Pact exercises, including "Brotherhood-in-Arms," "Union," "Friendship," "West," and "Shield." Deep strike would carry an attack behind the front lines of battle, far into NATO's rear areas. The Soviet Union would launch simultaneous missile and air strikes against vital NATO installations to disrupt or destroy the Western alliance's early warning surveillance systems, command and communications network, and nuclear delivery systems. Following this initial strike, the modern-day successor of the World War II-era Soviet mobile group formations, generated out of the SGFs in Eastern Europe, would break through and encircle NATO's prepared defenses in order to isolate its forward forces from reinforcement. Consisting of two or more tank and motorized rifle divisions, army-level mobile groups would also overrun important NATO objectives behind the front lines to facilitate the advance of Soviet follow-on forces, which would cross NSWP territory from the westernmost Soviet military districts.
The Warsaw Pact countries provide forward bases, staging areas, and interior lines of communication for the Soviet Union against NATO. Peacetime access to East European territory under the Warsaw Pact framework has enabled the Soviet military to pre-position troops, equipment, and supplies and to make reinforcement plans for wartime. In the 1970s, the Soviet Union increased road and rail capacity and built new airfields and pipelines in Eastern Europe. However, a quick Soviet victory through deep strike could be complicated by the fact that the attacking forces would have to achieve almost total surprise. Past Soviet mobilizations for relatively small actions in Czechoslovakia, Afghanistan, and Poland took an average of ninety days, while United States satellites observed the entire process. Moreover, the advance notification of large-scale troop movements, required under agreements made at the CSCE, would also complicate the concealment of mobilization. Yet the Soviet Union could disguise its offensive deployments against NATO as semiannual troop rotations in the GSFG, field exercises, or preparations for intervention against an ally.
The Warsaw Pact has no multilateral command or decision-making structure independent of the Soviet Army. NSWP forces would fight in Soviet, rather than joint Warsaw Pact, military operations. Soviet military writings about the alliances of World War I and World War II, as well as numerous recent works marking the thirtieth anniversary of the Warsaw Pact in 1985, reveal the current Soviet view of coalition warfare. The Warsaw Pact's chief of staff, A. I. Gribkov, has written that centralized strategic control, like that which the Red Army exercised over the allied East European national units between 1943 and 1945, is valid today for the Warsaw Pact's JAF (see The Organization of East European National Units, 1943-45, this Appendix).
Soviet military historians indicate that the East European allies did not establish or direct operations on independent national fronts during World War II. The East European forces fought in units, at and below the army level, on Soviet fronts and under the Soviet command structure. The headquarters of the Soviet Supreme High Command exercised control over all allied units through the Soviet General Staff. At the same time, the commanders in chief of the allied countries were attached to and "advised" the Soviet Supreme High Command. There were no special coalition bodies to make joint decisions on operational problems. A chart adapted from a Soviet journal indicates that the Soviet-directed alliance in World War II lacked a multilateral command structure independent of the Red Army's chain of command, an arrangement that also reflects the current situation in the Warsaw Pact (see fig. C, this Appendix). The Warsaw Pact's lack of a wartime command structure independent of the Soviet command structure is clear evidence of the subordination of the NSWP armies to the Soviet Army.
Since the early 1960s, the Soviet Union has used the Warsaw Pact to prepare non-Soviet forces to take part in Soviet Army operations in the European theater of war. In wartime the Warsaw Pact commander in chief and chief of staff would transfer NSWP forces, mobilized and deployed under the Warsaw Pact aegis, to the operational control of the Soviet ground forces. After deployment the Soviet Union could employ NSWP armies, comprised of various East European divisions, on its fronts (see Glossary). In joint Warsaw Pact exercises, the Soviet Union has detached carefully selected, highly reliable East European units, at and below the division-level, from their national command structures. These specific contingents are trained for offensive operations within Soviet ground forces divisions. NSWP units, integrated in this manner, would fight as component parts of Soviet armies on Soviet fronts.
The East European countries play specific roles in Soviet strategy against NATO based on their particular military capabilities. Poland has the largest and best NSWP air force that the Soviet Union could employ in a theater air offensive. Both Poland and East Germany have substantial naval forces that, in wartime, would revert to the command of the Soviet Baltic Fleet to render fire support for Soviet ground operations. These two Soviet allies also have amphibious forces that could carry out assault landings along the Baltic Sea coast into NATO's rear areas. While its mobile groups would penetrate deep into NATO territory, the Soviet Union would entrust the less reliable or capable East European armies, like those of Hungary, Czechoslovakia, and Bulgaria, with a basically defensive mission. The East European countries are responsible for securing their territory, Soviet rear areas, and lines of communication. The air defense systems of all NSWP countries are linked directly into the Soviet Air Defense Forces command. This gives the Soviet Union an impressive early warning network against NATO air attacks.
The Soviet Union counts on greater cooperation from its Warsaw Pact allies in a full-scale war with NATO than in intra-alliance policing actions. Nevertheless, the Soviets expect that a protracted war in Europe would strain the cohesion of the Warsaw Pact. This view may derive from the experience of World War II, in which Nazi Germany's weak alliance partners, Romania, Hungary, and Bulgaria, left the war early and eventually joined the Soviet side. A stalemate in a protracted European war could lead to unrest, endanger communist party control in Eastern Europe, and fracture the entire Soviet alliance system. NSWP reliability would also decline, requiring the Soviet Army to reassign its own forces to carry out unfulfilled NSWP functions or even to occupy a noncompliant ally's territory.
Continuing Soviet concern over the combat reliability of its East European allies influences, to a great extent, the employment of NSWP forces under Soviet strategy. Soviet military leaders believe that the Warsaw Pact allies would be most likely to remain loyal if the Soviet Army engaged in a short, successful offensive against NATO, while deploying NSWP forces defensively. Under this scenario, the NSWP allies would absorb the brunt of NATO attacks against Soviet forces on East European territory. Fighting in Eastern Europe would reinforce the impression among the NSWP countries that their actions constituted a legitimate defense against outside attack. The Soviet Union would still have to be selective in deploying the allied armies offensively. For example, the Soviet Union would probably elect to pit East German forces against non-German NATO troops along the central front. Other NSWP forces that the Soviet Union employed offensively would probably be interspersed with Soviet units on Soviet fronts to increase their reliability. The Soviet Union would not establish separate East European national fronts against NATO. Independent NSWP fronts would force the Soviet Union to rely too heavily on its allies to perform well in wartime. Moreover, independent East European fronts could serve as the basis for a territorial defense strategy and successful resistance to future Soviet policing actions in Eastern Europe.
Soviet concern over the reliability of its Warsaw Pact allies is also reflected in the alliance's military-technical policy, which is controlled by the Soviets. The Soviet Union has given the East European allies less modern, though still effective, weapons and equipment to keep their armies several steps behind the Soviet Army. The Soviets cannot modernize the East European armies without concomitantly improving their capability to resist Soviet intervention.
As a result of its preponderance in the alliance, the Soviet Union has imposed a level of standardization in the Warsaw Pact that NATO cannot match. Standardization in NATO focuses primarily on the compatibility of ammunition and communications equipment among national armies. By contrast, the Soviet concept of standardization involves a broad complex of measures aimed at achieving "unified strategic views on the general character of a future war and the capabilities for conducting it." The Soviet Union uses the Warsaw Pact framework to bring its allies into line with its view of strategy, operations, tactics, organizational structure, service regulations, field manuals, documents, staff procedures, and maintenance and supply activities.
By the 1980s, the Soviet Union had achieved a degree of technical interoperability among the allied armies that some observers would consider to be a significant military advantage over NATO. However, the Soviet allies had weapons and equipment that were both outdated and insufficient in number. As one Western analyst has pointed out, the NSWP armies remain fully one generation behind the Soviet Union in their inventories of modern equipment and weapons systems and well below Soviet norms in force structure quantities. Although T-64 and T-72 tanks had become standard and modern infantry combat vehicles, including the BMP-1, comprised two-thirds of the armored infantry vehicles in Soviet Army units deployed in Eastern Europe, the NSWP armies still relied primarily on older T-54 and T-55 tanks and domestically produced versions of Soviet BTR-50 and BTR-60 armored personnel carriers. The East European air forces did not receive the MiG-23, first built in 1971, until the late 1970s, and they still did not have the most modern Soviet ground attack fighter-bombers, like the MiG-25, MiG-27, and Su-24, in the mid- to late 1980s. These deficiencies called into question NSWP capabilities for joining in Soviet offensive operations against NATO and indicated primarily a rear-area role for the NSWP armies in Soviet strategy.
Within the Warsaw Pact, the Soviet Union decides which of the allies receive the most up-to-date weapons. Beginning in the late 1960s and early 1970s, the Soviet Union provided the strategically located Northern Tier countries, East Germany and Poland especially, with greater quantities of advanced armaments. By contrast, the less important Southern Tier, consisting of Hungary, Bulgaria and Romania, received used equipment that was being replaced in Soviet or Northern Tier forces. In the mid-1970s, overall NSWP force development slowed suddenly as the Soviet Union became more interested in selling arms to earn hard currency and gain greater influence in the Third World, particularly in the oil-rich Arab states of the Middle East. At the same time, growing economic problems in Eastern Europe made many Third World countries look like better customers for Soviet arms sales. Between 1974 and 1978, the Soviet Union sent the equivalent of US$18.5 million of a total US$27 million in arms transfers outside the Warsaw Pact. Moreover, massive Soviet efforts to replace heavy Arab equipment losses in the 1973 war against Israel and the 1982 Syrian-Israeli air war over Lebanon came largely at the expense of modernization for the East European allies. In the late 1980s, the NSWP countries clearly resented the fact that some Soviet Third World allies, including Algeria, Libya, and Syria, had taken delivery of the newest Soviet weapons systems, such as the MiG-25, not yet in their own inventories. The Soviet Union probably looked at a complete modernization program for the NSWP armies as unnecessary and prohibitively costly for either it or its allies to undertake.
The Soviet Union claims the right to play the leading role in the Warsaw Pact on the basis of its scientific, technical, and economic preponderance in the alliance. The Soviet Union also acknowledges its duty to cooperate with the NSWP countries by sharing military-technical information and developing their local defense industries. This cooperation, however, amounts to Soviet control over the supply of major weapons systems and is an important aspect of Soviet domination of the Warsaw Pact allies. Warsaw Pact military-technical cooperation prevents the NSWP countries from adopting autonomous policies or otherwise defying Soviet interests through a national defense capability based on domestic arms production. In discussions of the United States and NATO, the Soviets acknowledge that standardization and control of arms purchases are effective in increasing the influence of the leading member of an alliance over its smaller partners. In the same way, Soviet arms supplies to Eastern Europe have made the NSWP military establishments more dependent on the Soviet Union. To deny its allies the military capability to successfully resist a Soviet invasion, the Soviet Union does not allow the NSWP countries to produce sufficient quantities or more than a few kinds of weapons for their national armies.
Romania is the only Warsaw Pact country that has escaped Soviet military-technical domination. In the late 1960s, Romania recognized the danger of depending on the Soviet Union as its sole source of military equipment and weapons. As a result, Romania initiated heavy domestic production of relatively low-technology infantry weapons and began to seek non-Soviet sources for more advanced armaments. Romania has produced British transport aircraft, Chinese fast-attack boats, and French helicopters under various coproduction and licensing arrangements. Romania has also produced a fighter-bomber jointly with Yugoslavia. However, Romania still remains backward in its military technology because both the Soviet Union and Western countries are reluctant to transfer their most modern weapons to it. Each side must assume that any technology given to Romania could end up in enemy hands.
Apart from Romania, the Soviet Union benefits from the limited military production of its East European allies. It has organized an efficient division of labor among the NSWP countries in this area. Czechoslovakia and East Germany, in particular, are heavily industrialized and probably surpass the Soviet Union in their high-technology capabilities. The Northern Tier countries produce some Soviet heavy weapons, including older tanks, artillery, and infantry combat vehicles on license. However, the Soviet Union generally restricts its allies to the production of a relatively narrow range of military equipment, including small arms, munitions, communications, radar, optical, and other precision instruments and various components and parts for larger Soviet-designed weapons systems.
* * *
The 1980s have witnessed a dramatic increase in the amount of secondary source material published about the Warsaw Pact. The works of Alex Alexiev, Andrzej Korbonski, and Condoleezza Rice, as well as various Soviet writers, provide a complete picture of the Soviet alliance system and the East European military establishments before the formation of the Warsaw Pact. William J. Lewis's The Warsaw Pact: Arms, Doctrine, and Strategy is a very useful reference work with considerable information on the establishment of the Warsaw Pact and the armies of its member states. The works of Malcolm Mackintosh, a long-time observer of the Warsaw Pact, cover the changes in the Warsaw Pact's organizational structure and functions through the years. Christopher D. Jones's Soviet Influence in Eastern Europe: Political Autonomy and the Warsaw Pact and subsequent articles provide a coherent interpretation of the Soviet Union's use of the Warsaw Pact to control its East European allies. In "The Warsaw Pact at 25," Dale R. Herspring examines intra-alliance politics in the PCC and East European attempts to reduce Soviet domination of the Warsaw Pact. Soviet military journals are the best source for insights into the East European role in Soviet military strategy. Daniel N. Nelson and Ivan Volgyes analyze East European reliability in the Warsaw Pact. Nelson takes a quantitative approach to this ephemeral topic. By contrast, Volgyes uses a historical and political framework to draw his conclusions on the reliability issue. The works of Richard C. Martin and Daniel S. Papp present thorough discussions of Soviet policies on arming and equipping the NSWP allies. (For further information and complete citations, see Bibliography.) | http://memory.loc.gov/frd/cs/germany_east/gx_appnc.html | 13 |
51 | 3.1. Stellar Ages
Ever since Kelvin and Helmholtz first estimated the age of the Sun to be less than 100 million years, assuming that gravitational contraction was its prime energy source, there has been a tension between stellar age estimates and estimates of the age of the universe. In the Kelvin-Helmholtz case, the age of the Sun appeared too short to accommodate an Earth which was several billion years old. Over much of the latter half of the 20th century, the opposite problem dominated the cosmological landscape. Stellar ages, based on nuclear reactions as measured in the laboratory, appeared to be too old to accommodate even an open universe, based on estimates of the Hubble parameter. Again, as I shall outline in the next section, the observed expansion rate gives an upper limit on the age of the Universe which depends upon the equation of state and the overall energy density of the dominant matter in the Universe.
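To recall the scale involved (a standard order-of-magnitude estimate rather than a result quoted from this discussion), the Kelvin-Helmholtz timescale follows from dividing the Sun's gravitational binding energy by its luminosity:

\[
t_{\rm KH} \sim \frac{G M_\odot^2}{R_\odot L_\odot}
\approx \frac{(6.7\times 10^{-11})\,(2\times 10^{30}\ {\rm kg})^2}{(7\times 10^{8}\ {\rm m})\,(3.8\times 10^{26}\ {\rm W})}
\approx 10^{15}\ {\rm s} \approx 3\times 10^{7}\ {\rm yr},
\]

consistent with the sub-100-million-year figure attributed to Kelvin and Helmholtz above.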
There are several methods to attempt to determine stellar ages, but I will concentrate here on main sequence fitting techniques, because those are the ones I have been involved in.
The basic idea behind main sequence fitting is simple. A stellar model is constructed by solving the basic equations of stellar structure, including conservation of mass and energy, the assumption of hydrostatic equilibrium, and the equations of energy transport. Boundary conditions at the center of the star and at the surface are then imposed and combined with an assumed equation of state, opacities, and nuclear reaction rates in order to evolve a star of given mass and elemental composition.
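In schematic form (a textbook summary, assuming spherical symmetry; the radiative gradient is replaced by a convective one where convection dominates), the structure equations being solved are

\[
\frac{dm}{dr} = 4\pi r^2 \rho, \qquad
\frac{dP}{dr} = -\frac{G m(r)\,\rho}{r^2}, \qquad
\frac{dL}{dr} = 4\pi r^2 \rho\,\epsilon, \qquad
\frac{dT}{dr} = -\frac{3\kappa \rho L}{16\pi a c\, r^2 T^3},
\]

where ρ is the density, ε the nuclear energy generation rate, κ the opacity, and a the radiation constant.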
Globular clusters are compact stellar systems containing up to 10^5 stars, with low heavy element abundance. Many are located in a spherical halo around the galactic center, suggesting they formed early in the history of our galaxy. By making a cut on those clusters with large halo velocities and the lowest metallicities (less than 1/100th the solar value), one attempts to observationally distinguish the oldest such systems. Because these systems are compact, one can safely assume that all the stars within them formed at approximately the same time.
Observers measure the color and luminosity of stars in such clusters, producing color-magnitude diagrams of the type shown in Figure 4.
Figure 4. Color-magnitude diagram for a typical globular cluster, M15. Vertical axis plots the magnitude (luminosity) of the stars in the V wavelength region and the horizontal axis plots the color (surface temperature) of the stars.
Next, using stellar models, one can attempt to evolve stars of differing mass for the metallicities appropriate to a given cluster, in order to fit observations. A point which is often conveniently chosen is the so-called main sequence turnoff (MSTO) point, the point at which hydrogen burning (main sequence) stars have exhausted their supply of hydrogen in the core. After the MSTO, the stars quickly expand, become brighter, and are referred to as Red Giant Branch (RGB) stars. Higher mass stars develop a helium core that is so hot and dense that helium fusion begins; these stars form along the horizontal branch. Some stars along this branch are unstable to radial pulsations, the so-called RR Lyrae stars mentioned earlier, which are important distance indicators. While one could in principle attempt to fit theoretical isochrones (the locus of points on the predicted CM curve corresponding to stars of different mass which have evolved to a specified age) to observations at any point, the main sequence turnoff is both sensitive to age and involves minimal (though just how minimal remains to be seen) theoretical uncertainties.
Dimensional analysis tells us that the main sequence turnoff should be a sensitive function of age. The luminosity of main sequence stars is very roughly proportional to the third power of the stellar mass. Hence the time it takes to burn the hydrogen fuel is proportional to the total amount of fuel (proportional to the mass M) divided by the luminosity (proportional to M^3). The lifetime of stars on the main sequence is therefore roughly proportional to the inverse square of the stellar mass.
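Written out, this crude scaling argument (using only the approximate mass-luminosity relation quoted above, not a detailed stellar model) is

\[
t_{\rm MS} \propto \frac{M}{L} \propto \frac{M}{M^{3}} = M^{-2},
\]

so, for example, a star of 0.8 solar masses near a globular cluster turnoff would live roughly (0.8)^{-2} ≈ 1.6 times longer on the main sequence than the Sun.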
Of course the ability to go beyond this rough approximation depends completely on the confidence one has in one's stellar models. It is worth noting that several improvements in stellar modeling have recently combined to lower the overall age estimates of globular clusters. The inclusion of diffusion lowers the age of globular clusters by about 7%, and a recently improved equation of state which incorporates the effect of Coulomb interactions has led to a further 7% reduction in overall ages. Of course, what is most important for the comparison of cosmological predictions with inferred age estimates is not merely the best fit values of these and other stellar model parameters but their uncertainties.
Over the course of the past several years, my collaborators and I have tried to incorporate stellar model uncertainties, along with observational uncertainties, into a self-consistent Monte Carlo analysis which might allow one to estimate a reliable range of globular cluster ages. Others have carried out independent but similar studies, and at the present time, rough agreement has been obtained between the different groups.
I will not belabor the detailed history of all such efforts here. The most crucial insight has been that stellar model uncertainties are small in comparison to an overall observational uncertainty inherent in fitting predicted main sequence luminosities to observed turnoff magnitudes. This matching depends crucially on a determination of the distance to globular clusters. The uncertainty in this distance scale produces by far the largest uncertainty in the quoted age estimates.
In many studies, the distance to globular clusters can be parametrized in terms of the inferred magnitude of the horizontal branch stars. This magnitude can, in turn, be presented in terms of the inferred absolute magnitude, Mv(RR), of RR Lyrae variable stars located on the horizontal branch.
In 1997, the Hipparcos satellite produced its catalogue of parallaxes of nearby stars, causing an apparent revision in distance estimates. The Hipparcos parallaxes seemed to be systematically smaller, for the smallest measured parallaxes, than previous terrestrially determined parallaxes. Could this represent the unanticipated systematic uncertainty that David has suspected? Since all the detailed analyses had been performed pre-Hipparcos, several groups scrambled to incorporate the Hipparcos catalogue into their analyses. The immediate result was a generally lower mean age estimate, reducing the mean value to 11.5-12 Gyr, and allowing ages of the oldest globular clusters as low as 9.5 Gyr. However, what is also clear is that there is now an explicit systematic uncertainty in the RR Lyrae distance modulus which dominates the results. Different measurements are no longer consistent. Depending upon which distance estimator is correct, and there is now better evidence that the distance estimators which disagree with Hipparcos-based main sequence fitting should not be dismissed out of hand, the best-fit globular cluster estimate could shift up by perhaps 1 sigma, or about 1.5 Gyr, to about 13 Gyr.
Within the past two years, Brian Chaboyer and I have reanalyzed globular cluster ages, incorporating new nuclear reaction rates, cosmological estimates of the ^4He abundance, and most importantly, several new estimates of Mv(RR). The result is that while systematic uncertainties clearly still dominate, we argue that the mean age of the oldest globular clusters has increased by about 1 Gyr, to 12.7 (+3/-2) Gyr at 95% confidence, with a 95% confidence range of about 11-16 Gyr. It is this range that I shall now compare to that determined using the Hubble estimates given earlier.
3.2. Hubble Age
As alluded to earlier, in a Friedmann-Robertson-Walker Universe, the age of the Universe is directly related both to the overall density of energy and to the equation of state of the dominant component of this energy density. The equation of state is parameterized by the ratio w = p / ρ, where p stands for pressure and ρ for energy density. It is this ratio which enters into the second-order Friedmann equation describing the change in the Hubble parameter with time, which in turn determines the age of the Universe for a specific net total energy density.
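For a single dominant component with equation of state w, the second-order Friedmann equation referred to here can be written (in units with c = 1, with ρ the energy density) as

\[
\frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\,(\rho + 3p) = -\frac{4\pi G}{3}\,\rho\,(1 + 3w),
\]

which makes explicit why the age inferred for a given present expansion rate depends on w: components with w < -1/3 decelerate the expansion less (or even accelerate it) and therefore lengthen the inferred age.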
The fact that this depends on two independent parameters has meant that one could reconcile possible conflicts with globular cluster age estimates by altering either the energy density, or the equation of state. An open universe, for example, is older for a given Hubble Constant, than is a flat universe, while a flat universe dominated by a cosmological constant can be older than an open matter dominated universe.
If, however, we incorporate into our analysis the recent geometric determination which suggests we live in a flat Universe, then our constraints on the possible equation of state of the dominant energy density of the universe become more severe. If, for instance, we allow for a diffuse component to the total energy density with the equation of state of a cosmological constant (w = -1), then the age of the Universe for various combinations of matter and cosmological constant is as shown in the table below.
| Omega_matter | Omega_Lambda | Age (Gyr) |
| 0.2 | 0.8 | 15.3 ± 1.5 |
| 0.3 | 0.7 | 13.7 ± 1.4 |
| 0.35 | 0.65 | 12.9 ± 1.3 |
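As a rough numerical cross-check of the table above, the age of a flat universe containing matter and a cosmological constant can be obtained by integrating the Friedmann equation over the scale factor. The sketch below is illustrative only: it assumes a Hubble constant of 68 km/s/Mpc, which may differ slightly from the value used to generate the table, so small offsets from the quoted ages are expected.

```python
import numpy as np
from scipy.integrate import quad

def age_gyr(omega_m, omega_lambda, h0_kms_mpc=68.0):
    """Age of a flat FRW universe (matter + cosmological constant), in Gyr.

    t0 = (1/H0) * integral_0^1 da / (a * sqrt(omega_m/a^3 + omega_lambda))
    """
    hubble_time_gyr = 977.8 / h0_kms_mpc  # 1/H0 in Gyr for H0 in km/s/Mpc
    integrand = lambda a: 1.0 / (a * np.sqrt(omega_m / a**3 + omega_lambda))
    integral, _ = quad(integrand, 0.0, 1.0)
    return hubble_time_gyr * integral

for om, ol in [(0.2, 0.8), (0.3, 0.7), (0.35, 0.65)]:
    print(f"Omega_matter = {om:.2f}, Omega_Lambda = {ol:.2f}: t0 ~ {age_gyr(om, ol):.1f} Gyr")
```

For (0.3, 0.7) this yields roughly 13.9 Gyr, in reasonable agreement with the tabulated 13.7 ± 1.4 Gyr.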
Clearly, a matter-dominated flat universe is in trouble if one wants to reconcile the inferred Hubble age with the lower limit on the age of the universe inferred from globular clusters. In fact, if one took the above constraints at face value, such a Universe would be ruled out on the basis of the age estimates and the Hubble constant estimates. However, I am old enough to know that systematic uncertainties in cosmology often shift parameters well outside their formal two-sigma, or even three-sigma, limits. In order to definitively rule out a flat matter-dominated universe using a comparison of stellar and Hubble ages, uncertainties in both would have to be reduced by at least a factor of two. | http://ned.ipac.caltech.edu/level5/Krauss/Krauss3.html | 13
121 | Chapter 6: The Geology of Mars
Over the past 40 years, American and Soviet spacecraft have produced a wealth of information about Mars. Photomosaics of the entire planet have been made, topographic and surface relief maps have been published, and geologic maps compiled in a variety of scales. Small robotic geologists have rolled across the surface taking pictures and analyzing rocks. Moreover, we may also have pieces of Mars here on Earth, delivered free of charge as small meteorites. Consequently, our understanding of the planet that generations of humans thought to be most like Earth has increased dramatically. Compared to the landforms on the Moon or Mercury, the surface features of Mars have more diverse and complex origins. Almost everything on Mars is not only big but gigantic. Impacts shaped about half the surface; the largest crater is about twice the size of the largest crater on the Moon. Other areas have been formed by volcanic activity, with some volcanic shields reaching more than 20 kilometers above the surrounding surface. The largest volcano on Mars has ten times more volume than the largest volcano on Earth. Mars even has a tenuous atmosphere and bright polar caps. Photographs show the details of advancing and retreating frost shrouds. Wind action is a major geologic process, and moving sand and dust have altered many features on the planet. To the surprise of all, ancient channels are found on the surface; these appear to have been formed by running water. Tectonic movements fractured parts of the planet and produced a great canyon system which has been enlarged by erosion. It is apparent that the geologic agents operating on Mars not only differed from place to place but also varied throughout the planet's long history. Because it is larger than the asteroids, the Moon, and Mercury, Mars provides another reference point in the progression of planet size for understanding the fundamental principles of how and why the planets evolve.
1. The surface of Mars can be divided into two major regions: (a) the densely cratered, more ancient highlands in the southern hemisphere, and (b) the younger, lower plains in the north.
2. Mars probably possesses an internally differentiated structure with a metallic core, a thick mantle composed of iron-rich silicate minerals and a thin crust.
3. Cratering has been a major geologic process on Mars, and a record of an early period of intense meteorite bombardment has been preserved.
4. Volatiles outgassed from the interior formed an atmosphere and a simple hydrologic system. Later, as temperatures lowered, liquid water became locked up in the polar caps and in the pore spaces of rocks and soil, as groundwater or ice, and was only occasionally released in large floods.
5. Eolian processes have been observed in action on Mars, and many surface features have been modified by wind erosion or deposition.
6. Volcanism is revealed by three types of features: (a) huge shield volcanoes, (b) volcanic patera possibly unique to Mars, and (c) volcanic plains.
7. Large crustal domes and graben are the major tectonic features, and may be produced by thermal convection in the mantle.
8. Mars experienced a long and complex geologic history; it is not a primitive sphere dominated by impact scars, as are the Moon and Mercury. The larger size of Mars, and its compositional differences, may be responsible for its extended thermal evolution.
The Planet Mars
The planet Mars has fascinated observers for many centuries, partly because of its distinctive red glow but also because it is so near Earth. Indeed, many of its physical characteristics suggested that Mars might even harbor life. Fortuitously, the rotational axis of Mars currently tilts almost exactly the same amount as Earth's, causing cyclical season changes (which last about twice as long on Mars, because of its longer period of revolution about the Sun). Polar caps of carbon dioxide and water ice advance and retreat in response to the temperature changes. (Some imaginative scientists early in the twentieth century suggested that the color changes accompanying the season changes showed that plant life existed on the surface, advancing like a front toward the poles as each spring brought warmer temperatures.) Mars also possesses a thin atmosphere composed mainly of carbon dioxide. Nitrogen, oxygen, and a seasonally variable amount of water are also present in the atmosphere. The water-vapor content is much less than that in the air over terrestrial deserts. Tenuous clouds and migrating storms indicate that Earth-like processes occur within the atmosphere. Although temperatures are generally below freezing and the atmospheric pressure is a hundred times lower than that on Earth, the fourth planet from the Sun seems hospitable compared to the airless Moon or the searing heat on Mercury.
Mars is a fairly small planet, larger in diameter than Mercury or the Moon, but smaller and less massive than Earth (about one-tenth the mass of Earth). The gravitational force exerted on objects near the surface is slightly more than one-third of Earth's constant tug and about the same as Mercury's. At 24 hours and 37 minutes, the martian day is only a bit longer than a day on Earth. Mars has two natural satellites. These tiny moons, called Phobos (about 20 km across) and Deimos (only 10 km across), are the size of small asteroids and will be described in Chapter 8.
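As a quick back-of-the-envelope check of the surface gravity figure quoted above (the masses and radii below are standard reference values, not data taken from this chapter):

```python
# Surface gravity g = G * M / R^2, compared between Earth and Mars.
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

bodies = {
    "Earth": {"mass_kg": 5.972e24, "radius_m": 6.371e6},
    "Mars":  {"mass_kg": 6.417e23, "radius_m": 3.390e6},
}

g = {name: G * b["mass_kg"] / b["radius_m"] ** 2 for name, b in bodies.items()}

print(f"Earth: {g['Earth']:.2f} m/s^2")
print(f"Mars:  {g['Mars']:.2f} m/s^2 (about {g['Mars'] / g['Earth']:.2f} of Earth's)")
```

The result, about 3.7 m/s^2 or roughly 0.38 of Earth's surface gravity, matches the "slightly more than one-third" figure given above.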
Geologically, Mars is vastly different from Earth in spite of some outward similarities mentioned above. It is an "intermediate" planet, with an array of Moonlike, Earthlike, and uniquely martian features, which we will examine closer.
Major Geologic Provinces
The surface of Mars was completely photographed during the epic flight of Mariner 9 in 1971 and 1972. Subsequently, remarkably detailed photographs were obtained by the orbiting satellites of the Viking mission, which operated from 1976 to 1980. From these data, topographic and geologic maps of the whole planet have been made and an intriguing, increasingly complex interpretation of the martian surface is developing.
The generalized geologic map of Mars (Figure 6.1) shows the location of the major provinces and the distribution of the important terrain types. The surface of Mars may be divided into two major geologic provinces: (1) the old, densely cratered terrain in the south; and (2) the low, young, sparsely cratered plains in the north. The boundary separating the two almost coincides with a great circle inclined at 35 degrees to the equator. These landforms in these major provinces were modified by a surprising variety of surface and internal processes to produce terrain types not seen on the Moon or Mercury.
A geologic map of Mars shows the major regions: (1) The southern hemisphere is an old, densely cratered highland with a crater distribution more like Mercury's intercrater plains than like the lunar highlands. This hemisphere is furrowed by small, dendritic valley systems. (2) The northern hemisphere is dominated by younger, relatively smooth plains, apparently composed of sedimentary deposits and vast lava flows. A well-defined escarpment separates the two provinces, except where it is buried by younger lavas of the Tharsis region. Several distinctive terrain types have been formed by erosion and slope retreat along the escarpment. Major flood channels cross the escarpment and empty into the northern lowlands.
Densely Cratered Southern Highlands
The densely cratered terrain in the southern hemisphere of Mars is a highland somewhat like the ancient cratered terrain of Mercury or the highlands of the Moon, where craters of all sizes and shapes dominate the landscape. Several huge multiring basins, larger than Imbrium or Caloris, are the major features in parts of the highlands. Elevations are generally in excess of 1 km above the mean martian radius except in the smooth interiors of Hellas and Argyre basins. Much of the martian cratered terrain contains large areas of relatively smooth intercrater plains and a few old shield volcanoes.
Although the nature of the intercrater plains on Mercury is still in question, some martian plains are definitely volcanic. Some large channels that appear to have been carved by rivers or huge floods cross this ancient battered surface as well, giving it a decidedly nonlunar appearance.
The Northern Plains
The low plains in the northern hemisphere are separated from the southern highlands by a prominent cliff that is 2 to 3 km high in places. If Mars had an ocean 2.5 km deep, the ocean would cover the northern plains and leave most of the heavily cratered terrain high and dry as one huge landmass. The brightness of the plains is highly variable, reflecting the diverse origins of the different smooth surfaces. Photographs of this area show prominent mare-type lava flows, small lava or cinder cones, and ridges, all indicative of a volcanic origin. Dunes and wind streaks formed behind topographic obstacles attest to eolian (wind) modification and blanketing of large tracts. Since many of the major channel systems empty into basins in the north, stream sediments may be interlayered with the other deposits. It was in this area of complicated geology that the two Viking landers touched down.
Two continent-sized domical highs or upwarps capped with huge shield volcanoes are located in the northern hemisphere near the escarpment. The largest is in the Tharsis region of the western hemisphere, where the volcanic field appears to sit astride the global escarpment. The smaller field, Elysium, is located well within the northern plains of the eastern hemisphere.
The large structural dome in the Tharsis region has created a system of radial fractures that extend nearly halfway around the globe. Faulting and erosion along an east-west segment of the fracture system have developed Valles Marineris, the "Grand Canyon of Mars," which extends from the fractured bulge near Tharsis across the cratered plains, and breaks into several north-trending channels near the escarpment.
The ultimate origin of the scarp separating the two physiographic provinces is still unknown. It is well defined in the eastern hemisphere but is masked in the west by lava flows from the Tharsis volcanoes. The escarpment is an erosional contact between the younger northern plains and the ancient cratered highlands in the south. It has been dissected by various types of stream erosion and mass movement that are unique to Mars. This erosion gradually forced the margin of the highlands to retreat southward. In places the cratered highlands south of the escarpment are fractured and modified by slumping. This landscape is accentuated as the scarp is approached, so that only chaotic masses of angular blocks, or mesas, are left standing above the plain--- creating a fantastic landscape called the fretted terrain. Farther into the plains, only small, rounded knobs are left; these eventually disappear to the north. Some major channels arise in this complexly eroded transition zone and modify the global escarpment.
The bright polar caps of Mars were first identified from Earth with the use of telescopes. Large permanent caps of water ice are located at each pole. The caps rest on a sequence of layered sedimentary deposits. Carbon dioxide ice is present at least temporarily at both poles. Revealing the power of the atmospheric circulation, sections of the surrounding terrain have been pitted and etched by wind-related erosional processes, and a discontinuous ring of dunes surrounds much of the northern cap photographed during the Viking mission.
On the Surface of Mars
Certainly one of the most important accomplishments during the last two decades of space exploration was the soft landing of a pair of spindly Viking spacecraft on Mars in 1976. The landers performed amazingly well and returned panoramic vistas of boulder-strewn plains from two widely separated spots on the planet, as well as chemical analyses of the regolith. Although both sites appear superficially the same, there are some significant differences. The sites were chosen because of their smoothness as seen from orbit and because of the strong indications that water may be present in the soil. They thus do not show us a view of the more spectacular landscapes of Mars found elsewhere.
Viking 1 landed in a low, smooth basin in the northern hemisphere several hundred kilometers from the global escarpment. Several large channels that drained water from the highlands to the west and south flow into Chryse Planitia (Gold Plain), and it was thought that water from them might have helped to support some sort of biological activity. The cameras on Viking 1 showed a dusty, orange-red surface strewn with boulders of all sizes and shapes (Figure 6.2). Drifts of windblown sand have collected behind and on top of many rocks; the wind has also fluted some pebbles and scoured away the sand to expose the light-colored bedrock beneath the soil.
One striking feature photographed by the lander is the bright sky (Figure 6.2). The Moon, because it has no atmosphere, has no sky and is surrounded only by dark space. The martian sky has a reddish tint imparted by dust particles suspended in the thin, carbon dioxide atmosphere. The dust may be red because of the oxidation of iron-rich minerals in the soil---a process similar to rusting.
The Viking 1 landing site on Mars is located near the mouth of several large channels which converge on Chryse Planitia in the northern hemisphere of Mars.
(A) Chryse Planitia was shaped by vast floods, which flowed north across the highly cratered southern hemisphere and east across Lunae Planum. (NASA and U.S. Geological Survey)
(B) The panoramic view of the Viking 1 site shows the meter-high dunes and abundant angular rocks. The large boulder in the foreground is about 2 m across. Many of the features here indicate the importance of wind activity in shaping the details of the martian landscape. Some of the sand dunes have been eroded, so their internal cross-stratification is exposed. (NASA)
Chryse Basin is a shallow sediment- and lava-filled trough. Although no impact craters occur near the spacecraft, several appear on the horizon, and views from orbit show that large channels extend to within about 60 km of the landing site. The history of the area probably began with the formation of a densely cratered surface that was later covered with lava flows. Channel formation along the borders of the basin may have flooded the area and deposited sediments eroded from the highlands to produce the rubbly landscape. Subsequent light bombardment and continual eolian modification have produced further changes.
Far to the north, on the other side of Mars, Viking 2 set down about a month after its sister spacecraft, in Utopia Planitia. In spite of the low temperatures, abundant water vapor had been detected from orbit. During the summer days temperatures hovered at about -10 C (260 K) and the low at night reached a frigid -50 C (220 K). The photos returned from Utopia show an orange-red plain littered with porous, spongelike rocks, similar to some terrestrial volcanic rocks (Figure 6.3). Drifts of sand and wind-shaped cobbles and boulders are not as common here as on the Chryse plain. A polygonal system of troughs that is not seen at the other landing site developed in the soil. The troughs may have formed as ice wedges grew in the soil, cracking it and leaving shallow linear depressions on the surface.
The Viking 2 landing site is located on the opposite side of Mars but is similar to the Viking 1 site.
(A) Viking 2 touched down in Utopia Planitia about 200 km west of Mie, a large and relatively young impact crater. The volcano Elysium Mons lies far to the south. High concentrations of atmospheric water vapor had been measured from orbit over this area. (NASA and U.S. Geological Survey)
(B) The view from Viking 2 reveals a vast plain littered with angular blocks and low drifts of sand. The blocks may be ejecta from the impact crater, or they may be the weathered remnants of a lava or debris flow. Like those at the Viking 1 site, many of these rocks are pitted. The scene is tilted because the lander is resting on a tilted surface. (NASA)
The view from orbit shows that Utopia Planitia is a relatively smooth plain, locally dotted with low volcanic domes and small craters. The crater frequency is lower here than at Chryse, indicating that the surface is younger. The simplest history of the area maintains that Viking 2 landed on the far edge of an ejecta blanket formed when the crater called Mie was excavated. Other theories hold that the rubbly surface was shaped by massive floods or as debris was dropped by a formerly more extensive ice cap.
Using various instruments the Viking landers alternately poked, prodded, scooped, hammered, cooked, and analyzed the martian soil, determining its composition and physical characteristics and trying to find out if it contained any evidence of life. Although no unambiguous signs of martian biota were found, we now know much more about the chemistry of martian soil. Comparison with terrestrial materials indicates that the martian soil may have originated from hydration or alteration of iron-rich basaltic rocks, consistent with the hints of volcanism in both areas.
The recent conclusion that a few meteorites are samples of Mars, blasted away by low-angle impacts, opens a new door for our exploration of the planet. We discussed these meteorites briefly in Chapter 3. These SNC meteorites (short for Shergotty, Nakhla, and Chassigny) are igneous rocks with basaltic compositions very similar to those found by the landers and rovers. The gases trapped within them have the same composition as that of the martian atmosphere as analyzed by several spacecraft. Their ages (some are 1.3 billion years old) are extremely young for any asteroid and suggest that they must have formed on a large planet that cooled slowly and had a long volcanic history. Their compositions tell us that there is significant water in the martian interior and at the surface; the abundances of moderately volatile elements like potassium suggest that Mars has a larger endowment of volatiles than Earth itself. And their ages give us a few fixed points in the long martian history.
The Internal Structure of Mars
We have seen in previous discussions that both the Moon and Mercury have differentiated interiors made up of shells with different chemical compositions. Unfortunately the Viking landers did not detect "marsquakes" strong enough to precisely define the thickness of the various shells, but much has been inferred about their nature from variations in the gravity field, from comparison with better-understood planets, and from calculations that predict the thermal evolution of the martian interior (Figure 6.4).
The interior of Mars is probably dominated by a thick mantle composed of iron and magnesium silicates. The core, probably composed of iron and sulfur, does not occupy as large a portion of Mars as metallic cores occupy in Earth and Mercury. The crust of Mars and most other planets is very thin, and probably resulted from the original differentiation of the planet. The crust may be 25 to 70 km thick. An asthenosphere of partially molten mantle material probably exists at a depth of about 250 km, marking the base of the lithosphere. (Calvin Hamilton, Views of the Solar System)
Mars is larger in diameter than Mercury and about twice as massive, but it has a lower bulk density (3.93 g/cm3). Apparently, Mars is composed of lighter materials than Mercury (5.44 g/cm3), which is consistent with nebular condensation models that predict significant incorporation of light volatile materials (water, for example) in Mars that did not condense where Mercury formed. This compositional difference, together with the larger size of Mars and its consequently greater ability to retain heat, leads one to suspect that the internal structure of Mars may be more like that of Earth than like that of the Moon or Mercury.
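The bulk densities quoted above follow directly from each planet's mass and radius. The short calculation below is a minimal sketch of that arithmetic; the mass and radius figures are standard published values assumed here, since they are not given in the text.

```python
import math

def bulk_density(mass_kg, radius_m):
    """Return mean density in g/cm^3 from total mass and mean radius."""
    volume = (4.0 / 3.0) * math.pi * radius_m ** 3   # volume of a sphere, m^3
    return (mass_kg / volume) / 1000.0               # kg/m^3 -> g/cm^3

# Assumed standard values (not from the text)
mars    = bulk_density(6.417e23, 3.3895e6)   # ~3.93 g/cm^3
mercury = bulk_density(3.301e23, 2.4397e6)   # ~5.43 g/cm^3

print(f"Mars:    {mars:.2f} g/cm^3")
print(f"Mercury: {mercury:.2f} g/cm^3")
```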
Theoretical models of the interior structure predict just such a situation (Figure 6.4). Mariner 9 returned data that demonstrated that there is a significant density contrast between the shallow layers of Mars and its deep interior, including evidence for a core. The radius of the core is probably about 1500 to 2000 km. It is not composed just of iron but probably contains some sulfur or oxygen, which lowers its density. The SNC meteorites are also depleted in elements that have an affinity for sulfur; we speculate that these elements were removed into the sulfur-rich core by differentiation. Several Mars probes have shown that the martian magnetic field is very small. This is somewhat surprising in that Mars is larger than Mercury and presumably warmer, with a convecting core. It also spins much faster on its axis. Both features should create a stronger magnetic dynamo for Mars, and yet Mercury has a magnetic field and Mars does not. Perhaps the martian core is completely solidified now, after eons of cooling. The larger size of the mercurian core, a distinctive composition, or tidal energy inputs may have kept it molten and convecting for much longer than the cores of Mars or the Moon.
The bulk of the mass of Mars probably lies in a thick silicate mantle about 1300 to 1800 km thick. It is very likely that the interior is still actively convecting, which may have created forces that tectonically deformed the crust. Moreover, some models of the thermal evolution of Mars predict that a thin layer within the mantle could still be partially molten and might serve as a source for active volcanism at the surface. This ductile asthenosphere probably occurs at a depth of about 250 km and marks the base of the overlying rigid lithosphere. The lithospheres of the Moon and Mercury are probably much thicker (1000 km and 600 km respectively); and partial melt zones, if they occur at all, are very near the compositional boundary between mantle and core in these other planets.
The martian lithosphere is probably composed of two units: the upper mantle just described, and the crust formed during the primordial differentiation of the planet. From comparisons with the Moon, the crust is thought to be rich in alumino-silicate minerals and very thin compared to the mantle. The thickness of the Moon's crust varies from 60 to 100 km and Earth's varies from 5 to 80 km, with an average thickness in the range 30 to 35 km. Analysis of the gravity field of Mars indicates that its crust averages 25 to 40 km thick and that it is probably thinner in the northern hemisphere. The crust may reach a maximum of around 70 km under the high Tharsis plateau; however, some models suggest a very thin crust or lithosphere beneath Tharsis. Resolution of this problem awaits further investigation.
Internal differentiation of a planet is thought to occur when the planet partially melts from either accretionary heat or radioactive decay early in its history. As rocks melt, gases and vapors are usually released first and, because they are much lighter, rise rapidly to the surface. The relatively large gravity field of Mars has enabled the planet to retain a tenuous outer shell of such gases. A planet's atmosphere may be considered to be its outer layer, an important part of its structure. The martian atmosphere is thin, exerting a pressure of only about 0.01 that of Earth's atmosphere, and composed mainly of carbon dioxide; but it has played a very significant part in the geologic evolution of Mars.
Impact Craters and Basins
Although the photographs from recent space missions have revealed a far greater variety of landforms on the surface of Mars than were seen on the Moon or Mercury, cratering still appears to have been the dominant process in shaping the surface of Mars, especially during the early history of the planet. The craters on Mars have many distinctive features, however, which reflect the gravitational attraction, surface processes, and erosional history of the planet. Many craters are shallow, flat-floored depressions that show evidence of much more erosion and modification by sedimentation than do those on Mercury or the Moon. Rayed craters are rare, but fresh craters do occur and are unique in that some are surrounded by lobate scarps, indicating that the ejecta may have moved like a mudflow after it fell to the surface.
The densely cratered highland surface of Mars was apparently created by the impact of many meteorites during the episodes of cratering that shaped the surfaces of the Moon and Mercury. In terms of size and shape, the craters themselves are nearly the same on all three bodies. Craters smaller than 10 to 15 km in diameter are simple, bowl-shaped depressions with raised rims and smooth walls and floors. Craters larger than this are complex, and usually have central peaks and terraces on the walls. Concentric rings, instead of peaks, are characteristic of most craters larger than 100 km in diameter; the same transition occurs in lunar craters when they exceed 200 km in diameter, possibly due to the effect of the lower surface gravity on the Moon. Hummocky ejecta blankets and fields of secondary craters are less prominent around martian craters, and classic rayed craters like Tycho are rare (Figure 6.5).
Rayed impact craters like those found on the Moon and Mercury are rare on Mars. The dark-rayed crater shown here was photographed by the Mars Orbiter Camera. The crater is about 200 m across. (NASA and Malin Space Science Systems)
The morphology of crater ejecta on Mars is unique. Ejecta blankets of lunar craters are usually blocky near the rim and grade outward to finer particle sizes until the blanket merges imperceptibly with fields of secondary craters and rays. All of these features are consistent with the ballistic emplacement of the ejecta. Many martian craters, in contrast, have ejecta deposits that appear to have flowed over the surface like mudflows, and lack the delicate rays seen around craters on other planetary objects. The continuous ejecta deposit commonly extends from 1.5 to 2 times the radius of the crater---on the Moon this value is only about 0.7 and on Mercury (which has a gravity similar to that of Mars) only about 0.5. Apparently, the ejecta from these martian craters became fluidized in some fashion, allowing it to flow long distances after excavation. These craters are called rampart craters, fluidized craters, or splash craters.
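As a rough illustration of those run-out ratios, the sketch below converts them into absolute distances for a hypothetical crater 10 km in radius. The ratios come from the paragraph above; the crater size is purely an assumption chosen for illustration.

```python
# Continuous-ejecta run-out expressed in crater radii (values from the text);
# the 10-km crater radius is a hypothetical example.
runout_ratio = {"Mars": (1.5, 2.0), "Moon": (0.7, 0.7), "Mercury": (0.5, 0.5)}
radius_km = 10.0

for body, (lo, hi) in runout_ratio.items():
    if lo == hi:
        extent = f"~{lo * radius_km:.0f} km"
    else:
        extent = f"~{lo * radius_km:.0f} to {hi * radius_km:.0f} km"
    print(f"{body}: continuous ejecta extends {extent} beyond a {radius_km:.0f}-km-radius crater")
```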
A spectacular example of a rampart crater is the 18-km-diameter crater named Yuty shown in Figure 6.6. The ejecta consists of several relatively thin layers or sheets of material with tongue-shaped projections. It appears that the debris moved like huge splashes of mud that surged outward when they hit the ground. A prominent ridge was produced at the front of each ejecta lobe. To the right, the ejecta flow overlaps an older degraded remnant of a larger crater and actually moved up and over the wall formed on the eroded ejecta blanket. To the south, a smaller and older crater separates two large lobes on the surface of the ejecta, but was also filled with liquefied debris. Liquid water may have been incorporated into the material as it was thrown out of the crater, creating a rapidly moving mudflow. The collision of the meteorite with the surface may have melted ice in the regolith and mixed the water with the ejecta.
The mudlike ejecta of the crater Yuty is typical of the nature of the ejecta of many martian impact craters. The ejecta consists of a series of overlapping lobes. The smooth, rounded fronts of the lobes and their diversion around the small crater rim suggest that the debris moved close to the ground as a surge of mud. The ejecta deposits have nonetheless overtopped the eroded remnants of an older crater's ejecta (on the right side of the photo). The ejecta was fluidized at the time of impact, probably by the melting of near surface ground ice. Yuty lies in the flooded portion of Chryse Basin. (NASA)
Craters with diameters less than 15 km usually have a single ejecta sheet (Figure 6.7) that only extends out to about one crater radius. The surface of the sheet is typically marked with many concentric ridges and ends abruptly at a ridge or escarpment. Locally, rays or fields of secondaries or hummocks related to the crater extend beyond the edge of the ejecta lobe. The ejecta sheet appears to be thicker and more viscous than those associated with Yuty-type craters. Terraces are present on the walls of many of these craters and are significantly displaced from the crest of the rim.
Craters with fluidized ejecta are common at the surface of Mars and present a variety of forms:
(A) The complex lobate and radial patterns of the inner ejecta deposits are strikingly displayed. A relatively thick platform of ejecta surrounds the rim. A steep outward facing scarp is variably developed around the innermost ejecta. (NASA)
(B) The lobes of lineated debris that extend beyond the scarp were probably emplaced as mudflows across the fractured northern plains of Mars. Piles of debris at the base of the central peak formed as material slumped off the steep walls of the crater. (NASA)
Another type of rampart crater, which has elements of both types described above, is shown in Figure 6.8. Like the two craters just described, it is surrounded by an annulus of thick ejecta with a patterned surface. However, unlike the previous craters, radial ridges are an important component of the pattern. Although an escarpment bounds this lobe, a distinct ridge is usually absent. Beneath and extending to greater distances are sets of thin lobate ejecta sheets similar to those around Yuty. Occasionally this outer set of lobes is missing, but this may be a result of modification of the primary landform.
Rampart craters commonly have large central peaks or pits, which may result from the explosive expansion of ice as it vaporizes during impact. Terraces are absent on the walls of some rampart craters. For example, in Arandas (Figure 6.8) only a small amount of material lies in a heap around a large central peak.
The crater Arandas has features, including its distinctive ejecta pattern and its central peak, that suggest the involvement of ground ice in its development. (NASA)
If rampart craters really are the result of the incorporation of water into ejecta, detailed studies of their distribution on the planet may eventually reveal regional variations in ground ice (or ground water), and the smallest craters that have fluidized ejecta patterns may indicate the depth at which water lies beneath the surface.
Over twenty large, circular basins that are similar to the multiring lunar basins such as Orientale and Imbrium have been discovered on Mars. Some of them are readily apparent on the geologic map (Figure 6.1) and, although they have been significantly modified by both erosion and deposition from processes unique to Mars, they are in many ways similar to their lunar counterparts (Figure 6.9). The largest martian basin is Hellas, which is almost 2000 km across, much larger than either Imbrium or Caloris. Several vaguely defined rings have been identified, but they are highly degraded and not continuous. Much of the basin is covered by plains, so parts of the inner rings may be buried by basalt or sediment. The northern and eastern rims of Hellas are composed of belts of rugged, peaked mountains that resemble the Apennine Mountains that form the rim of the Imbrium basin on the Moon. The high rim has probably been eroded and much of the material deposited in the center of the depression. Other large basins, like Isidis and Argyre, are modified by erosion and partly covered with a sedimentary blanket, but similarities between them and the lunar basins are great enough to support the conclusion that they, too, were formed by the collision of asteroid-size bodies with Mars.
Argyre Basin is a highly degraded multiring basin in the southern hemisphere of Mars. This shaded relief map of Argyre shows the southern and eastern rim of the basin. Argyre is about 1000 km across---about the same size as Imbrium Basin on the Moon. The interior of the basin is filled by a much younger, relatively smooth deposit of probable sedimentary origin. The deposit buries craters and has been deformed into wrinkle ridges on the right. High-standing massifs mark one ring of the basin; an ill-defined outer ring is visible to the right. Galle is a large (200 km in diameter) peak-ring basin on the right side of Argyre. (MOLA Science Team and G. Shirah, NASA GSFC Scientific Visualization Studio)
The early period of intense bombardment and basin excavation is recorded primarily in the rugged southern highlands. The only way to estimate absolute ages on Mars is by comparison with the cratering record of the Moon. If this comparison is valid, the most intense cratering occurred from 4.5 to 3.9 billion years ago. The frequency of large impact basins on Mars (about 1 per 10 million km2) is much less than on the Moon (8 per 10 million km2) or Mercury (14 per 10 million km2). Even assuming that the processes that produced the northern plains destroyed half of the martian basins, Mars would still have a much lower frequency than the Moon. This progression in basin density suggests that the number of large impactors decreased outward from the Sun during the early history of the solar system. However, even for smaller craters this heavily cratered region is not as densely cratered as the Moon, a sure indication that burial (by lavas or sediments) and substantial erosional modification obliterated some of the earliest traces of the planet's bombardment.
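Those basin densities can be turned into approximate global counts by multiplying each by the planet's surface area. The sketch below does so; the densities are the rough figures quoted above, and the planetary radii are assumed standard values not given in the text, so the resulting counts are only order-of-magnitude estimates.

```python
import math

# Basin densities from the text (large basins per 10 million km^2)
density_per_10M_km2 = {"Mars": 1, "Moon": 8, "Mercury": 14}

# Assumed mean radii in km (standard values, not from the text)
radius_km = {"Mars": 3390, "Moon": 1737, "Mercury": 2440}

for body, n in density_per_10M_km2.items():
    area_km2 = 4 * math.pi * radius_km[body] ** 2      # surface area of a sphere
    expected = n * area_km2 / 1e7                       # basins per 10 million km^2
    print(f"{body}: area ~{area_km2 / 1e6:.0f} million km^2 -> roughly {expected:.0f} large basins")
```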
Water on Mars
In contrast to the Moon and Mercury, Mars is not a dry planet. Volatile materials rich in water were probably incorporated in the planet during accretion. During differentiation these volatiles were partially outgassed and accumulated at the surface. Several arguments can be made for outgassing of the equivalent of a global ocean several hundred meters deep. Evidence for these ideas comes from the many channels, which closely resemble dry river beds on Earth. Many enormous channels originate in the southern highlands near the erosional escarpment and flow northward, emptying into the low plains. Other, smaller channels have furrowed much of the old highlands. If any of these channels had been seen on Earth, no one would hesitate to call them dry rivers, but their presence on Mars presents some of the most perplexing questions about the planet. With the present temperature and atmospheric pressure on Mars, water cannot exist for long in a liquid state at the surface; it either evaporates or is frozen. But the presence of river or fluvial channels shows that running water and even huge floods occurred in the past. When did liquid water exist, where did it come from, and where has it gone? These problems are not easy to solve and will continue to be discussed for some time.
Instruments on the Viking spacecraft showed that there are large quantities of water ice and vapor on Mars. Parts of the ice caps are water ice, and at night the martian atmosphere is saturated with water. Water is apparently also present in the pore spaces of rocks and soil. Most water in the martian hydrologic system is locked up as ice in the polar regions, as ground ice in frozen soils, or as ground water (liquid) at a depth of several hundred meters where the temperature is higher. A well-integrated hydrologic system composed of oceans and rivers (or their frozen equivalents---ice caps and glaciers) that transfers water back and forth between its components through evaporation, rainfall, and surface flow does not exist at present on Mars. However, the martian climate must have been very different at some time in its geologic past and a better-developed hydrologic system existed.
Perhaps the best evidence for the former existence of a hydrologic system on Mars comes from the networks of filamentous channels in the southern highlands. These channels or valleys show many characteristics of dendritic river systems on Earth (Figure 6.10). Many erode the rims of ancient craters or the slopes of volcanoes. Individual segments are commonly less than 50 km long and less than 1 km wide, but whole systems, consisting of many branching tributaries, may be up to 1000 km long. In some cases, they resemble stream networks that result from the collection of rainfall as it flowed down slopes. Deposits concentric to the rims of some large impact basins may be the remnants of deltas or alluvial fans where these rivers dropped the sand and gravel they were carrying into lakes or seas, but some valley systems end abruptly, with little evidence of a sedimentary deposit.
Dendritic valley networks are common in the southern highlands of Mars. This area is about 250 km across. Large areas between the channels are not dissected, suggesting to many scientists that the valleys were not formed by the accumulation of precipitation. Instead, some valleys appear to have been produced by sapping and the release of groundwater to the surface. (NASA)
Crater frequencies on these valleys show that most are very old, and probably formed 3.5 to 4.5 billion years ago, during the heavy bombardment of Mars. During this early part of martian history, the atmosphere may have been denser and occasional warm periods, resulting from the greenhouse effect or orbitally induced climatic change, may have allowed rain to fall, collect in small streams, and dissect portions of the ancient terrain. Seas or oceans may have developed in the low northern plains or in deep impact basins. The formation of dendritic valleys seems to have ceased by the end of the heavy bombardment, suggesting that a persistent ocean, if it existed, disappeared by this time.
Although this may be the simplest explanation for the formation of valley networks, it is not consistent with some features of the channels. For example, some channels show strong evidence of structural control, have large areas of undissected plains between tributaries, and occasionally cut across craters (Figure 6.11). Groundwater seepage may explain these features. Channel networks could form by growth upslope from the point where water emanates from the ground (a spring), and would be strongly controlled by local faults and fractures. Whatever the source of the water, the early history of Mars seems to include substantial fluvial erosion and the development of a hydrologic cycle. Water was cycled from the atmosphere to the surface, into the ground, back to the surface, and by evaporation back into the atmosphere. By the close of this era, water must have become locked into the upper martian crust, as ice near the surface and as ground water at greater depths.
Nirgal Vallis shows many of the features associated with sapping. The channel is 800 km long but does not have an extensive drainage basin. Its numerous tributaries are short and confined to a narrow band adjacent to the main channel; many are stranded high on the valley's walls. By terrestrial standards, the valley network has large undissected areas between the channels. All of these features suggest an origin by sapping and indicate that runoff from precipitation was not important. (NASA/JPL/Arizona State University)
Other apparently fluvial features also exist on Mars. The most spectacular of these are the large channel systems, called outflow channels, which cross the eroded escarpment that separates the cratered highlands from the low northern plains. Figure 6.12 is a sketch map of the escarpment, showing the size and locations of major channels. Many arise at the eastern end of Valles Marineris and converge on Chryse Planitia, where Viking 1 touched down.
The major outflow channels of Mars are related to the global escarpment that separates the northern lowlands from the southern highlands.
The system of channels in Tiu Vallis is a good example of these broad outflow channels. Tiu Vallis lies at the eastern end of Valles Marineris; it is well over 600 km long, and, in some places, over 25 km wide (Figure 6.13). The floor of the valley is covered by a network of interlacing channels and streamlined erosional forms that may have been islands. Similar patterns are found in braided streams on Earth (Figure 6.14), which usually form in rivers heavily laden with erosional debris. It would be easy to believe that these pictures of martian channels were of parts of the desert areas of the southwestern United States, were it not for the craters and the huge size of the channel. Many layers of rock, which apparently were eroded by running water, are shown in the walls of the valley. Downstream, the channel broadens, the valley floor becomes flat, and the channel appears to empty into Chryse Planitia. In the area west of the Viking 1 landing site, the floodwaters apparently were dammed behind several wrinkle ridges until they overflowed and cut narrow gaps in the ridges (Figure 6.15). The ejecta blanket of the crater in Figure 6.15 has been completely stripped away in many places. The entire region has been sculpted by fluid flow, as channels twist through the area and cut across one another. Teardrop-shaped mesas that were once islands with craters atop them have been streamlined by fluid flow and occur over broad portions of Chryse Planitia, showing how extensive these floods were (Figure 6.16). The craters formed effective barriers to the flow and as the water streamed around them, teardrop-shaped mesas were eroded into the former plain. Terracing, produced as sediment built up around the islands or as rock layers were stripped away, is visible on the sides of many islands.
Details of the 600-km-long Tiu Valles. Near the bottom of the map, chaotic terrains form the sources of Tiu Valles and other outflow channels. Streamlines mark the flow direction of what must have been a series of vast floods derived from the outbreak of water from subsurface reservoirs. Farther downstream a deep gorge with erosional terraces on its walls developed from the Tiu floods. Evidence of early flow is seen near the top of the map. See Figure 6.17 as well. (NASA and U.S. Geological Survey)
Braided streams on Earth develop a network of interlacing channels that are similar to those seen in Tiu Vallis. (W.K. Hamblin)
Wrinkle ridges and craters dammed and diverted the flow of the floods that spread across Chryse Planitia from Tiu Vallis and other outflow channels. The ridge was eventually overtopped and narrow gaps were eroded. The erosional streamlines demonstrate the diversion of floodwaters. The crater shown is about 10 km across, and its ejecta deposits have been extensively eroded. (NASA)
Teardrop-shaped islands were formed where the floods were diverted around high-standing impact craters. Erosional terraces on the flanks of these streamlined forms demonstrate the progressive stripping of the surrounding surface. The craters are about 10 km across. (NASA/JPL/Arizona State University)
The major channels that flow from the southern highlands across the escarpment to the northern plains are large features---some are over 1000 km long, 100 km wide, and 4 km deep. Such tremendous amounts of erosion must have involved huge quantities of water, but where did the water come from? These outflow channels do not have large, integrated drainage systems with many branches resulting from continuous rain-fed flow. The few tributaries that are present are short and stubby and lack the intricate networks of progressively smaller tributaries that are characteristic of terrestrial streams. Instead, many martian channels have their headwaters in highly fractured regions, or in jumbled masses of large blocks called chaotic terrain. As shown in Figure 6.17, the terrain consists of a complex mosaic of broken slabs and angular blocks with intervening valleys. Chaotic terrain at the eastern end of Valles Marineris is shown in Figure 6.17; the sources of many of the outflow channels, including Tiu Valles, are visible as elongate depressions over 100 km across. Stream channels extend away from the chaotic terrain toward Chryse Planitia.
Chaotic terrain is well developed in the source regions for Tiu Valles and other outflow channels southeast of Chryse Planitia. The channels with lineated floors extend from regions with large polygonal blocks and masses of smaller rounded knobs. Some areas of chaos are not connected to channels. This fantastic landscape was probably developed as ground water catastrophically broke out to the surface. Collapse occurred over some areas. The chaotic blocks were then extensively modified to form knobs by the north-flowing floods that carved the channels. Figure 6.13 shows enlargements of the channels at the center of this map. (NASA and U.S. Geological Survey)
A possible mechanism for the formation of the chaotic terrain and the large channels involves melting of ice and escape of water from within the layers of the martian crust near the surface. As the ice melted, it may have drained rapidly when the meltwater reached a cliff face. The removal of water in the pore spaces could cause collapse of the overlying rock and the flowing water could cut the channels. Heat to melt subsurface ice reservoirs may have come from intrusions of molten rock; this relationship between volcanoes and channels is especially clear on the western slopes of Elysium Mons (Figure 6.18). Other, larger floods may have resulted from the breakout of confined groundwater. For example, high pore pressure could have been achieved by gravity-driven flow of groundwater down the eastern slope of the Tharsis bulge. This latter suggestion for the source of the liquid water is consistent with the apparent absence of volcanic features near the sources of the Chryse (and most other) outflow channels.
Sinuous channels issue from fractures on the flanks of the volcano Elysium Mons. The channels form a series of distributaries at the base of the steep northwestern flank of the Elysium dome. The water that carved these channels was probably released from reservoirs of ground ice beneath the volcano. Heat from the volcano provided the energy for melting. The scene is about 200 km across. (NASA/JPL/ Arizona State University)
The best terrestrial analog of the outflow channels occurs in Washington's Channeled Scabland. The Scabland consists of a complex network of deep channels cut into basaltic lava flows (Figure 6.19). The braided channels are 15 to 30 m deep, with steep walls and abandoned waterfalls, and cover an area of 40,000 km2. Giant ripple marks and huge bars of sand and gravel were created by catastrophic erosion and deposition. This spectacular topography was caused by the failure of a glacial ice dam, which released enormous volumes of water in tremendous floods over the plain. The area was flooded several times as glaciers advanced, formed dams, and then failed.
The Channeled Scabland of Washington consists of large anastomosing channels that cut through the loess and basalt of the Columbia Plateau. The channels were created by catastrophic floods that streamed across the plateau after the failure of a glacial dam that had blocked a river to the east of this satellite image. (U.S. Geological Survey Landsat, NASA WorldWind)
On Mars, the large unconfined outflow channels apparently did not originate from regional precipitation and collection of rain, but represent the products of several huge floods released at high velocities. During the waning stages of floods, streams were unable to transport all of their sediment and deposited it on the floors of channels to form the braided segments, bars, islands, and terraces. The outflow channels postdate the heavily cratered highlands. In fact, the frequency of fresh craters on the floors of the outflow channels indicates that they are young compared to the valley networks as well as features on the Moon or Mercury. Nonetheless, in an absolute sense they are still very old; water has not flowed through them for millions of years. Some have suggested that the channels around the Chryse Basin are 1 to 2.5 billion years old.
These tremendous floods of water from the outflow channels periodically filled and re-filled a martian ocean, Oceanus Borealis, in the low northern plains. The rise of the Tharsis and Elysium bulges and the development of volcanoes were triggers for the draining of water from the crust. The oceans may have been as deep as 2000 m. Each ocean gradually dried up by evaporation, by sublimation of sea ice, and by water seepage into the floor of the basin to form reservoirs of ground water. When the ocean was filled, water vapor was readily lost to the atmosphere by evaporation. This water vapor would effectively trap heat from the Sun in what is called the greenhouse effect, and temporarily raise the surface temperature. Evaporation from the seas and the later precipitation of rain or snow may have fed a few small, young valley networks. Evaporation from the ocean may have stabilized and fed the growth of a huge glacial ice cap at the south pole. This south polar cap was at one time much larger than it is today. Moreover, carbonate minerals form with ease in standing bodies of shallow water. Eventually, much of the atmospheric carbon dioxide was precipitated leaving the planet cold and dry with but a thin atmosphere. It seems that Mars has seen long periods of little change punctuated by brief periods of exceptional erosion, climate change, and carbonate deposition.
There are numerous indications that significant amounts of water as ice or liquid may occur beneath the martian surface. As we saw in a previous section, the ejecta deposits around many fresh martian craters appear to have been shaped by surface flow (Figure 6.6). Apparently the ejecta was wet because the impacts occurred in ground that contained ice or water, and the water-charged debris began to flow when it fell back to the surface. In other areas on Mars, low plateaus are progressively destroyed as ground ice melts or evaporates and the surface collapses (Figures 6.20 and 6.21). Small depressions eventually grow and merge, forming the scalloped edges of the shrinking plateaus. Moreover, some channels appear to have been fed by springs at the head of box canyons. These springs may have worn away the soft layers, removed support for the upper mass, and allowed the canyon to extend up the regional slope (Figure 6.22). This process is known as sapping and may be particularly active along the edges of channels cut previously. Extensive areas of fractured plains occur in the northern hemisphere (Figure 6.23), and may be similar to the patterned ground that occurs in Earth's polar regions as water in the soil freezes and thaws, forming the characteristic polygonal cracks at the surface. Although there are alternative explanations for some of these features, taken together they indicate that ice is present in the near-surface layers of much of the planet and has played a significant role in modifying and shaping martian landforms.
Erosional processes on Mars may include the sublimation of ground ice to create lowlands, which grow at the expense of plateaus with irregular margins. Sublimation or melting occurs at cliff faces. This process is similar to karst processes and in Earth's polar regions this terrain has been called thermokarst. Landscapes that evolve in this fashion must have volatile-rich surface layers. (NASA)
The northern flanks of Elysium Mons, near Hecates Tholus, are being modified by the collapse of lava-capped polygons. This type of chaos occurs in alcoves on cliff margins, presumably as ice sublimates or melts, allowing the collapse of overlying material. The knobby plains at the top of the photo are littered with remnants of a formerly more extensive upland, which retreated southward by this process. The largest crater shown in this mosaic is about 15 km across. (NASA)
Spring sapping at the base of cliffs in the box canyon terminations of Nirgal Vallis is probably responsible for its headward growth. Large areas between the tributaries are not dissected by valleys, and the tributaries terminate in amphitheaters or alcoves. Old wrinkle ridges cross the plain. This scene is 80 km across. (NASA/JPL/ Arizona State University)
Polygonally fractured plains cover vast areas of the northern lowlands of Mars. The fractures may be produced by freeze-thaw processes or they may have formed as desiccation or shrinkage cracks analogous to, but much larger than, those that develop when thin deposits of wet sediment dry. This scene is about 50 km across; individual troughs are about 1 km across. (NASA)
Polar Regions: Ice and Wind
The unique geologic character of Mars is accentuated by its polar ice caps, which are large enough to be visible from Earth. They are of special interest because they are centers of present-day geologic activity and are not relics produced billions of years ago. The polar caps are also a major reservoir of the water on Mars. The structure, rock types, and terrain of the north and south poles are very similar, and in some respects, resemble Earth's ice caps.
Cratered plains surround both polar areas and apparently extend beneath the ice. In the southern hemisphere, the plains are pitted or etched by irregular depressions. The pits were probably hollowed out by the wind. Extensive wind activity in the northern hemisphere has created a vast circumpolar dune field (Figure 6.24). Dunes are present but less common at the south pole. No young craters have been found in the area covered by dunes, because the actively shifting sands have buried or obscured them.
The north polar ice cap of Mars is surrounded by a vast field of dark sand dunes. This sand sea is larger than any found on Earth.
(A) The perennial ice cap of the north pole is virtually surrounded by a sea of sand as shown on this map. Transverse dunes and barchan dunes are the most common types. The spiral pattern in the ice cap is defined by exposures of layered deposits beneath the perennial ice cap. (NASA)
(B) A sand sea consists of a variety of dune types. Transverse dunes, shown here, are parallel to one another, with only minor branching or merging. The white patches are ice covered but sand free. The average distance between dune crests is about 500 m. The outlines of several craters buried by the sand are still visible; no craters younger than the dunes are visible. (NASA)
Closer to the poles, a thick series of layered deposits buries the cratered plains. These deposits are a type of sedimentary blanket, cut in many places by deep, spiraling channels that reveal its internal stratification (Figure 6.25). Elsewhere, it is smooth and nearly featureless except where dunes may lap over its margins. The surface is very young and lacks fresh impact craters. The layers are remarkable for their uniformity and continuity and may contain a record of physical processes on Mars in much the same way as sedimentary rocks preserve the geologic history of Earth. As many as 50 beds have been observed in a single slope; their total thickness may be more than a kilometer.
Layered deposits of the polar regions consist of alternating light and dark beds of sedimentary rock deposited by ice and wind.
(A) Layered deposits in the south polar region form a thick, craterless blanket burying an older cratered surface at the top. The margin of the deposits is obvious in this photo, which is 100 km across. The older surface displays the pitted or etched appearance of much of the surrounding terrain. (THEMIS 20031210A. NASA/JPL/Arizona State University)
(B) The delicate layering of the north polar deposits is exposed in cliffs cut through the terrain. Dark dunes of sand partially bury the surrounding lowlands. This scene is 100 km across. (NASA)
These sedimentary deposits consist of a series of alternating light and dark layers that are essentially horizontal and are thought to have originated from the combined result of ice and wind activity. Possibly the brighter layers are water frozen out of the atmosphere and the darker stripes are interlayered dust and sand blown in by the wind. If these laminated deposits have such a composition, they may contain tremendous amounts of water, which at one time must have resided in the atmosphere or elsewhere on the surface.
The rotational axis of Mars is presently inclined almost exactly the same as Earth's. As a consequence, Mars experiences seasonal changes. The white polar caps grow in the winter as frost and ice collect on the surface and shrink as the frosty hood sublimates during the summer (Figure 6.26), leaving only a smaller residual ice cap. This temporary frost layer is probably made of dry ice (frozen carbon dioxide) and may be as much as 50 cm thick during the winter. An important observation from the Viking orbiters showed that the residual or permanent cap in the north consists of dirty water ice and is not composed solely of frozen carbon dioxide as once thought. Carbon dioxide ice may dominate the south polar cap, but at the low temperatures encountered there it is difficult to detect water ice, even if it is present. However, the residual ice caps are thin, possibly only a few meters thick and probably less than 1 km. Movement of glacial ice cannot occur until the weight of the ice exceeds its strength; to initiate movement in the low gravitational field of Mars requires greater thicknesses of ices than would be needed on Earth. Thus, at present, the ice caps of Mars may lack the great erosive power of Earth's continental glaciers. Permanent ice covers most of the layered deposits and also occurs as isolated patches in craters or surrounded by dunes farther from the poles. One of the most interesting discoveries is that both polar caps have spiral patterns of dark, frost-free valleys that extend through the residual ice cap and layered terrains. They could be erosional channels formed as the wind sweeps across this bleak, craterless expanse (Figure 6.27). Alternatively, the pinwheel pattern may reflect selective deposition and sublimation of ice, determined by the atmosphere's circulation.
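The statement that thicker ice is needed on Mars to initiate glacial flow follows from the basal shear stress, which scales with the weight of the overlying ice. The sketch below is a minimal illustration of that scaling; the critical stress (about 100 kPa, a value typical of terrestrial glaciers), the 1-degree surface slope, and the use of water-ice density are all assumptions made for illustration, not figures from the text.

```python
import math

def thickness_to_flow(gravity, rho_ice=917.0, tau_crit=1.0e5, slope_deg=1.0):
    """Ice thickness (m) at which basal shear stress tau = rho * g * h * sin(slope)
    reaches an assumed critical value tau_crit (Pa)."""
    return tau_crit / (rho_ice * gravity * math.sin(math.radians(slope_deg)))

h_earth = thickness_to_flow(gravity=9.81)   # roughly 640 m under these assumptions
h_mars  = thickness_to_flow(gravity=3.71)   # roughly 1700 m under the same assumptions

print(f"Earth: ~{h_earth:.0f} m of ice needed; Mars: ~{h_mars:.0f} m")
print(f"Ratio (Mars/Earth) = {h_mars / h_earth:.1f}  # simply g_Earth / g_Mars")
```

Whatever threshold stress is chosen, the required thickness on Mars comes out larger than on Earth by the ratio of the two surface gravities, about a factor of 2.6.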
Channels in the polar ice caps change with the seasons as the ice caps expand and contract over the polar regions. These images were taken by the Hubble Space Telescope.
A spiral pattern of valleys cuts the residual ice cap and the layered deposits of the South Pole. The south pole is surrounded by layered sheets of reddish dust and ice. (Malin Space Science Systems and NASA WorldWind)
Although the present ice sheets appear to be too thin to flow as glaciers, there is some evidence that thick glaciers formed repeatedly. Glacial ice transports sediment by dragging it along the base of the ice. The sediment may be piled up into long ridges, called moraines, at the front of the glacier. When the ice sheet retreats, these ridges are left as evidence of its former extent. Sinuous ridges on Mars have been interpreted to be glacial moraines that formed when an ancient glacier scoured the landscape. Although their very existence is still extremely controversial, such glaciers may have formed contemporaneously with the seas created by the outflow channels.
Whatever the exact nature and timing of water or ice activity on Mars, it is clear from the presence of stream channels, ground ice, and polar ice caps that water is and has been available to shape the surface. Water resides on Mars at present as vapor in the atmosphere, as ice at the poles, and probably as groundwater or ice in rocks near the surface.
The Martian Hydrologic System
The circulation of water on Earth's surface takes place within a huge hydrologic system that operates continually. The oceans form a vast reservoir of water, which evaporates and moves with the atmosphere. It is then precipitated as rain and snow and constantly bathes the surface of the land. Water may move as surface runoff in streams, as groundwater through pore spaces in rocks, in glacial ice, and eventually as water vapor in the atmosphere before it returns to the ocean and is recirculated. Heat from the Sun drives this system and determines the physical state of the water.
Apparently, Mars developed portions of a similar hydrologic system during part of its history. We have seen that Mars appears to have a differentiated interior that resulted from melting and redistribution of the elements within it. Interior melting has other important consequences. When rocks with appropriate compositions are melted, some of the elements incorporated in the rock---such as oxygen, carbon, hydrogen, argon, nitrogen, and neon (all volatile elements)---are released and form other compounds that are stable under the new conditions. Some of these compounds ultimately form light gases (such as water and carbon dioxide) that easily rise with the molten rock and escape to the surface to form a secondary atmosphere. If the gravity of the planet is sufficient to retain the gas, and if temperatures are high enough, water will remain in a gaseous state; at lower temperatures it may become a liquid and collect in rivers, lakes, and oceans, forming a hydrosphere and initiating circulation in a hydrologic system that shapes surface features, deposits new rock bodies, and has important implications for the evolution of life. If liquid water was stable at the surface of Mars, it would have collected in craters and impact basins. However, the composition of the early martian atmosphere probably did not remain in its primordial state. Chemical reactions of the atmospheric gases with the surface materials modified the composition of the atmosphere and extracted carbon dioxide; additions to the bulk of the atmosphere occurred from continued melting and differentiation in the interior. Some light gases slowly but inexorably escaped into space, while others, such as water vapor and carbon dioxide, froze out of the atmosphere when the temperatures became low enough and formed polar ice caps and ground ice.
Estimates of the total amount of water that may have been released from the martian interior range from a globe-encircling layer 10 m thick to one a kilometer or more in thickness; a reasonable estimate is probably 50 to 200 m. (Earth outgassed enough water to cover its surface, if it were a perfect sphere, to a depth of about 3 km.) In any case, a large quantity of water has been released from the interiors of both Earth and Mars. From the features we have discussed in this section, it is at least conceivable that during the very early history of Mars, a relatively well-integrated hydrologic system existed. The small channels in the ancient highlands may record an epoch of rainfall and collection in rivers and possibly small seas or lakes. If so, Mars must have been warmer then and had a thicker atmosphere that made it possible for liquid water to flow across the surface. Since that time much of the globe has been resurfaced, erasing most of the evidence of this erosional epoch and any evidence for an ancient ocean. About 4 to 3.5 billion years ago, the atmosphere of Mars must have cooled and ice began to form---first in the polar areas, forming ice caps, and later at lower latitudes. Water-saturated ground froze in some places and created ground ice deposits. Later, liquid water, from reservoirs beneath the surface, may have been released when the huge outflow channels and chaotic terrain were formed on the flanks of the Tharsis and Elysium domes. Exposures of ice in cliffs or scarps may have melted or sublimated, forming the box-canyon streams and scalloped plateaus. Meteorite impact, though greatly reduced in this later period, catastrophically melted pockets of ice and incorporated the water into ejecta that flowed like mud as it returned to the surface. During this later, colder episode, the hydrologic system, with its host of components and processes, lost much of its recycling capability (Figure 6.28). Presently, a rudimentary circulation system transfers small quantities of water and carbon dioxide from pole to pole with seasonal changes.
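Those global-equivalent layer depths can be converted into volumes by multiplying each depth by the planet's surface area. The sketch below does this for the 50 to 200 m martian estimate and the roughly 3 km figure quoted for Earth; the planetary radii are assumed standard values not given in the text.

```python
import math

def global_layer_volume_km3(radius_km, depth_m):
    """Volume (km^3) of a water layer of uniform depth spread over a sphere."""
    area_km2 = 4 * math.pi * radius_km ** 2
    return area_km2 * (depth_m / 1000.0)

# Assumed mean radii (km): Mars ~3390, Earth ~6371
for depth in (50, 200):
    v = global_layer_volume_km3(3390, depth)
    print(f"Mars, {depth} m layer: ~{v:.1e} km^3 of water")

print(f"Earth, 3000 m layer: ~{global_layer_volume_km3(6371, 3000):.1e} km^3 of water")
```

Under these assumptions the martian estimates correspond to roughly 10^7 km^3 of water, about two orders of magnitude less than the volume implied by Earth's 3 km global layer.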
The martian hydrologic system involves the polar ice caps and water stored as a liquid or as ice beneath the surface. Occasionally water released to the surface carved huge outflow channels. The present hydrologic system consists of seasonal cycling of carbon dioxide and water frosts between the polar regions and the atmosphere. Ground ice is only rarely mobilized by meteorite impact. But even today small amounts of liquid water may seep from springs on the walls of craters.
The effects of flowing fluids (gases and liquids) at the surfaces of planets are dramatic. Compare, for example, the surface of the airless, waterless Moon with that of Mars. Having considered the role of water on Mars, we now turn our attention to the role of the fluids in the atmosphere.
When Mariner 9 entered orbit around Mars in the fall of 1971, a planetwide dust storm was raging, completely obscuring the surface of the entire globe. The great storm had begun some two months earlier and was observed with Earth-based telescopes as a yellowish cloud that rapidly expanded over the entire planet. The magnitude of such a storm is hard to imagine. Winds comparable to those in a strong hurricane on Earth raged continuously for several months, and dust was blown many kilometers into the air, blanketing the entire globe. Similar dust storms developed during the Viking missions.
The pictures taken by Mariner 9 and the Viking spacecraft revealed that eolian activity has played an important and active role in shaping the surface features of Mars. A variety of wind-shaped landforms have been observed, including (1) groups of long, parallel streaks, (2) dune fields, (3) large stretches of dust and sand mantled terrain, and (4) linear grooves and ridges. The features collectively indicate that wind activity on Mars is quite likely the dominant surface process presently in action, constantly moving and redepositing loose surface material.
The most obvious eolian features on Mars are the systems of parallel plumes or streaks that originate at craters, ridges, or cliffs, and extend hundreds of kilometers across the surface of the planet (Figure 6.29). These features are the result of wind erosion and deposition. Erosion may occur behind some obstacles that create a turbulent flow of the wind. In this case, fine particles may be swept away. In other places a crater rim may be associated with a wind shadow, a pocket of quiet air downwind from the crater, where sand or dust can accumulate. Both light and dark streaks occur. Pictures taken of the same area at different times show that the shapes of some markings are always changing.
Bright wind streaks with splotchy, irregular dark halos form behind obstructions to the wind and reveal the principal wind directions, in this case from right to left. Large and small craters on this lava plain southwest of Tharsis were responsible for these streaks. This frame is 230 km across. (NASA/Malin Space Science Systems PIA04693)
Dunes are great moving piles of sand. Individual grains are moved by the drag of the wind blowing across them. Dunes will not form from dust-sized particles, but require larger grains typical of what we call sand. Dune fields are present on Mars, but are not as conspicuous as the streaks. A field of such dunes covers an area of more than 200 km2 (Figure 6.30) in the Hellespontus region. Well-developed dune forms are widespread in the polar regions; the circumpolar dune field forms a sand sea larger than any on Earth (Figure 6.24). The identification of dunes is important because it proves that martian winds are strong enough to lift loose particles and transport them in spite of the tenuous atmosphere. When these airborne grains hit other surfaces, erosion can occur.
(a) Small dune fields have formed on the floors of several craters in this region of the southern highlands, west of Hellas Planitia. The largest field of dark dunes covers an area of about 60 by 30 km. (PIA03206 NASA/JPL)
(b) Dunes are accumulations of wind-blown sand. The high dunes were formed by winds blowing from left to right in this image. Small dunes oriented perpendicular to the large dunes have formed between the large dunes. (NASA/Malin Space Science Systems MOC1901429)
The surface pictures from Viking landers further show the importance of wind activity. Small drifts, with features remarkably similar to many seen in terrestrial deserts, are seen in the spectacular picture of the martian landscape in Figure 6.2. The blocky, angular rocks that mantle the area resemble "desert pavements" produced on Earth where the wind has selectively transported fine sediments, leaving behind a coarse rubble. Drifts of sand have collected behind some rocks where the wind was diverted and the particles it was carrying dropped.
Other evidence for rather large amounts of eolian deposition are the layered deposits of the polar regions, which cover thousands of square kilometers. A thin layer of wind-blown debris may mantle much of the terrain in the high latitudes and subdue the topography beneath it.
Deposits of Wind-Blown Sand and Dust
Large tracts of the martian highlands show strong evidence that a thick eolian deposit has mantled and subdued the topography of an older surface beneath it (Figure 6.31). The deposits appear to have accumulated in craters and on the surrounding intercrater plains after the intense meteoritic bombardment. The deposits were subsequently stripped away in part, leaving mesas with steep cliffs to mark their former extent. The mantlelike nature of the deposits and their easily eroded character may indicate that they were deposited as extensive sheets of loess during the middle history of Mars. Loess forms when dust-sized particles suspended by the wind settle uniformly over a landscape. Similar layered deposits exist in the polar regions (Figure 6.25), where they cover thousands of square kilometers. The polar deposits appear to be younger and may be interlayered with the deposits of ice or frost.
A partially eroded blanket of loess (wind-deposited dust) may explain the appearance of this part of the heavily cratered terrain. An easily eroded deposit fills craters and adjacent intercrater plains. Deposition and erosion of this sedimentary cloak were probably caused by eolian processes. (NASA)
Eolian Erosional Features
The effects of eolian processes are seen on many of the martian craters. Although they resemble lunar craters in general appearance, they have been distinctly modified by processes other than impact and in this sense they are degraded. Careful study of Figure 6.30 shows that some crater rims appear to be abraded or worn down. There is no sharp upturned lip at the crater margin, as is typical of fresh craters on the Moon, and the rays and ejecta blankets have been eroded or buried. Some investigators have suggested that the wind, in concert with other processes, has subdued the expressions of these craters. Elsewhere, wind erosion is strongly suggested by the pronounced alignment of long, narrow ridges or spines that project from low plateaus (Figure 6.32). The ridge crests are sharp and keel-like, and the ends taper sharply. Similar features, called yardangs, occur in desert regions on Earth; their shape and alignments show that they were formed by wind erosion. It appears that large regions near the equator of Mars were stripped of soft surface layers by this process. Etched terrain, which is particularly common near the south pole, is likewise a product of wind erosion, though of a different and less intense type (Figure 6.33). The basins appear to be the result of deflation. Some believe that the pinwheel shape of the ice caps is due to erosion by outward-spiraling polar winds.
Yardangs are linear ridges formed by eolian erosion of the intervening valleys. Yardangs emanate from an incompletely stripped mesa consisting of easily eroded material. The yardangs formed in a deposit that is younger than lava flows that surround Olympus Mons. (European Space Agency/DLR/FU Berlin, G. Neukum)
The pitted or etched appearance of this south polar terrain is probably the result of eolian deflation. Vast regions of similar landforms surround the south pole. This scene is about 200 km across. (NASA, Malin Space Science Systems)
The Martian Eolian Regime
The surface features described above, together with studies of the martian atmosphere, have permitted scientists to draw some conclusions about many parts of the eolian system on Mars and compare it to that of Earth and other planets. Even though a variety of cloud formations have been observed on Mars, the martian atmosphere is still very thin (about 1/100 the pressure of Earth's) and cold. The pressure exerted by the atmosphere at the surface of Mars corresponds to the pressure found at heights of 30 to 40 km above sea level on Earth. Several important consequences result from these differences. For example, the velocity of the wind necessary to start the movement of grains is estimated to be 10 times greater on Mars than on Earth, but once the wind lofts these grains they could have a hundred times more kinetic energy (kinetic energy = 1/2 × mass × velocity²). Mariner 9 data indicate that dust storms in the atmosphere move at velocities in excess of 200 km/hr, and gusts of 500 to 600 km/hr (half the speed of sound in Earth's atmosphere) do not seem unreasonable, far exceeding wind velocities on Earth. The Viking landers measured wind velocities up to 30 m/sec. Moreover, the thin atmosphere on Mars produces practically no cushioning effect as grains collide. These two factors permit very small particles to act as very effective instruments of erosion. These considerations suggest high erosion rates. In addition, the relatively weak gravitational field plus the thin atmosphere on Mars combine to permit dust and sand to reach heights three to four times greater than on Earth, increasing the opportunity to form huge loess mantles.
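To make the scaling in the preceding paragraph concrete, the short Python sketch below works through the arithmetic. The tenfold threshold-speed factor and the quoted storm speeds come from the text; the grain mass and the absolute threshold speed on Earth are placeholder values chosen only for illustration.

```python
# Illustrative check of the wind-speed and kinetic-energy scaling described above.
# The 10x threshold-speed factor and the storm speeds are taken from the text;
# the grain mass and Earth threshold speed below are assumed placeholder values.

def kinetic_energy(mass_kg, speed_m_s):
    """Kinetic energy of a grain: KE = 1/2 * m * v**2 (joules)."""
    return 0.5 * mass_kg * speed_m_s ** 2

grain_mass = 1.0e-9                         # ~1-microgram grain (assumed)
v_threshold_earth = 5.0                     # m/s, placeholder threshold speed on Earth
v_threshold_mars = 10 * v_threshold_earth   # text: roughly ten times greater on Mars

ratio = kinetic_energy(grain_mass, v_threshold_mars) / kinetic_energy(grain_mass, v_threshold_earth)
print(ratio)                 # 100.0 -- a hundredfold more kinetic energy per grain

# Converting the quoted storm speeds to SI units:
print(200 * 1000 / 3600)     # ~56 m/s for dust-storm winds
print(600 * 1000 / 3600)     # ~167 m/s for the strongest gusts
```

Because kinetic energy grows with the square of speed, any tenfold increase in the threshold speed implies a hundredfold increase in the energy of a moving grain, whatever placeholder values are chosen.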
Fine, unconsolidated material similar to the lunar regolith, with glass beads and rock fragments, is undoubtedly produced from the impact of meteorites---so loose material is readily available to be picked up and transported by the strong winds. In addition, sand and dust may be produced by volcanic eruptions, mass wasting, and in the stream channels, as well as by continued wind erosion. Ice particles may also be moved by the wind.
Wind action may have continued on Mars without significant interruptions since the origin of its atmosphere. Only rarely have volcanic activity, tectonics, or running water interrupted the processes of the ceaseless winds. On Earth, plate tectonics continually creates and destroys the crust, and flowing water has been the dominant surface process from the very beginning. The products of wind action on Earth have been largely restricted to the desert regions and have often been masked by or intimately mixed with the effects of the more universal processes of running water. This has not been the case on Mars; the action of running water has been very limited in scope and duration. All of these factors collectively suggest that eolian processes may have been more intense and proceeded faster on Mars than on Earth. Indeed, wind action is considered by many geologists to be the principal cause of surface changes, shaping martian landforms as universally as running water affects the surface of Earth. It is perhaps amazing that any evidence of the very early history of Mars is preserved at all.
Thus, a major feature that distinguishes Mars from Mercury or the Moon is the existence of a martian atmosphere. This tenuous shell of carbon dioxide and nitrogen, which may at first seem insignificant from a geologic point of view, has markedly changed the surface features of Mars. Even so, the present atmosphere is probably only a fraction of the total amount of volatile gases that were outgassed from the planet. Several estimates based on the present composition of the atmosphere suggest that a layer of water 50 to 200 m thick spread evenly over the planet may have been outgassed; an amount of carbon dioxide equivalent to 1 to 3 bars (1 bar equals the pressure exerted by Earth's atmosphere at sea level) may also have been expelled during the differentiation of the interior. Much of the water released from the interior is now locked up in the polar caps and within the regolith. Even though the formation of carbonates (an important sink for atmospheric volatiles) at the surface is much less efficient than on Earth, the process may have removed some of the carbon dioxide from the martian atmosphere. Carbon dioxide is also locked in the regolith and adsorbed onto mineral surfaces. Some nitrogen may have escaped from the top of the atmosphere into space. These processes and the lack of a biologic cycle, which both forms free oxygen (O2) and helps tie up carbon dioxide in solids, have created a martian atmosphere that is similar in composition to that of Venus but much thinner---95 percent carbon dioxide, 3 percent nitrogen, and 1.6 percent argon with only traces of water vapor, oxygen, and other gases.
The Viking orbiter photography, with its great detail, provides dramatic evidence that mass movement, the gravity-driven downhill movement of unconsolidated material, is an important process in the evolution of the landscape of Mars and is especially important in the enlargement of the canyons and development of chaotic terrain. The variety of mass-movement features on Mars includes those produced by the rapid and devastating effects of tremendous landslides as well as features produced by slow, downslope creep of loose material.
Figure 6.34 shows a section of Valles Marineris and illustrates the role that mass movement has played in slope retreat and enlargement of the canyon. Here the canyon is about 5 km deep, and several massive landslides are visible. Resistant rock layers form a steep cliff at the top of the plateau and appear to have broken into a series of relatively coherent slump blocks, whereas the lower slopes appear to have flowed away in a surge of moving debris. Some of these tongue-shaped deposits extend across the canyon floor, at least 50 km from their point of origin.
Landslides in Valles Marineris are shown in this photo. The landslide on the far wall has two components---an upper blocky portion, which is probably disrupted cap rock, and a finely striated lobate extension, which is probably debris derived from the old cratered terrain exposed in the lower canyon walls. Similar lineations are found on terrestrial landslides and show the direction of movement. This part of Valles Marineris is about 5 km deep. (NASA)
Figure 6.35 provides a broader view of Valles Marineris and the gigantic landslides along the walls. The canyon was originally formed by subsidence along a series of parallel faults extending from the crest of the Tharsis upwarp. Many small grabens can be seen on the flat plateau surface; earthquakes along these faults may have triggered the great slumps that widen the walls of the major canyons.
Fault-bounded troughs of Valles Marineris are extensively modified by huge landslides. Most of these landslides lack the lineated debris lobes shown in Figure 6.34 because the canyon is much narrower here. (NASA)
Mass movement is, of course, of prime importance in the development of the chaotic terrain discussed in a previous section. As ground ice melts or as aquifers break through to the surface, the support for the overlying rock is removed. It may then collapse and form huge, muddy debris flows and slump blocks on the steep walls (Figure 6.17).
Similar mass-movement processes probably sculpted the fretted and knobby terrain skirting the global escarpment (Figure 6.36). The fretted terrain consists of a maze of flat-topped buttes and linear valleys, which are probably controlled by fractures in the crust. This photo shows how the escarpment retreats---material is shed off the cliffs by mass movement and is eventually removed by fluvial or eolian processes or buried by later lava flows or other debris flows. Melting or sublimation of ice in these materials may substantially contribute to this mode of erosional evolution. Eventually only small knobs or hills are left to mark the former extent of the plateau.
Fretted terrain, made up of isolated high-standing mesas separated by intervening troughs and plains, occurs along the escarpment separating the two distinctive terrains on Mars. Piles of debris slumped off the mesas and accumulated at the cliff bases. Eolian processes may eventually remove some of this material, but sublimation of ice may also contribute to the retreat of these scarps. (NASA and U.S. Geological Survey)
The detailed photograph shown in Figure 6.37 reveals another type of slow mass movement in the fretted terrain. On the floors of these trenchlike valleys surface materials appear to have moved or flowed slowly downhill, possibly aided by freezing and thawing of ice in the spaces between fragments. In this area mass movement causes slope retreat and also transports the material downslope, like a valley glacier on Earth.
Mass movement off the walls of this portion of the fretted terrain has created a deposit with lineated surfaces. There may have been some slow, glacierlike, down-valley movement of debris-covered ice parallel to these lineations. (NASA)
Although landforms produced by the direct, gravity-driven movement of slope materials have been observed on the steep walls of lunar craters, the process has not proceeded as dramatically as on Mars. The role of melting ground ice on Mars is probably the most important reason for the great difference.
Martian Volcanic Features
The discovery of huge volcanoes in the northern hemisphere of Mars was one of the most significant results of the flight of Mariner 9 in 1971. Previous Mariner missions (4, 6, and 7) sent back pictures of the southern hemisphere, which, for the most part, is densely cratered and superficially resembles the lunar highlands. Therefore, the existence of enormous volcanoes on Mars was entirely unexpected. Yet, when the rest of the planet was photographed by Mariner 9, it became clear that prominent volcanic features occur over a significant part of the planet, and their lack of superposed craters suggests that Mars continued to experience volcanism relatively late in its history (during the last billion years). Later, detailed studies revealed that there are many older volcanoes as well, although they are highly eroded and difficult to recognize. It appears that Mars has had a long and interesting volcanic history.
Three major types of volcanic features, including several types not found on Mercury or the Moon, are found on Mars. The most striking are the giant shields concentrated primarily in the northern hemisphere. There are also large volcanic structures with very low profiles and large central craters called patera (saucers). The third type of volcanic feature, less spectacular but nonetheless very significant, includes the volcanic plains, which form much of the sparsely cratered regions in the northern hemisphere and are similar to the plains of Mercury and to the lunar maria.
The most spectacular volcanic features on Mars are the enormous shield volcanoes, which have no analog on the Moon or Mercury. Most of the shield volcanoes occur in the Tharsis region, where twelve large and several smaller volcanoes developed (Figure 6.38). The Elysium region is smaller and has only three large volcanoes (Figure 6.39). The largest volcano is at least twice the size of the largest volcano on Earth. The most dramatic is Olympus Mons, which lies west of the Tharsis ridge. It is about 550 km in diameter (five times larger than the largest on Earth) and rises 25 km (82,000 ft) above the surrounding plain. This is half again the distance from the depths of the Mariana Trench to the top of Mt. Everest, the deepest and highest points on Earth. The mosaic in Figure 6.40 shows that Olympus Mons has a complex crater at its summit called a caldera. These large calderas are not vents but depressions, produced when the peak of the cone collapsed and subsided along circular faults as magma was withdrawn from shallow reservoirs in the volcano. High-resolution photography of the flanks of Olympus Mons shows many of the individual flows that make up the cone (Figure 6.41). There are many long ridges radial to the central caldera, narrow channels that resemble collapsed lava tubes or lava channels, and fingerlike flows. All of these flow types are found on terrestrial shield volcanoes, suggesting that the physical nature of the lava that made them is similar to terrestrial basalt.
The Tharsis region of Mars is dominated by a dozen relatively young volcanoes and vast lava-covered plains. These features developed around a huge crustal bulge. Long fractures and graben systems radiate away from the uplift and cut older lava plains as well as the heavily cratered southern highlands. (NASA and U.S. Geological Survey)
The Elysium region contains only three large volcanoes, but like Tharsis it sits atop a crustal swell and is associated with a long fracture system. Much of the surrounding plain is covered by lava. (NASA and U.S. Geological Survey)
Olympus Mons is the largest of the Tharsis volcanoes; it rises 25 km above the surrounding plains. A complex collapse caldera marks its summit. This map of the 550-km-diameter volcano shows the radial texture of the flanks created by lava flows. The low semiconcentric ridges may mark different stages in the growth of the volcano. The erosional scarp (up to 10 km high) is buried on the northeast by younger lavas. A large aureole of strongly ridged lobes surrounds the base of the volcano. The origin of the lobes is controversial---suggestions range from accumulations of pyroclastic flows to debris sloughed off the volcano to form the scarp. (NASA and U.S. Geological Survey)
Countless lava flows form the flanks of Olympus Mons. Many have narrow channels. The lavas have cascaded down the steep, faulted margin of the volcano. Near the bottom of the image, a leveed lava channel flowed into a trough, but was then buried by a thick layer of dust. (PIA01890: NASA/JPL/Malin Space Science Systems)
Portions of the base of Olympus Mons form a steep cliff several kilometers high, which is receding by slumping and gully erosion. Originally the volcanic cone probably graded smoothly into the surrounding plain. In places the scarp has been flooded by younger lavas, reestablishing a smooth profile (Figure 6.40).
The other large shield volcanoes in the Tharsis and Elysium areas resemble Olympus Mons but differ in size and detail. All are relatively young features, as is indicated by the remarkably fresh surface features of the lava flows, the sharp rims of their summit calderas, and the lack of impact craters. Olympus Mons is probably the youngest of the large shield volcanoes; according to some crater counts, it may be as young as 200 million years old and would have been built while reptiles dominated life on Earth.
The large shield volcanoes of the Tharsis and Elysium regions appear to be associated with large domes in the lithosphere; both the domes and the volcanoes may be the result of thermal processes within the deep interior, which bulged and fractured the lithosphere and produced the magmas.
In addition to the large shield volcanoes, a number of smaller volcanic shields are scattered across the planet. They are typically slightly steeper than the large shields, but like them they commonly have summit calderas and radiating channels (Figure 6.42). They may simply be small shield volcanoes produced over a limited magma source, but some geologists believe that, because they have such large calderas, they are the summits of nearly buried shields.
Small Tharsis volcanoes have steep sides and relatively large calderas. They have been at least partially buried by younger flows.
(A) Ulysses Patera has a caldera that is 55 km across. The volcano has two sizable impact craters on its flanks and is cut by graben radial to the Tharsis uplift. (NASA)
(B) Biblis Patera has a caldera 50 km in diameter from which several large lava channels issue. These channels appear to have developed by thermal erosion, caused by hot lavas flowing down the flanks of the volcano. Smaller channels are also visible. Young grabens cut the lava-covered plains and the shield. (NASA)
Possibly the largest volcanic structure on Mars is not a shield volcano at all, but a very low-profile feature with a large caldera, or vent area, called Alba Patera (Figure 6.43). It may be over 1500 km in diameter. The central portion of the structure is surrounded by a circular set of fractures along which the volcano may have subsided somewhat. Extremely long lava flows that appear to be as young as those in Tharsis emanate from its center and extend for more than 1000 km away from their vents. Small hills around the margins of this huge structure may be cinder cones or lava domes. In short, it appears that Alba Patera was formed by a complex series of volcanic and tectonic events. No similar volcanic structures have been found on any of the other planets.
The volcano Alba Patera is surrounded by a large set of arcuate grabens that partially encircle the summit region of this low volcanic edifice and cut the youngest lavas. The caldera is located at the center of the map; lavas that erupted from this vent have partially filled a larger, older caldera just to the west (left). See Figure 6.38 for a regional perspective. (NASA and U.S. Geological Survey)
Other patera volcanoes occur in the southern hemisphere and appear to be very old (Figure 6.44). Tyrrhena Patera (Figure 6.45), northeast of Hellas Basin, is extremely degraded and appears to be surrounded by younger plains. By crater counts, Tyrrhena Patera is estimated to be over 3 billion years old. Thus, eruptions from central volcanoes, shields, and pateras have apparently extended over a large span of martian history. Some pateras may have experienced an early phase of explosive volcanism that emplaced thick sheets of volcanic ash. The explosions may have been generated as hot magma rose through the water-saturated regolith of the ancient highlands; the tremendous expansion of water as it passes from the liquid to the gaseous state may trigger such eruptions. On Tyrrhena these initial explosions were followed by quieter eruptions of lavas, which appear to have formed the deep channels on its flanks. Alternatively, the channels may have been cut by water released from the regolith by volcanic activity.
Apollinaris Patera, an ancient volcano in the southern highlands, may have erupted ash, as is suggested by the easily eroded deposits on its flanks. (Malin Space Science Systems MOC2-119)
Tyrrhena Patera, located 1500 km northeast of the rim of Hellas impact basin, may have been the site of explosive volcanic activity during the early history of Mars. The deep channels may have been cut by younger lavas. This frame covers an area 280 km across. (NASA)
Although the great shield volcanoes present the most spectacular evidence of volcanic activity on Mars, the lava flows of the plains regions may represent much greater volumes of volcanic material extruded from the interior and are certainly important because of their similarity to the flood basalts on other planets. Indeed over 60 percent of the planet is covered by plains, a substantial portion of which may be of volcanic origin. Geologic evidence shows that volcanic plains were formed in the ancient highlands (similar to Mercury's intercrater plains) and in the vast northern lowlands. Their emplacement appears to have spanned the entire length of martian history. Knowledge of their absolute ages will be important for determining the thermal evolution of Mars.
Various features are used to identify volcanic plains, the most obvious being the presence of flow fronts (Figure 6.46A). Less definitive are the wrinkle ridges, which are ubiquitous on the highland plains thought to be volcanic (Figure 6.46B). The sheetlike nature of most of these deposits and the large volumes involved suggest that the eruptions occurred from large fracturelike vents with high flow rates typical for flood lavas. Some plains (Figure 6.46C) are dotted by low conical mounds several kilometers across, with depressions at their summits. These low shields and the surrounding plain are the result of small eruptions from numerous pipelike vents that may be localized along large fracture systems and are representative of basaltic plains.
Volcanic plains are not the most obvious volcanic features on Mars, but represent the largest volume of volcanic material extruded on the surface.
(A) Flow fronts for the vast sheets of lava that have flooded parts of the cratered terrain are well developed in this region south of Tharsis. The large crater with the remnant central peak is 100 km in diameter. (NASA)
(B) Wrinkle ridges form when compression buckles a mechanically strong surface layer. Even though they are not formed by volcanic processes, some suggest they are indicative of lava-covered plains. (NASA Viking 608A45)
(C) Plains-style volcanism develops small, low-shield volcanoes like the three distributed across the middle part of this photo. The vents are one-half to one kilometer across and cap very low shields with diameters of only 5 km or so. Smooth lava plains surround the shield and bury an older complexly faulted terrain. (NASA)
(D) Young, smooth plains, like those in the upper left part of this photo, have been interpreted to be deposits of volcanic ash. The deposits are easily eroded and mantle underlying terrains, but no vents have yet been identified. The deposits occur near the equator of Mars. An alternative explanation holds that the smooth plains are accumulations of ice and dust. (NASA)
The detailed nature of the northern plains remains unresolved, but much of the area probably consists of lava flows. Information collected by the Viking landers suggests that the bedrock may consist of iron-rich, basaltic lavas. Moreover, the rocks seen on the surface by the landers are frothy, like those produced in gas-rich lava flows on Earth. Flow margins are also evident in some areas and resemble those in the lunar maria. Some of the youngest plains on the planet, lying near the equator, have been interpreted as large ash-flow or ash-fall deposits (Figure 6.46D), but no vents have yet been identified. In short, the geology of the martian plains is much more complex than that of the lunar maria, as the plains were probably built up by fluvial and eolian deposits as well as volcanic flows. Subsequent modifications by ice-related phenomena have further complicated our attempts to understand the origin of the northern plains.
The large range of ages for the various volcanic features clearly shows that Mars has been thermally active during most, if not all, of its history. In addition, we have seen evidence for several types of volcanic processes that did not develop on the Moon or the portion of Mercury photographed by Mariner 10. The unique style of volcanic processes indicates significant planetary differences, which must be, at least in part, the result of the size and composition of Mars.
Martian Tectonic Features
The presence of undeformed craters across the surface of Mars clearly indicates that the crust has not been subjected to extensive horizontal compression since the period of intense bombardment. There are no mountain ranges made of folded layers of rock and no active system of moving plates. Yet there are some very important tectonic features on Mars, produced by extension and compression of the rigid lithosphere. These features are convincing evidence that significant tectonic deformation has occurred throughout martian history.
The unique tectonic character of Mars is marked by large domal upwarps in the crust and extensive fracture systems associated with the production of these bulges. Other small wrinkle ridges and graben are also present, but were probably the result of local vertical adjustments.
Several large domal upwarps occur on Mars, features that are not found on the Moon or Mercury. The two largest are in the Tharsis and Elysium regions, near the global escarpment. Both are capped with large volcanic cones (Figure 6.1). The Tharsis dome is a broad bulge 4000 km in diameter and 6 to 7 km high. A row of volcanoes, some with an additional 15 km of relief, crosses the crest of the dome. The upwarp is much steeper on the northwestern side, where it rises from the low northern plains, and therefore has an asymmetric profile. The Elysium dome is smaller, only 1500 to 2000 km across and 2 to 3 km high. The volcanoes there are less numerous and smaller.
The domes in the lithosphere on Mars are associated with large fracture systems that make spectacular patterns on the surface. The most extensive fracture system is northeast of Tharsis where a fanlike array of grabens and fractures converge toward the row of shield volcanoes. This system of faults and fractures extends across a third of the planet. The fractures form magnificent sets of grabens which are typically 1 to 5 km wide and may be several thousand kilometers long (Figure 6.47). Wherever older terrain is exposed, the rocks are intensely fractured by the intersection of several sets of grabens with different orientations. It is almost certain that these fault systems extend beneath the young volcanic terrain. They represent a complex history of structural deformation by extension of the lithosphere.
Fractures, grabens, and wrinkle ridges that surround Tharsis cover almost one-third of Mars. The nearly radial orientation of the fractures is apparent, and is probably the result of lithospheric fracturing during the rise of the Tharsis bulge. (Redrawn after M.J. Carr)
The fractures appear to have been produced as the brittle lithosphere bulged upward and cracked. The location of the volcanoes on the Tharsis ridge was probably controlled or facilitated by these fractures. The association in space and time of large domes in the martian lithosphere, graben and rift formation, and volcanism suggests that they may all have a common cause rooted in the flow of material in the mantle. Convective movements within the mantle may have pushed up and arched the overlying lithosphere to produce the domes. If solids in the mantle rise very far, they may become partially molten in the low-pressure environment; this type of decompression melting will occur even in the absence of any temperature rise. Once a molten magma is formed, it may rise even farther because of its low density and eventually feed a volcano. It seems that mantle convection may occur by the buoyant rise of less dense (warm) mantle material in cylindrical pipes of upwelling mantle. Such features have been called mantle plumes. The return flow of cold, dense mantle probably occurs in cylindrical plumes of downwelling mantle.
From crater densities on the faulted surfaces, it seems quite likely that the major tectonic events that produced the Tharsis rise ended about 1 billion years ago, and were followed by a series of volcanic eruptions that formed the shield volcanoes and young plains. Apparently, Mars remained tectonically active long after the smaller Moon had cooled and "died."
A better insight into the type of tectonism that operated on Mars can be derived from a close inspection of the western part of the Tharsis fracture system. Earlier, we mentioned the vast system of interconnecting canyons called Valles Marineris; its location and size are controlled at least in part by fractures developed around the Tharsis dome. The huge dimensions of this feature are vividly shown in Figure 6.48, in which an outline map of the United States is superimposed for scale. This great canyon system is more than 4000 km long, 700 km wide, and as much as 7 km (about 23,000 feet) deep, dwarfing terrestrial river canyons. By comparison, the Grand Canyon of the Colorado River would be a minor tributary. The canyons of Valles Marineris consist of a series of parallel depressions with steep walls that drop abruptly from an upland plain. The troughs are highly irregular in detail, with sharp indentations caused by landslides, and scallops and side canyons that may have been shaped by running water derived from ground ice. Much of the relief, however, is clearly the result of faulting along the fault system radial to the Tharsis bulge. Considerable erosion has widened the canyons, but the trace of the faults, along which vertical movement has occurred, is clear at the base of the cliffs.
Valles Marineris is related to the development of the Tharsis uplift. Downfaulting or rifting along faults radial to Tharsis created a system of troughs nearly 4,000 km long. Modification by fluvial and eolian processes and by mass movement has shaped this area's present appearance. The central part of Valles Marineris consists of Ius (south) and Tithonium (north) Chasmas. The linear bounding faults are clearly visible at the base of faceted cliffs in some places, but in many places they are obscured by ridged landslide debris. Narrow graben systems cut the upland plain and are parallel to the main canyons. Branching valleys, probably developed by seepage erosion, are common on the south walls of the canyon; landslides are more abundant on the north. (NASA and U.S. Geological Survey)
Valles Marineris can be divided into three major divisions from west to east. Nearest the dome in the Tharsis region is an intricate labyrinth of intersecting canyons called Noctis Labyrinthus (Figure 6.49). The canyons that form this section are controlled by a set of intersecting fracture systems and are characteristically short and narrow. They thus divide the cratered plains of the plateau into a mosaic of blocks forming a huge maze. The labyrinth may have developed directly over the dome which produced the fractures and stands at an elevation of about 10 km. This set of canyons appears to have formed by localized collapse as the floors subsided rather than by erosion and transportation of material out of the canyons.
Noctis Labyrinthus is a network of interconnecting grabens centered on the highest nonvolcanic part of the Tharsis uplift. The smaller, north-trending fracture system to the southwest is much older than Noctis Labyrinthus. (NASA and U.S. Geological Survey)
The central portion of the canyons consists of a system of long parallel troughs which extend a distance of 2400 km. They are the deepest depressions, with some walls rising 7 km above the valley floor. Throughout this section, Valles Marineris consists of chains of pits, all oriented in an east-southeast direction. In many places the floor is covered with landslide debris or cracked by movements along faults, suggesting that the topography was produced by subsidence of the floor along faults at the foot of the canyon walls, but usually the floor is smooth and featureless. In some of the broader canyons, plateaus of layered rocks have been found on the floor. Some scientists suggest that these are the eroded remnants of sediments deposited in ancient lakes that formed within the canyons; others suggest that they are merely the eroded remnants of the ancient martian crust isolated within the canyons.
The far eastern part of Valles Marineris is a series of irregular depressions that merge into chaotic terrain even farther east (Figure 6.50). Large areas of collapsed terrain have probably resulted from the removal of ground ice. Faults are not as obvious here, the canyons are much shallower, and the floors are hummocky; apparently, these canyons were formed more by flowing water than by subsidence along faults. Valles Marineris is continuous with several chaotic areas, the largest being Aureum Chaos and Hydraotes Chaos, which connect with the large channels that drain into the northern lowlands.
The eastern canyons of Valles Marineris broaden dramatically at the eastern end of Coprates Chasma. The location of the walls no longer appears to be controlled by faulting, and the canyon is not as deep as its western sections. The canyon is continuous with large areas of chaotic terrain, which appear to be the sources of several of the outflow channels that emptied into Chryse Planitia, which lies just north of the area covered by this map. (NASA and U.S. Geological Survey)
The enormous canyons of Valles Marineris are complex, involving tectonic activity and a variety of erosional processes. Faulting related to the creation of the Tharsis bulge appears to have played a key role in its formation by providing differential relief and zones of weakness, both necessary conditions for significant erosion to occur on Mars.
The features on Earth most comparable in size and nature to Valles Marineris are the Red Sea and the east African rift valleys. A rift valley forms where the crust is pulled apart, creating a system of parallel faults that allow an inner zone to subside, creating a large valley. Valles Marineris may have originated in a similar manner, as fractures developed along the crest of a large flexure in the crust and allowed the interior block to subside. Subsequently, the rims of the valleys were sculpted by erosional processes, producing the present topography. A much smaller set of fractures occurs around the Elysium dome and may reflect similar events that occurred there.
Other areas of Mars are cut by smaller grabens and faults (Figure 6.47). In fact, much of the global escarpment is transected by linear patterns, suggesting that its evolution has been controlled in part by a set of fractures and joints, which have accelerated its recession. The thinner crust in the northern plains may have been more extensively fractured early in the history of Mars, possibly even as a result of thermal expansion of the entire planet.
In the plains, wrinkle or mare-type ridges are very well developed and extend for many kilometers across the surface (Figure 6.51). Generally, the ridges occur in sets that parallel regional depressions, as in the Chryse area, which may indicate that they were produced by buckling of the near-surface deposits (perhaps lava flows) just as has been postulated for the Moon.
Wrinkle ridges cross this region on the western flanks of Chryse Basin. East-flowing catastrophic floods have extensively modified this area. (NASA and US Geological Survey)
The development of domal upwarps and associated fracture systems clearly represents major tectonic events in the geologic history of Mars, events not shared by the Moon or Mercury. It appears that this tectonism was related to the focus of the youngest volcanic activity on the planet. On the Moon, we saw that tectonic activity and the deformation of the outer layers of the planet ceased between 2 and 3 billion years ago and, even then, most of the changes were minor and more or less passive, such as the vertical adjustments of mare basins. Mercury experienced global contraction as it cooled. Mars is a more dynamic planet and experienced some tectonism, in the form of crustal doming and rift formation, until at least a billion or so years ago. Apparently because of its larger size, Mars progressed more slowly through its thermal evolution, and the longer duration of its tectonic systems produced young features. Mars has a mixture of Moonlike features modified by some Earthlike processes. Even though no evidence of past plate motions has been preserved on the surface, mantle convection has produced some very obvious changes to the cratered landscape.
The Geologic History of Mars
By careful mapping of the surface of Mars, distinctive terrain units have been identified on the basis of their topography, color, and brightness. Using the principles of superposition and crater frequency, these geologic units, and the processes which formed them, can be placed in their proper time sequence. The formal names applied to these periods of time are, from oldest to youngest, Noachian, Hesperian, and Amazonian. The names are taken from prominent geographic regions on Mars. The absolute ages of these geologic periods are subject to great uncertainty, because no rocks have been collected from the surface of Mars and because the rate of meteorite impact on Mars is not known and cannot be simply tied to the dated cratering record on the Moon. Even though the actual ages of these developmental stages are not known precisely, they provide an important framework for developing the history of Mars.
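The relative dating described above rests on a simple rule: the longer a surface has been exposed, the more impact craters it accumulates per unit area. The short Python sketch below ranks a few surfaces by crater density; the numbers are hypothetical, chosen only to illustrate the reasoning, and are not actual martian counts.

```python
# Relative dating by crater frequency: older surfaces carry more craters per
# unit area. The densities below are hypothetical, for illustration only.
surfaces = {
    "heavily cratered highlands": 400,   # craters (> 4 km) per 10^6 km^2, assumed
    "ridged highland plains": 180,
    "northern volcanic plains": 60,
    "Olympus Mons flanks": 5,
}

# Sort from highest to lowest crater density, i.e., from oldest to youngest.
for name, density in sorted(surfaces.items(), key=lambda item: item[1], reverse=True):
    print(f"{name}: {density} craters per 10^6 km^2")
```

Superposition relationships then pin each mapped unit between the units it overlies and the units that overlie it, which is how the Noachian, Hesperian, and Amazonian sequence is assembled even without absolute ages.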
Six major stages in the evolution of Mars are recognized; each stage involves distinct events or processes, but they may overlap somewhat in time. The effects of the major events in the geologic evolution of the interior and surface of Mars are shown in the series of diagrams in Figures 6.52 and 6.53.
The thermal history and internal structure of Mars are shown on this schematic diagram. Accretion of Mars may have led to widespread shallow melting and the creation of a magma ocean hundreds of kilometers deep. The primitive crust may have crystallized from this magma ocean. Core formation was probably nearly simultaneous with the epoch of crust formation, as iron and iron sulfide became molten and, because of their great density, sank to form a metallic core. Radioactive decay added heat to the interior of the planet and sustained an asthenosphere, which fed volcanism at the surface. The Tharsis and Elysium domes also formed at this time. The lithosphere thickened with time and the zone of partial melting in the mantle may be entirely absent at present.
Stage 1. Accretion and Differentiation.
Mars, like all the other planets, is believed to have formed by accretion of a myriad of smaller bodies over a fairly short period of time about 4.6 billion years ago. As the planet grew and developed a larger gravity field, the infalling debris was collected at higher speeds. Consequently, the impacts released large amounts of heat. Probably much of the planet was melted by this accretionary heat that was augmented by radiogenic heat produced internally. As a result, Mars may have differentiated internally, forming an atmosphere, crust, mantle, and core. If the initial heating of Mars was great enough, a magma ocean may have formed from which the primordial crust crystallized. This crust was probably basaltic and not anorthositic, as on the Moon. Great variations in crustal thickness may have been inherited from this time. On Mars, the crust in the northern hemisphere appears to be thinner than that in the south.
The gases and water that are presently at or near the surface were probably released from the interior of Mars during this early stage, to form a primordial atmosphere and hydrosphere (Figure 6.53A). The presence of these fluids near the surface of Mars resulted from its more volatile-rich composition as compared to Mercury or the Moon.
The geologic evolution of Mars can be summarized in six stages. Schematic maps of the western hemisphere are shown.
Stage 2. Noachian Period.
During and following its formation as a planet, Mars was subjected to a period of intense meteoritic bombardment. Crust formation predated the end of the catastrophic bombardment, and a densely cratered surface such as that shown in Figure 6.53B, including several multiring basins, was formed. In fact, a few geologists have speculated that the ultimate cause of the north-south dichotomy was the impact of a huge body, which thinned the crust of the northern hemisphere. The asteroids, the Moon, Mercury, and presumably even Earth experienced similar impact-dominated early histories.
An early period of widespread lava flooding is represented by extensive tracts of plains between and within the ancient craters. The plains are similar in appearance to the mercurian intercrater plains, and in some cases, show distinctive features of volcanic activity. Thus, the oldest rocks on Mars must consist of interlayered and overlapping ejecta blankets and lava flows formed shortly after the planet's accretion. By about 3.5 billion years ago, the rate of meteorite impact had probably declined to very near the present low rate.
The early martian atmosphere was quite likely more dense, and temperatures were probably higher, perhaps sufficient to allow liquid water to flow across the surface. If so, rainfall and flooding would have caused significant erosion of the highlands and the runoff may have collected in local basins in the southern highlands and in the northern lowlands as ephemeral lakes or seas. The small filamentous channels of the highlands may be remnants of these ancient river systems. However, evidence for a thicker atmosphere early in martian history is at best equivocal. An alternative explanation for the valley networks holds that they were formed as groundwater, stored in the crust at some earlier time, seeped to the surface and created valleys by sapping at springs.
Stage 3. Early Hesperian Period.
Vast sheets of flood lava formed some of the ridged highland plains. Lavas are probably interlayered with eolian sediments in the huge areas of highlands that were resurfaced at this time. Several large volcanic fields formed in the southern hemisphere around volcanic patera, such as Tyrrhena Patera.
Regional uplift in the Tharsis area began and was accompanied by radial faulting and volcanism, presumably because of plume convection in the mantle. Valles Marineris began its development along a structural trough in the fracture system. Much of the crust in the northern hemisphere may have been extensively fractured at this time, possibly because it was thinner there than in the southern hemisphere. In addition, slow expansion of the entire planet due to radioactive warming of its deep interior may have contributed to the tectonic breakup of the northern hemisphere (Figure 6.53C).
Probably during this or an earlier stage, the temperature at the surface began to drop as less heat was being released from the interior. Water became locked up beneath the surface as ground ice or in the polar caps. Carbon dioxide from the atmosphere may have been trapped in carbonate minerals formed at the surface. Atmospheric pressure probably dropped, partly as a result of the removal of water vapor and carbon dioxide from the air, but also because of the slow, inexorable loss of gas into space. Both of these processes made it very unlikely for water to exist as a liquid on the surface. Much of the erosion along the global escarpment occurred just before or during Stage 4.
Stage 4. Late Hesperian Period.
Widespread volcanic activity and deposition of eolian and perhaps fluvial sedimentary rocks occurred in the northern hemisphere to form great, low, sparsely cratered plains (Figure 6.53D). Eruptions of flood lavas formed the lunar maria and plains of Mercury and then stopped, probably several billion years ago, but on Mars, volcanic activity continued intermittently over a much longer period of time, possibly to the present. The amount of volcanism appears to have declined with time. The large volcanoes in Elysium, sitting atop a large crustal swell, began to erupt during this epoch. Highland volcanic centers also developed.
Stage 5. Late Hesperian and Early Amazonian Periods.
Recurrent uplift in the Tharsis region formed additional radial faults and was accompanied by episodic volcanic outpourings to create the Tharsis volcanoes. Valles Marineris was progressively modified by filling with sediments and then enlarged by continued faulting and slumping. Catastrophic outbreaks of groundwater, possibly as a result of the formation of the Tharsis rise, released floods of water to form the chaotic terrain and large outflow channels of the eastern canyon system. These catastrophic floods carried sediment downslope to Chryse Basin in the northern lowlands and filled temporary seas. The chaotic terrain enlarged and the global escarpment continued to retreat southward. Major volcanism and related outbursts of groundwater occurred in the Elysium province as well. Eolian and volcanic processes also resurfaced portions of the northern plains (Figure 6.53E).
Stage 6. Middle and Late Amazonian Period.
Volcanic extrusions partly covered the Tharsis area with fresh lava, and the most recent lava flows on Olympus Mons were erupted at this time (Figure 6.53F). However, the volume of magma erupted was significantly lower than in previous epochs, a clear indication that Mars was cooling to the critical level beyond which no volcanism can occur. Channels, probably related to melting of ground ice by volcanic activity, formed west of Olympus Mons. Eolian activity persisted, modifying the entire surface while the polar layered terrains and surrounding dune fields continued to develop.
Mars has certainly passed its peak in geologic activity; it is likely that even the volcano Olympus Mons has been inactive for tens or hundreds of millions of years. As Mars continues to cool, radiating its energy away to space, active geologic processes such as faulting and volcanism will continue to wind down. At present, eolian processes are the dominant active geologic agents shaping martian surface features, although sublimation of ice and mass movement may continue to cause slope retreat along the global escarpment and along steep crater or canyon walls.
Although critical data are missing, particularly radiometric dates of the various rock units, a complex and fascinating history of Mars is emerging from the study of relative ages. Tantalizing hints of further complexities and geologic paradoxes remain for generations of scientists to decipher and integrate into this framework for the geologic history of Mars.
The geological exploration of Mars has been a highlight of the space program and has revolutionized our knowledge of the red planet. Eight American spacecraft have successfully returned thousands of photographs and other data pertinent to deciphering its geologic history. Recent space missions have revealed that Mars is an enormously exciting place of vast geologic interest. We have seen Mars closeup, landed on its surface, mapped its terrains, and found out what Mars is really like. Some of the more important discoveries are summarized below.
Much of the martian surface is cratered, some of it so intensely that it may be inherited from the close of the heavy bombardment (on the Moon these regions are about 4 billion years old). The craters on Mars are similar in size to those on the Moon, but show evidence of greater modification by erosion and burial. Many martian craters have a unique appearance, caused by the motions of fluid ejecta that flowed across the surface.
The great shield volcanoes in the Tharsis and Elysium regions and the vast volcanic plains clearly indicate that Mars has experienced more volcanic activity later in its history than the Moon. Moreover, distinctive central volcanoes are important on Mars but not seen on Mercury or the Moon.
Two large domal upwarps in the Tharsis and Elysium regions stand out as the major tectonic features of Mars. A pattern of radial fractures extends out from the Tharsis region over much of the western hemisphere. The great canyons of Noctis Labyrinthus and Valles Marineris formed as part of this system of extensional faults. Compressive forces have also deformed large regions of Mars, buckling the crust to form wrinkle ridges. Even so, no folded mountain belts, like those on Earth, formed and no system of mobile lithospheric plates was sustained. Tectonic (domes and graben) and volcanic features (localized on and near the domes) on Mars suggest that mantle plumes are an important kind of mantle convection.
Systems of stream channels exist on Mars. The largest originate in the southern highlands, cross the global escarpment, and terminate in the low plains to the north where shallow temporary seas waxed and waned. Their headwaters originate in chaotic terrain marked by collapse structures. Smaller, more ancient channels occur on old volcanic mountains, crater rims, and as small branching systems. They may represent a major erosional interval early in martian history, when liquid water was available at the surface, when temperatures were higher, and when the atmosphere was thicker. Much of this water now resides in polar ice caps or as ice within the surface layers.
Huge landslides and debris flows and other types of gravity-driven mass movement have been important processes in the development of the martian landscape, enlarging or modifying the canyons of Valles Marineris. The chaotic terrain is another expression of large-scale mass movement, involving collapse as subsurface ice melted and flowed away.
Winds gusting up to 200 km per hour sweep much of the surface during the global dust storms that periodically rage across the planet. Numerous streaks, dunes, and grooves show that the wind may be the dominant geologic process still active. Surface photographs from the Viking landers show many small eolian features such as sand ridges, dunes, and a type of desert pavement.
Superposition and crosscutting relationships show that Mars has had an eventful geologic history involving, in sequence, (1) accretion and internal differentiation, including outgassing of an atmosphere, (2) formation of a densely cratered surface, perhaps during a period when the atmosphere was thicker and a hydrologic system operated, (3) uplift and radial fracturing of the crust in the Tharsis region, (4) widespread obliteration of the cratered terrain in the northern hemisphere, perhaps accompanied by extrusion of lava and deposition of sediment to form the northern plains, (5) renewed uplift and volcanism in the Tharsis region with the development of Valles Marineris and episodic release of groundwater in vast floods to create the outflow channels, and (6) modification of the surface, mainly by eolian deposition and erosion in a cold, dry environment.
Although Mars is still a relatively primitive planet, in the sense that it has large tracts of heavily cratered terrain, it is far more Earth-like than the other inner planets we have examined. Mars has an atmosphere and a partial hydrologic system that worked together to provide a diverse arrangement of landforms. Mars is intermediate in size between the small terrestrial planets (Moon and Mercury) and the larger ones (Earth and Venus), and its thermal evolution also appears to have been intermediate, resulting in the production of mantle plumes, lithospheric domes and an extensive and varied volcanic history.
Important factors that affect the thermal evolution of a planet and in turn help determine the nature of its geologic processes include its mass, diameter, and composition. Mars has almost twice the mass of Mercury, but a larger diameter and lower bulk density. Thus, it has a slightly smaller surface-area-to-mass ratio. Calculations suggest that the martian cooling rate was slower, maintaining high internal temperatures for billions of years. Perhaps as a result of this slower cooling rate, Mars experienced a longer period of volcanic activity than is apparent on Mercury or the Moon (Figure 6.52).
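The surface-area-to-mass comparison can be checked with a few lines of Python. The radii and masses used here are approximate standard reference values rather than figures given in the text, so treat the output as a rough consistency check.

```python
import math

# Approximate radii (m) and masses (kg) from general reference data, not from the text.
bodies = {
    "Moon":    (1.737e6, 7.35e22),
    "Mercury": (2.440e6, 3.30e23),
    "Mars":    (3.390e6, 6.42e23),
}

for name, (radius, mass) in bodies.items():
    surface_area = 4 * math.pi * radius ** 2
    print(f"{name}: {surface_area / mass:.2e} m^2 per kg")

# Mars comes out slightly below Mercury and well below the Moon, consistent with
# slower radiative cooling and a longer-lived volcanic history.
```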
Some scientists think that both the low density and long thermal evolution of Mars may have been caused, in part, by significant differences in the composition of Mars (and not just its size), as compared to either Mercury or the Moon. The presence of an atmosphere and water-related features support this assumption, as both are absent on Mercury or the Moon. Mars accreted at a distance farther from the early Sun than the rest of the inner planets and possibly at lower temperatures or pressures. These conditions probably allowed volatile-rich materials to condense from the nebula and become incorporated in the planet. If so, internal melting and convection in Mars may have occurred at lower temperatures than in Mercury, for example, and could have persisted to more recent periods of time. Other compositional differences, especially regarding the proportions of radioactive elements may have helped slow the cooling history of Mars as compared to the Moon and Mercury.
Mars is more complex and geologically diverse than the Moon and Mercury in other ways as well. Tectonic disturbances, such as the Tharsis uplift, have altered the face of the planet dramatically. No contraction on a global scale is indicated. Internal circulation, or convection, in the hot, plastic mantle is thought to be the driving force behind much of this tectonic diversity. These internal motions provided forces that arched and cracked the thickening lithosphere, producing the vast fracture systems and localizing volcanic activity. However, the lithosphere of Mars apparently lacked lateral density contrasts and thickened more rapidly than Earth's. Thus, the lithosphere was not broken into a system of moving plates. Instead, persistent melting beneath the lithosphere produced huge volcanic piles, not the volcanic chains formed on a moving lithosphere like Earth's.
1. How may the interior of Mars differ from that of Mercury and the Moon?
2. Why are we unlikely to find liquid water on the surface of Mars? Does liquid water occur anywhere on or in the planet?
3. What does the layered nature of the martian polar deposits imply about the history of the martian climate?
4. Why is Mars red?
5. Describe the craters on Mars. How and why do they differ from those formed on the Moon?
6. Describe the volcanoes on Mars. Why are they so much larger than those found on Earth?
7. What are the principal differences between the northern and southern hemispheres of Mars?
8. Describe the origin and evolution of the atmosphere of Mars.
9. What is the evidence that Mars once had a warmer climate and denser atmosphere?
10. What tectonic features are found on Mars?
11. Describe Valles Marineris. How did it form? Does it have a close analog on the Moon or Mercury?
12. Describe the fluvial features on the surface of Mars. Are they caused by episodic or long-lived processes?
13. What is the evidence for "catastrophic" flooding? Describe the birth, growth, and death of a northern sea.
14. What landforms are produced by wind erosion and deflation?
15. Outline the geologic history of Mars. In what ways is it similar to the history of the Moon and Mercury?
16. Explain the cause of the differences between the surface features of the Moon and Mars.
17. If you were planning the next mission to Mars, what would be important objectives? What two landing sites would be best? Why?
Accuracy and precision
In the fields of science, engineering, industry, and statistics, the accuracy of a measurement system is the degree of closeness of measurements of a quantity to that quantity's actual (true) value. The precision of a measurement system, also called reproducibility or repeatability, is the degree to which repeated measurements under unchanged conditions show the same results. Although the two words reproducibility and repeatability can be synonymous in colloquial use, they are deliberately contrasted in the context of the scientific method.
A measurement system can be accurate but not precise, precise but not accurate, neither, or both. For example, if an experiment contains a systematic error, then increasing the sample size generally increases precision but does not improve accuracy. The result would be a consistent yet inaccurate string of results from the flawed experiment. Eliminating the systematic error improves accuracy but does not change precision.
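A small simulation can make the two failure modes concrete. This sketch is not from the article; the true value, the offset, and the noise levels are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
true_value = 10.0

# Precise but not accurate: tiny scatter around a value offset by a systematic error.
biased = true_value + 0.5 + rng.normal(0.0, 0.05, size=1_000)

# Accurate but not precise: centred on the true value but with large random scatter.
noisy = true_value + rng.normal(0.0, 1.0, size=1_000)

for name, data in (("biased", biased), ("noisy", noisy)):
    # Mean error reflects accuracy (trueness); the standard deviation reflects precision.
    print(f"{name}: mean error = {data.mean() - true_value:+.3f}, "
          f"spread (std) = {data.std(ddof=1):.3f}")
```

Increasing the sample size shrinks the spread of the biased sample's mean but leaves its offset untouched, which is exactly the point made above about systematic error.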
A measurement system is designated valid if it is both accurate and precise. Related terms include bias (non-random or directed effects caused by a factor or factors unrelated to the independent variable) and error (random variability).
The terminology is also applied to indirect measurements—that is, values obtained by a computational procedure from observed data.
In addition to accuracy and precision, measurements may also have a measurement resolution, which is the smallest change in the underlying physical quantity that produces a response in the measurement.
In the case of full reproducibility, such as when rounding a number to a representable floating point number, the word precision has a meaning not related to reproducibility. For example, in the IEEE 754-2008 standard it means the number of bits in the significand, so it is used as a measure for the relative accuracy with which an arbitrary number can be represented.
Accuracy versus precision: the target analogy
Accuracy is the degree of veracity while in some contexts precision may mean the degree of reproducibility. Accuracy is dependent on how data is collected, and is usually judged by comparing several measurements from the same or different sources.
The analogy used here to explain the difference between accuracy and precision is the target comparison. In this analogy, repeated measurements are compared to arrows that are shot at a target. Accuracy describes the closeness of arrows to the bullseye at the target center. Arrows that strike closer to the bullseye are considered more accurate. The closer a system's measurements are to the accepted value, the more accurate the system is considered to be.
To continue the analogy, if a large number of arrows are shot, precision would be the size of the arrow cluster. (When only one arrow is shot, precision is the size of the cluster one would expect if this were repeated many times under the same conditions.) When all arrows are grouped tightly together, the cluster is considered precise since they all struck close to the same spot, even if not necessarily near the bullseye. The measurements are precise, though not necessarily accurate.
However, it is not possible to reliably achieve accuracy in individual measurements without precision—if the arrows are not grouped close to one another, they cannot all be close to the bullseye. (Their average position might be an accurate estimation of the bullseye, but the individual arrows are inaccurate.) See also circular error probable for application of precision to the science of ballistics.
Ideally a measurement device is both accurate and precise, with measurements all close to and tightly clustered around the known value. The accuracy and precision of a measurement process is usually established by repeatedly measuring some traceable reference standard. Such standards are defined in the International System of Units (abbreviated SI from French: Système international d'unités) and maintained by national standards organizations such as the National Institute of Standards and Technology in the United States.
This also applies when measurements are repeated and averaged. In that case, the term standard error is properly applied: the precision of the average is equal to the known standard deviation of the process divided by the square root of the number of measurements averaged. Further, the central limit theorem shows that the probability distribution of the averaged measurements will be closer to a normal distribution than that of individual measurements.
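The σ/√n behaviour is easy to verify numerically. The following sketch assumes a made-up process standard deviation and simply repeats the "average of n readings" experiment many times.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
sigma = 2.0                      # assumed known standard deviation of the process
for n in (1, 4, 16, 64):
    # 100,000 repetitions of "take n measurements and average them"
    means = rng.normal(0.0, sigma, size=(100_000, n)).mean(axis=1)
    print(f"n = {n:3d}: observed std of the average = {means.std(ddof=1):.3f}, "
          f"predicted sigma/sqrt(n) = {sigma / np.sqrt(n):.3f}")
```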
With regard to accuracy we can distinguish:
- the difference between the mean of the measurements and the reference value, the bias. Establishing and correcting for bias is necessary for calibration.
- the combined effect of that and precision.
A common convention in science and engineering is to express accuracy and/or precision implicitly by means of significant figures. Here, when not explicitly stated, the margin of error is understood to be one-half the value of the last significant place. For instance, a recording of 843.6 m, or 843.0 m, or 800.0 m would imply a margin of 0.05 m (the last significant place is the tenths place), while a recording of 8,436 m would imply a margin of error of 0.5 m (the last significant digits are the units).
A reading of 8,000 m, with trailing zeroes and no decimal point, is ambiguous; the trailing zeroes may or may not be intended as significant figures. To avoid this ambiguity, the number could be represented in scientific notation: 8.0 × 10³ m indicates that the first zero is significant (hence a margin of 50 m) while 8.000 × 10³ m indicates that all three zeroes are significant, giving a margin of 0.5 m. Similarly, it is possible to use a multiple of the basic measurement unit: 8.0 km is equivalent to 8.0 × 10³ m. In fact, it indicates a margin of 0.05 km (50 m). However, reliance on this convention can lead to false precision errors when accepting data from sources that do not obey it.
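The significant-figures convention can be captured in a few lines. This helper is only a sketch of the rule stated above (half the value of the last decimal place) and deliberately refuses to guess when trailing zeroes make the reading ambiguous.

```python
def implied_margin(reading: str) -> float:
    """Half the value of the last significant decimal place, e.g. '843.6' -> 0.05."""
    reading = reading.strip()
    if "." in reading:
        decimals = len(reading.split(".")[1])
        return 0.5 * 10.0 ** (-decimals)
    if reading.endswith("0"):
        raise ValueError("trailing zeroes without a decimal point are ambiguous")
    return 0.5   # the last significant digit is the units place

print(implied_margin("843.6"))   # 0.05
print(implied_margin("8436"))    # 0.5
```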
Precision is sometimes stratified into:
- Repeatability — the variation arising when all efforts are made to keep conditions constant by using the same instrument and operator, and repeating during a short time period; and
- Reproducibility — the variation arising using the same measurement process among different instruments and operators, and over longer time periods.
Terminology of ISO 5725
A shift in the meaning of these terms appeared with the publication of the ISO 5725 series of standards. According to ISO 5725-1, the terms trueness and precision are used to describe the accuracy of a measurement. Trueness refers to the closeness of the mean of the measurement results to the "correct" value and precision refers to the closeness of agreement within individual results. Therefore, according to the ISO standard, the term "accuracy" refers to both trueness and precision. The standard also avoids the use of the term bias, because it has different connotations outside the fields of science and engineering, as in medicine and law. The terms "accuracy" and "trueness" were again redefined in 2008 with a slight shift in their exact meanings in the "BIPM International Vocabulary of Metrology", items 2.13 and 2.14
[Figure: Accuracy according to BIPM and ISO 5725]
In binary classification
Accuracy is also used as a statistical measure of how well a binary classification test correctly identifies or excludes a condition.
Condition as determined by the Gold standard:

                          Condition positive        Condition negative
  Test outcome positive   True positive             False positive        → Positive predictive value or Precision
  Test outcome negative   False negative            True negative         → Negative predictive value
                          ↓ Sensitivity or recall   ↓ Specificity (or its complement, Fall-Out)
An accuracy of 100% means that the measured values are exactly the same as the given values.
Also see Sensitivity and specificity.
Accuracy may be determined from Sensitivity and Specificity, provided Prevalence is known, using the equation: Accuracy = Sensitivity × Prevalence + Specificity × (1 − Prevalence).
The accuracy paradox for predictive analytics states that predictive models with a given level of accuracy may have greater predictive power than models with higher accuracy. It may be better to avoid the accuracy metric in favor of other metrics such as precision and recall. In situations where the minority class is more important, F-measure may be more appropriate, especially in situations with very skewed class imbalance.
Another useful performance measure is the balanced accuracy which avoids inflated performance estimates on imbalanced datasets. It is defined as the arithmetic mean of sensitivity and specificity, or the average accuracy obtained on either class: Balanced accuracy = (Sensitivity + Specificity) / 2.
If the classifier performs equally well on either class, this term reduces to the conventional accuracy (i.e., the number of correct predictions divided by the total number of predictions). In contrast, if the conventional accuracy is above chance only because the classifier takes advantage of an imbalanced test set, then the balanced accuracy, as appropriate, will drop to chance. A closely related chance corrected measure is: Informedness = Sensitivity + Specificity − 1,
while a direct approach to debiasing and renormalizing Accuracy is Cohen's kappa, whilst Informedness has been shown to be a Kappa family debiased renormalization of Recall. Informedness and Kappa have the advantage that chance level is defined to be 0, and they have the form of a probability. Informedness has the stronger property that it is the probability that an informed decision is made (rather than a guess), when positive. When negative this is still true for the absolute value of Informedness, but the information has been used to force an incorrect response.
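To see how these measures behave, here is an illustrative sketch (not from the article) that computes plain accuracy, balanced accuracy and informedness from a confusion matrix; the counts are invented to show how class imbalance inflates plain accuracy.

```python
def classification_measures(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)                 # recall, true positive rate
    specificity = tn / (tn + fp)                 # true negative rate
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    balanced_accuracy = 0.5 * (sensitivity + specificity)
    informedness = sensitivity + specificity - 1.0
    return accuracy, balanced_accuracy, informedness

# Heavily imbalanced test set: 990 negatives, 10 positives.
# A classifier that almost always answers "negative" still looks accurate.
acc, bacc, info = classification_measures(tp=1, fp=5, fn=9, tn=985)
print(f"accuracy = {acc:.3f}, balanced accuracy = {bacc:.3f}, informedness = {info:.3f}")
```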
In psychometrics and psychophysics
In psychometrics and psychophysics, the term accuracy is interchangeably used with validity and constant error. Precision is a synonym for reliability and variable error. The validity of a measurement instrument or psychological test is established through experiment or correlation with behavior. Reliability is established with a variety of statistical techniques, classically through an internal consistency test like Cronbach's alpha to ensure sets of related questions have related responses, and then comparison of those related question between reference and target population.
In logic simulation
In logic simulation, a common mistake in evaluation of accurate models is to compare a logic simulation model to a transistor circuit simulation model. This is a comparison of differences in precision, not accuracy. Precision is measured with respect to detail and accuracy is measured with respect to reality.
In information systems
The concepts of accuracy and precision have also been studied in the context of data bases, information systems and their sociotechnical context. The necessary extension of these two concepts on the basis of theory of science suggests that they (as well as data quality and information quality) should be centered on accuracy defined as the closeness to the true value seen as the degree of agreement of readings or of calculated values of one same conceived entity, measured or calculated by different methods, in the context of maximum possible disagreement.
See also
- ± or Plus-minus sign
- Accuracy class
- ANOVA Gauge R&R
- ASTM E177 Standard Practice for Use of the Terms Precision and Bias in ASTM Test Methods
- Engineering tolerance
- Experimental uncertainty analysis
- Failure assessment
- Precision bias
- Precision engineering
- Precision (statistics)
- Accepted and experimental value
- Binary classification
- Brier score
- Confusion matrix
- Detection theory
- Gain (information retrieval)
- Matthews correlation coefficient
- Precision and recall curves
- Receiver operating characteristic or ROC curve
- Sensitivity and specificity
- Selectivity
- Sensitivity index
- Statistical significance
- Youden's J statistic
- JCGM 200:2008 International vocabulary of metrology — Basic and general concepts and associated terms (VIM)
- John Robert Taylor (1999). An Introduction to Error Analysis: The Study of Uncertainties in Physical Measurements. University Science Books. pp. 128–129. ISBN 0-935702-75-X.
- BS ISO 5725-1: "Accuracy (trueness and precision) of measurement methods and results - Part 1: General principles and definitions", pp.1 (1994)
- K.H. Brodersen, C.S. Ong, K.E. Stephan, J.M. Buhmann (2010). The balanced accuracy and its posterior distribution. Proceedings of the 20th International Conference on Pattern Recognition, 3121-3124.
- Powers, David M W (2007/2011). "Evaluation: From Precision, Recall and F-Factor to ROC, Informedness, Markedness & Correlation". Journal of Machine Learning Technologies 2 (1): 37–63.
- Powers, David M. W. (2012). "The Problem with Kappa". Conference of the European Chapter of the Association for Computational Linguistics (EACL2012) Joint ROBUS-UNSUP Workshop.
- John M. Acken, Encyclopedia of Computer Science and Technology, Vol 36, 1997, page 281-306
- 1990 Workshop on Logic-Level Modelling for ASICS, Mark Glasser, Rob Mathews, and John M. Acken, SIGDA Newsletter, Vol 20. Number 1, June 1990
- Ivanov, K. (1972). "Quality-control of information: On the concept of accuracy of information in data banks and in management information systems".
- BIPM - Guides in metrology - Guide to the Expression of Uncertainty in Measurement (GUM) and International Vocabulary of Metrology (VIM)
- "Beyond NIST Traceability: What really creates accuracy" - Controlled Environments magazine
- Precision and Accuracy with Three Psychophysical Methods
- Guidelines for Evaluating and Expressing the Uncertainty of NIST Measurement Results, Appendix D.1: Terminology
- Accuracy and Precision
- Accuracy vs Precision — a brief, clear video by Matt Parker
A collective term for a series of quantitative characteristics (in terms of numbers, vectors, tensors) describing the degree to which some object (a curve, a surface, a Riemannian space, etc.) deviates in its properties from certain other objects (a straight line, a plane, a Euclidean space, etc.) which are considered to be flat. The concepts of curvature are usually defined locally, i.e. at each point. These concepts of curvature are connected with the examination of deviations which are small to the second order; hence the object in question is assumed to be specified by C²-smooth functions. In some cases the concepts are defined in terms of integrals, and they remain valid without the C²-smoothness condition. As a rule, if the curvature vanishes at all points, the object in question is identical (in small sections, not in the large) with the corresponding "flat" object.
The curvature of a curve.
Let γ be a regular curve in the n-dimensional Euclidean space, parametrized in terms of its natural parameter s. Let Δφ and Δs be the angle between the tangents to γ at the points P and P′ of γ and the length of the arc of the curve between P and P′, respectively. Then the limit

k = lim_{Δs→0} Δφ/Δs

is called the curvature of the curve γ at P. The curvature of the curve is equal to the absolute value of the vector d²r/ds², where r(s) is the radius vector of the curve, and the direction of this vector is just the direction of the principal normal to the curve. For the curve to coincide with some segment of a straight line or with an entire line it is necessary and sufficient that its curvature vanishes identically.
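As a numerical aside (not part of the encyclopedia entry), the curvature of a plane curve given in an arbitrary parametrization can be computed from the standard formula k = |x′y″ − y′x″| / (x′² + y′²)^(3/2), which is equivalent to the arc-length definition above; for a circle of radius R it should return 1/R.

```python
import numpy as np

def plane_curvature(x, y, t):
    """Curvature of a plane curve sampled as x(t), y(t) on the parameter grid t."""
    dx, dy = np.gradient(x, t), np.gradient(y, t)
    ddx, ddy = np.gradient(dx, t), np.gradient(dy, t)
    return np.abs(dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5

t = np.linspace(0.0, 2.0 * np.pi, 2001)
R = 2.0
kappa = plane_curvature(R * np.cos(t), R * np.sin(t), t)
print(kappa[1000])   # ~0.5 = 1/R (away from the ends, where the finite differences are cruder)
```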
The curvature of a surface.
Let be a regular surface in the three-dimensional Euclidean space. Let be a point of , the tangent plane to at , the normal to at , and the plane through and some unit vector in . The intersection of the plane and the surface is a curve, called the normal section of the surface at the point in the direction . The number
where is the natural parameter on , is called the normal curvature of in the direction . The normal curvature is equal to the curvature of the curve up to the sign.
The tangent plane contains two perpendicular directions e₁ and e₂ such that the normal curvature in any direction can be expressed by Euler's formula:

k = k₁ cos²φ + k₂ sin²φ,

where φ is the angle between the given direction and e₁. The numbers k₁ and k₂ are called the principal curvatures, and the directions e₁ and e₂ are known as the principal directions of the surface. The principal curvatures are extremal values of the normal curvature. The construction of the normal curvature at a given point of the surface may be represented graphically as follows. When , the equation
where is the radius vector, defines a certain curve of the second order in the tangent plane , known as the Dupin indicatrix. The Dupin indicatrix can only be one of the following three curves: an ellipse, a hyperbola or a pair of parallel lines. The points of the surface are accordingly classified as elliptic, hyperbolic or parabolic. At an elliptic point, the second fundamental form of the surface is of fixed sign; at a hyperbolic point the form is of variable sign; and at a parabolic point it is degenerate. If all normal curvatures at a point are zero, the point is said to be flat. If the Dupin indicatrix is a circle it is called an umbilical (or spherical) point.
The principal directions are uniquely determined (up to the order), unless the point in question is an umbilical point or a flat point. In these cases every direction is principal. In this connection one has the following theorem of Rodrigues: A direction is principal if and only if
where is the radius vector of the surface and the unit normal vector.
A curve on a surface is called a curvature line if its direction at every point is principal. In a neighbourhood of every point on a surface, other than an umbilical point or a flat point, the surface may be so parametrized that its coordinate curves are curvature lines.
The quantity

H = (k₁ + k₂)/2

is called the mean curvature of the surface. The quantity

K = k₁k₂

is called the Gaussian (or total) curvature of the surface. The Gaussian curvature is an object of the intrinsic geometry of the surface, i.e. it can be expressed in terms of the first fundamental form:
where are the coefficients of the first fundamental form of the surface.
Using formula (1), one defines the Gaussian curvature for an abstract two-dimensional Riemannian manifold with line element . A surface is locally isometric to a plane if and only if its Gaussian curvature vanishes identically.
The curvature of a Riemannian space.
Let be a regular -dimensional Riemannian space and let be the space of regular vector fields on . The curvature of is usually characterized by the Riemann (curvature) tensor (cf. Riemann tensor), i.e. by the multilinear mapping
where is the Levi-Civita connection on and denotes the Lie bracket. If one puts , in some local coordinate system , one can rewrite (2) as follows:
where; is the symbol for covariant differentiation.
Thus, the Riemann tensor is a quantitative characteristic of the non-commutativity of the second covariant derivatives in a Riemannian space. It also yields a quantitative description of certain other properties of Riemannian spaces — properties that distinguish them from Euclidean spaces.
The coefficients of the Riemann tensor in the local coordinate system may be expressed in terms of the Christoffel symbols and the coefficients of the metric tensor, as follows:
where is the Riemann tensor with fourth covariant index, or — in a coordinate-free notation — the mapping (where denotes the scalar product).
The Riemann tensor possesses the following symmetry properties:
which may be written in local coordinates in the form:
The Riemann tensor has algebraically independent components. The covariant derivatives of the Riemann tensor satisfy the (second) Bianchi identity:
where is the covariant derivative of with respect to . In local coordinates, this identity is
The Riemann tensor is sometimes defined with the opposite sign.
A Riemannian space is locally isometric to a Euclidean space if and only if its Riemann tensor vanishes identically.
Another, equivalent, approach is sometimes adopted with regard to describing the curvature of a Riemannian space . Let be a two-dimensional linear space in the tangent space to at a point . Then the sectional curvature of at in the direction is defined as
where and are vectors defining . The same area element may be defined by different vectors and , but is independent of the specific vectors chosen. For a two-dimensional Riemannian space, the sectional curvature coincides with the Gaussian curvature. The Riemann tensor can be expressed in terms of the sectional curvatures:
Weaker characteristics of the curvature of a Riemannian space are also used — the Ricci tensor, or Ricci curvature:
and the scalar curvature:
The Ricci tensor is symmetric: .
The curvature is sometimes characterized in terms of more complicated constructions — particularly quadratic ones — based on the Riemann tensor. One of the most common invariants of this type is
which is used in investigating the Schwarzschild gravity field.
For a two-dimensional space, the Riemann tensor is
where is the Gaussian curvature. In this case the scalar curvature is equal to . For a three-dimensional space the Riemann tensor has the form
where is the metric tensor, is the Ricci tensor and is the scalar curvature.
If the sectional curvatures are independent both of the point and of the two-dimensional direction, the space is known as a space of constant curvature; the Riemann tensor of such a space has the form (3) (the constant is then called the curvature of the space ). When it turns out that, if in all points the curvature is independent of the direction, then is a space of constant curvature (Schur's theorem).
The curvature of submanifolds.
Let be a regular surface in , let be a curve on and let be the tangent plane to at a point on . Suppose that a small neighbourhood of is projected onto the plane and let be the projection of the curve on . The geodesic curvature of the curve at is defined as the number equal in absolute value to the curvature of the curve at . The geodesic curvature is considered positive if the rotation of the tangent to as one passes through forms a right-handed screw with the direction of the normal to the surface. The geodesic curvature is an object of the intrinsic geometry of . It can be evaluated from the formula
where is the natural equation of the curve in local coordinates on , are the components of the metric tensor of in these coordinates, are the Christoffel symbols, and is the totally discriminant tensor. Using formula (4) one can define the geodesic curvature for curves on an abstract two-dimensional Riemannian space. A curve on a Riemannian manifold coincides with a geodesic or with part of a geodesic if and only if its geodesic curvature vanishes identically.
Let be a two-dimensional submanifold of a three-dimensional Riemannian space . There are two approaches to the definition of the curvature for . On the one hand, one can consider as a Riemannian space whose metric is induced by that of , and then use formula (1) to define its curvature. This yields what is called the internal curvature. On the other hand, one can carry out the same construction that gives the definition of the curvature for surfaces in a Euclidean space and apply it to submanifolds in a Riemannian space. The result is a different concept of the curvature, known as the external curvature. One has the following relationship:
where is the curvature of in the direction of the tangent plane to , and and are the internal and external curvatures, respectively.
The concepts of normal, internal and external curvatures can be generalized with respect to the dimension and codimension of the submanifold in question.
The concept of the Riemann tensor may be generalized to various spaces with a weaker structure than Riemannian spaces. For example, the Riemann and Ricci tensors depend only on the affine structure of the space and may also be defined in spaces with an affine connection, although in that case they do not possess all the symmetry properties as above. For example, . Other examples of this type are the conformal curvature tensor and the projective curvature tensor. The conformal curvature tensor (Weyl tensor) is
where the brackets denote alternation with respect to the relevant indices. Vanishing of the conformal curvature tensor is a necessary and sufficient condition for the space to coincide locally with a conformal Euclidean space. The projective curvature tensor is
where is the Kronecker symbol and is the dimension of the space. Vanishing of the projective curvature tensor is a necessary and sufficient condition for the space to coincide locally with a projective Euclidean space.
The concept of curvature generalizes to the case of non-regular objects, in particular, to the case of the theory of two-dimensional manifolds of bounded curvature. Here the curvature in a space is defined not at a point, but in a domain, and one is concerned with the total or integral curvature of a domain. In the regular case the total curvature is equal to the integral of the Gaussian curvature. The total curvature of a geodesic triangle may be expressed in terms of the angles at its vertices:
this relationship is a special case of the Gauss–Bonnet theorem. Formula (5) has been used as a basis for the definition of the total curvature in manifolds of bounded curvature.
The curvature is one of the fundamental concepts in modern differential geometry. Restrictions on the curvature usually yield meaningful information about an object. For example, in the theory of surfaces in , the sign of the Gaussian curvature defines the type of a point (elliptic, hyperbolic or parabolic). Surfaces with an everywhere non-negative Gaussian curvature share a whole spectrum of properties, by virtue of which they can be grouped together in one natural class (see , ). Surfaces with zero mean curvature (see Minimal surface) have many specific properties. The theory of non-regular surfaces especially studies classes of surfaces of bounded integral absolute Gaussian or mean curvature.
In Riemannian spaces, a uniform bound on the sectional curvatures at any point and in any two-dimensional direction makes it possible to use comparison theorems. The latter enable one to compare the rate of deviation of the geodesics and the volumes of domains in a given space with the characteristics of the corresponding curves and domains in a space of constant curvature. Some of the restrictions on even predetermine the topological structure of the space as a whole. For example:
The sphere theorem. Let be a complete simply-connected Riemannian space of dimension and let . Then is homeomorphic to the sphere .
The Hadamard–Cartan and Gromoll–Meyer theorems. Let be a complete Riemannian space of dimension . If everywhere and is simply connected, or if everywhere and is not compact, then is homeomorphic to the Euclidean space .
The concepts of curvature are utilized in various natural sciences. Thus, when a body is moving along a trajectory, there is a relationship between the curvature of the trajectory and the centrifugal force. The Gaussian curvature first appeared in Gauss' work on cartography. The mean curvature of the surface of a liquid is related to the capillary effect. In relativity theory there is a connection between the distribution of mass and energy (more precisely, between the energy-momentum tensor) and the curvature of space-time. The conformal curvature tensor is used in the theory of formation of particles in a gravitational field.
[1] P.K. Rashevskii (Rashewski), "Riemannsche Geometrie und Tensoranalyse", Deutsch. Verlag Wissenschaft. (1959) (Translated from Russian)
[2] A.V. Pogorelov, "Differential geometry", Noordhoff (1959) (Translated from Russian)
[3] W. Blaschke, "Vorlesungen über Differentialgeometrie und geometrische Grundlagen von Einsteins Relativitätstheorie. Elementare Differentialgeometrie", 1, Springer (1921)
[4] A.D. Aleksandrov, "Die innere Geometrie der konvexen Flächen", Akademie Verlag (1955) (Translated from Russian)
[5] D. Gromoll, W. Klingenberg, W. Meyer, "Riemannsche Geometrie im Grossen", Springer (1968)
[6] A.V. Pogorelov, "Extrinsic geometry of convex surfaces", Amer. Math. Soc. (1972) (Translated from Russian)
Formula (1) can be expressed in various ways, e.g., in it reads:
The first Bianchi identity is the usual name given to the fourth symmetry relation for the Riemann tensor, i.e. to . The second Bianchi identity is the relation
called the Bianchi identity above.
Such concepts as mean curvature, conformal curvature tensors, geodesic curvature, and projective curvature tensor are also defined in higher dimensional settings (than surfaces), cf. e.g. [a2] (mean curvature), [a3], (conformal and projective curvature tensors). (Cf. also Conformal Euclidean space.) The absolute value of the geodesic curvature of a curve on a surface is , where is assumed to be described by its arc length parameter (natural parameter) and is the Levi-Civita connection on the surface. For the concepts of natural parameter and natural equation of a curve, cf. Natural equation. The various fundamental (quadratic) forms of a surface are discussed in Fundamental forms of a surface; Geometry of immersed manifolds and Second fundamental form.
The sectional curvature of a Riemannian space at in the direction of the tangent plane is also called the Riemannian curvature.
Let denote the Ricci tensor and let be the quadratic form on given by at . Then the value for a unit vector is the mean of for all plane directions in containing , and is called the Ricci curvature or mean curvature of the direction at . The mean of all the is the scalar curvature at , cf. also Ricci tensor and Ricci curvature. If is a Kähler manifold and is restricted to a complex plane (i.e. a plane invariant under the almost-complex structure), then is called the holomorphic sectional curvature.
For a simply-closed space curve of length the integral is called the total curvature of ; generally , and if and only if is a closed curve lying in a plane (W. Fenchel). Fix an origin 0 in and consider the unit sphere around 0. For each point of let be the point on such that is the (displaced) unit tangent vector to at . As runs over the trace out a curve on , the spherical indicatrix of . The correspondence is called the spherical representation. The total curvature of is equal to the length of . Instead of the tangents to one can also use the principal normal and binormal vectors and perform a similar construction yielding other spherical indicatrices, cf. Spherical indicatrix.
[a1] C.C. Hsiung, "A first course in differential geometry", Wiley (1981), Chapt. 3, Sect. 4
[a2] S. Kobayashi, K. Nomizu, "Foundations of differential geometry", 2, Interscience (1969), p. 33
[a3] J.A. Schouten, "Ricci-calculus. An introduction to tensor analysis and its geometrical applications", Springer (1954), Chapt. VI (Translated from German)
Curvature. D.D. Sokolov (originator), Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Curvature&oldid=12026
The first rockets
ever built, the fire-arrows of the Chinese, were not very reliable. Many
just exploded on launching. Others flew on erratic courses and landed
in the wrong place. Being a rocketeer in the days of the fire-arrows must
have been an exciting, but also a highly dangerous activity.
Today, rockets are much more reliable. They fly on precise courses and are capable
of going fast enough to escape the gravitational pull of Earth. Modern
rockets are also more efficient today because we have an understanding
of the scientific principles behind rocketry. Our understanding has led
us to develop a wide variety of advanced rocket hardware and devise new
propellants that can be used for longer trips and more powerful takeoffs.
Rocket Engines and Their Propellants
Most rockets today operate
with either solid or liquid propellants. The word propellant does not mean
simply fuel, as you might think; it means both fuel and oxidizer. The fuel
is the chemical rockets burn but, for burning to take place, an oxidizer
(oxygen) must be present. Jet engines draw oxygen into their engines from
the surrounding air. Rockets do not have the luxury that jet planes have;
they must carry oxygen with them into space, where there is no air.
Solid rocket propellants,
which are dry to the touch, contain both the fuel and oxidizer combined
together in the chemical itself. Usually the fuel is a mixture of hydrogen
compounds and carbon and the oxidizer is made up of oxygen compounds.
Liquid propellants, which are often gases that have been chilled until
they turn into liquids, are kept in separate containers, one for the fuel
and the other for the oxidizer. Then, when the engine fires, the fuel
and oxidizer are mixed together in the engine.
A solid-propellant rocket has the simplest form of engine. It has a nozzle, a case, insulation,
propellant, and an igniter. The case of the engine is usually a relatively
thin metal that is lined with insulation to keep the propellant from burning
through. The propellant itself is packed inside the insulation layer.
Many solid-propellant rocket engines feature a hollow core that runs through the propellant.
Rockets that do not have the hollow core must be ignited at the lower
end of the propellants and burning proceeds gradually from one end of
the rocket to the other. In all cases, only the surface of the propellant
burns. However, to get higher thrust, the hollow core is used. This increases
the surface of the propellants available for burning. The propellants
burn from the inside out at a much higher rate, and the gases produced
escape the engine at much higher speeds. This gives a greater thrust.
Some propellant cores are star shaped to increase the burning surface
To fire solid propellants,
many kinds of igniters can be used. Fire-arrows were ignited by fuses,
but sometimes these ignited too quickly and burned the rocketeer. A far
safer and more reliable form of ignition used today is one that employs
electricity. An electric current, coming through wires from some distance
away, heats up a special wire inside the rocket. The wire raises the temperature
of the propellant it is in contact with to the combustion point.
Other igniters are
more advanced than the hot wire device. Some are encased in a chemical
that ignites first, which then ignites the propellants. Still other igniters,
especially those for large rockets, are rocket engines themselves. The
small engine inside the hollow core blasts a stream of flames and hot
gas down from the top of the core and ignites the entire surface area
of the propellants in a fraction of a second.
The nozzle in a
solid-propellant engine is an opening at the back of the rocket that permits
the hot expanding gases to escape. The narrow part of the nozzle is the
throat. Just beyond the throat is the exit cone.
The purpose of the
nozzle is to increase the acceleration of the gases as they leave the
rocket and thereby maximize the thrust. It does this by cutting down the
opening through which the gases can escape. To see how this works, you
can experiment with a garden hose that has a spray nozzle attachment.
This kind of nozzle does not have an exit cone, but that does not matter
in the experiment. The important point about the nozzle is that the size
of the opening can be varied.
Start with the opening
at its widest point. Watch how far the water squirts and feel the thrust
produced by the departing water. Now reduce the diameter of the opening,
and again note the distance the water squirts and feel the thrust. Rocket
nozzles work the same way.
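To put a rough number on the hose demonstration: for an idealized incompressible flow at a fixed volumetric flow rate, the continuity relation Q = A·v says that halving the opening doubles the exit speed. The values below are invented, and a real rocket nozzle handles compressible gas, so this is only an analogy.

```python
Q = 5.0e-4                                  # assumed volumetric flow rate, m^3/s
for area in (2.0e-4, 1.0e-4, 0.5e-4):       # nozzle opening areas, m^2
    v = Q / area                            # continuity: exit speed scales as 1/A
    print(f"opening = {area:.1e} m^2  ->  exit speed = {v:5.1f} m/s")
```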
As with the inside
of the rocket case, insulation is needed to protect the nozzle from the
hot gases. The usual insulation is one that gradually erodes as the gas
passes through. Small pieces of the insulation get very hot and break
away from the nozzle. As they are blown away, heat is carried away with
The other main kind
of rocket engine is one that uses liquid propellants. This is a much more
complicated engine, as is evidenced by the fact that solid rocket engines
were used for at least seven hundred years before the first successful
liquid engine was tested. Liquid propellants have separate storage tanks
- one for the fuel and one for the oxidizer. They also have pumps, a combustion
chamber, and a nozzle.
The fuel of a liquid-propellant
rocket is usually kerosene or liquid hydrogen; the oxidizer is usually
liquid oxygen. They are combined inside a cavity called the combustion
chamber. Here the propellants burn and build up high temperatures and
pressures, and the expanding gas escapes through the nozzle at the lower
end. To get the most power from the propellants, they must be mixed as
completely as possible. Small injectors (nozzles) on the roof of the chamber
spray and mix the propellants at the same time. Because the chamber operates
under high pressures, the propellants need to be forced inside. Powerful,
lightweight turbine pumps between the propellant tanks and combustion
chambers take care of this job.
With any rocket,
and especially with liquid-propellant rockets, weight is an important
factor. In general, the heavier the rocket, the more the thrust needed
to get it off the ground. Because of the pumps and fuel lines, liquid
engines are much heavier than solid engines.
One especially good
method of reducing the weight of liquid engines is to make the exit cone
of the nozzle out of very lightweight metals. However, the extremely hot,
fast-moving gases that pass through the cone would quickly melt thin metal.
Therefore, a cooling system is needed. A highly effective though complex
cooling system that is used with some liquid engines takes advantage of
the low temperature of liquid hydrogen. Hydrogen becomes a liquid when
it is chilled to -253C. Before injecting the hydrogen into the combustion
chamber, it is first circulated through small tubes that lace the walls
of the exit cone. In a cutaway view, the exit cone wall looks like the
edge of corrugated cardboard. The hydrogen in the tubes absorbs the excess
heat entering the cone walls and prevents it from melting the walls away.
It also makes the hydrogen more energetic because of the heat it picks
up. We call this kind of cooling system regenerative cooling.
Engine Thrust Control
Controlling the thrust
of an engine is very important to launching payloads (cargoes) into orbit.
Too much thrust or thrust at the wrong time can cause a satellite to be
placed in the wrong orbit or set too far out into space to be useful. Too
little thrust can cause the satellite to fall back to Earth.
Liquid-propellant engines control the thrust by varying the amount of
propellant that enters the combustion chamber. A computer in the rocket's
guidance system determines the amount of thrust that is needed and controls
the propellant flow rate. On more complicated flights, such as going to
the Moon, the engines must be started and stopped several times. Liquid
engines do this by simply starting or stopping the flow of propellants
into the combustion chamber.
Solid-propellant rockets are not as easy to control as liquid rockets. Once started, the
propellants burn until they are gone. They are very difficult to stop
or slow down part way into the burn. Sometimes fire extinguishers are
built into the engine to stop the rocket in flight. But using them is
a tricky procedure and doesn't always work. Some solid-fuel engines have
hatches on their sides that can be cut loose by remote control to release
the chamber pressure and terminate thrust.
The burn rate of
solid propellants is carefully planned in advance. The hollow core running
the length of the propellants can be made into a star shape. At first,
there is a very large surface available for burning, but as the points
of the star burn away, the surface area is reduced. For a time, less of
the propellant burns, and this reduces thrust. The Space Shuttle uses
this technique to reduce vibrations early in its flight into orbit.
Although most rockets
used by governments and research organizations are very reliable, there
is still great danger associated with the building and firing of rocket
engines. Individuals interested in rocketry should never attempt to build
their own engines. Even the simplest-looking rocket engines are very complex.
Case-wall bursting strength, propellant packing density, nozzle design,
and propellant chemistry are all design problems beyond the scope of most
amateurs. Many home-built rocket engines have exploded in the faces of their
builders with tragic consequences.
Stability and Control Systems
Building an efficient
rocket engine is only part of the problem in producing a successful rocket.
The rocket must also be stable in flight. A stable rocket is one that flies
in a smooth, uniform direction. An unstable rocket flies along an erratic
path, sometimes tumbling or changing direction. Unstable rockets are dangerous
because it is not possible to predict where they will go. They may even
turn upside down and suddenly head back directly to the launch pad.
Making a rocket
stable requires some form of control system. Controls can be either active
or passive. The difference between these and how they work will be explained
later. It is first important to understand what makes a rocket stable
All matter, regardless
of size, mass, or shape, has a point inside called the center of mass
(CM). The center of mass is the exact spot where all of the mass of that
object is perfectly balanced. You can easily find the center of mass of
an object such as a ruler by balancing the object on your finger. If the
material used to make the ruler is of uniform thickness and density, the
center of mass should be at the halfway point between one end of the stick
and the other. If the ruler were made of wood, and a heavy nail were driven
into one of its ends, the center of mass would no longer be in the middle.
The balance point would then be nearer the end with the nail.
The center of mass
is important in rocket flight because it is around this point that an
unstable rocket tumbles. As a matter of fact, any object in flight tends
to tumble. Throw a stick, and it tumbles end over end. Throw a ball, and
it spins in flight. The act of spinning or tumbling is a way of becoming
stabilized in flight. A Frisbee will go where you want it to only if you
throw it with a deliberate spin. Try throwing a Frisbee without spinning
it. If you succeed, you will see that the Frisbee flies in an erratic
path and falls far short of its mark.
In flight, spinning
or tumbling takes place around one or more of three axes. They are called
roll, pitch, and yaw. The point where all three of these axes intersect
is the center of mass. For rocket flight, the pitch and yaw axes are the
most important because any movement in either of these two directions
can cause the rocket to go off course. The roll axis is the least important
because movement along this axis will not affect the flight path. In fact,
a rolling motion will help stabilize the rocket in the same way a properly
passed football is stabilized by rolling (spiraling) it in flight. Although
a poorly passed football may still fly to its mark even if it tumbles
rather than rolls, a rocket will not. The action-reaction energy of a
football pass will be completely expended by the thrower the moment the
ball leaves the hand. With rockets, thrust from the engine is still being
produced while the rocket is in flight. Unstable motions about the pitch
and yaw axes will cause the rocket to leave the planned course. To prevent
this, a control system is needed to prevent or at least minimize unstable
In addition to center
of mass, there is another important center inside the rocket that affects
its flight. This is the center of pressure (CP). The center of pressure
exists only when air is flowing past the moving rocket. This flowing air,
rubbing and pushing against the outer surface of the rocket, can cause
it to begin moving around one of its three axes. Think for a moment of
a weather vane. A weather vane is an arrow-like stick that is mounted
on a rooftop and used for telling wind direction. The arrow is attached
to a vertical rod that acts as a pivot point. The arrow is balanced so
that the center of mass is right at the pivot point. When the wind blows,
the arrow turns, and the head of the arrow points into the on-coming wind.
The tail of the arrow points in the downwind direction.
The reason that
the weather vane arrow points into the wind is that the tail of the arrow
has a much larger surface area than the arrowhead. The flowing air imparts
a greater force to the tail than the head, and therefore the tail is pushed
away. There is a point on the arrow where the surface area is the same
on one side as the other. This spot is called the center of pressure.
The center of pressure is not in the same place as the center of mass.
If it were, then neither end of the arrow would be favored by the wind
and the arrow would not point. The center of pressure is between the center
of mass and the tail end of the arrow. This means that the tail end has
more surface area than the head end.
It is extremely
important that the center of pressure in a rocket be located toward the
tail and the center of mass be located toward the nose. If they are in
the same place or very near each other, then the rocket will be unstable
in flight. The rocket will then try to rotate about the center of mass
in the pitch and yaw axes, producing a dangerous situation. With the center
of pressure located in the right place, the rocket will remain stable.
Control systems for rockets are intended to keep a rocket stable in flight and to steer
it. Small rockets usually require only a stabilizing control system. Large
rockets, such as the ones that launch satellites into orbit, require a
system that not only stabilizes the rocket, but also enables it to change
course while in flight.
Controls on rockets
can either be active or passive. Passive controls are fixed devices that
keep rockets stabilized by their very presence on the rocket's exterior.
Active controls can be moved while the rocket is in flight to stabilize
and steer the craft.
The simplest of
all passive controls is a stick. The Chinese fire-arrows were simple rockets
mounted on the ends of sticks. The stick kept the center of pressure behind
the center of mass. In spite of this, fire-arrows were notoriously inaccurate.
Before the center of pressure could take effect, air had to be flowing
past the rocket. While still on the ground and immobile, the arrow might
lurch and fire the wrong way.
Years later, the
accuracy of fire-arrows was improved considerably by mounting them in
a trough aimed in the proper direction. The trough guided the arrow in
the right direction until it was moving fast enough to be stable on its
As will be explained
in the next section, the weight of the rocket is a critical factor in
performance and range. The fire-arrow stick added too much dead weight
to the rocket, and therefore limited its range considerably.
An important improvement
in rocketry came with the replacement of sticks by clusters of lightweight
fins mounted around the lower end near the nozzle. Fins could be made
out of lightweight materials and be streamlined in shape. They gave rockets
a dartlike appearance. The large surface area of the fins easily kept
the center of pressure behind the center of mass. Some experimenters even
bent the lower tips of the fins in a pinwheel fashion to promote rapid
spinning in flight. With these "spin fins," rockets become much more stable
in flight. But this design also produces more drag and limits the rocket's
With the start of
modern rocketry in the 20th century, new ways were sought to improve rocket
stability and at the same time reduce overall rocket weight. The answer
to this was the development of active controls. Active control systems
included vanes, movable fins, canards, gimbaled nozzles, vernier rockets,
fuel injection, and attitude-control rockets. Tilting fins and canards
are quite similar to each other in appearance. The only real difference
between them is their location on the rockets. Canards are mounted on
the front end of the rocket while the tilting fins are at the rear. In
flight, the fins and canards tilt like rudders to deflect the air flow
and cause the rocket to change course. Motion sensors on the rocket detect
unplanned directional changes, and corrections can be made by slight tilting
of the fins and canards. The advantage of these two devices is size and
weight. They are smaller and lighter and produce less drag than the large
Other active control
systems can eliminate fins and canards altogether. By tilting the angle
at which the exhaust gas leaves the rocket engine, course changes can
be made in flight. Several techniques can be used for changing exhaust
Vanes are small
finlike devices that are placed inside the exhaust of the rocket engine.
Tilting the vanes deflects the exhaust, and by action-reaction the rocket
responds by pointing the opposite way.
Another method for
changing the exhaust direction is to gimbal the nozzle. A gimbaled nozzle
is one that is able to sway while exhaust gases are passing through it.
By tilting the engine nozzle in the proper direction, the rocket responds
by changing course.
Vernier rockets can also be used to change direction. These are small rockets mounted
on the outside of the large engine. When needed they fire, producing the
desired course change.
In space, only by
spinning the rocket along the roll axis or by using active controls involving
the engine exhaust can the rocket be stabilized or have its direction
changed. Without air, fins and canards have nothing to work upon. (Science
fiction movies showing rockets in space with wings and fins are long on
fiction and short on science.) The most common kinds of active control
used in space are attitude-control rockets. Small clusters of engines
are mounted all around the vehicle. By firing the right combination of
these small rockets, the vehicle can be turned in any direction. As soon
as they are aimed properly, the main engines fire, sending the rocket
off in the new direction.
There is another important
factor affecting the performance of a rocket. The mass of a rocket can make
the difference between a successful flight and just wallowing around on
the launch pad. As a basic principle of rocket flight, it can be said that
for a rocket to leave the ground, the engine must produce a thrust that
is greater than the weight of the vehicle. The weight is equal to the mass
of the vehicle times the gravitational acceleration. It is obvious that a rocket
with a lot of unnecessary mass will not be as efficient as one that is trimmed
to just the bare essentials. For an ideal rocket, the total mass of the
vehicle should be distributed following this general formula:
- Of the total mass, 91 percent should be propellants; 3 percent should be tanks, engines, fins, etc.; and 6 percent can be the payload. Payloads may be satellites, astronauts, or spacecraft that will travel to other planets or moons.
In determining the
effectiveness of a rocket design, rocketeers speak in terms of mass fraction
(MF). The mass of the propellants of the rocket divided by the total mass
of the rocket gives mass fraction:
MF = (Mass of Propellants)/(Total Mass of the Rocket)
The mass fraction
of the ideal rocket given above is 0.91. From the mass fraction formula
one might think that an MF of 1.0 is perfect, but then the entire rocket
would be nothing more than a lump of propellants that would simply ignite
into a fireball. The larger the MF number, the less payload the rocket
can carry; the smaller the MF number, the less its range becomes. An MF
number of 0.91 is a good balance between payload-carrying capability and
range. The Space Shuttle has an MF of approximately 0.82. The MF varies
between the different orbiters in the Space Shuttle fleet and with the
different payload weights of each mission.
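A minimal mass-fraction calculation, using invented masses in roughly the "ideal rocket" proportions described above:

```python
def mass_fraction(propellant_mass, structure_mass, payload_mass):
    """MF = propellant mass / total vehicle mass."""
    total = propellant_mass + structure_mass + payload_mass
    return propellant_mass / total

# Hypothetical vehicle: 91 t propellant, 3 t tanks/engines/fins, 6 t payload.
print(f"MF = {mass_fraction(91.0, 3.0, 6.0):.2f}")   # 0.91, the balance described above
```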
Large rockets, able
to carry a spacecraft into space have serious weight problems. To reach
space and proper orbital velocities, a great deal of propellant is needed;
therefore, the tanks, engines, and associated hardware become larger.
Up to a point, bigger rockets fly farther than smaller rockets, but when
they become too large their structures weigh them down too much, and the
mass fraction is reduced to an impossible number.
A solution to the
problem of giant rockets weighing too much can be credited to the 16th-century
fireworks maker Johann Schmidlap. Schmidlap attached small rockets to
the top of big ones. When the large rocket was exhausted, the rocket casing
was dropped behind and the remaining rocket fired. Much higher altitudes
were achieved by this method. (The Space Shuttle follows the step rocket
principle by dropping off its solid rocket boosters and external tank
when they are exhausted of propellants.) The rockets used by Schmidlap
were called step rockets. Today this technique of building a rocket is
called staging. Thanks to staging, it has become possible not only to
reach outer space but the Moon and other planets too.
Physics I For Dummies
Physics involves a lot of calculations and problem solving. Having on hand the most frequently used physics equations and formulas helps you perform these tasks more efficiently and accurately. This Cheat Sheet also includes a list physics constants that you’ll find useful in a broad range of physics problems.
Physics Equations and Formulas
Physics is filled with equations and formulas that deal with angular motion, Carnot engines, fluids, forces, moments of inertia, linear motion, simple harmonic motion, thermodynamics, and work and energy.
Here’s a list of some important physics formulas and equations to keep on hand — arranged by topic — so you don’t have to go searching to find them.
Angular motion
Equations of angular motion are relevant wherever you have rotational motions around an axis. When the object has rotated through an angle of θ with an angular velocity of ω and an angular acceleration of α, then you can use these equations to tie these values together.
You must use radians to measure the angle. Also, if you know that the distance from the axis is r, then you can work out the linear distance traveled, s, velocity, v, centripetal acceleration, ac, and force, Fc. When an object with moment of inertia, I (the angular equivalent of mass), has an angular acceleration, α, then there is a net torque Στ.
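The equation images from the original cheat sheet do not appear to have survived here, so the sketch below uses the standard constant-angular-acceleration relations the paragraph describes (ω = ω0 + αt, θ = ω0t + ½αt², s = rθ, v = rω, ac = v²/r, Fc = m·ac); treat it as an illustration rather than a reproduction of the original formulas.

```python
def angular_motion(omega0, alpha, t, r, m):
    """Constant angular acceleration, plus the linear quantities at radius r."""
    omega = omega0 + alpha * t                  # angular velocity, rad/s
    theta = omega0 * t + 0.5 * alpha * t**2     # angle swept, rad
    s = r * theta                               # linear distance traveled
    v = r * omega                               # linear speed
    a_c = v**2 / r                              # centripetal acceleration
    F_c = m * a_c                               # centripetal force
    return omega, theta, s, v, a_c, F_c

print(angular_motion(omega0=0.0, alpha=2.0, t=3.0, r=0.5, m=1.2))
```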
Carnot engines
A heat engine takes heat, Qh, from a high temperature source at temperature Th and moves it to a low temperature sink (temperature Tc) at a rate Qc and, in the process, does mechanical work, W. (This process can be reversed such that work can be performed to move the heat in the opposite direction — a heat pump.) The amount of work performed in proportion to the amount of heat extracted from the heat source is the efficiency of the engine. A Carnot engine is reversible and has the maximum possible efficiency, given by the following equations. The equivalent of efficiency for a heat pump is the coefficient of performance.
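Again, the displayed equations seem to be missing; the standard Carnot relations are efficiency = W/Qh = 1 − Tc/Th (temperatures in kelvins) and, for the reversed cycle, a coefficient of performance such as COP = Qc/W = Tc/(Th − Tc) for cooling. The numbers below are arbitrary.

```python
def carnot_efficiency(t_hot, t_cold):
    """Maximum efficiency of a reversible engine; temperatures in kelvins."""
    return 1.0 - t_cold / t_hot

def carnot_cop_cooling(t_hot, t_cold):
    """Coefficient of performance of an ideal reversed-Carnot refrigerator."""
    return t_cold / (t_hot - t_cold)

t_hot, t_cold = 500.0, 300.0
eff = carnot_efficiency(t_hot, t_cold)
q_hot = 1000.0                       # heat drawn from the hot source, J (assumed)
work = eff * q_hot                   # W = efficiency * Qh
print(f"efficiency = {eff:.2f}, work out = {work:.0f} J, heat rejected = {q_hot - work:.0f} J")
print(f"cooling COP = {carnot_cop_cooling(t_hot, t_cold):.1f}")
```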
Fluids
A volume, V, of fluid with mass, m, has density ρ = m/V. A force, F, over an area, A, gives rise to a pressure P = F/A. The pressure of a fluid at a depth of h depends on the density and the gravitational constant, g: P = ρgh. An object immersed in a fluid displaces a weight of fluid, W_water displaced, which gives rise to an upward-directed buoyancy force, F_buoyancy = W_water displaced. Because of the conservation of mass, the volume flow rate of a fluid moving with velocity, v, through a cross-sectional area, A, is constant: A1v1 = A2v2. Bernoulli’s equation relates the pressure and speed of a fluid: P + (1/2)ρv^2 + ρgh = constant.
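A short numerical sketch of the pressure-at-depth and continuity relations (an added illustration; density, depth, and pipe areas are assumed values):

# Pressure at depth, and the continuity equation for an incompressible fluid.
rho = 1000.0     # density of water, kg/m^3
g = 9.8          # gravitational acceleration, m/s^2
h = 2.0          # depth, m (assumed)

gauge_pressure = rho * g * h            # P = rho*g*h -> 19,600 Pa at 2 m depth

# Continuity: A1*v1 = A2*v2, so a narrower pipe means faster flow.
A1, v1, A2 = 0.10, 1.5, 0.05            # areas in m^2, speed in m/s (assumed)
v2 = A1 * v1 / A2                       # -> 3.0 m/s

print(gauge_pressure, v2)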
Forces
A mass, m, accelerates at a rate, a, due to a force, F, acting: F = ma. Frictional forces, FF, are in proportion to the normal force between the materials, FN, with a coefficient of friction, μ: FF = μFN. Two masses, m1 and m2, separated by a distance, r, attract each other with a gravitational force in proportion to the gravitational constant G:
F = Gm1m2/r^2
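A brief sketch of those force relations with made-up values:

# Newton's second law, friction, and universal gravitation.
G = 6.674e-11          # gravitational constant, N*m^2/kg^2

m, a = 2.0, 3.0
F = m * a              # F = ma -> 6 N

mu, F_normal = 0.4, 19.6
F_friction = mu * F_normal           # F_F = mu * F_N -> 7.84 N

m1, m2, r = 5.97e24, 1000.0, 6.37e6  # roughly Earth and a 1000 kg object at its surface
F_gravity = G * m1 * m2 / r**2       # about 9.8e3 N, i.e. the object's weight

print(F, F_friction, F_gravity)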
Moments of inertia
The rotational equivalent of mass is the moment of inertia, I, which depends on how an object’s mass is distributed through space. The moments of inertia for various shapes are shown here:
Disk rotating around its center: I = (1/2)mr^2
Hollow cylinder rotating around its center: I = mr^2
Hollow sphere rotating around an axis through its center: I = (2/3)mr^2
Hoop rotating around its center: I = mr^2
Point mass rotating at radius r: I = mr^2
Rectangle rotating around an axis along one edge where the other edge is of length r: I = (1/3)mr^2
Rectangle rotating around an axis parallel to one edge and passing through the center, where the length of the other edge is r: I = (1/12)mr^2
Rod of length L rotating around an axis perpendicular to it and through its center: I = (1/12)mL^2
Rod of length L rotating around an axis perpendicular to it and through one end: I = (1/3)mL^2
Solid cylinder, rotating around an axis along its center line: I = (1/2)mr^2
The kinetic energy of a rotating body, with moment of inertia, I, and angular velocity, ω: KE = (1/2)Iω^2
The angular momentum of a rotating body with moment of inertia, I, and angular velocity, ω: L = Iω
Linear motion
When an object at position x moves with velocity, v, and acceleration, a, resulting in displacement, s, each of these components is related by the following equations (v0 is the initial velocity and t the elapsed time):
v = v0 + at
s = v0t + (1/2)at^2
v^2 = v0^2 + 2as
Simple harmonic motion
Particular kinds of force result in periodic motion, where the object repeats its motion with a period, T, having an angular frequency, ω, and amplitude, A. One example of such a force is provided by a spring with spring constant, k. The position, x, velocity, v, and acceleration, a, of an object undergoing simple harmonic motion can be expressed as sines and cosines:
x = A cos(ωt)
v = −Aω sin(ωt)
a = −Aω^2 cos(ωt)
where ω = 2π/T; for a mass m on a spring, ω = sqrt(k/m).
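A minimal sketch of simple harmonic motion for a mass on a spring (spring constant, mass, and amplitude are arbitrary assumed values):

import math

# Mass on a spring: omega = sqrt(k/m), period T = 2*pi/omega,
# x(t) = A*cos(omega*t), v(t) = -A*omega*sin(omega*t), a(t) = -A*omega**2*cos(omega*t).
k, m, A = 40.0, 0.5, 0.1     # N/m, kg, m (assumed)
omega = math.sqrt(k / m)
T = 2 * math.pi / omega

t = 0.25
x = A * math.cos(omega * t)
v = -A * omega * math.sin(omega * t)
a = -A * omega**2 * math.cos(omega * t)

print(T, x, v, a)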
Thermodynamics
The random vibrational and rotational motions of the molecules that make up an object or substance have energy; this energy is called thermal energy. When thermal energy moves from one place to another, it’s called heat, Q. When an object receives an amount of heat, its temperature, T, rises.
Kelvin (K), Celsius (C), and Fahrenheit (F) are temperature scales. You can use these formulas to convert from one temperature scale to another:
K = C + 273.15
C = (5/9)(F − 32)
F = (9/5)C + 32
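The same conversions as a small code sketch (standard formulas):

# Temperature-scale conversions.
def celsius_to_kelvin(c):
    return c + 273.15

def celsius_to_fahrenheit(c):
    return 9.0 / 5.0 * c + 32.0

def fahrenheit_to_celsius(f):
    return 5.0 / 9.0 * (f - 32.0)

print(celsius_to_kelvin(25.0))        # 298.15 K
print(celsius_to_fahrenheit(100.0))   # 212.0 F
print(fahrenheit_to_celsius(32.0))    # 0.0 C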
The heat required to cause a change in temperature of a mass, m, increases with a constant of proportionality, c, called the specific heat capacity: Q = mcΔT. In a bar of material with a cross-sectional area A, length L, thermal conductivity k, and a temperature difference across the ends of ΔT, there is a heat flow over a time, t, given by this formula: Q = kAΔT t/L.
The pressure, P, and volume, V, of n moles of an ideal gas at temperature T is given by this formula, where R is the gas constant: PV = nRT.
In an ideal gas, the average energy of each molecule, KE_avg, is in proportion to the temperature, with the Boltzmann constant k: KE_avg = (3/2)kT.
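A brief sketch tying the last two formulas together (the gas amount, temperature, and volume are arbitrary):

# Ideal gas law PV = nRT and the average molecular kinetic energy (3/2)kT.
R = 8.314          # gas constant, J/(mol*K)
k = 1.38e-23       # Boltzmann constant, J/K

n, T, V = 1.0, 300.0, 0.0224    # 1 mol at 300 K in 22.4 L (assumed)
P = n * R * T / V               # pressure in pascals, ~1.1e5 Pa

KE_avg = 1.5 * k * T            # average kinetic energy per molecule, ~6.2e-21 J

print(P, KE_avg)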
Work and energy
When a force, F, moves an object through a distance, s, which is at an angle of Θ to the force, then work, W, is done: W = Fs cos Θ. Momentum, p, is the product of mass, m, and velocity, v: p = mv. The energy that an object has on account of its motion is called kinetic energy: KE = (1/2)mv^2.
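And one last sketch for the work, momentum, and kinetic-energy formulas (values are arbitrary):

import math

F, s, theta_deg = 20.0, 5.0, 30.0     # force in N, distance in m, angle in degrees (assumed)
W = F * s * math.cos(math.radians(theta_deg))   # work done, ~86.6 J

m, v = 2.0, 3.0
p = m * v                      # momentum, 6 kg*m/s
KE = 0.5 * m * v**2            # kinetic energy, 9 J

print(W, p, KE)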
A List of Physics Constants
Physics constants are physical quantities with fixed numerical values. The following list contains the most common physics constants, including Avogadro’s number, Boltzmann’s constant, the mass of electron, the mass of a proton, the speed of light, the gravitational constant, and the gas constant.
Mass of electron: 9.11 x 10^-31 kg
Mass of proton: 1.67 x 10^-27 kg
Speed of light: 3.00 x 10^8 m/s | http://www.dummies.com/how-to/content/physics-i-for-dummies-cheat-sheet.html | 13
56 | A vector processor is a processor that can operate on entire vectors with one instruction, i.e. the operands of some instructions specify complete vectors. For example, consider the following add instruction:
C = A + B
In both scalar and vector machines this means ``add the contents
of A to the contents of B and put the sum in C.'' In a scalar
machine the operands are numbers, but in vector processors the
operands are vectors and the instruction directs the machine to
compute the pairwise sum of each pair of vector elements. A
processor register, usually called the vector length register,
tells the processor how many individual additions to perform
when it adds the vectors.
A vectorizing compiler is a compiler that will try to recognize when loops can be transformed into single vector instructions. For example, the following loop can be executed by a single instruction on a vector processor:
      DO 10 I=1,N
        A(I) = B(I) + C(I)
 10   CONTINUE
This code would be translated into an instruction that would
set the vector length register to N, followed by a vector add
instruction.
The use of vector instructions pays off in two different ways. First, the machine has to fetch and decode far fewer instructions, so the control unit overhead is greatly reduced and the memory bandwidth necessary to perform this sequence of operations is reduced a corresponding amount. The second payoff, equally important, is that the instruction provides the processor with a regular source of data. When the vector instruction is initiated, the machine knows it will have to fetch pairs of operands which are arranged in a regular pattern in memory. Thus the processor can tell the memory system to start sending those pairs. With an interleaved memory, the pairs will arrive at a rate of one per cycle, at which point they can be routed directly to a pipelined data unit for processing. Without an interleaved memory or some other way of providing operands at a high rate the advantages of processing an entire vector with a single instruction would be greatly reduced.
A key division of vector processors arises from the way the instructions access their operands. In the memory to memory organization the operands are fetched from memory and routed directly to the functional unit. Results are streamed back out to memory as the operation proceeds. In the register to register organization operands are first loaded into a set of vector registers, each of which can hold a segment of a vector, for example 64 elements. The vector operation then proceeds by fetching the operands from the vector registers and returning the results to a vector register.
The advantage of memory to memory machines is the ability to process very long vectors, whereas register to register machines must break long vectors into fixed length segments. Unfortunately, this flexibility is offset by a relatively large overhead known as the startup time, which is the time between the initialization of the instruction and the time the first result emerges from the pipeline. The long startup time on a memory to memory machine is a function of memory latency, which is longer than the time it takes to access a value in an internal register. Once the pipeline is full, however, a result is produced every cycle or perhaps every other cycle. Thus a performance model for a vector processor is of the form
t = s + kN, where s is the startup time, N is the length of the vector, and k is an instruction-dependent constant, usually 1/2, 1 or 2.
Examples of this type of architecture include the Texas Instruments Inc. Advanced Scientific Computer and a family of machines built by Control Data Corp. known first as the Cyber 200 series and later the ETA-10 when Control Data Corp. founded a separate company known as ETA Systems Inc. These machines appeared in the mid 1970s after a long development cycle that left them with dated technology and disappeared in the mid 1980s. For a thorough discussion of their characteristics, see Hockney and Jesshope . One of the major reasons for their demise was the large startup time, which was on the order of 100 processor cycles. This meant that short vector operations were very inefficient, and even for vectors of length 100 the machines were delivering only about half their maximum performance. In a later section we will see how this vector length that yields half of peak performance is used to characterize vector computers.
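As a sketch of that performance model (assuming the reconstructed form t = s + kN cycles), the "half-performance" vector length mentioned above falls out directly: performance is N/(s + kN) results per cycle, which reaches half of the asymptotic peak 1/k when N = s/k. The startup value below is only an assumed, round illustration of the ~100-cycle figure quoted in the text.

# Simple model of vector execution time: t = s + k*N cycles.
def exec_cycles(n, startup, k):
    return startup + k * n

def results_per_cycle(n, startup, k):
    return n / exec_cycles(n, startup, k)

startup, k = 100, 1          # assumed startup ~100 cycles, one result per cycle
n_half = startup / k         # vector length giving half of peak performance

print(results_per_cycle(n_half, startup, k))   # 0.5, i.e. half of the peak 1/k
print(results_per_cycle(1000, startup, k))     # ~0.91 of peak for long vectors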
In the register to register machines the vectors have a relatively short length, 64 in the case of the Cray family, but the startup time is far less than on the memory to memory machines. Thus these machines are much more efficient for operations involving short vectors, but for long vector operations the vector registers must be loaded with each segment before the operation can continue. Register to register machines now dominate the vector computer market, with a number of offerings from Cray Research Inc., including the Y-MP and the C-90. The approach is also the basis for machines from Fujitsu, Hitachi and NEC. Clock cycles on modern vector processors range from 2.5ns (NEC SX-3) to 4.2ns (Cray C90), and single processor performance on LINPACK benchmarks is in the range of 1000 to 2000 MFLOPS (1 to 2 GFLOPS).
The basic processor architecture of the Cray supercomputers has changed little since the Cray-1 was introduced in 1976 . There are 8 vector registers, named V0 through V7, which each hold 64 64-bit words. There are also 8 scalar registers, which hold single 64-bit words, and 8 address registers (for pointers) that have 20-bit words. Instead of a cache, these machines have a set of backup registers for the scalar and address registers; transfer to and from the backup registers is done under program control, rather than by lower level hardware using dynamic memory referencing patterns.
The original Cray-1 had 12 pipelined data processing units; newer Cray systems have 14. There are separate pipelines for addition, multiplication, computing reciprocals (to divide by a number, a Cray computes its reciprocal and then multiplies), and logical operations. The cycle time of the data processing pipelines is carefully matched to the memory cycle times. The memory system delivers one value per clock cycle through the use of 4-way interleaved memory.
An interesting feature introduced in the Cray computers is the notion of vector chaining. Consider the following two vector instructions:
V2 = V0 * V1
V4 = V2 + V3
The output of the first instruction is one of the operands of
the second instruction. Recall that since these are vector
instructions, the first instruction will route up to 64 pairs
of numbers to a pipelined multiplier. About midway through the
execution of this instruction, the machine will be in an
interesting state: the first few elements of V2 will contain
recently computed products; the products that will eventually
go into the next elements of V2 are still in the multiplier
pipeline; and the remainder of the operands are still in V0
and V1, waiting to be fetched and routed to the pipeline. This
situation is shown in the figure, where the operands from V0
and V1 that are currently in the multiplier pipeline are
indicated by gray cells. At this point, the system is fetching
V0[k] and V1[k] to route them to the first stage of the
pipeline and V2[j] is just leaving the pipeline. Vector
chaining relies on the path marked with an asterisk. While
V2[j] is being stored in the vector register, it is also
routed directly to the pipelined adder, where it is matched
with V3[j]. As the figure shows, the second instruction can
begin even before the first finishes, and while both are
executing the machine is producing two results per cycle (one
product and one sum) instead of just one.
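A rough numeric sketch of what chaining buys, using the Cray-1 cycle time quoted in the next paragraph (an illustration only, not a cycle-accurate model):

# Peak floating-point rate = (results per cycle) / (cycle time).
cycle_time_s = 12.5e-9        # Cray-1 clock period, 12.5 ns

peak_unchained = 1 / cycle_time_s     # one pipeline: 80e6 results/s
peak_chained = 3 / cycle_time_s       # three chained pipelines: 240e6 results/s

print(f"{peak_unchained / 1e6:.0f} MFLOPS without chaining")
print(f"{peak_chained / 1e6:.0f} MFLOPS with three chained pipelines")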
Without vector chaining, the peak performance of the Cray-1 would have been 80 MFLOPS (one full pipeline producing a result every 12.5ns, or 80,000,000 results per second). With three pipelines chained together, there is a very short burst of time where all three are producing results, for a theoretical peak performance of 240 MFLOPS. In principle vector chaining could be implemented in a memory-to-memory vector processor, but it would require much higher memory bandwidth to do so. Without chaining, three ``channels'' must be used to fetch two input operand streams and store one result stream; with chaining, five channels would be needed for three inputs and two outputs. Thus the ability to chain operations together to double performance gave register-to-register designs another competitive edge over memory-to-memory designs. | http://www.phy.ornl.gov/csep/ca/node24.html | 13
56 | Return to my Mathematics pages
Go to my home page
© Copyright 1999, Jim Loy
At the left we see compasses and straightedge. The compasses are used for drawing arcs and for duplicating lengths. The straight edge is used (with a pencil or pen) to draw straight lines. There are no marks on this straight edge (or we ignore the marks that are there). So we do not use the straight edge to measure lengths. These are the only tools used to do geometric constructions. And Euclid provided many proofs concerning how to do things with these tools. Since then, mathematicians have also shown that certain things cannot be done with these tools. See Trisection Of An Angle.
To the right are a few of the things that can be rather easily done with these tools. I have constructed an equilateral triangle on a given base. I have bisected a line segment (a similar construction is to draw a perpendicular line at a given point, either on the segment or not on the segment). And I have bisected an angle. We can also duplicate an angle, divide a line segment into any number of equal-length segments, and quite a few other things.
Normally, we copy a length by moving the compasses to make two points on an already drawn line (See Collapsible Compasses). Similarly, we copy an angle by copying an arc with that central angle. To draw parallel lines, we duplicate an angle that both lines make with a transversal (line that intersects both parallels), or in other ways. We can construct squares, rectangles, parallelograms, and triangles meeting various conditions.
The diagram on the left shows how to trisect a line segment (AB). Through A, draw an arbitrary line. On this line mark off an arbitrary segment, starting at A. Duplicate this segment twice, producing a trisected segment AC (see the diagram). Draw the line BC. Through the other two points on the trisected segment, draw lines parallel to BC. These two lines trisect the segment AB. You can use the same method to divide a segment into any number of congruent smaller segments.
It is easy to construct a triangle given the lengths of the three sides (diagram on the right), or two sides and the included angle, or two angles and the included side. Given the ease of these constructions, it seems strange that the corresponding congruence "theorems" (Congruence Of Triangles, Part I and Congruence Of Triangles, Part II) cannot be proven, except by slightly shady means (superposition).
See The Centers of a Triangle for clues on how to inscribe a circle in a triangle (bisect two of the angles), and circumscribe a circle about a triangle (perpendicular bisect two of the sides). The proofs of these methods are surprisingly simple.
One of the things that we cannot legally do is to draw a line to a circle (or to a line, for that matter). We can only draw lines through two points, so if you want to draw a line to a circle, you must first find the proper point on the circle, and then draw the line. The attempted construction of parallel lines on the left is therefore illegal; we need to draw some perpendicular lines to find two points, and then draw the line parallel to the other line. This is just one of the unstated, but reasonable rules which limit our constructions.
Euclid demonstrated other constructions: Construct the circle that goes through three given points (diagram on the left, draw two line segments using these points and the center is where the perpendicular bisectors of these segments intersect). Through a given point outside a circle, draw a tangent to the circle (one way is to draw a circle whose diameter is the segment from the given circle's center and the given point). A similar construction is to draw a line tangent to two circles.
Also see these pages concerning constructions:
On the left is a construction of the Golden Rectangle and the Golden Ratio. On the right is one way to construct the square root (or square) of a length (The Pythagorean Theorem can also be used to construct the square root, or square of any given length). Other proportions and products can be constructed in similar ways, and in other ways (with similar triangles, for example).
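As a numeric check of a square-root construction (this describes the standard semicircle/geometric-mean method, which may or may not be the exact figure shown on this page): lay a unit segment and a segment of length x end to end, draw a semicircle on the combined segment as diameter, and the perpendicular raised at the joint meets the circle at height sqrt(x).

import math

# Altitude-on-hypotenuse relation: the perpendicular at the joint of the
# two segments (lengths 1 and x) meets the semicircle at height sqrt(1 * x).
def constructed_height(x):
    diameter = 1.0 + x
    radius = diameter / 2.0
    d = radius - 1.0                      # distance from circle center to the joint
    return math.sqrt(radius**2 - d**2)    # height above the diameter

for x in (2.0, 5.0, 9.0):
    print(constructed_height(x), math.sqrt(x))   # the two columns agree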
On the left is the construction of two of the four tangents to two non-intersecting circles. The tangents are parallel to other more easily constructed lines. Two intersecting circles have two common tangents, and the construction is similar. And if one circle is inside the other, there are no common tangents.
It is easy to double a square (construct a square with double the area). Just use the diagonal as the side of the bigger square. In the diagram, I have doubled the blue square, producing the yellow square.
Given three points, construct the three circles with centers at the three points, and externally tangent to the other circles. In the diagram I have drawn a triangle. The radius of each circle is the semiperimeter (half the sum of the lengths of the sides of the triangle) minus the length of the opposite side. That fact (or something similar) makes the construction easy.
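A quick numeric sketch of the fact used in that construction, namely that each radius equals the semiperimeter minus the opposite side, so the radii of any two circles sum to the side joining their centers (the side lengths below are arbitrary):

# Three mutually (externally) tangent circles centered at a triangle's vertices.
a, b, c = 7.0, 8.0, 9.0          # side lengths opposite vertices A, B, C (assumed)
s = (a + b + c) / 2.0            # semiperimeter

r_A, r_B, r_C = s - a, s - b, s - c

# r_A + r_B equals side c = AB, and so on for the other pairs.
print(r_A + r_B == c, r_B + r_C == a, r_A + r_C == b)   # True True True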
| http://www.jimloy.com/geometry/construc.htm | 13
65 | Food provides an
organism the materials it needs for energy,
growth, repair, and reproduction. These materials are called
nutrients. They can be divided into two main groups: macronutrients
and micronutrients. Macronutrients, or nutrients required
in large amounts, include carbohydrates, fats, and proteins.
These are the foods used for energy, reproduction, repair
and growth. Micronutrients, or nutrients required
in only small amounts, include vitamins and minerals. Most
foods contain a combination of the two groups. A balanced diet must contain all the
essential nutritional elements such as proteins, carbohydrates,
fats, vitamins, minerals, and water. If a diet is deficient
in any of these nutrients, health becomes impaired and diseases
may be the result.
Although not a nutrient per se, water is essential to the body
for the maintenance of intracellular and extracellular fluids.
It is the medium in which digestion and absorption take
place, nutrients are transported to cells and metabolic
waste products are removed. The quality of water provided
to reptiles should be of utmost concern. Water and “soft
foods” (foods containing >20% moisture) are frequently implicated
in exposures to high concentrations of bacteria. An open
water container that becomes contaminated with fecal material
or food will promote rapid bacterial proliferation. In water
containing added vitamins, there can be a 100-fold increase
in the bacterial count in 24 hours. Changing the water and
rinsing the container will obviously decrease the bacterial
load, but an active biofilm remains on the container walls
unless it is disinfected or washed thoroughly. Contamination
in the water container, in addition to the aqueous medium
and compatible environmental temperatures, provide all the
requirements for microorganisms to thrive. Likewise, high-moisture
foods such as canned foods, paste foods, sprouts, fruits
and vegetables provide excellent growth media for microorganisms.
At warm environmental temperatures, these types of foods
can become contaminated in as little as four hours. Water
intake will be greatly influenced by the type of diet provided.
Most reptiles can derive the majority of their water requirement
from foodstuffs when the diet consists primarily of fruits,
vegetables or moist foods. Processed diets tend to increase
the reptile’s water intake because they generally are dry,
lower in fat and tend to have overall higher nutrient levels.
Slightly moister feces are often observed in reptiles on
a formulated diet.
Protein is the group of highly complex
organic compounds found in all living cells. Protein is
the most abundant class of all biological molecules, comprising
about 50% of cellular dry weight. Proteins are large molecules
composed of one or more chains of varying amounts of the
same 22 amino acids, which are linked by peptide bonds.
Protein chains may contain hundreds of amino acids; some
proteins also incorporate phosphorus or such metals as iron,
zinc, and copper. There are two general classes of proteins.
Most are Globular (functional) like enzymes (see 2.3.2),
carrier proteins, storage proteins, antibodies and certain
hormones. The other proteins, the structural proteins,
help organize the structure of tissues and organs, and give
them strength and flexibility. Some of these structural
proteins are long and fibrous. Dietary protein is food that contains
the amino acids necessary to construct the proteins described
above. Complete proteins, generally of animal origin, contain
all the amino acids necessary for reptile growth and maintenance;
incomplete proteins, from plant sources such as grains,
legumes, and nuts, lack sufficient amounts of one or more
essential amino acid.
Fibrous or structural proteins
The protein collagen is the most
important building block in the entire animal world.
More than a third of the reptile's protein is collagen
and makes up 75% of the skin. It controls cell shape.
Vitamin C is required for the production
and maintenance of collagen, a protein substance that
forms the base of connective tissues in the body such
as bones, teeth, skin, and tendons. Collagen is the
protein that helps heal wounds, mend fractures, and
support capillaries in order to prevent bruises.
Collagen is the fibrous protein constituent of skin,
cartilage, bone, and other connective tissue.
Keratin produces strong and elastic
tissue, the protein responsible for forming scales and
claws in reptiles. These scales protect the reptile
body from the effects of both water and sun, and prevent
them from drying out.
Elastin is the structural protein
that gives elasticity to the tissues and organs. Elastin
found predominantly in the walls of arteries, in the
lungs, intestines, and skin, as well as in other elastic
tissues. It functions in connective tissue in partnership
with collagen. Whereas collagen provides rigidity, elastin
is the protein, which allows the connective tissues
in blood vessels and heart tissues, for example, to
stretch and then recoil to their original positions.
Hemoglobin is the respiratory pigment
found in the red blood. It is produced in the bone marrow
and carries oxygen from the lungs to the body. An inadequate
amount of circulating hemoglobin results in a condition called anemia.
Myoglobin is closely related to hemoglobin.
It transports oxygen to the muscle tissues. It is a
small, bright red protein which function is to store
an oxygen molecule (O2), which is released
during periods of oxygen deprivation, when the muscles
are hard at work.
Albumins are widely distributed in
plant and animal tissues, e.g., ovalbumin of egg, lactalbumin
of milk, and leucosin of wheat. Some contain carbohydrates.
Normally constituting about 55% of the plasma proteins,
albumins adhere chemically to various substances in
the blood, e.g., amino acids, and thus play a role in
their transport. Albumins and other blood proteins aid
in regulating the distribution of water in the body.
Actin & myosin
Actin is found in the muscle tissue
together with myosin where they interact to make muscle
contraction possible. Actin and myosin are also found
in all cells to bring about cell movement.
Ferritin is found predominantly in
the tissue of the liver, used for the storage of iron.
Transferrin binds with iron and transports it
for storage in the liver, or to bone marrow, where it
is used for the formation of hemoglobin.
Ferredoxin acts as a transport protein
for electrons involved in photosynthesis.
Carbohydrates supply a reptile with the energy it needs to function. They
are found almost exclusively in plant foods, such as fruits,
vegetables, peas, and beans. Milk and milk products are
the only foods derived from animals that contain a significant
amount of carbohydrates. Carbohydrates
are divided into two groups, simple carbohydrates and complex
carbohydrates. Carbohydrates are the main source of blood glucose, which is a major fuel
for all of the cells and the only source of energy for the
brain and red blood cells. Except for fibre, which cannot
be digested, both simple and complex carbohydrates are converted
into glucose. The glucose is then either used directly to
provide energy, or stored in the liver for future use. Carbohydrates
take less oxygen to metabolize than protein or fat.
Simple carbohydrates (monosaccharide)
Simple carbohydrates include fructose, sucrose, and glucose,
as well as several other sugars. Fruits are one of the
richest natural sources of simple carbohydrates.
Simple carbohydrates are the main source of blood glucose,
which is a major energy source and the only source for the
brain and red blood cells. When a reptile consumes more
single carbohydrates than it uses, a portion may be
stored as fat.
Complex carbohydrates are also made up of sugars, but the sugar
molecules are strung together to form longer, more complex
chains. Complex carbohydrates include fibre, cellulose,
starch, glycogen, etc.
Glycogen occurs in animal tissues, especially in the liver and
muscle cells. It is the major store of carbohydrate
energy in animal cells.
The liver removes glucose from the blood when blood
glucose levels are high. Through a process called glycogenesis,
the liver combines the glucose molecules in long chains
to create glycogen. When the amount of glucose in the
blood falls below the level required, the liver reverses
this reaction, transforming glycogen into glucose.
Starch consists of two glucose polymers, amylose and amylopectin.
It occurs widely in plants, especially in roots, seeds,
and fruits, as a carbohydrate energy store. Starch is
therefore a major energy source; when digested it is ultimately
broken down into glucose. Fibre is the part of plants and insects that is resistant to
digestive enzymes. As a result, only a relatively small
amount of fibre is digested or metabolized in the stomach
or intestines. Instead, most of it moves through the gastrointestinal
tract and ends up in the faeces.
Although most fibre is not digested, it delivers several
important benefits. Fibre retains water, resulting in softer
and bulkier faeces that prevent constipation, and fibre
binds with certain substances and eliminates them from
the body. Dietary fibre falls into four groups: celluloses,
hemicelluloses, lignins, and pectins.
is the major substance of the cell wall of all plants,
algae and some fungi. With some exceptions among insects
(see chitin), true cellulose is not found in animal tissues.
Chitin, a form of cellulose, naturally occurs in the exoskeleton
of insects. It speeds the transit of foods through the
digestive system and promotes the growth of beneficial
bacteria in the intestines. Chitin can thereby improve
digestion, cleanse the colon, and prevent diarrhea and
constipation. Chitin is known to differ from other polysaccharides
in that it has a strong positive charge that lets it
chemically bond with fats. Because it is mostly indigestible,
it can then prevent lipids from being absorbed in the
digestive tract. Chitosan is derivative from chitin,
more soluble in water.
Fat is an essential nutrient and plays
several vital roles. Fat insulates internal organs and nerves,
carries fat-soluble vitamins throughout the body, helps
repair damaged tissue and fight infections, and provides
a source of energy. Fats are the way a reptile stores up
energy. Fats are stored in adipose tissues. These tissues
are situated under the skin, around the kidneys and mainly
in the tail (squamata and crocodilia). Amphibians store
fat in an organ attached to the kidneys, the fat body. Some
reptiles and amphibians depend on their fat stores during
hibernation. During growth, fat is necessary for normal
brain development. Throughout life, it is essential to provide
energy. Fat is, in fact, the most concentrated source of
energy available. However, adult animals require only small
amounts of fat, much less than is provided by the average
diet. Fats are composed of building blocks
called fatty acids. There are three major categories of fatty
acids: saturated, unsaturated, and polyunsaturated.
Amino acids are the basic chemical building
blocks of life, needed to build all the vital proteins,
hormones and enzymes required by all living organisms, from
the smallest bacterium to the largest mammal. Proteins are
needed to perform a host of vital functions, and can only
exist when an organism has access to amino acids that can
be combined into long molecular chains. An organism is continuously at work,
breaking dietary proteins down into individual amino acids,
and then reassembling these amino acids into new structures.
Amino acids are linked together to form proteins and enzymes.
Reptiles to construct muscles, bones, organs, glands, connective
tissues, nails, scales and skin use these proteins. Amino
acids are also necessary to manufacture protein structures
required for genes, enzymes, hormones, neurotransmitters
and body fluids. In the central nervous system, amino acids
act as neurotransmitters and as precursors to neurotransmitters
used in the brain to receive and send messages. Amino acids
are also required to allow vitamins and minerals to be utilized
properly. As long as a reptile has a reliable source
of dietary proteins containing the essential amino acids
it can adequately meet most of its needs for new protein
synthesis. Conversely, if a reptile is cut off from dietary
sources of the essential amino acids, protein synthesis
is affected and serious health problems can arise.
Depending upon the structure, there are
approximately twenty-nine commonly known amino acids that
account for the thousands of different types of proteins
present in all life forms. Many of the amino acids required
maintaining health can be produced in the liver from proteins
found in the diet. These nonessential amino acids are alanine,
aspartic acid, asparagine, glutamic acid, glutamine, glycine,
proline, and serine. The remaining amino acids, called the
essential amino acids, must be obtained from outside sources.
These essential amino acids are arginine, histidine, isoleucine,
leucine, Iysine, methionine, phenylalanine, threonine, tryptophan,
Essential Amino Acids
Arginine is an amino acid which becomes an essential amino acid
when the reptile is under stress or in an injured state. Depressed
growth results from lack of dietary arginine.
Arginine increases collagen; the protein providing the
main support for bone, cartilage, tendons, connective
tissue, and skin. It increases wound breaking strength
and improves the rate of wound healing. The demand for
arginine in animals occurs in response to physical trauma
like; injury, burns, dorsal skin wounds, fractures,
physical pain registered by the skin, malnutrition and
muscle and bone growth spurts.
Histidine is intricately involved in a large number of critical
metabolic processes, ranging from the production of
red and white blood cells to regulating antibody activity.
Histidine also helps to maintain the myelin sheaths
which surround and insulate nerves. In particular, Histidine
has been found beneficial for the auditory nerves, and
a deficiency of this vital amino acid has been noted
in cases of nerve deafness.
Histidine also acts as an inhibitory neurotransmitter,
supporting resistant to the effects of stress.
Histidine is naturally found in most animal and vegetable foods.
Isoleucine is an essential amino acid found abundantly in most
foods. Isoleucine is concentrated in the muscle tissues
and is necessary for hemoglobin formation, and in stabilizing
and regulating blood sugar and energy levels. A deficiency
of isoleucine can produce symptoms similar to those
of hypoglycemia. Isoleucine is a branched chain amino
acid (BCAA), the others are Isoleucine, Leucine and
Valine. They play an important roll in treating injuries
and physical stress conditions.
Leucine is an essential amino acid which cannot be synthesized
but must always be acquired from dietary sources. Leucine
stimulates protein synthesis in muscles, and is essential
for growth. Leucine also promotes the healing of bones,
skin and muscle tissue.
Leucine, and the other branched-chain amino acids, Isoleucine
and Valine, are frequently deficient and increased requirements
can occur after stress.
Lysine is one of the essential amino acids that cannot be manufactured
by reptiles, but must be acquired from food sources
or supplements. It has an immune-enhancing effect; high doses
of Lysine stop viral growth and reproduction through
the production of antibodies, hormones and enzymes.
In juveniles lysine is needed for proper growth and
bone development. Its aids calcium absorption and maintains
nitrogen balance in adults. It is also instrumental
in the formation of collagen, which is the basic matrix
of the connective tissues, skin, cartilage and bone.
Lysine aids in the repair of tissue, and helps to build
muscle protein, all of which are important for recovery
from injuries. Lysine deficiencies can result in lowered
immune function, loss of energy, bloodshot eyes, shedding
problems, retarded growth, and reproductive disorders,
and increases urinary excretion of calcium. Lysine has
no known toxicity.
Methionine is an essential amino acid that is not synthesized and
must be obtained from food or supplements. It is one
of the sulphur containing amino acids and is important
in many functions. Through its supply of sulphur, it
improves the tone and pliability of the skin, conditions
the scales and strengthens claws. The mineral sulphur
is involved with the production of protein. Methionine
is essential for the absorption and transportation of
selenium and zinc in the body. It also acts as a lipotropic
agent to prevent excess fat buildup in the liver, and
is an excellent chelator of heavy metals, such as lead,
cadmium and mercury, binding them and aiding in their excretion.
Phenylalanine is one of the amino acids which reptiles cannot
manufacture themselves, but must acquire from food or supplements.
Phenylalanine is a precursor of tyrosine, and together
they lead to the formation of thyroxine or thyroid hormone,
and of epinephrine and norepinephrine which is converted
into a neurotransmitter, a brain chemical which transmits
nerve impulses. This neurotransmitter is used by the
brain to manufacture norepinephrine which promotes mental
alertness, memory, and behavior.
Threonine, an essential amino acid, is not manufactured by reptiles
and must be acquired from food or supplements. It is
an important constituent in many proteins and is necessary
for the formation of tooth enamel protein, collagen
and elastin. It is a precursor to the amino acids glycine
and serine. It acts as a lipotropic in controlling fat
build-up in the liver.
Nutrients are more readily absorbed when threonine is
present. Threonine is an immune stimulant and deficiency
has been associated with weakened cellular response
and antibody formation.
Tryptophan, an essential amino acid, is one of the amino acids which
reptiles cannot manufacture themselves, but must acquire
from food or supplements. It is the least abundant in
proteins and also easily destroyed by the liver. Tryptophan
is necessary for the production of the B-vitamin niacin,
which is essential for your brain to manufacture the
key neurotransmitters. It helps control hyperactivity,
relieves stress, and enhances the release of growth
hormones. Tryptophan has been used to control aggressive behavior
in some reptiles.
Valine is one of the amino acids which reptiles cannot
manufacture themselves but must acquire from food sources.
Valine is found in abundant quantities in most food.
Valine has a stimulant effect. Healthy growth depends
on it. A deficiency results in a negative hydrogen balance.
Valine can be metabolized to produce energy, which spares glucose.
Non-Essential Amino Acids
Alanine is an amino acid that can be manufactured by reptiles from
other sources as needed. Alanine is one of the simplest
of the amino acids and is involved in the energy-producing
breakdown of glucose. In conditions of sudden anaerobic
energy need, when muscle proteins are broken down for
energy, alanine acts as a carrier molecule to take the
nitrogen-containing amino group to the liver to be changed
to the less toxic urea, thus preventing buildup of toxic
products in the muscle cells when extra energy is needed.
No deficiency state is known.
Asparagine is a nonessential amino acid structurally similar
to aspartic acid, with an additional amino group on
the main carbon skeleton. Asparagine aids in the metabolic
functioning of brain and nervous system cells, and may
be a mild immune stimulant as well.
Aspartic Acid is a nonessential amino acid that the body can
make from other sources in sufficient amounts to meet
its needs. It is a critical part of the enzyme in the
liver that transfers nitrogen-containing amino groups,
either in building new proteins and amino acids, or
in breaking down proteins and amino acids for energy
and detoxifying the nitrogen in the form of urea.
Its ability to increase endurance is thought to be a
result of its role in clearing ammonia from the system.
Aspartic acid is one of two major excitatory amino acids
within the brain (The other is glutamic acid).
Depleted levels of aspartic acid may occur temporarily
within certain tissues under stress, but, because the
body is able to make its own aspartic acid to replace
any depletion, deficiency states do not occur. Aspartic
acid is abundant in plants, especially in sprouting
seeds. In protein, it exists mainly in the form of its
amide, asparagine. Aspartic acid is considered nontoxic.
Carnitine is a dipeptide - an amino acid made from two other aminos,
methionine and lysine. It can be synthesized in the
liver if sufficient amounts of lysine, B1, B6 and iron
are available. Carnitine has been shown to have a major
role in the metabolism of fat and by increasing fat
utilization. It transfers fatty acids across the membranes
of the mitochondria where they can be utilized as sources
of energy. It also increases the rate at which the liver
Carnitine is stored primarily in the skeletal muscles
and heart, where it is needed to transform fatty acids
into energy for muscular activity.
Cysteine is a high sulphur-containing amino acid synthesized
by the liver. It is an important precursor to Glutathione,
one of the body's most effective antioxidants and free
radical destroyers. Free radicals are toxic waste products
of faulty metabolism, radiation and environmental pollutants
which oxidize and damage body cells. Glutathione also
protects red blood cells from oxidative damage and aids
in amino acid transport. It works most effectively when
taken in conjunction with vitamin E and selenium.
Through this antioxidant enzyme process, cysteine may
contribute to a longer life. It has immune enhancing
properties, promotes fat burning and muscle growth and
also tissue healing after injury or burns. 8% of the
scales consists of cysteine.
Cystine is a stable form of the amino acid cysteine. A reptile
is capable of converting one to the other as required
and in metabolic terms they can be thought of as the
same. Both cystine and cysteine are rich in sulphur
and can be readily synthesized. Cystine is found abundantly
in scale keratin, insulin and certain digestive enzymes.
Cystine or cysteine is needed for proper utilization
of vitamin B6. By reducing the body's absorption of
copper, cystine protects against copper toxicity, which
has been linked to behavioral problems. It is also found
helpful in the healing of wounds, and is used to break
down mucus deposits in illnesses such as bronchitis
and cystic fibrosis.
Gamma aminobutyric acid
Gamma aminobutyric acid is an important amino acid which functions
as the most prevalent inhibitory neurotransmitter in
the central nervous system. It works in partnership
with a derivative of Vitamin B-6, and helps control
the nerve cells from firing too fast, which would overload the brain.
Glutamic acid is biosynthesized from a number of amino acids
including ornithine and arginine. When aminated, glutamic
acid forms the important amino acid glutamine. It can
be reconverted back into glutamine when combined with
ammonia, which can create confusion over which amino acid does what.
Glutamic acid (sometimes called glutamate) is a major
excitatory neurotransmitter in the brain and spinal
cord, and is the precursor to glutathione and Gamma-Aminobutyric
Acid (GABA). Glutamic acid is also a component of folic
acid. After glutamic acid is formed in the brain from
glutamine, it then has two key roles. The body uses
glutamic acid to fuel the brain and to inhibit neural
excitation in the central nervous system. Besides glucose,
it is the only compound used for fuel by the brain.
The second function is detoxifying ammonia in the brain
and removing it. It then reconverts to its original
form of glutamine.
Glutamine is an amino acid widely used to maintain good brain
functioning. Glutamine is a derivative of glutamic acid
which is synthesized from the amino acids arginine,
ornithine and proline. Glutamine improves mental alertness
and mood. It is found abundantly in animal proteins
and needed in high concentrations in serum and spinal
fluid. When glutamic acid combines with ammonia, a waste
product of metabolic activity, it is converted into glutamine.
Glycine is an amino acid that is a major part of the pool of
amino acids which aid in the synthesis of non essential
amino acids in the body. Glycine can be easily formed
in the liver or kidneys from Choline and the amino acids
Threonine and Serine. Likewise, Glycine can be readily
converted back into Serine as needed. Glycine is also
one of the few amino acids that can spare glucose for
energy by improving glycogen storage.
Glycine is required by the body for the maintenance
of the central nervous system, and also plays an important
function in the immune system were it is used in the
synthesis of other non-essential amino acids. Glycine
can reduce gastric acidity, and in higher doses, can
stimulate growth hormone release and contribute to wound
healing. Glycine comprises up to a third of the collagen
and is required for the synthesis of hemoglobin, the
oxygen-carrying molecule in the blood.
Ornithine is made from the amino acid arginine and in turn is
a precursor to form glutamic acid, citrulline, and proline.
Ornithine's value lies in its ability to enhance liver
function, protect the liver and detoxify harmful substances.
It also helps release a growth hormone when combined
with arginine; this growth hormone is also an immune stimulant.
Arginine and ornithine have improved immune responses
to bacteria and viruses. Ornithine has been shown to
aid in wound healing and support liver regeneration.
Proline is synthesized from the amino acids glutamine or ornithine.
It is one of the main components of collagen, the connective
tissue structure that binds and supports all other tissues.
Proline improves skin texture but collagen is neither
properly formed nor maintained if Vitamin C is lacking.
Pyroglutamate is an amino acid naturally found in vegetables, fruits,
and insects. It is also normally present in large amounts
in the bone marrow and blood. Pyroglutamate improves
memory and learning in rats, but it is not known whether it
has any effect on reptiles.
Serine is synthesized from the amino acids glycine or threonine.
Its production requires adequate amounts of B-7 (niacin),
B-6, and folic acid. It is needed for the metabolism
of fats and fatty acids, muscle growth and a healthy
immune system. It aids in the production of immunoglobulins
and antibodies. It is a constituent of brain proteins
and nerve coverings. It is important in the formation
of cell membranes, involved in the metabolism of purines
and pyrimidines, and muscle synthesis.
Taurine is one of the most abundant amino acids. It is found
in the central nervous system, skeletal muscle and is
very concentrated in the brain and heart. It is synthesized
from the amino acids methionine and cysteine, in conjunction
with vitamin B6. Animal protein is a good source of
taurine, as it is not found in vegetable protein. Like
magnesium, taurine affects cell membrane electrical
excitability by normalizing potassium flow in and out
of heart muscle cells. It has been found to have an
effect on blood sugar levels similar to insulin. Taurine
helps to stabilize cell membranes and seems to have
some antioxidant and detoxifying activity. It helps
the movement of potassium, sodium, calcium and magnesium
in and out of cells, which helps generate nerve impulses. Taurine
is necessary for the chemical reactions that produce energy.
Tyrosine is an amino acid which is synthesized from phenylalanine.
It is a precursor of the important brain neurotransmitters
epinephrine, norepinephrine and dopamine. Dopamine is
vital to mental function and seems to play a role in
Tyrosine is also used by the thyroid gland to produce
one of the major hormones, Thyroxin. This hormone regulates
growth rate, metabolic rate, skin health and mental
health. It is used in the treatment of anxiety. Animals
subjected to stress in the laboratory have been found
to have reduced levels of the brain neurotransmitter
norepinephrine. Doses of tyrosine prior to stressing
the animals prevent reduction of norepinephrine.
Minerals are naturally occurring elements found in the earth and
work in reptiles as coenzymes to allow the reptile to perform
vital functions. Minerals compose body fluids, blood and
bone, and the central nervous system functions.
The dependence on specific minerals is based upon millions
of years of evolutionary development that can be traced
back to the earliest living organisms. Over time mineral
salts have been released into the environment by the breakdown
and weathering of rock formations rich in elemental deposits.
Accumulating in the soil and oceans, minerals are passed
from micro organisms to plants and on to herbivorous creatures.
Reptiles then obtain minerals primarily from the plants,
insects and animals that make up their diet.
Minerals can be broken down into two basic groups: bulk,
or macro minerals, and trace, or micro minerals. The macro
minerals, such as calcium, magnesium, sodium (salt), potassium
and phosphorus are needed in fairly substantial amounts
for proper health. By comparison, the trace minerals are
needed in far smaller quantities and include substances
such as zinc, iron, copper, manganese, chromium, selenium,
After ingestion, dietary minerals enter the stomach where
they are attached to proteins in order to enhance absorption
into the blood stream. After minerals are absorbed they
are delivered by the blood stream to individual cells for
transport across cell membranes. Minerals must often compete
with other minerals for absorption, and in certain cases
must be in a proper balance with other minerals to be properly
utilized. For example, an excess of zinc can cause a depletion
of copper, and too much calcium can interfere with the absorption
of magnesium and phosphorus.
Minerals are generally considered safe, though high dosages
for long periods can lead to toxic effects.
Calcium is the most abundant mineral in a reptile and one of
the most important. This mineral constitutes about 1.5-2.0
percent of their body weight. Almost all (98 percent)
calcium is contained in the bones and the rest in the
other tissues or in circulation.
Many other nutrients, vitamin D-3, and certain hormones
are important to calcium absorption, function, and metabolism.
Phosphorus as well as calcium is needed for normal bones,
as are magnesium, silicon, strontium and possibly boron.
The ratio of calcium to phosphorus in bones is about
2.5:1; the best proportions of these minerals in the
diet for proper metabolism are currently under question.
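As a hypothetical illustration of how a diet's overall calcium-to-phosphorus ratio might be checked in practice (every nutrient figure below is invented for the sketch, not a measured value for any real feeder item):

# Rough check of a ration's calcium-to-phosphorus ratio.
# Each entry: (name, grams fed, mg calcium per gram, mg phosphorus per gram) -- invented values.
food_items = [
    ("insect feeder", 50.0, 0.2, 2.4),
    ("leafy green",   30.0, 1.5, 0.5),
    ("supplement",     1.0, 300.0, 0.0),
]

total_ca = sum(grams * ca for _, grams, ca, p in food_items)
total_p  = sum(grams * p  for _, grams, ca, p in food_items)

print(f"Ca:P ratio of the ration = {total_ca / total_p:.2f} : 1")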
Calcium works with magnesium in its functions in the
blood, nerves, muscles, and tissues, particularly in
regulating heart and muscle contraction and nerve conduction.
Vitamin D-3 is needed for much calcium (and phosphorus)
to be absorbed from the digestive tract.
Maintaining a balanced blood calcium level is essential
to life. If there is not enough calcium in the diet
to maintain sufficient amounts of calcium in the blood,
calcium then will be drawn out of the bones and increase
intestinal absorption of available calcium. So even
though most of the calcium is in the bones, the blood
and cellular concentrations of this mineral are maintained
Various factors can improve calcium absorption.
Besides vitamin D-3, vitamins A and C can help support
normal membrane transport of calcium. Protein intake
helps absorption of calcium, but too much protein may
reduce it. Some dietary fat may also help absorption,
but high fat may reduce it.
A fast-moving intestinal tract can also reduce calcium
absorption. Stress also can diminish calcium absorption,
possibly through its effect on stomach acid levels and
digestion. Though calcium in the diet improves the absorption
of the important vitamin B-12, too much of it may interfere
with the absorption of the competing minerals magnesium,
zinc, iron, and manganese.
Because of the many complex factors affecting calcium
absorption, anywhere from 30-80 percent may end up being
excreted. Some may be eliminated in the feces. The kidneys
also control calcium blood levels through their filtering
and reabsorption functions.
Chloride makes up about 0.15 percent of the weight and is found
mainly in the extracellular fluid along with sodium.
As one of the mineral electrolytes, chloride works closely
with sodium and water to help the distribution of body
Chloride is easily absorbed from the small intestine.
It is eliminated through the kidneys, which can also
retain chloride as part of their finely controlled regulation
of acid-base balance.
Magnesium is a very important essential macro mineral, even though
it is only 0.05 percent of the body weight. It is involved
in several hundred enzymatic reactions, many of which
contribute to production of energy. As with calcium,
the bones act as a reservoir for magnesium in times
of need. The remaining magnesium is contained in the
blood, fluids, and other tissues. The process of digestion
and absorption of magnesium is very similar to that
of calcium. Diets high in protein or fat, a diet high
in phosphorus or calcium (calcium and magnesium can
compete), may decrease magnesium absorption.
Usually, about 40-50 percent of the magnesium is absorbed,
though this may vary from 25-75 percent depending on
stomach acid levels, body needs, and diets. Stress may
increase magnesium excretion. The kidneys can excrete
or conserve magnesium according to body needs. The intestines
can also eliminate excess magnesium in the faeces.
Phosphorus is the second most abundant element (after calcium)
present in a reptile’s body, makes up about 1 percent
of the total body weight. It is present in every cell,
but 85 percent of the phosphorus is found in the bones.
In the bones, phosphorus is present in the phosphate
form as the bone salt calcium phosphate in an amount
about half that of the total calcium. Both these important
minerals are in constant turnover, even in the bone
A reptile uses a variety of mechanisms to control the
calcium-phosphorus ratio and metabolism. Phosphorus
is absorbed more efficiently than calcium. Nearly 70
percent of phosphorus is absorbed from the intestines,
although the rate depends somewhat on the levels of
calcium and vitamin D. Most phosphorus is deposited
in the bones, the rest is contained in the cells and
other tissues. Much is found in the red blood cells.
Iron, aluminium, and magnesium may all form insoluble
phosphates and be eliminated in the faeces; this results
in a decrease of phosphorus absorption.
Phosphorus is involved in many functions besides forming
bones. Phosphorus is important to the utilization of
carbohydrates and fats for energy production and also
in protein synthesis for the growth, maintenance, and
repair of all tissues and cells. It helps in kidney
function and acts as a buffer for acid-base balance
in the body. Phosphorus aids muscle contraction, including
the regularity of the heartbeat, and is also supportive
of proper nerve conduction. This important mineral supports
the conversion of niacin and riboflavin to their active
There is no known toxicity specific to phosphorus; however,
high dietary phosphorus can readily affect calcium
metabolism. Potential calcium deficiency symptoms may
be more likely when the phosphorus dosage is very high.
Symptoms of phosphorus deficiency may include weakness,
weight loss, decreased growth, poor bone development,
and symptoms of rachitis may occur in phosphorus-deficient reptiles.
Potassium is a very significant mineral, important to both cellular
and electrical function. It is one of the main blood
minerals called "electrolytes" (the others are sodium
and chloride), which means it carries a tiny electrical
charge (potential). Potassium is the primary positive
ion found within the cells, where 98 percent of potassium
is found. Magnesium helps maintain the potassium in
the cells, but the sodium and potassium balance is as
finely tuned as those of calcium and phosphorus or calcium
and magnesium. Potassium is well absorbed from the small
intestine, with about 90 percent absorption and is one
of the most soluble minerals. Most excess potassium
is eliminated in the urine.
Along with sodium, it regulates the water balance and
the acid-base balance in the blood and tissues. Potassium
is very important in cellular biochemical reactions
and energy metabolism; it participates in the synthesis
of protein from amino acids in the cell. Potassium also
functions in carbohydrate metabolism; it is active in
glycogen and glucose metabolism, converting glucose
to glycogen that can be stored in the liver for future
energy. Potassium is important for normal growth and
for building muscle.
Elevations or depletions of this important mineral can
cause many problems. Maintaining consistent levels of
potassium in the blood and cells is vital to function.
Even with high dosages of potassium, the kidneys will
clear any excess, and blood levels will not be increased.
Low potassium may impair glucose metabolism. In more
severe potassium deficiency, there can be serious muscle
weakness, bone fragility, central nervous system changes,
and even death.
Silicon is another mineral that is not commonly written about
as an essential nutrient. It is present in the soil
and is actually the most abundant mineral in the earth's
crust, as carbon is the most abundant in plant and animal
tissue. Silicon is very hard and is found in rock crystals
such as quartz or flint. Silicon molecules in the tissues,
such as the nails and connective tissue, give them strength
and stability. Silicon is present in bone, blood vessels,
cartilage, and tendons, helping to make them strong.
Silicon is important to bone formation, as it is found
in active areas of calcification. It is also found in
plant fibres and is probably an important part of their
structure. This mineral is able to form long molecules,
much the same as is carbon, and gives these complex
configurations some durability and strength. Retarded
growth and poor bone development are the result of a
silicon-deficient diet. Collagen contains silicon, helping
hold the body tissues together. This mineral works with
calcium to help restore bones.
It deeply penetrates the tissues and helps to clear stored
toxins. The essential strength and stability this mineral
provides to the tissues should give them protection.
Sodium is the primary positive ion found in the blood and body
fluids; it is also found in every cell although it is
mainly extracellular, working closely with potassium,
the primary intracellular mineral. Sodium is one of
the electrolytes, along with potassium and chloride,
and is closely tied in with the movement of water; "where
sodium goes, water goes." Sodium chloride is present
in solution on a large part of the earth's surface in
ocean water. Along with potassium, sodium helps to regulate
the fluid balance both within and outside the cells.
Through the kidneys, by buffering the blood with a balance
of all the positive or negative ions present, these
two minerals help control the acid-base balance as well.
Sodium and potassium interaction helps to create an
electrical potential (charge) that enables muscles to
contract and nerve impulses to be conducted. Sodium
is also important to hydrochloric acid production in
the stomach and is used during the transport of amino
acids from the gut into the blood.
Since sodium is needed to maintain blood fluid volume,
excessive sodium can lead to increased blood volume,
especially when the kidneys do not clear it efficiently.
In the case of sodium, there is more of a concern with
toxicity from excesses than with deficiencies.
Sodium deficiency is less common than excess sodium,
as this mineral is readily available in all diets, but
when it does occur, deficiency can cause problems. The
deficiency is usually accompanied by water loss. When
sodium and water are lost together, the extracellular
fluid volume is depleted, which can cause decreased
blood volume, increased blood count and muscle weakness.
When sodium is lost alone, water flows into the cells,
causing cellular swelling and symptoms of water intoxication.
With low sodium, there is also usually poor carbohydrate metabolism.
Sulphur is an interesting non-metallic element that is found mainly as part of larger compounds. Sulphur is present in four amino acids: methionine, an essential amino acid, and the nonessential cystine and cysteine. Sulphur is also present in two B vitamins: thiamine, which is important to the skin, and biotin, which is important to the scales.
is also available as various sulphates or sulphides.
But overall, sulphur is most important as part of protein.
Sulphur is absorbed from the small intestine primarily
as the four amino acids or from sulphates in water,
fruits and vegetables. Sulphur is stored in all cells,
especially the skin, scales, and nails. Excess amounts
are eliminated through the urine or in the faeces.
As part of four amino acids, sulphur performs a number
of functions in enzyme reactions and protein synthesis.
It is necessary for formation of collagen, the protein
found in connective tissue. Sulphur is also present
in keratin, which is necessary for the maintenance of
the skin, scales, and nails, helping to give strength,
shape, and hardness to these protein tissues. There
is minimal reason for concern about either toxicity
or deficiency of sulphur. No clearly defined symptoms
exist with either state. Sulphur deficiency is common
with low-protein diets, or with a lack of intestinal
bacteria, though none of these seems to cause any problems
in regard to sulphur functions and metabolism.
Essential Trace Minerals (Trace elements)
Chromium is vital in regulating carbohydrate metabolism by enhancing insulin function for the proper use of glucose. It works together with two niacin molecules and three amino acids: glycine, cysteine, and glutamic acid.
Chromium is really considered an "ultra-trace" mineral,
since it is needed in such small quantities to perform
its essential functions. The blood contains about 20
parts per billion (ppb), a fraction of a microgram.
Even though it is in such small concentrations, this
mineral is important to a reptile’s health. The kidneys
clear any excess from the blood, while much of chromium
is eliminated through the faeces. This mineral is stored
in many parts, including the skin, fat, muscles and
kidneys. Because of the low absorption and high excretion
rates of chromium, toxicity is not at all common in
reptiles, but it is in amphibians.
Cobalt is another essential mineral needed in very small amounts
in the diet.
Cobalt, as part of vitamin B12, is not easily absorbed
from the digestive tract. It is stored in the red blood
cells and the plasma, as well as in the liver, and kidneys.
As part of vitamin B12, cobalt is essential to red blood
cell formation and is also helpful to other cells.
Toxicity can occur from excess inorganic cobalt found
as a food contaminant. High dosage may affect the thyroid
or cause overproduction of red blood cells, thickened
blood, and increased activity in the bone marrow.
Deficiency of cobalt is not really a concern when there is enough vitamin B12. As cobalt deficiency leads to decreased availability of B12, it increases the many symptoms and problems related to B12 deficiency.
Copper is important as a catalyst in the formation of haemoglobin,
the oxygen-carrying molecule. It helps oxidize vitamin
C and works with C to form collagen (part of cell membranes
and the supportive matrix in muscles and other tissues),
especially in the bone and connective tissue. It helps
the cross-linking of collagen fibres and thus supports
the healing process of tissues and aids in proper bone
formation. An excess of copper may increase collagen
and lead to stiffer and less flexible tissues.
Copper is found in many enzymes; most important is the
cytoplasmic superoxide dismutase. Copper enzymes play
a role in oxygen-free radical metabolism, and in this
way have a mild anti-inflammatory effect. Copper also
functions in certain amino acid conversions. Being essential
in the synthesis of phospholipids, copper contributes
to the integrity of the myelin sheaths covering nerves.
It also aids the conversion of tyrosine to the pigment
melanin, which gives scales and skin their colouring.
Copper, as well as zinc, is important for converting T4 (thyroxine) to T3 (triiodothyronine), both thyroid hormones. Low copper levels may reduce thyroid function.
Copper, like most metals, is a conductor of electricity;
it helps the nervous system function. It also helps
control levels of histamine. Copper in the blood is bound to the protein ceruloplasmin, and copper is part
of the enzyme histaminase, which is involved in the
metabolism of histamine.
Problems of copper toxicity may include stress, hyperactivity,
nervousness, and discoloration of the skin and scales.
Copper deficiency is commonly found together with iron
deficiency. High zinc dosage can lead to lower copper
levels and some symptoms of copper deficiency. The reduced
red blood cell function and shortened red cell life
span can influence energy levels and cause weakness
and may also affect tissue health and healing. Weakened
immunity, skeletal defects related to bone demineralization,
and poor nerve conductivity, might all result from copper
depletions. Copper deficiency results in several abnormalities
of the immune system, such as reduced cellular immune
response, reduced activity of white blood cells and
an increased infection rate.
Iodine is an essential nutrient for the production of thyroid hormones
and therefore is required for normal thyroid function.
The thyroid hormones, particularly thyroxine, which
is 65 percent iodine, are responsible for basal metabolic
rate, the reptile’s use of energy. Thyroid hormone is required for cell respiration and the production of energy and further increases oxygen consumption and general metabolism. The thyroid hormones, thyroxine and triiodothyronine, are also needed for normal growth and development, protein synthesis, and energy metabolism, as well as for nerve and bone formation, reproduction, and the condition of the skin, scales, and nails. Thyroid hormone and, thus, iodine also affect the conversion of carotene to vitamin A and of ribonucleic acids to protein, cholesterol synthesis, and carbohydrate metabolism.
There is no significant danger of toxicity of iodine
from a natural diet, though some care must be taken
when supplementing iodine. High iodine dosage, however,
may actually reduce thyroxine production and thyroid
function. Deficiencies of iodine have been very common,
especially in areas where the soil is depleted, as discussed
earlier. Several months of iodine deficiency leads to
slower metabolism, decreased resistance to infection,
and a decrease in sexual energy.
The main function of iron is the formation of haemoglobin. Iron
is the central core of the haemoglobin molecule, which
is the essential oxygen-carrying component of the red
blood cell (RBC). In combination with protein, iron
is carried in the blood to the bone marrow, where, with
the help of copper, it forms haemoglobin. The ferritin
and transferrin proteins actually hold and transport
the iron. Haemoglobin carries the oxygen molecules throughout
the body. Red blood cells pick up oxygen from the lungs
and distribute it to the rest of the tissues, all of
which need oxygen to survive. Iron's ability to change
back and forth between its ferrous and ferric forms
allows it to hold and release oxygen. Myoglobin is similar
to haemoglobin in that it is an iron-protein compound
that holds oxygen and carries it into the muscles, mainly
the skeletal muscles and the heart. It provides higher amounts of oxygen to the muscles with increased activity.
Myoglobin also acts as an oxygen reservoir in the muscle
cells. So muscle performance actually depends on this
function of iron, besides the basic oxygenation by haemoglobin
through normal blood circulation.
Usually, it takes moderately high amounts over a long
period with minimal losses of this mineral to develop
any iron toxicity problems.
Iron deficiency occurs fairly commonly when a rapid
growth period increases iron needs, which are often
not met with additional dietary intake. Females need
more iron than males. Symptoms include weight loss from decreased appetite, loss of energy, lowered immunity (a weakened resistance), and a strange symptom: eating and licking inedible objects, such as stones, mud, or glass.
Manganese is involved in many enzyme systems; that is, it helps to catalyze many biochemical reactions. There are some biochemical suggestions that manganese is closer to magnesium in more than just name. It is possible that
magnesium can substitute for manganese in certain conditions
when manganese is deficient.
Manganese activates the enzymes necessary for a reptile
to use biotin, thiamine (B1), vitamin C, and choline.
It is important for the digestion and utilization of
food, especially proteins, through peptidase activity,
and it is needed for the synthesis of cholesterol and
fatty acids and in glucose metabolism.
Manganese may be one of the least toxic minerals.
Manganese deficiency can lead to sterility, and to infertile
eggs or to poor growth in the offspring. There is decreased
bone growth, especially in length.
Molybdenum is a vital part of three important enzyme systems—xanthine
oxidase, aldehyde oxidase, and sulphite oxidase—and
so has a vital role in uric acid formation and iron
utilization, in carbohydrate metabolism, and sulphite
detoxification as well. Xanthine oxidase (XO) helps
in the production of uric acid, an end product of protein
(purine) metabolism and may also help in the mobilization
of iron from liver reserves. Aldehyde oxidase helps
in the oxidation of carbohydrates and other aldehydes,
including acetaldehyde produced from ethyl alcohol.
Sulphite oxidase helps to detoxify sulphites.
Animals given large amounts experience weight loss,
slow growth, anaemia, or diarrhea, though these effects
may be more the result of low levels of copper, a mineral
with which molybdenum competes. Molybdenum-deficient diets seem to produce weight loss and decreased life span.
Selenium has a variety of functions; its main role is as an antioxidant
in the enzyme selenium-glutathione peroxidase. Selenium
is part of a nutritional antioxidant system that protects
cell membranes and intracellular structural membranes
from lipid peroxidation. It is actually the selenocysteine
complex that is incorporated into glutathione peroxidase
(GP), an enzyme that helps prevent cellular degeneration
from the common peroxidase free radicals, such as hydrogen
peroxide. GP also aids red blood cell metabolism and prevents chromosome damage in tissue cultures. As an
antioxidant, selenium in the form of selenocysteine
prevents or slows the biochemical aging process of tissue
degeneration and hardening, that is, loss of youthful
elasticity. This protection of the tissues and cell
membranes is enhanced by vitamin E. Selenium also protects
reptiles from the toxic effects of heavy metals and
other substances. Selenium may also aid in protein synthesis,
growth and development, and fertility, especially in
the male. It has been shown to improve sperm production.
High doses of selenium can lead to weight loss, liver
and kidney malfunction, and even death if the levels
are high enough. With selenium deficiency, there may
be increased risk of infections. Many other metals,
including cadmium, arsenic, silver, copper, and mercury,
are thought to be more toxic in the presence of selenium deficiency.
Zinc is involved in a multitude of functions and is part of
many enzyme systems. With regard to metabolism, zinc
is part of alcohol dehydrogenase, which helps the liver
detoxify alcohols (obtained often from eating rotten,
high sugar fruits), including ethanol, methanol, ethylene
glycol, and retinol (vitamin A). Zinc is also thought
to help utilize and maintain levels of vitamin A. Through
this action, zinc may help maintain healthy skin cells
and thus may be helpful in generating new skin. By helping
collagen formation, zinc may also improve wound healing.
Zinc is needed for lactate and malate dehydrogenases,
both important in energy production. Zinc is a cofactor
for the enzyme alkaline phosphatase, which helps contribute
phosphates to bones. Zinc is also part of bone structure.
Zinc is important to male sex organ function and reproductive
fluids. It is in high concentration in the eye, liver, and muscle tissues, suggesting its functions in those tissues.
Zinc in carboxypeptidase (a digestive enzyme) helps
in protein digestion. Zinc is important for synthesis
of nucleic acids, both DNA and RNA. Zinc has also been
shown to support immune function.
Zinc is fairly non-toxic. There are many symptoms and
decreased functions due to zinc deficiency. It may cause
slowed growth or slow sexual behaviour. Lowered resistance
and increased susceptibility to infection may occur
with zinc deficiency, which is related to a decreased
cellular immune response. Sensitivity and reactions
to environmental chemicals may be exaggerated in a state
of zinc deficiency, as many of the important detoxification
enzyme functions may be impaired.
Reptiles with zinc deficiency may show poor appetite
and slow development. Dwarfism and a total lack of sexual
function may occur with serious zinc deficiency. Fatigue
is common. Sterility may result from zinc deficiency.
Birth defects have been associated with zinc deficiency
during pregnancy in experimental animals. The offspring
showed reduced growth patterns.
Other Trace minerals
Boron is an important trace mineral necessary for the proper
absorption and utilization of calcium for maintaining
bone density and the prevention of loss of bone mass.
It possibly affects calcium, magnesium, and phosphorus
balance and the mineral movement and makeup of the bones
by regulating the hormones.
Fluoride helps strengthen the crystalline structure of bones.
The calcium fluoride salt forms a fluorapatite matrix,
which is stronger and less soluble than other calcium
salts and therefore is not as easily reabsorbed into
circulation to supply calcium needs. In bones, fluoride
reduces loss of calcium and thereby may reduce osteoporosis.
No other functions of fluoride are presently known,
though it has been suggested to have a role in growth, in iron absorption, and in the production of red blood cells.
Toxicity from fluoride is definitely a potential problem.
As stated, fluoridated water must be closely monitored
to keep the concentration at about 1 ppm. At 8 to about
20 ppm, initial tissue sclerosis will occur, especially
in the bones and joints. At over 20 ppm, much damage
can occur, including decreased growth and cellular changes,
especially in the metabolically active organs such as
the liver, kidneys, and reproductive organs. More than
50 ppm of fluoride intake can be fatal. Animals eating
extra fluoride in grains, vegetables or in water have
been shown to have bone lesions. Fat and carbohydrate
metabolism has also been affected. There are many other
concerns about fluoride toxicity, including bone malformations.
Sodium fluoride is less toxic than most other fluoride
salts. In cases of toxicity, extra calcium will bind with the fluoride, making a less soluble and less active compound.
Fluoride deficiency is less of a concern. It is possible
that traces of fluoride are essential, but it is not
clear whether it is a natural component of the tissues.
Low fluoride levels may correlate with a higher amount of bone fractures, but that is usually in the presence of other deficiencies.
The mineral germanium itself may be needed in small amounts; however,
research has not yet shown this. It is found in the
soil, in foods, and in many plants, such as aloe vera,
garlic, and ginseng. The organo-germanium does not release the mineral germanium to the tissues for specific action, but is absorbed, acts, and is eliminated as the entire molecule.
It is not yet known what particular function of lithium may
make it an essential nutrient. It is thought to stabilize
serotonin transmission in the nervous system; it influences
sodium transport; and it may even increase lymphocytic
(white blood cell) proliferation and depress the suppressor
cell activity, thus strengthening the immune system.
Deficiency of lithium is not really known. Symptoms
of lithium toxicity include diarrhea, thirst, increased
urination, and muscle weakness. Skin eruptions may also
occur. With further toxicity, staggering, seizures,
kidney damage, coma, and even death may occur.
The function of nickel is still somewhat unclear. Nickel
is found in the body in highest concentrations in the
nucleic acids, particularly RNA, and is thought to be
somehow involved in protein structure or function. It
may activate certain enzymes related to the breakdown
or utilization of glucose. Low nickel can lead to decreased
growth, dermatitis, pigment changes, decreased reproduction
capacities, and compromised liver function.
There are currently no known essential functions of rubidium.
In studies with mice, rubidium has helped decrease tumour
growth, possibly by replacing potassium in cell transport
mechanisms or by rubidium ions attaching to the cancer
cell membranes. Rubidium may have a tranquilizing or
hypnotic effect in some animals.
There is no known deficiency or toxicity for rubidium.
Strontium may help improve the cell structure and mineral matrix of the bones, adding strength and helping to prevent soft bones, though it is not known if low levels of strontium cause these problems.
There have been no cases of known toxicity from natural
strontium. Strontium deficiency may correlate with decreased growth and poor calcification of the bones.
If tin has a function, it may be related to protein structure or oxidation and reduction reactions, though tin is generally a poor catalyst. Tin may interact with iron and copper, particularly in the gut, and so inhibit their absorption.
Though tin is considered a mildly toxic mineral, there
are no known serious diseases. Studies in rats showed
mainly a slightly shortened life span. There are no
known problems from tin deficiency.
Little is known about vanadium's function. Vanadium seems to
be involved in catecholamine and lipid metabolism. It
has been shown to have an effect in reducing the production
of cholesterol. Other research involves its role in
calcium metabolism, in growth, reproduction, blood sugar
regulation, and red blood cell production. The enzyme-stimulation
role of vanadium may involve it in bone formation and,
through the production of coenzyme A, in fat metabolism.
Vanadium has been thought to be essentially non-toxic,
possibly because of poor absorption. In reptiles, vanadium
deficiency causes some problems with the scales and
shedding, bone development, and reproduction.
Toxic Trace Minerals
Aluminium has only recently been considered a problem mineral.
Though it is not very toxic in normal levels, neither
has it been found to be essential. Aluminium is very
abundant in the earth and in the sea. It is present
in only small amounts in animal and plant tissues. The
best way to prevent aluminium build-up is to avoid the
sources of aluminium. Some tap waters contain aluminium;
this can be checked. Avoiding aluminium water dishes and replacing them with stainless steel, ceramic, or plastic ones is a good idea.
Despite arsenic's reputation as a poison, it actually has fairly low toxicity in comparison with some other metals. In fact, it has been shown to be essential in some animals. Organic arsenic and elemental arsenic, both found naturally in the earth and in foods, do not readily produce toxicity.
In fact, they are handled fairly easily and eliminated
by the kidneys. The inorganic arsenites or trivalent
forms of arsenic, such as arsenic trioxide seem to create
the problems. They accumulate in the body, particularly
in the skin, scales, and nails, but also in internal
organs. Arsenic can accumulate when kidney function
is decreased. Luckily, absorption of arsenic is fairly low, so most is eliminated in the faeces and some in the urine.
In some studies, arsenic has been shown to promote longevity.
Like lead, cadmium is an underground mineral that did not
enter our air, food, and water in significant amounts
until it was mined as part of zinc deposits. As cadmium
and zinc are found together in natural deposits, so
are they similar in structure and function. Cadmium
may actually displace zinc in some of its important
enzymatic and organ functions; thus, it interferes with
these functions or prevents them from being completed.
The zinc-cadmium ratio is very important, as cadmium
toxicity and storage are greatly increased with zinc
deficiency, and good levels of zinc protect against
tissue damage by cadmium. The refinement of grains reduces the zinc-cadmium ratio, so zinc deficiency and cadmium toxicity are more likely when the diet is high in refined grains.
Besides faecal losses, cadmium is excreted mainly by the kidneys.
This mineral is stored primarily in the liver and kidneys.
In rat studies, higher levels of cadmium are associated
with an increase in heart size, higher blood pressure,
progressive atherosclerosis, and reduced kidney function.
And in rats, cadmium toxicity is worse with zinc deficiency
and reduced with higher zinc intake.
Cadmium appears to depress some immune functions, mainly
by reducing host resistance to bacteria and viruses.
Cadmium also affects the bones. It was associated with
weak bones that lead to deformities, especially of the
spine, or to more easily broken bones.
The metal lead is the most common toxic mineral. It is the worst and most widespread pollutant, though luckily not the most toxic; cadmium and mercury are worse. Though
this is not completely clear, lead most likely interferes
with functions performed by essential minerals such
as calcium, iron, copper, and zinc. Lead does interrupt
several red blood cell enzymes. Especially in brain
chemistry, lead may create abnormal function by inactivating
important zinc-, copper-, and iron-dependent enzymes.
Lead affects both the brain and the peripheral nerves
and can displace calcium in bone. Lead also lowers host
resistance to bacteria and viruses, and thus allows
an increase in infections.
Calcium and magnesium reduce lead absorption. Iron,
copper, and zinc also do this. With low mineral intake,
lead absorption and potential toxicity are increased.
Algin, as from kelp (seaweed) or the supplement sodium
alginate, helps to bind lead and other heavy metals
in the gastrointestinal tract and carry them to elimination
and reduce absorption. Along with this, essential vitamins and minerals, such as the B vitamins, iron, calcium, zinc, copper, and chromium, help decrease lead absorption.
or "quicksilver," is a shiny liquid metal that is fairly
toxic, though the metallic mercury is less so. Especially
a problem is methyl or ethyl mercury, or mercuric chloride,
which is very poisonous.
Some mercury is retained in the tissues, mainly in the
kidneys, which store about 50 percent of the body mercury.
The blood, bones, liver, spleen, brain, and fat tissue
also hold mercury. This potentially toxic metal does
get into the brain and nerve tissue, so central nervous
system symptoms may develop. But mercury is also eliminated
daily through the urine and faeces. Mercury has no known
essential functions and probably affects the inherent
protein structure, which may interfere with functions
relating to protein production. Mercury may also interfere
with some functions of selenium, and can be an immunosuppressant.
Algin can decrease absorption of mercury, especially
inorganic mercury. Selenium binds both inorganic and
methyl mercury; mercury selenide is formed and excreted
in faecal matter. Selenium is, for many reasons, an
important nutrient; it does seem to protect against
heavy metal toxicity.
Vitamins are essential for the proper
regulation of reproduction, growth, and energy production.
Since reptiles are unable to manufacture most of the vitamins
required for good health, they must be obtained from dietary
sources, either from food or from supplements. Vitamins are commonly referred to as
micronutrients because of the extremely small amounts required
to maintain optimal health, as compared to macronutrients
such as fats, protein and carbohydrates, which are required
in much greater amounts. Vitamins, unlike the macronutrients,
are not a source of calories, but without adequate amounts,
reptiles cannot utilize the macronutrients, and health and
energy levels will suffer. Vitamins are divided into two sub-categories,
fat-soluble vitamins and water-soluble vitamins. The four
fat soluble vitamins, vitamins A, D, E and K, share a chemical
relationship, based on the common need for cholesterol in
their synthesis. The fat-soluble vitamins can be stored
in fatty tissues and be released at a later time as needed.
The vitamin B complex consists of a family of nutrients
that have been grouped together due to the interrelationships
in their function in enzyme systems, as well as their distribution
in natural food sources. All of the B vitamins are soluble
in water and are easily absorbed. Unlike fat-soluble nutrients,
the B-complex vitamins cannot be stored in the body, and
must therefore be replaced daily from food sources or supplements.
Unlike humans, reptiles are able to synthesize the vitamin C they need.
Water Soluble Vitamins
Biotin is also known as bios, vitamin B8, vitamin H, protective factor X, and coenzyme R.
Biotin is a water-soluble vitamin and member of the B-complex
family. Biotin is an essential nutrient that is required
for cell growth and for the production of fatty acids.
Biotin also plays a central role in carbohydrate and
protein metabolism and is essential for the proper utilization
of the other B-complex vitamins. Biotin contributes
to healthy skin and scales. A biotin deficiency is rare,
as biotin is easily synthesized in the intestines by
bacteria, usually in amounts far greater than are normally
required for good health. Those at highest risk for biotin
deficiency are reptiles with digestive problems that
can interfere with normal intestinal absorption, and
those treated with antibiotics or sulfa drugs, which
can inhibit the growth of the intestinal bacteria that produce it.
Folic acid is also known as vitamin M and vitamin B9.
Folic acid is a water-soluble nutrient belonging to the B-complex
family. The name folic acid is derived from the Latin
word "folium", so chosen since this essential nutrient
was first extracted from green leafy vegetables, or
foliage. Among its various important roles, folic acid
is a vital coenzyme required for the proper synthesis of the nucleic acids that maintain the genetic codes and ensure healthy cell division. Adequate levels of folic
acid are essential for energy production and protein
metabolism, for the formulation of red blood cells,
and for the proper functioning of the intestinal tract.
Folic acid deficiency affects all cellular functions,
but most importantly it reduces the reptile's ability
to repair damaged tissues and grow new cells. Tissues
with the highest rate of cell replacement, such as red
blood cells, are affected first. Folic acid deficiency
symptoms include diarrhea, poor nutrient absorption
and malnutrition leading to stunted growth and weakness.
Hesperidin is one of the bioflavonoids, naturally occurring nutrients usually found in association with Vitamin C. The bioflavonoids, sometimes called Vitamin P, were found to be the essential component in correcting bruising tendencies and improving the permeability and integrity of the capillary lining. These bioflavonoids include Hesperidin, Citrin, Rutin, Flavones, Flavonols, Catechin, and Quercetin.
Hesperidin deficiency has been linked with abnormal
capillary leaking causing weakness. Supplemental Hesperidin
may also help reduce excess swelling in the limbs due
to fluid accumulation. Like other bioflavonoids, hesperidin
works together with Vitamin C and other bioflavonoids.
No signs of toxicity have been observed with normal intake.
Vitamin B-1 (thiamine) is a nutrient with a critical role in maintaining
the central nervous system. Adequate thiamine levels
can dramatically affect physiological wellbeing. Conversely,
inadequate levels of B1 can lead to eye weakness and
loss of physical coordination.
Vitamin B1 is required for the production of hydrochloric
acid, for forming blood cells, and for maintaining healthy
circulation. It also plays a key role in converting
carbohydrates into energy, and in maintaining good muscle
tone of the digestive system.
Like all the B-vitamins, B-1 is a water-soluble nutrient
that cannot be stored in the body, but must be replenished
on a daily basis. B-1 is most effective when given in
a balanced complex of the other B vitamins.
A chronic deficiency of thiamine will lead to damage of the central nervous system. Thiamine levels can be affected by antibiotics and sulfa drugs.
A diet high in carbohydrates can also increase the need for thiamine.
Vitamin B-2 (riboflavin) is an easily absorbed, water-soluble micronutrient
with a key role in maintaining health. Like the other
B vitamins, riboflavin supports energy production by
aiding in the metabolization of fats, carbohydrates
and proteins. Vitamin B-2 is also required for red blood
cell formation and respiration, antibody production,
and for regulating growth and reproduction. Riboflavin
is known to increase energy levels and aid in boosting
immune system functions. It also plays a key role in
maintaining healthy scales, skin and nails. A deficiency
of vitamin B-2 may be indicated by the appearance of
skin and shedding problems. Gravid females need Vitamin
B-2, as it is critical for the proper growth and development
of the eggs.
Niacin, Niacinamide and Nicotinic Acid
Vitamin B-3 is an essential nutrient required by all animals
for the proper metabolism of carbohydrates, fats, and
proteins, as well as for the production of hydrochloric
acid for digestion. B-3 also supports proper blood circulation and healthy skin, and aids in the functioning of the central nervous system, including the higher functions of the brain and cognition. Lastly, adequate levels of B-3 are vital for the proper synthesis of insulin and of sex hormones such as estrogen and testosterone.
A deficiency in vitamin B-3 can result in a disorder
characterized by malfunctioning of the nervous system,
diarrhea, and skin and shedding problems.
Pantothenic acid is a water-soluble B vitamin that cannot be stored
in the body but must be replaced daily, either from
diet or from supplements.
Pantothenic acid's most important function is as an
essential component in the production of coenzyme A,
a vital catalyst that is required for the conversion
of carbohydrates, fats, and protein into energy. Pantothenic
acid is also referred to as an antistress vitamin due
to its vital role in the formation of various adrenal
hormones, steroids, and cortisone, as well as contributing
to the production of important brain neuro-transmitters.
B-5 is required for the production of cholesterol, bile,
vitamin D, red blood cells, and antibodies.
Lack of B5 can lead to a variety of symptoms including
skin disorders, digestive problems and muscle cramps.
Vitamin B-6 is a water-soluble nutrient that cannot be stored
in the body, but must be obtained daily from either
dietary sources or supplements.
Vitamin B-6 is an important nutrient that supports more
vital functions than any other vitamin. This is due
to its role as a coenzyme involved in the metabolism
of carbohydrates, fats, and proteins. Vitamin B-6 is
also responsible for the manufacture of hormones, red
blood cells, neurotransmitters and enzymes. Vitamin
B-6 is required for the production of serotonin, a brain
neurotransmitter that controls appetite, sleep patterns,
and sensitivity to pain. A deficiency of vitamin B-6
can quickly lead to a profound malfunctioning of the
central nervous system.
Among its many benefits, vitamin B-6 is recognized for
helping to maintain healthy immune system functions.
Cobalamin and Cyanocobalamin
Vitamin B-12 is a water-soluble compound of the B vitamin family
with a unique difference. Unlike the other B-vitamins
which cannot be stored, but which must be replaced daily,
vitamin B12 can be stored for long periods in the liver.
Vitamin B-12 is a particularly important coenzyme that
is required for the proper synthesis of DNA, which controls
the healthy formation of new cells throughout the body.
B-12 also supports the action of vitamin C, and is necessary
for the proper digestion and absorption of foods, for
protein synthesis, and for the normal metabolism of
carbohydrates and fats. Additionally, vitamin B-12 prevents
nerve damage by contributing to the formation of the insulating sheaths around nerve cells. B-12 also maintains fertility, and
helps promote normal growth and development.
Since vitamin B-12 can be easily stored in the reptile’s
body, and is only required in tiny amounts, symptoms
of severe deficiency usually take time to appear. When
symptoms do surface, it is likely that deficiency was
due to digestive disorders or malabsorption rather than
to poor diet. The source of B-12 in herbivorous reptiles
is not known, since B-12 only comes from animal sources.
Due to its role in healthy cell formation, a deficiency
of B-12 disrupts the formation of red blood cells, leading
to reduced numbers of poorly formed red cells. Symptoms include loss of appetite. B-12 deficiency can lead to improper formation of nerve cells, resulting in irreversible nerve damage.
Vitamin C is a powerful water-soluble antioxidant that is vital
for the growth and maintenance of all body tissues.
Though easily absorbed by the intestines, vitamin C
cannot be stored in the body, and is excreted in the
urine. Unlike humans, apes, and guinea pigs, reptiles are able to synthesize the vitamin C they need. One of vitamin C's most vital roles is in the
production of collagen, an important cellular component
of connective tissues, muscles, tendons, bones, scales
and skin. Collagen is also required for the repair of
blood vessels, bruises, and broken bones.
This easily destroyed nutrient also protects against the ravages of free radicals, which destroy cell membranes on contact and damage DNA strands, leading to degenerative diseases. The antioxidant activity of vitamin C can
also protect reptiles from the damaging effects of radiation.
Vitamin C also aids in the metabolization of folic acid,
regulates the uptake of iron, and is required for the
conversion of some amino acids. The hormone responsible for sleep, pain control, and well-being also requires adequate supplies of vitamin C. A deficiency of ascorbic
acid can impair the production of collagen and lead
to retarded growth, reduced immune response, and increased
susceptibility to infections.
Fat Soluble Vitamins
retinol (=preformed vitamin A)
Vitamin A is a vital fat-soluble nutrient and antioxidant that
can maintain healthy skin and confer protection against
diseases. Vitamin A is commonly found in two forms: as preformed vitamin A (retinol) and as provitamin A (beta-carotene).
Vitamin A deficiency can lead to blindness and defective formation of bones and scales. In addition to promoting good vision, other recognized major benefits of vitamin A include its ability to boost the immune system, speed recovery from infections, and promote wound healing.
Vitamin D-3 is required for the proper regulation and absorption
of the essential minerals calcium and phosphorus. Vitamin
D-3 can be produced photochemically by the action of
sunlight or ultraviolet light from the precursor sterol
7-dehydrocholesterol which is present in the epidermis
or skin of most higher animals. Adequate levels of Vitamin
D-3 are required for the proper absorption of calcium
and phosphorus in the small intestines. Vitamin D-3
further supports and regulates the use of these minerals
for the growth and development of the bones and teeth.
Because of this vital link, adequate intake of Vitamin
D-3 is critically important for the proper mineralization
of the bones in developing reptiles. Vitamin D-3 also
aids in the prevention and treatment of metabolic bone
disease and hypocalcemia in adults.
A prolonged Vitamin D-3 deficiency may result in rachitis,
a bone deformity, and in softening of the bone tissue.
Ultraviolet rays (in the UVB range) acting directly upon the skin can synthesize vitamin D-3, so brief but regular exposure to sunlight is usually an effective way to assure adequate levels of Vitamin D-3.
Since the body can endogenously produce vitamin D-3
and since it is retained for long periods of time by
vertebrate tissue, it is difficult to determine with
precision the minimum daily requirements for this seco-steroid.
The requirement for vitamin D is also known to be dependent
on the concentration of calcium and phosphorus in the
diet, the physiological stage of development, age, sex
and degree of exposure to the sun (geographical location).
High levels of vitamin D-3 can be toxic and have the
same effects as a deficiency.
Vitamin E functions as a powerful antioxidant to protect fatty
tissues from free radical damage. Free radicals are
extremely dangerous and reactive oxygen compounds that
are constantly being produced from a variety of natural
sources such as radiation and the breakdown of proteins
in the body. Left unchecked, free radicals course through the body, rupturing cell membranes and causing massive damage to skin, connective tissues and cells.
Vitamin E also plays an important role in reproduction and muscle function.
Vitamin K is an essential fat-soluble vitamin that is required
for the regulation of normal blood clotting functions.
Dietary vitamin K is found primarily in dark leafy vegetables, but most of the needs for this
micronutrient are met by micro organisms that synthesize
vitamin K in the intestinal tract.
Vitamin K's main function is to synthesize a protein vital for blood clotting. Vitamin K also aids in converting glucose into glycogen for storage in the liver, and may also play a role in bone formation. Vitamin
K deficiency can result in impaired blood clotting and
internal bleeding. A deficiency of vitamin K can be
caused by use of antibiotics, which can inhibit the
growth of the intestinal micro organisms required for
the synthesis of vitamin K.
Provitamins are substances which are transformed into vitamins in the body.
7-Dehydrocholesterol (sometimes known as provitamin D3) is a chemical
that, in the presence of ultraviolet light, is converted
by the body to previtamin D3 which is then
isomerized into Vitamin D3.
Beta-carotene is an exciting and powerful fat-soluble antioxidant
with tremendous ability to neutralize free radicals
and fight infectious diseases. Beta-carotene is also
referred to as provitamin A because in its natural form
it is not readily available for use in reptiles. When
there is need for extra vitamin A, beta carotene undergoes
a transformation as powerful liver enzymes split each
molecule of beta carotene to form two molecules of vitamin
A. This unique feature enables beta carotene to be non-toxic
at high doses whereas vitamin A can produce toxic effects
in relatively low doses. In addition to promoting good
vision, beta-carotene also boosts immune functions,
speeds recovery from infections and promotes wound healing.
Enzymes are an essential ingredient of the digestion process. From
the time food enters the mouth, enzymes are at work breaking
the food down into smaller and smaller components until
it can be absorbed through the intestinal wall and into
the blood stream. These enzymes come from two sources, those
found in the food itself, and those secreted in the body. Food naturally contains the enzymes necessary to aid in its digestion. When food is chewed, enzymes are liberated to aid in digestion.
Enzymes called protease break down proteins into polypeptides
(smaller amino acid chains) and eventually into single
amino acids. Amylase reduces complex carbohydrates to simpler
sugars like sucrose, lactose, and maltose. The Lipase enzyme
turns fat into free fatty acids and glycerol. Cellulases
break the bonds found in fibre and liberate the nutritional
value of fruits and vegetables. A reptile
is capable of producing similar enzymes, with the exception
of cellulase, necessary to digest food and allow for the
absorption of nutrients. Most
food enzymes are functionally destroyed in processed food,
leaving them without natural enzyme activity. Reptiles need
a certain amount of enzymes to properly digest food and
thus must produce more of their own enzymes in order to
make up the difference. The digestive processes can become over-stressed, leading to inadequate enzyme production in the organs designed to produce them. This digestive inadequacy
can cause improper digestion and poor absorption of nutrients
having far reaching effects. The consequences of malabsorption
may include impaired immune system function, poor wound
healing and skin problems. Supplementing
with added enzymes can improve digestion and help assure
maximum nutrient absorption.
Protease is responsible for digesting proteins in the food, which
is probably one of the most difficult substances to
metabolize. Because of this, protease is considered
to be one of the most important enzymes. If the digestive
process is incomplete, undigested protein can wind up
in the circulatory system, as well as in other parts
of the body.
Amylase is a group of enzymes that are present in saliva, pancreatic
juice, and parts of plants and catalyze the hydrolysis
of starch to sugar to produce carbohydrate derivatives.
Amylase, also called diastase, is responsible for digesting
carbohydrates in the food. It hydrolyzes starch, glycogen, and dextrin to form, in all three instances, glucose, maltose, and limit-dextrin. Salivary amylase is known as ptyalin. Ptyalin begins polysaccharide digestion in the mouth; the process is completed in the small intestine by the pancreatic amylase, sometimes called amylopsin.
Lipase is an enzyme capable of degrading lipid molecules. The
bulk of dietary lipids are a class called triacylglycerols
and are attacked by lipases to yield simple fatty acids
and glycerol, molecules which can permeate the membranes
of the stomach and small intestine for use by the body.
Cellulase is included to break down plant fibre (cellulose). Cellulase
is actually a complex consisting of three distinct enzymes
which together convert cellulose to glucose. Without
it plant fibre passes through undigested.
Probiotics are friendly bacteria
found in the mouth and intestines of healthy reptiles. These
microorganisms help defend against invading bacteria and
yeasts. Probiotic bacteria contribute to gastrointestinal
health by providing a synergistic environment and producing
health promoting substances including some vitamins. They
can regulate bowel movements and halt diarrhea while at
the same time enhancing the immune system. The use of antibiotics
kills the beneficial and the harmful bacteria. Supplemental
replenishment with probiotics can quickly return the flora
balance to normal, thus preventing many of the common side
effects associated with antibiotic treatment.
A critical mass is the smallest amount of fissile material needed for a sustained nuclear chain reaction. The critical mass of a fissionable material depends upon its nuclear properties (specifically, the nuclear fission cross-section), its density, its shape, its enrichment, its purity, its temperature, and its surroundings. The concept is important in nuclear weapon design.
Explanation of criticality
When a nuclear chain reaction in a mass of fissile material is self-sustaining, the mass is said to be in a critical state in which there is no increase or decrease in power, temperature, or neutron population.
A numerical measure of a critical mass is dependent on the effective neutron multiplication factor k, the average number of neutrons released per fission event that go on to cause another fission event rather than being absorbed or leaving the material. When k = 1, the mass is critical, and the chain reaction is barely self-sustaining.
A subcritical mass is a mass of fissile material that does not have the ability to sustain a fission chain reaction. A population of neutrons introduced to a subcritical assembly will exponentially decrease. In this case, k < 1. A steady rate of spontaneous fissions causes a proportionally steady level of neutron activity. The constant of proportionality increases as k increases.
A supercritical mass is one where there is an increasing rate of fission. The material may settle into equilibrium (i.e. become critical again) at an elevated temperature/power level or destroy itself, at which point equilibrium is reached. In the case of supercriticality, k > 1.
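To make the role of k concrete, here is a minimal illustrative sketch (the starting population and the k values are arbitrary assumptions; this is not a physics simulation) showing how a neutron population evolves generation by generation in the three regimes described above.

```python
# Illustrative sketch: how a neutron population evolves generation by
# generation for subcritical (k < 1), critical (k = 1) and supercritical
# (k > 1) assemblies. The starting population and k values are arbitrary.

def neutron_population(k, n0=1000.0, generations=10):
    """Return the population after each generation, assuming N_{i+1} = k * N_i."""
    populations = [n0]
    for _ in range(generations):
        populations.append(populations[-1] * k)
    return populations

for k in (0.9, 1.0, 1.1):
    final = neutron_population(k)[-1]
    print(f"k = {k}: population after 10 generations ~ {final:.0f}")

# Expected trend: k = 0.9 decays toward zero (subcritical), k = 1.0 stays
# constant (critical), k = 1.1 grows exponentially (supercritical).
```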
Changing the point of criticality
The mass where criticality occurs may be changed by modifying certain attributes such as fuel, shape, temperature, density and the installation of a neutron-reflective substance. These attributes have complex interactions and interdependencies. This section explains only the simplest ideal cases.
- Varying the amount of fuel
It is possible for a fuel assembly to be critical at near zero power. If the perfect quantity of fuel were added to a slightly subcritical mass to create an "exactly critical mass", fission would be self-sustaining for one neutron generation (fuel consumption makes the assembly subcritical).
If the perfect quantity of fuel were added to a slightly subcritical mass, to create a barely supercritical mass, the temperature of the assembly would increase to an initial maximum (for example: 1 K above the ambient temperature) and then decrease back to room temperature after a period of time, because fuel consumed during fission brings the assembly back to subcriticality once again.
- Changing the shape
A mass may be exactly critical without being a perfect homogeneous sphere. More closely refining the shape toward a perfect sphere will make the mass supercritical. Conversely changing the shape to a less perfect sphere will decrease its reactivity and make it subcritical.
- Changing the temperature
A mass may be exactly critical at a particular temperature. Fission and absorption cross-sections increase as the relative neutron velocity decreases. As fuel temperature increases, neutrons of a given energy appear faster and thus fission/absorption is less likely. This is not unrelated to Doppler broadening of the U-238 resonances but is common to all fuels/absorbers/configurations. Neglecting the very important resonances, the total neutron cross section of every material exhibits an inverse relationship with relative neutron velocity. Hot fuel is always less reactive than cold fuel (over/under moderation in LWR is a different topic). Thermal expansion associated with temperature increase also contributes a negative coefficient of reactivity since fuel atoms are moving farther apart. A mass that is exactly critical at room temperature would be sub-critical in an environment anywhere above room temperature due to thermal expansion alone.
- Varying the density of the mass
The higher the density, the lower the critical mass. The density of a material at a constant temperature can be changed by varying the pressure or tension or by changing crystal structure (see Allotropes of plutonium). An ideal mass will become subcritical if allowed to expand or conversely the same mass will become supercritical if compressed. Changing the temperature may also change the density; however, the effect on critical mass is then complicated by temperature effects (see "Changing the temperature") and by whether the material expands or contracts with increased temperature. Assuming the material expands with temperature (enriched Uranium-235 at room temperature for example), at an exactly critical state, it will become subcritical if warmed to lower density or become supercritical if cooled to higher density. Such a material is said to have a negative temperature coefficient of reactivity to indicate that its reactivity decreases when its temperature increases. Using such a material as fuel means fission decreases as the fuel temperature increases.
- Use of a neutron reflector
Surrounding a spherical critical mass with a neutron reflector further reduces the mass needed for criticality. A common material for a neutron reflector is beryllium metal. This reduces the number of neutrons which escape the fissile material, resulting in increased reactivity.
- Use of a tamper
In a bomb, a dense shell of material surrounding the fissile core will contain, via inertia, the expanding fissioning material. This increases the efficiency. A tamper also tends to act as a neutron reflector. Because a bomb relies on fast neutrons (not ones moderated by reflection with light elements, as in a reactor), because the neutrons reflected by a tamper are slowed by their collisions with the tamper nuclei, and because it takes time for the reflected neutrons to return to the fissile core, they take rather longer to be absorbed by a fissile nucleus. But they do contribute to the reaction, and can decrease the critical mass by a factor of four. Also, if the tamper is (e.g. depleted) uranium, it can fission due to the high energy neutrons generated by the primary explosion. This can greatly increase yield, especially if even more neutrons are generated by fusing hydrogen isotopes, in a so-called boosted configuration.
Critical mass of a bare sphere
The shape with minimal critical mass and the smallest physical dimensions is a sphere. Bare-sphere critical masses at normal density of some actinides are listed in the following table.
The critical mass for lower-grade uranium depends strongly on the grade: with 20% U-235 it is over 400 kg; with 15% U-235, it is well over 600 kg.
The critical mass is inversely proportional to the square of the density. If the density is 1% more and the mass 2% less, then the volume is 3% less and the diameter 1% less. The probability for a neutron per cm travelled to hit a nucleus is proportional to the density. It follows that 1% greater density means that the distance travelled before leaving the system is 1% less. This is something that must be taken into consideration when attempting more precise estimates of critical masses of plutonium isotopes than the approximate values given above, because plutonium metal has a large number of different crystal phases which can have widely varying densities.
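The percentage figures above can be checked with a few lines of arithmetic. The sketch below assumes nothing beyond the stated proportionality (critical mass inversely proportional to the square of the density); the baseline values are arbitrary.

```python
# Verify the stated scaling: critical mass ~ 1/density^2, so a 1% density
# increase gives roughly a 2% smaller critical mass, a ~3% smaller critical
# volume and a ~1% smaller critical diameter. Baseline values are arbitrary.
baseline_density = 1.0
baseline_mass = 1.0

new_density = baseline_density * 1.01          # density 1% higher
new_mass = baseline_mass / new_density**2      # M_c proportional to 1/rho^2
new_volume = new_mass / new_density            # V = M / rho
new_diameter = new_volume ** (1.0 / 3.0)       # d proportional to V^(1/3)

print(f"mass change:     {100 * (new_mass - 1):+.1f}%")      # ~ -2%
print(f"volume change:   {100 * (new_volume - 1):+.1f}%")    # ~ -3%
print(f"diameter change: {100 * (new_diameter - 1):+.1f}%")  # ~ -1%
```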
Note that not all neutrons contribute to the chain reaction. Some escape and others undergo radiative capture.
Let q denote the probability that a given neutron induces fission in a nucleus. Let us consider only prompt neutrons, and let ν denote the number of prompt neutrons generated in a nuclear fission. For example, ν ≈ 2.5 for uranium-235. Then, criticality occurs when ν·q = 1. The dependence of this upon geometry, mass, and density appears through the factor q.
Given a total interaction cross section σ (typically measured in barns), the mean free path of a prompt neutron is ℓ = 1/(n σ), where n is the nuclear number density. Most interactions are scattering events, so that a given neutron obeys a random walk until it either escapes from the medium or causes a fission reaction. So long as other loss mechanisms are not significant, then, the radius of a spherical critical mass is rather roughly given by the product of the mean free path and the square root of one plus the number of scattering events per fission event (call this s), since the net distance travelled in a random walk is proportional to the square root of the number of steps:

R_c ≈ ℓ √(s + 1) = √(s + 1) / (n σ)
Note again, however, that this is only a rough estimate.
In terms of the total mass M, the nuclear mass m, the density ρ, and a fudge factor f which takes into account geometrical and other effects, criticality corresponds to

M_critical = f · (4π/3) · m^3 (s + 1)^(3/2) / (ρ^2 σ^3),

which clearly recovers the aforementioned result that critical mass depends inversely on the square of the density.
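Putting the mean free path, the random-walk radius and the mass formula together gives a crude order-of-magnitude calculator. The sketch below only illustrates the scaling estimate in the text: the total cross section, the scattering-to-fission ratio and the density used for U-235 are rough assumed inputs rather than evaluated nuclear data, and the fudge factor f is simply set to 1.

```python
import math

# Order-of-magnitude sketch of the scaling estimate in the text:
#   mean free path    l   = m / (rho * sigma)
#   critical radius   R_c ~ l * sqrt(s + 1)
#   critical mass     M_c = f * (4/3) * pi * rho * R_c**3   (so M_c ~ 1/rho**2)
# The nuclear data below are rough assumed values for illustration only.

def critical_mass_estimate(mass_number, density, sigma_total_barns,
                           scatters_per_fission, fudge=1.0):
    u = 1.6605e-27                      # kg per atomic mass unit
    barn = 1e-28                        # m^2
    m = mass_number * u                 # mass of one nucleus, kg
    sigma = sigma_total_barns * barn    # total cross section, m^2
    mean_free_path = m / (density * sigma)
    radius = mean_free_path * math.sqrt(scatters_per_fission + 1)
    return fudge * (4.0 / 3.0) * math.pi * density * radius**3

# Assumed illustrative inputs for U-235 metal (not evaluated nuclear data):
estimate = critical_mass_estimate(mass_number=235, density=18.7e3,
                                  sigma_total_barns=7.0,
                                  scatters_per_fission=5.0)
print(f"rough U-235 bare-sphere estimate: {estimate:.0f} kg")
# Gives a few tens of kilograms; the accepted bare-sphere value for U-235
# is about 52 kg, so the crude random-walk estimate lands in the right
# ballpark. Doubling the density would cut the estimate by a factor of 4.
```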
Alternatively, one may restate this more succinctly in terms of the areal density of mass, Σ:

Σ_critical = f′ · m √(s + 1) / σ,

where the factor f has been rewritten as f′ to account for the fact that the two values may differ depending upon geometrical effects and how one defines Σ. For example, for a bare solid sphere of Pu-239 criticality is at 320 kg/m2, regardless of density, and for U-235 at 550 kg/m2. In any case, criticality then depends upon a typical neutron "seeing" an amount of nuclei around it such that the areal density of nuclei exceeds a certain threshold.
This is applied in implosion-type nuclear weapons where a spherical mass of fissile material that is substantially less than a critical mass is made supercritical by very rapidly increasing ρ (and thus Σ as well) (see below). Indeed, sophisticated nuclear weapons programs can make a functional device from less material than more primitive weapons programs require.
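As a worked example of the areal-density criterion, the sketch below assumes that the quoted figure is the mass per unit surface area of the bare sphere, M/(4πR^2), which is only one of the possible definitions of Σ alluded to above; the plutonium density used is likewise an assumed value.

```python
import math

# Worked example of the areal-density criterion, under the assumption that
# the quoted figure is mass per unit surface area of the bare sphere:
#   Sigma = M / (4 * pi * R^2) = rho * R / 3
# Solving for R and M with the 320 kg/m^2 value quoted for Pu-239 and an
# assumed alpha-phase plutonium density of ~19,800 kg/m^3.

sigma_crit = 320.0      # kg/m^2, figure quoted in the text for Pu-239
rho = 19.8e3            # kg/m^3, assumed density of alpha-phase plutonium

radius = 3.0 * sigma_crit / rho                  # from Sigma = rho * R / 3
mass = (4.0 / 3.0) * math.pi * rho * radius**3   # bare-sphere mass

print(f"critical radius ~ {100 * radius:.1f} cm")   # ~ 4.8 cm
print(f"critical mass   ~ {mass:.1f} kg")           # ~ 9-10 kg

# Compressing the same mass to twice the density raises Sigma = rho * R / 3
# by a factor of 2**(2/3) (about 1.6) while the critical threshold stays put,
# which is why implosion can make a subcritical sphere supercritical
# without adding material.
```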
Aside from the math, there is a simple physical analog that helps explain this result. Consider diesel fumes belched from an exhaust pipe. Initially the fumes appear black, then gradually you are able to see through them without any trouble. This is not because the total scattering cross section of all the soot particles has changed, but because the soot has dispersed. If we consider a transparent cube of length on a side, filled with soot, then the optical depth of this medium is inversely proportional to the square of , and therefore proportional to the areal density of soot particles: we can make it easier to see through the imaginary cube just by making the cube larger.
Several uncertainties contribute to the determination of a precise value for critical masses, including (1) detailed knowledge of cross sections, (2) calculation of geometric effects. This latter problem provided significant motivation for the development of the Monte Carlo method in computational physics by Nicholas Metropolis and Stanislaw Ulam. In fact, even for a homogeneous solid sphere, the exact calculation is by no means trivial. Finally note that the calculation can also be performed by assuming a continuum approximation for the neutron transport. This reduces it to a diffusion problem. However, as the typical linear dimensions are not significantly larger than the mean free path, such an approximation is only marginally applicable.
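In that spirit, here is a deliberately simplified Monte Carlo sketch of the geometric part of the problem. It follows neutrons on isotropic random walks inside a bare sphere and estimates q, the probability defined earlier that a neutron causes a fission before escaping. The sphere radius (in mean free paths), the scattering-to-fission ratio s and the trial count are assumed illustrative inputs, and capture is ignored, as in the rough estimate above.

```python
import math
import random

# Minimal Monte Carlo sketch: follow neutrons on isotropic random walks
# inside a bare sphere (distances in units of the mean free path) and
# estimate q, the probability that a neutron causes a fission before it
# escapes. Inputs are illustrative assumptions, not evaluated nuclear data.

def _random_direction(rng):
    # isotropic unit vector
    z = rng.uniform(-1.0, 1.0)
    phi = rng.uniform(0.0, 2.0 * math.pi)
    rho = math.sqrt(1.0 - z * z)
    return rho * math.cos(phi), rho * math.sin(phi), z

def estimate_q(radius_mfp, s, trials=20000, seed=1):
    rng = random.Random(seed)
    p_fission = 1.0 / (s + 1.0)      # chance that a collision is a fission
    fissions = 0
    for _ in range(trials):
        # start at a uniformly random point inside the sphere
        r = radius_mfp * rng.random() ** (1.0 / 3.0)
        dx, dy, dz = _random_direction(rng)
        x, y, z = r * dx, r * dy, r * dz
        while True:
            dx, dy, dz = _random_direction(rng)
            step = rng.expovariate(1.0)          # free path ~ Exp(mean 1)
            x, y, z = x + step * dx, y + step * dy, z + step * dz
            if x * x + y * y + z * z > radius_mfp ** 2:
                break                            # escaped before colliding
            if rng.random() < p_fission:
                fissions += 1                    # collision was a fission
                break
    return fissions / trials

for radius in (1.0, 2.0, 4.0):
    q = estimate_q(radius_mfp=radius, s=5.0)
    print(f"R = {radius} mean free paths: q ~ {q:.2f}, nu*q ~ {2.5 * q:.2f}")
# With nu ~ 2.5, criticality (nu*q = 1) needs q ~ 0.4; q grows with radius.
```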
Finally, note that for some idealized geometries, the critical mass might formally be infinite, and other parameters are used to describe criticality. For example, consider an infinite sheet of fissionable material. For any finite thickness, this corresponds to an infinite mass. However, criticality is only achieved once the thickness of this slab exceeds a critical value.
Criticality in nuclear weapon design
Until detonation is desired, a nuclear weapon must be kept subcritical. In the case of a uranium bomb, this can be achieved by keeping the fuel in a number of separate pieces, each below the critical size either because they are too small or unfavorably shaped. To produce detonation, the uranium is brought together rapidly. In Little Boy, this was achieved by firing a piece of uranium (a 'doughnut') down a gun barrel onto another piece (a 'spike'), a design referred to as a gun-type fission weapon.
A theoretical 100% pure Pu-239 weapon could also be constructed as a gun-type weapon, like the Manhattan Project's proposed Thin Man design. In reality, this is impractical because even "weapons grade" Pu-239 is contaminated with a small amount of Pu-240, which has a strong propensity toward spontaneous fission. Because of this, a reasonably sized gun-type weapon would undergo a premature nuclear reaction before the masses of plutonium were in a position for a full-fledged explosion to occur.
Instead, the plutonium is present as a subcritical sphere (or other shape), which may or may not be hollow. Detonation is produced by exploding a shaped charge surrounding the sphere, increasing the density (and collapsing the cavity, if present) to produce a prompt critical configuration. This is known as an implosion type weapon.
See also
- Nuclear chain reaction
- Nuclear weapon design
- Criticality accident
- Nuclear criticality safety
- Geometric and Material Buckling
186 | Most of us are familiar with the multiplication of polynomials. It is one of the first things taught in algebra. Polynomial multiplication, as you are well aware, involves the application of the distributive property of multiplication over addition, the addition of exponents, collecting of like terms, etc. to get a new polynomial from two or more other polynomials. However, the division of one polynomial by another is not familiar to most because it is seldom taught or even mentioned.
In the previous lesson, we identified one of the solutions to a quartic equation based on the values of the coefficients of the different powers of the unknown variable, x. Once the solution is identified, we needed to perform a polynomial division to get the residual cubic equation from the given quartic equation and one of its solutions. We touched upon the procedure for performing this tangentially, but we did not take it up formally. In this lesson, we will formalize the procedure for performing simple polynomial divisions. In particular, we will deal with the division of higher-order polynomials by polynomials of the first degree (linear polynomials) in this lesson. In future lessons, we will deal with divisions where the divisor is a polynomial of higher degree.
In the most general terms, in this lesson, we are dealing with divisions of the following form:
Divide ax^n + bx^(n-1) + ... + constant by (cx + d)
In the previous lesson, (cx + d) was either (x + 1) or (x - 1). But the method we used in that lesson is applicable to other divisors also. We already know how to do some basic divisions without even realizing we are doing polynomial division. As in any other division, the answer will have a quotient and a remainder. In some cases, the remainder may be zero. In some cases, the quotient may be zero too.
Consider the following example of division by a linear polynomial, which practically anyone can actually perform without any hesitation:
Divide 4x^2 + 6x + 7 by 2x.
In this case, c = 2 and d = 0. Just by inspection, we can say that the quotient of this division is 2x + 3, and the remainder is 7. The way to verify whether we obtained the correct answer, just as in the case of arithmetic division is to check whether dividend = quotient * divisor + remainder.
In the above case, we can indeed verify that 4x^2 + 6x + 7 = 2x*(2x + 3) + 7. Thus, we can be confident that our division was correct.
Now, consider another example of polynomial division which may not be readily apparent as polynomial division. Divide 4x^2 + 6x + 7 by 2. In this case c = 0 and d = 2. Once again, just be inspection, we can say that the quotient is 2x^2 + 3x + 3, and the remainder is 1. We can verify that this is the correct answer by seeing that 4x^2 + 6x + 7 = 2*(2x^2 + 3x + 3) + 1.
However, when neither c nor d is zero, the situation becomes trickier. That is the kind of division that is not dealt with in school in any detail. It is the kind of division that is not taught to students as a normal part of algebra education for whatever reason. It is actually very useful to teach this kind of division, as we will see later in this lesson!
Consider the division of 4x^2 + 6x + 7 by x + 2. Essentially, we have to find (4x^2 + 6x + 7)/(x + 2). How exactly do we perform this division? The answer turns out to be quite simple actually. We have already seen some examples of how we do such a division in the previous lesson. Let us formalize the method in this lesson.
Consider the division of a polynomial ax^n + bx^(n-1) + ... + constant by the first degree expression cx + d. Consider the ratio c/d. The secret to polynomial division is to rewrite the numerator using this ratio just like we did in our lessons on cubic equations and quartic equations.
We have to rewrite the dividend in the form below:
ax^n + ex^(n-1) + fx^(n-1) + ... + yx + zx + constant1 + constant2
And the coefficients have to satisfy the conditions below:
c/d = a/e = f/g = ... = z/constant1
e + f = b
y + z = coefficient of x in the given dividend
constant1 + constant2 = constant
In some cases, constant2 could be zero. When that is the case, the division does not have a remainder. When constant2 is not zero, that is the remainder (at least an intermediate remainder) of the division.
Let us see how these rules are applied in the case of (4x^2 + 6x + 7)/(x + 2). In this case, c/d = 1/2. Thus, we have to rewrite the dividend using the ration 1/2.
4x^2 + 6x + 7 = 4x^2 + ax + bx + constant1 + constant2
1/2 = 4/a = b/constant1
a + b = 6
constant1 + constant2 = 7
We see that we can accomplish this by writing the dividend as below:
4x^2 + 8x - 2x - 4 + 11
This can then be factorized as below:
4x(x + 2) - 2(x + 2) + 11
This then tells that the quotient of the division is (4x - 2), and the remainder is 11. As you can see, the method itself is quite simple and straightforward. If you can work with ratios, then you can perform polynomial division! We will spend some time on a couple more examples, then we will move on to some special cases, and then conclude with a very useful practical application of the technique.
Consider (4x^3 + 6x^2 - 8x + 15)/(2x - 1). In this case, c/d = -2/1. Thus, we rewrite the dividend using this ratio as below:
4x^3 -2x^2 + 8x^2 - 4x - 4x + 2 + 13
We can then factorize the expression above as below:
2x^2(2x - 1) + 4x(2x - 1) - 2(2x - 1) + 13
Thus, the quotient of the division is 2x^2 + 4x - 2, and the remainder is 13.
Now, consider (3x^4 + 4x^3 - 11x^2 + 6x - 5)/(2x + 1). In this case c/d = 2/1. Thus, we rewrite the dividend using this ratio as below:
3x^4 + 1.5x^3 + 2.5x^3 + 1.25x^2 - 12.25x^2 - 6.125x + 12.125x + 6.0625 - 11.0625
We can then factorize the expression as below:
1.5x^3(2x + 1) + 1.25x^2(2x + 1) - 6.125x(2x + 1) + 6.0625(2x + 1) - 11.0625
The quotient is then (1.5x^3 + 1.25x^2 - 6.125x + 6.0625), and the remainder is -11.0625. We can rewrite the quotient as 24x^3/16 + 20x^2/16 - 98x/16 + 97/16, and the remainder as -177/16. Thus, the quotient can be rewritten as (24x^3 + 20x^2 - 98x + 97)/16, and the remainder is -177/16.
Thus, we see that the quotient and remainder can contain fractional terms. And, in this case, the remainder was negative too! Thus, polynomial division can result in some results that can cause us to scratch our heads. But, we can verify that the answer we got is correct by simply multiplying the quotient with the divisor, and adding the remainder to the result to see whether we get back our original dividend.
Now consider a division as below:
(2x^3 + 5x^2 - 6x + 8)/(2x + 2)
We see that the ratio c/d in this case is 2/2 = 1. Thus, we could rewrite the dividend expression as below:
2x^3 + 2x^2 + 3x^2 + 3x - 9x - 9 + 17
We would then factorize it as below:
2x^2(x + 1) + 3x(x + 1) - 9(x + 1) + 17
We immediately see that the expressions in the parentheses are x + 1, not our divisor, 2x + 2. Thus, if we decide, based on the factorization, that our quotient is 2x^2 + 3x - 9, with a remainder of 17, we will find that we are wrong. In fact, (2x^2 + 3x - 9)(2x + 2) + 17 = 4x^3 + 10x^2 - 12x - 1, which is not what we started with as our original dividend. This, then tells us that the actual factorization of the rewritten dividend is as below:
x^2(2x + 2) + (3x/2)(2x + 2) - (9/2)(2x + 2) + 17
Thus, the quotient would be (x^2 + 3x/2 - 9/2) and the remainder would be 17. This can be verified as the correct answer. Thus, we need to pay extra attention when the ratio c/d can be simplified by the presence of common factors. The common factor, in this case, prevented us from getting the divisor as a common factor during the factorization when we factorized the rewritten dividend. We then had to introduce fractions during the factorization to get the divisor as the common factor during the factorization.
A simpler solution to the above problem would be to take the common factor out of the divisor and divide the dividend by this common factor in advance. In this case, the common factor in the divisor is 2. So, we can convert the dividend to x^3 + (5/2)x^2 - 3x + 4 before we attempt the division. Now, our division problem is reduced to (x^3 + (5/2)x^2 - 3x + 4)/(x + 1). The ratio of terms in the divisor is 1/1 = 1. Thus, we can perform the division by rewriting the dividend as below:
x^3 + x^2 + (3/2)x^2 + (3/2)x - (9/2)x - 9/2 + 17/2
This will then enable us to perform the factorization as below:
x^2(x + 1) + (3x/2)(x + 1) - (9/2)(x + 1) + 17/2
The quotient is then x^2 + 3x/2 + 9/2. But since we divided the numerator and denominator by the common factor, 2, remember that the remainder is now to be multiplied by this common factor to get the final remainder. Thus, the remainder is not 17/2, but is instead 17!
Having fractional terms as part of the dividend may appear a little intimidating. So, another solution would be to take the common factor out of the divisor and keep it aside. Perform the division as usual, then divide the resulting quotient by the common factor.
In this case, our initial division resulted in a quotient of 2x^2 + 3x - 9. We see that we can then divide this by the common factor, 2, to get the correct quotient. Remember not to divide the remainder by the common factor!
Now, let us consider the last special case. Consider the division of 6x + 7 by x + 1. In this case, the dividend is also a linear polynomial. But the method is still the same as before. The ratio, c/d, in this case is 1/1 = 1. Thus, we rewrite the dividend expression as below in preparation for factorization:
6x + 6 + 1
This then leads to the factorization as below:
6(x + 1) + 1
We then conclude that the quotient is 6 and the remainder is 1.
Let us now record our observations about polynomial division, with a first-degree polynomial as the divisor, below:
- The quotient will be a polynomial of one degree lower than the original dividend (thus, if the dividend is also a linear expression, as in our last example, then the quotient will be just a constant with no x term)
- The remainder will be a constant, but can be either positive or negative
- If the divisor has a constant as a common factor, we can set the constant aside, divide by the divisor reduced to lowest terms, then divide the quotient by the common factor to get the correct quotient (no change needs to be made to the remainder)
- Alternatively, we can divide the dividend by the common factor, then perform the division by the divisor reduced to lowest terms to get the quotient, then multiply the remainder by the common factor to get the correct remainder
As you can see, the division of a polynomial by a linear polynomial is quite easy once the method of ratios is laid out. This is an application of the Madhyamadhyena Adhyamanthyena sutra. Now, as promised, let us examine why this technique is useful. The secret to its usefulness lies in the insight that practically every real number is a polynomial. Consider the number 324, for instance.
324 = 3*10^2 + 2*10 + 4
This is the same as 3x^2 + 2x + 4, where x = 10! In fact, we can even express it is 3x^2 + 3x - 6, where x = 10. In fact there are several ways to express a given number as a polynomial, even using just x = 10. Using a different value of x enables us to rewrite the number in other bases (such as octal, where x = 8, hexadecimal, where x = 16 and binary, where x = 2). Because of this, it is easy to see that we can use our insights from polynomial division to perform regular arithmetic division with just as much ease! In fact, this is the practical application of polynomial division that I have been hinting at since the beginning of this lesson!!
Take the case of 324/11 for instance. This can be rewritten as (3x^2 + 2x + 4)/(x + 1), where x = 10. We already know how to do this with no problems. We rewrite the dividend expression as below:
3x^2 + 3x - x - 1 + 5
We then factorize it to get:
3x(x + 1) -1(x + 1) + 5
We then conclude that the quotient is 3x - 1 and the remainder is 5. 3x - 1 when x = 10 is 29. Thus, we have just performed the division, 324/11 and found that the quotient is 29 with a remainder of 5. I encourage you to verify that this is indeed correct! Let us now take on a few more examples to see just how easy the method is.
Take 372/12 for instance. This can be rewritten as (3x^2 + 7x + 2)/(x + 2) where x = 10. The ratio, c/d, is then 1/2. We can then rewrite the dividend expression as below, and factorize it:
3x^2 + 6x + x + 2, which can be factorized as
3x(x + 2) + 1(x + 2)
Thus, we see that the quotient is 3x + 1 (or 31 with 10 as the value of x), and the remainder is 0!
Now, let us take a few slightly trickier cases. Consider 412/12, for instance. This can be rewritten as (4x^2 + x + 2)/(x + 2), where x = 10. The ratio, c/d is once again 1/2. Then, the dividend can be rewritten and factorized as below:
4x^2 + 8x - 7x - 14 + 16 which can be factorized as
4x(x + 2) - 7(x + 2) + 16
This then tells us that the quotient is 4x - 7 (or 33, since x = 10), and the remainder is 16. This can not be correct. We can verify using a calculator that the quotient in this case should be 34 and the remainder should be 4. It turns out that our solution is correct, but it does not meet the standards for normal division. In particular, in this case, the remainder turned out to be bigger than the divisor. To correct this, add one to the quotient and reduce the remainder by the divisor. If necessary, repeat the operation until the remainder becomes less than the divisor. Following this procedure, we can then adjust our solution to a quotient of 34 (add one to 33), and a remainder of 4 (subtract the divisor, 12, from the original remainder of 16). Now, we can say that the answer is in truly correct form!
Now consider the case of 271/13. This can be rewritten as (2x^2 + 7x + 1)/(x + 3), where x = 10. The ratio, c/d is 1/3 in this case. We can then rewrite the dividend expression, and do the factorization as below:
2x^2 + 6x + x + 3 - 2 which can be factorized as
2x(x + 3) + 1(x + 3) - 2
We can then interpret this as saying that the quotient is (2x + 1), or 21 since x = 10, and the remainder is -2. Once again, we are left with an answer that seems wrong. We are not used to dealing with negative remainders when we do divisions normally. But the correction for this problem, once again, is very simple. We simply subtract one from the quotient and add the divisor to the remainder. Repeat this until the remainder is positive. Thus, we can correct our answer to a quotient of 20 (subtract 1 from 21), and a remainder of 11 (add the divisor, 13, to -2). It is easy to verify that a quotient of 20 and a remainder of 11 is indeed correct.
Next, let us consider 4878/18. We can write it as (4x^3 + 8x^2 + 7x + 8)/(x + 8), where x = 10. But the ratio, c/d now becomes 1/8. Using that ratio, we would be forced to rewrite the dividend expression as below:
4x^3 + 32x^2 - 24x^2 - 192x + 199x + 1592 - 1584
Obviously, this is correct, and does give us the following factorization:
4x^2(x + 8) - 24x(x + 8) + 199(x + 8) - 1584
This then translates to a quotient of 4x^2 - 24x + 199, which equals 400 - 240 + 199 = 359, and a remainder of -1584. To get this answer to standard form would require a lot of subtractions of 1 from the quotient and additions of 18 to the remainder!
Instead, let us consider a different approach to this problem. We can rewrite the given problems as (4x^3 + 8x^2 + 7x + 8)/(2x - 2), where x = 10. Now, we see that c/d = -1 and the divisor has a common factor of 2. Let us take the common factor out initially and do the rewriting and factorization as below:
4x^3 - 4x^2 + 12x^2 - 12x + 19x - 19 + 27 which can be factorized as
4x^2(x - 1) + 12x(x - 1) + 19(x - 1) + 27
This translates to a quotient of 4x^2 + 12x + 19, which can be translated as 539 by substituting x = 10. Now, remember to divide the quotient by 2 (the common factor in the divisor), and we get 269.50. Since the remainder is greater than 18, we also have to subtract 18 from it and add 1 to the quotient. This gives us a quotient of 270.50 and a remainder of 9. Since 0.50*18 is 9, we can further simplify it to a quotient of 270 with a remainder of 18 (subtract the 0.50 from the quotient and add 0.50*18 to the remainder), which once again can be reduced to a quotient of 271 with no remainder. We can verify that this is indeed the correct answer.
The approach we took the second time around is reminiscent of the use of vinculums in various arithmetic operations. Vinculums are dealt with in great detail in a lesson dedicated to them. It may be useful to review that lesson once more for more insights into the usefulness of the concept of vinculums.
Now, let us consider a more tricky case. We will try 3421/77. This can be rewritten as (3x^3 + 4x^2 + 2x + 1)/(7x + 7), where x = 10. We see that there is a common factor of 7 in the divisor, and a c/d ratio of 1. Using the ratio, we can rewrite the dividend and factorize it as below:
3x^3 + 3x^2 + x^2 + x + x + 1, which can be factorized as
3x^2(x + 1) + x(x + 1) + 1(x + 1)
This directly translates to a quotient of 311 with a remainder of 0. We now have to remember to divide the quotient by the common factor, 7. This presents us problems since we are left with another division problem almost as difficult as the original problem. However, we have reduced the problem by at least an order of magnitude, and in fact, we can perform the division by 7 in our heads to get a final quotient of 44 and a remainder of 3. Notice that this remainder is from a division by 7 while the real remainder needs to be from a division by 77. So, to get the true remainder to the problem, we now multiply this remainder of 3 from this step by 77/7 (original divisor/common factor), which is 11. Thus, the final answer is 44 with a remainder of 33.
An easier way to remember this might be to express the answer as 44 and 3/7 with a remainder of 0. Now, we convert the 3/7 to the final remainder by multiplying by the divisor, 77, to get 33 as the final remainder. We can verify that 44*77 + 33 = 8843, so we know our answer is correct.
To obviate the need for a manual division, we can once again the use the concept of vinculums to rewrite the given problem as (3x^3 + 4x^2 + 2x + 1)/(8x - 3). This then gives us a c/d ratio of -8/3. But this leads to new problems as we see below:
3x^3 + 4x^2 + 2x + 1 can be rewritten with a ratio of -8/3 as
3x^3 - (9/8) x^2 + (41/8)x^2 - (123/64)x + (251/64)x - 753/512 + 1264/512 which can be factorized as
(3/8)x^2(8x - 3) + (64/41)x(8x - 3) + (512/251)(8x - 3) + 1264/512
This then gives us a quotient of 300/8 + 640/41 + 512/251 (after substituting x = 10 in the expression above) with a remainder of 1264/512. As you can see, this is not a very convenient expression to work with and convert into a standard form for presentation as a normal quotient and remainder. So, the lesson to take away from this is to be careful and not carried away by vinculums too much!
The concept of polynomial division is useful to learn since it can translate directly into an easy method for arithmetic division also. But, sometimes, it does not produce results any faster or easier than other methods of division. This is important to recognize as a shortcoming of the method. Different methods have their own strengths and weaknesses. If the ratio we need to work with is inconvenient and can not be converted into a more convenient ratio by the use of vinculums or other tricks, we have to recognize that polynomial division may not be the best approach to the division problem.
Moreover, in this lesson, we have dealt with only linear divisors. This can limit the technique to just 2-digit divisors under most conditions. We will leave this lesson with an example of a division by a 3-digit divisor, but in general, we will deal with division by more digits in the next lesson, when we will deal with polynomial division where the divisor is not linear. For an example of a division by a 3-digit divisor that can be accommodated within the scope of this lesson (polynomial division by linear polynomials), please read the full lesson here.
As you can see, polynomial division has several uses, chief among them, the factorization of polynomial expressions. But, as we saw in this lesson, it can also translate into an easier way to do some arithmetic divisions also. Hope you will take the time to practice some of the techniques, both with polynomial expressions as well as with arithmetic expressions, so that you can be confident about their application when it is appropriate. Good luck, and happy computing! | http://vedicmathsindia.blogspot.com/2010/03/polynomial-division-1.html | 13 |
87 | In any presentation covering the quantitative physics of a class of systems, it is important to beware of the units of measurement used! In this presentation of stepping motor physics, we will assume standard physical units:
A force of one pound will accelerate a mass of one slug at one foot per second squared. The same relationship holds between the force, mass, time and distance units of the other measurement systems. Most people prefer to measure angles in degrees, and the common engineering practice of specifying mass in pounds or force in kilograms will not yield correct results in the formulas given here! Care must be taken to convert such irregular units to one of the standard systems outlined above before applying the formulas given here!
English CGS MKS MASS slug gram kilogram FORCE pound dyne newton DISTANCE foot centimeter meter TIME second second second ANGLE radian radian radian
For a motor that turns S radians per step, the plot of torque versus angular position for the rotor relative to some initial equilibrium position will generally approximate a sinusoid. The actual shape of the curve depends on the pole geometry of both rotor and stator, and neither this curve nor the geometry information is given in the motor data sheets I've seen! For permanent magnet and hybrid motors, the actual curve usually looks sinusoidal, but looks can be misleading. For variable reluctance motors, the curve rarely even looks sinusoidal; trapezoidal and even assymetrical sawtooth curves are not uncommon.
For a three-winding variable reluctance or permanent magnet motors with S radians per step, the period of the torque versus position curve will be 3S; for a 5-phase permanent magnet motor, the period will be 5S. For a two-winding permanent magnet or hybrid motor, the most common type, the period will be 4S, as illustrated in Figure 2.1:
Figure 2.1Again, for an ideal 2 winding permanent magnet motor, this can be mathematically expressed as:
T = -h sin( ((π / 2) / S) θ )Where:
T -- torqueBut remember, subtle departures from the ideal sinusoid described here are very common.
h -- holding torque
S -- step angle, in radians
θ = shaft angle, in radians
The single-winding holding torque of a stepping motor is the peak value of the torque versus position curve when the maximum allowed current is flowing through one motor winding. If you attempt to apply a torque greater than this to the motor rotor while maintaining power to one winding, it will rotate freely.
It is sometimes useful to distinguish between the electrical shaft angle and the mechanical shaft angle. In the mechanical frame of reference, 2π radians is defined as one full revolution. In the electrical frame of reference, a revolution is defined as one period of the torque versus shaft angle curve. Throughout this tutorial, θ refers to the mechanical shaft angle, and ((π/2)/S)θ gives the electrical angle for a motor with 4 steps per cycle of the torque curve.
Assuming that the torque versus angular position curve is a good approximation of a sinusoid, as long as the torque remains below the holding torque of the motor, the rotor will remain within 1/4 period of the equilibrium position. For a two-winding permanent magnet or hybrid motor, this means the rotor will remain within one step of the equilibrium position.
With no power to any of the motor windings, the torque does not always fall to zero! In variable reluctance stepping motors, residual magnetization in the magnetic circuits of the motor may lead to a small residual torque, and in permanent magnet and hybrid stepping motors, the combination of pole geometry and the permanently magnetized rotor may lead to significant torque with no applied power.
The residual torque in a permanent magnet or hybrid stepping motor is frequently referred to as the cogging torque or detent torque of the motor because a naive observer will frequently guess that there is a detent mechanism of some kind inside the motor. The most common motor designs yield a detent torque that varies sinusoidally with rotor angle, with an equilibrium position at every step and an amplitude of roughly 10% of the rated holding torque of the motor, but a quick survey of motors from one manufacturer (Phytron) shows values as high as 23% for one very small motor to a low of 2.6% for one mid-sized motor.
So long as no part of the magnetic circuit saturates, powering two motor windings simultaneously will produce a torque versus position curve that is the sum of the torque versus position curves for the two motor windings taken in isolation. For a two-winding permanent magnet or hybrid motor, the two curves will be S radians out of phase, and if the currents in the two windings are equal, the peaks and valleys of the sum will be displaced S/2 radians from the peaks of the original curves, as shown in Figure 2.2:
Figure 2.2This is the basis of half-stepping. The two-winding holding torque is the peak of the composite torque curve when two windings are carrying their maximum rated current. For common two-winding permanent magnet or hybrid stepping motors, the two-winding holding torque will be:
h2 = 20.5 h1where:
h1 -- single-winding holding torqueThis assumes that no part of the magnetic circuit is saturated and that the torque versus position curve for each winding is an ideal sinusoid.
h2 -- two-winding holding torque
Most permanent-magnet and variable-reluctance stepping motor data sheets quote the two-winding holding torque and not the single-winding figure; in part, this is because it is larger, and in part, it is because the most common full-step controllers always apply power to two windings at once.
If any part of the motor's magnetic circuits is saturated, the two torque curves will not add linearly. As a result, the composite torque will be less than the sum of the component torques and the equilibrium position of the composite may not be exactly S/2 radians from the equilibria of the original.
Microstepping allows even smaller steps by using different currents through the two motor windings, as shown in Figure 2.3:
Figure 2.3For a two-winding variable reluctance or permanent magnet motor, assuming nonsaturating magnetic circuits, and assuming perfectly sinusoidal torque versus position curves for each motor winding, the following formula gives the key characteristics of the composite torque curve:
h = ( a2 + b2 )0.5Where:
x = ( S / (π / 2) ) arctan( b / a )
a -- torque applied by winding with equilibrium at 0 radians.In the absence of saturation, the torques a and b are directly proportional to the currents through the corresponding windings. It is quite common to work with normalized currents and torques, so that the single-winding holding torque or the maximum current allowed in one motor winding is 1.0.
b -- torque applied by winding with equilibrium at S radians.
h -- holding torque of composite.
x -- equilibrium position, in radians.
S -- step angle, in radians.
The torque versus position curve shown in Figure 2.1 does not take into account the torque the motor must exert to overcome friction. Note that frictional forces may be divided into two large categories, static or sliding friction, which requires a constant torque to overcome, regardless of velocity, and dynamic friction or viscous drag, which offers a resistance that varies with velocity. Here, we are concerned with the impact of static friction. Suppose the torque needed to overcome the static friction on the driven system is 1/2 the peak torque of the motor, as illustrated in Figure 2.4.
Figure 2.4The dotted lines in Figure 2.4 show the torque needed to overcome friction; only that part of the torque curve outside the dotted lines is available to move the rotor. The curve showing the available torque as a function of shaft angle is the difference between these curves, as shown in Figure 2.5:
Figure 2.5Note that the consequences of static friction are twofold. First, the total torque available to move the load is reduced, and second, there is a dead zone about each of the equilibria of the ideal motor. If the motor rotor is positioned anywhere within the dead zone for the current equilibrium position, the frictional torque will balance the torque applied by the motor windings, and the rotor will not move. Assuming an ideal sinusoidal torque versus position curve in the absence of friction, the angular width of these dead zones will be:
d = 2 ( S / (π / 2)) arcsin( f / h ) = ( S / (π / 4)) arcsin( f / h )where:
d -- width of dead zone, in radians
S -- step angle, in radians
f -- torque needed to overcome static friction
h -- holding torque
The important thing to note about the dead zone is that it limits the ultimate positioning accuracy. For the example, where the static friction is 1/2 the peak torque, a 90° per step motor will have dead-zones 60° wide. That means that successive steps may be as large as 150° and as small as 30°, depending on where in the dead zone the rotor stops after each step!
The presence of a dead zone has a significant impact on the utility of microstepping! If the dead zone is x° wide, then microstepping with a step size smaller than x° may not move the rotor at all. Thus, for systems intended to use high resolution microstepping, it is very important to minimize static friction.
This entire discussion of static friction is oversimplified because we assumed that the force needed to overcome static friction is a constant that is independent of velocity. This is a useful approximation, but real sliding contact frequently exhibits another phenomonon, sometimes described as stiction. In real systems, there is frequently a near constant static friction, independent of velocity, so long as the velocity is nonzero. When the velocity falls sufficiently close to zero, however, the frictional force rises because the sliding surfaces stick.
Each time you step the motor, you electronically move the equilibrium position S radians. This moves the entire curve illustrated in Figure 2.1 a distance of S radians, as shown in Figure 2.6:
Figure 2.6The first thing to note about the process of taking one step is that the maximum available torque is at a minimum when the rotor is halfway from one step to the next. This minimum determines the running torque, the maximum torque the motor can drive as it steps slowly forward. For common two-winding permanent magnet motors with ideal sinusoidal torque versus position curves and holding torque h, this will be h/(20.5). If the motor is stepped by powering two windings at a time, the running torque of an ideal two-winding permanent magnet motor will be the same as the single-winding holding torque.
It shoud be noted that at higher stepping speeds, the running torque is sometimes defined as the pull-out torque. That is, it is the maximum frictional torque the motor can overcome on a rotating load before the load is pulled out of step by the friction. Some motor data sheets define a second torque figure, the pull-in torque. This is the maximum frictional torque that the motor can overcome to accelerate a stopped load to synchronous speed. The pull-in torques documented on stepping motor data sheets are of questionable value because the pull-in torque depends on the moment of inertia of the load used when they were measured, and few motor data sheets document this!
In practice, there is always some friction, so after the equilibrium position moves one step, the rotor is likely to oscillate briefly about the new equilibrium position. The resulting trajectory may resemble the one shown in Figure 2.7:
Figure 2.7Here, the trajectory of the equilibrium position is shown as a dotted line, while the solid curve shows the trajectory of the motor rotor.
The resonant frequency of the motor rotor depends on the amplitude of the oscillation; but as the amplitude decreases, the resonant frequency rises to a well-defined small-amplitude frequency. This frequency depends on the step angle and on the ratio of the holding torque to the moment of inertia of the rotor. Either a higher torque or a lower moment will increase the frequency!
Formally, the small-amplitude resonance can be computed as follows: First, recall Newton's law for angular acceleration:
T = µ AWhere:
T -- torque applied to rotorWe assume that, for small amplitudes, the torque on the rotor can be approximated as a linear function of the displacement from the equilibrium position. Therefore, Hooke's law applies:
µ -- moment of inertia of rotor and load
A -- angular acceleration, in radians per second per second
T = -k θwhere:
k -- the "spring constant" of the system, in torque units per radianWe can equate the two formulas for the torque to get:
θ -- angular position of rotor, in radians
µ A = -k θNote that acceleration is the second derivitive of position with respect to time:
A = d2θ/dt2so we can rewrite this the above in differential equation form:
d2θ/dt2 = -(k/µ) θTo solve this, recall that, for:
f( t ) = a sin btThe derivitives are:
df( t )/dt = ab cos btNote that, throughout this discussion, we assumed that the rotor is resonating. Therefore, it has an equation of motion something like:
d2f( t )/dt2 = -ab2 sin bt = -b2 f(t)
θ = a sin (2π f t)This is an admissable solution to the above differential equation if we agree that:
a = angular amplitude of resonance
f = resonant frequency
b = 2π fSolving for the resonant frequency f as a function of k and µ, we get:
b2 = k/µ
f = ( k/µ )0.5 / 2πIt is crucial to note that it is the moment of inertia of the rotor plus any coupled load that matters. The moment of the rotor, in isolation, is irrelevant! Some motor data sheets include information on resonance, but if any load is coupled to the rotor, the resonant frequency will change!
In practice, this oscillation can cause significant problems when the stepping rate is anywhere near a resonant frequency of the system; the result frequently appears as random and uncontrollable motion.
Up to this point, we have dealt only with the small-angle spring constant k for the system. This can be measured experimentally, but if the motor's torque versus position curve is sinusoidal, it is also a simple function of the motor's holding torque. Recall that:
T = -h sin( ((π / 2) / S) θ )The small angle spring constant k is the negative derivitive of T at the origin.
k = -dT / dθ = - (- h ((π / 2) / S) cos( 0 ) ) = (π / 2)(h / S)Substituting this into the formula for frequency, we get:
f = ( (π / 2)(h / S) / µ )0.5 / 2π = ( h / ( 8π µ S ) )0.5Given that the holding torque and resonant frequency of the system are easily measured, the easiest way to determine the moment of inertia of the moving parts in a system driven by a stepping motor is indirectly from the above relationship!
µ = h / ( 8π f 2 S )For practical purposes, it is usually not the torque or the moment of inertia that matters, but rather, the maximum sustainable acceleration that matters! Conveniently, this is a simple function of the resonant frequency! Starting with the Newton's law for angular acceleration:
A = T / µWe can substitute the above formula for the moment of inertia as a function of resonant frequency, and then substitute the maximum sustainable running torque as a function of the holding torque to get:
A = ( h / ( 20.5 ) ) / ( h / ( 8π f 2 S ) ) = 8π S f 2 / (20.5)Measuring acceleration in steps per second squared instead of in radians per second squared, this simplifies to:
Asteps = A / S = 8π f 2 / (20.5)Thus, for an ideal motor with a sinusoidal torque versus rotor position function, the maximum acceleration in steps per second squared is a trivial function of the resonant frequency of the motor and rigidly coupled load!
For a two-winding permanent-magnet or variable-reluctance motor, with an ideal sinusoidal torque-versus-position characteristic, the two-winding holding torque is a simple function of the single-winding holding torque:
h2 = 20.5 h1Where:
h1 -- single-winding holding torqueSubstituting this into the formula for resonant frequency, we can find the ratios of the resonant frequencies in these two operating modes:
h2 -- two-winding holding torque
f1 = ( h1 / ... )0.5This relationship only holds if the torque provided by the motor does not vary appreciably as the stepping rate varies between these two frequencies.
f2 = ( h2 / ... )0.5 = ( 20.5 h1 / ... )0.5 = 20.25 ( h1 / ... )0.5 = 20.25 f1 = 1.189... f1
In general, as will be discussed later, the available torque will tend to remain relatively constant up until some cutoff stepping rate, and then it will fall. Therefore, this relationship only holds if the resonant frequencies are below this cutoff stepping rate. At stepping rates above the cutoff rate, the two frequencies will be closer to each other!
If a rigidly mounted stepping motor is rigidly coupled to a frictionless load and then stepped at a frequency near the resonant frequency, energy will be pumped into the resonant system, and the result of this is that the motor will literally lose control. There are three basic ways to deal with this problem:
Use of elastomeric motor mounts or elastomeric couplings between motor and load can drain energy out of the resonant system, preventing energy from accumulating to the extent that it allows the motor rotor to escape from control.
Or, viscous damping can be used. Here, the damping will not only draw energy out of the resonant modes of the system, but it will also subtract from the total torque available at higher speeds. Magnetic eddy current damping is equivalent to viscous damping for our purposes.
Figure 2.8 illustrates the use of elastomeric couplings and viscous damping in two typical stepping motor applications, one using a lead screw to drive a load, and the other using a tendon drive:
Figure 2.8In Figure 2.8, elastomeric moter mounts are shown at a and elastomeric couplings between the motor and load are shown at b and c. The end bearing for the lead screw or tendon, at d, offers an opportunity for viscous damping, as do the ways on which the load slides, at e. Even the friction found in sealed ballbearings or teflon on steel ways can provide enough damping to prevent resonance problems.
A resonating motor rotor will induce an alternating current voltage in the motor windings. If some motor winding is not currently being driven, shorting this winding will impose a drag on the motor rotor that is exactly equivalent to using a magnetic eddy current damper.
If some motor winding is currently being driven, the AC voltage induced by the resonance will tend to modulate the current through the winding. Clamping the motor current with an external inductor will counteract the resonance. Schemes based on this idea are incorporated into some of the drive circuits illustrated in later sections of this tutorial.
The high level control system can avoid driving the motor at known resonant frequencies, accelerating and decelerating through these frequencies and never attempting sustained rotation at these speeds.
Recall that the resonant frequency of a motor in half-stepped mode will vary by up to 20% from one half-step to the next. As a result, half-stepping pumps energy into the resonant system less efficiently than full stepping. Furthermore, when operating near these resonant frequencies, the motor control system may preferentially use only the two-winding half steps when operating near the single-winding resonant frequency, and only the single-winding half steps when operating near the two-winding resonant frequency. Figure 2.9 illustrates this:
Figure 2.9The darkened curve in Figure 2.9 shows the operating torque achieved by a simple control scheme that delivers useful torque over a wide range of speeds despite the fact that the available torque drops to zero at each resonance in the system. This solution is particularly effective if the resonant frequencies are sharply defined and well separated. This will be the case in minimally damped systems operating well below the cutoff speed defined in the next section.
An important consideration in designing high-speed stepping motor controllers is the effect of the inductance of the motor windings. As with the torque versus angular position information, this is frequently poorly documented in motor data sheets, and indeed, for variable reluctance stepping motors, it is not a constant! The inductance of the motor winding determines the rise and fall time of the current through the windings. While we might hope for a square-wave plot of current versus time, the inductance forces an exponential, as illustrated in Figure 2.10:
Figure 2.10The details of the current-versus-time function through each winding depend as much on the drive circuitry as they do on the motor itself! It is quite common for the time constants of these exponentials to differ. The rise time is determined by the drive voltage and drive circuitry, while the fall time depends on the circuitry used to dissipate the stored energy in the motor winding.
At low stepping rates, the rise and fall times of the current through the motor windings has little effect on the motor's performance, but at higher speeds, the effect of the inductance of the motor windings is to reduce the available torque, as shown in Figure 2.11:
Figure 2.11The motor's maximum speed is defined as the speed at which the available torque falls to zero. Measuring maximum speed can be difficult when there are resonance problems, because these cause the torque to drop to zero prematurely. The cutoff speed is the speed above which the torque begins to fall. When the motor is operating below its cutoff speed, the rise and fall times of the current through the motor windings occupy an insignificant fraction of each step, while at the cutoff speed, the step duration is comparable to the sum of the rise and fall times. Note that a sharp cutoff is rare, and therefore, statements of a motor's cutoff speed are, of necessity, approximate.
The details of the torque versus speed relationship depend on the details of the rise and fall times in the motor windings, and these depend on the motor control system as well as the motor. Therefore, the cutoff speed and maximum speed for any particular motor depend, in part, on the control system! The torque versus speed curves published in motor data sheets occasionally come with documentation of the motor controller used to obtain that curve, but this is far from universal practice!
Similarly, the resonant speed depends on the moment of inertia of the entire rotating system, not just the motor rotor, and the extent to which the torque drops at resonance depends on the presence of mechanical damping and on the nature of the control system. Some published torque versus speed curves show very clear resonances without documenting the moment of inertia of the hardware that may have been attached to the motor shaft in order to make torque measurements.
The torque versus speed curve shown in Figure 2.11 is typical of the simplest of control systems. More complex control systems sometimes introduce electronic resonances that act to increase the available torque above the motor's low-speed torque. A common result of this is a peak in the available torque near the cutoff speed.
In a permanent magnet or hybrid stepping motor, the magnetic field of the motor rotor changes with changes in shaft angle. The result of this is that turning the motor rotor induces an AC voltage in each motor winding. This is referred to as the counter EMF because the voltage induced in each motor winding is always in phase with and counter to the ideal waveform required to turn the motor in the same direction. Both the frequency and amplitude of the counter EMF increase with rotor speed, and therefore, counter EMF contributes to the decline in torque with increased stepping rate.
Variable reluctance stepping motors also induce counter EMF! This is because, as the stator winding pulls a tooth of the rotor towards its equilibrium position, the reluctance of the magnetic circuit declines. This decline increases the inductance of the stator winding, and this change in inductance demands a decrease in the current through the winding in order to conserve energy. This decrease is evidenced as a counter EMF.
The reactance (inductance and resistance) of the motor windings limits the current flowing through them. Thus, by ohms law, increasing the voltage will increase the current, and therefore increase the available torque. The increased voltage also serves to overcome the counter EMF induced in the motor windings, but the voltage cannot be increased arbitrarily! Thermal, magnetic and electronic considerations all serve to limit the useful torque that a motor can produce.
The heat given off by the motor windings is due to both simple resistive losses, eddy current losses, and hysteresis losses. If this heat is not conducted away from the motor adequately, the motor windings will overheat. The simplest failure this can cause is insulation breakdown, but it can also heat a permanent magnet rotor to above its curie temperature, the temperature at which permanent magnets lose their magnetization. This is a particular risk with many modern high strength magnetic alloys.
Even if the motor is attached to an adequate heat sink, increased drive voltage will not necessarily lead to increased torque. Most motors are designed so that, with the rated current flowing through the windings, the magnetic circuits of the motor are near saturation. Increased current will not lead to an appreciably increased magnetic field in such a motor!
Given a drive system that limits the current through each motor winding to the rated maximum for that winding, but uses high voltages to achieve a higher cutoff torque and higher torques above cutoff, there are other limits that come into play. At high speeds, the motor windings must, of necessity, carry high frequency AC signals. This leads to eddy current losses in the magnetic circuits of the motor, and it leads to skin effect losses in the motor windings.
Motors designed for very high speed running should, therefore, have magnetic structures using very thin laminations or even nonconductive ferrite materials, and they should have small gauge wire in their windings to minimize skin effect losses. Common high torque motors have large-gauge motor windings and coarse core laminations, and at high speeds, such motors can easily overheat and should therefore be derated accordingly for high speed running!
It is also worth noting that the best way to demagnetize something is to expose it to a high frequency-high amplitude magnetic field. Running the control system to spin the rotor at high speed when the rotor is actually stalled, or spinning the rotor at high speed against a control system trying to hold the rotor in a fixed position will both expose the rotor to a high amplitude high-frequency field. If such operating conditions are common, particularly if the motor is run near the curie temperature of the permanent magnets, demagnetization is a serious risk and the field strengths (and expected torques) should be reduced accordingly! | http://homepage.divms.uiowa.edu/~jones/step/physics.html | 13 |
54 | Statistics and data analysis procedures can broadly be split into two parts: quantitative techniques and graphical techniques. Statistics is a mathematical science pertaining to the collection analysis interpretation or explanation and presentation of Data. Data analysis is the process of looking at and summarizing Data with the intent to extract useful Information and develop conclusions Quantitative techniques are the set of statistical procedures that yield numeric or tabular output. Examples of quantitative techniques include hypothesis testing, analysis of variance, point estimation, confidence intervals, and least squares regression. A statistical hypothesis test is a method of making statistical decisions using experimental data In Statistics, ANOVA is short for analysis of variance Analysis of variance is a collection of Statistical models and their associated procedures in which the observed In Statistics, point estimation involves the use of sample Data to calculate a single value (known as a Statistic) which is to serve as a "best In Statistics, a confidence interval (CI is an interval estimate of a Population parameter. In statistics linear regression is a form of Regression analysis in which the relationship between one or more Independent variables and another variable called These and similar techniques are all valuable and are mainstream in terms of classical analysis.
On the other hand, there is a large collection of statistical tools that we generally refer to as graphical techniques. These include: scatter plots, histograms, probability plots, residual plots, box plots, block plots, and biplots. A scatter graph or scatter plot is a type of Display using Cartesian coordinates to display values for two Variables for a set of data In Statistics, a histogram is a Graphical display of tabulated frequencies, shown as Bars It shows what proportion of cases fall into each of The probability plot is a Graphical technique for assessing whether or not a Data set follows a given distribution such as the normal or Weibull In Descriptive statistics, a boxplot (also known as a box-and-whisker diagram or plot) is a convenient way of graphically depicting groups of numerical Biplots are a type of graph used in Statistics. A biplot allows information on both Samples and Variables of a data matrix to be displayed Exploratory data analysis (EDA) relies heavily on these and similar graphical techniques. Exploratory data analysis (EDA is an approach to analyzing data for the purpose of formulating hypotheses worth testing complementing the tools of conventional Graphical procedures are not just tools used in an EDA context; such graphical tools are the shortest path to gaining insight into a data set in terms of testing assumptions, model selection and statistical model validation, estimator selection, relationship identification, factor effect determination, and outlier detection. Model selection is the task of selecting a Statistical model from a set of potential models given data Model validation is possibly the most important step in the model building sequence In Statistics, an outlier is an observation that is numerically distant from the rest of the data. In addition, good statistical graphics can provide a convincing means of communicating the underlying message that is present in the data to others.
Famous graphics were designed by William Playfair who published what could be called the first pie chart and the well known diagram that depicts the evolution of England's imports and exports. William Playfair ( Sept 22, 1759 – Feb 11, 1823) was a Scottish Engineer and Political economist, who is considered the founder of A pie chart (or a circle graph) is a circular Chart divided into sectors illustrating relative magnitudes or frequencies or percents England is a Country which is part of the United Kingdom. Its inhabitants account for more than 83% of the total UK population whilst its mainland Other notorious graph makers were Florence Nightingale who used statistical graphics to persuade the British Government to improve army hygiene, John Snow (physician) who plotted deaths from cholera in London in 1854 to detect the source of the disease, and Charles Joseph Minard who designed a large portfolio of maps of which the one depicting Napoleon's campaign in Russia is the best known. Florence Nightingale, OM, RRC (in her own pronunciation ˈflɒɾəns ˈnaɪtɪŋgeɪl 12 May 1820 – 13 August 1910 who came to be known as "The John Snow ( 15 March 1813 &ndash 16 June 1858) was a British physician and a leader in the adoption of Anaesthesia and medical Cholera, sometimes known as Asiatic cholera or epidemic cholera, is an infectious Gastroenteritis caused by the Bacterium London ( ˈlʌndən is the capital and largest urban area in the United Kingdom. Year 1854 ( MDCCCLIV) was a Common year starting on Sunday (link will display the full calendar of the Gregorian Calendar (or a Common year Charles Joseph Minard (27 March 1781 in Dijon &ndash 24 October 1870 in Bordeaux) was a French Civil engineer noted for his inventions in the field Napoleon Bonaparte (15 August 1769 – 5 May 1821 was a French military and political leader who had a significant impact on the History of Europe. Russia (Россия Rossiya) or the Russian Federation ( Rossiyskaya Federatsiya) is a transcontinental Country extending A special type of statistical graphic are the so called isotypes. In Graphic design and Sociology, an Isotype is a system of Pictograms designed by the Austrian educator and philosopher Otto Neurath These are graphical tools designed by Otto Neurath with the specific purpose of achieving changes in society through visual education of the masses. Otto Neurath (1882 - 1945 was an Austrian philosopher of science, sociologist, and political economist.
If one is not using statistical graphics, then one is forfeiting insight into one or more aspects of the underlying structure of the data.
This article incorporates text from a public domain publication of the National Institute of Standards and Technology, a U. The public domain is a range of abstract materials &ndash commonly referred to as Intellectual property &ndash which are not owned or controlled by anyone S. government agency. | http://www.citizendia.org/Statistical_graphics | 13 |
55 | Before we get into actually solving partial differential equations and before we even start discussing the method of separation of variables we want to spend a little bit of time talking about the two main partial differential equations that we’ll be solving later on in the chapter. We’ll look at the first one in this section and the second one in the next section.
The first partial differential equation that we’ll be looking at once we get started with solving will be the heat equation, which governs the temperature distribution in an object. We are going to give several forms of the heat equation for reference purposes, but we will only be really solving one of them.
We will start out by considering the temperature in a 1-D bar of length L. What this means is that we are going to assume that the bar starts off at x = 0 and ends when we reach x = L. We are also going to assume that at any location, x, the temperature will be constant at every point in the cross section at that x. In other words, temperature will only vary in x and we can hence consider the bar to be a 1-D bar. Note that with this assumption the actual shape of the cross section (i.e. circular, rectangular, etc.) doesn’t matter.
Note that the 1-D assumption is actually not as bad an assumption as it might seem at first glance. If we assume that the lateral surface of the bar is perfectly insulated (i.e. no heat can flow through the lateral surface) then the only way heat can enter or leave the bar is at either end. This means that heat can only flow from left to right or right to left, thus creating a 1-D temperature distribution.
The assumption of the lateral surfaces being perfectly
insulated is of course impossible, but it is possible to put enough insulation
on the lateral surfaces that there will be very little heat flow through them
and so, at least for a time, we can consider the lateral surfaces to be perfectly insulated.
Okay, let’s now get some definitions out of the way before
we write down the first form of the heat equation.

u(x,t) = the temperature at point x and time t
c(x) = the specific heat
ρ(x) = the mass density
φ(x,t) = the heat flux
Q(x,t) = the heat energy generated per unit volume per unit time by external sources
We should probably make a couple of comments about some of
these quantities before proceeding.
The specific heat, c(x),
of a material is the amount of heat energy that it takes to raise one unit of
mass of the material by one unit of temperature. As indicated we are going to assume, at least
initially, that the specific heat may not be uniform throughout the bar. Note as well that in practice the specific
heat depends upon the temperature.
However, this will generally only be an issue for large temperature
differences (which in turn depends on the material the bar is made out of) and
so we’re going to assume for the purposes of this discussion that the
temperature differences are not large enough to affect our solution.
The mass density, ρ(x),
is the mass per unit volume of the material.
As with the specific heat we’re going to initially assume that the mass
density may not be uniform throughout the bar.
The heat flux, φ(x,t),
is the amount of thermal energy that flows to the right per unit surface area
per unit time. The “flows to the right”
bit simply tells us that if φ(x,t) > 0 for some x
and t then the heat is flowing to the
right at that point and time. Likewise
if φ(x,t) < 0 then the heat will be flowing to the left at
that point and time.
The final quantity we defined above is Q(x,t)
and this is used to represent any external
sources or sinks (i.e. heat energy
taken out of the system) of heat energy.
If Q(x,t) > 0 then heat energy is being added to the system
at that location and time and if Q(x,t) < 0 then heat energy is being removed from the
system at that location and time.
With these quantities the heat equation is,

c(x) ρ(x) ∂u/∂t = -∂φ/∂x + Q(x,t)          (1)
While this is a nice form of the heat equation it is not
actually something we can solve. In this
form there are two unknown functions, the temperature u and the heat flux φ,
and so we need to get rid of one of them.
With Fourier’s law we can
easily remove the heat flux from this equation.
Fourier's law states that,

φ(x,t) = -K0(x) ∂u/∂x

where K0(x) is the thermal
conductivity of the material and measures the ability of a given material to
conduct heat. The better a material can
conduct heat the larger K0(x) will be.
As noted the thermal conductivity can vary with the location in the
bar. Also, much like the specific heat
the thermal conductivity can vary with temperature, but we will assume that the
total temperature change is not so great that this will be an issue and so we
will assume for the purposes here that the thermal conductivity will not vary with temperature.
Fourier’s law does a very good job of modeling what we know
to be true about heat flow. First, we
know that if the temperature in a region is constant, i.e. ∂u/∂x = 0,
then there is no heat flow.
Next, we know that if there is a temperature difference in a
region we know the heat will flow from the hot portion to the cold portion of
the region. For example, if it is hotter
to the right then we know that the heat should flow to the left. When it is hotter to the right then we also
know that ∂u/∂x > 0 (i.e.
the temperature increases as we move to the right) and so we'll have φ < 0 and so the heat will flow to the left as it
should. Likewise, if ∂u/∂x < 0 (i.e.
it is hotter to the left) then we'll have φ > 0 and heat will flow to the right as it should.
Finally, the greater the temperature difference in a region
(i.e. the larger ∂u/∂x is) then the greater the heat flow.
So, if we plug Fourier’s law into (1), we
get the following form of the heat equation,

c(x) ρ(x) ∂u/∂t = ∂/∂x ( K0(x) ∂u/∂x ) + Q(x,t)          (2)
Note that we factored the minus sign out of the derivative
to cancel against the minus sign that was already there. We cannot however, factor the thermal
conductivity out of the derivative since it is a function of x and the derivative is with respect to x.
Solving (2) is quite difficult due to
the non uniform nature of the thermal properties and the mass density. So, let’s now assume that these properties
are all constant, i.e.,

c(x) = c          ρ(x) = ρ          K0(x) = K0

where c, ρ and K0 are now all fixed quantities. In this case we generally say that the
material in the bar is uniform. Under these assumptions the heat equation becomes,

c ρ ∂u/∂t = K0 ∂²u/∂x² + Q(x,t)
For a final simplification to the heat equation let’s divide
both sides by cρ and define the thermal diffusivity to be,

k = K0 / (cρ)

The heat equation is then,

∂u/∂t = k ∂²u/∂x² + Q(x,t) / (cρ)
To most people this is what they mean when they talk about
the heat equation and in fact it will be the equation that we’ll be
solving. Well, actually we’ll be solving
with no external sources, i.e. ,
but we’ll be considering this form when we start discussing separation of
variables in a couple of sections. We’ll
only drop the sources term when we actually start solving the heat equation.
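To make the final form concrete, here is a minimal numerical sketch (my own, not part of the original notes) that marches ∂u/∂t = k ∂²u/∂x² forward in time with an explicit finite-difference scheme. The bar length, diffusivity, grid, time step and initial profile are all invented illustration values, and the prescribed-temperature boundary values are simply held at zero.

```python
import numpy as np

# Explicit (FTCS) finite-difference sketch for u_t = k u_xx on 0 <= x <= L.
# All parameter values below are illustrative assumptions, not from the notes.
L, k = 1.0, 0.5          # bar length and thermal diffusivity
nx, nt = 51, 2000        # number of grid points and time steps
dx = L / (nx - 1)
dt = 0.4 * dx**2 / k     # keep k*dt/dx^2 <= 1/2 for stability of the explicit scheme

x = np.linspace(0.0, L, nx)
u = np.sin(np.pi * x)    # assumed initial temperature distribution f(x)

for _ in range(nt):
    u_new = u.copy()
    # interior points: centered second difference for u_xx
    u_new[1:-1] = u[1:-1] + k * dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    # prescribed temperature (Dirichlet) boundary conditions u(0,t) = u(L,t) = 0
    u_new[0] = 0.0
    u_new[-1] = 0.0
    u = u_new

print(u.max())   # the peak temperature decays roughly like exp(-k pi^2 t / L^2)
```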
Now that we’ve got the 1-D heat equation taken care of we
need to move into the initial and boundary conditions we’ll also need in order
to solve the problem. If you go back to
any of our solutions of ordinary differential equations that we’ve done in
previous sections you can see that the number of conditions required always
matched the highest order of the derivative in the equation.
In partial differential equations the same idea holds except
now we have to pay attention to the variable we’re differentiating with respect
to as well. So, for the heat equation
we’ve got a first order time derivative and so we’ll need one initial condition
and a second order spatial derivative and so we’ll need two boundary
The initial condition that we'll use here is,

u(x,0) = f(x)
and we don’t really need to say much about it here other
than to note that this just tells us what the initial temperature distribution
in the bar is.
The boundary conditions will tell us something about what
the temperature and/or heat flow is doing at the boundaries of the bar. There are four of them that are fairly common
The first type of boundary conditions that we can have would
be the prescribed temperature
boundary conditions, also called Dirichlet
conditions. The prescribed
temperature boundary conditions are,

u(0,t) = g1(t)          u(L,t) = g2(t)
The next type of boundary conditions are prescribed heat flux, also called Neumann conditions. Using Fourier's law these can be written as,

-K0(0) ∂u/∂x (0,t) = φ1(t)          -K0(L) ∂u/∂x (L,t) = φ2(t)
If either of the
boundaries are perfectly insulated, i.e. there is no heat flow out of them
then these boundary conditions reduce to,

∂u/∂x (0,t) = 0          ∂u/∂x (L,t) = 0
and note that we will often just call these particular
boundary conditions insulated
boundaries and drop the “perfectly” part.
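As an aside, insulated boundaries are just as easy to handle in a numerical sketch as prescribed temperatures: a common trick is to use reflected "ghost" values so that the discrete flux through each end vanishes. The helper below is my own illustration (same grid and notation as the earlier sketch) and is not part of the notes.

```python
import numpy as np

def step_insulated(u, k, dt, dx):
    """One explicit time step of u_t = k u_xx with insulated ends (u_x = 0)."""
    u_new = u.copy()
    # interior points: centered second difference
    u_new[1:-1] = u[1:-1] + k * dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    # reflected ghost points make the discrete heat flux through each end zero
    u_new[0] = u[0] + k * dt / dx**2 * 2.0 * (u[1] - u[0])
    u_new[-1] = u[-1] + k * dt / dx**2 * 2.0 * (u[-2] - u[-1])
    return u_new
```

A handy sanity check on this variant is that, with insulated ends and no sources, the total heat in the bar stays (nearly) constant from step to step.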
The third type of boundary conditions use Newton’s law of cooling and are
sometimes called Robin conditions. These are usually used when the bar is in a
moving fluid and note we can consider air to be a fluid for this purpose.
Here are the equations for this kind of boundary condition.

-K0(0) ∂u/∂x (0,t) = -H [ u(0,t) - g1(t) ]          -K0(L) ∂u/∂x (L,t) = H [ u(L,t) - g2(t) ]
where H is a
positive quantity that is experimentally determined and g1(t) and g2(t) give the temperature of the surrounding fluid
at the respective boundaries.
Note that the two conditions do vary slightly depending on
which boundary we are at. At x = 0 we have a minus sign on the right side while
we don't at x = L. To see why this is let's first assume that at
x = 0 we have u(0,t) > g1(t). In other words the bar is hotter than the
surrounding fluid and so at x = 0 the heat flow (as given by the left side of
the equation) must be to the left, or negative since the heat will flow from
the hotter bar into the cooler surrounding liquid. If the heat flow is negative then we need to have
a minus sign on the right side of the equation to make sure that it has the
correct sign.
If the bar is cooler than the surrounding fluid at x = 0,
i.e. u(0,t) < g1(t), we can make a similar argument to justify the
minus sign. We'll leave it to you to
verify this.
If we now look at the other end, x = L,
and again assume that the bar is hotter than the surrounding fluid or, u(L,t) > g2(t). In this case the heat flow must be to the
right, or be positive, and so in this case we can't have a minus sign. Finally, we'll again leave it to you to
verify that we can't have the minus sign at x = L if the bar is cooler than the surrounding
fluid as well.
Note that we are not actually going to be looking at any of
these kinds of boundary conditions here.
These types of boundary conditions tend to lead to boundary value
problems such as Example 5 in
the Eigenvalues and Eigenfunctions section of the previous chapter. As we saw in that example it is often very
difficult to get our hands on the eigenvalues and as we’ll eventually see we
will need them.
It is important to note at this point that we can also mix
and match these boundary conditions so to speak. There is nothing wrong with having a
prescribed temperature at one boundary and a prescribed flux at the other boundary
for example so don’t always expect the same boundary condition to show up at
both ends. This warning is more
important than it might seem at this point because once we get into solving the
heat equation we are going to have the same kind of condition on each end to
simplify the problem somewhat.
The final type of boundary conditions that we’ll need here
are periodic boundary
conditions. Periodic boundary conditions are,

u(-L,t) = u(L,t)          ∂u/∂x (-L,t) = ∂u/∂x (L,t)
Note that for these kinds of boundary conditions the left
boundary tends to be x = -L instead of x = 0 as we were using in the previous types of
boundary conditions. The periodic
boundary conditions will arise very naturally from a couple of particular
geometries that we’ll be looking at down the road.
We will now close out this section with a quick look at the
2-D and 3-D version of the heat equation.
However, before we jump into that we need to introduce a little bit of notation.
The del operator
is defined to be,

∇ = ( ∂/∂x , ∂/∂y )          or          ∇ = ( ∂/∂x , ∂/∂y , ∂/∂z )
depending on whether we are in 2 or 3 dimensions. Think of the del operator as a function that
takes functions as arguments (instead of numbers as we’re used to). Whatever function we “plug” into the operator
gets put into the partial derivatives.
So, for example in 3-D we would have,

∇f = ( ∂f/∂x , ∂f/∂y , ∂f/∂z )

This of course is also the gradient of the function f.
The del operator also allows us to quickly write down the
divergence of a function. So, again
using 3-D as an example the divergence of a vector function v = (v1, v2, v3) can be written as the dot product of the del
operator and the function. Or,

div v = ∇ · v = ∂v1/∂x + ∂v2/∂y + ∂v3/∂z
Finally, we will also see the following show up in our work,

∇ · ( ∇f ) = ∂²f/∂x² + ∂²f/∂y² + ∂²f/∂z²

This is usually denoted as,

∇²f = ∂²f/∂x² + ∂²f/∂y² + ∂²f/∂z²

and is called the Laplacian. The 2-D version of course simply doesn't have
the third term.
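As a quick illustration of the Laplacian (my own sketch, not part of the notes), the five-point finite-difference stencil applied to f(x,y) = x² + y², whose Laplacian is exactly 4, reproduces that value on a uniform grid.

```python
import numpy as np

# Five-point stencil approximation of the 2-D Laplacian on a uniform grid.
h = 0.01
x = np.arange(-1.0, 1.0 + h, h)
X, Y = np.meshgrid(x, x, indexing="ij")
f = X**2 + Y**2                       # del^2 f = 2 + 2 = 4 exactly

lap = (f[2:, 1:-1] + f[:-2, 1:-1] + f[1:-1, 2:] + f[1:-1, :-2]
       - 4.0 * f[1:-1, 1:-1]) / h**2

print(lap.min(), lap.max())           # both very close to 4
```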
Okay, we can now look into the 2-D and 3-D version of the heat
equation and wherever the del operator and/or Laplacian appears assume that
it is the appropriate dimensional version.
The higher dimensional version of (1) is,

c ρ ∂u/∂t = -∇ · φ + Q          (5)
and note that the specific heat, c, and mass density, ρ,
may not be uniform and so may be functions of the spatial variables. Likewise, the external sources term, Q, may also be a function of both the
spatial variables and time.
Next, the higher dimensional version of Fourier's law is,

φ = -K0 ∇u

where the thermal conductivity, K0,
is again assumed to be a function of the spatial variables.
If we plug this into (5) we
get the heat equation for a non-uniform bar (i.e. the thermal properties may be functions of the spatial
variables) with external sources/sinks,

c ρ ∂u/∂t = ∇ · ( K0 ∇u ) + Q
If we now assume that the
specific heat, mass density and thermal conductivity are constant (i.e. the bar is uniform) the heat equation becomes,

∂u/∂t = k ∇²u + Q / (cρ)          (7)

where we divided both sides by cρ to get the thermal diffusivity, k, in front of the Laplacian.
The initial condition for the 2-D or 3-D heat equation is,

u(x,y,0) = f(x,y)          or          u(x,y,z,0) = f(x,y,z)
depending upon the dimension we’re in.
The prescribed temperature boundary condition becomes,

u(x,y,t) = T(x,y,t)          or          u(x,y,z,t) = T(x,y,z,t)

where (x,y) or (x,y,z),
depending upon the dimension we’re in, will range over the portion of the
boundary in which we are prescribing the temperature.
The prescribed heat flux condition becomes,

-K0 ∇u · n = φ(x,y,z,t)

where the left side is only being evaluated at points along
the boundary and n is the outward unit normal on the surface.
Newton's law of cooling will become,

-K0 ∇u · n = H ( u - uB )

where H is a
positive quantity that is experimentally determined, uB is the temperature of the fluid at the
boundary and again it is assumed that this is only being evaluated at points
along the boundary.
We don’t have periodic boundary conditions here as they will
only arise from specific 1-D geometries.
We should probably also acknowledge at this point that we’ll
not actually be solving (7) at any point, but we will
be solving a special case of it in the Laplace's Equation section. | http://tutorial.math.lamar.edu/Classes/DE/TheHeatEquation.aspx | 13
106 | From Citizendium, the Citizens' Compendium
In mathematics, an ellipse is a planar locus of points characterized by having a constant sum of distances to two given fixed points in the plane. In figure 1, the two fixed points are F1 and F2, these are the foci of the ellipse. Consider an arbitrary point P1 on the ellipse that has distance F1P1 to F1 and distance F2P1 to F2, and let d be the sum of distances of P1 to the foci,
then for all points of the ellipse the sum of distances is also d. Thus, for another arbitrary point P2 on the ellipse with distance F1P2 to F1 and distance F2P2 to F2, by definition, the sum of distances of P2 to the foci is equal to d,
The horizontal line segment between S1 and S2 in figure 1, going through the foci, is known as the major axis of the ellipse. Traditionally, the length of the major axis is indicated by 2a. The vertical dashed line segment, drawn halfway between the foci and perpendicular to the major axis, is referred to as the minor axis of the ellipse; its length is usually indicated by 2b. The major and the minor axis are distinguished by a ≥ b.
Clearly both ellipse axes are symmetry axes, reflection about either of them transforms the ellipse into itself. Basically, this is a consequence of the fact that reflection preserves (sums of) distances. The intersection of the axes is the center of the ellipse.
The two foci and the points S1 and S2 are connected by reflection about the minor axis. Hence the distance S2F2 =: p is, by symmetry, equal to the distance S1F1. The distance of S2 to F1 is equal to 2a − p. By the definition of the ellipse their sum is equal to d, hence d = p + (2a − p) = 2a:
The sum d of distances from any point on the ellipse to the foci is equal to the length of the major axis.
Special cases. There are two extreme cases:
(a) The first occurs when the two foci coincide. Then a = b and the ellipse is a circle — a special case of an ellipse — and the coinciding foci are the center of the circle. If, in addition, d = 0 then the circle degenerates to a point. (In a circle, any diameter can be chosen as the major axis or as the minor axis.)
(b) The second extreme case occurs when the distance of the foci equals d. Then b = 0 and the ellipse degenerates to the line segment bounded by the foci.
(Remark: Usually, in common language, these extreme cases are not referred to as an ellipse because "circle" (or "point") and "line segment" describe them better, but in mathematics they are included because they satisfy the definition.)
In the work of the Greek mathematician Apollonius (c. 262–190 BC) the ellipse arose as the intersection of a plane with a cone. Apollonius gave the ellipse its name, though the term ἔλλειψις (elleipsis, meaning "falling short") was used earlier by Euclid (c. 300 BC) in the construction of parallelograms with areas that "fell short". Apollonius applied the word to the conic section that at present we call ellipse. See Ref. for the—in modern eyes—complicated reasoning by which Apollonius tied the shape of certain conic sections to Euclid's concept of deficient areas.
In figure 2 a cone with a circular base is shown. It has a vertical symmetry axis, an axis of revolution. A cone can be generated by revolving around the axis a line that intersects the axis of rotation under an angle α (strictly between 0 and 90 degree). A horizontal plane (plane perpendicular to the axis of the cone) — that does not contain the vertex — intersects the cone in a circle (a special ellipse). A plane that intersects the axis in an angle greater than α intersects the cone in an ellipse. (Otherwise, the intersection is either a parabola or a hyperbola.) If the plane contains the vertex, the ellipse degenerates to a point; if the plane is perpendicular to the axis the ellipse is a circle.
The eccentricity e of an ellipse (usually denoted by e or ε) is the ratio of the distance OF2 (cf. figure 3) to the length a (half the major axis), that is, e := OF2 / a. Let be a vector of length a along the x-axis, then
The following two vectors have common endpoint at P, see figure 3,
Now choose P as the intersection P1 of the positive y-axis with the ellipse; then its position vector is:
By symmetry, the distance of this point P1 to either focus is equal, thus the length of the corresponding vector (with endpoint on the y-axis) is equal to the length a of the semi-major axis. For the following two inner products (indicated by a centered dot) we find,
Hence, a² = b² + (ea)² (in fact the Pythagoras theorem applied to the triangle P1OF2),
so that the eccentricity is given by

e = √(a² − b²) / a = √(1 − b²/a²).
Remark: The two extreme values for the eccentricity correspond to the extreme forms of an ellipse: The value 0 corresponds to the circle, the value 1 to the line segment.
Consider an ellipse that is located with respect to a Cartesian frame as in figure 3 (a ≥ b > 0, major axis on x-axis, minor axis on y-axis). Then:
(Canonical equation of an ellipse) A point P=(x,y) is a point of the ellipse if and only if

x²/a² + y²/b² = 1.
Note that for a = b this is the equation of a circle. An ellipse may be seen as a unit circle in which the x and the y coordinates are scaled independently, by 1/a and 1/b, respectively. (An ellipse degenerated to a line segment cannot be described with such an equation.)
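Before working through the proof below, here is a small numerical sanity check (my own sketch, with made-up semi-axes): points of the form (a cos t, b sin t) satisfy the canonical equation and have distance sum 2a to the foci (±√(a² − b²), 0).

```python
import numpy as np

a, b = 5.0, 3.0                        # assumed semi-axes, a >= b
c = np.sqrt(a**2 - b**2)               # distance of each focus from the center

t = np.linspace(0.0, 2.0 * np.pi, 1000)
x, y = a * np.cos(t), b * np.sin(t)    # points on the ellipse

on_curve = x**2 / a**2 + y**2 / b**2                   # should be 1 everywhere
dist_sum = np.hypot(x - c, y) + np.hypot(x + c, y)     # should be 2a everywhere

print(np.allclose(on_curve, 1.0), np.allclose(dist_sum, 2.0 * a))   # True True
```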
Part 1: We first consider an arbitrary point P of the ellipse. Introduce the vectors
By definition of ellipse, the sum of the lengths is 2a
Multiplying equation (1) by
(the first coordinate of the vector ) we obtain
By adding and subtracting equations (1) and (2) we find expressions for the distance of P to the foci,
Squaring both equations
adding them, substituting the earlier derived value for e2, and reworking gives
Division by b2 finally gives
Part 2: Conversely, for any point P whose coordinates x and y satisfy this equation, the sum of its distances from the foci is equal to 2a.
To show this we calculate
and substitute for f and
After an analogous calculation for F2 we get (note that because and )
Second degree equation
The algebraic form of the previous section describes an ellipse in a special position. Rotation and translation transforms it into an equation of second degree in x and y:
(all variables are real). Such an equation always describes a conic section.
It represents a non-degenerate ellipse (minor axis not 0) if and only if the following conditions are satisfied:
- or, equivalently,
where t1 and t2 are defined as the solutions of the following system of linear equations:
(These equations have a unique solution since, by the first condition, the determinant AC − B2 ≠ 0.)
We now switch to matrix-vector notation and write f(x,y) as
The superscript T stands for transposition (row vector becomes column vector and vice versa).
We first show that the conditions are sufficient:
Since, by assumption, the determinant det(Q) = AC−B2 ≠ 0, the matrix Q is invertible. With the help of the inverse Q−1 the equation for f can be rewritten to
Note that this uses
i.e., that both the matrix Q and its inverse are symmetric.
In the definition of t the minus sign is introduced to get the translation of the origin as depicted in figure 4.
Now we substitute r′ in the expression for f. (This corresponds to shifting the origin of the coordinate system to the center of the ellipse):
Thus, by translation of the origin over t the linear terms in f(r) have been eliminated, only two quadratic terms (in x′ := x−t1 and y′ := y−t2), one bilinear term, and one constant term (ft) appear in the equation for f. (The "price paid" for it is the requirement det(Q) ≠ 0.)
In the next step we rotate the coordinate system (around the origin in O') such that the coordinate axes coincide with the axes of the ellipse. This will eliminate the bilinear term and "decouple" x′ and y′, the components of r′.
Let us recall that any real symmetric matrix may be diagonalized by an orthogonal matrix. For the (2×2)-case:
where the last matrix on the right is the identity matrix I. Now
Switching back to a quadratic equation
we see that an ellipse is obtained if the parameters α1, α2, and ft are non-zero and if the signs of α1 and α2 are equal and opposite to the sign of ft.
It is known that the determinant of a matrix is invariant under similarity transformations, hence
and the signs of α1 and α2 are equal.
The trace A+C of the matrix is also invariant under similarity transformations. Thus
and we can apply the assumption
and conclude that in both cases the second order equation represents an ellipse. This shows that the conditions given are sufficient.
The conditions are also necessary:
In the coordinate system determined by its axes, the equation clearly satisfies the conditions, and — since determinant and trace are preserved — they stay satisfied if the system is rotated and shifted. Thus the conditions are necessary if the determinant is not equal to 0. In fact, it is necessary without this assumption on the determinant (see second-order curve).
Clearly, in order to determine a priori whether the quadratic equation represents an ellipse, it is not necessary to actually perform the diagonalization of Q. It is sufficient to check the condition and determine the sign of ft = f(t) by solving the equation given for the vector t.
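That recipe can be carried out mechanically. The sketch below is my own illustration and assumes the second degree equation is written as Ax² + 2Bxy + Cy² + 2Dx + 2Ey + F = 0 (the coefficient naming is an assumption, since the displayed equation did not survive extraction); it solves for the center t, evaluates f(t), and applies the sign conditions used in the argument above, namely AC − B² > 0 and f(t) of opposite sign to A + C.

```python
import numpy as np

def is_ellipse(A, B, C, D, E, F):
    """Test whether A x^2 + 2B xy + C y^2 + 2D x + 2E y + F = 0 is a
    non-degenerate ellipse, following the determinant/trace argument above."""
    Q = np.array([[A, B], [B, C]], dtype=float)
    s = np.array([D, E], dtype=float)
    if np.linalg.det(Q) <= 0.0:          # need AC - B^2 > 0
        return False
    t = -np.linalg.solve(Q, s)           # center of the conic
    ft = t @ Q @ t + 2.0 * s @ t + F     # value of f at the center
    return ft != 0.0 and (A + C) * ft < 0.0

# x^2/25 + y^2/9 = 1, shifted and lightly sheared, is still an ellipse:
print(is_ellipse(1/25, 0.01, 1/9, 0.1, -0.2, -1.0))   # True for these numbers
print(is_ellipse(1.0, 2.0, 1.0, 0.0, 0.0, -1.0))      # False: AC - B^2 < 0
```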
Polar representation relative to focus
The length g of a vector (cf. figure 5) from the focus F2 to an endpoint P on the ellipse
is given by the polar equation of an ellipse (with eccentricity less than 1)
where 2ℓ is known as the latus rectum (lit. erect side) of the ellipse; it is equal to 2g for θ = 90° (twice the length of the vector when it makes a right angle with the major axis).
Earlier [Eq. (3)] it was derived for the distance from the right focus F2 to P that
Expressing x from
and the polar equation for the ellipse follows.
Before drafting was done almost exclusively by the aid of computers, draftsmen used a simple device for drawing ellipses, a trammel. Basically, a trammel is a rigid bar of length a (semi-major axis). In the top drawing of figure 6 the bar is shown as a blue-red line segment bounded by a black and a blue bead. On this bar a segment of length b (semi-minor axis) is marked; this is the red segment on the bar. Two beads fixed to the rigid bar move back and forth along the x-axis and y-axis, respectively. The blue bead fixed at one end of the bar moves along the y-axis, the red bead, which marks the beginning of the red segment of length b, moves along the x-axis. The endpoint of the bar (the black bead in figure 6) moves along an ellipse with semi-major axis a and semi-minor axis b and typically has a pen fixed to it.
The fact that the trammel construction works is proved very easily, cf. the bottom drawing in figure 6: the endpoint of the bar has coordinates x = a cos θ and y = −b sin θ, where θ is the angle the bar makes with the x-axis, so that

x²/a² + y²/b² = cos²θ + sin²θ = 1,

which indeed is the equation for an ellipse.
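The construction translates directly into a few lines of code. The sketch below is only illustrative (the values of a and b are invented): for each angle of the bar it places the blue bead on the y-axis so that the red bead lands on the x-axis, and records the free end of the bar.

```python
import numpy as np

a, b = 4.0, 2.5        # assumed semi-major and semi-minor axes (a > b)
theta = np.linspace(0.0, 2.0 * np.pi, 400)   # angle of the bar with the x-axis
u = np.stack([np.cos(theta), -np.sin(theta)])   # unit vector along the bar

# Blue bead on the y-axis; the red bead, a distance a - b along the bar,
# must land on the x-axis, which fixes the height of the blue bead.
blue = np.stack([np.zeros_like(theta), (a - b) * np.sin(theta)])
red = blue + (a - b) * u               # check below: its y-component is 0
pen = blue + a * u                     # free end of the bar (the pen)

x, y = pen
print(np.allclose(red[1], 0.0),
      np.allclose(x**2 / a**2 + y**2 / b**2, 1.0))   # True True
```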
A device called a trammel point is used to guide a woodworking router in making elliptical cuts.
It is possible to construct an ellipse of given major and minor axes by the aid of a compass, a ruler, three thumbtacks, and a piece of string, see figure 8.
First draw the major axis AB, and then obtain with the compass its perpendicular bisector intersecting AB in the midpoint E. Along the bisector one measures off the length of the minor axis CD. Given that the distances CF and CG are the semi-major axis (AB/2), one can determine the foci by drawing an arc with the compass using C as center and AB/2 as radius. One now pins the thumbtacks in the foci and the point C and fixes a piece of string around the triangle FGC (i.e, its length equals the perimeter of the triangle). Removing the thumbtack at C, and keeping the string taut, one draws the ellipse by moving the pencil from C to A, D, B, and back to C.
Clearly this procedure can be used in the garden to create an elliptic lawn or flowerbed, which is why the procedure is sometimes referred to as the gardener's construction.
- ↑ The points S1 and S2 are the main vertices of the ellipse.
- ↑ The quantities a and b are referred to as semi-major and semi-minor axis, respectively. Note that, just as diameter of circle, semi-axis does not only refer to the line segment itself, but also to its length.
- ↑ The shortest distance of a focus to a point on the ellipse (= p, as can be seen from equation (3), for instance) is the periapsis of the ellipse; the longest distance, S1F2=S2F1=2a−p, is the apoapsis. These two (Greek) terms are mainly used in astronomy when orbits of planets are described.
- ↑ M. Kline, Mathematical Thought from Ancient to Modern Times, Oxford UP, New York (1972)
Figures 7 and 8 are from George Watson Kittredge, The New Metal Worker Pattern Book, David Williams Company, New York, (1901) Online | http://en.citizendium.org/wiki/Ellipse | 13 |
143 | Differential geometry can be used in the design of springs
Differential geometry is the investigation and characterization of the local properties of curves and surfaces in space; that is, in the neighbourhood of a point on the curve or surface. It can be extended to the properties of space itself, but we shall not go that far. We'll study curves in Euclidean three-dimensional space, which are characterized by curvature and torsion, and illustrate the less familiar property, torsion, by appeal to helical springs. Then, we shall investigate the curvature of surfaces. All these things will be done by relating these properties to the derivatives of the parametric equations specifying the curves or surfaces.
Another article treats curves in a plane, which can be described by introducing a rectangular coordinate system and using the parametric equations x = x(t) and y = y(t). First, let's review the properties of a plane curve. A tangent vector to the curve is (dx,dy) = (x'dt,y'dt). Here, we represent a vector by its components within parentheses, separated by commas. Later, we shall also use the same notation for the scalar product of the two vectors between the parentheses. The element of arc length ds = √(x'2 + y'2)dt. This can be integrated, at least in principle, to give s as a function of t. Then, we can express the curve with arc length as parameter: x = x(s), y = y(s). This so greatly simplifies the algebra that we shall assume the curve is specified in this way in what follows. The vector a = dr/ds = (x',y') is a unit vector, since x'2 + y'2 = (dx/ds)2 + (dy/ds)2 = (ds/ds)2 = 1, and is called the tangent unit vector. The change in a with a movement ds along the curve is (x"ds,y"ds). Since the change in a unit vector is perpendicular to the vector and corresponds to a rotation of dθ, dθ/ds = √(x"2 + y"2). The unit vector b = r"/|r"| is perpendicular to a, and is called the normal vector. Indeed, (x',y')·(x",y") = x'x" + y'y" = (1/2)(d/ds)(x'2 + y'2) = (1/2)(d/ds)(1) = 0. We can write the derivative dθ/ds = |b| = dθ/Rdθ) = 1/R. The length R defined in this equation is the radius of curvature, and its reciprocal, 1/R is the curvature itself. The normal vector b points toward the centre of curvature. For a straight line, x" = y" = 0, so that R = ∞ and the curvature is zero, as expected.
As a concrete example, consider a circle of radius b and centre at the origin. This circle can be described by x = b cos 2πt, y = b sin 2πt, where b is the radius and the parameter t is the number of revolutions around the circle. Then, dx/dt = -2πb sin 2πt, dy/dt = 2πb cos 2πt, so (dx/dt)2 + (dy/dt)2 = (2πb)2 and ds = 2πb dt, or s = 2πbt, which is no surprise. Now we can describe the circle by x = b cos(s/b), y = b sin(s/b). Clearly, x'2 + y'2 = 1, and the tangent unit vector is [-sin(s/b),cos(s/b)], which is clearly a unit vector and perpendicular to the radius vector. The second derivatives are x" = (-1/b)cos(s/b), y" = (-1/b)sin(s/b). The length of this vector is 1/b, and b = [-cos(s/b),-sin(s/b)], which is directed towards the origin. 1/b = 1/R, so the radius of curvature is b, again no surprise.
Let us now go into curves in three dimensions. We now have three parametric equations, x = x(t), y = y(t), z = z(t). The arc length ds = √[(dx/dt)2 + (dy/dt)2 + (dz/dt)2]dt. If this can be integrated, we can express the curve with arc length as parameter, x = x(s), y = y(s), z = z(s). Then, a = dr/ds = (x',y',z') is the tangent vector. The plane perpendicular to this vector is called the normal plane. The curve is represented by a point in this plane, and passes through it perpendicularly. The vector (x",y",z") = r" is perpendicular to the tangential unit vector, which can easily be seen by dotting it into r'. A tangent plane is any plane containing the tangent. The tangent plane that contains r" is locally equivalent to the plane of a plane curve, and is called the osculating plane. In the neighborhood of the point under consideration, the curve lies approximately in this plane. The unit vector b = r"/|r"| is called the normal unit vector, and lies in the osculating plane. The unit vector c = a x b is perpendicular to the osculating plane, and lies in the normal plane. It is called the binormal. The plane perpendicular to the normal unit vector is called the rectifying plane, and c lies in this plane.
We now have an orthonormal triad of unit vectors, the tangent, normal and binormal, and three mutually perpendicular planes, the osculating plane, the rectifying plane and the normal plane, at every point of the curve. The curvature 1/R can be defined exactly as for plane curves, 1/R = |r"|, so that it reduces to the desired form for plane curves. For space curves, we have the possibility that the osculating plane varies in orientation. We define the torsion as the change in angle of the osculating plane with movement along the curve, so that 1/T = dθ/ds, where dθ is the change in direction of c in a distance ds along the curve, or 1/T = |c'|. The torsion is the reciprocal of the length T, which does not have a vivid geometrical interpretation. It is, however, dimensionally correct. At any point, the curve is penetrating the osculating plane, and the sign of T gives the direction of this penetration, whether in the direction of c or opposite to it.
It's very instructive to consider a concrete example. Probably the most useful and commmon space curve is the helix. This curve is found in screw threads, helical springs, and corkscrews. In fact, when I looked for a physical example the first helix I hit upon was a corkscrew. To describe a helix, all we need is to add the z-equation z = at to the parametric equations of a circle. The constant a is the pitch of the helix, the amount it advances per turn. With our preceding equations, this gives a right-handed helix that rotates clockwise as t increases, as seen from the starting point. If you look at the helix from the other direction, the rotation is also clockwise, so right-handedness is an intrinsic property of this helix. By a slight change in the y equation, we can obtain a left-handed helix, which cannot be superimposed on the right-handed helix. A screw with right-handed helical threads advances into a piece threaded with a right-handed helix when it is rotated clockwise. A left-handed screw will not fit into a right-handed nut whichever way it is rotated, but advances into a left-handed nut when rotated anticlockwise. The turnbuckle of a screen-door tension brace has a right-handed screw on one end, and a left-handed screw on the other.
Examine your helix, and picture how the osculating plane changes as you go along the helix. This can be done in imagination or with a sketch, but is easier to see with an actual helix. Its inclination to the z-axis remains constant, but the binormal rotates about the z-axis, making one turn for each revolution. Imagine one osculating plane, and look at how the curve passes through it. A helix can be unwound to make a right triangle, with legs a and 2πb. From this triangle, the inclination of the osculating planes is θ = tan-1(a/2πb). The arc length of one turn of the helix is √(a2 + 4π2b2).
In terms of arc length, the helix is x = b cos(2πs/p), y = b sin(2πs/p), z = as/p, where p = √(a2 + 4π2b2) is the length of one turn of the helix. Then, x' = -(2πb/p) sin(2πs/p), y' = (2πb/p) cos(2πs/p), z' = a/p. It is easy to verify that x'2 + y'2 + z'2 = 1, so that the unit tangential vector is a = (x',y',z'). Then, x" = -(2π/p)2b cos(2πs/p), y" = -(2π/p)2b sin(2πs/p), z" = 0. Therefore, 1/R = (2π/p)2b. Inserting the value of p, we have 1/R = 4π2b/[a2 + (2πb)2]. If a → 0, 1/R → 1/b, the expected result.
The torsion can be obtained from the definition. In fact, 1/T = 2πsin(θ)/p, where θ is the inclination of the osculating plane and p the arc length of one turn. This gives 1/T = 2πa/[a2 + (2πb)2]. As a → 0, 1/T → = 0, as expected. T is taken as positive for a right-handed helix, and negative for a left-handed helix. A helix is characterized by a constant torsion.
The changes in the reference unit vectors with displacement along the curve can be expressed in terms of R and T. The definitions of these quantities give da/ds = b/R and dc/ds = -b/T. Since c = a x b, we can also find the change in b in terms of these two derivatives. In fact, db/ds = -a/R + c/T. These three formulas are the Frenet-Serret formulas. They show that the parameters R and T are sufficient to describe the local behavior of a curve in space, which is an important result. Note that if 1/T is zero, then c is constant, and the curve is a plane curve. Torsion is an essential accompaniment of a space curve. If 1/R is zero, the curve is a straight line, and 1/T has no significance.
In general the torsion is given by 1/T = r'·(r" x r''')/|r"|2. Note that the third derivative enters in this expression, and the sign of T has been specified. If the result is worked out for a helix, it will be found to agree with our previous choice. We note that the torsion of a helix is a constant, which makes working with it very easy. Derivations of these formulas will be found in Widder.
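These formulas are easy to check numerically. The sketch below (my own, with invented values of a and b) differentiates the arc-length parametrization of the helix by central differences and compares |r″| and r′·(r″ × r‴)/|r″|² with the closed-form curvature and torsion derived above.

```python
import numpy as np

a, b = 1.5, 1.0                              # assumed pitch and radius of the helix
p = np.sqrt(a**2 + (2.0 * np.pi * b)**2)     # arc length of one turn

def r(s):
    w = 2.0 * np.pi * s / p
    return np.array([b * np.cos(w), b * np.sin(w), a * s / p])

# Central finite differences for r', r'', r''' at an arbitrary point s0.
s0, h = 0.3, 1e-3
r1 = (r(s0 + h) - r(s0 - h)) / (2.0 * h)
r2 = (r(s0 + h) - 2.0 * r(s0) + r(s0 - h)) / h**2
r3 = (r(s0 + 2*h) - 2.0 * r(s0 + h) + 2.0 * r(s0 - h) - r(s0 - 2*h)) / (2.0 * h**3)

curvature = np.linalg.norm(r2)                            # 1/R = |r''|
torsion = np.dot(r1, np.cross(r2, r3)) / curvature**2     # 1/T

print(curvature, 4.0 * np.pi**2 * b / (a**2 + (2.0 * np.pi * b)**2))
print(torsion,   2.0 * np.pi * a / (a**2 + (2.0 * np.pi * b)**2))
```

The two printed pairs agree to several decimal places, and the torsion comes out positive, as it should for a right-handed helix.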
This definition of torsion agrees with the usual definition in strength of materials as twist per unit length, as in the design of helical springs. A helical spring is in the form of a helix with a certain torsion. When the spring is extended or compressed, the torsion changes, and the change in torsion times the total length of the spring wire is the torsional deflection. Consider a long circular rod of diameter d and length L. If a torque Q is applied at one end of the rod and the other is held fixed, the end of the rod at which Q is applied will rotate by an amount θ = QL/GJ, where G is the modulus of rigidity of the material (in psi, for example) and J is the polar moment of inertia of area, πd4/32 for a cylindrical rod. For steel, G = 11.5 x 106 psi. The same rotation occurs in a spring where the rod is wound into a helix of radius b, and the axial force F is applied at the centre of the spring, so that Q = Fb. The helical form is much easier to handle than the straight rod, and the deflection can be larger without exceeding the elastic limit.
The torsion of the helix is 1/T = 2πa/p2, where p is the length of rod for one turn of the helix. If there are N turns, and the pitch a changes by da, then the total change in length of the spring will be δ = Nda. The change in the torsion will be d(1/T) = (2π/p2)da (treating p as constant for a small change), so that θ = d(1/T)(Np) = (2π/p)δ. We are assuming that the material of the spring is fixed to the osculating plane. When we equate the two expressions for θ, we find (2π/p)δ = FbpN/GJ, or δ = Fbp2N/2πGJ. If the spring is relatively "flat" so that p = 2πb, we get δ = 8F(2b)3N/Gd4, which can be compared with the result obtained in machine design texts. The spring constant k = F/δ = Gd4/8(2b)3N. The ratio of the helix diameter to the rod diameter, 2b/d, is called the spring index. The solid height of a spring is the height when compressed so that all turns touch. The free length is the length when under no load. The ends of springs are prepared in different ways to suit the application.
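Putting numbers into these spring formulas is straightforward. The sketch below is only an illustration, with invented dimensions in inch-pound units (G in psi) and the flat-spring approximation p ≈ 2πb used above.

```python
import math

G = 11.5e6        # modulus of rigidity of steel, psi
d = 0.10          # wire diameter, in
b = 0.40          # helix radius, in  (mean coil diameter D = 2b, spring index D/d = 8)
N = 12            # number of active turns
F = 5.0           # assumed axial load, lb

D = 2.0 * b
J = math.pi * d**4 / 32.0              # polar moment of inertia of the wire
k = G * d**4 / (8.0 * D**3 * N)        # spring constant, lb/in

deflection = F / k                      # axial deflection under the load F
wire_length = N * math.pi * D           # total length of wire, p ~ 2*pi*b per turn
twist = F * b * wire_length / (G * J)   # total twist of the wire, rad

print(k, deflection, twist)
```

As a consistency check, the printed twist equals (2π/p)·δ with p = 2πb, which is the relation used in the derivation.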
Now let's talk about surfaces in space. We are interested in finding the normal vector and tangent plane to the curve at a point on the surface, the length of curves drawn on the surface, and the curvature of the surface. All this can be done fairly easily with differential geometry. First, it is necessary to specify the surface. Two useful ways are, first, to define the surface implicitly by an equation f(x,y,z) = 0, and, second, to specify the surface parametrically by three equations r = r(u,v). The two parameters u,v are a two-dimensional coordinate system on the surface. For example, they may be the latitude and longitude of a point r on the surface of a sphere. An equation z = f(x,y) is easily expressed in the first form, f(x,y) - z = 0, and is a special case of the parametric representation r = [x, y, f(x,y)]. We shall assume that all functions have continuous second derivatives, and are single-valued.
Using the parametric representation, we can find two tangent vectors by differentiation. These are eu = ∂r/∂u and ev = ∂r/∂v. For notational ease, we shall denote these by xu and xv, and the radius vector simply by x. These are the directions on the surface corresponding to changes du and dv in the parameters. We assume that these two directions are not the same, as they certainly would not be in any useful parametrization. Then, the local normal to the surface is in the direction of xu x xv. The unit normal vector is ζ. It satisfies (ζ,xu) = 0 and (ζ,xv) = 0. The tangent plane to the surface is (x - x0)·ζ = 0, and a normal line is x = x0 + tζ. These expressions all contain vectors, of course.
If we use f(x,y,z) = 0 and do not single out any coordinate for special treatment, we can find the normal by differentiating this expression, to find fxdx + fydy + fzdz = 0. This restricts the vector (dx,dy,dz) to lie in the tangent plane. Therefore, the vector grad f = (fx,fy,fz) is normal to the surface. In fact, ζ = grad f/|grad f|.
For further investigation, we shall use x = x(u,v) [a vector!]. Then, as we have seen, ζ = xu x xv. There is an arbitrary sign, whose effect we shall ignore. A differential vector in the tangent plane is then dx = xudu + xvdv. The squared distance in the tangent plane, ds2, is the squared length of this vector, or ds2 = (xu,xu)du2 + 2(xu,xv)dudv + (xv,xv)dv2. We have used (xu,xv) = (xv,xu). The expressions in parentheses are, of course, scalar products. This may be expressed more concisely as ds2 = Edu2 + 2Fdudv + Gdv2. This equation, called the metric form of the surface, expresses length on the surface as a function of du and dv. The angle between two curves on the surface can be expressed in terms of these quantities in a similar way, by expanding (dx1,dx2). If one curve is v = h(u) and the other is v = k(u), the result is cos θ = (E + Fh' + Fk' + Gh'k')/[√(E + 2Fh' + Gh'2)√(E + 2Fk' + Gk'2)].
To study the curvature of the surface, we consider the curvature of normal sections of the surface. Normal sections are the intersection of the surface with planes that contain the surface normal. We must prepare ourselves to accept that the curvature will depend on the direction of the normal plane, and that the curvature of the surface cannot be expressed in terms of a single parameter. To find the curvature, we must consider changes in the direction of the normal, dζ = ζudu + ζvdv. Let us form -(dx,dζ) = -[(xu,ζu)du2 + (xu,ζv)dudv + (xv,ζu)dvdu + (xv,ζv)dv2]. The motivation for considering this scalar product will be seen in the sequel. The scalar products can be expressed in terms of ζ by differentiating the conditions that ζ is normal to xu and xv. For example, (ζu,xu) + (ζ,xuu) = 0. This introduces the second derivatives that are essential for expressing curvature. We find that -(dx,dζ) = edu2 + 2fdudv + gdv2, where e = (ζ,xuu), f = (ζ,xuv), g = (ζ,xvv). These are just the components of the second derivative vectors in the direction of the normal.
Suppose the tangent vector for a certain curve on the surface is α = dx/ds, where ds is arc length along the curve. Since (α,ζ) = 0, we find by differentiation that (dα/ds,ζ) + (α,dζ/ds) = 0. The derivative of α is given by the Frenet-Serret formulas as dα/ds = β/r = ±ζ/r. Since ζ is a unit vector, we then find that 1/r = ±(dx/ds,dζ/ds) = ±(dx,dζ)/ds2. We have already found expressions for the scalar products in this expression in terms of du and dv. In fact, 1/r = [edu2 + 2fdudv + gdv2] / [Edu2 + 2Fdudv + Gdv2]. We can also specify a certain direction with the ratio h' = dv/du. The maximum and minimum values of 1/r are called the principal curvatures. Their product is the Gaussian curvature, and their average is the mean curvature.
As a concrete example, consider a sphere of radius a, where x = a sin u cos v, y = a sin u sin v, z = a cos u. Here, u and v are the polar angles. In this case, it is easy to see that the normal vector is (sin u cos v, sin u sin v, cos u). The first derivatives are xu = (a cos u cos v, a cos u sin v, -a sin u) and xv = (-a sin u sin v, a sin u cos v, 0). If we had not realized that the normal could be obtained easily, it could be found from the cross product of these tangent vectors. This gives E = a2, F = 0, G = a2 sin2u, so ds2 = a2du2 + a2sin2u dv2. We can see by inspection that this result is valid, considering displacements along a meridian and along a circle of latitude: ds2 = (adu)2 + (a sin u dv)2. If u and v are orthogonal coordinates, then F = 0.
The second derivatives are xuu = (-a sin u cos v, -a sin u sin v, -a cos u), xuv = (-a cos u sin v, a cos u cos v, 0), xvv = (-a sin u cos v, -a sin u sin v, 0). Now we find e = a, f = 0, g = a sin2 u. The curvature is 1/r = (1/a)[(1 + sin2u h'2)/(1 + sin2u h'2)] = 1/a, independent of direction. This is, of course, expected, and verifies our equations.
The reader may wish to consider a cylinder of radius a. Take u as the polar angle, and v as the vertical coordinate, so that x = a cos u, y = a sin u, z = v. Then, E = a2, F = 0, G = 1, e = a, f = g = 0. The curvature is 1/r = a/(a2 + h'2). The maximum curvature occurs for h' = 0, 1/r = 1/a, and the minimum for h' = ∞, where 1/r = 0. The Gaussian curvature is zero, while the mean curvature is 1/2a.
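The cylinder exercise can also be checked symbolically. The sketch below is mine, not from the text: it builds x(u,v) for the cylinder, forms E, F, G and e, f, g as the scalar products defined above (with the sign of the normal chosen so that e = a), and recovers the stated normal curvature a/(a2 + h'2), with hp standing in for h'.

```python
import sympy as sp

u, v, hp = sp.symbols("u v hp")
a = sp.symbols("a", positive=True)

x = sp.Matrix([a * sp.cos(u), a * sp.sin(u), v])   # the cylinder of radius a

xu, xv = x.diff(u), x.diff(v)
n = xu.cross(xv)
zeta = -n / n.norm()            # unit normal; sign chosen so that e comes out +a

E, F, G = xu.dot(xu), xu.dot(xv), xv.dot(xv)
e = zeta.dot(x.diff(u, 2))
f = zeta.dot(x.diff(u).diff(v))
g = zeta.dot(x.diff(v, 2))

curvature = sp.simplify((e + 2 * f * hp + g * hp**2) / (E + 2 * F * hp + G * hp**2))
print(sp.simplify(E), sp.simplify(F), sp.simplify(G))   # a**2  0  1
print(sp.simplify(e), sp.simplify(f), sp.simplify(g))   # a  0  0
print(curvature)                                        # a/(a**2 + hp**2)
```

Replacing the parametrization with the sphere of the previous paragraphs reproduces E = a², G = a² sin²u and the direction-independent curvature 1/a in the same way.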
Since the metric form is ds2 = a2dφ2 + a2cos2φdθ2 for a sphere where we have now taken φ = latitude and θ = longitude, the ratio of the north-south distance dy to an east-west distance dx in a small displacement is dy/dx = dφ/cos φ dθ. This is the slope of the displacement, or the tangent of its azimuth from north. Whenever F = 0, the coordinates are rectangular and we can form this ratio simply by looking at the metric form. Let's now consider a cylinder wrapped around the sphere, where z replaces the latitude φ, and ds2 = a2dθ2 + dz2. Here, dy/dx = dz/adθ.
Now suppose we map the sphere on the cylinder, so that the point φ,θ on the sphere is represented by the point z,θ on the cylinder. This can be done by a arbitrary z = z(φ), but a very useful mapping results when angles are preserved. That is, two displacements that make an angle w on the sphere are to make the same angle when mapped onto the cylinder. This will be true when the angle between the meridian and a displacement is preserved, since we only have to refer the displacements to the meridian. This means that the slopes of the displacements are to be the same. We know the slopes, so we have the requirement that dφ/cos φ dθ = dz/adθ, or dz/dφ = a/cos φ. This integrates to z = a ln tan(φ/2 + π/4) + C. If z = 0 when φ = 0, then C = 0. This is the desired relation between z and φ, which gives a Mercator map. The scale is determined by the factor a. Distance representing longitude is given by x = a θ.
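The mapping itself is only a couple of lines of code. This sketch (my own) applies z = a ln tan(φ/2 + π/4) to a few latitudes and also inverts it, the inverse being φ = 2 arctan(e^(z/a)) − π/2.

```python
import numpy as np

a = 1.0                                  # globe radius sets the map scale

def mercator(lat_deg, lon_deg):
    """Mercator projection: x from longitude, y (the cylinder's z) from latitude."""
    lam, phi = np.radians(lon_deg), np.radians(lat_deg)
    return a * lam, a * np.log(np.tan(phi / 2.0 + np.pi / 4.0))

def inverse(x, y):
    lat = np.degrees(2.0 * np.arctan(np.exp(y / a)) - np.pi / 2.0)
    return lat, np.degrees(x / a)

for lat in (0.0, 30.0, 60.0, 80.0):
    x, y = mercator(lat, 45.0)
    print(lat, y, inverse(x, y)[0])      # y grows rapidly toward the poles
```

The rapid growth of y with latitude is exactly the scale distortion that makes high-latitude regions look oversized on a Mercator map.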
A straight line on a Mercator map is called a loxodrome, Greek for "slanting course." It is a course of constant compass heading, very convenient for navigation. It represents the shortest distance only when approximately north-south or east-west, since the scale of a Mercator map is not constant, but changes with latitude. Any long course is divided into a series of loxodromes. A Mercator map is not good for comparing regions at different distances from the equator, but always gives an accurate appreciation of the shape of limited areas. It is not applicable to polar regions. It is an interesting exercise to compare loxodromes on a Mercator map with courses on a globe with the same end points. No good way to represent the whole globe on a plane map has yet been found. Look at the various attempts displayed in atlases; the best tear the surface of the globe into disconnected regions.
D. V. Widder, Advanced Calculus (New York: Dover, 1989). Chapter 3.
V. M. Faires, Design of Machine Elements, 4th ed. (New York: Macmillan, 1965). Chapter 6, pp. 184-187. Helical springs.
Composed by J. B. Calvert
Created 16 January 2005
Last revised 17 January 2005 | http://mysite.du.edu/~etuttle/math/space.htm | 13 |
206 | A gear or cogwheel is a rotating machine part having cut teeth, or cogs, which mesh with another toothed part in order to transmit torque. Two or more gears working in tandem are called a transmission and can produce a mechanical advantage through a gear ratio and thus may be considered a simple machine. Geared devices can change the speed, torque, and direction of a power source. The most common situation is for a gear to mesh with another gear; however, a gear can also mesh with a non-rotating toothed part, called a rack, thereby producing translation instead of rotation.
The gears in a transmission are analogous to the wheels in a pulley. An advantage of gears is that the teeth of a gear prevent slipping.
When two gears of unequal number of teeth are combined, a mechanical advantage is produced, with both the rotational speeds and the torques of the two gears differing in a simple relationship.
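As a numerical illustration of that relationship (a sketch with invented tooth counts, not drawn from the article), an ideal meshing pair scales speed down and torque up by the same factor, the gear ratio.

```python
def mesh(n_driver_teeth, n_driven_teeth, driver_rpm, driver_torque):
    """Ideal (lossless) external gear pair: speed drops and torque rises
    by the same factor, the gear ratio."""
    ratio = n_driven_teeth / n_driver_teeth
    driven_rpm = driver_rpm / ratio
    driven_torque = driver_torque * ratio
    # an external pair also reverses the direction of rotation
    return driven_rpm, driven_torque

# A 20-tooth pinion driving a 60-tooth gear at 1200 rpm and 10 N·m (made-up numbers):
print(mesh(20, 60, 1200.0, 10.0))   # (400.0, 30.0)
```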
In transmissions which offer multiple gear ratios, such as bicycles and cars, the term gear, as in first gear, refers to a gear ratio rather than an actual physical gear. The term is used to describe similar devices even when the gear ratio is continuous rather than discrete, or when the device does not actually contain any gears, as in a continuously variable transmission.
The earliest known reference to gears was circa A.D. 50 by Hero of Alexandria, but they can be traced back to the Greek mechanics of the Alexandrian school in the 3rd century B.C. and were greatly developed by the Greek polymath Archimedes (287–212 B.C.). The Antikythera mechanism is an example of a very early and intricate geared device, designed to calculate astronomical positions. Its time of construction is now estimated between 150 and 100 BC.
Comparison with drive mechanisms
The definite velocity ratio which results from having teeth gives gears an advantage over other drives (such as traction drives and V-belts) in precision machines such as watches that depend upon an exact velocity ratio. In cases where driver and follower are proximal, gears also have an advantage over other drives in the reduced number of parts required; the downside is that gears are more expensive to manufacture and their lubrication requirements may impose a higher operating cost.
External vs internal gears
An external gear is one with the teeth formed on the outer surface of a cylinder or cone. Conversely, an internal gear is one with the teeth formed on the inner surface of a cylinder or cone. For bevel gears, an internal gear is one with the pitch angle exceeding 90 degrees. Internal gears do not cause output shaft direction reversal.
Spur gears or straight-cut gears are the simplest type of gear. They consist of a cylinder or disk with the teeth projecting radially, and although they are not straight-sided in form, the edge of each tooth is straight and aligned parallel to the axis of rotation. These gears can be meshed together correctly only if they are fitted to parallel shafts.
Helical or "dry fixed" gears offer a refinement over spur gears. The leading edges of the teeth are not parallel to the axis of rotation, but are set at an angle. Since the gear is curved, this angling causes the tooth shape to be a segment of a helix. Helical gears can be meshed in parallel or crossed orientations. The former refers to when the shafts are parallel to each other; this is the most common orientation. In the latter, the shafts are non-parallel, and in this configuration the gears are sometimes known as "skew gears".
The angled teeth engage more gradually than do spur gear teeth, causing them to run more smoothly and quietly. With parallel helical gears, each pair of teeth first make contact at a single point at one side of the gear wheel; a moving curve of contact then grows gradually across the tooth face to a maximum then recedes until the teeth break contact at a single point on the opposite side. In spur gears, teeth suddenly meet at a line contact across their entire width causing stress and noise. Spur gears make a characteristic whine at high speeds. Whereas spur gears are used for low speed applications and those situations where noise control is not a problem, the use of helical gears is indicated when the application involves high speeds, large power transmission, or where noise abatement is important. The speed is considered to be high when the pitch line velocity exceeds 25 m/s.
A disadvantage of helical gears is a resultant thrust along the axis of the gear, which needs to be accommodated by appropriate thrust bearings, and a greater degree of sliding friction between the meshing teeth, often addressed with additives in the lubricant.
For a 'crossed' or 'skew' configuration, the gears must have the same pressure angle and normal pitch; however, the helix angle and handedness can be different. The relationship between the two shafts is actually defined by the helix angle(s) of the two shafts and the handedness, as defined:
- Σ = β1 + β2 for gears of the same handedness
- Σ = β1 − β2 for gears of opposite handedness
Where β is the helix angle for the gear and Σ is the angle between the shafts. The crossed configuration is less mechanically sound because there is only a point contact between the gears, whereas in the parallel configuration there is a line contact.
Quite commonly, helical gears are used with the helix angle of one having the negative of the helix angle of the other; such a pair might also be referred to as having a right-handed helix and a left-handed helix of equal angles. The two equal but opposite angles add to zero: the angle between shafts is zero – that is, the shafts are parallel. Where the sum or the difference (as described in the equations above) is not zero the shafts are crossed. For shafts crossed at right angles, the helix angles are of the same hand because they must add to 90 degrees.
Double helical gears, or herringbone gears, overcome the problem of axial thrust presented by "single" helical gears, by having two sets of teeth that are set in a V shape. A double helical gear can be thought of as two mirrored helical gears joined together. This arrangement cancels out the net axial thrust, since each half of the gear thrusts in the opposite direction resulting in a net axial force of zero. This arrangement can remove the need for thrust bearings. However, double helical gears are more difficult to manufacture due to their more complicated shape.
For both possible rotational directions, there exist two possible arrangements for the oppositely-oriented helical gears or gear faces. One arrangement is stable, and the other is unstable. In a stable orientation, the helical gear faces are oriented so that each axial force is directed toward the center of the gear. In an unstable orientation, both axial forces are directed away from the center of the gear. In both arrangements, the total (or net) axial force on each gear is zero when the gears are aligned correctly. If the gears become misaligned in the axial direction, the unstable arrangement will generate a net force that may lead to disassembly of the gear train, while the stable arrangement generates a net corrective force. If the direction of rotation is reversed, the direction of the axial thrusts is also reversed, so a stable configuration becomes unstable, and vice versa.
Stable double helical gears can be directly interchanged with spur gears without any need for different bearings.
A bevel gear is shaped like a right circular cone with most of its tip cut off. When two bevel gears mesh, their imaginary vertices must occupy the same point. Their shaft axes also intersect at this point, forming an arbitrary non-straight angle between the shafts. The angle between the shafts can be anything except zero or 180 degrees. Bevel gears with equal numbers of teeth and shaft axes at 90 degrees are called miter gears.
The teeth of a bevel gear may be straight-cut as with spur gears, or they may be cut in a variety of other shapes. Spiral bevel gear teeth are curved along the tooth's length and set at an angle, analogously to the way helical gear teeth are set at an angle compared to spur gear teeth. Zerol bevel gears have teeth which are curved along their length, but not angled. Spiral bevel gears have the same advantages and disadvantages relative to their straight-cut cousins as helical gears do to spur gears. Straight bevel gears are generally used only at speeds below 5 m/s (1000 ft/min), or, for small gears, 1000 r.p.m.
Hypoid gears resemble spiral bevel gears except the shaft axes do not intersect. The pitch surfaces appear conical but, to compensate for the offset shaft, are in fact hyperboloids of revolution. Hypoid gears are almost always designed to operate with shafts at 90 degrees. Depending on which side the shaft is offset to, relative to the angling of the teeth, contact between hypoid gear teeth may be even smoother and more gradual than with spiral bevel gear teeth. Also, the pinion can be designed with fewer teeth than a spiral bevel pinion, with the result that gear ratios of 60:1 and higher are feasible using a single set of hypoid gears. This style of gear is most commonly found driving mechanical differentials; which are normally straight cut bevel gears; in motor vehicle axles.
Crown gears or contrate gears are a particular form of bevel gear whose teeth project at right angles to the plane of the wheel; in their orientation the teeth resemble the points on a crown. A crown gear can only mesh accurately with another bevel gear, although crown gears are sometimes seen meshing with spur gears. A crown gear is also sometimes meshed with an escapement such as found in mechanical clocks.
Worm-and-gear sets are a simple and compact way to achieve a high torque, low speed gear ratio. For example, helical gears are normally limited to gear ratios of less than 10:1 while worm-and-gear sets vary from 10:1 to 500:1. A disadvantage is the potential for considerable sliding action, leading to low efficiency.
Worm gears can be considered a species of helical gear, but its helix angle is usually somewhat large (close to 90 degrees) and its body is usually fairly long in the axial direction; and it is these attributes which give it screw like qualities. The distinction between a worm and a helical gear is made when at least one tooth persists for a full rotation around the helix. If this occurs, it is a 'worm'; if not, it is a 'helical gear'. A worm may have as few as one tooth. If that tooth persists for several turns around the helix, the worm will appear, superficially, to have more than one tooth, but what one in fact sees is the same tooth reappearing at intervals along the length of the worm. The usual screw nomenclature applies: a one-toothed worm is called single thread or single start; a worm with more than one tooth is called multiple thread or multiple start. The helix angle of a worm is not usually specified. Instead, the lead angle, which is equal to 90 degrees minus the helix angle, is given.
In a worm-and-gear set, the worm can always drive the gear. However, if the gear attempts to drive the worm, it may or may not succeed. Particularly if the lead angle is small, the gear's teeth may simply lock against the worm's teeth, because the force component circumferential to the worm is not sufficient to overcome friction. Worm-and-gear sets that do lock are called self locking, which can be used to advantage, as for instance when it is desired to set the position of a mechanism by turning the worm and then have the mechanism hold that position. An example is the machine head found on some types of stringed instruments.
If the gear in a worm-and-gear set is an ordinary helical gear, only a single point of contact will be achieved. If medium to high power transmission is desired, the tooth shape of the gear is modified to achieve more intimate contact by making both gears partially envelop each other. This is done by making both concave and joining them at a saddle point; this is called a cone-drive, or "double enveloping".
Worm gears can be right or left-handed, following the long-established practice for screw threads.
Non-circular gears are designed for special purposes. While a regular gear is optimized to transmit torque to another engaged member with minimum noise and wear and maximum efficiency, a non-circular gear's main objective might be ratio variations, axle displacement oscillations and more. Common applications include textile machines, potentiometers and continuously variable transmissions.
Rack and pinion
A rack is a toothed bar or rod that can be thought of as a sector gear with an infinitely large radius of curvature. Torque can be converted to linear force by meshing a rack with a pinion: the pinion turns; the rack moves in a straight line. Such a mechanism is used in automobiles to convert the rotation of the steering wheel into the left-to-right motion of the tie rod(s). Racks also feature in the theory of gear geometry, where, for instance, the tooth shape of an interchangeable set of gears may be specified for the rack (infinite radius), and the tooth shapes for gears of particular actual radii are then derived from that. The rack and pinion gear type is employed in a rack railway.
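Because the rack is effectively a gear of infinite radius, the conversion from pinion rotation to rack travel is simple: one pinion revolution advances the rack by the pitch circumference. The sketch below assumes a metric pinion (pitch diameter = module × number of teeth); the module and tooth count in the example are illustrative values, not taken from any particular steering design.

```python
import math

def rack_travel_mm(module_mm, pinion_teeth, pinion_revolutions):
    """Linear rack travel produced by a given pinion rotation.

    One pinion revolution advances the rack by the pitch circumference,
    pi * pitch diameter, where pitch diameter = module * number of teeth
    for a metric pinion.
    """
    pitch_diameter = module_mm * pinion_teeth
    return math.pi * pitch_diameter * pinion_revolutions

# Example: a module-2, 20-tooth pinion turned 1.5 times moves the rack
# about 188.5 mm.
print(rack_travel_mm(2.0, 20, 1.5))
```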
In epicyclic gearing one or more of the gear axes moves. Examples are sun and planet gearing (see below) and mechanical differentials.
Sun and planet
Sun and planet gearing was a method of converting reciprocating motion into rotary motion in steam engines. It was famously used by James Watt on his early steam engines in order to get around the patent on the crank.
A harmonic drive is a specialized gearing mechanism often used in industrial motion control, robotics and aerospace for its advantages over traditional gearing systems, including lack of backlash, compactness and high gear ratios.
A cage gear, also called a lantern gear or lantern pinion, has cylindrical rods for teeth, parallel to the axle and arranged in a circle around it, much as the bars on a round bird cage or lantern. The assembly is held together by disks at either end, into which the tooth rods and axle are set. Lantern gears are more efficient than solid pinions, and dirt can fall through the rods rather than becoming trapped and increasing wear.
Sometimes used in clocks, the lantern pinion should always be driven by a gearwheel, not used as the driver. The lantern pinion was not initially favoured by conservative clock makers. It became popular in turret clocks where dirty working conditions were most commonplace. Domestic American clock movements often used them.
All cogs of each gear component of magnetic gears act as permanent magnets with periodic alternation of opposite magnetic poles on the mating surfaces. Gear components are mounted with a backlash capability similar to other mechanical gearings. At low load, such gears work without touching, giving increased reliability without noise.
- Rotational frequency, n
- Measured in rotations per unit time, such as RPM.
- Angular frequency, ω
- Measured in radians per second (rad/s).
- Number of teeth, N
- How many teeth a gear has, an integer. In the case of worms, it is the number of thread starts that the worm has.
- Gear, wheel
- The larger of two interacting gears or a gear on its own.
- Pinion
- The smaller of two interacting gears.
- Path of contact
- Path followed by the point of contact between two meshing gear teeth.
- Line of action, pressure line
- Line along which the force between two meshing gear teeth is directed. It has the same direction as the force vector. In general, the line of action changes from moment to moment during the period of engagement of a pair of teeth. For involute gears, however, the tooth-to-tooth force is always directed along the same line—that is, the line of action is constant. This implies that for involute gears the path of contact is also a straight line, coincident with the line of action—as is indeed the case.
- Axis
- Axis of revolution of the gear; center line of the shaft.
- Pitch point, p
- Point where the line of action crosses a line joining the two gear axes.
- Pitch circle, pitch line
- Circle centered on and perpendicular to the axis, and passing through the pitch point. A predefined diametral position on the gear where the circular tooth thickness, pressure angle and helix angles are defined.
- Pitch diameter, d
- A predefined diametral position on the gear where the circular tooth thickness, pressure angle and helix angles are defined. The standard pitch diameter is a basic dimension and cannot be measured, but is a location where other measurements are made. Its value is based on the number of teeth, the normal module (or normal diametral pitch), and the helix angle. It is calculated as:
- d = (normal module × N) / cos(helix angle) in metric units, or d = N / (normal diametral pitch × cos(helix angle)) in imperial units.
- Module, m
- A scaling factor used in metric gears with units in millimeters whose effect is to enlarge the gear tooth size as the module increases and reduce the size as the module decreases. Module can be defined in the normal (mn), the transverse (mt), or the axial planes (ma) depending on the design approach employed and the type of gear being designed. Module is typically an input value into the gear design and is seldom calculated.
- Operating pitch diameters
- Diameters determined from the number of teeth and the center distance at which gears operate. Example for a pinion: dw1 = 2 aw / (u + 1), where aw is the operating center distance and u is the tooth ratio N2/N1.
- Pitch surface
- In cylindrical gears, cylinder formed by projecting a pitch circle in the axial direction. More generally, the surface formed by the sum of all the pitch circles as one moves along the axis. For bevel gears it is a cone.
- Angle of action
- Angle with vertex at the gear center, one leg on the point where mating teeth first make contact, the other leg on the point where they disengage.
- Arc of action
- Segment of a pitch circle subtended by the angle of action.
- Pressure angle,
- The complement of the angle between the direction that the teeth exert force on each other, and the line joining the centers of the two gears. For involute gears, the teeth always exert force along the line of action, which, for involute gears, is a straight line; and thus, for involute gears, the pressure angle is constant.
- Outside diameter,
- Diameter of the gear, measured from the tops of the teeth.
- Root diameter
- Diameter of the gear, measured at the base of the tooth.
- Addendum, a
- Radial distance from the pitch surface to the outermost point of the tooth.
- Dedendum, b
- Radial distance from the depth of the tooth trough to the pitch surface.
- Whole depth,
- The distance from the top of the tooth to the root; it is equal to addendum plus dedendum or to working depth plus clearance.
- Clearance
- Distance between the root circle of a gear and the addendum circle of its mate.
- Working depth
- Depth of engagement of two gears, that is, the sum of their operating addendums.
- Circular pitch, p
- Distance from one face of a tooth to the corresponding face of an adjacent tooth on the same gear, measured along the pitch circle.
- Diametral pitch,
- Ratio of the number of teeth to the pitch diameter. Could be measured in teeth per inch or teeth per centimeter.
- Base circle
- In involute gears, the circle from which the involute tooth profile is developed. The radius of the base circle is somewhat smaller than that of the pitch circle.
- Base pitch, normal pitch,
- In involute gears, distance from one face of a tooth to the corresponding face of an adjacent tooth on the same gear, measured along the base circle.
- Interference
- Contact between teeth other than at the intended parts of their surfaces.
- Interchangeable set
- A set of gears, any of which will mate properly with any other.
Helical gear nomenclature
- Helix angle,
- Angle between a tangent to the helix and the gear axis. It is zero in the limiting case of a spur gear, although it can be considered the hypotenuse angle as well.
- Normal circular pitch,
- Circular pitch in the plane normal to the teeth.
- Transverse circular pitch, p
- Circular pitch in the plane of rotation of the gear. Sometimes just called "circular pitch".
Several other helix parameters can be viewed either in the normal or transverse planes. The subscript n usually indicates the normal.
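The usual link between the two planes is the helix angle: a transverse-plane quantity equals the corresponding normal-plane quantity divided by the cosine of the helix angle (for example, transverse circular pitch = normal circular pitch / cos(helix angle)). The Python sketch below encodes just that standard relation; the 3 mm module and 20 degree helix in the example are arbitrary illustrative numbers.

```python
import math

def transverse_from_normal(normal_value, helix_angle_deg):
    """Convert a normal-plane circular pitch or module to its transverse value.

    Standard helical-gear relation: transverse = normal / cos(helix angle).
    For a spur gear (helix angle of zero) the two values coincide.
    """
    return normal_value / math.cos(math.radians(helix_angle_deg))

# Example: a 3 mm normal module on a 20 degree helix corresponds to a
# transverse module of about 3.19 mm.
print(transverse_from_normal(3.0, 20.0))
```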
Worm gear nomenclature
- Distance from any point on a thread to the corresponding point on the next turn of the same thread, measured parallel to the axis.
- Linear pitch, p
- Distance from any point on a thread to the corresponding point on the adjacent thread, measured parallel to the axis. For a single-thread worm, lead and linear pitch are the same.
- Lead angle,
- Angle between a tangent to the helix and a plane perpendicular to the axis. Note that it is the complement of the helix angle which is usually given for helical gears.
- Pitch diameter,
- Same as described earlier in this list. Note that for a worm it is still measured in a plane perpendicular to the gear axis, not a tilted plane.
Subscript w denotes the worm, subscript g denotes the gear.
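The definitions above combine into two small formulas worth stating explicitly: the lead is the number of thread starts times the axial (linear) pitch, and the lead angle satisfies tan(lead angle) = lead / (π × worm pitch diameter). The sketch below applies them; the two-start, 8 mm pitch, 40 mm diameter worm in the example is an assumed illustration.

```python
import math

def worm_lead(num_starts, axial_pitch_mm):
    """Lead of a worm: axial advance per revolution = starts * axial pitch."""
    return num_starts * axial_pitch_mm

def worm_lead_angle_deg(lead_mm, worm_pitch_diameter_mm):
    """Lead angle of a worm, from tan(lead angle) = lead / (pi * pitch diameter)."""
    return math.degrees(math.atan(lead_mm / (math.pi * worm_pitch_diameter_mm)))

# Example: a two-start worm with an 8 mm axial pitch and a 40 mm pitch
# diameter has a 16 mm lead and a lead angle of about 7.3 degrees.
lead = worm_lead(2, 8.0)
print(lead, worm_lead_angle_deg(lead, 40.0))
```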
Tooth contact nomenclature
- Point of contact
- Any point at which two tooth profiles touch each other.
- Line of contact
- A line or curve along which two tooth surfaces are tangent to each other.
- Path of action
- The locus of successive contact points between a pair of gear teeth, during the phase of engagement. For conjugate gear teeth, the path of action passes through the pitch point. It is the trace of the surface of action in the plane of rotation.
- Line of action
- The path of action for involute gears. It is the straight line passing through the pitch point and tangent to both base circles.
- Surface of action
- The imaginary surface in which contact occurs between two engaging tooth surfaces. It is the summation of the paths of action in all sections of the engaging teeth.
- Plane of action
- The surface of action for involute, parallel axis gears with either spur or helical teeth. It is tangent to the base cylinders.
- Zone of action (contact zone)
- For involute, parallel-axis gears with either spur or helical teeth, is the rectangular area in the plane of action bounded by the length of action and the effective face width.
- Path of contact
- The curve on either tooth surface along which theoretical single point contact occurs during the engagement of gears with crowned tooth surfaces or gears that normally engage with only single point contact.
- Length of action
- The distance on the line of action through which the point of contact moves during the action of the tooth profile.
- Arc of action, Qt
- The arc of the pitch circle through which a tooth profile moves from the beginning to the end of contact with a mating profile.
- Arc of approach, Qa
- The arc of the pitch circle through which a tooth profile moves from its beginning of contact until the point of contact arrives at the pitch point.
- Arc of recess, Qr
- The arc of the pitch circle through which a tooth profile moves from contact at the pitch point until contact ends.
- Contact ratio, mc, ε
- The number of angular pitches through which a tooth surface rotates from the beginning to the end of contact. In a simple way, it can be defined as a measure of the average number of teeth in contact during the period in which a tooth comes and goes out of contact with the mating gear.
- Transverse contact ratio, mp, εα
- The contact ratio in a transverse plane. It is the ratio of the angle of action to the angular pitch. For involute gears it is most directly obtained as the ratio of the length of action to the base pitch.
- Face contact ratio, mF, εβ
- The contact ratio in an axial plane, or the ratio of the face width to the axial pitch. For bevel and hypoid gears it is the ratio of face advance to circular pitch.
- Total contact ratio, mt, εγ
- The sum of the transverse contact ratio and the face contact ratio.
- Modified contact ratio, mo
- For bevel gears, the square root of the sum of the squares of the transverse and face contact ratios.
- Limit diameter
- Diameter on a gear at which the line of action intersects the maximum (or minimum for internal pinion) addendum circle of the mating gear. This is also referred to as the start of active profile, the start of contact, the end of contact, or the end of active profile.
- Start of active profile (SAP)
- Intersection of the limit diameter and the involute profile.
- Face advance
- Distance on a pitch circle through which a helical or spiral tooth moves from the position at which contact begins at one end of the tooth trace on the pitch surface to the position where contact ceases at the other end.
Tooth thickness nomenclature
- Circular thickness
- Length of arc between the two sides of a gear tooth, on the specified datum circle.
- Transverse circular thickness
- Circular thickness in the transverse plane.
- Normal circular thickness
- Circular thickness in the normal plane. In a helical gear it may be considered as the length of arc along a normal helix.
- Axial thickness
- In helical gears and worms, tooth thickness in an axial cross section at the standard pitch diameter.
- Base circular thickness
- In involute teeth, length of arc on the base circle between the two involute curves forming the profile of a tooth.
- Normal chordal thickness
- Length of the chord that subtends a circular thickness arc in the plane normal to the pitch helix. Any convenient measuring diameter may be selected, not necessarily the standard pitch diameter.
- Chordal addendum (chordal height)
- Height from the top of the tooth to the chord subtending the circular thickness arc. Any convenient measuring diameter may be selected, not necessarily the standard pitch diameter.
- Profile shift
- Displacement of the basic rack datum line from the reference cylinder, made non-dimensional by dividing by the normal module. It is used to specify the tooth thickness, often for zero backlash.
- Rack shift
- Displacement of the tool datum line from the reference cylinder, made non-dimensional by dividing by the normal module. It is used to specify the tooth thickness.
- Measurement over pins
- Measurement of the distance taken over a pin positioned in a tooth space and a reference surface. The reference surface may be the reference axis of the gear, a datum surface or either one or two pins positioned in the tooth space or spaces opposite the first. This measurement is used to determine tooth thickness.
- Span measurement
- Measurement of the distance across several teeth in a normal plane. As long as the measuring device has parallel measuring surfaces that contact on an unmodified portion of the involute, the measurement will be along a line tangent to the base cylinder. It is used to determine tooth thickness.
- Modified addendum teeth
- Teeth of engaging gears, one or both of which have non-standard addendum.
- Full-depth teeth
- Teeth in which the working depth equals 2.000 divided by the normal diametral pitch.
- Stub teeth
- Teeth in which the working depth is less than 2.000 divided by the normal diametral pitch.
- Equal addendum teeth
- Teeth in which two engaging gears have equal addendums.
- Long and short-addendum teeth
- Teeth in which the addendums of two engaging gears are unequal.
Pitch is the distance between a point on one tooth and the corresponding point on an adjacent tooth. It is a dimension measured along a line or curve in the transverse, normal, or axial directions. The use of the single word pitch without qualification may be ambiguous, and for this reason it is preferable to use specific designations such as transverse circular pitch, normal base pitch, axial pitch.
- Circular pitch, p
- Arc distance along a specified pitch circle or pitch line between corresponding profiles of adjacent teeth.
- Transverse circular pitch, pt
- Circular pitch in the transverse plane.
- Normal circular pitch, pn, pe
- Circular pitch in the normal plane, and also the length of the arc along the normal pitch helix between helical teeth or threads.
- Axial pitch, px
- Linear pitch in an axial plane and in a pitch surface. In helical gears and worms, axial pitch has the same value at all diameters. In gearing of other types, axial pitch may be confined to the pitch surface and may be a circular measurement. The term axial pitch is preferred to the term linear pitch. The axial pitch of a helical worm and the circular pitch of its worm gear are the same.
- Normal base pitch, pN, pbn
- In an involute helical gear, the base pitch in the normal plane. It is the normal distance between parallel helical involute surfaces on the plane of action in the normal plane, or the length of arc on the normal base helix. It is a constant distance in any helical involute gear.
- Transverse base pitch, pb, pbt
- In an involute gear, the pitch on the base circle or along the line of action. Corresponding sides of involute gear teeth are parallel curves, and the base pitch is the constant and fundamental distance between them along a common normal in a transverse plane.
- Diametral pitch (transverse), Pd
- Ratio of the number of teeth to the standard pitch diameter in inches.
- Normal diametral pitch, Pnd
- Value of diametral pitch in a normal plane of a helical gear or worm.
- Angular pitch, θN, τ
- Angle subtended by the circular pitch, usually expressed in radians.
- τ = 360/N degrees, or 2π/N radians.
Backlash is the error in motion that occurs when gears change direction. It exists because there is always some gap between the trailing face of the driving tooth and the leading face of the tooth behind it on the driven gear, and that gap must be closed before force can be transferred in the new direction. The term "backlash" can also be used to refer to the size of the gap, not just the phenomenon it causes; thus, one could speak of a pair of gears as having, for example, "0.1 mm of backlash." A pair of gears could be designed to have zero backlash, but this would presuppose perfection in manufacturing, uniform thermal expansion characteristics throughout the system, and no lubricant. Therefore, gear pairs are designed to have some backlash. It is usually provided by reducing the tooth thickness of each gear by half the desired gap distance. In the case of a large gear and a small pinion, however, the backlash is usually taken entirely off the gear and the pinion is given full sized teeth. Backlash can also be provided by moving the gears further apart. The backlash of a gear train equals the sum of the backlash of each pair of gears, so in long trains backlash can become a problem.
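The summation rule stated above (train backlash equals the sum of the backlash of each meshing pair) is trivial to express in code; the sketch below does only that, and the three millimetre-scale values in the example are illustrative.

```python
def gear_train_backlash(pair_backlashes_mm):
    """Total backlash of a gear train.

    As stated above, the backlash of a train is the sum of the backlash
    of each pair of meshing gears along the train.
    """
    return sum(pair_backlashes_mm)

# Example: three meshes with 0.05, 0.08 and 0.10 mm of backlash give
# 0.23 mm in total.
print(gear_train_backlash([0.05, 0.08, 0.10]))
```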
For situations in which precision is important, such as instrumentation and control, backlash can be minimised through one of several techniques. For instance, the gear can be split along a plane perpendicular to the axis, one half fixed to the shaft in the usual manner, the other half placed alongside it, free to rotate about the shaft, but with springs between the two halves providing relative torque between them, so that one achieves, in effect, a single gear with expanding teeth. Another method involves tapering the teeth in the axial direction and providing for the gear to be slid in the axial direction to take up slack.
Shifting of gears
In some machines (e.g., automobiles) it is necessary to alter the gear ratio to suit the task, a process known as gear shifting or changing gear. There are several ways of shifting gears, for example:
- Manual transmission
- Automatic transmission
- Derailleur gears which are actually sprockets in combination with a roller chain
- Hub gears (also called epicyclic gearing or sun-and-planet gears)
Gear shifting has several consequences in motor vehicles. Vehicle noise emissions, for example, are higher when the vehicle is engaged in its lower gears. The design life of the lower-ratio gears is shorter, so cheaper gears may be used (e.g., spur gears for first and reverse), which tends to generate more noise due to a smaller overlap ratio and a lower mesh stiffness than the helical gears used for the higher ratios. This fact has been utilized in analyzing vehicle-generated sound since the late 1960s, and has been incorporated into the simulation of urban roadway noise and the corresponding design of urban noise barriers along roadways.
A profile is one side of a tooth in a cross section between the outside circle and the root circle. Usually a profile is the curve of intersection of a tooth surface and a plane or surface normal to the pitch surface, such as the transverse, normal, or axial plane.
The fillet curve (root fillet) is the concave portion of the tooth profile where it joins the bottom of the tooth space.
As mentioned near the beginning of the article, the attainment of a nonfluctuating velocity ratio is dependent on the profile of the teeth. Friction and wear between two gears are also dependent on the tooth profile. There are a great many tooth profiles that will give a constant velocity ratio, and in many cases, given an arbitrary tooth shape, it is possible to develop a tooth profile for the mating gear that will give a constant velocity ratio. However, two constant-velocity tooth profiles have been by far the most commonly used in modern times: the cycloid and the involute. The cycloid was more common until the late 1800s; since then the involute has largely superseded it, particularly in drive train applications. The cycloid is in some ways the more interesting and flexible shape; however, the involute has two advantages: it is easier to manufacture, and it permits the center-to-center spacing of the gears to vary over some range without ruining the constancy of the velocity ratio. Cycloidal gears only work properly if the center spacing is exactly right. Cycloidal gears are still used in mechanical clocks.
An undercut is a condition in generated gear teeth when any part of the fillet curve lies inside of a line drawn tangent to the working profile at its point of juncture with the fillet. Undercut may be deliberately introduced to facilitate finishing operations. With undercut the fillet curve intersects the working profile. Without undercut the fillet curve and the working profile have a common tangent.
Numerous nonferrous alloys, cast irons, powder-metallurgy and plastics are used in the manufacture of gears. However, steels are most commonly used because of their high strength-to-weight ratio and low cost. Plastic is commonly used where cost or weight is a concern. A properly designed plastic gear can replace steel in many cases because it has many desirable properties, including dirt tolerance, low speed meshing, the ability to "skip" quite well and the ability to be made with materials not needing additional lubrication. Manufacturers have employed plastic gears to reduce costs in consumer items including copy machines, optical storage devices, VCRs, cheap dynamos, consumer audio equipment, servo motors, and printers.
The module system
Countries which have adopted the metric system generally use the module system. As a result, the term module is usually understood to mean the pitch diameter in millimeters divided by the number of teeth. When the module is based upon inch measurements, it is known as the English module to avoid confusion with the metric module. Module is a direct dimension, whereas diametral pitch is an inverse dimension (like "threads per inch"). Thus, if the pitch diameter of a gear is 40 mm and the number of teeth 20, the module is 2, which means that there are 2 mm of pitch diameter for each tooth.
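The worked example above (40 mm pitch diameter, 20 teeth, module 2) and the inverse nature of diametral pitch can be captured in a few lines of Python. The 25.4 mm/inch conversion gives the commonly used relation diametral pitch = 25.4 / module; treat the snippet as an illustrative sketch rather than a design tool.

```python
def module_mm(pitch_diameter_mm, num_teeth):
    """Metric module: millimetres of pitch diameter per tooth."""
    return pitch_diameter_mm / num_teeth

def diametral_pitch_per_inch(module_value_mm):
    """Diametral pitch (teeth per inch of pitch diameter) for a given module.

    Module is a direct dimension while diametral pitch is an inverse one,
    so the two are related through the 25.4 mm/inch conversion.
    """
    return 25.4 / module_value_mm

# The example from the text: 40 mm pitch diameter and 20 teeth give a
# module of 2 mm, equivalent to a diametral pitch of 12.7 teeth per inch.
m = module_mm(40.0, 20)
print(m, diametral_pitch_per_inch(m))
```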
Gears are most commonly produced via hobbing, but they are also shaped, broached, and cast. Plastic gears can also be injection molded; for prototypes, 3D printing in a suitable material can be used. For metal gears the teeth are usually heat treated to make them hard and more wear resistant while leaving the core soft and tough. For large gears that are prone to warp, a quench press is used.
Gear geometry can be inspected and verified using various methods such as industrial CT scanning, coordinate-measuring machines, white light scanner or laser scanning. Particularly useful for plastic gears, industrial CT scanning can inspect internal geometry and imperfections such as porosity.
Gear model in modern physics
Modern physics adopted the gear model in different ways. In the nineteenth century, James Clerk Maxwell developed a model of electromagnetism in which magnetic field lines were rotating tubes of incompressible fluid. Maxwell used a gear wheel and called it an "idle wheel" to explain the electrical current as a rotation of particles in opposite directions to that of the rotating field lines.
More recently, quantum physics has used "quantum gears" as a model. A group of gears can serve as a model for several different systems, such as an artificially constructed nanomechanical device or a group of ring molecules.
- Howstuffworks "Transmission Basics"
- Norton 2004, p. 462
- Lewis, M. J. T. (1993). "Gearing in the Ancient World". Endeavour 17 (3): 110–115 [p. 110]. doi:10.1016/0160-9327(93)90099-O.
- "The Antikythera Mechanism Research Project: Why is it so important?", Retrieved 2011-01-10 Quote: "The Mechanism is thought to date from between 150 and 100 BC"
- ANSI/AGMA 1012-G05, "Gear Nomenclature, Definition of Terms with Symbols".
- Khurmi, R.S, Theory of Machines, S.CHAND
- Schunck, Richard, "Minimizing gearbox noise inside and outside the box.", Motion System Design.
- Doughtie and Vallance give the following information on helical gear speeds: "Pitch-line speeds of 4,000 to 7,000 fpm [20 to 36 m/s] are common with automobile and turbine gears, and speeds of 12,000 fpm [61 m/s] have been successfully used." – p.281.
- Helical gears, retrieved 15 June 2009.
- McGraw Hill Encyclopedia of Science and Technology, "Gear", p. 742.
- Canfield, Stephen (1997), "Gear Types", Dynamics of Machinery, Tennessee Tech University, Department of Mechanical Engineering, ME 362 lecture notes.
- Hilbert, David; Cohn-Vossen, Stephan (1952), Geometry and the Imagination (2nd ed.), New York: Chelsea, p. 287, ISBN 978-0-8284-1087-8.
- McGraw Hill Encyclopedia of Science and Technology, "Gear", p. 743.
- Vallance Doughtie, p. 287.
- Vallance Doughtie, pp. 280, 296.
- Doughtie and Vallance, p. 290; McGraw Hill Encyclopedia of Science and Technology, "Gear", p. 743.
- McGraw Hill Encyclopedia of Science and Technology, "Gear", p. 744.
- Kravchenko A.I., Bovda A.M. Gear with magnetic couple. Pat. of Ukraine N. 56700 – Bul. N. 2, 2011 – F16H 49/00.
- ISO/DIS 21771:2007 : "Gears – Cylindrical Involute Gears and Gear Pairs – Concepts and Geometry", International Organization for Standardization, (2007)
- Hogan, C. Michael; Latshaw, Gary L., The Relationship Between Highway Planning and Urban Noise, Proceedings of the ASCE, Urban Transportation Division Specialty Conference, American Society of Civil Engineers, Urban Transportation Division, 21–23 May 1973, Chicago, Illinois.
- Smith, Zan (2000), "Plastic gears are more reliable when engineers account for material properties and manufacturing processes during design.", Motion System Design.
- Oberg, E; Jones, F.D.; Horton, H.L.; Ryffell, H.H. (2000), Machinery's Handbook (26th ed.), Industrial Press, p. 2649, ISBN 978-0-8311-2666-7.
- Siegel, Daniel M. (1991). Innovation in Maxwell's Electromagnetic Theory: Molecular Vortices, Displacement Current, and Light. University of Chicago Press. ISBN 0521353653.
- MacKinnon, Angus (2002). "Quantum Gears: A Simple Mechanical System in the Quantum Regime". Nanotechnology 13 (5): 678. arXiv:cond-mat/0205647. Bibcode:2002Nanot..13..678M. doi:10.1088/0957-4484/13/5/328.
- Sanduk, M. I. (2007). "Does the Three Wave Hypothesis Imply Hidden Structure?". Apeiron 14 (2): 113–125. Bibcode:2007Apei...14..113S.
- American Gear Manufacturers Association; American National Standards Institute (2005), Gear Nomenclature, Definitions of Terms with Symbols (ANSI/AGMA 1012-F90 ed.), American Gear Manufacturers Association, ISBN 978-1-55589-846-5.
- McGraw-Hill (2007), McGraw-Hill Encyclopedia of Science and Technology (10th ed.), McGraw-Hill Professional, ISBN 978-0-07-144143-8.
- Norton, Robert L. (2004), Design of Machinery (3rd ed.), McGraw-Hill Professional, ISBN 978-0-07-121496-4.
- Vallance, Alex; Doughtie, Venton Levy (1964), Design of machine members (4th ed.), McGraw-Hill.
- Buckingham, Earle (1949), Analytical Mechanics of Gears, McGraw-Hill Book Co..
- Coy, John J.; Townsend, Dennis P.; Zaretsky, Erwin V. (1985), Gearing, NASA Scientific and Technical Information Branch, NASA-RP-1152; AVSCOM Technical Report 84-C-15.
- Kravchenko A.I., Bovda A.M. Gear with magnetic couple. Pat. of Ukraine N. 56700 – Bul. N. 2, 2011 – F16H 49/00.
- Sclater, Neil. (2011). "Gears: devices, drives and mechanisms." Mechanisms and Mechanical Devices Sourcebook. 5th ed. New York: McGraw Hill. pp. 131-174. ISBN 9780071704427. Drawings and designs of various gearings.
- Geararium. Museum of gears and toothed wheels A place of antique and vintage gears, sprockets, ratchets and other gear-related objects.
- Kinematic Models for Design Digital Library (KMODDL) Movies and photos of hundreds of working models at Cornell University
- Mathematical Tutorial for Gearing (Relating to Robotics)
- Animation of an Involute Rack and Pinion
- Explanation Of Various Gears & Their Applications
- Booklet of all gear types in full-color illustrations PDF of gear types and geometries
- "Gearology" – A short introductory course on gears and related components
- American Gear Manufacturers Association website
- Gear Solutions Magazine, Your Resource for Machines Services and Tooling for the Gear Industry
- The Four Basic Styles of Gears
- Gear Technology, the Journal of Gear Manufacturing
- "Wheels That Can't Slip." Popular Science, February 1945, pp. 120–125. | http://en.wikipedia.org/wiki/Gear | 13 |
Let's say you throw a ball vertically with an initial velocity of v0 at time t0. As the ball rises, it slows down due to earth's gravitational pull, eventually reaching a peak distance from the ground, which we will call h1. At that point, the direction of the ball reverses and it falls back down towards the ground. Finally, the ball strikes the ground at time t1. Let's say that at that point, the ball undergoes an inelastic collision with the ground, loses energy, and rebounds at velocity v2, climbing to a height of h2, which is exactly half of h1. On the second bounce at time t2, the ball loses more energy so that it only climbs to a height of h3, which is half of h2. The ball continues bouncing, losing half its kinetic energy every bounce, and as time progresses, the time between bounces gets smaller and smaller.
A simulation of a bouncing ball is illustrated in Figure 1. After around eleven bounces, the height of each bounce is so small you can’t see it on the graph anymore. But even then, the ball is still bouncing. The reader is left to ponder: how can we know when the ball stops bouncing? How many times does the ball bounce before it finally comes to rest? Or maybe, that isn’t the right question, since it would seem as though the ball can rebound to half its height infinitely many times. But if the ball bounces infinitely many times, doesn’t that mean that the ball should never stop bouncing? In this paper, we will explore this subject in depth, and determine when, if ever, the ball stops bouncing by using mathematics.
Developing a Mathematical Model
To begin, I will try to give an intuitive review of the physics equations of motion which apply to the ball. Since the ball can go either of two directions, up or down, we will need to apply physics equations of motion for a 1-dimensional universe. So let's quickly derive the basic physics equations of motion for the ball using integral calculus. In this problem, we will assume that the Earth applies a constant force to the ball regardless of the distance of the ball from the Earth. This assumption is accurate enough for our analysis since it is impossible for a human being to throw a ball high enough for the force of gravity between the ball and the earth to change by a significant amount. In physics, the letter g is a constant which signifies the acceleration caused by the earth on a dropped object, which is approximately 9.8 m/s2. We will also ignore the effects of air resistance on the ball. Using Leibniz differential calculus notation, the velocity and acceleration due to gravity of a falling object are defined using the following equations:
dv/dt = -g (Equation 1a)
dh/dt = v (Equation 1b)
Equation 1a states that the acceleration due to Earth's gravitational pull (-g) is the rate of change of velocity (v). Manipulating the equation, we can solve for the velocity as a function of time:
v(t) = vs - g (t - ts) (Equation 2)
Equation 1b simply states that velocity (v) is the rate of change of the height (h) of an object. Using Equation 2, we can solve for an expression of the height of the object as a function of time:
h(t) = hs + vs (t - ts) - (g/2)(t - ts)^2 (Equation 3)
Equations 2 and 3 give the velocity and position of any object with constant acceleration moving in one-dimensional space. The values vs and hs are constants produced by the integration, which represent the velocity and height of the object at time ts. Now that we have equations which relate the height and velocity of the ball, we can use them to analyze the motion of the ball as it is affected by the Earth’s gravity. We will begin by analyzing a single bounce of the ball.
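As a quick numerical check of Equations 2 and 3 (in the reconstructed form given above), the Python sketch below evaluates the velocity and height of a ball thrown upward at 10 m/s; the 10 m/s value is simply an illustrative choice.

```python
G = 9.8  # m/s^2, acceleration due to gravity, as used in the text

def velocity(t, t_s, v_s):
    """Equation 2: v(t) = v_s - g*(t - t_s) under constant gravity."""
    return v_s - G * (t - t_s)

def height(t, t_s, v_s, h_s):
    """Equation 3: h(t) = h_s + v_s*(t - t_s) - (g/2)*(t - t_s)**2."""
    return h_s + v_s * (t - t_s) - 0.5 * G * (t - t_s) ** 2

# A ball thrown upward from the ground at 10 m/s at t = 0 has zero velocity
# at t = v0/g (about 1.02 s) and peaks at about 5.1 m.
t_peak = 10.0 / G
print(velocity(t_peak, 0.0, 10.0), height(t_peak, 0.0, 10.0, 0.0))
```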
In Figure 2, we see what we will call a 'bounce cycle' spanning from time tn-1 to tn. At time tn-1, the ball undergoes an inelastic collision with the ground, causing the ball to reverse direction and lose kinetic energy. Half-way through the bounce cycle, the velocity decreases to zero, and at that point, the ball is at the maximum height hn. This height is related to the maximum height of the previous bounce cycle by the constant r, which is the rebound coefficient defined by r = hn/hn-1 (we will assume 0 ≤ r < 1). At the end of the cycle, the ball is traveling at a speed of -vn as it once again collides with the ground at time tn, at which point the next bounce cycle begins. The ball will reach its maximum height hn at time tmax,n = (tn-1 + tn)/2. We can find out at what time the ball reaches its maximum (tmax,n) by solving for the time at which the velocity of the ball is zero using Equation 2 with ts = tn-1 and vs = vn:
tmax,n = tn-1 + vn/g, so that tn = tn-1 + 2 vn/g (Equation 4)
Next, we can find the ball's maximum height at time tmax,n using Equations 3 and 4 with ts = tn-1, hs = 0, and vs = vn:
hn = vn^2/(2g) (Equation 5)
We now have expressions for the duration (Equation 4) and maximum height (Equation 5) of each bounce cycle; however, we prefer to have expressions which state them in terms of the initial velocity v0. To do so, we first need to solve for the coefficient of restitution. The coefficient of restitution is the ratio of the magnitudes of the ball's velocities immediately before and after impact with the ground, or in general terms, vn/vn-1. This is not to be confused with r, which we defined earlier to be the rebound coefficient hn/hn-1; the two are related, however, and we can use r to solve for the coefficient of restitution. To do so, we will set hn = r hn-1 using Equation 5, then solve for vn/vn-1:
vn/vn-1 = √r (Equation 6)
The coefficient of restitution for our ball is therefore the square root of the rebound coefficient r. Using Equation 6, we can find the relation of any rebound velocity vn to the initial velocity v0:
vn = v0 (√r)^(n-1) (Equation 7)
Using this result, we can state the rebound velocity, maximum height, and bounce cycle duration for any one of the bounces in terms of the initial velocity of the ball, v0. By plugging Equation 7 into Equations 4 and 5, we find:
tn - tn-1 = (2 v0/g) (√r)^(n-1) (Equation 8)
hn = (v0^2/(2g)) r^(n-1) (Equation 9)
Now that we have an expression for the duration of the bounce cycle, we can begin to add the times up one by one, starting at time t0. We can calculate the total time until the nth bounce by adding up the first n bounce cycle times:
tn = t0 + Σ(k=1 to n) (tk - tk-1) (Equation 10)
Combining Equations 8 and 10, we come up with the following geometric series:
tn = t0 + (2 v0/g) Σ(k=1 to n) (√r)^(k-1) (Equation 11)
As long as 0 ≤ r < 1, the geometric series in Equation 11 converges to a specific time tfinal, at which time the ball is no longer bouncing:
tfinal = t0 + (2 v0/g) · 1/(1 - √r) (Equation 12)
The implications of these results are quite profound. Through mathematics, we have shown that after the ball bounces an infinite number of times, it will finally come to a stop at time tfinal. Another way of solving for tfinal is by performing a 2nd‑degree polynomial fit using any three peak heights and times as data points. Using the first three peaks, we build up the following matrix.
Converting Equation 13 to reduced row echelon form results in the following 2nd-degree polynomial fit:
This polynomial factors quite nicely into the following expression:
h(t) = (g/2) ((1 - √r)/(1 + √r))^2 (t - tfinal)^2 (Equation 15)
As a result, we end up with an equation for a scaled and delayed parabola. By plugging in various values of tn from Equation 11 into Equation 15, we verify that the results match the expected values of hn found in Equation 9. Also, it can be easily seen that the value of the function is zero when t is the same value of tfinal we found in Equation 12. Figure 3 shows the results of inscribing h(t) onto the waveform of the bouncing ball, showing that it passes directly through all maximum points for all the bounce cycles.
Using both Equations 12 and 15, we can know precisely the time at which a ball with rebound coefficient r and initial upward velocity v0 will come to a complete stop. For example, if we were to throw a ball with a rebound coefficient r = 1/2 upward with initial velocity v0 = 10 m/s, the ball will finally come to rest in 6.97 seconds.
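The sketch below reproduces this worked example in Python, using the indexing reconstructed above (the first cycle launched at v0, each later rebound speed scaled by √r). It evaluates both the closed-form stopping time of Equation 12 and the partial sums of the bounce-cycle durations; for v0 = 10 m/s and r = 1/2 it returns the 6.97 s quoted in the text.

```python
import math

G = 9.8  # m/s^2

def t_final(v0, r, t0=0.0):
    """Closed-form stopping time (Equation 12): t0 + (2*v0/g) / (1 - sqrt(r))."""
    return t0 + (2.0 * v0 / G) / (1.0 - math.sqrt(r))

def t_after_n_bounces(v0, r, n, t0=0.0):
    """Partial sum of the series: the time of the nth ground impact,
    with each bounce cycle lasting 2*v0*sqrt(r)**(k-1)/g."""
    return t0 + sum(2.0 * v0 * math.sqrt(r) ** (k - 1) / G for k in range(1, n + 1))

print(t_final(10.0, 0.5))                # about 6.97 s
print(t_after_n_bounces(10.0, 0.5, 20))  # within about 0.01 s of t_final
```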
These results provide a disproof of Zeno's paradox, which states that it shouldn't be possible for a runner to complete a race, because he would have to travel half-way to the finish line infinitely many times. Using the example of the bouncing ball, we have shown that it is mathematically possible for someone to watch a ball bounce to half its height an infinite number of times and live to tell about it.
Ice sheets and glaciers form the largest component of perennial ice on this planet. Over 75% of the world's fresh water is presently locked up in these frozen reservoirs. Their extents, however, undergo considerable fluctuations. The expansion and reduction of ice sheets and glaciers from glacial periods to interglacial periods has been one of the most dramatic climatic signals in Earth history. Even on short time scales of a few years, changes in ice volume have modulated and continue to modulate, sea level. This has never been more important than it is today, given the increase in economic development of coastal areas. Recent evidence of the rapidity with which sea level has changed due to rapid changes in ice-sheet discharge has heightened our awareness of the dynamic nature of this icy component of our climate. While it is certain that small glaciers are experiencing broad retreat and contributing to sea level rise by increased melting, the contributions of the two largest ice sheets in Greenland and Antarctica, which contain over 98% of the world's ice, remain unknown.
Ice sheets, by their very presence, affect global climate. Greenland and Antarctica both are dominated by large cold ice sheets that rise to high elevations and are among the highest albedo objects on Earth. Despite these similar characteristics, their effects on the climate have important differences. Antarctica has a nearly circular symmetry which encourages the development of strong zonal circulations of the atmosphere and ocean around it. On the other hand, Greenland sits as an obstacle to zonal circulations, deflecting the flow of the jet stream and preventing the establishment of strong zonal circulation.
Understanding these roles of ice sheets and glaciers in climate has been the focus of a small group of scientists for a relatively short period of time. The size of the community and the relative "newness" of glaciology are due, in part, to the remoteness and harsh climate of glacierized regions. These facts are juxtaposed with the climatic importance of ice sheets. Satellite remote sensing has revolutionized this branch of climate science by empowering glaciologists with instruments capable of collecting spatially extensive, yet detailed, data sets of critical parameters. SAR, with its ability to collect ice-sheet data through cloud and throughout the extended polar night has already demonstrated unique capabilities that make it an indispensable new tool for the glaciologist. This is only the beginning: more recent techniques, such as interferometry, have expanded the uses of SAR in glacier and ice-sheet research even farther, and promise glaciologists increased capabilities to collect critical data.
There are a number of key questions being addressed by glaciologists to which SAR data can make substantial contributions.
Are glaciers and ice sheets useful indicators of current climate change?
Glaciers and ice sheets most commonly occur at high elevations and in remote areas difficult to access. Such areas are typically not included in climate monitoring networks. If ways can be found to extract useful climatic parameters from observations of the ice, then ice sheets and glaciers provide valuable extensions of more traditional climate monitoring networks.
The interplay of annual cycles of snow accumulation and melting generates a succession of snow facies that serve as valuable indicators of the climatic regime characteristic of any point on an ice sheet or glacier (Williams et al., 1991). Figure 5-1 illustrates the essential subsurface stratigraphic characteristics. Beginning with the highest elevations, areas that experience no melt comprise the dry-snow facies. At lower elevations, melt-water percolates into the underlying, sub-freezing snow pack as a network of vertical ice pipes and horizontal ice lenses, forming the percolation facies. The amount of melt tends to increase at lower elevations. Eventually, melt is extensive enough that the latent heat released by the internal refreezing process warms the snow pack to the melting temperature throughout, and a fraction of additional melt water is retained within the snow pack in liquid form (the remainder leaves the ice mass as runoff). This situation characterizes the soaked-snow facies. At the lowest elevations is the bare-ice facies, formed when all surface snow is removed by ablation. The upper edge of the bare ice facies marks the snow-line at the end of the mass balance year.
As climatic conditions change, so will the positions of these boundaries. Conveniently, the low surface slopes of glaciers (typically 10^-2) and even lower surface slopes of ice sheets (typically 10^-3) transform subtle shifts of only a few meters in the elevations of these boundaries to large horizontal shifts on the order of a few kilometers. In addition to indicating the altitudinal extent and intensity of melt, these facies differ in spectral albedo. Thus, as facies extents change, so does the net radiation balance of the ice sheets.
Glacier length is the glaciological parameter with the longest history of observation. While glacier-length changes clearly illustrate that climate has changed, glaciers (and especially ice sheets) are one of the most sluggish climate components. Thus, the inverse problem of detecting or inferring climate change from measurements of ice extent is very difficult. This general view does not hold, however, on smaller, steeper mountain glaciers whose response times are of the order of years to decades.
How do the ice sheets affect atmospheric and oceanic circulations?
The general effects of ice sheets on the atmosphere and oceans were discussed above. Clearly, changes in either the albedo of the surface (through a change in snow facies extents), or changes in the geometric shapes of the ice sheets will alter their climatic effect and perturb the global climate system. Usually these changes are discussed in the extreme cases of glacial periods, when surface temperatures in central North America or Europe fell an average of 10 to 12 degrees Celsius, but smaller changes in ice sheets will also perturb the climate (Dawson, 1992). The relationship is probably highly non-linear; the climate record from Greenland ice cores shows that dramatic climate changes can occur in much less than a decade (Alley et al., 1993).
Remnants of vast armadas of icebergs have been detected in the western North Atlantic, well beyond the reach of ice sheets (Heinrich, 1988). It has been suggested that the melting of these icebergs would introduce a sufficient quantity of fresh water into the North Atlantic to completely transform the global pattern of oceanic circulation, altering the climate in every corner of the world (Broecker and Denton, 1989).
What is the current mass balance of ice sheets and glaciers?
Ice-sheet or glacier mass balance is defined as the annual difference between mass gain and mass loss. It is important globally because it has a dominant effect on sea-level change. During the last glacial maximum, global sea level was approximately 125 meters lower, the water being locked up primarily in the now-extinct Laurentide and Fennoscandian ice sheets of the Northern Hemisphere and an expanded Antarctic ice sheet (Shackleton, 1987; Denton and Hughes, 1981). Present ice volume, which is contained mostly in the Antarctic and Greenland ice sheets, is sufficient to raise sea level approximately 80 meters (Bader, 1961, Drewry et al., 1982).
By comparison, the annual turnover in ice-sheet mass is modest. Annual snow accumulation over the Antarctic and Greenland ice sheets is equivalent to an ocean layer only 6 mm thick (Bentley and Giovenetto, 1991; Ohmura and Reeh, 1991). A nearly equivalent amount of ice is returned to the oceans through melting and iceberg discharge. Estimates of present mass balance are poorly constrained: the Greenland ice sheet appears relatively stable, the Antarctic ice sheet seems to be growing slowly and the remaining small glaciers and ice caps are wasting away rapidly (Meier, 1993). Oddly enough, this range of behaviors may be due to the same phenomenon: global warming. Warmer summer temperatures enhance ice melting but also increase the frequency of precipitation events, which could result in growth of ice sheets and glaciers. In Antarctica almost no summer melt occurs except on the Antarctic Peninsula so the increased snowfall would enlarge the ice sheet. Mountain glaciers experience a net reduction in mass at low elevations due to the warmer summer temperatures, but, in some cases, may be growing at higher elevations. In Greenland, the two opposing effects appear to be roughly balanced.
These conclusions are based on sparse data. A notable weakness in one or more of these present estimates of mass balance is that they fail to add up to the present rate of sea-level change, even when liberal error estimates and other contributing effects are included (Meier, 1993). Confidence in predictions of future sea-level change must be tempered until we better understand the current mass balance of the existing ice.
What are the controls on ice flow and are there inherent instabilities in ice flow that could lead to dramatic changes in the dynamics of ice sheets or glaciers?
The subject of ice dynamics ranges from deformation and recrystallization of individual crystals, to surge-type glaciers, to the flow of the Antarctic ice sheet. Most of the controlling processes lie hidden within or underneath the ice, but the effects are clearly evidenced by deformation and flow of the ice. This topic is relevant to global climate because of the effect altered ice flow has on ice volume and, therefore, on sea level.
The ice-sheet environment determines the magnitudes of both snow accumulation and melting, but ice dynamics determines the rate at which ice is delivered to the oceans or to ablation areas. Thus, ice dynamics is a major component of the mass balance. This becomes obvious when examining the record of sea level since the last glacial maximum 20,000 years ago (Figure 5-2). During this period, sea level rose episodically in a series of brief jumps rather than smoothly. Geologic evidence confirms that these jumps correspond to the partial or complete collapses of marine-based ice sheets. Such behavior apparently caused sea level to rise at rates of at least 35 mm/yr, more than twenty times the present rate of rise. Although no such collapses have occurred for the last 4000 years, the West Antarctic ice sheet is the last marine-based ice sheet on the planet, and it contains enough ice to raise sea level more than five meters (Drewry et al., 1982). There is a pressing need to determine if, when, and how rapidly this ice sheet may collapse.
How much of the heat absorbed by the surface snow pack is retained by the ice mass and how much escapes as meltwater?
Warm temperatures melt surface snow, and the resulting meltwater most often drains into the underlying snow pack. Residual colder winter temperatures in this snowpack conduct heat away from the meltwater. If the residence time of the meltwater is sufficiently long, or if the snow pack is sufficiently cold, the meltwater refreezes. In this case, melting recorded at the surface does not represent mass that leaves the glacier; rather, this mass is captured at depth within the snow pack. On the other hand, if a system of snow-pack drainage is well developed with surface streams and moulins (vertical cavities) that quickly transport water through the glacier or ice sheet, not only can the initial quantity of meltwater leave the glacier, but additional melting can occur at the interface between rushing water and the surrounding ice.
The importance of these two extreme cases is apparent when viewed from the perspective of mass balance (Pfeffer et al., 1991). In one case none of the surface meltwater actually leaves the glacier, while in the other case the mass lost from the glacier is actually more than the amount of meltwater produced at the surface. The relevance of this effect to climate change lies in the fact that different snow facies respond differently to changing temperatures. For example, a formerly dry-snow area that begins to experience melt will retain most of the meltwater for many years as the system of internal drainage (comprised of vertical ice pipes and horizontal ice lenses) develops. However, a percolation facies that warms will be less effective at retaining increased amounts of meltwater, possibly delaying its eventual transformation to a soaked-snow facies.
SAR provides the obvious benefits of a weather-independent, day-night imaging system. These advantages are particularly crucial in the ice-sheet and glacier environments where persistent clouds continue to hamper data acquisitions by visible imagers and where the polar night imposes a prolonged period of darkness. In addition, unlike visible imagers, radar penetrates the snow surface, which provides glaciologists with a tool capable of sensing internal properties of the ice sheet or glacier.
Before ERS-1 was launched, limited SAR data of ice sheets only hinted at the potential glaciological uses of SAR. ERS-1 data have allowed full demonstration of many of these uses and have expanded the glaciological applications of SAR to even broader horizons.
The list that follows identifies the key parameters of ice sheets and glaciers that can be measured with SAR, and describes how glaciologists will be able to use this information to answer the key questions identified above.
Radar penetrates the snow surface, so the measured backscatter is a combination of surface and volume scattering. This characteristic enables SAR to discriminate clearly between all the snow facies described above using backscatter amplitude data alone. This discrimination is most effective during middle to late winter when surface water is absent. Dry snow appears dark in SAR because both surface scattering and volume scattering are low. At lower elevations, in the percolation facies, backscatter rises dramatically due to volume scattering from the network of subsurface ice bodies. The soaked- snow facies is composed of larger snow grains than the dry-snow facies because both melt-water and warmer temperatures serve to accelerate grain growth. Thus, the soaked-snow facies is radar dark when water is present, but in winter the backscatter increases to a level intermediate to the radar-dark dry-snow facies and radar-bright percolation facies. Finally, the bare-ice facies is radar dark due to strong specular surface scattering. Figure 5-3 clearly shows these different snow facies in a SAR-image mosaic of the Greenland ice sheet. The northeast portion of the dry-snow facies is slightly brighter than the rest of this facies. This is believed to be due to a decrease in the accumulation rate in this region (Jezek, 1993; Ohmura and Reeh, 1991). Figure 5-4 shows that the facies correlate closely with surface elevation. The ability to discriminate all the snow facies, which is impossible with visible imagery, establishes the unique use of SAR in a monitoring program of ice sheets for indications of climate change over their broad expanses. SIR-C/X-SAR data of the Patagonian ice fields indicate that facies discrimination is also possible using multifrequency data (Forster and Isacks, 1994).
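The qualitative ordering just described (bright percolation facies, intermediate refrozen soaked-snow facies, dark dry-snow and bare-ice facies in winter imagery) lends itself to a simple threshold classifier. The Python sketch below is purely illustrative: the function name and the two backscatter thresholds are assumptions of this example, not calibrated values, and a real classifier would be tuned per sensor and season and would need ancillary data such as elevation to separate the two dark classes.

```python
def classify_snow_facies(sigma0_db, dark_threshold_db=-14.0, bright_threshold_db=-3.0):
    """Toy winter-backscatter classifier for ice-sheet snow facies.

    Follows the qualitative ordering described in the text: the percolation
    facies is radar-bright, the soaked-snow facies (frozen in winter) is
    intermediate, and the dry-snow and bare-ice facies are radar-dark.
    The two dB thresholds are illustrative placeholders only.
    """
    if sigma0_db >= bright_threshold_db:
        return "percolation facies"
    if sigma0_db >= dark_threshold_db:
        return "soaked-snow facies (refrozen)"
    return "dry-snow or bare-ice facies"

print(classify_snow_facies(-1.5))   # bright backscatter -> percolation
print(classify_snow_facies(-8.0))   # intermediate -> soaked snow
print(classify_snow_facies(-18.0))  # dark -> dry snow or bare ice
```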
Water is strongly radar-absorptive. This permits the use of SAR for monitoring of the summer melt season on ice sheets and glaciers. Time-series SAR data of Greenland have demonstrated that the gradual refreezing of free water at depth can be detected and have shown that this free-water component exists for a surprisingly long time after the snow surface has cooled below freezing (Figure 5-5).
Radar at lower frequencies penetrates more deeply into snow. Thus, multi-frequency data permit a depth-variable view of the snowpack. This has been most clearly demonstrated by SIR-C/X-SAR data of the Patagonian ice fields (Figure 5-6). While a capability does not yet exist to quantify either the amount of melting or refreezing, the multi-frequency SAR's sensitivity to conditions at different depths is already useful in monitoring the thermal and hydrologic evolution of ice sheets as climate changes. Future development of this capability will increase SAR's utility in this area even further.
Mass loss by iceberg calving exceeds mass loss by melting for the Antarctic ice sheet and is approximately equal to the amount of surface melting for the Greenland ice sheet (Bader, 1961). Thus, this is a critical term in determining ice-sheet mass balance (although it does not directly affect sea level). Icebergs are visible in cloud-free visible imagery, but the requirements of a monitoring program include routine and dependable acquisition of high-resolution imagery, even during the extended polar night. These requirements match the capabilities of SAR and make it the preferred instrument for this activity. Figure 5-7 shows that icebergs are easily identified in SAR imagery.
Most surface morphological features are seen by either SAR or a visible imaging system. SAR holds the dual advantages of its all-weather, day-night capability while visible imagers avoid image degradation caused by speckle. Specific features that can be identified include streams and lakes, flowbands (linear forms stretched longitudinally in the direction of motion), ice edge, moraines and crevasses (Figure 5-8). The ability to identify such features opens the door to monitoring their evolution.
Lakes can be an especially good indicator of surface hydrologic activity. They tend to form in the same surface depressions each summer (locations fixed in space by the flow of ice over bed undulations), and their size and numbers are indications of the intensity of melt. Thus, they serve as ancillary data to the climate monitoring activity already described.
Ice margins are important because they change in response to changes in the flow of the ice. The radar contrast at many ice margins is less distinct than the contrast in visible imagery. However, there are many situations where this generality does not hold. These include areas where persistent cloud cover impedes collection of visible data, where fresh snow has covered the visible contrast between ice and adjacent terrain, and where ice near the edge is covered by rock or other debris. In the last case, differences in the polarization signatures of the moraine-covered ice and ice-free debris may permit identification of the ice margin.
Crevasses present serious hazards to field personnel but are one of the most useful natural features for glaciological study. Their orientations provide information on the state of stress within the ice. A more quantitative use is the measurement of ice motion accomplished by following unique crevasses or crevasse patterns over time (see next section). Additionally, SAR has demonstrated an ability to detect crevasse fields where visible imagery cannot (Hartl et al., 1994; Vaughan et al., 1994). This is due to either the detection of micro-cracks, below the resolution of the visible imagery, or from detection of buried crevasses by penetrating through the surface layers of the snow.
Ice velocity is one of the most fundamental parameters in the study of ice dynamics. The proven ability to obtain this information from time-sequential imagery using a cross-correlation technique significantly expanded the amount and density of such data available to glaciologists (Bindschadler and Scambos, 1991). This technique tracks patches of the surface containing crevasses and other surface features from one image to another by searching for the matching pattern of surface features in a second image. Displacements can be measured to sub-pixel accuracy, but typical displacements should be at least a few pixels to minimize the impact of systematic errors in coregistration of images (typically 1 to 2 pixels) (Scambos et al., 1992). Although developed initially for visible imagery, this technique also has been shown to work with SAR imagery (see Figure 5-9) (Fahnestock et al., 1993). A requirement of this technique is that the same sets of surface features be resolvable in both images. This requirement is not met over most of the ice sheets, but is usually met in the most active flow regions.
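The core of this feature-tracking approach can be sketched in a few lines of code. The sketch below is illustrative only (it is not the software used in the cited studies) and assumes two coregistered intensity images held in NumPy arrays; it finds the offset that maximizes the normalized cross-correlation of a reference patch.

```python
import numpy as np

def track_patch(image1, image2, row, col, half=32, search=16):
    """Estimate the (dy, dx) displacement, in pixels, of a surface-feature patch
    centered at (row, col) in image1 by searching image2 for the best match."""
    ref = image1[row - half:row + half, col - half:col + half].astype(float)
    ref = (ref - ref.mean()) / (ref.std() + 1e-12)
    best_offset, best_score = (0, 0), -np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            tst = image2[row + dy - half:row + dy + half,
                         col + dx - half:col + dx + half].astype(float)
            tst = (tst - tst.mean()) / (tst.std() + 1e-12)
            score = (ref * tst).mean()        # normalized cross-correlation
            if score > best_score:
                best_score, best_offset = score, (dy, dx)
    return best_offset, best_score
```

In practice the correlation peak is refined (for example, by fitting a surface to the correlation values around the maximum) to reach the sub-pixel accuracy mentioned above, and the pixel offsets are converted to velocities using the image pixel spacing and time separation.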
The application of interferometric techniques using SAR data holds the potential of obtaining ice velocity data from any ice sheet region. The technique utilizes the phase measurement of the backscattered radar pulse from every ground pixel to make a sub-wavelength scale measurement of displacement (usually a few millimeters) at every pixel (Goldstein et al., 1993). In the ideal case, the two images would be collected from the same point in space (zero-baseline). In practice, however, the baseline between observation points is finite so the interferogram contains a combination of motion and topographic information. Images must be coregistered to sub-pixel accuracy and the backscatter signatures from the same pixel in each image must be correlated for the phase difference to have a physically meaningful interpretation. Either a different viewing geometry or metamorphic changes in the surface or subsurface of a target pixel can destroy the phase correlation for a particular pixel.
Successful ice-sheet interferograms have typically used time separations of only a few days (Goldstein et al., 1993; Rignot et al., in press; Joughin et al., in press). Figure 5-10 shows an example of an interferogram from the Bagley Icefield in Alaska. Interferometrically measured displacements are in the direction of view, which for satellite SARs is in the range of 20 to 40 degrees from vertical. If the general direction of flow is known (along the regional surface gradient), one velocity component is sufficient to estimate the total velocity. Greater precision in velocity requires that a second interferogram be obtained from a different viewing angle. This can usually be accomplished by acquiring image pairs from both ascending and descending orbits.
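As a rough illustration of how phase becomes velocity, consider the sketch below. It uses a simplified, assumed geometry rather than a recipe from the cited studies: it converts an unwrapped, motion-only phase difference from a repeat-pass pair to a line-of-sight displacement and then to a horizontal speed, assuming the flow is horizontal and lies along the radar look direction.

```python
import numpy as np

C_BAND_WAVELENGTH = 0.0566       # meters, approximately the ERS-1 wavelength

def los_displacement(unwrapped_phase, wavelength=C_BAND_WAVELENGTH):
    # In repeat-pass interferometry one 2*pi cycle of phase corresponds to
    # half a wavelength of motion along the radar line of sight.
    return unwrapped_phase * wavelength / (4.0 * np.pi)

def horizontal_speed(unwrapped_phase, look_angle_deg, days_between_images):
    """Horizontal speed in meters per day for flow along the look direction."""
    d_los = los_displacement(unwrapped_phase)
    return d_los / np.sin(np.radians(look_angle_deg)) / days_between_images

# Two fringes (4*pi radians) accumulated over a 3-day repeat at a 23-degree
# look angle correspond to roughly 0.05 m/day of horizontal motion.
print(horizontal_speed(4.0 * np.pi, 23.0, 3.0))
```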
Because interferograms contain no absolute displacement information, only velocity differences are obtained. Nevertheless, velocity gradient data (strain rates) are highly useful. To obtain absolute velocities, a theoretical minimum of two control values are needed to provide a datum and to correct for along-track variations in baseline. In practice, more control is desirable and may be necessary.
As mentioned above, interferometry with a non-zero baseline includes both topographic and motion information because the measured range difference (in units of phase) is the result of both surface movement and topography-induced parallax. This mixture of topographic and motion information requires that the topography be known with sufficient accuracy to remove its effects from the interferometrically-determined phase differences, in order to extract ice displacements. Fortunately, by using a third SAR image, an extremely clean separation of the topographic and velocity signals is possible if the velocity and topography are constant over the interval spanned by the three images (Kwok and Fahnestock, in press). Figure 5-11 shows an example of this separation.
As with the interferometric velocity output, the extracted topographic output is relative elevation, rather than absolute elevation. In principle, a single absolute elevation is sufficient to provide the datum for an entire interferogram but, again, insufficient knowledge of the precise baselines for each interferogram requires that more elevation control points be used. The range of elevations spanned by a two-pi cycle of phase difference depends upon the baseline. For topographic relief of a particular scale, there is an optimal range of baselines between the undesirable extremes of too short a baseline, where insufficient parallax is achieved to resolve elevation variations, and too long a baseline, where phases decorrelate. The accuracy of the derived elevations also is dependent upon the baselines of the interferograms. In one study area, shown in Figure 5-12, relative elevation accuracies of less than 2 meters were obtained with ERS-1 data having an effective baseline of 520 meters (Joughin et al., 1994). Eventually, the flow of ice is expected to be well enough understood that it will be possible to invert topographic and surface motion data to obtain basal topography and basal shear stress, which are additional parameters needed for ice-dynamics studies.
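The baseline trade-off can be made concrete with the standard repeat-pass relation for the elevation change that produces one full fringe (the "height of ambiguity"). The relation and the nominal ERS-1 geometry used below are textbook assumptions rather than values taken from this chapter, apart from the 520-meter baseline cited above.

```python
import numpy as np

def height_of_ambiguity(wavelength, slant_range, look_angle_deg, perp_baseline):
    """Elevation difference giving one 2*pi cycle of topographic phase in a
    repeat-pass interferogram (flat-Earth approximation)."""
    return (wavelength * slant_range * np.sin(np.radians(look_angle_deg))
            / (2.0 * perp_baseline))

# Assumed ERS-1 geometry: 5.66 cm wavelength, ~850 km slant range, 23 degree
# look angle, and the 520 m effective baseline mentioned in the text.
print(height_of_ambiguity(0.0566, 850e3, 23.0, 520.0))   # roughly 18 m per fringe
```

Resolving phase to a small fraction of a fringe is what makes meter-scale relative accuracy plausible at this baseline; a much shorter baseline spreads each fringe over far more relief, while a much longer one risks geometric decorrelation.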
The ability to make the measurements described above with SAR data was demonstrated only after the collection of a substantial amount of ice-sheet data by ERS-1. The list is probably complete for the C-band, single-polarization SAR carried by ERS-1. Before these data were available to demonstrate these techniques, limited Seasat and airborne data could only suggest the potential that awaited glaciologists. In the case of interferometry, no mention of this now-proven technique was ever made in the pre-ERS-1 ice-sheet community. By extension, it probably is impossible to predict accurately the future uses of a SAR enhanced with additional polarizations and frequencies, because multiple-frequency and multiple-polarization data sets of ice sheets from which to extrapolate remain very sparse.
Therefore, additional necessary work is posed in the form of questions that still need to be answered. In the process of answering these questions, new potential uses of SAR are likely to be discovered.
Is there a "best" SAR frequency for ice sheets and glaciers?
It is known that single-band data (C-band) permit almost all the analyses summarized above because a wealth of such data has been collected, from which these techniques have been developed. Seasat provided a limited amount of L-band data that confirmed it also can be used for snow facies, icebergs, and surface morphology research (Bindschadler et al., 1987). Limited airborne P-band and X-band data have hinted that these frequencies may also be used (Jezek et al., 1993). Recently, Space-Shuttle-based multi-frequency (L- and C-band), multi-polarimetric data have been added to the data pool.
What has been missing is a methodical comparison of data of the same ice-sheet areas using different frequencies and including complex data so interferometric products can be examined. L-band and P-band penetrate deeper than C-band, but the quantitative advantages and disadvantages of sensing deeper, older snow have not been established. A more diffuse volume-scattering component might provide a more temporally stable signature of the various snow facies. Airborne data have highlighted major differences in backscatter signatures when the frequency shifts from P-band to C-band (with the intermediate L-band being more like P-band) (Jezek et al., 1993). Similar backscatter variations have been seen in SIR-C/X-SAR data (cf. Figure 5-5) (Forster and Isacks, 1994). These differences could lead to techniques to derive a number of important variables including: grain-size versus depth distributions (critical for the quantification of accumulation rates from passive microwave data); meltwater production; and the amount of free water retained by the snow pack (by following the depth of the winter freezing wave as it penetrates the snow pack).
Interferometric applications might be aided by lower frequencies that permit longer baselines and have a relatively greater and more temporally stable volume-scattering component, making them less sensitive to meteorological events on the surface that decorrelate successive images. However, the increased contribution of the deeper volume scattering component could lead to an enhanced geometric decorrelation sensitivity or lower signal-to-noise, thus restricting available interferometric image pairs.
P-band radar might even penetrate the full depth of some glaciers. This would make it possible to map subglacial topography using interferometric techniques. Obtaining both surface and bed topography leads directly to ice volume, one of the critical climatic parameters discussed at the outset of this chapter. If successful, this would substantially improve all existing ice-volume estimates because existing data have been collected along linear ground tracks or flight lines, so the resulting ice-thickness maps have been produced by spatial interpolation.
Can useful scientific information be obtained by studying polarization effects?
Limited polarization data have been used to determine the dielectric constant and to extract the small-scale surface roughness of portions of the Greenland ice sheet (Jezek et al., 1993). The dielectric constant can be used to derive, albeit indirectly, surface albedo. Albedo has obvious importance in the energy balance of the ice sheet. Surface roughness is also a necessary consideration in exchange of energy because it affects the near-surface wind profile. Field measurements of surface roughness suffer from sampling sparseness but would be a necessary component of surface-truth experiments designed to develop the ability to remotely determine surface roughness. Given that different radar frequencies are sensitive to surface roughness on different scales, a wide spectrum of surface roughness may be obtainable.
In winter, the percolation zone displays anomalous backscatter polarization behavior. Similar behavior has been observed elsewhere in the solar system, ranging from icy Jovian moons to the Martian polar cap to, perhaps, a polar cap on Mercury. The common denominator seems to be ice, though not necessarily water ice. Anomalously polarized backscatter is, however, otherwise very rare in the solar system. Theoretical explanations of this are speculative, but all bear on the depth distribution of volume scattering. In the case of the percolation zone in Greenland, this depth distribution is linked to the redistributions of melt water and heat. This is of considerable interest because melting is one of the major mass-loss mechanisms of the Greenland ice sheet and could affect the salinity balance of the North Atlantic. Understanding and using anomalously polarized backscatter as a remote-sensing tool could lead to a unique probe of physical properties of both terrestrial and extraterrestrial icy regions.
What are the physical processes that cause target decorrelation and what are their relative effects?
It has been hypothesized that windstorms, snowfall, surface- and depth-hoar formation, melting, and refreezing occurring between the epochs of two SAR images can alter the backscatter signature of the target sufficiently to decorrelate the phase data and prevent the generation of interference fringes. Few studies have been done to actually quantify the effects of any of these events on the correlation of successive images. Jezek and Rignot (1994) hypothesized that spatial patterns of decorrelation in one ERS-1 image pair of Greenland may actually be due to variations in the distribution of snow deposited between the images. At C-band, 10 cm of fresh snow adds one additional wavelength (or fringe) to the round-trip radar distance.
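A back-of-the-envelope check of that last figure, under simple assumptions (the snow refractive index below is assumed, not taken from the text): the extra two-way electrical path through a dry snow layer of thickness d and refractive index n is approximately 2d(n - 1), and one fringe corresponds to one extra wavelength of path.

```python
C_BAND_WAVELENGTH_CM = 5.66
ASSUMED_SNOW_INDEX = 1.28        # refractive index of moderately dense fresh snow

def extra_two_way_path_cm(snow_depth_cm, n=ASSUMED_SNOW_INDEX):
    # Additional round-trip electrical path relative to the same depth of air.
    return 2.0 * snow_depth_cm * (n - 1.0)

extra = extra_two_way_path_cm(10.0)
print(extra)                              # about 5.6 cm of extra path
print(extra / C_BAND_WAVELENGTH_CM)       # about one C-band fringe
```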
More studies are necessary. They will not only lead to guidelines for improving the likelihood of collecting correlated image pairs from which interferometric products can be produced, but also produce meteorologically meaningful data over a spatially broad scale, in contrast with the local data provided by sparsely distributed ice-sheet meteorological stations. Independent views afforded by interferometric SAR could prove valuable in interpreting the data sets provided by passive microwave sensors or radar altimeters, both of which also derive some of their signal from the sub-surface snow pack.
Most of the SAR technique-development investigations for ice-sheet and glacier research require data at frequencies other than C-band. Now that the C-band data set is extensive enough over the ice-covered areas, more limited coverage at other frequencies can be placed into a meaningful context. This research is a necessary prerequisite to the development of a satellite SAR mission at any frequency other than the proven C-band. The limited fidelity and sparse coverage of JERS-1 L-band data of ice sheets have failed to provide a data set capable of verifying the utility of L-band data for glaciological studies.
A critical component of the collection of data at frequencies other than C-band is the collection of interferometric pairs spaced in time so that motion information as well as topographic information are included in the phase differences. A particularly useful data set would be the collection of interferometric triplets of a moving ice sheet in at least C-band and L-band.
Ground truth is a mandatory part of the development of any new remote sensing application. This is certainly true with SAR data of ice sheets, where volume scattering is often the dominant backscatter component. Field measurements are the only certain means of documenting specific physical properties of the snow pack and the temporal changes in these properties between remote data collections. To the extent possible, these measurements should be contemporaneous with airborne or satellite SAR acquisitions. Scattering measurements made with ground-based radar systems provide a data set for comparison with the remote measurements. Standard techniques allow surface parties to collect depth profiles of density, water content, grain size, conductivity, permittivity, temperature, and icy inclusions. These parties also can record surface meteorological conditions and help optimize the collection of SAR data from remote platforms.
ERS-1 has provided an invaluable SAR data set which has been used to establish the scientific utility of SAR for ice-sheet and glacier research. A few interferometric pairs await analysis, but for the purposes of developing or demonstrating new techniques, the use of this data set is virtually complete. No plans exist to place ERS-1 into a short repeat cycle orbit so that interferometric opportunities from this single satellite have probably ceased. The continuation of the ERS series with ERS-2 and ENVISAT will allow monitoring of the Greenland ice sheet snow facies to proceed.
JERS-1 carries an L-band SAR. It promised the same capabilities as ERS-1, but damage caused by a faulty antenna deployment has compromised the quality of the data. Thus, it has not provided an adequate opportunity to assess the merits of L-band data of ice sheets and glaciers relative to C-band.
The AIRSAR facility (C-, L- and P-band antennas on a DC-8 aircraft) provides the best existing means to collect the multi-frequency and multi-polarization data sets needed to assess the relative merits of these different windows of the electromagnetic spectrum. Multiple antennas have the obvious advantage of collecting simultaneous and coincident data. From knowledge of the positions of snow facies and surface features gleaned from the ERS-1 data set, aircraft missions can be planned in a manner that optimizes the utility of the collected data.
A critical augmentation to AIRSAR is the ability to collect interferometric data by navigating the aircraft with GPS real-time corrections. This capability will be crucial in investigations of the frequency-specific characteristics of interferometric data. The missions should be flown at various times of the year during periods when particular meteorological events (i.e., onset of melting and freezing, snowfall, high surface winds, etc.) are most likely.
ERS-2, when launched, will continue the capability of the ERS-1 SAR. An exciting prospect is an ERS-1/ERS-2 tandem mission for the collection of interferometric data.
RADARSAT, carrying another C-band SAR, is planned for launch in 1995. After an extended initial operational period, a scheduled orbit maneuver will afford RADARSAT the first SAR view of most of the Antarctic ice sheet, including the regions south of latitude 78°S. The primary goal of this maneuver is to map the Antarctic ice sheet with SAR. The mapping will take approximately two weeks, less than one orbit cycle, after which time RADARSAT will return to the nominal north-looking configuration. No possibilities exist for interferometric data collection during this short period. A second mapping may occur later in an extended RADARSAT mission. This mapping is exploratory, and it is impossible to predict all that may be discovered with these data.
SAR remains a technology that is grossly underutilized in proportion to its proven capability to assist glaciologists in answering some of the most pressing questions in their discipline. These questions have direct relevance to global climate and future sea-level change.
SAR interferometry can provide data sets whose regional collection was never before feasible, yet are crucial to glaciological studies. A mission designed to produce complete interferometric coverage of permanently ice-covered areas promises extraordinary glaciological returns.
The omission of large portions of the polar regions by virtually every satellite mission to date continues a long, but undesirable, tradition that restricts the glaciological utility of satellite data. At present more than 2/3 of the Earth's permanent ice cover cannot be viewed by existing spaceborne SARs. Modern awareness of the climatic importance of the polar regions must be expressed in the ability of new sensors to extend their view to the poles. As in the case of RADARSAT, this polar view need not be available continuously, but, unlike RADARSAT, when available, it should be for a number of repeat cycles so that the enormous utility of SAR interferometry can be applied to the glaciological problems of global significance.
This chapter closes with the following specific recommendations:
(1) An interferometric mission at C-band should be conducted that includes multiple-image views of all ice sheets and glaciers sufficient to yield detailed surface topography and surface-velocity data sets.
(2) Future SAR missions should include maneuvering and data collection capability sufficient to monitor all permanently ice-covered areas at least once per year.
(3) Airborne and surface measurements should be carried out to assess the relative merits of different frequencies and combinations of frequencies and polarizations in deriving parameters needed to answer pressing glaciological questions relevant to the global climate. | http://southport.jpl.nasa.gov/nrc/chapter5.html | 13 |
61 | Find Opposite-Angle Trigonometry Identities
The opposite-angle identities change trigonometry functions of negative angles to functions of positive angles. Negative angles are great for describing a situation, but they aren’t really handy when it comes to sticking them in a trig function and calculating that value. So, for example, you can rewrite the sine of –30 degrees as the sine of 30 degrees by putting a negative sign in front of the function: sin(–30°) = –sin(30°).
The identity works differently for different functions, though. First, consider the identities, and then find out how they came to be.
The opposite-angle identities for the three most basic functions are sin(–x) = –sin(x), cos(–x) = cos(x), and tan(–x) = –tan(x).
The rule for the sine and tangent of a negative angle almost seems intuitive. But what’s with the cosine? How can the cosine of a negative angle be the same as the cosine of the corresponding positive angle? Here’s how it works.
The functions of angles with their terminal sides in the different quadrants have varying signs. Sine, for example, is positive when the angle’s terminal side lies in the first and second quadrants, whereas cosine is positive in the first and fourth quadrants. In addition, positive angles go counterclockwise from the positive x-axis, and negative angles go clockwise.
With those points in mind, take a look at the preceding figure, which shows a –45-degree angle and a 45-degree angle.
First, consider the –45-degree angle. This angle has its terminal side in the fourth quadrant, so its sine is negative. A 45-degree angle, on the other hand, has a positive sine, so sin(–45°) = –sin(45°).
In plain English, the sine of a negative angle is the opposite value of that of the positive angle with the same measure.
Now on to the cosine function. In light of the cosine’s sign with respect to the coordinate plane, you know that an angle of –45 degrees has a positive cosine. So does its counterpart, the angle of 45 degrees, which is why cos(–45°) = cos(45°).
So you see, the cosine of a negative angle is the same as that of the positive angle with the same measure.
Next, try the identity on another angle, a negative angle with its terminal side in the third quadrant. The preceding figure shows a negative angle with the measure of –120 degrees and its corresponding positive angle, 120 degrees.
The angle of –120 degrees has its terminal side in the third quadrant, so both its sine and cosine are negative. Its counterpart, the angle measuring 120 degrees, has its terminal side in the second quadrant, where the sine is positive and the cosine is negative. So the sine of –120 degrees is the opposite of the sine of 120 degrees, and the cosine of –120 degrees is the same as the cosine of 120 degrees. In trig notation, it looks like this: sin(–120°) = –sin(120°) and cos(–120°) = cos(120°).
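If you want a quick numerical sanity check of those two statements (this little script is just an illustration, not part of the identities themselves), you can compare the values directly:

```python
import math

angle = math.radians(120)

print(math.sin(-angle), -math.sin(angle))   # equal: sine flips sign
print(math.cos(-angle),  math.cos(angle))   # equal: cosine keeps its sign
```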
When you apply the opposite-angle identity to the tangent of a 120-degree angle (which comes out to be negative), you get that the opposite of a negative is a positive. Surprise, surprise. So, applying the identity, tan(–120°) = –tan(120°), which is positive. That matches what you get when you take the tangent of –120 degrees directly: its terminal side is in the third quadrant, where the tangent is positive. | http://www.dummies.com/how-to/content/find-oppositeangle-trigonometry-identities.html | 13
53 | Important: ATSUI is a legacy technology in Mac OS X v10.6 and later. Please use Core Text, described in Core Text Programming Guide, instead.
This chapter introduces and defines important typographic terms related to text—traditionally defined as the written representation of spoken language. The terms and concepts discussed in this chapter are used throughout the rest of the book and in ATSUI Reference. If you plan to use ATSUI, or any API that uses ATSUI (for example, MLTE), you should read this chapter.
This chapter starts by outlining the different components that make up text. It then describes how text is
measured and stored
arranged on a display device
adjusted in various ways to affect the text direction, kerning, alignment (or flushness), justification, and line breaks
drawn, highlighted, and hit-tested
Characters, Glyphs, and Fonts
A writing system’s alphabet, numbers, punctuation, and other writing marks consist of characters. A character is a symbolic representation of an element of a writing system; it is the concept of, for example, “lowercase a” or “the number 3.” It is an abstract object, defined by custom in its own language.
As soon as you write a character, however, it is no longer abstract but concrete. The exact shape by which a character is represented is called a glyph. The “characters” that ATSUI places on the screen are really glyphs.
Glyphs and characters do not necessarily have a one-to-one correspondence. For example, a single character may be represented by one or more glyphs (the character “lowercase i” could be represented by the combination of a “dotless-i” glyph and a “dot” glyph), and a single glyph can represent two or more characters (the two characters “f” and “i” can be represented by a single glyph as shown in Figure B-10).
Context—where the glyph appears in a line of text—also affects which glyph represents a character. Figure 1-1 shows examples of such contextual forms in a Roman font. Different forms of a glyph are used according to whether the glyph stands alone, occurs at the beginning of a word, occurs at the end of a word, or forms part of a new glyph, as in a ligature. For example, both forms of “X” can be used at the beginning of a word, but the form on the right side of the figure can be used only at the start of a word, whereas the form on the left side can be used alone or at the start of a word. Similarly, the “n” on the left side of the figure can be used alone or at the end of a word whereas the “n” on the right side can be used only to end a word. The last line in Figure 1-1 illustrates how “s” and “t” can be drawn separately or combined to form a ligature.
A font is a collection of glyphs, all of similar design, that constitute one way to represent the characters of one or more languages. Fonts usually have some element of design consistency, such as the shape of the ovals (known as the counter), the design of the stem, stroke thickness, or the use of serifs, which are the fine lines stemming from the upper and lower ends of the main strokes of a letter. Figure 1-2 shows some of the elements of glyphs that indicate they are members of the same family.
A font family is a group of fonts that share certain characteristics and have a common family name. Each font family has its own name, such as “New York,” “Geneva,” or “Symbol.” In contrast, a font always has a full name—for example, Geneva Regular or Times Bold. The full name determines which family the font belongs to and what typestyle it represents. (See “Typestyles” for more information.) Several fonts may have the same family names (such as Geneva, Geneva Bold, and so on) but are stored separately—these fonts are still part of the same font family. The font Geneva Italic, for example, shares many characteristics with Geneva Regular, but all of the glyphs slant at a certain angle. Though different, these fonts are part of the same font family.
Character Encoding and Glyph Codes
A computer represents characters in numeric form. A character encoding is a mapping that assigns numeric values (character codes) to characters. The mapping varies depending on the character set, which in turn may depend on the language as well as other factors.
Fonts associate each glyph with a double-byte code called its glyph code (which is not the same as a character code). Different fonts may have different glyph codes for the same glyphs, and a single font may have several glyph codes associated with a particular character because several glyphs may represent that character. Because the font and general textual context determine which glyph and which glyph codes represent characters, ATSUI transparently handles the details of mapping character codes to the correct glyph codes. Your application does not have to handle the details of obtaining glyphs from a font.
Different languages may have different requirements in terms of which glyphs they use from a font. A font contains some number of character encodings. Each encoding is an internal conversion table for interpreting a specific character set—that is, a way to map a character code to a glyph code for that font.
The reason a font can have multiple encodings is that the requirements for each writing system that the font supports may be different. A writing system is a method of depicting words visually. It consists of a character set and a set of rules for displaying, ordering, and formatting the glyphs associated with those characters. Writing systems can differ in line direction, the direction in which their glyphs are read, the size of the character set used to represent the script, and contextual variation (that is, whether a glyph changes according to its position relative to other glyphs). Writing systems have specific requirements for text display, text editing, character set, and fonts. A writing system can serve one or several languages. For example, the Roman writing system serves many languages, including French, Italian, and Spanish.
Most character sets and character encoding schemes developed in the past are limited in that they supported just one language or a small set of languages. Multilingual software has traditionally had to implement methods for supporting and identifying multiple character encodings. To interpret a character encoded numerically, you needed to know the text encoding system used to encode the character. Because text encoding systems are not unique, the same numeric encoding used in different systems may not represent the same character. The adoption of Unicode has changed this.
Unicode is a character encoding system designed to support the interchange, processing, and display of all the written texts of the diverse languages of the modern world. Unicode supplies enough numeric values to encode all characters available to all written texts. It provides a single model for text display and editing. Unicode also simplifies the handling of bidirectional text and characters that change according to their position in the sentence.
Unicode has three primary encoding forms: UTF-8, UTF-16, and UTF-32 (UTF stands for Unicode Transformation Format). UTF-8 uses 8-bit code units; UTF-16 uses 16-bit code units; and UTF-32 uses 32-bit code units. ATSUI uses UTF-16, which uses two bytes (one 16-bit code unit) to specify most characters. Text that uses an encoding other than Unicode can be converted to Unicode using the Text Encoding Converter. See Programming With the Text Encoding Conversion Manager for more information.
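The practical consequence, that a single character may occupy one or more 16-bit code units, is easy to demonstrate with a short sketch. This is illustrative only and uses Python's built-in text handling rather than any Apple API:

```python
# UTF-16 stores most characters in a single 16-bit code unit, but characters
# outside the Basic Multilingual Plane need a surrogate pair (two code units).
for ch in ("A", "\u00e9", "\u4e2d", "\U0001d11e"):   # A, e-acute, a CJK ideograph, musical G clef
    code_units = len(ch.encode("utf-16-be")) // 2
    print(f"U+{ord(ch):06X} uses {code_units} UTF-16 code unit(s)")
```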
Because Unicode includes the character repertoires of most common character encodings, it facilitates data interchange with other platforms. Using Unicode, text manipulated by your application and shared across applications and platforms can be encoded in a single coded character set.
Unicode provides some special features, such as combining or nonspacing marks and conjoining Jamo. These features are a function of the variety of languages that Unicode handles. If you have coded applications that handle text for the languages these features support, they should be familiar to you. If you have used a single coded character set such as ASCII almost exclusively, these features will be new to you. ATSUI lets you control how the special features available through Unicode are rendered.
For more information on Unicode, see http://www.unicode.org. For additional details on how ATSUI implements the Unicode specification, see “ATSUI Implementation of the Unicode Specification.”
ATSUI doesn’t allocate and manage buffers for your text; you are responsible for memory management. Instead, ATSUI caches the text that you want rendered, storing the cached text as a sequence of glyph codes. Because ATSUI uses Unicode (UTF-16), each character uses at least 2 bytes of storage. (Characters represented by surrogate pairs require more than 2 bytes of storage.) The storage order is the order in which text is stored. The text stored in memory is the source text; the text displayed is the display text, as shown in Figure 1-3.
Display order is the left-to-right (or top-to-bottom) order in which glyphs are drawn. ATSUI expects you to store your text in input order, which is the “logical” order, or the order in which the characters, not glyphs, would be read or pronounced in the language of the text. Because text of different languages may be read from left to right, right to left, or top to bottom, the input order is not necessarily the same as the display order of the text when it is drawn. Your application needs to differentiate between the order in which the character codes are stored and the order in which the corresponding glyphs are displayed. Figure 1-3 shows Hebrew glyphs that are stored one way and displayed another way.
As shown in Figure 1-3, the character codes that make up the text are numbered using zero-based offsets. Therefore, the first character code in the figure has an offset of 0.
ATSUI also uses a zero-based numbering scheme to index the glyphs that are actually displayed. The glyph index, which gives the glyph’s position in the display order, always starts at 0. Each glyph has a single index, even though its character code is 2 bytes (one UniChar).
Most users use point size to specify the size of the glyphs in a document. Point size indicates the size of a font’s glyphs as measured from the baseline of one line of text to the baseline of the next line of single-spaced text. In the United States, point size is measured in typographic points, and there are 72.27 points per inch. However, ATSUI and the PostScript language both define 1 point to be exactly 1/72 of an inch. Although point size is a useful measure of the size of text, you may wish to use more exact measurements for greater control over placement of the glyphs on the display device. ATSUI permits fractional point sizes.
Font designers use a special vocabulary for the measurements of different parts of a glyph. Figure 1-4 shows the terms describing the most frequently used measurements. The bounding box of a glyph is the smallest rectangle that entirely encloses the drawn parts of the glyph. The glyph origin is the point that ATSUI uses to position the glyph when drawing. In Figure 1-4, notice that there is some space between the glyph origin and the edge of the bounding box: this space is the glyph’s left-side bearing. The left-side bearing value can be negative, which decreases the space between adjacent characters. The right-side bearing is space on the right side of the glyph; this value may or may not be equal to the value of the left-side bearing. The advance width is the full horizontal width of the glyph as measured from its origin to the origin of the next glyph on the line, including the side bearings on both sides.
Most glyphs in Roman fonts appear to sit astride the baseline, an imaginary horizontal line. The ascent is a distance above the baseline, chosen by the font’s designer and the same for all glyphs in a font, that often corresponds approximately to the tops of the uppercase letters in a Roman font. Uppercase letters are chosen because, among the regularly used glyphs in a font, they are generally the tallest.
The descent is a distance below the baseline that usually corresponds to the bottoms of the descenders (the “tails” on glyphs such as “p” or “g”). The descent is the same distance from the baseline for all glyphs in the font, whether or not they have descenders. The sum of ascent plus descent marks the line height of a font.
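These measurements combine by simple arithmetic. The sketch below uses made-up metric values expressed in font design units (they are assumptions, not data from any real font) and shows how a metric scales from design units to points at a given point size:

```python
def to_points(value_in_font_units, units_per_em, point_size):
    """Convert a metric from font design units to points at a given point size."""
    return value_in_font_units * point_size / units_per_em

UNITS_PER_EM = 2048                       # hypothetical design grid for one em

# Hypothetical metrics for a single glyph, in font design units.
left_side_bearing, bbox_width, right_side_bearing = 120, 1180, 110
ascent, descent = 1638, 410

advance_width = left_side_bearing + bbox_width + right_side_bearing
line_height = ascent + descent            # ascent plus descent, as described above

print(to_points(advance_width, UNITS_PER_EM, 12.0))   # horizontal advance at 12 points
print(to_points(line_height, UNITS_PER_EM, 12.0))     # single-spaced line height at 12 points
```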
For vertical text, font designers may use additional measurements. The top-side bearing is the space between the top of the glyph and the top edge of the bounding box. The bottom-side bearing is the distance from the bottom of the bounding box to the origin of the next glyph. For vertical text, the advance height is the sum of the top-side bearing, the bounding-box height, and the bottom-side bearing. These metrics are useful if, for example, you want to display a horizontal font vertically. Likewise, vertical fonts such as kanji may also have horizontal metrics.
Every block of text has an image bounding rectangle, which is the smallest rectangle that completely encloses the filled or framed parts of a block of text. However, because of the height differences between glyphs—for example, between a small glyph, such as a lowercase “e”, and a taller, larger glyph, such an uppercase “M”, or even between glyphs of different fonts and point sizes—the image bounding rectangle may not be sufficient for your application’s purposes. Therefore, you can also use the typographic bounding rectangle, which, in most cases, is the smallest rectangle that encloses the full span of the glyphs from the ascent line to the descent line.
Figure 1-5 shows an example of how the typographic bounding rectangle and image bounding rectangle relate. The two rectangles are markedly different because the text has no ascenders or descenders. Whereas the image bounding rectangle encloses just the black bits of the drawn text, the typographic bounding rectangle takes into account the ascent and descent needed to accommodate all glyphs in a font. If text includes glyphs with ascenders and descenders, the typographic bounding rectangle doesn’t change, but the image bounding rectangle does.
Glyphs can be differentiated not only by font but by typestyle. A typestyle is a specific variation in the appearance of a glyph that can be applied consistently to all the glyphs in a font family. Some of the typical typestyles available on the Macintosh computer include plain, bold, italic, underline, outline, shadow, condensed, and extended. Other styles that may be available are demibold, extra-condensed, or antique.
A font variation is a setting along a particular variation axis. Font variations allow your application to produce a range of typestyles algorithmically. Each variation axis has a name that usually indicates the typestyle that the axis represents, a tag to represent that name (such as 'wght'), a set of maximum and minimum values for the axis, and the default value of the axis. The weight axis, for example, governs the possible values for the weight of the font; the minimum value may produce the lightest appearance of that font, the maximum value the boldest. The default value is the position along the variation axis at which that font falls normally. Because the axis is created by the font designer, font variations can be optimized for their particular font. Figure 1-6 shows a range of possible weights for a glyph, from the minimum weight to the maximum weight.
A font instance is a set of named variations identified by the font designer that matches specific values along one or more variation axes and associates those values with a name. For example, suppose a font has the variation axis 'wght' with a minimum value of 0.0, a default of 0.5, and a maximum value of 1.0. The corresponding font instance might have the name “Demibold” with a value along that variation axis of 0.8.
In Figure 1-6, the variation axis value of the glyph at the far right could represent the named instance “Extra Bold,” whereas the glyph at the far left could represent the named instance “Light.” The other values represented in the figure could likewise have instance names.
Font variations and font instances give your application the ability to provide whatever typestyles the font designer has decided to include with the font. They are available through ATSUI only if the font designer has defined variations and instances for a font.
When you arrange text on a display device, you can let ATSUI use default values to automatically handle many aspects of the text display or you can specify precisely how ATSUI should present and arrange the text. The specifics of text layout using ATSUI are discussed in later chapters. This section describes some of the general concepts of laying out text on a display device. For example, when laying out text, the following need to be considered:
text runs, style runs, and direction runs
contextual forms and ligatures
alignment (or flushness) and justification
kerning and tracking
Text direction consists of text orientation (horizontal or vertical) and the direction in which the text is read. Text in your application can be oriented in three common directions: horizontally, left to right; horizontally, right to left; and vertically, top to bottom. ATSUI allows your application, for example, to draw lines of text in multiple directions, as shown in Figure 1-7.
A baseline is an imaginary line that coincides with some point in a font—for example, the bottom, middle, or top of each glyph. The baseline of a glyph defines the position of the glyph with respect to other glyphs at different point sizes when all the glyphs are aligned. It represents a stable platform from which glyphs of different sizes and different writing systems grow proportionally.
Depending on the writing system, the baseline may be above, below, or through the center of each glyph, as shown in Figure 1-8. ATSUI provides your application with capabilities for using multiple baselines. For more information, see “Baseline Offsets Attribute Tag .”
A baseline delta is the distance between the baseline and the position of the glyph with respect to the baseline, as shown in Figure 1-9.
Various baselines can also be used to create special effects, such as drop capitals. A drop capital is an initial capital letter that is much larger than surrounding glyphs and embedded in them. Figure 1-10 is an example of drop capitals formed solely on the basis of the baselines in the font. The default baseline for this text is the Roman baseline for 18-point type. The hanging baseline of the drop capitals aligns with the hanging baseline of the regular text, creating the effect shown.
Leading Edges and Trailing Edges
Because text has a direction, the concept of which glyph comes “first” in a line of text cannot always be limited to the visual terms “left” and “right.” The leading edge is defined as the edge of a glyph you first encounter—such as the left foot of a Roman glyph—when you first read the text that includes that glyph. The trailing edge is the edge of a glyph encountered last.
Figure 1-11 shows how the concepts of leading edge and trailing edge change depending on the characteristics of the glyph. In the first example—a Roman glyph—the leading edge is on the left, because the reader encounters that side first. In the second example, the leading edge of the Hebrew glyph is on the right for the same reason.
Text Runs, Style Runs, and Direction Runs
In any segment of contiguous text, certain parts stand out as belonging together, because the glyphs share a certain font, typestyle, or direction. For the purposes of referring to individual segments of text, you can think of sequences of glyphs that are contiguous in memory and share a set of common attributes as runs.
ATSUI associates a block of text with a text run. A sequence of glyphs contiguous in memory that share the same style is a style run. As Figure 1-12 shows, a text run can be subdivided into several style runs. The figure shows three style runs.
A sequence of contiguous glyphs that share the same text direction is a direction run. As with text runs and style runs, the number of direction runs does not necessarily correlate to the number of style runs available, as shown in Figure 1-13, which has two direction runs.
Contextual Forms and Ligatures
A glyph’s position next to other glyphs or its position in a word or a line of text may determine its appearance. For some writing systems, such as Roman, alternate glyphs are used for aesthetic reasons; in other writing systems, use of alternate forms is required.
A contextual form is an alternate form of a glyph that is chosen depending on the glyph’s placement in a certain context, such as a certain word or line. Some contextual forms of initial and final forms of glyphs in a Roman font are shown in Figure 1-1. Other writing systems, such as Arabic, require different contextual forms of glyphs according to where they appear. Figure 1-14 shows the forms of the Arabic letter “ha” that appear alone and at the beginning, middle, or end of a word. The same character code is used for each case; ATSUI finds the appropriate glyph code.
Ligatures are two or more glyphs combined to form a single new glyph (whereas contextual forms are variations on the shape of one glyph). In the Roman writing system, ligatures are generally an optional aesthetic refinement. In other writing systems, special ligatures are required when certain glyphs appear next to one another. Some examples of ligatures used in a Roman font are shown in Figure 1-15.
In general, the font contains all of the information needed to determine when your application should use the appropriate contextual forms and ligatures. If your application allows alternate forms of glyphs to be used, ATSUI does the substitution for you.
Alignment and Justification
The text area is the space on the display device in which text you want to display should fit. The left, right, top, and bottom sides of that area are the margins. How you arrange the text depends on the effect you want to achieve. There are two primary methods of arranging text: alignment and justification. Alignment (also called flushness) is the process of placing text in relation to one or both margins. Figure 1-16 shows left, right, and center alignment of text. Justification is the process of typographically stretching or shrinking a line of text to fit within a given width. Your application can set the width of the space in which the line of text should appear. ATSUI then distributes the white space available on the line between words or even between glyphs, depending on the level of justification chosen.
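A minimal sketch of the distribution step is shown below. It is a simplification of what a layout engine such as ATSUI actually does (which also weighs different glyph classes and justification priorities); here the leftover space on the line is simply divided evenly among the gaps between words:

```python
def justified_gap(word_widths, gap_width, line_width):
    """Return the inter-word gap, in points, needed to fill line_width exactly."""
    gaps = len(word_widths) - 1
    if gaps <= 0:
        return gap_width                      # nothing to stretch on a one-word line
    natural_width = sum(word_widths) + gaps * gap_width
    return gap_width + (line_width - natural_width) / gaps

# Three words totalling 150 points with 5-point spaces, justified to a 200-point measure:
print(justified_gap([60.0, 40.0, 50.0], 5.0, 200.0))   # each gap grows to 25 points
```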
There are other means of aligning or justifying a line—for example, stretching a glyph or decomposing a ligature. ATSUI can also handle complex justification such as that used in Arabic writing systems.
Kerning and Tracking
Kerning is an adjustment to the normal spacing between two or more specific glyphs. A kerning pair consists of two adjacent glyphs such that the position of the second glyph is changed with respect to the first. The font designer determines which glyphs participate in kerning and in what context. Any adjustments to glyph positions are specified relative to the point size of the glyphs. Kerning usually improves the apparent letter-spacing between glyphs that fit together naturally.
Figure 1-17 shows how glyphs are positioned differently with and without kerning. Note that the phrase is shorter when kerning is applied than when it is not applied.
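Because pair kerns are expressed by the font designer in design units relative to the em, applying one is just a scaled addition to the advance width. The kern table below is invented for illustration; it is not data from a real font:

```python
UNITS_PER_EM = 1000

# Hypothetical kerning pairs in font design units; negative values pull glyphs closer.
KERN_PAIRS = {("A", "V"): -80, ("T", "o"): -60}

def kerned_advance(left, right, advance_points, point_size):
    """Advance from 'left' to 'right', in points, after applying any pair kern."""
    kern_units = KERN_PAIRS.get((left, right), 0)
    return advance_points + kern_units * point_size / UNITS_PER_EM

print(kerned_advance("A", "V", 8.0, 12.0))   # 8.0 - 0.96 = 7.04 points
print(kerned_advance("A", "B", 8.0, 12.0))   # no kern pair, advance unchanged
```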
Cross-stream kerning allows the automatic movement of glyphs perpendicular to the line orientation of the text. For example, when ATSUI applies cross-stream kerning to horizontal text, the automatic movement is vertical; this feature is required for writing systems such as Taliq, which is used in the Urdu language.
When your application lays out text, it has the option of using the interglyph spacing specified by the font designer or altering the spacing slightly in order to achieve a tighter fit between letters and improve the look of a line of text.
Your application can also use tracking if it has been specified by the font designer. In tracking, space is adjusted between all glyphs in the run. You can increase or decrease interglyph spacing by using a tracking setting, which is a value that specifies the relative tightness or looseness of interglyph spacing. Positive tracking settings result in an increase in the looseness of all glyphs in the run. Negative tracking settings result in an increase in the tightness of all glyphs. Normal tracking, tight tracking, and loose tracking are shown in Figure 1-18.
Special Font Features
Some special features are available in certain fonts. You can increase the control a user has over the presentation of text in a document if you provide access to these features when they are available in a font. The font provides the functionality for using these features. Table 1-1 shows some of the currently defined features. See “Font Features” for a more complete list.
Permits selection from different ranges of ligatures.
Controls the level of cursive connection in the font. This feature is used in fonts, such as cursive Roman fonts or Arabic fonts, in which glyphs are connected to each other. See Figure B-2 for an example.
Specifies that glyphs need to change their appearance in vertical runs of text. Figure B-23 shows how a vertical form of a glyph can be substituted for a horizontal form.
A swash is a variation, often ornamental, of an existing glyph. The smart swash feature controls contextual swash substitution, such as substituting a final glyph when a particular glyph appears at the end of a word. See Figure B-19 for an example of swashes.
Controls superscripts, subscripts, and ordinal forms. See Figure B-22 for an example of vertical positions.
Governs selection and generation of fractions. See Figure B-8 for examples of how fractions can be drawn.
Prevents the collision of long tails on glyphs with the descenders of other glyphs. See Figure B-17 for an example.
Allows fine typographic effects, such as the automatic conversion of two adjacent hyphens to an em dash.
Governs nonletter ornament sets of glyphs. See Figure B-16 for an example of ornamental glyphs.
Allows the font designer to group together collections of noncontextual substitutions into named sets.
Specifies the use, with Chinese fonts, of the traditional or simplified character forms. See Figure B-4 for an example.
Some of these features, such as typographic extras, are fancy elements that provide the user with alternate forms of glyphs or other ornamental designs. Other features are contextual and absolutely necessary for using that font correctly—for example, cursive connectors for Arabic. Fonts that require these features include them, whereas optional elements are included at the discretion of the font designer.
Your application determines how wide the text area for a line should be. At times, a user enters a line of text that does not fit neatly in the given text area and overlaps one of the margins. When this happens, you break the line of text and wrap the text onto the next line.
Figure 1-19 shows a line break made on the basis of a simple algorithm: The application backs up in the source text to the trailing edge of the last white space and then carries over all of the text following that white space to the next line.
Your application can devise more complex algorithms, such as breaking a word at an appropriate hyphenation point, if possible. ATSUI leaves the final decision about where to break the line up to your application. However, ATSUI provides you with functions designed to help you determine the best place to break a line. For more information, see “Breaking Lines” and “Flowing Text Around a Graphic .”
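A bare-bones version of the simple algorithm just described might look like the sketch below. It is illustrative only; it measures text with a caller-supplied width function rather than using ATSUI's line-breaking support:

```python
def break_line(text, max_width, glyph_width):
    """Return (line, rest) using the back-up-to-the-last-space rule.
    glyph_width(ch) supplies the advance width of a single character in points."""
    width = 0.0
    last_space = -1
    for i, ch in enumerate(text):
        width += glyph_width(ch)
        if ch == " ":
            last_space = i
        if width > max_width:
            if last_space >= 0:
                # Back up to the last white space and carry the rest of the
                # word over to the next line (the space itself is dropped).
                return text[:last_space], text[last_space + 1:]
            return text[:i], text[i:]          # no space on the line: hard break
    return text, ""

line, rest = break_line("The quick brown fox jumps", 80.0, lambda ch: 6.0)
print(line)   # 'The quick'; 'brown fox jumps' wraps onto the next line
```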
In some cases there may not be a glyph defined for a character code in a font. Your application can specify a search order for ATSUI to use when trying to locate a replacement glyph. If ATSUI cannot find any suitable replacement, it recommends substituting a glyph from the Last Resort font. The Last Resort font is a collection of glyphs that represent types of Unicode characters. If the font cannot represent a particular Unicode character, the appropriate "missing" glyph from the Last Resort font is used instead. For more information on font substitution, see “Using Font Fallback Objects.”
When the user places a pointer over a line of text and presses and releases a mouse button, the user expects that mouse click to produce a caret at the corresponding onscreen position. Similarly, when the user drags the mouse while pressing and holding the mouse button, the user expects the selected text to be highlighted. In each case, your application must recognize and respond to mouse events appropriately. This section discusses caret positioning, highlighting, and conversion of screen position to text offset in memory.
A caret position is a location on the screen that corresponds to an insertion point in memory. It lets the user know where in the text file the next insertion (or deletion) will occur. A caret position is always between glyphs on the screen, usually on the leading edge of one glyph and the trailing edge of another. The leading edge of a glyph is the edge that is encountered first when reading text of that glyph’s script system; the trailing edge is opposite from the leading edge. In left-to-right text, a glyph’s leading edge is its left edge; in right-to-left text, a glyph’s leading edge is its right edge.
In most situations for most text applications, the caret position is on the leading edge of the glyph corresponding to the character at the insertion point in memory, as shown in Figure 1-20. When a new character is inserted, it displaces the character at the insertion point, shifting it and all subsequent characters in the buffer forward by one character position.
The caret position is unambiguous in text with a single line direction. In such a case, the caret position is on the trailing and leading edges of characters that are contiguous in the text buffer; it thus corresponds directly to a single offset in the buffer. This is not always the case in bidirectional text.
In determining caret position, an ambiguous case occurs at direction boundaries because the offset in memory can map to two different glyph positions on the screen—one for the text in each line direction. In Figure 1-21, for example, the insertion point is at offset 3 in the buffer. If the next character to be inserted is Arabic, the caret should be drawn at position 3 on the screen; if the next character is English, the caret should be drawn at caret position 12.
The Mac OS codifies this relationship between text offset and caret position as follows (a short sketch implementing these rules appears after the list):
For any given offset in memory, there are two potential caret positions:
the leading edge of the glyph corresponding to the character at that offset
the trailing edge of the glyph corresponding to the previous (in memory) character
In unidirectional text, the two caret positions coincide: the leading edge of the glyph for one character is at the same location as the trailing edge of the glyph for the previous character. In Figure 1-20, the offset of 3 yields caret positions on the leading edge of “D” and the trailing edge of “C”, which are the same unambiguous location.
At a boundary between text of opposite directions, the two caret positions do not coincide. Thus, in Figure 1-21, for an offset of 3 there are two caret positions: 12 and 3. Likewise, an offset of 12 yields two caret positions (also 12 and 3, but on the edges of two different glyphs).
At an ambiguous character offset, the current line direction (the presumed direction of the next character to be inserted) determines which caret position is the correct one:
If the current direction equals the direction of the character at that offset, the caret position is the leading edge of that character’s glyph. In Figure 1-21, if Roman text is to be inserted at offset 3 (occupied by a Roman character), the caret position is on the leading edge of that character’s glyph—that is, at caret position 12.
If the current direction equals the direction of the previous (in memory) character, the screen position is on the trailing edge of the glyph corresponding to that previous (in memory) character. In Figure 1-21, if Arabic text is to be inserted at offset 3, the caret position is on the trailing edge of the glyph of the character at offset 2—that is, at caret position 3.
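Here is a compact sketch of the two rules above. It is illustrative Python, not the actual Mac OS or ATSUI implementation, and it simply reports which glyph edge the caret should sit on for a given offset and current input direction:

```python
def caret_edge(offset, char_dirs, current_dir):
    """Decide where the caret goes for an insertion point at 'offset'.
    char_dirs   : direction ('LTR' or 'RTL') of each character in storage order.
    current_dir : direction of the text about to be inserted.
    Returns (character_index, edge), where edge is 'leading' or 'trailing'."""
    at_offset = char_dirs[offset] if offset < len(char_dirs) else None
    previous = char_dirs[offset - 1] if offset > 0 else None

    if current_dir == at_offset:
        return offset, "leading"         # leading edge of the glyph at this offset
    if current_dir == previous:
        return offset - 1, "trailing"    # trailing edge of the previous character's glyph
    return offset, "leading"             # unidirectional text: both positions coincide

# A buffer like Figure 1-21: three right-to-left characters followed by left-to-right ones.
dirs = ["RTL", "RTL", "RTL", "LTR", "LTR", "LTR"]
print(caret_edge(3, dirs, "LTR"))   # (3, 'leading'): caret joins the Roman run
print(caret_edge(3, dirs, "RTL"))   # (2, 'trailing'): caret joins the Arabic run
```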
Two common approaches for drawing the caret at direction boundaries involve the use of a dual caret and a single caret. A dual caret consists of two lines, a high caret and a low caret, each measuring half the text height (see Figure 1-22). The high caret is displayed at the primary caret position for the insertion point; the low caret is displayed at the secondary caret position for that insertion point. Which position is primary, and which is secondary, depends on the primary line direction:
The primary caret position is the screen location associated with the glyph that has the same direction as the primary line direction. If the current line direction corresponds to the primary line direction, inserted text will appear at the primary caret position. A primary caret is a caret drawn at the primary caret position.
The secondary caret position is the screen location associated with the glyph that has a different direction from the primary line direction. If the current line direction is opposite to the primary line direction, inserted text will appear at the secondary caret position. In Figure 1-22, the display of the Roman keyboard icon shows that the current line direction is not the same as the primary line direction, so the next character inserted will appear at the secondary caret position. A secondary caret is a caret drawn at the secondary caret position.
A single caret (or moving caret) is simpler than a dual caret (see Figure 1-23). It is a single, full-length caret that appears at the screen location where the next glyph will appear. At direction boundaries, its position depends on the keyboard script. At a direction boundary, the caret appears at the primary caret position if the current line direction corresponds to the primary line direction; it appears at the secondary caret position if the current line direction is opposite to the primary line direction. The moving caret is also called a jumping caret because its position jumps between the primary and secondary caret positions as the user switches the keyboard script between the two text directions represented.
When you place a caret between glyphs, you need to be able to relate the caret to the insertion point: a point between UniChar offsets in the source text. The edge offset is a UniChar offset between character codes in the source text that corresponds to the caret location between glyphs. In Figure 1-20, the edge offset is 3.
When displaying a selection range, an application typically marks it by highlighting, drawing the glyphs in inverse video or with a colored or outlined background. As part of its text-display tasks, your application is responsible for knowing what the selection range is and highlighting it properly, as well as for making the necessary changes in memory that result from any cutting, pasting, or editing operations involving the selection range. Figure 1-24 shows highlighting for a selection range in unidirectional text.
The Mac OS measures the limits of highlighting rectangles in terms of caret position. Thus, in Figure 1-24, in which the selection range consists of the characters at offsets 1 and 2 in memory, the ends of the highlighting rectangle correspond to caret positions for offsets 1 and 3. It’s equivalent to saying that the highlighting extends from the leading edge of the glyph for the character at offset 1 to the leading edge of the glyph for the character at offset 3.
If the displayed text has bidirectional runs, the selection range may appear as discontinuous highlighted text. This is because the characters that make up the selection range are always contiguous in memory, but characters that are contiguous in memory may not be contiguous onscreen. Figure 1-25 is an example of text whose selection range consists of a contiguous sequence of characters in memory, whereas the highlighted glyphs are displayed discontinuously.
In describing the boundaries of the highlighting rectangles in terms of caret position, note that for Figure 1-25, it is not possible to simply say that the highlighting extends from the caret position of offset 2 to the caret position of offset 6. Using the definitions of caret position given earlier, however, it is possible to define the selection range as two separate rectangles, one extending from offset 4 to offset 2, and another extending from offset 12 to offset 6 (assuming for the ambiguous offsets—4 and 12—that the current text direction equals the primary line direction).
Converting Screen Position to Text Offset
Caret positioning and highlighting, as just discussed, require conversion from text offset to screen position. But that is only half the picture; it is just as necessary to be able to convert from screen position to text offset. For example, if the user clicks the cursor within your displayed text, you need to be able to determine the offset in your text buffer equivalent to that mouse-down event. Hit-testing is the process of obtaining the location of an event relative to the position of onscreen glyphs and to the corresponding offset between character codes in memory. You can use the location information obtained from hit-testing to set the insertion point (that is, the caret) or selection range (highlighting).
ATSUI does most of this work for you. It provides functions that convert a screen position to a memory offset (and vice versa). These functions work properly with bidirectional text and with text that has been rendered with ligatures and contextual forms.
Determining the character associated with a screen position requires first defining the caret position associated with a given screen position. Once that is done, the previously defined relationship between caret position and text offset can be used to find the character.
Figure 1-26 shows the cursor positioned within a line of text at the moment of a mouse click. A mouse-down event can occur anywhere within the area of a glyph, but the caret position that is to be derived from that event must be a thin line that falls between two glyphs.
A line of displayed glyphs is divided by ATSUI into a series of mouse-down regions. A mouse-down region is the screen area within which any mouse click yields the same caret position. For example, a mouse click that occurs anywhere between the leading edge of a glyph and the center of that glyph results in a caret position at the leading edge of that glyph. For unidirectional text, mouse-down regions extend from the center of one glyph to the center of the next glyph (except at the ends of a line), as Figure 1-26 shows. A mouse click anywhere within the region results in a caret position between the two glyphs.
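As a rough sketch of the midpoint rule just described (not an actual ATSUI call; the names below are invented, and it assumes unidirectional left-to-right text whose glyph leading-edge x coordinates and line right edge are already known):

def hit_test_unidirectional(click_x, glyph_left_edges, line_right_edge):
    # Caret positions sit between glyphs; a click between a glyph's leading
    # edge and its center maps to the caret on that leading edge, while a
    # click past the center maps to the caret on its trailing edge.
    edges = list(glyph_left_edges) + [line_right_edge]   # caret positions 0..n
    offset = 0
    for i in range(len(edges) - 1):
        center = (edges[i] + edges[i + 1]) / 2.0
        if click_x > center:
            offset += 1   # the click is past this glyph's midpoint
    return offset         # edge offset where the insertion point goes

# Four glyphs, each 10 units wide; a click at x = 23 lands in the region
# between the centers of glyphs 1 and 2, so the caret goes at offset 2.
print(hit_test_unidirectional(23.0, [0, 10, 20, 30], 40))   # 2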
At line ends, and at the boundaries between text of different line directions, mouse-down regions are smaller and interpreting them is more complex. As Figure 1-27 shows, the mouse-down regions at direction boundaries extend only from the leading or trailing edges of the bounding glyphs to their centers. Note that the shaded part of Figure 1-26 is a single mouse-down region, whereas each of the shaded parts of Figure 1-27 consists of two mouse-down regions.
How do mouse-down regions relate to offsets? Using Figure 1-27 as an example, consider the two mouse-down regions 3a and 12a. Keep in mind that the primary line direction is right to left.
A mouse click within region 3a is associated with the trailing edge of the Arabic character. To insert Arabic text, your application might draw a primary caret (or single caret) at caret position 3, and place the insertion point at offset 3 in the buffer. (If you are drawing a dual caret, the secondary caret should be at caret position 12, which also corresponds to an insertion point at offset 4 in the buffer.)
A mouse click within region 12a is associated with the leading edge of the Roman character. To insert Roman text, your application might draw a secondary caret (or single caret) at caret position 12, and place the insertion point at offset 3 in the buffer. (If you are drawing a dual caret, the primary caret should be at caret position 3, which also corresponds to an insertion point at offset 3 in the buffer.)
Thus mouse clicks in two widely separated areas of the screen can lead to an identical caret display and to a single insertion point in the text buffer. One, however, permits insertion of Roman text, the other Arabic text, and the insertions occur at different screen locations.
Mouse clicks in regions 3b and 12b in Figure 1-27 would lead to just the opposite situation: a primary caret at caret position 12, a secondary caret at caret position 3, and an insertion point at offset 12 in the text buffer.
© 2002, 2008 Apple Inc. All Rights Reserved. (Last updated: 2008-09-30) | http://developer.apple.com/legacy/library/documentation/Carbon/Conceptual/ATSUI_Concepts/atsui_chap2/atsui_concepts.html | 13 |
87 | To better understand certain problems involving aircraft,
it is necessary to use some mathematical ideas from
the study of triangles.
Let us begin with some definitions and terminology which we will use on this slide.
We start with a right triangle. A right triangle is a
three-sided figure with one angle equal to 90 degrees. A 90-degree angle is
called a right angle, and that is where the right triangle gets its name.
We define the side of the triangle opposite from the right angle to
be the hypotenuse, h. It is the longest side of the three sides
of the right triangle. The word "hypotenuse" comes from two Greek words
meaning "to stretch", since this is the longest side.
We are going to label the other two sides a and b.
The Pythagorean Theorem is a statement relating the lengths
of the sides of any right triangle.
The theorem states that:
For any right triangle, the square of the hypotenuse
is equal to the sum of the squares of the other two sides.
Mathematically, this is written:
h^2 = a^2 + b^2
The theorem has been known in many cultures, by many names, for many years.
Pythagoras, for whom the theorem is named, lived in ancient Greece, 2500 years ago.
It is believed that he learned the theorem during his studies in Egypt. The
Egyptians probably knew of the relationship for a thousand years before
Pythagoras. The Egyptians knew of this relationship for a triangle with sides in the
ratio of "3 - 4 - 5".
5^2 = 3^2 + 4^2
25 = 9 + 16
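As a quick illustration (an addition to the page, not part of it), the same relationship can be checked with a few lines of Python:

import math

# Length of the hypotenuse of a right triangle with legs a and b.
def hypotenuse(a, b):
    return math.sqrt(a * a + b * b)

print(hypotenuse(3, 4))    # 5.0, the Egyptian "3 - 4 - 5" triangle
print(hypotenuse(5, 12))   # 13.0
print(math.hypot(3, 4))    # 5.0, the library function that does the same thing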
Pythagoras generalized the result to any right triangle. There are many different
algebraic and geometric proofs of the theorem. Most of these begin with a
construction of squares on a sketch of a basic right triangle. On the figure at
the top of this page, we show squares drawn on the three sides of the triangle.
A square is the special case of a rectangle in which all the sides are equal
in length. The
area A of a rectangle is the product of the sides.
So for a square with a side equal to a, the area is given by:
A = a * a = a^2
So the Pythagorean theorem states the area h^2 of the square drawn on the
hypotenuse is equal to the area a^2 of the square drawn on side a
plus the area b^2 of the square drawn on side b.
Here's an interactive Java program that lets you see that this area relationship is true:
We begin with a right triangle on which we have constructed squares on the two sides, one
red and one blue. We are going to break up the pieces of these two squares and move them into
the grey square area on the hypotenuse. We won't lose any material during the
operation. So if we can exactly fill up the square on the hypotenuse,
we have shown that the areas are equal. You work through the construction by clicking on the
button labeled "Next". You can go "Back" and repeat a section, or go all the way back to the beginning by clicking on "Reset".
What is it doing? The first step rotates the triangle down onto the blue square.
It cuts the blue square into three pieces: two triangles and a red rectangle.
The two triangles are exactly the same size as the original triangle.
The "bottom" of the original triangle exactly fits
the vertical side of the square, because the sides of a square are equal.
The red rectangle has its vertical sides equal to the base of the original triangle,
and its horizontal sides equal to the difference between the "bottom" side and the
"vertical" side of the original triangle.
Using the terminology from the figure at the top of this page, the dimensions
of the red rectangle are:
vertical length = b
horizontal length = b - a
The next step is to move the red rectangle
over adjacent to the red square. The rectangle sticks out the top of the red square
and the two triangles remain in the blue square. The next step is to move one of the
blue triangles vertically into the hypotenuse square. It fits exactly along the side of
hypotenuse square because the sides of a square are equal. The next step is to move the
other blue triangle into the hypotenuse square. (We are halfway there!) The next step
is to slide the form of the original triangle to the left into the red region. The triangle cuts
the red region into three pieces, two triangles and a small yellow square.
The original triangle fits exactly into this region for two reasons:
the vertical sides are identical, and the horizontal side of the red region is equal to
the length of the red square plus the horizontal length of the red rectangle which we
moved. The horizontal length of the red region is:
horizontal length = a + (b - a) = b
The horizontal length of the red region is exactly the length of the horizontal side
of the original triangle.
The yellow square has dimensions b - a on each side.
The next step is to move one of the red triangles into the
hypotenuse square. Again it's a perfect fit.
The next step is to move the final red triangle into
the hypotenuse square. Now if we look at the grey square that remains in the
hypotenuse square, we see that its dimensions are b - a: the long side
of the triangle minus the short side. The final step is to move the yellow square into
this hole. It's a perfect fit and we have used all the material from the original red
and blue squares. | http://www.grc.nasa.gov/WWW/K-12/airplane/pythag.html | 13 |
68 | Teacher professional development and classroom resources across the curriculum
In looking at both the sphere and the pseudosphere, we see that they are unlike the plane in that they are both curved surfaces. Furthermore, we saw that "straight" lines (lines of the shortest length, in other words) are not straight at all on these surfaces; rather, they are curves. In order to explore these surfaces, and others that do not obey Euclid's fifth postulate, we need to be able to discuss curves meaningfully. Let's start with a simple curve in a plane:
How can we describe this curve's curviness? We could compare it to a circle, a "perfect curve" in some respects, but it is evident that this curve is not really even close to being a circle. It has regions that seem more tightly curved than others, and it even has regions that curve in opposite directions. When we look at a curve in this way, in the broader context of the plane, we are viewing it extrinsically. By contrast, viewing a curve intrinsically, that is from the point of view of someone on the curve, yields a different perspective and different possible measurements.
So, perhaps a good thing to do would be, instead of talking about the curvature of the whole thing right away, to talk about the curvature at each point along the curve. As theoretical travelers along the curve, we could stop at each point and ask, "What size circle would define the curve in the immediate vicinity of this point?"
First, let's think of the tangent line at this point. This is a line that intersects our curve only at this one point (locally, that is—it's possible that the line might also intersect the curve at other more-distant points). We can also think of the tangent as the one straight line that best approximates our curve at this particular point. Let's then draw a line from this tangent point, perpendicular to the tangent line.
Let this line, called a "normal line" or just a "normal," be of a length that, if it were the radius of a circle, that circle would be the biggest possible circle that still touches the curve in only one place. In other words, the normal line should be the radius of the circle that best approximates the curve at this particular point. Such a circle is called an "osculating circle," which literally means "kissing" circle, because it just barely touches, or "kisses," the curve at this one point.
We can define the curvature of a curve at any particular point to be the reciprocal of the radius of the osculating circle that fits the curve at that point. Let's refer to this curvature as "k" from here on out.
What should we do, though, about the fact that some parts of the curve open upward, whereas other parts of the curve open down? If we designate that the normal always points to the same side of the curve—let's choose upward for our case—then when the normal happens to be on the same side as the osculating circle, we'll call this negative curvature, and when the normal happens to be on the opposite side from the osculating circle, we'll call this positive curvature. At any point where the line is flat (i.e., straight), we don't need an osculating circle, and we'll call this zero curvature. The choice of defining which curvature to consider positive and which to consider negative is completely arbitrary. The method chosen for this example is nice because, if we think of our planar curve as a landscape, then the positively curved areas are the hills and the negatively curved areas are the valleys.
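To make this concrete, here is a small numerical sketch (an addition for illustration, not part of the text; the function name and point sampling are invented). It approximates k at a point by the reciprocal of the radius of the circle through three nearby points on the curve; the sign, hill versus valley, would then be attached according to whatever convention is chosen, as discussed above:

import math

def approximate_curvature(p0, p1, p2):
    # Curvature of the circle through three nearby points on the curve:
    # k = 1/R, where R = abc / (4 * area) is the circumradius.
    a = math.dist(p0, p1)
    b = math.dist(p1, p2)
    c = math.dist(p0, p2)
    area2 = abs((p1[0] - p0[0]) * (p2[1] - p0[1]) -
                (p1[1] - p0[1]) * (p2[0] - p0[0]))   # twice the triangle's area
    if area2 == 0:
        return 0.0               # collinear points: a straight stretch, k = 0
    return (2.0 * area2) / (a * b * c)   # 1/R = 4 * area / (a*b*c)

# Three close points on a circle of radius 2: k should be about 1/2.
pts = [(2 * math.cos(t), 2 * math.sin(t)) for t in (0.0, 0.05, 0.1)]
print(approximate_curvature(*pts))       # about 0.5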
An interesting feature of looking at a curve in this way is that, were we one-dimensional beings living on the curve, we would not notice that it is curved at all. This is called an intrinsic view. The only thing that can be measured intrinsically on a curve is its length, and length alone tells us nothing about how curvy a one-dimensional object is.
Remember that the way we quantified the curvature of this curve was to compare it to a circle in the plane. Now, as one-dimensional beings, this requires envisioning one more dimension than would be available to our perception. The curvature becomes apparent only when the curve is viewed by an observer not on the curve itself—that is, one who can see it extrinsically in two dimensions.
Using this system, we can meaningfully talk about any curve in a plane, and we know from previous discussions that once we understand something in a lower-dimensional setting, we can generalize our thinking to a higher-dimensional setting. In this case, instead of talking about plane curves, we will return to our curved surfaces, such as the sphere and the pseudosphere.
Let's take a moment to compare and contrast our plane curve and our curved surface. Our plane curve, though drawn in a two-dimensional plane, is actually only a one-dimensional object. This is because, if you were an ant living on this curve, you would only have the option of traveling forward or backward. Because of this, you wouldn't even really know that your line was curved.
A curved surface, on the other hand, is two-dimensional. If you were an ant living on it, you could move forward, backward, right, or left. As you can see, however, a curved surface requires a third dimension to represent it extrinsically, just as a one-dimensional planar curve requires a second dimension for its extrinsic representation. We said that an ant on a plane curve cannot experience this second dimension and, thus, has no idea that his world is curved. Is the same true for an ant on the surface, however? It can't experience the third dimension, but might it still be able to find out if its world is curved?
To resolve this, we need to find a way to apply our concept of the osculating circle to a curved 2-D surface. Actually, we can begin the same way as before.
Let's pick a point on our surface and define the normal (remember, that's a line that is exactly perpendicular to the surface at this point). If it helps, imagine the plane that is tangent at this one point as a flat meadow, and envision the normal as a tree growing straight up in the middle of the meadow. Now that we have both our point and our normal set, we can look at slices of the surface that contain both the point and the normal.
It is clear that each of these slices through the surface will show a slightly different curve, yet all of them contain our point of interest. So, which of these slices is "the" curvature at this point? We have so many possibilities to choose from!
One path toward a solution involves considering the extreme values—in other words, the curve that is most positively curved and the curve that is most negatively curved. We call these the "principal curvatures." If we were then to take the average of these two quantities, we would have a mean curvature for this point.
Would an ant on this surface be able to find, or develop an awareness of, these principal curvatures? To do so, it would have to have some idea of a plane that is perpendicular to the plane of its current existence. The complicating factor here is that the ant has no idea that another perpendicular direction can even exist! It will have a great deal of difficulty trying to figure out curves that can only be seen with the aid of a perspective that it can't have.
All hope is not lost, however. Again, we can turn to Gauss. His Theorema Egregium says that there is a type of curvature that is intrinsic to a surface. That is, it can be perceived by one who lives on the surface. Usually, this curvature, called the Gaussian curvature, is simply defined as the product of the two principal curvatures. For our example here, however, that will not be good enough, because our ant can't even know the principal curvatures!
Instead of trying to find the principal curvatures, the ant can draw a circle on his surface and look at the ratio of the circumference to the diameter. This ratio is often known as pi, and in flat space it is about 3.14159. We usually consider pi to be a universal constant, and it can be, but that depends on which universe we are talking about. In a Euclidean universe, pi is indeed constant. In non-Euclidean universes, however, the value of pi depends on where exactly the circle is drawn—it's not a constant at all!
Consider a circular trampoline. The circumference of this trampoline is fixed, but the webbing in the center is flexible and can be thought of as a surface. When no one is standing on the trampoline, the ratio of its circumference to its diameter is indeed pi. Now, consider what happens when someone stands in the middle of the trampoline: the fabric stretches and the diameter, as measured on the surface, increases.
The circumference, however, remains unchanged as the surface is stretched. This causes the ratio of the circumference to the diameter, pi , to decrease. Our ant could indeed detect such a distortion! This would be positive curvature.
For an example of how an ant could detect the curvature of a surface, consider a globe. If we draw a small circle near the north pole, it will be more or less indistinguishable from circles drawn on a flat plane. Now draw a circle that is a bit bigger, say at the 45th parallel. This line is halfway between the north pole and the equator. Its radius, as measured on the surface, will be considerably longer in proportion to the circumference of the circle than was the case for the small polar circle. Therefore, the ratio of circumference to diameter will be smaller—in other words, the diameter is larger relative to the circumference.
Now consider the circle represented by the equator. The diameter of this circle, as measured on the surface, will be half the length of the circle! This would mean that for the equator, pi is equal to 2. This is quite a discrepancy from the customary 3.14… value, and it indicates that we must be on a curved surface. Negative curvature can be visualized as a saddle. Such a surface has more circumference for a given radius (and, hence, diameter) than we would expect with either flat or positive curvature.
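A small Python sketch (added here for illustration, not from the original text) reproduces the ant's measurement on a unit sphere: a circle drawn at polar angle theta around the pole has circumference 2*pi*sin(theta), while its diameter measured along the surface is 2*theta:

import math

def surface_pi(theta):
    # Ratio of circumference to on-surface diameter for a circle drawn
    # at polar angle theta (in radians) around the pole of a unit sphere.
    circumference = 2 * math.pi * math.sin(theta)
    surface_diameter = 2 * theta
    return circumference / surface_diameter

print(surface_pi(0.01))          # about 3.1415: tiny circles look flat
print(surface_pi(math.pi / 4))   # about 2.83: the circle at the 45th parallel
print(surface_pi(math.pi / 2))   # 2.0: the equator, as claimed above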
Gaussian curvature is not as concerned with determining specific values of pi as it is with measuring how pi changes as the radius changes. The more curved a surface is, the faster pi will change for circles of increasing radius.
This idea, that there are certain properties that can be measured regardless of how our curve sits in space, was important in our topology unit and, as we have just seen, it plays a significant role in our discussion of curvature as well. These intrinsic properties of a surface—or the generalization of a surface, a manifold—are definable and measurable without regard to any external frame of reference. The Gaussian curvature is such a property, but the principal curvatures are not.
Recall that to find the principal curvatures, one must take perpendicular slices, which requires that our surface sit in some higher-dimensional space. This is an extrinsic view. The fact that the Gaussian curvature of a surface, as computed by the principal curvatures, yields an intrinsic quantity is quite remarkable. In fact, it is known to this day as "Gauss's Theorema Egregium," meaning "Gauss's Remarkable Theorem."
So, a natural question to ask might be: what kind of surface do we live on? We must have an intrinsic view of whatever space we inhabit—indeed, we have no way to get outside of it! A bit of thought, though, will lead to the realization that, unlike ants, we perceive a third dimension, so whatever this is that we are all living on, it is not a 2-D surface, but rather a 3-D manifold. A manifold can be thought of as a higher-dimensional surface, or it might help to think of it as a collection of points that sits in some larger collection of points.
Furthermore, our everyday experience includes a fourth degree of freedom, time. If we consider time to be part and parcel of our reality, then we are really living in a 4-D manifold called "spacetime." So, is our spacetime the 4-D equivalent of a flat, Euclidean plane, or is our reality curved in some spherical or hyperbolic way? For help in exploring this question, we'll turn to the ideas of a certain former patent clerk whose theories permanently altered the way in which we view our universe. First, however, we need to consider what happens when our surface is not as "nice" as a simple, smooth sphere or pseudosphere.
| http://www.learner.org/courses/mathilluminated/units/8/textbook/06.php | 13
71 | Right triangle is a term used by almost everybody at some point, in settings such as the construction business, science and technology, and aeronautics. A right triangle or right-angled triangle is a triangle in which one of the three angles is a right angle, that is, a 90-degree angle. The sides and angles of right triangles are related in many ways, and the formulas, theorems, and properties derived from them are the basis for trigonometry.
We represent the sides of a right triangle as ‘a’, ‘b’, and ‘c’, and these sides satisfy a very important theorem of trigonometry, the Pythagorean theorem (also called Pythagoras' theorem), which states that:
In a right triangle, the square of the length of the largest side, which is known as the hypotenuse (the side opposite the right angle), is equal to the sum of the squares of the lengths of the other two sides, which are known as legs (the two sides that meet at a right angle).
a^2 + b^2 = c^2,
where the largest side is universally denoted by ‘c’ and is known as the hypotenuse. The other two sides of lengths ‘a’ and ‘b’ of a right triangle are called legs or ‘catheti’.
If the lengths of all three sides of a right triangle are integers, then the triangle is known as a Pythagorean triangle, and its side lengths are collectively known as a Pythagorean triple.
There are two types of right-angled triangles, or right triangles:
1. Scalene right-angled triangle: a right triangle with one right angle, two other unequal angles, and no two sides equal in length.
2. Isosceles right-angled triangle: a right triangle with one right angle, two other angles each measuring 45 degrees, and two of its three sides equal in length.
A right triangle is a triangle in which one of the angles equals 90 degrees; if one of the angles of a triangle is a right angle, we say it is a right-angled triangle. Now we will look at the applications of right triangles. Suppose a tall building is given and a ladder is inclined against the building, such that the dis...
A right triangle has three angles: one of them is 90 degrees, the other two are each less than 90 degrees, and the sum of all three angles equals 180 degrees. The three sides of the triangle are related as described by the Pythagorean theorem. A figure of a right-angled triangle is shown below:
In the above figure ...
A right triangle can be defined as a triangle which contains one right angle, that is, a 90-degree angle. All three sides of a right-angled triangle are specifically defined and have different names: base, hypotenuse, and perpendicular. The hypotenuse is the side of the triangle directly opposite the right angle, and it is the lo...
Pythagorean triples (also called right-triangle triples) consist of three positive integers a, b, c such that a^2 + b^2 = c^2. A well-known example of these triples is (3, 4, 5). Any right-angled triangle whose sides are in the same ratio as (3, 4, 5) also yields a Pythagorean triple, for any positive integ...
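For illustration (an addition, not part of the original passage), checking a Pythagorean triple takes only a few lines of Python:

def is_pythagorean_triple(a, b, c):
    # True when three positive integers satisfy a^2 + b^2 = c^2.
    return a * a + b * b == c * c

print(is_pythagorean_triple(3, 4, 5))     # True
print(is_pythagorean_triple(6, 8, 10))    # True: any multiple of a triple is a triple
print(is_pythagorean_triple(4, 5, 6))     # False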
The Pythagorean theorem can only be applied to a right triangle; this theorem gives the relation between the sides of the triangle. As we know, a triangle consists of three sides, and the Pythagor...
A special right triangle is a right triangle having specific characteristics which make its calculations much easier, or for which simple formulas exist. We can classify special right triangles into two categories. In the first, the angles of the right triangle form some simple relationship, such as 45-45-90; the right triangles in this category are "angle-based" right triangle...
When one of the angles of a triangle measures 90 degrees, the triangle is called a right triangle; in other words, every right triangle has a 90-degree angle. As we all know, the geometric mean of two numbers is ...
A triangle is a geometric figure which is made of three straight lines, and the sum of its angles is equal to 180 degrees. A triangle is a right triangle, or right-angled triangle, if any one of its three ... | http://www.tutorcircle.com/right-triangle-t4z8p.html | 13
113 | Worksheet Formulas and Functions
Worksheet formulas are tools used in cells to calculate results.
|How Formulas Calculate Values||A formula is an equation that analyzes data on
a worksheet. Formulas perform operations such as addition, multiplication, and comparison
on worksheet values; they can also combine values.
Formulas can refer to other cells on the same worksheet, cells on other sheets in the same workbook, or cells on sheets in other workbooks.
|The following example adds the value of cell
B4 and 25 and then divides the result by the sum of cells D5, E5, and F5.
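The example formula itself is not shown above; from the description and the breakdown below, it would read:
=(B4+25)/SUM(D5:F5)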
= denotes a formula
B4 is a cell reference
+ is the Addition Operator
/ is a Division Operator
SUM(D5:F5) is a Worksheet Function
D5:F5 is a Range Reference
|Formula Syntax||Formulas calculate values in a specific order
that is known as the syntax. The syntax of the formula describes the process of the calculation.
A formula in Microsoft Excel begins with an equal sign (=), followed by what the formula calculates. For example, the following formula subtracts 1 from 5. The result of the formula is then displayed in the cell.
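The formula in question, not shown above, would simply be:
=5-1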
|Formula Cell References||A formula can refer to a cell. If you want one
cell to contain the same value as another cell, enter an equal sign followed by the
reference to the cell.
The cell that contains the formula is known as a dependent cell -- its value depends upon the value in another cell. Whenever the cell that the formula refers to changes, the cell that contains the formula also changes.
The following formula multiplies the value in cell B15 by 5. The formula will recalculate whenever the value in cell B15 changes.
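The formula described here, which was dropped from the page, would presumably be:
=B15*5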
Formulas can refer to cells or ranges of cells, or to names or labels that represent cells or ranges.
|Worksheet Function||Microsoft Excel contains many predefined, or built-in, formulas known as functions. Functions can be used to perform simple or complex calculations. The most common function in worksheets is the SUM function, which is used to add ranges of cells. Although you can create a formula to calculate the total value of a few cells that contain values, the SUM worksheet function calculates several ranges of cells.|
|Frequently Used Formulas||The following are examples of some commonly
used formulas in Microsoft Excel.
To calculate the current balance for the first transaction (cell F7):
Calculates the running balance in a checkbook register. In this example, assume that cell D7 contains the current transaction's deposit, cell E7 contains any withdrawal amount, and cell F6 contains the previous balance. As you enter new transactions, copy this formula to the cell that contains the current balance for the new transaction.
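The formula itself was lost from the original table; given the cells described, the balance formula in cell F7 would be:
=F6+D7-E7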
To display the full name in the format
To display the full name in the format
Joins a first name stored in one cell with a last name stored in another cell. In this example, assume that cell D5 contains the first name, and cell E5 contains the last name.
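The two name formats referred to above were dropped from the text; assuming they are "First Last" and "Last, First", the corresponding formulas would be:
=D5&" "&E5
=E5&", "&D5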
Increases a numeric value stored in one cell by a percentage, such as 5 percent.
In this example, assume that cell F5 contains the original value.
If the percentage amount is stored in a cell (for example, cell F2):
The reference to cell F2 is an absolute cell reference so that the formula can be copied to other cells without changing the reference to F2.
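The formulas themselves are not shown above; plausible versions are:
=F5*(1+5%) (increase by a fixed 5 percent)
=F5*(1+$F$2) (increase by the percentage stored in cell F2)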
Creates a total value for one range based on a value in another range.
For example, for every cell in the range B5:B25 that contains the value "Northwind", you want to calculate the total for the corresponding cells in the range F5:F25.
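The formula was dropped from the original table; with the built-in SUMIF function it would be:
=SUMIF(B5:B25,"Northwind",F5:F25)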
Creates a total value for one range based on two conditions.
For example, you want to calculate the total value of the cells in F5:F25 where B5:B25 contains "Northwind" and the range C5:C25 contains the region name "Western". Note This is an array formula and must be entered by pressing CTRL+SHIFT+ENTER.
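The array formula is not shown above; one standard way to write it is:
=SUM(IF((B5:B25="Northwind")*(C5:C25="Western"),F5:F25))
entered by pressing CTRL+SHIFT+ENTER.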
Counts the number of occurrences of a value in a range of cells.
For example, the number of cells in the range B5:B25 that contain the text "Northwind".
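The missing formula, using the built-in COUNTIF function, would be:
=COUNTIF(B5:B25,"Northwind")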
Counts the number of occurrences of a value in a range of cells, based on a value in another range
For example, the number of rows in the range B5:B25 that contain the text "Northwind" and the text "Western" in the range C5:C25. Note This is an array formula and must be entered by pressing CTRL+SHIFT+ENTER.
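One standard way to write this missing array formula is:
=SUM(IF((B5:B25="Northwind")*(C5:C25="Western"),1,0))
entered by pressing CTRL+SHIFT+ENTER.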
|Calculation Operators in Formulas||Operators specify the type of calculation that you want to perform on the elements of a formula. Microsoft Excel includes four different types of calculation operators: arithmetic, comparison, text, and reference.|
|Arithmetic Operators||Arithmetic operators perform basic
mathematical operations such as addition, subtraction, or multiplication; combine numbers;
and produce numeric results.
operator Meaning Example
+ (plus sign) Addition 3+3
- (minus sign) Subtraction 3-1
* (asterisk) Multiplication 3*3
/ (forward slash) Division 3/3
% (percent sign) Percent 20%
^ (caret) Exponentiation 3^2
(the same as 3*3)
|Comparison Operators||Comparison operators compare two values and
then produce the logical value TRUE or FALSE.
operator Meaning Example
= (equal sign) Equal to A1=B1
> (greater than sign) Greater than A1>B1
< (less than sign) Less than A1<B1
>= (greater than or equal to sign)
Greater than or equal to A1>=B1
<= (less than or equal to sign)
Less than or equal to A1<=B1
<> (not equal to sign) Not equal to A1<>B1
|Text Operator||The text operator "&" combines
one or more text values to produce a single piece of text.
Text operator & (ampersand)
Meaning Connects, or concatenates, two
values to produce one
continuous text value
Example "North" & "wind" produces "Northwind"
|Reference Operators||Reference operators combine ranges of cells
Operator Meaning Example
: (colon) Range operator, which produces one reference to all the cells between two references, including the two references B5:B15
, (comma) Union operator, which combines multiple references into one reference SUM(B5:B15,D5:D15)
(single space) Intersection operator, which produces one reference to cells common to two references SUM(B5:B15 A7:D7)
In this example, cell B7 is common to both ranges.
|Using Cell and Range References||A reference identifies a cell or a range of
cells on a worksheet and tells Microsoft Excel where to look for the values or data you
want to use in a formula.
With references, you can use data contained in different parts of a worksheet in one formula or use the value from one cell in several formulas.
You can also refer to cells on other sheets in the same workbook, to other workbooks, and to data in other programs. References to cells in other workbooks are called external references. References to data in other programs are called remote references.
By default, Microsoft Excel uses the A1 reference style, which labels columns with letters (A through IV, for a total of 256 columns) and labels rows with numbers (1 through 65536).
To refer to a cell, enter the column letter followed by the row number. For example, D50 refers to the cell at the intersection of column D and row 50. To refer to a range of cells, enter the reference for the cell in the upper-left corner of the range, a colon (:), and then the reference to the cell in the lower-right corner of the range. The following are examples of references.
To refer to Use
The cell in column A and row 10 A10
The range of cells in column A and
rows 10 through 20 A10:A20
The range of cells in row 15 and
columns B through E B15:E15
All cells in row 5 5:5
All cells in rows 5 through 10 5:10
All cells in column H H:H
All cells in columns H through J H:J
Depending on the task you want to perform in Microsoft Excel, you can use either relative cell references, which are references to cells relative to the position of the formula (D11), or absolute references ($D$11), which are cell references that always refer to cells in a specific location.
You can use the labels of columns and rows on a worksheet to refer to the cells within those columns and rows, or you can create descriptive names to represent cells, ranges of cells, formulas, or constant values.
If you want to analyze data in the same cell or range of cells on multiple worksheets within the workbook, use a 3-D reference. A 3-D reference includes the cell or range reference, preceded by a range of worksheet names. Microsoft Excel uses any worksheets stored between the starting and ending names of the reference.
|Using the Formula Palette||When you create a formula that contains a
function, the Formula Palette helps you enter worksheet functions.
As you enter a function into the formula, the Formula Palette displays the name of the function, each of its arguments, a description of the function and each argument, the current result of the function, and the current result of the entire formula. To display the Formula Palette, click the = (equal sign) Edit Formula in the formula bar.
You can use the Formula Palette to edit functions in formulas. Select a cell that contains a formula, and then click Edit Formula (=) to display the Formula Palette. The first function in the formula and each of its arguments are displayed in the palette. You can edit the first function or edit another function in the same formula by clicking in the formula bar anywhere within the function.
|Using Functions to Calculate Values||Functions are predefined formulas that perform
calculations by using specific values, called arguments, in a particular order, called the syntax.
For example, the SUM function adds values or ranges of cells, and the PMT function calculates the loan payments based on an interest rate, the length of the loan, and the principal amount of the loan.
Arguments can be numbers, text, logical values such as TRUE or FALSE, arrays, error values such as #N/A, or cell references. The argument you designate must produce a valid value for that argument. Arguments can also be constants, formulas, or other functions.
The syntax of a function begins with the function name, followed by an opening parenthesis, the arguments for the function separated by commas, and a closing parenthesis. If the function starts a formula, type an equal sign (=) before the function name.
As you create a formula that contains a function, the Formula Palette will assist you.
|Using the Formula Palette to Enter and Edit Formulas||When you create a formula that contains a
function, the Formula Palette helps you enter worksheet functions. As you enter a function
into the formula, the Formula Palette displays the name of the function, each of its
arguments, a description of the function and each argument, the current result of the
function, and the current result of the entire formula. To display the Formula Palette,
click = in the formula bar. An alternative way to start the Formula Palette, from the
Insert Menu, choose Function.
You can use the Formula Palette to edit functions in formulas. Select a cell that contains a formula, and then click = to display the Formula Palette. The first function in the formula and each of its arguments are displayed in the palette. You can edit the first function or edit another function in the same formula by clicking in the formula bar anywhere within the function. Worksheet functions are calculation tools that can be used on worksheets to perform decision-making, action-taking, and value-returning operations automatically.
|The functions are listed by category, such as "Financial", "Math & Trig", or "Statistical". When you select a function from the list box, the definition of the function and of its arguments will automatically appear for you, as well as the correct placement of commas and parentheses.|
|Types of Arguments||Arguments are the information that a function
uses to produce a new value or perform an action. Arguments are always located to the
right of the function name and are enclosed in parentheses. Most arguments are expected to
be of a certain data type.
Arguments to a function can be any of the following:
Workbook Functions Listed by Category
|When you need to analyze whether values in a
list meet a specific condition, or criteria, you can use a database worksheet function.
For example, in a list that contains sales information, you can count all the rows or
records in which the sales are greater than 1,000 but less than 2,500.
Some database and list management worksheet functions have names that begin with the letter "D." These functions, also known as Dfunctions, have three arguments: database, field, and criteria.
|With date and time functions, you can analyze and work with date and time values in formulas. For example, if you need to use the current date in a formula, use the TODAY worksheet function, which returns the current date based on your computer's system clock.|
|The engineering worksheet functions perform
engineering analysis. Most of these functions are of three types:
Note The engineering functions are provided by the Analysis ToolPak. If an engineering worksheet function is not available, run the Setup program to install the Analysis ToolPak. After you install the Analysis ToolPak, you must enable it by using the Add-Ins command on the Tools menu.
|Financial functions perform common business
calculations, such as determining the payment for a loan, the future value or net present
value of an investment, and the values of bonds or coupons.
Common arguments for the financial functions include:
|You can use the logical functions either to
see whether a condition is true or false or to check for multiple conditions.
For example, you can use the IF function to determine whether a condition is true or false: One value is returned if the condition is true, and a different value is returned if the condition is false.
|When you need to find values in lists or
tables or when you need to find the reference of a cell, you can use the lookup and
reference worksheet functions.
For example, to find a value in a table by matching a value in the first column of a table, use the VLOOKUP worksheet function. To determine the position of a value in a list, use the MATCH worksheet function.
|With math and trigonometry functions, you can perform simple and complex mathematical calculations, such as calculating the total value for a range of cells or the total value for a range of cells that meet a condition in another range of cells, or round numbers.|
|Statistical worksheet functions perform
statistical analysis on ranges of data.
For example, a statistical worksheet function can provide statistical information about a straight line plotted through a group of values, such as the slope of the line and the y-intercept, or about the actual points that make up the straight line.
|With text functions, you can manipulate text
strings in formulas.
For example, you can change the case or determine the length of a text string. You can also join, or concatenate, a date to a text string. The following formula is an example of how you can use the TODAY function with the TEXT function to create a message that contains the current date and formats the date in the "dd-mmm-yy" number format.
="Budget report as of "&TEXT(TODAY(),"dd-mmm-yy")
| http://www.unt.edu/training/Excel97/Excel97-functions.htm | 13
134 | Common Lisp, abbreviated CL, is a general-purpose, multi-paradigm programming language. It's a dialect of the Lisp programming language developed to standardize the divergent variants of Lisp which predated it. The Common Lisp specification is published as an ANSI standard. Several implementations of the Common Lisp standard are available, both commercial and open source.
- Object oriented
- Iterative compilation into efficient run-time programs
Common Lisp includes the Common Lisp Object System (CLOS), an object system that supports multimethods and method combinations. It is extensible through standard features such as Lisp macros (compile-time code rearrangement accomplished by the program itself) and reader macros (extension of syntax to give special meaning to characters reserved for users for this purpose).
As a dialect of Lisp, Common Lisp uses S-expressions to denote both code and data structure. Function and macro calls are written as lists, with the name of the function first, as in these examples:
(+ 2 2) ; adds 2 and 2, yielding 4.
(defvar *x*)          ; Ensures that a variable *x* exists,
                      ; without giving it a value. The asterisks are part of
                      ; the name. The symbol *x* is also hereby endowed with
                      ; the property that subsequent bindings of it are dynamic,
                      ; rather than lexical.
(setf *x* 42.1)       ; sets the variable *x* to the floating-point value 42.1
;; Define a function that squares a number:
(defun square (x) (* x x))

;; Execute the function:
(square 3) ; Returns 9
;; The 'let' construct creates a scope for local variables. Here
;; the variable 'a' is bound to 6 and the variable 'b' is bound
;; to 4. Inside the 'let' is a 'body', where the last computed value is returned.
;; Here the result of adding a and b is returned from the 'let' expression.
;; The variables a and b have lexical scope, unless the symbols have been
;; marked as special variables (for instance by a prior DEFVAR).
(let ((a 6) (b 4))
  (+ a b)) ; returns 10
Data Types
Common Lisp has many data types, more than many other languages.
Scalar Types
Number types include integers, ratios, floating-point numbers, and complex numbers. Common Lisp uses bignums to represent numerical values of arbitrary size and precision. The ratio type represents fractions exactly, a facility not available in many languages. Common Lisp automatically coerces numeric values among these types as appropriate.
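For example (these forms are additions for illustration; all are standard Common Lisp):

(/ 1 3)       ; => 1/3, an exact ratio, not a rounded float
(+ 1/3 2/3)   ; => 1
(expt 2 100)  ; => 1267650600228229401496703205376, a bignum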
The symbol type is common to Lisp languages, but largely unknown outside them. A symbol is a unique, named data object with several parts: name, value, function, property list and package. Of these, value cell and function cell are the most important. Symbols in Lisp are often used similarly to identifiers in other languages: to hold value of a variable; however there are many other uses. Normally, when a symbol is evaluated, its value is returned. Some symbols evaluate to themselves, for example all symbols in keyword package are self-evaluating. Boolean values in Common Lisp are represented by the self-evaluating symbols T and NIL. Common Lisp has namespaces for symbols, called 'packages'.
Data Structures
As in almost all other Lisp dialects, lists in Common Lisp are composed of conses, sometimes called cons cells or pairs. A cons is a data structure with two slots, called its car and cdr. A list is a linked chain of conses. Each cons's car refers to a member of the list (possibly another list). Each cons's cdr refers to the next cons -- except for the last cons, whose cdr refers to the nil value. Conses can also easily be used to implement trees and other complex data structures; though it is usually advised to use structure or class instances instead. It is also possible to create circular data structures with conses.
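A short illustration (added here; standard Common Lisp):

(cons 1 2)      ; => (1 . 2), a single cons cell
(list 1 2 3)    ; => (1 2 3), built as (cons 1 (cons 2 (cons 3 nil)))
(car '(1 2 3))  ; => 1, the first element
(cdr '(1 2 3))  ; => (2 3), the rest of the list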
Common Lisp supports multidimensional arrays, and can dynamically resize arrays if required. Multidimensional arrays can be used for matrix mathematics. A vector is a one-dimensional array. Arrays can carry any type as members (even mixed types in the same array) or can be specialized to contain a specific type of members, as in a vector of integers. Many implementations can optimize array functions when the array used is type-specialized. Two type-specialized array types are standard: a string is a vector of characters, while a bit-vector is a vector of bits.
Hash tables store associations between data objects. Any object may be used as key or value. Hash tables, like arrays, are automatically resized as needed.
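For example (an illustrative addition; standard Common Lisp):

(defvar *capitals* (make-hash-table :test #'equal)) ; EQUAL test so string keys compare by contents
(setf (gethash "France" *capitals*) "Paris")
(gethash "France" *capitals*) ; => "Paris", T (the second value reports that the key was found)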
Packages are collections of symbols, used chiefly to separate the parts of a program into namespaces. A package may export some symbols, marking them as part of a public interface. Packages can use other packages.
Classes are similar to structures, but offer more dynamic features and multiple inheritance. (See CLOS.) Classes were added to Common Lisp relatively late, and there is some conceptual overlap with structures. Objects created from classes are called instances. A special case is generic functions: generic functions are both functions and instances.
Common Lisp supports first-class functions. For instance, it is possible to write functions that take other functions as arguments or return functions as well. This makes it possible to describe very general operations.
The Common Lisp library relies heavily on such higher-order functions. For example, the sort function takes a relational operator as an argument and key function as an optional keyword argument. This can be used not only to sort any type of data, but also to sort data structures according to a key.
(sort (list 5 2 6 3 1 4) #'>)
; Sorts the list using the > function as the relational operator.
; Returns (6 5 4 3 2 1).

(sort (list '(9 A) '(3 B) '(4 C)) #'< :key #'first)
; Sorts the list according to the first element of each sub-list.
; Returns ((3 B) (4 C) (9 A)).
The evaluation model for functions is very simple. When the evaluator encounters a form (F A1 A2...) then it is to assume that the symbol named F is one of the following:
- A special operator (easily checked against a fixed list)
- A macro operator (must have been defined previously)
- The name of a function (default), which may either be a symbol, or a sub-form beginning with the symbol lambda.
If F is the name of a function, then the arguments A1, A2, ..., An are evaluated in left-to-right order, and the function is found and invoked with those values supplied as parameters.
Defining Functions
The macro defun defines functions. A function definition gives the name of the function, the names of any arguments, and a function body:
(defun square (x) (* x x))
Function definitions may include declarations, which provide hints to the compiler about optimization settings or the data types of arguments. They may also include documentation strings (docstrings), which the Lisp system may use to provide interactive documentation:
(defun square (x)
  "Calculates the square of the single-float x."
  (declare (single-float x)
           (optimize (speed 3) (debug 0) (safety 1)))
  (* x x))
Anonymous functions (function literals) are defined using lambda expressions, e.g. (lambda (x) (* x x)) for a function that squares its argument. Lisp programming style frequently uses higher-order functions for which it is useful to provide anonymous functions as arguments.
Local functions can be defined with flet and labels.
(flet ((square (x) (* x x))) (square 3))
There are a number of other operators related to the definition and manipulation of functions. For instance, a function may be recompiled with the compile operator. (Some Lisp systems run functions in an interpreter by default unless instructed to compile; others compile every entered function on the fly.)
Defining Generic Functions and Methods
The macro defgeneric defines generic functions. The macro defmethod defines methods. Generic functions are a collection of methods.
Methods can specialize their parameters over classes or objects.
When a generic function is called, multiple-dispatch will determine the correct method to use.
(defgeneric add (a b))
(defmethod add ((a number) (b number)) (+ a b))
(defmethod add ((a string) (b string)) (concatenate 'string a b))
(add "Zippy" "Pinhead") ; returns "ZippyPinhead" (add 2 3) ; returns 5
Generic Functions are also a first class data type. There are many more features to Generic Functions and Methods than described above.
The Function Namespace
The namespace for function names is separate from the namespace for data variables. This is a key difference between Common Lisp and Scheme. Operators which define names in the function namespace include defun, flet, labels, defmethod and defgeneric.
To pass a function by name as an argument to another function, one must use the function special operator, commonly abbreviated as #'. The first sort example above refers to the function named by the symbol > in the function namespace, with the code #'>.
Scheme's evaluation model is simpler: there is only one namespace, and all positions in the form are evaluated (in any order) -- not just the arguments. Code written in one dialect is therefore sometimes confusing to programmers more experienced in the other. For instance, many Common Lisp programmers like to use descriptive variable names such as list or string which could cause problems in Scheme as they would locally shadow function names.
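The following illustrative example (not from the original article) shows the two namespaces at work; the local variable list does not disturb the function LIST:

(let ((list '(1 2 3)))  ; 'list' here is a lexical variable
  (list list list))     ; the operator position still finds the LIST function
; => ((1 2 3) (1 2 3))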
Whether a separate namespace for functions is an advantage is a source of contention in the Lisp community. It is usually referred to as the Lisp-1 vs. Lisp-2 debate. Lisp-1 refers to Scheme's model and Lisp-2 refers to Common Lisp's model. These names were coined in a 1988 paper by Richard P. Gabriel and Kent Pitman, which extensively compares the two approaches.
Other Types
Other data types in Common Lisp include:
- Pathnames represent files and directories in the filesystem. The Common Lisp pathname facility is more general than most operating systems' file naming conventions, making Lisp programs' access to files broadly portable across diverse systems.
- Input and output streams represent sources and sinks of binary or textual data, such as the terminal or open files.
- Common Lisp has a built-in pseudo-random number generator (PRNG). Random state objects represent reusable sources of pseudo-random numbers, allowing the user to seed the PRNG or cause it to replay a sequence.
- Conditions are a type used to represent errors, exceptions, and other "interesting" events to which a program may respond.
- Classes are first-class objects, and are themselves instances of classes called metaclasses.
- Readtables are a type of object which control how Common Lisp's reader parses the text of source code. By controlling which readtable is in use when code is read in, the programmer can change or extend the language's syntax.
Like programs in many other programming languages, Common Lisp programs make use of names to refer to variables, functions, and many other kinds of entities. Named references are subject to scope.
The association between a name and the entity which the name refers to is called a binding.
Scope refers to the set of circumstances in which a name is determined to have a particular binding.
Determiners of Scope
The circumstances which determine scope in Common Lisp include:
- the location of a reference within an expression. If it's the leftmost position of a compound, it refers to a special operator or a macro or function binding, otherwise to a variable binding or something else.
- the kind of expression in which the reference takes place. For instance, (GO X) means transfer control to label X, whereas (PRINT X) refers to the variable X. Both scopes of X can be active in the same region of program text, since tagbody labels are in a separate namespace from variable names. A special form or macro form has complete control over the meanings of all symbols in its syntax. For instance in (defclass x (a b) ()), a class definition, the (a b) is a list of base classes, so these names are looked up in the space of class names, and x isn't a reference to an existing binding, but the name of a new class being derived from a and b. These facts emerge purely from the semantics of defclass. The only generic fact about this expression is that defclass refers to a macro binding; everything else is up to defclass.
- the location of the reference within the program text. For instance, if a reference to variable X is enclosed in a binding construct such as a LET which defines a binding for X, then the reference is in the scope created by that binding.
- for a variable reference, whether or not a variable symbol has been, locally or globally, declared special. This determines whether the reference is resolved within a lexical environment, or within a dynamic environment.
- the specific instance of the environment in which the reference is resolved. An environment is a run-time dictionary which maps symbols to bindings. Each kind of reference uses its own kind of environment. References to lexical variables are resolved in a lexical environment, et cetera. More than one environment can be associated with the same reference. For instance, thanks to recursion or the use of multiple threads, multiple activations of the same function can exist at the same time. These activations share the same program text, but each has its own lexical environment instance.
To understand what a symbol refers to, the Common Lisp programmer must know what kind of reference is being expressed, what kind of scope it uses if it is a variable reference (dynamic versus lexical scope), and also the run-time situation: in what environment is the reference resolved, where was the binding introduced into the environment, et cetera.
Kinds of Environment
Some environments in Lisp are globally pervasive. For instance, if a new type is defined, it is known everywhere thereafter. References to that type look it up in this global environment.
One type of environment in Common Lisp is the dynamic environment. Bindings established in this environment have dynamic extent, which means that a binding is established at the start of the execution of some construct, such as a LET block, and disappears when that construct finishes executing: its lifetime is tied to the dynamic activation and deactivation of a block. However, a dynamic binding is not just visible within that block; it is also visible to all functions invoked from that block. This type of visibility is known as indefinite scope. Bindings which exhibit dynamic extent (lifetime tied to the activation and deactivation of a block) and indefinite scope (visible to all functions which are called from that block) are said to have dynamic scope. Common Lisp has support for dynamically scoped variables, which are also called special variables. Certain other kinds of bindings are necessarily dynamically scoped also, such as restarts and catch tags. Function bindings cannot be dynamically scoped (but, in recognition of the usefulness of dynamically scoped function bindings, a portable library exists now which provides them).
Dynamic scope is extremely useful because it adds referential clarity and discipline to global variables. Global variables are frowned upon in computer science as potential sources of error, because they can give rise to ad-hoc, covert channels of communication among modules that lead to unwanted, surprising interactions.
In Common Lisp, a special variable which has only a top-level binding behaves just like a global variable in other programming languages. A new value can be stored into it, and that value simply replaces what is in the top-level binding. Careless replacement of the value of a global variable is at the heart of bugs caused by use of global variables. However, another way to work with a special variable is to give it a new, local binding within an expression. This is sometimes referred to as "rebinding" the variable. Binding a dynamically scoped variable temporarily creates a new memory location for that variable, and associates the name with that location. While that binding is in effect, all references to that variable refer to the new binding; the previous binding is hidden. When execution of the binding expression terminates, the temporary memory location is gone, and the old binding is revealed, with the original value intact. Of course, multiple dynamic bindings for the same variable can be nested.
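A minimal sketch of rebinding (the variable *level* and the function show-level are illustrative names, not library definitions):

(defvar *level* 0)      ; top-level binding of a special variable

(defun show-level ()
  (print *level*))      ; resolved in the dynamic environment at call time

(show-level)            ; prints 0 -- the top-level binding
(let ((*level* 1))      ; establishes a new dynamic binding
  (show-level))         ; prints 1 -- the callee sees the rebinding
(show-level)            ; prints 0 -- the old binding is revealed, value intact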
In Common Lisp implementations which support multithreading, dynamic scopes are specific to each thread of execution. Thus special variables serve as an abstraction for thread local storage. If one thread rebinds a special variable, this rebinding has no effect on that variable in other threads. The value stored in a binding can only be retrieved by the thread which created that binding. If each thread binds some special variable *X*, then *X* behaves like thread-local storage. Among threads which do not rebind *X*, it behaves like an ordinary global: all of these threads refer to the same top-level binding of *X*.
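The following sketch assumes the widely used (but non-standard) bordeaux-threads portability library; the variable *id* is an illustrative name:

(defvar *id* :main)

(defun report-id ()
  (print *id*))

(let ((worker (bt:make-thread
               (lambda ()
                 (let ((*id* :worker)) ; rebinding visible only in this thread
                   (report-id))))))    ; prints :WORKER
  (bt:join-thread worker))

(report-id)                            ; still prints :MAIN in the original thread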
Dynamic variables can be used to extend the execution context with additional context information which is implicitly passed from function to function without having to appear as an extra function parameter. This is especially useful when the control transfer has to pass through layers of unrelated code, which simply cannot be extended with extra parameters to pass the additional data. A situation like this usually calls for a global variable. That global variable must be saved and restored, so that the scheme doesn't break under recursion: dynamic variable rebinding takes care of this. And that variable must be made thread-local (or else a big mutex must be used) so the scheme doesn't break under threads: dynamic scope implementations can take care of this also.
In the Common Lisp library, there are many standard special variables. For instance, all the standard I/O streams are stored in the top-level bindings of well-known special variables. The standard output stream is stored in *standard-output*.
Suppose a function foo writes to standard output:
(defun foo () (format t "Hello, world"))
It would be nice to capture its output in a character string. No problem, just rebind *standard-output* to a string stream and call it:
(with-output-to-string (*standard-output*) (foo))
-> "Hello, world" ; gathered output returned as a string
Common Lisp supports lexical environments. Formally, the bindings in a lexical environment have lexical scope and may have either indefinite extent or dynamic extent, depending on the type of namespace. Lexical scope means that visibility is physically restricted to the block in which the binding is established. References which are not textually (i.e. lexically) embedded in that block simply do not see that binding.
The tags in a TAGBODY have lexical scope. The expression (GO X) is erroneous if it is not actually embedded in a TAGBODY which contains a label X. However, the label bindings disappear when the TAGBODY terminates its execution, because they have dynamic extent. If that block of code is re-entered by the invocation of a lexical closure, it is invalid for the body of that closure to try to transfer control to a tag via GO:
(defvar *stashed*) ;; will hold a function

(tagbody
  (setf *stashed* (lambda () (go some-label)))
  (go end-label) ;; skip the (print "Hello")
 some-label
  (print "Hello")
 end-label)
-> NIL
When the TAGBODY is executed, it first evaluates the setf form which stores a function in the special variable *stashed*. Then the (go end-label) transfers control to end-label, skipping the code (print "Hello"). Since end-label is at the end of the tagbody, the tagbody terminates, yielding NIL. Suppose that the previously remembered function is now called:
(funcall *stashed*) ;; Error!
This situation is erroneous. One implementation's response is an error condition containing the message, "GO: tagbody for tag SOME-LABEL has already been left". The function tried to evaluate (go some-label), which is lexically embedded in the tagbody, and resolves to the label. However, the tagbody isn't executing (its extent has ended), and so the control transfer cannot take place.
Local function bindings in Lisp have lexical scope, and variable bindings also have lexical scope by default. By contrast with GO labels, both of these have indefinite extent. When a lexical function or variable binding is established, that binding continues to exist for as long as references to it are possible, even after the construct which established that binding has terminated. References to lexical variables and functions after the termination of their establishing construct are possible thanks to lexical closures.
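A minimal sketch of indefinite extent (make-counter is an example name, not a standard function): the binding of n survives the termination of the let form because the returned closure still refers to it.

(defun make-counter ()
  (let ((n 0))
    (lambda () (incf n))))   ; the closure captures the lexical binding of N

(defparameter *counter* (make-counter))
(funcall *counter*)          ; => 1
(funcall *counter*)          ; => 2 -- the binding of N persists between calls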
Lexical binding is the default binding mode for Common Lisp variables. For an individual symbol, it can be switched to dynamic scope, either by a local declaration or by a global declaration. The latter may occur implicitly through the use of a construct like DEFVAR or DEFPARAMETER. It is an important convention in Common Lisp programming that special (i.e. dynamically scoped) variables have names which begin and end with an asterisk. If adhered to, this convention effectively creates a separate namespace for special variables, so that variables intended to be lexical are not accidentally made special.
Lexical scope is useful for several reasons.
Firstly, references to variables and functions can be compiled to efficient machine code, because the run-time environment structure is relatively simple. In many cases it can be optimized to stack storage, so opening and closing lexical scopes has minimal overhead. Even in cases where full closures must be generated, access to the closure's environment is still efficient; typically each variable becomes an offset into a vector of bindings, and so a variable reference becomes a simple load or store instruction with a base-plus-offset addressing mode.
Secondly, lexical scope (combined with indefinite extent) gives rise to the lexical closure, which in turn creates a whole paradigm of programming centered around the use of functions being first-class objects, which is at the root of functional programming.
Thirdly, perhaps most importantly, even if lexical closures are not exploited, the use of lexical scope isolates program modules from unwanted interactions. Due to their restricted visibility, lexical variables are private. If one module A binds a lexical variable X, and calls another module B, references to X in B will not accidentally resolve to the X bound in A. B simply has no access to X. For situations in which disciplined interactions through a variable are desirable, Common Lisp provides special variables. Special variables allow for a module A to set up a binding for a variable X which is visible to another module B, called from A. Being able to do this is an advantage, and being able to prevent it from happening is also an advantage; consequently, Common Lisp supports both lexical and dynamic scope.
Macros
A macro in Lisp superficially resembles a function in usage. However, rather than representing an expression which is evaluated, it represents a transformation of the program source code.
Macros allow Lisp programmers to create new syntactic forms in the language. For instance, this macro provides the until loop form, which may be familiar from languages such as Perl:
(defmacro until (test &body body)
  `(do () (,test) ,@body))

;; example
(until (= (random 10) 0)
  (write-line "Hello"))
All macros must be expanded before the source code containing them can be evaluated or compiled normally. Macros can be considered functions that accept and return abstract syntax trees (Lisp S-expressions). These functions are invoked before the evaluator or compiler to produce the final source code. Macros are written in normal Common Lisp, and may use any Common Lisp (or third-party) operator available. The backquote notation used above is provided by Common Lisp specifically to simplify the common case of substitution into a code template.
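The expansion can be inspected with the standard function macroexpand-1; for the until macro defined above, the result looks roughly like this (exact printing varies by implementation):

(macroexpand-1 '(until (= (random 10) 0) (write-line "Hello")))
;; => (DO () ((= (RANDOM 10) 0)) (WRITE-LINE "Hello")), T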
Variable capture and shadowing
Common Lisp macros are capable of what is commonly called variable capture, where symbols in the macro-expansion body coincide with those in the calling context, allowing the programmer to create macros wherein various symbols have special meaning. The term variable capture is somewhat misleading, because all namespaces are vulnerable to unwanted capture, including the operator and function namespace, the tagbody label namespace, catch tag, condition handler and restart namespaces.
Variable capture can introduce software defects. This happens in one of the following two ways:
- In the first way, a macro expansion can inadvertently make a symbolic reference which the macro writer assumed would resolve in a global namespace, but the code where the macro is expanded happens to provide a local, shadowing definition which steals that reference. Let this be referred to as type 1 capture.
- The second way, type 2 capture, is just the opposite: some of the arguments of the macro are pieces of code supplied by the macro caller, and those pieces of code are written such that they make references to surrounding bindings. However, the macro inserts these pieces of code into an expansion which defines its own bindings, which accidentally capture some of these references.
The Scheme dialect of Lisp provides a macro-writing system which provides the referential transparency that eliminates both types of capture problem. This type of macro system is sometimes called "hygienic", in particular by its proponents (who regard macro systems which do not automatically solve this problem as unhygienic).
In Common Lisp, macro hygiene is ensured in one of two different ways.
One approach is to use gensyms: guaranteed-unique symbols which can be used in a macro-expansion without threat of capture. The use of gensyms in a macro definition is a manual chore, but macros can be written which simplify the instantiation and use of gensyms. Gensyms solve type 2 capture easily, but they are not applicable to type 1 capture in the same way, because the macro expansion cannot rename the interfering symbols in the surrounding code which capture its references. Gensyms could be used to provide stable aliases for the global symbols which the macro expansion needs. The macro expansion would use these secret aliases rather than the well-known names, so redefinition of the well-known names would have no ill effect on the macro.
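A small sketch of the technique (with-timing is a hypothetical macro, not part of the standard): because the expansion introduces its own binding, a gensym is used for that binding's name so it cannot capture references in the caller-supplied body.

(defmacro with-timing (&body body)
  (let ((start (gensym "START")))          ; a fresh, uninterned symbol
    `(let ((,start (get-internal-real-time)))
       ,@body
       (- (get-internal-real-time) ,start))))

;; Even a BODY that happens to use its own variable named START cannot
;; collide with the expansion's binding, because the gensym is a distinct symbol.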
Another approach is to use packages. A macro defined in its own package can simply use internal symbols in that package in its expansion. The use of packages deals with type 1 and type 2 capture.
However, packages don't solve the type 1 capture of references to standard Common Lisp functions and operators. The reason is that the use of packages to solve capture problems revolves around the use of private symbols (symbols in one package, which are not imported into, or otherwise made visible in, other packages). The Common Lisp library symbols, by contrast, are external, and are frequently imported into or made visible in user-defined packages.
The following is an example of unwanted capture in the operator namespace, occurring in the expansion of a macro:
;; expansion of UNTIL makes liberal use of DO
(defmacro until (expression &body body)
  `(do () (,expression) ,@body))

;; macrolet establishes lexical operator binding for DO
(macrolet ((do (...) ... something else ...))
  (until (= (random 10) 0)
    (write-line "Hello")))
The UNTIL macro will expand into a form which calls DO, intended to refer to the standard Common Lisp macro DO. However, in this context, DO may have a completely different meaning, so UNTIL may not work properly.
Common Lisp solves the problem of the shadowing of standard operators and functions by forbidding their redefinition. Because it redefines the standard operator DO, the preceding is actually a fragment of non-conforming Common Lisp, which allows implementations to diagnose and reject it.
Common Lisp Object System
Common Lisp includes a toolkit for object oriented programming, the Common Lisp Object System or CLOS, which is one of the most powerful object systems available in any language. Originally proposed as an add-on, CLOS was adopted as part of the ANSI standard for Common Lisp. CLOS is a dynamic object system with multiple dispatch and multiple inheritance, and differs radically from the OOP facilities found in static languages such as C++ or Java. As a dynamic object system, CLOS allows changes at runtime to generic functions and classes. Methods can be added and removed, classes can be added and redefined, objects can be updated for class changes and the class of objects can be changed.
CLOS has been integrated into ANSI Common Lisp. Generic functions can be used like normal functions and are a first-class data type. Every CLOS class is integrated into the Common Lisp type system, and many Common Lisp types have a corresponding class. There is further potential for the use of CLOS within Common Lisp: the specification does not say whether conditions are implemented with CLOS, and pathnames and streams could likewise be implemented with it. These further usage possibilities are not part of the ANSI standard, but actual Common Lisp implementations do use CLOS for pathnames, streams, input/output, conditions, the implementation of CLOS itself, and more.
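A brief sketch of CLOS in use (the account classes and the withdraw generic function are illustrative definitions, not part of the standard library):

(defclass account ()
  ((balance :initarg :balance :accessor balance)))

(defclass savings-account (account) ())

(defgeneric withdraw (acct amount))

(defmethod withdraw ((acct account) amount)
  (decf (balance acct) amount))

;; An auxiliary :BEFORE method adds a check for one subclass only.
(defmethod withdraw :before ((acct savings-account) amount)
  (assert (<= amount (balance acct))))

(withdraw (make-instance 'savings-account :balance 100) 40)  ; => 60

Because withdraw is an ordinary first-class generic function object, it can be passed to funcall or apply like any other function.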
Comparison With Other Lisps
Common Lisp is most frequently compared with, and contrasted to, Scheme—if only because they are the two most popular Lisp dialects. Scheme predates CL, and comes not only from the same Lisp tradition but from some of the same engineers — Guy L. Steele, Jr., with whom Gerald Jay Sussman designed Scheme, chaired the standards committee for Common Lisp.
Common Lisp is a general-purpose programming language, in contrast to Lisp variants such as Emacs Lisp and AutoLISP which are embedded extension languages in particular products. Unlike many earlier Lisps, Common Lisp (like Scheme) uses lexical variable scope.
Most of the Lisp systems whose designs contributed to Common Lisp—such as ZetaLisp and Franz Lisp—used dynamically scoped variables in their interpreters and lexically scoped variables in their compilers. Scheme introduced the sole use of lexically-scoped variables to Lisp; an inspiration from ALGOL 68 which was widely recognized as a good idea. CL supports dynamically-scoped variables as well, but they must be explicitly declared as "special". There are no differences in scoping between ANSI CL interpreters and compilers.
Common Lisp is sometimes termed a Lisp-2 and Scheme a Lisp-1, referring to CL's use of separate namespaces for functions and variables. (In fact, CL has many namespaces, such as those for go tags, block names, and loop keywords.) There is a long-standing controversy between CL and Scheme advocates over the tradeoffs involved in multiple namespaces. In Scheme, it is (broadly) necessary to avoid giving variables names which clash with functions; Scheme functions frequently have arguments named lis, lst, or lyst so as not to conflict with the system function list. However, in CL it is necessary to explicitly refer to the function namespace when passing a function as an argument -- which is also a common occurrence, as in the sort example above.
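For example, passing a comparison function to the standard sort function requires naming its function binding explicitly with the #' (function) syntax:

(sort (list 3 1 2) #'<)   ; #'< denotes the function binding of the symbol <
;; => (1 2 3)
;; A Scheme programmer would simply write < in that argument position.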
CL also differs from Scheme in its handling of boolean values. Scheme uses the special values #t and #f to represent truth and falsity. CL follows the older Lisp convention of using the symbols T and NIL, with NIL standing also for the empty list. In CL, any non-NIL value is treated as true by conditionals such as if, just as non-#f values are in Scheme. This allows some operators to serve both as predicates (answering a boolean-valued question) and as returning a useful value for further computation.
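For instance, the standard function member returns the tail of the list beginning with the sought item, which is simultaneously a true value and useful data:

(member 3 '(1 2 3 4))     ; => (3 4), a non-NIL (hence true) value
(if (member 3 '(1 2 3 4))
    "found"
    "missing")            ; => "found"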
Lastly, the Scheme standards documents require tail-call optimization, which the CL standard does not. Most CL implementations do offer tail-call optimization, although often only when the programmer uses an optimization directive. Nonetheless, common CL coding style does not favor the ubiquitous use of recursion that Scheme style prefers -- what a Scheme programmer would express with tail recursion, a CL user would usually express with an iterative expression in do, dolist, loop, or (more recently) with the iterate package.
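A small, hedged illustration of the stylistic difference (sum-list is an example name): where a Scheme programmer might write a tail-recursive accumulator, idiomatic Common Lisp typically reaches for an iteration construct:

(defun sum-list (xs)
  (let ((total 0))
    (dolist (x xs total)  ; TOTAL is returned when the list is exhausted
      (incf total x))))

(sum-list '(1 2 3 4))     ; => 10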
Common Lisp is defined by a specification (like Ada and C) rather than by a single implementation (like Perl). There are many implementations, and the standard spells out areas in which they may validly differ.
In addition, implementations tend to come with library packages, which provide functionality not covered in the standard. Free Software libraries have been created to support such features in a portable way, most notably Common-Lisp.net and the Common Lisp Open Code Collection project.
Common Lisp has been designed to be implemented by incremental compilers. Standard declarations to optimize compilation (such as function inlining) are proposed in the language specification. Most Common Lisp implementations compile source code to native machine code. Some implementations offer block compilers. Some implementations can create (optimized) stand-alone applications. Others compile to bytecode, which reduces speed but eases binary-code portability. There are also compilers that compile Common Lisp code to C code. The misconception that Lisp is a purely-interpreted language is most likely due to the fact that Lisp environments provide an interactive prompt and that functions are compiled one-by-one, in an incremental way. With Common Lisp incremental compilation is widely used.
Commercial implementations include:
- Allegro Common Lisp
- Corman Common Lisp, which is particularly adapted to Microsoft Windows
- Scieneer Common Lisp, which is designed for high-performance scientific computing.
Freely redistributable implementations include:
- CMUCL, originally from Carnegie Mellon University, now maintained as Free Software by a group of volunteers. CMUCL uses a fast native-code compiler. It is available on Linux and BSD for Intel x86; Linux for Alpha; Mac OS X for Intel x86 and PowerPC; and Solaris, IRIX, and HP-UX on their native platforms.
- Steel Bank Common Lisp (SBCL), a branch from CMUCL. "Broadly speaking, SBCL is distinguished from CMU CL by a greater emphasis on maintainability." SBCL runs on the platforms CMUCL does, except HP/UX; in addition, it runs on Linux for PowerPC, SPARC, and MIPS, and has experimental support for running on Windows. SBCL does not use an interpreter by default; all expressions are compiled to native code unless the user switches the interpreter on.
- CLISP, a bytecode-compiling implementation, portable and runs on a number of Unix and Unix-like systems (including Mac OS X), as well as Microsoft Windows and several other systems.
- GNU Common Lisp (GCL), the GNU Project's Lisp compiler. Not yet fully ANSI-compliant, GCL is however the implementation of choice for several large projects including the mathematical tools Maxima, AXIOM and ACL2. GCL runs on Linux under eleven different architectures, and also under Windows, Solaris, and FreeBSD.
- Embeddable Common Lisp (ECL), designed to be embedded in C programs.
- Clozure CL, previously “OpenMCL”, a free software / open source fork of Macintosh Common Lisp. As the name implies, OpenMCL was originally native to the Macintosh. The renamed Clozure CL now runs on Mac OS X, Darwin, FreeBSD, and Linux for PowerPC and Intel x86-64. A port to Intel x86-32 for the preceding operating systems is in progress, as well as a port to 64-bit Windows.
- Movitz implements a Lisp environment for x86 computers without relying on any underlying OS.
- Macintosh Common Lisp 5.2 for Apple Macintosh computers with a PowerPC processor running Mac OS X.
- The Poplog system implements a version of CL, with POP-11, and optionally Prolog, and Standard ML (SML), allowing mixed language programming. For all, the implementation language is POP-11, which is compiled incrementally. It also has an integrated Emacs-like editor that communicates with the compiler.
- Armed Bear Common Lisp is a CL implementation that runs on the Java Virtual Machine. It includes a compiler to Java byte code, and allows access to Java libraries from CL. Armed Bear CL is a component of the Armed Bear J Editor, though it can be used independently.
- CLforJava is a CL implementation in Java that is actively developed at the College of Charleston.
Common Lisp is used in many commercial applications, including the Yahoo! Store web-commerce site, which originally involved Paul Graham and was later rewritten in C++ and Perl. Other notable examples include:
- Jak and Daxter video games for the PlayStation 2
- OpenMusic is an object-oriented visual programming environment based on Common Lisp, used in computer-assisted composition.
- ITA Software's low fare search engine, used by travel websites such as Orbitz and Kayak.com and airlines such as American Airlines, Continental Airlines and US Airways.
There also exist open-source applications written in Common Lisp, such as:
- ACL2, a full-featured theorem prover for an applicative variant of Common Lisp.
- Maxima, a sophisticated computer algebra system.
- Stumpwm, a tiling, keyboard driven X11 Window Manager written entirely in Common Lisp.
References
- Document page at ANSI website.
- Reddy, Abhishek (2008-08-22). Features of Common Lisp.
- Unicode support. The Common Lisp Wiki. Retrieved on 2008-08-21.
- History and Copyright. Steel Bank Common Lisp.
- Clozure CL.
- Armed Bear Common Lisp.
- "In January 2003, Yahoo released a new version of the editor written in C++ and Perl. It's hard to say whether the program is no longer written in Lisp, though, because to translate this program into C++ they literally had to write a Lisp interpreter: the source files of all the page-generating templates are still, as far as I know, Lisp code." Paul Graham, Beating the Averages.
External Links
- The CLiki, a Wiki for Free Software Common Lisp systems running on Unix-like systems.
- Common Lisp software repository.
- The Common Lisp directory - information repository for all things Common Lisp.
- History. Common Lisp HyperSpec.
- Lisping at JPL
- The Nature of Lisp - essay that examines Lisp by comparison with XML.
- Common Lisp Implementations: A Survey - survey of maintained Common Lisp implementations.
Prepared by Nicole Strangman & Tracey Hall
National Center on Accessing the General Curriculum
Note: Links have been updated on 8/24/09
Many people associate virtual reality and computer simulations with science fiction, high-tech industries, and computer games; few associate these technologies with education. But virtual reality and computer simulations have been in use as educational tools for some time. Although they have mainly been used in applied fields such as aviation and medical imaging, these technologies have begun to edge their way into the primary classroom. There is now a sizeable research base addressing the effectiveness of virtual reality and computer simulations within school curriculum. The following five sections present a definition of these technologies, a sampling of different types and their curriculum applications, a discussion of the research evidence for their effectiveness, useful Web resources, and a list of referenced research articles.
Definition and Types
Computer simulations are computer-generated versions of real-world objects (for example, a skyscraper or chemical molecules) or processes (for example, population growth or biological decay). They may be presented in 2-dimensional, text-driven formats, or, increasingly, 3-dimensional, multimedia formats. Computer simulations can take many different forms, ranging from computer renderings of 3-dimensional geometric shapes to highly interactive, computerized laboratory experiments.
Virtual reality is a technology that allows students to explore and manipulate computer-generated, 3-dimensional, multimedia environments in real time. There are two main types of virtual reality environments. Desktop virtual reality environments are presented on an ordinary computer screen and are usually explored by keyboard, mouse, wand, joystick, or touchscreen. Web-based "virtual tours" are an example of a commonly available desktop virtual reality format. Total immersion virtual reality environments are presented on multiple, room-size screens or through a stereoscopic, head-mounted display unit. Additional specialized equipment such as a DataGlove (worn as one would a regular glove) enable the participant to interact with the virtual environment through normal body movements. Sensors on the head unit and DataGlove track the viewer's movements during exploration and provide feedback that is used to revise the display enabling real-time, fluid interactivity. Examples of virtual reality environments are a virtual solar system that enables users to fly through space and observe objects from any angle, a virtual science experiment that simulates the growth of microorganisms under different conditions, a virtual tour of an archeological site, and a recreation of the Constitutional Convention of 1787.
Applications Across Curriculum Areas
Computer simulations and virtual reality offer students the unique opportunity of experiencing and exploring a broad range of environments, objects, and phenomena within the walls of the classroom. Students can observe and manipulate normally inaccessible objects, variables, and processes in real time. The ability of these technologies to make what is abstract and intangible concrete and manipulable suits them to the study of natural phenomena and abstract concepts: "(VR) bridges the gap between the concrete world of nature and the abstract world of concepts and models" (Yair, Mintz, & Litvak, 2001, p. 294). This makes them a welcome alternative to the conventional study of science and mathematics, which requires students to develop understandings based on textual descriptions and 2-D representations.
The concretizing of objects (atoms, molecules, and bacteria, for example) makes learning more straightforward and intuitive for many students and supports a constructivist approach to learning. Students can learn by doing rather than, for example, reading. They can also test theories by developing alternative realities. This greatly facilitates the mastery of difficult concepts, for example the relation between distance, motion, and time (Yair et al.).
It is therefore not surprising that math and science applications are the most frequently found in the research literature. Twenty-two of the thirty-one studies surveyed in this review of the literature investigated applications in science; six studies investigated math applications. In contrast, only one study investigated applications in the humanities curriculum (specifically, history and reading). The two remaining studies addressed generalized skills independent of a curriculum area.
It is important to keep in mind, however, when reading this review, that virtual reality and computer simulations offer benefits that could potentially extend across the entire curriculum. For example, the ability to situate students in environments and contexts unavailable within the classroom could be beneficial in social studies, foreign language and culture, and English curricula, enabling students to immerse themselves in historical or fictional events and foreign cultures and explore them first hand. With regard to language learning, Schwienhorst (2002) notes numerous benefits of virtual reality, including the allowance of greater self-awareness, support for interaction, and the enabling of real-time collaboration (systems can be constructed to allow individuals in remote locations to interact in a virtual environment at the same time).
The ability of virtual reality and computer simulations to scaffold student learning (Jiang & Potter, 1994; Kelly, 1997-98), potentially in an individualized way, is another characteristic that well suits them to a range of curriculum areas. An illustrative example of the scaffolding possibilities is a simulation program that records data and translates between notation systems for the student, so that he or she can concentrate on the targeted skills of learning probability (Jiang & Potter, 1994). The ability for students to revisit aspects of the environment repeatedly also helps put students in control of their learning. The multisensory nature can be especially helpful to students who are less visual learners and those who are better at comprehending symbols than text. With virtual environments, students can encounter abstract concepts directly, without the barrier of language or symbols. Moreover, computer simulations and virtual environments are highly engaging: "There is simply no other way to engage students as virtual reality can" (Sykes & Reid, 1999, p. 61). Thus, although math and science are the most frequently researched applications of these two technologies, humanities applications clearly merit the same consideration.
Evidence for Effectiveness
In the following sections, we discuss the evidence for the effectiveness of virtual reality and computer simulations based on an extensive survey of the literature published between 1980 and 2002. This survey included 31 research studies conducted in K-12 education settings and published in peer-reviewed journals (N=27) or presented at conferences (N=3) (it was necessary to include conference papers due to the low number of virtual reality articles in peer-reviewed journals). Every attempt was made to be fully inclusive but some studies could not be accessed in a timely fashion. Although the research base is somewhat small, particularly in the case of virtual reality, it provides some useful insights.
Numerous commentaries and/or descriptions of virtual reality projects in education have been published. Research studies are still relatively rare. We identified only 3 research investigations of virtual reality in the K-12 classroom: one journal article (Ainge, 1996) and two conference papers (Song, Han, & Yul Lee, 2000; Taylor, 1997).
Taylor's (1997) research was directed at identifying variables that influence students' enjoyment of virtual reality environments. After visiting a virtual reality environment, the 2,872 student participants (elementary, middle, and high school) rated the experience by questionnaire. Their responses were indicative of high levels of enjoyment throughout most of the sample. However, responses also indicated the need for further development of the interface both to improve students' ability to see in the environment and to reduce disorientation. Both factors were correlated with ratings of the environment's presence or authenticity, which itself was tightly associated with enjoyment. It's uncertain whether these technical issues remain a concern with today's virtual reality environments, which have certainly evolved since the time this study was published.
Whether or not virtual reality technology has yet been optimized to promote student enjoyment, it appears to have the potential to favorably impact the course of student learning. Ainge (1996) and Song et al. both provide evidence that virtual reality experiences can offer an advantage over more traditional instructional experiences at least within certain contexts. Ainge showed that students who built and explored 3D solids with a desktop virtual reality program developed the ability to recognize 3D shapes in everyday contexts, whereas peers who constructed 3D solids out of paper did not. Moreover, students working with the virtual reality program were more enthusiastic during the course of the study (which was, however, brief - 4 sessions). Song et al. reported that middle school students who spent part of their geometry class time exploring 3-D solids were significantly more successful at solving geometry problems that required visualization than were peers taught geometry by verbal explanation. Both studies, however, seem to indicate that the benefits of virtual reality experiences are often limited to very specific skills. For example, students taught by a VR approach were not any more effective at solving geometry problems that did not require visualization (Song et al.).
Clearly, the benefits of virtual reality experiences need to be defined in a more comprehensive way. For example, although numerous authors have documented student enjoyment of virtual reality (Ainge, 1996; Bricken & Byrne, 1992; Johnson, Moher, Choo, Lin, & Kim, 2002; Song et al.), it is still unclear whether virtual reality can offer more than transient appeal for students. Also, the contexts in which it can be an effective curriculum enhancement are still undefined. In spite of the positive findings reported here, at this point it would be premature to make any broad or emphatic recommendations regarding the use of virtual reality as a curriculum enhancement.
There is substantial research reporting computer simulations to be an effective approach for improving students' learning. Three main learning outcomes have been addressed: conceptual change, skill development, and content area knowledge.
Conceptual change. One of the most interesting curriculum applications of computer simulations is the generation of conceptual change. Students often hold strong misconceptions be they historical, mathematical, grammatical, or scientific. Computer simulations have been investigated as a means to help students confront and correct these misconceptions, which often involve essential learning concepts. For example, Zietsman & Hewson (1986) investigated the impact of a microcomputer simulation on students' misconceptions about the relationship between velocity and distance, fundamental concepts in physics. Conceptual change in the science domain has been the primary target for these investigations, although we identified one study situated within the mathematics curriculum (Jiang & Potter, 1994). All 3 studies that we directly reviewed (Jiang & Potter, 1994; Kangassalo, 1994; Zietsman & Hewson, 1986) supported the potential of computer simulations to help accomplish needed conceptual change. Stratford (1997) discusses additional evidence of this kind (Brna, 1987; Gorsky & Finegold, 1992) in his review of computer-based model research in precollege science classrooms (Stratford, 1997).
The quality of this research is, however, somewhat uneven. Lack of quantitative data (Brna, 1987; Jiang & Potter, 1994; Kangassalo, 1994) and control group(s) (Brna, 1987; Gorsky & Finegold, 1992; Jiang & Potter, 1994; Kangassalo, 1994) are recurrent problems. Nevertheless, there is a great deal of corroboration in this literature that computer simulations have considerable potential in helping students develop richer and more accurate conceptual models in science and mathematics.
Skill development. A more widely investigated outcome measure in the computer simulation literature is skill development. Of 12 studies, 11 reported that the use of computer simulations promoted skill development of one kind or another. The majority of these simulations involved mathematical or scientific scenarios (for example, a simulation of chemical molecules and a simulation of dice and spinner probability experiments), but a few incorporated other topic areas such as history (a digital text that simulated historical events and permitted students to make decisions that influenced outcomes) and creativity (a simulation of Lego block building). Skills reported to be improved include reading (Willing, 1988), problem solving (Jiang & Potter, 1994; Rivers & Vockell, 1987), science process skills (e.g. measurement, data interpretation, etc.; (Geban, Askar, & Ozkan, 1992; Huppert, Lomask, & Lazarowitz, 2002), 3D visualization (Barnea & Dori, 1999), mineral identification (Kelly, 1997-98), abstract thinking (Berlin & White, 1986), creativity (Michael, 2001), and algebra skills involving the ability to relate equations and real-life situations (Verzoni, 1995).
Seven (Barnea & Dori, 1999; Berlin & White, 1986; Huppert et al.; Kelly, 1997-98; Michael, 2001; Rivers & Vockell, 1987) of these twelve studies incorporated control groups enabling comparison of the effectiveness of computer simulations to other instructional approaches. Generally, they compared simulated explorations, manipulations, and/or experiments to hands-on versions involving concrete materials. The results of all 7 studies suggest that computer simulations can be implemented to as good or better effect than existing approaches.
There are interpretive questions, however, that undercut some of these studies' findings. One of the more problematic issues is that some computer simulation interventions have incorporated instructional elements or supports (Barnea & Dori, 1999; Geban et al.; Kelly, 1997-98; Rivers & Vockell, 1987; Vasu & Tyler, 1997) that are not present in the control treatment intervention. This makes it more difficult to attribute any advantage of the experimental treatment to the computer simulation per se. Other design issues, such as failure to randomize group assignment (Barnea & Dori, 1999; Kelly, 1997-98; Rivers & Vockell, 1987; Vasu & Tyler, 1997; Verzoni, 1995; none of these studies specified that they used random assignment) and the use of ill-documented, qualitative observations (Jiang & Potter, 1994; Mintz, 1993; Willing, 1988), weaken some of the studies. When several of these flaws are present in the same study (Barnea & Dori, 1999; Kelly, 1997-98; Rivers & Vockell, 1987; Vasu & Tyler, 1997), the findings should be weighted more lightly. Even excluding such studies, however, the evidence in support of computer simulations still outweighs that against them.
Two studies reported no effect of computer simulation use on skill development (Mintz, 1993, hypothesis testing; Vasu & Tyler, 1997, problem solving). However, neither of these studies is particularly strong. Mintz (1993) presented results from a small sample of subjects and based conclusions on only qualitative, observational data. Vasu & Tyler (1997) provide no detailed information about the nature of the simulation program investigated in their study or how students interacted with it, making it difficult to evaluate their findings.
Thus, as a whole, there is good support for the ability of computer simulations to improve various skills, particularly science and mathematics skills. Important questions do remain. One of the more important questions future studies should address is the degree to which two factors, computer simulations' novelty and training for involved teachers and staff, are fundamental to realizing the benefits of this technology.
Content area knowledge. Another potential curriculum application for computer simulations is the development of content area knowledge. According to the research literature, computer programs simulating topics as far ranging as frog dissection, a lake's food chain, microorganismal growth, and chemical molecules, can be effectively used to develop knowledge in relevant areas of the curriculum. Eleven studies in our survey investigated the impact of working with a computer simulation on content area knowledge. All 11 researched applications for the science curriculum, targeting, for example, knowledge of frog anatomy and morphology, thermodynamics, chemical structure and bonding, volume displacement, and health and disease. Students who worked with computer simulations significantly improved their performance on content-area tests (Akpan & Andre, 2000; Barnea & Dori, 1999; Geban et al.; Yildiz & Atkins, 1996). Working with computer simulations was in nearly every case as effective (Choi & Gennaro, 1987; Sherwood & Hasselbring, 1985/86) or more effective (Akpan & Andre, 2000; Barnea & Dori, 1999; Geban et al.; Huppert et al.; Lewis, Stern, & Linn, 1993; Woodward, Carnine, & Gersten, 1988) than traditional, hands-on materials for developing content knowledge.
Only two studies (Bourque & Carlson, 1987; Kinzer, Sherwood, & Loofbourrow, 1989) report an inferior outcome relative to traditional learning methods. Both studies failed to include a pretest, without which it is difficult to interpret posttest scores. Students in the simulation groups may have had lower posttest scores and still have made greater gains over the course of the experiment because they started out with less knowledge. Or they may have had more knowledge than their peers, resulting in a ceiling effect. Moreover, Bourque & Carlson (1987) designed their experiment in a way that may have confounded the computer simulation itself with other experimental variables. Students who worked off the computer took part in activities that were not parallel to those experienced by students working with computer simulations. Only students in the hands-on group were engaged in a follow-up tutorial and post-lab problem solving exercise.
Experimental flaws such as these are also problematic for many of the 11 studies that support the benefits of using computer simulations. Neither Choi & Gennaro (1987), Sherwood and Hasselbring (1985/86), nor Woodward et al. included a pretest. Like Bourque & Carlson (1987, above), both Akpan & Andre (2000) and Barnea & Dori (1999) introduced confounding experimental variables by involving the computer simulation group in additional learning activities (filling out a keyword and definition worksheet and completing a self study, review and quiz booklet, respectively). In addition, four studies (Barnea & Dori, 1999; Huppert et al.; Woodward et al.; Yildiz & Atkins, 1996) did not clearly indicate that they randomized assignment, and two did not include a control group (Lewis et al.; Yildiz & Atkins, 1996).
Little of the evidence to support computer simulations' promotion of content knowledge is ironclad. Although further study is important to replicate these findings, the quality of evidence is nevertheless on par with that supporting the use of traditional approaches. Taking this perspective, there is reasonably good support for the practice of using computer simulations as a supplement to or in place of traditional approaches for teaching content knowledge. However, the same questions mentioned above in discussing the skill development literature linger here and need to be addressed in future research.
Factors Influencing Effectiveness
Factors influencing the effectiveness of computer simulations have not been extensively or systematically examined. Below we identify a number of likely candidates, and describe whatever preliminary evidence exists for their influence on successful learning outcomes.
At this point, it appears that computer simulations can be effectively implemented across a broad range of grade levels. Successful learning outcomes have been demonstrated for elementary (Berlin & White, 1986; Jiang & Potter, 1994; Kangassalo, 1994; Kinzer et al.; Park, 1993; Sherwood & Hasselbring, 1985/86; Vasu & Tyler, 1997; Willing, 1988), junior high (Akpan & Andre, 2000; Choi & Gennaro, 1987; Jackson, 1997; Jiang & Potter, 1994; Lewis et al.; Michael, 2001; Roberts & Blakeslee, 1996; Verzoni, 1995; Willing, 1988) and high school students (Barnea & Dori, 1999; Bourque & Carlson, 1987; Geban et al.; Huppert et al.; Jiang & Potter, 1994; Kelly, 1997-98; Mintz, 1993; Rivers & Vockell, 1987; Ronen & Eliahu, 1999; Willing, 1988; Woodward et al.; Yildiz & Atkins, 1996; Zietsman & Hewson, 1986). Because the majority of studies (14/27) have targeted junior high and high school populations, there is weightier support for these grade levels. But although fewer in numbers studies targeting students in grades 4 through 6 are also generally supportive of the benefits of using computer simulations. At this point, the early grades, 1-3 (Kangassalo, 1994) are too poorly represented in the research base to draw any conclusions about success of implementation.
Only one study has directly examined the impact of grade level on the effectiveness of using computer simulations. Berlin & White (1986) found no significant difference in the effectiveness of this approach for 2nd and 4th grade students. In the absence of other direct comparisons, a meta-analysis of existing research to determine the average effect size for different grade levels would help to determine whether this is a strong determinant of the effectiveness of computer simulations.
Looking across students, even just those considered to represent the "middle" of the distribution, there are considerable differences in their strengths, weaknesses, and preferences (Rose & Meyer, 2002). Characteristics at both the group and individual level have the potential to influence the impact of any learning approach. Educational group, prior experience, gender, and a whole variety of highly specific traits such as intrinsic motivation and cognitive operational stage are just a few examples. Although attention to such factors has been patchy at best, there is preliminary evidence to suggest that some of these characteristics may influence the success of using computer simulations.
With respect to educational group, the overwhelming majority of research studies have sampled subjects in the general population, making it difficult to determine whether educational group in any way influences the effectiveness of computer simulations. Only two studies (Willing, 1988; Woodward et al.) specifically mention the presence of students with special needs in their sample. Neither study gets directly at the question of whether educational group influences the effectiveness of computer simulations. However, they do make some interesting and important observations. Willing (1988) describes her sample of 222 students as comprised mostly of students who were considered average, but it also included special education students, students with learning disabilities, and students who were gifted. Although Willing does not thoroughly address educational group in her presentation and analysis of the results, she does share a comment by one of the teachers that even less able readers seemed at ease reading when using the interactive historical text. Findings from Woodward et al. suggest not only that computer simulations can be effective for students with learning disabilities but that they may help to normalize these students' performance to that of more average-performing peers. Students with learning disabilities who worked with a computer simulation outperformed students without learning disabilities who did not receive any treatment. In contrast, untreated students without learning disabilities outperformed students with learning disabilities who took part in a control intervention consisting of conventional, teacher-driven activities.
Like educational group, gender is a factor sometimes associated with disparate achievement, particularly in math and science subject areas. In relation to the impact of computer simulations, however, it does not appear to be an important factor. Four studies in our review (Barnea & Dori, 1999; Berlin & White, 1986; Choi & Gennaro, 1987; Huppert et al.) directly examined the influence of gender on the outcome of working with computer simulations, and none demonstrated any robust relationship. In fact, a study by Choi & Gennaro (1987) suggests that when gender gaps in achievement exist, they persist during the use of computer simulations.
In contrast, there is evidence, although at this point isolated, that prior achievement can strongly influence the effectiveness of computer simulations. Yildiz & Atkins (1996) examined how prior achievement in science influences the outcome of working with different types of multimedia computer simulations. Students' prior achievement clearly affected the calculated effect size but how so depended on the type of computer simulation. These findings raise the possibility of very complex interactions between prior achievement and the type of computer simulation being used. They suggest that both factors may be essential for teachers to consider when weighing the potential benefits of implementing computer simulations.
Huppert et al. investigated whether students' cognitive stage might influence how much they profit from working with a computer simulation. Working with a computer simulation of microorganismal growth differentially affected students' development of content understanding and science process skills depending on their cognitive stage. Interestingly, those at the highest cognitive stage (formal operational) experienced little improvement from working with the simulation, whereas students at the concrete or transitional operational stages notably improved. Thus, reasoning ability may be another factor influencing the usefulness of a computer simulation to a particular student.
There are many more potentially important variables that have rarely been considered or even described in research studies. For example, only a small number of studies have specified whether subjects are experienced (Choi & Gennaro, 1987; Yildiz & Atkins, 1996) or not (Bourque & Carlson, 1987) with using computers in the classroom. None have directly examined this variable's impact. More thoroughly describing the characteristics of sample populations would be an important first step toward sorting out such potentially important factors.
Teacher Training and Support
Given the unevenness of teachers' technology preparedness, training and support in using computer simulations seems like a potentially key factor in the effectiveness of using computer simulations in the classroom. As is the case with many of the other variables we've mentioned, few studies have described with much clarity or detail the nature of teacher training and support. Exceptions are Rivers & Vockell (1987) and Vasu & Tyler (1997), both of whom give quite thorough descriptions of staff development and available resources. This is another area that merits further investigation.
It has been suggested that combining computer simulation work with hands-on work may produce a better learning outcome than either method alone. Findings from Bourque & Carlson (1987) support this idea. They found that students performed best when they engaged in hands-on experimentation followed by computer simulation activities. However, Akpan & Andre (2000) report that students learned as much doing the simulated dissection as they did doing both the simulated and real dissection. This is an interesting question but one that will require additional research to squarely address.
Links to Learn More About Virtual Reality & Computer Simulations
Virtual Reality Society
The Virtual Reality Society (VRS), founded in 1994 is an international group dedicated to the discussion and advancement of virtual reality and synthetic environments. Its activities include the publication of an international journal, the organization of special interest groups, conferences, seminars and tutorials. This web site contains a rich history of article listings and publications on Virtual Reality.
Virtual Reality and Education Laboratory
This is the homepage of Virtual Reality and Education Laboratory at East Carolina University in Greenville, North Carolina. The Virtual Reality and Education Laboratory (VREL) was created in 1992 to research virtual reality (VR) and its applications to the K-12 curriculum. Many projects are being conducted through VREL by researchers Veronica Pantelidis and Dr. Lawrence Auld. This web site provides links to VR in the Schools, an internationally referred journal distributed via the Internet. There are additional links to some VR sites recommended by these authors as exemplars and interesting sites.
Virtual Reality Resources for K-12 Education
The NCSA Education & Outreach Group has compiled this web site, which links to multiple sites containing information and educational materials on Virtual Reality for kindergarten through grade 12 classrooms.
Virtual Reality in Education: Learning in Virtual Reality
In collaboration with the National Center for Supercomputing Applications, the University of Illinois at Urbana-Champaign has created a five-year program to examine virtual reality (VR) in the classroom. One of the goals behind this program is to discover how well students can generalize their VR learning experiences outside of the classroom. This web site provides an explanation of the project with links to additional resources and Projects.
Human Interface Technology Laboratory, Washington Technology Center in Seattle
This web site is the home of the Human Interface Technology Laboratory of the Washington Technology Center in Seattle, Washington. Various Virtual Reality (VR) articles and books are referenced. In addition to the list of articles and books, the technology center provides a list of internet resources including organizations that are doing research on VR, VR simulation environments and projects about various aspects of virtual reality.
Applied Computer Simulation Lab, Oregon Research Institute
This web site is from the Oregon Research Institute. The researchers at the Applied Computer Simulation Lab have created virtual reality (VR) programs that help physically disabled children operate motorized wheelchairs successfully. This website connects the reader to articles and information about these VR projects. Another project that this team is working on involves creating virtual reality programs for deaf blind students to help them "learn orientation and mobility skills in three dimensional acoustical spaces."
Ainge, D. J. (1996). Upper primary students constructing and exploring three dimensional shapes: A comparison of virtual reality with card nets. Journal of Educational Computing Research, 14(4), 345-369.
Ainge presents information from a study that involved students in grades five, six and seven. The experimental group contained twenty students and the control group contained eleven. The program was the VREAM Virtual Reality Development System, which allows for easy construction of 3D shapes. Ease of using Virtual Reality (VR) and student engagement with VR were observed informally. VR had little impact on shape visualization and name writing, but enhanced recognition. Students had no difficulty in using the VREAM program, and the students' enthusiasm for virtual reality was unanimous and sustained. The author cautions that the positive results from this study must be regarded as tentative because of the small number of participants.
Akpan, J. P., & Andre, T. (2000). Using a computer simulation before dissection to help students learn anatomy. Journal of Computers in Mathematics and Science Teaching, 19 (3), 297-313.
Akpan and Andre examine the prior use of simulation of frog dissection in improving students' learning of frog anatomy and morphology. The study included 127 students ranging in age from 13-15 who were enrolled in a seventh-grade life science course in a middle school. The students had some experience in animal dissection, but no experience in the use of simulated dissection. There were four experimental conditions: simulation before dissection (SBD), dissection before simulation (DBS), simulation-only (SO) or dissection-only (DO). Students completed a pretest three weeks prior to the experiment and a posttest four days after the dissection was completed. Results of the study indicate that students receiving SBD and SO learned significantly more anatomy than students receiving DBS or DO. The authors suggest that computer-based simulations can offer a suitable cognitive environment in which students search for meaning, appreciate uncertainty and acquire responsibility for their own learning.
Barnea, N., & Dori, Y. J. (1999). High-school chemistry students' performance and gender differences in a computerized molecular modeling learning environment. Journal of Science Education and Technology, 8(4), 257-271.
The authors examined a new computerized molecular modeling (CMM) in teaching and learning chemistry for Israeli high schools. The study included three tenth grade experimental classes using the CMM approach and two other classes, who studied the same topic in a traditional approach, served as a control group. The authors investigated the effects of using molecular modeling on students' spatial ability, understanding of new concepts related to geometric and symbolic representations and students' perception of the model concept. In addition, each variable was examined for gender differences. Students in the experimental group performed better than control group students in all three performance areas. In most of the achievement and spatial ability tests no significant gender differences were found, but in some aspects of model perception and verbal argumentation differences existed. Teachers' and students' feedback on the CMM learning environment were found to be positive, as it helped them understand concepts in molecular geometry and bonding.
Berlin, D., & White, A. (1986). Computer simulations and the transition from concrete manipulation of objects to abstract thinking in elementary school mathematics. School Science and Mathematics, 86(6), 468-479.
In this article, the authors investigated the effects of combining interactive microcomputer simulations and concrete activities on the development of abstract thinking in elementary school mathematics. The students represented populations from two different socio-cultural backgrounds, including 57 black suburban students and 56 white rural students. There were three levels of treatment: (a) concrete-only activities, (b) a combination of concrete and computer simulation activities, and (c) computer simulation-only activities. At the end of the treatment period, two paper-and-pencil instruments requiring reflective abstract thought were administered to all the participants. Results indicate that concrete and computer activities have different effects on children depending upon their socio-cultural background and gender. Learners do not react in the same way nor achieve equally well with different modes of learning activities. The authors suggest that mathematics instruction should provide for the students' preferred mode of processing with extension and elaboration in an alternate mode of processing.
Bourque, D. R., & Carlson, G. R. (1987). Hands-on versus computer simulation methods in chemistry. Journal of Chemical Education, 64(3), 232-234.
Bourque and Carlson outline the results of a two-part study on computer-assisted simulation in chemical education. The study focused on examining and comparing the cognitive effectiveness of a traditional hands-on laboratory exercise with a computer-simulated program on the same topic. In addition, the study sought to determine if coupling these two formats in a specific sequence would provide optimum student learning. The participants were 51 students from general chemistry classes in high school, and they worked on microcomputers for the research activities. The students completed both a pretest and posttest. The results indicate that the hands-on experiment format followed by the computer-simulation format provided the highest cumulative scores on the examinations. The authors recommend using computer simulations as part of post-laboratory activities in order to reinforce learning and support the learning process.
Bricken, M., & Byrne, C. M. (1992). Summer students in virtual reality: a pilot study on educational applications of virtual reality technology. Seattle, Washington: Washington University.
The goal of this study was to take a first step in evaluating the potential of virtual reality (VR) as a learning environment. The study took place at a technology-oriented summer day camp for students ages 5-18, where student activities center around hands-on exploration of new technology during one-week sessions. Information on 59 students was gathered during a 7-week period in order to evaluate VR in terms of students' behavior and opinions as they used VR to construct and explore their own virtual worlds. Results indicate that students demonstrated rapid comprehension of complex concepts and skills. They also reported fascination with the software and a high desire to use VR to build expression of their knowledge and imagination. The authors concluded that VR is a significantly compelling creative environment in which to teach and learn.
Brna, P. (1987). Confronting dynamics misconceptions. Instructional Science, 16, 351-379.
The author discusses problems students have with learning about Newtonian dynamics and kinematics, focusing on the assumption that learning is promoted through confronting students with their own misconceptions. Brna explains a computer-based modeling environment entitled DYNLAB and describes a study with high school boys in Scotland employing it.
Choi, B., & Gennaro, E. (1987). The effectiveness of using computer simulated experiments on junior high students' understanding of the volume displacement concept. Journal of Research in Science Teaching, 24(6), 539-552.
Choi and Gennaro compared the effectiveness of microcomputer simulated experiences with that of parallel instruction involving hands-on laboratory experiences for teaching the concept of volume displacement to junior high students. They also assessed the differential effect on students' understanding of the volume displacement using student gender as an additional independent variable. The researchers also compared both treatment groups in degree of retention after 45 days. The participants included 128 students from eighth-grade earth science classes. It was found that the computer-simulated experiences were as effective as hands-on laboratory experiences, and that males having had hands-on laboratory experiences performed better on the posttest than females having had the hands-on laboratory experiences. There were no significant differences in performance when comparing males with females using the computer simulation in the learning of the displacement concept. An ANOVA of the retention test scores revealed that males in both treatment conditions retained knowledge of volume displacement better than females.
Geban, O., Askar, P., & Ozkan, I. (1992). Effects of computer simulations and problem-solving approaches on high school students. Journal of Educational Research, 86(1), 5-10.
The purpose of this study was to investigate the effects of computer-simulated experiments (CSE) and the problem-solving approach on students' chemistry achievement, science process skills and attitudes toward chemistry at the high school level. The sample consisted of 200 ninth-grade students, and the treatment was carried out over nine weeks. The two experimental groups were compared with a control group employing a conventional approach. Four instruments were used in the study: Chemistry Achievement Test, Science Process Skill Test, Chemistry Attitude Scale, and Logical Thinking Ability Test. The results indicate that the computer-simulated experiment approach and the problem-solving approach produced significantly greater achievement in chemistry and science process skills than the conventional approach did. The CSE approach produced significantly more positive attitudes toward chemistry than the other two methods, with the conventional approach being the least effective.
Gorsky, P., & Finegold, M. (1992). Using computer simulations to restructure students' conceptions of force. Journal of Computers in Mathematics and Science Teaching, 11, 163-178.
Gorsky and Finegold report on the development and application of a series of computer programs which simulate the outcomes of students' perceptions regarding forces acting on objects at rest or in motion. The dissonance-based strategy for achieving conceptual change uses an arrow-based vector language to enable students to express their conceptual understanding.
Huppert, J., Lomask, S. M., & Lazarowitz, R. (2002). Computer simulations in the high school: Students' cognitive stages, science process skills and academic achievement in microbiology. International Journal of Science Education, 24(8), 803-821.
This study is based on a computer simulation program entitled "The Growth Curve of Microorganisms," which required 181 tenth-grade biology students in Israel to use problem-solving skills while simultaneously manipulating three independent variables in one simulated environment. The authors hoped to investigate the computer simulation's impact on students' academic achievement and on their mastery of science process skills in relation to cognitive stages. The results indicate that the concrete and transition operational students in the experimental group achieved higher academic achievement than their counterparts in the control group. Girls achieved equally with the boys in the experimental group. Students' academic achievement may indicate the potential impact a computer simulation program can have, enabling students with low reasoning abilities to cope successfully with learning concepts and principles in science that require high cognitive skills.
Jackson, D. F. (1997). Case studies of microcomputer and interactive video simulations in middle school earth science teaching. Journal of Science Education and Technology, 6(2), 127-141.
The author synthesizes the results of three case studies of middle school classrooms in which computer and video materials were used to teach topics in earth and space science through interactive simulations. The cases included a range of middle school grade levels (sixth through eighth), teachers' levels of experience (student teacher through a 16-year veteran), levels of technology use (interactive videodisk), and classroom organization patterns in relation to technological resources (teacher-centered presentations through small-group activities). The author was present in all class sessions and gathered data by performing teacher interviews, videotaping classes, taking interpretive field notes and copying the students' worksheets. In light of these findings, suggestions are made regarding improved design principles for such materials and how middle school science teachers might better conduct lessons using simulations.
Jiang, Z., & Potter, W. D. (1994). A computer microworld to introduce students to probability. Journal of Computers in Mathematics and Science Teaching, 13(2), 197-222.
The objective of this paper is to describe a simulation-orientated computer environment (CHANCE) for middle and high school students to learn introductory probability and a teacher experiment to evaluate its effectiveness. CHANCE is composed of five experimental sub-environments: Coins, Dice, Spinners, Thumbtack and Marbles. The authors desired detailed information from a small sample rather than a large sample so the participants included three boys (a fifth, sixth and eighth grader) and a girl (a junior). They were divided into two groups: Group 1 consisted of the younger students and Group 2 of the older. Each group worked with the investigator on a computer for two 1-hour sessions per week for five weeks. The results indicate that the teaching and learning activities carried out in the experimental environment provided by CHANCE were successful and supported the authors' belief that CHANCE has great potential in teaching and learning introductory probability. The authors caution generalizing these results, as there were only four students included in the study.
Johnson, A., Moher, T., Choo, Y., Lin, Y. J., & Kim, J. (2002). Augmenting elementary school education with VR. IEEE Computer Graphics and Applications, March/April, 6-9.
This article reviews a project in which ImmersaDesk applications have been employed in an elementary school for two years to determine if virtual environments (VEs) have helped children make sense of mathematical and scientific phenomena. Since the beginning of the project, more than 425 students from grades K-6 have used the ImmersaDesk. The ImmersaDesk contains a 6-foot by 4-foot screen that allows 3-4 students to interact with each other while interacting with the VE on the screen. The positive feedback from the students and teachers indicates that VR can successfully augment science education as well as help to equalize the learning environment by engaging students at all ability levels.
Kangassalo, M. (1994). Children's independent exploration of a natural phenomenon by using a pictorial computer-based simulation. Journal of Computing in Childhood Education, 5(3/4), 285-297.
This paper is one part of an investigation whose aim was to examine to what extent the independent use of pictorial computer simulations of a natural phenomenon could help in the organizing of the phenomenon and the forming of an integrated picture of it. The author concentrated on describing children's exploration processes, specifically those of 11 seven-year-old first-graders. The selected natural phenomenon was the variation in sunlight and the heat of the sun as experienced on earth, related to the positions of the earth and the sun in space. The children were divided into four groups according to what kind of conceptual models they had before the use of the simulation. Children's conceptual models before the use of the simulation formed a basis from which the exploration of the phenomenon was activated. Children used the computer simulation over four weeks, and each child differed as to the amount of operating time within each session (average of 65 minutes). The more developed and integrated a child's conceptual model, the more his or her exploration involved purposeful investigation and experimentation.
Kelly, P. R. (1997-98). Transfer of learning from a computer simulation as compared to a laboratory activity. Journal of Educational Technology Systems, 26(4), 345-351.
In this article, Kelly discusses the computer program he wrote that simulates a mineral identification activity in an Earth Science classroom. The research question was to determine whether students who used the computer simulation could transfer their knowledge and perform as well on the New York State Regents Earth Science Exam as students who received instruction in a laboratory-based exercise. The results indicated no significant difference in the test scores of the two groups.
Kinzer, C. K., Sherwood, R. D., & Loofbourrow, M. C. (1989). Simulation software vs. expository text: a comparison of retention across two instructional tools. Reading Research and Instruction, 28(2), 41-49.
The authors examined the performance differences between two fifth grade classes. The first class was taught material about a food chain through a computer simulation and the second class was taught the same material by reading an expository text. The results indicated that the children in the second class, the expository text condition, did significantly better on the posttest than the students who received the information through a computer simulation program.
Lewis, E. L., Stern, J. L., & Linn, M. C. (1993). The effect of computer simulations on introductory thermodynamics understanding. Educational Technology, 33(1), 445-458.
The authors' purpose was to demonstrate the impact of computer simulations on eighth grade students' ability to generalize information about thermodynamics to naturally occurring problems. Five classes studied the reformulated Computer as Lab Partner (CLP) curriculum, which presents naturally occurring thermal events through computer simulation. The results indicate that the students understood the simulations and successfully integrated the thermodynamics simulation information into real-world processes.
Michael, K. Y. (2001). The effect of a computer simulation activity versus a hands-on activity on product creativity in technology education. Journal of Technology Education, 13(1), 31-43.
The purpose of this study was to determine if computer simulated activities had a greater effect on product creativity than hands-on activity. Michael defined a creative product as "one that possesses some measure of both unusualness (originality) and usefulness." He hypothesized that there would be no difference in product creativity between the computer simulated group and the hands-on group. The subjects were seventh grade technology education students. The experimental group used Gryphon Bricks, a virtual environment that allows students to manipulate Lego-type bricks. The control group used Classic Lego Bricks. The Creative Product Semantic Scale (CPSS) was used to determine product creativity. The results indicated no differences between the two groups in regard to product creativity, originality, or usefulness.
Mintz, R. (1993). Computerized simulations as an inquiry tool. School Science and Mathematics, 93(2), 76-80.
The purpose of this study was to determine whether being exposed to computerized simulations expands and improves students' classroom inquiry work. The subjects in this study were fourteen and fifteen years old. The virtual environment consisted of a fish pond in which students had three consecutive assignments, with a new variable added to each assignment. The subjects were asked to formulate hypotheses, conduct experiments, observe and record data, and draw conclusions. As the experiments progressed, the students were able to answer questions using fewer simulation runs. The results support the author's hypothesis that exposure to computerized simulations can improve students' inquiry work.
Park, J. C. (1993). Time studies of fourth graders generating alternative solutions in a decision-making task using models and computer simulations. Journal of Computing in Childhood Education, 4(1), 57-76.
The purpose of this study was to determine whether the use of computer simulations had any effect on the time it took students to respond to a given task. The participants in this study were fourth graders who were split into four groups. They were given a decision-making task that required either hands-on manipulation of objects or computer-simulated object manipulation. Three modifications of the computer simulation were implemented in the study. The first modification was computer simulation with keyboard input. The second modification was computer simulation with keyboard input and objects present for reference. The third modification was computer simulation with light input. Results indicated that students took longer to complete a task when they had to manipulate it using the computer simulation.
Rivers, R. H., & Vockell, E. (1987). Computer simulations to stimulate scientific problem solving. Journal of Research in Science Teaching, 24(5), 403-415.
The authors' purpose was to find if computerized science simulations could help students become better at scientific problem solving. There were two experimental groups: one that received guided discovery and the other group had unguided discovery. There was also a control group that received no simulations. The results indicated that the students in the guided discovery condition performed better than the unguided discovery and control groups.
Roberts, N., & Blakeslee, G. (1996). The dynamics of learning in a computer simulation environment. Journal of Science Teacher Education, 7(1) 41-58.
The authors conducted a pilot study to better understand the use of expert computer simulations in a middle school science classroom. In light of the focus on hands-on science instruction, the authors wanted to study this variable along with varying pedagogical instructional procedures. The study was conducted with 8 student participants of diverse abilities. The first half of the experiment took place in the science classroom in collaboration with the teacher. The second half of the study was conducted away from the classroom. The authors report three findings about computer simulations: (a) computer simulations can be used effectively for learning and concept development when teachers select pedagogical style based on learner needs versus student learning gains; (b) students learn more effectively when teachers directly teach students to build basic science knowledge and promote engagement; and (c) student learning is improved when teachers vary presentation style between direct instruction and student exploration. The authors conclude that in the area of computer simulation, hands-on experience is only one of several important variables in science learning.
Ronen, M., & Eliahu, M. (1999). Simulation as a home learning environment - students' views. Journal of Computer Assisted Learning, 15, 258-268.
The authors conducted a pilot study designed to research the possibility of integrating simulation-based activities into an existing homework structure during a two-month period in a ninth-grade setting. Students received weekly simulation homework consisting of four to six tasks. Student views were collected using a questionnaire, personal student interviews, teacher interviews, and a final exam related to the content of the course. According to the authors, most students favored using simulations as a home learning process. They reported that this work was more stimulating, and that the procedures enabled them to be more self-regulated learners. Teachers reported being pleasantly surprised by the student learning outcomes and realized that their physics instruction should be reorganized to make the best use of the computer simulations. The authors conclude that computer simulations and similar tools should be explored further.
Rose, D., & Meyer, A. (2002). Teaching Every Student in the Digital Age: Universal Design for Learning, ASCD.
This book is the first comprehensive presentation of the principles and applications of Universal Design for Learning (UDL)--a practical, research-based framework for responding to individual learning differences and a blueprint for the modern redesign of education. As a teacher in a typical classroom, there are two things you know for sure: Your students have widely divergent needs, skills, and interests; and you're responsible for helping every one attain the same high standards. This text lays the foundation of UDL, including neuroscience research on learner differences, the effective uses of new digital media in the classroom, and how insights about students who do not "fit the mold" can inform the creation of flexible curricula that help everyone learn more effectively. The second part of the book addresses practical applications of Universal Design for learning and how UDL principles can help you.
Schwienhorst, K. (2002). Why virtual, why environments? Simulation and Gaming, 33 (2), 196-209.
This article was written to help clarify the definitions of Computer-Assisted Language Learning (CALL) and Virtual Reality concepts and the support each provides for learning. The manuscript includes a review of theoretical perspectives regarding learner autonomy, including individual-cognitive views of learning, personal construct theory, and experiential and experimental approaches to learning. The author notes the instructional benefits of virtual reality environments as learning tools, which include greater self-awareness, support for interaction, and the enabling of real-time collaboration. Finally, the author calls for experimental research in this area to verify the theory.
Sherwood, R. D., & Hasselbring, T. (1985/86). A comparison of student achievement across three methods of presentation of a computer-based science simulation. Computers in the Schools, 2(4), 43-50.
The authors report on the results of a study that focused on presentation methods of computer-based simulations in science. Specifically, three presentation methods were analyzed: (a) computer simulation with pairs of students working on one computer, (b) computer simulation with an entire class, and (c) a game-type simulation without a computer; all conditions were studied in classrooms of sixth-grade students. Results indicate that there may be a small benefit to the large-group simulation experience, especially for immediate measures. These results imply that a computer for every student may not be necessary for students to benefit from computer "instruction" using simulations. The authors noted that student interest and some gender preferences might also influence performance in the simulation and affect measurement results.
Song, K., Han, B., & Yul Lee, W. (2000). A virtual reality application for middle school geometry class. Paper presented at the International Conference on Computers in Education/International Conference on Computer-Assisted Instruction, Taipei, Taiwan.
Stratford, S. J. (1997). A review of computer-based model research in precollege science classroom. Journal of Computers in Mathematics and Science Teaching, 16(1), 3-23.
The author conducted a 10-year review of the literature on Computer-Based models and simulations in precollege science. Three main areas of Computer-Based Models were identified in the research; (a) preprogrammed simulations, (b) creating dynamic modeling environments, and (c) programming environments for simulations. Researchers noted that not enough empirical evidence was available to provide conclusive evidence about student performance. It was noted that anecdotal evidence supported high engagement in the computer-based models for most subjects. The author concluded by posing a number of future research studies, as this line of research is still in its infancy.
Sykes, W., & Reid, R. (1990). Virtual reality in schools: The ultimate educational technology. THE Journal (Technological Horizons in Education), 27(7), 61.
The authors conducted a pilot study in elementary and high school classrooms to study the use of virtual reality technology when used as an enhancement to the traditional curriculum. The major finding was that the engagement factor when using virtual reality enabled to students to be in a more active learning role. The authors argue that although most virtual reality applications in education are in science and mathematics at this time, the technology fits all curricula, and they see great potential across content and grade applications. Additional research should be conducted to validate these initial findings.
Taylor, W. (1997). Student responses to their immersion in a virtual environment. Paper presented at the Annual Meeting of the Educational Research Association, Chicago, Illinois.
The purpose of this study was to characterize students' responses to immersion in a virtual reality environment and their perceptions of this environment. Two thousand, eight hundred and seventy-two elementary, middle school, and high school students attended a thirty-minute presentation on virtual reality and then visited an immersive virtual environment. Following this virtual reality immersion, students answered a questionnaire, rating different facets of the experience. Questionnaire results suggest that although nearly every student enjoyed the experience of navigating a virtual environment such as this one, for many of them this task was quite difficult, and for some fairly disorienting. Results also suggested that the ability to see a virtual environment and navigate through it influences the environment's perceived authenticity. The authors suggest that future research be focused on technical improvements to virtual reality environments.
Vasu, E. S., & Tyler, D. K. (1997). A comparison of the critical thinking skills and spatial ability of fifth grade children using simulation software or Logo. Journal of Computing in Childhood Education, 8(4), 345-363.
The authors conducted a 3-group experimental study examining the effects of using Logo or problem-solving simulation software. The experimental groups were taught a 4-step problem-solving approach. No significant differences were found on spatial or critical thinking skills until controlling for Logo mastery. With this control, significant differences were found for spatial scores, but not for critical thinking. The authors conclude that findings in such research require significant student learning and practice time. Additionally, teachers need substantial training to implement the program successfully. The authors recommend further research to investigate the power of simulation software.
Verzoni, K. A. (1995, October). Creating simulations: Expressing life-situated relationships in terms of algebraic equations. Paper presented at the Annual Meeting of the Northeastern Educational Research Association, Ellenville, NY.
Verzoni investigated the development of students' ability to see connections between mathematical equations and life-like problem-solving environments. Students were required to express cause-and-effect relationships using computer simulation software. Forty-nine eighth-grade students participated in a quasi-experimental treatment/control study with a posttest-only measure. The reported results suggest that simulation activities developed students' abilities to make essential connections between algebraic expressions and real-life relationships. The intervention occurred over 9 class periods. The author worked to capitalize on the concept of providing a purpose for algebraic work by having students create life-like simulations, appealing to the learners' own interests and background knowledge.
Willing, K. R. (1988). Computer simulations: Activating content reading. Journal of Reading, 31(5) 400-409.
The author capitalizes on the notion of student motivation and engagement in developing this descriptive study. Students ranging from elementary to high school age, and ranging in ability from students with identified disabilities to students noted as able and gifted (N = 222), participated in this study. Willing focused on reading instruction while using computer simulation software in 9 classrooms for a three-week period. Teachers introduced and taught a unit using a simulation software program. Students worked in groups of 2 to 6, as independent learning groups. Observations focused on type of reading (silent, choral, aloud, sub-vocal, and in turns), group discussions about the content, vocabulary development (use of terms and language specific to varying simulations), and outcome of the simulation (could the group help the simulation survive). The author concludes that these preliminary indicators favor the use of simulations to stimulate learner interest and cooperation in reading and understanding the content of the life-like computer simulation.
Woodward, J., Carnine, D., & Gersten, R. (1988). Teaching problem solving through computer simulations. American Educational Research Journal 25, 1, 72-86.
The authors' purpose in this research was to study the effectiveness of computer simulations in content area instruction, in this case, health with 30 secondary students with high incidence disabilities. Participants were randomly assigned to one of two instructional groups, (a) teacher instruction with traditional practice/application practice, and (b) teacher instruction plus computer simulation. Health content was common across the groups for the 12 days of intervention. At the conclusion of the intervention, participants were tested on content facts, concepts and health-related problem solving issues. Results indicated a significant difference favoring the simulation group, with greatest difference in the problem-solving skills area. The authors recommend the combination of effective teaching and strategic instructional processes in combination with computer simulations for students to increase factual and higher order thinking skills.
Yair, Y., Mintz, R., & Litvak, S. (2001). 3-D virtual reality in science education: An implication for astronomy teaching. Journal of Computers in Mathematics and Science Education 20, 3, 293-301.
This study introduces the reader to the Virtual Environment. This report summarizes the use of this technology to reinforce the hypothesis that experience in three-dimensional space will increase learning and understanding of the solar system. With this technology, students are able to observe and manipulate inaccessible objects, variables, and processes in real time. The ability to make what is abstract and intangible concrete and manipulable enables the learner to study natural phenomena and abstract concepts. Thus, according to the authors, bridging the gap between the concrete world of nature and the abstract world of concepts and models can be accomplished with the Virtual Environment. Virtual Environments allow for powerful learning experiences to overcome the previously uni-dimensional view of the earth and space provided in texts and maps.
Yildiz, R., & Atkins, M. (1996). The cognitive impact of multimedia simulations on 14 year old students. British Journal of Educational Technology, 27(2), 106-115.
The authors of this research designed a study to evaluate the effectiveness of three types of multimedia simulations (physical, procedural and process) when teaching the scientific concept of energy to high school students. The researchers attempted to design a study in which good experimental design was employed, with 6 cells of students and a pre-post test design. The authors report that greater and more varied patterns of interaction were found for the procedural and process simulations versus the physical group. They conclude that variations in student characteristics and simulation type affect outcomes. However, the physical simulation was found to have produced greater cognitive gain than the other simulations. The authors also emphasize the need for further control and experimentation in this area.
Zietsman, A.I., & Hewson, P.W. (1986). Effect of instruction using microcomputer simulations and conceptual change strategies on science learning. Journal of Research in Science Teaching, 23, 27-39.
The focus of this research was to determine the effects of instruction using microcomputer simulations and conceptual change strategies for 74 students in high school and the freshman year of college. The computer simulation program was designed based on the conceptual change model of learning. The authors report finding significant differences in pre- to post-measures for students receiving the simulations; these students had more accurate conceptions of the construct of velocity. They conclude that science instruction that employs conceptual change strategies is effective, especially when delivered via computer simulation.
This content was developed pursuant to cooperative agreement #H324H990004 under CFDA 84.324H between CAST and the Office of Special Education Programs, U.S. Department of Education. However, the opinions expressed herein do not necessarily reflect the position or policy of the U.S. Department of Education or the Office of Special Education Programs and no endorsement by that office should be inferred.
Cite this paper as follows:
Strangman, N., & Hall, T. (2003). Virtual reality/simulations. Wakefield, MA: National Center on Accessing the General Curriculum. Retrieved [insert date] from http://aim.cast.org/learn/historyarchive/backgroundpapers/virtual_simula... | http://aim.cast.org/learn/historyarchive/backgroundpapers/virtual_simulations | 13 |
98 | Chapter 3 Functions
3.1 Function calls
In the context of programming, a function is a named sequence of statements that performs a computation. When you define a function, you specify the name and the sequence of statements. Later, you can “call” the function by name. We have already seen one example of a function call:
>>> type(32)
<type 'int'>
The name of the function is type. The expression in parentheses is called the argument of the function. The result, for this function, is the type of the argument.
It is common to say that a function “takes” an argument and “returns” a result. The result is called the return value.
3.2 Type conversion functions
Python provides built-in functions that convert values from one type to another. The int function takes any value and converts it to an integer, if it can, or complains otherwise:
>>> int('32')
32
>>> int('Hello')
ValueError: invalid literal for int(): Hello
int can convert floating-point values to integers, but it doesn’t round off; it chops off the fraction part:
>>> int(3.99999)
3
>>> int(-2.3)
-2
float converts integers and strings to floating-point numbers:
>>> float(32)
32.0
>>> float('3.14159')
3.14159
Finally, str converts its argument to a string:
>>> str(32)
'32'
>>> str(3.14159)
'3.14159'
3.3 Math functions
Python has a math module that provides most of the familiar mathematical functions. A module is a file that contains a collection of related functions.
Before we can use the module, we have to import it:
>>> import math
This statement creates a module object named math. If you print the module object, you get some information about it:
>>> print math
<module 'math' from '/usr/lib/python2.5/lib-dynload/math.so'>
The module object contains the functions and variables defined in the module. To access one of the functions, you have to specify the name of the module and the name of the function, separated by a dot (also known as a period). This format is called dot notation.
>>> ratio = signal_power / noise_power
>>> decibels = 10 * math.log10(ratio)
>>> radians = 0.7
>>> height = math.sin(radians)
The first example uses math.log10 to compute a signal-to-noise ratio in decibels (assuming that signal_power and noise_power are defined).
The second example finds the sine of radians. The name of the variable is a hint that sin and the other trigonometric functions (cos, tan, etc.) take arguments in radians. To convert from degrees to radians, divide by 360 and multiply by 2 π:
>>> degrees = 45
>>> radians = degrees / 360.0 * 2 * math.pi
>>> math.sin(radians)
0.707106781187
The expression math.pi gets the variable pi from the math module. The value of this variable is an approximation of π, accurate to about 15 digits.
If you know your trigonometry, you can check the previous result by comparing it to the square root of two divided by two:
>>> math.sqrt(2) / 2.0
0.707106781187
3.4 Composition
So far, we have looked at the elements of a program—variables, expressions, and statements—in isolation, without talking about how to combine them.
One of the most useful features of programming languages is their ability to take small building blocks and compose them. For example, the argument of a function can be any kind of expression, including arithmetic operators:
x = math.sin(degrees / 360.0 * 2 * math.pi)
And even function calls:
x = math.exp(math.log(x+1))
Almost anywhere you can put a value, you can put an arbitrary expression, with one exception: the left side of an assignment statement has to be a variable name. Any other expression on the left side is a syntax error.
>>> minutes = hours * 60     # right
>>> hours * 60 = minutes     # wrong!
SyntaxError: can't assign to operator
3.5 Adding new functions
So far, we have only been using the functions that come with Python, but it is also possible to add new functions. A function definition specifies the name of a new function and the sequence of statements that execute when the function is called.
Here is an example:
def print_lyrics():
    print "I'm a lumberjack, and I'm okay."
    print "I sleep all night and I work all day."
def is a keyword that indicates that this is a function definition. The name of the function is print_lyrics. The empty parentheses after the name indicate that this function doesn’t take any arguments.
The first line of the function definition is called the header; the rest is called the body. The header has to end with a colon and the body has to be indented. By convention, the indentation is always four spaces (see Section 3.13). The body can contain any number of statements.
The strings in the print statements are enclosed in double quotes. Single quotes and double quotes do the same thing; most people use single quotes except in cases like this where a single quote (which is also an apostrophe) appears in the string.
If you type a function definition in interactive mode, the interpreter prints ellipses (...) to let you know that the definition isn’t complete:
>>> def print_lyrics():
...     print "I'm a lumberjack, and I'm okay."
...     print "I sleep all night and I work all day."
...
To end the function, you have to enter an empty line (this is not necessary in a script).
Defining a function creates a variable with the same name.
>>> print print_lyrics
<function print_lyrics at 0xb7e99e9c>
>>> print type(print_lyrics)
<type 'function'>
The value of print_lyrics is a function object, which has type 'function'.
The syntax for calling the new function is the same as for built-in functions:
>>> print_lyrics()
I'm a lumberjack, and I'm okay.
I sleep all night and I work all day.
Once you have defined a function, you can use it inside another
function. For example, to repeat the previous refrain, we could write a function called repeat_lyrics:
def repeat_lyrics():
    print_lyrics()
    print_lyrics()
And then call repeat_lyrics:
>>> repeat_lyrics()
I'm a lumberjack, and I'm okay.
I sleep all night and I work all day.
I'm a lumberjack, and I'm okay.
I sleep all night and I work all day.
But that’s not really how the song goes.
3.6 Definitions and uses
Pulling together the code fragments from the previous section, the whole program looks like this:
def print_lyrics():
    print "I'm a lumberjack, and I'm okay."
    print "I sleep all night and I work all day."

def repeat_lyrics():
    print_lyrics()
    print_lyrics()

repeat_lyrics()
This program contains two function definitions: print_lyrics and repeat_lyrics.
As you might expect, you have to create a function before you can execute it. In other words, the function definition has to be executed before the first time it is called.
Exercise 1 Move the last line of this program to the top, so the function call appears before the definitions. Run the program and see what error message you get.
Exercise 2 Move the function call back to the bottom and move the definition of print_lyrics after the definition of repeat_lyrics. What happens when you run this program?
3.7 Flow of execution
In order to ensure that a function is defined before its first use, you have to know the order in which statements are executed, which is called the flow of execution.
Execution always begins at the first statement of the program. Statements are executed one at a time, in order from top to bottom.
Function definitions do not alter the flow of execution of the program, but remember that statements inside the function are not executed until the function is called.
A function call is like a detour in the flow of execution. Instead of going to the next statement, the flow jumps to the body of the function, executes all the statements there, and then comes back to pick up where it left off.
That sounds simple enough, until you remember that one function can call another. While in the middle of one function, the program might have to execute the statements in another function. But while executing that new function, the program might have to execute yet another function!
Fortunately, Python is good at keeping track of where it is, so each time a function completes, the program picks up where it left off in the function that called it. When it gets to the end of the program, it terminates.
What’s the moral of this sordid tale? When you read a program, you don’t always want to read from top to bottom. Sometimes it makes more sense if you follow the flow of execution.
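As a small illustration (this example is not from the book), the following script makes the flow visible by printing as it goes; notice that the function bodies produce no output until they are called:

def first():
    print "inside first"
    second()
    print "back in first"

def second():
    print "inside second"

print "start"
first()
print "done"

# Running the script prints: start, inside first, inside second,
# back in first, done.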
3.8 Parameters and arguments
Some of the built-in functions we have seen require arguments. For example, when you call math.sin you pass a number as an argument. Some functions take more than one argument: math.pow takes two, the base and the exponent.
Inside the function, the arguments are assigned to variables called parameters. Here is an example of a user-defined function that takes an argument:
def print_twice(bruce):
    print bruce
    print bruce
This function assigns the argument to a parameter named bruce. When the function is called, it prints the value of the parameter (whatever it is) twice.
This function works with any value that can be printed.
>>> print_twice('Spam')
Spam
Spam
>>> print_twice(17)
17
17
>>> print_twice(math.pi)
3.14159265359
3.14159265359
The same rules of composition that apply to built-in functions also apply to user-defined functions, so we can use any kind of expression as an argument for print_twice:
>>> print_twice('Spam '*4)
Spam Spam Spam Spam
Spam Spam Spam Spam
>>> print_twice(math.cos(math.pi))
-1.0
-1.0
The argument is evaluated before the function is called, so in the examples the expressions 'Spam '*4 and math.cos(math.pi) are only evaluated once.
You can also use a variable as an argument:
>>> michael = 'Eric, the half a bee.'
>>> print_twice(michael)
Eric, the half a bee.
Eric, the half a bee.
The name of the variable we pass as an argument (michael) has nothing to do with the name of the parameter (bruce). It doesn’t matter what the value was called back home (in the caller); here in print_twice, we call everybody bruce.
3.9 Variables and parameters are local
When you create a variable inside a function, it is local, which means that it only exists inside the function. For example:
def cat_twice(part1, part2):
    cat = part1 + part2
    print_twice(cat)
This function takes two arguments, concatenates them, and prints the result twice. Here is an example that uses it:
>>> line1 = 'Bing tiddle '
>>> line2 = 'tiddle bang.'
>>> cat_twice(line1, line2)
Bing tiddle tiddle bang.
Bing tiddle tiddle bang.
When cat_twice terminates, the variable cat is destroyed. If we try to print it, we get an exception:
>>> print cat
NameError: name 'cat' is not defined
Parameters are also local.
For example, outside print_twice, there is no such thing as bruce.
3.10 Stack diagrams
To keep track of which variables can be used where, it is sometimes useful to draw a stack diagram. Like state diagrams, stack diagrams show the value of each variable, but they also show the function each variable belongs to.
Each function is represented by a frame. A frame is a box with the name of a function beside it and the parameters and variables of the function inside it. The stack diagram for the previous example looks like this:
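The original diagram is not reproduced here; a rough text sketch of it, using the values from the example above, might look like this:

__main__      line1 --> 'Bing tiddle '
              line2 --> 'tiddle bang.'

cat_twice     part1 --> 'Bing tiddle '
              part2 --> 'tiddle bang.'
              cat   --> 'Bing tiddle tiddle bang.'

print_twice   bruce --> 'Bing tiddle tiddle bang.'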
The frames are arranged in a stack that indicates which function called which, and so on. In this example, print_twice was called by cat_twice, and cat_twice was called by __main__, which is a special name for the topmost frame.
Each parameter refers to the same value as its corresponding argument. So, part1 has the same value as line1, part2 has the same value as line2, and bruce has the same value as cat.
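One quick way to see this for yourself (a sketch, not from the book) is the built-in function id, which returns a number identifying a value; the argument and the parameter both refer to the same value:

def show_id(bruce):
    print id(bruce)        # same number as printed below

line = 'Bing tiddle tiddle bang.'
print id(line)
show_id(line)              # bruce and line name the same value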
If an error occurs during a function call, Python prints the
name of the function, and the name of the function that called
it, and the name of the function that called that, all the
way back to __main__.
For example, if you try to access cat from within print_twice, you get a NameError:
Traceback (innermost last):
  File "test.py", line 13, in __main__
    cat_twice(line1, line2)
  File "test.py", line 5, in cat_twice
    print_twice(cat)
  File "test.py", line 9, in print_twice
    print cat
NameError: name 'cat' is not defined
This list of functions is called a traceback. It tells you what program file the error occurred in, and what line, and what functions were executing at the time. It also shows the line of code that caused the error.
The order of the functions in the traceback is the same as the order of the frames in the stack diagram. The function that is currently running is at the bottom.
3.11 Fruitful functions and void functions
Some of the functions we are using, such as the math functions, yield
results; for lack of a better name, I call them fruitful
functions. Other functions, like print_twice, perform an action but don’t return a value. They are called void functions.
When you call a fruitful function, you almost always want to do something with the result; for example, you might assign it to a variable or use it as part of an expression:
x = math.cos(radians)
golden = (math.sqrt(5) + 1) / 2
When you call a function in interactive mode, Python displays the result:
>>> math.sqrt(5)
2.2360679774997898
But in a script, if you call a fruitful function all by itself, the return value is lost forever!
math.sqrt(5)
This script computes the square root of 5, but since it doesn’t store or display the result, it is not very useful.
Void functions might display something on the screen or have some other effect, but they don’t have a return value. If you try to assign the result to a variable, you get a special value called None.
>>> result = print_twice('Bing')
Bing
Bing
>>> print result
None
The value None is not the same as the string 'None'. It is a special value that has its own type:
>>> print type(None)
<type 'NoneType'>
The functions we have written so far are all void. We will start writing fruitful functions in a few chapters.
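As a brief preview (this sketch is not part of the chapter), a fruitful function uses a return statement to hand a result back to the caller, assuming math has already been imported as above:

def area(radius):
    return math.pi * radius**2

a = area(3)                # the return value can be stored in a variable...
print area(3) + 1          # ...or used as part of a larger expression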
3.12 Why functions?
It may not be clear why it is worth the trouble to divide a program into functions. There are several reasons:
Creating a new function gives you an opportunity to name a group of statements, which makes your program easier to read and debug.
Functions can make a program smaller by eliminating repetitive code. Later, if you make a change, you only have to make it in one place.
Dividing a long program into functions allows you to debug the parts one at a time and then assemble them into a working whole.
Well-designed functions are often useful for many programs. Once you write and debug one, you can reuse it.
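For instance (a sketch, not from the book), the degrees-to-radians conversion used earlier can be wrapped in a function so the formula is written, and debugged, in only one place:

def to_radians(degrees):
    return degrees / 360.0 * 2 * math.pi

height = math.sin(to_radians(45))
x = math.cos(to_radians(90))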
3.13 Debugging
If you are using a text editor to write your scripts, you might run into problems with spaces and tabs. The best way to avoid these problems is to use spaces exclusively (no tabs). Most text editors that know about Python do this by default, but some don’t.
Tabs and spaces are usually invisible, which makes them hard to debug, so try to find an editor that manages indentation for you.
Also, don’t forget to save your program before you run it. Some development environments do this automatically, but some don’t. In that case the program you are looking at in the text editor is not the same as the program you are running.
Debugging can take a long time if you keep running the same, incorrect, program over and over!
Make sure that the code you are looking at is the code you are running.
If you’re not sure, put something like print 'hello' at the beginning of the program and run it again. If you don’t see hello, you’re not running the right program!
Exercise 3 Python provides a built-in function called len that returns the length of a string, so the value of len('allen') is 5.
Write a function named right_justify that takes a string named s as a parameter and prints the string with enough leading spaces so that the last letter of the string is in column 70 of the display.
>>> right_justify('allen')
                                                                 allen
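One possible solution (a sketch; try the exercise yourself first) uses len together with string repetition and concatenation:

def right_justify(s):
    print ' ' * (70 - len(s)) + s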
Exercise 4 A function object is a value you can assign to a variable or pass as an argument. For example, do_twice is a function that takes a function object as an argument and calls it twice:
def do_twice(f):
    f()
    f()
Here’s an example that uses do_twice to call a function named print_spam twice.
def print_spam():
    print 'spam'

do_twice(print_spam)
You can see my solution at thinkpython.com/code/do_four.py.
Exercise 5 This exercise can be done using only the statements and other features we have learned so far.
You can see my solution at thinkpython.com/code/grid.py.
| http://www.greenteapress.com/thinkpython/html/book004.html | 13
50 | A variable frequency drive (VFD), sometimes also called an adjustable speed drive (ASD), is an electronic device that allows users to change the speed at which a motor runs. The combination of a VFD with a motor is becoming increasingly popular as a final control element because it consumes less power than a motor running at full speed throttled by a control valve. Indeed, the U.S. Department of Energy (DOE) is encouraging the use of VFDs for motor control to reduce power consumption. However, obtaining good control depends upon proper selection and installation of the drive as well as understanding how control may differ with a VFD.
The VFD is one part of a total system that includes a motor and a load. The motor acts as a power transducer, converting electrical power to rotational mechanical power. AC induction motors, 600V and below, often are paired with a VFD. (We’ll discuss DC motors later.) These AC motors fall into classes with different torque speed curves (Figure 1). Most drive manufacturers assume use of a Class A motor and that the torque speed curve will be almost linear at the operating point (Figure 2). The VFD will shift the whole curve left or right to change the operating point. Note the “slip” in the figure. Every motor suffers from some slip or difference between rotor and stator fields. This is quite different from stiction, the term used with control valves for the stem force needed to overcome static friction. The load — mechanical conveyor, pump, fan, compressor or the like — has inertia, rotational friction and stiction. The process also has dynamic characteristics that may change when using a VFD instead of a control valve. It takes time to accelerate the load to operating speed; this time is proportional to the load inertia and inversely proportional to the available motor torque.
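As a rough illustration of that last point (a sketch with assumed numbers, not figures from any particular drive or motor), the time to accelerate a rotating load from rest follows t = J*(change in speed)/T, so it grows with inertia and shrinks with available accelerating torque:

import math

def accel_time(inertia_kgm2, speed_rpm, torque_nm):
    # approximate spin-up time from rest, ignoring friction and load torque
    omega = speed_rpm * 2 * math.pi / 60.0     # final speed in rad/s
    return inertia_kgm2 * omega / torque_nm    # seconds

t = accel_time(0.5, 1750, 40.0)                # assumed values: about 2.3 s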
Figure 1 -- AC motor classes: Each class features
Today VFDs use a technique called pulse width modulation (PWM). First, AC power is rectified to DC and filtered. Next, solid-state semiconductors called insulated gate bipolar transistors (IGBTs) create a voltage waveform to the motor that is a series of pulses of varying widths. The result approximates a variable-frequency AC sine wave, and the frequency of that synthesized waveform determines the shaft rotational speed. Because the power waveform from a VFD isn’t purely sinusoidal, it’s important to use only motors specially designed to run with PWM VFDs — these are “inverter duty” motors, Class F winding. If a conventional motor is used, it may burn out.
A VFD also has other control electronics; these may include current, voltage and speed sensors.
The VFD electronics has limitations that affect control performance. One limitation is current. The inrush peak starting current of an “across the line” starter, one without a VFD, is eight times the full load current. Such a current would damage a VFD’s rectifiers and semiconductors. Another constraint: the drive electronics is designed to prevent the motor flux from saturating the core.
The process transmitter sending its output signal to the proportional-integral-derivative (PID) controller that acts as the outer loop senses the process dynamics; its output is cascaded to the VFD.
Within the drive electronics are algorithms that control the electrical motor power, frequency, voltage and current. The current and speed set the motor torque. So the drive doesn’t just control the speed; it also regulates the torque delivered to the rotor shaft. This torque produces the rotational force applied to the load (pump, etc.) that powers the process.
Figure 2 -- Class A motor: The curve with load is
Properly understanding the dynamics of a VFD control loop requires considering all elements of the system and how they interact.
Drive Control Strategies
A computer program connected to the drive or a human machine interface (HMI) front panel enables inputting data about the load and the motor as well as setting the drive control strategy, which usually takes advantage of proprietary functionality. This strategy together with the load dynamic behavior determines performance. Loops within the drive electronics can be configured to control speed (through an external encoder), voltage, current and, in some cases, motor flux. These are the inner loops of the process control cascade. When tuning, remember that the inner loops must be at least five times more responsive than the outer loop. Another term to describe performance is bandwidth; it’s inversely proportional to the time constant of the controller/motor with no load.
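A minimal way to express that tuning rule of thumb (an illustrative sketch; the function name, time constants and ratio are assumptions, not values from any drive manual):

def cascade_ok(inner_tc_s, outer_tc_s, ratio=5.0):
    # the inner (drive) loop should be at least `ratio` times faster
    # than the outer (process) loop that cascades to it
    return outer_tc_s >= ratio * inner_tc_s

print(cascade_ok(0.2, 3.0))   # e.g., a 0.2 s speed loop under a 3 s flow loop: True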
VFD manufacturers offer models with varying performance and cost. So, assess which is the best drive for the particular application.
Most drives have a defined, configured startup sequence that is to be run with the load disconnected. During this sequence the drive powers the stator and makes measurements that determine the characteristics of the motor. These motor constants then are used to tune the internal electronic program.
Drive control strategies generally fall into three categories:
Volts per Hertz control. This is the simplest method. As seen in the torque speed curve (Figure 2), the region to the right of the peak can be considered linear, so controlling the frequency will regulate the shaft speed. The supplied voltage is v = ir + dλ/dt, where λ is the flux linkage. The derivative term is directly related to the rotation, so the rotational speed is roughly proportional to voltage, hence the name volts per hertz. At high speeds the ir term is negligible. The speed error is large below about 10% of the rated rpm, although some configurations can compensate for this. This strategy is recommended for fan and pump applications. Speed regulation is about 0.5% of base speed over a 40:1 speed range.
Another problem with this strategy is reduced motor torque at low speeds, which can be serious when conveying sticky solids or slurries. The torque decreases because the ir drop in the motor and wiring becomes a larger percentage of the supplied voltage. Using a larger wire size can help solve this problem.
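To make the volts-per-hertz idea concrete, here is a minimal calculation sketch with hypothetical nameplate values (460 V, 60 Hz); the optional boost term stands in for the low-speed voltage boost some drives add to offset the ir drop discussed above.

```python
def volts_per_hertz_command(speed_fraction, rated_volts=460.0, rated_hz=60.0,
                            boost_volts=0.0):
    """Return (frequency, voltage) for a simple V/Hz speed command.

    speed_fraction -- desired speed as a fraction of base speed (0..1)
    boost_volts    -- optional low-speed boost to offset the ir drop
    """
    ratio = rated_volts / rated_hz        # 460 V / 60 Hz = 7.67 V per Hz
    freq = speed_fraction * rated_hz
    volts = min(rated_volts, ratio * freq + boost_volts)
    return freq, volts

# Half speed on a 460 V, 60 Hz motor: roughly 30 Hz at 230 V.
print(volts_per_hertz_command(0.5))
```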
Constant-slip current control. This strategy regulates the slip, the difference between the electrical speed and the actual shaft speed. It can be configured two ways: for optimum torque (maximum torque per amp) or for maximum efficiency. Based on the motor constants found during configuration or testing, the drive electronics calculate the flux linkages needed to avoid saturation. Two inner loops are used for this strategy. The speed command, usually the output from the outer process controller, is cascaded to a proportional-integral (PI) controller in the drive electronics. This PI controller compares the shaft speed to the set point and provides an output signal to a torque controller, which converts the torque set point to the current required to achieve that torque. A current loop closed around a current sensor then corrects this current.
This strategy relies on three cascaded controllers. To obtain stable behavior, each inner control loop must be several times faster than the loop that feeds it. Anti-reset windup should be used for each loop. Because of the sluggish torque response, the drive performs more slowly than with the volts per hertz strategy. The stator current is controlled at or just slightly above its rated peak value during start up. This strategy offers 0.1% of base speed regulation across an 80:1 speed range.
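A generic sketch of that cascade, not any manufacturer’s algorithm: an outer speed PI produces a torque set point, an assumed torque constant converts it to a current set point, and an inner PI with simple anti-reset windup drives the output. All gains, limits and the torque constant are placeholder values.

```python
class PI:
    """Discrete PI controller with clamped-integrator anti-reset windup."""
    def __init__(self, kp, ki, out_min, out_max, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        unclamped = self.kp * error + self.integral + self.ki * error * self.dt
        output = min(self.out_max, max(self.out_min, unclamped))
        if output == unclamped:          # only integrate while unsaturated
            self.integral += self.ki * error * self.dt
        return output

DT = 0.001   # one update rate for brevity; a real drive runs the inner
             # current loop several times faster than the speed loop
speed_pi = PI(kp=2.0, ki=5.0, out_min=-50.0, out_max=50.0, dt=DT)        # N-m
current_pi = PI(kp=8.0, ki=200.0, out_min=-650.0, out_max=650.0, dt=DT)  # V
K_TORQUE = 0.9   # assumed torque constant, N-m per amp

def drive_step(speed_sp, speed_meas, current_meas):
    torque_sp = speed_pi.update(speed_sp, speed_meas)      # outer speed loop
    current_sp = torque_sp / K_TORQUE                       # torque -> current
    return current_pi.update(current_sp, current_meas)      # inner current loop
```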
Figure 3 -- Blending issue: Original VFD led to a two-peaked (bimodal) flow rate histogram around the set point.
Field-oriented control. The basis of this strategy is that the maximum torque between the rotor and stator magnetic fields occurs when the rotor current vector is perpendicular to the stator field (per the Lorentz force equation). The current control is calculated from the torque command and an estimate of flux. The strategy can be implemented two ways: direct (rotor-flux-oriented), where Hall effect transducers measure the flux; and indirect, where the flux is estimated. Industrial applications don’t employ flux sensors. This method offers improved transient performance. A 50-hp motor with load reaches full speed in 2 sec. and peak current remains at a steady-state value. High performance drives using this strategy have 0.001% base speed regulation across a 120:1 speed range. Because this method uses estimated machine constants, performance will deteriorate if the constants are entered incorrectly. The problem is most severe at low speeds. If the online machine parameter estimates aren’t correct, the drive will experience “hunting.”
Regardless of control strategy, as already noted, the drive changes the voltage or current to the motor via modulation. The IGBT is a switch that is either on or off; PWM produces a change in value by altering the switching sequence. In this manner the output stator current resembles a sine wave with steps inserted. The switching frequency, usually a configured entry, determines how well the waveform tracks the desired sine wave. It’s preferable that the phases are sequenced, not all switched at once. This reduces the effective switching speed and results in an offset between the actual and desired current, therefore necessitating some closed-loop strategy.
The resulting waveform produces a quantized output. The root-mean-square value actually will vary in discrete intervals. From a closed-loop perspective this injects a non-linear term in the analysis that will create limit cycles, especially at high process gains.
DC motors are classified according to how they are wound. Motor performance, usually shown as a torque speed curve, differs with winding design.
In a separately excited motor, a constant DC voltage is applied to the stator (field) winding while a variable voltage is applied to the rotor (armature) winding. A permanent magnet DC motor behaves similarly to a separately excited one. A shunt wound DC motor has a single voltage source that powers both the rotor and stator. The shunt wound, permanent magnet and separately excited motors have a linear torque speed curve and, thus, can provide better speed regulation with varying loads. They come in several winding variations, such as compound windings.
In a series motor, the stator winding is wired in series with the rotor. The torque speed curve for this motor has a parabolic relationship. This motor frequently is used for high mechanical inertia loads such as elevators and cranes.
Proceed With Caution
There’s a misconception that a VFD provides better resolution than a control valve. This isn’t always the case. Virtually every modern electronic control system features some sort of digital or microprocessor control, so there has to be an interface between the analog domain and the digital one. Both AC and DC drives use digital-to-analog (D/A) converters. These converters don’t provide infinite resolution — rather they quantize the signal. This produces a so-called quantization error; the more bits, the smaller the error. Process applications frequently use a 10-bit D/A converter, which breaks the output signal into 2^10, or 1,024, discrete levels from the zero point to the maximum.
For example, consider a DC motor speed control powering a gear pump. In a simulated flow control problem, with 10-bit output resolution for a 22-gpm maximum pumping system, this error amounts to 0.03 gpm for a separately excited motor and 0.2 gpm for a series wound motor.
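A back-of-the-envelope check of the quantization step (a sketch only; the 0.03-gpm and 0.2-gpm figures above come from a simulation that includes each motor’s torque speed behavior, which this arithmetic ignores):

```python
bits = 10
levels = 2 ** bits                 # 1,024 discrete output levels
full_scale_gpm = 22.0              # maximum flow of the example pumping system

step_gpm = full_scale_gpm / levels
print(f"Ideal quantization step: {step_gpm:.3f} gpm")   # about 0.021 gpm
# The separately excited motor's nearly linear response keeps the real step
# close to this ideal; the series motor's parabolic torque speed curve
# stretches the step to several times that value over parts of the range.
```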
In another case, a continuous process employed a positive displacement pump powered by a VFD-controlled AC motor. Several fluids were blended, and the final product quality depended on an exact ratio of all fluids. The precision required can be compared to that of in-line pH control — a very small deviation in reagent ratio may result in very large pH changes. This process suffered quality problems. Mass-meter flow rate data at the same set point were archived in a historian and gave the histogram in Figure 3. The positive displacement pump curve has a slope of 0.1 gal/rpm. The motor synchronous speed is 1,800 rpm. The adjustable frequency AC drive specification lists 514 frequency divisions as its published resolution. The flow controller was trying to find the right output, midway between the two histogram peaks, but couldn’t; this caused the process signal to hunt around the set point. Always suspect a quantization problem if a bimodal process-variable histogram occurs with a continuous steady-state output signal. The plant replaced the VFD with one that had better output resolution and solved the problem. This example is extreme; however, it demonstrates that the problem can occur. Drives available today have much better resolution, 0.01%.
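The figures quoted in this example imply a coarse flow step. The arithmetic below is a simple sketch that ignores slip and pump losses:

```python
sync_speed_rpm = 1800.0        # motor synchronous speed
freq_divisions = 514           # drive's published output resolution
pump_slope_gal_per_rpm = 0.1   # positive displacement pump curve

rpm_per_step = sync_speed_rpm / freq_divisions          # about 3.5 rpm
flow_step_gpm = rpm_per_step * pump_slope_gal_per_rpm   # about 0.35 gpm
print(f"Smallest achievable flow change: {flow_step_gpm:.2f} gpm")
# When the required set point falls between two of these steps, the flow
# controller hunts between adjacent outputs, producing the two-peaked
# histogram of Figure 3.
```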
As the above illustrates, a VFD can affect the plant. Startup and shutdown operations differ from those with a full-speed pump and control valve. Remember with a VFD you’re controlling the power applied to the load while with a control valve you’re introducing a controlled restriction to the power already applied to the load.
Variable speed pumps offer a linear response within a given inlet and discharge pressure range. However, you must limit the minimum speed to ensure that the pump’s discharge pressure never drops below the static pressure downstream — otherwise, a disastrous flow reversal can occur. Also, you must add an on-off valve and coordinate it with the variable speed pump to provide positive flow shutoff.
Just as we have acceleration and deceleration lanes on and off highways, drives have acceleration and deceleration ramps. These ramps are used to limit the motor starting current, and they act as integrators in the loop dynamics. The default setting usually is 5 sec. to 10 sec., a typical flow loop reset value. If the integral action of the PI or PID controller is set faster than the VFD ramp, the resulting closed-loop performance will exhibit a limit cycle. This cycle isn’t due to reset windup or saturation but, rather, occurs because the controller integrator is acting faster than the load can respond. Many controllers offer a feature that allows the reset action to be adjusted based on actual valve travel. Configuring an auxiliary VFD output signal proportional to the actual load as an input to the controller function block eliminates the limit cycle.
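One common way to implement that idea is external-reset feedback, in which the controller’s integral action follows a measured signal from the drive (such as actual speed) rather than the controller’s own output. The sketch below illustrates the concept with made-up tuning values; it is not any particular controller’s function block.

```python
class ExternalResetPI:
    """PI controller whose reset (integral) term tracks an external
    feedback signal, e.g. the VFD's actual-speed output scaled to
    controller-output units. While the drive is still ramping, the
    integral cannot charge ahead of what the load is actually doing."""
    def __init__(self, kp, reset_time_s, dt):
        self.kp, self.ti, self.dt = kp, reset_time_s, dt
        self.reset_state = 0.0      # filtered copy of the external feedback

    def update(self, setpoint, measurement, external_feedback):
        error = setpoint - measurement
        # A first-order lag of the external feedback replaces integration of
        # the controller's own output, so integral action waits for the ramp.
        self.reset_state += (external_feedback - self.reset_state) * self.dt / self.ti
        return self.kp * error + self.reset_state
```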
Another factor is the process time constant. For drive-powered liquid flow control loops, the process time constant is larger during start up; in contrast, flow control loops with control valves have a smaller time constant. The loop will perform sluggishly during start up compared to a flow loop with a control valve and pump powered by an induction motor without a drive.
Always check the drive’s deadband setting, which is used to reduce the reaction to noise. If not carefully adjusted, it can add deadtime and cycling.
An Attractive Option
A VFD can provide much needed power reduction and good operation. Torque and amps developed by an AC motor will determine the size of VFD needed for an application. Always size a VFD based on the load, which is expressed in amps, never on horsepower alone.
As with any instrument or control device, making the most of a VFD requires care and knowledge, including an appreciation of how control with a VFD differs from that with a control valve.
Figure 4 -- Problem avoided: Reset inhibiting of the controller, based on drive feedback, eliminates the limit cycle.
Web-Exclusive Tips for Success
Making the most of a VFD requires paying careful attention to installation as well as staff skills and training.
Installation considerations. You can’t just mount and wire a VFD any which way. You must follow national electrical codes (NEC) as well as the manufacturer’s recommendations. Pay particular attention to the following six aspects:
1. Wire and cable size, insulation and shielding. Shielded power cable is necessary to prevent interference with other plant instrumentation and controls. Lengths are important; remember at reduced motor voltage and rated current, a larger percentage of the power delivered is dissipated in the power cable.
2. Power distribution. A grounded secondary is preferred. Use the AC line impedance to determine if an isolation transformer or line reactor should be installed.
3. Grounding. Building steel can be used. Ground the shields at a single point to avoid ground loops.
4. Drive installation. For best noise immunity separate the power and signal wiring. Mount the drive in a place that’s accessible; see the NEC for required distances. Consider heat dissipation — the VFD electronics generates heat that must be removed. Remember, the cooler the electronics, the longer the life.
5. Reflective wave issues. PWM waveforms create fast rise-time waveforms on the power cables. The peak value of these waves can be high and cause premature insulation breakdown. Power cable with XLPE insulation is recommended.
6. Electromagnetic interferences. Because of the high frequency components in the PWM waveform, the VFD generates interference. Twisted, shielded cable is recommended for the drive wiring.
A motor and load controlled by a VFD may suffer premature bearing failure and lubricant chemical breakdown due to electrical discharge of high voltages induced in the shaft, so-called electric discharge machining or EDM. The problem is relatively new and is attributed to common mode voltage between each phase and neutral inducing voltage in the shaft; the fast transients generated by the IGBT devices produce high-frequency components that couple into the shaft. The problem can be alleviated by several methods, one of which is to use shaft-grounding bushings.
Before applying power, verify that:
1. Input supply voltage is landed on the correct terminals.
2. Output wiring going to the motor is landed on the correct terminals.
3. Control wiring is landed on the correct terminals.
Staff skills. Plant personnel don’t need to be rocket scientists to program VFDs — reasonable reading skills coupled with some application knowledge and motor information will suffice.
The first task is to gather as much information about the application as possible. How will it be started and stopped? Will a two-wire or three-wire control start the drive? Will it undergo a controlled ramp to stop, will load determine how long it will stop by using a coast to stop, or will the drive stop very fast and dissipate the energy through a dynamic brake? How will the drive be controlled? Will it need an analog signal or a digital signal commanding a preset speed? Will a 4-to-20-mA signal or a 0-to-10-V signal control the drive? Will the drive need to be configured to give an analog output to feed another process or another drive? Will the application require the use of a programmable digital output? Is it necessary to program the drive to close a set of “dry contacts” once it’s achieved a certain speed, using that as an input to feed a start command to another drive or another process?
Next, access the motor and accurately record all of the data from its nameplate:
1. the rated volts — most industrial AC motors are either 230 or 460 V;
2. the full load amps (FLA), the maximum current the motor is expected to draw under fully loaded conditions — this is probably the most important piece of information;
3. the frequency, which normally is 60 Hz but can vary depending on where the unit will be installed; and
4. the rated rpm — this is the speed at which full load torque is delivered. A very useful formula to keep at hand is hp = (torque × speed)/5,250, with torque in lb-ft and speed in rpm. Slip affects this, so an AC motor rated for 1,800 rpm (synchronous) may really run at only 1,750 rpm.
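A quick worked example of that formula, using hypothetical nameplate numbers:

```python
def full_load_torque_lb_ft(horsepower, rpm):
    """Rearrange hp = torque x speed / 5,250 to solve for torque in lb-ft."""
    return horsepower * 5250.0 / rpm

# A 10-hp motor with a nameplate full-load speed of 1,750 rpm (not the
# 1,800-rpm synchronous speed, because of slip):
print(f"{full_load_torque_lb_ft(10, 1750):.1f} lb-ft")   # about 30 lb-ft
```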
Finally, you’re set to program the drive. Assuming that it’s been installed properly by qualified individuals, the drive will have the appropriate AC power and control voltages. Status lights on the front of most drives will indicate if the drive is in a ready or faulted state. They also will tell you the status of the communications of the drive.
Some AC drives will have assisted startup routines. These types of programs actually will prompt you for the information needed for most applications and commonly adjusted parameters.
VFD training. All AC drive manufacturers offer drive programming training courses or “lunch and learns.” These, when presented properly, can provide a great foundation to build upon.
Understanding the basics of how a VFD works is crucial. Training should start with the fundamentals of magnetism, include details about how AC and DC motors operate and the roles of speed and torque, and then cover how the VFD controls the stator and rotor of an AC motor by rectifying the incoming AC power and using transistors to invert the DC voltage back to a waveform the motor can use to rotate.
Robert Heider is an adjunct professor in the Department of Energy, Environmental and Chemical Engineering at Washington University, St. Louis. Clay Lynch is VFD and MCC product manager at French Gerleman Electric Co., Maryland Heights, Mo. E-mail them at [email protected] and [email protected].
From Math Images
A fractal is a geometric shape that is self-similar and has a fractal (non-integer) dimension. In 1975 the father of fractals, Benoît Mandelbrot, coined the term from the Latin fractus, meaning "to break: to create irregular fragments," which also describes a few of the methods used to create fractals.
The basic concept
This concept can be explained in a commonly used analogy involving the coastline of an island.
The perimeter of the island would grow as you decreased the size of your measuring device and increased the accuracy of your measurements. Also, the island would appear more or less self-similar (in terms of becoming more and more jagged and complex) as you continued to shorten your measuring device.
A few important properties pertaining to the nature of a fractal are self-similarity and a fractal (non-integer) dimension, both discussed below. In addition, there are other properties exhibited by fractals:
- Fine or complex structure at small scales
- Too irregular to be described by traditional geometric dimension
- Defined recursively
Although all fractals exhibit self-similarity, they do not necessarily have to possess exact self-similarity, which would mean that the parts look exactly like the whole. The coastline fractal explained above does not have exact self-similarity, but its parts are very similar to the whole, while fractals made by iterated function systems (such as Sierpinski's Triangle, shown at the right) have exact self-similarity.
Fractal (Non Integer) Dimension
Fractals are too irregular to be defined in the language of traditional, or Euclidean, geometry. Euclidean geometry is constrained to forms whose dimension is the same as that of the embedding space. For example, a line has D = 1 because only one direction, or coordinate, is needed to define a location on it, as on a number line. A square has D = 2: it is created by drawing lines in one direction for the top and bottom of the figure and two other lines perpendicular to that direction, and, analogous to the one-dimensional case, two coordinates are needed to define a location in the two-dimensional space. Fractals are instead described by what is called Hausdorff or fractal dimension, which measures how fully a fractal seems to fill space. For instance, the Sierpinski triangle "has a dimension intermediate between that of a line and an area" and so has a fractional dimension.
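For exactly self-similar fractals there is a standard shortcut, not spelled out in the text above: the similarity dimension D = log N / log s, where N is the number of self-copies produced at each step and s is the factor by which each copy is scaled down. A minimal sketch:

```python
import math

def similarity_dimension(copies, scale):
    """D = log(N) / log(s) for a shape built from N copies scaled down by s."""
    return math.log(copies) / math.log(scale)

# Sierpinski triangle: 3 copies, each half the size    -> D about 1.585
# Koch curve:          4 copies, each a third the size -> D about 1.262
print(similarity_dimension(3, 2))
print(similarity_dimension(4, 3))
```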
A fractal must have a recursive definition, meaning that the fractal is defined in terms of itself. Fractals can be described by a single equation or by a system of equations and created by taking an initial starting value and applying the recursive equation(s) to that value over and over again (a process called iteration). Each iteration takes the output of the previous iteration as the input for the next. Similarly, if the recursive definition of a fractal is a process, that process is first applied to the starting geometric shape and then repeatedly applied to the segments resulting from the previous iteration. Recursion can be seen as a kind of positive feedback loop, where the same definition is applied infinitely by using the results from the previous iteration to start the next iteration.
Click here to learn more about Iterated Functions.
Types of Fractals
Fractals are categorized by how they are generated.
Iterated function systems (IFS)
- An IFS fractal consists of one or more recursive equations or processes that describe the behavior of the fractal and are iterated (applied continually). These fractals are always exactly self-similar and are made up of an infinite number of self-copies that are transformed by a function or set of functions.
Strange attractors
- Fractals that are considered strange attractors are generated from a set of functions called attractor maps or systems. These systems are chaotic and dynamic. Initially, the functions appear to map points in a seemingly random order, but the points are in fact evolving over time towards a recognizable structure called an attractor (because it "attracts" the points into a certain shape).
Random fractals
- Random fractals are created through stochastic methods, meaning that their behavior depends on a random factor and usually on probability constraints. One way to differentiate between chaotic and random fractals is to observe that chaotic fractals have errors (the difference between one plotted value and the next) that grow exponentially, while random fractal errors are simply random.
Escape-time (orbit) fractals
- Escape-time fractals are created in the complex plane with a single function, some f(z), where z is a complex number. On a computer, each pixel corresponds to a complex number value. The function is applied recursively to each complex number value until the value reaches infinity or until it is clear that the value will converge to zero. A color is assigned to each complex number value, or pixel: the pixel is colored black if the value converges to zero, or it is given a color based on the number of iterations (the escape time) it took for the value to reach infinity. The intermediate numbers that arise from the iterations are referred to as the value's orbit. The boundary between black and colored pixels is infinite and increasingly complex.
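A minimal escape-time sketch, using the familiar quadratic map z -> z^2 + c as the example function; the iteration limit, escape radius and sample points are arbitrary illustrative choices:

```python
def escape_time(c, max_iter=100, escape_radius=2.0):
    """Return the iteration count at which the orbit of 0 under
    z -> z*z + c escapes, or max_iter if it appears to stay bounded."""
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > escape_radius:
            return n            # colored pixel: escape time = n
    return max_iter             # treated as bounded: black pixel

# Sample a small grid of "pixels" in the complex plane.
for im in (0.5, 0.0, -0.5):
    print([escape_time(complex(re, im)) for re in (-1.5, -1.0, -0.5, 0.0, 0.5)])
```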
Two Basic Fractals
These fractals illustrate some of the features described above.
The Sierpinski Triangle
- One method of generating the Sierpinski Triangle is with the growth rule; this method is described as a process similar to how a "child might assemble a castle from building blocks." In the initial state we have a basic unit, a single triangular-shaped tile with a unit mass (M = 1) and unit length (L = 1). The Sierpinski triangle is "defined operationally as an 'aggregation process' obtained by a simple iterative process"; simply put, the triangle is made by adding unit tiles on top of unit tiles to infinity. In stage one, two unit triangles are added to the initial triangle in such a way that the new formation appears to be a large red triangle with an inverted triangular piece missing from its middle (Figure 1). This object now has a mass M = 3 and a length L = 2.
- With density defined as the mass per unit area of the embedding plane, ρ(L) = M(L)/L^2,
- note that the density has decreased by a factor of 3/4 in the first stage, or iteration. In the following stages the density will continue to decrease in a monotonic fashion by a factor of 3/4 per stage. This decrease is a simple power law relating density and length and can be described as ρ(L) ∝ L^a,
- and when plotted on a log-log graph, we can then observe a, the slope of the function. From two points, we get the expression for the slope: a = [log ρ(L2) - log ρ(L1)] / [log L2 - log L1] = log(3/4)/log 2 ≈ -0.415.
- With the slope a, the fractal dimension of the Sierpinski triangle can be found by substituting ρ(L) = M(L)/L^2 into the power law, which gives M(L) ∝ L^(2+a), so D = 2 + a.
- Comparing this exponent D to a, we conclude that Sierpinski's triangle is a shape with a fractal dimension D = 2 + log(3/4)/log 2 = log 3/log 2 ≈ 1.585.
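A short numeric sketch of that argument: build the mass and length sequences for a few stages, take the log-log slope of the density, and recover the dimension.

```python
import math

stages = range(6)
masses = [3 ** n for n in stages]     # M triples at each stage
lengths = [2 ** n for n in stages]    # L doubles at each stage
densities = [m / l ** 2 for m, l in zip(masses, lengths)]

# Slope a of log(density) versus log(length), from the first and last stage
a = (math.log(densities[-1]) - math.log(densities[0])) / (
    math.log(lengths[-1]) - math.log(lengths[0]))
print(a)        # about -0.415
print(2 + a)    # fractal dimension, about 1.585 = log 3 / log 2
```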
The Koch Snowflake
- The construction of the Koch snowflake starts with the initiator; in this case, the initiator is also an equilateral triangle. An initiator is a figure that precedes an iteration: it can be the initial figure before any iterations are carried out, or the figure at the iteration right before the next. To create the generator, we take one of the sides of the triangle, break the edge into thirds and attach a triangle onto the middle third of the side. The added triangle sits on the initiator and is a scaled version of the original figure. A generator is the figure that is created after an iteration. The figure after the first iteration resembles the Star of David, a star hexagon. Now each of the sides becomes the initiator, and each one is broken up into thirds where, once again, we place a scaled equilateral triangle on the middle third. After this step the rule is clear: break the sides of the figure into thirds and add a triangle to the middle third section, and, as with our previous fractal, this is to be done ad infinitum. Figure 2 illustrates iterations of the snowflake.
- As the snowflake is iterated ad infinitum, the fractal nature is evident in the "segmenting process" of breaking the sides apart and the "cascading" of the shapes over the initiator. We can look at one-third of the snowflake, the Koch curve, and see in greater detail the self-similarity in the shape (Figure 3). The segmenting rule is sketched in code below.
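A recursive sketch of that segmenting rule for a single side of the snowflake (plotting omitted; the recursion depth and endpoints are arbitrary example values):

```python
def koch_curve(p0, p1, depth):
    """Return the points of a Koch curve from p0 to p1.

    Each call splits the segment into thirds, erects an equilateral bump
    over the middle third, then recurses on the four new segments."""
    if depth == 0:
        return [p0, p1]
    (x0, y0), (x1, y1) = p0, p1
    dx, dy = (x1 - x0) / 3.0, (y1 - y0) / 3.0
    a = (x0 + dx, y0 + dy)                  # one-third point
    b = (x0 + 2 * dx, y0 + 2 * dy)          # two-thirds point
    apex = (a[0] + dx / 2 - dy * 3 ** 0.5 / 2,   # tip of the added triangle
            a[1] + dy / 2 + dx * 3 ** 0.5 / 2)
    points = []
    for q0, q1 in [(p0, a), (a, apex), (apex, b), (b, p1)]:
        points.extend(koch_curve(q0, q1, depth - 1)[:-1])
    return points + [p1]

print(len(koch_curve((0.0, 0.0), (1.0, 0.0), 3)))   # 4**3 segments, 65 points
```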
Examples of Fractals
To see all Fractal related pages, head over to the Fractals category.
Romanesco broccoli (Natural Fractal)
Harter-Heighway Dragon (IFS)
Julia Set (Escape-Time Fractal)
- ↑ 1.0 1.1 1.2 1.3 Mandelbrot, Benoît B. The Fractal Geometry Of Nature. New York, NY: W. H. Freeman, 1983. Print.
- ↑ 2.0 2.1 2.2 2.3 2.4 2.5 Bunde, Armin, and Shlomo Havlin. Fractals and Disordered Systems. 2nd ed. Berlin, Germany: Springer-Verlag Berlin Heidelberg, 1996. Print.
- ↑ http://en.wikipedia.org/wiki/Fractal
Cynthia Lanius, Cynthia Lanius' Lessons: A Fractal Lesson | http://mathforum.org/mathimages/index.php/Field:Fractals | 13 |
World War II
Some 20 years after the end of World War I, lingering disputes erupted in an even larger and bloodier conflict—World War II. The war began in Europe in 1939, but by its end in 1945 it had involved nearly every part of the world. The opposing sides were the Axis powers—consisting mainly of Germany, Italy, and Japan—and the Allies—primarily France, Great Britain, the United States, the Soviet Union, and, to a lesser extent, China. Estimates of the number of casualties vary widely, but by any measure the war's human cost was enormous—35 million to 60 million deaths, with millions more wounded or left homeless.
The political consequences of World War II, like those of World War I, altered the course of 20th-century world history. The war resulted in the Soviet Union's dominance of the countries of eastern Europe and eventually enabled a Communist movement to take power in China. It also marked a decisive shift of power away from the countries of western Europe and toward the United States and the Soviet Union. The tense rivalry between those two countries and their allies, known as the Cold War, would influence events throughout the world for the next 50 years.
Buildup to War
The start of World War II climaxed a series of warlike acts between 1931 and 1939 by Germany, Italy, and Japan. The acts of these nations included taking territories that did not belong to them. The League of Nations had proven ineffective in halting the aggression of Japan in China, the Italian invasion of Ethiopia, and the German takeover of Austria. (See also Europe; League of Nations; World War I, “The Peace and Its Results.”)
The United States had protested the actions of these countries. Britain and France, however, agreed to let the German dictator Adolf Hitler and the Italian dictator Benito Mussolini take the territories they wanted. The British and French hoped this policy of appeasement would prevent another war. (See also fascism; Hitler, Adolf; Mussolini, Benito.)
On Sept. 30, 1938, Britain and France agreed in Munich to let Germany have a part of Czechoslovakia called the Sudetenland. Hitler said this would be his last territorial demand in Europe. In March 1939 Hitler broke this pact, taking over the rest of the country. This ended the British and French policy of appeasement. (See also Munich Pact.)
Prime Minister Neville Chamberlain of Great Britain and Premier Édouard Daladier of France promised aid to Poland in case of a Nazi attack (see Chamberlain, Neville; France, “History”). Hitler soon demanded the return of Danzig (Polish Gdańsk) to Germany and a strip of territory linking East Prussia with the rest of Germany. Poland refused.
In May 1939 Germany and Italy signed a pact pledging to support each other in war. Hitler and other German leaders believed Germany lost World War I because it had to fight on two fronts (see World War I). To prevent this in a new war Hitler and the Soviet dictator Joseph Stalin signed a ten-year nonaggression pact on Aug. 23, 1939 (see Stalin, Joseph). On September 1 Germany annexed Danzig and invaded Poland, and the war began.
The War Year by Year
The War During 1939
Two days after the invasion of Poland, Britain and France demanded that Germany withdraw its troops. When Germany refused Britain and France declared war on Germany.
The Poles were easily defeated by Germany's blitzkrieg, or “lightning war.” The first day the German Luftwaffe (air force) destroyed Poland's airfields and bases. Within a week it had crippled the lines of communication. At the same time German Panzer (armored and mechanized) divisions encircled the Polish armies.
The Soviet Union invaded eastern Poland on September 17. Poland was soon forced to surrender. Germany and the Soviet Union divided Poland between them.
Some Polish government officials, soldiers, pilots, and naval units managed to escape the swift Nazi and Soviet advances. They fled to Britain, where they continued the fight against Germany with the Allies. (See also Warsaw; Poland.)
The “phony war” in the West; war at sea
On the Western Front there was little fighting. The French were confident that a series of fortifications known as the Maginot Line in northeast France could not be broken through. The Germans had similar fortifications on their Siegfried Line, paralleling the Maginot Line in western Germany. Britain thought its Navy could successfully blockade Germany and thus starve it out of the war (see blockade). Because there was so little fighting, this period on the Western battlefields was referred to as the sitzkrieg, or “phony war.”
The war at sea, however, was active. Germany launched a counterblockade against the British. German submarines (U-boats), mines, and bombs sank many Allied merchant and passenger ships.
In December 1939 a dramatic sea battle took place off the coast of South America. British cruisers damaged the German pocket battleship Admiral Graf Spee which had been raiding Allied commerce in the South Atlantic. It was forced to take refuge in the harbor of Montevideo, Uruguay. The captain of the Graf Spee sank the ship rather than risk its capture. There would be no great naval battles in the Atlantic.
The most sensational German naval achievement during this period was a raid on Scapa Flow. On Oct. 14, 1939, a German U-boat made its way into this British naval base and torpedoed the battleship Royal Oak. By 1941 the Allies would lose more than 3.5 million tons of shipping to German submarine attacks.
The Soviet Union invaded Finland on November 30. Finland had refused to give the Soviets military bases. The large Soviet army was expected to defeat tiny Finland quickly. The Finns, however, held off the Soviets for several months.
Finland's Mannerheim Line of fortifications was finally broken through early in 1940. On March 12 Finland signed a peace treaty that gave the Soviet Union important Finnish territory (see Finland).
The War During 1940
Early in April Germany invaded Denmark and Norway. Denmark accepted the “protection” that Germany offered the two countries. Norway, however, declared war.
British troops landed in Norway, but they were unable to stop the German advance. In May the British forces were evacuated. On June 9 Norway fell. King Haakon VII escaped and set up a government in exile in London (see Norway).
Invasion of the Low Countries
On May 10 German forces invaded Belgium, The Netherlands, and Luxembourg. Luxembourg was occupied without resistance (see Luxembourg). Belgium and The Netherlands declared war.
Winston Churchill now replaced Neville Chamberlain as prime minister of Great Britain (see Churchill, Winston). The Allies sent troops into the Low Countries, but by May 14 the Dutch army had to give up the fight. The Netherlands was quickly brought under the rule of German occupation forces. Queen Wilhelmina fled to London, where she formed a government in exile (see Netherlands, The).
Dunkirk evacuation; Italy enters war
On May 13 German armored forces, in a surprise move, broke through the lightly defended Ardennes Forest area north of the Maginot Line. Their columns drove through to the English Channel, cutting off British and French troops in northern France and Belgium.
King Leopold of Belgium surrendered his army on May 28. The Allies had no choice but to attempt an escape by sea. From May 29 to June 4 the Allies evacuated 360,000 soldiers from Dunkirk. Their equipment was left behind. (See also Belgium; Dunkirk.)
The battle of France began on June 5. The Germans attacked along a 100-mile (160-kilometer) front from near Laon to the English Channel. They smashed through the French forces and headed for Paris. The French army fell apart. At this point Italy declared war on Britain and France.
Fall of France
The French government moved to Tours on June 11 and later to Bordeaux. The Germans occupied Paris on June 14. The French cabinet, defeatist and deeply divided, asked for an armistice. Marshal Philippe Pétain, the 84-year-old hero of World War I, became premier (see Pétain).
The Franco-German armistice was signed on June 22 in the forest of Compiègne. It was signed in the same railway car in which France had dictated its terms to a beaten Germany 22 years before. The Franco-Italian armistice was signed on June 24. More than half of France was now occupied by German troops. This included France's entire Atlantic coast and its northern area from Geneva almost to Tours.
Vichy government and Free French
Marshal Pétain built a fascist state with headquarters at Vichy in the unoccupied part of France. The Vichy government worked with the Germans. This was called collaboration.
Some of the French fighting forces escaped to Britain. Called the Free French, they carried on the fight against Germany under the leadership of the French general Charles de Gaulle. Loyal Frenchmen called partisans, also known as the Resistance, secretly supported the Free French in France.
A number of French ships joined the British, and others were interned in British harbors. Ships that resisted were destroyed by the British fleet. The official French government then broke off diplomatic relations with Britain.
The battle of Britain
Hitler expected that the fall of France would cause Britain to surrender. In July he urged Britain to make peace with Germany. Churchill refused.
At the start of the war Hitler had threatened mass air attacks against England. His threat was finally carried out in August 1940. Almost daily hundreds of German planes swarmed across the English Channel from bases in occupied France to bomb England.
The German air attack was to be followed by an invasion of England. Hermann Goering, World War I air ace and commander of the German Luftwaffe, had told Hitler his planes could drive the Royal Air Force (RAF) out of the skies. The Luftwaffe failed. The greatly outnumbered RAF destroyed the German bombers at a crippling rate. The battle of Britain, as the RAF defense of the country was called, was one of the most important battles in the history of the world. Never after October 1940 did Hitler seriously consider invading Britain.
Japan threatens in Far East
Japan was an industrialized country, but it had few natural resources. The United States was its principal supplier of raw materials prior to 1939, but the Japanese had already begun to look elsewhere.
Japanese expansion began in 1931–32 with the seizure of Manchuria, in northeastern China. In the following years Japan seized more territory bordering on Manchuria. By 1937 the Japanese had secured most of northeastern China, including Beijing, Shanghai, and Nanjing. The atrocities committed against civilians by the Japanese in their seizure of Nanjing were some of the most terrible of the war. The Chinese Nationalists under Chiang Kai-shek and the Communists under Mao Zedong put aside their civil war to oppose the Japanese, but to little avail. (See also Chiang Kai-shek; Mao Zedong.)
Germany's conquest of The Netherlands and France left undefended the rich Netherlands Indies and French Indochina. In September Japan threatened to invade French Indochina. By this threat it got air bases there to use in its war against China. The Chinese were isolated by Japan's seizure of their ports, roads, and railroads.
On September 27 Japan signed the Tripartite, or Axis, Pact with Germany and Italy. The pact joined the three nations in an effort to create a new world order. Their alliance would become known as the Rome-Berlin-Tokyo Axis, or the Axis powers. Under the agreement Germany and Italy would control Europe and Japan would control eastern Asia.
To check Japanese expansion the United States kept its fleet in the Pacific. It also placed economic restraints, or sanctions, on Japan, which depended on the United States for scrap iron, oil, cotton, and metals. In September 1940 the United States banned shipments of many of these materials to Japan. It also threatened to stop giving foreign credits, which Japan used to trade abroad.
Britain also helped to control Japan in the Far East because Britain held strategically located Singapore. This was the key to power in southeastern Asia (see Singapore). However, most of the British fleet was soon sent home to protect the British Isles.
How the United States helped the Allies
The fall of France left Britain and its empire fighting alone. On September 3 the United States transferred 50 old destroyers to Britain. In return the United States got 99-year leases on sites for air and naval bases in the British possessions of Newfoundland, Bermuda, British Guiana, and the British West Indies.
The United States also went to work speeding up its rearmament. A two-ocean navy was planned. An air force of 50,000 airplanes was started. In October the nation adopted peacetime compulsory military service for the first time in its history. (See also army; conscription; navy.)
The war in the Mediterranean and Near East
In the winter of 1940–41 Germany and Italy started a campaign against British power in the Mediterranean region. The British position in the Mediterranean was based on control of the two bottleneck passages to the sea—Gibraltar at the western end and the Suez Canal in the east.
The Axis campaign was launched against Suez. An Italian attack in North Africa was coupled with a German drive through southeastern Europe. The object was to drive the British from the eastern Mediterranean. The Italian offensive was a failure. By early 1941 almost all Mussolini's East African empire was in British hands.
Germany, however, had more success in the Balkans. It overran Romania, Hungary, Bulgaria, Yugoslavia, and Greece. It was then able to come to Italy's aid. The British forces were driven out of Libya.
The War During 1941
Early in 1941 Britain announced that it soon would be unable to pay for the war materials it had been buying from the United States. The United States Congress gave the president authority to lend or lease arms and supplies to countries whose defense he thought important to the security of the United States. Under the Lend-Lease Act a steady stream of planes, tanks, guns, and other war goods rolled off American assembly lines to be sent to Britain and other Allied nations. The United States became known as the Arsenal of Democracy.
Getting these supplies across the Atlantic into the hands of British soldiers became a major problem. United States President Franklin D. Roosevelt announced he would take any measures necessary to ensure their delivery. On April 9 the United States took Greenland under its protection for the rest of the war. In July American troops replaced the British forces in Iceland.
On Aug. 9, 1941, Roosevelt met with Winston Churchill on a battleship off the coast of Newfoundland. After five days of talks they issued the Atlantic Charter (see Atlantic Charter). Although the United States was not yet officially in the war, this document outlined the Allies' war aims and called for the “final destruction of Nazi tyranny.” American and British military staffs had already agreed that in the event of a war against both Japan and Germany, the Allies would concentrate on the defeat of Germany first.
Germany invades the Soviet Union
Both Germany and the Soviet Union thought of their nonaggression pact of 1939 as temporary. It gave the Soviets time to build defenses against German attack. It gave Germany peace along its eastern frontiers during the war in the west. Throughout the spring of 1941, however, there were signs that the pact might be broken.
On June 22 Germany invaded the Soviet Union. Other nations quickly took sides in the ensuing conflict. Italy, Hungary, Finland, and Romania declared war on the Soviet Union. Britain pledged aid to the Soviet Union, and the United States promised war goods.
The new conflict
Germany's war on the Soviet Union locked in battle the two largest armies in the world. The front extended for 2,000 miles (3,200 kilometers) from the White Sea to the Black. Germany struck its heaviest blows on three sectors of this long front: (1) from East Prussia through the Baltic states toward Leningrad (now St. Petersburg); (2) from the northern part of German Poland through White Russia toward Moscow; and (3) from the southern part of German Poland through Ukraine toward Kiev.
The Germans drove rapidly forward, cutting off entire Soviet armies. Despite these tremendous losses, strong resistance by the Red Army and guerrilla warfare behind the German lines slowed the German drive. In addition, as the Soviets retreated they destroyed crops, factories, railways, utility plants, and everything else that would be of value to the advancing Nazis. This was known as a “scorched earth” policy.
German advance in the Soviet Union stopped
By the end of November the German assault on the Soviet Union had passed its peak of effectiveness. By December snow and cold weather had stopped the German offensive for the winter. The Soviets launched counteroffensives that drove the Germans back from the outskirts of Moscow and Leningrad.
While Germany was attacking the Soviet Union, British armies in Egypt struck at the Axis forces in Libya. The attack drove the Axis from Benghazi on December 25, Christmas Day. The British, on orders from Churchill, then ceased pressing their advantage and sent many of their forces to Greece to oppose the German invasion there. The British were defeated in Greece and withdrew to the island of Crete, which was later captured by German paratroopers.
Japan moves toward war
The German attack on the Soviets had led the Japanese to believe that German victory was certain. They immediately tried to profit by it. In July 1941 the Vichy government in France gave Japan bases in southern French Indochina. Japan moved in and massed troops against Thailand (Siam).
Other aggressive moves by Japan brought strong protests from the United States and Britain. The Japanese declared that they wanted peace, but they continued their warlike acts. In late July the United States, Britain, and the refugee Dutch government in London placed embargoes on the shipment of oil to Japan.
General Hideki Tojo became premier of Japan in October. In November he sent a special envoy to seek peace with the United States. This was a trick to throw the United States off guard. Japan was playing for time in which to get its armed forces into position for attack, for the Japanese had already decided on war. Their lack of natural resources meant they had to win quickly or lose to the United States, whose potential power was overwhelming. Japanese leaders were also convinced that once the Americans were involved in the European war, they would be willing to negotiate a peace in the Pacific.
On November 26 American Secretary of State Cordell Hull announced that the United States would give full economic cooperation to Japan. In return, however, he asked that Japan withdraw from China and stop collaborating with the Axis. On December 6 President Roosevelt appealed directly to the Japanese emperor, Hirohito, to work for peace.
The attack on Pearl Harbor
Early the following afternoon the Japanese ambassador presented Japan's reply to the American proposal. It accused the United States of standing in the way of the “new order in East Asia.” It ended by saying that further negotiations were useless. Even as he spoke Japanese forces were attacking Americans in Hawaii, in the Philippines, and elsewhere in the Pacific Ocean area.
The Japanese attacked Pearl Harbor, Hawaii, on Sunday, Dec. 7, 1941. The attack came without warning very early in the morning. It was made by Japanese submarines and carrier-launched aircraft. The United States Navy and Army forces were completely surprised.
More than 2,300 Americans were killed in the two-hour attack. Eight battleships were sunk or damaged. Many cruisers and destroyers were hit. Most of the United States planes were destroyed on the ground. Japanese losses were 129 men, several submarines, and 29 of the more than 350 airplanes that had made the attack.
Declaration of war
Two and a half hours after the surprise attack at Pearl Harbor the Japanese officially declared war on the United States and Britain. Britain declared war on December 8. The United States Congress declared that a state of war had existed since December 7.
On December 9 China issued a formal war declaration against Japan, Germany, and Italy. On December 11 Germany and Italy declared war on the United States, and the United States Congress voted declarations in return. During the same week nine Latin American nations entered the war against the Axis powers—Costa Rica, Cuba, the Dominican Republic, El Salvador, Guatemala, Honduras, Haiti, Nicaragua, and Panama.
Bolivia declared war against Japan. Most other Latin American nations either broke off diplomatic relations with the Axis countries or supported the United States. Bulgaria, Hungary, and Romania declared war on the United States. Japan and the Soviet Union carefully avoided war with one another.
On Jan. 1, 1942, the 26 nations then at war with the Axis powers joined in a declaration in which they pledged united efforts and no separate peace until victory was gained. Signed in Washington, D.C., this was called the Declaration by United Nations.
The War During 1942
At the beginning of 1942 the Allies were on the defensive in all the theaters of war. German submarine attacks continued to sink Allied ships and their cargo more quickly than the Allies could replace them. U-boat operations had spread from the area of the British Isles into the eastern Atlantic and then into the central and western Atlantic. This part of the war was called the battle of the Atlantic.
In the Pacific, Guam and Wake islands had fallen to the Japanese in December 1941. The Japanese had also taken Hong Kong from the British, and much of the American fleet lay in ruins at Pearl Harbor. (See also Guam; Hong Kong; Wake Island.)
Battle of the Atlantic Comes to the United States
With the entry of the United States into the war, German U-boats soon began operating off the coast of the United States. Although few in number they had great success, sinking Allied ships off the East coast from New York to Florida, off the Gulf coast, and in the Caribbean. Allied shipping losses were up to 10,000 tons a day and threatened the entire war effort. Not until a convoy system was implemented did the losses begin to decrease.
The Japanese continued to take new territory in the first half of 1942. In the Philippines there were heroic defensive stands by Filipinos and Americans at Bataan and Corregidor. But the Philippines fell in May. Also by May Singapore, the Netherlands Indies, Burma, and parts of New Britain and New Guinea were in Japanese hands. Australia was seriously threatened. Darwin in northern Australia was heavily bombed.
Against all these advances American and British forces fought desperately. In Burma a small American volunteer group of fliers, the Flying Tigers, shot down hundreds of enemy planes. On April 18, 1942, a small group of carrier-launched Army aircraft bombed Tokyo. Although the attack accomplished little at the time, this was a preview of things to come for the Japanese.
In April the Pacific theater of operations was divided into the Southwest Pacific Area under United States Gen. Douglas MacArthur and the Pacific Ocean Areas under United States Adm. Chester Nimitz (see MacArthur). The Southwest Pacific Area included the Dutch East Indies (less Sumatra), the Philippines, Australia, the Bismarck Archipelago, and the Solomon Islands. The Pacific Ocean Areas comprised virtually every area not under MacArthur.
Battle of Midway Island
In June a strong invasion force of Japanese moved directly against the Hawaiian Islands. American ships, Navy planes, and Army planes from Midway Island fought a four-day battle against the invaders. The Americans lost a carrier, a destroyer, and 150 planes. The invaders, however, were completely defeated. They lost four aircraft carriers, two heavy cruisers, three destroyers, and 330 planes. Meanwhile a Japanese force occupied several of the Aleutian Islands.
The battle of Midway ended serious Japanese expansion and is considered the turning point in the Pacific. Within two months Allied counterattacks began. United States Marines and Army forces attacked the Solomon Islands in August. A month later American and Australian forces started to drive the Japanese out of New Guinea.
The battle for Egypt
In the Mediterranean area the Axis forces had almost complete control of the sea. Supplies for the British forces in Egypt, the Near East, and India had to be shipped around Africa. In January 1942 the German general Erwin Rommel and his Afrika Korps started a new drive to seize the Suez Canal.
After losing Benghazi in January the British held the Axis forces in check until May. Then a powerful attack engulfed most of the British tank force and moved into Egypt. In July the British were finally able to stop the drive at El Alamein.
In August Gen. Bernard L. Montgomery was named field commander of British forces in Egypt (see Montgomery, Bernard). On October 23 the British started a devastating attack from El Alamein. Rommel's tank force was routed. By November 6 the British had driven the Axis forces from Egypt. El Alamein is considered the turning point of the war in North Africa.
North Africa invasion
American and British forces under the command of United States Gen. Dwight D. Eisenhower landed in French North Africa on Nov. 8, 1942. They captured the strategic points in Algeria and Morocco in a few days.
The Vichy government denounced the attack, and the Nazis occupied all France. French navy officers, however, kept the French fleet at Toulon from German use by scuttling it. The French in Africa soon ended all resistance.
The Russian Front in 1942
By the spring of 1942 the Soviet Union had regained one sixth of the territory it had lost in 1941. Then warm weather brought a new German assault. Sevastopol' fell to the Germans in July. They also advanced to within 100 miles (160 kilometers) of the Caspian Sea and the important oil fields near the city of Baku. In August the Germans attacked Stalingrad (now Volgograd). The Red Army in Stalingrad was determined to fight to the last man. This bloody resistance stopped the German attack. This was the turning point of the war in Europe. In November the Soviets counterattacked and began to drive the Germans back.
Burma and India
By May 1942 the British had withdrawn from Burma and focused on the defense of India. The Japanese had achieved their objective of cutting off supplies to the Chinese. The Allies continued to build up forces in India and look for ways to supply the Chinese. Lord Louis Mountbatten was appointed as Southeast Asian commander for the Allies.
The War During 1943
By 1943 the Japanese were on the defensive everywhere in the Pacific. Guadalcanal in the Solomon Islands finally fell to United States Marines and Army forces in February 1943. This ended six months of bloody jungle warfare. During the fight for Guadalcanal a large part of the Japanese fleet was destroyed.
In the spring and summer of 1943 General MacArthur and Adm. W.F. (Bull) Halsey worked closely together. Their aim was to drive the Japanese out of eastern New Guinea, the Solomons, and the Bismarck Archipelago. By early fall Allied efforts had cleared an outer ring of positions covering Australia. Meanwhile Americans and Canadians had also cleaned out Japanese forces in the Aleutians.
The Marine attack on Tarawa
In November a United States Marine-Army force invaded the Gilbert Islands. The attack on Tarawa resulted in some of the bloodiest fighting of the war, costing the Marine Corps some 3,000 casualties.
MacArthur's troops in the southwest Pacific continued their island-hopping attack into December. By the end of 1943 Australia was no longer threatened by the Japanese. Allied forces would soon be ready to invade the Philippines.
Success in the Mediterranean area
In February 1943 General Eisenhower was appointed commander in chief of the Allied armies in the North African theater of operations. His objective—to oust the Axis forces from North Africa—was accomplished by May. (See also Eisenhower, Dwight D.)
The Allies invaded Sicily in July. On July 25 Benito Mussolini was forced to resign as premier of Italy. He was then arrested. King Victor Emmanuel appointed Marshal Pietro Badoglio to succeed Mussolini. The British 8th Army invaded southern Italy on September 3. Premier Badoglio's government surrendered its armed forces unconditionally on September 8. This took Italy out of the war, but the Germans, under Field Marshal Albert Kesselring, continued to fight. The Allies were forced to battle their way up the Italian mainland throughout the fall and early winter of 1943. Mussolini was rescued by a daring German commando raid and put in charge of a puppet government in northern Italy.
Soviet counterattack in 1943
The Soviet counterattack against the Germans gained full power by January 1943. The Soviets forced the Axis armies from Stalingrad, Kharkiv, and Smolensk. The German defeat at Kursk (July 5–August 23) was the largest tank battle of the war. By the end of the year the Soviets had reached the Polish border of 1939.
The war at sea
The battle of the Atlantic was fiercely fought in 1943. The Germans kept as many as 240 submarines prowling the sea lanes in wolf packs. They sank about 700 merchant ships before the Allies developed several good defenses against undersea attacks. The Allies bombed German submarine bases without letup and convoyed ships with long-range bombers. There were also major advances in radar and sonar. Another critical factor in defeating the U-boats was the British ability to read top-secret German radio communications. The British had broken the German “Enigma” code used to control the U-boats and also to send important high command messages. The resulting intelligence, code named “Ultra,” was also provided to American and British commanders in France and Italy. By the end of the year the Allies had almost ended the submarine menace in the Atlantic. (See also radar; sonar.)
The war in the air
The Allies were producing enough airplanes by 1943 to carry the air war into the heart of Germany. The mass bombing of targets deep in enemy territory was called strategic bombing. The combined British-American bombing attacks began to take their toll on German industry.
In 1940 the Hurricane and Spitfire fighter aircraft of the RAF had proved equal to German fighters such as the Messerschmitt ME-109. The first American planes were not as effective, but the later Thunderbolt (P-47) and Mustang (P-51) were excellent.
For bombers the British used Lancasters and Halifaxes, which could carry one-ton and two-ton “blockbusters.” Early in the war the British made daylight bombing raids, but they suffered crippling losses. They then turned to night raids.
The American 8th Air Force preferred daylight bombing raids because targets could be hit more effectively. Americans flew in large numbers and in tight formations. The planes they used were the Flying Fortress (B-17) and the Liberator (B-24).
At first the Americans suffered serious losses just as the British had. When the Mustang fighter plane was brought into the theater, however, they were able to ward off attacks by the German fighters. The Mustang, widely regarded as the best fighter of the war, could carry more fuel than other fighters and escort the bombers on their deepest raids.
Other kinds of air warfare
The Americans also developed air transport on a worldwide scale. By the end of the war the United States Army Air Transport Command, with almost 3,000 planes, was flying a global network of 188,000 miles (303,000 kilometers) of routes. The Navy flew 420 planes over 65,000 miles (105,000 kilometers) of routes. In the China-Burma-India theater the 10th Air Force flew over the “hump” of the Himalayas, carrying supplies from India to China.
By 1943 the Allies were also using planes to carry their combat troops into action. Large transports (C-47s) carried paratroops who were dropped by parachute over their objectives. Airborne troops were also carried in gliders towed by transport planes. The Germans had pioneered the use of parachute and glider troops early in the war.
The War During 1944
In February 1944 Admiral Nimitz's forces advanced more than 2,000 miles (3,200 kilometers) from Hawaii to seize Kwajalein atoll and Enewetak in the Marshall Islands. The next advance was some 1,200 miles (1,900 kilometers) to the Marianas. By mid-August Saipan, Tinian, and Guam had fallen to the Allies. New, long-range Superfortress planes (B-29s) were used to bomb Japan. Plans were made to seize the Philippines as a base for the invasion of Japan.
In October General MacArthur's forces invaded the Philippines at Leyte Island. After savage fighting by land, sea, and air forces the conquest of Leyte was complete about Christmas Day 1944.
In China, Chiang Kai-shek's forces continued to be on the defensive. A Japanese attack toward Changsha, begun on May 27, won control not only of a stretch of the Beijing-Hankou railroad but also of several of the airfields from which the Americans had been bombing the Japanese.
Throughout the early months of 1944 the main pressure upon the Germans was caused by Soviet attacks. One drive carried the Soviet armies to the Baltic states by spring. In the southwest they also drove deep into Ukraine.
Other drives neutralized Finland, took Minsk and Pinsk, and forced Romania to ask for peace. The Soviet Union also forced Romania to declare war on Germany on August 24. When the Soviets invaded Bulgaria in September that country also declared war on Germany. The Soviets next plunged into Yugoslavia to unite with Yugoslav partisan forces under Marshal Tito. The Yugoslav capital, Belgrade, was captured on October 20. The year ended on the Eastern Front with the Germans driven back to their own borders.
The Italian campaign
In late 1943, following a meeting in Tehran between Roosevelt, Churchill, and Stalin, General Eisenhower was named supreme Allied commander in western Europe. Britain's General Alexander was made commander of the Allied forces in Italy.
An invasion force was landed at Anzio in January. Allied forces were pinned down on the beachhead, and by spring the attack looked hopeless. In May, however, a heavy attack broke through south of Cassino. The attackers joined the forces at Anzio and swept on to take Rome in June.
The Allies now invaded France, and the Italian campaign became a containing operation. Allied troop strength there was kept low, but the forces were charged with keeping German troops diverted from the main theater of war in France.
The Normandy invasion
Early on the morning of June 6 an invasion fleet of some 7,000 ships landed American, British, and Canadian divisions on Normandy beaches. Airborne divisions dropped behind the German lines. In the air the Allies had complete command. This invasion was decisive; the outcome of the war in Europe depended upon its success.
In the first week the Allies established beachheads between Cherbourg and the city of Caen along a 60-mile- (97-kilometer-) wide strip. Within a week they drove about 20 miles (32 kilometers) inland. Casualties for the landing were about 15,000 out of some 150,000 engaged. The Germans never managed to mount a serious counterattack.
The British captured Caen on July 9. The Americans broke out of their beachhead positions on July 25. Armored columns headed inland, and Paris fell to the Allies on August 25. Victory seemed to be at hand, but soon the Allies outran their supply lines and German resistance increased.
The final German defense efforts
The Germans began to use new weapons against England: flying bombs, called V-1s, launched from bases in France, and ballistic missiles, called V-2s, launched from The Netherlands. The V-bombs injured and killed many English civilians and caused great damage but had no effect on the outcome of the war (see guided missiles). Along the Siegfried Line the Germans battled grimly to keep the Allies from entering Germany. In September, however, Allied troops crossed the German border near Aachen.
As the cold, wet season advanced the Allied drive slowed down. Aided by the bad weather, the Germans launched a surprise counterattack on December 16. The main attack came south of Aachen in the Ardennes. The battle of the Bulge, as this attack was called, ended in final German defeat in this region. The year ended with the Allied forces in the west and east ready to throw their weight into the drive that would crush Nazi power.
The War During 1945
After driving the Germans from the Ardennes bulge the Allied armies advanced into Germany. By the end of March 1945 the Americans and British had advanced halfway across Germany.
The Germans also collapsed on other fronts. Budapest fell to the Soviets in February and Vienna in April. In Italy Mussolini was caught and shot by partisans on April 28. The next day the Germans in Italy surrendered unconditionally.
Hitler commits suicide; Germans surrender
Despite the utter hopelessness of the German cause Hitler remained defiant in his underground Berlin bunker. The Soviets attacked Berlin on April 21. To escape capture by the Soviets Hitler committed suicide the night of April 30.
On May 4 British General Bernard Montgomery received the surrender of the Germans in Denmark, The Netherlands, and northwestern Germany. Gen. Alfred Jodl signed the unconditional surrender of all German forces at Reims on May 7. On May 8 Churchill, Stalin, and Harry S. Truman, the new U.S. president, announced the German surrender; Field Marshal Wilhelm Keitel then signed the final ratification in Berlin. Now all attention turned to the Far East.
Defeat of Japan
In December 1944 British General William Slim's 14th Army launched a campaign to drive the Japanese from Burma (Myanmar). By May 1945 they had succeeded, recapturing Rangoon (Yangon). Chinese forces also went on the attack, and by April the entire Burma Road, from Mandalay to China, was open.
Early in 1945 General MacArthur's forces in the Pacific landed an invasion force at Lingayen Gulf in Luzon, in the Philippines. Effective resistance in Manila ended in late February. It took many months, however, for the Americans to clear out the last pockets of fanatical Japanese resistance in the Philippines.
Meanwhile Admiral Nimitz's forces seized Iwo Jima and Okinawa. At Iwo Jima Marine casualties were the heaviest suffered in any island invasion, more than 20,000. During the Okinawa campaign the Navy was attacked by kamikaze (suicide) planes. The pilots of these planes deliberately flew them into American ships.
On July 26 Allied leaders met in Potsdam, Germany. They demanded that Japan immediately surrender or face utter destruction. Japan fought on. On August 8 the Soviet Union attacked the Japanese in Manchuria.
At this point American scientists made a significant contribution to the war effort. During the war they had developed and successfully tested an atomic bomb (see nuclear energy). President Truman decided to use the bomb to avoid the millions of casualties expected if the Allies had to invade Japan. On August 6 a B-29 dropped an atomic bomb on Hiroshima, Japan, a major munitions center, destroying about three fifths of the city. When the Japanese still refused to surrender, a more powerful atomic bomb was dropped on the port city of Nagasaki, leaving it in ruins. As many as 120,000 people died in these two attacks.
After the bombing of Nagasaki, Emperor Hirohito ordered the surrender of Japan. The Japanese accepted Allied terms on August 15. On Sept. 2, 1945 (this date was September 1 in the United States), Japan formally surrendered aboard the battleship Missouri anchored in Tokyo Bay. General MacArthur accepted the surrender as Supreme Commander for the Allied Powers.
General MacArthur immediately established military occupation of the empire. American troops went ashore to liberate war prisoners and to make certain that the Japanese complied with the terms of surrender. All Japanese military forces were disarmed and sent home. The emperor and other government officials had to obey General MacArthur's orders. Many of Japan's war leaders were arrested and held for trial. Hirohito was not among them; his leadership was considered essential to a peaceful occupation.
Key Campaigns and Battles of World War II
The first part of this section describes military actions in Europe and North Africa. The rest is devoted to events in the Pacific area and in the Far Eastern theater of operations.
Europe and North Africa
Battle of Poland; early stalemate
On Sept. 1, 1939, the German Luftwaffe crossed the Polish frontier. Poland's air force was destroyed on the ground. On the same day swift-moving German Panzer divisions smashed into Poland from three directions. The Polish army was the fifth largest in Europe. It was not equipped, however, to meet the up-to-date mechanized units of the Nazis. The Germans crushed organized opposition in 16 days. The Soviet Union attacked Poland from the east on September 17. Warsaw surrendered on September 27.

From Sept. 1, 1939, until May 10, 1940, there was no major action along the Western Front. French troops did not want to attack the strong German Siegfried Line. The Germans did not want to make an assault on the so-called “impregnable” French Maginot Line. Meanwhile the Russo-Finnish war (Nov. 30, 1939–March 12, 1940) gave the Soviet Union important parts of Finnish territory.
Conquest of Norway; battle of Flanders
On April 9, 1940, the Nazis occupied Denmark without opposition. On the same day they attacked Norway. Landing airborne infantry, parachute troops, and amphibious forces at many points, the Germans gained a solid foothold the first day of the attack. A group of Norwegian traitors led by Maj. Vidkun Quisling assisted the invaders. British troops landed on April 15 to aid the Norwegian forces, but the swift German advance forced them to flee central Norway May 1–3 and evacuate Narvik on June 9. This ended Allied resistance in Norway.
The Germans started an offensive against France on May 10, 1940. To protect their right flank the Germans roared into The Netherlands and Belgium. The same day German forces, led by Erwin Rommel's 7th Panzer Division, struck at the weak extension of the Maginot Line in the Ardennes Forest. Both attacks met quick success. Dutch resistance was crushed in four days, while to the south Nazi armored spearheads drove 220 miles (350 kilometers) to reach the French coast at Abbeville on May 21. The maneuver trapped the entire left wing of the Allied armies in a small pocket on the English Channel. Under heavy fire from German aircraft and artillery a hastily assembled fleet evacuated more than 360,000 Allied troops from Dunkirk to England between May 29 and June 4, 1940, leaving all their heavy equipment and artillery on the beach.
Battle of France; battle of Britain
The defeat in the battle of Flanders left the French helpless to halt the next German blow. On June 5, 1940, the Germans attacked along a 100-mile (160-kilometer) front. One armored spearhead raced south along the French coast closing the Channel ports to possible British aid. In the center the Germans knifed through the Somme and Aisne river lines and smashed toward Paris from east and west. Another column turned to the left after breaking through at Sedan and took the Maginot Line from the flank and rear. Paris fell to the Germans on June 14, and the French asked for surrender terms on June 17. Meanwhile Italy declared war on France on June 10 but engaged in no major action.
Late in 1940 Germany lost its first battle of the war when Britain fought off a savage air campaign. On Aug. 8, 1940, Hitler launched the air assault on England to soften the British Isles for invasion and gain the necessary air superiority. Huge formations of German bombers crossed the English Channel almost daily to blast seaports, industrial cities, and airfields. Despite this steady rain of bombs the British stood firm.
Counterattacks by fighter planes of the RAF took a terrible toll of the Luftwaffe raiders. The very effective employment of the RAF was helped by the new invention of radar. Britain's victory was certain on October 6 when the Nazis gave up daylight raids that had cost them some 1,700 planes and their crews. The RAF had lost more than 900 planes. Later in 1940 the Germans began night raids that spread terror and destruction, especially in London, but had little military value. In fact, the German shift from directly attacking the RAF to attacking cities enabled the RAF to rebuild its strength and ensure victory.
German drive through the Balkans; battle of the Atlantic
On April 6, 1941, the Nazi war machine drove into Yugoslavia and Greece to complete Hitler's plans for the conquest of the Balkans and the domination of the Mediterranean. Yugoslavia was overrun by April 17. Greece surrendered on April 30. Of the 74,000 British troops in Greece only about 44,000 escaped. Many of these fled to Crete, which was a refueling stop and support base for the fighting in Greece. These troops were captured or killed when German airborne troops took that island between May 20 and June 1, 1941. Although the German campaign in the Balkans was successful, it delayed an invasion of the Soviet Union for several crucial months.
Early in the war Germany blockaded Europe by attacking Allied ships with land-based planes and submarines. The Allies struck back by convoying ships with destroyers and planes from escort carriers. They were helped immeasurably by the fact that the British had broken the top-secret German code “Enigma,” used to direct submarine operations, and by improvements in radar and sonar. These defenses brought the submarine and air attacks under control by 1942. The Germans also tried to use surface raiders to interfere with Allied shipping, but this threat ended with the sinking of the German battleship Bismarck in May 1941.
Battles of El Alamein and Stalingrad
Italian forces and Rommel's Afrika Korps entered Egypt in a drive for the Suez Canal in June 1942. The British 8th Army held fast at El Alamein, about 60 miles (97 kilometers) southwest of Alexandria. On October 23 British infantry cut through the Axis lines in a bayonet charge that opened the way for an armored breakthrough. The attack forced the Axis back 1,300 miles (2,100 kilometers) across the desert.
The Germans attacked Stalingrad (now Volgograd) on Aug. 24, 1942. The Soviets resisted street by street and house by house. Control of Stalingrad became a personal goal for both Stalin and Hitler. The powerful German 6th Army under Gen. Friedrich von Paulus spent itself in a futile effort to dislodge the Soviet forces. The Soviets, led by Marshal Georgi Zhukov, counterattacked on November 19, and by Feb. 2, 1943, they had killed or captured 330,000 Germans and other Axis soldiers in one of the costliest battles of the war.
Invasion of North Africa; battle of Tunisia
British and American troops landed at Algiers, Oran, and Casablanca in French North Africa on Nov. 8, 1942. This was called Operation Torch. The invaders met little resistance and quickly drove inland. On November 15 the French in Africa joined the Allies.
American forces met their first defeat at the hands of the Germans in the battle of Kasserine Pass (Feb. 14–25, 1943). They rallied, however, to push through Tunisia. On April 7 United States troops met the British 8th Army as it advanced from the east. The Allies forced 250,000 Germans and Italians to surrender near Cape Bon on May 12.
Battle of Sicily; bombing of Ploesti
On July 10, 1943, the British and the Americans launched Operation Husky, the invasion of Sicily. The British 8th Army landed at Cape Passero, and the United States 7th Army led by Gen. George S. Patton won a beachhead at Gela. The Americans cut through the center of the island and swept up the western coast while the British, under Montgomery, went up the eastern coast. The Americans ended the campaign by capturing Messina on August 17.
On Aug. 1, 1943, 178 American B-24 (Liberator) bombers flew a 2,400-mile (3,900-kilometer) round trip from Libya to bomb Ploesti, Romania. The low-level attack did severe damage to the chief oil center of Hitler's Europe, but the United States 9th Air Force lost 54 planes in the raid. A year later the Ploesti target was knocked out in a savage three-day assault that cost 2,277 American airmen and 270 planes.
Battles of Salerno and Cassino
The British 8th Army invaded Italy at the toe of the boot on Sept. 3, 1943. Six days later the American 5th Army landed at Salerno, south of Naples (Operation Avalanche). For six days German armor attacked savagely, but naval gunfire and close air support helped the invaders to break out of the beachhead September 15. The next day they joined the 8th Army coming from the south. The capture of Naples on October 1 rounded out this campaign and opened the way for a bitter winter struggle at Monte Cassino.
By December 1943 the 5th Army advance in Italy was stopped at the Gustav Line based on Cassino. Despite bombardment by air and artillery the Germans clung to their defenses. On Jan. 22, 1944, Allied troops landed behind the Gustav Line at Anzio but failed to break the stalemate. Finally, on May 18, the Allies overran Cassino. A week later the Cassino troops linked up with the Anzio forces. They then advanced 75 miles (120 kilometers) to take Rome on June 4.
Invasion of Normandy; battle of the hedgerows
The long-awaited invasion of Europe from the west came in June 1944. For almost three years bombers had pounded the French and German coasts. Then on June 6, 1944—known as D-Day—the Allies stormed ashore at Normandy from a fleet of about 4,000 ships. This was called Operation Overlord. United States Gen. Dwight D. Eisenhower was in overall command.
From the Normandy beaches the Allies thrust inland. They faced stiff German resistance at every hedgerow. Relentless attacks slowly forced the Germans back. American infantry captured Cherbourg on June 27.
Breakthrough at St.-Lô
On July 18, 1944, the United States 1st Army fought its way into St.-Lô where formidable German defenses blocked the advance. But after Allied planes delivered a crushing air bombardment, the 1st Army smashed through the German lines and broke out of the beachhead on July 25. Racing through the gap General Patton's 3rd Army captured Avranches on July 31. Four days later a daring attack by American tanks cut off the Brittany Peninsula. Meanwhile on July 18 the British and Canadians crossed the Orne River at Caen and struck south.
On the left flank of the 3rd Army the XV Corps pushed east to capture Le Mans on August 9 and then moved north to Argentan. Meanwhile the Canadian 1st Army advanced south to Falaise. By August 17 these two Allied thrusts had trapped the German 7th Army in a pocket between Argentan and Falaise. Five days later the Allies had captured 100,000 prisoners and killed many others who tried to escape.
Capture of Paris; invasion of southern France
The defeat at Falaise-Argentan broke the back of the German defenses in France. The 3rd Army then knifed across France. By August 20 the French capital was surrounded. The German garrison surrendered on August 25, and General de Gaulle and his Free French forces were able to enter the city without fighting.
The American 7th Army invaded southern France in Operation Anvil on Aug. 15, 1944. American infantry divisions from Italy made the attack. They were aided by American paratroops and British and French units. Overcoming weak German defenses, the 7th Army raced up the Rhone Valley to join the 3rd Army near Dijon on Sept. 11, 1944.
Battle of Aachen
Aachen was the first large German city to be taken. On Oct. 11, 1944, the veteran 1st Infantry Division of the United States 1st Army entered the outskirts of Aachen. The German defenders fought back savagely under Hitler's order to resist to the last man. They were not driven out until October 21. The city lay in ruins. Meanwhile divisions of the 1st Allied Airborne Army had crossed the Rhine in the Nijmegen-Arnhem area on September 20 but were driven back five days later.
Battle of the Bulge
On Dec. 16, 1944, the Germans launched a furious counterattack in the Ardennes. Twenty-four German divisions drove a bulge 60 miles (97 kilometers) wide and 45 miles (72 kilometers) deep into the American lines. They achieved complete surprise because of bad weather that grounded Allied planes, excellent deception measures, and the failure of Allied intelligence.
The heroic resistance of Allied units, however, finally halted the Germans. The United States 1st, 2nd, 4th, and 99th Infantry divisions held the shoulders of the bulge at Monschau and Echternach. Other brave stands were made at St. Vith by the 7th Armored Division and at Bastogne by the 101st Airborne Division and Combat Command B of the 10th Armored Division. On December 26 the 4th Armored Division relieved encircled Bastogne, ending the crisis. The 1st and 3rd armies eliminated the bulge during January. The Germans lost 220,000 soldiers and 1,400 tanks and assault guns. Allied casualties totaled 40,000.
Attacking Germany from the air
The Allies had carried out a strategic bombing campaign against Germany. This had two goals: to destroy the German war industry and to break the morale of German civilians. When the Allies plunged into Germany they found firsthand evidence of the terrible destruction caused by the American and British saturation bombing raids. Industrial centers were crushed by a steady storm of bombs. Bridges were blown, railroad yards smashed, and harbors filled with the debris of sunken ships.
Most of this destruction took place in the later stages of the war. Of the 2,697,473 tons of bombs dropped on Nazi-held Europe less than one fifth fell before 1944 and less than one third before July 1944. In these attacks Allied planners had picked their targets carefully. Of the entire bomb tonnage 32.1 percent was dropped on transportation targets and 9.3 percent on oil, chemical, and rubber centers. Other vital targets had included ball-bearing factories, optical factories, and aircraft and steel industries. Meanwhile the Luftwaffe had been all but swept from the skies by Allied fighter planes.
Crossing the Rhine; the drive through Germany
On Feb. 10, 1945, a long, grinding drive by U.S. forces through the Hürtgen Forest took the dams on the upper Roer River and ended the danger that the Germans could flood out the troops downstream. The 1st Army attacked and reached the Rhine at Cologne on March 7. The same day the United States 9th Armored Division captured the Ludendorff bridge at Remagen intact. This action breached the last natural German defensive position. In the meantime Patton's 3rd Army swept the Germans from the Saar and the Palatinate and unexpectedly crossed the Rhine near Oppenheim. By March 31 all seven Allied armies were advancing deep into Germany.
After crossing the Rhine the Americans sprang a trap on the defending Germans. North of the Ruhr the 9th Army drove straight east while the 1st Army broke out of their Remagen bridgehead and struck east and north. The two columns joined at Paderborn on April 1, cutting off German forces in the Ruhr. While the 15th Army held the west face of the pocket along the Rhine, units from the 1st and 9th armies drove in to crush the Germans. They took more than 300,000 prisoners.
The Canadian 1st Army routed the Germans in The Netherlands, and the British, Americans, and French swept through Germany. On April 25 United States and Soviet forces met at Torgau on the Elbe River. This sealed the fate of Germany. Meanwhile the 3rd and 7th armies plunged into Czechoslovakia and Austria.
Final attack in Italy
Fighting their way northward the Allies won campaigns in the Rome-Arno region (Jan. 22–Sept. 9, 1944) and in the north Apennines (Sept. 10, 1944–April 4, 1945). On April 9 they launched Operation Grapeshot, a mass assault designed to smash the Germans in northern Italy. The American 5th and the British 8th armies broke through German defenses and drove them across the Po River on April 23.
The Allies accepted the surrender of all German forces in Italy on April 29. The day before, Mussolini had been captured and killed by Italian partisans. Total Axis losses in Italy were 86,000 soldiers killed, 15,000 permanently disabled, and about 357,000 captured.
Fall of Berlin
The Soviet counteroffensive launched at Stalingrad slowly threw back the Germans along the entire front from Leningrad to Sevastopol'. During the autumn of 1944 the crushing Soviet advance forced the Germans to withdraw from the Balkans. In January 1945 the Soviets pushed across the German frontier.
On April 21 the Red Army attacked Berlin. Here the Nazis offered their last bitter resistance, defending the city with all their dying strength. But on May 2 the Soviet forces completed the conquest of the burned and battered German capital in the final decisive action in the European theater. Hitler committed suicide on April 30, and German forces surrendered on May 7.
The Pacific and Far East
Times and dates for the Pacific theater of war are given as of the time zone where the action took place.
The attack on Pearl Harbor
In a surprise attack the Japanese struck at the United States Pacific Fleet at Pearl Harbor on Dec. 7, 1941. The Japanese had used a similar surprise attack to start the Russo-Japanese War in 1904. The Japanese force was made up of some 350 planes from six carriers. Several submarines also joined in the two-hour attack. The attack, planned by Adm. Isoroku Yamamoto, commander of the Japanese navy, sank the battleships Arizona and Oklahoma and severely damaged six other battleships. Some 190 Army and Navy airplanes were destroyed on the ground. Japanese losses were 129 men, several submarines, and 29 planes. More than 2,300 Americans were killed. Fortunately for the Americans, none of their aircraft carriers was present, because carriers would change the way war was fought at sea.
Battle of Wake Island
The Japanese immediately followed up the advantage gained with the success of Pearl Harbor. Their goal was to win the Pacific before the Allies could regroup. On Dec. 8, 1941, the Japanese struck at Wake Island, a tiny American outpost 2,000 miles (3,200 kilometers) west of Honolulu, which was forced to surrender on December 23. Guam, the first American possession lost, had already fallen on Dec. 11, 1941.
Malaya and Singapore
The Japanese 25th Army, under Gen. Tomoyuki Yamashita, invaded Malaya on Dec. 8, 1941, and overran the peninsula in an eight-week campaign. Their experience in jungle fighting and their ability to make amphibious landings behind the British lines were key to their success. When the British battleship Prince of Wales and the battle cruiser Repulse tried to intercept Japanese troop convoys, Japanese aircraft sank them. This ended British naval efforts in the area.
On Feb. 2, 1942, the Japanese attacked the British naval base on Singapore. The island was prepared to resist assault from the sea but not from the skies or the jungles in the rear. Singapore surrendered on February 15, exposing the East Indies and the Indian Ocean to the Japanese advance.
Siege of Bataan
Japanese forces under Gen. Masaharu Homma landed on northern and southern Luzon in the Philippines during Dec. 10–24, 1941. To avoid encirclement General MacArthur abandoned Manila and withdrew his troops to the rugged peninsula of Bataan on Jan. 2, 1942. Here about 25,000 American and Filipino regulars and several thousand reservists held out until April 9. Then more than 35,000 exhausted defenders surrendered the peninsula. Later up to 10,000 of these prisoners died on an infamous “death march” to prison camps. United States Gen. Jonathan M. Wainwright and others escaped to Corregidor and continued their delaying action there, but on May 6 the Japanese overran the island and captured an additional 15,000 Americans and Filipinos. In March President Roosevelt had ordered MacArthur to Australia to take over defenses there.
Battle of the Coral Sea
On May 4–8, 1942, an American naval task force battled a Japanese invasion fleet in the Coral Sea, northeast of Australia. Surface ships did not exchange a shot. Action was confined to long-range attacks by carrier planes. American planes were based on the Lexington and the Yorktown. The Japanese withdrew on May 8. This was the first Japanese setback of the war. American losses included the Lexington, one destroyer, one tanker, 74 planes, and 543 men.
The decisive battle of Midway
On June 3–6, 1942, two American naval task forces and land-based planes from Midway Island intercepted 160 Japanese ships west of Midway. In a pitched air-sea battle the Japanese were repulsed, losing four carriers, two heavy cruisers, three destroyers, and 330 planes. This decisive defeat stopped the Japanese eastward advance in the Pacific. It is considered the turning point of the war in that theater. American losses included the carrier Yorktown, one destroyer, 150 planes, and 307 men.
Solomon Islands campaign; battle of Papua
On Aug. 7, 1942, the 1st Marine Division landed on Guadalcanal and seized Henderson Field. An additional Marine division and two Army divisions later reinforced them. After six months of bloody jungle warfare the Americans wiped out the last Japanese units on Feb. 8, 1943. New Georgia was taken on Aug. 6, 1943, by U.S. Army forces. Bougainville was invaded on Nov. 1, 1943, by the Marines, later reinforced by three Army infantry divisions.
In a series of five naval engagements between August and November, the U.S. Navy protected the Allied foothold in the Solomon Islands. A large portion of the Japanese fleet was destroyed but at a heavy cost in American ships. In the battle of Savo Island (August 8–9), a Japanese night attack sank three American cruisers and an Australian cruiser before the attackers withdrew. In the battle of the Eastern Solomons (August 23–25), American carrier planes forced the Japanese fleet to withdraw. The battle of Cape Esperance (October 11–12) was an American night attack that again drove off the Japanese. In the battle of Santa Cruz Island (October 26), American and Japanese carriers exchanged blows; two Japanese carriers were damaged and about 100 planes shot down at a cost of the U.S. carrier Hornet and 74 planes. In the battle of Guadalcanal (November 13–15), Japanese attacks were repulsed with heavy losses on both sides, including two American cruisers; another American cruiser was sunk two weeks later off Lunga Point.
The Japanese attack in New Guinea had carried to within 30 miles (48 kilometers) of the Allied base of Port Moresby by Sept. 12, 1942. But American and Australian troops then drove the Japanese back over the Kokoda Trail through the Owen Stanley Mountains. Fighting in jungles and swamps, the Allies took Buna (Dec. 14, 1942) and Sanananda (Jan. 22, 1943) in Papua.
Battle of the Aleutians; New Guinea campaign
On June 4–6, 1942, the Japanese occupied Attu and Kiska in the western Aleutian Islands, the farthest point of their drive toward Alaska. Almost a year later, on May 11, 1943, the United States 7th Infantry Division bypassed Kiska and stormed ashore on Attu. In bitter, hand-to-hand fighting they wiped out the entire Japanese garrison by May 31. Kiska was retaken without opposition on Aug. 15, 1943.
From June 1943 to July 1944 the United States 6th Army, led by Gen. Walter Krueger, leapfrogged along the northern shore of New Guinea with amphibious, airborne, and overland attacks. This advance pushed 1,300 miles (2,100 kilometers) closer to Japan and bypassed 135,000 enemy troops.
Battles of the Gilberts and Marshalls
The seizing of the Gilberts (Operation Galvanic) opened the American advance in the central Pacific. Marines invaded Tarawa on Nov. 21, 1943, in the face of murderous crossfire from heavily fortified pillboxes. The island was conquered in four days at a cost of 913 Marine dead and 2,000 wounded.
The invasion of the Marshall Islands (Operation Flintlock) marked the first conquest of Japanese territory. Despite bitter enemy resistance Marine and Army troops took Namur, Roi, Kwajalein, and Enewetak between Jan. 31 and Feb. 22, 1944.
Battle of the Marianas
Supported by the United States 5th Fleet, American ground troops assaulted the Mariana Islands. On June 15, 1944, Army and Marine divisions invaded Saipan. The Japanese resisted savagely with machine guns, small arms, and light mortars emplaced in caves and concrete pillboxes. The last desperate banzai charge was smashed on July 7 and all organized opposition ceased two days later.
Army and Marine divisions landed on Guam on July 21 and overran the island by August 10. Tinian was taken by the Marines (July 24–August 1). On Nov. 24, 1944, B-29 Superfortresses delivered their first major strike against Japan from bases on Guam and Saipan.
Battle of the Philippine Sea
The United States invasion of Saipan provoked the Japanese navy to counterattack, but the resulting battle of the Philippine Sea effectively eliminated Japanese carrier power. On June 19, American planes from 15 carriers of Task Force 58 destroyed 402 Japanese aircraft. Four United States ships suffered minor damage and 17 planes were lost. The following day American carrier planes located the Japanese fleet farther to the west. They destroyed about 300 more Japanese planes and sank two carriers, two destroyers, and one tanker and crippled 11 other vessels. Japanese antiaircraft fire and fighter planes shot down 16 American aircraft.
Battles of Burma, the Palaus
The 1942 Japanese conquest of Burma cut the Allied ground route to China. It ran by rail from Rangoon to Lashio and then over the Burma Road to Kunming. In the fall of 1943 the Allies launched Operation Capital to reopen a road into China.
Advancing from Assam, India, two American-trained Chinese divisions drove down the Hukawng Valley in northern Burma during October 1943. Behind the attacking Chinese, American engineers blasted out the new Ledo Road. Merrill's Marauders, a specially selected American long-range penetration unit, reinforced the Chinese in February 1944. A similar British force, known as Chindits or Wingate's Raiders, landed behind Japanese lines by glider and transport plane and protected the southern flank of the advance. On Aug. 3, 1944, the veteran Allied jungle fighters captured Myitkyina. They cleared the way to Mongyu by January 1945.
On Sept. 15, 1944, U.S. Marines secured a beachhead on Peleliu Island in the Palau group. Army units reinforced the Marines on September 22. Japanese forces held out until mid-October.
Battle of China
The Japanese attack on China, begun in 1937, was intensified late in 1944 in an effort to wipe out forward bases of the United States 14th Air Force. From Sept. 8 to Nov. 26, 1944, the Japanese overran seven large air bases.
Less than a month later enemy columns had split unoccupied China, opening up a Japanese-dominated route from Malaya north to Korea. In the spring of 1945 the Chinese began a counteroffensive that regained much of the territory lost the previous year. Important elements in this drive were 35 divisions that United States Gen. Stilwell had helped to train and equip. Air support was provided by the 14th Air Force, based at Kunming, and the 10th Air Force, brought from India to Luichow.
Battles of Leyte and Leyte Gulf
The United States 6th Army invaded the east coast of Leyte on Oct. 20, 1944. Enemy resistance ended on Dec. 26, 1944, but mopping-up operations continued for many weeks. Japanese naval forces challenged the Leyte landings in a series of three engagements on Oct. 23–26, 1944. The United States 3rd and 7th fleets and a task force led by Adm. Marc A. Mitscher defeated the Japanese in all three engagements. Japan no longer had an effective navy. By this time, U.S. submarines had also effectively blockaded the home islands.
Battle of Luzon
On Jan. 9, 1945, General Krueger's 6th Army landed at Lingayen Gulf. The 8th Army, led by Gen. Robert L. Eichelberger, landed at Subic Bay on January 29 and at Batangas two days later. These attacks trapped the Japanese in a giant pincers, but they fought back fiercely in Manila, at Balete Pass, and in the Cagayan Valley.
Organized Japanese resistance ended on June 28, but large pockets of the enemy held out for many months. American prisoners were freed at Santo Tomás, Cabanatuan, Los Baños, and Baguio.
Battles of Iwo Jima and Okinawa
U.S. Marines invaded Iwo Jima on Feb. 19, 1945. It was conquered after desperate fighting on March 16. United States losses were 4,189 killed, 15,308 wounded, and 441 missing. Japanese losses were 22,000 killed and captured.
General Simon Buckner's 10th Army landed along the western coast of Okinawa on April 1, 1945. This Army-Marine force fought for 79 days, during which time they advanced only 14 miles (23 kilometers). Enemy resistance finally ended on June 21. Japanese losses were 109,629 killed and 7,871 captured. American casualties totaled 39,000, including 10,000 naval personnel of the supporting 5th Fleet. This fleet had been attacked by Japanese kamikaze planes. Twenty-six American ships were sunk and 168 were damaged.
Atomic bombs hit Japan
During July 1945, B-29 Superfortresses from the Marianas flew 1,200 sorties a week against the Japanese homeland. Other planes flew from recently captured Okinawa and Iwo Jima to join in the aerial assault. Meanwhile the 3rd Fleet sailed boldly into Japanese coastal waters and hammered targets with its guns and planes. The American intent was to make the Japanese believe that a general invasion was imminent.
The assault never happened. Fanatical Japanese resistance on Luzon, Iwo Jima, and Okinawa convinced American military planners that an invasion of the Japanese homeland would result in hundreds of thousands of Allied and Japanese casualties. To avoid these enormous losses, another, more devastating, means was found to end the war. On August 6 a B-29 named Enola Gay dropped an atomic bomb on Hiroshima, at the southern end of Honshu. The explosion vaporized everything in the immediate vicinity, completely burned about 4.4 square miles (11.4 square kilometers) of the city, and killed between 70,000 and 80,000 people. About 70,000 more were wounded. With an American victory seemingly assured, the Soviet Union declared war on Japan two days later.
The Japanese still would not surrender. On August 9 a second atomic bomb was dropped, this time on Nagasaki. It killed about 40,000 people and injured a similar number. The next day Hirohito, the Japanese emperor, ordered his government to ask for peace. Japan agreed to the terms of surrender on August 15 (the 14th in the United States), which is remembered as V-J (Victory over Japan) Day. The formal surrender was signed on September 2.
Consequences of the War
When World War II ended many countries throughout the world had to rebuild their war-damaged cities and lands. Some of the nations who won the war suffered almost as much as those who lost it.
The western Soviet Union and Poland had undergone as much war damage as Germany. Britain, France, and The Netherlands were as battered as Italy. In China and the Philippine Islands the losses were as great as in Japan.
The losses in life, money, resources, and production were so great they can only be estimated. In addition throughout Europe and eastern Asia death by famine and disease threatened the lives of people who had survived the war.
The Costs of the War
No one will ever know for certain the war's cost in people killed, disabled, and wounded. Many nations could not accurately count their losses.
Estimates of the total number of deaths range from about 35 million to 60 million. The military forces of the Allies and the Axis reported a total of about 14.5 million killed. The civilian population suffered even more than the military through air bombings, starvation, and epidemics. Campaigns of genocide in Europe and Asia were responsible for millions of deaths (see Holocaust). Estimated civilian deaths amounted to at least 20 million. The countries with the greatest number of civilian losses were the Soviet Union, 7 million; Poland, 5.7 million; China, 2.2 million; Yugoslavia, 1.2 million; Germany, 780,000; and Japan, 672,000. More than 12 million people were left homeless. For millions, suffering and hardship continued long after the war.
Total military costs were more than 1 trillion dollars. Property damage was estimated at almost as much (800 billion dollars). The war at sea cost 4,770 merchant vessels, with a gross tonnage of more than 21 million. This amounted to 27 percent of all the ships in existence at the start of the war.
In addition, war spending did not stop when the fighting ended. Care of the disabled, pensions, and other expenses continued. In the United States money spent for United Nations relief, occupation of foreign countries, and veterans' benefits raised the total cost by another 30 billion dollars.
Losses in Normal Production
The total number of people who served in the armed forces during the war was estimated at about 92 million. Figures for some of the nations are: the Soviet Union, 22 million; Germany, 17 million; the United States, 14 million; and Britain, 12 million.
In 1943, the war year of peak employment in the United States, an additional 12,601,000 people worked in the basic war industries. In many other countries most of the workers had war jobs. The world lost years of peacetime production from all these people.
These economic losses did not stop with the end of the war. Millions of people not only had been taken from normal production but also could not return to their usual work, because factories, railroads, and other business property had been destroyed. Millions of others had lost the money they needed to start again, or their businesses had been ruined by the war.
Gains in Rebuilding, Science, Technology
There were, however, certain gains from the war. Much bomb damage had been done to slum areas of some cities. After the war these areas were rebuilt, giving people better places to live.
In many industries manufacturing methods had been improved. Automatic methods and machinery replaced costly handwork in countless operations. Machines were developed to squeeze and mold metal like putty. New alloys and plastics were developed.
Medicine and surgery made great advances. Penicillin might not have been produced for a generation in normal times. War insecticides such as DDT began a new age in controlling dangerous pests and disease carriers. The dangers posed by the use of pesticides would not be recognized for several years.
The development of jet and rocket propulsion offered prospects of air transportation at the speed of sound (see jet propulsion). The greatest advance of all was the release of atomic power. Although first used as a weapon, it soon brought peacetime benefits in the form of nuclear energy for power in industry. Nuclear power was also adapted to new military uses in the construction of submarines and aircraft carriers (see nuclear energy).
The V-1 and V-2 guided missiles developed by the Germans during the war were an important step toward the modern space age. After the war V-2 equipment and German engineers were brought to the United States. The work of these German engineers along with that of American scientists resulted in the successful launching of American artificial Earth satellites. (See also guided missile; rocket; space exploration.)
The Hard Road to Peace
Throughout World War II there were important meetings among the heads of the Allied governments. At these conferences plans were made for winning the war and for the postwar world. The leaders hoped to avoid the mistakes made after World War I.
President Roosevelt and Prime Minister Churchill met at sea off the North American coast in August 1941. They produced the Atlantic Charter, which restated Woodrow Wilson's Fourteen Points in simpler terms and promised to end Nazi tyranny. (See also Atlantic Charter; Wilson, Woodrow.)
In 1943 President Roosevelt and Prime Minister Churchill conferred in Casablanca in January, in Washington, D.C., in May, and in Quebec in August. Later that year in Moscow the foreign ministers of Britain, the Soviet Union, and China and Secretary of State Hull of the United States signed a pact to plan an international organization for peace.
The Tehran Conference and UNRRA
President Roosevelt and Prime Minister Churchill met with Chiang Kai-shek of China at Cairo, Egypt, in November 1943. From Cairo Roosevelt and Churchill went to Tehran in Iran to confer with Premier Stalin. They promised him a second front in France.
Representatives of 44 Allied nations met in Washington, D.C., and Atlantic City in November 1943. They set up the United Nations Relief and Rehabilitation Administration (UNRRA). The UNRRA fund for rehabilitating the postwar world was estimated at about 2 billion dollars. The United States was to provide 1.35 billion dollars of the total.
Planning Finance and World Peace
A group of monetary experts representing 44 states or governments held a conference at Bretton Woods, N.H., during July 1944. They agreed on a system for setting up an international lending agency. Countries in need of funds to finance international trade could borrow an amount equal to their contribution. This was called a “stabilization fund.” The plan also called for an International Bank for Reconstruction and Development to lend money for rehabilitation projects in member nations. The United States was expected to contribute the largest amount of money to both the stabilization fund and the World Bank.
In August 1944 representatives of the United States, the Soviet Union, Britain, and China met at Dumbarton Oaks estate in Washington, D.C. Preliminary plans were drawn up for assuring peace. These plans formed the basis for the organization of the United Nations the following year.
The Yalta Conference and the United Nations Charter
President Roosevelt and Prime Minister Churchill met with Premier Stalin at Yalta in the Crimea during Feb. 4–11, 1945. They discussed plans for ending the war, occupying Germany, and dealing with the defeated or liberated countries of eastern Europe. They also laid the basis for provisions of the United Nations charter.
President Roosevelt died on April 12. Vice President Harry S. Truman succeeded to the presidency on the same day. He announced he would follow Roosevelt's wartime and postwar policies (see Truman).
On April 25, 1945, delegates from 50 countries assembled in San Francisco to endorse a charter based on the Dumbarton Oaks proposals. The charter took effect on Oct. 24, 1945, officially creating the United Nations. (See also United Nations.)
Potsdam Meeting; Postwar Disagreement
In July and August 1945 President Truman, Premier Stalin, and Prime Minister Churchill met in Potsdam, a suburb of Berlin. They discussed peace settlements and drew up plans for reconstructing Europe.
In the midst of these discussions an election in Britain put the Labour party in power. That party's leader, Clement R. Attlee, succeeded Churchill as Britain's prime minister and replaced him at the Potsdam meeting.
After Japan's surrender the foreign secretaries of Britain, the Soviet Union, China, and France and American Secretary of State James F. Byrnes met in London in September 1945. After three weeks of disputes the meeting broke up without results.
During this time the Soviets demanded a share in the occupation of Japan. General MacArthur, however, was kept in sole command of Japan. The Soviet Union shared occupation of Korea with the United States.
Meanwhile, weaknesses in the prewar colonial empires began to surface—a trend that continued for many years. Revolts soon broke out in some of the regions released from Japanese control. In the Netherlands Indies Indonesian nationalists revolted and set up a republic in 1945. The Dutch failed to put down the revolt. By 1950 the Republic of Indonesia was formed. France had trouble reestablishing its authority in French Indochina against the resistance of Vietnamese nationalists. Britain was disturbed by rebellion in Burma, pressure for independence from India, and demands from Zionist Jews for entry into Palestine. (See also East Indies; Indochina; Indonesia.)
Postwar Relief; War Criminal Trials
By 1946 UNRRA had helped to return about 6 million people to their homes in western Europe. It had also distributed about 6 million tons of food. In 1947 UNRRA was discontinued. The problem of food relief was then handled by the individual nations.
In August 1945 the United States, Britain, the Soviet Union, and France wrote a charter for an Allied War Crimes Commission. The court established by the commission met at Nuremberg, Germany. It called before it 22 leading Nazis. In October 1946 the court sentenced most of the defendants. Ten of them were hanged. Seven were imprisoned, and three were acquitted. Others were sentenced later.
The British, Norwegians, and French also held separate war criminal trials. In Japan after V-J Day General MacArthur set up an Army commission to try war criminals. Hundreds were executed, and thousands more were put in prison.
British Power Declines; United States Problems
A significant postwar development was the decline of British power. Soon after the war Britain began to give up its empire. In 1947 it granted freedom to India, which split up into a Hindu state and a new Muslim nation named Pakistan. In the same year Britain turned over the Palestine problem to the United Nations. In 1948 the State of Israel was created. (See also India; Israel; Pakistan.)
For more than a year after the war the United States had problems that many foreign nations took for signs of weakness. Members of the armed forces demanded their release. By 1947 the Army was down from its war peak of more than 8 million soldiers to a peacetime strength of about 1 million. Congress passed the nation's second peacetime draft law in 1948.
There were also many shortages of consumer goods. Widespread labor troubles resulted in damaging strikes. Dissatisfaction with conditions brought a sweeping Republican victory in the 1946 Congressional elections.
The Marshall Plan
One of the causes of World War II was the collapse of the European economy. To avoid a repeat of this situation and to create a strong economy that would enable Europeans to resist Communist aggression, the United States decided to aid European recovery. Under the Marshall Plan, named after United States Secretary of State George C. Marshall, the United States provided more than 13 billion dollars to rebuild western Europe. The plan was a great success and laid the foundations for the European Economic Community (Common Market). The Soviets refused to allow the eastern European nations under their control to participate.
The Peace Treaties
Delegates from 21 member countries of the United Nations met in Paris on July 29, 1946, to draft treaties with Italy, Hungary, Bulgaria, Romania, and Finland. Representatives of the United States, Britain, the Soviet Union, and France signed the treaties in Paris on Feb. 10, 1947. Each treaty provided that border fortifications were to be limited to those needed to maintain internal security. Guarantees were given against racial discrimination and the rebirth of fascist governments. The Balkan treaties provided for free navigation of the Danube.
The treaty with Italy
Territorial: Loss of colonies in Africa (Eritrea, Somaliland, and Libya); final disposition to be decided by the United States, Britain, the Soviet Union, and France within a year, with the possibility of United Nations control. The port of Trieste to be internationalized under United Nations control. The city of Fiume, most of the peninsula of Venezia Giulia, the commune of Zara, and the islands of Lagosta and Pelagosa ceded to Yugoslavia; the Dodecanese Islands to Greece; and the Tenda and Briga valleys and other small frontier areas to France. Italy recognized the independence of Albania and Ethiopia.
Reparations: 360 million dollars: 100 million to the Soviet Union, 125 million to Yugoslavia, 105 million to Greece, 25 million to Ethiopia, 5 million to Albania.
Armaments: Combined strength of army, navy, air force, and police, 300,000 personnel. Allowed 200 tanks, 67,500 tons of warships, 200 fighter planes, and 150 noncombat planes; long-range artillery and aircraft carriers prohibited. Warships in excess of the 67,500-ton limitation to be distributed among the United States, Britain, the Soviet Union, and France.
The treaty with Bulgaria
Territorial: Parts of Macedonia and Thrace returned to Yugoslavia and Greece.
Reparations: 45 million dollars to Greece, 25 million to Yugoslavia.
Armaments: Army, navy, and air force limited to 65,500 personnel. Allowed 7,250 tons of warships, 70 combat planes, 20 noncombat planes.
The treaty with Hungary
Territorial: 1938 frontiers reestablished; restoration of part of Slovakia to Czechoslovakia, Ruthenia to the Soviet Union, Transylvania to Romania, and territory taken from Yugoslavia in 1941.
Reparations: 200 million dollars to the Soviet Union, 50 million to Yugoslavia, 50 million to Czechoslovakia.
Armaments: Army and air force limited to 70,000 personnel. Allowed 70 combat planes, 20 noncombat planes.
The treaty with Romania
Territorial: Southern Dobruja given to Bulgaria, northern Bucovina and Bessarabia given to the Soviet Union.
Reparations: 300 million dollars to the Soviet Union.
Armaments: Army, navy, and air force limited to 138,000 personnel. Allowed 15,000 tons of warships, 100 combat planes, and 50 noncombat planes.
The treaty with Finland
Territorial: Petsamo, Salla, and Karelia ceded to the Soviet Union; Porkkala Peninsula leased to the Soviet Union for 50 years; Aland Islands demilitarized.
Reparations: 300 million dollars to the Soviet Union.
Armaments: Army, navy, and air force limited to 41,900 personnel. Allowed 10,000 tons of warships and 60 planes.
The problem of Germany
At Potsdam in 1945 Allied leaders set up a temporary administration for Germany. The country was divided into American, British, French, and Soviet occupation zones. The American, British, and French zones together made up the western two thirds of Germany, while the Soviet zone comprised the eastern third. Control of Berlin, in the Soviet zone, was divided between the Soviet Union and the Western powers. Later, in 1961, the city would be physically partitioned by a concrete and barbed-wire wall. (See also Berlin.)
The victors were determined that Germany should not regain the industrial strength necessary for war. Wiping out all German industry, however, would have been disastrous. Western Europe depended on Germany for coal and heavy metal products. In return Germany normally bought huge quantities of foodstuffs from its neighbors.
The Allied powers also had to settle on a form of German government. The United States and Britain favored a federal type, with most matters entrusted to German states (Länder) and a federal government to deal with national matters such as currency. The Soviet Union preferred a strong central government with political parties directly represented so that Communists could dominate. France wanted a very loose federation, with international control of the Ruhr.
Representatives of the four powers convened in Moscow in 1947 to discuss treaties for Germany and Austria. Because of the postwar weakness of Britain and France, the conference was chiefly a contest between the United States and the Soviet Union.
The Soviet Union demanded 10 billion dollars in reparations from Germany in 20 years. The United States rejected this proposal on the grounds that the money could be made available only if the United States supplied an equivalent sum to support the Germans. If the reparations were to be paid without such support, it would greatly hamper Germany's economic recovery. The conference ended with no agreement. Later meetings also failed. Then, in June 1948, the Soviet Union blockaded the roads and railways to Berlin in an attempt to force the United States, Britain, and France out of the city.
The Western powers responded by supplying western Berlin with food, fuel, and medicine from outside by air. This airlift kept life going in western Berlin for 11 months, until the Soviet Union lifted the blockade in May 1949. In that same month the Western powers organized their occupation zones into a new nation called the Federal Republic of Germany, commonly known as West Germany. The Soviet Union then established its zone as the German Democratic Republic, or East Germany. When the Soviet Union continued to block a German peace treaty the United States in 1952 ratified a “peace contract” with West Germany. Germany would not be reunited as a nation until 1990. (See also Germany, “History.”)
The problems of Austria and Trieste
The postwar split between the Soviet Union and the West was also illustrated in Austria. After the war Austria was divided into four areas of occupation—American, British, French, and Soviet—with Vienna under the control of all four powers. In 1955, after repeated disagreements about terms, a peace treaty was signed in Vienna. Soviet and Allied occupation forces were withdrawn. (See also Austria.)
Another postwar trouble spot in Europe was Trieste. In 1945 the city and surrounding territory were divided into two zones—Zone A (including Trieste) was occupied by British and United States forces, Zone B by the Yugoslavs. The Italian peace treaty of 1947 established the Free Territory of Trieste under the jurisdiction of the United Nations. Conflicts between Yugoslavia and Italy over their claims to the territory continued, however. In an agreement signed by the two countries in 1954, Trieste and most of Zone A were given to Italy while Yugoslavia received Zone B and some added territory. (See also Trieste.)
Meanwhile the increased threat of Communism in Europe led to the formation of new pacts among the free nations. In 1949 ten nations of western Europe joined with the United States and Canada in establishing the North Atlantic Treaty Organization (NATO). Three years later the European Defense Community was founded. This group received NATO support. (See also Cold War; North Atlantic Treaty Organization.)
Postwar problems in Asia
In the Far East Communist military aggression created a new balance of power before the World War II peace treaty with Japan could be signed. On the Asian mainland Chinese Communist forces routed the armies of Nationalist China (1946–49). Mao Zedong, the Chinese Communist leader, proclaimed the People's Republic of China and allied it with the Soviet Union. This made China the strongest military power in the Far East. The defeated Nationalist forces, headed by Chiang Kai-shek, fled to the island of Taiwan. (See also Chiang Kai-shek; Mao Zedong; China; Taiwan.)
The successful Communist revolution in China set the pattern for Communist revolts in other Asian countries. In Malaya and the Philippines rebels waged campaigns of terrorism that lasted for a number of years. In Indochina Communist forces attacked French-supported Vietnam. They forced the French to partition Vietnam. (See also Indochina; Malaysia; Philippines; Vietnam.)
After World War II Korea was occupied in the north by the Soviet Union and in the south by the United States. When the Republic of Korea was established in the south, most of the United States forces withdrew. In 1950 Communist forces from North Korea and China invaded the new republic. Thus the increasing tensions in eastern Asia had finally exploded into warfare. (See also Korean War.)
Peace treaty with Japan
Japan struggled to rebuild itself after its crushing defeat. During this time all efforts by the Allies to frame a peace treaty were blocked by the Soviet Union's disagreements with the United States. Finally, in 1951, the United States sponsored a treaty that was endorsed by Japan and 48 other nations. The Soviet Union refused to sign. The Chinese signatory—the Nationalist government—signed in 1952. The chief provisions of the peace treaty were as follows:
Territorial: The independence of Korea was to be recognized; all claims to Taiwan, the Pescadores, the southern part of Sakhalin, and the Pacific islands that were formerly under Japanese mandate were to be surrendered.
Reparations: Because of limited economic capacity Japan was made to pay victimized nations only in goods manufactured in Japan from raw materials supplied by those nations.
Armaments: No limitations. Japan, however, agreed to abide by the antiaggression provisions contained in the charter of the United Nations. In addition, Article Nine of the new Japanese constitution prohibited all warfare except in defense. Negotiations between Japan and the Soviet Union continued until 1956, when a peace treaty was finally signed.
["Satisfaction is death of Struggle"]
[Naseer Ahmed Chandio] | http://www.cssforum.com.pk/css-optional-subjects/group-e-history-subjects/european-history/6713-world-war-ii.html | 13 |
An extinction event (also known as a mass extinction or biotic crisis) is a widespread and rapid decrease in the amount of life on earth. Such an event is identified by a sharp change in the diversity and abundance of macroscopic life. It occurs when the rate of extinction increases with respect to the rate of speciation. Because the majority of diversity and biomass on Earth is microbial, and thus difficult to measure, recorded extinction events affect the easily observed, biologically complex component of the biosphere rather than the total diversity and abundance of life.
Over 98% of documented species are now extinct, but extinction occurs at an uneven rate. Based on the fossil record, the background rate of extinctions on Earth is about two to five taxonomic families of marine invertebrates and vertebrates every million years. Marine fossils are mostly used to measure extinction rates because of their superior fossil record and stratigraphic range compared to land organisms.
Since life began on Earth, several major mass extinctions have significantly exceeded the background extinction rate. The most recent, the Cretaceous–Paleogene extinction event, which occurred approximately 66 million years ago (Ma), was a large-scale mass extinction of animal and plant species in a geologically short period of time. In the past 540 million years there have been five major events when over 50% of animal species died. Mass extinctions seem to be a Phanerozoic phenomenon, with extinction rates low before large complex organisms arose.
Estimates of the number of major mass extinctions in the last 540 million years range from as few as five to more than twenty. These differences stem from the threshold chosen for describing an extinction event as "major", and the data chosen to measure past diversity.
Major extinction events
In a landmark paper published in 1982, Jack Sepkoski and David M. Raup identified five mass extinctions. They were originally identified as outliers to a general trend of decreasing extinction rates during the Phanerozoic, but as more stringent statistical tests have been applied to the accumulating data, the "Big Five" cannot be so clearly defined, but rather appear to represent the largest (or some of the largest) of a relatively smooth continuum of extinction events.
- Cretaceous–Paleogene extinction event (End Cretaceous, K-T extinction, or K-Pg extinction): 66 Ma at the Cretaceous (Maastrichtian)–Paleogene (Danian) transition interval. The K–T event is now officially called the Cretaceous–Paleogene (or K–Pg) extinction event in place of Cretaceous-Tertiary. About 17% of all families, 50% of all genera and 75% of all species became extinct. In the seas it reduced the percentage of sessile animals to about 33%. The majority of non-avian dinosaurs became extinct during that time. The boundary event was severe with a significant amount of variability in the rate of extinction between and among different clades. Mammals and birds emerged as dominant land vertebrates in the age of new life.
- Triassic–Jurassic extinction event (End Triassic): 200 Ma at the Triassic-Jurassic transition. About 23% of all families, 48% of all genera (20% of marine families and 55% of marine genera) and 70% to 75% of all species went extinct. Most non-dinosaurian archosaurs, most therapsids, and most of the large amphibians were eliminated, leaving dinosaurs with little terrestrial competition. Non-dinosaurian archosaurs continued to dominate aquatic environments, while non-archosaurian diapsids continued to dominate marine environments. The Temnospondyl lineage of large amphibians also survived until the Cretaceous in Australia (e.g., Koolasuchus).
- Permian–Triassic extinction event (End Permian): 251 Ma at the Permian-Triassic transition. Earth's largest extinction killed 57% of all families, 83% of all genera and 90% to 96% of all species (53% of marine families, 84% of marine genera, about 96% of all marine species and an estimated 70% of land species, including insects). The evidence of plants is less clear, but new taxa became dominant after the extinction. The "Great Dying" had enormous evolutionary significance: on land, it ended the primacy of mammal-like reptiles. The recovery of vertebrates took 30 million years, but the vacant niches created the opportunity for archosaurs to become ascendant. In the seas, the percentage of animals that were sessile dropped from 67% to 50%. The whole late Permian was a difficult time for at least marine life, even before the "Great Dying".
- Late Devonian extinction: 375–360 Ma near the Devonian-Carboniferous transition. At the end of the Frasnian Age in the later part(s) of the Devonian Period, a prolonged series of extinctions eliminated about 19% of all families, 50% of all genera and 70% of all species. This extinction event lasted perhaps as long as 20 Ma, and there is evidence for a series of extinction pulses within this period.
- Ordovician–Silurian extinction event (End Ordovician or O-S): 450–440 Ma at the Ordovician-Silurian transition. Two events occurred that killed off 27% of all families, 57% of all genera and 60% to 70% of all species. Together they are ranked by many scientists as the second largest of the five major extinctions in Earth's history in terms of percentage of genera that went extinct.
Despite the popularization of these five events, there is no fine line separating them from other extinction events; indeed, using different methods of calculating an extinction's impact can lead to other events featuring in the top five.
The older the fossil record gets, the more difficult it is to read. This is because:
- Older fossils are harder to find because they are usually buried at a considerable depth in the rock.
- Dating older fossils is more difficult.
- Productive fossil beds are researched more than unproductive ones, therefore leaving certain periods unresearched.
- Prehistoric environmental disturbances can disturb the deposition process.
- The preservation of fossils varies on land; marine fossils tend to be better preserved than their much sought-after land-based counterparts.
It has been suggested that the apparent variations in marine biodiversity may actually be an artifact, with abundance estimates directly related to quantity of rock available for sampling from different time periods. However, statistical analysis shows that this can only account for 50% of the observed pattern, and other evidence (such as fungal spikes) provides reassurance that most widely accepted extinction events are indeed real. A quantification of the rock exposure of Western Europe does indicate that many of the minor events for which a biological explanation has been sought are most readily explained by sampling bias.
Lesser extinctions
Lesser extinction events include:
Evolutionary importance
Mass extinctions have sometimes accelerated the evolution of life on Earth. When dominance of particular ecological niches passes from one group of organisms to another, it is rarely because the new dominant group is "superior" to the old and usually because an extinction event eliminates the old dominant group and makes way for the new one.
For example, mammaliformes ("almost mammals") and then mammals existed throughout the reign of the dinosaurs, but could not compete for the large terrestrial vertebrate niches which dinosaurs monopolized. The end-Cretaceous mass extinction removed the non-avian dinosaurs and made it possible for mammals to expand into the large terrestrial vertebrate niches. Ironically, the dinosaurs themselves had been beneficiaries of a previous mass extinction, the end-Triassic, which eliminated most of their chief rivals, the crurotarsans.
Another point of view put forward in the Escalation hypothesis predicts that species in ecological niches with more organism-to-organism conflict will be less likely to survive extinctions. This is because the very traits that keep a species numerous and viable under fairly static conditions become a burden once population levels fall among competing organisms during the dynamics of an extinction event.
Furthermore, many groups which survive mass extinctions do not recover in numbers or diversity; many of these go into long-term decline and are often referred to as "Dead Clades Walking". So analysing extinctions in terms of "what died and what survived" often fails to tell the full story.
Patterns in frequency
It has been suggested variously that extinction events occurred periodically, every 26 to 30 million years, or that diversity fluctuates episodically every ~62 million years. Various ideas attempt to explain the supposed pattern, including the presence of a hypothetical companion star to the sun, oscillations in the galactic plane, or passage through the Milky Way's spiral arms. However, other authors have concluded the data on marine mass extinctions do not fit with the idea that mass extinctions are periodic, or that ecosystems gradually build up to a point at which a mass extinction is inevitable. Many of the proposed correlations have been argued to be spurious. Others have argued that there is strong evidence supporting periodicity in a variety of records, and additional evidence in the form of coincident periodic variation in nonbiological geochemical variables.
Mass extinctions are thought to result when a long-term stress is compounded by a short term shock. Over the course of the Phanerozoic, individual taxa appear to be less likely to become extinct at any time, which may reflect more robust food webs as well as less extinction-prone species and other factors such as continental distribution. However, even after accounting for sampling bias, there does appear to be a gradual decrease in extinction and origination rates during the Phanerozoic. This may represent the fact that groups with higher turnover rates are more likely to become extinct by chance; or it may be an artefact of taxonomy: families tend to become more speciose, therefore less prone to extinction, over time; and larger taxonomic groups (by definition) appear earlier in geological time.
It has also been suggested that the oceans have gradually become more hospitable to life over the last 500 million years, and thus less vulnerable to mass extinctions, but susceptibility to extinction at a taxonomic level does not appear to make mass extinctions more or less probable.
There is still debate about the causes of all mass extinctions. In general, large extinctions may result when a biosphere under long-term stress undergoes a short-term shock. An underlying mechanism appears to be present in the correlation of extinction and origination rates to diversity. High diversity leads to a persistent increase in extinction rate; low diversity to a persistent increase in origination rate. These presumably ecologically controlled relationships likely amplify smaller perturbations (asteroid impacts, etc.) to produce the global effects observed.
Identifying causes of particular mass extinctions
A good theory for a particular mass extinction should: (i) explain all of the losses, not just focus on a few groups (such as dinosaurs); (ii) explain why particular groups of organisms died out and why others survived; (iii) provide mechanisms which are strong enough to cause a mass extinction but not a total extinction; (iv) be based on events or processes that can be shown to have happened, not just inferred from the extinction.
It may be necessary to consider combinations of causes. For example, the marine aspect of the end-Cretaceous extinction appears to have been caused by several processes which partially overlapped in time and may have had different levels of significance in different parts of the world.
Arens and West (2006) proposed a "press / pulse" model in which mass extinctions generally require two types of cause: long-term pressure on the eco-system ("press") and a sudden catastrophe ("pulse") towards the end of the period of pressure. Their statistical analysis of marine extinction rates throughout the Phanerozoic suggested that neither long-term pressure alone nor a catastrophe alone was sufficient to cause a significant increase in the extinction rate.
Most widely supported explanations
Macleod (2001) summarized the relationship between mass extinctions and events which are most often cited as causes of mass extinctions, using data from Courtillot et al. (1996), Hallam (1992) and Grieve et al. (1996):
- Flood basalt events: 11 occurrences, all associated with significant extinctions. But Wignall (2001) concluded that only five of the major extinctions coincided with flood basalt eruptions and that the main phase of extinctions started before the eruptions.
- Sea-level falls: 12, of which seven were associated with significant extinctions.
- Asteroid impacts: one large impact is associated with a mass extinction; there have been many smaller impacts, but they are not associated with significant extinctions.
The most commonly suggested causes of mass extinctions are listed below.
Flood basalt events
The formation of large igneous provinces by flood basalt events could have:
- produced dust and particulate aerosols which inhibited photosynthesis and thus caused food chains to collapse both on land and at sea
- emitted sulfur oxides which were precipitated as acid rain and poisoned many organisms, contributing further to the collapse of food chains
- emitted carbon dioxide, possibly causing sustained global warming once the dust and particulate aerosols dissipated.
Flood basalt events occur as pulses of activity punctuated by dormant periods. As a result they are likely to cause the climate to oscillate between cooling and warming, but with an overall trend towards warming as the carbon dioxide they emit can stay in the atmosphere for hundreds of years.
Sea-level falls
These are often clearly marked by worldwide sequences of contemporaneous sediments which show all or part of a transition from sea-bed to tidal zone to beach to dry land – and where there is no evidence that the rocks in the relevant areas were raised by geological processes such as orogeny. Sea-level falls could reduce the continental shelf area (the most productive part of the oceans) sufficiently to cause a marine mass extinction, and could disrupt weather patterns enough to cause extinctions on land. But sea-level falls are very probably the result of other events, such as sustained global cooling or the sinking of the mid-ocean ridges.
A study, published in the journal Nature (online June 15, 2008) established a relationship between the speed of mass extinction events and changes in sea level and sediment. The study suggests changes in ocean environments related to sea level exert a driving influence on rates of extinction, and generally determine the composition of life in the oceans.
Impact events
The impact of a sufficiently large asteroid or comet could have caused food chains to collapse both on land and at sea by producing dust and particulate aerosols and thus inhibiting photosynthesis. Impacts on sulfur-rich rocks could have emitted sulfur oxides precipitating as poisonous acid rain, contributing further to the collapse of food chains. Such impacts could also have caused megatsunamis and / or global forest fires.
Most paleontologists now agree that an asteroid did hit the Earth about 66 Ma, but there is an ongoing dispute over whether the impact was the sole cause of the Cretaceous–Paleogene extinction event. There is evidence that there was an interval of about 300 ka from the impact to the mass extinction. In 1997, paleontologist Sankar Chatterjee drew attention to the proposed and much larger 600 km (370 mi) Shiva crater and the possibility of a multiple-impact scenario.
In 2007, a hypothesis was put forth arguing that the impactor that killed the dinosaurs about 66 million years ago belonged to the Baptistina family of asteroids. Concerns have been raised regarding the reputed link, in part because very few solid observational constraints exist for the asteroid or family. Indeed, it was discovered that 298 Baptistina does not share the same chemical signature as the source of the K–Pg (Chicxulub) impact. Although this finding may make the link between the Baptistina family and the K-T impactor more difficult to substantiate, it does not preclude the possibility.
In 2010, another hypothesis was offered which implicated the newly discovered asteroid P/2010 A2, a member of the Flora family of asteroids, as a possible remnant cohort of the K–Pg (Chicxulub) impact.
Ocean asteroid impacts
Sea surface temperatures are normally below 50°C but can easily exceed that temperature when an asteroid strikes the ocean, inducing a large thermal shock. Under those circumstances very large quantities of CO2 erupt from the ocean. As a heavy gas, the CO2 can quickly spread around the world in concentrations sufficient to suffocate air-breathing fauna, selectively at low altitudes.
Asteroid impacts with the ocean may not leave obvious signs, but these impacts have the potential to be far more devastating to life on earth than impacts with land.
Sustained and significant global cooling
Sustained global cooling could kill many polar and temperate species and force others to migrate towards the equator; reduce the area available for tropical species; often make the Earth's climate more arid on average, mainly by locking up more of the planet's water in ice and snow. The glaciation cycles of the current ice age are believed to have had only a very mild impact on biodiversity, so the mere existence of a significant cooling is not sufficient on its own to explain a mass extinction.
It has been suggested that global cooling caused or contributed to the End-Ordovician, Permian-Triassic, Late Devonian extinctions, and possibly others. Sustained global cooling is distinguished from the temporary climatic effects of flood basalt events or impacts.
Sustained and significant global warming
This would have the opposite effects: expand the area available for tropical species; kill temperate species or force them to migrate towards the poles; possibly cause severe extinctions of polar species; often make the Earth's climate wetter on average, mainly by melting ice and snow and thus increasing the volume of the water cycle. It might also cause anoxic events in the oceans (see below).
Global warming as a cause of mass extinction is supported by several recent studies.
The most dramatic example of sustained warming is the Paleocene-Eocene Thermal Maximum, which was associated with one of the smaller mass extinctions. It has also been suggested to have caused the Triassic-Jurassic extinction event, during which 20% of all marine families went extinct. Furthermore, the Permian–Triassic extinction event has been suggested to have been caused by warming.
Clathrate gun hypothesis
Clathrates are composites in which a lattice of one substance forms a cage around another. Methane clathrates (in which water molecules are the cage) form on continental shelves. These clathrates are likely to break up rapidly and release the methane if the temperature rises quickly or the pressure on them drops quickly—for example in response to sudden global warming or a sudden drop in sea level or even earthquakes. Methane is a much more powerful greenhouse gas than carbon dioxide, so a methane eruption ("clathrate gun") could cause rapid global warming or make it much more severe if the eruption was itself caused by global warming.
The most likely signature of such a methane eruption would be a sudden decrease in the ratio of carbon-13 to carbon-12 in sediments, since methane clathrates are low in carbon-13; but the change would have to be very large, as other events can also reduce the percentage of carbon-13.
It has been suggested that "clathrate gun" methane eruptions were involved in the end-Permian extinction ("the Great Dying") and in the Paleocene–Eocene Thermal Maximum, which was associated with one of the smaller mass extinctions.
Anoxic events
Anoxic events are situations in which the middle and even the upper layers of the ocean become deficient or totally lacking in oxygen. Their causes are complex and controversial, but all known instances are associated with severe and sustained global warming, mostly caused by sustained massive volcanism.
It has been suggested that anoxic events caused or contributed to the Ordovician–Silurian, late Devonian, Permian–Triassic and Triassic–Jurassic extinctions, as well as a number of lesser extinctions (such as the Ireviken, Mulde, Lau, Toarcian and Cenomanian–Turonian events). On the other hand, there are widespread black shale beds from the mid-Cretaceous which indicate anoxic events but are not associated with mass extinctions.
Hydrogen sulfide emissions from the seas
Kump, Pavlov and Arthur (2005) have proposed that during the Permian–Triassic extinction event the warming also upset the oceanic balance between photosynthesising plankton and deep-water sulfate-reducing bacteria, causing massive emissions of hydrogen sulfide which poisoned life on both land and sea and severely weakened the ozone layer, exposing much of the life that still remained to fatal levels of UV radiation.
Oceanic overturn
Oceanic overturn is a disruption of thermo-haline circulation which lets surface water (which is more saline than deep water because of evaporation) sink straight down, bringing anoxic deep water to the surface and therefore killing most of the oxygen-breathing organisms which inhabit the surface and middle depths. It may occur either at the beginning or the end of a glaciation, although an overturn at the start of a glaciation is more dangerous because the preceding warm period will have created a larger volume of anoxic water.
Unlike other oceanic catastrophes such as regressions (sea-level falls) and anoxic events, overturns do not leave easily identified "signatures" in rocks and are theoretical consequences of researchers' conclusions about other climatic and marine events.
A nearby nova, supernova or gamma ray burst
A nearby gamma ray burst (less than 6000 light years away) would be powerful enough to destroy the Earth's ozone layer, leaving organisms vulnerable to ultraviolet radiation from the sun. Gamma ray bursts are fairly rare, occurring only a few times in a given galaxy per million years. It has been suggested that a supernova or gamma ray burst caused the End-Ordovician extinction.
Plate tectonics
Movement of the continents into some configurations can cause or contribute to extinctions in several ways: by initiating or ending ice ages; by changing ocean and wind currents and thus altering climate; by opening seaways or land bridges which expose previously isolated species to competition for which they are poorly adapted (for example, the extinction of most of South America's native ungulates and all of its large metatherians after the creation of a land bridge between North and South America). Occasionally continental drift creates a super-continent which includes the vast majority of Earth's land area, which in addition to the effects listed above is likely to reduce the total area of continental shelf (the most species-rich part of the ocean) and produce a vast, arid continental interior which may have extreme seasonal variations.
Another theory is that the creation of the super-continent Pangaea contributed to the End-Permian mass extinction. Pangaea was almost fully formed at the transition from mid-Permian to late-Permian, and the "Marine genus diversity" diagram at the top of this article shows a level of extinction starting at that time which might have qualified for inclusion in the "Big Five" if it were not overshadowed by the "Great Dying" at the end of the Permian.
Other hypotheses
Many other hypotheses have been proposed, such as the spread of a new disease, or simple out-competition following an especially successful biological innovation. But all have been rejected, usually for one of the following reasons: they require events or processes for which there is no evidence; they assume mechanisms which are contrary to the available evidence; they are based on other theories which have been rejected or superseded.
Supervolcanic eruptions may also be potential causes of mass extinctions. While no extinction event in Earth's past is known to have been caused by a supervolcanic eruption, the Toba catastrophe theory holds that such an eruption may have reduced early human populations to a few thousand individuals.
Scientists have been concerned that human activities could cause more plants and animals to become extinct than at any point in the past. Along with man-made changes in climate (see above), some of these extinctions could be caused by overhunting, overfishing, invasive species, or habitat loss.
The eventual warming and expansion of the Sun, combined with the eventual decline of atmospheric carbon dioxide, could cause an even greater mass extinction, one with the potential to wipe out even microbes. Rising global temperatures caused by the expanding Sun will gradually increase the rate of weathering, which in turn removes more and more carbon dioxide from the atmosphere. When carbon dioxide levels get too low (perhaps at 50 ppm), all plant life will die out, although simpler plants such as grasses and mosses can survive much longer, until CO2 levels drop to 10 ppm. With all plants gone, atmospheric oxygen can no longer be replenished (except by algae), and it is eventually removed by chemical reactions in the atmosphere, perhaps driven by volcanic eruptions. Eventually the loss of oxygen will cause all remaining multicellular life to die out via asphyxiation, leaving behind only microbes. When the Sun becomes 10% brighter, the microbes too will die out. This is the most extreme instance of a climate-caused extinction event, and since it will only happen late in the Sun's life, it will be the final mass extinction in Earth's history.
Effects and recovery
The impact of mass extinction events varied widely. After a major extinction event, usually only weedy species survive due to their ability to live in diverse habitats. Later, species diversify and occupy empty niches. Generally, biodiversity recovers 5 to 10 million years after the extinction event. In the most severe mass extinctions it may take 15 to 30 million years.
The worst event, the Permian–Triassic extinction event, devastated life on earth and is estimated to have killed off over 90% of species. Life seemed to recover quickly after the P-Tr extinction, but this was mostly in the form of disaster taxa, such as the hardy Lystrosaurus. The most recent research indicates that the specialized animals that formed complex ecosystems, with high biodiversity, complex food webs and a variety of niches, took much longer to recover. It is thought that this long recovery was due to the successive waves of extinction which inhibited recovery, as well as to prolonged environmental stress to organisms which continued into the Early Triassic. Recent research indicates that recovery did not begin until the start of the mid-Triassic, 4 million to 6 million years after the extinction; and some writers estimate that the recovery was not complete until 30 million years after the P-Tr extinction, i.e. in the late Triassic.
The effects of mass extinctions on plants are somewhat harder to quantify, given the biases inherent in the plant fossil record. Some mass extinctions (such as the end-Permian) were equally catastrophic for plants, whereas others, such as the end-Devonian, did not affect the flora.
See also
- Nee, S. (2004). "Extinction, slime, and bottoms". PLoS Biology 2 (8): E272. doi:10.1371/journal.pbio.0020272. PMC 509315. PMID 15314670.
- Fichter, George S. (1995). Endangered animals. USA: Golden Books Publishing Company. p. 5. ISBN 1-58238-138-0.
- Butterfield, N. J. (2007). "Macroevolution and macroecology through deep time". Palaeontology 50 (1): 41–55. doi:10.1111/j.1475-4983.2006.00613.x.
- Alroy, J. (2008). "Dynamics of origination and extinction in the marine fossil record". Proceedings of the National Academy of Sciences of the United States of America 105 (Supplement 1): 11536–11542. Bibcode:2008PNAS..10511536A. doi:10.1073/pnas.0802597105. PMC 2556405. PMID 18695240.
- Macleod, N.; Rawson, P. F.; Forey, P. L.; F. T. Banner, M. K. Boudagher-Fadel, P. R. Bown, J. A. Burnett, P. Chambers, S. Culver, S. E. Evans, C. Jeffery, M. A. Kaminski, A. R. Lord, A. C. Milner, A. R. Milner, N. Morris, E. Owen, B. R. Rosen, A. B. Smith, P. D. Taylor, E. Urquhart and J. R. Young (April 1997). "The Cretaceous-Tertiary biotic transition". Journal of the Geological Society 154 (2): 265–292. doi:10.1144/gsjgs.154.2.0265.
- "extinction". Math.ucr.edu. Retrieved 2008-11-09.
- Fastovsky DE, Sheehan PM (2005). "The extinction of the dinosaurs in North America". GSA Today 15 (3): 4–10. doi:10.1130/1052-5173(2005)015<4:TEOTDI>2.0.CO;2. ISSN 1052-5173.
- Labandeira CC, Sepkoski JJ (1993). "Insect diversity in the fossil record". Science 261 (5119): 310–5. Bibcode:1993Sci...261..310L. doi:10.1126/science.11536548. PMID 11536548.
- McElwain, J.C.; Punyasena, S.W. (2007). "Mass extinction events and the plant fossil record". Trends in Ecology & Evolution 22 (10): 548–557. doi:10.1016/j.tree.2007.09.003. PMID 17919771.
- Sahney S & Benton MJ (2008). "Recovery from the most profound mass extinction of all time". Proceedings of the Royal Society: Biological 275 (1636): 759–65. doi:10.1098/rspb.2007.1370. PMC 2596898. PMID 18198148.
- McGhee, G. R.; Sheehan, P. M.; Bottjer, D. J.; Droser, M. L. (2011). "Ecological ranking of Phanerozoic biodiversity crises: The Serpukhovian (early Carboniferous) crisis had a greater ecological impact than the end-Ordovician". Geology 40 (2): 147. doi:10.1130/G32679.1.
- Sole, R.V., and Newman, M., 2002. "Extinctions and Biodiversity in the Fossil Record – Volume Two, The Earth system: biological and ecological dimensions of global environment change" pp. 297–391, Encyclopedia of Global Environmental Change John Wilely & Sons.
- Smith, A.; A. McGowan (2005). "Cyclicity in the fossil record mirrors rock outcrop area". Biology Letters 1 (4): 443–445. doi:10.1098/rsbl.2005.0345. PMC 1626379. PMID 17148228.
- Smith, Andrew B.; McGowan, Alistair J. (2007). "The shape of the Phanerozoic marine palaeodiversity curve: How much can be predicted from the sedimentary rock record of Western Europe?". Palaeontology 50 (4): 765–774. doi:10.1111/j.1475-4983.2007.00693.x.
- Benitez, Narciso; et al. (2002). "Evidence for Nearby Supernova Explosions". Phys. Rev. Lett. 88 (8): 081101. doi:10.1103/PhysRevLett.88.081101.
- Benton, M.J. (2004). "6. Reptiles Of The Triassic". Vertebrate Palaeontology. Blackwell. ISBN 0-04-566002-6.
- Van Valkenburgh, B. (1999). "Major patterns in the history of carnivorous mammals". Annual Review of Earth and Planetary Sciences 27: 463–493. Bibcode:1999AREPS..27..463V. doi:10.1146/annurev.earth.27.1.463.
- Jablonski, D. (2002). "Survival without recovery after mass extinctions". PNAS 99 (12): 8139–8144. Bibcode:2002PNAS...99.8139J. doi:10.1073/pnas.102163299. PMC 123034. PMID 12060760.
- Raup, DM; Sepkoski Jr, JJ (1984). "Periodicity of extinctions in the geologic past". Proceedings of the National Academy of Sciences of the United States of America 81 (3): 801–5. Bibcode:1984PNAS...81..801R. doi:10.1073/pnas.81.3.801. PMC 344925. PMID 6583680.
- Different cycle lengths have been proposed; e.g. by Rohde, R.; Muller, R. (2005). "Cycles in fossil diversity". Nature 434 (7030): 208–210. Bibcode:2005Natur.434..208R. doi:10.1038/nature03339. PMID 15758998.
- R. A. Muller. "Nemesis". Muller.lbl.gov. Retrieved 2007-05-19.
- Adrian L. Melott and Richard K. Bambach (2010-07-02). "Nemesis Reconsidered". Monthly Notices of the Royal Astronomical Society. Retrieved 2010-07-02.
- Gillman, M.; Erenler, H. (2008). "The galactic cycle of extinction". International Journal of Astrobiology 7. Bibcode:2008IJAsB...7...17G. doi:10.1017/S1473550408004047.
- Bailer-Jones, C. A. L. (2009). "The evidence for and against astronomical impacts on climate change and mass extinctions: a review". International Journal of Astrobiology 8 (3): 213–219. Bibcode:2009IJAsB...8..213B. doi:10.1017/S147355040999005X.
- Overholt, A. C.; Melott, A. L.; Pohl, M. (2009). "Testing the link between terrestrial climate change and galactic spiral arm transit". The Astrophysical Journal 705 (2): L101–L103. Bibcode:2009ApJ...705L.101O. doi:10.1088/0004-637X/705/2/L101.
- Melott, A.L.; Bambach, R.K. (2011). "A ubiquitous ~62-Myr periodic fluctuation superimposed on general trends in fossil biodiversity. I. Documentation". Paleobiology 37: 92–112.
- Melott, A.L. et al. (2012). "A ~60 Myr periodicity is common to marine-87Sr/86Sr, fossil biodiversity, and large-scale sedimentation: what does the periodicity reflect?". Journal of Geology 120: 217–226. arXiv:1206.1804. Bibcode:2012JG....120..217M. doi:10.1086/663877.
- Arens, N. C.; West, I. D. (2008). "Press-pulse: a general theory of mass extinction?". Paleobiology 34 (4): 456. doi:10.1666/07034.1.
- Wang, S. C.; Bush, A. M. (2008). "Adjusting global extinction rates to account for taxonomic susceptibility". Paleobiology 34 (4): 434. doi:10.1666/07060.1.
- Budd, G. E. (2003). "The Cambrian Fossil Record and the Origin of the Phyla". Integrative and Comparative Biology 43 (1): 157–165. doi:10.1093/icb/43.1.157. PMID 21680420.
- Martin, R.E. (1995). "Cyclic and secular variation in microfossil biomineralization: clues to the biogeochemical evolution of Phanerozoic oceans". Global and Planetary Change 11 (1): 1. Bibcode:1995GPC....11....1M. doi:10.1016/0921-8181(94)00011-2.
- Martin, R.E. (1996). "Secular increase in nutrient levels through the Phanerozoic: Implications for productivity, biomass, and diversity of the marine biosphere". Palaios 11 (3): 209–219. doi:10.2307/3515230. JSTOR 3515230.
- Marshall, C.R.; Ward, P.D. (1996). "Sudden and Gradual Molluscan Extinctions in the Latest Cretaceous of Western European Tethys". Science 274 (5291): 1360–1363. Bibcode:1996Sci...274.1360M. doi:10.1126/science.274.5291.1360. PMID 8910273.
- Arens, N.C. and West, I.D. (2006). "Press/Pulse: A General Theory of Mass Extinction?" GSA Conference paper, abstract.
- MacLeod, N (2001-01-06). "Extinction!".
- Courtillot, V., Jaeger, J-J., Yang, Z., Féraud, G., Hofmann, C. (1996). "The influence of continental flood basalts on mass extinctions: where do we stand?" in Ryder, G., Fastovsky, D., and Gartner, S, eds. "The Cretaceous-Tertiary event and other catastrophes in earth history". The Geological Society of America, Special Paper 307, 513–525.
- Hallam, A. (1992). Phanerozoic sea-level changes. New York: Columbia University Press. ISBN 0231074247.
- Grieve, R.; Rupert, J.; Smith, J.; Therriault, A. (1996). "The record of terrestrial impact cratering". GSA Today 5: 193–195.
- The earliest known flood basalt event is the one which produced the Siberian Traps and is associated with the end-Permian extinction.
- Some of the extinctions associated with flood basalts and sea-level falls were significantly smaller than the "major" extinctions, but still much greater than the background extinction level.
- Wignall, P.B. (2001), "Large igneous provinces and mass extinctions", Earth-Science Reviews vol. 53 issues 1–2 pp 1–33
- Speculated Causes of the End-Cretaceous Extinction
- Peters, S.E. (2008/06/15/online). "Environmental determinants of extinction selectivity in the fossil record". Nature 454 (7204): 626–9. Bibcode:2008Natur.454..626P. doi:10.1038/nature07032. PMID 18552839.
- Newswise: Ebb and Flow of the Sea Drives World's Big Extinction Events Retrieved on June 15, 2008.
- Keller G, Abramovich S, Berner Z, Adatte T (1 January 2009). "Biotic effects of the Chicxulub impact, K–T catastrophe and sea level change in Texas". Palaeogeography, Palaeoclimatology, Palaeoecology 271 (1–2): 52–68. doi:10.1016/j.palaeo.2008.09.007.
- Morgan J, Lana C, Kersley A, Coles B, Belcher C, Montanari S, Diaz-Martinez E, Barbosa A, Neumann V (2006). "Analyses of shocked quartz at the global K-P boundary indicate an origin from a single, high-angle, oblique impact at Chicxulub". Earth and Planetary Science Letters 251 (3–4): 264–279. Bibcode:2006E&PSL.251..264M. doi:10.1016/j.epsl.2006.09.009.
- Bottke W., Vokrouhlický D., Nesvorný D. (2007) An asteroid breakup 160 Myr ago as the probable source of the K/T impactor. Nature 449, 48–53
- Majaess D., Higgins D., Molnar L., Haegert M., Lane D., Turner D., Nielsen I. (2008). New Constraints on the Asteroid 298 Baptistina, the Alleged Family Member of the K/T Impactor, accepted for publication in the JRASC
- Reddy V., et al. (2008). Composition of 298 Baptistina: Implications for K-T Impactor Link, Asteroids, Comets, Meteors conference.
- Smashed asteroids may be related to dinosaur killer, Yahoo! News, Feb. 2, 2010
- "Cambridge Conference Correspondence". Retrieved 2011-01-01.
- P.J. Durrant, 2nd Edition 1952, General and Inorganic Chemistry, pp355
- "ATMOSPHERIC CARBON DIOXIDE AND MARINE INTERACTION December 2007
- Mayhew, Peter J.; Gareth B. Jenkins, Timothy G. Benton (January 7, 2008). "A long-term association between global temperature and biodiversity, origination and extinction in the fossil record". Proceedings of the Royal Society B: Biological Sciences 275 (1630): 47–53. doi:10.1098/rspb.2007.1302. PMC 2562410. PMID 17956842.
- Knoll, A. H.; Bambach, Canfield, Grotzinger (26 July 1996). "Fossil record supports evidence of impending mass extinction". Science 273 (5274): 452–457. Bibcode:1996Sci...273..452K. doi:10.1126/science.273.5274.452. PMID 8662528.
- Ward, Peter D.; Jennifer Botha, Roger Buick, Michiel O. De Kock, Douglas H. Erwin, Geoffrey H. Garrison, Joseph L. Kirschvink, Roger Smith (4 February 2005). "Abrupt and Gradual Extinction Among Late Permian Land Vertebrates in the Karoo Basin, South Africa". Science 307 (5710): 709–714. Bibcode:2005Sci...307..709W. doi:10.1126/science.1107068. PMID 15661973.
- Kiehl, Jeffrey T.; Christine A. Shields (September 2005). "Climate simulation of the latest Permian: Implications for mass extinction". Geology 33 (9): 757–760. Bibcode:2005Geo....33..757K. doi:10.1130/G21654.1.
- Hecht, J (2002-03-26). "Methane prime suspect for greatest mass extinction". New Scientist.
- Berner, R.A., and Ward, P.D. (2004). "Positive Reinforcement, H2S, and the Permo-Triassic Extinction: Comment and Reply" describes possible positive feedback loops in the catastrophic release of hydrogen sulfide proposed by Kump, Pavlov and Arthur (2005).
- Kump, L.R., Pavlov, A., and Arthur, M.A. (2005). "Massive release of hydrogen sulfide to the surface ocean and atmosphere during intervals of oceanic anoxia". Geology v. 33, p.397–400. Abstract. Summarised by Ward (2006).
- Ward, P.D. (2006). "Impact from the Deep". Scientific American October 2006.
- Wilde, P; Berry, W.B.N. (1984). "Destabilization of the oceanic density structure and its significance to marine "extinction" events". Palaeogeography, Palaeoclimatology, Palaeoecology 48 (2–4): 143–162. doi:10.1016/0031-0182(84)90041-5
- Corey S. Powell (2001-10-01). "20 Ways the World Could End". Discover Magazine. Retrieved 2011-03-29.
- Podsiadlowski, Ph. et al. (2004). "The Rates of Hypernovae and Gamma-Ray Bursts: Implications for Their Progenitors". Astrophysical Journal Letters 607: L17. arXiv:astro-ph/0403399. Bibcode:2004ApJ...607L..17P. doi:10.1086/421347.
- "Speculated Causes of the Permian Extinction". Hooper Virtual Paleontological Museum. Retrieved 16 July 2012.
- David Quammen (October 1998), "Planet of Weeds", Harper's Magazine, retrieved November 15, 2012
- Lehrmann, D.J., Ramezan, J., Bowring, S.A. et al. (December 2006). "Timing of recovery from the end-Permian extinction: Geochronologic and biostratigraphic constraints from south China". Geology 34 (12): 1053–1056. Bibcode:2006Geo....34.1053L. doi:10.1130/G22827A.1.
- Sahney, S. and Benton, M.J. (2008). "Recovery from the most profound mass extinction of all time" (PDF). Proceedings of the Royal Society: Biological 275 (1636): 759–65. doi:10.1098/rspb.2007.1370. PMC 2596898. PMID 18198148.
- Cascales-Miñana, B.; Cleal, C. J. (2011). "Plant fossil record and survival analyses". Lethaia. doi:10.1111/j.1502-3931.2011.00262.x.
- Benton, Michael J., "When Life Nearly Died—The Greatest Mass Extinction of All Time", Thames & Hudson Ltd, London, 2003 ISBN 978-0-500-28573-2
- Cowen, R. (1999). "The History of Life". Blackwell Science. The chapter about extinctions is available here
- Richard Leakey and Roger Lewin, 1996, The Sixth Extinction : Patterns of Life and the Future of Humankind, Anchor, ISBN 0-385-46809-1. Excerpt from this book: The Sixth Extinction
- Richard A. Muller, 1988, Nemesis, Weidenfeld & Nicolson, ISBN 1-55584-173-2
- Raup, D., and J. Sepkoski (1986). "Periodic extinction of families and genera". Science 231 (4740): 833–836. Bibcode:1986Sci...231..833R. doi:10.1126/science.11542060. PMID 11542060.
- Nemesis – Raup and Sepkoski
- Rohde, R.A. & Muller, R.A. (2005). "Cycles in fossil diversity". Nature 434 (7030): 209–210. Bibcode:2005Natur.434..208R. doi:10.1038/nature03339. PMID 15758998.
- Sepkoski, J.J. (1996). "Patterns of Phanerozoic extinction: a perspective from global data bases". In O.H. Walliser. Global Events and Event Stratigraphy. Berlin: Springer. pp. 35–51
- Ward, P.D., (2000) Rivers In Time: The Search for Clues to Earth's Mass Extinctions
- Ward, P.D., (2007) Under a Green Sky: Global Warming, the Mass Extinctions of the Past, and What They Can Tell Us About Our Future (2007) ISBN 978-0-06-113792-1
- White, R.V. and Saunders, A.D. (2005). "Volcanism, impact and mass extinctions: incredible or credible coincidences". Lithos 79 (3–4): 299–316. Bibcode:2005Litho..79..299W. doi:10.1016/j.lithos.2004.09.016.
- Wilson, E.O., 2002, The Future of Life, Vintage (pb), ISBN 0-679-76811-4
- Calculate the effects of an Impact
- The Current Mass Extinction Event
- Species Alliance (nonprofit organization producing a documentary about mass extinction titled "Call of Life: Facing the Mass Extinction")
- American Museum of Natural History official statement on the current mass extinction
- Interstellar Dust Cloud-induced Extinction Theory
- Extinction Level Event in short
- The Extinction Website
- Nasa's Near Earth Object Program
- Fossils Suggest Chaotic Recovery from Mass Extinction – LiveScience.com
- Sepkoski's Global Genus Database of Marine Animals – Calculate extinction rates for yourself!
- Phil Berardelli, Of Cosmic Rays and Dangerous Days at ScienceNOW, August 1, 2007.
The following course outline is excerpted from The Mechanical Universe
preview book, published by The Corporation for Community College Television.
Lesson 1: Introduction to The Mechanical Universe
Provocative questions begin the quest of The Mechanical Universe. This introductory preview enters an Aristotelian world in conflict, introduces the revolutionary ideas and heroes from Copernicus through Newton, and, like a space shuttle from past to present, links the physics of the heavens to the physics of the Earth.
Text Assignment: Chapter 1 (Both texts)
- Be able to define the units of length, time and mass.
- Know what is meant by SI units and British units.
- Know what conversion factors are and be able to use them to convert from one system of units into another.
- Be able to express large or small numbers in scientific notation.
- Know common scientific prefixes for units.
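The unit-conversion objective above can be made concrete with a short sketch. This is not part of the course materials; it is a minimal Python illustration, and the example quantity (60 mi/h) is an arbitrary choice, though the conversion factors themselves are standard values.

```python
# Convert 60 miles per hour to metres per second using chained conversion factors.
MILES_TO_KM = 1.609344      # exact, by definition of the international mile
KM_TO_M = 1000.0
HOURS_TO_S = 3600.0

speed_mph = 60.0
speed_m_per_s = speed_mph * MILES_TO_KM * KM_TO_M / HOURS_TO_S
print(f"{speed_mph} mi/h = {speed_m_per_s:.2f} m/s")   # about 26.82 m/s
```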
Lesson 2: The Law of Falling Bodies
With the conventional wisdom of the Aristotelian world view, almost everyone could see that heavy bodies fell faster than lighter ones. Then along came Galileo. His genius deduced that the distance a body has fallen at any instant is proportional to the square of the time spent falling. From that, speed and acceleration follow with the help of a mathematical tool called a derivative.
Text Assignment: Chapter 2 (Both texts)
- Know the definitions of average speed, average acceleration, speed, and acceleration.
- Recognize that the distance all bodies fall in a vacuum is proportional to the square of the time.
- Recognize that the speed of all falling bodies in a vacuum is proportional to time.
- Recognize that all bodies in a vacuum fall with the same constant acceleration.
- Identify the significant aspects of the historical environment which gave rise to the development of the law of falling bodies.
- Be able to use the following algebraic expressions to solve problems describing the motion of bodies in free fall
- (a) s = (1/2)gt², (b) v = gt, (c) a = g (see the sketch after this list).
- Understand how the derivative is a limiting process.
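As a quick numerical companion to the expressions in the objectives (not part of the course text), the following Python sketch tabulates s = (1/2)gt² and v = gt for a body falling from rest; g = 9.8 m/s² is the usual approximate value.

```python
# Free fall from rest (air resistance ignored); g is taken as 9.8 m/s^2.
g = 9.8          # m/s^2

def distance_fallen(t):
    """s = (1/2) g t^2 for a body released from rest."""
    return 0.5 * g * t**2

def speed(t):
    """v = g t for a body released from rest."""
    return g * t

for t in (1.0, 2.0, 3.0):
    print(f"t = {t:.0f} s: s = {distance_fallen(t):6.1f} m, v = {speed(t):5.1f} m/s")
# Doubling t quadruples s but only doubles v, as the objectives state.
```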
Lesson 3: Derivatives
The function of mathematics in physical science. From a theoretical concept to a practical tool, the derivative helps to determine the instantaneous speed and acceleration of a falling body. Differentiation is developed further to calculate how any quantity changes in relation to another. The power rule, the product rule, the chain rule -- with a few simple rules, differentiating any function becomes a simple mechanical task.
Text Assignment: Chapter 3 (Both texts)
- Know the definition of a derivative.
- Know the relationship between tangent lines and derivatives.
- Be able to calculate simple derivatives using the rules of differentiation.
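The idea that the derivative is a limiting process can be seen numerically. The sketch below is an illustration of my own, not from the text: it shrinks h in the difference quotient of f(t) = t³ and compares the result with the power-rule answer 3t².

```python
# The derivative as a limit: the difference quotient of f(t) = t**3
# approaches the power-rule result f'(t) = 3*t**2 as h shrinks.
def f(t):
    return t**3

t0 = 2.0
exact = 3 * t0**2          # power rule: d/dt t^3 = 3 t^2
for h in (1.0, 0.1, 0.01, 0.001):
    quotient = (f(t0 + h) - f(t0)) / h
    print(f"h = {h:6.3f}: difference quotient = {quotient:.5f} (exact {exact})")
```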
Lesson 4: Inertia
The rise of Galileo and his fall from grace. Copernicus conjectured that the Earth spins on its axis and orbits around the sun. Considering its implications, a rather dangerous assumption that prompted rather risky questions: Why do objects fall to Earth rather than hurtle off into space? And in this heretical scheme of things in which the Earth wasn't at the center, where was God? Risking more than his favored status in Rome, Galileo helped to answer such questions with the law of inertia.
Text Assignment: Chapter 4 (Both texts)
- Be able to state the law of inertia.
- Explain situations where the law of inertia applies.
- Distinguish between Aristotle's and Galileo's descriptions of natural motion.
- Recognize that the descriptions of motion are not the same when viewed from different frames of reference.
- Recognize that parabolic trajectories result from constant speed in the horizontal direction and constant acceleration in the vertical direction.
- Appraise the historical significance and universality of Galileo's law of inertia.
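To illustrate the objective on parabolic trajectories (a sketch of mine, not course material), the following Python snippet combines constant horizontal speed with constant vertical acceleration; the chosen values of g and vx are arbitrary.

```python
# A parabolic trajectory from Galileo's decomposition: constant speed
# horizontally, constant acceleration vertically.
g  = 9.8      # m/s^2
vx = 10.0     # constant horizontal speed, m/s

for t in (0.0, 0.5, 1.0, 1.5, 2.0):
    x = vx * t                 # uniform motion
    y = -0.5 * g * t**2        # free fall, measured downward from the start
    print(f"t = {t:3.1f} s: x = {x:5.1f} m, y = {y:6.1f} m")
# Eliminating t gives y = -(g / (2*vx**2)) * x**2, a parabola.
```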
Lesson 5: Vectors
Physics must explain not only why and how much, but also where and which way. Physicists and mathematicians invented a way of describing quantities that have direction as well as magnitude. Laws that deal with such phenomena as distance and speed are universal. And vectors, which describe quantities such as displacement and velocity, universally express the laws of physics in a way that is the same for all coordinate systems.
Text Assignment: Chapter 5 (Both texts)
- Be able to add and subtract vectors using the parallelogram law of addition.
- Be able to obtain the rectangular components of vectors and use them in addition and subtraction.
- Be able to calculate the dot product of two vectors.
- Be able to form the cross product of two vectors.
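A minimal Python sketch of the dot- and cross-product objectives, written out component by component rather than with a library; the two example vectors are arbitrary.

```python
# Dot and cross products of two 3-component vectors, written out explicitly.
def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

u = (3.0, 4.0, 0.0)
v = (1.0, 0.0, 2.0)
print("u . v =", dot(u, v))        # 3.0
print("u x v =", cross(u, v))      # (8.0, -6.0, -4.0)
```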
Lesson 6: Newton's Laws
For all the phenomena of The Mechanical Universe, Isaac Newton laid down the laws. A refinement on Galileo's law of inertia, Newton's first law states that every body remains at rest or continues in uniform motion unless an unbalanced force acts on it. His second law, the most profound statement in classical mechanics, relates the causes to the changes of motion in every object in the cosmos. Newton's third law explains the phenomenon of interactions: for every action, there's an equal and opposite reaction.
Text Assignment: Chapter 6 (Both texts)
- Be able to discuss the definitions of force and mass and to state Newton's laws of motion.
- Be able to distinguish between mass and weight.
- Be familiar with the following units and know how they are defined: kilogram, newton, dyne, pound, slug.
- Know that forces always occur in action-reaction pairs and act on different bodies, so that they never can act as balancing forces for a body.
- Understand that the applicability of Newton's second law arises from it being a differential equation.
- Recognize that projectile motion is a consequence of Newton's laws.
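Because Newton's second law is a differential equation, it can be stepped forward numerically. The sketch below is an illustration, not part of the course: a simple Euler scheme for a projectile acted on only by gravity, with arbitrary initial velocity components.

```python
# Newton's second law as a differential equation: step a projectile forward
# in time with a simple Euler scheme. Only gravity acts (no air resistance).
g = 9.8                   # m/s^2
dt = 0.001                # time step, s
x, y = 0.0, 0.0           # position, m
vx, vy = 20.0, 15.0       # initial velocity components, m/s

while y >= 0.0:
    ax, ay = 0.0, -g      # F/m: no horizontal force, constant downward gravity
    vx += ax * dt
    vy += ay * dt
    x += vx * dt
    y += vy * dt

print(f"range = {x:.1f} m")   # analytic value 2*vx*vy/g is about 61.2 m
```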
Lesson 7: Integration
Newton and Leibniz sprint for the calculus. Winning the longest race in scientific history -- more than 2000 years, from the Golden Age of Greece to the end of the seventeenth century in Europe -- Newton and Leibniz arrived at the conclusion that differentiation and integration are inverse processes. Their exciting intellectual discovery, dramatically rerun to reflect the times, ended in an extremely controversial dead heat.
Text Assignment: Chapter 7 (Advanced text -- Chapter 3)
- Know the definition of antidifferentiation.
- Understand the relationship between antidifferentiation and quadrature.
- Be able to state the Second Fundamental Theorem of Calculus.
- Know how to apply the Second Fundamental Theorem of Calculus to physics problems.
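One way to see the Second Fundamental Theorem at work is to integrate a known derivative numerically and compare with its antiderivative. This is an illustration of my own in Python, using the free-fall speed v(t) = gt from Lesson 2.

```python
# Second Fundamental Theorem in miniature: integrating v(t) = g*t from 0 to T
# by the trapezoid rule recovers the antiderivative s(T) = (1/2) g T^2.
g = 9.8
T = 3.0
n = 1000
dt = T / n

s = 0.0
for i in range(n):
    t0, t1 = i * dt, (i + 1) * dt
    s += 0.5 * (g * t0 + g * t1) * dt   # trapezoid on each subinterval

print(f"numerical integral = {s:.3f} m, exact = {0.5 * g * T**2:.3f} m")
```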
Lesson 8: The Apple and the Moon
The first authentic steps toward outer space. Seeking an explanation for Kepler's laws, Newton discovered that gravity described the force between any two particles in the universe. From an English orchard to Cape Canaveral and beyond, Newton's universal law of gravity reveals why an apple but not the moon falls to the Earth.
Text Assignment: Chapter 8 (Advanced text -- Chapter 7)
- Recognize that a gravitational force exists between any two objects and that the force is directly proportional to the product of the masses and inversely proportional to the square of the distance between them.
- Understand the functional dependence of the gravitational force on mass and distance.
- Use the following expressions to solve problems: F = GMm/r², F = ma, a = GM/r² (see the sketch after this list).
- Recognize that, for small enough velocities, the time for a projectile to fall to earth is independent of its horizontal velocity, but for very large horizontal velocities, the effect of the earth's curvature must be taken into consideration.
- Describe orbital motion in terms of the law of universal gravitation and inertia.
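Newton's apple-and-moon argument can be checked numerically: at roughly 60 Earth radii the inverse-square law predicts an acceleration of g/60², which should equal the Moon's centripetal acceleration 4π²r/T². The sketch below is an illustration, not course material; the rounded values of the Earth's radius and the sidereal month are standard approximations.

```python
# The apple and the Moon: gravity at the Moon's distance (about 60 Earth radii)
# should be g/60^2, and should match the Moon's centripetal acceleration.
import math

g = 9.8                     # m/s^2 at the Earth's surface
R_earth = 6.37e6            # m
r_moon = 60.0 * R_earth     # Moon's orbital radius, roughly 60 Earth radii
T = 27.3 * 24 * 3600        # sidereal month in seconds

a_gravity = g / 60.0**2                        # inverse-square prediction
a_centripetal = 4 * math.pi**2 * r_moon / T**2

print(f"g/60^2       = {a_gravity:.5f} m/s^2")
print(f"4*pi^2*r/T^2 = {a_centripetal:.5f} m/s^2")
# Both come out near 0.0027 m/s^2, as Newton found.
```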
Lesson 9: Moving in Circles
The original Platonic ideal, with derivatives of vector functions. According to Plato, stars are heavenly beings that orbit the Earth with uniform perfection -- uniform speed and perfect circles. Even in this imperfect world, uniform circular motion makes perfect mathematical sense.
Text Assignment: Chapter 9 (Advanced text -- Chapters 5 and 7)
- Understand the meaning of uniform circular motion.
- Describe the vector relationships between the radius, velocity, and acceleration in uniform circular motion.
- Be able to use the expressions a = v²/r = ω²r = 4π²r/T² in problems involving circular motion (see the sketch after this list).
- Be able to use Newton's laws to describe the dynamics of circular motion and to solve problems involving objects moving in circular paths.
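A small Python check of the circular-motion expressions above (not from the course text); the radius and period are arbitrary example values.

```python
# Uniform circular motion: the same centripetal acceleration computed two ways,
# a = v^2/r and a = 4*pi^2*r/T^2, for an assumed 2.0 m radius and 1.5 s period.
import math

r = 2.0                      # m
T = 1.5                      # s
v = 2 * math.pi * r / T      # speed around the circle

a_from_speed  = v**2 / r
a_from_period = 4 * math.pi**2 * r / T**2
print(f"v = {v:.2f} m/s, a = {a_from_speed:.2f} m/s^2 = {a_from_period:.2f} m/s^2")
```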
Lesson 10: The Fundamental Forces
All physical phenomena of nature are explained by four forces. Two nuclear forces -- strong and weak -- dwell within the atomic nucleus. The fundamental force of gravity ranges across the universe at large. So does electricity, the fourth fundamental force, which binds the atoms of all matter.
Text Assignment: Chapter 10 (Advanced text -- Chapter 8)
- Be able to identify which fundamental forces are responsible for a given common force.
- Describe the method Cavendish used to determine the universal gravitational constant G.
- Compare and contrast gravitational and electrical forces.
- Recognize that all contact forces arise from the electrical force acting in complicated ways.
- Be able to apply Newton's laws to solve problems involving pulleys and inclined planes.
- Know that the maximum static friction force and the kinetic friction force are proportional to the normal forces between the surfaces involved.
- Be able to apply Newton's laws in circular-motion problems.
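To make the gravity-versus-electricity comparison concrete, the sketch below evaluates both forces between an electron and a proton separated by the Bohr radius. It is an illustration of mine using standard constants, not an exercise from the text.

```python
# Comparing the gravitational and electrical attraction between the electron
# and proton in a hydrogen atom (standard constants, Bohr-radius separation).
G   = 6.674e-11       # N m^2 / kg^2
k   = 8.988e9         # N m^2 / C^2
e   = 1.602e-19       # C
m_e = 9.109e-31       # kg
m_p = 1.673e-27       # kg
r   = 5.29e-11        # m (Bohr radius)

F_grav = G * m_e * m_p / r**2
F_elec = k * e**2 / r**2
print(f"F_electric / F_gravity = {F_elec / F_grav:.2e}")   # about 2 x 10^39
```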
Lesson 11: Gravity, Electricity, Magnetism
Forces at play in the Physics Theater. The gravitational force between two masses, the electric force between two charges, and the magnetic force between two magnetic poles -- all these forces take essentially the same mathematical form. Newton's script suggested connections between electricity and magnetism. Acting on scientific hunches, Maxwell saw the matter in an entirely new light.
Text Assignment: Chapter 11 (Advanced text -- Chapter 8)
- State one connection between electricity and magnetism.
- Give examples of the concept of "field."
- State some similarities and differences between the force of gravity and electricity.
- Explain how the speed of light is "buried" in the forces of electricity and magnetism.
Lesson 12: The Millikan Experiment
How does science progress? Through painstaking trial and error, illustrated with a dramatic re-creation of Robert Millikan's classic oil-drop experiment. By understanding the electric force on a charged droplet and the effects of viscosity, Millikan measured the charge of a single electron.
Text Assignment: Chapter 12 (Advanced text -- Chapter 8)
- Be able to describe Millikan's method for measuring the charge of an electron.
- Be able to solve problems with viscous forces.
- Recognize that all charge is a multiple of the fundamental unit of charge, which is the charge of the electron.
Lesson 13: Conservation of Energy
The myth of the energy crisis. According to one of the major laws of physics, energy is neither created nor destroyed.
Text Assignment: Chapter 13 (Advanced text - Chapter 10)
- Know the definitions of work, kinetic energy, and potential energy.
- Understand the relationship between work and energy.
- Be able to work problems using conservation of energy.
Lesson 14: Potential Energy
The nature of stability. Potential energy provides a clue, and a powerful model, for understanding why the world has worked the same way since the beginning of time.
Text Assignment: Chapter 14 (Advanced text -- Chapter 10)
- Be able to calculate the potential-energy function associated with a given conservative force.
- Be able to find the force F(x) from the potential-energy function U(x).
- Be able to locate equilibrium points and discuss their stability from a graph of the potential-energy function U(x).
- Be able to use the gravitational potential-energy and conservation of energy to solve the problems of escape velocity.
Lesson 15: Conservation of Momentum
If The Mechanical Universe is a perpetual clock, what keeps it ticking away till the end of time? Taking a cue from Descartes, momentum -- the product of mass and velocity -- is always conserved. Newton's laws embody the concept of conservation of momentum. This law provides a powerful principle for analyzing collisions, even at the local pool hall.
Text Assignment: Chapter 19 (Advanced text -- Chapter 11)
- Recognize conservation of momentum as a consequence of Newton's Second Law.
- Know when the momentum of a system is conserved.
- Recognize the connection between kinetic energy and momentum.
- Be able to solve problems involving elastic and inelastic collisions.
- Know the relationship between impulse and time average of force.
Lesson 16: Harmonic Motion
The music and mathematics of nature. The restoring force and inertia of any stable mechanical system cause objects to execute simple harmonic motion, a phenomenon that repeats itself in perfect time.
Text Assignment: Chapter 20 (Advanced Text -- Chapter 12)
- Know the general characteristics of simple harmonic motion, including the important property that the acceleration is proportional to the displacement and in the opposite direction.
- Know the relationship between simple harmonic motion and circular motion.
- Be able to work problems with objects on horizontal or vertical springs.
- Know the conditions under which the motion of a simple or physical pendulum is simple harmonic and be able to find the period of the motion (see the sketch after this list).
- Understand the connection between simple harmonic motion and energy conservation.
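A minimal sketch of the small-angle pendulum period and the mass-on-a-spring period; the length, mass, and spring constant below are assumed example values:

```python
# Sketch: T = 2 pi sqrt(L/g) for a simple pendulum (small angles) and
# T = 2 pi sqrt(m/k) for a mass on a spring.
import math

g = 9.81  # free-fall acceleration, m/s^2

def pendulum_period(length):
    """Small-angle period of a simple pendulum of the given length (m)."""
    return 2 * math.pi * math.sqrt(length / g)

def spring_period(mass, k):
    """Period of a mass (kg) oscillating on a spring of stiffness k (N/m)."""
    return 2 * math.pi * math.sqrt(mass / k)

print(pendulum_period(1.0))    # about 2.0 s for a 1 m pendulum
print(spring_period(0.5, 20))  # about 1.0 s for the assumed mass and spring
```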
Lesson 17: Resonance
The music and mathematics of nature, Part II. As Galileo noted, the swings of a pendulum increasingly grow with repeated, timed applications of a small force. When the frequency of an applied force matches the natural frequency of a system, large-amplitude oscillations result in the phenomenon of resonance. Resonance explains why a swaying bridge collapsed in a mild wind, and how a wineglass can be shattered by a human voice.
Text Assignment: Chapter 21 (Advanced Text -- Chapter 12)
- Be able to define forced oscillations.
- Be able to explain resonance and give a few examples.
- Understand the relationship between resonance and forced oscillatory motion.
Lesson 18: Waves
The medium disturbances of nature. With an analysis of simple harmonic motion and a stroke of genius, Newton extended mechanics to the propagation of sound.
Text Assignment: Chapter 22 (Advanced text -- Chapter 12)
- Be able to understand the difference between transverse waves and longitudinal waves.
- Be able to state the relationships between the speed, period, frequency, wavelength, and angular frequency for a harmonic wave (see the sketch after this list).
- Know the dependence of wave speed on wavelength for deep and shallow water waves.
- Understand why Newton wasn't satisfied with his calculation of the speed of sound.
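As a quick illustration of those relationships (the wavelength and frequency below are assumed, not taken from the text):

```python
# Sketch: v = lambda * f, T = 1/f, omega = 2 pi f for a harmonic wave.
import math

wavelength = 0.75  # m (assumed)
frequency = 440.0  # Hz (assumed)

speed = wavelength * frequency               # wave speed, m/s (330 m/s here)
period = 1.0 / frequency                     # s
angular_frequency = 2 * math.pi * frequency  # rad/s

print(speed, period, angular_frequency)
```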
Lesson 19: Angular Momentum
An old momentum with a new twist. Kepler's second law of planetary motion, which is rooted here in a much deeper principle, states that a line from the sun to a planet sweeps out equal areas in equal times. Angular momentum is a twist on momentum -- the cross product of the radius vector and momentum. A force with twist is torque. When no torque acts on a system, the angular momentum of the system is conserved.
Text Assignment: Chapter 23 (Advanced Text -- Chapter 13)
- Know the definitions of torque and angular momentum.
- Know how to write the angular momentum of a system and a particle.
- Understand the connection between Kepler's second law and the law of conservation of angular momentum.
- Recognize the role of conservation of angular momentum in the formation of vortices and firestorms.
Lesson 20: Torques and Gyroscopes
Why a spinning top doesn't topple. When a torque acts on a spinning object, the angular momentum changes, but the object only precesses. The object may be a child's toy, or a part of a navigation system, or Earth itself.
Text Assignment: Chapter 24 (Advanced Text -- Chapter 15)
- Be able to explain why a spinning gyroscope precesses instead of falling over.
- Be able to explain how to make a gyroscope with a very small rate of precession.
- Explain how the Earth acts like a gyroscope.
Lesson 21: Kepler's Three Laws
The wandering mathematician. Kepler's three laws described the motion of heavenly bodies with unprecedented accuracy. However, the planets still moved in paths traced by the ancient Greek mathematicians -- the conic section called an ellipse.
Text Assignment: Chapter 25 (Advanced Text -- Chapter 16)
- Understand the historical significance of Kepler's laws.
- Be able to precisely state Kepler's laws.
- Understand the relationship between conic sections and Kepler's laws.
- Know the definition of eccentricity and the formula for a conic section in polar coordinates.
Lesson 22: The Kepler Problem
The combination of Newton's law of gravity and F = ma. The task of deducing all three of Kepler's laws from Newton's universal law of gravitation is known as the Kepler problem. Its solution is one of the crowning achievements of Western thought.
Text Assignment: Chapter 26 (Advanced Text -- Chapter 17)
- Be able to determine velocity in polar coordinates.
- Be able to state the formula for angular momentum in polar coordinates.
- Be able to state the Kepler problem in words.
- Understand how Newton's Laws give a solution to the Kepler problem.
Lesson 23: Energy and Eccentricity
The precise orbit of any heavenly body -- a planet, asteroid, or comet -- is fixed by the laws of conservation of energy and angular momentum. The eccentricity, which determines the shape of an orbit, is intimately linked to the energy and angular momentum of the heavenly body.
Text Assignment: Chapter 27 (Advanced Text -- Chapter 17)
- Understand the relationship between energy and eccentricity.
- Be able to characterize orbits by eccentricity.
- Be able to understand the concept of effective potential and how it relates to planetary motion.
- Understand how initial conditions affect the orbit of a planet, comet or satellite.
Lesson 24: Navigating in Space
Getting from here to there. Voyages to other planets require enormous expenditures of energy. However, the amount of energy expended can be minimized by using the same principles that guide planets around the solar system.
Text Assignment: Chapter 28 (Advanced Text -- Chapter 18)
- Explain how the force of gravity is used in interplanetary travel.
- Discuss the relationship of launch opportunities to planet orbit geometry.
- Distinguish between launch windows to inner and outer planets.
- Be able to calculate periods and velocities for transfer orbits between planets.
- Justify the use of transfer orbits.
- Describe the influence of gravity boosts on a satellite and on the boosting planet.
Lesson 25: From Kepler to Einstein
The orbiting planets, the ebbing and flowing of tides, the falling body as it accelerates -- these phenomena are consequences of the law of gravity. Why that's so leads to Einstein's general theory of relativity, and into the black hole, but not back out again.
Text Assignment: Chapter 29 (Advanced Text -- Chapter 17)
- Understand the derivation of Kepler's third law in planetary calculations.
- Understand the significance of the center of mass of the earth-sun system.
- Understand the causes of tidal motion.
- Understand the difference between inertial mass and gravitational mass.
- Understand qualitatively the concept of the black hole.
Lesson 26: Harmony of the Spheres
The music of the spheres.
Text Assignment: Chapter 30 (No corresponding chapter in the Advanced text)
- Be able to give a brief historical account of the Kepler problem.
- Understand the differences between Aristotle's, Galileo's, Kepler's, and Newton's physics.
- Be able to explain why they call mathematics the language of physics.
- Know the significance of conservation principles.
- Explain why some would say that mechanics is the basis of all western knowledge.
Beyond The Mechanical Universe
Lesson 27: Beyond The Mechanical Universe
Provocative questions begin the quest of Beyond The Mechanical Universe. This introductory preview enters the world of Electricity and Magnetism and goes on to the 20th-century discoveries of Relativity and Quantum Mechanics. The brilliant ideas of Faraday, Ampere, Maxwell, Einstein, Schrödinger, and Heisenberg add to The Mechanical Universe of Newton.
Text Assignment: Chapter 31
Lesson 28: Static Electricity
To understand materials, one must first understand electricity, and to understand electricity, one must first understand materials. Eighteenth century electricians understood neither, but they knew what it took to spark the interest of an audience and put on an electrifying show. Coulomb's law and the principles of static electricity.
Text Assignment: Chapter 32
- Be able to recognize and discuss electrical phenomena.
- Be able to explain charging by friction, induction and contact.
- Be able to state Coulomb's Law and use it to find the force exerted on one point charge by another (see the sketch after this list).
- Know the difference between an insulator and a conductor.
- Explain ACR: attraction, contact, and repulsion.
- Be able to explain the principles of an electrostatic generating machine.
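A minimal sketch of Coulomb's law; the charges and separation are assumed example values:

```python
# Sketch: F = k * q1 * q2 / r^2 for two point charges.
K = 8.988e9  # Coulomb constant, N m^2 / C^2

def coulomb_force(q1, q2, r):
    """Magnitude of the electrostatic force between two point charges."""
    return K * abs(q1 * q2) / r**2

# Two 1-microcoulomb charges 10 cm apart feel a force of about 0.9 N.
print(coulomb_force(1e-6, 1e-6, 0.10))
```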
Lesson 29: The Electric Field
Michael Faraday's vision of lines of constant force in space laid the foundation for the modern idea of the field of force. Electric fields of static charges; Gauss' law and the conservation of flux.
Text Assignment: Chapter 33
- Be able to draw lines of force for simple charge systems and to obtain information about the direction and strength of an electric field from such a diagram.
- Know how to calculate the electric field for point charges and simple continuous distributions of charge.
- Know the definition of flux and the 1/r² law.
- Be able to state Gauss' law and use it to find the electric field produced by various symmetrical charge distributions.
- Know that a spherically symmetric shell charge distribution produces zero electric field inside the shell and produces a field outside the shell the same as that of a point charge at the center of the shell.
- Be able to explain why the electric field inside a conductor is zero.
Lesson 30: Potential and Capacitance
Benjamin Franklin, the great 18th-century American scientist, who later dabbled in politics, was the first to propose a successful theory of the Leyden Jar. He gave positive and negative charges their names, and invented the parallel plate capacitor. Electrical potential, the potential of charged conductors, equipotentials and capacitance.
Text Assignment: Chapter 34
- Be able to sketch the equipotential surfaces given the electric field in a region.
- Be able to distinguish clearly between electric potential and electric potential energy.
- Know the definition of capacitance and be able to calculate the capacitance for a parallel plate capacitor.
- Be able to state the definition for the energy density for an electric field and discuss the concept of electrostatic field energy.
- Be able to calculate the electrostatic potential energy for a system of point charges.
- Be able to calculate the electric potential for various charge distributions.
Lesson 31: Voltage, Energy and Force
In a world of electric charges and currents, fields, forces and voltages, what really matters? When is electricity dangerous or benign, spectacular or useful? The electric potential and its gradient; the potentials of atoms and metals; electric energy, and why sparks jump.
Text Assignment: Chapter 35
- Know the definition of a gradient.
- Be able to state the graphical relationship between electric field lines and equipotentials.
- Be able to state the approximate magnitudes of voltages and forces in matter.
- Know the explanation of how a lightning rod works.
- Be able to give the definition of the electron volt energy unit and the conversion between it and the joule.
- Be able to explain why sparks jump.
Lesson 32: The Electric Battery
Electricity changed from a curiosity to a central concern of science and technology in 1800, when Alessandro Volta invented the electric battery. Batteries make use of the internal properties of different metals to turn chemical energy directly into electric energy.
Text Assignment: Chapter 36
- Be able to understand the internal and external potentials of metals.
- Be able to explain the internal workings of an electric battery.
Lesson 33: Electric Circuits
Design and analysis of currents flowing in series and parallel circuits of resistors and capacitors depend not only on the celebrated laws of Ohm and Kirchhoff, but also on the less celebrated work of Charles Wheatstone.
Text Assignment: Chapter 37
- Be able to state the definitions of current and current density.
- Be able to state Ohm's law and distinguish between it and the definition of resistance.
- Be able to give the general relationship between potential difference, current, and power.
- Know the definitions of parallel and series circuit elements.
- Be able to apply Kirchhoff's rules and use them to analyze various simple dc circuits.
- Be able to find the time constant for an RC circuit and describe both the charge on the capacitor and the current as a function of time for charging and discharging a capacitor.
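A minimal sketch of the RC time constant and the charging and discharging expressions; the component values are assumed, not taken from the text:

```python
# Sketch: tau = R*C; charging q(t) = C*V*(1 - e^(-t/tau));
# discharging current I(t) = (V/R) * e^(-t/tau).
import math

R = 10e3    # resistance, ohms (assumed)
C = 100e-6  # capacitance, farads (assumed)
V = 9.0     # battery voltage, volts (assumed)

tau = R * C  # time constant, s (1.0 s for these values)

def charge(t):
    """Charge on the capacitor at time t while charging from zero."""
    return C * V * (1 - math.exp(-t / tau))

def discharge_current(t):
    """Current at time t while discharging from full charge."""
    return (V / R) * math.exp(-t / tau)

# After one time constant the capacitor holds about 63% of its final charge,
# and the discharge current has fallen to about 37% of its initial value.
print(charge(tau) / (C * V), discharge_current(tau) / (V / R))
```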
Lesson 34: Magnets
William Gilbert, personal physician by appointment to her Majesty Queen Elizabeth I of England, discovered that the earth behaves like a giant magnet. Magnetism as a natural phenomenon, the behavior of magnetic materials, and the motion of charged particles in a magnetic field.
Text Assignment: Chapter 38
- Be able to calculate the magnetic force on a current element and on a moving charge in a given magnetic field.
- Know the definition of torque and potential energy for a magnetic dipole.
- Be able to explain the concept of domains in ferromagnetic materials.
- Be able to use the definition of magnetic flux and discuss the significance of the result that the net magnetic flux out of a closed surface is zero.
- Be able to calculate the magnetic moment of a current loop and the torque exerted on a current loop in a magnetic field.
- Be able to discuss the magnetism of the Earth.
Lesson 35: Magnetic Fields
All magnetic fields can be thought to be produced by electric currents. The relationship between a current and the magnetic field it produces is a little peculiar geometrically, and takes some getting used to. The law of Biot and Savart, the force between electric currents, and Ampere's law.
Text Assignment: Chapter 39
- Be able to state the law of Biot-Savart and use it to calculate the magnetic field due to a straight current-carrying wire and on the axis of a circular current loop.
- Know the definition of Ampere's law and be able to discuss its uses and limitations.
- Be able to calculate the forces between currents.
- Be able to state the various units of magnetic field strength.
- Be able to show that the magnetic field cannot do work.
Lesson 36: Vector Fields and Hydrodynamics
At first glance, replacing the old idea of action at a distance by the new idea of the field of force seems to be an exercise in semantics. But it isn't, because fields have definite properties of their own suitable for scientific study. For example, electric fields are different in form from magnetic fields, and both kinds can better be understood by analogy to fields of fluid flow.
Text Assignment: Chapter 40
- Know the definitions of flux and circulation.
- Be able to relate flux and circulation to the electric, magnetic and velocity flow fields.
- Be able to explain the difference between source and stirring fields.
- Be able to discuss analogies in energy and forces for vector fields.
Lesson 37: Electromagnetic Induction
After Oersted's 1820 discovery that electric currents create magnetism, it was obvious that in some way magnetism should be able to create electric currents. The discovery of electromagnetic induction, in 1831, by Michael Faraday and Joseph Henry was one of the most important of the 19th century, not only scientifically, but also technologically, because it is the means by which nearly all electric power is generated today.
Text Assignment: Chapter 41
- Be able to state Faraday's law and use it to find the emf induced by a changing magnetic flux.
- Be able to state Lenz's law and use it to find the direction of the induced current in various applications of Faraday's law.
- Be able to state the definitions of self inductance and mutual inductance.
- Be able to state the expression for the energy stored in a magnetic field and the magnetic energy density.
- Be able to apply Kirchhoff's laws to obtain the differential equation for an LR circuit and be able to discuss the behavior of the solution.
Lesson 38: Alternating Current
Electromagnetic induction makes it easy and natural to generate alternating current. Use of transformers makes it practical to distribute ac over long distances. Although Nikola Tesla understood all this, Thomas Edison chose not to, and thereby hangs a tale. Alternating current circuits obey a differential equation identical to the harmonic oscillator resonance equation.
Text Assignment: Chapter 42
- Be able to state the definition of rms current and relate it to the maximum current in an ac circuit.
- Know the phase relationships between voltages and currents for elements of an LRC circuit.
- Be able to discuss the relationship between an LRC circuit and a harmonic oscillator.
- Be able to describe a step-up and a step-down transformer.
- Be able to discuss the relationship between power transmission and voltage.
- Be able to state the resonance condition for an LRC circuit and to sketch the power versus angular frequency.
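The sketch below uses assumed component values (not taken from the text) to illustrate the rms relationship and the resonance condition for a series LRC circuit:

```python
# Sketch: V_rms = V_max / sqrt(2); resonance at omega_0 = 1/sqrt(LC);
# average power P = I_rms^2 * R peaks at resonance.
import math

R = 50.0       # ohms (assumed)
L = 0.20       # henries (assumed)
C = 5.0e-6     # farads (assumed)
V_max = 170.0  # peak source voltage, volts (assumed)

V_rms = V_max / math.sqrt(2)      # about 120 V
omega_0 = 1.0 / math.sqrt(L * C)  # resonance angular frequency, rad/s

def rms_current(omega):
    """I_rms = V_rms / Z, with Z = sqrt(R^2 + (omega*L - 1/(omega*C))^2)."""
    reactance = omega * L - 1.0 / (omega * C)
    return V_rms / math.sqrt(R**2 + reactance**2)

# Power delivered at half, full, and twice the resonance frequency.
for w in (0.5 * omega_0, omega_0, 2.0 * omega_0):
    print(w, rms_current(w)**2 * R)
```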
Lesson 39: Maxwell's Equations
By the 1860s all the pieces of the electricity and magnetism puzzle were in place, except one. The last piece, discovered by James Clerk Maxwell and called (unfortunately) the displacement current, was just what was needed to produce electromagnetic waves called (among other things) light.
Text Assignment: Chapter 43
- Be able to write down Maxwell's equations and discuss the experimental basis of each.
- Be able to state the definition of Maxwell's displacement current and discuss its significance.
- Realize that Maxwell's equations reveal that light is an electromagnetic wave.
- Be able to state the expression for the speed of an electromagnetic wave in terms of the electric and magnetic constants.
- Be able to comment on the symmetry of Maxwell's equations.
- Know the significance of Maxwell's equations in modern technological society.
Lesson 40: Optics
Maxwell's theory says that electromagnetic waves of all wavelengths, from radio waves to gamma-rays and including visible light, are all basically the same phenomenon. Many of the properties of light are really just properties of waves, including reflection, refraction and diffraction. Ordinary light can be used to see things on a human scale, X-rays to "see" things on an atomic scale.
Text Assignment: Chapter 44
- Be able to discuss the nature and properties of various parts of the electromagnetic spectrum.
- Be able to state the law of reflection and Snell's law of refraction and relate them to the properties of waves.
- Be able to explain wave interference and diffraction.
- Be able to explain how we can "see" atoms.
Lesson 41: The Michelson-Morley Experiment
In 1887, in Cleveland, Ohio, an exquisitely designed measurement of the motion of the earth through the aether resulted in the most brilliant failure in scientific history.
Text Assignment: Chapter 45
- Know how to apply the Galilean Transformation for coordinates and velocities.
- Be able to describe the Michelson interferometer and explain its principles.
- Be able to state clearly why the Michelson-Morley experiment should have detected motion relative to the aether according to Newtonian Physics.
- Know what is meant by a Null experiment.
Lesson 42: The Lorentz Transformation
If the speed of light is to be the same for all inertial observers (as indicated by the Michelson-Morley experiment) the equations for time and space are not difficult to find. But what do they mean? They mean that the length of a meter stick, or the rate of ticking of a clock, depends on who measures it.
Text Assignment: Chapter 46
- Be able to use the Lorentz Transformation to work problems relating time or space intervals in different reference frames.
- Be able to give some of the hypothetical explanations put forward to account for the Michelson-Morley experiment.
- Be able to discuss the concept of length contraction.
- Be able to understand and use spacetime diagrams.
- Be able to define and discuss the concept of simultaneity.
- Be able to define and discuss clock synchronization.
Lesson 43: Velocity and Time
Unlike Lorentz, Albert Einstein was motivated to perfect the central ideas of physics rather than to explain the Michelson-Morley experiment. The result was a wholly new understanding of the meaning of space and time, including such matters as the transformation of velocities, time dilation, and the twin paradox.
Text Assignment: Chapter 47
- Be able to state the Einstein postulates of Special Relativity.
- Be able to state the velocity transformation formula for Special Relativity and how it is different from Galilean relativity.
- Be able to define proper time and proper length and state the equations for time dilation and length contraction.
- Be able to explain the mu-meson experiment in terms of Einstein's theory.
- Know how to use spacetime diagrams for simple problems.
- Be able to clearly state the twin paradox, and discuss its solution.
Lesson 44: Mass, Momentum, Energy
The new meaning of space and time makes it necessary to formulate a new mechanics. Starting from the conservation of momentum, it turns out among other things that E = mc².
Text Assignment: Chapter 48
- Be able to state the definition of relativistic momentum and the equations relating kinetic energy and the total energy of a particle to its speed (see the sketch after this list).
- Be able to discuss the relation between mass and energy in Special Relativity and compute the binding energy of various systems from the known rest masses of their constituents.
- Be able to discuss the concept of relativistic mass.
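A short sketch of those relations for an electron at an assumed speed of 0.8c (the rest mass and speed of light are standard reference values):

```python
# Sketch: gamma = 1/sqrt(1 - v^2/c^2), p = gamma*m*v,
# E_total = gamma*m*c^2, KE = (gamma - 1)*m*c^2.
import math

c = 2.998e8    # speed of light, m/s
m = 9.109e-31  # electron rest mass, kg
v = 0.8 * c    # assumed speed

gamma = 1.0 / math.sqrt(1 - (v / c)**2)  # Lorentz factor (5/3 at 0.8c)

p = gamma * m * v           # relativistic momentum
E_rest = m * c**2           # rest energy, E = mc^2
E_total = gamma * m * c**2  # total energy
KE = E_total - E_rest       # relativistic kinetic energy

print(gamma, p, E_rest, E_total, KE)
```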
Lesson 45: The Temperature and Gas Law
The ups and downs of scientific research are reflected in Boyle's experiments and Charles's investigations. Hot new discoveries about the behavior of gases make the connection between temperature and heat, and raise the possibility of an absolute temperature scale.
Text Assignment: Chapter 15
- Be able to state the definitions of the Celsius temperature scale and the Fahrenheit temperature scale and convert temperatures given on one scale into those of the other.
- Be able to convert temperatures given on either the Celsius scale or the Fahrenheit scale into kelvins.
- Be able to state the equation of state for an ideal gas and give the value of the universal gas constant in joules per mole per kelvin (see the sketch after this list).
- Know that the average energy of a gas molecule at temperature T is of the order of kT, where k is Boltzmann's constant.
- Know that the absolute temperature T is a measure of the kinetic energy of a gas.
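A quick worked check of these relationships; the amount of gas, volume, and temperature below are assumed for illustration, while R, k, and Avogadro's number are standard values:

```python
# Sketch: PV = nRT, average molecular translational energy ~ (3/2) kT,
# and the consistency check R = N_A * k.
R = 8.314       # universal gas constant, J/(mol K)
k = 1.381e-23   # Boltzmann's constant, J/K
N_A = 6.022e23  # Avogadro's number, 1/mol

n = 1.0     # moles of gas (assumed)
V = 0.0224  # volume, m^3 (assumed)
T = 273.15  # temperature, K (assumed)

P = n * R * T / V          # about 1.0e5 Pa, roughly one atmosphere
avg_energy = 1.5 * k * T   # average translational kinetic energy per molecule

print(P, avg_energy, N_A * k)  # the last value reproduces R
```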
Lesson 46: Engine of Nature
There was a young man named Carnot
Whose logic was able to show
For a work source proficient
There's none so efficient
As an engine that simply won't go.
- David L. Goodstein, Physics undergraduate (1958)
Text Assignment: Chapter 16 (Advanced text -- Chapter 20)
- Be able to state the first law of thermodynamics and use it in solving problems.
- Be able to calculate the work done by a gas during various quasi-static processes and sketch the processes on a PV diagram.
- Be able to give the definition of the efficiency of a heat engine.
- Be able to describe a Carnot engine.
- Be able to use the expression for the efficiency of a Carnot engine.
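A minimal sketch of the Carnot efficiency with assumed reservoir temperatures:

```python
# Sketch: Carnot efficiency e = 1 - T_cold / T_hot (temperatures in kelvins).
T_hot = 500.0   # hot reservoir, K (assumed)
T_cold = 300.0  # cold reservoir, K (assumed)

efficiency = 1.0 - T_cold / T_hot  # 0.40 for these temperatures

Q_in = 1000.0              # heat drawn from the hot reservoir, J (assumed)
W_out = efficiency * Q_in  # maximum work any engine could extract, J

print(efficiency, W_out)
```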
Lesson 47: Entropy
This program illustrates the genius of Carnot, Part II, and the second law of thermodynamics. The efficiency of Carnot's ideal engine depends on the ratio between high and low temperatures in the running cycle. Carnot's theory begins with simple steam engines and ends with profound implications for the behavior of matter and the flow of time throughout the universe.
Text Assignment: Chapter 17 (Advanced Text -- Chapter 21)
- Be able to give a qualitative description of entropy.
- Be able to calculate the change in entropy of some irreversible processes.
- Be able to discuss the connection between the second law of thermodynamics and the entropy principle.
- Understand the role of entropy in the formation of ice.
Lesson 48: Low Temperatures
Solids, liquids, and gases are the substance of every substance in the physical world. With the quest for low temperatures came the discovery that, under the right conditions of temperature and pressure, all elements can exist in each of the basic states of matter.
Text Assignment: Chapter 18 (Advanced Text -- Chapter 22)
- Explain how you make something colder.
- Be able to list and give examples of the three basic states of matter.
- Be able to explain what a phase diagram is.
- Be able to reproduce a phase diagram for water and explain why it is unique.
- Know how gases are liquefied.
- Be able to explain the Joule-Thomson effect.
Lesson 49: The Atom
This program explores the history of the atom, from the ancient Greeks to the early 20th century, when discoveries by J.J. Thomson and Ernest Rutherford created a new crisis for the world of physics.
Text Assignment: Chapter 49
- Be able to summarize the kinetic theory and discuss the size of atoms.
- Be able to compare Thomson's model of an atom with Rutherford's planetary model of an atom.
- Be able to discuss why Rutherford's model of an atom conflicted with Maxwell's theory of charged particles.
- Be able to discuss the significance of Brownian motion in providing evidence for the existence of atoms.
Lesson 50: Particles and Waves
Even before the crisis of the atom, there was evidence that light, which was certainly a wave, could sometimes act like a particle. In the new physics, called quantum mechanics, not only does light come in quanta called photons, but electrons and other particles also interfere like waves.
Text Assignment: Chapter 50
- Be able to describe the evidence that light waves sometimes behave like particles.
- Be able to state the de Broglie relations for the frequency and wavelength of electron waves.
- Be able to discuss wave-particle duality.
- Be able to discuss the Heisenberg uncertainty principle.
- Be able to discuss the experimental evidence for the existence of electron waves.
- Be able to define probability amplitudes and discuss their meaning.
Lesson 51: From Atoms to Quarks
Electron waves confined by electric attraction to the nucleus help resolve the dilemma of the atom and account for the periodic table of the elements. Nucleons themselves obey a kind of periodic table, following inner rules that lead to the idea of quarks.
Text Assignment: Chapter 51
- Be able to define and discuss standing waves.
- Be able to describe the Bohr atom in terms of standing de Broglie electron waves.
- Be able to discuss the periodic table in terms of electronic structure.
- Be able to discuss quarks and their role in the structure of matter.
Lesson 52: The Quantum Mechanical Universe
A last, lingering look at where we've been, and perhaps a timid glance into the future, marks the close of the series The Mechanical Universe and Beyond....
Text Assignment: Chapter 52
Prepared by Nicole Strangman and Tracey Hall
National Center on Accessing the General Curriculum
Note: Links have been updated on 11/6/12
Curriculum enhancements are add-ons directed at helping students to overcome curriculum barriers that impede access to, participation in, and progress within the general curriculum. For many students, a primary barrier is printed text—a staple of classroom instruction. To give a few examples, students without a well-developed ability to see, decode, attend to, or comprehend printed text cannot learn from it and are severely disadvantaged throughout their education. Students' difficulties with printed text range from subtle to profound, but every student can benefit from a curriculum enhanced with alternative media and text supports. The discussion below introduces a set of curriculum enhancements, which we call text transformations, that represent such alternatives.
Definition and Types of Text Transformations
We use the term text transformations as a broad classification inclusive of text modifications and innovative technology tools that alter or add to the features of printed text. To facilitate intelligent and productive discussion, within these two categories we have developed subcategories that group together similar enhancements. Although many enhancements rightly belong to multiple categories, to avoid redundancy, we have placed them in what appears to be the best fit.
Modified text is any text that has been changed from its original print format. The category encompasses texts with altered content or physical characteristics and printed texts presented in a different modality. Traditionally, teachers have carried out text modifications by hand—enlarging text on a photocopier, rewriting text with simplified language, or underlining main ideas in a textbook. This approach places an unnecessary burden on teachers, for whom it becomes very cumbersome—even infeasible—to accomplish on a class-wide scale. Technology can make this job easier to achieve with many more students. Because we feel that technology is essential to making modified texts a realistic kind of enhancement, we will discuss only technology-based modified texts.
Most text modifications begin with conversion to electronic text, because this conversion releases teachers and students from the rigidity of the print format. Once converted to an electronic form, text can, for example, be easily converted to modified text in the form of text-to-speech. It can also be converted to hypertext, which incorporates hyperlinks to existing or supplemental content. These hyperlinks may help explain difficult vocabulary or concepts, provide background information, or prompt self-reflection or the use of comprehension strategies. These same kinds of supports can be built-in through hyperlinks to images, sound, animation, and video—resulting in a hypermedia text.
Multimedia, video, and videodiscs are additional examples of modified texts. They represent a change of modality, and in some cases a change of content. These types of modified texts use images, moving images, and sound to provide information redundant with or supplementary to the text.
We define a technology tool as any technological device or program that affects the use of text or content that would otherwise be presented with text. Examples include spell checkers, word processors, word prediction software, speech recognition software, and computer/software programs. Word processors provide a means to generate text, edit its content, and alter its physical characteristics. Spell checkers, speech recognition software, and word prediction software also scaffold the writing process. Computer and software programs can offer multiple technological tools in one package, providing a non-print environment for teaching, studying, and practicing skills.
Application across Curriculum Areas
Text transformations have potential applications across a range of curriculum areas. Although reading and writing are by far the best studied applications, a wide range of subject areas is represented in the research base: reading (N=61), writing (N=25), spelling (N=7), English (N=2), language arts (N=1), mathematics (N=14), social studies (N=4), science (N=10), health (N=2), social problem solving (N=2), reasoning (N=1), and telling time (N=2). Research investigations of text transformations have not been evenly distributed across these different curriculum areas. Word processing and word prediction, for example, have mostly been evaluated as enhancements to the writing curriculum, text-to-speech as an enhancement to reading instruction, and video and videodiscs as enhancements to math and science curricula. To an extent, this inequality reflects the compatibility of different text transformations with different curriculum areas. However, it is useful to keep in mind when reading this review that the operations and skills supported by a text transformation in one curriculum area are likely to be beneficial to other curriculum areas as well.
Evidence for Effectiveness
The research literature is a valuable resource for evaluating the usefulness of enhancements and the ideal conditions for their classroom use. In the following sections, we digest the research findings for 10 enhancements, characterizing the extent to which each one is research validated and identifying the factors that influence its effectiveness. The discussion incorporates findings from an expansive survey of the peer-reviewed literature between 1980 and 2002. This survey included research studies conducted in K–12 education settings.
Modified Text Defined
Modified text represents a change of modality that alters or adds to features of printed text. It is any text that has been changed from its original print format. This may include altered content or physical characteristics and printed texts presented in a different modality. The types of modified text reviewed for this report include: electronic, text-to-speech, video and videodiscs and finally hypertext and hypermedia. Each represents a change of modality, and in some cases a change in content.
Our discussion of electronic text is restricted to studies implementing it in its purest form, that is, absent other media such as sound and images. Studies of media-supplemented texts are discussed in the sections on hypermedia and multimedia. Their exclusion leaves relatively few studies, but the four studies discussed below contribute fundamental insights into the use of computers in the classroom.
In its simplest form, electronic text consists of an online display of print material. Studies of simple electronic text enable researchers to address the basic question of whether there is some advantage to digital display alone. Casteel (1988–89) for example, compared reading comprehension of text passages under three conditions: when the text was chunked and displayed on the printed page, when the text was chunked on a computer screen, and when the text was unchunked on a computer screen. Although chunked passages were associated with significantly greater reading comprehension than were the unchunked passages, student performance was statistically equivalent in the online and offline conditions (Casteel, 1988–89). Consistent with these findings, both Reinking & Schreiner (1985) and Swanson & Trahan (1992) showed that the text medium (print or electronic) does not affect reading comprehension or reading rate (Reinking, 1985; Swanson & Trahan, 1992). In contrast, Swanson & Trahan (1992) provide evidence that electronic text better supports vocabulary learning. It is not clear, however, that Swanson & Trahan performed the necessary statistical controls when making the relevant comparisons. Moreover, the study design did not offer any control for the potential novelty effect of reading on the computer.
It is not surprising that merely displaying material on a computer screen does not bring about superior reading skill. Exact reproduction of print material on a computer screen fails to take full advantage of the electronic medium's flexibility, which allows for reformatting and enhancement of the text with, for example, supports for reading comprehension (Leong, 1995; Reinking, 1985) and vocabulary (Feldmann & Fish, 1991; Leong, 1995). Reinking & Schreiner (1985) designed a computerized version of expository text passages that included four supports for reading comprehension: definitions for difficult words, main ideas for each paragraph, background information, and simplified versions of the passages. Students that read these supported electronic texts significantly outperformed those who read a basic electronic or print version. These findings suggest that electronic text can be a highly beneficial learning tool when the flexibility of the medium is put to use. In contrast, Leong (1995) found no differential benefit of regular text passages, simplified text passages, regular text passages with explanations of difficult words, or regular text passages with explanations and prereading questions—all read on the computer with text-to-speech (Leong, 1995). However, the sample size in this study was rather small to effectively detect differences, and treatment effects might have been obscured by pre-test differences.
Factors Influencing Effectiveness
This body of literature is too small to draw many supported conclusions about the factors influencing the effectiveness of electronic texts in the classroom. However, the limited research does caution that some characteristics can undermine electronic text's effectiveness. Reinking & Schreiner (1985) found that poorer readers in their sample performed better offline than they did when reading online with optional viewing of reading comprehension supports. This finding suggests that students, or at least poorer readers, may need advisement on when and how to use online supports effectively. Although the ability to add various supports is a clear benefit to this medium, electronic texts must be designed carefully and accompanied with sufficient guidance that different types of learners can navigate the text efficiently and put to their advantage its innovative features.
Thirteen studies were identified relating to the effectiveness of text-to-speech or recorded speech as a learning tool in the classroom. The literature indicates that text-to-speech can be a valuable tool, but its effectiveness is contingent on numerous factors. These are discussed in the following sections.
Factors Influencing Effectiveness
Type of text-to-speech. Synthetic text-to-speech is more widely available and easier to generate than digitized text-to-speech, and this is reflected in the literature, where studies of synthetic speech predominate. Eight of the studies in this review used synthetic speech (Borgh & Dickson, 1992; Elbro, Rasmussen, & Spelling, 1996; Elkind, Cohen, & Murray, 1993; Farmer, Klein, & Bryson, 1992; Leong, 1992; Lundberg & Olofsson, 1993; Olson & Wise, 1992; Wise, 1992), four digitized speech (Davidson, Coles, Noyes, & Terrell, 1991; Dawson, Venn, & Gunter, 2000; van Daal & van der Leij, 1992), and one both (Hebert & Murdock, 1994). Four studies used recorded human speech (Abelson & Petersen, 1983; Davidson, Elcock, & Noyes, 1996; Montali & Lewandowski, 1996; Shany & Biemiller, 1995).
Six of the 8 studies evaluating synthetic speech reported some positive effect. Olson and Wise (1992) found that reading online with synthetic speech feedback led to significantly greater improvement on word and nonword recognition scores than did spending time out on a computer. Given that the control group was not engaged in reading practice for the same amount of time, this is not surprising. Wise (1992) demonstrated an improvement in word recognition following time spent reading with text-to-speech. However, this study had no control group. Borgh & Dickson (1992) reported that writing on the computer with sentence-level speech feedback led to significantly more sentence-level editing than did writing on the computer without the feedback. In a study by Elbro, Rasmussen, and Spelling (1996), word recognition, comprehension, and fluency were all more positively affected by the use of syllable- or letter name-level synthetic speech than by ordinary remedial training. Improvements in oral reading fluency, however, were equivalent to those of the traditional instruction group. Additional positive effects have been reported for certain subpopulations within student samples. Both Leong (1992) and Lundberg and Olofsson (1993) reported a grade level-dependent advantage of reading with text-to-speech on reading comprehension.
Several of these authors evaluated other learning outcomes, with negative results. Elbro et al. (1996) could not establish any advantage of text-to-speech instruction over regular instruction for phonics and phonemic awareness—neither the computer instruction nor traditional instruction stimulated improvements in these skills. Lundberg & Olofsson (1993) found that word decoding scores were roughly the same whether students read online with speech feedback for targeted words or without it. Borgh and Dickson (1992) report no significant differences in the length, quality, or audience awareness of student compositions when they wrote with or without text-to-speech. Farmer et al. and Elkind et al. reported wholly negative findings: no significant differences in vocabulary (Elkind et al.), word recognition (Farmer et al., 1992), reading comprehension (Elkind et al.; Farmer et al.), or total normal curve equivalent (Elkind et al.) between students who worked with and without text-to-speech.
There is little corroboration within the synthetic text-to-speech literature, which makes it difficult to draw conclusions. However, there is tentative evidence to suggest a beneficial impact of this text transformation tool on nonword recognition and sentence level editing. Evidence regarding its impact on word recognition and reading comprehension is contradictory and needs to be resolved.
Research investigations of digitized text-to-speech and recorded speech are few but generally favorable. There is some converging evidence to suggest that reading text with recorded or digitized text-to-speech effectively improves vocabulary (Davidson et al.; Davidson et al.; Hebert & Murdock, 1994). There is also some evidence to suggest that reading with the support of digitized text-to-speech favors the development of better word reading accuracy and fluency (Davidson et al., 1991; Dawson et al., 2000; Shany & Biemiller, 1995; van Daal & van der Leij, 1992). Although Montali & Lewandowski (1996) found that students made equivalent gains in word recognition scores when just listening to prerecorded text, reading the text on the computer, and reading a text-to-speech version of the text, the intervention lasted only 3 sessions. Thus, there is evidence, although somewhat limited, that digitized text-to-speech or recorded speech can promote several reading skills.
Another, perhaps more important, question that needs to be resolved is how the advantages of reading with text-to-speech compare to those of reading with a human model. Shany & Biemiller (1995) found that students who took part in regular reading sessions where they listened to and followed along with a book on tape showed improvements in reading speed and verbal efficiency (speed and accuracy of reading aloud) equivalent to those made by students who engaged in teacher-assisted reading sessions. Neither form of practice improved letter or word naming speed. Although Dawson & Venn (2000) found that word reading is more accurate with a teacher model than with a digitized text-to-speech model, they sampled a very small number of students and present data that is very inconsistent. Abelson & Petersen (1983) found that listening to a book on tape during silent reading supports story recall as well as listening to a reading by the teacher does. Finally, Montali & Lewandowski (1996) found that students reading with recorded speech scored significantly higher on a later test of reading comprehension than those in text- or audio-only conditions. Thus, there is literature support for equivalent, and in some cases greater, effectiveness of recorded and digitized text-to-speech relative to live human speech. However, more research is needed to clearly identify which reading skills are best promoted by these technologies.
Another practically relevant question is whether one form of text-to-speech is more effective than the other. There are no direct statistical comparisons of digitized and synthetic text-to-speech. However, Hebert and Murdock (1994) conducted a quantitative experimental study comparing the two types. Three students with learning disabilities and speech impairments alternated between reading vocabulary words with definitions and sentences online using digitized speech, synthetic speech, or no speech. All 3 students scored highest on vocabulary tests during one of the text-to-speech treatments—but which type of text-to-speech was most advantageous was not consistent: in 1 case, synthetic text-to-speech brought the best results, and in 2 cases, digitized text-to-speech.
At this point, it is not clear whether one form of text-to-speech has an advantage over the other. Given the arduousness of developing digitized speech representations of classroom materials, the research does not yet justify prioritizing it over synthetic speech. However, the picture could very well change, when more studies of digitized text-to-speech are published.
Level of speech feedback. Text-to-speech can be used to provide different levels of speech feedback, including passage (Davidson et al.; Davidson et al.; Dawson et al.; Leong, 1992; Montali & Lewandowski, 1996), sentence (Borgh & Dickson, 1992; Elkind et al.; Hebert & Murdock, 1994), word (Davidson et al.; Davidson et al. 1996; Elbro et al.; Elkind et al.; Farmer et al.; Hebert & Murdock, 1994; Lundberg & Olofsson, 1993; Olson & Wise, 1992; Wise, 1992), onset rime (Olson & Wise, 1992), syllable (Olson & Wise, 1992), and subsyllable (Elbro et al.; Elkind et al.; Wise, 1992). Looking at the research base, there appears to be a clustering of positive findings around studies using word- or sentence-level feedback. However, this is not a fair way of interpreting the data because word- and sentence-level feedback have been investigated in far more studies than have other types of feedback. A better approach is to look at direct comparisons. Wise (1992), for example, compared four types of feedback: whole word, syllable, subsyllable, and single-grapheme-phoneme and found single-grapheme-phoneme feedback to be the most effective for improving word recognition (Wise, 1992). Olson & Wise (1992) reported that reading on the computer with onset rime, syllable, or whole word synthesized speech feedback all promoted significantly better word recognition than equivalent time out on the computer (Olson & Wise, 1992). However, for nonword recognition, only whole word and syllable feedback produced significantly better results than time out on the computer. Similarly, Elbro et al. (1996) found that whole word- and letter name-level synthetic speech feedback both offered benefits greater than traditional instruction. Both significantly improved word recognition, comprehension, and fluency. They differed only for syllable segmentation and nonword reading, for which syllable-level feedback was more effective. The apparent bottom line is that many types of speech feedback can be effective enhancements of reading instruction. Some types of text-to-speech may, however, have an advantage when it comes to elevating certain reading skills.
Grade level. Grade level is one of the most highly variable factors in this literature. Because the various studies targeted different grade-level mixtures of students, it is impossible to speak assuredly to the issue of whether and how age or grade-level influences the effectiveness of text-to-speech as a learning tool. However, certain worthwhile observations can be made.
Positive effects of one form or another have been demonstrated for students in grades 2 through 9. Abelson & Petersen (1983) did not find any statistically significant effect of age in their analysis. Leong (1992) reported an interaction between grade level and the effect of text-to-speech on text comprehension, but its meaning is unclear because the statistical tests necessary for interpretation were not performed. Lundberg & Olofsson (1993) are the only authors who clearly demonstrate grade level-dependent effects, showing that text-to-speech feedback improved reading comprehension for more experienced readers (students in grades 4, 6, and 7) but not for beginning readers (students in grades 2 and 3). Word decoding improved for both groups.
Interestingly, Farmer et al. (1992), one of two groups to report entirely negative findings, were also the only group to include high school students in their sample. It is possible, although unverifiable without more research, that the contradictions in the literature regarding text-to-speech's effects on reading comprehension and word recognition reflect poor effectiveness of text-to-speech with older readers. As more studies begin to explore grade level as a factor, firmer statements can be made about the types of students for which text-to-speech is most beneficial as a curriculum enhancement.
Educational group. Nearly every study in this text-to-speech survey focused on students outside the average-performing population. Samples included below grade level students, below average students, a mixture of below and above average students, students with reading disabilities, students with learning disabilities or speech impairments, students with dyslexia or from special education classes, and regular education students. There is not enough overlap in the various study samples to justify conjecture about the impact of educational group status on the effectiveness of text-to-speech.
Video and Videodisc
Video and videodiscs offer a new way to deliver content and instruction to students, either as an alternative or a supplement to traditional methods. Ten studies in this survey investigated the impact of video- (Bain, 1992) or videodisc-based (Bottge, 1999; Bottge & Hasselbring, 1993; Friedman, 1984; Hasselbring et al.; Kelly, 1986; Sherwood, Kinzer, Bransford, & Franks, 1987; Thorkildsen & Reid, 1989; Xin, Glaser, & Rieth, 1996; Xin & Rieth, 2001) instruction on student learning. In six of the seven studies that included a comparison condition, video- or videodisc-based instruction was demonstrated to have a positive impact on learning, superior to that of alternative forms of instruction. The remaining three studies (Friedman & Hofmeister, 1984; Thorkildsen & Reid, 1989; Xin et al., 1996) were unable to establish superiority to more traditional approaches because they did not include a non-videodisc control group.
Mathematics is a popular curriculum application for video and videodiscs. Hasselbring et al. and Kelly et al. evaluated a videodisc-based curriculum for fractions computation, called Mastering Fractions (MF). The MF videodisc offers lessons, exercises, guided and independent practice, quizzes, reviews, and feedback and correction relating to fractions concepts. Students interact with the program by responding orally to prompts. Kelly et al. compared the impact of 9 lessons with the Mastering Fractions (MF) videodisc-based curriculum to 9 lessons of a basal curriculum. Students in the MF group significantly outperformed students in the basal curriculum group on a criterion-referenced post-test and maintenance test. Unfortunately, these findings are weakened somewhat by the fact that instruction was delivered by experimenters and not by the teacher. Moreover, interpretation of the findings is problematic because both the medium and content of the two curricula were different, leaving unclear which of these factors contributed to the videodisc intervention's advantage.
A later study by Hasselbring et al. addressed these lingering questions by controlling for both medium and curriculum content. Students received 35 lessons with the videodisc-based MF program, a teacher-based replication (with transparencies in place of video), or the regular curriculum (a spiraling fractions curriculum). Mastering Fractions instruction, with or without videodisc, yielded significantly higher scores on a fractions post-test compared to the regular curriculum group. These findings suggest that the content of the MF program—not the videodisc medium it uses—gives it an advantage over traditional curricula.
Other studies, however, support the idea that the video/videodisc medium can itself offer a unique advantage. Bottge & Hasselbring (1993) and Bottge (1999) investigated an approach that uses a videodisc to situate mathematical problem-solving instruction within a meaningful, real-world, context, providing contextualized problem solving instruction. The approach uses videodisc-based math problems that unlike conventional word problems are not explicitly stated and therefore require the student to extract relevant information that is embedded within the video scenes. Bottge and Hasselbring (1993) found that students given 5 days of contextualized problem solving instruction (using the MF videodisc program and videodisc-based contextualized math problems) scored significantly higher on a test of contextualized problem solving than did those receiving conventional word problem instruction. These findings are weakened somewhat by the failure to perform a necessary statistical control and the extreme variability in post-test scores for the MF group. However, using the same basic paradigm, with the addition of cognitive strategy instruction in both conditions, Bottge (1999) obtained very similar findings. Students that received 10 days of videodisc-based contextualized problem solving instruction significantly outperformed students in the conventional instruction group on a contextualized problem solving post-test and transfer test. Collectively, these two studies argue that the videodisc medium may be able to support mathematics learning by contextualizing instruction in a way that traditional media presently do not.
Context is also important when reading. Sherwood et al. examined the use of videodisc to contextualize expository text. Before reading a science passage, half of the students in the study watched a videodisc that provided a "macro-context" for the text (for example, before reading a passage about tarantulas, students watched a segment of the movie Raiders of the Lost Ark that involved tarantulas). Students were later quizzed on science concepts. Students in the videodisc group scored significantly higher on the quiz than did students who read the passage without watching the videodisc. The lingering question in this one-day study is whether this is a lasting effect and not a consequence simply of students' excitement at watching video in class. Findings by Xin & Rieth (2001) suggest caution, demonstrating that traditional methods can be used to nearly equivalent success to anchor instruction. Students taught vocabulary through a traditional anchored instruction approach (using printed materials and the teacher) demonstrated improvements in cloze sentence completion equivalent to those taught with a videodisc-based anchored instruction approach. However, the videodisc group did make significantly greater performance gains on a word definition test. Thus, it cannot be assumed that the addition of a technology will drastically improve upon the effectiveness of a traditional approach. The choice should take into account other factors, such as the level of resources needed for each approach and the ease of integration of technology into the curriculum.
Factors Influencing Effectiveness
Subject area. The research literature has focused almost entirely on the potential of video and videodiscs for mathematics instruction, and, more specifically, on word problem solving and fractions computation. Exceptions are Sherwood et al., Blain et al., Thorkildsen & Reid (1989), Xin et al., and Xin and Rieth (2001).
Sherwood et al., in a study detailed above, found that contextualizing expository science passages through the use of videodisc significantly improved student understanding of the science concepts presented in the text. Xin et al. and Xin and Rieth (2001) provide evidence for the effectiveness of videodisc-based anchored instruction for teaching vocabulary. And a videodisc time-telling program was an effective instructional tool in Thorkildsen and Reid's (1989) study. Blain et al. provide evidence to support the use of a video-based approach to social problem solving instruction. Thus, there is a strong indication that video and videodiscs may be beneficial in a broad range of curriculum areas. This evidence must, however, be corroborated and expanded upon to build a more even base of knowledge across the curriculum.
Novelty. A factor not generally addressed in the literature, but of certain relevance to the use of video and videodiscs, is novelty. Any novel approach to instruction—but certainly one involving a medium as appealing as video—would be expected to engage and motivate students to a great degree. Only Blain et al. included a control for the novelty of the medium. It could be argued, based on their finding that the group with the highest performance scores also had the highest scores on attention, that the reported success of video- and videodisc-based approaches is tied to their novelty. To sort out this possibility, future studies should make a priority of incorporating the necessary control groups and conducting maintenance tests.
Grade level. Reflecting in part the research focus on algebra instruction, experimental studies of video and videodiscs have primarily sampled students in the middle and high school grades (grades 7–12). There is fairly strong evidence to support the use of video and videodiscs within this cross section of students. Applications in lower grades have received less attention, but there are promising findings regarding the use of videodisc approaches for teaching vocabulary in grades 4–6.
There is no indication in the existing research of an age-related difference in the effectiveness of video- and videodisc-based instruction. Sherwood et al. (1987) did not observe any differences in the effectiveness of their videodisc science intervention between 7th and 8th graders. However, no other study has incorporated grade level as a factor in its analysis.
Educational group. A range of educational groups is represented in this body of research. The ten studies surveyed sampled students from general education classes (Bain, 1992), students from remedial math classes (Bottge & Hasselbring, 1993; Kelly, 1986), a mixture of students from general math and remedial math classes (Bottge, 1999), students with learning disabilities or mental retardation (Friedman, 1984; Xin et al.; Xin & Rieth, 2001), a mixture of students from regular and special education classes (Thorkildsen & Reid, 1989), students with above or below average math ability (Sherwood et al.), and students with average to high math ability (Hasselbring et al.). This diverse sample lends good support to the contention that video and videodiscs can be beneficial enhancements for students across a range of math abilities and educational groups.
Digital media offer a range of new options for teaching and learning. One of their advantages is that they allow for multiple representations of the same information, for example as text, images, and speech. These representations can be redundant, presenting the same information in multiple media, or supplementary, providing background information or alternative perspectives. Devices like CD-ROMs make the presentation of multiple media simple and practical. Twelve studies in this survey relate to the use of multimedia in the classroom. Overall, the results of these studies are mixed, necessitating thoughtful interpretation.
Some of the strongest evidence for multimedia enhancements comes from an investigation of picture-word processing by Chang & Osguthorpe (1990). These researchers examined the impact on kindergartners of a tutoring program that used picture-word processing to teach words and simple sentence writing. The picture-word processing program allows students to write messages on a computer by selecting pictures on an electronic tablet. Kindergartners who underwent tutoring in place of their regular instruction over the course of 6 weeks made significantly greater reading gains (word identification and passage comprehension) and demonstrated a significantly greater enjoyment of reading than their peers (Chang & Osguthorpe, 1990). These findings suggest that a picture-word processing tutoring program is an effective way to improve early reading and writing instruction.
A more common and more widely investigated multimedia form is the CD-ROM storybook, which offers digital text together with features such as animations, sound, speech, and illustrations (Lewin, 1997, 2000; Matthew, 1997; Miller, 1994; Talley, Lancy, & Lee, 1997). Talley et al.'s findings suggest that exposure to CD-ROM storybooks is valuable for even the youngest students, helping pre-readers to develop an understanding of story structure and sequence.
Evidence regarding the impact of CD-ROM storybooks on elementary school readers is more plentiful but generally less persuasive. Lewin (1997) reported an association between increased vocabulary knowledge and reading with talking books. Nine poor readers who spent one month reading online using multimedia talking book software made daily gains averaging 5.7 words on the Common Words Knowledge test. Although promising, these findings are limited by the case study methodology and the lack of a control group. The findings are substantiated somewhat by Miller et al., who document vocabulary improvements following repeated readings with a CD-ROM storybook. In this case, CD-ROM reading was favorably compared to repeated reading of a traditional print storybook. However, the small sample size (N=4) again prevented statistical comparisons.
A methodologically stronger study by Matthew (1997) raises the question of whether CD-ROM storybooks are necessarily better than traditional print storybooks. Matthew (1997) engaged third grade students in an extended reading activity where they read, discussed, and wrote a re-telling of a traditional print or CD-ROM storybook, the latter offering animation, online definitions, and sound effects. Subsequent comprehension scores were statistically equivalent between the two groups. Although students who read the CD-ROM storybooks scored significantly higher on retellings, when members of the control group were switched over to the experimental intervention, results favored the traditional print storybook. These findings might be reconciled by some kind of practice effect in the control group. Nevertheless, they create a level of uncertainty.
Indeed, findings by Large, Beheshti, Breuleux, and Renaud (1994) further question the ability of multimedia materials to support comprehension and recall better than printed materials do. Sixth graders in this study scored equivalently on tests of free recall and comprehension whether reading from a multimedia encyclopedia (with text, animation, video, and sound), text-only digital encyclopedia, or print encyclopedia (with text and pictures). However, the circumstances of the study may not have been maximally favorable to the use of multimedia. Namely, the students in the study were novices to the technology and the intervention lasted only 2 short sessions. Thus, it is possible that students simply need more training to reap the benefits of multimedia materials for comprehension and recall.
The issue of needing to equip students to take full advantage of multimedia supports surfaced in another study by Lewin (2000) comparing the effectiveness of 2 different versions of talking book software with 5- and 6-year-old low-ability readers. The basic version offered onset-rime- and passage-level text-to-speech with highlighting, whereas the enhanced version offered, in addition, pronunciation hints (which could be made optional or mandatory) and 5 reinforcement activities to improve the use of reading cues and develop sight recognition of key vocabulary. Lewin (2000) found no significant difference in the two versions' impact on Common Words Knowledge test scores. An important observation made by a teacher taking part in the study was that most students failed to access the pronunciation hints for their intended purpose, as a device for considering alternative word identification strategies. It is not clear that the students were given the instruction necessary to recognize this purpose and apply such strategies. Providing struggling readers, particularly early struggling readers, with such knowledge and guidance may be essential for them to take full advantage of more sophisticated vocabulary supports.
With sufficient training, students can learn to produce their own multimedia texts, offering an alternative to the traditional essay. Daiute & Morse (1994) trained students on multimedia software (scanning and digitizing, image editing, writing on computer) and over a series of 5–8 sessions had them create books on the topic of young people's interests in their city. The study produced some qualitative evidence to suggest that this multimedia composition approach was more engaging for students, but without a control group it is unclear whether the added engagement derived from the multimedia tools or simply the writing topic. Beichner (1994) also reported strong student engagement during the production of multimedia materials as part of a science curriculum. However, this study, as well, failed to provide quantitative data or incorporate a control group.
Glaser et al. and Bonk et al. investigated more elaborate forms of multimedia instruction. Glaser et al. investigated a multimedia anchored instruction unit where 8th grade students researched and reported on a topic using multimedia tools (Glaser, Rieth, Kinzer, Colburn, & Peter, 2000). Unfortunately, the only data collected were student-teacher interactions. Results suggest a reduction in teacher directives but do not speak to the question of changes in student performance. Bonk et al. did address learning outcomes in their study, which followed a large group of fifth and sixth grade students as they completed a multimedia weather unit involving email with other students, use of online weather databases, multimedia composition software such as Hyperstudio, and video (Bonk, Hay, & Fischler, 1996). The authors note improvement in several cognitive measures, including conceptual understanding, metacognitive task performance, and the ability to generate concept maps. Again, however, there was no control group to which to make comparisons.
A unique approach to multimedia enhancement of the curriculum was taken by Hay (1997). Hay developed narrated, captioned versions of instructional video that were tailored to students' developmental reading level. Students were assigned to one of three scripts differing in their vocabulary and narration rate. A large group of 4th graders worked with the videos over the course of 24 weeks. Unfortunately, Hay did not present the results of the comprehension and vocabulary assessments administered to the students, instead only commenting on student and teacher reactions to the technology, which were overall favorable (Hay, 1997).
Overall, the research base within the area of multimedia enhancements is weak. Many of the studies rely on qualitative observations, seek to generalize from small samples, and lack necessary control groups. There is some evidence to support the usefulness of multimedia enhancements in elevating engagement, vocabulary knowledge, and certain cognitive measures. However, in most cases the evidence presented does not establish an advantage over more traditional approaches. On the whole, the existing research is too flawed and the evidence too incomplete to build an argument for or against multimedia enhancements.
Factors Influencing Effectiveness
Grade level. Most of the research base involving multimedia instruction targets students in grades three to six. Exceptions are the studies by Glaser et al. and Beichner (1994), which sampled students in grade eight, and Lewin (2000), which sampled 5- to 6-year-old students. Clearly, it will be important that future studies investigate other grade levels.
Educational group. Research investigations into multimedia enhancements have focused nearly exclusively on students in the general education classroom without special needs. There are some exceptions. Lewin (1997) sampled only poor readers, and Glaser et al., Beichner (1994), and Lewin (2000) used mixed samples. Half of the students in the Glaser et al. sample were students with learning disabilities or mild mental retardation. Roughly 10% of Beichner's (1994) sample had some kind of disability. Lewin (2000) sampled low and high ability readers. None of these three mixed-sample studies, however, sought to differentiate the effects of the intervention on students from different educational groups. Thus, at this point, little can be said about the promise of multimedia for students with special needs.
Curriculum application. It is generally understood that different types of media are best suited to communicating different types of information (Rose & Meyer, 2002), but what should be used and when? There has been little research to address this question, but Large, Beheshti, Breuleux, and Renaud (1995) made an effort by comparing different combinations of media for their effectiveness in communicating procedural content to sixth grade students. Students were presented with procedural content in the form of digital text; digital text and animations; digital text, animations, and captions; or animations and captions. Although student recall was statistically equivalent across the 4 conditions, student enactment of the procedures was significantly more accurate when content was presented with a combination of text and animations or text, animations, and captions. Interestingly, the combination of animation and captions had no such benefit, suggesting that engagement alone was not responsible for the advantage of the other treatments. The authors suggest that combinations of text and animations or text, animations, and captions may be particularly well suited to instilling comprehension of procedural information. However, because of the brevity of the intervention (one session only), these findings require elaboration. Additional research of this kind, evaluating the ideal applications for various forms of multimedia, would be greatly beneficial.
Hypertext documents offer links within the text to text-only resources. Three studies in our survey conducted investigations relating to this type of enhancement.
Anderson-Inman and colleagues (Anderson-Inman, Chen, & Lewin, 1994; Horney & Anderson-Inman, 1994) have made extensive and insightful observations of eighth grade students working with hypertext. Importantly, they have shown that readers of this age, whether poor, average, or above average, are able to learn to use hypertext. They have also noted important differences in readers' interactions with hypertext, distinguishing several reader profiles and noting that not everyone uses text and resources in depth, integrating reading with accessing of hyperlinked supports. This work suggests that instruction on how to access hyperlinked resources purposefully is important to the successful integration of hypertext into classroom instruction.
Based on their student observations, Anderson-Inman et al. identified 5 major elements of hypertext literacy: traditional reading skills, facility with hardware, knowledge of a document's structure and navigation, ability to engage the text and enhancements with purpose, and ability to reevaluate the reading purpose. They point out that a student's skill in these areas, as well as his or her motivation and perception of the task, the design of the hypertext document, the instructional context, and teacher expectations can all influence the effectiveness of a hypertext environment for a particular student.
Horton et al. (1990) directly investigated the impact of hypertext on learning outcomes, testing the effects of a hypertext study guide on the social studies learning of four high school students classified as remedial or learning disabled (Horton, Boone, & Lovitt, 1990). The study guide presented passages from a history text together with study questions and leveled instructional cues based on students' responses to the questions. After working with the study guide, students answered the study guide questions significantly more accurately. However, students showed no improvement when quizzed on related questions absent from the study guide. These results suggest that a hypertext study guide with leveled supports can facilitate recall of study guide questions, but perhaps not generalized comprehension of the text. Because of the small sample size, however, none of these findings should be weighted too heavily.
Hypertext, although omnipresent in online learning environments, has received little attention from K–12 education researchers. The three studies presented here are suggestive of a beneficial impact on learning. However, much additional work is necessary to better evaluate this potential.
One quite active area of research is the investigation of hypermedia for use in the classroom. Liao (1998) identified 35 studies published between 1986 and 1997 that compared hypermedia to traditional instruction. However, much of this research was conducted in college or university settings or published outside the peer-reviewed literature and is therefore outside the scope of this review. Our survey identified 9 peer-reviewed studies (one a meta-analysis) involving hypermedia-based enhancements in the K–12 classroom. The hypermedia enhancements under investigation include hypermedia design software, hypermedia lessons, and hypermedia texts such as study guides.
A hypermedia study guide (Higgins & Boone, 1990; Higgins, Boone, & Lovitt, 1996; MacArthur, 1995) is an educational text presented in an electronic format, with embedded hyperlinks to multimedia supports such as explanatory notes, animated graphics, text-to-speech, definitions, and rereading prompts. Higgins and Boone (1990) and Higgins et al. investigated the effectiveness of a hypermedia study guide consisting of chapters from a social studies textbook supplemented by explanatory Notes, text Replacements (clarifying versions of the text or related graphics), and an Inquiry function that directed student movement through a series of study questions. Although both studies draw positive conclusions regarding the effectiveness of the study guide compared to traditional methods, the data do not support these claims. Higgins and Boone (1990) reported no significant differences among the study guide, study guide plus lecture, and a combination of lecture, text, and worksheets, except on day 1 of the 10-day intervention, when students in the study guide group outperformed those in the study guide/lecture group, who in turn outperformed those in the lecture-only group. Likewise, Higgins et al. could not statistically differentiate the study guide, study guide plus lecture, and lecture-only conditions based on daily quiz scores, which were statistically equivalent. Although the authors claim a statistical difference between retention scores, they report a p-value that is outside the conventional cut-off. These studies should not, however, be interpreted as evidence against the effectiveness of hypermedia, because they probably lack the statistical power to detect any effect of hypermedia.
Higgins and Boone (1991) and Boone & Higgins (1993) investigated the benefits of hypermedia texts intended not as study guides but as lessons supplemental to the basal reading series. The texts differed across the three years of the study:
Year 1 hypermedia texts incorporated vocabulary and decoding support in the form of synthetic and digitized speech, structural analysis of words, animated graphics, computerized pictures, definitions, and synonyms.
Year 2 hypermedia texts incorporated the Year 1 supports as well as syntactic and semantic support in the form of graphical demonstration of pronoun/referent relationships.
Year 3 hypermedia texts incorporated the Year 1 and 2 supports as well as comprehension strategies.
Student participants worked with the hypermedia texts or spent an equivalent amount of time on non-computer reading-related activities. Progress was evaluated by comparing MacMillan Standardized Reading Test scores from the beginning and end of each year. Results for Year 1 generally favored the hypermedia condition. Average total test scores for hypermedia readers in kindergarten, second, and third grade significantly exceeded those of the non-computer group (Boone & Higgins, 1993; Higgins, 1991). There were no significant differences in total test scores for first graders. Individual subtest scores were also compared, revealing more complex effects, favoring in some cases the hypermedia group and in other cases the non-computer group. In contrast, in Years 2 and 3 the only significant differences in overall test scores favored the non-computer group. For Year 2, as for Year 1, comparisons of individual subtest scores indicated some differences favoring the hypermedia group. For Year 3, however, only grade 3 vocabulary subtest scores favored the non-computer group.
The results of this 3-year study are difficult to summarize due to the tremendous number of statistical comparisons and experimental variables. However, as a whole they provide little support for hypermedia supplementation of basal reading series. Of the three sets of hypermedia materials, only those from Year 1 appeared to raise reading test scores above normal with any consistency. In fact, the hypermedia materials with the most supports (those from Year 3) produced inferior results.
In contrast, MacArthur & Haynes (1995) found that an enhanced hypermedia study guide was more effective at developing student comprehension than a basic version containing fewer supports. Both study guides were developed from a science text. The basic version presented a text passage together with a chapter outline, a graphics window showing associated pictures or graphs, and a notebook window for entering and editing text (MacArthur, 1995). The enhanced version incorporated six additional aids:
ability to copy text into the notebook
display of the textbook questions within a separate window
teacher explanations (brief summaries of important ideas or simplified statements of the main ideas)
optional text reformatting by the teacher
MacArthur and Haynes' (1995) findings argue that the integration of multimedia text supports can strengthen the effectiveness of a hypermedia study guide. However, their study does not allow conclusions to be made regarding the effectiveness of different hypermedia study guides relative to more traditional methods.
Moore-Hart (1995) evaluated a hypermedia program designed to supplement an offline Multicultural Literacy Program "that integrates multicultural literature and literature-based activities with the reading/writing curriculum." The hypermedia program, Multicultural Links, combines a word processor with interactive hypermedia such as maps, annotations of multicultural books, minibiographies, and a multicultural calendar. Student participants in the study used Multicultural Links with the Multicultural Literacy Program, the Multicultural Literacy Program only, or the traditional basal reader program. Reading comprehension and vocabulary normal curve equivalent scores spoke in favor of the hypermedia condition. However, the presence of what appear to be significant pre-test differences on these measures raises questions about these findings. Pre-test reading comprehension and vocabulary scores were considerably lower in the hypermedia group, raising the possibility of a ceiling effect that would limit vocabulary and comprehension gains in the nonhypermedia groups (Moore-Hart, 1995).
The focus of the Moore-Hart study, as with most others, is on the reading of hypermedia texts. An exception is a study by Tierney et al. evaluating a project in which students create hypermedia documents. The 10 ninth-grade participants, all with hearing impairments, wrote both conventional compositions and HyperCard texts over a series of three sessions of three to five hours each. Data, which are restricted to qualitative information from interviews and observations, indicate that students appreciated the multimedia composition options available with HyperCard and regarded the hypermedia-based projects more favorably (Tierney et al.).
Overall, with little solid evidence to show that hypermedia enhancements can improve upon the outcomes achieved with traditional K–12 instruction, it is difficult to build an argument in their favor. However, this may be a consequence of the poor quality of research in this area, making it important to conduct additional research.
Factors Influencing Effectiveness
Prior knowledge. A potentially important factor influencing a student's ability to efficiently interact with and learn from hypermedia materials is the subject matter knowledge that he or she brings to the text. Shin, Schallert, and Savenye (1994) investigated the impact of prior knowledge on students' learning in a hypermedia environment by dividing student participants into low and high prior knowledge groups based on the results of a subject area pre-test. Students spent one session working with a hypermedia lesson on food groups and were then tested a second time. Students with high prior knowledge scored significantly higher overall. These findings lead to the unsurprising conclusion that students who can bring prior knowledge to a hypermedia lesson, as with any lesson, will have an advantage.
A more interesting aspect of the Shin et al. findings was the presence of an interaction between students' prior knowledge and the complexity of the hypermedia environment. When working with a hierarchically structured lesson (topics were linked one to the next in a hierarchical format), students with low prior knowledge performed significantly better than when they worked with the lesson in an open format (any topic could be accessed at any time, from any location in the lesson). However, the degree of openness in the environment did not significantly affect the scores of students with high prior knowledge. These findings suggest that the structure of the hypermedia environment can strongly affect the impact of prior knowledge. Simplifying the hypermedia environment may help to scaffold students who are unfamiliar with the subject area.
Learner control. Navigating a hypermedia environment can be intimidating and confusing for students because of the unfamiliar format and the sheer number of resources and possible paths (Horney & Anderson-Inman, 1994). One approach to making hypermedia texts less overwhelming is to reduce the number of potential paths through the text, giving students fewer options. Another approach is to coach students on the best navigation route through a particular hypermedia text. Shin et al. evaluated the impact of both types of scaffolds on student learning by comparing performance when working with 4 versions of the same hypermedia text: 1) a version offering free access to the various subtopics (students could access them in whatever order they chose), 2) a version offering limited access (subtopics were linked one to the next in a hierarchical format), 3) a version offering free access together with advisement on the best sequence to follow, and 4) a version offering limited access with advisement. Students in the limited access conditions performed significantly better on the immediate post-test, but not on the delayed post-test, suggesting that limiting the openness of the hypermedia environment can facilitate student learning in the short term. Advisement did not significantly affect student performance, but this may be an artifact of the way the data were analyzed. Advisement would likely have had little impact on student performance in the limited access condition, where there is only one path to take. To overcome the weak or absent effect in the limited access groups, the effect of advisement in the free access groups would have to have been extremely powerful.
Shin et al.'s findings may account in part for the data reported by Higgins and Boone (1991) and Boone & Higgins (1993) showing the greatest success with the simplest hypermedia text, containing the fewest supports. The more elaborate texts may have been too complex for the students to use effectively.
Grade level. Six of the eight studies discussed above included students from multiple grades (Boone & Higgins, 1993; Higgins, 1991; Higgins & Boone, 1990; Higgins et al.; MacArthur, Graham, Schwartz, & Schafer, 1995; Moore-Hart, 1995), but only two (Boone & Higgins, 1993; Higgins, 1991) incorporated grade level as a factor. Their results show quite clearly that the effectiveness of hypermedia materials across grades K–3 is variable. For example, results for kindergarten students in Year 1 overwhelmingly favor the hypermedia group, whereas the scores of first graders in the two groups did not differ. It is difficult, however, to pick out any consistent patterns concerning differences in hypermedia effectiveness associated with grade level. These patterns will certainly vary depending on the characteristics of the intervention.
Educational group. Nearly every study surveyed here included students with learning disabilities or students classified as below average (Boone & Higgins, 1993; Higgins, 1991; Higgins & Boone, 1990; Higgins et al.; MacArthur, 1995; Moore-Hart, 1995; Tierney et al.). Four studies incorporated educational group as a factor in their data analysis (Boone & Higgins, 1993; Higgins, 1991; Higgins & Boone, 1990; Higgins et al.). Educational group (remedial, regular education, and learning disabled) did not interact in a significant way with the effectiveness of the hypermedia study guides investigated by Higgins and Boone (1990) and Higgins et al. In contrast, Higgins & Boone (1991) and Boone & Higgins (1993) reported numerous educational group differences in their 3-year longitudinal study of hypermedia reading materials. The researchers argue for a promising role for hypermedia as an instructional tool for students who have been classified as poor readers, but their data provide no evidence of a consistent advantage of the hypermedia intervention for any educational group. Moreover, it is not evident that they performed the proper statistical controls when making their educational group comparisons. Thus, it appears that educational group may be relevant to the effectiveness of hypermedia enhancements, but precisely how remains unclear.
Technology Tools Defined
For the purposes of this report, Technology Tools are defined as any technological device or program that affects the use of text, or of content that would otherwise be presented as text. Examples include word processors, spell checkers, word prediction devices, speech recognition, and software/computer programs. Many of these tools provide scaffolds for users, and many devices offer multiple technologies in one package.
The word processor is one of the most widely available technology tools today and, understandably, one that researchers are interested in evaluating as a learning tool. This discussion includes ten studies that evaluated the impact of word processor use on learning outcomes as well as two studies that evaluated students' ability to master the operation of a word processor.
Experimental studies have reported with good consistency a beneficial impact of writing or editing with a word processor on overall writing quality (Graham, 1988; MacArthur et al.; Rosenbluth & Reed, 1992; Williams & Williams, 2000) and fluency (positive impact reported by Crealock & Sitko, 1990; Graham, 1988; Kurth, 1987; MacArthur et al.; Outhred, 1987, 1989; Rosenbluth & Reed, 1992; Williams & Williams, 2000). There is also fairly clear evidence against the use of word processing to reduce errors of capitalization and punctuation (Dalton & Hannafin, 1987; Graham, 1988; MacArthur et al.). With respect to some outcomes, namely style (Kerchner & Kistinger, 1984), thematic maturity (Kerchner & Kistinger, 1984), word usage (Kurth, 1987), vocabulary knowledge (Kerchner & Kistinger, 1984), number of revisions (Kurth, 1987), quality of revisions (Kurth, 1987), composition structure (Dalton & Hannafin, 1987), and composition organization (Dalton & Hannafin, 1987), the evidence is too sparse to draw any conclusions.
Spelling is an additional area that has attracted the attention of research investigators. The evidence here, however, is contradictory. Kerchner & Kistinger (1984) and Dalton & Hannafin (1987) reported no effect of word processing on spelling ability, whereas Outhred (1987, 1989) and Kurth (1987) both reported a positive effect. These seeming contradictions may be partially explained by the fact that the word processor in Kurth's (1987) study included a spell checker, which might have exaggerated the effects of the word processor itself on spelling. In addition, Outhred failed to provide statistical evidence for a spelling improvement. Thus, there is no strong evidence to recommend the use of a word processor without a spell checker strictly to improve spelling.
The research findings reported by Casteel (1988–89), discussed in the electronic text section, are worth emphasizing again here because they underline the important fact that simply displaying text within a word processor does not significantly advance student learning. All of the positive findings discussed above were from word processing interventions that involved more than simply moving the display of information from the printed page to the computer. The successful implementation of word processing enabled students to manipulate text in new ways, and this difference is likely to be responsible for the beneficial outcomes.
Another important question to ask when evaluating word processing as a classroom enhancement is how readily students can learn to master a word processor's use. This question has been largely overlooked in the research literature. Exceptions are Geoffrion (1982–83) and MacArthur & Shneiderman (1986) who evaluated how well students with special needs (specifically students with hearing impairments and learning disabilities, respectively) are able to use a word processor. Their qualitative research revealed a high frequency of errors in the use of editing operations, irrespective of the duration of training (from 1 to 6 sessions), suggesting that students with special needs may require direct instruction on points of difficulty (Geoffrion, 1982–83; MacArthur & Shneiderman, 1986). However, neither study addressed the quality of student revisions, leaving open the question of whether students need to fully master editing commands to make beneficial revisions.
Although the word processing literature is quite positive regarding the usefulness of this tool in the classroom, some degree of caution is warranted as only two of these studies (Crealock & Sitko, 1990; Kerchner & Kistinger, 1984) support their conclusions with quantitative data and statistics. Moreover, all these studies used technology that is by now rather antiquated. As the technology continues to evolve, these questions about word processing must be newly addressed.
Factors Influencing Effectiveness
Grade level. The word processing literature is rather evenly split across middle elementary and upper grades. Five of the studies discussed above sampled students from grades 4–6. The five remaining studies sampled junior high and high school students. Thus, there is little information regarding the use of word processing by students in lower grades.
Educational group. Ten of the twelve studies we have discussed sampled students with special needs, specifically students with learning disabilities (Crealock & Sitko, 1990; Kerchner & Kistinger, 1984; MacArthur et al.; MacArthur & Shneiderman, 1986; Outhred, 1989), remedial students (Dalton & Hannafin, 1987; Rosenbluth & Reed, 1992), special education students (Outhred, 1987), English language learners (Williams & Williams, 2000) and students with hearing impairments (Geoffrion, 1982–83). This work provides converging evidence that word processing can be an effective tool for students with special needs. However, little can be concluded regarding the benefits of word processors for average-performing students. Only one study sampled exclusively general education, average-performing students without disabilities (Kurth, 1987).
There is some evidence to suggest that the benefits of word processing are unevenly distributed across the spectrum of student ability levels. Qualitative work by Outhred (1987, 1989) suggests that students with different writing and spelling abilities may benefit differently from word processing. Outhred compared the effects of composing with a word processor and composing by hand on fluency and spelling in a group of elementary-age readers with learning difficulties. For fluency, the editing medium made little difference for students with the highest reading ages, but students with the lowest reading ages seemed to benefit from word processor use (Outhred, 1987). Interestingly, the students who wrote long stories by hand were less fluent when using a word processor, whereas those who wrote short stories by hand were more fluent when using a word processor. There was also some evidence, although less consistent, for differential spelling outcomes. The 1987 study found that all students' spelling improved when using the word processor, but in the 1989 study, only the poorer spellers showed improvement.
Rosenbluth and Reed (1992) quantitatively compared outcomes between educational groups. Their findings indicate a differential impact on remedial and accelerated students, demonstrating significantly greater benefits of writing process-based instruction with the use of a computer for accelerated students. The question of differences in outcome for different educational groups is one that more studies should investigate.
Composition or strategy training. One way to potentially improve upon students' use of a word processor is to provide accompanying instruction in composition or editing strategy. Several studies have evaluated word processor use within the context of such instruction. Graham & MacArthur (1988) and MacArthur et al. investigated the effectiveness of interventions coupling composition strategy instruction to revision on the computer. Kerchner & Kistinger (1984) and Rosenbluth & Reed (1992) investigated word processor use embedded within a process approach to writing (where students learn spelling and other skills as the need arises). Crealock & Sitko (1990) evaluated composition training in combination with keyboard and word processor training. Although all of these studies report positive findings, none include the necessary comparison groups to draw conclusions regarding the usefulness of composition instruction beyond that of using word processing alone.
Another widely available and popular curriculum enhancement is the spell checker. This survey identified 8 research studies investigating the merits of spell checkers as a writing and editing support. Two of these studies evaluated the ability of various spell checkers to identify and offer corrections for spelling errors (MacArthur, Graham, Haynes, & De la Paz, 1996; Montgomery, 2001). Six investigated the impact of spell checker use on learning outcomes, specifically proofreading and editing success (Dalton, Winbury, & Morocco, 1990; Gerlach, Johnson, & Ouyang, 1991; Jinkerson, 1993; MacArthur et al.; McNaughton, Hughes, & Ofiesh, 1997; Zordell, 1990).
Research studies have made it clear that spell checkers suffer a number of flaws, primarily with respect to identifying and correcting the spelling errors of students with learning disabilities. Montgomery et al. (2001) analyzed misspellings in 199 writing samples from students with learning disabilities and then ran them through spell checkers. Although the 9 spell checkers evaluated had high error identification rates, they failed to produce the target word among their suggestions for an average of 47.5% of all misspellings. Likewise, MacArthur et al. (1996) report that the 10 spell checkers they analyzed offered incorrect suggestions an average of 47% of the time. Spell checker performance in these studies was especially poor when the misspellings were severe and/or had a low level of phonetic match to the target word, a frequent characteristic of misspellings by students with learning disabilities.
However, MacArthur et al. also report that when presented with purely incorrect alternatives, students selected one of those alternatives only 22% of the time. Thus, although spell checkers are deficient at offering correct alternatives for misspellings of middle/elementary students with learning disabilities, this may not be a major problem for students.
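To illustrate why severe misspellings defeat suggestion lists, the sketch below implements a toy suggestion generator of the general kind these studies evaluated: candidate words are ranked by edit distance to the misspelling, so a target word that shares little orthography with the student's attempt may never appear among the suggestions. This is a minimal sketch, not any vendor's algorithm; the word list, distance threshold, and function names are hypothetical, and commercial spell checkers add phonetic codes, frequency weighting, and much larger dictionaries.

```python
# Illustrative sketch only: a toy suggestion generator that ranks dictionary
# words by edit distance to a misspelling. The dictionary and threshold are
# hypothetical, chosen to show why severe, low-phonetic-match misspellings
# can leave the target word out of the suggestion list entirely.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance computed with dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,         # deletion
                            curr[j - 1] + 1,     # insertion
                            prev[j - 1] + cost)) # substitution
        prev = curr
    return prev[-1]

def suggest(misspelling: str, dictionary: list[str], max_distance: int = 2) -> list[str]:
    """Return dictionary words within max_distance edits, closest first."""
    scored = [(edit_distance(misspelling.lower(), w), w) for w in dictionary]
    return [w for d, w in sorted(scored) if d <= max_distance]

if __name__ == "__main__":
    words = ["because", "business", "beautiful", "before", "biscuit"]
    # A mild misspelling keeps the target word in the suggestion list ...
    print(suggest("becuase", words))   # ['because']
    # ... but a severe misspelling with little orthographic overlap does not.
    print(suggest("bekoz", words))     # [] -- 'because' is 5 edits away
```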
Studies investigating the effects of spell checker use on learning outcomes support the idea that, in spite of their flaws, spell checkers are beneficial tools for students, including those with disabilities. These studies demonstrate an increase in the number of identified misspellings and the number of corrected misspellings, and a reduction in spelling error rates, when using a spell checker versus proofreading or editing off the computer. Improvements were reported after as little as one day spent using a spell checker.
The overall evidence is, however, less convincing than it may seem due to pervasive methodological weaknesses in this literature. For example, Gerlach et al. (1991) do not include a control group or condition with which to compare the results for students working with spell checkers. A more pervasive problem in the literature is the lack of statistical validation. Only Jinkerson and Baggett (1993) demonstrated the statistical significance of their findings. Four of the remaining studies provided quantitative data without statistics (Gerlach et al.; MacArthur et al.; McNaughton et al.; Zordell, 1990), and the sixth study was exploratory and provided only qualitative evidence for two students (Dalton et al.). The studies by McNaughton et al. (1997) and Zordell (1990), although not described as exploratory, included only a small number of students: 3 and 4, respectively.
Factors Influencing Effectiveness
Grade level and educational group. Spell checkers appear to be beneficial tools for students across a range of age and educational groups. Positive results were reported for students in Grades 3–9, 10, and 12 (our survey did not locate peer-reviewed work addressing other elementary and high school grades), including students of average ability (Gerlach et al.; Jinkerson, 1993) and students with learning disabilities (Dalton et al.; MacArthur et al.; McNaughton et al.). MacArthur et al. directly refuted the possibility that struggling spellers cannot use a spell checker as effectively as other students. They found no relationship between spelling ability and the number of errors corrected using a spell checker. Interestingly, they did find a correspondence between spelling ability and the number of misspelled words found: low spelling ability was correlated with a higher percentage of misspelled words found. Thus, low spelling ability does not appear to undermine successful use of a spell checker.
Method of evaluation. The literature establishes that using a spell checker can improve the identification and correction of misspellings while students proofread and edit on a computer. Does spell checker use lead to generalized spelling improvement? In Jinkerson & Baggett's (1993) study, students who had edited with a spell checker and students who had edited by hand did not score differently from one another on an oral spelling test administered after the treatment period. However, these scores are representative of only the students' performance at the conclusion of their intervention—not their improvement over its course. Thus, the possibility cannot be ruled out that the spell checker group made generalized spelling gains. Also, extending the duration of the intervention (which was only 1 session) would be expected to facilitate a more profound impact.
A related point, also involving generalizability, is raised by the findings of McNaughton et al. When students were tested with generic proofreading materials, spell checker use was found to have a positive impact. However, this improvement did not fully generalize to the students' own writing materials. More carefully delineating the contexts in which spell checkers are beneficial would be a useful step forward.
Strategy training. Embedding spell checker use within a training program is one potential way to improve upon its effectiveness as a learning tool. McNaughton et al. directly investigated this possibility by evaluating the embedding of spell checker use within a 5-step proofreading strategy training program. Although the combination proved effective, McNaughton et al. did not include a spell-checker-only control group. Therefore, it is impossible to draw conclusions about any added benefit of the training.
Word prediction software is another tool that, when combined with a word processor, can support student writing. Our survey identified only 3 peer-reviewed studies evaluating word prediction software. These studies provide some intriguing, although preliminary, findings. A brief illustrative sketch of how such a tool operates follows below.
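As a point of reference for readers unfamiliar with the tool, word prediction software typically offers a short list of likely completions as the student types, which the student can select instead of spelling the word out. The sketch below is a minimal, hypothetical illustration using prefix matching over a frequency-ranked vocabulary; the class name, sample text, and ranking rule are assumptions for illustration only, and the commercial products used in these studies add grammar, recency, and sometimes phonetic models.

```python
# Illustrative sketch only: a minimal prefix-based word predictor of the
# general kind described above. Vocabulary and frequencies are hypothetical.

from collections import Counter

class WordPredictor:
    def __init__(self, corpus: str):
        # Build a word-frequency table from a training corpus.
        self.freq = Counter(corpus.lower().split())

    def predict(self, prefix: str, n: int = 3) -> list[str]:
        """Return up to n vocabulary words starting with prefix, most frequent first."""
        prefix = prefix.lower()
        candidates = [w for w in self.freq if w.startswith(prefix)]
        return sorted(candidates, key=lambda w: -self.freq[w])[:n]

if __name__ == "__main__":
    sample_text = "the weather was wonderful and the weekend was warm and wet"
    predictor = WordPredictor(sample_text)
    # As the student types "w", the tool offers completions to select
    # rather than requiring the full word to be spelled out.
    print(predictor.predict("w"))    # e.g. ['was', 'weather', 'wonderful']
```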
Zordell (1990) reported a variety of improvements in 4 special education students' writing following a 2-month period spent composing with a word processor with spell checker and word prediction software. The improvements included a drop in misspellings, an increase in word variety and correct use of word endings, and an improvement in attitude towards writing. Without a control group, however, it is unclear whether these improvements were due to the intervention or simply normal progress anticipated to occur over the course of a semester.
MacArthur (1998) compared the impact of word processing to word processing with speech synthesis and word prediction in a group of five 9th- and 10th-grade students with learning disabilities. Students spent 4–9 sessions with each set of writing tools. Students' writing during the word prediction/speech synthesis segment contained an increased proportion of legible and correctly spelled words. The number of legible word sequences and the total number of words did not differ, and differences in composing rate and word recognition were inconsistent. Unfortunately, without a control group, it cannot be determined whether these improvements were a result of the speech synthesis or the word prediction.
Von Tetzchner, Rogne & Lilleeng (1997) report a case-study of a deaf student who was functionally illiterate on entering the 5th grade. A six-year intervention that combined a process approach to language, Norwegian sign language, and word processing with word prediction software led to dramatic improvements in the student's reading and writing skills (von Tetzchner, Rogne, & Lilleeng, 1997). The authors suggest that word prediction may have played an important role in this progress by ensuring appropriate levels of challenge and assisting the development of orthographic strategies.
These three sets of findings provide some promising evidence to support particular benefits of word prediction software for students with special needs. Strong conclusions cannot be made from such limited samples of students and without additional control groups, but this area deserves further study.
Speech recognition enables students to use their voice to write on the computer, a capability of potentially instrumental use to a variety of students, including those with learning disabilities or physical disabilities that make typing difficult. In the limited research literature, there is some support for the idea that speech recognition can improve student outcomes in reading and writing.
A qualitative study by Wetzel examined one 6th-grade student's writing after 12 sessions composing with the use of speech recognition software (Wetzel, 1996). Wetzel's observations suggest improvement in the student's writing, but Wetzel declines to draw conclusions about its quality, and by extension, the impact of speech recognition on writing performance. Clearly, this study requires replication with a larger sample and quantitative methodology before such conclusions can be made. Stronger support for a beneficial impact of speech recognition on student learning comes from Raskind and Higgins (1999). These researchers compared students' word recognition, spelling, reading comprehension, and phonological deletion after they spent 16 sessions performing writing exercises with speech recognition software or an equivalent amount of time taking a keyboarding class. All 4 measures exhibited significant differences favoring the speech recognition condition (Raskind, 1999). These findings raise the intriguing possibility that speech recognition can have beneficial effects on reading as well as writing. However, Raskind and Higgins' failure to rule out the possibility of preexisting group differences on the experimental measures is a significant methodological flaw that casts some uncertainty on their findings.
Clearly, research support for the hypothesis that speech recognition can improve student outcomes in the areas of reading and writing is very limited at this time. Additional research is needed to substantiate these promising findings.
Factors Influencing Effectiveness
Educational group/grade level. The three studies identified in this survey sampled students with learning disabilities, age 11–14 years. The paucity of research in this area makes it impossible to draw conclusions about the potential impact of educational group or grade level on speech recognition's effectiveness.
Type of speech recognition. Today there are two major types of speech recognition systems available. Discrete speech recognition systems, the first to have been developed, require students to pause between words in order for their speech to be recognized. Continuous speech recognition systems, developed later, do not require speakers to pause between words. Naturally, researchers are interested in possible differences in the effectiveness of these two types of systems.
Higgins and Raskind (2000) compared the impact of reading with discrete speech recognition, continuous speech recognition, and an equivalent amount of time (sixteen 50-minute sessions) spent in a keyboarding class on the writing of 14-year-old students. Consistent with Raskind and Higgins (1999), the students composing with discrete speech recognition made significantly greater gains than the control group on word recognition, spelling, reading comprehension, and phonological processing. Students who worked with the assistance of continuous speech recognition also made significant gains relative to the control group; however, these gains were confined to word recognition and reading comprehension.
The results of this one study suggest that discrete and continuous speech recognition are both beneficial enhancements for developing reading skills. Although it also suggests that the two types may differ in their effectiveness, this needs to be corroborated by additional research.
Software and Computer Programs
The most actively researched technology tool is clearly the software/computer program. This survey identified 37 studies evaluating software and computer programs. Two primary curriculum areas have been investigated: reading and mathematics, with a handful of additional studies investigating applications in other subject areas. To simplify the discussion, we will address the research in each of these curriculum areas separately.
Our survey identified 9 studies evaluating mathematics software and computer programs. All 9 of these studies sampled populations composed partially or entirely of students with learning disabilities, handicaps, or below grade level mathematics performance. The research is not only sizeable but also quite solid. As a whole, it suggests that computer-assisted mathematics instruction can be as effective as traditional methods of instruction.
A 3-day drill and practice software intervention and a 7-day tutorial-based software intervention had little impact on one high school student with a learning disability (Howell, 1987). Of course, the single-subject design and the brevity of the interventions could both have undercut more positive results. A controlled experimental study with 50 subjects conducted by Bahr & Rieth (1989) found that neither a drill and practice component nor an instructional game component of the commercially available Math Blaster software program significantly improved student performance on a timed written test of decimal multiplication. However, the intervention in this case, as well, was quite short, at 9 sessions (Bahr & Rieth, 1989).
Two studies with longer interventions report more positive findings. Nine students who for 18 days were instructed on multiplication and division story problems by a computer program delivering direct explicit strategy instruction (Gleason, Carnine, & Boriero, 1990) showed mathematics gains equivalent to those of ten peers receiving otherwise identical teacher-delivered instruction. In addition, twenty-seven students whose conventional remedial math instruction was replaced for 9 months by two computer programs improved standardized arithmetic scores to the same degree as their peers (McDermott & Watkins, 1983). These two studies provide convincing evidence that mathematics software and computer programs can be as effective as traditional mathematics instruction.
A few studies suggest that computer and software programs can even improve upon the outcomes of traditional instruction. Trifiletti et al. report that students who spent 12 months receiving math instruction with SPARK-80 software instead of regular resource room math instruction learned significantly more math skills and made significantly greater gains on the Key Math Diagnostic Arithmetic Test. SPARK-80 software teaches each of 112 basic mathematics skills using a combination of tutorial instruction, drill instruction, skill game, assessment, and word problems (Trifiletti, Frith, & Armstrong, 1984). Further support comes from Chiang (1986), who demonstrated improved multiplication scores following a 7- to 12-day intervention involving drill and practice and conceptual software programs teaching multiplication facts. Chiang's evidence is weaker, however, due to the 6-student sample, the absence of statistical validation, and the lack of a control group to show greater effectiveness than a more traditional approach. Podell et al. present positive evidence as well, showing an advantage over traditional instruction of a drill and practice program for developing subtraction speed and, for some students (see the Educational Group section below), addition speed. At the same time, addition and subtraction accuracy were unaffected by the intervention (Podell, Tournaki-Rein, & Lin, 1992).
The research literature suggests that mathematical computer and software programs are generally beneficial. At the same time, there is an indication that these instructional tools vary in their effectiveness or at least their effective conditions. Identifying features and conditions that are most favorable is a useful direction for research.
The overwhelming majority of the research into computer programs and software focuses on reading instruction as the curriculum application. The research is plentiful, numbering 26 studies, and as a whole speaks greatly in favor of using software and computer programs as part of reading instruction. Twenty-one studies demonstrated a positive impact of software and computer programs on reading skills. Of these, nine established greater effectiveness than control interventions involving the use of the computer (Barker & Torgesen, 1995; Das-Smaal, Klapwijk, & van der Leij, 1996; Hurford & Sanders, 1990; Jones, Torgensen, & Sexton, 1987; Mitchell & Fox, 2001; Torgesen, Waters, Cohen, & Torgesen, 1988; van den Bosch, van Bon, & Schreuder, 1995; Wise & Olson, 1995; Wise, Ring, & Olson, 2000), three established effectiveness equal to traditional methods (Mitchell & Fox, 2001; Nicolson, Fawcett, & Nicolson, 2000; Reitsma & Wesseling, 1998), and 7 demonstrated effectiveness superior to that of traditional approaches (Boone, Higgins, Notari, & Stump, 1996; Erdner, Guy, & Bush, 1998; Foster, Erickson, Foster, Brinkman, & Torgesen, 1994; Lin, Podell, & Rein, 1991; Olson, Wise, Ring, & Johnson, 1997; Reitsma & Wesseling, 1998; Wise, Ring, & Olson, 1999). Nine others demonstrated significant improvements over baseline performance or a no-intervention control group (Frederiksen, 1985; Heimann, Nelson, Tjus, & Gillberg, 1995; Holt-Ochsner, 1992; Hurford, 1990; Lynch, Fawcett, & Nicolson, 2000; Malouf, 1987-88; Marston, Deno, Kim, Diment, & Rogers, 1995; van den Bosch et al.; Wentink, van Bon, & Schreuder, 1997), and one demonstrated improvement equivalent to that of a no-intervention control group (Wentink et al.). In fact, in only three studies did use of a computer or software program fail to improve certain targeted reading skills (Lynch et al.; van den Bosch et al.; Wentink et al.), and in only one did this use produce an outcome inferior to that of traditional methods (Lin et al.: vocabulary).
Favorable outcomes have been reported for many facets of reading instruction, including the five highlighted by the National Reading Panel: phonemic awareness (Barker & Torgesen, 1995; Foster et al.; Heimann et al.; Hurford, 1990; Hurford & Sanders, 1990; Mitchell & Fox, 2001; Olson et al.; Reitsma & Wesseling, 1998; Wise & Olson, 1995; Wise et al.), phonics/word recognition (Barker & Torgesen, 1995; Das-Smaal et al.; Erdner et al.; Holt-Ochsner, 1992; Jones et al.; Lynch et al.; Marston et al.; Olson et al.; Wentink et al.; Wise & Olson, 1995; Wise et al.), fluency (Frederiksen, 1985; Holt-Ochsner, 1992; Jones et al.; Torgesen et al.; van den Bosch et al.; Wentink et al.), vocabulary (Erdner et al.; Lin et al.), and comprehension (Erdner et al.; Holt-Ochsner, 1992; Lynch et al.; Wise & Olson, 1995; Wise et al.). The strongest evidence, however, supports applications for phonemic awareness, phonics/word recognition, and fluency instruction, at least in part because fewer research studies have been conducted in the other areas.
For the most part, the evidence presented in these studies is quite strong. There are technical weaknesses that appear here and there, such as failing to randomize subject assignment (Erdner et al.; Nicolson et al.; Wentink et al.). However, in one respect nearly all of the research is lacking: establishment of the duration of the effects. Given the very brief interventions used in many of these studies, it is appropriate to ask whether they have more than a short-term impact. Another weakness within this body of research relates to the selection of control groups. To establish that computer instruction is as or more effective than traditional methods and that the effect isn't merely due to the fleeting novelty of the medium, it is necessary to include at least two control groups: one taught by traditional methods and one given some time and/or instruction on the computer (but for purposes outside the targeted area of reading instruction). Very few studies (Mitchell & Fox, 2001; Nicolson et al.; Wise & Olson, 1995) satisfy these criteria.
Other: Spelling, Writing, and Geography
Spelling and writing have received much less attention than reading and math when it comes to investigating applications of computer and software programs. Chambless & Chambless (1994) conducted a very large 3-year study evaluating the impact of supplementing reading and writing instruction with computer reading and writing programs. Results suggest this kind of supplementation can significantly improve reading and writing achievement. MacArthur et al. (1990) provided evidence to support the use of a computer program to practice spelling. Students practicing spelling on the computer rather than off the computer spent significantly more time engaged and scored significantly higher on spelling tests (MacArthur, Haynes, Malouf, Harris, & Owings, 1990). Nicolson et al. (2000) reported that students working with a computer-based, multimedia literacy support program made gains in spelling performance equivalent to those working with a similar literacy program off the computer. Although Lynch et al. (2000) failed to demonstrate spelling improvement following use of a computerized IEP implementation program, they propose that this may be because of an ineffective spelling initiative that co-occurred with the intervention. Thus, studies suggest it may be worthwhile to further investigate the use of computer programs and software for writing and spelling.
Horton, Lovitt, & Slocum (1988) investigated the effectiveness of a tutorial computer program that teaches geography. Students working with the program made significantly greater gains in geography knowledge compared to peers who did offline work using an atlas (Horton, Lovitt, & Slocum, 1988). However, the study was very brief and did not address the possibility of novelty effects or the question of maintenance of learning effects.
Although a few studies within this research base suffer significant design flaws, there is strong evidence to support the effectiveness of computer and software programs as learning tools, particularly for mathematics, fluency, phonemic awareness, and phonics/word recognition. In the following sections, we discuss potential factors influencing this effectiveness.
Factors Influencing Effectiveness
Duration of intervention. It is possible to argue based on the literature that brief interventions of approximately 3–9 days (Bahr & Rieth, 1989; Howell, 1987) are less effective than longer ones. However, lengthy interventions are not always successful (Nicolson et al.), and significantly improved outcomes have also been reported after interventions lasting as few as two sessions (Chiang, 1986; Torgesen et al.) and even 5–8 hours (Foster et al.). Clearly, although intervention duration is important, it is not the sole determinant of outcome.
Drill and practice versus tutorial. Software and computer programs vary in terms of the relative quantities of instruction and practice that they provide. Our sample includes research studies of so-called "drill and practice" programs (Frederiksen, 1985; Howell, 1987; Jones et al.; Torgesen et al.), purely instructional programs (Collins, Carnine, & Gersten, 1987; Foster et al.; Howell, 1987), and programs that share both features (Chiang, 1986; Trifiletti et al.). There is research support for all three types of programs, but they have been directly compared in only one study. Bahr & Rieth (1989) found no difference between the effectiveness of drill and practice and instructional game components of a commercial mathematics software program. However, because neither component improved performance, there was a problematic floor effect. Thus, additional research is needed to address the effectiveness of instructional versus drill and practice programs.
Specific program features. As technology and our adeptness with it continue to evolve, computer and software programs become increasingly elaborate. Determining which of the many possible features are most effective at improving learning outcomes is an important task. A few groups have begun to pursue it. Axelrod, McGregor, & Sherman (1987), for example, investigated the impact of different reinforcement schedules in the context of mathematics software. Their very small study, limited to 4 students, found no difference in outcomes between working with no reinforcement and working with scheduled reinforcement (Axelrod, McGregor, Sherman, & Hamlet, 1987).
Rieber (1990) focused on the types of illustrations and the forms of practice offered within computerized lessons. Results of their brief, 1-session study indicated that behavioral practice (multiple-choice questions after each lesson) and cognitive practice (a simulation activity) were equally effective for students (Rieber, 1990). However, students seemed to learn better when given animated as opposed to static graphics.
Feedback is a variable that has the potential to greatly influence student learning. Computer and software programs can extend the teacher's reach by enabling the provision of individualized feedback on a classwide basis. But what type of feedback is best? Collins et al. compared two different forms of feedback in the context of a reasoning skills computer program. Students trained on a program offering elaborative feedback performed better than those trained on a version offering only basic feedback (Collins et al.).
Also of interest is the value of introducing a game element to learning on the computer. Several researchers have investigated the impact of game elements within software and computer programs, and the results of their studies are somewhat complex. Christensen and Gerber's (1990) findings suggest that a game format may be distracting and counterproductive for students with learning disabilities (see Educational Group section below). Malouf (1987–88) also found some negative quality to a game format—students with learning disabilities working with a drill and practice vocabulary game performed less accurately on a word definition matching test than did students who practiced using a non-game format vocabulary program. However, the game version of the program appeared more effective at developing continuing motivation to learn these skills.
More of this kind of research is needed to squarely address the features that may impact the success of computer and software programs in elevating learning outcomes. Present findings suggest that different students may benefit from different features.
Educational group. Nearly all of the studies we identified concentrated on students with special needs (students with disabilities or handicaps, remedial students, below-average students, special education students, and students with autism). The studies by Trifiletti et al. and Jones et al. and the reading literature as a whole (see above) provide strong evidence that students with special needs can use software and computer programs effectively and to their benefit.
At the same time, Jones et al.'s findings suggest that these enhancements may not succeed in normalizing student performance to that of average-performing students. Moreover, performance comparisons of students from different educational groups suggest important differences in how they respond to software and computer programs. Christensen & Gerber (1990), for example, present evidence that students with disabilities may benefit to a lesser degree from a game format than do students without disabilities and may even find it distracting. In addition, Podell et al. found that within a group of students trained via a drill and practice mathematics program, those with mild mental handicaps were slower to reach the speed criterion on addition problems. Boone et al. found that low-ability kindergartners responded to a computer intervention in the opposite manner to medium- and high-ability kindergartners, developing better letter identification when taught by traditional teacher lessons versus a multimedia computer version of those lessons.
Less information is available regarding the use of computer and software programs by students outside the special needs population. However, findings reported by Foster et al., Reitsma & Wesseling (1998), and Chambless & Chambless (1994) suggest that these enhancements can also be a powerful tool for such students.
Grade level. The studies included in this review span grade levels from kindergarten through high school and support positive outcomes with preschool children and students in grades 4 through 12. The reading research focuses more intently on the kindergarten and early elementary grades. Many studies sampled students from multiple grade levels. Although the possibility of grade level-dependent differences in the effectiveness of computer and software programs has not been directly addressed in the math literature, a few reading studies have addressed the question (Hurford, 1990; Mitchell & Fox, 2001; Nicolson et al.; Wise et al.). All but one of these studies (Mitchell & Fox, 2001, sampling K–1 students) found evidence for a difference in effectiveness across grade levels. Two studies (Hurford, 1990; Nicolson et al.) found a greater benefit of computer training for older students (3rd graders versus 1st or 2nd graders), and one appeared to find a greater benefit for younger students (Wise et al.), although the pattern is hard to pin down because Wise et al. did not detail the nature of the grade level by treatment interaction. These data are difficult to interpret due to differences among the studies' designs, but they seem to recommend a closer look at the influence of grade level.
Presence or absence of teacher-based instruction. Most of the studies discussed here investigated a stand-alone program of software-based instruction. Their findings are generally positive (an exception is McDermott & Watkins, 1983), and a few studies even suggest a benefit beyond that of traditional instruction. Of interest, however, is how effective it is to supplement rather than replace a regular program with the use of software and computer programs. Studies by Erdner et al. and Howell et al. both suggest that a combined approach that supplements the normal reading program with the use of software and computer programs delivers a much more substantial benefit than either component alone. Corroboration of these findings could be very consequential in determining how best to integrate the use of software and computer programs into regular classroom instruction.
This report is based in part on an earlier version conducted by Roxanne Ruzic and Kathy O'Connell, National Center on Accessing the General Curriculum. We would like to acknowledge the assistance of research assistant Melissa Mengel in collecting the research articles.
Ruzic, R. & O'Connell, K. (2001). An overview: enhancements literature review.
Links to Learn More about Text Transformations
This is the web page of AT&T Labs that describes the research conducted on text-to-speech (TTS) technology. The Next-Generation TTS, introduced in 1998, converts English text into audible speech and continues to improve dramatically in the quality and naturalness of its voices. The web site has interactive demonstrations in which users can enter text and select one of five voices. The TTS is for demonstration purposes only.
This web site has the AT&T TTS natural voices demonstrations similar to the previous listing. This version is updated to include three new voices. The languages that this engine supports include: U. S. English, German, Latin American Spanish, U.K. English, and Parisian French.
Hyper Text and Hypermedia
This technical web site provides information on adaptive hypertext and hypermedia. It includes links to conferences, journals, projects, and people in the field of multimedia.
This homepage of a University of Saskatchewan instructor gives a multitude of guidelines on web design for instruction. It includes links to teacher resources, site and page design, and multimedia.
This web site by Vanderbilt University walks the viewer through an on-line web design tutorial. It also provides examples of hypertext and hypermedia, as well as tips for creating an effective web site.
"Writing HTML" was created to help teachers create learning resources that access information on the Internet. On this site, the viewer will be writing a lesson called Volcano Web. The tutorial, however, may be used by anyone who wants to create web pages.
Links to resources, history of hypertext, web design, and navigation structures.
Multimedia Supported Text
National Center for Accessible Media's web site on how to create captions for rich text; it provides links out to different web sites as well. This page offers a development strategy split into two parts: Part 1: Creating Captions, for those starting from scratch, and Part 2: Adding Captions to Media, for those who already have a timed-text caption file.
Video and Videodisc Instruction
A web site designed for teachers that includes an article about the implications of DVD technology on education. This site also provides helpful DVD titles in numerous subject areas such as science, social studies, art/humanities, language arts and math.
This web site contains an article addressing video instruction as a constructivist tool. It specifically discusses The Adventures of Jasper Woodbury, a series of six videodiscs from the Optical Data Corporation. Each disc contains a story that includes embedded mathematical data and ends with a problem that students must use the data to solve. It includes a link to an example of the Jasper Series.
This web site gives the viewer an opportunity to experience the Jasper Series, including story and solution summaries for four different topics.
The Least Tern web site provides “training and support for the integration of technology into the curriculum” by offering online tutorials and classes for teachers on various software applications (from the web site).
StudyWorks Online is a free learning site delivering original approaches that help students develop an understanding of math and science concepts usually taught in grades 7 to 12. StudyWorks Online gives students, parents, and teachers access to high-quality content, interactive activities, real-world examples, diagnostic testing, monitored learning forums, personalized guidance, and software packages.
This web site lists the goals of integrating technology into all subjects of an elementary school classroom. The authors address how technology can be used to improve learning by listing the desirable software and on-line resources for each subject area.
Free demo videos of science for middle and high school age students; videos are available for purchase.
Dositey.com offers a collection of interactive educational modules and printable worksheets for grades K–8. The lessons and games are predominately focused in the subjects of math and the language arts.
This web site provides online tutorials on anatomy of the human body. It provides detailed medical information and gives the viewer a great overview of different parts of the body.
"Donner Online" is a type of web-based activity in which you learn about a topic by collecting information, images, and insights from the Internet, and then you "paste" them into a multimedia Scrapbook (a HyperStudio stack or a Web page) to share your learning with others. Includes a link to a Hypertext dictionary.
This web site is managed by the University of Alberta. This web site provides a series of internet links to several online word processing web sites and tutorials. Tutorials are provided for the software programs Microsoft Word, Word Perfect and Apple/Claris Works. Some of these tutorials are Windows compatible, others are Mac compatible and some are compatible with both platforms.
The Bay Con Group offers various free, online software tutorials. This page offers tips and tools for using Microsoft Word 97. The tutorial is comprehensive and provides users with basic step-by-step instructions for beginning Microsoft Word. The Bay Con Group has made this web site easily navigable for first time users.
Spellcheck.net is a site providing a free spell check program. Users can type, or paste in a word or multiple paragraphs (up to 5,000 characters) for the spell checker to process and correct. Each misspelled word is highlighted and alternative words are provided.
This web site is for the National Center to Improve Practice in Special Education through Technology, Media and Materials (NCIP). The general features and applications of word prediction software are explained through the use of case stories. The profile of each child varies from case story to case story, but the examples can help parents and teachers understand how to use word prediction to assist their children or students. Links to research articles that have found benefits to word prediction software are listed along with descriptions of various word prediction software programs that are currently available.
This is a link to an article written by Charles A. MacArthur in Teaching Exceptional Children July/August 1998 about a third grade student with reading and writing learning disabilities. MacArthur presents a case story in which the student participated in a study of word prediction and speech synthesis. This computer program enabled the student to expand his writing and communication abilities while improving his spelling.
The Alliance for Technology Access is an organization that connects children and adults with disabilities to technology tools. This web site provides the reader with information about the Alliance, assistive technology and augmentative communication. Additionally, this ATA site provides links to a number of sources for word prediction including software programs such as Outloud, Intellitalk and Read and Write.
Nuance bills itself as a “provider of speech and imaging solutions for businesses and consumers” and sells text-to-speech and voice-recognition software products, with a specialization in niche programs for the healthcare industry.
The CTD Resource Network, Inc.
This web site is the CTD Resource Network's frequently asked questions page regarding speech recognition software and usage. The site provides information about available tools and products, with manufacturer descriptions and user comments. There are multiple links to sites with specific information about speech recognition software.
Mississippi State University
This article contains research findings of Mississippi State University’s Internet-Accessible Speech Recognition Technology Project.
Abelson, A. G., & Petersen, M. (1983). Efficacy of "Talking Books" for a group of reading disabled boys. Perceptual and Motor Skills, 57, 567-570.
Axelrod, S., McGregor, G., Sherman, J., & Hamlet, C. (1987). Effects of video games as reinforcers for computerized addition performance. Journal of Special Education Technology, 9(1), 1-8.
Bahr, C. M., & Rieth, H. J. (1989). The effects of instructional computer games and drill and practice software on learning disabled students' mathematics achievement. Computers in the Schools, 6(3/4), 87-101.
Bain, A., Houghton, S., Sah, F. B., & Carroll, A. (1992). An evaluation of the application of interactive video for teaching social problem solving to early adolescents. Journal of Computer-based Instruction, 19(3), 92-99.
Barker, T. A., & Torgesen, J. K. (1995). An evaluation of computer-assisted instruction in phonological awareness with below average readers. Journal of Educational Computing Research, 13(1), 89-103.
Bonk, C. J., Hay, K. E., & Fischler, R. B. (1996). Five key resources for an electronic community of elementary student weather forecasters. Journal of Computing in Childhood Education, 7(1/2), 93-118.
Boone, R., & Higgins, K. (1993). Hypermedia basal readers: Three years of school-based research. Journal of Special Education Technology, 7(2), 86-106.
Boone, R., Higgins, K., Notari, A., & Stump, C. S. (1996). Hypermedia pre-reading lessons: learner-centered software for kindergarten. Journal of Computing in Childhood Education, 7(1/2), 39-69.
Borgh, K., & Dickson, W. P. (1992). The effects on children's writing of adding speech synthesis to a word processor. Journal of Research on Computing in Education, 24(4), 533-544.
Bottge, B. (1999). Effects of contextualized math instruction on problem solving of average and below-average achieving students. Journal of Special Education, 33(2), 81-92.
Bottge, B. A., & Hasselbring, T. S. (1993). A comparison of two approaches for teaching complex, authentic mathematics problems to adolescents in remedial math classes. Exceptional Children, 59(6), 556-566.
Casteel, C. A. (1988-89). Effects of chunked reading among learning disabled students: An experimental comparison of computer and traditional chunked passages. Journal of Educational Technology Systems, 17(2), 115-121.
Chambless, J. R., & Chambless, M. S. (1994). The Impact of Instructional Technology on Reading/Writing Skills of 2nd Grade Students. Reading Improvement, 31(3), 151-155.
Chang, L. L., & Osguthorpe, R. T. (1990). The effects of computerized picture-word processing on kindergartners' language development. Journal of Research in Childhood Education, 5(1), 73-84.
Chiang, B. (1986). Initial learning and transfer effects of microcomputer drills on LD students' multiplication skills. Learning Disability Quarterly, 9, 118-123.
Collins, M., Carnine, D., & Gersten, R. (1987). Elaborated corrective feedback and the acquisition of reasoning skills: a study of computer-assisted instruction. Exceptional Children, 54(3), 254-262.
Crealock, C., & Sitko, M. (1990). Comparison between computer and handwriting technologies in writing training with learning disabled students. International Journal of Special Education, 5(2), 173-183.
Dalton, B., Winbury, N. E., & Morocco, C. C. (1990). "If you could just push a button": two fourth grade boys with learning disabilities learn to use a computer spelling checker. Journal of Special Education Technology, X(4), 177-191.
Dalton, D. W., & Hannafin, M. J. (1987). The effects of word processing on written composition. Journal of Educational Research, 80(6), 338-342.
Das-Smaal, E. A., Klapwijk, M. J. G., & van der Leij, A. (1996). Training of perceptual unit processing in children with a reading disability. Cognition and Instruction, 14(2), 221-250.
Davidson, J., Coles, D., Noyes, P., & Terrell, C. (1991). Using computer-delivered natural speech to assist in the teaching of reading. British Journal of Educational Technology, 22(2), 110-118.
Davidson, J., Elcock, J., & Noyes, P. (1996). A preliminary study of the effect of computer-assisted practice on reading attainment. Journal of Research in Reading, 19(2), 102-110.
Dawson, L., Venn, M., & Gunter, P. L. (2000). The effects of teacher versus computer reading models. Behavioral Disorders, 25(2), 105-113.
Elbro, C., Rasmussen, I., & Spelling, B. (1996). Teaching reading to disabled readers with language disorders: a controlled evaluation of synthetic speech feedback. Scandinavian Journal of Psychology, 37, 140-155.
Elkind, J., Cohen, K., & Murray, C. (1993). Using computer-based readers to improve reading comprehension of students with dyslexia. Annals of Dyslexia, 43, 238-259.
Erdner, R. A., Guy, R. F., & Bush, A. (1998). The impact of a year of computer assisted instruction on the development of first grade learning skills. Journal of Educational Computing Research, 18(4), 369-386.
Farmer, M. E., Klein, R., & Bryson, S. E. (1992). Computer-assisted reading: effects of whole-word feedback on fluency and comprehension in readers with severe disabilities. Remedial and Special Education, 13(2), 50-60.
Feldmann, S. C., & Fish, M. C. (1991). Use of computer-mediated reading supports to enhance reading comprehension of high school students. Journal of Educational Computing Research, 7(1), 25-36.
Foster, K. C., Erickson, G. C., Foster, D. F., Brinkman, D., & Torgesen, J. K. (1994). Computer administered instruction in phonological awareness: evaluation of the DaisyQuest program. Journal of Research and Development in Education, 27(2), 126-137.
Frederiksen, J. R., Warren, B., & Roseberry, A. (1985). A componential approach to training reading skills: Part 1. Perceptual units training. Cognition and Instruction, 2, 91-130.
Friedman, S. G. & Hofmeister A. M. (1984). Matching technology to content and learners: A case study. Exceptional Children, 51, 130-134.
Geoffrion, L. D. (1982-83). The feasibility of word processing for students with writing handicaps. Journal of Educational Technology Systems, 11(3), 239-250.
Gerlach, G. J., Johnson, J. R., & Ouyang, R. (1991). Using an electronic speller to correct misspelled words and verify correctly spelled words. Reading Improvement, 28(3), 188-194.
Glaser, C. W., Rieth, H. J., Kinzer, C. K., Colburn, L. K., & Peter, J. (2000). A description of the impact of multimedia anchored instruction on classroom interactions. Journal of Special Education Technology, 14(2), 27-43.
Gleason, M., Carnine, D., & Boriero, D. (1990). Improving CAI effectiveness with attention to instructional design in teaching story problems to mildly handicapped students. Journal of Special Education Technology, 10(3), 129-136.
Graham, S. & MacArthur, C. (1988). Improving learning disabled students' skills at revising essays produced on a word processor: Self-instructional strategy training. Journal of Special Education, 22(2), 133-152.
Hasselbring, T. S., Fleenor, K., Sherwood, R., Griffith, D., Bransford, J., & Goin, L. (1987-88). An evaluation of a level-one instructional videodisc program. Journal of Educational Technology, 16(2), 151-169.
Hay, L. (1997). Tailor-made Instructional Materials Using Computer Multimedia Technology. Computers in the Schools, 13(1-2), 61-68.
Hebert, B. M., & Murdock, J. Y. (1994). Comparing three computer-aided instruction output modes to teach vocabulary words to students with learning disabilities. Learning Disabilities Research & Practice, 9(3), 136-141.
Heimann, M., Nelson, K. E., Tjus, T., & Gillberg, C. (1995). Increasing reading and communication skills in children with autism through an interactive multimedia computer program. Journal of Autism and Development Disorders, 25(5), 459-481.
Higgins, E. L., & Raskind, M. H. (2000). Speaking to read: The effects of continuous vs. discrete speech recognition systems on the reading and spelling of children with learning disabilities. Journal of Special Education Technology, 15(1), 19-30.
Higgins, K., & Boone, R. (1991). Hypermedia CAI: A supplement to an elementary school basal reader program. Journal of Special Education Technology, 11(1), 1-15.
Higgins, K., & Boone, R. (1990). Hypertext computer study guides and the social studies achievement of students with learning disabilities, remedial students, and regular education students. Journal of Learning Disabilities, 23(9), 529-540.
Higgins, K., Boone, R., & Lovitt, T. (1996). Hypertext support for remedial students and students with learning disabilities. Journal of Learning Disabilities, 29(4), 402-412.
Holt-Ochsner. (1992). Automaticity training for dyslexics: an experimental study. Annals of Dyslexia, 42, 222-241.
Horney, M. A., & Anderson-Inman, L. (1994). The ElectroText project: Hypertext reading patterns of middle school students. Journal of Educational Multimedia and Hypermedia, 3(1), 71-91.
Horton, S. V., Boone, R. A., & Lovitt, T. C. (1990). Teaching social studies to learning disabled high school students: effects of a hypertext study guide. British Journal of Educational Technology, 21(2), 118-131.
Horton, S. V., Lovitt, T. C., & Slocum, T. (1988). Teaching geography to high school students with academic deficits: effects of a computerized map tutorial. Learning Disability Quarterly, 11, 371-379.
Howell, R., Sidorenko, E., & Jurica, J. (1987). The effects of computer use on the acquisition of multiplication facts by a student with learning disabilities. Journal of Learning Disabilities, 20, 336-341.
Hurford, D. P. (1990). Training phonemic segmentation ability with a phonemic discrimination intervention in second- and third-grade children with reading disabilities. Journal of Learning Disabilities, 23(9), 564-569.
Hurford, D. P., & Sanders, R. E. (1990). Assessment and remediation of a phonemic discrimination deficit in reading disabled second and fourth graders. Journal of Experimental Child Psychology, 50, 396-415.
Jinkerson, L. & Baggett, P. (1993). Spell checkers: Aids in identifying and correcting spelling errors. Journal of Computing in Childhood Education, 4(3-4), 291-306.
Jones, K., Torgesen, J. K., & Sexton, M. (1987). Using computer guided practice to increase decoding fluency in LD children: a study using the Hint and Hunt I program. Journal of Learning Disabilities, 2, 122-128.
Kelly, B., Carnine, D., Gersten, R. S. & Grossen, B. (1986). The effectiveness of videodisc instruction in teaching fractions to learning disabled and remedial high school students. Journal of Special Education Technology, 8, 5-17.
Kerchner, L. B., & Kistinger, B. J. (1984). Language processing/word processing: written expression, computers, and learning disabled students. Learning Disability Quarterly, 7, 329-335.
Kurth, R. J. (1987). Using word processing to enhance revision strategies during student writing activities. Educational Technology, January, 13-19.
Large, A., Beheshti, J., Breuleux, A., & Renaud, A. (1995). Multimedia and comprehension: The relationship among text, animation, and captions. Journal of the American Society for Information Science, 46(5), 340-347.
Leong, C. K. (1992). Enhancing reading comprehension with text-to-speech (DECtalk) computer system. Reading and Writing: An Interdisciplinary Journal, 4, 205-217.
Leong, C. K. (1995). Effects of on-line reading and simultaneous DECtalk aiding in helping below-average and poor readers comprehend and summarize text. Learning Disability Quarterly, 18, 101-116.
Lewin, C. (1997). "Test driving" CARS: addressing the issues in the evaluation of computer-assisted reading software. Journal of Computing in Childhood Education, 8(2/3), 111-132.
Lewin, C. (2000). Exploring the effects of talking book software in UK primary classrooms. Journal of Research in Reading, 23(2), 149-157.
Liao, Y. C. (1998). Effects of hypermedia versus traditional instruction on students' achievement: a meta-analysis. Journal of Research on Computing in Education, 30(4), 341-359.
Lin, A., Podell, D. M., & Rein, N. (1991). The effects of CAI on word recognition in mildly mentally handicapped and nonhandicapped learners. Journal of Special Education Technology, 11(1), 16-25.
Lundberg, I., & Olofsson, A. (1993). Can computer speech support reading comprehension? Computers in Human Behavior, 9, 283-293.
Lynch, L., Fawcett, A. J., & Nicolson, R. I. (2000). Computer-assisted reading intervention in a secondary school: an evaluation study. British Journal of Educational Technology, 31(4), 333-348.
MacArthur, C. A. (1998). Word processing with speech synthesis and word prediction: Effects on the dialogue journal writing of students with learning disabilities. Learning Disability Quarterly, 21(2), 151-166.
MacArthur, C. A., Graham, S., Haynes, J. A., & De la Paz, S. (1996). Spelling checkers and students with learning disabilities: Performance comparisons and impact on spelling. Journal of Special Education, 30, 35-57.
MacArthur, C. A., Graham, S., Schwartz, S. S., & Schafer, W. D. (1995). Evaluation of a writing instruction model that integrated a process approach, strategy instruction, and word processing. Learning Disability Quarterly, 18, 278-291.
MacArthur, C. A., Haynes, J. A., Malouf, D. B., Harris, K., & Owings, M. (1990). Computer assisted instruction with learning disabled students: achievement, engagement, and other factors that influence achievement. Journal of Educational Computing Research, 6(3), 311-328.
MacArthur, C. A., & Shneiderman, B. (1986). Learning disabled students' difficulties in learning to use a word processor: implications for instruction and software evaluation. Journal of Learning Disabilities, 19(4), 248-253.
MacArthur, C. A. & Haynes, J. B. (1995). Student assistant for learning from text (SALT): A hypermedia reading aid. Journal of Learning Disabilities, 28(3), 50-59.
Malouf, D. B. (1987-88). The effect of instructional computer games on continuing student motivation. Journal of Special Education, 21(4), 27-38.
Marston, D., Deno, S. L., Kim, D., Diment, K., & Rogers, D. (1995). Comparison of reading intervention approaches for students with mild disabilities. Exceptional Children, 62(1), 20-37.
Matthew, K. (1997). A comparison of the influence of interactive CD-ROM storybooks and traditional print storybooks on reading comprehension. Journal of Research on Computing in Education, 29(3), 263-275.
McDermott, P. A., & Watkins, M. W. (1983). Computerized vs. conventional remedial instruction for learning-disabled pupils. Journal of Special Education, 17(1), 81-88.
McNaughton, D., Hughes, C., & Ofiesh, N. (1997). Proofreading for students with learning disabilities: integrating computer and strategy use. Learning Disabilities Research & Practice, 12(1), 16-28.
Miller, L., Blackstock, J. & Miller, R. (1994). An exploratory study into the use of CD-ROM storybooks. Computers in Education, 22(1 & 2), 187-204.
Mitchell, M. J., & Fox, B. J. (2001). The effects of computer software for developing phonological awareness in low-progress readers. Reading Research and Instruction, 40(4), 315-332.
Montali, J., & Lewandowski, L. (1996). Bimodal reading: benefits of a talking computer for average and less skilled readers. Journal of Learning Disabilities, 29(3), 271-279.
Montgomery, D. J., Karlan, G. R., & Coutinho, M. (2001). The effectiveness of word processor spell checker programs to produce target words for misspellings generated by students with learning disabilities. Journal of Special Education Technology, 16(2), 27-41.
Moore-Hart, M. A. (1995). The effects of multicultural links on reading and writing performance and cultural awareness of fourth and fifth graders. Computers in Human Behavior, 11(3-4), 391-410.
Nicolson, R. I., Fawcett, A. J., & Nicolson, M. K. (2000). Evaluation of a computer-based reading intervention in infant and junior schools. Journal of Research in Reading, 23(2), 194-209.
Olson, R. K., Wise, B., Ring, J., & Johnson, M. (1997). Computer-based remedial training in phoneme awareness and phonological decoding: effects on the posttraining development of word recognition. Scientific Studies of Reading, 1(3), 235-253.
Olson, R. K., & Wise, B. W. (1992). Reading on the computer with orthographic and speech feedback. Reading and Writing: An Interdisciplinary Journal, 4, 107-144.
Outhred, L. (1987). To write or not to write: Does using a word processor assist reluctant writers? Australia & New Zealand Journal of Developmental Disabilities, 13(4), 211-217.
Outhred, L. (1989). Word processing: Its impact on children's writing. Journal of Learning Disabilities, 22(4), 262-264.
Podell, D. M., Tournaki-Rein, N., & Lin, A. (1992). Automatization of mathematics skills via computer-assisted instruction among students with mild mental handicaps. Education Training in Mental Retardation, September, 200-206.
Raskind, M. H., & Higgins, E. L. (1999). Speaking to read: The effects of speech recognition technology on the reading and spelling performance of children with learning disabilities. Annals of Dyslexia, 49, 251-281.
Reinking, D., & Schreiner, R. (1985). The effects of computer-mediated text on measures of reading comprehension and reading behavior. Reading Research Quarterly, 20(5), 536-552.
Reitsma, P., & Wesseling, R. (1998). Effects of computer-assisted training of blending skills in kindergartners. Scientific Studies of Reading, 2(4), 301-320.
Rieber, L. P. (1990). Using computer animated graphics in science instruction with children. Journal of Educational Psychology, 82(1), 135-140.
Rose, D., & Meyer, A. (2002). Teaching Every Student in the Digital Age: Universal Design for Learning. ASCD.
Rosenbluth, G. S., & Reed, M. W. (1992). The effects of writing-process-based instruction and word processing on remedial and accelerated 11th graders. Computers in Human Behavior, 8, 71-95.
Shany, M. T., & Biemiller, A. (1995). Assisted reading practice: effects on performance for poor readers in grades 3 and 4. Reading Research Quarterly, 30(3), 382-395.
Sherwood, R. D., Kinzer, C. K., Bransford, J. D., & Franks, J. J. (1987). Some benefits of creating macro-contexts for science instruction: initial findings. Journal of Research in Science Teaching, 24(5), 417-435.
Shin, E. C., Schallert, D. L., & Savenye, W. C. (1994). Effects of learner control, advisement, and prior knowledge on young students' learning in a hypertext environment. Educational Technology Research & Development, 42(1), 33-46.
Swanson, H. L., & Trahan, M. F. (1992). Learning disabled readers' comprehension of computer mediated text: the influence of working memory, metacognition, and attribution. Learning Disabilities Research & Practice, 7, 74-86.
Talley, S., Lancy, D. F., & Lee, T. R. (1997). Children, storybooks, and computers. Reading Horizons, 38(2), 116-128.
Thorkildsen, R. J., & Reid, R. (1989). An investigation of the reinforcing effects of feedback on computer-assisted instruction. Journal of Special Education Technology, 9(3), 125-135.
Tierney, R. J., Kieffer, R., Whalin, K., Desai, L., Moss, A. G., Harris, J. E., & Hopper, J. (1997). Assessing the impact of hypertext on learners' architecture of literacy learning spaces in different disciplines: follow-up studies. Reading Online (1096-1232).
Torgesen, J. K., Waters, M., Cohen, A., & Torgesen, J. L. (1988). Improving sight-word recognition skills in LD children: An evaluation of three computer program variations. Learning Disability Quarterly, 2, 125-132.
Trifiletti, J. J., Frith, G. H., & Armstrong, S. (1984). Microcomputers versus resource rooms for LD students: A preliminary investigation of the effects on math skills. Learning Disability Quarterly, 7, 69-76.
van Daal, V. H. P., & van der Leij, A. (1992). Computer-based reading and spelling practice for children with learning disabilities. Journal of Learning Disabilities, 25(3), 186-195.
van den Bosch, K., van Bon, W. H. J., & Schreuder, R. (1995). Poor readers' decoding skills: effects of training with limited exposure duration. Reading Research Quarterly, 30(1), 110-125.
von Tetzchner, S., Rogne, S. O., & Lilleeng, M. K. (1997). Literacy intervention for a deaf child with severe reading disorder. Journal of Literacy Research, 29(1), 25-46.
Wentink, H. W. M. J., van Bon, W. H. J., & Schreuder, R. (1997). Training of poor readers' phonological decoding skills: evidence for syllable-bound processing. Reading and Writing: An Interdisciplinary Journal, 9, 163-192.
Wetzel, K. (1996). Speech-recognizing computers: A written-communication tool for students with learning disabilities. Journal of Learning Disabilities, 29(4), 371-380.
Williams, H. S., & Williams, P. N. (2000). Integrating reading and computers: an approach to improve ESL students reading skills. Reading Improvement, 37(3), 98-100.
Wise, B. W. (1992). Whole words and decoding for short-term learning: comparisons on a "talking-computer" system. Journal of Experimental Child Psychology, 54, 147-167.
Wise, B. W., & Olson, R. K. (1995). Computer-based phonological awareness and reading instruction. Annals of Dyslexia, 45, 99-122.
Wise, B. W., Ring, J., & Olson, R. K. (1999). Training phonological awareness with and without explicit attention to articulation. Journal of Experimental Child Psychology, 72, 271-304.
Wise, B. W., Ring, J., & Olson, R. K. (2000). Individual differences in gains from computer-assisted remedial reading. Journal of Experimental Child Psychology, 77, 197-235.
Xin, J. F., Glaser, C. W., & Rieth, H. (1996). Multimedia reading using anchored instruction and video technology in vocabulary lessons. Teaching Exceptional Children, Nov/Dec, 45-49.
Xin, J. F., & Rieth, H. (2001). Video-assisted vocabulary instruction for elementary school students with learning disabilities. Information Technology in Childhood Education Annual, 87-103.
Zordell, J. (1990). The use of word prediction and spelling correction software with mildly handicapped students. Closing the Gap, April/May, 10-12.
This content was developed pursuant to cooperative agreement #H324H990004 under CFDA 84.324H between CAST and the Office of Special Education Programs, U.S. Department of Education. However, the opinions expressed herein do not necessarily reflect the position or policy of the U.S. Department of Education or the Office of Special Education Programs and no endorsement by that office should be inferred.
Cite this paper as follows:
Strangman, N., & Hall, T. (2003). Text transformations. Wakefield, MA: National Center on Accessing the General Curriculum. Retrieved [insert date] from http://aim.cast.org/learn/historyarchive/backgroundpapers/text_transform... | http://aim.cast.org/learn/historyarchive/backgroundpapers/text_transformations | 13 |
50 | Basics: Making graphs with kinematics stuff
pre reqs: kinematics
Suppose there is some experiment in which you throw a ball up and collect position and time data (with video analysis). What do you do with this data? Your instructor told you to make a graph, but how do you do that?
Here is the fictional data you (or I) collected:
First idea: Use graph paper and plot what you see
That makes sense, doesn’t it? Well, where do you get graph paper? Surprisingly, there are many places online that offer free graph paper that you can print out. Here is one (the first hit from Google: http://www.printfreegraphpaper.com/).
So I have my paper now. This is going to be a weird graph though – look, the position data is all between 2 and 3 meters. Won’t this graph have a ton of wasted space? Actually, when you make a graph, the axes do NOT have to start at zero; they can start wherever you want. The other mistake NOT to make is to force each square to be one division of the data. Let me explain: the graph paper I printed out is 30 boxes by 39 boxes. If you choose time to be on the side with 30 boxes, do not make each square represent 0.1 seconds – that would only use 6 of the available boxes. You want to use at LEAST half of the graph paper. If your data takes up less than half the paper, you can always let TWO squares represent what one square did before. In this case, I can let every 4 squares equal 0.1 seconds.
For the vertical data, my smallest value is 2.06 meters and my largest is 2.69 for a difference of 0.63 meters. For this data I can let each square (division) be 0.02 meters. Here is what I get:
A couple of things to note:
- The data takes up more than half of the paper.
- The axes are labeled WITH units.
- It has a title, which is just a good idea.
But what now? Do you connect the dots? Well, remember the purpose of a graph is not to make a graph (although many students think the purpose of a graph is just that the instructor said to make one). There has to be some reason for making a graph. In this case, you would probably want to find the acceleration of this object. To do that, you could describe this data as a mathematical function (like y(t)). The data looks like a parabola, but how do you fit that? Truthfully, with graph paper that is quite difficult.
Ok, let us (me) think about exactly what we want. I want to show that this is constant acceleration motion. In this case, the object should follow the function: (kinematics refresher)
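That function is the usual constant-acceleration kinematic equation, something like: y(t) = y0 + v0*t + (1/2)*a*t^2, where y0 is the starting position, v0 the starting velocity, and a the (constant) acceleration.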
But with graph paper, it is not trivial to fit a quadratic function. Can we cheat? Not really. If this had been a situation where the ball was dropped and the initial velocity was zero, then the function could be written as:
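With the initial velocity equal to zero, the v0 term drops out and the function reduces to: Δy = (1/2)*a*t^2.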
In this case the variable Δy is linear with respect to t^2. But alas, the initial velocity is not zero. Then what is a student without a computer to do? There is one thing.
Plot velocity versus time
Velocity is linear with respect to time:
(is it confusing if I write v as a function of t? Maybe I should write it different). I could write:
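v = v0 + a*t (the slope is the acceleration a, and the intercept is the starting velocity v0).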
Why is a linear relationship so important? Well, with a graph on graph paper, one CAN estimate a best fit straight line. Fine, well then how do I get velocity data? There is a way. Some may not like it, but there is a way. If you look at two consecutive positions, you can use them to get a velocity.
So, here is the plan. If I look at the first two times and positions, the ball (or whatever it was) went from 2.3 meters to 2.57 meters in 0.1 seconds. Yes, this was not constant motion. But I can get the average velocity. This is approximately:
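average velocity ≈ (2.57 m - 2.3 m) / (0.1 s) = 2.7 m/s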
But what time does this go with? How about I say it was the velocity at 0.05 seconds (halfway between 0.0 and 0.1 seconds). This is sort of cheating, but if the time interval is small it really doesn’t matter.
Doing this for the posted data, I have the new data:
Notice that before I had 7 data points, now I just have 6. I will go ahead and plot the data on a different sheet of graph paper (using the same ideas as before). Here is the finished graph (with the added best fit line):
A couple of things to point out:
- You probably see my mistake. If I were turning this in for a grade, I would probably redo this (or not do it in pen).
- I added my “best fit” line. This is really just a guess. There IS a way to determine the actual best fit line, but I will save that for another day. Notice that I did not draw a line from the first point to the last. If I had done that, what would be the point of all the data in between those?
- I picked two points from which to calculate the slope. The points chosen were as far apart as possible, were not data points used to create the graph, and, for ease, sat on division lines.
Now I can calculate the slope – rise over run or:
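slope = rise / run = (v2 - v1) / (t2 - t1), where (t1, v1) and (t2, v2) are the two points picked off the best fit line (the actual numbers come from the graph).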
The slope has units of acceleration, and indeed it is the acceleration. Of course it is not exactly -9.8 m/s^2 because I used realistic data. You might get a better value of the acceleration with more data or by fitting a quadratic function, but you work with what you have.
I think it is important that students understand the basics of graphing without using a spreadsheet or some other computer program. All too often students just feed numbers to a program and it spits out a picture. How do you know you can trust that program? You DO know that one day computers (and robots) will rule the world? You should prepare for that day now and understand how to do graphs by hand.
What if you want to use a program? How do you do that? I will save that for a part II. (this is longer than I expected) | http://www.wired.com/wiredscience/2008/09/basics-making-graphs-with-kinematics-stuff/ | 13 |
100 | Electricity and magnetism
The dot product: Introduction to the vector dot product.
The dot product
- Let's learn a little bit about the dot product.
- The dot product, frankly, out of the two ways of multiplying
- vectors, I think is the easier one.
- So what does the dot product do?
- Why don't I give you the definition, and then I'll give
- you an intuition.
- So if I have two vectors; vector a dot vector b-- that's
- how I draw my arrows.
- I can draw my arrows like that.
- That is equal to the magnitude of vector a times the
- magnitude of vector b times cosine of the
- angle between them.
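In symbols, that definition is a · b = |a| |b| cos(θ), where θ is the angle between the two vectors.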
- Now where does this come from?
- This might seem a little arbitrary, but I think with a
- visual explanation, it will make a little bit more sense.
- So let me draw, arbitrarily, these two vectors.
- So that is my vector a-- nice big and fat vector.
- It's good for showing the point.
- And let me draw vector b like that.
- Vector b.
- And then let me draw the cosine, or let me, at least,
- draw the angle between them.
- This is theta.
- So there's two ways of view this.
- Let me label them.
- This is vector a.
- I'm trying to be color consistent.
- This is vector b.
- So there's two ways of viewing this product.
- You could view it as vector a-- because multiplication is
- commutative, you could switch the order.
- So this could also be written as, the magnitude of vector a
- times cosine of theta, times-- and I'll do it in color
- appropriate-- vector b.
- And this times, this is the dot product.
- I almost don't have to write it.
- This is just regular multiplication, because these
- are all scalar quantities.
- When you see the dot between vectors, you're talking about
- the vector dot product.
- So if we were to just rearrange this expression this
- way, what does it mean?
- What is a cosine of theta?
- Let me ask you a question.
- If I were to drop a right angle, right here,
- perpendicular to b-- so let's just drop a right angle
- there-- cosine of theta-- SOH CAH TOA-- CAH: cosine--
- is equal to adjacent over hypotenuse, right?
- Well, what's the adjacent?
- It's equal to this.
- And the hypotenuse is equal to the magnitude of a, right?
- Let me re-write that.
- So cosine of theta-- and this applies to the a vector.
- Cosine of theta of this angle is equal to adjacent, which
- is-- I don't know what you could call this-- let's call
- this the projection of a onto b.
- It's like if you were to shine a light perpendicular to b--
- if there was a light source here and the light was
- straight down, it would be the shadow of a onto b.
- Or you could almost think of it as the part of a that goes
- in the same direction of b.
- So this projection, they call it-- at least the way I get
- the intuition of what a projection is, I kind of view
- it as a shadow.
- If you had a light source that came up perpendicular, what
- would be the shadow of that vector on to this one?
- So if you think about it, this shadow right here-- you could
- call that, the projection of a onto b.
- Or, I don't know.
- Let's just call it, a sub b.
- And it's the magnitude of it, right?
- It's how much of vector a goes on vector b over-- that's the
- adjacent side-- over the hypotenuse.
- The hypotenuse is just the magnitude of vector a.
- It's just our basic trigonometry.
- Or another way you could view it, just multiply both sides
- by the magnitude of vector a.
- You get the projection of a onto b, which is just a fancy
- way of saying, this side; the part of a that goes in the
- same direction as b-- is another way to say it-- is
- equal to just multiplying both sides times the magnitude of a
- is equal to the magnitude of a, cosine of theta.
- Which is exactly what we have up here.
- And the definition of the dot product.
- So another way of visualizing the dot product is, you could
- replace this term with the magnitude of the projection of
- a onto b-- which is just this-- times the
- magnitude of b.
- That's interesting.
- All the dot product of two vectors is-- let's just take
- one vector.
- Let's figure out how much of that vector-- what component
- of it's magnitude-- goes in the same direction as the
- other vector, and let's just multiply them.
- And where is that useful?
- Well, think about it.
- What about work?
- When we learned work in physics?
- Work is force times distance.
- But it's not just the total force
- times the total distance.
- It's the force going in the same
- direction as the distance.
- You should review the physics playlist if you're watching
- this within the calculus playlist. Let's say I have a
- 10 newton object.
- It's sitting on ice, so there's no friction.
- We don't want to worry about fiction right now.
- And let's say I pull on it.
- Let's say my force vector-- This is my force vector.
- Let's say my force vector is 100 newtons.
- I'm making the numbers up.
- 100 newtons.
- And Let's say I slide it to the right, so my distance
- vector is 10 meters parallel to the ground.
- And the angle between them is equal to 60 degrees, which is
- the same thing as pi over 3.
- We'll stick to degrees.
- It's a little bit more intuitive.
- It's 60 degrees.
- This distance right here is 10 meters.
- So my question is, by pulling on this rope, or whatever, at
- the 60 degree angle, with a force of 100 newtons, and
- pulling this block to the right for 10 meters, how much
- work am I doing?
- Well, work is force times the distance, but not just the
- total force.
- The magnitude of the force in the direction of the distance.
- So what's the magnitude of the force in the
- direction of the distance?
- It would be the horizontal component of this force
- vector, right?
- So it would be 100 newtons times the
- cosine of 60 degrees.
- It will tell you how much of that 100
- newtons goes to the right.
- Or another way you could view it if this
- is the force vector.
- And this down here is the distance vector.
- You could say that the total work you performed is equal to
- the force vector dot the distance vector, using the dot
- product-- taking the dot product, to the force and the
- distance factor.
- And we know that the definition is the magnitude of
- the force vector, which is 100 newtons, times the magnitude
- of the distance vector, which is 10 meters, times the cosine
- of the angle between them.
- Cosine of the angle is 60 degrees.
- So that's equal to 1,000 newton meters
- times cosine of 60.
- Cosine of 60 is what?
- It's 1/2.
- One half, if I remember correctly.
- So times 1/2.
- So the 1,000 times 1/2 becomes 500.
- So it becomes 500 joules.
- But the important thing to realize is that the dot
- product is useful.
- It applies to work.
- It actually calculates what component of what vector goes
- in the other direction.
- Now you could interpret it the other way.
- You could say this is the magnitude of a
- times b cosine of theta.
- And that's completely valid.
- And what's b cosine of theta?
- Well, if you took b cosine of theta, and you could work this
- out as an exercise for yourself, that's the amount of
- the magnitude of the b vector that's
- going in the a direction.
- So it doesn't matter what order you go.
- So when you take the cross product, it matters whether
- you do a cross b, or b cross a.
- But when you're doing the dot product, it doesn't matter
- what order.
- So b cosine theta would be the magnitude of vector b that
- goes in the direction of a.
- So if you were to draw a perpendicular line here, b
- cosine theta would be this vector.
- That would be b cosine theta.
- The magnitude of b cosine theta.
- So you could say how much of vector b goes in the same
- direction as a?
- Then multiply the two magnitudes.
- Or you could say how much of vector a goes in the same
- direction as vector b?
- And then multiply the two magnitudes.
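If you want to check this numerically, here is a minimal sketch (not from the video; the vectors a and b are made-up examples) using NumPy:

```python
# Minimal sketch (not from the video): the dot product is symmetric, and it
# equals "the component of b along a" times the magnitude of a.
import numpy as np

a = np.array([3.0, 4.0])
b = np.array([5.0, 0.0])

print(np.dot(a, b), np.dot(b, a))        # 15.0 15.0 -- order does not matter

# |b| cos(theta): the part of b that points in a's direction.
b_along_a = np.dot(a, b) / np.linalg.norm(a)
print(np.linalg.norm(a) * b_along_a)     # 15.0 -- same number again
```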
- And now, this is, I think, a good time to just make sure
- you understand the difference between the dot product and
- the cross product.
- The dot product ends up with just a number.
- You multiply two vectors and all you have is a number.
- You end up with just a scalar quantity.
- And why is that interesting?
- Well, it tells you how much do these-- you could almost say--
- these vectors reinforce each other.
- Because you're taking the parts of their magnitudes that
- go in the same direction and multiplying them.
- The cross product is actually almost the opposite.
- You're taking their orthogonal components, right?
- The difference was, the cross product was the magnitude of a
- times the magnitude of b times the sine of theta.
- I don't want to mess up this picture too much.
- But you should review the cross product videos.
- And I'll do another video where I actually compare and
- contrast them.
- But the cross product is, you're saying, let's multiply
- the magnitudes of the vectors that are perpendicular to each
- other, that aren't going in the same direction, that are
- actually orthogonal to each other.
- And then, you have to pick a direction since you're not
- saying, well, the same direction that
- they're both going in.
- So you're picking the direction that's orthogonal to
- both vectors.
- And then, that's why the orientation matters and you
- have to use the right-hand rule, because there are actually
- two opposite directions that are perpendicular to any two
- given vectors in three dimensions.
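Again as a made-up numeric sketch (not from the video), the contrast shows up directly in NumPy:

```python
# Minimal sketch (not from the video): dot vs. cross product in NumPy.
# The dot product is a scalar and ignores order; the cross product is a
# vector perpendicular to both inputs and flips sign when the order flips.
import numpy as np

a = np.array([2.0, 0.0, 0.0])
b = np.array([1.0, 1.0, 0.0])

print(np.dot(a, b), np.dot(b, a))   # 2.0 2.0  (scalar, order irrelevant)
print(np.cross(a, b))               # [0. 0. 2.]   (right-hand rule: +z)
print(np.cross(b, a))               # [ 0.  0. -2.] (order reversed: -z)
```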
- Anyway, I'm all out of time.
- I'll continue this, hopefully not too confusing, discussion
- in the next video.
- I'll compare and contrast the cross
- product and the dot product.
- See you in the next video.
| http://www.khanacademy.org/science/physics/electricity-and-magnetism/v/the-dot-product | 13
51 | To learn is to acquire knowledge or skill. Learning also may involve a change in attitude or behavior. Children learn to identify objects at an early age; teenagers may learn to improve study habits; and adults can learn to solve complex problems. Pilots and aviation maintenance technicians (AMTs) need to acquire the higher levels of knowledge and skill, including the ability to exercise judgment and solve problems. The challenge for the aviation instructor is to understand how people learn, and more importantly, to be able to apply that knowledge to the learning environment. This handbook is designed as a basic guide to educational psychology. This chapter addresses that branch of psychology directly concerned with how people learn.
Learning theory may be described as a body of principles advocated by psychologists and educators to explain how people acquire skills, knowledge, and attitudes. Various branches of learning theory are used in formal training programs to improve and accelerate the learning process. Key concepts such as desired learning outcomes, objectives of the training, and depth of training also apply. When properly integrated, learning principles, derived from theories, can be useful to aviation instructors and developers of instructional programs for both pilots and maintenance technicians.
Over the years, many theories have attempted to explain how people learn. Even though psychologists and educators are not in complete agreement, most do agree that learning may be explained by a combination of two basic approaches: behaviorism and the cognitive theories.
Behaviorists believe that animals, including humans, learn in about the same way. Behaviorism stresses the importance of having a particular form of behavior reinforced by someone, other than the student, to shape or control what is learned. In aviation training, the instructor provides the reinforcement. Frequent, positive reinforcement and rewards accelerate learning. This theory provides the instructor with ways to manipulate students with stimuli, induce the desired behavior or response, and reinforce the behavior with appropriate rewards. In general, the behaviorist theory emphasizes positive reinforcement rather than no reinforcement or punishment. Other features of behaviorism are considerably more complex than this simple explanation. Instructors who need more details should refer to psychology texts for a better understanding of behaviorism. As an instructor, it is important to keep in mind that behaviorism is still widely used today, because controlling learning experiences helps direct students toward specific learning outcomes.
Much of the recent psychological thinking and experimentation in education includes some facets of the cognitive theory. This is true in basic as well as more advanced training programs. Unlike behaviorism, the cognitive theory focuses on what is going on inside the student's mind. Learning is not just a change in behavior; it is a change in the way a student thinks, understands, or feels.
There are several branches of cognitive theory. Two of the major theories may broadly be classified as the information processing model and the social interaction model. The first says that the student's brain has internal structures which select and process incoming material, store and retrieve it, use it to produce behavior, and receive and process feedback on the results.
This involves a number of cognitive processes, including executive functions of recognizing expectancies, planning and monitoring performance, encoding and chunking information, and producing internal and external responses.
The social interaction theories gained prominence in the 1980s. They stress that learning and subsequent changes in behavior take place as a result of interaction between the student and the environment. Behavior is modeled either by people or symbolically. Cultural influences, peer pressure, group dynamics, and film and television are some of the significant factors. Thus, the social environment to which the student is exposed demonstrates or models behaviors, and the student cognitively processes the observed behaviors and consequences. The cognitive processes include attention, retention, motor responses, and motivation. Techniques for learning include direct modeling and verbal instruction. Behavior, personal factors, and environmental events all work together to produce learning.
Both models of the cognitive theory have common principles. For example, they both acknowledge the importance of reinforcing behavior and measuring changes. Positive reinforcement is important, particularly with cognitive concepts such as knowledge and understanding. The need to evaluate and measure behavior remains because it is the only way to get a clue about what the student understands. Evaluation is often limited to the kinds of knowledge or behavior that can be measured by a paper-and-pencil exam or a performance test. Although psychologists agree that there often are errors in evaluation, some means of measuring student knowledge, performance, and behavior is necessary.
Both the behavioristic and the cognitive approaches are useful learning theories. A reasonable way to plan, manage, and conduct aviation training is to include the best features of each major theory. This provides a way to measure behavioral outcomes and promote cognitive learning. The combined approach is not simple, but neither is learning.
The ability to learn is one of the most outstanding human characteristics. Learning occurs continuously throughout a person's lifetime. To define learning, it is necessary to analyze what happens to the individual. For example, an individual's way of perceiving, thinking, feeling, and doing may change as a result of a learning experience. Thus, learning can be defined as a change in behavior as a result of experience. This can be physical and overt, or it may involve complex intellectual or attitudinal changes which affect behavior in more subtle ways. In spite of numerous theories and contrasting views, psychologists generally agree on many common characteristics of learning.
Aviation instructors need a good understanding of the general characteristics of learning in order to apply them in a learning situation. If learning is a change in behavior as a result of experience, then instruction must include a careful and systematic creation of those experiences that promote learning. This process can be quite complex because, among other things, an individual's background strongly influences the way that person learns. To be effective, the learning situation also should be purposeful, based on experience, multifaceted, and involve an active process.
Each student sees a learning situation from a different viewpoint. Each student is a unique individual whose past experiences affect readiness to learn and understanding of the requirements involved. For example, an instructor may give two aviation maintenance students the assignment of learning certain inspection procedures. One student may learn quickly and be able to competently present the assigned material. The combination of an aviation background and future goals may enable that student to realize the need and value of learning the procedures. A second student's goal may only be to comply with the instructor's assignment, and may result in only minimum preparation. The responses differ because each student acts in accordance with what he or she sees in the situation.
Most people have fairly definite ideas about what they want to do and achieve. Their goals sometimes are short term, involving a matter of days or weeks. On the other hand, their goals may be carefully planned for a career or a lifetime. Each student has specific intentions and goals. Some may be shared by other students. Students learn from any activity that tends to further their goals. Their individual needs and attitudes may determine what they learn as much as what the instructor is trying to get them to learn. In the process of learning, the student's goals are of paramount significance. To be effective, aviation instructors need to find ways to relate new learning to the student's goals.
Since learning is an individual process, the instructor cannot do it for the student. The student can learn only from personal experiences; therefore, learning and knowledge cannot exist apart from a person. A person's knowledge is a result of experience, and no two people have had identical experiences. Even when observing the same event, two people react differently; they learn different things from it, according to the manner in which the situation affects their individual needs. Previous experience conditions a person to respond to some things and to ignore others.
All learning is by experience, but learning takes place in different forms and in varying degrees of richness and depth. For instance, some experiences involve the whole person while others may be based only on hearing and memory. Aviation instructors are faced with the problem of providing learning experiences that are meaningful, varied, and appropriate. As an example, students can learn to say a list of words through repeated drill, or they can learn to recite certain principles of flight by rote. However, they can make them meaningful only if they understand them well enough to apply them correctly to real situations. If an experience challenges the students, requires involvement with feelings, thoughts, memory of past experiences, and physical activity, it is more effective than a learning experience in which all the students have to do is commit something to memory.
It seems clear enough that the learning of a physical skill requires actual experience in performing that skill. Student pilots learn to fly aircraft only if their experiences include flying them; student aviation maintenance technicians learn to overhaul powerplants only by actually performing that task. Mental habits are also learned through practice. If students are to use sound judgment and develop decision-making skills, they need learning experiences that involve knowledge of general principles and require the use of judgment in solving realistic problems.
If instructors see their objective as being only to train their students' memory and muscles, they are underestimating the potential of the teaching situation. Students may learn much more than expected if they fully exercise their minds and feelings. The fact that these items were not included in the instructor's plan does not prevent them from influencing the learning situation.
Psychologists sometimes classify learning by types, such as verbal, conceptual, perceptual, motor, problem solving, and emotional. Other classifications refer to intellectual skills, cognitive strategies, and attitudinal changes, along with descriptive terms like surface or deep learning. However useful these divisions may be, they are somewhat artificial. For example, a class learning to apply the scientific method of problem solving may learn the method by trying to solve real problems. But in doing so, the class also engages in verbal learning and sensory perception at the same time. Each student approaches the task with preconceived ideas and feelings, and for many students, these ideas change as a result of experience. Therefore, the learning process may include verbal elements, conceptual elements, perceptual elements, emotional elements, and problem solving elements all taking place at once. This aspect of learning will become more evident later in this handbook when lesson planning is discussed.
Learning is multifaceted in still another way. While learning the subject at hand, students may be learning other things as well. They may be developing attitudes about aviation, good or bad, depending on what they experience. Under a skillful instructor, they may learn self-reliance. The list is seemingly endless. This type of learning is sometimes referred to as incidental, but it may have a great impact on the total development of the student.
Students do not soak up knowledge like a sponge absorbs water. The instructor cannot assume that students remember something just because they were in the classroom, shop, or airplane when the instructor presented the material. Neither can the instructor assume that the students can apply what they know because they can quote the correct answer verbatim. For students to learn, they need to react and respond, perhaps outwardly, perhaps only inwardly, emotionally, or intellectually. But if learning is a process of changing behavior, clearly that process must be an active one.
Although characteristics of learning and learning styles are related, there are distinctions between the two. Learning style is a concept that can play an important role in improving instruction and student success. It is concerned with student preferences and orientation at several levels. For example, a student's information processing technique, personality, social interaction tendencies and the instructional methods used are all significant factors which apply to how individual students learn. In addition, today's culturally diverse society, including international students, must be considered.
The key point is that all students are different, and training programs should be sensitive to the differences. Some students are fast learners and others have difficulties; and, as already mentioned, motivation, experience, and previous training affect learning style. Any number of adjectives may be used to describe learning styles; some common examples are discussed in the following paragraphs.
Theories abound concerning right- or left-brain dominance. In general, those with right-brain dominance are characterized as being spatially oriented, creative, intuitive, and emotional. Those with left-brain dominance are more verbal, analytical, and objective. However, the separate hemispheres of the brain do not function independently. For example, the right hemisphere may recognize a face, while the left associates a name to go with the face. The term dominance is probably misleading when applied to brain hemispheres; specialization would be a more appropriate word.
Learning style differences certainly depend on how students process information. Some rely heavily on visual references while others depend more on auditory presentations. For example, visual students learn readily through reading and graphic displays, and auditory students have more success if they hear the subject matter described. Another difference is that some learn more easily when an idea is presented in a mathematical equation, while others may prefer a verbal explanation of the same idea. In addition, where hands-on activities are involved, students also learn by feel. This is sometimes called kinesthetic learning.
Information processing theories contain several other useful classifications. As an example, in the holistic/serialist theory, the holist strategy is a top-down concept where students have a big picture, global perspective. These students seek overall comprehension, especially through the use of analogies. In contrast, the serialist student focuses more narrowly and needs well-defined, sequential steps where the overall picture is developed slowly, thoroughly, and logically. This is a bottom-up strategy.
Two additional information processing classifications describe deep-elaborative and the shallow-reiterative learners. Testing practices which demand comprehension, rather than a regurgitation of facts, obviously encourage students to adopt a deep-elaborative learning style. Detailed information on testing procedures, as well as curriculum design and instructor techniques, is included later in this handbook.
As indicated, personality also affects how students learn. Dependent students require a lot of guidance, direction, and external stimulation. These students tend to focus on the instructor. The more independent students require only a minimum amount of guidance and external stimulation. They are not overly concerned with how the lesson is presented.
Students with a reflective-type personality may be described as tentative. They tend to be uncertain in problem-solving exercises. The opposite applies to impulsive students. Typically, they dive right in with enthusiasm and are prone to make quick, and sometimes faulty, decisions.
The social interaction concept contains further classifications of student learning styles. Like most of the other information on learning styles, these classifications are derived from research on tendencies of undergraduate students.
Some generalizations about these classifications indicate that compliant students are typically task oriented, and anxious-dependent students usually score lower than others on standardized tests. Discouraged students often have depressed feelings about the future, and independent students tend to be older, intelligent, secure, and comfortable with the academic environment. Attention seekers have a strong social orientation and are frequently involved in joking, showing off, and bragging. In contrast, silent students usually are characterized by helplessness, vulnerability, and other disconcerting behaviorisms.
Other studies identify more categories that are easily recognized. Among these are collaborative, sharing students who enjoy working with others, and competitive students who are grade conscious and feel they must do better than their peers. Participant students normally have a desire to learn and enjoy attending class, and avoidant students do not take part in class activities and have little interest in learning.
The existing learning environment also influences learning style. In real life, most students find it necessary to adapt to a traditional style learning environment provided by a school, university, or other educational/training establishment. Thus, the student's learning style may or may not be compatible.
Instructors who can recognize student learning style differences and associated problems will be much more effective than those who do not understand this concept. Also, these instructors will be prepared to develop appropriate lesson plans and provide guidance, counseling, or other advisory services, as required.
Over the years, educational psychologists have identified several principles which seem generally applicable to the learning process. They provide additional insight into what makes people learn most effectively.
Individuals learn best when they are ready to learn, and they do not learn well if they see no reason for learning. Getting students ready to learn is usually the instructor's responsibility. If students have a strong purpose, a clear objective, and a definite reason for learning something, they make more progress than if they lack motivation. Readiness implies a degree of single-mindedness and eagerness. When students are ready to learn, they meet the instructor at least halfway, and this simplifies the instructor's job.
Under certain circumstances, the instructor can do little, if anything, to inspire in students a readiness to learn. If outside responsibilities, interests, or worries weigh too heavily on their minds, if their schedules are overcrowded, or if their personal problems seem insoluble, students may have little interest in learning.
The principle of exercise states that those things most often repeated are best remembered. It is the basis of drill and practice. The human memory is fallible. The mind can rarely retain, evaluate, and apply new concepts or practices after a single exposure. Students do not learn to weld during one shop period or to perform crosswind landings during one instructional flight. They learn by applying what they have been told and shown. Every time practice occurs, learning continues. The instructor must provide opportunities for students to practice and, at the same time, make sure that this process is directed toward a goal.
The principle of effect is based on the emotional reaction of the student. It states that learning is strengthened when accompanied by a pleasant or satisfying feeling, and that learning is weakened when associated with an unpleasant feeling. Experiences that produce feelings of defeat, frustration, anger, confusion, or futility are unpleasant for the student. If, for example, an instructor attempts to teach landings during the first flight, the student is likely to feel inferior and be frustrated.
Instructors should be cautious. Impressing students with the difficulty of an aircraft maintenance problem, flight maneuver or flight crew duty can make the teaching task difficult. Usually it is better to tell students that a problem or maneuver, although difficult, is within their capability to understand or perform. Whatever the learning situation, it should contain elements that affect the students positively and give them a feeling of satisfaction.
Primacy, the state of being first, often creates a strong, almost unshakable, impression. For the instructor, this means that what is taught must be right the first time. For the student, it means that learning must be right. Unteaching is more difficult than teaching. If, for example, a maintenance student learns a faulty riveting technique, the instructor will have a difficult task correcting bad habits and reteaching correct ones. Every student should be started right. The first experience should be positive, functional, and lay the foundation for all that is to follow.
A vivid, dramatic, or exciting learning experience teaches more than a routine or boring experience. A student is likely to gain greater understanding of slow flight and stalls by performing them rather than merely reading about them. The principle of intensity implies that a student will learn more from the real thing than from a substitute. In contrast to flight instruction and shop instruction, the classroom imposes limitations on the amount of realism that can be brought into teaching. The aviation instructor should use imagination in approaching reality as closely as possible. Today, classroom instruction can benefit from a wide variety of instructional aids to improve realism, motivate learning, and challenge students. Chapter 7, Instructional Aids and Training Technologies, explores the wide range of teaching tools available for classroom use.
The principle of recency states that things most recently learned are best remembered. Conversely, the further a student is removed time-wise from a new fact or understanding, the more difficult it is to remember. It is easy, for example, for a student to recall a torque value used a few minutes earlier, but it is usually impossible to remember an unfamiliar one used a week earlier. Instructors recognize the principle of recency when they carefully plan a summary for a ground school lesson, a shop period, or a postflight critique. The instructor repeats, restates, or reemphasizes important points at the end of a lesson to help the student remember them. The principle of recency often determines the sequence of lectures within a course of instruction.
Initially, all learning comes from perceptions which are directed to the brain by one or more of the five senses: sight, hearing, touch, smell, and taste. Psychologists have also found that learning occurs most rapidly when information is received through more than one sense.
Perceiving involves more than the reception of stimuli from the five senses. Perceptions result when a person gives meaning to sensations. People base their actions on the way they believe things to be. The experienced aviation maintenance technician, for example, perceives an engine malfunction quite differently than does an inexperienced student. Real meaning comes only from within a person, even though the perceptions which evoke these meanings result from external stimuli. The meanings which are derived from perceptions are influenced not only by the individual's experience, but also by many other factors. Knowledge of the factors which affect the perceptual process is very important to the aviation instructor because perceptions are the basis of all learning.
There are several factors that affect an individual's ability to perceive. Some are internal to each person and some are external.
The physical organism provides individuals with the perceptual apparatus for sensing the world around them. Pilots, for example, must be able to see, hear, feel, and respond adequately while they are in the air. A person whose perceptual apparatus distorts reality is denied the right to fly at the time of the first medical examination.
A person's basic need is to maintain and enhance the organized self. The self is a person's past, present, and future combined; it is both physical and psychological. A person's most fundamental, pressing need is to preserve and perpetuate the self. All perceptions are affected by this need.
Just as the food one eats and the air one breathes become part of the physical self, so do the sights one sees and the sounds one hears become part of the psychological self. Psychologically, we are what we perceive. A person has physical barriers which keep out those things that would be damaging to the physical being, such as blinking at an arc weld or flinching from a hot iron. Likewise, a person has perceptual barriers that block those sights, sounds, and feelings which pose a psychological threat.
Helping people learn requires finding ways to aid them in developing better perceptions in spite of their defense mechanisms. Since a person's basic need is to maintain and enhance the self, the instructor must recognize that anything that is asked of the student which may be interpreted by the student as imperiling the self will be resisted or denied. To teach effectively, it is necessary to work with this life force.
Perceptions depend on one's goals and values. Every experience and sensation which is funneled into one's central nervous system is colored by the individual's own beliefs and value structures. Spectators at a ball game may see an infraction or foul differently depending on which team they support. The precise kinds of commitments and philosophical outlooks which the student holds are important for the instructor to know, since this knowledge will assist in predicting how the student will interpret experiences and instructions.
Goals are also a product of one's value structure. Those things which are more highly valued and cherished are pursued; those which are accorded less value and importance are not sought after.
Self-concept is a powerful determinant in learning. A student's self-image, described in such terms as confident and insecure, has a great influence on the total perceptual process. If a student's experiences tend to support a favorable self-image, the student tends to remain receptive to subsequent experiences. If a student has negative experiences which tend to contradict self-concept, there is a tendency to reject additional training.
A negative self-concept inhibits the perceptual processes by introducing psychological barriers which tend to keep the student from perceiving. They may also inhibit the ability to properly implement that which is perceived. That is, a negative self-concept can adversely affect a student's ability to actually perform or do things. Students who view themselves positively, on the other hand, are less defensive and more receptive to new experiences, instructions, and demonstrations.
It takes time and opportunity to perceive. Learning some things depends on other perceptions which have preceded these learnings, and on the availability of time to sense and relate these new things to the earlier perceptions. Thus, sequence and time are necessary.
A student could probably stall an airplane on the first attempt, regardless of previous experience. Stalls cannot really be learned, however, unless some experience in normal flight has been acquired. Even with such experience, time and practice are needed to relate the new sensations and experiences associated with stalls in order to develop a perception of the stall. In general, lengthening an experience and increasing its frequency are the most obvious ways to speed up learning, although this is not always effective. Many factors, in addition to the length and frequency of training periods, affect the rate of learning. The effectiveness of the use of a properly planned training syllabus is proportional to the consideration it gives to the time and opportunity factor in perception.
The element of threat does not promote effective learning. In fact, fear adversely affects perception by narrowing the perceptual field. Confronted with threat, students tend to limit their attention to the threatening object or condition. The field of vision is reduced, for example, when an individual is frightened and all the perceptual faculties are focused on the thing that has generated fear.
Flight instruction provides many clear examples of this. During the initial practice of steep turns, a student pilot may focus attention on the altimeter and completely disregard outside visual references. Anything an instructor does that is interpreted as threatening makes the student less able to accept the experience the instructor is trying to provide. It adversely affects all the student's physical, emotional, and mental faculties.
Learning is a psychological process, not necessarily a logical one. Trying to frighten a student through threats of unsatisfactory reports or reprisals may seem logical, but is not effective psychologically. The effective instructor can organize teaching to fit the psychological needs of the student. If a situation seems overwhelming, the student feels unable to handle all of the factors involved, and a threat exists. So long as the student feels capable of coping with a situation, each new experience is viewed as a challenge.
A good instructor realizes that behavior is directly influenced by the way a student perceives, and perception is affected by all of these factors. Therefore, it is important for the instructor to facilitate the learning process by avoiding any actions which may inhibit or prevent the attainment of teaching goals. Teaching is consistently effective only when those factors which influence perceptions are recognized and taken into account.
Insight involves the grouping of perceptions into meaningful wholes. Creating insight is one of the instructor's major responsibilities. To ensure that this does occur, it is essential to keep each student constantly receptive to new experiences and to help the student realize the way each piece relates to all other pieces of the total pattern of the task to be learned.
As an example, during straight-and-level flight in an airplane with a fixed-pitch propeller, the RPM will increase when the throttle is opened and decrease when it is closed. On the other hand, RPM changes can also result from changes in airplane pitch attitude without changes in power setting. Obviously, engine speed, power setting, airspeed, and airplane attitude are all related.
True learning requires an understanding of how each of these factors may affect all of the others and, at the same time, knowledge of how a change in any one of them may affect all of the others. This mental relating and grouping of associated perceptions is called insight.
Insight will almost always occur eventually, whether or not instruction is provided. For this reason, it is possible for a person to become an electrician by trial and error, just as one may become a lawyer by reading law. Instruction, however, speeds this learning process by teaching the relationship of perceptions as they occur, thus promoting the development of the student's insight.
As perceptions increase in number and are assembled by the student into larger blocks of learning, they develop insight. As a result, learning becomes more meaningful and more permanent. Forgetting is less of a problem when there are more anchor points for tying insights together. It is a major responsibility of the instructor to organize demonstrations and explanations, and to direct practice, so that the student has better opportunities to understand the interrelationship of the many kinds of experiences that have been perceived. Pointing out the relationships as they occur, providing a secure and nonthreatening environment in which to learn, and helping the student acquire and maintain a favorable self-concept are key steps in fostering the development of insight.
Motivation is probably the dominant force which governs the student's progress and ability to learn. Motivation may be negative or positive, tangible or intangible, subtle and difficult to identify, or it may be obvious.
Negative motivation may engender fear, and be perceived by the student as a threat. While negative motivation may be useful in certain situations, characteristically it is not as effective in promoting efficient learning as positive motivation.
Positive motivation is provided by the promise or achievement of rewards. These rewards may be personal or social; they may involve financial gain, satisfaction of the self-concept, or public recognition. Motivation which can be used to advantage by the instructor includes the desire for personal gain, the desire for personal comfort or security, the desire for group approval, and the achievement of a favorable self-image.
The desire for personal gain, either the acquisition of possessions or status, is a basic motivational factor for all human endeavor. An individual may be motivated to dig a ditch or to design a supersonic airplane solely by the desire for financial gain.
Students are like typical employees in wanting a tangible return for their efforts. For motivation to be effective, students must believe that their efforts will be suitably rewarded. These rewards must be constantly apparent to the student during instruction, whether they are to be financial, self-esteem, or public recognition.
Lessons often have objectives which are not obvious at first. Although these lessons will pay dividends during later instruction, the student may not appreciate this fact. It is important for the instructor to make the student aware of those applications which are not immediately apparent. Likewise, the devotion of too much time and effort to drill and practice on operations which do not directly contribute to competent performance should be avoided. The desire for personal comfort and security is a form of motivation which instructors often forget. All students want secure, pleasant conditions and a safe environment. If they recognize that what they are learning may promote these objectives, their attention is easier to attract and hold. Insecure and unpleasant training situations inhibit learning.
Everyone wants to avoid pain and injury. Students normally are eager to learn operations or procedures which help prevent injury or loss of life. This is especially true when the student knows that the ability to make timely decisions, or to act correctly in an emergency, is based on sound principles.
The attractive features of the activity to be learned also can be a strong motivational factor. Students are anxious to learn skills which may be used to their advantage. If they understand that each task will be useful in preparing for future activities, they will be more willing to pursue it.
Another strong motivating force is group approval. Every person wants the approval of peers and superiors. Interest can be stimulated and maintained by building on this natural desire. Most students enjoy the feeling of belonging to a group and are interested in accomplishment which will give them prestige among their fellow students.
Every person seeks to establish a favorable self-image. In certain instances, this self-image may be submerged in feelings of insecurity or despondency. Fortunately, most people engaged in a task believe that success is possible under the right combination of circumstances and good fortune. This belief can be a powerful motivating force for students. An instructor can effectively foster this motivation by the introduction of perceptions which are solidly based on previously learned factual information that is easily recognized by the student. Each additional block of learning should help formulate insight which contributes to the ultimate training goals. This promotes student confidence in the overall training program and, at the same time, helps the student develop a favorable self-image. As this confirmation progresses and confidence increases, advances will be more rapid and motivation will be strengthened.
Positive motivation is essential to true learning. Negative motivation in the form of reproofs or threats should be avoided with all but the most overconfident and impulsive students. Slumps in learning are often due to declining motivation. Motivation does not remain at a uniformly high level. It may be affected by outside influences, such as physical or mental disturbances or inadequate instruction. The instructor should strive to maintain motivation at the highest possible level. In addition, the instructor should be alert to detect and counter any lapses in motivation.
Levels of learning may be classified in any number of ways. Four basic levels have traditionally been included in aviation instructor training. The lowest level is the ability to repeat something which one has been taught, without understanding or being able to apply what has been learned. This is referred to as rote learning. Progressively higher levels of learning are understanding what has been taught, achieving the skill for application of what has been learned, and correlation of what has been learned with other things previously learned or subsequently encountered. Figure 1-3
For example, a flight instructor may explain to a beginning student the procedure for entering a level, left turn. The procedure may include several steps such as: (1) visually clear the area, (2) add a slight amount of power to maintain airspeed, (3) apply aileron control pressure to the left, (4) add sufficient rudder pressure in the direction of the turn to avoid slipping and skidding, and (5) increase back pressure to maintain altitude. A student who can verbally repeat this instruction has learned the procedure by rote. This will not be very useful to the student if there is never an opportunity to make a turn in flight, or if the student has no knowledge of the function of airplane controls.
With proper instruction on the effect and use of the flight controls, and experience in controlling the airplane during straight-and-level flight, the student can consolidate these old and new perceptions into an insight on how to make a turn. At this point, the student has developed an understanding of the procedure for turning the airplane in flight. This understanding is basic to effective learning, but may not necessarily enable the student to make a correct turn on the first attempt.
When the student understands the procedure for entering a turn, has had turns demonstrated, and has practiced turn entries until consistency has been achieved, the student has developed the skill to apply what has been learned. This is a major level of learning, and one at which the instructor is too often willing to stop. Discontinuing instruction on turn entries at this point and directing subsequent instruction exclusively to other elements of piloting performance is characteristic of piecemeal instruction, which is usually inefficient. It violates the building block concept of instruction by failing to apply what has been learned to future learning tasks. The building block concept will be covered later in more detail.
The correlation level of learning, which should be the objective of aviation instruction, is that level at which the student becomes able to associate an element which has been learned with other segments or blocks of learning. The other segments may be items or skills previously learned, or new learning tasks to be undertaken in the future. The student who has achieved this level of learning in turn entries, for example, has developed the ability to correlate the elements of turn entries with the performance of chandelles and lazy eights.
Besides the four basic levels of learning, educational psychologists have developed several additional levels. These classifications consider what is to be learned. Is it knowledge only, a change in attitude, a physical skill, or a combination of knowledge and skill? One of the more useful categorizations of learning objectives includes three domains: cognitive domain (knowledge), affective domain (attitudes, beliefs, and values), and psychomotor domain (physical skills). Each of the domains has a hierarchy of educational objectives.
The listing of the hierarchy of objectives is often called a taxonomy. A taxonomy of educational objectives is a systematic classification scheme for sorting learning outcomes into the three broad categories (cognitive, affective, and psychomotor) and ranking the desired outcomes in a developmental hierarchy from least complex to most complex.
The cognitive domain, described by Dr. Benjamin Bloom, is one of the best known educational domains.
It contains additional levels of knowledge and understanding and is commonly referred to as Bloom's taxonomy of educational objectives.
In aviation, educational objectives in the cognitive domain refer to knowledge which might be gained as the result of attending a ground school, reading about aircraft systems, listening to a preflight briefing, reviewing meteorological reports, or taking part in computer-based training. The highest educational objective level in this domain may also be illustrated by learning to correctly evaluate a flight maneuver, repair an airplane engine, or review a training syllabus for depth and completeness of training.
The affective domain may be the least understood, and in many ways, the most important of the learning domains. A similar system for specifying attitudinal objectives has been developed by D.R. Krathwohl.
Like the Bloom taxonomy, Krathwohl's hierarchy attempts to arrange these objectives in an order of difficulty.
Since the affective domain is concerned with a student's attitudes, personal beliefs, and values, measuring educational objectives in this domain is not easy. For example, how is a positive attitude toward safety evaluated? Observable safety-related behavior indicates a positive attitude, but this is not like a simple pass/fail test that can be used to evaluate cognitive educational objective levels. Although a number of techniques are available for evaluation of achievement in the affective domain, most rely on indirect inferences.
There are several taxonomies which deal with the psychomotor domain (physical skills), but none are as popularly recognized as the Bloom and Krathwohl taxonomies. However, the taxonomy developed by E.J.
Simpson also is generally acceptable.
Psychomotor or physical skills always have been important in aviation. Typical activities involving these skills include learning to fly a precision instrument approach procedure, programming a GPS receiver, or using sophisticated maintenance equipment. As physical tasks and equipment become more complex, the requirement for integration of cognitive and physical skills increases.
The additional levels of learning definitely apply to aviation flight and maintenance training. A comparatively high level of knowledge and skill is required. The student also needs to have a well-developed, positive attitude. Thus, all three domains of learning, cognitive, affective, and psychomotor, are pertinent.
These additional levels of learning are the basis of the knowledge, attitude, and skill learning objectives commonly used in advanced qualification programs for airline training. They also can be tied to the practical test standards to show the level of knowledge or skill required for a particular task. A list of action verbs for the three domains shows appropriate behavioral objectives at each level. Figure 1-7
Instructors who are familiar with curricula development will recognize that the action verbs are examples of performance-based objectives. Expanded coverage of the concept of performance-based objectives is included in Chapter 4 of this handbook.
Even though the process of learning is profound, the main objective or purpose of most instruction typically is teaching a concept, a generalization, an attitude, or a skill. The process of learning a psychomotor or physical skill is much the same, in many ways, as cognitive learning. To provide a real illustration of physical skill learning, try the following exercise:
On a separate sheet of paper, write the word "learning" 15 times with your left hand or with your right hand, if you are left handed. Try to improve the speed and quality of your writing as you go along.
The above exercise contains a practical example of the multifaceted character of learning. It should be obvious that, while a muscular sequence was being learned, other things were happening as well. The perception changed as the sequence became easier. Concepts of how to perform the skill were developed and attitudes were changed.
Thinking back over their past experiences in learning to perform certain skills, students might be surprised at how much more readily they learned those skills that appealed to their own needs (principle of readiness). Shorter initial learning time and more rapid progress in improving the skill normally occurred. Conversely, where the desire to learn or improve was missing, little progress was made. A person may read dozens of books a year, but the reading rate will not increase unless there is a deliberate intent to increase it. In the preceding learning exercise, it is unlikely that any improvement occurred unless there was a clear intention to improve. To improve, one must not only recognize mistakes, but also make an effort to correct them. The person who lacks the desire to improve is not likely to make the effort and consequently will continue to practice errors. The skillful instructor relates the lesson objective to the student's intentions and needs and, in so doing, builds on the student's natural enthusiasm.
Logically, the point has been emphasized that the best way to prepare the student to perform a task is to provide a clear, step-by-step example. Having a model to follow permits students to get a clear picture of each step in the sequence so they understand what is required and how to do it. In flight or maintenance training, the instructor provides the demonstration, emphasizing the steps and techniques. During classroom instruction, an outside expert may be used, either in person or in a video presentation. In any case, students need to have a clear impression of what they are to do.
After experiencing writing a word with the wrong hand, consider how difficult it would be to tell someone else how to do it. Even demonstrating how to do it would not result in that person learning the skill. Obviously, practice is necessary. The student needs coordination between muscles and visual and tactile senses. Learning to perform various aircraft maintenance skills or flight maneuvers requires this sort of practice. There is another benefit of practice. As the student gains proficiency in a skill, verbal instructions mean more. Whereas a long, detailed explanation is confusing before the student begins performing, specific comments are more meaningful and useful after the skill has been partially mastered.
In learning some simple skills, students can discover their own errors quite easily. In other cases, such as learning complex aircraft maintenance skills, flight maneuvers, or flight crew duties, mistakes are not always apparent. A student may know that something is wrong, but not know how to correct it. In any case, the instructor provides a helpful and often critical function in making certain that the students are aware of their progress. It is perhaps as important for students to know when they are right as when they are wrong. They should be told as soon after the performance as possible, and should not be allowed to practice mistakes. It is more difficult to unlearn a mistake, and then learn it correctly, than to learn correctly in the first place. One way to make students aware of their progress is to repeat a demonstration or example and to show them the standards their performance must ultimately meet.
The experience of learning to write a word with the wrong hand probably confirmed what has been consistently demonstrated in laboratory experiments on skill learning. The first trials are slow, and coordination is lacking. Mistakes are frequent, but each trial provides clues for improvement in subsequent trials. The student modifies different aspects of the skill such as how to hold the pencil, or how to execute finger and hand movement.
Graphs of the progress of skill learning, such as the one shown below, usually follow the same pattern. There is rapid improvement in the early stages, then the curve levels off and may stay level for a significant period of time. Further improvement may seem unlikely. This is a typical learning plateau.
A learning plateau may signify any number of conditions. For example, the student may have reached capability limits, may be consolidating levels of skill, interest may have waned, or the student may need a more efficient method for increasing progress. Keep in mind that the apparent lack of increasing proficiency does not necessarily mean that learning has ceased. The point is that, in learning motor skills, a leveling off process, or a plateau, is normal and should be expected after an initial period of rapid improvement. The instructor should prepare the student for this situation to avert discouragement. If the student is aware of this learning plateau, frustration may be minimized.
In planning for student performance, a primary consideration is the length of time devoted to practice. A beginning student reaches a point where additional practice is not only unproductive, but may even be harmful. When this point is reached, errors increase, and motivation declines. As a student gains experience, longer periods of practice are profitable.
Another consideration is the problem of whether to divide the practice period. Perhaps even the related instruction should be broken down into segments, or it may be advantageous to plan one continuous, integrated sequence. The answer depends on the nature of the skill. Some skills are composed of closely related steps, each dependent on the preceding one. Learning to pack a parachute is a good example. Other skills are composed of related subgroups of skills. Learning to overhaul an aircraft engine is a good example.
If an instructor were to evaluate the fifteenth writing of the word "learning," only limited help could be given toward further improvement. The instructor could judge whether the written word was legible and evaluate it against some criterion or standard, or perhaps even assign it a grade of some sort. None of these actions would be particularly useful to the beginning student. However, the student could profit by having someone watch the performance and critique constructively to help eliminate errors.
In the initial stages, practical suggestions are more valuable to the student than a grade. Early evaluation is usually teacher oriented. It provides a check on teaching effectiveness, can be used to predict eventual student learning proficiency, and can help the teacher locate special problem areas. The observations on which the evaluations are based also can identify the student's strengths and weaknesses, a prerequisite for making constructive criticism.
The final and critical question is, Can the student use what has been learned? It is not uncommon to find that students devote weeks and months in school learning new abilities, and then fail to apply these abilities on the job. To solve this problem, two conditions must be present. First, the student must learn the skill so well that it becomes easy, even habitual; and second, the student must recognize the types of situations where it is appropriate to use the skill. This second condition involves the question of transfer of learning, which is briefly discussed later in this chapter.
Memory is an integral part of the learning process. Although there are several theories on how the memory works, a widely accepted view is the multi-stage concept which states that memory includes three parts: sensory, working or short-term, and long-term systems. As shown in figure 1-9 on the following page, the total system operates somewhat like an advanced computer that accepts input (stimuli) from an external source, contains a processing apparatus, a storage capability, and an output function.
The sensory register receives input from the environment and quickly processes it according to the individual's preconceived concept of what is important. However, other factors can influence the reception of information by the sensory system. For example, if the input is dramatic and impacts more than one of the five senses, that information is more likely to make an impression. The sensory register processes inputs or stimuli from the environment within seconds, discards what is considered extraneous, and processes what is determined by the individual to be relevant. This is a selective process where the sensory register is set to recognize certain stimuli and immediately transmit them to the working memory for action. The process is called precoding. An example is sensory precoding to recognize a fire alarm. No matter what is happening at the time, when the sensory register detects a fire alarm, the working memory is immediately made aware of the alarm and preset responses begin to take place.
Within seconds the relevant information is passed to the working or short-term memory where it may temporarily remain or rapidly fade, depending on the individual's priorities. Several common steps help retention in the short-term memory. These include rehearsal or repetition of the information and sorting or categorization into systematic chunks. The sorting process is usually called coding or chunking. A key limitation of the working memory is that it takes 5-10 seconds to properly code information. If the coding process is interrupted, that information is lost after about 20 seconds.
The working or short-term memory is not only time limited, it also has limited capacity, usually about seven bits or chunks of information. A seven-digit telephone number is an example. As indicated, the time limitation may be overcome by rehearsal. This means learning the information by a rote memorization process. Of course, rote memorization is subject to imperfections in both the duration of recall and in its accuracy. The coding process is more useful in a learning situation. In addition, the coding process may involve recoding to adjust the information to individual experiences. This is when actual learning begins to take place. Therefore, recoding may be described as a process of relating incoming information to concepts or knowledge already in memory.
Methods of coding vary with subject matter, but typically they include some type of association. Use of rhymes or mnemonics is common. An example of a useful mnemonic is the memory aid for one of the magnetic compass errors. The letters "ANDS" stand for Accelerate North, Decelerate South.
Variations of the coding process are practically endless. They may consist of the use of acronyms, the chronology of events, images, semantics, or an individually developed structure based on past experiences. Developing a logical strategy for coding information is a significant step in the learning process. In this brief discussion of memory, it may appear that sensory memory is distinct and separate from working or short-term memory. This is not the case. In fact, all of the memory systems are intimately related. Many of the functions of working or short-term memory are nearly identical to long-term memory functions.
What then is distinctive about the long-term memory? This is where information is stored for future use. For the stored information to be useful, some special effort must have been expended during the coding process in working or short-term memory. The coding should have provided meaning and connections between old and new information. If initial coding is not properly accomplished, recall will be distorted and may even be impossible. The more effective the coding process, the easier the recall. However, it should be noted that the long-term memory is a reconstruction, not a pure recall of information or events. It is also subject to limitations, such as time, biases, and, in many cases, personal inaccuracies. This is why two people who view the same event will often have totally different recollections.
Memory also applies to psychomotor skills. For example, with practice, a tennis player may be able to serve a tennis ball at a high rate of speed and with accuracy. This may be accomplished with very little thought. For a pilot, the ability to instinctively perform certain maneuvers or other tasks which require manual dexterity and precision provides obvious benefits. For example, it allows the pilot more time to concentrate on other essential duties such as navigation, communications with air traffic control facilities, and visual scanning for other aircraft.
As implied, one of the major responsibilities of the instructor is to help students use their memories effectively. Strategies designed to aid students in retention and recall of information from the long-term memory are included later in this chapter. At the same time, an associated phenomenon, forgetting, cannot be ignored.
A consideration of why people forget may point the way to help them remember. Several theories account for forgetting, including disuse, interference, and repression.
The theory of disuse suggests that a person forgets those things which are not used. The high school or college graduate is saddened by the lack of factual data retained several years after graduation. Since the things which are remembered are those used on the job, a person concludes that forgetting is the result of disuse. But the explanation is not quite so simple. Experimental studies show, for example, that a hypnotized person can describe specific details of an event which normally is beyond recall. Apparently the memory is there, locked in the recesses of the mind. The difficulty is summoning it up to consciousness.
The basis of the interference theory is that people forget something because a certain experience has overshadowed it, or that the learning of similar things has intervened. This theory might explain how the range of experiences after graduation from school causes a person to forget or to lose knowledge. In other words, new events displace many things that had been learned. From experiments, at least two conclusions about interference may be drawn. First, similar material seems to interfere with memory more than dissimilar material; and second, material not well learned suffers most from interference.
Freudian psychology advances the view that some forgetting is repression due to the submersion of ideas into the subconscious mind. Material that is unpleasant or produces anxiety may be treated this way by the individual, but not intentionally. It is subconscious and protective. The repression theory does not appear to account for much forgetfulness of the kind discussed in this chapter, but it does tend to explain some cases.
Each of the theories implies that when a person forgets something, it is not actually lost. Rather, it is simply unavailable for recall. The instructor's problem is how to make certain that the student's learning is readily available for recall. The following suggestions can help.
Teach thoroughly and with meaning. Material thoroughly learned is highly resistant to forgetting. This is suggested by experimental studies and it also was pointed out in the sections on skill learning. Meaningful learning builds patterns of relationship in the student's consciousness. In contrast, rote learning is superficial and is not easily retained. Meaningful learning goes deep because it involves principles and concepts anchored in the student's own experiences. The following discussion emphasizes five principles which are generally accepted as having a direct application to remembering.
Responses which give a pleasurable return tend to be repeated. Absence of praise or recognition tends to discourage, and any form of negativism in the acceptance of a response tends to make its recall less likely.
As discussed earlier, each bit of information or action which is associated with something to be learned tends to facilitate its later recall by the student. Unique or disassociated facts tend to be forgotten unless they are of special interest or application.
People learn and remember only what they wish to know. Without motivation there is little chance for recall. The most effective motivation is based on positive or rewarding objectives.
Although we generally receive what we learn through the eyes and ears, other senses also contribute to most perceptions. When several senses respond together, a fuller understanding and greater chance of recall is achieved.
Each repetition gives the student an opportunity to gain a clearer and more accurate perception of the subject to be learned, but mere repetition does not guarantee retention. Practice provides an opportunity for learning, but does not cause it. Further, some research indicates that three or four repetitions provide the maximum effect, after which the rate of learning and probability of retention fall off rapidly.
Along with these five principles, there is a considerable amount of additional literature on retention of learning during a typical academic lesson. After the first 10-15 minutes, the rate of retention drops significantly until about the last 5-10 minutes when students wake up again. Students passively listening to a lecture have roughly a five percent retention rate over a 24-hour period, but students actively engaged in the learning process have a much higher retention. This clearly reiterates the point that active learning is superior to just listening.
During a learning experience, the student may be aided by things learned previously. On the other hand, it is sometimes apparent that previous learning interferes with the current learning task. Consider the learning of two skills. If the learning of skill A helps to learn skill B, positive transfer occurs. If learning skill A hinders the learning of skill B, negative transfer occurs. For example, the practice of slow flight (skill A) helps the student learn short-field landings (skill B). However, practice in making a landing approach in an airplane (skill A) may hinder learning to make an approach in a helicopter (skill B). It should be noted that the learning of skill B may affect the retention or proficiency of skill A, either positively or negatively. While these processes may help substantiate the interference theory of forgetting, they are still concerned with the transfer of learning.
It seems clear that some degree of transfer is involved in all learning. This is true because, except for certain inherent responses, all new learning is based upon previously learned experience. People interpret new things in terms of what they already know.
Many aspects of teaching profit by this type of transfer. It may explain why students of apparently equal ability have differing success in certain areas. Negative transfer may hinder the learning of some; positive transfer may help others. This points to a need to know a student's past experience and what has already been learned. In lesson and syllabus development, instructors should plan for transfer by organizing course materials and individual lesson materials in a meaningful sequence. Each phase should help the student learn what is to follow.
The cause of transfer and exactly how it occurs is difficult to determine, but no one disputes the fact that transfer does occur. The significance of this ability for the instructor is that the students can be helped to achieve it. The following suggestions are representative of what educational psychologists believe should be done.
The formation of correct habit patterns from the beginning of any learning process is essential to further learning and for correct performance after the completion of training. Remember, primacy is one of the fundamental principles of learning. Therefore, it is the instructor's responsibility to insist on correct techniques and procedures from the outset of training to provide proper habit patterns. It is much easier to foster proper habits from the beginning of training than to correct faulty ones later.
Due to the high level of knowledge and skill required in aviation for both pilots and maintenance technicians, training traditionally has followed a building block concept. This means new learning and habit patterns are based on a solid foundation of experience and/or old learning. Everything from intricate cognitive processes to simple motor skills depends on what the student already knows and how that knowledge can be applied in the present. As knowledge and skill increase, there is an expanding base upon which to build for the future.
Louisville in the American Civil War
Louisville in the American Civil War was a major stronghold of Union forces, which kept Kentucky firmly in the Union. It was the center of planning, supplies, recruiting and transportation for numerous campaigns, especially in the Western Theater. By the end of the war, Louisville had not been attacked once, although skirmishes and battles, including the battles of Perryville and Corydon, took place nearby.
1850-1860: The gathering storm
During the 1850s, Louisville became a vibrant and wealthy city, but together with the success, the city also harbored racial and ethnic tensions. It attracted numerous immigrants, had a large slave market from which enslaved African Americans were sold to the Deep South, and had both slaveholders and abolitionists as residents. In 1850 Louisville became the tenth largest city in the United States. Louisville's population rose from 10,000 in 1830 to 43,000 in 1850. It became an important tobacco market and pork packing center. By 1850, Louisville's wholesale trade totaled $20 million (USD) in sales. The Louisville-New Orleans river route held top rank in freight and passenger traffic on the entire Western river system.
Not only did Louisville profit from the river, in August 1855, its citizens greeted the arrival of the locomotive "Hart County" at Ninth and Broadway and connection to the nation via railroad. The first passengers arrived by train on the Louisville and Frankfort Railroad. James Guthrie, president of the Louisville & Frankfort, pushed the railroad along the Shelbyville turnpike (Frankfort Avenue) through Gilman's Point (St. Matthews) and on to Frankfort. The track entered Louisville on Jefferson Street and ended at Brook Street. The state paid tribute to James Guthrie by naming the small railroad community of Guthrie, Kentucky in Todd County after him.
Leven Shreve, a Louisville civic leader, became the first president of the Louisville and Nashville Railroad (L&N), which was to prove more important for trade. It led to the developing western states and linked with Mississippi River traffic. With the railroads, Louisville could manufacture furniture and other goods, and export products to Southern cities. Louisville was on her way to becoming an industrial city. The Louisville Rolling Mill built girders and rails, and other factories made cotton machinery, which was sold to Southern customers. Louisville built steamboats. Louisville emerged with an iron-working industry; the plant at Tenth and Main was called Ainslie, Cochran, and Company.
Louisville also became a meat packing city, becoming the second largest city in the nation to pack pork, butchering an average of 300,000 hogs a year. Louisville led the nation in hemp manufacturing and cotton bagging. Farmington Plantation, owned by John Speed, was one of the larger hemp plantations in Louisville. Hemp was Kentucky's leading agricultural product from 1840 to 1860, and the leading commodity crop of the fertile Bluegrass Region. Jefferson County led all other markets in gardening and orchards. The sales of livestock, quality horses and cattle, was also important.
Attracted by jobs and pushed by political unrest and famine, European immigrants flowed into the city from Germany and Ireland; most of them were Catholic, unlike the Protestants who lived in the city. By 1850, 359,980 immigrants arrived in the United States, and by 1854, 427,833 immigrants arrived to seek out a new living. With the increase in new immigrants in the city, native Louisville residents felt threatened by change, and began to express anti-foreign, anti-Catholic sentiments. In 1841, the growth in population prompted the Catholic archdiocese to move the bishop's seat from Bardstown to Louisville. The archdiocese began construction on a new Catholic cathedral, which was completed in 1852. This asserted Catholic presence in the city.
In 1843, a new political party arose, called the American Republican Party. On July 5, 1845, the American Republican party changed their name to the Native American Party and held their first national convention in Philadelphia. The party opposed liberal immigration policies. On June 17, 1854, the Order of the Star Spangled Banner held their second national convention in New York City. The members were "native Americans" and anti-Catholic. When their members answered questions about the group, they responded with "I know nothing about it," becoming the Know-Nothing or Native American Party. The new political party gained national support. The Know-Nothing party encouraged and tapped into the nation's prejudice and fears that Catholic immigrants would take control of the United States. Hostility to Catholics had a long history based on national rivalries in Europe. By 1854, the Know-Nothings gained control of Jefferson County's government.
Ethnic tension came to a boil in 1855, during the mayor's office election. On August 6, 1855, "Bloody Monday" erupted, in which Protestant mobs bullied immigrants away from the polls and began rioting in Irish and German neighborhoods. Protestant mobs attacked and killed at least twenty-two people. The rioting began at Shelby and Green Street (Liberty) and progressed through the city's East End. After burning houses on Shelby Street, the mob headed for William Ambruster's brewery in the triangle between Baxter Avenue and Liberty Street. They set the place ablaze and ten Germans died in the fire. When the mob burned Quinn's Irish Row on the North side of Main between Eleventh and Twelfth Streets, some of the tenants died in the fire; the mob shot and killed others. The Know-Nothing party won the election in Louisville and many other Kentucky counties.
As in other cities, slavery was a consuming topic; some of Louisville's economy was built on its thriving slave market. Slave traders' revenues, and those from feeding, clothing and transporting the slaves to the Deep South, all contributed to the city's economy. The direct use of slaves as labor in the central Kentucky economy had lessened by 1850. But throughout the 1850s, the state slaveholders sold 2500-4000 slaves annually downriver to the Deep South. Slave pens were located on Second between Market and Main Streets.
The Kansas-Nebraska Act of 1854 added to the controversy, as it threatened potentially lucrative expansion of slavery to western states. Louisville also had a free black population, among whom some managed to acquire property. Washington Spradling, freed from slavery in 1814, became a barber. By the 1850s, he owned real estate valued at $30,000. With its agriculture, shipping trade and industry, and slave markets, Louisville was a city that shared in cultures of both the agricultural South and the industrial North.
1860: The eve of war
In the November 1860 Presidential election, Kentucky voters gave native Kentuckian Abraham Lincoln less than one percent of the vote. Kentuckians did not like Lincoln, because he stood for the eradication of slavery and his Republican Party aligned itself with the North. But, neither did they vote for native son John C. Breckinridge and his Southern Democratic Party, generally regarded as secessionists. In 1860, people in the state held 225,000 slaves, with Louisville's slaves comprising 7.5 percent of the population. The voters wanted both to keep slavery and stay in the Union.
Most Kentuckians, including residents of Louisville, voted for John Bell of Tennessee, of the Constitutional Union Party. It stood for preserving the Union and keeping the status quo on slavery. Others voted for Stephen Douglas of Illinois, who ran for the Democratic Party ticket. Louisville cast 3,823 votes for John Bell. Douglas received 2,633 votes.
On December 20, 1860, South Carolina seceded from the Union, and other Southern states followed; eventually eleven Southern states left the Union, but Kentucky did not. Senator Henry Clay of Kentucky had long worked for compromise, and the state followed his lead.
1861: War breaks out
On April 12, 1861, Confederate Brigadier General Pierre G. T. Beauregard ordered the firing on Fort Sumter, located in the Charleston, South Carolina harbor, thus starting the Civil War. At the time of the Battle of Fort Sumter, the fort's commander was Union Major Robert Anderson of Louisville.
After the attack on Fort Sumter, President of the United States Abraham Lincoln called for 75,000 volunteers. Kentucky Governor Beriah Magoffin refused to send any men to act against the Southern states, and both Unionists and secessionists supported his position. On April 17, 1861, Louisville hoped to remain neutral and spent $50,000 for the defense of the city, naming Lovell Rousseau as brigadier general. Rousseau formed the Home Guard. When Unionists asked Lincoln for help, he secretly sent arms to the Home Guard. The U. S. government sent a shipment of weapons to Louisville and kept the rifles hidden in the basement of the Jefferson County Courthouse.
Louisville residents were divided as to which side they should support. Economic interests and previous relationships often determined alliances. Prominent Louisville attorney James Speed, brother of Lincoln's close friend Joshua Fry Speed, strongly advocated keeping the state in the Union. Louisville Main Street wholesale merchants, who had extensive trade with the South, often supported the Confederacy. Blue-collar workers, small retailers, and professional men, such as lawyers, supported the Union. On April 20, two companies of Confederate volunteers left by steamboat for New Orleans, and five days later, three more companies departed for Nashville on the L & N Railroad. Union recruiters raised troops at Eighth and Main, and the Union recruits left for Indiana to join other Union regiments.
On May 20, 1861, Kentucky declared its neutrality. An important state geographically, Kentucky had the Ohio River as a natural barrier. Kentucky's natural resources, manpower, and the L&N Railroad made both the North and South respect Kentucky's neutrality. President Lincoln and Confederate President Jefferson Davis both maintained hands-off policies when dealing with Kentucky, hoping not to push the state into one camp or the other. From the L&N depot on Ninth and Broadway in Louisville and the steamboats at Louisville wharfs, supporters of the Confederacy sent uniforms, lead, bacon, coffee and war material south. Although Lincoln did not want to upset Kentucky's neutrality, on July 10, 1861, a federal judge in Louisville ruled that the United States government had the right to stop shipments of goods from going south over the L&N railroad.
On July 15, 1861, the War Department authorized United States Navy Lieutenant William "Bull" Nelson to establish a training camp and organize a brigade of infantry. Nelson commissioned William J. Landram as a colonel of cavalry, and Theophilus T. Garrard, Thomas E. Bramlette, and Speed S. Fry as colonels of infantry. Landram turned his commission over to Lieutenant Colonel Frank Wolford. When Garrard, Bramlette, and Fry established their camps at Camp Dick Robinson in Garrard County, and Wolford erected his camp near Harrodsburg, Kentucky's neutrality effectively ended. Brigadier General Rousseau established a Union training camp opposite Louisville in Jeffersonville, Indiana, naming the camp after Joseph Holt. Governor Magoffin protested to Lincoln about the Union camps, but Lincoln ignored the protest, stating that the will of the people was for the camps to remain in Kentucky.
In August 1861, Kentucky held elections for the State General Assembly, and Unionists won majorities in both houses. Residents of Louisville continued to be divided on the issue of which side to join. The Louisville Courier was very much pro-Confederate, while the Louisville Journal was pro-Union.
On September 4, 1861, Confederate General Leonidas Polk, outraged by Union intrusions in the state, invaded Columbus, Kentucky. As a result of the Confederate invasion, Union General Ulysses S. Grant entered Paducah, Kentucky. Jefferson Davis allowed Confederate troops to stay in Kentucky. General Albert Sidney Johnston, commander of all Confederate forces in the West, sent General Simon Bolivar Buckner of Kentucky to invade Bowling Green, Kentucky. Union forces in Kentucky saw Buckner's move toward Bowling Green as the beginning of a massive attack on Louisville. With twenty thousand troops, Johnston established a defensive line stretching from Columbus in western Kentucky to the Cumberland Gap, controlled by Confederate General Felix Zollicoffer.
On September 7, the Kentucky State legislature, angered by the Confederate invasion, ordered the Union flag to be raised over the state capitol in Frankfort and declared its allegiance with the Union. The legislature also passed the "Non-Partisan Act", which stated that "any person or any person's family that joins or aids the so-called Confederate Army was no longer a citizen of the Commonwealth." The legislature denied any member of the Confederacy the right to land, titles or money held in Kentucky or the right to legal redress for action taken against them.
With Confederate troops in Bowling Green, Union General Robert Anderson moved his headquarters to Louisville. Union General George McClellan appointed Anderson as military commander for the District of Kentucky on June 4, 1861. On September 9, the Kentucky legislature asked Anderson to be made commander of the Federal military force in Kentucky. The Union army accepted the Louisville Legion at Camp Joe Holt in Indiana into the regular army. Louisville mayor John M. Delph sent two thousand men to build defenses around the city.
On October 8, Anderson stepped down as commander of the Department of the Cumberland and Union General William Tecumseh Sherman took charge of the Home Guard. Lovell Rousseau sent the Louisville Legion along with another two thousand men across the river to protect the city. Sherman wrote to his superiors that he needed 200,000 men to take care of Johnston's Confederates. The Louisville Legion and the Home Guard marched out to meet Buckner's forces, but Buckner did not approach Louisville. Buckner's men destroyed the bridge over the Rolling Fork River in Lebanon Junction and with the mission completed, Buckner's men returned to Bowling Green.
Louisville became a staging ground for Union troops heading south. Union troops flowed into Louisville from Ohio, Indiana, Pennsylvania and Wisconsin. White tents and training grounds sprang up at the Oakland track, Old Louisville and Portland. Camps were also established at Eighteenth and Broadway, and along the Frankfort and Bardstown turnpikes.
1862-63: Louisville under threats of attack
By early 1862, Louisville had 80,000 Union troops throughout the city. With so many troops, entrepreneurs set up gambling establishments along the north side of Jefferson from 4th to 5th Street, extending around the corner from 5th to Market, then continuing on the south side of Market back to 4th Street. Photography studios and military goods shops, such as Fletcher & Bennett on Main Street, catered to the Union officers and soldiers. Also capitalizing on the troops, brothels were quickly opened around the city.
In January 1862, Union General George Thomas defeated Confederate General Felix Zollicoffer at the Battle of Mill Springs, Kentucky. In February 1862, Union General Ulysses Grant and Admiral Andrew Foote's gunboats captured Fort Henry and Fort Donelson on the Kentucky and Tennessee border. Confederate General Albert Sidney Johnston's defensive line in Kentucky crumbled rapidly. Johnston had no choice but to fall back to Nashville, Tennessee. No defensive preparations had been made at Nashville, so Johnston continued to fall back to Corinth, Mississippi.
Although the threat of invasion by Confederates subsided, Louisville remained a staging area for Union supplies and troops heading south. By May 1862, the steamboats arrived and departed at the wharf in Louisville with their cargoes. Military contractors in Louisville provided the Union army with two hundred head of cattle each day, and the pork packers provided thousands of hogs daily. Trains departed for the south along the L&N railroad.
In July 1862, Confederate generals Braxton Bragg, commander of the Army of Mississippi, and Edmund Kirby Smith, commander of the Army of East Tennessee, planned an invasion of Kentucky. On August 13, Smith marched with 9,000 men out of Knoxville into Kentucky and arrived in Barbourville. On August 20, Smith announced that he would take Lexington. On August 28, Bragg's army moved west. At the Battle of Richmond, Kentucky, on August 30, Smith's Confederate forces defeated Union General William "Bull" Nelson's troops, capturing the entire force. This left Kentucky with no Union support. Nelson managed to escape back to Louisville. Smith marched into Lexington and sent a Confederate cavalry force to take Frankfort, Kentucky's capital.
Union General Don Carlos Buell's army withdrew from Alabama and headed back to Kentucky. Union General Henry Halleck, commander of all Union forces in the West, sent two divisions from General Ulysses Grant's army, stationed in Mississippi, to Buell. Confederate General John Hunt Morgan, of Lexington, Kentucky, managed to destroy the L&N railroad tunnel at Gallatin, Tennessee, cutting off all supplies to Buell's Union army. On September 5, Buell reached Murfreesboro, Tennessee and headed for Nashville. On September 14, Bragg reached Glasgow, Kentucky. On that same day, Buell reached Bowling Green, Kentucky.
Bragg decided to take Louisville. One of the major objectives of the Confederate campaign in Kentucky was to seize the Louisville and Portland Canal and sever Union supply routes on the Ohio River. One Confederate officer suggested destroying the Louisville canal so completely that "future travelers would hardly know where it was." On September 16, Bragg's army reached Munfordville, Kentucky. Colonel James Chalmers attacked the Federal garrison at Munfordville but was repulsed, and Bragg had to come to his support. Bragg arrived at Munfordville with his entire force, and the Union garrison soon surrendered.
Buell left Bowling Green and headed for Louisville. Fearing that Buell would not arrive in Louisville to prevent Bragg's army from capturing the city, Union General William "Bull" Nelson ordered the construction of a hasty defensive line around the city. He also ordered the placement of pontoon bridges across the Ohio to facilitate the evacuation of the city or to receive reinforcements from Indiana. Two pontoon bridges built of coal barges were erected, one at the location of the Big Four Bridge, and the other from Portland to New Albany. The Union Army arrived in time to prevent the Confederate seizure of the city. On September 25, Buell's tired and hungry men arrived in the city.
Bragg moved his army to Bardstown but did not take Louisville. Bragg urged General Smith to join his forces to take Louisville, but Smith told him to take Louisville on his own.
With the Confederate army under Bragg preparing to attack Louisville, the citizens of Louisville panicked. On September 22, 1862 General Nelson issued an evacuation order: "The women and children of this city will prepare to leave the city without delay." He ordered the Jeffersonville ferry to be used for military purposes only. Private vehicles were not allowed to go aboard the ferry boats without a special permit. Hundreds of Louisville residents gathered at the wharf for boats to New Albany or Jeffersonville. With Frankfort in Confederate hands for about a month, Governor Magoffin maintained his office in Louisville and the state legislature held their sessions in the Jefferson County Courthouse. Troops, volunteers and impressed labor worked around the clock to build a ring of breastworks and entrenchments around the city. New Union regiments flowed into the city. General William "Bull" Nelson took charge of the defense of Louisville. He sent Union troops to build pontoon bridges at Jeffersonville and New Albany to speed up the arrival of reinforcements, supplies and, if needed, the emergency evacuation of the city.
Instead of taking Louisville, Bragg left Bardstown to install Confederate Governor Richard Hawes at Frankfort. On September 26, five hundred Confederate cavalrymen rode into the area of Eighteenth and Oak, capturing fifty Union soldiers. Confederates placed pickets around Middletown on the 26th, and on the 27th their soldiers repelled Union forces from Middletown near Shelbyville Pike. Southern forces reached within two miles of the city, but were not numerous enough to invade it. On September 30, Confederate and Union pickets fought at Gilman's Point in St. Matthews, and Union forces pushed the Confederates back through Middletown to Floyd's Fork.
The War Department ordered "Bull" Nelson to command the newly formed Army of the Ohio. While Louisville prepared for the Confederate army under Bragg, General Jefferson C. Davis (not to be confused with Confederate President Jefferson Davis), who could not reach his command under General Don Carlos Buell, met with General Nelson to offer his services. General Nelson gave him command of the city militia. General Davis opened an office and assisted in organizing the city militia. On Wednesday, General Davis visited General Nelson in his room at the Galt House. General Davis told General Nelson that the brigade Nelson had assigned him was ready for service and asked if he could obtain arms for it. This led to an argument in which Nelson threatened Davis with arrest. General Davis left the room and, in order to avoid arrest, crossed over the river to Jeffersonville, where he remained until the next day, when General Stephen G. Burbridge joined him. General Burbridge had also been relieved of command by General Nelson for a trivial cause. General Davis went to Cincinnati with General Burbridge and reported to General Wright, who ordered General Davis to return to Louisville and report to General Buell, and General Burbridge to remain in Cincinnati.
General Davis returned to Louisville and reported to Buell. When General Davis saw General Nelson in the main hall of the Galt House, fronting the office, he asked the Governor of Indiana, Oliver Morton, to witness the conversation between him and General Nelson. The Governor agreed and the two walked up to General Nelson. General Davis confronted General Nelson and told him that he had taken advantage of his authority. Their argument escalated and Nelson slapped Davis in the face, challenging him to a duel. Within three minutes, Davis returned with a pistol he had borrowed and shot Nelson. The general whispered, "It's all over," and died fifteen minutes later.
With General Nelson dead, the command switched over to General Don Carlos Buell. On October 1, the Union army marched out of Louisville with sixty thousand men. Buell sent a small Federal force to Frankfort to deceive Bragg as to the exact direction and location of the Federal army. The ruse worked. On October 4, the small Federal force attacked Frankfort and Bragg left the city and headed back for Bardstown, thinking the entire Federal force was headed for Frankfort. Bragg decided that all Confederate forces should concentrate at Harrodsburg, Kentucky, ten miles (16 km) northwest of Danville. On October 8, 1862, Buell and Bragg fought at Perryville, Kentucky. Bragg's 16,000 men attacked Buell's 60,000 men. Federal forces suffered 845 dead, 2,851 wounded and 515 missing, while the Confederate toll was 3,396. Although Bragg won the Battle of Perryville tactically, he wisely decided to pull out of Perryville and link up with Smith. Once Smith and Bragg joined forces, Bragg decided to leave Kentucky and head for Tennessee.
After the battle, thousands of wounded men flooded into Louisville. Hospitals were set up in public schools, homes, factories and churches. The Fifth Ward School, built at 5th and York Streets in 1855, became Military Hospital Number Eight. The United States Marine Hospital also became a hospital for the wounded Union soldiers from the battle of Perryville. Constructed between 1845 and 1852, the three-story Greek revival style Louisville Marine Hospital contained one hundred beds. It became the prototype for seven U.S. Marine Hospital Service buildings, including Paducah, Kentucky, which later became Fort Anderson. Union surgeons erected the Brown General Hospital, located near the Belknap campus of the University of Louisville, and other hospitals were erected at Jeffersonville and New Albany, Indiana. By early 1863, the War Department and the U.S. Sanitary Commission erected nineteen hospitals. By early June 1863, 930 deaths had been recorded in the Louisville hospitals. Cave Hill Cemetery set aside plots for the Union dead.
Louisville also had to contend with Confederate prisoners. Located at the corner of Green Street and 5th Street, the Union Army Prison, also called the "Louisville Military Prison", took over the old "Medical College building." Union authorities moved the prison near the corner of 10th and Broadway Streets. By August 27, 1862, Confederate prisoners of war were taken to the new military prison. The old facility continued to house new companies of Provost Guards. From October 1, 1862 to December 14, 1862 the new Louisville Military Prison housed 3,504 prisoners. In December 1863, the prison held over 2,000 men, including political prisoners, Union deserters, and Confederate prisoners of war.
Made of wood, the prison covered an entire city block, stretching from east to west between 10th and 11th Streets and north to south between Magazine and Broadway Streets. Its main entrance was located on Broadway near 10th Street. A high fence surrounded the prison with at least two prison barracks. The prison hospital was attached to the prison and consisted of two barracks on the south and west sides of the square with forty beds in each building. The Union commander at the Louisville Military Prison was Colonel Dent. In April 1863, Captain Stephen E. Jones succeeded him. In October 1863, military authorities replaced Captain Jones with C. B. Pratt.
A block away, Union authorities took over a large house on Broadway between 12th and 13th Streets and converted it into a military prison for women.
Emancipation Proclamation
On September 22, 1862, President Lincoln issued the Emancipation Proclamation, which declared that as of January 1, 1863, all slaves in the rebellion states would be free. Although this did not affect slaveholding in Kentucky at the time, owners felt threatened. Some Kentucky Union soldiers, including Colonel Frank Wolford of the 1st Kentucky Cavalry, quit the army in protest of freeing the slaves. The proclamation presaged an end to slavery.
So many slaves arrived at the Union camp that the Army set up a contraband camp to accommodate them. The Reverend Thomas James, an African Methodist Episcopal minister from New York, supervised activities at the camp and set up a church and school for the refugees. Both adults and children started learning to read. Under direction by generals Stephen G. Burbridge and John M. Palmer, James monitored conditions at prisons and could call on US troops to protect slaves from being held illegally, which he did several times.
The Union's recruitment of slaves into the army (which gained them freedom) turned some slaveholders in Kentucky against the US government. In later years, the depredations of guerrilla warfare in the state, together with Union measures to try to suppress it, and the excesses of General Burbridge as military governor of Kentucky, were probably more significant in alienating more citizens. Civic rights were overridden during the crisis. These issues turned many against the Republican administration.
After the war ended, the Democrats regained power in central and western Kentucky, which the former slaveholders and their culture dominated. Because this area was the more populous and the Democrats also passed legislation essentially disfranchising freedmen, the white Democrats controlled politics in the state and sent mostly their representatives to Congress for a century. In the mid-1960s, the federal Civil Rights Act and Voting Rights Act ended legal segregation of public facilities and protected voting rights of minorities.
The Taylor Barracks at Third and Oak in Louisville recruited black soldiers for the United States Colored Troops. Slaves gained freedom in exchange for service to the Union. Slave women married to USCT men received freedom, as well. To secure legal freedom for the many slave women arriving alone at the contraband camp, Burbridge directed James to marry them to available USCT soldiers, if both parties were willing. Black Union soldiers who died in service were buried in the Louisville Eastern Cemetery.
In the Summer of 1863, Confederate John Hunt Morgan violated orders and led his famous raid into Ohio and Indiana to give the northern states a taste of the war. He traveled with his troops through north-central Kentucky, trekking from Bardstown to Garnettsville, a now defunct town in Otter Creek Park. They took the Lebanon garrison, capturing hundreds of Union soldiers and then releasing them on parole. Before crossing the Ohio River into Indiana, Morgan and his crew arrived in Brandenburg, where they proceeded to capture two steamers, the John B. McCombs and the Alice Dean; the Alice Dean burned after their crossing.
After the fall of New Orleans and the capture of Vicksburg, Mississippi on July 4, 1863, the Mississippi and Ohio Rivers were open to Union boats without harassment. On December 24, 1863, a steamboat from New Orleans reached Louisville.
1864: Military rule
Widespread guerrilla warfare in the state meant a widespread breakdown in the society, causing residents to suffer. In Kentucky, the Union defined a guerrilla as any member of the Confederate army who destroyed supplies, equipment or money. On January 12, 1864, Union General Stephen G. Burbridge, formerly supervising Louisville, succeeded General Jeremiah Boyle as Military Commander of Kentucky.
On February 4, 1864 at the Galt House, Union generals Ulysses S. Grant, William S. Rosecrans, George Stoneman, Thomas L. Crittenden, James S. Wadsworth, David Hunter, John Schofield, Alexander McCook, Robert Allen, George Thomas, Stephen Burbridge and Rear Admiral David Porter met to discuss the most important campaign of the war. It would divide the Confederacy into three parts. In a follow-up meeting on March 19, generals Grant and William Tecumseh Sherman met at the Galt House to plan the spring campaign. Grant took on Confederate General Robert E. Lee at Richmond, and Sherman confronted General Joseph E. Johnston, capturing Atlanta, Georgia in the process.
On February 21, 1864, Jefferson General Hospital, the third-largest hospital during the Civil War, was established across the Ohio River at Port Fulton, Indiana to tend to soldiers injured due to the war.
On July 5, 1864, President Abraham Lincoln temporarily suspended the writ of habeas corpus, which meant a person could be imprisoned without trial, his house searched without warrant, and the individual arrested without charge. Lincoln also declared martial law in Kentucky, which meant that military authorities had the ultimate rule. Civilians accused of crimes would be tried not in a civilian court, but instead a military court, in which the citizen's rights were not held as under the Constitution. On the same day, General Burbridge was appointed military governor of Kentucky with absolute authority.
On July 16, 1864, Burbridge issued Order No. 59: "Whenever an unarmed Union citizen is murdered, four guerrillas will be selected from the prison and publicly shot to death at the most convenient place near the scene of the outrages." On August 7, Burbridge issued Order No. 240 in which Kentucky became a military district under his direct command. Burbridge could seize property without trial from persons he deemed disloyal. He could also execute suspects without trial or question.
During the months of July and August, Burbridge initiated building more fortifications in Kentucky, although Sherman's march through Georgia effectively reduced the Confederate threat to Kentucky. Burbridge received permission from Union General John Schofield to build fortifications in Mount Sterling, Lexington, Frankfort and Louisville. Each location was to have a small enclosed field work of about two hundred yards along the interior crest, with the exception of Louisville, which would be five hundred yards. Other earthworks were planned to follow in Louisville. All the works were to be built by soldiers, except at Frankfort, where the state would assign workers, and at Louisville, where the city would manage it. Lt. Col. J. H. Simpson of the Federal Engineers furnished the plans and engineering force.
Eleven forts protected the city in a ring about ten miles (16 km) long from Beargrass Creek to Paddy's Run. The first work built was Fort McPherson, which commanded the approaches to the city via the Shepherdsville Pike, Third Street Road, and the Louisville & Nashville Railroad. The fort was to serve as a citadel if an attack came before the other forts were completed. The fort could house one thousand men. General Hugh Ewing, Union commander in Louisville, directing that municipal authorities furnish laborers for fortifications, ordered the arrest of all "loafers found about gambling and other disreputable establishments" in the city for construction work, and assigned military convicts as laborers. It was typical of military commanders to press citizens into service.
Each fort was a basic earth-and-timber structure surrounded by a ditch with a movable drawbridge at the entrance to the fort. Each was furnished with an underground magazine to house two hundred rounds of artillery shells. The eleven forts occupied the most commanding positions to provide interlocking cross fire between them. A supply of entrenching tools was collected and stored for emergency construction of additional batteries and infantry entrenchments between the fortifications. As it happened, the guns in the Louisville forts were never fired except for salutes.
With orders No. 59 and No. 240, Burbridge began a campaign to suppress guerrilla activity in Kentucky and Louisville. On August 11, Burbridge commanded Captain Hackett of the 26th Kentucky to select four men to be taken from prison in Louisville to Eminence, Henry County, Kentucky, to be shot for unidentified outrages. On August 20, suspected Confederate guerrillas J. H. Cave and W. B. McClasshan were taken from Louisville to Franklin, Simpson County, to be shot for an unidentified reason. The commanding officer General Ewing declared that Cave was innocent and sought a pardon from Burbridge, but he refused. Both men were shot.
On October 25, Burbridge ordered four men, Wilson Lilly, Sherwood Hartley, Captain Lindsey Dale Buckner and M. Bincoe, to be shot by Captain Rowland Hackett of Company B, 26th Kentucky for the alleged killing of a postal carrier near Brunerstown (present day Jeffersontown). This was in retaliation for the killing by guerrillas allegedly led by Captain Marcellus Jerome Clarke, sometimes called "Sue Mundy". On November 6, two men named Cheney and Morris were taken from the prison in Louisville and transported to Munfordville and shot in retaliation for the killing of Madison Morris, of Company A, 13th Kentucky Infantry. James Hopkins, John Simple and Samuel Stingle were taken from Louisville to Bloomfield, Nelson County, and shot in retaliation for the alleged guerrilla shooting of two black men. On November 15, two Confederate soldiers were taken from prison in Louisville to Lexington and hung at the Fair Grounds in retaliation. On November 19, eight men were taken from Louisville to Munfordville to be shot for retaliation for the killing of two Union men.
By the end of 1864, Burbridge ordered the arrest of twenty-one prominent Louisville citizens, plus the chief justice of the State Court of Appeals, on treason charges. He had captured guerrillas brought to Louisville and hanged on Broadway at 15th or 18th Streets. General Ewing was effectively out of the loop and often bedridden from attacks of rheumatism. Because he was ordered to rejoin his brother-in-law General Sherman, Ewing largely escaped the condemnation directed at Burbridge's actions in Louisville.
In the November 1864 elections, Burbridge tried to interfere with the presidential vote. Despite the military interference, Kentucky citizens voted overwhelmingly for Union General George B. McClellan over Lincoln. Twelve counties were not allowed to post their returns. In December 1864, President Lincoln appointed James Speed as the U.S. Attorney General.
1865-66: War comes to a close
Although the Confederacy began to fall apart in January 1865, Burbridge continued executing guerrillas. On January 20, 1865 Nathaniel Marks, formerly of Company A, 4th Kentucky, C.S. was condemned as a guerrilla. He claimed his innocence, but was shot by a firing squad in Louisville. On February 10, Burbridge's term as military governor came to an end. Secretary of War Edwin M. Stanton replaced Burbridge with Major General John Palmer.
On March 12, Union forces captured 20-year-old Captain M. Jerome Clarke, the alleged "Sue Mundy", along with Henry Medkiff and Henry C. Magruder, ten miles (16 km) south of Brandenburg near Breckinridge County. The Union Army hanged Clarke three days later just west of the corner of 18th and Broadway in Louisville, after a military trial in which he was charged as a guerilla. During the secret three-hour trial, Clarke was not allowed counsel or witnesses for his defense, although he asked to be treated as a prisoner of war. Magruder was allowed to recover from war injuries before being executed by hanging on October 29.
On April 9, Confederate General Robert E. Lee surrendered to Union General Ulysses Grant, and on April 26, Confederate General Joseph Johnston surrendered to Union General William T. Sherman, effectively ending the Civil War.
On May 15, Louisville became a mustering-out center for troops from midwestern and western states. On June 4, 1865, military authorities established the headquarters of the Union Armies of the West in Louisville. During June 1865, 96,796 troops and 8,896 animals left Washington, D.C. for the Ohio Valley. There 70,000 men took steamboats to Louisville and the remainder embarked for St. Louis and Cincinnati. The troops boarded ninety-two steamboats at Parkersburg and descended the river in convoys of eight boats, to the sounds of cheering crowds and booming cannon salutes at every port city. For several weeks, Union soldiers crowded Louisville. On July 4, 1865, Union General William T. Sherman visited Louisville to conduct a final inspection of the Armies of the West. By mid-July the Armies of the West disbanded and the soldiers headed home.
On December 18, the Kentucky legislature repealed the Expatriation Act of 1861, allowing all who served in the Confederacy to have their full Kentucky citizenship restored without fear of retribution. The legislature also repealed the law that defined any person who was a member of the Confederacy as guilty of treason. The Kentucky legislature allowed former Confederates to run for office. On February 28, 1866, Kentucky officially declared the war over.
After the war, Louisville returned to growth, with an increase in manufacturing, establishment of new factories, and transporting goods by train. The new industrial jobs attracted both black rural workers, including freedmen from the South, and foreign immigrants. It was a city of opportunity for them. Ex-Confederate officers entered law, insurance, real estate and political offices, largely taking control of the city. This led to the jibe that Louisville joined the Confederacy after the war was over.
Women sympathizing with the Confederacy organized many groups, including in Kentucky. During the postwar years, Confederate women ensured the burial of the dead, including sometimes allocating certain cemeteries or sections to Confederate veterans, and raised money to build memorials to the war and their losses. By the 1890s, the memorial movement came under the control of the United Daughters of the Confederacy (UDC) and United Confederate Veterans (UCV), who promoted the "Lost Cause". Making meaning after the war was another way of writing its history. In 1895, the women's group supported the erection of a Confederate monument near the University of Louisville campus.
Civil War defenses of Louisville (1864-65)
Around 1864-65, city defenses, including eleven forts ordered by Union General Stephen G. Burbridge, formed a ring about ten miles (16 km) long from Beargrass Creek to Paddy's Run. Nothing remains of these constructions. They included, from east to west:
- Fort Elstner between Frankfort Ave. and Brownsboro Road, near Bellaire, Vernon and Emerald Aves.
- Fort Engle at Spring Street and Arlington Ave.
- Fort Saunders at Cave Hill Cemetery.
- Battery Camp Fort Hill (2) (1865) between Goddard Ave., Barrett and Baxter Streets, and St. Louis Cemetery.
- Fort Horton at Shelby and Merriweather Streets (now site of city incinerator plant).
- Fort McPherson on Preston Street, bounded by Barbee, Brandeis, Hahn and Fort Streets.
- Fort Philpot at Seventh Street and Algonquin Parkway.
- Fort St. Clair Morton at 16th and Hill Streets.
- Fort Karnasch on Wilson Ave. between 26th and 28th Streets.
- Fort Clark (1865) at 36th and Magnolia Streets.
- Battery Gallup (1865) at Gibson Lane and 43rd Street.
- Fort Southworth on Paddy's Run at the Ohio River (now site of city sewage treatment plant). Marker at 4522 Algonquin Parkway.
Also in the area were Camp Gilbert (1862) and Camp C. F. Smith (1862), both at undetermined locations.
References
- Yater, George, 61.
- Beach, p. 16-17
- Beach, p. 18.
- Beach, p. 20.
- White p.11
- White pp.20,36
- The Murder of General Nelson, Harper's Weekly, October 18, 1862.
- Head, James, 155-158.
- James, Thomas. Life of Rev. Thomas James, by Himself, Rochester, N.Y.: Post Express Printing Company, 1886, at Documenting the American South, University of North Carolina, accessed 3 Jun 2010
- Richard H. Pildes, "Democracy, Anti-Democracy, and the Canon", Constitutional Commentary, Vol.17, 2000, pp.12-13 Accessed 10 Mar 2008
- McDowell, Robert E. (1962). City of Conflict: Louisville in the Civil War 1861-1865. Louisville Civil War Roundtable Publishers. p. 159.
- Beach, pp. 154-156.
- Beach, p. 177.
- Beach, p. 184.
- Beach, pp. 198, 201, 202.
- Beach, p. 202.
- "Speed, James". The Encyclopedia of Louisville (1 ed.). 2001.
- Vest, Stephen M., "Was She or Wasn't He?," Kentucky Living, November 1995, 25-26, 42.
- Beach, p. 228.
- David Blight, Race and Reunion: Civil War in American Memory, Cambridge, MA: Harvard University Press, 2001, pp.258-260
- Johnson, Leland R. (1984). The Falls City Engineers a History of the Louisville District Corps of Engineers United States Army 1870-1983. United States Army Engineer District.
See also
- History of Louisville, Kentucky
- History of slavery in Kentucky
- Kentucky in the American Civil War
- Lexington in the American Civil War
- Louisville Mayors during the Civil War
- Louisville-area Civil War monuments
- Louisville-area museums with Civil War artifacts
- Beach, Damian (1995). Civil War Battles, Skirmishes, and Events in Kentucky. Louisville, Kentucky: Different Drummer Books.
- Bush, Bryan S. (1998). The Civil War Battles of the Western Theatre (2000 ed.). Paducah, Kentucky: Turner Publishing, Inc. pp. 22–23, 36–41. ISBN 1-56311-434-8.
- Head, James (2001). The Atonement of John Brooks: The Story of the True Johnny "Reb" Who Did Not Come Marching Home. Florida: Heritage Press. ISBN 1-889332-42-9.
- Johnson, Leland R. A History of the Louisville District Corps of Engineers United States Army. pp. 103–120.
- McDowell, Robert E. (1962). City of Conflict: Louisville in the Civil War, 1861-1865. Louisville, Kentucky: Louisville Civil War Roundtable.
- Nevin, David (1983). The Road to Shiloh: Early Battles in the West. Alexandria, Virginia: Time-Life Books, Inc. pp. 11–12, 42–103. ISBN 0-8094-4712-6.
- Street, James (1985). The Struggle for Tennessee: Tupelo to Stones River. Richmond, Virginia: Time-Life Books, Inc. pp. 8–67. ISBN 0-8094-4760-6.
- Thomas, Samuel, ed. (1971, reprint 1992). Views of Louisville Since 1766. Louisville, Kentucky: Merrick Printing Company.
- White, J. Andrew (1993). Louisville on the Fingertips of an Invasion.
- Yater, George H. (1979). Two Hundred Years at the Falls of the Ohio: A History of Louisville and Jefferson County. Louisville, Kentucky: The Heritage Corporation. pp. 82–96. ISBN 0-9603278-0-0.
Further reading
- Bush, Bryan S. (2008). Lincoln and the Speeds: The Untold Story of a Devoted and Enduring Friendship. Morley, Missouri: Acclaim Press. ISBN 978-0-9798802-6-1.
- Bush, Bryan S. (2008). Louisville and the Civil War: A History & Guide. Charleston, South Carolina: The History Press. ISBN 978-1-59629-554-4.
- Cotterill, R. S. "The Louisville and Nashville Railroad 1861-1865," American Historical Review (1924) 29#4 pp. 700–715 in JSTOR
- Coulter, E. Merton. The Civil War and Readjustment in Kentucky (1926), the standard scholarly study of the state
- "Joshua and James Speed" — Article by Civil War historian/author Bryan S. Bush
- Brief History of the 5th Kentucky Infantry from The Union Regiments of Kentucky | http://en.wikipedia.org/wiki/Louisville_in_the_American_Civil_War | 13 |
57 | Explanation of Artificial Gravity by Ron Kurtus - Succeed in Understanding Physics (School for Champions).
by Ron Kurtus (8 October 2009)
Artificial gravity is a force that simulates the effect of gravity but is not caused by the attraction to the Earth. There is a need for artificial gravity in spacecraft to counter the effect of weightlessness on the astronauts.
Acceleration and centrifugal force can duplicate the effects of gravity. Albert Einstein used the concept of artificial or virtual gravity in his General Theory of Relativity to give a different explanation of gravity.
A rotating circular space station can create artificial gravity for its passengers. The rate or rotation necessary to duplicate the Earth's gravity depends on the radius of the circle. Equations can be derived to determine the rotation rate and radius to simulate the effect of gravity.
Questions you may have include:
- When is artificial gravity needed?
- How can artificial gravity be created?
- How fast must a circular space station rotate?
This lesson will answer those questions.
Useful tool: Metric-English Conversion
Artificial gravity needed
Artificial gravity is needed in spaceships that are in orbit around the Earth, as well as ones that are so far out that the effect of gravity or gravitation is negligible.
The International Space Station is in orbit around the Earth at approximately 350 km. Because the centrifugal force keeping the space station in orbit counters the force of gravity at that altitude, astronauts in the station do not feel the effect of gravity. Anything or anybody that is not tied down will float within the Space Station.
Astronauts in any spaceship that is far enough away from the Earth that the effect of gravity or gravitation is negligible will also feel the effects of weightlessness. The gravitation on a spaceship that is about 15,000 km from Earth is about 1/10 the gravity on the ground.
Thus, artificial gravity is needed to facilitate the tasks the astronauts must do, to make them more comfortable and to avoid negative health effects from weightlessness.
Ways to create artificial gravity
Constant acceleration and centrifugal force are ways to create artificial gravity, such that a person could not tell the force was not gravity and all the laws of gravity hold.
One way to simulate a gravitational force is to accelerate the spaceship. This is similar to the effect you feel when you are in an accelerating elevator, where you can feel heavier when the elevator is moving upward.
In developing his General Theory of Relativity, Albert Einstein noted that you could not tell the difference between gravity and constant acceleration. He used this example to state his theory that gravity or gravitation was not a force but an action related to inertia on moving objects.
Unfortunately, creating artificial gravity through acceleration alone is impractical: the engines would have to fire continuously, and there is a limit to the velocity a spaceship can reach.
A better way to create this artificial gravity than constant acceleration is to use centrifugal force, which is an outward force caused by an object being made to follow a curved path instead of a straight line, as dictated by the Law of Inertia.
If a spaceship was in a large, circular shape that was rotating at a given speed, the crew on the inside could feel the centrifugal force as artificial gravity.
In the 1968 movie 2001: A Space Odyssey, a rotating centrifuge in the spacecraft provided artificial gravity for the astronauts. A person could walk inside the circle with his feet toward the exterior and his head toward the center; the floor and ceiling curved upward ahead of and behind him.
A rotating spacecraft will produce the feeling of gravity on its inside hull. The rotation drives any object inside the spacecraft toward the hull, thereby giving the appearance of a gravitational pull directed outward.
Rotating space station creates artificial gravity
Rate of rotation to duplicate gravity
It is worthwhile to determine the radius of the space station centrifuge and its rate of rotation that will simulate the force of gravity.
Centrifugal force equation
When you swing an object around you that is tied to a string, the outward force is equal to:
F = mv²/r
- F is the outward force of the object in newtons (N) or pounds (lb)
- m is the mass of the object in kilograms (kg) or pound-mass (lb)
- v is the linear or straight-line velocity of the object in meters/second (m/s) or feet/second (ft/s)
- r is the radius of the motion or the length of the string in m or ft
It is a good practice to verify that the units you are using are correct for the equation.
F (N) = m (kg) · [v (m/s)]² / r (m)
N = (kg)(m²/s²)/m
kg-m/s² = kg-m/s²
A similar verification can be done using feet and pounds.
Angular velocity equation
A better way to write the force equation is to use angular velocity, which will then lead to revolutions per minute.
ω = v/r
v = ωr
where ω (lower-case Greek letter omega) is the angular velocity in radians per second.
Note: An angle in radians is the distance along the arc divided by the radius; one radian is the angle for which the arc length equals the radius
Substituting for v in F = mv²/r, you get
F = mω²r
Relate to gravity
Since the centrifugal force is F = mω²r and the force due to gravity is F = mg, you can combine the two equations to get the relationship between the radius, rate of rotation and g:
mg = mω²r
g = ω²r
Solving for ω:
ω = √(g/r)
Also, solving for r:
r = g/ω²
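These two relations are easy to check numerically. The short Python sketch below is not part of the original article; the value of g and the sample radius and rotation rate are assumptions chosen only for illustration.

import math

g = 9.8          # acceleration due to gravity in m/s^2 (assumed value)
r = 100.0        # example radius of the rotating station in meters

# omega = sqrt(g / r): angular velocity needed to simulate gravity at radius r
omega = math.sqrt(g / r)          # about 0.313 radians per second

# r = g / omega^2: radius needed for a chosen angular velocity
omega_target = 0.5                # radians per second (illustrative choice)
r_needed = g / omega_target**2    # 39.2 meters

print(omega)
print(r_needed)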
Convert radians per second to rpm
The units for ω are inconvenient for defining the rate of rotation of the space station. Instead of radians per second, it would be better to state the units as revolutions per minute (rpm). Conversion factors are:
1 radian = 1/2π of a full circle (π is "pi", which is equal to about 3.14)
an angular velocity of ω radians per second is ω/2π revolutions per second
ω/2π revolutions per second is 60ω/2π revolutions per minute
60ω/2π = 9.55ω rpm
Let Ω (capital Greek letter omega) be the rate of rotation in rpm.
Ω = 9.55ω rpm
Ω = 9.55√(g/r)
r = 91.2g/Ω²
Suppose the space station had a radius of r = 128 ft. How fast would it have to turn to create an acceleration due to gravity of g = 32 ft/s²?
Ω = 9.55√(g/r)
Ω = 9.55√(32/128) rpm
Ω = 9.55√(1/4) rpm
Ω = 9.55/2 rpm
Ω = 4.775 rpm
If you wanted the space station to rotate at only 2 rpm, how many meters must the radius be to simulate gravity?
r = 91.2g/Ω²
r = (91.2)(9.8)/(2²) meters
r = 223.44 m
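Both worked examples can be verified with a few lines of code. This is only a sketch, not part of the original article: the constant 60/(2π) ≈ 9.55 and the input values come from the text, while the function names are invented for illustration.

import math

RPM_PER_RAD_PER_SEC = 60 / (2 * math.pi)   # about 9.549, the "9.55" used above

def rpm_for_radius(g, r):
    """Rotation rate (rpm) that simulates gravity g at radius r."""
    return RPM_PER_RAD_PER_SEC * math.sqrt(g / r)

def radius_for_rpm(g, rpm):
    """Radius needed to simulate gravity g at a given rotation rate (rpm)."""
    omega = rpm / RPM_PER_RAD_PER_SEC       # convert rpm back to rad/s
    return g / omega**2

print(rpm_for_radius(32, 128))   # about 4.77 rpm  (g = 32 ft/s^2, r = 128 ft)
print(radius_for_rpm(9.8, 2))    # about 223 m     (g = 9.8 m/s^2, 2 rpm)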
Artificial gravity is a force that simulates Earth's gravity. There is a need for artificial gravity in spacecraft to counter the effect of weightlessness on the astronauts.
Acceleration and centrifugal force can duplicate the effects of gravity. A rotating circular space station can create artificial gravity for its passengers. The rate or rotation necessary to duplicate the Earth's gravity depends on the radius of the circle.
Resources and references
Artificial gravity - Wikipedia
Simulating Gravity in Space - From Batesville, Indiana HS Physics class
Artificial Gravity and the Architecture of Orbital Habitats - Theodore W. Hall - Space Future; detailed technical paper
Artificial Gravity - Technical resources from Theodore W. Hall -
The Physics of Artificial Gravity - Popular Science magazine
Simulated Gravity with Centripetal Force - Oswego City School District Exam Prep Center, New York
| http://www.schoolforchampions.com/science/gravity_artificial.htm | 13
211 | In geometry, a triangle center (or triangle centre) is a point in the plane that is in some sense a center of a triangle akin to the centers of squares and circles. For example the centroid, circumcenter, incenter and orthocenter were familiar to the ancient Greeks, and can be obtained by simple constructions. Each of them has the property that it is invariant under similarity. In other words, it will always occupy the same position (relative to the vertices) under the operations of rotation, reflection, and dilation. Consequently, this invariance is a necessary property for any point being considered as a triangle center. It rules out various well-known points such as the Brocard points, named after Henri Brocard (1845–1922), which are not invariant under reflection and so fail to qualify as triangle centers.
Even though the ancient Greeks discovered the classic centers of a triangle they had not formulated any definition of a triangle center. After the ancient Greeks, several special points associated with a triangle like Fermat point, nine-point center, symmedian point, Gergonne point, and Feuerbach point were discovered. During the revival of interest in triangle geometry in the 1980s it was noticed that these special points share some general properties that now form the basis for a formal definition of triangle center. As of 15 October 2012, Clark Kimberling's Encyclopedia of Triangle Centers contains an annotated list of 5,389 triangle centers.
A real-valued function f of three real variables a, b, c may have the following properties:
- Homogeneity: f(ta,tb,tc) = tⁿf(a,b,c) for some constant n and for all t > 0.
- Bisymmetry in the second and third variables: f(a,b,c) = f(a,c,b).
If a non-zero f has both these properties it is called a triangle center function. If f is a triangle center function and a, b, c are the side-lengths of a reference triangle then the point whose trilinear coordinates are f(a,b,c) : f(b,c,a) : f(c,a,b) is called a triangle center.
This definition ensures that triangle centers of similar triangles meet the invariance criteria specified above. By convention only the first of the three trilinear coordinates of a triangle center is quoted since the other two are obtained by cyclic permutation of a, b, c. This process is known as cyclicity.
Every triangle center function corresponds to a unique triangle center. This correspondence is not bijective. Different functions may define the same triangle center. For example the functions f1(a,b,c) = 1/a and f2(a,b,c) = bc both correspond to the centroid. Two triangle center functions define the same triangle center if and only if their ratio is a function symmetric in a, b and c.
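One quick way to see that f1(a,b,c) = 1/a and f2(a,b,c) = bc give the same center (the centroid, as stated above) is to compare their trilinear triples numerically. The Python sketch below is an illustration only; the helper names and the chosen triangle are arbitrary.

def trilinears(f, a, b, c):
    """Trilinear coordinates f(a,b,c) : f(b,c,a) : f(c,a,b)."""
    return (f(a, b, c), f(b, c, a), f(c, a, b))

def normalize(t):
    """Scale a trilinear triple so its first entry is 1 (valid when nonzero)."""
    return tuple(x / t[0] for x in t)

a, b, c = 6.0, 7.0, 8.0                      # an arbitrary scalene triangle
f1 = lambda a, b, c: 1 / a                   # one centroid center function
f2 = lambda a, b, c: b * c                   # another centroid center function

print(normalize(trilinears(f1, a, b, c)))    # (1.0, 0.857..., 0.75)
print(normalize(trilinears(f2, a, b, c)))    # same triple up to floating point,
                                             # since the ratio f2/f1 = abc is symmetric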
Even if a triangle center function is well-defined everywhere the same cannot always be said for its associated triangle center. For example let f(a, b, c) be 0 if a/b and a/c are both rational and 1 otherwise. Then for any triangle with integer sides the associated triangle center evaluates to 0:0:0 which is undefined.
In some cases these functions are not defined on the whole of ℝ³. For example the trilinears of X365 are a^(1/2) : b^(1/2) : c^(1/2) so a, b, c cannot be negative. Furthermore in order to represent the sides of a triangle they must satisfy the triangle inequality. So, in practice, every function's domain is restricted to the region of ℝ³ where a ≤ b + c, b ≤ c + a, and c ≤ a + b. This region T is the domain of all triangles, and it is the default domain for all triangle-based functions.
Other useful domains
There are various instances where it may be desirable to restrict the analysis to a smaller domain than T. For example:
- The centers X3, X4, X22, X24, X40 make specific reference to acute triangles,
namely that region of T where a² ≤ b² + c², b² ≤ c² + a², c² ≤ a² + b².
- When differentiating between the Fermat point and X13 the domain of triangles with an angle exceeding 2π/3 is important,
in other words triangles for which a² > b² + bc + c² or b² > c² + ca + a² or c² > a² + ab + b².
- A domain of much practical value since it is dense in T yet excludes all trivial triangles (ie points) and degenerate triangles
(ie lines) is the set of all scalene triangles. It is obtained by removing the planes b = c, c = a, a = b from T.
Not every subset D ⊆ T is a viable domain. In order to support the bisymmetry test D must be symmetric about the planes b = c, c = a, a = b. To support cyclicity it must also be invariant under 2π/3 rotations about the line a = b = c. The simplest domain of all is the line (t,t,t) which corresponds to the set of all equilateral triangles.
The point of concurrence of the perpendicular bisectors of the sides of triangle ABC is the circumcenter. The trilinear coordinates of the circumcenter are
- a(b² + c² − a²) : b(c² + a² − b²) : c(a² + b² − c²).
Let f(a,b,c) = a(b² + c² − a²). Then
- f(ta,tb,tc) = (ta)( (tb)² + (tc)² − (ta)² ) = t³( a(b² + c² − a²) ) = t³f(a,b,c) (homogeneity)
- f(a,c,b) = a(c² + b² − a²) = a(b² + c² − a²) = f(a,b,c) (bisymmetry)
so f is a triangle center function. Since the corresponding triangle center has the same trilinears as the circumcenter it follows that the circumcenter is a triangle center.
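The claim can also be checked numerically: convert the trilinears a(b² + c² − a²) : b(c² + a² − b²) : c(a² + b² − c²) to Cartesian coordinates (weighting vertex A by a·x, vertex B by b·y, vertex C by c·z) and verify that the resulting point is equidistant from the three vertices. The triangle chosen in the Python sketch below is an arbitrary assumption for illustration.

import math

# An arbitrary reference triangle given by its vertices
A, B, C = (0.0, 0.0), (7.0, 0.0), (2.0, 5.0)

def dist(P, Q):
    return math.hypot(P[0] - Q[0], P[1] - Q[1])

# Side lengths, with a opposite A, b opposite B, c opposite C
a, b, c = dist(B, C), dist(C, A), dist(A, B)

# Trilinears of the circumcenter
x = a * (b*b + c*c - a*a)
y = b * (c*c + a*a - b*b)
z = c * (a*a + b*b - c*c)

# Convert trilinears x:y:z to Cartesian coordinates by weighting each vertex
# with (side length) * (trilinear) and normalizing the weights
w = a*x + b*y + c*z
O = ((a*x*A[0] + b*y*B[0] + c*z*C[0]) / w,
     (a*x*A[1] + b*y*B[1] + c*z*C[1]) / w)

print(dist(O, A), dist(O, B), dist(O, C))   # three equal circumradii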
1st isogonic center
Let A'BC be the equilateral triangle having base BC and vertex A' on the negative side of BC and let AB'C and ABC' be similarly constructed equilateral triangles based on the other two sides of triangle ABC. Then the lines AA', BB' and CC' are concurrent and the point of concurrence is the 1st isogonic center. Its trilinear coordinates are
- csc(A + π/3) : csc(B + π/3) : csc(C + π/3).
Expressing these coordinates in terms of a, b and c, one can verify that they indeed satisfy the defining properties of the coordinates of a triangle center. Hence the 1st isogonic center is also a triangle center.
Let f(a,b,c) =
1, if a² > b² + bc + c² (equivalently A > 2π/3);
0, if b² > c² + ca + a² or c² > a² + ab + b² (equivalently B > 2π/3 or C > 2π/3);
csc(A + π/3), otherwise (equivalently no vertex angle exceeds 2π/3).
Then f is bisymmetric and homogeneous so it is a triangle center function. Moreover the corresponding triangle center coincides with the obtuse angled vertex whenever any vertex angle exceeds 2π/3, and with the 1st isogonic center otherwise. Therefore this triangle center is none other than the Fermat point.
The trilinear coordinates of the first Brocard point are c/b : a/c : b/a. These coordinates satisfy the properties of homogeneity and cyclicity but not bisymmetry. So the first Brocard point is not (in general) a triangle center. The second Brocard point has trilinear coordinates b/c : c/a : a/b and similar remarks apply.
The first and second Brocard points are one of many bicentric pairs of points, pairs of points defined from a triangle with the property that the pair (but not each individual point) is preserved under similarities of the triangle. Several binary operations, such as midpoint and trilinear product, when applied to the two Brocard points, as well as other bicentric pairs, produce triangle centers.
Some well-known triangle centers
Classical triangle centers
|X1||Incenter||I||1 : 1 : 1|
|X2||Centroid||G||bc : ca : ab|
|X3||Circumcenter||O||cos A : cos B : cos C|
|X4||Orthocenter||H||sec A : sec B : sec C|
|X5||Nine-point center||N||cos(B − C) : cos(C − A) : cos(A − B)|
|X6||Symmedian point||K||a : b : c|
|X7||Gergonne point||Ge||bc/(b + c − a) : ca/(c + a − b) : ab/(a + b − c)|
|X8||Nagel point||Na||(b + c − a)/a : (c + a − b)/b: (a + b − c)/c|
|X9||Mittenpunkt||M||b + c − a : c + a − b : a + b − c|
|X10||Spieker center||Sp||bc(b + c) : ca(c + a) : ab(a + b)|
|X11||Feuerbach point||F||1 − cos(B − C) : 1 − cos(C − A) : 1 − cos(A − B)|
|X13||Fermat point||X||csc(A + π/3) : csc(B + π/3) : csc(C + π/3) *|
|X15||First isodynamic point||sin(A + π/3) : sin(B + π/3) : sin(C + π/3)|
|X16||Second isodynamic point||sin(A − π/3) : sin(B − π/3) : sin(C − π/3)|
|X17||First Napoleon point||sec(A − π/3) : sec(B − π/3) : sec(C − π/3)|
|X18||Second Napoleon point||sec(A + π/3) : sec(B + π/3) : sec(C + π/3)|
|X99||Steiner point||S||bc/(b² − c²) : ca/(c² − a²) : ab/(a² − b²)|
(*) : actually the 1st isogonic center, but also the Fermat point whenever A,B,C ≤ 2π/3
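The trigonometric entries in this table can be evaluated directly from the side lengths, since cos A, sec A, etc. follow from the law of cosines. The Python sketch below tabulates the trilinears of a few classical centers for an arbitrarily chosen triangle; the triangle and variable names are illustrative assumptions, not part of the source table.

import math

a, b, c = 6.0, 7.0, 8.0                        # an arbitrary (acute) triangle

# Vertex angles from the law of cosines
A = math.acos((b*b + c*c - a*a) / (2*b*c))
B = math.acos((c*c + a*a - b*b) / (2*c*a))
C = math.acos((a*a + b*b - c*c) / (2*a*b))

centers = {
    "incenter     X1": (1, 1, 1),
    "centroid     X2": (b*c, c*a, a*b),
    "circumcenter X3": (math.cos(A), math.cos(B), math.cos(C)),
    "orthocenter  X4": (1/math.cos(A), 1/math.cos(B), 1/math.cos(C)),
    "symmedian    X6": (a, b, c),
}

for name, (x, y, z) in centers.items():
    # scale so the first coordinate is 1, for easier comparison between centers
    print(name, (1.0, y/x, z/x))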
Recent triangle centers
In the following table of recent triangle centers, no specific notations are mentioned for the various points. Also for each center only the first trilinear coordinate f(a,b,c) is specified. The other coordinates can be easily derived using the cyclicity property of trilinear coordinates.
|X21||Schiffler point||1/(cos B + cos C)|
|X22||Exeter point||a(b⁴ + c⁴ − a⁴)|
|X111||Parry point||a/(2a² − b² − c²)|
|X173||Congruent isoscelizers point||tan(A/2) + sec(A/2)|
|X174||Yff center of congruence||sec(A/2)|
|X175||Isoperimetric point||− 1 + sec(A/2) cos(B/2) cos(C/2)|
|X179||First Ajima-Malfatti point||sec⁴(A/4)|
|X181||Apollonius point||a(b + c)²/(b + c − a)|
|X192||Equal parallelians point||bc(ca + ab − bc)|
|X356||Morley center||cos(A/3) + 2 cos(B/3) cos(C/3)|
|X401||Bailey point||[sin(2B) sin(2C) − sin²(2A)] csc A|
General classes of triangle centers
In honor of Clark Kimberling who created the online encyclopedia of more than 5000 triangle centers, the triangle centers listed in the encyclopedia are collectively called Kimberling centers.
Polynomial triangle center
A triangle center P is called a polynomial triangle center if the trilinear coordinates of P can be expressed as polynomials in a, b and c.
Regular triangle center
A triangle center P is called a regular triangle point if the trilinear coordinates of P can be expressed as polynomials in Δ, a, b and c, where Δ is the area of the triangle.
Major triangle center
A triangle center P is said to be a major triangle center if the trilinear coordinates of P can be expressed in the form f(A) : f(B) : f(C) where f(A) is a function of A alone.
Transcendental triangle center
A triangle center P is called a transcendental triangle center if P has no trilinear representation using only algebraic functions of a, b and c.
Isosceles and equilateral triangles
Let f be a triangle center function. If two sides of a triangle are equal (say a = b) then
- f(a, b, c) = f(b, a, c) since a = b
- = f(b, c, a) by bisymmetry
so two components of the associated triangle center are always equal. Therefore all triangle centers of an isosceles triangle must lie on its line of symmetry. For an equilateral triangle all three components are equal so all centers coincide with the centroid. So, like a circle, an equilateral triangle has a unique center.
Let f(a,b,c) = −1 if a ≥ b and a ≥ c, and 1 otherwise.
This is readily seen to be a triangle center function and (provided the triangle is scalene) the corresponding triangle center is the excenter opposite to the largest vertex angle. The other two excenters can be picked out by similar functions. However as indicated above only one of the excenters of an isosceles triangle and none of the excenters of an equilateral triangle can ever be a triangle center.
A function f is biantisymmetric if f(a,b,c) = −f(a,c,b) for all a,b,c. If such a function is also non-zero and homogeneous it is easily seen that the mapping (a,b,c) → f(a,b,c)² f(b,c,a) f(c,a,b) is a triangle center function. The corresponding triangle center is f(a,b,c) : f(b,c,a) : f(c,a,b). On account of this the definition of triangle center function is sometimes taken to include non-zero homogeneous biantisymmetric functions.
New centers from old
Any triangle center function f can be normalized by multiplying it by a symmetric function of a,b,c so that n = 0. A normalized triangle center function has the same triangle center as the original, and also the stronger property that f(ta,tb,tc) = f(a,b,c) for all t > 0 and all (a,b,c). Together with the zero function, normalized triangle center functions form an algebra under addition, subtraction, and multiplication. This gives an easy way to create new triangle centers. However distinct normalized triangle center functions will often define the same triangle center, for example f and (abc)⁻¹(a+b+c)³f.
Assume a,b,c are real variables and let α,β,γ be any three real constants.
Let f(a,b,c) =
α, if a < b and a < c (equivalently the first variable is the smallest);
γ, if a > b and a > c (equivalently the first variable is the largest);
β, otherwise (equivalently the first variable is in the middle).
Then f is a triangle center function and α : β : γ is the corresponding triangle center whenever the sides of the reference triangle are labelled so that a < b < c. Thus every point is potentially a triangle center. However the vast majority of triangle centers are of little interest, just as most continuous functions are of little interest. The Encyclopedia of Triangle Centers is an ever-expanding list of interesting ones.
If f is a triangle center function then so is af and the corresponding triangle center is af(a,b,c) : bf(b,c,a) : cf(c,a,b). Since these are precisely the barycentric coordinates of the triangle center corresponding to f it follows that triangle centers could equally well have been defined in terms of barycentrics instead of trilinears. In practice it isn't difficult to switch from one coordinate system to the other.
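The switch between the two coordinate systems is a one-line operation in each direction, as the Python sketch below illustrates (the function names are invented, and the triples are unnormalized projective coordinates).

def trilinear_to_barycentric(t, a, b, c):
    """x : y : z  ->  a*x : b*y : c*z (both triples are defined only up to scale)."""
    x, y, z = t
    return (a * x, b * y, c * z)

def barycentric_to_trilinear(t, a, b, c):
    """u : v : w  ->  u/a : v/b : w/c."""
    u, v, w = t
    return (u / a, v / b, w / c)

a, b, c = 6.0, 7.0, 8.0
incenter_trilinear = (1, 1, 1)
print(trilinear_to_barycentric(incenter_trilinear, a, b, c))   # (6.0, 7.0, 8.0)
print(barycentric_to_trilinear((6.0, 7.0, 8.0), a, b, c))      # back to (1.0, 1.0, 1.0)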
There are other center pairs besides the Fermat point and the 1st isogonic center. Another system is formed by X3 and the incenter of the tangential triangle. Consider the triangle center function given by :
f(a,b,c) =
cos(A), if the triangle is acute;
cos(A) + sec(B)sec(C), if the vertex angle at A is obtuse;
cos(A) − sec(A), if either of the angles at B or C is obtuse.
For the corresponding triangle center there are four distinct possibilities:
- cos(A) : cos(B) : cos(C) if the reference triangle is acute (this is also the circumcenter).
- cos(A) + sec(B)sec(C) : cos(B) − sec(B) : cos(C) − sec(C) if the angle at A is obtuse.
- cos(A) − sec(A) : cos(B) + sec(C)sec(A) : cos(C) − sec(C) if the angle at B is obtuse.
- cos(A) − sec(A) : cos(B) − sec(B) : cos(C) + sec(A)sec(B) if the angle at C is obtuse.
Routine calculation shows that in every case these trilinears represent the incenter of the tangential triangle. So this point is a triangle center that is a close companion of the circumcenter.
Bisymmetry and invariance
Reflecting a triangle reverses the order of its sides. In the image the coordinates refer to the (c,b,a) triangle and (using "|" as the separator) the reflection of an arbitrary point α : β : γ is γ | β | α. If f is a triangle center function the reflection of its triangle center is f(c,a,b) | f(b,c,a) | f(a,b,c) which, by bisymmetry, is the same as f(c,b,a) | f(b,a,c) | f(a,c,b). As this is also the triangle center corresponding to f relative to the (c,b,a) triangle, bisymmetry ensures that all triangle centers are invariant under reflection. Since rotations and translations may be regarded as double reflections they too must preserve triangle centers. These invariance properties provide justification for the definition.
Hyperbolic triangle centers
The study of triangle centers traditionally is concerned with Euclidean geometry, but triangle centers can also be studied in hyperbolic geometry. Using gyrotrigonometry, expressions for trigonometric barycentric coordinates can be calculated that have the same form for both Euclidean and hyperbolic geometry. In order for the expressions to coincide, the expressions must not encapsulate the specification of the angle sum being 180 degrees.
Tetrahedron centers and n-simplex centers
- List of classical and recent triangle centers: "Triangle centers". Retrieved 2009-05-23.
- Summary of Central Points and Central Lines in the Plane of a Triangle (Accessed on 23 may 2009)
- Kimberling, Clark (1994). "Central Points and Central Lines in the Plane of a Triangle". Mathematics Magazine 67 (3): 163–187. doi:10.2307/2690608. JSTOR 2690608.
- Weisstein, Eric W. "Triangle Center". MathWorld–A Wolfram Web Resource. Retrieved 25 May 2009.
- Weisstein, Eric W. "Triangle Center Function". MathWorld–A Wolfram Web Resource. Retrieved 1 July 2009.
- Bicentric Pairs of Points, Encyclopedia of Triangle Centers, accessed 2012-05-02
- Weisstein, Eric W. "Kimberling Center". MathWorld–A Wolfram Web Resource. Retrieved 25 May 2009.
- Weisstein, Eric W. "Major Triangle Center". MathWorld–A Wolfram Web Resource. Retrieved 25 May 2009.
- Hyperbolic Barycentric Coordinates, Abraham A. Ungar, The Australian Journal of Mathematical Analysis and Applications, AJMAA, Volume 6, Issue 1, Article 18, pp. 1-35, 2009
- Hyperbolic Triangle Centers: The Special Relativistic Approach, Abraham Ungar, Springer, 2010
- Barycentric Calculus In Euclidean And Hyperbolic Geometry: A Comparative Introduction, Abraham Ungar, World Scientific, 2010
- For detailed descriptions and nice diagrams of certain specific triangle centers: MathWorld–A Wolfram Web Resource: Weisstein, Eric W. "Triangle Centers". Retrieved 23 May 2009.
- For a discussion on the distribution of triangle centers: The Triangle is a Busy Place – The Distribution of Triangle Centers Project (Accessed on 25 May 2009)
- URL of Clark Kimberling's Encyclopedia of Triangle centers: ETC
- Computer-Generated Encyclopedia of Euclidean Geometry The first part of the encyclopedia contains more than 3000 computer-generated statements of theorems in Triangle Geometry.
- A list of links to geometry pages in the internet: Links on Geometry | http://en.m.wikipedia.org/wiki/Triangle_center | 13 |
76 | This chapter is the first of a series of chapters that describe the interactive environment in which you use GAP.
The normal interaction with GAP happens in the so-called read-eval-print loop.
This means that you type an input, GAP first reads it,
evaluates it, and then shows the result.
Note that the term print may be confusing since there is a GAP function Print, which is described below.
The exact sequence in the read-eval-print loop is as follows.
To signal that it is ready to accept your input,
GAP shows the prompt gap>
When you see this, you know that GAP is waiting for your input.
Note that every statement must be terminated by a semicolon. You must
also enter return (i.e., strike the ``return'' key)
before GAP starts to read and evaluate your input.
(The ``return'' key may actually be marked with the word
Enter and a
returning arrow on your terminal.)
Because GAP does not do anything until you enter return, you can
edit your input to fix typos and only when everything is correct enter
return and have GAP take a look at it (see Line Editing). It is
also possible to enter several statements as input on a single line. Of
course each statement must be terminated by a semicolon.
It is absolutely acceptable to enter a single statement on several lines.
When you have entered the beginning of a statement, but the statement is
not yet complete, and you enter return,
GAP will show the partial prompt >
When you see this, you know that GAP is waiting for the rest
of the statement. This happens also when you forget the semicolon
; that terminates every GAP statement.
Note that when return has been entered and the current statement is not
yet complete, GAP will already evaluate those parts of the input that
are complete, for example function calls that appear as arguments in
another function call which needs several input lines.
So it may happen that one has to wait some time for the partial prompt.
When you enter return, GAP first checks your input to see if it is syntactically correct (see Chapter The Programming Language for the definition of syntactically correct). If it is not, GAP prints an error message of the following form
gap> 1 * ;
Syntax error: expression expected
1 * ;
    ^
The first line tells you what is wrong about the input, in this case the
* operator takes two expressions as operands, so obviously the right
one is missing. If the input came from a file (see Read), this line
will also contain the filename and the line number. The second line is a
copy of the input. And the third line contains a caret pointing to the
place in the previous line where GAP realized that something is wrong.
This need not be the exact place where the error is, but it is usually
Sometimes, you will also see a partial prompt after you have entered an
input that is syntactically incorrect. This is because GAP is so
confused by your input, that it thinks that there is still something to
follow. In this case you should enter a semicolon, ignoring any
further error messages, until you see the full prompt again. When you
see the full prompt, you know that GAP forgave you and is now ready to
accept your next -- hopefully correct -- input.
If your input is syntactically correct, GAP evaluates or executes it, i.e., performs the required computations (see Chapter The Programming Language for the definition of the evaluation).
If you do not see a prompt, you know that GAP is still working on your last input. Of course, you can type ahead, i.e., already start entering new input, but it will not be accepted by GAP until GAP has completed the ongoing computation.
When GAP is ready it will usually show the result of the computation,
i.e., the value computed. Note that not all statements produce a value,
for example, if you enter a for loop, nothing will be printed,
because a for loop does not produce a value that could be shown.
Also sometimes you do not want to see the result. For example if you have computed a value and now want to assign the result to a variable, you probably do not want to see the value again. You can terminate statements by two semicolons to suppress showing the result.
If you have entered several statements on a single line GAP will first read, evaluate, and show the first one, then read, evaluate, and show the second one, and so on. This means that the second statement will not even be checked for syntactical correctness until GAP has completed the first computation.
After the result has been shown GAP will display another prompt, and wait for your next input. And the whole process starts all over again. Note that if you have entered several statements on a single line, a new prompt will only be printed after GAP has read, evaluated, and shown the last statement.
In each statement that you enter, the result of the previous statement
that produced a value is available in the variable last. The next to
previous result is available in last2 and the result produced before
that is available in last3.
gap> 1; 2; 3; 1 2 3 gap> last3 + last2 * last; 7
Also in each statement the time spent by the last statement, whether it
produced a value or not, is available in the variable
time. This is an
integer that holds the number of milliseconds.
View( obj1, obj2, ... ) F
View shows the objects obj1, obj2... etc. in a short form
on the standard output.
View is called in the read--eval--print loop,
thus the output looks exactly like the representation of the
objects shown by the main loop.
Note that no space or newline is printed between the objects.
Print( obj1, obj2, ... ) F
Print also shows the objects obj1, obj2... etc. on the standard output.
The difference to View is in general that the shown form
is not required to be short,
and that in many cases the form shown by Print can be read back into GAP.
gap> z:= Z(2);
Z(2)^0
gap> v:= [ z, z, z, z, z, z, z ];
[ Z(2)^0, Z(2)^0, Z(2)^0, Z(2)^0, Z(2)^0, Z(2)^0, Z(2)^0 ]
gap> ConvertToVectorRep(v);; v;
<a GF2 vector of length 7>
gap> Print( v );
[ Z(2)^0, Z(2)^0, Z(2)^0, Z(2)^0, Z(2)^0, Z(2)^0, Z(2)^0 ]gap>
Another difference is that special characters in strings, such as
\n, are processed
specially (see chapter Special Characters).
PrintTo can be used to print to a file (see PrintTo).
gap> for i in [1..5] do > Print( i, " ", i^2, " ", i^3, "\n" ); > od; 1 1 1 2 4 8 3 9 27 4 16 64 5 25 125
gap> g:= SmallGroup(12,5); <pc group of size 12 with 3 generators> gap> Print( g, "\n" ); Group( [ f1, f2, f3 ] ) gap> View( g ); <pc group of size 12 with 3 generators>gap>
Both View and Print call the operations ViewObj and PrintObj, respectively, for each argument.
By installing special methods for these operations,
it is possible to achieve special printing behavior for certain objects
(see chapter Method Selection in the Programmer's Manual).
The only exceptions are strings (see Chapter Strings and Characters),
for which the default ViewObj method prints also the enclosing
doublequotes, whereas Print does not.
The default method for ViewObj is to call PrintObj.
So it is sufficient to have a PrintObj method for an object in order
for View to show it.
If one wants to supply a ``short form'' for View,
one can install additionally a method for ViewObj.
Display( obj ) F
Displays the object obj in a nice, formatted way which is easy to read (but might be difficult for machines to understand). The actual format used for this depends on the type of obj. Each method should print a newline character as last character.
gap> Display( [ [ 1, 2, 3 ], [ 4, 5, 6 ] ] * Z(5) ); 2 4 1 3 . 2
One can assign a string to an object that will be used when the object is printed, via
SetName (see SetName).
Name (see Name) returns the string previously assigned to
the object for printing via SetName.
The following is an example in the context of domains.
gap> g:= Group( (1,2,3,4) ); Group([ (1,2,3,4) ]) gap> SetName( g, "C4" ); g; C4 gap> Name( g ); "C4"
When an error has occurred or when you interrupt GAP (usually by
hitting ctrl-C) GAP enters a break loop, that is in most respects
like the main read eval print loop (see Main Loop). That is, you can
enter statements, GAP reads them, evaluates them, and shows the
result if any. However those evaluations happen within the context in
which the error occurred. So you can look at the arguments and local
variables of the functions that were active when the error happened and
even change them. The prompt is changed from gap> to brk> to
indicate that you are in a break loop.
gap> 1/0; Rational operations: <divisor> must not be zero not in any function Entering break read-eval-print loop ... you can 'quit;' to quit to outer loop, or you can replace <divisor> via 'return <divisor>;' to continue
If errors occur within a break loop GAP enters another break loop at a
deeper level. This is indicated by a number appended to the brk prompt.
brk> 1/0; Rational operations: <divisor> must not be zero not in any function Entering break read-eval-print loop ... you can 'quit;' to quit to outer loop, or you can replace <divisor> via 'return <divisor>;' to continue brk_02>
There are two ways to leave a break loop.
The first is to quit the break loop.
To do this you enter
quit; or type the eof (end of file) character,
which is usually ctrl-D, except when using the
-e option (see
Section Command Line Options).
Note that GAP code between
quit; and the end of the input line is ignored.
brk_02> quit; brk>
In this case control returns to the break loop one level above or
to the main loop, respectively.
So iterated break loops must be left iteratively.
Note also that if you type
quit; from a
gap> prompt, GAP will exit
(see Leaving GAP).
If you leave a break loop with
quit without completing a command
it is possible (though not very likely) that data structures
will be corrupted or incomplete data have been stored in objects.
Therefore no guarantee can be given that calculations afterwards
will return correct results! If you have been using options
a break loop generally leaves the options stack with options you no
longer want. The function
ResetOptionsStack (see ResetOptionsStack)
removes all options on the options stack, and this is the sole intended
purpose of this function.
The other way is to return from a break loop. To do this you type return;
If the break loop was entered because you interrupted GAP,
then you can continue by typing return;
If the break loop was entered due to an error,
you may have to modify the value of a variable before typing return;
(see the example for IsDenseList) or you may have to return a value
(by typing return value;) to continue the computation;
in any case, the message printed on entering the break loop will
tell you which of these alternatives is possible.
For example, if the break loop was entered because a variable had no
assigned value, the value to be returned is often a value that this
variable should have to continue the computation.
brk> return 9; # we had tried to enter the divisor 9 but typed 0 ... 1/9 gap>
By default, when a break loop is entered, GAP prints a trace of the
innermost 5 commands currently being executed. This behaviour can be
configured by changing the value of the global variable
OnBreak. When a break loop is entered, the value of OnBreak is
checked. If it is a function, then it is called with no arguments. By
default, the value of OnBreak is
Where (see Where).
gap> OnBreak := function() Print("Hello\n"); end; function( ) ... end
gap> Error("!\n"); Error, ! Hello Entering break read-eval-print loop ... you can 'quit;' to quit to outer loop, or you can 'return;' to continue brk> quit;
In cases where a break loop is entered during a function that was called
with options (see Chapter Options Stack), a
quit; will also cause the
options stack to be reset, and a warning stating this is
Info-ed at InfoWarning level 1 (see Chapter Info functions).
Note that for break loops entered by a call to Error,
the lines after ``Entering break read-eval-print loop ...'' and before
the brk> prompt can also be customised, namely by redefining
OnBreakMessage (see OnBreakMessage).
Also, note that one can achieve the effect of changing OnBreak only locally.
As mentioned above, the default value of OnBreak is Where. Thus
a call to Error (see Error) generally gives a trace back up to
five levels of calling functions. Conceivably, we might like to have
a function like Error that does not trace back without globally
changing OnBreak. Such a function we might call ErrorNoTraceBack,
and here is how we might define it. (Note that ErrorNoTraceBack is
not a GAP function.)
gap> ErrorNoTraceBack := function(arg) # arg is a special variable that GAP > # knows to treat as a list of arg'ts > local SavedOnBreak, ENTBOnBreak; > SavedOnBreak := OnBreak; # save the current value of OnBreak > > ENTBOnBreak := function() # our `local' OnBreak > local s; > for s in arg do > Print(s); > od; > OnBreak := SavedOnBreak; # restore OnBreak afterwards > end; > > OnBreak := ENTBOnBreak; > Error(); > end; function( arg ) ... end
Here is a somewhat trivial demonstration of the use of ErrorNoTraceBack.
gap> ErrorNoTraceBack("Gidday!", " How's", " it", " going?\n"); Error, Gidday! How's it going? Entering break read-eval-print loop ... you can 'quit;' to quit to outer loop, or you can 'return;' to continue brk> quit;
Now we call
Error with the same arguments to show the difference.
gap> Error("Gidday!", " How's", " it", " going?\n"); Error, Gidday! How's it going? Hello Entering break read-eval-print loop ... you can 'quit;' to quit to outer loop, or you can 'return;' to continue brk> quit;
Observe that the value of OnBreak before the call of ErrorNoTraceBack
was restored. However, we had changed OnBreak from its default value;
to restore OnBreak to its default value, we should do the following.
gap> OnBreak := Where;;
When a break loop is entered by a call to
Error (see Error) the
message after the ``
Entering break read-eval-print loop ...'' line is
produced by the function
OnBreakMessage, which just like
OnBreak (see OnBreak) is a user-configurable global variable that is a
function with no arguments.
gap> OnBreakMessage(); # By default, OnBreakMessage prints the following you can 'quit;' to quit to outer loop, or you can 'return;' to continue
Perhaps you are familiar with what's possible in a break loop, and so don't need to be reminded. In this case, you might wish to do the following (the first line just makes it easy to restore the default value later).
gap> NormalOnBreakMessage := OnBreakMessage;; # save the default value gap> OnBreakMessage := function() end; # do-nothing function function( ) ... end
With OnBreak still set away from its default value, calling Error
as we did above, now produces:
gap> Error("!\n"); Error, ! Hello Entering break read-eval-print loop ... brk> quit; # to get back to outer loop
However, suppose you are writing a function which detects an error
condition for which OnBreakMessage needs to be changed only locally,
i.e., the instructions on how to recover from the break loop need
to be specific to that function. The same idea used to define
ErrorNoTraceBack (see OnBreak) can be adapted to achieve
this. The function
CosetTableFromGensAndRels (see CosetTableFromGensAndRels)
is an example in the GAP code where the idea is actually used.
Where( [nr] ) F
shows the last nr commands on the execution stack during whose execution
the error occurred. If not given, nr defaults to 5. (Assume, for the
following example, that after the last example
OnBreak (see OnBreak)
has been set back to its default value.)
gap> StabChain(SymmetricGroup(100)); # After this we typed ^C user interrupt at bpt := S.orbit; called from SiftedPermutation( S, (g * rep) ^ -1 ) called from StabChainStrong( S.stabilizer, [ sch ], options ); called from StabChainStrong( S.stabilizer, [ sch ], options ); called from StabChainStrong( S, GeneratorsOfGroup( G ), options ); called from StabChainOp( G, rec( ) ) called from ... Entering break read-eval-print loop ... you can 'quit;' to quit to outer loop, or you can 'return;' to continue brk> Where(2); called from SiftedPermutation( S, (g * rep) ^ -1 ) called from StabChainStrong( S.stabilizer, [ sch ], options ); called from ...
Note that the variables displayed even in the first line of the Where output
(i.e. the line before the first called from line) may be already one environment level higher,
and DownEnv (see DownEnv) may be necessary to access them.
At the moment this backtrace does not work from within compiled code (this
includes the method selection which by default is compiled into the kernel).
If this creates problems for debugging, call GAP with the appropriate command line option
(see Advanced Features of GAP) to avoid loading compiled code.
(Function calls to
Info and methods installed for binary operations are
handled in a special way. In rare circumstances it is possible therefore
that they do not show up in a
Where log but the log refers to the last
proper function call that happened before.)
The command line option
-T to GAP disables the break loop. This
is mainly intended for testing purposes and for special
applications. If this option is given then errors simply cause GAP
to return to the main loop.
In a break loop access to variables of the current break level and higher levels is possible, but if the same variable name is used for different objects or if a function calls itself recursively, of course only the variable at the lowest level can be accessed.
DownEnv( [nr] ) F
UpEnv( [nr] ) F
DownEnv moves up nr steps in the environment and allows one to inspect
variables on this level; if nr is negative it steps down in the environment
again; nr defaults to 1 if not given.
UpEnv acts similarly to DownEnv,
but in the reverse direction. (The names of DownEnv and UpEnv are the
wrong way 'round; I guess it all depends on which direction one defines as
``up'' -- just use
DownEnv and get used to that.)
gap> OnBreak := function() Where(0); end;; # eliminate back-tracing on gap> # entry to break loop gap> test:= function( n ) > if n > 3 then Error( "!\n" ); fi; test( n+1 ); end;; gap> test( 1 ); Error, ! Entering break read-eval-print loop ... you can 'quit;' to quit to outer loop, or you can 'return;' to continue brk> Where(); called from test( n + 1 ); called from test( n + 1 ); called from test( n + 1 ); called from <function>( <arguments> ) called from read-eval-loop brk> n; 4 brk> DownEnv(); brk> n; 3 brk> Where(); called from test( n + 1 ); called from test( n + 1 ); called from <function>( <arguments> ) called from read-eval-loop brk> DownEnv( 2 ); brk> n; 1 brk> Where(); called from <function>( <arguments> ) called from read-eval-loop brk> DownEnv( -2 ); brk> n; 3 brk> quit; gap> OnBreak := Where;; # restore OnBreak to its default value
Note that the change of the environment caused by
DownEnv only affects
variable access in the break loop. If you use
return to continue a
calculation GAP automatically jumps to the right environment level
Note also that search for variables looks first in the chain of outer functions which enclosed the definition of a currently executing function, before it looks at the chain of calling functions which led to the current invocation of the function.
gap> foo := function() > local x; x := 1; > return function() local y; y := x*x; Error("!!\n"); end; > end; function( ) ... end gap> bar := foo(); function( ) ... end gap> fun := function() local x; x := 3; bar(); end; function( ) ... end gap> fun(); Error, !! called from bar( ); called from <function>( <arguments> ) called from read-eval-loop Entering break read-eval-print loop ... you can 'quit;' to quit to outer loop, or you can 'return;' to continue brk> x; 1 brk> DownEnv(1); brk> x; 3
Here the environment of foo, which contained the definition of
bar, is found
before that of
fun, which caused its execution. Using DownEnv one
can access the variables of fun.
Error( messages... ) F
Error signals an error from within a function. First the arguments
messages are printed, exactly as if Print were called with them.
Then a break loop is entered, from which you can type
return; to continue execution with the
statement following the call to Error.
ErrorCount( ) F
ErrorCount returns a count of the number of errors (including user
interruptions) which have occurred in the GAP session so far.
This count is reduced modulo 2^28 on 32 bit systems,
2^60 on 64 bit systems.
The count is incremented by each error, even if GAP was
started with the
-T option to disable the break loop.
The normal way to terminate a GAP session is to enter either
quit; (note the semicolon) or an end-of-file character (usually
ctrl-D) at the
gap> prompt in the main read eval print loop.
An emergency way to leave GAP is to enter QUIT.
Before actually terminating, GAP will call (with no arguments) all
of the functions that have been installed using InstallAtExit. These functions
typically perform tasks such as cleaning up temporary files created
during the session, and closing open files. If an error occurs during
the execution of one of these functions, that function is simply
abandoned, no break loop is entered.
gap> InstallAtExit(function() Print("bye\n"); end); gap> quit; bye
During execution of these functions, the global variable QUITTING
will be set to
true if GAP is exiting because the user typed QUIT, and to
false otherwise. Since
QUIT is considered as an emergency
measure, different action may be appropriate.
If, when GAP is exiting due to a
quit or end-of-file (i.e. not due to
QUIT) the variable
SaveOnExitFile is bound to a string value,
then the system will try to save the workspace to that file.
GAP allows you to edit the current input line with a number of editing
commands. Those commands are accessible either as control keys or as
escape keys. You enter a control key by pressing the ctrl key, and,
while still holding the ctrl key down, hitting another key key. You
enter an escape key by hitting esc and then hitting another key key.
Below we denote control keys by ctrl-key and escape keys by
esc-key. The case of key does not matter, i.e., ctrl-A and ctrl-a
are equivalent.
Typing ctrl-key or esc-key for characters not mentioned below always inserts ctrl-key resp. esc-key at the current cursor position.
The first few commands allow you to move the cursor on the current line.
ctrl-A move the cursor to the beginning of the line.
esc-B move the cursor to the beginning of the previous word.
ctrl-B move the cursor backward one character.
ctrl-F move the cursor forward one character.
esc-F move the cursor to the end of the next word.
ctrl-E move the cursor to the end of the line.
The next commands delete or kill text.
The last killed text can be reinserted, possibly at a different position,
with the ``yank'' command ctrl-Y.
ctrl-H or del delete the character left of the cursor.
ctrl-D delete the character under the cursor.
ctrl-K kill up to the end of the line.
esc-D kill forward to the end of the next word.
esc-del kill backward to the beginning of the last word.
ctrl-X kill entire input line, and discard all pending input.
ctrl-Y insert (yank) a just killed text.
The next commands allow you to change the input.
ctrl-T exchange (twiddle) current and previous character.
esc-U uppercase next word.
esc-L lowercase next word.
esc-C capitalize next word.
The tab character, which is in fact the control key ctrl-
I, looks at
the characters before the cursor, interprets them as the beginning of an
identifier and tries to complete this identifier. If there is more than
one possible completion, it completes to the longest common prefix of all
those completions. If the characters to the left of the cursor are
already the longest common prefix of all completions hitting tab a
second time will display all possible completions.
tab complete the identifier before the cursor.
The next commands allow you to fetch previous lines, e.g., to correct typos, etc. This history is limited to about 8000 characters.
ctrl-L insert last input line before current character.
ctrl-P redisplay the last input line, another ctrl-P will
redisplay the line before that, etc. If the cursor is
not in the first column only the lines starting with the
string to the left of the cursor are taken.
ctrl-N Like ctrl-P but goes the other way round through the
history.
esc-< goes to the beginning of the history.
esc-> goes to the end of the history.
ctrl-O accepts this line and perform a ctrl-N.
Finally there are a few miscellaneous commands.
ctrl-V enter next character literally, i.e., enter it even if it
is one of the control keys.
ctrl-U execute the next line editing command 4 times.
esc-num execute the next line editing command num times.
esc-ctrl-L redisplay input line.
The four arrow keys (cursor keys) can be used instead of ctrl-B, ctrl-F,
ctrl-P, and ctrl-N.
In most cases, it is preferable to create longer input (in particular GAP
programs) separately in an editor, and to read in the result via Read.
Read by default reads from the directory in which GAP was
started (respectively under Windows the directory containing the GAP
binary), so you might have to give an absolute path to the file.
If you cannot create several windows, the
Edit command may be used to
leave GAP, start an editor, and read in the edited file automatically.
Edit starts an editor with the file whose filename is given by the
string filename, and reads the file back into GAP when you exit the editor again.
You should set the GAP variable EDITOR to the name of
the editor that you usually use.
This can for example be done in your
.gaprc file (see the sections on
operating system dependent features in Chapter Installing GAP).
In the etc subdirectory of the GAP installation
we provide some setup files for the editors vim and (x)emacs.
vim is a powerful editor that understands the basic
vi commands but
provides much more functionality. You can find more information about it
(and download it) from http://www.vim.org.
To get support for GAP syntax in vim, create in your home directory a directory
.vim and a subdirectory
.vim/indent (If you are not using
Unix, refer to the
vim documentation on where to place syntax files).
Then copy the file
etc/gap.vim into this
.vim directory, and copy the corresponding indent file into the .vim/indent directory.
Then edit the
.vimrc file in your home directory. Add lines as in the
if has("syntax")
  syntax on " Default to no syntax highlightning
endif
" For GAP files
augroup gap
  " Remove all gap autocommands
  au!
  autocmd BufRead,BufNewFile *.g,*.gi,*.gd source ~/.vim/gap.vim
  autocmd BufRead,BufNewFile *.g,*.gi,*.gd set filetype=gap comments=s:##\ \ ,m:##\ \ ,e:##\ \ b:#
  " I'm using the external program `par' for formating comment lines starting
  " with `## '. Include these lines only when you have par installed.
  autocmd BufRead,BufNewFile *.g,*.gi,*.gd set formatprg="par w76p4s0j"
  autocmd BufWritePost,FileWritePost *.g,*.gi,*.gd set formatprg="par w76p0s0j"
augroup END
See the headers of the two mentioned files for additional comments. Adjust details according to your personal taste.
Setup files for (x)emacs are contained in the etc subdirectory as well.
SizeScreen( [sz] ) F
With no arguments,
SizeScreen returns the size of the screen as a list
with two entries. The first is the length of each line, the second is the
number of lines.
With one argument that is a list,
SizeScreen sets the size of the
screen; x is the length of each line, y is the number of lines.
Either value x or y may be missing (i.e. left unbound), to leave this
value unaffected. It returns the new values. Note that those parameters
can also be set with the command line options -x and -y (see
Section Command line options).
To check/change whether line breaking occurs for files and streams see PrintFormattingStatus and SetPrintFormattingStatus.
The screen width must be between 20 and 256 characters (inclusive) and the depth at least 10 lines. Values outside this range will be adjusted to the nearest endpoint of the range.
GAP 4 manual | http://itee.uq.edu.au/~gap/gap4r4/doc/htm/ref/CHAP006.htm | 13 |
60 | Three Different Concepts of Probability
The classical interpretation of probability is a theoretical probability based on the physics of the experiment, but does not require the experiment to be performed. For example, we know that the probability of a balanced coin turning up heads is equal to 0.5 without ever performing trials of the experiment. Under the classical interpretation, the probability of an event is defined as the ratio of the number of outcomes favorable to the event divided by the total number of possible outcomes.
Sometimes a situation may be too complex to understand the physical nature of it well enough to calculate probabilities. However, by running a large number of trials and observing the outcomes, we can estimate the probability. This is the empirical probability based on long-run relative frequencies and is defined as the ratio of the number of observed outcomes favorable to the event divided by the total number of observed outcomes. The larger the number of trials, the more accurate the estimate of probability. If the system can be modeled by computer, then simulations can be performed in place of physical trials.
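The way a long-run relative frequency converges to the underlying probability is easy to see in a small simulation. The Python sketch below (not part of the original article) estimates the probability of heads for a fair coin at several sample sizes; the sample sizes and seed are arbitrary.

import random

random.seed(1)                      # fixed seed so the run is reproducible

def estimate_p_heads(n_flips):
    """Empirical probability of heads in n_flips simulated fair-coin tosses."""
    heads = sum(1 for _ in range(n_flips) if random.random() < 0.5)
    return heads / n_flips

for n in (10, 100, 10_000, 1_000_000):
    print(n, estimate_p_heads(n))   # estimates approach the classical value 0.5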
A manager frequently faces situations in which neither classical nor empirical probabilities are useful. For example, in a one-shot situation such as the launch of a unique product, the probability of success can neither be calculated nor estimated from repeated trials. However, the manager may make an educated guess of the probability. This subjective probability can be thought of as a person's degree of confidence that the event will occur. In absence of better information upon which to rely, subjective probability may be used to make logically consistent decisions, but the quality of those decisions depends on the accuracy of the subjective estimate.
Outcomes and Events
An event is a subset of all of the possible outcomes of an experiment. For example, if an experiment consists of flipping a coin two times, the possible outcomes are:
- heads, heads
- heads, tails
- tails, heads
- tails, tails
Given that the probability of each outcome is known, the probability of an event can be determined by summing the probabilities of the individual outcomes associated with the event.
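The two-flip experiment can be enumerated directly. The sketch below lists the four equally likely outcomes and sums their probabilities for one example event, "at least one head" (the choice of event is an illustration, not taken from the text).

from itertools import product

outcomes = list(product(["heads", "tails"], repeat=2))   # the 4 possible outcomes
p_outcome = 1 / len(outcomes)                            # each outcome has probability 1/4

# Event: "at least one head" -- the subset of outcomes containing a head
event = [o for o in outcomes if "heads" in o]

p_event = sum(p_outcome for _ in event)                  # sum the outcome probabilities
print(outcomes)        # [('heads', 'heads'), ('heads', 'tails'), ...]
print(p_event)         # 0.75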
A composite event is an event defined by the union or intersection of two events.
The union of two events is expressed by the "or" function.
For example, the probability that either Event A or Event B (or both) will occur is expressed by P(A or B).
The intersection of two events is the probability that both events will occur and is expressed by the "and" function.
For example, the probability that both Event A and Event B will occur is expressed by P(A and B).
Law of Addition
Consider the following Venn diagram in which each of the 25 dots represents an outcome and each of the two circles represents an event.
In the above diagram, Event A is considered to have occurred if an experiment's outcome, represented by one of the dots, falls within the bounds of the left circle. Similarly, Event B is considered to have occurred if an experiment's outcome falls within the bounds of the right circle. If the outcome falls within the overlapping region of the two circles, then both Event A and Event B are considered to have occurred.
There are 5 outcomes that fall in the definition of Event A and 6 outcomes that fall in the definition of Event B. Assuming that each outcome represented by a dot occurs with equal probability, the probability of Event A is 5/25 or 1/5, and the probability of Event B is 6/25. The probability of Event A or Event B would be the total number of outcomes in the orange area divided by the total number of possible outcomes. The probability of Event A or Event B then is 9/25.
Note that this result is not simply the sum of the probabilities of each event, which would be equal to 11/25. Since there are two outcomes in the overlapping area, these outcomes are counted twice if we simply sum the probabilities of the two events. To prevent this double counting of the outcomes common to both events, we need to subtract the probability of those two outcomes so that they are counted only once. The result is the law of addition, which states that the probability of Event A or Event B (or both) occurring is given by:
This addition rule is useful for determining the probability that at least one of two events will occur. Note that for mutually exclusive events there is no overlap between the two events, so:

P(A and B) = 0

and the law of addition reduces to:

P(A or B) = P(A) + P(B)
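A quick numeric check of the law of addition using the counts from the Venn diagram above (5 outcomes in Event A, 6 in Event B, 2 in the overlap, 25 in total):

```python
total = 25
p_a, p_b, p_a_and_b = 5 / total, 6 / total, 2 / total

# Law of addition: P(A or B) = P(A) + P(B) - P(A and B)
p_a_or_b = p_a + p_b - p_a_and_b
print(p_a_or_b)    # 0.36, i.e. 9/25
print(p_a + p_b)   # 0.44, i.e. 11/25 -- the naive sum double-counts the overlap
```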
Conditional Probability

Sometimes it is useful to know the probability that an event will occur given that another event has occurred. Given two possible events, if we know that one event has occurred, we can apply this information in calculating the probability of the other event. Consider the Venn diagram of the previous section with the two overlapping circles. If we know that Event B occurred, then the effective sample space is reduced to the outcomes associated with Event B, and the Venn diagram can be simplified to just the circle representing Event B.
The probability that Event A also has occurred is the probability of Events A and B relative to the probability of Event B. Assuming equally probable outcomes, with two outcomes in the overlapping area and six outcomes in Event B, the probability that Event A occurred would be 2/6. More generally,
P(A given B) = P(A and B) / P(B)
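Continuing the same numeric example (2 outcomes in the overlap, 6 outcomes in Event B, 25 outcomes in total):

```python
total = 25
p_a_and_b = 2 / total
p_b = 6 / total

# Conditional probability: P(A given B) = P(A and B) / P(B)
p_a_given_b = p_a_and_b / p_b
print(p_a_given_b)  # 0.333..., i.e. 2/6
```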
Law of Multiplication
The probability of both events occurring can be calculated by rearranging the terms in the expression for conditional probability. Solving for P(A and B), we get:

P(A and B) = P(A given B) × P(B)
For independent events, the probability of Event A is not affected by the occurrence of Event B, so P(A given B) = P(A), and the law of multiplication reduces to:

P(A and B) = P(A) × P(B)
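As a simple illustration of the multiplication rule for independent events (two flips of a fair coin, an example chosen here for illustration):

```python
p_first_heads = 0.5
p_second_heads = 0.5

# Independent events: P(A and B) = P(A) * P(B)
p_both_heads = p_first_heads * p_second_heads
print(p_both_heads)  # 0.25 -- matches 1 of the 4 equally likely two-flip outcomes
```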
| http://www.quickmba.com/stats/probability/ | 13
51 | In order to better understand the parts of a jet engine and how this form of propulsion works, let us briefly revisit the different types of jet engines. The earliest form of jet propulsion was the turbojet. This type of powerplant employs a series of compressor blades mounted on rotating disks that squeeze incoming air to a higher pressure and temperature. The compressed air is then mixed with fuel and ignited. The hot, high-pressure gas then passes through another series of rotating blades, called a turbine, causing the turbine to spin. The turbine rotors are attached to a central shaft running the length of the engine, so the turbine's spinning motion causes this shaft to rotate as well. Also attached to the same central shaft is the compressor section at the front of the engine. This connection between the turbine and compressor sections causes the compressor stages to continue spinning, bringing in additional air to keep the engine functioning in a repeating cycle. Once the airflow moves past the turbines, it is exhausted through a nozzle. The expansion of the high-pressure gases against the nozzle walls creates thrust that pushes the engine forward, as well as the vehicle it is attached to.
Another type of jet that is similar to but more efficient than the turbojet is the turbofan. A turbofan uses the same components described above with the addition of a large fan in front of the compressor section. This fan is connected to the same central shaft that turns the compressor blades so that the fan also rotates. As it does so, the fan accelerates a wide column of air to further increase thrust. Air passing through the center portion of the fan enters the compressor where it moves through the core of the engine just like on a typical turbojet. Air accelerated through the fan's outer diameter, however, flows around the engine core without passing through it. This process can be better understood by studying an animation of airflow through a turbofan.
There are two basic types of turbofans that are differentiated by the relative amount of air that flows through the fan and around the engine core versus the amount of air that flows through the core itself. In a turbofan with a low bypass ratio, most of the air flowing through the engine passes through the turbojet core and very little through the outer fan bypass duct. Such engines are most common on military combat aircraft like fighters. Today's commercial airliners, like the 757, are instead fitted with high-bypass turbofan engines. A high-bypass turbofan consists of a very large diameter fan and a much smaller-diameter turbojet core within.
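A minimal sketch of the bypass-ratio idea described above (the mass-flow figures below are invented purely for illustration):

```python
# Bypass ratio = air mass flow that bypasses the core through the fan duct
#                divided by air mass flow passing through the core itself.
def bypass_ratio(bypass_flow_kg_s: float, core_flow_kg_s: float) -> float:
    return bypass_flow_kg_s / core_flow_kg_s

print(bypass_ratio(60.0, 60.0))    # 1.0 -- roughly representative of a low-bypass engine
print(bypass_ratio(300.0, 60.0))   # 5.0 -- roughly representative of a high-bypass airliner engine
```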
As shown in the above diagram, a turbofan contains many rotating compressor and turbine disks, all of which are smaller than the fan at the front of the engine. The rotary components within a high-bypass turbofan, in particular, are often only a small fraction of the fan diameter. Most specifications describing the overall size of an engine will list the fan diameter since this measurement defines the engine's maximum width. Our earlier article about the 757 uses this dimension to compare the diameter of the engine nacelle to that of the fuselage.
With that introduction aside, let's take a closer look at the specific question of debris found at the Pentagon following the attack on 11 September 2001. While researching this topic, we came across many websites claiming that it was not American Airlines Flight 77 that was hijacked and purposefully crashed into the Pentagon during the terrorist attack that morning. The authors of these sites instead believe that any of a variety of military aircraft, cruise missiles, or other weapons were used by the US government to attack the Pentagon, and the stories of terrorist hijackings were simply a cover up for the sinister scheme.
According to the accepted story, American Airlines Flight 77 was hijacked by five al Qaeda terrorists as it was traveling from Washington DC to Los Angeles. The aircraft involved in this hijacking was a Boeing Model 757-200 with the Boeing customer code 757-223 and the registration number N644AA. This same aircraft is pictured above in a photo taken at Logan International Airport in Boston on 7 August 2001. The terrorists steered the plane into the west side of the Pentagon killing 59 passengers and crew as well as 125 victims on the ground.
Those who doubt this version of events point to wreckage at the Pentagon as proof that some other kind of aircraft or missile was actually responsible for the attack. Probably the one piece of debris that has prompted the most debate is the following photo of what looks like a rotary disk from the interior of the plane's engine. This disk could be part of a fan, a compressor, or a turbine rotor from inside the engine, but the blades are not present and were presumably knocked off in the impact.
Based on the size of the person standing next to the debris and of other objects in the photographs that can be used for comparison, it has been estimated that the disk is approximately 25 to 30 inches (63.5 to 76.2 cm) across. Obviously, this piece is far smaller than the maximum engine diameter of 6 feet (1.8 m) or more, leading many to conclude that the item is not from a 757 engine. That conclusion leads conspiracy theorists to argue that a much smaller vehicle must have struck the Pentagon instead.
However, we have already seen that rotating components within a turbofan engine can vary widely in size. In order to determine whether this component could have possibly come from a 757, we need to take a closer look at the engine installed aboard the aircraft registered N644AA. Boeing offered two different engine options to customers of the 757-200. Airlines could choose between the Pratt & Whitney PW2000 family or the Rolls-Royce RB211 series. The particular engine model chosen by American Airlines for its 757 fleet was the RB211-535E4B triple-shaft turbofan manufactured in the United Kingdom. A drawing illustrating the overall size of this engine is pictured below.
Note the relative sizes of the forward portion of this engine compared to the central core. Clearly, the section housing the fan is much wider than the turbojet core that contains the compressor and turbine components. We can get a clearer view of the relative sizes of components within this engine in the following cut-away drawing of the RB211-535.
Using these images and other diagrams of the RB211-535 engine, we can obtain approximate dimensions of the engine's rotary disks for comparison to the item found in the Pentagon rubble. Our best estimate is that the engine's twelve compressor disk hubs (without blades attached) are about 36% the width of the fan. The five turbine disk hubs appear to be slightly smaller at approximately 34% the fan diameter. According to Brassey's World Aircraft & Systems Directory and Jane's, the fan diameter of the RB211-535E4B engine is 74.5 inches (189.2 cm). It then follows that the compressor disk hubs are approximately 27 inches (69 cm) across while the turbine disk hubs are about 25 inches (63.5 cm) in diameter. Both of these dimensions fit within the range of values estimated for the engine component pictured in the wreckage at the Pentagon.
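The arithmetic behind those estimates is easy to verify (the 36% and 34% proportions are the measurements taken from the drawings as described above):

```python
fan_diameter_in = 74.5   # RB211-535E4B fan diameter per Brassey's and Jane's

compressor_hub_in = 0.36 * fan_diameter_in   # ~26.8 in (~68 cm)
turbine_hub_in = 0.34 * fan_diameter_in      # ~25.3 in (~64 cm)

debris_estimate_in = (25, 30)   # estimated diameter of the disk in the Pentagon photo
print(round(compressor_hub_in, 1), round(turbine_hub_in, 1))
# Both values fall within the 25 to 30 inch range estimated for the debris.
```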
We can take this analysis a step further by also exploring some of the alternate theories that have been put forward by those believing this object comes from a different aircraft. Two of the most common claims we have seen suggest that the plane used in the attack was a Douglas A-3 Skywarrior or a Northrop Grumman RQ-4 Global Hawk. The A-3 is an airborne jamming aircraft originally ordered by the US Navy during the 1950s. The type is now retired from front-line service though a handful are still used for testing purposes by the defense contractor Raytheon. The Global Hawk is an unmanned aerial vehicle used by the Air Force for reconnaissance missions. Neither of these planes bears more than a superficial resemblance to the 757, but we will accept the possibility that they could be mistaken for a commercial airliner given the confusion on September 11.
For the sake of this investigation, the only issue we shall consider is the engines that power both planes. The A-3 was equipped with two Pratt & Whitney J57 turbojets like that pictured below. The J57 dates to the early 1950s and is rather antiquated by today's standards.
For some reason, most of the conspiratorial sites instead make extensive reference to the A-3 being powered by a Pratt & Whitney JT8D engine. Moreover, these sites claim that the JT8D is a turbojet. The JT8D is actually a low-bypass turbofan that was developed for use aboard commercial aircraft like the 727 and 737. We have not found any source that indicates the JT8D was ever used on the A-3 Skywarrior, so it is unclear why the originators of the A-3 theory are so infatuated with this particular powerplant. Nevertheless, we will include it in our investigation for completeness.
The Global Hawk, meanwhile, is powered by a single Rolls-Royce AE3007H turbofan. The AE3007 is built by the Allison Engine division of Rolls-Royce located in Indianapolis, Indiana.
Using photos and cut-away drawings of these three engines, we can estimate the diameters of the compressor and turbine rotor hubs just as we did for the RB211. While the compressor and turbine disks on the J57 and JT8D are larger by percentage than those of the RB211, the maximum diameters of all three engines are considerably smaller as summarized in the following table.
|Engine||Overall Diameter||Compressor Hub Diameter||Turbine Hub Diameter|
|PW J57||40.5 in (102.9 cm)||16 in (40.6 cm)||18 in (45.7 cm)|
|PW JT8D||49.2 in (125 cm)||21.5 in (54.6 cm)||22.5 in (57.1 cm)|
|RR AE3007H||43.5 in (110.5 cm)||14 in (35.6 cm)||15 in (38.1 cm)|
This analysis indicates that all three of these engines are too small to match the engine component photographed at the Pentagon. Some sites also suggest the part might be from the aircraft's auxiliary power unit (APU). An APU is essentially a small jet engine mounted in the tail of an aircraft that provides additional power, particularly during an emergency. However, APUs are much smaller than an aircraft's main engines, and the component pictured at the Pentagon is too large to match any found in an APU. It has also been suggested that the attack was conducted by a cruise missile like the Tomahawk or Storm Shadow, but these and other such weapons are powered by engines no more than 15 inches (38 cm) across. These powerplants are obviously far too small to account for the Pentagon wreckage.
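To summarize the size comparison, the short sketch below checks each candidate's estimated hub diameters against the 25 to 30 inch range estimated for the debris (values taken from the table and estimates above):

```python
debris_range_in = (25, 30)   # estimated diameter of the disk in the Pentagon photo

hub_estimates_in = {
    "RR RB211-535 compressor": 27.0,
    "RR RB211-535 turbine": 25.0,
    "PW J57 compressor": 16.0,
    "PW J57 turbine": 18.0,
    "PW JT8D compressor": 21.5,
    "PW JT8D turbine": 22.5,
    "RR AE3007H compressor": 14.0,
    "RR AE3007H turbine": 15.0,
}

for name, diameter in hub_estimates_in.items():
    fits = debris_range_in[0] <= diameter <= debris_range_in[1]
    print(f"{name}: {diameter} in -> {'consistent' if fits else 'too small'}")
# Only the RB211-535 hub estimates fall within the range estimated for the debris.
```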
Whatever piece this is, it appears to be only the central hub of a compressor or turbine stage. Normally, each of these rotating stages would be fitted with several curved blades mounted along its circumference. These blades were apparently knocked off the rotor hub found in the wreckage by the force of the impact. The loss of these blades is unfortunate since different manufacturers often adopt unique shapes for their fan, compressor, and turbine blades that would make the source of the component much easier to identify. Nonetheless, we have been able to locate the following picture of the intermediate pressure compressor section of the RB211 that appears to match several characteristics of the Pentagon debris. Note that this photo appears to show the RB211-524, an uprated relative of the RB211-535 that is used on the Boeing 747 and 767. This engine model contains seven intermediate pressure compressor stages compared to the six of the RB211-535. However, the compressor disks used on both engines are believed to be nearly identical.
One similarity between the two photos can be seen in the notches along the edge of the Pentagon object. These features, called dovetail slots, provide attachment points for the compressor blades. The shapes of these slots on the Pentagon wreckage appear to match those on the RB211 assembly shown on the left. Furthermore, the "nosepiece" jutting out from the center of the disk in the Pentagon photo shares commonalities with the central shaft visible in the RB211 photo.
The above analysis indicates that the Pentagon debris does in fact match the characteristics of a rotor disk from the Rolls-Royce RB211-535. The wreckage is most likely a compressor stage given the shape of the dovetail slots. It is difficult to be certain exactly which compressor disk it is since the six rotors of the intermediate pressure section and six high pressure compressor disks are of similar size. The primary difference from one compressor to the next is the smaller span of the compressor blades as the air flows further into the engine, but these blades are no longer attached to the wreckage. However, additional clues in the photo suggest the rotor disk is probably from the high pressure section or perhaps the very last disk of the intermediate pressure compressors.
We believe the disk is most likely a high pressure compressor because of the shape of the piece jutting from its center. This item is part of the central shaft that runs the length of the engine to connect the rotating stages of the fan, compressor, and turbine. The RB211 is described as a triple-shaft or triple-spool turbofan since it uses three concentric shafts to drive the various rotating elements of the engine. These shafts are differentiated by the relative pressure of the gases passing through the rotor stages attached to them. The longest shaft with the smallest diameter located closest to the centerline connects the low pressure (LP) engine components. This shaft is rotated by the LP turbines, located just ahead of the nozzle, and drives the large fan at the front of the engine.
The LP shaft rotates within a shorter, larger diameter shaft that drives the intermediate pressure (IP) system. This shaft rotates at a higher rate and connects the IP turbine stage to the six IP compressor disks. The third concentric shaft is the shortest with the largest diameter and highest rate of rotation to drive the engine's high pressure (HP) system. This shaft connects the HP turbine to the six HP compressors. The debris photographed at the Pentagon appears to be connected to three concentric shafts of increasing diameter, suggesting that it most likely comes from the high pressure system of a triple-spool turbofan like the RB211-535.
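As a rough way to visualize the triple-spool layout just described, the following sketch (a simplification for illustration, not Rolls-Royce documentation) maps each concentric shaft to the components it connects:

```python
# Simplified model of the RB211-535 triple-spool arrangement described in the text.
# The LP shaft is the longest, innermost, and slowest; the HP shaft is the
# shortest, largest in diameter, and fastest spinning.
spools = {
    "low pressure (LP)":          {"driven_by": "3 LP turbines", "drives": "fan"},
    "intermediate pressure (IP)": {"driven_by": "IP turbine",    "drives": "6 IP compressors"},
    "high pressure (HP)":         {"driven_by": "HP turbine",    "drives": "6 HP compressors"},
}

for name, spool in spools.items():
    print(f"{name} shaft: {spool['driven_by']} -> {spool['drives']}")
```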
Alternatively, this disk could be the last of the intermediate pressure compressor rotors and the larger diameter shaft we see might be the start of the shaft on which the high pressure compressors are mounted. If a photo of the opposite side of the disk were available, it should be easier to tell exactly which section of compressors it comes from. Nevertheless, the evidence documented above clearly shows that the size and shape of this debris is consistent with the RB211 turbofan carried by a 757 but not so with engines used aboard smaller aircraft and missiles.
Since this article was first published, we have received several comments from readers citing a quote from Rolls-Royce spokesman John W. Brown who said, "It is not a part from any Rolls-Royce engine that I'm familiar with..." The critics go on to suggest that this statement disproves all of our analysis indicating the disk is a compressor stage from the Rolls-Royce RB211-535. However, a simple review of the source of this quote shows just the opposite. The material is from an article titled "Controversy Swirling Over September 11 Pentagon Mystery: Industry Experts Can't Explain Photo Evidence" written by Christopher Bollyn that appeared on the pro-conspiracy website American Free Press.
The article describes John Brown as a spokesman for Rolls-Royce in Indianapolis, Indiana. This location is home to the Allison Engine factory that builds the AE3007H turbofan used aboard the Global Hawk. Brown's quote regarding the mystery wreckage states, "It is not a part from any Rolls Royce engine that I'm familiar with, and certainly not the AE 3007H made here in Indy." Furthermore, the article correctly notes that the RB211 is not built in Indianapolis but at the Rolls-Royce plant in Derby, England. Since Brown is a spokesman for Allison Engines, which was an independent company that only became a subsidiary of Rolls-Royce in 1995, it stands to reason that an engine built in the United Kingdom would be one he's not "familiar with." The article even goes on to point out that Brown could not identify specific parts from one engine or another since he is not an engineer or assembly line technician who would be familiar with the internal components of turbine engines.
For what it's worth (and it isn't worth much, given the author's apparent lack of journalistic skill), the Bollyn article actually supports the evidence assembled on this site. The article provides quotes from Honeywell Aerospace indicating that the piece did not come from an APU, from Allison Engines suggesting that it is not a component found in the turbofan used on Global Hawk, and from Teledyne Continental Motors indicating that it is not part of a cruise missile engine. All of these conclusions match those explained above.
Perhaps an even more conclusive piece of wreckage was found among other debris photographed at the Pentagon following the impact. The component shown below has appeared on several conspiracy theory websites claiming that it does not match any known component of the RB211 and must come from a different type of engine.
This item appears to be part of the casing of the combustion section. This portion of a jet engine is located aft of the compressors, and it is here that the high pressure air is mixed with fuel and ignited. The circumference of this circular casing contains several holes through which fuel injectors spray the jet fuel needed for combustion. The number, size, and shape of these nozzle holes are generally unique to a particular engine type and can be used to identify the engine model the casing comes from. Based on the curvature of the debris, the piece shown here appears to be about half of the total combustor case. We can count six nozzle ports along its circumference from the region nearest the floor around to the top, which represents about one-third of the total case. We can also see six screw holes along the circumference of each nozzle port that are used to attach the fuel injectors.
The best photos we have found illustrating the design of this part on the RB211-535 series engine come from the Boeing 757 maintenance manual. A diagram from this manual shown below illustrates components of the RB211-535 high pressure system and its location relative to the rest of the engine.
An even better close-up view of the combustion case is provided below. Note the overall design pattern of the fuel injector ports, including the shape of the holes, the spacing between them, and their sizes relative to the diameter of the case itself. Also note the locations of the six screw holes around the circumference of each nozzle port. There appear to be a total of 18 injector holes around the circumference of this casing, so six of these holes span one-third of the case circumference. All of these features match the Pentagon photo.
In summary, we have studied two key pieces of wreckage photographed at the Pentagon shortly after September 11 and found them to be entirely consistent with the Rolls-Royce RB211-535 turbofan engine found on a Boeing 757 operated by American Airlines. The circular engine disk debris is just the right size and shape to match the compressor stages of the RB211, and it also shows evidence of being attached to a triple-shaft turbofan like the RB211. While many have claimed the wreckage instead comes from a JT8D or AE3007H turbofan, we have shown that these engines are too small to match the debris. Furthermore, we have studied what clearly looks like the outer shell of a combustion case and found that its fuel injector nozzle ports match up exactly to those illustrated in Boeing documentation for the RB211-535 engine. There is simply no evidence to suggest these items came from any other engine model than the RB211-535, and the vast majority of these engines are only used on one type of plane--the Boeing 757.
Nonetheless, the various sites we have discovered over the course of answering this question raise a number of other questions about pieces of wreckage found at the crash site. While this article focused only on the engine debris asked about in the original question, we may revisit this subject in the future to better explore the claims of a Pentagon cover up following September 11.
- answer by Joe Yoon
- answer by Jeff Scott, 12 March 2006
One common question we've seen on sites critical of the conclusion that a Boeing 757 struck the Pentagon is why there is so little engine wreckage. The only identifiable engine components seen so far are the two discussed above. Since these pieces represent so little of the two large engines carried by a 757, those who believe in a conspiracy suggest that these small items were planted and that the lack of more substantial debris is proof of a cover-up. If a 757 truly hit the Pentagon, they argue, then where is the rest of the two engines? This argument ignores the simple fact that a lack of photos of other engine parts does not mean that none existed, only that other engine components were either not photographed or the photos have not yet been released.
Luckily, several readers have pointed out an additional picture that was released as an exhibit during the Zacarias Moussaoui terrorist trial in 2006. This photo, shown below, clearly includes a sizeable and relatively intact portion of a gas turbine engine.
This debris appears to contain two rotating engine disks with part of the engine's central shaft protruding forward. Behind the two disks is another component called a frame. Frames are fixed, non-rotating components that provide attachment points holding the engine together. This debris must come from the aft section of an engine given the shape of the blade attachment points visible along the circumference of the two rotating disks. These attachments have a highly cambered, or curved, banana-like shape indicative of the blades used in the turbine section of a jet engine. The blades used on fan and compressor stages, by comparison, have a much straighter and less curved shape.
The RB211-535 engine used on the 757 contains five turbine disks--three low pressure turbines that power the fan, one intermediate pressure turbine driving the intermediate pressure compressors, and one high pressure turbine that turns the high pressure compressors. The debris shown here contains two of the three low pressure turbines and possibly the remains of the third. The protruding shaft also appears to be composed of two separate shafts of differing diameter. A small portion of the inner shaft, from the engine's low pressure system, appears to protrude from inside a second larger diameter shaft surrounding it. This larger diameter shaft corresponds to the intermediate pressure system and would connect to an additional turbine disk that is no longer attached.
Though it is difficult to conclusively identify these components as part of an RB211-535 engine without better pictures of an intact engine for comparison, perhaps the most identifiable object shown in the debris is the aft frame. Along the circumference of this frame are several fixed blades called stators that help guide the air as it flows through the engine. We can see the remains of four of these stators in the photo over about a third of its diameter.
The above illustration from Rolls-Royce shows another cut-away of the RB211-535 that provides a good view of the internal components of the engine. In particular, note the frame just aft of the turbine stages. The spacing of the stator vanes shown here matches those seen in the Pentagon photo well and provides additional evidence that the debris is in fact from an RB211 engine.
To give a better idea of how the three engine components we have discussed relate to one another, the above image shows a diagram of the high pressure system within the RB211-535 engine. Also included are the objects identified in the Pentagon wreckage and their relative locations within the engine. As discussed in the main article, all three of these pieces of debris are identical matches to or at least consistent with the components found in the Rolls-Royce RB211-535 turbofan aboard a Boeing 757.
- answer by Jeff Scott, 6 May 2006
In the aftermath of 9/11, I have heard many claims that a 757 could not possibly have hit the Pentagon because the plane cannot fly so low to the ground since ground effect prevents this from happening. Is there any truth to this claim? ... I don't think any pilot could control an aircraft like that and hit the Pentagon. No 757 could fly like that, especially the terrorist supposedly flying Flight 77 who was an unskilled amateur pilot yet magically flew with total perfection.
| http://www.aerospaceweb.org/question/conspiracy/q0265.shtml | 13